WorldWideScience

Sample records for automatable method extract

  1. Automatic segmentation of brain images: selection of region extraction methods

    Science.gov (United States)

    Gong, Leiguang; Kulikowski, Casimir A.; Mezrich, Reuben S.

    1991-07-01

    In automatically analyzing brain structures from an MR image, the choice of low-level region extraction methods depends on the characteristics of both the target object and the surrounding anatomical structures in the image. The authors have experimented with local thresholding, global thresholding, and other techniques, using various types of MR images for extracting the major brain landmarks and different types of lesions. This paper describes specifically a local-binary thresholding method and a new global-multiple thresholding technique developed for MR image segmentation and analysis. The initial testing results on their segmentation performance are presented, followed by a comparative analysis of the two methods and their ability to extract different types of normal and abnormal brain structures -- the brain matter itself, tumors, regions of edema surrounding lesions, multiple sclerosis lesions, and the ventricles of the brain. The analysis and experimental results show that the global multiple thresholding techniques are more than adequate for extracting regions that correspond to the major brain structures, while local binary thresholding is helpful for more accurate delineation of small lesions such as those produced by MS, and for the precise refinement of lesion boundaries. The detection of other landmarks, such as the interhemispheric fissure, may require other techniques, such as line-fitting. These experiments have led to the formulation of a set of generic computer-based rules for selecting the appropriate segmentation packages for particular types of problems; based on these rules, an innovative knowledge-based, goal-directed biomedical image analysis framework is being developed that will carry out the selection automatically for a given specific analysis task.
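
    The paper's two techniques predate today's libraries, but the flavor of global multiple thresholding versus local binary thresholding can be sketched with standard tools. A minimal sketch using scikit-image on a stand-in image; the class count, block size and offset below are our illustrative choices, not the authors' parameters:

    ```python
    # Sketch of global multiple thresholding vs. local binary thresholding,
    # in the spirit of the two techniques described above (not the authors' code).
    import numpy as np
    from skimage import data, filters

    image = data.camera()  # stand-in for an MR slice

    # Global multiple thresholding: split intensities into several classes,
    # e.g. background / brain matter / bright structures.
    thresholds = filters.threshold_multiotsu(image, classes=3)
    global_labels = np.digitize(image, bins=thresholds)

    # Local binary thresholding: compare each pixel against statistics of its
    # neighborhood, useful for delineating small lesions precisely.
    local_thresh = filters.threshold_local(image, block_size=51, offset=5)
    local_mask = image > local_thresh

    print(global_labels.max() + 1, "global classes;",
          int(local_mask.sum()), "pixels above the local threshold")
    ```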

  2. Automatic extraction of candidate nomenclature terms using the doublet method

    Directory of Open Access Journals (Sweden)

    Berman Jules J

    2005-10-01

    … nomenclature. Results: A 31+ megabyte corpus of pathology journal abstracts was parsed using the doublet extraction method. This corpus consisted of 4,289 records, each containing an abstract title. The total number of words included in the abstract titles was 50,547. New candidate terms for the nomenclature were automatically extracted from the titles of abstracts in the corpus. Total execution time on a desktop computer with a CPU speed of 2.79 GHz was 2 seconds. The resulting output consisted of 313 new candidate terms, each consisting of concatenated doublets found in the reference nomenclature. Human review of the 313 candidate terms yielded a list of 285 terms approved by a curator. A final automatic extraction of duplicate terms yielded a final list of 222 new terms (71% of the original 313 extracted candidate terms) that could be added to the reference nomenclature. Conclusion: The doublet method for automatically extracting candidate nomenclature terms can be used to quickly find new terms in vast amounts of text. The method can be immediately adapted to virtually any text and any nomenclature. An implementation of the algorithm, in the Perl programming language, is provided with this article.
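
    The core of the doublet method is easy to restate in code: every adjacent word pair (doublet) occurring in the reference nomenclature is indexed, and any run of title words whose consecutive pairs are all known doublets becomes a candidate term. The article ships a Perl implementation; this Python rendering and its toy data are illustrative only:

    ```python
    # Minimal sketch of doublet-based candidate term extraction
    # (the article provides a Perl implementation; this rendering is ours).

    def doublets(term):
        """All adjacent word pairs (doublets) in a nomenclature term."""
        words = term.lower().split()
        return {(a, b) for a, b in zip(words, words[1:])}

    def candidate_terms(title, known):
        """Maximal word runs whose consecutive pairs are all known doublets."""
        words = title.lower().split()
        candidates, run = [], []
        for a, b in zip(words, words[1:]):
            if (a, b) in known:
                run = run or [a]
                run.append(b)
            else:
                if len(run) >= 2:
                    candidates.append(" ".join(run))
                run = []
        if len(run) >= 2:
            candidates.append(" ".join(run))
        return candidates

    # Toy reference nomenclature (invented for illustration).
    nomenclature = ["squamous cell carcinoma", "basal cell carcinoma"]
    known = set().union(*(doublets(t) for t in nomenclature))
    print(candidate_terms("invasive squamous cell carcinoma of the lung", known))
    # -> ['squamous cell carcinoma']
    ```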

  3. Methods and Prospects of Road and Linear Structure Automatic Extraction from Remote Sensing Images

    Institute of Scientific and Technical Information of China (English)

    LIU Zhengrong

    2003-01-01

    Automatic extraction of roads and linear structures from remote sensing images is a very important problem. This paper analyses several existing methods for automatic road and linear structure extraction, using multi-spectral remote sensing images of different spatial resolutions, districts and road characteristics, and summarizes their advantages and disadvantages.

  4. An automatic abrupt information extraction method based on singular value decomposition and higher-order statistics

    International Nuclear Information System (INIS)

    One key aspect of local fault diagnosis is how to effectively extract abrupt features from vibration signals. This paper proposes a method to automatically extract abrupt information based on singular value decomposition and higher-order statistics. In order to observe the distribution law of singular values, a numerical analysis simulating noise, a periodic signal, an abrupt signal and their singular value distributions is conducted. Based on higher-order statistics and spectrum analysis, a method to automatically choose the upper and lower borders of the singular value interval reflecting the abrupt information is built, and the singular values selected by this method are used to reconstruct the abrupt signals. The method is shown to obtain accurate results when processing rub-impact fault signals measured in experiments. The analytical and experimental results indicate that the proposed method is feasible for automatically extracting abrupt information caused by faults such as rotor–stator rub-impact.
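
    The selection step can be illustrated with a toy version: embed the signal in a Hankel matrix, decompose it, and keep only those singular components whose reconstructions have high kurtosis (a fourth-order statistic that responds to impulsive, abrupt content). This sketch uses our own simple kurtosis test as the selection rule, not the paper's exact border-choosing criterion:

    ```python
    # Toy SVD-based abrupt information extraction; the kurtosis threshold and
    # Hankel width are illustrative assumptions, not the paper's criterion.
    import numpy as np
    from scipy.linalg import hankel, svd
    from scipy.stats import kurtosis

    def diag_average(M):
        """Invert the Hankel embedding by averaging anti-diagonals."""
        n = M.shape[0] + M.shape[1] - 1
        out, cnt = np.zeros(n), np.zeros(n)
        for i in range(M.shape[0]):
            for j in range(M.shape[1]):
                out[i + j] += M[i, j]
                cnt[i + j] += 1
        return out / cnt

    def svd_abrupt_extract(x, ncols=64, kurt_thresh=1.0):
        """Keep SVD components whose rank-1 reconstructions look impulsive."""
        rows = len(x) - ncols + 1
        H = hankel(x[:rows], x[rows - 1:])          # trajectory (Hankel) matrix
        U, s, Vt = svd(H, full_matrices=False)
        keep = []
        for i in range(len(s)):
            comp = s[i] * np.outer(U[:, i], Vt[i])  # rank-1 component
            if kurtosis(diag_average(comp)) > kurt_thresh:  # impulsive part
                keep.append(comp)
        return diag_average(sum(keep)) if keep else np.zeros_like(x)

    # Toy signal: sinusoid + noise + one abrupt impulse near sample 300.
    rng = np.random.default_rng(0)
    t = np.arange(1024)
    x = np.sin(2 * np.pi * t / 50) + 0.3 * rng.standard_normal(1024)
    x[300:305] += 5.0
    y = svd_abrupt_extract(x)
    print("peak of extracted abrupt part at sample", int(np.argmax(np.abs(y))))
    ```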

  5. A Semi-automatic Method Based on Statistic for Mandarin Semantic Structures Extraction in Specific Domains

    Institute of Scientific and Technical Information of China (English)

    熊英; 朱杰; 孙静

    2004-01-01

    This paper proposes a new method for the semi-automatic extraction of semantic structures from unlabelled corpora in specific domains. The approach is statistical in nature. The extracted structures can be used for shallow parsing and semantic labeling. By iteratively extracting new words and clustering words, we get an initial semantic lexicon that groups words of the same semantic meaning together as a class. After that, a bootstrapping algorithm is adopted to extract semantic structures; the semantic structures are then used to extract new keywords and augment the semantic lexicon. The resultant semantic structures are interpreted by humans and are amenable to hand-editing for refinement. In this experiment, the semi-automatically extracted structures provide a recall rate of 84.5%.

  6. A dual growing method for the automatic extraction of individual trees from mobile laser scanning data

    Science.gov (United States)

    Li, Lin; Li, Dalin; Zhu, Haihong; Li, You

    2016-10-01

    Street trees interlaced with other objects in cluttered point clouds of urban scenes inhibit the automatic extraction of individual trees. This paper proposes a method for the automatic extraction of individual trees from mobile laser scanning data, according to the general constitution of trees. Two components of each individual tree - a trunk and a crown - can be extracted by the dual growing method. This method consists of coarse classification, through which most of the artifacts are removed; the automatic selection of appropriate seeds for individual trees, by which the common manual initial setting is avoided; a dual growing process that separates one tree from others by circumscribing a trunk within an adaptive growing radius and segmenting a crown in constrained growing regions; and a refining process that separates a singular trunk from other interlaced objects. The method is verified on two datasets with over 98% completeness and over 96% correctness. The low mean absolute percentage errors in capturing the morphological parameters of individual trees indicate that this method can output individual trees with high precision.
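
    A toy rendering of the trunk-growing idea: start from a seed point and greedily add neighbors within a growing radius, so a vertical trunk is circumscribed while unrelated clutter is left out. The radius here is fixed rather than adaptive, and the data and parameters are invented; the paper's dual growing uses far richer criteria:

    ```python
    # Crude sketch of seeded region growing on a point cloud (illustrative only;
    # the paper's method uses an adaptive radius and constrained crown growing).
    import numpy as np
    from scipy.spatial import cKDTree

    def grow_trunk(points, seed_idx, radius=0.3, max_pts=5000):
        """Greedy growth from a seed: collect all points reachable in steps <= radius."""
        tree = cKDTree(points)
        in_trunk = {seed_idx}
        frontier = [seed_idx]
        while frontier and len(in_trunk) < max_pts:
            idx = frontier.pop()
            for j in tree.query_ball_point(points[idx], r=radius):
                if j not in in_trunk:
                    in_trunk.add(j)
                    frontier.append(j)
        return np.array(sorted(in_trunk))

    # Synthetic scene: a vertical "trunk" plus scattered clutter.
    rng = np.random.default_rng(0)
    trunk = np.column_stack([0.05 * rng.standard_normal((200, 2)),
                             rng.uniform(0, 3, 200)])
    clutter = rng.uniform(-5, 5, (500, 3))
    pts = np.vstack([trunk, clutter])
    print(len(grow_trunk(pts, seed_idx=0)), "points grown from the seed")
    ```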

  7. Developing an Intelligent Automatic Appendix Extraction Method from Ultrasonography Based on Fuzzy ART and Image Processing

    Directory of Open Access Journals (Sweden)

    Kwang Baek Kim

    2015-01-01

    Ultrasound examination (US) plays a key role in the diagnosis and management of patients with clinically suspected appendicitis, the most common abdominal surgical emergency. Among the various sonographic findings of appendicitis, the outer diameter of the appendix is the most important; therefore, clear delineation of the appendix on US images is essential. In this paper, we propose a new intelligent method to extract the appendix automatically from abdominal sonographic images as a basic building block for developing such an intelligent tool for medical practitioners. Knowing that the appendix is located in the lower organ area below the bottom fascia line, we apply a series of image processing techniques to find the fascia line correctly, and then apply the fuzzy ART learning algorithm to the organ area in order to extract the appendix accurately. The experiment verifies that the proposed method is highly accurate (successful in 38 out of 40 cases) in extracting the appendix.
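
    Fuzzy ART itself is compact enough to sketch: inputs are complement-coded, a winning category is chosen by a choice function, accepted only if it passes a vigilance test, and its weights are updated by a fuzzy AND. The parameter values below are conventional textbook defaults, not those used in the paper:

    ```python
    # Minimal Fuzzy ART (after Carpenter, Grossberg & Rosen); illustrative
    # defaults only -- not the parameters or preprocessing used in the paper.
    import numpy as np

    class FuzzyART:
        def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
            self.rho, self.alpha, self.beta = rho, alpha, beta
            self.w = []                              # one weight vector per category

        def _complement(self, x):
            return np.concatenate([x, 1.0 - x])      # complement coding

        def train(self, x):
            i = self._complement(np.asarray(x, float))
            # Choice function T_j = |i ^ w_j| / (alpha + |w_j|), best first.
            scores = [np.minimum(i, w).sum() / (self.alpha + w.sum()) for w in self.w]
            for j in np.argsort(scores)[::-1]:
                match = np.minimum(i, self.w[j]).sum() / i.sum()
                if match >= self.rho:                # vigilance test passed: learn
                    self.w[j] = (self.beta * np.minimum(i, self.w[j])
                                 + (1 - self.beta) * self.w[j])
                    return j
            self.w.append(i.copy())                  # no resonance: new category
            return len(self.w) - 1

    # Cluster toy 2-D features in [0, 1]; two clusters -> two categories.
    art = FuzzyART(rho=0.8)
    for p in [(0.1, 0.2), (0.12, 0.22), (0.9, 0.85), (0.88, 0.9)]:
        print(p, "-> category", art.train(np.array(p)))
    ```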

  8. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    Science.gov (United States)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to the complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering to separate road points from ground points; (2) local principal component analysis with least-squares fitting to extract the primitives of road centerlines; and (3) hierarchical grouping to connect the primitives into a complete road network. Compared with MTH (a combination of the mean-shift algorithm, tensor voting and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen dataset, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction from LiDAR data.
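
    Step (2), local principal component analysis, can be illustrated in a few lines: for each candidate road point, the eigenvalues of its neighborhood's covariance say how line-like the neighborhood is, and the leading eigenvector gives the centerline direction. The neighborhood size and the toy data below are placeholders, not the paper's settings:

    ```python
    # Local PCA over point neighborhoods: linearity score and direction.
    # Illustrative sketch only; parameters are not taken from the paper.
    import numpy as np
    from scipy.spatial import cKDTree

    def linearity_and_direction(points, k=15):
        """For 2-D candidate points, return a linearity score in [0, 1]
        and a unit direction per point, from k-nearest-neighbor PCA."""
        tree = cKDTree(points)
        _, nbrs = tree.query(points, k=k)
        scores = np.empty(len(points))
        dirs = np.empty((len(points), 2))
        for i, idx in enumerate(nbrs):
            nb = points[idx] - points[idx].mean(axis=0)
            evals, evecs = np.linalg.eigh(nb.T @ nb)   # ascending eigenvalues
            scores[i] = (evals[1] - evals[0]) / (evals[1] + 1e-12)
            dirs[i] = evecs[:, 1]                      # leading eigenvector
        return scores, dirs

    # Toy data: a noisy straight "road" of center points along the x axis.
    rng = np.random.default_rng(1)
    x = np.linspace(0, 100, 200)
    road = np.column_stack([x, 0.5 * rng.standard_normal(200)])
    s, d = linearity_and_direction(road)
    print("mean linearity %.2f, mean |dx| of direction %.2f"
          % (s.mean(), abs(d[:, 0]).mean()))
    ```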

  9. The Fractal Patterns of Words in a Text: A Method for Automatic Keyword Extraction.

    Science.gov (United States)

    Najafi, Elham; Darooneh, Amir H

    2015-01-01

    A text can be considered as a one-dimensional array of words. The locations of each word type in this array form a fractal pattern with a certain fractal dimension. We observe that important words, responsible for conveying the meaning of a text, have dimensions considerably different from one, while the fractal dimensions of unimportant words are close to one. We introduce an index quantifying the importance of the words in a given text using their fractal dimensions, and then rank them according to their importance. This index measures the difference between the fractal pattern of a word in the original text and in a shuffled version. Because the shuffled text is meaningless (i.e., words have no importance), the difference between the original and shuffled text can be used to ascertain the degree of fractality. The degree of fractality may be used for automatic keyword detection: words with a degree of fractality higher than a threshold value are taken as the retrieved keywords of the text. We measure the efficiency of our method for keyword extraction by comparing our proposed method with two other well-known methods of automatic keyword extraction.
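
    The heart of the method is comparing a word's fractal dimension in the original text with its dimension after shuffling. A box-counting sketch of that comparison; the dimension estimator, scales and toy text are our simplifications, and the paper's estimator and threshold may differ:

    ```python
    # Degree of fractality of a word: box-counting dimension of its positions
    # in the original text minus that in a shuffled text (our simplification).
    import random
    import numpy as np

    def box_counting_dimension(positions, n, scales=(2, 4, 8, 16, 32)):
        counts = []
        for s in scales:
            boxes = {p * s // n for p in positions}   # occupied boxes at scale s
            counts.append(len(boxes))
        # Slope of log(count) vs log(scale) approximates the dimension.
        return np.polyfit(np.log(scales), np.log(counts), 1)[0]

    def degree_of_fractality(word, tokens):
        pos = [i for i, t in enumerate(tokens) if t == word]
        d_orig = box_counting_dimension(pos, len(tokens))
        shuffled = tokens[:]
        random.shuffle(shuffled)
        pos_sh = [i for i, t in enumerate(shuffled) if t == word]
        return abs(d_orig - box_counting_dimension(pos_sh, len(tokens)))

    # Toy text: "the" spreads evenly, "topic" clusters in one stretch.
    n = 1200
    tokens = ["w%d" % i for i in range(n)]
    for i in range(0, n, 4):
        tokens[i] = "the"
    for i in range(101, 220, 2):
        tokens[i] = "topic"
    for w in ("topic", "the"):
        print(w, "degree of fractality: %.3f" % degree_of_fractality(w, tokens))
    ```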

  10. A method for automatically extracting infectious disease-related primers and probes from the literature

    Directory of Open Access Journals (Sweden)

    Pérez-Rey David

    2010-08-01

    Background: Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and prescription of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences; therefore, it is becoming increasingly important for researchers to be able to navigate this information. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections; (2) detect candidate sequences using a set of finite state machine-based recognizers; (3) refine problem sequences using a rule-based expert system; and (4) annotate the extracted sequences with their related organism/gene information. Results: We tested our approach using a test set composed of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The results of the evaluation show that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions: We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose and prescribe different infectious diseases. In addition, the proposed method can be expanded to detect and extract other biological sequences from the literature. The extracted information can also be used to readily update available primer/probe databases or to create new databases from scratch.
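
    Phase (2), the candidate sequence recognizers, can be approximated with a regular expression (regexes and finite state machines are equivalent in power here). The minimum length and the use of the full IUPAC nucleotide alphabet below are our assumptions, not the paper's exact recognizer rules:

    ```python
    # Regex stand-in for a finite-state primer/probe recognizer
    # (length bound and alphabet are illustrative assumptions).
    import re

    # A candidate: a run of IUPAC nucleotide codes at least 15 symbols long.
    PRIMER_RE = re.compile(r"\b[ACGTUMRWSYKVHDBN]{15,}\b")

    text = ("The forward primer 5'-TGGGCTACACACGTGCTACAATGG-3' and the probe "
            "CCGTCAATTCMTTTRAGTTT were used for amplification.")

    for match in PRIMER_RE.finditer(text.upper()):
        print("candidate sequence:", match.group())
    ```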

  11. A semi-automatic method for extracting thin line structures in images as rooted tree network

    Energy Technology Data Exchange (ETDEWEB)

    Brazzini, Jacopo [Los Alamos National Laboratory]; Dillard, Scott [Los Alamos National Laboratory]; Soille, Pierre [EC - JRC]

    2010-01-01

    This paper addresses the problem of semi-automatic extraction of line networks in digital images - e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method derived from morphological and hydrological concepts, consisting of minimum-cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Then, geodesic propagation from a given seed with this metric is combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.
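
    The directional part of the metric comes from the gradient structure tensor. A minimal construction with NumPy/SciPy on a synthetic image (the combination with geodesic propagation and flow simulation is not reproduced here, and the smoothing scale is our choice):

    ```python
    # Gradient structure tensor of a 2-D image: eigenvalues and orientation.
    # Sketch only; the paper builds an anisotropic metric on top of this.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def structure_tensor_orientation(img, sigma=2.0):
        img = img.astype(float)
        gy, gx = np.gradient(img)
        Jxx = gaussian_filter(gx * gx, sigma)
        Jxy = gaussian_filter(gx * gy, sigma)
        Jyy = gaussian_filter(gy * gy, sigma)
        # Closed-form eigen-decomposition of [[Jxx, Jxy], [Jxy, Jyy]].
        tr, det = Jxx + Jyy, Jxx * Jyy - Jxy ** 2
        disc = np.sqrt(np.maximum(tr ** 2 / 4 - det, 0))
        lam1, lam2 = tr / 2 + disc, tr / 2 - disc      # lam1 >= lam2
        theta = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)   # dominant gradient angle
        return lam1, lam2, theta

    # Synthetic image with a bright horizontal line (a crude "river").
    img = np.zeros((64, 64))
    img[32, :] = 1.0
    lam1, lam2, theta = structure_tensor_orientation(img)
    coherence = (lam1 - lam2) / (lam1 + lam2 + 1e-12)  # ~1 on oriented structures
    print("coherence on the line: %.2f" % coherence[32, 32])
    ```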

  12. Sequential Clustering based Facial Feature Extraction Method for Automatic Creation of Facial Models from Orthogonal Views

    CERN Document Server

    Ghahari, Alireza

    2009-01-01

    Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues in future video systems. We aim to make automatic feature extraction more reliable and robust, and to construct natural 3D features from 2D features detected on a pair of frontal and profile view face images. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent nonperfect orthogonal condition and noncoherent luminance. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the suitability of the resulting facial models for practical applications such as face recognition and facial animation.

  13. Automatic Keyword Extraction from Individual Documents

    Energy Technology Data Exchange (ETDEWEB)

    Rose, Stuart J.; Engel, David W.; Cramer, Nicholas O.; Cowley, Wendy E.

    2010-05-03

    This paper introduces a novel and domain-independent method for automatically extracting keywords, as sequences of one or more words, from individual documents. We describe the method’s configuration parameters and algorithm, and present an evaluation on a benchmark corpus of technical abstracts. We also present a method for generating lists of stop words for specific corpora and domains, and evaluate its ability to improve keyword extraction on the benchmark corpus. Finally, we apply our method of automatic keyword extraction to a corpus of news articles and define metrics for characterizing the exclusivity, essentiality, and generality of extracted keywords within a corpus.
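    A compressed sketch of this candidate-and-score idea: candidate keywords are word runs between stopwords, and each candidate is scored by the summed degree-to-frequency ratio of its member words. The stoplist is abbreviated and punctuation handling simplified; this is not the authors' exact implementation:

    ```python
    # Sketch of stopword-delimited keyword extraction with degree/frequency
    # scoring (abbreviated stoplist; punctuation handling simplified).
    import re
    from collections import defaultdict

    STOPWORDS = {"a", "an", "and", "are", "as", "at", "by", "for", "from",
                 "in", "is", "it", "of", "on", "or", "over", "the", "this",
                 "to", "with", "we"}

    def extract_keywords(text, top=5):
        words = re.findall(r"[a-zA-Z]+", text.lower())
        candidates, run = [], []
        for w in words:
            if w in STOPWORDS:
                if run:
                    candidates.append(tuple(run))
                run = []
            else:
                run.append(w)
        if run:
            candidates.append(tuple(run))
        freq, deg = defaultdict(int), defaultdict(int)
        for cand in candidates:
            for w in cand:
                freq[w] += 1
                deg[w] += len(cand)          # degree counts co-occurring words
        scored = {c: sum(deg[w] / freq[w] for w in c) for c in candidates}
        ranked = sorted(set(scored), key=scored.get, reverse=True)[:top]
        return [" ".join(c) for c in ranked]

    text = ("Compatibility of systems of linear constraints over the set of "
            "natural numbers is considered.")
    print(extract_keywords(text))
    ```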

  14. Automatic Vehicle Extraction from Airborne LiDAR Data Using an Object-Based Point Cloud Analysis Method

    Directory of Open Access Journals (Sweden)

    Jixian Zhang

    2014-09-01

    Automatic vehicle extraction from an airborne laser scanning (ALS) point cloud is very useful for many applications, such as digital elevation model generation and 3D building reconstruction. In this article, an object-based point cloud analysis (OBPCA) method is proposed for vehicle extraction from an ALS point cloud. First, segmentation-based progressive TIN (triangular irregular network) densification is employed to detect the ground points, and the potential vehicle points are detected based on the normalized heights of the non-ground points. Second, 3D connected component analysis is performed to group the potential vehicle points into segments. Finally, vehicle segments are detected based on three features: area, rectangularity and elongatedness. Experiments suggest that the proposed method is capable of achieving higher accuracy than the existing mean-shift-based method for vehicle extraction from an ALS point cloud. Moreover, the larger the point density, the higher the achieved accuracy.

  15. A multi-scale method for automatically extracting the dominant features of cervical vertebrae in CT images

    Directory of Open Access Journals (Sweden)

    Tung-Ying Wu

    2013-07-01

    Localization of the dominant points of the cervical spine in medical images is important for improving medical automation in clinical head and neck applications. In order to automatically identify the dominant points of cervical vertebrae in neck CT images with precision, we propose a method based on multi-scale contour analysis of the deformable shape of the spine. To extract the spine contour, we introduce a method to automatically generate the initial contour of the spine shape, from which the distance field for level-set active contour iterations can also be deduced. In the shape analysis stage, we first coarsely segment the extracted contour at zero-crossing points of the curvature, based on curvature scale space analysis. Then, each segmented curve is analyzed geometrically based on the turning angle property at different scales, and the local extreme points are extracted and verified as the dominant feature points. The vertices of the shape contour are approximately derived at a coarse scale and then adjusted precisely at a fine scale. The experimental results show that we achieve a success rate of 93.4% and an accuracy of 0.37 mm compared with manual results.

  16. Comparison of sample preparation methods for reliable plutonium and neptunium urinalysis using automatic extraction chromatography.

    Science.gov (United States)

    Qiao, Jixin; Xu, Yihong; Hou, Xiaolin; Miró, Manuel

    2014-10-01

    This paper describes the improvement and comparison of analytical methods for the simultaneous determination of trace-level plutonium and neptunium in urine samples by inductively coupled plasma mass spectrometry (ICP-MS). Four sample pre-concentration techniques, including calcium phosphate, iron hydroxide and manganese dioxide co-precipitation, and evaporation, were compared, and the applicability of the different techniques was discussed in order to evaluate and establish the optimal method for an in vivo radioassay program. The analytical results indicate that the various sample pre-concentration approaches afford dissimilar method performances and that care should be taken with specific experimental parameters to improve chemical yields. The best analytical performance in terms of turnaround time (6 h) and chemical yields for plutonium (88.7 ± 11.6%) and neptunium (94.2 ± 2.0%) was achieved by manganese dioxide co-precipitation. The need for dry ashing (≥ 7 h) for calcium phosphate co-precipitation and long-term aging (5 d) for iron hydroxide co-precipitation, respectively, rendered those analytical protocols time-consuming. Despite the fact that evaporation is also somewhat time-consuming (1.5 d), it endows urinalysis methods with better reliability and repeatability compared with co-precipitation techniques. In view of the applicability of the different pre-concentration techniques proposed previously in the literature, the main challenge behind the relevant method development is pointed out to be the release of plutonium and neptunium associated with organic compounds in real urine assays. In this work, different protocols for decomposing organic matter in urine were investigated, of which potassium persulfate (K2S2O8) treatment provided the highest chemical yield of neptunium in the iron hydroxide co-precipitation step; yet, the occurrence of sulfur compounds in the processed sample deteriorated the analytical performance of the ensuing extraction chromatographic separation with chemical …

  17. Automatic Contour Extraction from 2D Image

    Directory of Open Access Journals (Sweden)

    Panagiotis GIOANNIS

    2011-03-01

    Aim: To develop a method for automatic contour extraction from a 2D image. Material and Method: The method is divided into two basic parts, where the user initially chooses the starting point and the threshold. Finally, the method is applied to computed tomography images of bone. Results: An interesting method is developed which can lead to a successful boundary extraction in 2D images. Specifically, data extracted from computed tomography images can be used for 2D bone reconstruction. Conclusions: We believe that such an algorithm, or part of it, can be applied to several other applications for shape feature extraction in medical image analysis and in computer graphics generally.

  18. AN EFFICIENT METHOD FOR AUTOMATIC ROAD EXTRACTION BASED ON MULTIPLE FEATURES FROM LiDAR DATA

    OpenAIRE

    Li, Y.; Hu, X.; H. Guan; Liu, P.

    2016-01-01

    Road extraction in urban areas is a difficult task due to the complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. For these...

  19. Automatic Extraction of Planetary Image Features

    Science.gov (United States)

    Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.

    2009-01-01

    With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large number of Lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data, which often present low contrast and uneven illumination. In this paper, we propose a new method for the extraction of Lunar features (which can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation and the generalized Hough transform. This feature extraction has many applications, among which is image registration.

  20. Automatic Feature Extraction from Planetary Images

    Science.gov (United States)

    Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.

    2010-01-01

    With the launch of several planetary missions in the last decade, a large number of planetary images has already been acquired and many more will be available for analysis in the coming years. The image data need to be analyzed, preferably by automatic processing techniques, because of the huge amount of data. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data, which often present low contrast and uneven illumination. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not been addressed yet. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including a watershed segmentation and the generalized Hough transform. The method has many applications, among which is image registration, and it can be applied to arbitrary planetary images.
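
    Of the techniques combined here, the watershed stage is the easiest to sketch. A toy run with scikit-image on synthetic low-contrast "craters"; the marker rule and threshold are our invention, and the generalized Hough transform stage is not reproduced:

    ```python
    # Watershed segmentation of synthetic crater-like basins (illustrative only).
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    # Synthetic surface with two dark basins standing in for craters.
    yy, xx = np.mgrid[0:128, 0:128]
    img = (np.exp(-(((yy - 40) ** 2 + (xx - 40) ** 2) / 200.0))
           + np.exp(-(((yy - 90) ** 2 + (xx - 90) ** 2) / 300.0)))
    img = 1.0 - img / img.max()

    # Watershed of the gradient magnitude, seeded from the basin interiors.
    gradient = sobel(img)
    markers = ndi.label(img < 0.3)[0]          # seeds inside the basins
    markers[0, 0] = markers.max() + 1          # background seed
    labels = watershed(gradient, markers)
    print("segments found:", len(np.unique(labels)))
    ```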

  1. Comparison of sample preparation methods for reliable plutonium and neptunium urinalysis using automatic extraction chromatography

    DEFF Research Database (Denmark)

    Qiao, Jixin; Xu, Yihong; Hou, Xiaolin;

    2014-01-01

    This paper describes improvement and comparison of analytical methods for simultaneous determination of trace-level plutonium and neptunium in urine samples by inductively coupled plasma mass spectrometry (ICP-MS). Four sample pre-concentration techniques, including calcium phosphate, iron...... by manganese dioxide co-precipitation. The need of drying ashing (>= 7 h) for calcium phosphate co-precipitation and long-term aging (5 d) for iron hydroxide co-precipitation, respectively, rendered time-consuming analytical protocols. Despite the fact that evaporation is also somewhat time-consuming (1.5 d...... of plutonium and neptunium associated with organic compounds in real urine assays. In this work, different protocols for decomposing organic matter in urine were investigated, of which potassium persulfate (K2S2O8) treatment provided the highest chemical yield of neptunium in the iron hydroxide co...

  2. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    OpenAIRE

    Haijian Chen; Dongmei Han; Yonghui Dai; Lina Zhao

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) are very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, which currently are often achieved by ontology techniques. In building ontology, automatic extraction technology is crucial. Because general text mining algorithms do not have an obvious effect on online courses, we designed an algorithm for automatically extracting course ...

  3. Automatic target extraction in complicated background for camera calibration

    Science.gov (United States)

    Guo, Xichao; Wang, Cheng; Wen, Chenglu; Cheng, Ming

    2016-03-01

    In order to perform highly precise camera calibration against complex backgrounds, a novel planar composite target design and the corresponding automatic extraction algorithm are presented. Unlike other commonly used target designs, the proposed target simultaneously encodes feature point coordinates and feature point serial numbers. Based on the original target, templates are prepared by three geometric transformations and used as the input of template matching based on shape context. Finally, parity check and region growing methods are used to extract the target as the final result. The experimental results show that the proposed method for automatic extraction and recognition of the proposed target is effective, accurate and reliable.

  4. Automatic fault extraction using a modified ant-colony algorithm

    International Nuclear Information System (INIS)

    The basis of automatic fault extraction is seismic attributes, such as the coherence cube, in which a fault is typically identified by minimum values. The biggest challenge in automatic fault extraction is noise, including that of the seismic data. However, a fault has better spatial continuity in a certain direction, which makes it quite different from noise. Considering this characteristic, a modified ant-colony algorithm is introduced into automatic fault identification and tracking, where the gradient direction and direction consistency are used as constraints. Numerical model tests show that this method is feasible and effective for automatic fault extraction and noise suppression. Application to field data further illustrates its validity and superiority.

  5. Automatic River Network Extraction from LIDAR Data

    Science.gov (United States)

    Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.

    2016-06-01

    National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) within the hydrography theme. The goal is to get an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: hydrological terrain model generation with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow-accumulation river network); finally, production was launched. The key points of this work have been managing a big-data environment of more than 160,000 LiDAR data files; the infrastructure to store (up to 40 Tb between results and intermediate files) and process the data, using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months; the stability of the software (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri); and, finally, the management of human resources. The result of this production has been an accurate automatic river network extraction for the whole country, with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.
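
    The hydrological criterion, a flow-accumulation river network, rests on a simple idea: route each cell's flow to its steepest downslope neighbor and count how many cells drain through each point; cells whose count exceeds a threshold form the channel network. A toy D8-style sketch on a synthetic DEM (IGN-ES's production chain obviously involves much more):

    ```python
    # Toy D8 flow accumulation on a small DEM grid (illustrative only).
    import numpy as np

    def d8_flow_accumulation(dem):
        """Each cell drains to its steepest downslope neighbor; cells are
        processed from highest to lowest elevation so upstream flow arrives
        before a cell's own total is passed on."""
        rows, cols = dem.shape
        acc = np.ones(dem.shape, dtype=int)         # every cell drains itself
        order = np.argsort(dem, axis=None)[::-1]    # highest elevation first
        for flat in order:
            r, c = divmod(flat, cols)
            best, target = 0.0, None
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                        drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                        if drop > best:
                            best, target = drop, (rr, cc)
            if target:
                acc[target] += acc[r, c]
        return acc

    # Tilted plane with a central valley: flow concentrates along the valley.
    yy, xx = np.mgrid[0:20, 0:20].astype(float)
    dem = yy + 0.5 * np.abs(xx - 10)
    acc = d8_flow_accumulation(dem)
    river = acc > 15                                # threshold -> channel cells
    print("channel cells:", int(river.sum()))
    ```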

  6. Automatic extraction of legal concepts and definitions

    NARCIS (Netherlands)

    R. Winkels; R. Hoekstra

    2012-01-01

    In this paper we present the results of an experiment in automatic concept and definition extraction from written sources of law using relatively simple natural language and standard semantic web technology. The software was tested on six laws from the tax domain.

  7. Automatically extracting class diagrams from spreadsheets

    NARCIS (Netherlands)

    Hermans, F.; Pinzger, M.; Van Deursen, A.

    2010-01-01

    The use of spreadsheets to capture information is widespread in industry. Spreadsheets can thus be a wealthy source of domain information. We propose to automatically extract this information and transform it into class diagrams. The resulting class diagram can be used by software engineers to understand …

  8. Automatic Road Centerline Extraction from Imagery Using Road GPS Data

    OpenAIRE

    Chuqing Cao; Ying Sun

    2014-01-01

    Road centerline extraction from imagery constitutes a key element in numerous geospatial applications, which has been addressed through a variety of approaches. However, most of the existing methods are not capable of dealing with challenges such as different road shapes, complex scenes, and variable resolutions. This paper presents a novel method for road centerline extraction from imagery in a fully automatic approach that addresses the aforementioned challenges by exploiting road GPS data....

  9. A Novel Characteristic Frequency Bands Extraction Method for Automatic Bearing Fault Diagnosis Based on Hilbert Huang Transform

    Directory of Open Access Journals (Sweden)

    Xiao Yu

    2015-11-01

    Because roller element bearing (REB) failures cause unexpected machinery breakdowns, their fault diagnosis has attracted considerable research attention. Established fault feature extraction methods focus on statistical characteristics of the vibration signal, an approach that loses sight of the continuous waveform features. Considering this weakness, this article proposes a novel feature extraction method for frequency bands, named Window Marginal Spectrum Clustering (WMSC), to select salient features from the marginal spectrum of vibration signals obtained by the Hilbert-Huang Transform (HHT). In WMSC, a sliding window is used to divide an entire HHT marginal spectrum (HMS) into window spectrums, after which the Rand Index (RI) criterion of the clustering method is used to evaluate each window. The windows returning higher RI values are selected to construct characteristic frequency bands (CFBs). Next, a hybrid REB fault diagnosis model is constructed, termed by its elements HHT-WMSC-SVM (support vector machines). The effectiveness of HHT-WMSC-SVM is validated by running a series of experiments on REB defect datasets from the Bearing Data Center of Case Western Reserve University (CWRU). The test results evidence three major advantages of the novel method. First, the fault classification accuracy of the HHT-WMSC-SVM model is higher than that of HHT-SVM and of ST-SVM, a method that combines statistical characteristics with SVM. Second, with Gauss white noise added to the original REB defect dataset, the HHT-WMSC-SVM model maintains high classification accuracy, while the classification accuracy of the ST-SVM and HHT-SVM models is significantly reduced. Third, the fault classification accuracy of HHT-WMSC-SVM can exceed 95% under a Pmin range of 500-800 and an m range of 50-300 for the REB defect dataset with Gauss white noise added at Signal-to-Noise Ratio (SNR) = 5. Experimental results indicate that the proposed WMSC method yields high REB fault …

  10. Automatic Extraction of JPF Options and Documentation

    Science.gov (United States)

    Luks, Wojciech; Tkachuk, Oksana; Buschnell, David

    2011-01-01

    Documenting existing Java PathFinder (JPF) projects or developing new extensions is a challenging task. JPF provides a platform for creating new extensions and relies on key-value properties for their configuration. Keeping track of all possible options and extension mechanisms in JPF can be difficult. This paper presents jpf-autodoc-options, a tool that automatically extracts JPF project options and other documentation-related information, which can greatly help both JPF users and developers of JPF extensions.

  11. A semi-automatic method to extract canal pathways in 3D micro-CT images of Octocorals.

    Directory of Open Access Journals (Sweden)

    Alfredo Morales Pinzón

    The long-term goal of our study is to understand the internal organization of the octocoral stem canals, as well as their physiological and functional role in the growth of the colonies, and finally to assess the influence of climatic changes on this species. Here we focus on imaging tools, namely acquisition and processing of three-dimensional high-resolution images, with emphasis on automated extraction of canal pathways. Our aim was to evaluate the feasibility of the whole process, to point out and solve - if possible - technical problems related to specimen conditioning, to determine the best acquisition parameters and to develop the necessary image-processing algorithms. The extracted pathways are expected to facilitate the structural analysis of the colonies, namely to help in observing the distribution, formation and number of canals along the colony. Five volumetric images of Muricea muricata specimens were successfully acquired by X-ray computed tomography with spatial resolution ranging from 4.5 to 25 micrometers. The success mainly depended on specimen immobilization. More than [Formula: see text] of the canals were successfully detected and tracked by the image-processing method developed. The three-dimensional representation of the canal network thus obtained was generated for the first time without the need for histological or other destructive methods. Several canal patterns were observed. Although most of them were simple, i.e. only followed the main branch or "turned" into a secondary branch, many others bifurcated or fused. A majority of bifurcations were observed at branching points. However, some canals appeared and/or ended anywhere along a branch. At the tip of a branch, all canals fused into a unique chamber. Three-dimensional high-resolution tomographic imaging gives a non-destructive insight into the coral ultrastructure and helps in understanding the organization of the canal network. Advanced image-processing techniques greatly …

  12. A method for automatic extraction of key frames

    Institute of Scientific and Technical Information of China (English)

    刘善磊; 赵银娣; 王光辉; 李英成; 薛艳丽; 李建军

    2012-01-01

    In this paper, two main lens distortions and camera calibration are first introduced. A formula for the forward overlap of key frames is then derived from the intrinsic parameters of the camera and the video frame rate, on the basis of which key frames with a specified forward overlap are automatically extracted from either a file source or a real-time source: a timed automatic extraction algorithm is used for the real-time source, and a key-frame positioning method for the file source. Finally, the key frames are corrected with the calibration parameters. Real data were used to test the developed method, and the results show that the technique is efficient and exact.

  13. An automatic face contour extraction method

    Institute of Scientific and Technical Information of China (English)

    李昕昕; 龚勋; 夏冉

    2013-01-01

    Images containing faces are essential to intelligent vision-based human computer interaction, and research efforts in face processing include face recognition, face tracking, and expression recognition. Many applications assume that the faces in an image or an image sequence have been identified and localized. To build fully automated systems that analyze the information contained in face images, robust and efficient face detection algorithms are required; this problem is challenging because faces are non-rigid and have a high degree of variability in size, shape, color, and texture. The purpose of this paper is to provide a relatively robust method for face segmentation in images, based on a curve evolution methodology. Since face images often have blurred contour edges and small gradient changes, the segmentations obtained by the original Chan-Vese (region-based, edge-free geometric active contour) model are generally unsatisfactory and require a large amount of computation. To achieve fast and accurate facial contour localization and segmentation, an improved algorithm combining the Chan-Vese model with the sparse-field numerical algorithm is proposed, and a face segmentation scheme based on curve evolution is built by further combining face detection and mathematical morphology operators. Experimental results show that the improved algorithm not only increases computational efficiency but also effectively detects locally blurred or broken boundaries without the evolving curve breaking, yielding good face segmentation results.
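
    For reference, an off-the-shelf Chan-Vese segmentation with scikit-image is shown below; the paper's actual contributions, the sparse-field acceleration and the face-detection-based initialization, are not part of this snippet, and the parameters are library defaults:

    ```python
    # Baseline Chan-Vese segmentation (scikit-image implementation); the paper
    # accelerates this model with a sparse-field scheme and initializes it
    # from face detection, neither of which is reproduced here.
    import numpy as np
    from skimage import data, img_as_float
    from skimage.segmentation import chan_vese

    image = img_as_float(data.camera()[::4, ::4])   # downsampled test image
    segmentation = chan_vese(image, mu=0.25, lambda1=1.0, lambda2=1.0,
                             tol=1e-3, dt=0.5, init_level_set="checkerboard")
    print("foreground fraction: %.2f" % segmentation.mean())
    ```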

  14. Rapid, potentially automatable method to extract biomarkers for HPLC/ESI/MS/MS to detect and identify BW agents

    Energy Technology Data Exchange (ETDEWEB)

    White, D.C. [Univ. of Tennessee, Knoxville, TN (United States). Center for Environmental Biotechnology]|[Oak Ridge National Lab., TN (United States). Environmental Science Div.; Burkhalter, R.S.; Smith, C. [Univ. of Tennessee, Knoxville, TN (United States). Center for Environmental Biotechnology; Whitaker, K.W. [Microbial Insights, Inc., Rockford, TN (United States)

    1997-12-31

    The program proposes to concentrate on the rapid recovery of signature biomarkers based on automated high-pressure, high-temperature solvent extraction (ASE) and/or supercritical fluid extraction (SFE) to produce lipids, nucleic acids and proteins sequentially concentrated and purified in minutes, with good yields especially from microeukaryotes, Gram-positive bacteria and spores. Lipids are extracted in proportions greater than with classical one-phase, room-temperature solvent extraction, without major changes in lipid composition. High-performance liquid chromatography (HPLC), with or without derivatization, electrospray ionization (ESI) and highly specific detection by mass spectrometry (MS), particularly (MS)^n, provides detection and identification and, because the signature lipid biomarkers are both phenotypic and genotypic biomarkers, insights into the potential infectivity of BW agents. Feasibility has been demonstrated with the detection, identification, and determination of the infectious potential of Cryptosporidium parvum at the sensitivity of a single oocyst (which is unculturable in vitro), and with the accurate identification and prediction of pathogenicity and drug resistance of Mycobacteria spp.

  15. Automatic Extraction of Protein Interaction in Literature

    OpenAIRE

    Liu, Peilei; Wang, Ting

    2014-01-01

    Protein-protein interaction extraction is a key precondition for the construction of protein knowledge networks, and it is very important for research in biomedicine. This paper extracts directional protein-protein interactions from biological text using an SVM-based method. Experiments were evaluated on the LLL05 corpus with good results. The results show that dependency features are important for protein-protein interaction extraction and that features related to the interaction w…

  16. A new generic method for the semi-automatic extraction of river and road networks in low and mid-resolution satellite images

    Energy Technology Data Exchange (ETDEWEB)

    Grazzini, Jacopo [Los Alamos National Laboratory]; Dillard, Scott [PNNL]; Soille, Pierre [EC JRC]

    2010-10-21

    This paper addresses the problem of semi-automatic extraction of road or hydrographic networks in satellite images. For that purpose, we propose an approach combining concepts arising from mathematical morphology and hydrology. The method exploits both geometrical and topological characteristics of rivers/roads and their tributaries in order to reconstruct the complete networks. It assumes that the images satisfy the following two general assumptions, which are the minimum conditions for a road/river network to be identifiable and are usually verified in low- to mid-resolution satellite images: (i) visual constraint: most pixels composing the network have similar spectral signature that is distinguishable from most of the surrounding areas; (ii) geometric constraint: a line is a region that is relatively long and narrow, compared with other objects in the image. While this approach fully exploits local (roads/rivers are modeled as elongated regions with a smooth spectral signature in the image and a maximum width) and global (they are structured like a tree) characteristics of the networks, further directional information about the image structures is incorporated. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Then, geodesic propagation from a given network seed with this metric is combined with hydrological operators for overland flow simulation to extract the paths which contain most line evidence and identify them with the target network.

  17. A new generic method for semi-automatic extraction of river and road networks in low- and mid-resolution satellite images

    Science.gov (United States)

    Grazzini, Jacopo; Dillard, Scott; Soille, Pierre

    2010-10-01

    This paper addresses the problem of semi-automatic extraction of road or hydrographic networks in satellite images. For that purpose, we propose an approach combining concepts arising from mathematical morphology and hydrology. The method exploits both geometrical and topological characteristics of rivers/roads and their tributaries in order to reconstruct the complete networks. It assumes that the images satisfy the following two general assumptions, which are the minimum conditions for a road/river network to be identifiable and are usually verified in low- to mid-resolution satellite images: (i) visual constraint: most pixels composing the network have similar spectral signature that is distinguishable from most of the surrounding areas; (ii) geometric constraint: a line is a region that is relatively long and narrow, compared with other objects in the image. While this approach fully exploits local (roads/rivers are modeled as elongated regions with a smooth spectral signature in the image and a maximum width) and global (they are structured like a tree) characteristics of the networks, further directional information about the image structures is incorporated. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Then, geodesic propagation from a given network seed with this metric is combined with hydrological operators for overland flow simulation to extract the paths which contain most line evidence and identify them with the target network.

  18. Automatic Railway Power Line Extraction Using Mobile Laser Scanning Data

    Science.gov (United States)

    Zhang, Shanxin; Wang, Cheng; Yang, Zhuang; Chen, Yiping; Li, Jonathan

    2016-06-01

    Research on power line extraction technology using mobile laser point clouds has important practical significance for railway power line patrol work. In this paper, we present a new method for automatically extracting railway power lines from MLS (Mobile Laser Scanning) data. Firstly, according to the spatial structure characteristics of the power lines and the trajectory, the significant data is segmented piecewise. Then, a self-adaptive spatial region-growing method is used to extract the power lines parallel to the rails. Finally, PCA (Principal Component Analysis) combined with information entropy theory is used to judge whether a section of the power line is a junction and, if so, which type of junction it belongs to. The least-squares fitting algorithm is introduced to model the power line. An evaluation of the proposed method on complicated railway point clouds acquired by a RIEGL VMX450 MLS system shows that the proposed method is promising.

  19. Semi-automatic methods for landslide features and channel network extraction in a complex mountainous terrain: new opportunities but also challenges from high resolution topography

    Science.gov (United States)

    Tarolli, Paolo; Sofia, Giulia; Pirotti, Francesco; Dalla Fontana, Giancarlo

    2010-05-01

    In recent years, remotely sensed technologies such as airborne and terrestrial laser scanners have improved the detail of analysis, providing high-resolution and high-quality topographic data over large areas better than other technologies. A new generation of high-resolution (~1 m) Digital Terrain Models (DTMs) is now available for different landscapes. These data call for the development of a new generation of methodologies for objective extraction of geomorphic features, such as channel heads, channel networks, bank geometry, landslide scars, service roads, etc. The most important benefit of a high-resolution DTM is the detailed recognition of surface features. It is possible to recognize in detail divergent-convex landforms, associated with the dominance of hillslope processes, and convergent-concave landforms, associated with fluvial-dominated erosion. In this work, we test the performance of new methodologies for objective extraction of geomorphic features related to landsliding and channelized processes, in order to provide a semi-automatic method for channel network and landslide feature recognition in complex mountainous terrain. The methodologies are based on the detection of thresholds derived by statistical analysis of the variability of surface curvature. We considered a study area located in the eastern Italian Alps where a high-quality set of LiDAR data is available and where channel heads, the related channel network, and landslides have been mapped in the field by DGPS. In the analysis we derived 1 m DTMs from bare-ground LiDAR points, and we used different smoothing factors for the curvature calculation in order to set the most suitable curvature maps for the recognition of the selected features. Our analyses suggest that: (i) the scale for curvature calculations has to be a function of the scale of the features to be detected; (ii) rougher curvature maps are not optimal as they do not explore a sufficient range at which features occur, while smoother …

  20. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs.

    Science.gov (United States)

    Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important and are currently often achieved by ontology techniques. In building ontologies, automatic extraction technology is crucial. Because general text mining algorithms do not have an obvious effect on online courses, we designed the automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF algorithm output values; the terms with the highest scores are selected as knowledge points. Course documents for "C programming language" were selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate. PMID:26448738
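
    The scoring step is essentially weighted TF-IDF over classified course documents. A minimal stand-in with scikit-learn; the documents are invented and the AECKP weighting scheme itself is not reproduced:

    ```python
    # TF-IDF term ranking as a stand-in for knowledge point selection
    # (toy documents; the AECKP weight design is not reproduced).
    from sklearn.feature_extraction.text import TfidfVectorizer

    # In AECKP these would first go through document classification,
    # word segmentation and POS tagging.
    docs = ["pointer arithmetic and memory layout in the c language",
            "for loops while loops and control flow statements",
            "pointer to function and callback style interfaces"]

    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(docs)
    terms = vec.get_feature_names_out()

    # Rank terms of one document by TF-IDF; top scores become knowledge points.
    row = tfidf[0].toarray().ravel()
    top = sorted(zip(row, terms), reverse=True)[:3]
    print([t for _, t in top])
    ```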

  1. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    Directory of Open Access Journals (Sweden)

    Haijian Chen

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important and are currently often achieved by ontology techniques. In building ontologies, automatic extraction technology is crucial. Because general text mining algorithms do not have an obvious effect on online courses, we designed the automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF algorithm output values; the terms with the highest scores are selected as knowledge points. Course documents for "C programming language" were selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate.

  2. Automatic Knowledge Extraction and Knowledge Structuring for a National Term Bank

    DEFF Research Database (Denmark)

    Lassen, Tine; Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2011-01-01

    This paper gives an introduction to the plans and ongoing work in a project whose aim is to develop methods for automatic knowledge extraction and for automatic construction and updating of ontologies. The project also aims at developing methods for automatic merging of terminological data from various existing sources, as well as methods for target-group-oriented knowledge dissemination. In this paper, we mainly focus on the plans for automatic knowledge extraction and knowledge structuring that will result in ontologies for a national term bank.

  3. Automatically extracting sheet-metal features from solid model

    Institute of Scientific and Technical Information of China (English)

    刘志坚; 李建军; 王义林; 李材元; 肖祥芷

    2004-01-01

    With the development of modern industry, sheet-metal parts in mass production have been widely applied in the mechanical, communication, electronics, and light industries in recent decades; but advances in sheet-metal part design and manufacturing remain slow compared with the increasing importance of sheet-metal parts in modern industry. This paper proposes a method for automatically extracting features from an arbitrary solid model of a sheet-metal part, using classification and a graph-based representation of sheet-metal features to extract the features embodied in the part. The feature extraction process can be divided into validity checking of the model geometry, feature matching, and feature-relationship analysis. Since the extracted features include abundant geometric and engineering information, they will be effective for downstream applications such as feature rebuilding and stamping process planning.

  4. Automatic Eye Extraction in Human Face Images

    Institute of Scientific and Technical Information of China (English)

    LIU Rujie; YUAN Baozong

    2001-01-01

    This paper presents a fuzzy-based method to locate the position and the size of irises in a head-shoulder image with a plain background. This method is composed of two stages: the face region estimation stage and the eye feature extraction stage. In the first stage, a region growing method is adopted to estimate the face region. In the second stage, the coarse eye area is first extracted based on the location of the nasion, and the deformable template algorithm is then applied to the eye area to determine the position and the size of the irises. Experimental results show the efficiency and robustness of this method.

  5. Fast Hough transform for automatic bridge extraction

    Science.gov (United States)

    Hao, Qiwei; Chen, Xiaomei; Ni, Guoqiang; Zhang, Huaili

    2008-03-01

    In this paper, a new method to recognize bridges against complicated backgrounds is presented. The algorithm takes full advantage of the characteristics of bridge images. First, the image is preprocessed and the object edges are extracted. Then, to address the limitations of the traditional Hough transform (HT), the extraction of line segment characteristics by the HT is improved: spurious peaks are eliminated on the basis of global and local thresholds, the positional relation between two straight line segments is discriminated, segments with near endpoints are merged, etc. Experiments show that this algorithm is more precise and efficient than the traditional HT; moreover, it can provide a complete description of a bridge in a complicated background.
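
    The straight-line machinery the method refines is the standard Hough transform. With scikit-image, the baseline peak extraction looks like the sketch below; the paper's improvements (threshold-based spurious-peak removal and segment merging) are not shown, and the edge image is synthetic:

    ```python
    # Baseline Hough line detection (scikit-image); the paper's refinements
    # of the peak and segment handling are not reproduced here.
    import numpy as np
    from skimage.transform import hough_line, hough_line_peaks

    # Synthetic edge image containing one straight "bridge" edge.
    edges = np.zeros((100, 100), dtype=bool)
    edges[50, 10:90] = True

    hspace, angles, dists = hough_line(edges)
    for _, angle, dist in zip(*hough_line_peaks(hspace, angles, dists)):
        print("line: angle %.1f deg, distance %.1f px"
              % (np.rad2deg(angle), dist))
    ```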

  6. Automatic object extraction over multiscale edge field for multimedia retrieval.

    Science.gov (United States)

    Kiranyaz, Serkan; Ferreira, Miguel; Gabbouj, Moncef

    2006-12-01

    In this work, we focus on automatic extraction of object boundaries from the Canny edge field for the purpose of content-based indexing and retrieval over image and video databases. A multiscale approach is adopted where each successive scale provides further simplification of the image by removing more details, such as texture and noise, while keeping major edges. At each stage of the simplification, edges are extracted from the image and gathered in a scale-map, over which a perceptual subsegment analysis is performed in order to extract true object boundaries. The analysis is mainly motivated by Gestalt laws, and our experimental results suggest a promising performance for main object extraction, even for images with crowded textural edges and objects with color, texture, and illumination variations. Finally, integrating the whole process as a feature extraction module into the MUVIS framework allows us to test the mutual performance of the proposed object extraction method and subsequent shape description in the context of multimedia indexing and retrieval. A promising retrieval performance is achieved; in some particular examples, the experimental results show that the proposed method achieves a retrieval performance that cannot be reached using other features such as color or texture. PMID:17153949
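
    The multiscale scale-map idea can be sketched in a few lines: smooth the image with increasing Gaussian sigmas, run Canny at each scale, and record the coarsest scale at which each edge pixel survives. The scale schedule and thresholds below are assumptions, not the paper's settings.

```python
# A rough sketch of a multiscale edge "scale-map", assuming OpenCV;
# the sigma schedule and Canny thresholds are illustrative.
import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
scale_map = np.zeros_like(img)

for scale, sigma in enumerate((1.0, 2.0, 4.0), start=1):
    smoothed = cv2.GaussianBlur(img, (0, 0), sigma)  # stronger smoothing removes texture/noise
    edges = cv2.Canny(smoothed, 60, 120)
    # Later (coarser) scales overwrite earlier ones, so each pixel ends up
    # holding the coarsest scale at which its edge still appears: major
    # boundaries persist while textural edges drop out early.
    scale_map[edges > 0] = scale

cv2.imwrite("scale_map.png", scale_map * 80)  # visualize accumulated scales
```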

  7. Automatic Extraction of Metadata from Scientific Publications for CRIS Systems

    Science.gov (United States)

    Kovacevic, Aleksandar; Ivanovic, Dragan; Milosavljevic, Branko; Konjovic, Zora; Surla, Dusan

    2011-01-01

    Purpose: The aim of this paper is to develop a system for automatic extraction of metadata from scientific papers in PDF format for the information system for monitoring the scientific research activity of the University of Novi Sad (CRIS UNS). Design/methodology/approach: The system is based on machine learning and performs automatic extraction…

  8. Image feature meaning for automatic key-frame extraction

    Science.gov (United States)

    Di Lecce, Vincenzo; Guerriero, Andrea

    2003-12-01

    Video abstraction and summarization, required in several applications, has directed a number of research efforts toward automatic video analysis techniques. The processes for automatic video analysis are based on the recognition of short sequences of contiguous frames that describe the same scene (shots), and of key frames representing the salient content of the shot. Since effective shot boundary detection techniques exist in the literature, in this paper we focus our attention on key frame extraction techniques, to identify the low-level visual features of the frames that best represent the shot content. To evaluate the features' performance, key frames automatically extracted using these features are compared to human operator video annotations.

  9. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    Science.gov (United States)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  10. Condition Monitoring Method for Automatic Transmission Clutches

    Directory of Open Access Journals (Sweden)

    Agusmian Partogi Ompusunggu

    2012-01-01

    Full Text Available This paper presents the development of a condition monitoring method for wet friction clutches which might be useful for automatic transmission applications. The method is developed based on quantifying the change of the relative rotational velocity signal measured between the input and output shaft of a clutch. Prior to quantifying the change, the raw velocity signal is preprocessed to capture the relative velocity signal of interest. Three dimensionless parameters, namely the normalized engagement duration, the normalized Euclidean distance and the spectral angle mapper distance, that can be easily extracted from the signal of interest, are proposed in this paper to quantify the change. In order to experimentally evaluate and verify the potential of the proposed method, clutch life data obtained by conducting accelerated life tests on some commercial clutches with different lining friction materials, using a fully instrumented SAE#2 test setup, are utilized for this purpose. The aforementioned parameters extracted from the experimental data exhibit clearly progressive changes during the clutch service life and are well correlated with the evolution of the mean coefficient of friction (COF), which can be seen as a reference feature. Hence, the quantities proposed in this paper can be seen as principal features that may enable us to monitor and assess the condition of wet friction clutches.
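
    Two of the three dimensionless parameters are straightforward to compute once the engagement portion of the relative-velocity signal has been isolated. The sketch below uses synthetic placeholder signals and assumes equal-length, equally sampled vectors; it is not the authors' implementation.

```python
# A minimal numpy sketch of the normalized Euclidean distance and the
# spectral angle mapper distance between a reference (healthy) engagement
# signal and a later measurement; both signals are synthetic placeholders.
import numpy as np

def normalized_euclidean(ref, sig):
    """Euclidean distance between signals, normalized by the reference norm."""
    return np.linalg.norm(sig - ref) / np.linalg.norm(ref)

def spectral_angle_mapper(ref, sig):
    """Angle between the two signals viewed as vectors (radians)."""
    cos_a = np.dot(ref, sig) / (np.linalg.norm(ref) * np.linalg.norm(sig))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

t = np.linspace(0.0, 1.0, 500)
reference = np.exp(-5.0 * t)    # engagement of a fresh clutch (placeholder)
degraded = np.exp(-3.5 * t)     # slower engagement after wear (placeholder)

print(normalized_euclidean(reference, degraded))
print(spectral_angle_mapper(reference, degraded))
```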

  11. Automatically extracting functionally equivalent proteins from SwissProt

    Directory of Open Access Journals (Sweden)

    Martin Andrew CR

    2008-10-01

    Full Text Available Abstract Background There is a frequent need to obtain sets of functionally equivalent homologous proteins (FEPs) from different species. While it is usually the case that orthology implies functional equivalence, this is not always true; therefore datasets of orthologous proteins are not appropriate. The information relevant to extracting FEPs is contained in databanks such as UniProtKB/Swiss-Prot, and a manual analysis of these data allows FEPs to be extracted on a one-off basis. However, there has been no resource allowing the easy, automatic extraction of groups of FEPs – for example, all instances of protein C. We have developed FOSTA, an automatically generated database of FEPs annotated as having the same function in UniProtKB/Swiss-Prot, which can be used for large-scale analysis. The method builds a candidate list of homologues and filters out functionally diverged proteins on the basis of functional annotations using a simple text mining approach. Results Large-scale evaluation of our FEP extraction method is difficult as there is no gold-standard dataset against which the method can be benchmarked. However, a manual analysis of five protein families confirmed a high level of performance. A more extensive comparison with two manually verified functional equivalence datasets also demonstrated very good performance. Conclusion In summary, FOSTA provides an automated analysis of annotations in UniProtKB/Swiss-Prot to enable groups of proteins already annotated as functionally equivalent to be extracted. Our results demonstrate that the vast majority of UniProtKB/Swiss-Prot functional annotations are of high quality, and that FOSTA can interpret annotations successfully. Where FOSTA is not successful, we are able to highlight inconsistencies in UniProtKB/Swiss-Prot annotation. Most of these would have presented equal difficulties for manual interpretation of annotations. We discuss limitations and possible future extensions to FOSTA, and

  12. Automatic Statistics Extraction for Amateur Soccer Videos

    NARCIS (Netherlands)

    Gemert, J.C. van; Schavemaker, J.G.M.; Bonenkamp, C.W.B.

    2014-01-01

    Amateur soccer statistics have interesting applications such as providing insights to improve team performance, individual coaching, monitoring team progress and personal or team entertainment. Professional soccer statistics are extracted with labor intensive expensive manual effort which is not rea

  13. Unsupervised segmentation of cardiac PET transmission images for automatic heart volume extraction.

    Science.gov (United States)

    Juslin, Anu; Tohka, Jussi

    2006-01-01

    In this study, we propose an automatic method to extract the heart volume from cardiac positron emission tomography (PET) transmission images. The method combines automatic 3D segmentation of the transmission image using Markov random fields (MRFs) with surface extraction using deformable models. The deformable models were automatically initialized using the MRF segmentation result. The extraction of the heart region is needed, e.g., in independent component analysis (ICA). The volume of the heart can be used to mask the emission image corresponding to the transmission image, so that only the cardiac region is used for the analysis. The masking restricts the number of independent components and reduces the computation time. In addition, the MRF segmentation result could be used for attenuation correction. The method was tested with 25 patient images. The MRF segmentation results were of good quality in all cases, and we were able to extract the heart volume from all the images. PMID:17946020

  14. Automatic Contour Extraction from 2D Neuron Images

    CERN Document Server

    Leandro, J J G; Costa, L da F

    2008-01-01

    The current work describes a novel system devised for automatic contour extraction from 2D images of branching structures obtained from 3D neurons. Most contour-based methods for neuronal cell shape analysis fall short of suitably representing such cells because overlaps between neuronal processes prevent traditional contour following algorithms from entering the innermost cell regions. The herein-proposed framework is specifically aimed at the problem of contour following even in the presence of multiple overlaps. First, the input image is preprocessed in order to obtain an 8-connected skeleton with one-pixel-wide branches, as well as a set of subtree seed pixels and critical regions (i.e., bifurcations and crossings). Next, for each subtree, the tracking algorithm iteratively labels all valid pixel branches, up to a critical region, where the algorithm determines the suitable direction to proceed. Our algorithm has been found to exhibit robustness even for images with close parallel segments. Experimental resul...

  15. Automatic moving object extraction toward compact video representation

    Science.gov (United States)

    Fan, Jianping; Fujita, Gen; Furuie, Makoto; Onoye, Takao; Shirakawa, Isao; Wu, Lide

    2000-02-01

    An automatic object-oriented video segmentation and representation algorithm is proposed, where the local variance contrast and the frame-difference contrast are jointly exploited for meaningful moving object extraction, because these two visual features can efficiently indicate the spatial homogeneity of the gray levels and the temporal coherence of the motion fields. The 2D entropic thresholding technique and the watershed transformation method are further developed to determine the global feature thresholds adaptively according to the variation of the video components. The obtained video components are first represented coarsely by a group of 4 x 4 blocks, and the meaningful moving objects are then generated by an iterative region-merging procedure according to a spatiotemporal similarity measure. A temporal tracking procedure is further proposed to obtain more semantic moving objects across frames. Therefore, the proposed automatic moving object extraction algorithm can efficiently detect the appearance of new objects as well as the disappearance of existing objects, because the correspondence of the video objects among frames is also established. Moreover, an object-oriented video representation and indexing approach is suggested, where both the operation of the camera (i.e., change of the viewpoint) and the birth or death of the individual objects are exploited to detect the breakpoints of the video data and to select the key frames adaptively.
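
    The temporal cue (frame-difference contrast) reduces to a few operations in OpenCV. In the sketch below, Otsu's global threshold stands in for the paper's 2D entropic thresholding, and file names are placeholders.

```python
# A simplified frame-differencing sketch; Otsu's method substitutes for
# the 2D entropic thresholding described above.
import cv2

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

diff = cv2.absdiff(curr, prev)   # temporal coherence cue
_, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)

# Morphological cleanup before blocks are merged into object candidates.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
cv2.imwrite("motion_mask.png", mask)
```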

  16. A Novel Automatic Extraction Method of Lung Texture Tree from HRCT Images Based on an Active Contour Model

    Institute of Scientific and Technical Information of China (English)

    刘军伟; 冯焕清; 周颖玥; 李传富

    2009-01-01

    Computed tomography (CT) is the primary imaging modality for the investigation of lung function and lung diseases. High-resolution CT slice images of the chest contain rich texture information, which provides powerful datasets for research on computer-aided diagnosis (CAD) systems. However, the extraction of lung tissue textures is a challenging task. In this paper, we introduce a novel method based on level sets to extract the lung tissue texture tree, which is automatic and effective. First, we propose an improved implicit active contour model driven by local binary fitting energy, with dynamic parameters modulated by image gradient information. Second, a new technique of painting the background based on intensity nonlinear mapping is brought forward to remove the influence of the background during the evolution of a single level set function. Finally, a number of contrast experiments are performed, and the results of 3D surface reconstruction show that our method is efficient and powerful for the segmentation of fine lung texture tree structures.

  17. Automatic Extraction of Mangrove Vegetation from Optical Satellite Data

    Science.gov (United States)

    Agrawal, Mayank; Sushma Reddy, Devireddy; Prasad, Ram Chandra

    2016-06-01

    Mangroves, the intertidal halophytic vegetation, form one of the most significant and diverse ecosystems in the world. They protect the coast from sea erosion and other natural disasters like tsunamis and cyclones. In view of their increased destruction and degradation in the current scenario, mapping of this vegetation is a priority. Globally, researchers have mapped mangrove vegetation using visual interpretation methods, digital classification approaches, or a combination of both (hybrid approaches), using varied spatial and spectral data sets. In the recent past, techniques have been developed to extract this coastal vegetation automatically using varied algorithms. In the current study we tried to delineate mangrove vegetation using LISS III and Landsat 8 data sets for selected locations of the Andaman and Nicobar Islands. Towards this we made an attempt to use a segmentation method that characterizes the mangrove vegetation based on tone and texture, and a pixel-based classification method, where the mangroves are identified based on their pixel values. The results obtained from both approaches were validated using maps available for the selected region, and better accuracy was obtained with respect to their delineation. The main focus of this paper is the simplicity of the methods and the availability of the data on which these methods are applied, as these data (Landsat) are readily available for many regions. Our methods are very flexible and can be applied to any region.
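
    As a flavor of the pixel-based branch, a vegetation index threshold is often the first step. The sketch below assumes the red and near-infrared bands have already been loaded as float arrays (for Landsat 8, bands 4 and 5); the threshold value is a placeholder that would need site-specific tuning, and the paper's tone/texture segmentation branch is not shown.

```python
# An illustrative NDVI-threshold step for pixel-based vegetation mapping;
# input file names and the 0.3 threshold are assumptions.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    return (nir - red) / (nir + red + eps)

nir = np.load("band_nir.npy").astype(float)   # placeholder inputs
red = np.load("band_red.npy").astype(float)

vegetation = ndvi(nir, red) > 0.3             # coarse vegetation mask
# A mangrove mask would further intersect this with an intertidal-zone mask
# (e.g., from a water index or DEM) before validation against reference maps.
np.save("vegetation_mask.npy", vegetation)
```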

  18. Automatically extracting translation links using a wide coverage semantic taxonomy

    OpenAIRE

    Rigau Claramunt, German; Rodríguez Hontoria, Horacio; Turmo Borras, Jorge

    1995-01-01

    TGE (Tlink Generator Environment) is a system for semi-automatically extracting translation links. The system was developed within the ACQUILEX II project as a tool for supporting the construction of a multi-lingual lexical knowledge base containing detailed syntactic and semantic information from MRD resources. A drawback of the original system was the need of human intervention for selecting the more appropriate translation links in the case where more than one were extracted and proposed b...

  19. Super pixel density based clustering automatic image classification method

    Science.gov (United States)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and how to achieve rapid automated image classification has been a focus of research. In this paper, an automatic image classification and outlier identification method is proposed, based on clustering of superpixel density centers. Image pixel location coordinates and gray values are used to compute density and distance, achieving automatic image classification and outlier extraction. Because increasing pixel counts dramatically increase the computational complexity, the image is preprocessed into a small number of superpixel sub-blocks before the density and distance calculations. A normalized density and distance discrimination rule is designed to select cluster centers automatically, whereby the image is automatically classified and outliers are identified. Extensive experiments show that our method requires no human intervention, computes faster than the density clustering algorithm, and can effectively automate image classification and outlier extraction.
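
    The density-and-distance idea can be sketched compactly: each sample gets a local density and the distance to its nearest higher-density neighbor, and points scoring high on both are taken as cluster centers (outliers show low density but large distance). The sketch below runs on synthetic 2D points standing in for superpixel centroids; the cutoff radius and number of centers are assumptions.

```python
# A compact numpy sketch of density-peak style center selection;
# superpixel features would replace the synthetic points.
import numpy as np

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])

d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)  # pairwise distances
rho = (d < 0.5).sum(axis=1)                               # local density (cutoff kernel)

# delta: distance to the nearest point of strictly higher density.
delta = np.empty(len(pts))
for i in range(len(pts)):
    higher = d[i, rho > rho[i]]
    delta[i] = higher.min() if higher.size else d[i].max()

# Normalized density-distance product: centers score high on both terms.
score = (rho / rho.max()) * (delta / delta.max())
centers = np.argsort(score)[-2:]
print("cluster centers at indices:", centers)
```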

  20. Automatic Segmentation of Raw LIDAR Data for Extraction of Building Roofs

    OpenAIRE

    Mohammad Awrangjeb; Fraser, Clive S.

    2014-01-01

    Automatic extraction of building roofs from remote sensing data is important for many applications, including 3D city modeling. This paper proposes a new method for automatic segmentation of raw LIDAR (light detection and ranging) data. Using the ground height from a DEM (digital elevation model), the raw LIDAR points are separated into two groups. The first group contains the ground points that form a “building mask”. The second group contains non-ground points that are clustered using the b...

  1. The Method of Coastline Automatic Extraction in Xiamen Island

    Institute of Scientific and Technical Information of China (English)

    齐宇; 任航科

    2012-01-01

    Remote sensing methods for monitoring coastline change, extracting coastlines, and performing landscape analysis offer wide coverage, high precision, and dynamic monitoring. Because coastal zone types differ and different extraction methods are selected, coastline extraction can produce significantly different results. Taking the coastline of Xiamen Island as an example, this paper uses TM remote sensing imagery and two coastline extraction methods to obtain two computer-extracted coastline positions, for two coast types: sandy beach and artificial coast. The coast types were confirmed through field investigation around Xiamen Island, and an accuracy analysis was performed by overlaying high-spatial-resolution SPOT imagery. Finally, the paper discusses how to select an automatic coastline extraction method according to the coastal zone type.

  2. Fully automatic extraction of salient objects from videos in near real-time

    CERN Document Server

    Kazuma, Akamine; Kimura, Akisato; Takagi, Shigeru

    2010-01-01

    Automatic video segmentation plays an important role in a wide range of computer vision and image processing applications. Recently, various methods have been proposed for this purpose. The problem is that most of these methods are far from real-time processing even for low-resolution videos, due to their complex procedures. To this end, we propose a new and quite fast method for automatic video segmentation with the help of 1) efficient optimization of Markov random fields with polynomial time in the number of pixels by introducing graph cuts, 2) automatic, computationally efficient but stable derivation of segmentation priors using visual saliency and a sequential update mechanism, and 3) an implementation strategy in the principle of stream processing with graphics processing units (GPUs). Test results indicate that our method extracts appropriate regions from videos as precisely as, and much faster than, previous semi-automatic methods, even though no supervision has been incorporated.

  3. Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud

    Science.gov (United States)

    Chen, Jianqin; Zhu, Hehua; Li, Xiaojun

    2016-10-01

    This paper presents a new method for extracting discontinuity orientation automatically from rock mass surface 3D point cloud. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of discontinuity plane. The method is first validated by the point cloud of a small piece of a rock slope acquired by photogrammetry. The extracted discontinuity orientations are compared with measured ones in the field. Then it is applied to a publicly available LiDAR data of a road cut rock slope at Rockbench repository. The extracted discontinuity orientations are compared with the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable and of high accuracy, and can meet the engineering needs.
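
    Steps (1) and (3) have common off-the-shelf analogues. The sketch below assumes per-point unit normals have already been estimated from the cloud, uses plain K-means where the paper uses an improved variant, and fits a plane z = ax + by + c with scikit-learn's RANSAC; file names, k, and thresholds are placeholders.

```python
# A rough sketch of normal clustering plus RANSAC plane fitting for one
# discontinuity set; inputs are assumed precomputed.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression, RANSACRegressor

normals = np.load("normals.npy")   # (N, 3) unit normals, assumed precomputed
points = np.load("points.npy")     # (N, 3) xyz coordinates

# Step 1: group normals into discontinuity sets (plain K-means here).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(normals)

# Step 3: robust plane fit z = ax + by + c for one set via RANSAC.
subset = points[labels == 0]
ransac = RANSACRegressor(LinearRegression(), residual_threshold=0.05)
ransac.fit(subset[:, :2], subset[:, 2])
a, b = ransac.estimator_.coef_
n = np.array([a, b, -1.0])
n /= np.linalg.norm(n)

dip = np.degrees(np.arccos(abs(n[2])))   # dip angle from the plane normal
print(f"dip angle: {dip:.1f} deg")
```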

  4. Document Exploration and Automatic Knowledge Extraction for Unstructured Biomedical Text

    Science.gov (United States)

    Chu, S.; Totaro, G.; Doshi, N.; Thapar, S.; Mattmann, C. A.; Ramirez, P.

    2015-12-01

    We describe our work on building a web-browser-based document reader with a built-in exploration tool and automatic concept extraction of medical entities for biomedical text. Vast amounts of biomedical information are offered in unstructured text form through scientific publications and R&D reports. Utilizing text mining can help us to mine information and extract relevant knowledge from a plethora of biomedical text. The ability to employ such technologies to aid researchers in coping with information overload is greatly desirable. In recent years, there has been an increased interest in automatic biomedical concept extraction [1, 2] and intelligent PDF reader tools with the ability to search on content and find related articles [3]. Such reader tools are typically desktop applications and are limited to specific platforms. Our goal is to provide researchers with a simple tool to aid them in finding, reading, and exploring documents. Thus, we propose a web-based document explorer, which we call Shangri-Docs, which combines a document reader with automatic concept extraction and highlighting of relevant terms. Shangri-Docs also provides the ability to evaluate a wide variety of document formats (e.g. PDF, Word, PPT, text, etc.) and to exploit the linked nature of the Web and personal content by performing searches on content from public sites (e.g. Wikipedia, PubMed) and private cataloged databases simultaneously. Shangri-Docs utilizes Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) [4] and the Unified Medical Language System (UMLS) to automatically identify and highlight terms and concepts, such as specific symptoms, diseases, drugs, and anatomical sites, mentioned in the text. cTAKES was originally designed specifically to extract information from clinical medical records. Our investigation leads us to extend the automatic knowledge extraction process of cTAKES to the biomedical research domain by improving the ontology-guided information extraction

  5. Development of Automatic Extraction Weld for Industrial Radiographic Negative Inspection

    Institute of Scientific and Technical Information of China (English)

    张晓光; 林家骏; 李浴; 卢印举

    2003-01-01

    In industrial X-ray inspection, in order to identify weld defects automatically, raise the identification ratio, and avoid processing of complex background, extracting the weld from the image is an important step for subsequent processing. According to the characteristics of weld radiograph images, a median filter is adopted to reduce high-frequency noise; then the relative gray-scale of the image is chosen as the fuzzy characteristic, an image gray-scale fuzzy matrix is constructed, and a suitable membership function is selected to describe the edge characteristic. A fuzzy algorithm is adopted to enhance the radiograph image. Based on the intensity distribution characteristic in the weld, a weld extraction methodology is then designed. This paper describes the complete weld extraction methodology, including noise reduction, fuzzy enhancement, and the weld extraction process. To prove its effectiveness, this methodology was tested with the 64 weld negative images available for this study. The experimental results show that this methodology is very effective for extracting linear welds.

  6. Automatic Building Extraction From LIDAR Data Covering Complex Urban Scenes

    Science.gov (United States)

    Awrangjeb, M.; Lu, G.; Fraser, C.

    2014-08-01

    This paper presents a new method for segmentation of LIDAR point cloud data for automatic building extraction. Using the ground height from a DEM (Digital Elevation Model), the non-ground points (mainly buildings and trees) are separated from the ground points. Points on walls are removed from the set of non-ground points by applying the following two approaches: if a plane fitted at a point and its neighbourhood is perpendicular to a fictitious horizontal plane, then this point is designated as a wall point; and when LIDAR points are projected on a dense grid, points within a narrow area close to an imaginary vertical line on the wall should fall into the same grid cell, so if three or more points fall into the same cell, the intermediate points are removed as wall points. The remaining non-ground points are then divided into clusters based on height and local neighbourhood. One or more clusters are initialised based on the maximum height of the points, and then each cluster is extended by applying height and neighbourhood constraints. Planar roof segments are extracted from each cluster of points following a region-growing technique. Planes are initialised using coplanar points as seed points and then grown using plane compatibility tests. If the estimated height of a point is similar to its LIDAR-generated height, or if its normal distance to a plane is within a predefined limit, then the point is added to the plane. Once all the planar segments are extracted, the common points between the neighbouring planes are assigned to the appropriate planes based on the plane intersection line, locality, and the angle between the normal at a common point and the corresponding plane. A rule-based procedure is applied to remove tree planes, which are small in size and randomly oriented. The neighbouring planes are then merged to obtain individual building boundaries, which are regularised based on long line segments. Experimental results on ISPRS benchmark data sets show that the

  7. Condition Monitoring Method for Automatic Transmission Clutches

    OpenAIRE

    Agusmian Partogi Ompusunggu; Jean-Michel Papy; Steve Vandenplas; Paul Sas; Hendrik Van Brussel

    2012-01-01

    This paper presents the development of a condition monitoring method for wet friction clutches which might be useful for automatic transmission applications. The method is developed based on quantifying the change of the relative rotational velocity signal measured between the input and output shaft of a clutch. Prior to quantifying the change, the raw velocity signal is preprocessed to capture the relative velocity signal of interest. Three dimensionless parameters, namely the normalized eng...

  8. Painful Bile Extraction Methods

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    It was only in the past 20 years that countries in Asia began to search for an alternative to protect moon bears from being killed for their bile and other body parts. In the early 1980s, a new method of extracting bile from living bears was developed in North Korea. In 1983, Chinese scientists imported this technique from North Korea. According to the Animals Asia Foundation, the most original method of bile extraction is to embed a latex catheter, a narrow rubber

  9. Automatic extraction of forward stroke volume using dynamic 11C-acetate PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik;

    …potentially introducing bias if measured with a separate modality. The aim of this study was to develop and validate methods for automatically extracting FSV directly from the dynamic PET used for measuring oxidative metabolism. Methods: 16 subjects underwent a dynamic 27 min PET scan on a Siemens Biograph TruePoint 64 PET/CT scanner after bolus injection of 399±27 MBq of 11C-acetate. The LV-aortic time-activity curve (TAC) was extracted automatically from dynamic PET data using cluster analysis. The first-pass peak was derived by automatic extrapolation of the down-slope of the TAC. FSV was then calculated as the injected dose divided by the product of heart rate and the area under the curve of the first-pass peak. Gold standard FSV was measured in the left ventricular outflow tract by cardiovascular magnetic resonance using phase-contrast velocity mapping within two weeks of PET imaging. Results…
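
    The indicator-dilution arithmetic at the heart of the method is compact. The sketch below assumes the first-pass portion of the LV-aortic TAC has already been isolated; all numbers are synthetic placeholders chosen only to give a physiologically plausible output.

```python
# A minimal sketch of FSV = injected dose / (heart rate x first-pass AUC);
# the time-activity curve here is a synthetic Gaussian peak.
import numpy as np

t = np.linspace(0.0, 30.0, 61)                         # s
first_pass = 2000.0 * np.exp(-0.5 * (t - 12.0) ** 2)   # kBq/mL, synthetic peak

injected_dose = 400e3        # kBq (about 400 MBq)
heart_rate = 65.0 / 60.0     # beats per second, matching the time axis

auc = np.trapz(first_pass, t)                 # (kBq/mL) * s
fsv_ml = injected_dose / (heart_rate * auc)   # mL per beat
print(f"forward stroke volume: {fsv_ml:.1f} mL")
```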

  10. Automatic extraction of gene and protein synonyms from MEDLINE and journal articles.

    OpenAIRE

    Hong YU; Hatzivassiloglou, Vasileios; Friedman, Carol; Rzhetsky, Andrey; Wilbur, W. John

    2002-01-01

    Genes and proteins are often associated with multiple names, and more names are added as new functional or structural information is discovered. Because authors often alternate between these synonyms, information retrieval and extraction benefits from identifying these synonymous names. We have developed a method to extract automatically synonymous gene and protein names from MEDLINE and journal articles. We first identified patterns authors use to list synonymous gene and protein names. We d...

  11. Physiologically Motivated Feature Extraction for Robust Automatic Speech Recognition

    Directory of Open Access Journals (Sweden)

    Ibrahim Missaoui

    2016-04-01

    Full Text Available In this paper, a new method is presented to extract robust speech features in the presence of external noise. The proposed method, based on two-dimensional Gabor filters, takes into account the spectro-temporal modulation frequencies and also limits redundancy at the feature level. The performance of the proposed feature extraction method was evaluated on isolated speech words extracted from the TIMIT corpus and corrupted by background noise. The evaluation results demonstrate that the proposed feature extraction method outperforms classic methods such as Perceptual Linear Prediction, Linear Predictive Coding, Linear Prediction Cepstral coefficients and Mel Frequency Cepstral Coefficients.
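
    The spectro-temporal filtering stage can be approximated with stock 2D Gabor kernels applied to a time-frequency representation. The sketch below assumes a precomputed log-mel spectrogram and uses illustrative kernel parameters; the paper's redundancy-limiting step is only noted in a comment.

```python
# An illustrative 2D Gabor filter bank over a spectrogram; the input file
# and the (frequency, theta) grid are assumptions.
import numpy as np
from scipy.signal import fftconvolve
from skimage.filters import gabor_kernel

spec = np.load("log_mel_spectrogram.npy")   # (freq_bins, time_frames), assumed

features = []
for theta in (0.0, np.pi / 4, np.pi / 2):   # spectral, diagonal, temporal
    for freq in (0.1, 0.25):                # modulation frequencies (cycles/sample)
        kern = np.real(gabor_kernel(frequency=freq, theta=theta))
        features.append(fftconvolve(spec, kern, mode="same"))

# The stacked responses form candidate feature maps; the paper additionally
# limits redundancy on the feature level before recognition.
feature_stack = np.stack(features)
print(feature_stack.shape)
```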

  12. Automatic local Gabor Features extraction for face recognition

    CERN Document Server

    Jemaa, Yousra Ben

    2009-01-01

    We present in this paper a biometric system for face detection and recognition in color images. The face detection technique is based on skin color information and fuzzy classification. A new algorithm is proposed to detect face features (eyes, mouth and nose) automatically and extract their corresponding geometrical points. These fiducial points are described by sets of wavelet components which are used for recognition. To achieve face recognition, we use neural networks and we study their performance for different inputs. We compare the two types of features used for recognition: geometric distances and Gabor coefficients, which can be used either independently or jointly. This comparison shows that Gabor coefficients are more powerful than geometric distances. We show with experimental results how the high recognition rate makes our system an effective tool for automatic face detection and recognition.

  13. AUTOMATIC EXTRACTION OF BUILDING OUTLINE FROM HIGH RESOLUTION AERIAL IMAGERY

    Directory of Open Access Journals (Sweden)

    Y. Wang

    2016-06-01

    Full Text Available In this paper, a new approach for automated extraction of building boundary from high resolution imagery is proposed. The proposed approach uses both geometric and spectral properties of a building to detect and locate buildings accurately. It consists of automatic generation of high quality point cloud from the imagery, building detection from point cloud, classification of building roof and generation of building outline. Point cloud is generated from the imagery automatically using semi-global image matching technology. Buildings are detected from the differential surface generated from the point cloud. Further classification of building roof is performed in order to generate accurate building outline. Finally classified building roof is converted into vector format. Numerous tests have been done on images in different locations and results are presented in the paper.

  14. Automatic Extraction of Building Outline from High Resolution Aerial Imagery

    Science.gov (United States)

    Wang, Yandong

    2016-06-01

    In this paper, a new approach for automated extraction of building boundary from high resolution imagery is proposed. The proposed approach uses both geometric and spectral properties of a building to detect and locate buildings accurately. It consists of automatic generation of high quality point cloud from the imagery, building detection from point cloud, classification of building roof and generation of building outline. Point cloud is generated from the imagery automatically using semi-global image matching technology. Buildings are detected from the differential surface generated from the point cloud. Further classification of building roof is performed in order to generate accurate building outline. Finally classified building roof is converted into vector format. Numerous tests have been done on images in different locations and results are presented in the paper.

  15. Physiologically Motivated Feature Extraction for Robust Automatic Speech Recognition

    OpenAIRE

    Ibrahim Missaoui; Zied Lachiri

    2016-01-01

    In this paper, a new method is presented to extract robust speech features in the presence of the external noise. The proposed method based on two-dimensional Gabor filters takes in account the spectro-temporal modulation frequencies and also limits the redundancy on the feature level. The performance of the proposed feature extraction method was evaluated on isolated speech words which are extracted from TIMIT corpus and corrupted by background noise. The evaluation results demonstrate that ...

  16. ANALYSIS METHOD OF AUTOMATIC PLANETARY TRANSMISSION KINEMATICS

    Directory of Open Access Journals (Sweden)

    Józef DREWNIAK

    2014-06-01

    Full Text Available In the present paper, a planetary automatic transmission is modeled by means of contour graphs. The goals of modeling can be versatile: ratio calculation via algorithmic equation generation, and analysis of velocities and accelerations. Exemplary gear runs are analyzed, with several drives/gears consecutively taken into account, discussing functional schemes, assigned contour graphs, and the generated systems of equations and their solutions. The advantages of the method are its algorithmic approach and its generality, whereby particular drives are cases of the generally created model. Moreover, the method allows for further analysis and synthesis tasks, e.g., checking the isomorphism of design solutions.

  17. Automatic landmark extraction from image data using modified growing neural gas network.

    Science.gov (United States)

    Fatemizadeh, Emad; Lucas, Caro; Soltanian-Zadeh, Hamid

    2003-06-01

    A new method for automatic landmark extraction from MR brain images is presented. In this method, landmark extraction is accomplished by modifying the growing neural gas (GNG), a neural-network-based cluster-seeking algorithm. Using modified GNG (MGNG), corresponding dominant points of contours extracted from two corresponding images are found. These contours are borders of segmented anatomical regions from brain images. The presented method is compared to: 1) the node splitting-merging Kohonen model and 2) the Teh-Chin algorithm (a well-known approach for dominant point extraction from ordered curves). It is shown that the proposed algorithm has lower distortion error, is able to extract landmarks from two corresponding curves simultaneously, and also generates the best match according to five medical experts. PMID:12834162

  18. Automatic archaeological feature extraction from satellite VHR images

    Science.gov (United States)

    Jahjah, Munzer; Ulivieri, Carlo

    2010-05-01

    Archaeological applications need a methodological approach on a variable scale able to satisfy both intra-site (excavation) and inter-site (survey, environmental) research. The increased availability of high resolution and micro-scale data has substantially favoured archaeological applications and the consequent use of GIS platforms for the reconstruction of archaeological landscapes based on remotely sensed data. Feature extraction from multispectral remote sensing images is an important task before any further processing. High resolution remote sensing data, especially panchromatic, is an important input for the analysis of various types of image characteristics; it plays an important role in visual systems for recognition and interpretation of given data. The methods proposed rely on an object-oriented approach based on a theory for the analysis of spatial structures called mathematical morphology. The term "morphology" stems from the fact that it aims at analysing object shapes and forms. It is mathematical in the sense that the analysis is based on set theory, integral geometry, and lattice algebra. Mathematical morphology has proven to be a powerful image analysis technique; two-dimensional grey tone images are seen as three-dimensional sets by associating each image pixel with an elevation proportional to its intensity level. An object of known shape and size, called the structuring element, is then used to investigate the morphology of the input set. This is achieved by positioning the origin of the structuring element at every possible position of the space and testing, for each position, whether the structuring element either is included in or has a nonempty intersection with the studied set. The shape and size of the structuring element must be selected according to the morphology of the searched image structures. Two other feature extraction techniques, eCognition and the ENVI SW module, were used in order to compare the results. These techniques were
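
    A small taste of this morphological probing: a white top-hat keeps bright structures smaller than the structuring element, so differencing two element sizes isolates features in a size band. The image name, element sizes, and threshold below are assumptions, not the paper's settings.

```python
# A sketch of size-band extraction with morphological top-hats on a
# panchromatic image, using scikit-image.
import numpy as np
from skimage import io, morphology

img = io.imread("pan_scene.png", as_gray=True)

small = morphology.white_tophat(img, morphology.square(5))
large = morphology.white_tophat(img, morphology.square(15))
band = large - small   # bright structures between roughly 5 and 15 px

mask = band > band.mean() + 2 * band.std()   # crude detection threshold
io.imsave("candidates.png", (mask * 255).astype(np.uint8))
```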

  19. AUTOMATIC ROAD EXTRACTION BASED ON INTEGRATION OF HIGH RESOLUTION LIDAR AND AERIAL IMAGERY

    OpenAIRE

    Rahimi, S.; H. Arefi; Bahmanyar, R.

    2015-01-01

    In recent years, the rapid increase in the demand for road information together with the availability of large volumes of high resolution Earth Observation (EO) images, have drawn remarkable interest to the use of EO images for road extraction. Among the proposed methods, the unsupervised fully-automatic ones are more efficient since they do not require human effort. Considering the proposed methods, the focus is usually to improve the road network detection, while the roads’ precise...

  20. Automatic centerline extraction of coronary arteries in coronary computed tomographic angiography.

    Science.gov (United States)

    Yang, Guanyu; Kitslaar, Pieter; Frenay, Michel; Broersen, Alexander; Boogers, Mark J; Bax, Jeroen J; Reiber, Johan H C; Dijkstra, Jouke

    2012-04-01

    Coronary computed tomographic angiography (CCTA) is a non-invasive imaging modality for the visualization of the heart and coronary arteries. To fully exploit the potential of the CCTA datasets and apply it in clinical practice, an automated coronary artery extraction approach is needed. The purpose of this paper is to present and validate a fully automatic centerline extraction algorithm for coronary arteries in CCTA images. The algorithm is based on an improved version of Frangi's vesselness filter which removes unwanted step-edge responses at the boundaries of the cardiac chambers. Building upon this new vesselness filter, the coronary artery extraction pipeline extracts the centerlines of main branches as well as side-branches automatically. This algorithm was first evaluated with a standardized evaluation framework named Rotterdam Coronary Artery Algorithm Evaluation Framework used in the MICCAI Coronary Artery Tracking challenge 2008 (CAT08). It includes 128 reference centerlines which were manually delineated. The average overlap and accuracy measures of our method were 93.7% and 0.30 mm, respectively, which ranked at the 1st and 3rd place compared to five other automatic methods presented in the CAT08. Secondly, in 50 clinical datasets, a total of 100 reference centerlines were generated from lumen contours in the transversal planes which were manually corrected by an expert from the cardiology department. In this evaluation, the average overlap and accuracy were 96.1% and 0.33 mm, respectively. The entire processing time for one dataset is less than 2 min on a standard desktop computer. In conclusion, our newly developed automatic approach can extract coronary arteries in CCTA images with excellent performances in extraction ability and accuracy. PMID:21637981
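
    The pipeline's starting point, a vesselness map, can be illustrated with scikit-image's stock Frangi filter on a single slice. The paper's improved filter additionally suppresses step-edge responses at chamber boundaries, which is not reproduced here, and the file name and scales are placeholders.

```python
# A minimal vesselness-filtering illustration on one 2D slice.
import numpy as np
from skimage import io
from skimage.filters import frangi

slice2d = io.imread("ccta_slice.png", as_gray=True).astype(float)

# Bright tubular structures (contrast-filled arteries) on a darker background.
vesselness = frangi(slice2d, sigmas=range(1, 6), black_ridges=False)

seeds = vesselness > 0.1 * vesselness.max()   # candidate centerline seeds
print(int(seeds.sum()), "seed pixels")
io.imsave("vesselness.png", (255 * vesselness / vesselness.max()).astype(np.uint8))
```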

  1. Automatic Authorship Detection Using Textual Patterns Extracted from Integrated Syntactic Graphs

    Science.gov (United States)

    Gómez-Adorno, Helena; Sidorov, Grigori; Pinto, David; Vilariño, Darnes; Gelbukh, Alexander

    2016-01-01

    We apply the integrated syntactic graph feature extraction methodology to the task of automatic authorship detection. This graph-based representation allows integrating different levels of language description into a single structure. We extract textual patterns based on features obtained from shortest path walks over integrated syntactic graphs and apply them to determine the authors of documents. On average, our method outperforms the state of the art approaches and gives consistently high results across different corpora, unlike existing methods. Our results show that our textual patterns are useful for the task of authorship attribution. PMID:27589740

  2. Automatic extraction of forward stroke volume using dynamic PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik Stubkjær;

    2015-01-01

    Background The aim of this study was to develop and validate an automated method for extracting forward stroke volume (FSV) using indicator dilution theory directly from dynamic positron emission tomography (PET) studies for two different tracers and scanners. Methods 35 subjects underwent a dynamic 11C-acetate PET scan on a Siemens Biograph TruePoint-64 PET/CT (scanner I). In addition, 10 subjects underwent both dynamic 15O-water PET and 11C-acetate PET scans on a GE Discovery-ST PET/CT (scanner II). The left ventricular (LV)-aortic time-activity curve (TAC) was extracted automatically from PET data using cluster analysis. The first-pass peak was isolated by automatic extrapolation of the downslope of the TAC. FSV was calculated as the injected dose divided by the product of heart rate and the area under the curve of the first-pass peak. Gold standard FSV was measured using phase…

  3. Motion states extraction with optical flow for rat-robot automatic navigation.

    Science.gov (United States)

    Zhang, Xinlu; Sun, Chao; Zheng, Nenggan; Chen, Weidong; Zheng, Xiaoxiang

    2012-01-01

    The real-time acquisition of precise motion states is significant and difficult for bio-robot automatic navigation. In this paper, we propose a real-time video-tracking algorithm to extract the motion states of rat-robots in a complex environment using optical flow. The rat-robot's motion states, including location, speed and motion trend, are acquired accurately in real time. Compared with traditional methods based on single-frame images, our algorithm, using consecutive frames, provides more exact and richer motion information for the automatic navigation of bio-robots. Video of manual navigation experiments on rat-robots in an eight-arm maze was used to test this algorithm. The average computation time is 25.76 ms, which is shorter than the image acquisition interval. The results show that our method can extract the motion states with good accuracy and low time consumption.
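
    Dense optical flow gives the speed and motion-trend estimates directly. The sketch below reduces the paper's tracked-region handling to a whole-frame mean flow and assumes an OpenCV-readable video file.

```python
# A brief sketch of motion-state extraction with Farneback optical flow;
# "maze.avi" is a placeholder input.
import cv2
import numpy as np

cap = cv2.VideoCapture("maze.avi")
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    dx, dy = flow[..., 0].mean(), flow[..., 1].mean()
    speed = np.hypot(dx, dy)                   # pixels per frame
    heading = np.degrees(np.arctan2(dy, dx))   # motion trend
    print(f"speed={speed:.2f} px/frame, heading={heading:.1f} deg")
    prev_gray = gray
cap.release()
```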

  4. Extraction Methods, Variability Encountered in

    NARCIS (Netherlands)

    Bodelier, P.L.E.; Nelson, K.E.

    2014-01-01

    Synonyms Bias in DNA extractions methods; Variation in DNA extraction methods Definition The variability in extraction methods is defined as differences in quality and quantity of DNA observed using various extraction protocols, leading to differences in outcome of microbial community composition as

  5. Template Guided Live Wire and Its Application on Automatic Extraction of Tongue in Digital Image

    Institute of Scientific and Technical Information of China (English)

    ZHENG Yuan-jie; YANG Jie; ZHOU Yue

    2005-01-01

    In this paper, we propose a novel automatic object extraction algorithm, named Template Guided Live Wire, based on the popular live wire techniques. We discuss in detail the novel method's application to tongue extraction in digital images. With the guidance of a given template curve which approximates the tongue's shape, our method can complete the extraction of the tongue without any human intervention. In the paper, we also discuss in detail how the template guides the live wire, and why our method functions more effectively than other boundary-based segmentation methods, especially the snake algorithm. Experimental results on tongue images are provided as well, to show our method's better accuracy and robustness compared with the snake algorithm.

  6. EXIF Custom: Automatic image metadata extraction for Scratchpads and Drupal

    Directory of Open Access Journals (Sweden)

    Ed Baker

    2013-09-01

    Full Text Available Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions such as Adobe Bridge and online tools such as Flickr also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to manually re-enter this information if they have wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated fields in the Scratchpads image form using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads, and it is therefore possible to upload hundreds of images easily with automatic metadata (EXIF, XMP and IPTC) extraction and mapping.
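
    The module itself is Drupal (PHP), but the kind of embedded-metadata read it performs is easy to illustrate in Python with Pillow; the Scratchpads-style field names in the mapping are hypothetical.

```python
# A small illustration of reading EXIF tags and mapping them onto form
# fields; the "record" keys are invented for the example.
from PIL import Image
from PIL.ExifTags import TAGS

def read_exif(path):
    """Return EXIF metadata as a {tag_name: value} dict."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, str(tag_id)): value for tag_id, value in exif.items()}

meta = read_exif("specimen.jpg")
record = {
    "creator": meta.get("Artist"),
    "captured": meta.get("DateTime"),
    "copyright": meta.get("Copyright"),
}
print(record)
```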

  7. Automatic Metadata Extraction - The High Energy Physics Use Case

    CERN Document Server

    Boyd, Joseph; Rajman, Martin

    Automatic metadata extraction (AME) of scientific papers has been described as one of the hardest problems in document engineering. Heterogeneous content, varying style, and unpredictable placement of article components render the problem inherently indeterministic. Conditional random fields (CRF), a machine learning technique, can be used to classify document metadata amidst this uncertainty, annotating document contents with semantic labels. High energy physics (HEP) papers, such as those written at CERN, have unique content and structural characteristics, with scientific collaborations of thousands of authors altering article layouts dramatically. The distinctive qualities of these papers necessitate the creation of specialised datasets and model features. In this work we build an unprecedented training set of HEP papers and propose and evaluate a set of innovative features for CRF models. We build upon state-of-the-art AME software, GROBID, a tool coordinating a hierarchy of CRF models in a full document ...

  8. Image-based mobile service: automatic text extraction and translation

    Science.gov (United States)

    Berclaz, Jérôme; Bhatti, Nina; Simske, Steven J.; Schettino, John C.

    2010-01-01

    We present a new mobile service for the translation of text from images taken by consumer-grade cell-phone cameras. Such capability represents a new paradigm for users where a simple image provides the basis for a service. The ubiquity and ease of use of cell-phone cameras enables acquisition and transmission of images anywhere and at any time a user wishes, delivering rapid and accurate translation over the phone's MMS and SMS facilities. Target text is extracted completely automatically, requiring no bounding box delineation or related user intervention. The service uses localization, binarization, text deskewing, and optical character recognition (OCR) in its analysis. Once the text is translated, an SMS message is sent to the user with the result. Further novelties include that no software installation is required on the handset, any service provider or camera phone can be used, and the entire service is implemented on the server side.

  9. Definition extraction for glossary creation : a study on extracting definitions for semi-automatic glossary creation in Dutch

    NARCIS (Netherlands)

    Westerhout, E.N.

    2010-01-01

    The central topic of this thesis is the automatic extraction of definitions from text. Definition extraction can play a role in various applications including the semi-automatic development of glossaries in an eLearning context, which constitutes the main focus of this dissertation. A glossary provi

  10. Automatic Road Extraction Based on Integration of High Resolution LIDAR and Aerial Imagery

    Science.gov (United States)

    Rahimi, S.; Arefi, H.; Bahmanyar, R.

    2015-12-01

    In recent years, the rapid increase in the demand for road information, together with the availability of large volumes of high resolution Earth Observation (EO) images, has drawn remarkable interest to the use of EO images for road extraction. Among the proposed methods, the unsupervised fully-automatic ones are more efficient since they do not require human effort. In the proposed methods, the focus is usually on improving road network detection, while the roads' precise delineation has received less attention. In this paper, we propose a new unsupervised fully-automatic road extraction method, based on the integration of high resolution LiDAR and aerial images of a scene using Principal Component Analysis (PCA). This method discriminates the existing roads in a scene and then precisely delineates them. The Hough transform is then applied to the integrated information to extract straight lines, which are further used to segment the scene and discriminate the existing roads. The roads' edges are then precisely localized using a projection-based technique, and the round corners are further refined. Experimental results demonstrate that our proposed method extracts and delineates the roads with high accuracy.
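
    The fusion-then-lines idea condenses to a short script: stack the co-registered channels, keep the first principal component, and run a line detector on it. The sketch below assumes equal-shape aerial-intensity and LiDAR-height rasters; all thresholds are placeholders, and the paper's projection-based edge localization is omitted.

```python
# A condensed PCA-fusion plus Hough-lines sketch; input rasters are assumed
# co-registered and of identical shape.
import cv2
import numpy as np
from sklearn.decomposition import PCA

aerial = np.load("aerial_gray.npy").astype(float)    # (H, W)
height = np.load("lidar_height.npy").astype(float)   # (H, W)

stack = np.stack([aerial.ravel(), height.ravel()], axis=1)
stack = (stack - stack.mean(0)) / stack.std(0)        # normalize channels
pc1 = PCA(n_components=1).fit_transform(stack).reshape(aerial.shape)

img8 = cv2.normalize(pc1, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
edges = cv2.Canny(img8, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=100,
                        minLineLength=100, maxLineGap=20)
print(0 if lines is None else len(lines), "candidate road segments")
```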

  11. Automatic Key-Frame Extraction from Optical Motion Capture Data

    Institute of Scientific and Technical Information of China (English)

    ZHANG Qiang; YU Shao-pei; ZHOU Dong-sheng; WEI Xiao-peng

    2013-01-01

    Optical motion capture is an increasingly popular animation technique. In the last few years, plenty of methods have been proposed for key-frame extraction from motion capture data, and extracting key frames using quaternions is a common approach. One main difficulty is that previous algorithms often need various parameters to be set manually. In addition, it is problematic to predefine an appropriate threshold without knowing the data content. In this paper, we present a novel adaptive threshold-based extraction method, in which key frames are found according to quaternion distance. We propose a simple and efficient algorithm to extract key frames from a motion sequence based on an adaptive threshold. It is convenient, with no need to predefine parameters to meet a certain compression ratio. Experimental results on many motion capture sequences with different characteristics demonstrate the good performance of the proposed algorithm. Our experiments show that one can typically cut down the extraction process from several minutes to a couple of seconds.
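
    The core of the approach, quaternion distance against a threshold derived from the data itself, fits in a few lines. The sketch below uses one synthetic unit quaternion per frame for a single joint; real motion data would aggregate distances over all joints, and the mean-plus-std threshold is an illustrative choice.

```python
# A minimal adaptive-threshold key-frame selection sketch on synthetic
# unit quaternions.
import numpy as np

def quat_dist(q1, q2):
    """Distance between unit quaternions; 0 means identical rotation."""
    return 1.0 - abs(np.dot(q1, q2))

rng = np.random.default_rng(1)
frames = rng.normal(size=(200, 4))
frames /= np.linalg.norm(frames, axis=1, keepdims=True)  # synthetic poses

# Derive the threshold from the data instead of asking the user for it.
step = np.array([quat_dist(frames[i], frames[i + 1]) for i in range(len(frames) - 1)])
threshold = step.mean() + step.std()

keyframes, last = [0], 0
for i in range(1, len(frames)):
    if quat_dist(frames[last], frames[i]) > threshold:
        keyframes.append(i)
        last = i
print(len(keyframes), "key frames selected")
```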

  12. Automatic extraction of forward stroke volume using dynamic PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik;

    Background: Dynamic PET can be used to extract forward stroke volume (FSV) by the indicator dilution principle. The technique employed can be automated, is in theory independent of the tracer used, and may therefore be added to any dynamic cardiac PET protocol. The aim of this study was to validate automated methods for extracting FSV directly from dynamic PET studies for two different tracers and to examine potential scanner hardware bias. Methods: 21 subjects underwent a dynamic 27 min 11C-acetate PET scan on a Siemens Biograph TruePoint 64 PET/CT scanner (scanner I). In addition, 8 subjects underwent a dynamic 6 min 15O-water PET scan followed by a 27 min 11C-acetate PET scan on a GE Discovery ST PET/CT scanner (scanner II). The LV-aortic time-activity curve (TAC) was extracted automatically from dynamic PET data using cluster analysis. The first-pass peak was isolated by automatic…

  13. Automatic Segmentation of Raw LIDAR Data for Extraction of Building Roofs

    Directory of Open Access Journals (Sweden)

    Mohammad Awrangjeb

    2014-04-01

    Full Text Available Automatic extraction of building roofs from remote sensing data is important for many applications, including 3D city modeling. This paper proposes a new method for automatic segmentation of raw LIDAR (light detection and ranging) data. Using the ground height from a DEM (digital elevation model), the raw LIDAR points are separated into two groups. The first group contains the ground points that form a “building mask”. The second group contains non-ground points that are clustered using the building mask. A cluster of points usually represents an individual building or tree. During segmentation, the planar roof segments are extracted from each cluster of points and refined using rules, such as the coplanarity of points and their locality. Planes on trees are removed using information such as area and point height difference. Experimental results on nine areas of six different data sets show that the proposed method can successfully remove vegetation and, so, offers a high success rate for building detection (about 90% correctness and completeness) and roof plane extraction (about 80% correctness and completeness), when LIDAR point density is as low as four points/m2. Thus, the proposed method can be exploited in various applications.

  14. Method of information extraction of marbling image characteristic and automatic classification for beef%牛肉大理石花纹图像特征信息提取及自动分级方法

    Institute of Scientific and Technical Information of China (English)

    周彤; 彭彦昆

    2013-01-01

    …Light intensity was regulated through a light controller, and the distance between the camera lens and the beef samples was adjusted through translation stages in the image acquisition device. Collected images were automatically stored in the computer for further image processing. First, methods such as image denoising, background removal, and image enhancement were adopted to preprocess the image and obtain a region of interest (ROI). In this step, the image was cropped to separate the beef from the background. Then, an iteration method was used to segment the beef area and obtain the beef marbling and fat areas. The redundant fat area was removed to extract an effective rib-eye region. Ten characteristic parameters of beef marbling, namely the rate of marbling area in the rib-eye region; the numbers of large, medium, small, and total grain fat; the densities of large, medium, small, and total grain fat; and the evenness degree of fat distribution in the rib-eye region, can reflect the amount of marbling and its distribution, so they were used to establish a principal component regression (PCR) model. The PCR model yielded a correlation coefficient (Rv) of 0.88 and a standard error of prediction (SEP) of 0.56, and showed that the rate of marbling area in the rib-eye region had the greatest effect on the grade of beef marbling. Fisher discriminant functions were constructed based on the PCR model results to classify the grade of beef marbling. Experimental results showed that the classification accuracy was 97.0% in the calibration set and 91.2% in the prediction set. On this basis, a software system was developed for the automatic grading of beef marbling. A corresponding hardware device was also developed, controlled by the software system for real-time application. The speed and accuracy of the algorithm were verified with theoretical analysis and a practical test. Through tests, the average…

  15. A General Method for Module Automatic Testing in Avionics Systems

    Directory of Open Access Journals (Sweden)

    Li Ma

    2013-05-01

    The traditional Automatic Test Equipment (ATE) systems are insufficient to cope with the challenges of testing more and more complex avionics systems. In this study, we propose a general method for module automatic testing in an avionics test platform based on the PXI bus. We apply virtual instrument technology to realize automatic testing and fault reporting of signal performance. Taking the avionics bus ARINC429 as an example, we introduce the architecture of the automatic test system as well as the implementation of the algorithms in LabVIEW. Comprehensive experiments show that the proposed method can effectively accomplish automatic testing and fault reporting of signal performance, greatly improving the generality and reliability of ATE in avionics systems.

  16. An automatic and fast centerline extraction algorithm for virtual colonoscopy.

    Science.gov (United States)

    Jiang, Guangxiang; Gu, Lixu

    2005-01-01

    This paper introduces a new refined centerline extraction algorithm, which is based on and significantly improved from distance mapping algorithms. The new approach includes three major parts: employing a colon segmentation method, designing and realizing a fast Euclidean transform algorithm, and introducing a boundary voxels cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the overall performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm is more efficient and accurate compared with existing algorithms. PMID:17281406

  17. A method of automatic control procedures cardiopulmonary resuscitation

    Science.gov (United States)

    Bureev, A. Sh.; Zhdanov, D. S.; Kiseleva, E. Yu.; Kutsov, M. S.; Trifonov, A. Yu.

    2015-11-01

    This study presents the results of work on the creation of methods for the automatic control of cardiopulmonary resuscitation (CPR) procedures. A method for automatically controlling a CPR procedure by evaluating acoustic data on the dynamics of blood flow in the bifurcation of the carotid arteries and the dynamics of air flow in the trachea, according to current CPR guidelines, is presented. The patient is evaluated by analyzing respiratory noise and blood flow in the intervals between chest compressions and artificial pulmonary ventilation. The operating algorithm of a device for the automatic control of CPR procedures and its block diagram have been developed.

  18. Automatic cell object extraction of red tide algae in microscopic images

    Science.gov (United States)

    Yu, Kun; Ji, Guangrong; Zheng, Haiyong

    2016-05-01

    Extracting the cell objects of red tide algae is the most important step in the construction of an automatic microscopic image recognition system for harmful algal blooms. This paper describes a set of composite methods for the automatic segmentation of cells of red tide algae from microscopic images. Depending on the existence of setae, we classify the common marine red tide algae into non-setae algae species and Chaetoceros, and design segmentation strategies for these two categories according to their morphological characteristics. In view of the varied forms and fuzzy edges of non-setae algae, we propose a new multi-scale detection algorithm for algal cell regions based on border-correlation, and further combine this with morphological operations and an improved GrabCut algorithm to segment single-cell and multicell objects. In this process, similarity detection is introduced to eliminate the pseudo cellular regions. For Chaetoceros, owing to the weak grayscale information of their setae and the low contrast between the setae and background, we propose a cell extraction method based on a gray surface orientation angle model. This method constructs a gray surface vector model, and executes the gray mapping of the orientation angles. The obtained gray values are then reconstructed and linearly stretched. Finally, appropriate morphological processing is conducted to preserve the orientation information and tiny features of the setae. Experimental results demonstrate that the proposed methods can effectively remove noise and accurately extract both categories of algae cell objects possessing a complete shape, regular contour, and clear edge. Compared with other advanced segmentation techniques, our methods are more robust when considering images with different appearances and achieve more satisfactory segmentation effects.
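
    The paper's improved GrabCut variant is not reproduced here; as a rough Python sketch of the refinement step, OpenCV's stock cv2.grabCut can be initialized with a rectangle around a detected cell region (the rectangle input and iteration count are assumptions):

        import cv2
        import numpy as np

        def segment_cell(image_bgr, rect):
            """Binary mask of the object inside rect = (x, y, w, h) via stock GrabCut."""
            mask = np.zeros(image_bgr.shape[:2], np.uint8)
            bgd = np.zeros((1, 65), np.float64)   # background GMM state
            fgd = np.zeros((1, 65), np.float64)   # foreground GMM state
            cv2.grabCut(image_bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
            fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
            return fg.astype(np.uint8) * 255      # sure/probable foreground = cell object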

  19. Automatic extraction of property norm-like data from large text corpora.

    Science.gov (United States)

    Kelly, Colin; Devereux, Barry; Korhonen, Anna

    2014-01-01

    Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car--petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, yielding concept-relation-feature triples (e.g., car be fast, car require petrol, car cause pollution), which approximate property-based conceptual representations. Our novel method extracts candidate triples from parsed corpora (Wikipedia and the British National Corpus) using syntactically and grammatically motivated rules, then reweights triples with a linear combination of their frequency and four statistical metrics. We assess our system output in three ways: lexical comparison with norms derived from human-generated property norm data, direct evaluation by four human judges, and a semantic distance comparison with both WordNet similarity data and human-judged concept similarity ratings. Our system offers a viable and performant method of plausible triple extraction: Our lexical comparison shows comparable performance to the current state-of-the-art, while subsequent evaluations exhibit the human-like character of our generated properties. PMID:25019134

  20. Automatic Extraction of Tide-Coordinated Shoreline Using Open Source Software and Landsat Imagery

    Science.gov (United States)

    Goncalves, G.; Duro, N.; Sousa, E.; Figueiredo, I.

    2015-04-01

    Due to both natural and anthropogenic causes, coastlines keep changing their shape, position, and extent dynamically and continuously over time. In this paper we propose an approach to derive a tide-coordinated shoreline from two extracted instantaneous shorelines corresponding to nearly low-tide and high-tide events. First, all the multispectral images are pansharpened to meet the 15-meter spatial resolution of the panchromatic images. Second, using the Modified Normalized Difference Water Index (MNDWI) and the k-means clustering method, we extract the raster shoreline for each image acquisition time. Third, each raster shoreline is smoothed and vectorized using a penalized least squares method. Fourth, a 2D constrained Delaunay triangulation is built from the two extracted instantaneous shorelines, with their respective heights interpolated from a tidal gauge station. Finally, the desired tide-coordinated shoreline is interpolated from the resulting triangular intertidal surface. The results show that an automatic tide-coordinated extraction method can be efficiently implemented using freely available remote sensing imagery (Landsat 8), open source software (QGIS and the Orfeo Toolbox), and Python scripting for task automation and software integration.
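
    A condensed Python sketch of the index-plus-clustering step described above; the band arrays, the small epsilon guard, and the two-cluster choice are assumptions consistent with the description:

        import numpy as np
        from sklearn.cluster import KMeans

        def water_mask(green, swir):
            """MNDWI followed by 2-class k-means; returns a boolean water mask."""
            mndwi = (green - swir) / (green + swir + 1e-9)   # guard against zero division
            labels = KMeans(n_clusters=2, n_init=10).fit_predict(mndwi.reshape(-1, 1))
            labels = labels.reshape(mndwi.shape)
            # The cluster with the higher mean MNDWI is water; its boundary with
            # the other cluster is the raster shoreline.
            water = max((0, 1), key=lambda k: mndwi[labels == k].mean())
            return labels == water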

  1. Template-based automatic extraction of the joint space of foot bones from CT scan

    Science.gov (United States)

    Park, Eunbi; Kim, Taeho; Park, Jinah

    2016-03-01

    Clean bone segmentation is critical in studying the joint anatomy for measuring the spacing between the bones. However, separation of coupled bones in CT images is sometimes difficult due to ambiguous gray values coming from the noise and the heterogeneity of bone materials, as well as narrowing of the joint space. For fine reconstruction of the individual local boundaries, manual operation is common practice, and segmentation remains a bottleneck. In this paper, we present an automatic method for extracting the joint space by applying graph cut on a Markov random field model to the region of interest (ROI), which is identified by a template of 3D bone structures. The template includes an encoded articular surface which identifies the tight region of the high-intensity bone boundaries together with the fuzzy joint area of interest. The localized shape information from the template model within the ROI effectively separates the nearby bones. By narrowing the ROI down to a region containing two types of tissue, the object extraction problem was reduced to binary segmentation and solved via graph cut. Based on the shape of the joint space marked by the template, the hard constraint was set by initial seeds which were automatically generated from thresholding and morphological operations. The performance and robustness of the proposed method are evaluated on 12 volumes of ankle CT data, where each volume includes a set of 4 tarsal bones (calcaneus, talus, navicular and cuboid).

  2. Research of x-ray automatic image mosaic method

    Science.gov (United States)

    Liu, Bin; Chen, Shunan; Guo, Lianpeng; Xu, Wanpeng

    2013-10-01

    Image mosaicking has wide application value in medical image analysis; it spatially matches a series of mutually overlapping images and builds a seamless, high-quality image with high resolution and a large field of view. In this paper, grayscale-cutting pseudo-color enhancement was first used to complete the mapping transformation from gray to pseudo-color and to extract SIFT features from the images. Then, using the normalized cross-correlation (NCC) similarity measure, the RANSAC (Random Sample Consensus) method was used to exclude falsely matched feature points and complete the exact matching of feature points. Finally, seamless mosaicking and color fusion were completed using wavelet multi-decomposition. Experiments show that this method can effectively improve the precision and automation of medical image mosaicking, and provides an effective technical approach for automatic medical image mosaicking.

  3. Evaluation of DNA and RNA extraction methods.

    Science.gov (United States)

    Edwin Shiaw, C S; Shiran, M S; Cheah, Y K; Tan, G C; Sabariah, A R

    2010-06-01

    This study evaluated various DNA and RNA extraction methods for archival FFPE tissues. A total of 30 FFPE blocks from 2004 to 2006 were assessed with each modified and adapted method. The extraction protocols evaluated include a modified enzymatic extraction method (Method A), the Chelex-100 extraction method (Method B), heat-induced retrieval in alkaline solution extraction methods (Methods C and D) and one commercial FFPE DNA extraction kit (Qiagen, Crawley, UK). For RNA extraction, 2 protocols were evaluated, including an enzymatic extraction method (Method 1) and a Chelex-100 RNA extraction method (Method 2). Results show that the modified enzymatic extraction method (Method A) is an efficient DNA extraction protocol, while for RNA extraction, the enzymatic method (Method 1) and the Chelex-100 RNA extraction method (Method 2) are equally efficient RNA extraction protocols.

  4. Automatic Classification of Marine Mammals with Speaker Classification Methods.

    Science.gov (United States)

    Kreimeyer, Roman; Ludwig, Stefan

    2016-01-01

    We present an automatic acoustic classifier for marine mammals based on human speaker classification methods as an element of a passive acoustic monitoring (PAM) tool. This work is part of the Protection of Marine Mammals (PoMM) project under the framework of the European Defense Agency (EDA) and joined by the Research Department for Underwater Acoustics and Geophysics (FWG), Bundeswehr Technical Centre (WTD 71) and Kiel University. The automatic classification should support sonar operators in the risk mitigation process before and during sonar exercises with a reliable automatic classification result.

  5. A semi-automatic method for ontology mapping

    OpenAIRE

    PEREZ, Laura Haide; Cechich, Alejandra; Buccella, Agustina

    2007-01-01

    Ontology mapping involves the task of finding similarities among overlapping sources by using ontologies. In a Federated System in which distributed, autonomous and heterogeneous information sources must be integrated, ontologies have emerged as tools to solve semantic heterogeneity problems. In this paper we propose a three-level approach that provides a semi-automatic method to ontology mapping. It performs some tasks automatically and guides the user in performing other tasks for which ...

  6. Automatic layout feature extraction for lithography hotspot detection based on deep neural network

    Science.gov (United States)

    Matsunawa, Tetsuaki; Nojima, Shigeki; Kotani, Toshiya

    2016-03-01

    Lithography hotspot detection in the physical verification phase is one of the most important techniques in today's optical lithography based manufacturing process. Although lithography simulation based hotspot detection is widely used, it is also known to be time-consuming. To detect hotspots in a short runtime, several machine learning based methods have been proposed. However, it is difficult to realize highly accurate detection without an increase in false alarms because an appropriate layout feature is undefined. This paper proposes a new method to automatically extract a proper layout feature from a given layout for improvement in detection performance of machine learning based methods. Experimental results show that using a deep neural network can achieve better performance than other frameworks using manually selected layout features and detection algorithms, such as conventional logistic regression or artificial neural network.

  7. Statistical Analysis of Automatic Seed Word Acquisition to Improve Harmful Expression Extraction in Cyberbullying Detection

    Directory of Open Access Journals (Sweden)

    Suzuha Hatakeyama

    2016-04-01

    We study the social problem of cyberbullying, defined as a new form of bullying that takes place in the Internet space. This paper proposes a method for the automatic acquisition of seed words to improve the performance of the original cyberbullying detection method by Nitta et al. [1]. We conduct an experiment in exactly the same settings and find that the method, based on a Web mining technique, has lost over 30 percentage points of its performance since being proposed in 2013. We therefore hypothesize on the reasons for the decrease in performance and propose a number of improvements, from which we experimentally choose the best one. Furthermore, we collect several seed word sets using different approaches and evaluate their precision. We find that the influential factor in the extraction of harmful expressions is not the number of seed words, but the way the seed words were collected and filtered.

  8. Automatic Extraction and Regularization of Building Outlines from Airborne LIDAR Point Clouds

    Science.gov (United States)

    Albers, Bastian; Kada, Martin; Wichmann, Andreas

    2016-06-01

    Building outlines are needed for various applications like urban planning, 3D city modelling and cadastre updating. Their automatic reconstruction, e.g. from airborne laser scanning data, as regularized shapes is therefore of high relevance. Today's airborne laser scanning technology can produce dense 3D point clouds with high accuracy, which makes it an eligible data source for reconstructing 2D building outlines or even 3D building models. In this paper, we propose an automatic building outline extraction and regularization method that implements a trade-off between enforcing strict shape restrictions and allowing flexible angles, using an energy minimization approach. The proposed procedure can be summarized for each building as follows: (1) an initial building outline is created from a given set of building points with the alpha shape algorithm; (2) a Hough transform is used to determine the main directions of the building and to extract line segments which are oriented accordingly; (3) the alpha shape boundary points are then repositioned to both follow these segments and respect their original location, favoring long line segments and certain angles. The energy function that guides this trade-off is evaluated with the Viterbi algorithm.

  9. Image Processing Method for Automatic Discrimination of Hoverfly Species

    Directory of Open Access Journals (Sweden)

    Vladimir Crnojević

    2014-01-01

    An approach to automatic hoverfly species discrimination based on the detection and extraction of vein junctions in the wing venation patterns of insects is presented in this paper. The dataset used in our experiments consists of high-resolution microscopic wing images of several hoverfly species collected over a relatively long period of time at different geographic locations. Junctions are detected using a combination of the well-known HOG (histograms of oriented gradients) and a robust version of the recently proposed CLBP (complete local binary pattern). These features are used to train an SVM classifier to detect junctions in wing images. Once the junctions are identified, they are used to extract statistics characterizing the constellations of these points. Such simple features can be used to automatically discriminate four selected hoverfly species with a polynomial-kernel SVM and achieve high classification accuracy.
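
    A hedged Python sketch of the junction-detector training step, using scikit-image and scikit-learn; the paper combines HOG with a robust CLBP variant, while this sketch uses HOG alone, and the patch size, labels, and SVM settings are assumptions:

        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import SVC

        def train_junction_classifier(patches, labels):
            """patches: equally sized grayscale patches; labels: 1 = vein junction, 0 = background."""
            feats = [hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches]
            clf = SVC(kernel="poly", degree=3)   # polynomial kernel, as in the species stage
            clf.fit(np.array(feats), np.array(labels))
            return clf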

  10. Data mining of geospatial data: combining visual and automatic methods

    OpenAIRE

    Demšar, Urška

    2006-01-01

    Most of the largest databases currently available have a strong geospatial component and contain potentially valuable information. The discipline concerned with extracting this information and knowledge is data mining. Knowledge discovery is performed by applying automatic algorithms which recognise patterns in the data. Classical data mining algorithms assume that data are independently generated and identically distributed. Geospatial data are multidimensional, spatial...

  11. Feature-point-extracting-based automatically mosaic for composite microscopic images

    Institute of Scientific and Technical Information of China (English)

    YIN YanSheng; ZHAO XiuYang; TIAN XiaoFeng; LI Jia

    2007-01-01

    Image mosaicking is a crucial step in the three-dimensional reconstruction of composite materials for aligning serial images. A novel method is adopted to mosaic two SiC/Al microscopic images at a magnification of 1000. The two images are denoised with a Gaussian model, and feature points are then extracted using the Harris corner detector. The feature points are filtered through a Canny edge detector. A 40x40 feature template is chosen by sowing a seed in an overlapped area of the reference image, and the homologous region in the floating image is acquired automatically by correlation analysis. The feature points in the matched templates are used as feature point sets. Using the transformation parameters acquired by the SVD-ICP method, the two images are transformed into universal coordinates and merged into the final mosaic image.

  12. Automatic extraction of gene ontology annotation and its correlation with clusters in protein networks

    Directory of Open Access Journals (Sweden)

    Mazo Ilya

    2007-07-01

    Background: Uncovering the cellular roles of a protein is a task of tremendous importance and complexity that requires dedicated experimental work as well as often sophisticated data mining and processing tools. Protein functions, often referred to as its annotations, are believed to manifest themselves through the topology of the networks of inter-protein interactions. In particular, there is a growing body of evidence that proteins performing the same function are more likely to interact with each other than with proteins with other functions. However, since functional annotation and protein network topology are often studied separately, the direct relationship between them has not been comprehensively demonstrated. In addition to its general biological significance, such a demonstration would further validate the data extraction and processing methods used to compose protein annotation and protein-protein interaction datasets. Results: We developed a method for the automatic extraction of protein functional annotation from scientific text based on Natural Language Processing (NLP) technology. For the protein annotation extracted from the entire PubMed, we evaluated the precision and recall rates, and compared the performance of the automatic extraction technology to that of the manual curation used in public Gene Ontology (GO) annotation. In the second part of our presentation, we report a large-scale investigation into the correspondence between communities in literature-based protein networks and GO annotation groups of functionally related proteins. We found a comprehensive two-way match: proteins within biological annotation groups form significantly denser linked network clusters than expected by chance and, conversely, densely linked network communities exhibit a pronounced non-random overlap with GO groups. We also expanded the publicly available GO biological process annotation using the relations extracted by our NLP technology

  13. Sensitive, automatic method for the determination of diazepam and its five metabolites in human oral fluid by online solid-phase extraction and liquid chromatography with tandem mass spectrometry

    DEFF Research Database (Denmark)

    Jiang, Fengli; Rao, Yulan; Wang, Rong;

    2016-01-01

    A novel and simple online solid-phase extraction liquid chromatography-tandem mass spectrometry method was developed and validated for the simultaneous determination of diazepam and its five metabolites, including nordazepam, oxazepam, temazepam, oxazepam glucuronide, and temazepam glucuronide, in human oral fluid. Human oral fluid was obtained using the Salivette(®) collection device, and 100 μL of oral fluid samples were loaded onto a HySphere Resin GP cartridge for extraction. Analytes were separated on a Waters Xterra C18 column and quantified by liquid chromatography with tandem mass spectrometry.

  14. Automatic extraction of insulators from 3D LiDAR data of an electrical substation

    Science.gov (United States)

    Arastounia, M.; Lichti, D. D.

    2013-10-01

    A considerable percentage of power outages are caused by animals that come into contact with conductive elements of electrical substations. These can be prevented by insulating conductive electrical objects, for which a 3D as-built plan of the substation is crucial. This research aims to create such a 3D as-built plan using terrestrial LiDAR data; the aim in this paper is to extract insulators, which are key objects in electrical substations. This paper proposes a segmentation method based on a new approach to finding the principal direction of the points' distribution. This is done by forming and analysing the distribution matrix, whose elements are the range of points in 9 different directions in 3D space. Comparison of the computational performance of our method with PCA (principal component analysis) shows that our approach is 25% faster, since it utilizes zero-order moments while PCA computes the first- and second-order moments, which is more time-consuming. A knowledge-based approach has been developed to automatically recognize points on insulators. The method utilizes known insulator properties such as diameter and the number and spacing of their rings. The results achieved indicate that 24 out of 27 insulators could be recognized, while the 3 unrecognized ones were highly occluded. Check-point analysis was performed by manually cropping all points on insulators; its results show that the accuracy, precision and recall of insulator recognition are 98%, 86% and 81%, respectively. It is concluded that automatic object extraction from electrical substations using only LiDAR data is not only possible but also promising. Moreover, our approach to determining the directional distribution of points is computationally more efficient for the segmentation of objects in electrical substations than PCA. Finally, our knowledge-based method is promising for recognizing points on electrical objects, as it was successfully applied for

  15. Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video

    Science.gov (United States)

    Yeo, Boon-Lock; Liu, Bede

    1996-03-01

    Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.

  16. Automatic Extraction of Optimal Endmembers from Airborne Hyperspectral Imagery Using Iterative Error Analysis (IEA) and Spectral Discrimination Measurements

    Directory of Open Access Journals (Sweden)

    Ahram Song

    2015-01-01

    Pure surface materials, denoted endmembers, play an important role in hyperspectral processing in various fields. Many endmember extraction algorithms (EEAs) have been proposed to find appropriate endmember sets. Most studies involving the automatic extraction of appropriate endmembers without a priori information have focused on N-FINDR. Although there are many different versions of N-FINDR algorithms, computational complexity issues remain, and these algorithms cannot handle the case where spectrally mixed materials are extracted as final endmembers. A sequential endmember extraction-based algorithm may be more effective when the number of endmembers to be extracted is unknown. In this study, we propose a simple but accurate method to automatically determine the optimal endmembers using such an approach. The proposed method consists of three steps for determining the proper number of endmembers and for removing endmembers that are repeated or contain mixed signatures, using the Root Mean Square Error (RMSE) images obtained from Iterative Error Analysis (IEA) and spectral discrimination measurements. A synthetic hyperspectral image and two different airborne images, Airborne Imaging Spectrometer for Application (AISA) and Compact Airborne Spectrographic Imager (CASI) data, were tested using the proposed method, and our experimental results indicate that the final endmember set contained all of the distinct signatures without redundant endmembers or errors from mixed materials.

  17. Comparison of mentha extracts obtained by different extraction methods

    OpenAIRE

    Milić Slavica; Lepojević Žika; Adamović Dušan; Mujić Ibrahim; Zeković Zoran

    2006-01-01

    Different methods of mentha extraction, such as steam distillation, extraction by methylene chloride (Soxhlet extraction) and supercritical fluid extraction (SFE) by carbon dioxide (CO2), were investigated. SFE by CO2 was performed at a pressure of 100 bar and a temperature of 40°C. The extraction yield, as well as the qualitative and quantitative composition of the obtained extracts, determined by the GC-MS method, were compared.

  18. Automatic extraction of semantic relations between medical entities: a rule based approach

    Directory of Open Access Journals (Sweden)

    Ben Abacha Asma

    2011-10-01

    Background: Information extraction is a complex task which is necessary to develop high-precision information retrieval tools. In this paper, we present the platform MeTAE (Medical Texts Annotation and Exploration). MeTAE allows (i) extracting and annotating medical entities and relationships from medical texts and (ii) exploring the produced RDF annotations semantically. Results: Our annotation approach relies on linguistic patterns and domain knowledge and consists of two steps: (i) recognition of medical entities and (ii) identification of the correct semantic relation between each pair of entities. The first step is achieved by an enhanced use of MetaMap, which improves the precision obtained by MetaMap by 19.59% in our evaluation. The second step relies on linguistic patterns which are built semi-automatically from a corpus selected according to semantic criteria. We evaluate our system's ability to identify medical entities of 16 types. We also evaluate the extraction of treatment relations between a treatment (e.g., medication) and a problem (e.g., disease): we obtain 75.72% precision and 60.46% recall. Conclusions: According to our experiments, using an external sentence segmenter and noun phrase chunker may improve the precision of MetaMap-based medical entity recognition. Our pattern-based relation extraction method obtains good precision and recall with respect to related works. A more precise comparison with related approaches remains difficult, however, given the differences in corpora and in the exact nature of the extracted relations. The selection of MEDLINE articles through queries related to known drug-disease pairs enabled us to obtain a more focused corpus of relevant examples of treatment relations than a more general MEDLINE query.

  19. An Automatic High Efficient Method for Dish Concentrator Alignment

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2014-01-01

    for the alignment of a faceted solar dish concentrator. The isosceles-triangle configuration of each facet's footholds determines a fixed relation between light-spot displacements and foothold movements, which allows automatic determination of the amount of adjustment. Tests on a 25 kW Stirling Energy System dish concentrator verify the feasibility, accuracy, and efficiency of our method.

  20. The Automatic Start Method of Application Program Using API

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper introduces a method for the automatic start of application programs. By defining Registry entries through API functions, the automatic start of a specified application program is accomplished when Windows 98 starts up. This provides convenience for many computer application tasks.
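
    A modern equivalent of the described technique, sketched in Python with the standard winreg module instead of the original Win32 API calls (Windows-only; the value name and program path below are hypothetical):

        import winreg

        def register_autostart(name, exe_path):
            """Add a value under the per-user Run key so exe_path starts at logon."""
            run_key = r"Software\Microsoft\Windows\CurrentVersion\Run"
            with winreg.OpenKey(winreg.HKEY_CURRENT_USER, run_key, 0,
                                winreg.KEY_SET_VALUE) as key:
                winreg.SetValueEx(key, name, 0, winreg.REG_SZ, exe_path)

        # register_autostart("MyApp", r"C:\Program Files\MyApp\myapp.exe")  # hypothetical path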

  1. An efficient method for parallel CRC automatic generation

    Institute of Scientific and Technical Information of China (English)

    陈红胜; 张继承; 王勇; 陈抗生

    2003-01-01

    A State Transition Equation (STE) based method to automatically generate parallel CRC circuits for any generator polynomial or required amount of parallelism is presented. The parallel CRC circuit so generated is partially optimized before being fed to synthesis tools and works properly in our LAN transceiver. Compared with the cascading method, the proposed method gives better timing results and, in particular, significantly reduces the synthesis time.
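
    One plausible way to realize the state-transition idea in software (a Python sketch, not the paper's exact HDL flow): because the CRC step is linear over GF(2), simulating the serial LFSR once per basis vector recovers matrices M and N such that next_state = M*state XOR N*input_block. CRC-8 with polynomial 0x07 and 8-bit parallelism are illustrative choices:

        def serial_crc_step(state, bit, poly=0x07, n=8):
            """One serial (MSB-first) LFSR step of an n-bit CRC."""
            fb = ((state >> (n - 1)) & 1) ^ bit
            state = (state << 1) & ((1 << n) - 1)
            return state ^ (poly if fb else 0)

        def run_bits(state, bits, poly=0x07, n=8):
            for b in bits:
                state = serial_crc_step(state, b, poly, n)
            return state

        def derive_matrices(w=8, n=8, poly=0x07):
            # Column j of M: effect of state basis vector 2^j after w zero input bits.
            M = [run_bits(1 << j, [0] * w, poly, n) for j in range(n)]
            # Column j of N: effect of a single 1 at input position j, from zero state.
            N = [run_bits(0, [int(i == j) for i in range(w)], poly, n) for j in range(w)]
            return M, N

        def parallel_crc_step(state, bits, M, N):
            """Process w input bits at once; equals w serial steps by GF(2) linearity."""
            out = 0
            for j, col in enumerate(M):
                if (state >> j) & 1:
                    out ^= col
            for b, col in zip(bits, N):
                if b:
                    out ^= col
            return out

    Each output bit of parallel_crc_step is a fixed XOR of selected state and input bits, i.e. exactly one generated combinational equation in the eventual circuit.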

  2. A fast and automatic mosaic method for high-resolution satellite images

    Science.gov (United States)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

    We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed according to the geographical locations of the reference and mosaic images, and feature points are extracted by a scale-invariant feature transform (SIFT) algorithm only from the overlapped region. Then, the RANSAC method is used to match the feature points of both images. Finally, the two images are fused into a seamless panoramic image by a simple linear weighted fusion method or another method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on Worldview-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
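
    A hedged sketch of the matching core in Python with OpenCV (the described implementation is C++); overlap-window cropping, geo-referencing via GDAL, and the final weighted fusion are omitted, and the ratio-test and RANSAC thresholds are common defaults rather than values from the paper:

        import cv2
        import numpy as np

        def estimate_homography(ref_gray, mov_gray):
            """SIFT matches + RANSAC homography mapping mov_gray onto ref_gray."""
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(ref_gray, None)
            k2, d2 = sift.detectAndCompute(mov_gray, None)
            pairs = cv2.BFMatcher().knnMatch(d2, d1, k=2)
            good = [m for m, s in pairs if m.distance < 0.75 * s.distance]  # Lowe ratio test
            src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # RANSAC rejects outliers
            return H   # apply with cv2.warpPerspective before blending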

  3. Automatic Data Extraction from Websites for Generating Aquatic Product Market Information

    Institute of Scientific and Technical Information of China (English)

    YUAN Hong-chun; CHEN Ying; SUN Yue-fu

    2006-01-01

    The massive web-based information resources have led to an increasing demand for effective automatic retrieval of target information for web applications. This paper introduces a web-based data extraction tool that deploys various algorithms to locate, extract and filter tabular data from HTML pages and to transform them into new web-based representations. The tool has been applied in an aquaculture web application platform for extracting and generating aquatic product market information. Results prove that this tool is very effective in extracting the required data from web pages.
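
    In the same spirit, a minimal Python sketch of locating and filtering tabular data from an HTML page; pandas.read_html stands in for the tool's own extraction algorithms, and the URL and price-column filter are hypothetical:

        import pandas as pd

        def extract_market_tables(url):
            """Parse every <table> on the page and keep market-like tables."""
            tables = pd.read_html(url)            # raises ValueError if no tables found
            return [t for t in tables
                    if any("price" in str(c).lower() for c in t.columns)]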

  4. Automatic Extraction of Leaf Characters from Herbarium Specimens

    OpenAIRE

    Corney, DPA; Clark, JY; Tang, HL; Wilkin, P

    2012-01-01

    Herbarium specimens are a vital resource in botanical taxonomy. Many herbaria are undertaking large-scale digitization projects to improve access and to preserve delicate specimens, and in doing so are creating large sets of images. Leaf characters are important for describing taxa and distinguishing between them and they can be measured from herbarium specimens. Here, we demonstrate that herbarium images can be analysed using suitable software and that leaf characters can be extracted automa...

  5. Automatic Extraction of DTM from Low Resolution Dsm by Twosteps Semi-Global Filtering

    Science.gov (United States)

    Zhang, Yanfeng; Zhang, Yongjun; Zhang, Yi; Li, Xin

    2016-06-01

    Automatically extracting a DTM from DSM or LiDAR data by distinguishing non-ground points from ground points is an important issue. Many algorithms have been developed for it; however, most are targeted at processing dense LiDAR data and lack the ability to derive a DTM from a low-resolution DSM. This is caused by the reduced distinction in elevation variation between steep terrain and surface objects. In this paper, a method called two-step semi-global filtering (TSGF) is proposed to extract a DTM from a low-resolution DSM. First, the DSM slope map is calculated and smoothed by SGF (semi-global filtering); the result is binarized and used as a mask of flat terrain. Second, the DSM is segmented under the restriction of the flat-terrain mask. Finally, each segment is filtered with the semi-global algorithm to remove non-ground points, which produces the final DTM. The first SGF is based on the global distribution characteristic of large slopes, which distinguishes steep from flat terrain. The second SGF filters non-ground points on the DSM within flat-terrain segments. Thus, non-ground points are removed robustly by the two SGF steps while the shape of steep terrain is kept. Experiments on DSMs generated from ZY3 imagery with resolutions of 10-30 m demonstrate the effectiveness of the proposed method.

  6. Apparatus and methods for hydrocarbon extraction

    Energy Technology Data Exchange (ETDEWEB)

    Bohnert, George W.; Verhulst, Galen G.

    2016-04-26

    Systems and methods for hydrocarbon extraction from hydrocarbon-containing material. Such systems and methods relate to extracting hydrocarbon from hydrocarbon-containing material employing a non-aqueous extractant. Additionally, such systems and methods relate to recovering and reusing non-aqueous extractant employed for extracting hydrocarbon from hydrocarbon-containing material.

  7. Automatic Extraction of Document Keyphrases for Use in Digital Libraries: Evaluation and Applications.

    Science.gov (United States)

    Jones, Steve; Paynter, Gordon W.

    2002-01-01

    Discussion of finding relevant documents in electronic document collections focuses on an evaluation of the Kea automatic keyphrase extraction algorithm which was developed by members of the New Zealand Digital Library Project. Results are based on evaluations by human assessors of the quality and appropriateness of Kea keyphrases. (Author/LRW)

  8. An improved, SSH-based method to automatically identify mesoscale eddies in the ocean

    Institute of Scientific and Technical Information of China (English)

    WANG Xin; DU Yun-yan; ZHOU Cheng-hu; FAN Xing; YI Jia-wei

    2013-01-01

    Mesoscale eddies are an important component of oceanic features, and how to automatically identify them from available data has become an important research topic. Through careful examination of existing methods, we propose an improved, SSH-based automatic identification method. Using the inclusion relation of enclosed SSH contours, the mesoscale eddy boundary and core(s) can be automatically identified. The time evolution of eddies can be examined by a threshold-search algorithm and a similarity-based tracking algorithm. Sea-surface height (SSH) data from the Naval Research Laboratory Layered Ocean Model (NLOM) and sea-level anomaly (SLA) data from altimeters are used in many experiments in which different automatic identification methods are compared. Our results indicate that the improved method extracts the mesoscale eddy boundary more precisely while retaining the multiple-core structure. In combination with the tracking algorithm, this method can capture complete mesoscale eddy processes and thus provide reliable information for further study of eddy dynamics, merging, splitting, and the evolution of multi-core structures.
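
    A simplified Python sketch of the threshold-search step for anticyclonic (high-SSH) eddies: step a level across the field and keep compact connected regions, i.e. areas bounded by a closed SSH contour. The grid, levels, and size cap are assumptions, and the contour-inclusion analysis for multi-core eddies is omitted:

        import numpy as np
        from scipy import ndimage

        def candidate_eddies(ssh, levels, max_cells=2000):
            """Return (level, mask) pairs for compact regions enclosed by SSH contours."""
            found = []
            for level in levels:
                labeled, n = ndimage.label(ssh > level)   # regions inside a closed contour
                for region in range(1, n + 1):
                    mask = labeled == region
                    if mask.sum() <= max_cells:           # keep mesoscale-sized blobs only
                        found.append((level, mask))
            return found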

  9. Automatic extraction of ontological relations from Arabic text

    Directory of Open Access Journals (Sweden)

    Mohammed G.H. Al Zamil

    2014-12-01

    The proposed methodology has been designed to analyze Arabic text using lexical semantic patterns of the Arabic language according to a set of features. Next, the features have been abstracted and enriched with formal descriptions for the purpose of generalizing the resulted rules. The rules, then, have formulated a classifier that accepts Arabic text, analyzes it, and then displays related concepts labeled with its designated relationship. Moreover, to resolve the ambiguity of homonyms, a set of machine translation, text mining, and part of speech tagging algorithms have been reused. We performed extensive experiments to measure the effectiveness of our proposed tools. The results indicate that our proposed methodology is promising for automating the process of extracting ontological relations.

  10. A Method of Automatic Keyword Extraction Based on Word Span

    Institute of Scientific and Technical Information of China (English)

    谢晋

    2012-01-01

    Considering the noise-interference problem common in Chinese-text keyword extraction, this paper proposes a new keyword extraction method based on word span. The scheme analyzes the relative importance of a word to a text by measuring the distance between the positions where the word first and last appears in the text; this distance, called the word span, indicates the scope over which the word occurs. Since keywords and noise words differ significantly in word span, the span can be used to recognize and filter out high-frequency noise precisely. Word span is incorporated as a factor into the traditional keyword-weight calculation, together with features including frequency, location, and part of speech (POS). The extraction pipeline consists of word segmentation, stop-word filtering, feature statistics, and weight calculation, and selects several keywords that express the gist of the article. Experiments on the Fudan University Corpus with various types of texts showed that this approach improves the precision of keyword extraction and performs stably across texts.
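
    A minimal Python sketch of the word-span weighting itself; tokenization, stop-word filtering, and the exact combination with location and POS features are left out, and the normalization is an assumption:

        from collections import Counter

        def keyword_weights(tokens):
            """Weight = frequency x normalized span (first-to-last occurrence distance)."""
            first, last = {}, {}
            for pos, w in enumerate(tokens):
                first.setdefault(w, pos)
                last[w] = pos
            freq = Counter(tokens)
            n = max(len(tokens), 1)
            # Frequent words confined to a narrow span (local noise) get low weight.
            return {w: freq[w] * (last[w] - first[w] + 1) / n for w in freq}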

  11. Multiple Adaptive Neuro-Fuzzy Inference System with Automatic Features Extraction Algorithm for Cervical Cancer Recognition

    Directory of Open Access Journals (Sweden)

    Mohammad Subhi Al-batah

    2014-01-01

    To date, cancer of uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods (i.e., Pap smear and liquid-based cytology (LBC)) to screen for cervical cancer are time-consuming and dependent on the skill of the cytopathologist and thus are rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for the recognition process. The MANFIS contains a set of ANFIS models which are arranged in parallel combination to produce a model with a multi-input-multioutput structure. The system is capable of classifying cervical cell images into three groups, namely, normal, low-grade squamous intraepithelial lesion (LSIL) and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove the capability of the AFE algorithm to be as effective as manual extraction by human experts, while the proposed MANFIS produces a good classification performance with 94.2% accuracy.

  12. Uncertain Training Data Edition for Automatic Object-Based Change Map Extraction

    Science.gov (United States)

    Hajahmadi, S.; Mokhtarzadeh, M.; Mohammadzadeh, A.; Valadanzouj, M. J.

    2013-09-01

    Due to the rapid transformation of societies and the consequent growth of cities, it is necessary to study these changes in order to achieve better control and management of urban areas and to assist decision-makers. Change detection involves the ability to quantify temporal effects using multi-temporal data sets. Existing maps of the study area are one of the most important sources for this purpose. Although old databases and maps are a great resource, the training data extracted from them are likely to contain errors, which affect the classification procedure; as a result, training-sample editing is essential. Due to the urban nature of the area studied and the problems caused by pixel-based methods, object-based classification is applied. To this end, the image is segmented into 4 scale levels using a multi-resolution segmentation procedure. After obtaining the segments at the required levels, training samples are extracted automatically using the existing old map. Because of the age of the map, these samples are uncertain and contain wrong data. To handle this issue, an editing process is proposed based on the k-nearest neighbour and k-means algorithms. Next, the image is classified in a multi-resolution object-based manner, and the effects of training-sample refinement are evaluated. As a final step, this classified image is compared with the existing map and the changed areas are detected.
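
    A hedged Python sketch of the k-NN side of the editing idea: drop training samples whose labelled neighbors mostly disagree with the (possibly outdated) map label. The feature vectors, k, and the agreement ratio are assumptions, and the k-means part of the proposed editing is omitted:

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        def edit_training_samples(X, y, k=5, min_agree=0.6):
            """Keep samples whose k nearest neighbors mostly share their label."""
            knn = KNeighborsClassifier(n_neighbors=k + 1).fit(X, y)
            idx = knn.kneighbors(X, return_distance=False)[:, 1:]  # skip the sample itself
            agree = (y[idx] == y[:, None]).mean(axis=1)
            keep = agree >= min_agree
            return X[keep], y[keep]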

  13. Multiple adaptive neuro-fuzzy inference system with automatic features extraction algorithm for cervical cancer recognition.

    Science.gov (United States)

    Al-batah, Mohammad Subhi; Isa, Nor Ashidi Mat; Klaib, Mohammad Fadel; Al-Betar, Mohammed Azmi

    2014-01-01

    To date, cancer of uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods (i.e., Pap smear and liquid-based cytology (LBC)) to screen for cervical cancer are time-consuming and dependent on the skill of the cytopathologist and thus are rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for recognition process. The MANFIS contains a set of ANFIS models which are arranged in parallel combination to produce a model with multi-input-multioutput structure. The system is capable of classifying cervical cell image into three groups, namely, normal, low-grade squamous intraepithelial lesion (LSIL) and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove the capability of the AFE algorithm to be as effective as the manual extraction by human experts, while the proposed MANFIS produces a good classification performance with 94.2% accuracy. PMID:24707316

  14. Automatic indicator dilution curve extraction in dynamic-contrast enhanced imaging using spectral clustering

    Science.gov (United States)

    Saporito, Salvatore; Herold, Ingeborg HF; Houthuizen, Patrick; van den Bosch, Harrie CM; Korsten, Hendrikus HM; van Assen, Hans C.; Mischi, Massimo

    2015-07-01

    Indicator dilution theory provides a framework for the measurement of several cardiovascular parameters. Recently, dynamic imaging and contrast agents have been proposed to apply the method in a minimally invasive way. However, the use of contrast-enhanced sequences requires the definition of regions of interest (ROIs) in the dynamic image series, a time-consuming and operator-dependent task that is commonly performed manually. In this work, we propose a method for the automatic extraction of indicator dilution curves (IDCs), exploiting the time-domain correlation between pixels belonging to the same region. Individual time-intensity curves were projected into a low-dimensional subspace using principal component analysis; subsequently, clustering was performed to identify the different ROIs. The method was assessed on clinically available DCE-MRI and DCE-US recordings, comparing the derived IDCs with those obtained manually. The robustness to noise of the proposed approach was shown on simulated data. The tracer-kinetic parameters derived from real images were in agreement with those obtained from manual annotation. The presented method is a clinically useful preprocessing step prior to further ROI-based cardiac quantification.
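
    A compact Python sketch of the two steps named above; note that k-means stands in here for the spectral clustering actually used in the paper, and the component and cluster counts are assumptions:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        def cluster_time_intensity_curves(curves, n_components=3, n_rois=4):
            """curves: (n_pixels, n_frames) array; returns labels and per-ROI curves."""
            z = PCA(n_components=n_components).fit_transform(curves)
            labels = KMeans(n_clusters=n_rois, n_init=10).fit_predict(z)
            # Average the raw curves per cluster: one indicator dilution curve per ROI.
            idcs = np.stack([curves[labels == k].mean(axis=0) for k in range(n_rois)])
            return labels, idcs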

  15. Automatic landslide and mudflow detection method via multichannel sparse representation

    Science.gov (United States)

    Chao, Chen; Zhou, Jianjun; Hao, Zhuo; Sun, Bo; He, Jun; Ge, Fengxiang

    2015-10-01

    Landslide and mudflow detection is an important application of aerial images and high-resolution remote sensing images, and is crucial for national security and disaster relief. Since high-resolution images are often large, it is necessary to develop an efficient algorithm for landslide and mudflow detection. Based on the theory of sparse representation, we propose a novel automatic landslide and mudflow detection method in this paper, which combines multi-channel sparse representation and an eight-neighbor judgment method. The whole detection process is fully automatic. We conducted an experiment on a high-resolution image of Zhouqu district, Gansu province, China, from August 2010 and obtained a promising result, which demonstrates the effectiveness of sparse representation for landslide and mudflow detection.

  16. Towards Automatic Music Transcription: Extraction of MIDI-Data out of Polyphonic Piano Music

    Directory of Open Access Journals (Sweden)

    Jens Wellhausen

    2005-06-01

    Driven by the increasing amount of music available electronically, the need for automatic search and retrieval systems for music becomes more and more important. In this paper, an algorithm for the automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications and music analysis. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined to extract the notes played, and an algorithm for chord separation based on Independent Subspace Analysis is presented. Finally, the results are used to build a MIDI file.

  17. Machine learning based sample extraction for automatic speech recognition using dialectal Assamese speech.

    Science.gov (United States)

    Agarwalla, Swapna; Sarma, Kandarpa Kumar

    2016-06-01

    Automatic Speaker Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With the assimilation of emerging concepts like big data and the Internet of Things (IoT) as extended elements of HCI, ASR techniques are passing through a paradigm shift. Of late, learning-based techniques have started to receive greater attention from research communities related to ASR, owing to the fact that they possess a natural ability to mimic biological behavior, which aids ASR modeling and processing. Current learning-based ASR techniques are evolving further with the incorporation of big data and IoT-like concepts. Here, in this paper, we report certain approaches based on machine learning (ML) for the extraction of relevant samples from a big data space, and apply them to ASR using certain soft computing techniques for Assamese speech with dialectal variations. A class of ML techniques comprising the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms, using raw speech, extracted features, and frequency-domain forms, is considered. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, relevant samples are selected and assimilated from a large storage. Next, a few conventional methods are used for feature extraction of a few selected types. The features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker, and gender variations in dialectal Assamese speech. The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time


  19. Sleep Spindles as an Electrographic Element: Description and Automatic Detection Methods

    Science.gov (United States)

    Maquet, Pierre

    2016-01-01

    The sleep spindle is a peculiar oscillatory brain pattern which has been associated with a number of sleep functions (isolation from exteroceptive stimuli, memory consolidation) and individual characteristics (intellectual quotient). Oddly enough, the definition of a spindle is both incomplete and restrictive. In consequence, there is no consensus about how to detect spindles. Visual scoring is cumbersome and user dependent. To analyze spindle activity in a more robust way, automatic sleep spindle detection methods are essential. Various algorithms were developed, depending on individual research interest, which hampers direct comparisons and meta-analyses. In this review, the sleep spindle is first defined physically and topographically. From this general description, we tentatively extract the main characteristics to be detected and analyzed. A nonexhaustive list of automatic spindle detection methods is provided along with a description of their main processing principles. Finally, we propose a technique to assess the detection methods in a robust and comparable way.

  20. Sleep Spindles as an Electrographic Element: Description and Automatic Detection Methods

    Directory of Open Access Journals (Sweden)

    Dorothée Coppieters ’t Wallant

    2016-01-01

    The sleep spindle is a peculiar oscillatory brain pattern which has been associated with a number of sleep functions (isolation from exteroceptive stimuli, memory consolidation) and individual characteristics (intellectual quotient). Oddly enough, the definition of a spindle is both incomplete and restrictive. In consequence, there is no consensus about how to detect spindles. Visual scoring is cumbersome and user dependent. To analyze spindle activity in a more robust way, automatic sleep spindle detection methods are essential. Various algorithms were developed, depending on individual research interest, which hampers direct comparisons and meta-analyses. In this review, the sleep spindle is first defined physically and topographically. From this general description, we tentatively extract the main characteristics to be detected and analyzed. A nonexhaustive list of automatic spindle detection methods is provided along with a description of their main processing principles. Finally, we propose a technique to assess the detection methods in a robust and comparable way.
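
    For concreteness, a generic amplitude-threshold detector in the spirit of the algorithms reviewed above (a Python sketch, not any specific published method): band-pass the EEG in the sigma band, take the envelope, and keep supra-threshold runs of spindle-like duration. The sampling rate, band edges, threshold, and duration bounds are assumptions:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def detect_spindles(eeg, fs, thresh_sd=3.0):
            """Return (start_s, end_s) intervals of candidate sleep spindles."""
            b, a = butter(4, [11 / (fs / 2), 16 / (fs / 2)], btype="band")
            sigma = filtfilt(b, a, eeg)                 # sigma-band (11-16 Hz) signal
            env = np.abs(hilbert(sigma))                # instantaneous amplitude envelope
            above = env > env.mean() + thresh_sd * env.std()
            d = np.diff(above.astype(int), prepend=0, append=0)
            starts, ends = np.flatnonzero(d == 1), np.flatnonzero(d == -1)
            # Keep events of typical spindle duration (0.5-3 s).
            return [(s / fs, e / fs) for s, e in zip(starts, ends)
                    if 0.5 <= (e - s) / fs <= 3.0]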

  1. A Method of Automatic Extraction of Image Control Points for UAV Image Based on POS Data

    Institute of Scientific and Technical Information of China (English)

    鲁恒; 李永树; 江禹

    2011-01-01

    Unmanned Aerial Vehicle (UAV) images are characterized by a high degree of overlap and a heavy image-processing workload. In order to improve the efficiency of UAV photogrammetry and take advantage of the rapid mapping offered by UAV technology, a method for automatically plotting image control points using corrected POS data is put forward. Based on the principles of UAV POS data correction, a POS data correction model is established and the error-correction parameters are acquired by laying out a small number of control points in the regional network; the corrected POS data are then used to plot control points on the UAV images automatically. The study results show that the method has good practical value for rapid UAV image processing.

  2. Automatic Extraction of Femur Contours from Calibrated X-Ray Images using Statistical Information

    Directory of Open Access Journals (Sweden)

    Xiao Dong

    2007-09-01

    Full Text Available Automatic identification and extraction of bone contours from x-ray images is an essential first step for further medical image analysis. In this paper we propose a 3D statistical model based framework for proximal femur contour extraction from calibrated x-ray images. The automatic initialization that aligns the 3D model with the x-ray images is solved by an Estimation of Bayesian Network Algorithm, which fits a simplified multiple-component geometrical model of the proximal femur to the x-ray data. Landmarks can be extracted from the geometrical model for the initialization of the 3D statistical model. The contour extraction is then accomplished by a joint registration and segmentation procedure: we iteratively update the extracted bone contours and an instanced 3D model to fit the x-ray images. Taking the projected silhouettes of the instanced 3D model on the registered x-ray images as templates, bone contours are extracted by a graphical-model-based Bayesian inference. The 3D model is then updated by a non-rigid 2D/3D registration between the 3D statistical model and the extracted bone contours. Preliminary experiments on clinical data sets verified its validity.

  3. Microbial diversity in fecal samples depends on DNA extraction method

    DEFF Research Database (Denmark)

    Mirsepasi, Hengameh; Persson, Søren; Struve, Carsten;

    2014-01-01

    BACKGROUND: There are challenges, when extracting bacterial DNA from specimens for molecular diagnostics, since fecal samples also contain DNA from human cells and many different substances derived from food, cell residues and medication that can inhibit downstream PCR. The purpose of the study...... was to evaluate two different DNA extraction methods in order to choose the most efficient method for studying intestinal bacterial diversity using Denaturing Gradient Gel Electrophoresis (DGGE). FINDINGS: In this study, a semi-automatic DNA extraction system (easyMag®, BioMérieux, Marcy I'Etoile, France......) and a manual one (QIAamp DNA Stool Mini Kit, Qiagen, Hilden, Germany) were tested on stool samples collected from 3 patients with Inflammatory Bowel disease (IBD) and 5 healthy individuals. DNA extracts obtained by the QIAamp DNA Stool Mini Kit yield a higher amount of DNA compared to DNA extracts obtained...

  4. An Automatic Optical and SAR Image Registration Method Using Iterative Multi-Level and Refinement Model

    Science.gov (United States)

    Xu, C.; Sui, H. G.; Li, D. R.; Sun, K. M.; Liu, J. Y.

    2016-06-01

    Automatic image registration is a vital yet challenging task, particularly for multi-sensor remote sensing images. Given the diversity of the data, it is unlikely that a single registration algorithm or a single image feature will work satisfactorily for all applications. Focusing on this issue, the main contribution of this paper is to propose an automatic optical-to-SAR image registration method using an iterative multi-level and refinement model. Firstly, a multi-level coarse-to-fine registration strategy is presented: visual saliency features are used to acquire a coarse registration, specific area and line features are then used to refine the registration result, and after that sub-pixel matching is applied using a KNN graph. Secondly, an iterative strategy that involves adaptive parameter adjustment for re-extracting and re-matching features is presented. Considering the fact that almost all feature-based registration methods rely on feature extraction results, the iterative strategy improves the robustness of feature matching, and all parameters can be automatically and adaptively adjusted in the iterative procedure. Thirdly, a uniform level-set segmentation model for optical and SAR images is presented to segment conjugate features, and the Voronoi diagram is introduced into Spectral Point Matching (VSPM) to further enhance the matching accuracy between the two sets of matching points. Experimental results show that the proposed method can effectively and robustly generate sufficient, reliable point pairs and provide accurate registration.

  5. Method of purifying neutral organophosphorus extractants

    Science.gov (United States)

    Horwitz, E. Philip; Gatrone, Ralph C.; Chiarizia, Renato

    1988-01-01

    A method for removing acidic contaminants from neutral mono- and bifunctional organophosphorus extractants by contacting the extractant with a macroporous cation exchange resin in the H⁺ state followed by contact with a macroporous anion exchange resin in the OH⁻ state, whereupon the resins take up the acidic contaminants from the extractant, purifying the extractant and improving its extraction capability.

  6. Optimization of Doppler velocity echocardiographic measurements using an automatic contour detection method.

    Science.gov (United States)

    Gaillard, E; Kadem, L; Pibarot, P; Durand, L-G

    2009-01-01

    Intra- and inter-observer variability in Doppler velocity echocardiographic measurements (DVEM) is a significant issue: imprecision in DVEM can lead to diagnostic errors, particularly in quantifying the severity of heart valve dysfunction. To minimize the variability and increase the speed of DVEM, we have developed an automatic method of Doppler velocity wave contour detection based on active contour models. To validate the new method, results obtained with it were compared to those obtained manually by an experienced echocardiographer on Doppler echocardiographic images of left ventricular outflow tract and transvalvular flow velocity signals recorded in 30 patients, 15 with aortic stenosis and 15 with mitral stenosis. We focused on three essential variables that are measured routinely by Doppler echocardiography in the clinical setting: the maximum velocity, the mean velocity and the velocity-time integral. Comparison between the two methods showed very good agreement (linear correlation coefficient R² = 0.99 between the automatically and manually extracted variables). Moreover, the computation time was short, about 5 s. Applied to DVEM, this new method could therefore provide a useful tool to eliminate the intra- and inter-observer variability associated with DVEM and thereby improve the diagnosis of cardiovascular disease. It could also allow the echocardiographer to complete these measurements in a much shorter time than the standard manual tracing method. From a practical point of view, the model can easily be implemented in a standard echocardiographic system. PMID:19965162
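
    Given an already-extracted velocity envelope, the three routine variables named above reduce to elementary numerics. The following is a minimal sketch under assumed units (m/s, seconds); the contour-detection step itself (active contour models) is not reproduced here.

    ```python
    # Routine Doppler variables from a traced velocity envelope.
    import numpy as np

    def doppler_variables(velocity, dt):
        """velocity: envelope samples (m/s) over one flow period; dt: seconds."""
        v_max = velocity.max()              # maximum velocity
        v_mean = velocity.mean()            # mean velocity
        vti = np.trapz(velocity, dx=dt)     # velocity-time integral (m)
        return v_max, v_mean, vti

    # synthetic half-sine ejection profile as a stand-in for a traced contour
    t = np.linspace(0.0, 0.3, 300)
    print(doppler_variables(1.2 * np.sin(np.pi * t / 0.3), dt=t[1] - t[0]))
    ```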

  7. Semi-automatic method for routine evaluation of fibrinolytic components.

    Science.gov (United States)

    Collen, D; Tytgat, G; Verstraete, M

    1968-11-01

    A semi-automatic method for the routine evaluation of fibrinolytic activity is described. The principle is based upon graphic recording by a multichannel voltmeter of tension drops over a potentiometer, caused by variations in the influence of light upon a light-dependent resistance, resulting from modifications in the composition of the fibrin fibres by lysis. The method is applied to the assessment of certain fibrinolytic factors with widespread fibrinolytic endpoints, and the results are compared with simultaneously obtained visual data on the plasmin assay, the plasminogen assay, and on the euglobulin clot lysis time.

  8. An Automatic Building Extraction and Regularisation Technique Using LiDAR Point Cloud Data and Orthoimage

    Directory of Open Access Journals (Sweden)

    Syed Ali Naqi Gilani

    2016-03-01

    Full Text Available The development of robust and accurate methods for automatic building detection and regularisation using multisource data continues to be a challenge due to point cloud sparsity, high spectral variability, differences among urban objects, surrounding complexity, and data misalignment. To address these challenges, constraints on an object's size, height, area, and orientation are generally employed, which adversely affects detection performance. Buildings that are small, under shadow, or partly occluded are often discarded during the elimination of superfluous objects. To overcome these limitations, a methodology is developed to extract and regularise buildings using features from point clouds and orthoimagery. The building delineation process is carried out by identifying candidate building regions and segmenting them into grids. Vegetation elimination, building detection and extraction of their partially occluded parts are achieved by synthesising the point cloud and image data. Finally, the detected buildings are regularised by exploiting image lines in the building regularisation process. Detection and regularisation have been evaluated using the ISPRS benchmark and four Australian data sets which differ in point density (1 to 29 points/m²), building sizes, shadows, terrain, and vegetation. Results indicate 83% to 93% per-area completeness with correctness above 95%, demonstrating the robustness of the approach. The absence of over- and many-to-many segmentation errors in the ISPRS data set indicates that the technique has higher per-object accuracy. When compared with six existing similar methods, the proposed detection and regularisation approach performs significantly better on the more complex (Australian) data sets, and does better than or equal to its counterparts on the ISPRS benchmark.

  9. Automatic Recognition Method for Optical Measuring Instruments Based on Machine Vision

    Institute of Scientific and Technical Information of China (English)

    SONG Le; LIN Yuchi; HAO Liguo

    2008-01-01

    Based on a comprehensive study of various algorithms, the automatic recognition of traditional ocular optical measuring instruments is realized. Taking a universal tool microscope (UTM) lens-view image as an example, a two-layer automatic recognition model for data reading is established after a series of pre-processing algorithms is applied. This model is an optimal combination of a correlation-based template matching method and a concurrent back propagation (BP) neural network. Multiple complementary feature extraction is used in generating the eigenvectors of the concurrent network. In order to improve fault tolerance, rotation-invariant features based on Zernike moments are extracted from digit characters, and a 4-dimensional group of outline features is also obtained. Moreover, the operating time and reading accuracy can be adjusted dynamically by setting the threshold value. The experimental results indicate that the newly developed algorithm has high recognition precision and working speed; the average reading ratio reaches 97.23%. The recognition method can obtain the results of optical measuring instruments rapidly and stably without modifying their original structure, which meets the application requirements.

  10. Facilities of different methods of automatic recognition of sleep stages

    Directory of Open Access Journals (Sweden)

    Erofeev A.E.

    2012-06-01

    Full Text Available The goal of the research is to assess the information content of different fractal methods of deterministic chaos when applied to the automated recognition of sleep phases in the computer electroencephalogram (EEG). The Hurst normalized range method, the Grassberger-Procaccia correlation integral and the approximated entropy method are used in the research. The research reveals that a hypnogram can be obtained if appropriate parameters of the above methods are used, together with the necessary normalization of the original data and averaging of the results. The resulting hypnogram shows complete coincidence of the defined sleep phases for half of the epochs recorded in the EEG. These methods of automatic recognition of sleep stages based on deterministic chaos significantly reduce the time needed to interpret a polysomnographic recording and reduce the number of channels through which sleep parameters are registered.
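
    One of the three measures, the Hurst normalized range (R/S) statistic, is straightforward to sketch for a single EEG epoch. This is a minimal sketch; the window sizes and the log-log fit below are illustrative assumptions, and the other two measures (correlation integral, approximated entropy) follow analogous per-epoch computations.

    ```python
    # Hurst exponent of one epoch via the rescaled-range (R/S) method.
    import numpy as np

    def hurst_rs(x, window_sizes=(32, 64, 128, 256, 512)):
        rs = []
        for n in window_sizes:
            vals = []
            for i in range(0, len(x) - n + 1, n):
                c = x[i:i + n]
                dev = np.cumsum(c - c.mean())       # cumulative deviations
                s = c.std()
                if s > 0:
                    vals.append((dev.max() - dev.min()) / s)
            rs.append(np.mean(vals))
        # slope of log(R/S) against log(n) estimates the Hurst exponent
        return np.polyfit(np.log(window_sizes), np.log(rs), 1)[0]

    print(hurst_rs(np.random.randn(4096)))  # ~0.5 for uncorrelated noise
    ```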

  12. Automatic facial expression recognition based on features extracted from tracking of facial landmarks

    Science.gov (United States)

    Ghimire, Deepak; Lee, Joonwhoan

    2014-01-01

    In this paper, we present a fully automatic facial expression recognition system using support vector machines, with geometric features extracted from the tracking of facial landmarks. Facial landmark initialization and tracking are performed using an elastic bunch graph matching algorithm. Facial expression recognition is based on features extracted from the tracking of not only individual landmarks but also pairs of landmarks. The recognition accuracy on the Extended Cohn-Kanade (CK+) database shows that our proposed set of features produces better results, because it utilizes time-varying graph information as well as the motion of individual facial landmarks.

  13. Semi-Automatically Extracting FAQs to Improve Accessibility of Software Development Knowledge

    CERN Document Server

    Henß, Stefan; Mezini, Mira

    2012-01-01

    Frequently asked questions (FAQs) are a popular way to document software development knowledge. As creating such documents is expensive, this paper presents an approach for automatically extracting FAQs from sources of software development discussion, such as mailing lists and Internet forums, by combining techniques of text mining and natural language processing. We apply the approach to popular mailing lists and carry out a survey among software developers to show that it is able to extract high-quality FAQs that may be further improved by experts.

  14. A method for automatically constructing the initial contour of the common carotid artery

    Directory of Open Access Journals (Sweden)

    Yara Omran

    2013-10-01

    Full Text Available In this article we propose a novel method to automatically set the initial contour used by the active contours algorithm. The proposed method exploits accumulative intensity profiles to locate points on the arterial wall. The intensity profiles of sections that intersect the artery show distinguishable characteristics that make it possible to recognize them from the profiles of sections that do not intersect the artery walls. The proposed method is applied to ultrasound images of the transverse section of the common carotid artery, but it can be extended to images of the longitudinal section. The intensity profiles are classified using the support vector machine algorithm, and the results of different kernels are compared. The features used for classification are basically statistical features of the intensity profiles; the low echogenicity of the arterial lumen gives the profiles that intersect the artery a special shape that helps distinguish them from other, general profiles. Outlining the arterial walls may seem a classic task in image processing; however, most of the methods used to outline the artery start from a manual or semi-automatic initial contour. The proposed method is therefore valuable for automating the entire process of artery detection and segmentation.

  15. Automatic Identification and Data Extraction from 2-Dimensional Plots in Digital Documents

    CERN Document Server

    Brouwer, William; Das, Sujatha; Mitra, Prasenjit; Giles, C L

    2008-01-01

    Most search engines index the textual content of documents in digital libraries. However, scholarly articles frequently report important findings in figures for visual impact and the contents of these figures are not indexed. These contents are often invaluable to the researcher in various fields, for the purposes of direct comparison with their own work. Therefore, searching for figures and extracting figure data are important problems. To the best of our knowledge, there exists no tool to automatically extract data from figures in digital documents. If we can extract data from these images automatically and store them in a database, an end-user can query and combine data from multiple digital documents simultaneously and efficiently. We propose a framework based on image analysis and machine learning to extract information from 2-D plot images and store them in a database. The proposed algorithm identifies a 2-D plot and extracts the axis labels, legend and the data points from the 2-D plot. We also segrega...

  16. Progressive Concept Evaluation Method for Automatically Generated Concept Variants

    Directory of Open Access Journals (Sweden)

    Woldemichael Dereje Engida

    2014-07-01

    Full Text Available Conceptual design is one of the most critical and important phases of the design process, yet it has the least computer support. The conceptual design support tool (CDST) is a system developed to automatically generate concepts for each subfunction in a functional structure. The automated concept generation process results in a large number of concept variants, which require a thorough evaluation process to select the best design. To address this, a progressive concept evaluation technique consisting of absolute comparison, concept screening and a weighted decision matrix using the analytical hierarchy process (AHP) is proposed to eliminate infeasible concepts at each stage. A software implementation of the proposed method is demonstrated.
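
    The final stage, a weighted decision matrix with AHP-derived criterion weights, can be sketched as follows. The criteria, pairwise judgments and concept scores below are invented for illustration; they are not taken from the paper.

    ```python
    # AHP weights from a pairwise-comparison matrix, then a weighted decision matrix.
    import numpy as np

    pairwise = np.array([[1.0, 3.0, 5.0],    # cost vs. weight vs. reliability
                         [1/3, 1.0, 3.0],
                         [1/5, 1/3, 1.0]])
    eigvals, eigvecs = np.linalg.eig(pairwise)
    w = np.abs(np.real(eigvecs[:, np.argmax(eigvals.real)]))
    w /= w.sum()                             # AHP priority vector

    scores = np.array([[7, 5, 8],            # concept A rated per criterion
                       [6, 8, 6],            # concept B
                       [9, 4, 7]])           # concept C
    totals = scores @ w
    print("best concept:", "ABC"[int(totals.argmax())], totals)
    ```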

  17. An Automatic Detection Method of Nanocomposite Film Element Based on GLCM and Adaboost M1

    Directory of Open Access Journals (Sweden)

    Hai Guo

    2015-01-01

    Full Text Available An automatic detection model adopting pattern recognition technology is proposed in this paper; it can measure the elemental composition of nanocomposite films. Gray level co-occurrence matrix (GLCM) features are extracted from different types of surface morphology images of the film, after which dimensionality reduction is handled by principal component analysis (PCA). It is thus possible to identify the film element using the AdaBoost M1 algorithm, a strong classifier built from ten decision tree classifiers. The experimental results show that this model is superior to SVM (support vector machine), NN and BayesNet models. The proposed method can be widely applied to the automatic detection not only of nanocomposite film elements but also of other nanocomposite material elements.
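
    The pipeline shape (GLCM texture features, PCA, AdaBoost M1 over decision trees) can be sketched with scikit-image and scikit-learn. This is a minimal sketch: the images and labels are random placeholders, the GLCM distances and angles are assumptions, and the function names assume a recent scikit-image.

    ```python
    # GLCM texture features -> PCA -> AdaBoost M1 with decision-tree learners.
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.decomposition import PCA
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier

    def glcm_features(img):
        g = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                         levels=256, symmetric=True, normed=True)
        props = ("contrast", "homogeneity", "energy", "correlation")
        return np.hstack([graycoprops(g, p).ravel() for p in props])

    rng = np.random.default_rng(0)
    imgs = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)  # placeholders
    y = rng.integers(0, 2, size=40)                # placeholder element labels
    X = PCA(n_components=5).fit_transform(
        np.array([glcm_features(i) for i in imgs]))
    clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                             n_estimators=10).fit(X, y)
    print(clf.score(X, y))
    ```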

  18. Automatic Extraction and Size Distribution of Landslides in Kurdistan Region, NE Iraq

    Directory of Open Access Journals (Sweden)

    Arsalan A. Othman

    2013-05-01

    Full Text Available This study aims to assess the localization and size distribution of landslides using automatic remote sensing techniques in (semi-)arid, non-vegetated, mountainous environments. The study area is located in the Kurdistan region (NE Iraq), within the Zagros orogenic belt, which is characterized by the High Folded Zone (HFZ), the Imbricated Zone and the Zagros Suture Zone (ZSZ). The available reference inventory includes 3,190 landslides mapped from sixty QuickBird scenes using manual delineation. The landslide types involve rock falls, translational slides and slumps, which occurred in different lithological units. Two hundred and ninety of these landslides lie within the ZSZ, representing a cumulative surface of 32 km²; the HFZ contains 2,900 landslides with an overall coverage of about 26 km². We first analyzed cumulative landslide number-size distributions using the inventory map. We then proposed a very simple and robust algorithm for automatic landslide extraction using specific band ratios selected from the spectral signatures of bare surfaces, together with a posteriori slope and normalized difference vegetation index (NDVI) thresholds. The index is based on the contrast between landslides and their background, as landslides reflect strongly in the green and red bands. We applied the slope threshold map to remove low-slope areas, which also have high reflectance in the red and green bands. The algorithm was able to detect ~96% of the recent landslides known from the reference inventory on a test site. The cumulative landslide number-size distribution of automatically extracted landslides is very similar to the one based on visual mapping. The automatic extraction is therefore suited to the quantitative analysis of landslides and can thus contribute to the assessment of hazards in similar regions.
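
    The core thresholding logic reduces to a few array operations. A minimal numpy sketch follows; the particular band ratio is a placeholder (the paper's specific ratios are not reproduced), and all threshold values are assumptions to be tuned against a reference inventory.

    ```python
    # Candidate landslide mask from a band ratio, NDVI and slope thresholds.
    import numpy as np

    def landslide_mask(green, red, nir, slope_deg,
                       ratio_min=0.9, ndvi_max=0.2, slope_min=10.0):
        ndvi = (nir - red) / (nir + red + 1e-9)   # vegetation screen
        ratio = green / (red + 1e-9)              # placeholder bare-surface ratio
        return (ratio > ratio_min) & (ndvi < ndvi_max) & (slope_deg > slope_min)
    ```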

  19. [An automatic extraction algorithm for individual tree crown projection area and volume based on 3D point cloud data].

    Science.gov (United States)

    Xu, Wei-Heng; Feng, Zhong-Ke; Su, Zhi-Fang; Xu, Hui; Jiao, You-Quan; Deng, Ou

    2014-02-01

    Tree crown projection area and crown volume are important parameters for the estimation of biomass, tridimensional green biomass and other forestry applications. Conventional measurements of tree crown projection area and crown volume produce large errors in practical situations involving complicated tree crown structures or differing morphological characteristics, and it is difficult to measure and validate their accuracy with conventional methods. To enable tree crown projection and crown volume to be extracted automatically by a computer program, this paper proposes an automatic non-contact measurement based on a terrestrial three-dimensional laser scanner (FARO Photon 120), using a convex hull algorithm on the plane-scattered data points together with a slice segmentation and accumulation algorithm to calculate the tree crown projection area. It is implemented in VC++ 6.0 and Matlab 7.0. The experiments were conducted on 22 common tree species of Beijing, China. The results show that the correlation coefficient between the crown projection area Av calculated by the new method and A4 from the conventional method reaches 0.964. Conventional approaches rely on (1) fixed-angle (e.g., sixteen-point) projections to estimate crown projections, and (2) different regular volume formulas to simulate crown volume according to the tree crown shape. Based on high-resolution 3D LiDAR point cloud data of individual trees, the tree crown structure was reconstructed rapidly and with high accuracy, and the crown projection and volume of individual trees were extracted by this automatic non-contact method, which can provide a reference for tree crown structure studies and is worth popularizing in the field of precision forestry.
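
    One building block named above, the crown projection area from scattered crown points, can be sketched with scipy's convex hull (for a planar hull, ConvexHull.volume is the enclosed area). The point cloud here is synthetic, and the slice-and-accumulate volume step is omitted.

    ```python
    # Crown projection area: 2D convex hull of crown points projected to the
    # ground plane. For a planar hull, ConvexHull.volume is the enclosed area.
    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(1)
    crown_xyz = rng.normal(size=(500, 3))      # placeholder crown points (m)
    hull = ConvexHull(crown_xyz[:, :2])        # drop z: project onto the ground
    print("crown projection area:", hull.volume)
    ```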

  20. Kernel sparse coding method for automatic target recognition in infrared imagery using covariance descriptor

    Science.gov (United States)

    Yang, Chunwei; Yao, Junping; Sun, Dawei; Wang, Shicheng; Liu, Huaping

    2016-05-01

    Automatic target recognition in infrared imagery is a challenging problem. In this paper, a kernel sparse coding method for infrared target recognition using a covariance descriptor is proposed. First, a covariance descriptor combining gray intensity and gradient information of the infrared target is extracted as a feature representation. Then, because the covariance descriptor lies on a non-Euclidean manifold, kernel sparse coding theory is used to solve this problem. We verify the efficacy of the proposed algorithm in terms of confusion matrices on real images consisting of seven categories of infrared vehicle targets.

  1. Automatic extraction of building boundaries using aerial LiDAR data

    Science.gov (United States)

    Wang, Ruisheng; Hu, Yong; Wu, Huayi; Wang, Jian

    2016-01-01

    Building extraction is one of the main research topics of the photogrammetry community. This paper presents automatic algorithms for building boundary extraction from aerial LiDAR data. First, by segmenting height information generated from the LiDAR data, the outer boundaries of aboveground objects are expressed as closed chains of oriented edge pixels. Then, building boundaries are distinguished from nonbuilding ones by evaluating their shapes. The candidate building boundaries are reconstructed as rectangles or regular polygons by applying new algorithms, following the hypothesis-verification paradigm. These algorithms include constrained searching in Hough space, an enhanced Hough transformation, and a sequential linking technique. The experimental results show that the proposed algorithms successfully extract building boundaries at rates of 97%, 85%, and 92% for three LiDAR datasets of varying scene complexity.

  2. Automatic numerical integration methods for Feynman integrals through 3-loop

    Science.gov (United States)

    de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Olagbemi, O.

    2015-05-01

    We give numerical integration results for Feynman loop diagrams through 3-loop, such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The DQAGS algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infra-red) or UV (ultra-violet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities.
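
    scipy.integrate.quad wraps the same QUADPACK routines, so the iterated-integration idea can be sketched on a toy two-dimensional integrand (not a real diagram); the extrapolation and the parallel ParInt machinery are omitted.

    ```python
    # Iterated adaptive quadrature: scipy.integrate.quad wraps QUADPACK (QAGS).
    from scipy.integrate import quad

    def inner(x):
        # inner integral at fixed x; the integrand is a toy stand-in
        val, _ = quad(lambda y: 1.0 / (x + y + 0.1) ** 2, 0.0, 1.0)
        return val

    result, err_est = quad(inner, 0.0, 1.0)    # outer integral over x
    print(result, err_est)
    ```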

  3. The BUME method: a novel automated chloroform-free 96-well total lipid extraction method for blood plasma

    OpenAIRE

    Löfgren, Lars; Ståhlman, Marcus; Forsberg, Gun-Britt; Saarinen, Sinikka; Nilsson, Ralf; Göran I Hansson

    2012-01-01

    Lipid extraction from biological samples is a critical and often tedious preanalytical step in lipid research. Primarily on the basis of automation criteria, we have developed the BUME method, a novel chloroform-free total lipid extraction method for blood plasma compatible with standard 96-well robots. In only 60 min, 96 samples can be automatically extracted, with lipid profiles of the commonly analyzed lipid classes almost identical and absolute recoveries similar to or better than what is ob...

  4. Method for automatic measurement of second language speaking proficiency

    Science.gov (United States)

    Bernstein, Jared; Balogh, Jennifer

    2005-04-01

    Spoken language proficiency is intuitively related to effective and efficient communication in spoken interactions. However, it is difficult to derive a reliable estimate of spoken language proficiency by situated elicitation and evaluation of a person's communicative behavior. This paper describes the task structure and scoring logic of a group of fully automatic spoken language proficiency tests (for English, Spanish and Dutch) that are delivered via telephone or Internet. Test items are presented in spoken form and require a spoken response. Each test is automatically-scored and primarily based on short, decontextualized tasks that elicit integrated listening and speaking performances. The tests present several types of tasks to candidates, including sentence repetition, question answering, sentence construction, and story retelling. The spoken responses are scored according to the lexical content of the response and a set of acoustic base measures on segments, words and phrases, which are scaled with IRT methods or parametrically combined to optimize fit to human listener judgments. Most responses are isolated spoken phrases and sentences that are scored according to their linguistic content, their latency, and their fluency and pronunciation. The item development procedures and item norming are described.

  5. An automatic countercurrent liquid-liquid micro-extraction system coupled with atomic absorption spectrometry for metal determination.

    Science.gov (United States)

    Mitani, Constantina; Anthemidis, Aristidis N

    2015-02-01

    A novel and versatile automatic sequential injection countercurrent liquid-liquid microextraction (SI-CC-LLME) system coupled with flame atomic absorption spectrometry (FAAS) is presented for metal determination. The extraction procedure is based on the countercurrent flow of the aqueous and organic phases, which takes place in a newly designed, lab-made microextraction chamber. A noteworthy feature of the extraction chamber is that it can be used with organic solvents heavier or lighter than water. The proposed method was successfully demonstrated for on-line lead determination and applied to environmental water samples, using 120 μL of chloroform as extractant and ammonium diethyldithiophosphate as chelating reagent. The effects of the major experimental parameters, including the volume of extractant and the flow rates of the aqueous and organic phases, were studied and optimized. Under the optimum conditions, an enhancement factor of 130 was obtained for 6 mL of sample. The detection limit was 1.5 μg L⁻¹ and the precision of the method, expressed as relative standard deviation (RSD), was 2.7% at the 40.0 μg L⁻¹ Pb(II) concentration level. The proposed method was evaluated by analyzing certified reference materials and spiked environmental water samples. PMID:25435230

  6. An Automatic Cloud Detection Method for ZY-3 Satellite

    Directory of Open Access Journals (Sweden)

    CHEN Zhenwei

    2015-03-01

    Full Text Available Automatic cloud detection for optical satellite remote sensing images is a significant step in satellite product production systems. For the browse images cataloged by the ZY-3 satellite, a tree discriminant structure is adopted to carry out cloud detection: the image is divided into sub-images and their features are extracted to classify clouds against ground. However, due to the high complexity of clouds and surfaces and the low resolution of browse images, traditional classification algorithms based on image features have severe limitations. In view of this problem, this paper puts forward an enhancement step applied to the original sub-images before classification to widen the texture difference between clouds and surfaces. Afterwards, using the second moment and first difference of the images, the feature vectors are extended in multi-scale space, and the cloud proportion in the image is estimated through comprehensive analysis. The presented cloud detection algorithm has already been applied in the ZY-3 application system project, and practical experimental results indicate that it significantly improves the accuracy of cloud detection.

  7. Semi-Automatic Mapping Generation for the DBpedia Information Extraction Framework

    Directory of Open Access Journals (Sweden)

    Arup Sarkar, Ujjal Marjit, Utpal Biswas

    2013-03-01

    Full Text Available DBpedia is one of the best-known live projects from the Semantic Web. It is like a mirror version of the Wikipedia site in the Semantic Web: it publishes the information collected from Wikipedia, but only the part which is relevant to the Semantic Web. Collecting information for the Semantic Web from Wikipedia amounts to the extraction of structured data. DBpedia normally does this using a specially designed framework called the DBpedia Information Extraction Framework, which does its work through the evaluation of similar properties from the DBpedia Ontology and the Wikipedia templates. This step is known as DBpedia mapping. At present most of the mapping jobs are done completely manually. In this paper a new framework is introduced that addresses the issues related to template-to-ontology mapping. A semi-automatic mapping tool for the DBpedia project is proposed, with the capability of generating automatic suggestions so that end users can identify similar ontology and template properties. The proposed framework is useful since, after the selection of similar properties, the code necessary to maintain the mapping between ontology and template is generated automatically.

  8. Automatic extraction of road seeds from high-resolution aerial images

    Directory of Open Access Journals (Sweden)

    Aluir P. Dal-Poz

    2005-09-01

    Full Text Available This article presents an automatic methodology for the extraction of road seeds from high-resolution aerial images. The method is based on a set of four road objects and a set of connection rules among road objects. Each road object is a local representation of an approximately straight road fragment, and its construction is based on a combination of polygons describing all relevant image edges, according to rules embodying road knowledge. Each road seed is composed of a sequence of connected road objects, and each such sequence can be geometrically structured as a chain of contiguous quadrilaterals. Experiments carried out with high-resolution aerial images showed that the proposed methodology is very promising for extracting road seeds. This article presents the fundamentals of the method as well as the experimental results.

  9. Extended morphological processing: a practical method for automatic spot detection of biological markers from microscopic images

    Directory of Open Access Journals (Sweden)

    Kimori Yoshitaka

    2010-07-01

    Full Text Available Background A reliable extraction technique for resolving multiple spots in light or electron microscopic images is essential in investigations of the spatial distribution and dynamics of specific proteins inside cells and tissues. Currently, automatic spot extraction and characterization in complex microscopic images poses many challenges to conventional image processing methods. Results A new method to extract closely located, small target spots from biological images is proposed. This method starts with a simple but practical operation based on the extended morphological top-hat transformation to subtract an uneven background. The core of our novel approach is the following: first, the original image is rotated in an arbitrary direction and each rotated image is opened with a single straight line-segment structuring element. Second, the opened images are unified and then subtracted from the original image. To evaluate these procedures, model images of simulated spots with closely located targets were created and the efficacy of our method was compared to that of conventional morphological filtering methods. The results showed the better performance of our method. Spots in real microscope images were also quantified to confirm that the method is applicable in practice. Conclusions Our method achieved effective spot extraction under various image conditions, including aggregated target spots, poor signal-to-noise ratio, and large variations in the background intensity. Furthermore, it has no restrictions with respect to the shape of the extracted spots. The features of our method allow its broad application in biological and biomedical image information analysis.
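
    The core of the method, directional openings with a line structuring element unified and subtracted from the original, can be sketched as follows. For brevity the structuring element is rotated instead of the image (an equivalent simplification); the line length, number of angles and use of a pixelwise maximum to unify the openings are assumptions of this sketch.

    ```python
    # Directional line openings unified (pixelwise max) and subtracted.
    import numpy as np
    from scipy.ndimage import grey_opening

    def line_footprint(length, theta):
        """Boolean footprint containing a line segment at angle theta."""
        fp = np.zeros((length, length), dtype=bool)
        c = (length - 1) / 2.0
        for t in np.linspace(-c, c, 4 * length):
            fp[int(round(c + t * np.sin(theta))),
               int(round(c + t * np.cos(theta)))] = True
        return fp

    def rotated_tophat(img, length=15, n_angles=12):
        opened = np.zeros_like(img, dtype=float)
        for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
            o = grey_opening(img, footprint=line_footprint(length, theta))
            opened = np.maximum(opened, o)     # unify directional openings
        return img - opened                    # small bright spots remain
    ```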

  10. An automatic detection method to the field wheat based on image processing

    Science.gov (United States)

    Wang, Yu; Cao, Zhiguo; Bai, Xiaodong; Yu, Zhenghong; Li, Yanan

    2013-10-01

    The automatic observation of field crops has attracted more and more attention recently. Using image processing technology instead of the existing manual observation method allows timely observation and consistent management. Extracting the wheat from field wheat images is the basis of such observation. In order to improve the accuracy of wheat segmentation, a novel two-stage wheat image segmentation method is proposed. The training stage adjusts several key thresholds, which will be used in the segmentation stage, to achieve the best segmentation results, and records these thresholds. The segmentation stage compares the different values of a color index to determine the class of each pixel. To verify the superiority of the proposed algorithm, we compared our method with other crop segmentation methods. Experimental results show that the proposed method has the best performance.
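
    As a stand-in for the paper's color index (which is not specified in the abstract), the widely used excess-green index illustrates the segmentation-stage decision; the threshold plays the role of the values tuned in the training stage and is an assumption.

    ```python
    # Excess-green thresholding as a stand-in color index for crop/soil splitting.
    import numpy as np

    def wheat_mask(rgb, thresh=0.05):
        """rgb: float image in [0, 1] with shape (H, W, 3)."""
        s = rgb.sum(axis=2) + 1e-9
        r, g, b = (rgb[..., i] / s for i in range(3))   # chromatic coordinates
        return (2.0 * g - r - b) > thresh               # True where vegetation
    ```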

  11. Automatic Inspection of Nuclear-Reactor Tubes During Production and Processing, Using Eddy-Current Methods

    International Nuclear Information System (INIS)

    The possibilities of automatic and semi-automatic inspection of tubes using eddy-current methods are described. The paper deals in particular with modern processes, compared to the use of other non-destructive methods. The essence of the paper is that the methods discussed are ideal for objective automatic inspection. Not only are the known methods described, but certain new methods and their application to the detection of flaws in reactor tubes are discussed. (author)

  12. A Novel Automatic Method for Removal of Flicker in Video

    Institute of Scientific and Technical Information of China (English)

    ZHOU Lei; NI Qiang; WANG Xing-dong; ZHOU Yuan-hua

    2005-01-01

    Intensity flicker is a common form of degradation in archived film, and most algorithms for this distortion are complicated and hard to control. This paper presents a discrete mathematical model of flicker and designs a block-based method for estimating the model's parameters according to the features of intensity variation over large areas. With the estimation result, a compensation model is constructed to repair the current frame. The restoration approach is fully automatic, and repairing the current frame does not require information from the frames that follow it. The algorithm was implemented as a simple and adjustable repair system. Experimental results show that the proposed algorithm can remove most intensity flicker while preserving the wanted effects.
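
    A block-based linear intensity model of this kind can be sketched directly: per block, a gain and an offset are estimated from first- and second-order statistics against a reference (for example, the previously restored frame) and then applied. The block size, the 8-bit range and this particular estimator are assumptions, not the paper's exact formulation.

    ```python
    # Per-block gain/offset flicker compensation against a reference frame.
    import numpy as np

    def deflicker(frame, reference, block=32):
        """frame, reference: 2D grayscale arrays of equal shape (0-255)."""
        out = frame.astype(float).copy()
        h, w = frame.shape
        for i in range(0, h, block):
            for j in range(0, w, block):
                f = frame[i:i + block, j:j + block].astype(float)
                r = reference[i:i + block, j:j + block].astype(float)
                a = r.std() / (f.std() + 1e-9)   # gain estimate
                b = r.mean() - a * f.mean()      # offset estimate
                out[i:i + block, j:j + block] = a * f + b
        return np.clip(out, 0.0, 255.0)
    ```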

  13. Extracting Noun Phrases from Large-Scale Texts A Hybrid Approach and Its Automatic Evaluation

    CERN Document Server

    Chen, K; Chen, Kuang-hua; Chen, Hsin-Hsi

    1994-01-01

    To acquire noun phrases from running texts is useful for many applications, such as word grouping, terminology indexing, etc. The reported literature adopts either a pure probabilistic approach or a pure rule-based noun phrase grammar to tackle this problem. In this paper, we apply a probabilistic chunker to decide the implicit boundaries of constituents and utilize linguistic knowledge to extract the noun phrases by a finite state mechanism. The test texts are from the SUSANNE Corpus, and the results are evaluated automatically by comparison with the parse field of the SUSANNE Corpus. The results of this preliminary experiment are encouraging.

  14. Automatic detection of microaneurysms using microstructure and wavelet methods

    Indian Academy of Sciences (India)

    M Tamilarasi; K Duraiswamy

    2015-06-01

    Retinal microaneurysms are one of the earliest signs in diabetic retinopathy diagnosis. This paper develops an approach to automate the detection of microaneurysms using a wavelet-based Gaussian mixture model and microstructure texture feature extraction. First, the green channel of the colour retinal fundus image is extracted and pre-processed using various enhancement techniques such as bottom-hat filtering and gamma correction. Second, microstructures are extracted as Gaussian profiles in the wavelet domain using a three-level generative model. Multiscale Gaussian kernels are obtained and histogram-based features are extracted from the best kernel. Using the Markov Chain Monte Carlo method, microaneurysms are classified using the optimal feature set. The proposed approach is tested on the DIARETDB0 and DIARETDB1 datasets using a classifier based on the multi-layer perceptron procedure. For the DIARETDB0 dataset, the proposed algorithm obtains a sensitivity of 98.32 and a specificity of 97.59; for the DIARETDB1 dataset, a sensitivity of 98.91 and a specificity of 97.65 are achieved. The accuracies achieved by the proposed algorithm are 97.86 and 98.33 on DIARETDB0 and DIARETDB1 respectively. Based on ground-truth validation, good segmentation results are achieved compared to existing algorithms such as local relative entropy-based thresholding, inverse adaptive surface thresholding, the inverse segmentation method, and dark object segmentation.
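
    The preprocessing chain named above (green channel, gamma correction, bottom-hat filtering) can be sketched with scikit-image; the structuring-element radius and gamma value are illustrative assumptions.

    ```python
    # Green channel -> gamma correction -> bottom-hat (black top-hat) filtering.
    import numpy as np
    from skimage.morphology import black_tophat, disk

    def preprocess_fundus(rgb, gamma=1.2, radius=8):
        green = rgb[..., 1].astype(float) / 255.0   # green channel of the fundus
        green = green ** gamma                      # gamma correction
        # bottom-hat enhances small dark structures such as microaneurysms
        return black_tophat(green, disk(radius))
    ```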

  15. Review of Automatic Feature Extraction from High-Resolution Optical Sensor Data for UAV-Based Cadastral Mapping

    Directory of Open Access Journals (Sweden)

    Sophie Crommelinck

    2016-08-01

    Full Text Available Unmanned Aerial Vehicles (UAVs) have emerged as a rapid, low-cost and flexible acquisition system that appears feasible for application in cadastral mapping: high-resolution imagery, acquired using UAVs, enables a new approach for defining property boundaries. However, UAV-derived data are arguably not exploited to their full potential: based on UAV data, cadastral boundaries are visually detected and manually digitized. A workflow that automatically extracts boundary features from UAV data could increase the pace of current mapping procedures. This review introduces a workflow considered applicable for automated boundary delineation from UAV data. This is done by reviewing approaches for feature extraction from various application fields and synthesizing these into a hypothetical generalized cadastral workflow. The workflow consists of preprocessing, image segmentation, line extraction, contour generation and postprocessing. The review lists example methods per workflow step, including a description, trialed implementations, and a list of case studies applying individual methods. Furthermore, accuracy assessment methods are outlined. Advantages and drawbacks of each approach are discussed in terms of their applicability to UAV data. This review can serve as a basis for future work on the implementation of the most suitable methods in a UAV-based cadastral mapping workflow.

  16. Semi-automatic Extraction Method for PD Gait Data from Infrared Digital Measurement

    Institute of Scientific and Technical Information of China (English)

    于昌琳; 沈林勇; 胡小吾; 钱晋武; 吴曦

    2014-01-01

    Many neurological diseases and bone-damage diseases, such as Parkinson's disease, can cause movement disorders leading to abnormal gait. Replacing qualitative evaluation by a doctor's visual inspection with quantitative evaluation of gait parameters allows a more accurate rehabilitation assessment. At present, the common method for quantitative gait analysis is to collect the three-dimensional coordinates of the human body with motion capture devices and then to extract gait characteristics from these coordinates. During extraction, fully automatic selection is difficult because the raw data are voluminous, fully manual processing is laborious, and the demarcation points of clinical gait present numerous cases. Combining the advantages of several software tools, we use Matlab to select demarcation points manually and then automatically extract gait characteristics, displaying the results in a friendly interface, thereby realizing semi-automatic processing of the three-dimensional coordinates. This makes it possible to extract gait parameters efficiently and to reflect accurately the individual characteristics of clinical gait in Parkinson's disease.

  17. The Automatic Generation of Chinese Outline Font Based on Stroke Extraction

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    A new method to obtain a spline outline description of Chinese fonts based on stroke extraction is presented. It has two primary advantages: (1) the quality of the Chinese output is greatly improved; (2) the memory requirement is reduced. The method for stroke extraction is discussed in detail and experimental results are presented.

  18. Combining contour detection algorithms for the automatic extraction of the preparation line from a dental 3D measurement

    Science.gov (United States)

    Ahlers, Volker; Weigl, Paul; Schachtzabel, Hartmut

    2005-04-01

    Due to the increasing demand for high-quality ceramic crowns and bridges, the CAD/CAM-based production of dental restorations has been a subject of intensive research during the last fifteen years. A prerequisite for the efficient processing of the 3D measurement of prepared teeth with a minimal amount of user interaction is the automatic determination of the preparation line, which defines the sealing margin between the restoration and the prepared tooth. Current dental CAD/CAM systems mostly require the interactive definition of the preparation line by the user, at least by means of giving a number of start points. Previous approaches to the automatic extraction of the preparation line rely on single contour detection algorithms. In contrast, we use a combination of different contour detection algorithms to find several independent potential preparation lines from a height profile of the measured data. The different algorithms (gradient-based, contour-based, and region-based) show their strengths and weaknesses in different clinical situations. A classifier consisting of three stages (range check, decision tree, support vector machine), which is trained by human experts with real-world data, finally decides which is the correct preparation line. In a test with 101 clinical preparations, a success rate of 92.0% has been achieved. Thus the combination of different contour detection algorithms yields a reliable method for the automatic extraction of the preparation line, which enables the setup of a turn-key dental CAD/CAM process chain with a minimal amount of interactive screen work.

  19. Automatic facial feature extraction and expression recognition based on neural network

    CERN Document Server

    Khandait, S P; Khandait, P D

    2012-01-01

    In this paper, an approach to the problem of automatic facial feature extraction from a still frontal posed image, and the classification and recognition of facial expression and hence the emotion and mood of a person, is presented. A feed-forward back-propagation neural network is used as a classifier for classifying the expressions of a supplied face into seven basic categories: surprise, neutral, sad, disgust, fear, happy and angry. For face portion segmentation and localization, morphological image processing operations are used. Permanent facial features like the eyebrows, eyes, mouth and nose are extracted using the SUSAN edge detection operator, facial geometry, and edge projection analysis. Experiments carried out on the JAFFE facial expression database give better performance, with 100% accuracy for the training set and 95.26% accuracy for the test set.

  20. A hybrid semi-automatic method for liver segmentation based on level-set methods using multiple seed points.

    Science.gov (United States)

    Yang, Xiaopeng; Yu, Hee Chul; Choi, Younggeun; Lee, Wonsup; Wang, Baojian; Yang, Jaedo; Hwang, Hongpil; Kim, Ji Hyun; Song, Jisoo; Cho, Baik Hwan; You, Heecheon

    2014-01-01

    The present study developed a hybrid semi-automatic method to extract the liver from abdominal computerized tomography (CT) images. The proposed hybrid method consists of a customized fast-marching level-set method, which detects an optimal initial liver region from multiple seed points selected by the user, and a threshold-based level-set method, which extracts the actual liver region based on the initial one. The performance of the hybrid method was compared with that of the 2D region growing method implemented in OsiriX using abdominal CT datasets of 15 patients. The hybrid method showed significantly higher accuracy in liver extraction (similarity index, SI = 97.6 ± 0.5%; false positive error, FPE = 2.2 ± 0.7%; false negative error, FNE = 2.5 ± 0.8%; average symmetric surface distance, ASD = 1.4 ± 0.5 mm) than the 2D region growing method (SI = 94.0 ± 1.9%; FPE = 5.3 ± 1.1%; FNE = 6.5 ± 3.7%; ASD = 6.7 ± 3.8 mm). The total liver extraction time per CT dataset of the hybrid method (77 ± 10 s) is significantly less than that of the 2D region growing method (575 ± 136 s), and the interaction time per CT dataset between the user and the computer (28 ± 4 s) is significantly shorter than that of the 2D region growing method (484 ± 126 s). The proposed hybrid method is therefore preferable for liver segmentation in preoperative virtual liver surgery planning.
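
    The overlap metrics used in the comparison can be computed from binary masks as below. The paper's exact formulas are not reproduced here, so these Dice-style definitions are assumptions; the surface-distance metric (ASD) is omitted for brevity.

    ```python
    # Overlap metrics between a segmentation and a reference mask.
    import numpy as np

    def overlap_metrics(seg, ref):
        seg, ref = seg.astype(bool), ref.astype(bool)
        inter = np.logical_and(seg, ref).sum()
        si = 2.0 * inter / (seg.sum() + ref.sum())          # similarity index
        fpe = np.logical_and(seg, ~ref).sum() / ref.sum()   # false positive error
        fne = np.logical_and(~seg, ref).sum() / ref.sum()   # false negative error
        return si, fpe, fne
    ```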

  1. On A Semi-Automatic Method for Generating Composition Tables

    CERN Document Server

    Liu, Weiming

    2011-01-01

    Originating from Allen's Interval Algebra, composition-based reasoning has been widely acknowledged as the most popular reasoning technique in qualitative spatial and temporal reasoning. Given a qualitative calculus (i.e., a relation model), the first thing we should do is to establish its composition table (CT). In the past three decades, such work has usually been done manually, which is undesirable and error-prone given that a calculus may contain tens or hundreds of basic relations. Computing the correct CT was identified by Tony Cohn as a challenge for computer scientists in 1995. This paper addresses this problem and introduces a semi-automatic method to compute the CT by randomly generating triples of elements. For several important qualitative calculi, our method can establish the correct CT in a reasonably short time. This is illustrated by applications to the Interval Algebra, the Region Connection Calculus RCC-8, the INDU calculus, and the Oriented Point Relation Algebras. Our method can also be us...

  2. Automatic Extraction of Tongue Coatings from Digital Images: A Traditional Chinese Medicine Diagnostic Tool

    Institute of Scientific and Technical Information of China (English)

    Linda Yunlu BAI; SHI Yundi; WU Jia; ZHANG Yonghong; WONG Weiliang; WU Yu; BAI Jing

    2009-01-01

    In traditional Chinese medicine, the coating on the tongue is considered to be a reflection of various pathologic factors. However, the conventional method of examining the tongue lacks an accepted standard and does not provide a means of sharing information. This paper describes a segmentation method to extract tongue coatings. First, the tongue body is extracted from the original image using the watershed transform. Then, a threshold method is applied to the image to eliminate light from the camera flash. Finally, a threshold method using the Otsu model, in combination with a splitting-merging method, is used in the red, green, and blue (RGB) space to extract the thin coating; the combination of the two methods is applied in the hue, saturation, and value (HSV) space to extract the thick coating. The feasibility of this method is tested by experiments, and the segmentation accuracy is 95.9%.
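
    The Otsu step at the heart of the thin-coating extraction can be sketched per color channel. Treating the coating as the brighter class and intersecting the channel masks are simplifying assumptions of this sketch, and the splitting-merging refinement is omitted.

    ```python
    # Per-channel Otsu thresholds intersected into a thin-coating mask.
    import numpy as np
    from skimage.filters import threshold_otsu

    def thin_coating_mask(rgb_tongue):
        """rgb_tongue: uint8 image of the already-segmented tongue body."""
        mask = np.ones(rgb_tongue.shape[:2], dtype=bool)
        for c in range(3):                     # R, G, B channels
            ch = rgb_tongue[..., c]
            mask &= ch > threshold_otsu(ch)    # coating as the brighter class
        return mask
    ```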

  3. A Novel and Efficient Method for Iris Automatic Location

    Institute of Scientific and Technical Information of China (English)

    XU Guang-zhu; ZHANG Zai-feng; MA Yi-de

    2007-01-01

    An efficient and robust iris location algorithm plays a very important role in a real iris recognition system. A novel and efficient automatic iris location method is presented in this study. It mainly includes two steps: pupil location and iris outer boundary location. For pupil location, the digital eye image is divided into many small rectangular blocks of fixed size, and the block with the smallest average intensity is selected as a reference area. Image binarization is then implemented, taking the average intensity of the reference area as the threshold. Finally, the center coordinates and radius of the pupil are estimated by extending the reference area to the pupil's boundaries in the binary iris image. For the iris outer boundary, two local parts of the eye image are selected and transformed from Cartesian to polar coordinates. In order to detect the fainter outer boundary of the iris quickly, a novel edge detector is used to locate the boundaries of the two parts. The center coordinates and radius of the iris outer boundary are estimated by fusing the location results of the two local parts with the location information of the pupil. The algorithm was tested on the CASIA v1.0 and MMU v1.0 digital eye image databases, and experimental results show that the proposed method has satisfying performance and good robustness.
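
    The pupil-location step described above is simple enough to sketch end-to-end: find the darkest fixed-size block, then binarize with its mean intensity as the threshold. The block size is an assumption, and the subsequent region-extension and outer-boundary steps are omitted.

    ```python
    # Darkest fixed-size block as the pupil reference area, then binarization.
    import numpy as np

    def locate_pupil_seed(eye, block=16):
        """eye: 2D grayscale array; returns (top-left of darkest block, binary)."""
        h, w = eye.shape
        best, best_mean = (0, 0), np.inf
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                m = eye[i:i + block, j:j + block].mean()
                if m < best_mean:              # darkest block lies in the pupil
                    best, best_mean = (i, j), m
        return best, eye < best_mean           # threshold at the reference mean
    ```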

  4. A Method for Determining Sedimentary Micro-Facies Belts Automatically

    Institute of Scientific and Technical Information of China (English)

    Linfu Xue; Qitai Mei; Quan Sun

    2003-01-01

    It is important to understand the distribution of sedimentary facies, especially the distribution of sand bodies, which are the key to oil production and exploration. Secondary oil recovery requires analyzing a great deal of data accumulated over decades of oil field development, and in many cases sedimentary micro-facies maps need to be reconstructed and redrawn frequently, which is time-consuming and laborious. This paper presents an integrated approach for determining the distribution of sedimentary micro-facies, tracing micro-facies boundaries, and drawing the map of sedimentary micro-facies belts automatically by computer. The approach is based on the division and correlation of strata from multiple wells as well as analysis of sedimentary facies. It includes transforming, gridding, interpolation, superposing, searching for boundaries and drawing the map of sedimentary facies belts, and it employs a spatial interpolation method and a "worm" interpolation method to determine the distribution of sedimentary micro-facies, including sand ribbons and/or sand blankets. The computer software developed on this principle provides a tool for quickly visualizing and understanding the distribution of sedimentary micro-facies and reservoirs. Satisfactory results have been achieved by applying the technique to the Putaohua Oil Field in the Songliao Basin, China.

  5. Framework for automatic information extraction from research papers on nanocrystal devices

    Directory of Open Access Journals (Sweden)

    Thaer M. Dieb

    2015-09-01

    Full Text Available To support nanocrystal device development, we have been working on a computational framework to utilize information in research papers on nanocrystal devices. We developed an annotated corpus called "NaDev" (Nanocrystal Device Development) for this purpose. We also proposed an automatic information extraction system called "NaDevEx" (Nanocrystal Device Automatic Information Extraction Framework). NaDevEx aims at extracting information from research papers on nanocrystal devices using the NaDev corpus and machine-learning techniques. However, the characteristics of NaDevEx were not examined in detail. In this paper, we conduct system evaluation experiments for NaDevEx using the NaDev corpus. We discuss three main issues: system performance, compared with human annotators; the effect of paper type (synthesis or characterization) on system performance; and the effects of domain knowledge features (e.g., a chemical named entity recognition system and list of names of physical quantities) on system performance. We found that overall system performance was 89% in precision and 69% in recall. If we consider identification of terms that intersect with correct terms for the same information category as the correct identification, i.e., loose agreement (in many cases, we can find that appropriate head nouns such as temperature or pressure loosely match between two terms), the overall performance is 95% in precision and 74% in recall. The system performance is almost comparable with results of human annotators for information categories with rich domain knowledge information (source material). However, for other information categories, given the relatively large number of terms that exist only in one paper, recall of individual information categories is not high (39–73%); however, precision is better (75–97%). The average performance for synthesis papers is better than that for characterization papers because of the lack of training examples for characterization papers.

  6. Framework for automatic information extraction from research papers on nanocrystal devices.

    Science.gov (United States)

    Dieb, Thaer M; Yoshioka, Masaharu; Hara, Shinjiro; Newton, Marcus C

    2015-01-01

    To support nanocrystal device development, we have been working on a computational framework to utilize information in research papers on nanocrystal devices. We developed an annotated corpus called "NaDev" (Nanocrystal Device Development) for this purpose. We also proposed an automatic information extraction system called "NaDevEx" (Nanocrystal Device Automatic Information Extraction Framework). NaDevEx aims at extracting information from research papers on nanocrystal devices using the NaDev corpus and machine-learning techniques. However, the characteristics of NaDevEx were not examined in detail. In this paper, we conduct system evaluation experiments for NaDevEx using the NaDev corpus. We discuss three main issues: system performance, compared with human annotators; the effect of paper type (synthesis or characterization) on system performance; and the effects of domain knowledge features (e.g., a chemical named entity recognition system and list of names of physical quantities) on system performance. We found that overall system performance was 89% in precision and 69% in recall. If we consider identification of terms that intersect with correct terms for the same information category as the correct identification, i.e., loose agreement (in many cases, we can find that appropriate head nouns such as temperature or pressure loosely match between two terms), the overall performance is 95% in precision and 74% in recall. The system performance is almost comparable with results of human annotators for information categories with rich domain knowledge information (source material). However, for other information categories, given the relatively large number of terms that exist only in one paper, recall of individual information categories is not high (39-73%); however, precision is better (75-97%). The average performance for synthesis papers is better than that for characterization papers because of the lack of training examples for characterization papers.

  7. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    Science.gov (United States)

    Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.

    2016-06-01

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information from the terrain surface; LiDAR is becoming one of the important tools in the geosciences for studying objects and the earth's surface. Classification of LiDAR data to extract ground, vegetation, and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of derived data for geographical information systems (GIS), mapping, navigation, etc. Regardless of what the scan data will be used for, an automatic process is greatly needed to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items: buildings, trees, roads, linear objects and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationships and height variation analysis are adopted to segment the entire point cloud preliminarily into upper and lower contours, uniform surfaces, non-uniform surfaces, linear objects, and others. This primary classification is used, on the one hand, to identify the upper and lower parts of each building in an urban scene, needed to model building façades; and on the other hand, to extract the point cloud of uniform surfaces, which contains roofs, roads and ground, used in the second phase of classification. A second algorithm is developed to segment the uniform surfaces into building roofs, roads and ground; this second phase of classification is also based on topological relationships and height variation analysis. The proposed approach has been tested using two areas: the first is a housing complex and the second is a primary school. The approach led to successful classification of the building, vegetation and road classes.

  8. Automatic segmentation of coronary artery tree based on multiscale Gabor filtering and transition region extraction

    Science.gov (United States)

    Wang, Fang; Wang, Guozhu; Kang, Lie; Wang, Juan

    2011-11-01

    This paper presents a novel segmentation method for extracting the coronary artery tree from angiograms, based on multiscale Gabor filtering and transition region extraction. First, the enhanced image is obtained by multiscale Gabor filtering; then the transition region of the enhanced image is extracted using a local complexity algorithm and the final segmentation threshold is calculated; finally, the image segmentation is performed. To evaluate the performance of the proposed approach, we carried out experiments on various sets of angiographic images and compared its results with those of an improved top-hat segmentation method. The experiments indicate that the proposed method outperforms the latter in extracting small vessels, eliminating more background, and producing a better visualized coronary artery tree with more continuous vessels.
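    A multiscale Gabor filter bank of the kind named here can be sketched in a few lines with OpenCV (a generic illustration; the kernel size, scales and orientation count are assumptions, and the transition-region step is omitted):

```python
import cv2
import numpy as np

def multiscale_gabor(gray, scales=(4.0, 8.0), n_orient=8):
    """Maximum response over a small bank of Gabor filters."""
    acc = np.zeros(gray.shape, dtype=np.float32)
    for sigma in scales:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            kern = cv2.getGaborKernel((31, 31), sigma, theta,
                                      lambd=4 * sigma, gamma=0.5, psi=0)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            acc = np.maximum(acc, resp)   # keep strongest response per pixel
    return cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```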

  9. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications

    Directory of Open Access Journals (Sweden)

    Francesco Nex

    2009-05-01

    Full Text Available In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A2 SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.

  10. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications.

    Science.gov (United States)

    Lingua, Andrea; Marenchino, Davide; Nex, Francesco

    2009-01-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A(2) SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems. PMID:22412336
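    For readers unfamiliar with the operator, a minimal tie-point extraction with SIFT looks as follows (a generic OpenCV sketch, not the authors' auto-adaptive A(2) SIFT; the file names and the 0.75 ratio are assumptions):

```python
import cv2

# SIFT keypoint extraction and matching between two overlapping images.
img1 = cv2.imread("left.tif", cv2.IMREAD_GRAYSCALE)    # hypothetical files
img2 = cv2.imread("right.tif", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test keeps a match only when the best candidate is clearly
# better than the second best, which suppresses ambiguous tie points.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} candidate tie points")
```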

  11. A NOVEL METHOD FOR ARABIC MULTI-WORD TERM EXTRACTION

    Directory of Open Access Journals (Sweden)

    Hadni Meryem

    2014-10-01

    Full Text Available Arabic Multiword Terms (AMWTs) are relevant strings of words in text documents. Once automatically extracted, they can be used to increase the performance of Arabic text mining applications such as categorization, clustering, information retrieval, machine translation, and summarization. The proposed methods for AMWT extraction can mainly be categorized into three approaches: linguistic-based, statistic-based, and hybrid-based. These methods present some drawbacks that limit their use: they can only deal with bi-gram terms, and they do not yield good accuracies. In this paper, to overcome these drawbacks, we propose a new and efficient method for AMWT extraction based on a hybrid approach, composed of two main filtering steps: a linguistic filter and a statistical one. The linguistic filter uses our proposed Part-Of-Speech (POS) tagger and a sequence identifier as patterns in order to extract candidate AMWTs, while the statistical filter incorporates contextual information and a newly proposed association measure based on termhood and unithood estimation, named NTC-Value. To evaluate and illustrate the efficiency of the proposed method for AMWT extraction, a comparative study was conducted on the Kalimat corpus using nine experimental schemes: in the linguistic filter, we used three POS taggers (Taani's rule-based method, an HMM-based statistical method, and our recently proposed hybrid tagger), while in the statistical filter, we used three statistical measures (C-Value, NC-Value, and our proposed NTC-Value). The obtained results demonstrate the efficiency of our proposed method for AMWT extraction: it outperforms the other ones and can deal correctly with tri-gram terms.
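    For context, the baseline C-Value measure that the statistical filter is compared against can be computed as in this sketch of the classic formulation (candidate terms represented as word tuples; this is not the proposed NTC-Value):

```python
import math
from collections import defaultdict

def c_value(term_freqs):
    """C-value termhood scores for multi-word candidate terms.

    term_freqs: dict mapping a word tuple (len >= 2) to its corpus frequency.
    """
    # For every candidate, collect the longer candidates that contain it.
    nested_in = defaultdict(list)
    terms = list(term_freqs)
    for a in terms:
        for b in terms:
            if len(b) > len(a) and any(b[i:i + len(a)] == a
                                       for i in range(len(b) - len(a) + 1)):
                nested_in[a].append(b)
    scores = {}
    for a, f in term_freqs.items():
        longer = nested_in[a]
        if not longer:                       # term never nested in a longer one
            scores[a] = math.log2(len(a)) * f
        else:                                # discount frequency inside longer terms
            scores[a] = math.log2(len(a)) * (
                f - sum(term_freqs[b] for b in longer) / len(longer))
    return scores
```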

  12. Feasibility of Automatic Extraction of Electronic Health Data to Evaluate a Status Epilepticus Clinical Protocol.

    Science.gov (United States)

    Hafeez, Baria; Paolicchi, Juliann; Pon, Steven; Howell, Joy D; Grinspan, Zachary M

    2016-05-01

    Status epilepticus is a common neurologic emergency in children. Pediatric medical centers often develop protocols to standardize care. Widespread adoption of electronic health records by hospitals affords clinicians the opportunity to rapidly and electronically evaluate protocol adherence. We reviewed the clinical data of a small sample of 7 children with status epilepticus, in order to (1) qualitatively determine the feasibility of automated data extraction and (2) demonstrate a timeline-style visualization of each patient's first 24 hours of care. Qualitatively, our observations indicate that most clinical data are well labeled in structured fields within the electronic health record, though some important information, particularly electroencephalography (EEG) data, may require manual abstraction. We conclude that a visualization that clarifies a patient's clinical course can be automatically created using the patient's electronic clinical data, supplemented with some manually abstracted data. Future work could use this timeline to evaluate adherence to status epilepticus clinical protocols. PMID:26518205

  13. Automatic Extraction of Open Space Area from High Resolution Urban Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Hiremath P S & Kodge B G

    2010-06-01

    Full Text Available In the 21st century, aerial and satellite images are information rich, but they are also complex to analyze. For GIS systems, many features require fast and reliable extraction of open space area from high resolution satellite imagery. In this paper we study an efficient and reliable automatic extraction algorithm to find the open space area in high resolution urban satellite imagery. This automatic extraction algorithm applies filtering, segmentation and grouping to the satellite images. The resulting images may be used to calculate the total available open space area and the built-up area. They may also be used to compare the difference between present and past open space area using historical urban satellite images of the same projection.

  14. Feasibility of Automatic Extraction of Electronic Health Data to Evaluate a Status Epilepticus Clinical Protocol.

    Science.gov (United States)

    Hafeez, Baria; Paolicchi, Juliann; Pon, Steven; Howell, Joy D; Grinspan, Zachary M

    2016-05-01

    Status epilepticus is a common neurologic emergency in children. Pediatric medical centers often develop protocols to standardize care. Widespread adoption of electronic health records by hospitals affords clinicians the opportunity to rapidly and electronically evaluate protocol adherence. We reviewed the clinical data of a small sample of 7 children with status epilepticus, in order to (1) qualitatively determine the feasibility of automated data extraction and (2) demonstrate a timeline-style visualization of each patient's first 24 hours of care. Qualitatively, our observations indicate that most clinical data are well labeled in structured fields within the electronic health record, though some important information, particularly electroencephalography (EEG) data, may require manual abstraction. We conclude that a visualization that clarifies a patient's clinical course can be automatically created using the patient's electronic clinical data, supplemented with some manually abstracted data. Future work could use this timeline to evaluate adherence to status epilepticus clinical protocols.

  15. DEM automatic extraction on Rio de Janeiro from WV2 stereo pair images

    International Nuclear Information System (INIS)

    The use of three-dimensional data has become very important for many mapping applications. DEMs are applied for modelling purposes, e.g. 3D city model generation, but principally for imagery orthorectification. In aerial photogrammetry, the suitability of stereo imagery for producing an accurate DEM is well known, but the limits of the process (cost, schedule of data collection, highly technical staff) and new advanced digital image processing algorithms have opened the scenario to remote sensing data. This research investigates the possibility of obtaining accurate DEMs by means of the automatic terrain extraction algorithms implemented in the Leica Photogrammetry Suite (LPS) from stereoscopic remote sensing images collected by DigitalGlobe's WorldView-2 (WV2) satellite. The results are the DEM of Rio de Janeiro (Brazil) and the corresponding digital orthoimages.

  16. Fully Automatic Method for 3D T1-Weighted Brain Magnetic Resonance Images Segmentation

    Directory of Open Access Journals (Sweden)

    Bouchaib Cherradi

    2011-05-01

    Full Text Available Accurate segmentation of brain MR images is of interest for many brain disorders. However, due to several factors such as noise, imaging artefacts, intrinsic tissue variation and partial volume effects, brain extraction and tissue segmentation remain challenging tasks. In this paper, a fully automatic method for segmentation of anatomical 3D brain MR images is proposed. The method consists of several steps. First, noise reduction by median filtering is performed; second, segmentation of brain/non-brain tissue is performed using a Threshold Morphologic Brain Extraction (TMBE) method. Initial centroids are then estimated by gray level histogram analysis; this stage leads to a modified version of the Fuzzy C-means algorithm (MFCM) that is used for MRI tissue segmentation. Finally, 3D visualisation of the three clusters (CSF, GM and WM) is performed. The efficiency of the proposed method is demonstrated by extensive segmentation experiments using simulated and real MR images. The method has been compared with similar methods from the literature through different performance measures. The MFCM for tissue segmentation introduces a gain in speed of convergence of about 70%.
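    The core fuzzy C-means iteration underlying MFCM can be sketched as follows (the standard FCM update on 1-D intensities; the paper's modifications and histogram-based centroid initialisation are not reproduced here):

```python
import numpy as np

def fcm(x, c=3, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means on a 1-D array of voxel intensities x."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                       # fuzzy memberships sum to 1
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)  # membership-weighted centroids
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / d ** (2.0 / (m - 1.0))     # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u                        # e.g. c=3 for CSF, GM, WM
```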

  17. An Automatic Unpacking Method for Computer Virus Effective in the Virus Filter Based on Paul Graham's Bayesian Theorem

    Science.gov (United States)

    Zhang, Dengfeng; Nakaya, Naoshi; Koui, Yuuji; Yoshida, Hitoaki

    Recently, the appearance frequency of computer virus variants has increased. Updates to virus information using the normal pattern matching method are increasingly unable to keep up with the speed at which viruses appear, since it takes time to extract the characteristic patterns for each virus. Therefore, a rapid, automatic virus detection algorithm using static code analysis is necessary. However, recent computer viruses are almost always compressed and obfuscated, and it is difficult to determine the characteristics of the binary code from obfuscated computer viruses. This paper therefore proposes a method that unpacks compressed computer viruses automatically, independent of the compression format. The proposed method unpacks the common compression formats accurately 80% of the time, and unknown compression formats can also be unpacked. The proposed method is effective against unknown viruses when combined with an existing virus detection system such as Paul Graham's Bayesian virus filter.

  18. Comparison of edge detection techniques for the automatic information extraction of Lidar data

    Science.gov (United States)

    Li, H.; di, L.; Huang, X.; Li, D.

    2008-05-01

    In recent years, there has been much interest in information extraction from LiDAR point cloud data. Many automatic edge detection algorithms have been applied to extract information from LiDAR data. Generally they can be divided into three major categories: early-vision gradient operators, optimal detectors, and operators using parametric fitting models. A LiDAR point cloud includes both intensity information and geographic information, so traditional edge detectors used on remotely sensed images can take advantage of the coordinate information provided by the point data. However, derivation of complex terrain features from LiDAR data points depends on the intensity properties and topographic relief of each scene. Take roads for example: in some urban areas, roads have intensities similar to buildings, but the topographic relationship of a road is distinct, so the edge detector for roads in urban areas differs from the detector for buildings. Therefore, in LiDAR extraction, each kind of scene has its own suitable edge detector. This paper compares the application of different edge detectors to various terrain areas in order to determine the proper algorithm for each terrain type. The Canny, EDISON and SUSAN algorithms were applied to data points with the intensity character and topographic relationships of LiDAR data. The test LiDAR data cover different terrain areas, such as an urban area with a mass of buildings, a rural area with vegetation, an area with slope, and an area with a bridge. Results using these edge detectors are compared to determine which algorithm is suitable for a specific terrain area. Key words: Edge detector, Extraction, Lidar, Point data
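    As the most familiar of the three detectors compared, Canny can be run on a LiDAR intensity image rendered as an 8-bit raster in a couple of lines (a generic OpenCV sketch; the file name and the hysteresis thresholds are assumptions that would need tuning per terrain type, as the record argues):

```python
import cv2

# LiDAR intensity rendered as 8-bit grayscale (hypothetical file).
intensity = cv2.imread("lidar_intensity.png", cv2.IMREAD_GRAYSCALE)

# Canny with hysteresis thresholds 50/150; weak edges are kept only when
# connected to strong ones.
edges = cv2.Canny(intensity, threshold1=50, threshold2=150)
cv2.imwrite("edges.png", edges)
```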

  19. Automatic detecting method of LED signal lamps on fascia based on color image

    Science.gov (United States)

    Peng, Xiaoling; Hou, Wenguang; Ding, Mingyue

    2009-10-01

    The instrument display panel is one of the most important parts of an automobile, and automatic detection of LED signal lamps is critical to ensure the reliability of automobile systems. In this paper, an automatic detection method was developed that inspects three aspects of the LED lamps: their shape, their color, and defect spots inside the lamps. Hundreds of fascias were inspected with the automatic detection algorithm. The algorithm is fast enough to satisfy the system's real-time requirements, and the detection results were demonstrated to be stable and accurate.

  20. Automatic Extraction of Spatio-Temporal Information from Arabic Text Documents

    Directory of Open Access Journals (Sweden)

    Abdelkoui Feriel

    2015-10-01

    Full Text Available Unstructured Arabic text documents are an important source of geographical and temporal information. The possibility of automatically tracking spatio-temporal information and capturing changes relating to events from text documents is a new challenge in the fields of geographic information retrieval (GIR), temporal information retrieval (TIR) and natural language processing (NLP). There has been a lot of work on information extraction in other languages that use the Latin alphabet, such as English, French, or Spanish; by contrast, the Arabic language is still not well supported in GIR and TIR, and more research is needed. In this paper, we present an approach that supports automated exploration and extraction of spatio-temporal information from Arabic text documents, in order to capture and model such information before it can be utilized in search and exploration tasks. The system has been successfully tested on 50 documents that include a mixture of types of spatial/temporal information. The results achieved 91.01% recall and 80% precision, which illustrates that our approach is effective and its performance is satisfactory.

  1. AUTOMATIC ROAD EXTRACTION FROM SATELLITE IMAGES USING EXTENDED KALMAN FILTERING AND EFFICIENT PARTICLE FILTERING

    Directory of Open Access Journals (Sweden)

    Jenita Subash

    2011-12-01

    Full Text Available Users of geospatial data in government, military, industry, research, and other sectors need accurate display of roads and other terrain information in areas where there are ongoing operations or locations of interest. Hence, road extraction that is significantly more automated than the employment of costly and scarce human resources has become a challenging technical issue for the geospatial community. An automatic road extraction method based on Extended Kalman Filtering (EKF) and a variable-structured multiple model particle filter (VS-MMPF) applied to satellite images is addressed. EKF traces the median axis of a single road segment, while VS-MMPF traces all road branches initializing at an intersection. In the case of the Local Linearization Particle Filter (LLPF), a large number of particles is used, and therefore high computational expense is usually required to attain a given accuracy and robustness. The basic idea is to reduce the whole sampling space of the multiple-model system to the mode subspace by marginalization over the target subspace, and to choose a better importance function for mode-state sampling. The core of the system is based on profile matching. During estimation, new reference profiles are generated and stored in the road template memory for future correlation analysis, thus covering the space of road profiles.
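    The EKF road-axis tracing step can be illustrated with a generic predict/update cycle. This is a minimal sketch under our own assumptions (state = [x, y, heading], constant-step motion, position-only measurements from profile matching), not the authors' formulation:

```python
import numpy as np

class RoadEKF:
    """Minimal EKF sketch for tracing a road axis."""
    def __init__(self, x0, P0, q=0.01, r=1.0):
        self.x = np.asarray(x0, float)        # [x, y, heading]
        self.P = np.asarray(P0, float)
        self.Q = q * np.eye(3)                # process noise (assumed)
        self.R = r * np.eye(2)                # measurement noise (assumed)

    def predict(self, step=1.0):
        x, y, th = self.x
        self.x = np.array([x + step * np.cos(th), y + step * np.sin(th), th])
        F = np.array([[1, 0, -step * np.sin(th)],   # motion-model Jacobian
                      [0, 1,  step * np.cos(th)],
                      [0, 0, 1]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        H = np.array([[1.0, 0, 0], [0, 1.0, 0]])    # observe position only
        innov = np.asarray(z, float) - H @ self.x
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ innov
        self.P = (np.eye(3) - K @ H) @ self.P
```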

  2. AUTOMATIC EXTRACTION OF ROAD SURFACE AND CURBSTONE EDGES FROM MOBILE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    A. Miraliakbari

    2015-05-01

    Full Text Available We present a procedure for automatic extraction of the road surface from geo-referenced mobile laser scanning data. The basic assumption of the procedure is that the road surface is smooth and limited by curbstones. Two variants of jump detection are investigated for detecting curbstone edges, one based on height differences, the other based on histograms of the height data. Region growing algorithms are proposed which use the irregular laser point cloud directly. Two- and four-neighbourhood growing strategies utilize the two height criteria for examining the neighbourhood. Both height criteria rely on an assumption about the minimum height of a low curbstone. Road boundaries with lower or no jumps will not stop the region growing process; in contrast, objects on the road can terminate it. Therefore further processing is necessary, such as bridging gaps between detected road boundary points and removing wrongly detected curbstone edges. Road boundaries are finally approximated by splines. Experiments are carried out on an approximately 2 km network of small streets located in the neighbourhood of the University of Applied Sciences in Stuttgart. For accuracy assessment of the extracted road surfaces, ground truth measurements are digitized manually from the laser scanner data. For completeness and correctness of the region growing results, values between 92% and 95% are achieved.
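    The height-difference variant of jump detection admits a compact sketch (our own illustration, not the published procedure; the grid cell size and the ~7 cm minimum curb height are assumed values standing in for the paper's minimum-curbstone assumption):

```python
import numpy as np

def curb_candidates(points, cell=0.25, min_jump=0.07):
    """Flag grid cells whose internal height range exceeds a curb height.

    points: (N, 3) array of x, y, z laser points.
    min_jump: assumed minimum curbstone height in metres.
    """
    ij = np.floor(points[:, :2] / cell).astype(int)
    keys, inv = np.unique(ij, axis=0, return_inverse=True)
    zmin = np.full(len(keys), np.inf)
    zmax = np.full(len(keys), -np.inf)
    np.minimum.at(zmin, inv, points[:, 2])   # per-cell min height
    np.maximum.at(zmax, inv, points[:, 2])   # per-cell max height
    return keys[(zmax - zmin) >= min_jump]   # cells straddling a jump
```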

  3. Challenges for automatically extracting molecular interactions from full-text articles

    Directory of Open Access Journals (Sweden)

    Curran James R

    2009-09-01

    Full Text Available Abstract Background The increasing availability of full-text biomedical articles will allow more biomedical knowledge to be extracted automatically with greater reliability. However, most Information Retrieval (IR) and Extraction (IE) tools currently process only abstracts. The lack of corpora has limited the development of tools that are capable of exploiting the knowledge in full-text articles. As a result, there has been little investigation into the advantages of full-text document structure, and the challenges developers will face in processing full-text articles. Results We manually annotated passages from full-text articles that describe interactions summarised in a Molecular Interaction Map (MIM). Our corpus tracks the process of identifying facts to form the MIM summaries and captures any factual dependencies that must be resolved to extract the fact completely. For example, a fact in the results section may require a synonym defined in the introduction. The passages are also annotated with negated and coreference expressions that must be resolved. We describe the guidelines for identifying relevant passages and possible dependencies. The corpus includes 2162 sentences from 78 full-text articles. Our corpus analysis demonstrates the necessity of full-text processing; identifies the article sections where interactions are most commonly stated; and quantifies the proportion of interaction statements requiring coherent dependencies. Further, it allows us to report on the relative importance of identifying synonyms and resolving negated expressions. We also experiment with an oracle sentence retrieval system using the corpus as a gold-standard evaluation set. Conclusion We introduce the MIM corpus, a unique resource that maps interaction facts in a MIM to annotated passages within full-text articles. It is an invaluable case study providing guidance to developers of biomedical IR and IE systems, and can be used as a gold-standard evaluation set.

  4. Automatic extraction of mandibular bone geometry for anatomy-based synthetization of radiographs.

    Science.gov (United States)

    Antila, Kari; Lilja, Mikko; Kalke, Martti; Lötjönen, Jyrki

    2008-01-01

    We present an automatic method for segmenting Cone-Beam Computerized Tomography (CBCT) volumes and synthetizing orthopantomographic, anatomically aligned views of the mandibular bone. The model-based segmentation method was developed with the characteristics of dental CBCT in mind: severe metal artefacts, relatively high noise, and high variability of the mandibular bone shape. First, we applied the segmentation method to delineate the bone. Second, we aligned a model resembling the geometry of orthopantomographic imaging to the segmented surface. Third, we estimated the tooth orientations based on the local shape of the segmented surface. These results were used in determining the geometry of the synthetized radiograph. Segmentation was performed with excellent results: on 14 samples we reached a mean distance of 0.57±0.16 mm from the hand-drawn reference. The estimation of tooth orientations was accurate, with an error of 0.65±8.0 degrees. An example of these results used in synthetizing panoramic radiographs is presented.

  5. Complementary methods for extracting road centerlines from IKONOS imagery

    Science.gov (United States)

    Haverkamp, Donna S.; Poulsen, Rick

    2003-03-01

    We present both semi-automated and automated methods for road extraction using IKONOS imagery. The automated method extracts straight-line, gridded road networks by inferring a local grid structure from initial information and then filling in missing pieces using hypothesization and verification. This can be followed by the semi-automated road tracker tool to approximate curvilinear roads and to fill in some of the remaining missing road structure. After a panchromatic texture analysis, our automated method incorporates an object-level processing phase which enables the algorithm to avoid problems arising from interference such as crosswalks and vehicles. It is limited, however, in that the logic is designed for reasoning concerning intersecting grid patterns of straight road segments. Many suburban areas are characterized by curving streets which may not be well-approximated using this automatic method. In these areas, missing content can be filled in using a semi-automated tool which tracks between user-supplied points. The semi-automated algorithm is based on measures derived from both the panchromatic and multispectral bands of IKONOS. We will discuss both of these algorithms in detail and how they fit into our overall solution strategy for road extraction. A presentation of current experimentation and test results will be followed by a discussion of advantages, shortcomings, and directions for future research and improvements.

  6. Automatic Signature Verification: Bridging the Gap between Existing Pattern Recognition Methods and Forensic Science

    OpenAIRE

    Malik, Muhammad Imran

    2015-01-01

    The main goal of this thesis is twofold. First, the thesis aims at bridging the gap between existing Pattern Recognition (PR) methods of automatic signature verification and the requirements for their application in forensic science. This gap, attributable to various factors ranging from system definition to evaluation, prevents automatic methods from being used by Forensic Handwriting Examiners (FHEs). Second, the thesis presents novel signature verification methods developed particularly cons...

  7. Automatic extraction of myocardial mass and volumes using parametric images from dynamic non-gated PET

    DEFF Research Database (Denmark)

    Harms, Hans; Hansson, Nils Henrik Stubkjær; Tolbod, Lars Poulsen;

    2016-01-01

    ...non-gated dynamic cardiac PET. METHODS: Thirty-five patients with aortic-valve stenosis and 10 healthy controls (HC) underwent a 27-min 11C-acetate PET/CT scan and cardiac magnetic resonance imaging (CMR). HC were scanned twice to assess repeatability. Parametric images of uptake rate K1 and the blood pool were ... LV and WT only, and an overestimation for LVEF at lower values. Intra- and inter-observer correlations were >0.95 for all PET measurements. PET repeatability accuracy in HC was comparable to CMR. CONCLUSION: LV mass and volumes are accurately and automatically generated from dynamic 11C-acetate PET without ECG-gating. This method can be incorporated in a standard routine without any additional workload and can, in theory, be extended to other PET tracers.

  8. Automatic dynamic mask extraction for PIV images containing an unsteady interface, bubbles, and a moving structure

    Science.gov (United States)

    Dussol, David; Druault, Philippe; Mallat, Bachar; Delacroix, Sylvain; Germain, Grégory

    2016-07-01

    When performing Particle Image Velocimetry (PIV) measurements in complex fluid flows with moving interfaces and a two-phase flow, it is necessary to develop a mask to remove non-physical measurements. This is the case when studying, for example, the complex bubble sweep-down phenomenon observed on oceanographic research vessels. Indeed, in such a configuration, the presence of an unsteady free surface, a solid-liquid interface and bubbles in the PIV frame generates numerous laser reflections and therefore spurious velocity vectors. In this note, an image masking process is developed to successively identify the boundaries of the ship and the free surface interface. As the presence of the solid hull surface induces laser reflections, the hull edge contours are simply detected in the first PIV frame and dynamically estimated for consecutive ones. The unsteady surface is determined by a specific process implemented as follows: i) edge detection of the gradient magnitude in the PIV frame; ii) extraction of the particles by filtering high-intensity large areas related to the bubbles and/or hull reflections; iii) extraction of the rough region containing these particles and their reflections; iv) removal of these reflections. The unsteady surface is finally obtained with a fifth-order polynomial interpolation. The resulting free surface is successfully validated via Fourier analysis and by visualizing selected PIV images containing numerous spurious high-intensity areas. This paper demonstrates how this data analysis process leads to a PIV image database without reflections and to automatic detection of both the free surface and the rigid body. An application of this new mask is finally detailed, allowing a preliminary analysis of the hydrodynamic flow.
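    The process ends with a fifth-order polynomial fit of the detected free-surface points, which in Python reduces to numpy's least-squares polynomial fit (the surface-point coordinates below are made-up placeholders):

```python
import numpy as np

# Hypothetical pixel coordinates of free-surface candidate points.
xs = np.array([10, 80, 150, 220, 290, 360, 430, 500], float)
ys = np.array([42, 40, 45, 51, 49, 44, 46, 50], float)

coeffs = np.polyfit(xs, ys, deg=5)       # fifth-order least-squares fit
surface = np.poly1d(coeffs)
mask_rows = surface(np.arange(512))      # surface height per image column
```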

  9. A Method of Road Extraction from High-resolution Remote Sensing Images Based on Shape Features

    Directory of Open Access Journals (Sweden)

    LEI Xiaoqi

    2016-02-01

    Full Text Available Road extraction from high-resolution remote sensing images is an important and difficult task. Since remote sensing images contain complicated information, methods that extract roads using spectral, texture and linear features have certain limitations. Also, many methods need human intervention to obtain road seeds (semi-automatic extraction), which creates a strong dependence on the operator and low efficiency. A road-extraction method using image segmentation based on the principle of local gray consistency, integrated with shape features, is proposed in this paper. Firstly, the image is segmented, and linear and curved roads are obtained by using several object shape features, so methods that extract only linear roads are rectified. Secondly, road extraction is carried out based on region growing: road seeds are selected automatically and the road network is extracted. Finally, the extracted roads are regularised by combining edge information. In the experiments, images including roads with relatively uniform gray levels as well as roads with poorly illuminated surfaces were chosen, and the results show that the method of this study is promising.

  10. Semantic Gap in CBIR: Automatic Objects Spatial Relationships Semantic Extraction and Representation

    Directory of Open Access Journals (Sweden)

    Hui Hui Wang, Dzulkifli Mohamad & N.A. Ismail

    2010-08-01

    Full Text Available The explosive growth of image data leads to the need for research and development in image retrieval. Image retrieval research is moving from keywords, to low-level features, to semantic features. The drive towards semantic features is due to the problem that keywords can be very subjective and time consuming, while low-level features cannot always describe the high-level concepts in users' minds. This paper proposes a novel technique for extracting and representing the spatial relationship semantics among objects that exist in images. All objects are identified based on low-level feature extraction integrated with the proposed line detection technique. Objects are represented using a Minimum Bound Region (MBR) with a reference coordinate. The reference coordinate is used to compute the spatial relation among objects. Eight spatial relationship concepts are determined: "Front", "Back", "Right", "Left", "Right-Front", "Left-Front", "Right-Back" and "Left-Back". A user query in text form is automatically translated into this semantic meaning and representation. In addition, image similarity based on the spatial relationship semantics of objects is proposed.
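    Deriving one of the eight directional concepts from two MBR reference coordinates can be sketched as an angle-binning rule (our own illustration; the axis convention and the 45-degree sectors are assumptions, not the paper's definition):

```python
import math

def spatial_relation(ref_a, ref_b):
    """Map the direction from object A's reference point to object B's
    into one of eight labels (axis convention assumed: +x right, +y front)."""
    dx, dy = ref_b[0] - ref_a[0], ref_b[1] - ref_a[1]
    angle = math.degrees(math.atan2(dy, dx)) % 360
    labels = ["Right", "Right-Front", "Front", "Left-Front",
              "Left", "Left-Back", "Back", "Right-Back"]
    return labels[int((angle + 22.5) // 45) % 8]   # 45-degree sectors

print(spatial_relation((0, 0), (5, 5)))   # -> "Right-Front"
```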

  11. Automatic Registration Method for Fusion of ZY-1-02C Satellite Images

    Directory of Open Access Journals (Sweden)

    Qi Chen

    2013-12-01

    Full Text Available Automatic image registration (AIR has been widely studied in the fields of medical imaging, computer vision, and remote sensing. In various cases, such as image fusion, high registration accuracy should be achieved to meet application requirements. For satellite images, the large image size and unstable positioning accuracy resulting from the limited manufacturing technology of charge-coupled device, focal plane distortion, and unrecorded spacecraft jitter lead to difficulty in obtaining agreeable corresponding points for registration using only area-based matching or feature-based matching. In this situation, a coarse-to-fine matching strategy integrating two types of algorithms is proven feasible and effective. In this paper, an AIR method for application to the fusion of ZY-1-02C satellite imagery is proposed. First, the images are geometrically corrected. Coarse matching, based on scale invariant feature transform, is performed for the subsampled corrected images, and a rough global estimation is made with the matching results. Harris feature points are then extracted, and the coordinates of the corresponding points are calculated according to the global estimation results. Precise matching is conducted, based on normalized cross correlation and least squares matching. As complex image distortion cannot be precisely estimated, a local estimation using the structure of triangulated irregular network is applied to eliminate the false matches. Finally, image resampling is conducted, based on local affine transformation, to achieve high-precision registration. Experiments with ZY-1-02C datasets demonstrate that the accuracy of the proposed method meets the requirements of fusion application, and its efficiency is also suitable for the commercial operation of the automatic satellite data process system.

  12. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery.

    Science.gov (United States)

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-01-01

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of large area threshold prohibits detection of small buildings and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly area and high vegetation. However, the empirical tuning of large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post processing stages including variance, point density and shadow elimination are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using the object and pixel based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roof. When compared with current state-of-the-art building

  13. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    Science.gov (United States)

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-01-01

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of large area threshold prohibits detection of small buildings and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly area and high vegetation. However, the empirical tuning of large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post processing stages including variance, point density and shadow elimination are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using the object and pixel based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roof. When compared with current state-of-the-art building

  14. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    Directory of Open Access Journals (Sweden)

    Fasahat Ullah Siddiqui

    2016-07-01

    Full Text Available Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of large area threshold prohibits detection of small buildings and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly area and high vegetation. However, the empirical tuning of large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post processing stages including variance, point density and shadow elimination are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using the object and pixel based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roof. When compared with current state

  15. Automatic fault feature extraction of mechanical anomaly on induction motor bearing using ensemble super-wavelet transform

    Science.gov (United States)

    He, Wangpeng; Zi, Yanyang; Chen, Binqiang; Wu, Feng; He, Zhengjia

    2015-03-01

    Mechanical anomaly is a major failure type of induction motors, and it is of great value to detect the resulting fault features automatically. In this paper, an ensemble super-wavelet transform (ESW) is proposed for investigating vibration features of motor bearing faults. The ESW is built on the combination of the tunable Q-factor wavelet transform (TQWT) and the Hilbert transform, so that fault feature adaptability is enabled. Within ESW, a parametric optimization is performed on the measured signal to obtain a quality TQWT basis that best demonstrates the hidden fault feature. TQWT is introduced because it provides a vast wavelet dictionary with time-frequency localization ability. The parametric optimization is guided by the maximization of the fault feature ratio, a new quantitative measure of periodic fault signatures derived from digital Hilbert demodulation analysis with an insightful quantitative interpretation. The output of ESW on the measured signal is a selected wavelet scale with indicated fault features. It is verified via numerical simulations that ESW can match the oscillatory behavior of signals without its parameters being artificially specified. The proposed method is applied to two engineering cases, with signals collected from a wind turbine and a steel temper mill, to verify its effectiveness. The results demonstrate that the proposed method is more effective in extracting weak fault features of induction motor bearings than the Fourier transform, the direct Hilbert envelope spectrum, different wavelet transforms and spectral kurtosis.
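    The Hilbert demodulation step that the fault feature ratio builds on can be sketched with SciPy (a generic envelope-spectrum computation, not the full ESW optimization):

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs):
    """Amplitude spectrum of the Hilbert envelope of a vibration signal;
    bearing fault frequencies show up as peaks in this spectrum."""
    env = np.abs(hilbert(x))                  # analytic-signal envelope
    env -= env.mean()                         # drop the DC component
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs, spec
```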

  16. Semi-automatic building extraction in informal settlements from high-resolution satellite imagery

    Science.gov (United States)

    Mayunga, Selassie David

    The extraction of man-made features from digital remotely sensed images is considered an important step underpinning management of human settlements in any country. Man-made features, and buildings in particular, are required for a variety of applications such as urban planning, creation of geographical information system (GIS) databases and urban city models. Traditional man-made feature extraction methods are very expensive in terms of equipment, are labour intensive, need well-trained personnel and cannot cope with changing environments, particularly in dense urban settlement areas. This research presents an approach for extracting buildings in dense informal settlement areas using high-resolution satellite imagery. The proposed system uses a novel strategy of extracting a building by measuring a single point at the approximate centre of the building; the fine measurement of the building outline is then effected using a modified snake model. The original snake model on which this framework is based incorporates an external constraint energy term tailored to preserving the convergence properties of the snake model; its use on unstructured objects would negatively affect their actual shapes. The external constraint energy term was removed from the original snake model formulation, thereby giving the model the ability to cope with the high variability of building shapes in informal settlement areas. The proposed building extraction system was tested on two areas with different situations. The first area was Tungi in Dar Es Salaam, Tanzania, where three sites were tested; this area is characterized by informal settlements established illegally within the city boundaries. The second area was Oromocto in New Brunswick, Canada, where two sites were tested; the Oromocto area is mostly flat and the buildings are constructed using similar materials. Qualitative and quantitative measures were employed to evaluate the accuracy of the results as well as the performance

  17. Automatic Shape-Based Target Extraction for Close-Range Photogrammetry

    Science.gov (United States)

    Guo, X.; Chen, Y.; Wang, C.; Cheng, M.; Wen, C.; Yu, J.

    2016-06-01

    In order to perform precise identification and location of artificial coded targets in natural scenes, a novel design of circle-based coded target and a corresponding coarse-to-fine extraction algorithm are presented. The designed target separates the target box and coding box completely and has the advantage of rotation invariance. Based on the original target, templates are prepared by three geometric transformations and are used as the input of shape-based template matching. Finally, region growing and parity check methods are used to extract the coded targets as final results. No human involvement is required except for the preparation of templates and the adjustment of thresholds at the beginning, which is conducive to the automation of close-range photogrammetry. The experimental results show that the proposed recognition method for the designed coded target is robust and accurate.
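    The coarse stage of a template-matching search like this can be approximated with normalised cross-correlation (a generic OpenCV sketch, not the authors' shape-based matcher; the file names and the 0.8 score threshold are assumptions):

```python
import cv2
import numpy as np

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)           # hypothetical
template = cv2.imread("target_template.png", cv2.IMREAD_GRAYSCALE)

# Normalised cross-correlation is robust to global brightness changes.
response = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(response >= 0.8)          # coarse candidate locations
for y, x in zip(ys, xs):
    print(f"candidate target at ({x}, {y}), score {response[y, x]:.2f}")
```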

  18. Extraction: a system for automatic eddy current diagnosis of steam generator tubes in nuclear power plants

    International Nuclear Information System (INIS)

    Improving the speed and quality of eddy current non-destructive testing of steam generator tubes requires automating all processes that contribute to diagnosis. This paper describes how signal processing, pattern recognition and artificial intelligence are used to build a software package that is able to automatically provide an efficient diagnosis. (authors). 2 figs., 5 refs

  19. Automatic Sleep Staging using Multi-dimensional Feature Extraction and Multi-kernel Fuzzy Support Vector Machine

    OpenAIRE

    Yanjun Zhang; Xiangmin Zhang; Wenhui Liu; Yuxi Luo; Enjia Yu; Keju Zou; Xiaoliang Liu

    2014-01-01

    This paper employed clinical polysomnographic (PSG) data, mainly including all-night electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) signals of subjects, and adopted the American Academy of Sleep Medicine (AASM) clinical staging manual as the standard to realize automatic sleep staging. The authors extracted eighteen different features of EEG, EOG and EMG in the time and frequency domains to construct feature vectors according to the existing literature as well as cl...

  20. A new method for extracting domain terminology

    Institute of Scientific and Technical Information of China (English)

    PEI Bing-zhen; CHEN Xiao-rong; HU Yi; LU Ru-zhan

    2009-01-01

    This article proposes a new, general, highly efficient algorithm for extracting domain terminologies. This domain-independent algorithm, with multiple layers of filters, is a hybrid of statistics-oriented and rule-oriented methods. Utilizing the features of domain terminologies and the characteristics unique to Chinese, the algorithm extracts domain terminologies by first generating multi-word unit (MWU) candidates and then filtering the candidates through multiple strategies. Our test results show that this algorithm is feasible and effective.

  1. Automatic segmentation of the bone and extraction of the bone cartilage interface from magnetic resonance images of the knee

    Science.gov (United States)

    Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K.; Ourselin, Sébastien

    2007-03-01

    The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat suppressed spoiled gradient recall images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis.
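    For reference, the Dice similarity coefficient reported above is computed from two binary masks as follows (the standard definition, shown as a minimal sketch):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())   # 1.0 = perfect overlap
```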

  2. Automatic extraction of lunar impact craters from Chang'E images based on Hough transform and RANSAC

    Science.gov (United States)

    Luo, Zhongfei; Kang, Zhizhong

    2016-03-01

    This article proposes an algorithm combining the Hough transform and the RANSAC algorithm for the automatic extraction of lunar craters. (1) The images were filtered to suppress noise; (2) image edges were extracted, and false edge points were eliminated by constraining the gradient direction and the area of each connected component; (3) the edge images were segmented through the Hough transform, gathering the edge points of each crater together; (4) the segmented edges were fitted using the RANSAC algorithm, yielding high-precision crater parameters. The high precision of the algorithm was verified by experiments on images acquired by the Chang'E-1 satellite.
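
    Step (4) is a standard robust-fitting pattern; the following sketch fits a circle to 2D edge points with RANSAC in plain numpy. It is illustrative only: the iteration count and inlier tolerance are invented, and the authors' actual implementation is not reproduced here.

        import numpy as np

        def circle_from_3pts(p):
            # Center c satisfies 2*c.(p2-p1) = |p2|^2-|p1|^2 (and likewise for p3).
            (x1, y1), (x2, y2), (x3, y3) = p
            A = 2.0 * np.array([[x2 - x1, y2 - y1], [x3 - x1, y3 - y1]], float)
            b = np.array([x2**2 - x1**2 + y2**2 - y1**2,
                          x3**2 - x1**2 + y3**2 - y1**2])
            c = np.linalg.solve(A, b)
            r = np.hypot(*(np.array([x1, y1]) - c))
            return c, r

        def ransac_circle(pts, iters=500, tol=2.0, rng=np.random.default_rng(0)):
            """Return ((center, radius), inlier_mask) for the best circle found."""
            best_inliers, best = None, (None, None)
            for _ in range(iters):
                sample = pts[rng.choice(len(pts), 3, replace=False)]
                try:
                    c, r = circle_from_3pts(sample)
                except np.linalg.LinAlgError:
                    continue  # collinear sample, no circle
                resid = np.abs(np.hypot(*(pts - c).T) - r)
                inliers = resid < tol
                if best_inliers is None or inliers.sum() > best_inliers.sum():
                    best_inliers, best = inliers, (c, r)
            return best, best_inliers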

  3. Purging Musical Instrument Sample Databases Using Automatic Musical Instrument Recognition Methods

    OpenAIRE

    Livshin, Arie; Rodet, Xavier

    2009-01-01

    Compilation of musical instrument sample databases requires careful elimination of badly recorded samples and validation of sample classification into correct categories. This paper introduces algorithms for automatic removal of bad instrument samples using Automatic Musical Instrument Recognition and Outlier Detection techniques. Best evaluation results on a methodically contaminated sound database are achieved using the i...

  4. Explodet Project: Methods of Automatic Data Processing and Analysis for the Detection of Hidden Explosives

    Science.gov (United States)

    Lecca, Paola

    2003-12-01

    The research of the INFN Gruppo Collegato di Trento within the EXPLODET project for humanitarian demining is devoted to the development of a software procedure for automating the data analysis and the decision making about the presence of hidden explosives. The main parts of the software performing the automatic data elaboration are innovative algorithms for estimating the likely background, a system based on neural networks for energy calibration, and simple statistical methods for a qualitative consistency check of the signals.

  5. A Self-adaptive Threshold Method for Automatic Sleep Stage Classification Using EOG and EMG

    Directory of Open Access Journals (Sweden)

    Li Jie

    2015-01-01

    Full Text Available Sleep is generally divided into three stages: Wake, REM and NREM. The standard sleep monitoring technology is polysomnography (PSG), but the inconvenience of PSG monitoring limits its usage in some settings. In this study, we developed a new method to classify sleep stages automatically using the electrooculogram (EOG) and electromyogram (EMG). We extracted right and left EOG features and an EMG feature in the time domain, and classified them into strong, weak and none types through a self-adaptive threshold. Combining the time-domain features of the EOG and EMG signals, we classified sleep into Wake, REM and NREM stages. The time-domain features utilized in the method were the integrated value, variance and energy. An experiment on 30 datasets showed a satisfactory result, with an accuracy of 82.93% for the Wake/NREM/REM classification; the average accuracy was 83.29% for the Wake stage, 82.11% for NREM and 76.73% for REM.
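
    The three time-domain features named above are simple to compute; the sketch below shows them, together with one plausible way to derive strong/weak/none labels from self-adaptive thresholds. The percentile-based thresholding is our assumption for illustration, not the paper's rule.

        import numpy as np

        def time_features(x: np.ndarray):
            integrated = np.sum(np.abs(x))   # integrated absolute value
            variance   = np.var(x)           # variance
            energy     = np.sum(x ** 2)      # signal energy
            return integrated, variance, energy

        def classify_level(value: float, history: np.ndarray) -> str:
            """Label a feature 'strong'/'weak'/'none' against thresholds
            adapted from the recording itself (assumed scheme)."""
            hi, lo = np.percentile(history, 75), np.percentile(history, 25)
            return "strong" if value > hi else ("weak" if value > lo else "none")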

  6. Identifying Structures in Social Conversations in NSCLC Patients through the Semi-Automatic Extraction of Topical Taxonomies

    Directory of Open Access Journals (Sweden)

    Giancarlo Crocetti

    2016-01-01

    Full Text Available The exploration of social conversations for addressing patients' needs is an important analytical task to which many scholarly publications are contributing to fill the knowledge gap in this area. The main difficulty remains the inability to turn such contributions into pragmatic processes that the pharmaceutical industry can leverage in order to generate insight from social media data, which can be considered one of the most challenging sources of information available today due to its sheer volume and noise. This study is based on the work by Scott Spangler and Jeffrey Kreulen and applies it to identify structure in social media through the extraction of a topical taxonomy able to capture the latent knowledge in social conversations on health-related sites. A mechanism for automatically identifying and generating a taxonomy from social conversations is developed and pressure-tested using public data from media sites focused on the needs of cancer patients and their families. Moreover, a novel method for generating category labels and determining an optimal number of categories is presented, which extends Spangler and Kreulen's research in a meaningful way. We assume the reader is familiar with taxonomies, what they are and how they are used.

  7. Automatic Morphological Sieving: Comparison between Different Methods, Application to DNA Ploidy Measurements

    Directory of Open Access Journals (Sweden)

    Christophe Boudry

    1999-01-01

    Full Text Available The aim of the present study is to propose automatic alternatives to the time-consuming interactive sorting of elements for DNA ploidy measurements. One archival brain tumour and two archival breast carcinomas were studied, corresponding to 7120 elements (3764 nuclei, 3356 debris and aggregates). Three automatic classification methods were tested to eliminate debris and aggregates from DNA ploidy measurements: mathematical morphology (MM), multiparametric analysis (MA) and neural network (NN). Performance was evaluated by reference to interactive sorting. The percentages of debris and aggregates automatically removed reached 63, 75 and 85% for the MM, MA and NN methods, respectively, with false positive rates of 6, 21 and 25%. Information about DNA ploidy abnormalities was globally preserved after automatic elimination of debris and aggregates by the MM and MA methods, as opposed to the NN method, showing that automatic classification methods can offer alternatives to the tedious interactive elimination of debris and aggregates for DNA ploidy measurements of archival tumours.

  8. Automatic extraction of protein point mutations using a graph bigram association.

    Directory of Open Access Journals (Sweden)

    Lawrence C Lee

    2007-02-01

    Full Text Available Protein point mutations are an essential component of the evolutionary and experimental analysis of protein structure and function. While many manually curated databases attempt to index point mutations, most experimentally generated point mutations and the biological impacts of the changes are described in the peer-reviewed published literature. We describe an application, Mutation GraB (Graph Bigram), that identifies, extracts, and verifies point mutations from biomedical literature. The principal problem of point mutation extraction is to link the point mutation with its associated protein and organism of origin. Our algorithm uses a graph-based bigram traversal to identify these relevant associations and exploits the Swiss-Prot protein database to verify this information. The graph bigram method is different from other models for point mutation extraction in that it incorporates frequency and positional data of all terms in an article to drive the point mutation-protein association. Our method was tested on 589 articles describing point mutations from the G protein-coupled receptor (GPCR), tyrosine kinase, and ion channel protein families. We evaluated our graph bigram metric against a word-proximity metric for term association on datasets of full-text literature in these three different protein families. Our testing shows that the graph bigram metric achieves a higher F-measure for the GPCRs (0.79 versus 0.76), protein tyrosine kinases (0.72 versus 0.69), and ion channel transporters (0.76 versus 0.74). Importantly, in situations where more than one protein can be assigned to a point mutation and disambiguation is required, the graph bigram metric achieves a precision of 0.84 compared with the word distance metric precision of 0.73. We believe the graph bigram search metric to be a significant improvement over previous search metrics for point mutation extraction and to be applicable to text-mining applications requiring the association of words.
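
    The core intuition, that terms linked by frequent bigrams are more strongly associated than terms that are merely close, can be illustrated with a toy word graph. The scoring rule below (inverse bigram frequency as edge distance) is invented for illustration and is not the Mutation GraB algorithm.

        from collections import Counter
        import networkx as nx

        def build_bigram_graph(tokens):
            """Nodes are terms; edge 'dist' shrinks as adjacency gets more frequent."""
            counts = Counter(zip(tokens, tokens[1:]))
            g = nx.Graph()
            for (a, b), freq in counts.items():
                g.add_edge(a, b, dist=1.0 / freq)
            return g

        def association(g, term_a, term_b):
            """Smaller distance = stronger association; None if unconnected."""
            try:
                return nx.shortest_path_length(g, term_a, term_b, weight="dist")
            except nx.NetworkXNoPath:
                return None

        tokens = ("the V600E mutation of the BRAF protein increases kinase "
                  "activity of BRAF").split()
        g = build_bigram_graph(tokens)
        print(association(g, "V600E", "BRAF"))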

  9. Expert Knowledge-Based Automatic Sleep Stage Determination by Multi-Valued Decision Making Method

    Science.gov (United States)

    Wang, Bei; Sugi, Takenao; Kawana, Fusae; Wang, Xingyu; Nakamura, Masatoshi

    In this study, an expert-knowledge-based automatic sleep stage determination system working on a multi-valued decision making method is developed. Visual inspection by a qualified clinician is adopted to obtain the expert knowledge database, which consists of probability density functions of parameters for the various sleep stages. Sleep stages are then determined automatically according to the conditional probability. In total, four subjects participated. The automatic sleep stage determination results showed close agreement with visual inspection for the stages of awake, REM (rapid eye movement), light sleep and deep sleep. The constructed expert knowledge database reflects the distributions of characteristic parameters and can adapt to the variable sleep data found in hospitals. The developed automatic determination technique based on expert knowledge of visual inspection can be an assistant tool enabling further inspection of sleep disorder cases in clinical practice.
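
    A minimal sketch of this kind of decision rule, assuming the expert database stores one density per stage and picking the stage with the highest (optionally prior-weighted) likelihood; the KDE choice and all names here are ours, not the authors':

        import numpy as np
        from scipy.stats import gaussian_kde

        def build_expert_db(staged_epochs):
            """staged_epochs: stage -> list of feature vectors from
            clinician-staged training epochs; returns stage -> density."""
            return {stage: gaussian_kde(np.asarray(feats).T)
                    for stage, feats in staged_epochs.items()}

        def determine_stage(expert_db, feature_vec, priors=None):
            stages = list(expert_db)
            like = np.array([expert_db[s](feature_vec)[0] for s in stages])
            if priors is not None:                 # optional stage priors
                like *= np.array([priors[s] for s in stages])
            return stages[int(np.argmax(like))]    # most probable stage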

  10. Automatic Seamless Stitching Method for CCD Images of Chang'E-I Lunar Mission

    Institute of Scientific and Technical Information of China (English)

    Mengjie Ye; Jian Li; Yanyan Liang; Zhanchuan Cai; Zesheng Tang

    2011-01-01

    A novel automatic seamless stitching method is presented. Compared to the traditional method, it speeds up processing and minimizes the human resources needed to produce a global lunar map. Meanwhile, a new global image map of the Moon with a spatial resolution of ~120 m has been completed by the proposed method from Chang'E-1 CCD image data.

  11. Using Nanoinformatics Methods for Automatically Identifying Relevant Nanotoxicology Entities from the Literature

    OpenAIRE

    Miguel García-Remesal; Alejandro García-Ruiz; David Pérez-Rey; Diana de la Iglesia; Víctor Maojo

    2013-01-01

    Nanoinformatics is an emerging research field that uses informatics techniques to collect, process, store, and retrieve data, information, and knowledge on nanoparticles, nanomaterials, and nanodevices and their potential applications in health care. In this paper, we have focused on the solutions that nanoinformatics can provide to facilitate nanotoxicology research. For this, we have taken a computational approach to automatically recognize and extract nanotoxicology-related entities from t...

  12. Characterization of polycyclic aromatic hydrocarbons (PAHs) on lime spray dryer (LSD) ash using different extraction methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, P.; Weavers, L.K.; Taerakul, P.; Walker, H.W. [Ohio State University, Columbus, OH (United States). Dept. of Civil & Environmental Engineering

    2006-01-01

    In this study, traditional Soxhlet, automatic Soxhlet and ultrasonic extraction techniques were employed to determine the speciation and concentration of polycyclic aromatic hydrocarbons (PAHs) on lime spray dryer (LSD) ash samples collected from the baghouse of a spreader stoker boiler. To test the efficiencies of different extraction methods, LSD ash samples were doped with a mixture of 16 US EPA specified PAHs to measure the matrix spike recoveries. The results showed that the spike recoveries of PAHs were different using these three extraction methods with dichloromethane (DCM) as the solvent. Traditional Soxhlet extraction achieved slightly higher recoveries than automatic Soxhlet and ultrasonic extraction. Different solvents including toluene, DCM:acetone (1:1 V/V) and hexane:acetone (1:1 V/V) were further examined to optimize the recovery using ultrasonic extraction. Toluene achieved the highest spike recoveries of PAHs at a spike level of 10 μg kg⁻¹. When the spike level was increased to 50 μg kg⁻¹, the spike recoveries of PAHs also correspondingly increased. Although the type and concentration of PAHs detected on LSD ash samples by different extraction methods varied, the concentration of each detected PAH was consistently low, at μg kg⁻¹ levels.

  13. Characterization of polycyclic aromatic hydrocarbons (PAHs) on lime spray dryer (LSD) ash using different extraction methods.

    Science.gov (United States)

    Sun, Ping; Weavers, Linda K; Taerakul, Panuwat; Walker, Harold W

    2006-01-01

    In this study, traditional Soxhlet, automatic Soxhlet and ultrasonic extraction techniques were employed to determine the speciation and concentration of polycyclic aromatic hydrocarbons (PAHs) on lime spray dryer (LSD) ash samples collected from the baghouse of a spreader stoker boiler. To test the efficiencies of different extraction methods, LSD ash samples were doped with a mixture of 16 US EPA specified PAHs to measure the matrix spike recoveries. The results showed that the spike recoveries of PAHs were different using these three extraction methods with dichloromethane (DCM) as the solvent. Traditional Soxhlet extraction achieved slightly higher recoveries than automatic Soxhlet and ultrasonic extraction. Different solvents including toluene, DCM:acetone (1:1 V/V) and hexane:acetone (1:1 V/V) were further examined to optimize the recovery using ultrasonic extraction. Toluene achieved the highest spike recoveries of PAHs at a spike level of 10 microg kg(-1). When the spike level was increased to 50 microg kg(-1), the spike recoveries of PAHs also correspondingly increased. Although the type and concentration of PAHs detected on LSD ash samples by different extraction methods varied, the concentration of each detected PAH was consistently low, at microg kg(-1) levels. PMID:15990154

  14. Computer Domain Term Automatic Extraction and Hierarchical Structure Building

    Institute of Scientific and Technical Information of China (English)

    林源; 陈志泊; 孙俏

    2011-01-01

    This paper presents a computer domain term automatic extraction method based on rules and statistics. It uses computer book titles from the Amazon.com website as a corpus; the data are preprocessed by word segmentation and by filtering out stop words and special characters. Terms are extracted by a set of rules and frequency statistics and inserted into a word tree built from ODP to form the hierarchical structure. Experimental results show high precision and recall of the automatically extracted terms compared with manually tagged terms.

  15. Computer Vision Based Automatic Extraction and Thickness Measurement of Deep Cervical Flexor from Ultrasonic Images

    OpenAIRE

    Kwang Baek Kim; Doo Heon Song; Hyun Jun Park

    2016-01-01

    Deep Cervical Flexor (DCF) muscles are important in monitoring and controlling neck pain. While ultrasonographic analysis is useful in this area, it has an intrinsic subjectivity problem. In this paper, we propose automatic DCF extractor/analyzer software based on computer vision. One of the major difficulties in developing such an automatic analyzer is detecting the important organs and their boundaries in a very low brightness-contrast environment. Our fuzzy sigma binarization process is one of t...

  16. Automatic control logics to eliminate xenon oscillation based on Axial Offsets Trajectory Method

    Energy Technology Data Exchange (ETDEWEB)

    Shimazu, Yoichiro [Mitsubishi Heavy Industries Ltd., Yokohama (Japan). Nuclear Energy Systems Engineering Center

    1996-06-01

    We have proposed the Axial Offsets (AO) Trajectory Method for xenon oscillation control in pressurized water reactors. A feature of this method is that it clearly identifies the control operations necessary to eliminate xenon oscillations. Using this feature, automatic control logics for xenon oscillations are expected to be simple and easily realized. We investigated such automatic control logics. The AO Trajectory Method yielded a very simple logic for merely eliminating xenon oscillations; however, further consideration was needed to eliminate a xenon oscillation while reaching a given axial power distribution. Another control logic, based on modern control theory, was also studied for comparison of control performance. The results show that the automatic control logics based on the AO Trajectory Method are very simple and effective. (author).

  17. Automatic control logics to eliminate xenon oscillation based on Axial Offsets Trajectory Method

    International Nuclear Information System (INIS)

    We have proposed the Axial Offsets (AO) Trajectory Method for xenon oscillation control in pressurized water reactors. A feature of this method is that it clearly identifies the control operations necessary to eliminate xenon oscillations. Using this feature, automatic control logics for xenon oscillations are expected to be simple and easily realized. We investigated such automatic control logics. The AO Trajectory Method yielded a very simple logic for merely eliminating xenon oscillations; however, further consideration was needed to eliminate a xenon oscillation while reaching a given axial power distribution. Another control logic, based on modern control theory, was also studied for comparison of control performance. The results show that the automatic control logics based on the AO Trajectory Method are very simple and effective. (author)

  18. Recent developments in automatic solid-phase extraction with renewable surfaces exploiting flow-based approaches

    DEFF Research Database (Denmark)

    Miró, Manuel; Hartwell, Supaporn Kradtap; Jakmunee, Jaroon;

    2008-01-01

    Solid-phase extraction (SPE) is the most versatile sample-processing method for removal of interfering species and/or analyte enrichment. Although significant advances have been made over the past two decades in automating the entire analytical protocol involving SPE via flow-injection approaches, on-line SPE assays performed in permanent mode lack sufficient reliability as a consequence of progressively tighter packing of the bead reactor, contamination of the solid surfaces and potential leakage of functional moieties. This article overviews the current state-of-the-art of an appealing tool...

  19. Automatic SAR and optical images registration method based on improved SIFT

    Science.gov (United States)

    Yue, Chunyu; Jiang, Wanshou

    2014-10-01

    An automatic SAR and optical image registration method based on improved SIFT is proposed in this paper, following a two-step, coarse-to-fine strategy. The geometric relation between the images is first constructed from geographic information, and the images are rectified to the elevation datum plane to eliminate rotation and resolution differences. SIFT features, extracted from the two images by a dominant-direction-improved SIFT, are then matched using SSIM as the similarity measure over the structural information of the SIFT features. As the rotation difference is eliminated in images of flat areas after rough registration, the number of correct matches and the correct matching rate can be increased by altering the feature orientation assignment. Parallax and angle restrictions are then introduced to improve the matching performance by clustering analysis in the angle and parallax domains: the original matches are mapped, in sequence, to a parallax feature space and a rotation feature space established from custom-defined parallax and rotation parameters; cluster analysis is applied in both spaces, and the relationship between the cluster parameters and the matching result is analysed. Owing to the clustering of correct matches, they are retained. Finally, the perspective transform parameters for the registration are obtained by the RANSAC algorithm, which removes the false matches simultaneously. Experiments show that the algorithm proposed in this paper is effective for the registration of SAR and optical images with large differences.
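
    A generic feature-matching-plus-RANSAC registration sketch with OpenCV is shown below for orientation only; it uses plain SIFT and a Lowe ratio test, not the paper's improved SIFT or SSIM-based matching, and the thresholds are assumptions.

        import cv2
        import numpy as np

        def register(ref_gray, sensed_gray):
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(ref_gray, None)
            k2, d2 = sift.detectAndCompute(sensed_gray, None)
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
                    if m.distance < 0.75 * n.distance]        # Lowe ratio test
            src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            # RANSAC rejects remaining false matches while estimating the
            # perspective transform, as in the final step described above.
            H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            return H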

  20. A Method for Modeling the Virtual Instrument Automatic Test System Based on the Petri Net

    Institute of Scientific and Technical Information of China (English)

    MA Min; CHEN Guang-ju

    2005-01-01

    Virtual instruments play an important role in automatic test systems. This paper introduces the composition of a virtual instrument automatic test system, taking as an example a VXIbus-based test software platform developed by the CAT lab of the UESTC. A method to model this system based on Petri nets is then proposed. Through this method, test task scheduling can be analyzed to prevent deadlocks or resource conflicts. Finally, the paper analyzes the feasibility of the method.

  1. A method for improving the accuracy of automatic indexing of Chinese-English mixed documents

    Institute of Scientific and Technical Information of China (English)

    Yan; ZHAO; Hui; SHI

    2012-01-01

    Purpose: The thrust of this paper is to present a method for improving the accuracy of automatic indexing of Chinese-English mixed documents. Design/methodology/approach: Based on the inherent characteristics of Chinese-English mixed texts and the cybernetics theory, we proposed an integrated control method for indexing documents. It consists of "feed-forward control", "in-progress control" and "feed-back control", aiming at improving the accuracy of automatic indexing of Chinese-English mixed documents. An experiment was conducted to investigate the effect of our proposed method. Findings: This method distinguishes Chinese and English documents in grammatical structures and word formation rules. Through the implementation of this method in the three phases of automatic indexing for the Chinese-English mixed documents, the results were encouraging. The precision increased from 88.54% to 97.10% and recall improved from 97.37% to 99.47%. Research limitations: The indexing method is relatively complicated and the whole indexing process requires substantial human intervention. Due to pattern matching based on a brute-force (BF) approach, the indexing efficiency has been reduced to some extent. Practical implications: The research is of both theoretical significance and practical value in improving the accuracy of automatic indexing of multilingual documents (not confined to Chinese-English mixed documents). The proposed method will benefit not only the indexing of life science documents but also the indexing of documents in other subject areas. Originality/value: So far, few studies have been published about the method for increasing the accuracy of multilingual automatic indexing. This study will provide insights into the automatic indexing of multilingual documents, especially Chinese-English mixed documents.

  2. AsteriX: a Web server to automatically extract ligand coordinates from figures in PDF articles.

    Science.gov (United States)

    Lounnas, V; Vriend, G

    2012-02-27

    Coordinates describing the chemical structures of small molecules that are potential ligands for pharmaceutical targets are used at many stages of the drug design process. The coordinates of the vast majority of ligands can be obtained from either publicly accessible or commercial databases. However, interesting ligands sometimes are only available from the scientific literature, in which case their coordinates need to be reconstructed manually--a process that consists of a series of time-consuming steps. We present a Web server that helps reconstruct the three-dimensional (3D) coordinates of ligands for which a two-dimensional (2D) picture is available in a PDF file. The software, called AsteriX, analyses every picture contained in the PDF file and attempts to determine automatically whether or not it contains ligands. Areas in pictures that may contain molecular structures are processed to extract connectivity and atom type information that allow coordinates to be subsequently reconstructed. The AsteriX Web server was tested on a series of articles containing a large diversity of graphical representations. In total, 88% of 3249 ligand structures present in the test set were identified as chemical diagrams. Of these, about half were interpreted correctly as 3D structures, and a further one-third required only minor manual corrections. It is in principle impossible to always correctly reconstruct 3D coordinates from pictures because there are many different protocols for drawing a 2D image of a ligand, but more importantly a wide variety of semantic annotations are possible. The AsteriX Web server therefore includes facilities that allow the users to augment partial or partially correct 3D reconstructions. All 3D reconstructions are submitted, checked, and corrected by the users at the server and are freely available for everybody. The coordinates of the reconstructed ligands are made available in a series of formats commonly used in drug design research. The

  3. Disordered Speech Assessment Using Automatic Methods Based on Quantitative Measures

    Directory of Open Access Journals (Sweden)

    Christine Sapienza

    2005-06-01

    Full Text Available Speech quality assessment methods are necessary for evaluating and documenting treatment outcomes of patients suffering from degraded speech due to Parkinson's disease, stroke, or other disease processes. Subjective methods of speech quality assessment are more accurate and more robust than objective methods but are time-consuming and costly. We propose a novel objective measure of speech quality assessment that builds on traditional speech processing techniques such as dynamic time warping (DTW) and the Itakura-Saito (IS) distortion measure. Initial results show that our objective measure correlates well with the more expensive subjective methods.
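
    DTW, one of the two building blocks named above, aligns two feature sequences by minimizing cumulative local cost; a compact numpy version (ours, for orientation only, using absolute difference as the local cost) is:

        import numpy as np

        def dtw_distance(x: np.ndarray, y: np.ndarray) -> float:
            """DTW alignment cost between two 1-D feature sequences."""
            n, m = len(x), len(y)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(x[i - 1] - y[j - 1])
                    D[i, j] = cost + min(D[i - 1, j],      # insertion
                                         D[i, j - 1],      # deletion
                                         D[i - 1, j - 1])  # match
            return float(D[n, m])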

  4. Automatic Registration Method for Optical Remote Sensing Images with Large Background Variations Using Line Segments

    Directory of Open Access Journals (Sweden)

    Xiaolong Shi

    2016-05-01

    Full Text Available Image registration is an essential step in the process of image fusion, environment surveillance and change detection. Finding correct feature matches during the registration process proves to be difficult, especially for remote sensing images with large background variations (e.g., images taken pre and post an earthquake or flood. Traditional registration methods based on local intensity probably cannot maintain steady performances, as differences are significant in the same area of the corresponding images, and ground control points are not always available in many disaster images. In this paper, an automatic image registration method based on the line segments on the main shape contours (e.g., coastal lines, long roads and mountain ridges is proposed for remote sensing images with large background variations because the main shape contours can hold relatively more invariant information. First, a line segment detector called EDLines (Edge Drawing Lines, which was proposed by Akinlar et al. in 2011, is used to extract line segments from two corresponding images, and a line validation step is performed to remove meaningless and fragmented line segments. Then, a novel line segment descriptor with a new histogram binning strategy, which is robust to global geometrical distortions, is generated for each line segment based on the geometrical relationships, including both the locations and orientations of the remaining line segments relative to it. As a result of the invariance of the main shape contours, correct line segment matches will have similar descriptors and can be obtained by cross-matching among the descriptors. Finally, a spatial consistency measure is used to remove incorrect matches, and transformation parameters between the reference and sensed images can be figured out. Experiments with images from different types of satellite datasets, such as Landsat7, QuickBird, WorldView, and so on, demonstrate that the proposed algorithm is

  5. Method for Extracting and Sequestering Carbon Dioxide

    Energy Technology Data Exchange (ETDEWEB)

    Rau, Gregory H.; Caldeira, Kenneth G.

    2005-05-10

    A method and apparatus to extract and sequester carbon dioxide (CO2) from a stream or volume of gas wherein said method and apparatus hydrates CO2, and reacts the resulting carbonic acid with carbonate. Suitable carbonates include, but are not limited to, carbonates of alkali metals and alkaline earth metals, preferably carbonates of calcium and magnesium. Waste products are metal cations and bicarbonate in solution or dehydrated metal salts, which when disposed of in a large body of water provide an effective way of sequestering CO2 from a gaseous environment.
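
    The chemistry described here is the standard carbonate pathway; as a sketch (our rendering, not quoted from the patent, with calcium carbonate as the example mineral), the two steps are:

        CO2 + H2O -> H2CO3                     (hydration of carbon dioxide to carbonic acid)
        H2CO3 + CaCO3 -> Ca(2+) + 2 HCO3(-)    (carbonic acid reacting with carbonate to give a dissolved metal cation and bicarbonate)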

  6. Determination of Artificial Sweetener 4-Ethoxyphenylurea in Succade by Automatic Solid-phase Extraction and High Performance Liquid Chromatography with Fluorescence Detection

    Institute of Scientific and Technical Information of China (English)

    陈章捷; 陈金凤; 张艳燕; 钟坚海; 魏晶晶

    2014-01-01

    High performance liquid chromatography is applied for the determination of the artificial sweetener 4-ethoxyphenylurea in succade. The sample is ultrasonically extracted with an acetic acid/ammonium acetate buffer solution and purified by automatic solid-phase extraction. The extract is separated on an SB-C18 reversed-phase column and detected by a fluorescence detector. The correlation coefficient over the range of 0 to 10 mg/L is 0.9987, and the limit of quantitation (S/N=10) is less than 0.1 mg/kg. Using three blank succade samples as matrices, recovery was tested at 3 spiking levels; average recoveries ranged from 81.7% to 92.4%, with RSDs (n=6) between 2.4% and 6.8%.

  7. Method of drill-worm coal extraction

    Energy Technology Data Exchange (ETDEWEB)

    Levkovich, P.Ye.; Bratishcheva, L.L.; Savich, N.S.

    1982-09-01

    The purpose of the invention is to increase extraction productivity. This goal is achieved as follows: in the drill-worm method of coal extraction, wells are drilled from one preparatory shaft to the second by a paired worm shaft on a guide, which is pulled into the drilled well during the reverse run of the shaft, and the drilled well is reinforced by wedge timbering that bulges out during drilling of the next well. According to the proposed method, coal is extracted by drilling wells from preparatory shaft 1 (a haulage gallery in the example). Drilling of the wells is done with a sectional worm shaft equipped with a drilling crown and a guide device; a cantilever attaches the guide device to the main section of the worm shaft. The guide device also includes two horizontally installed, freely rotating cylinders located in front of the drilling crowns in the previously drilled well, and a guide ski. During drilling of the well, sets of wedge timbering, connected by flexible ties such as chain segments, are installed on the guide platform in the second preparatory shaft (a ventilation gallery in the example). The wedge timbering (including the main set) consists of wedge elements made of inexpensive material, for example slag concrete.

  8. A method for unsupervised change detection and automatic radiometric normalization in multispectral data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton John

    2011-01-01

    Rhine-Westphalia, Germany. A link to an example with ASTER data to detect change with the same method after the 2005 Kashmir earthquake is given. The method is also used to automatically normalize multitemporal, multispectral Landsat ETM+ data radiometrically. IDL/ENVI, Python and Matlab software...

  9. Automatic diagnostic methods of nuclear reactor collected signals

    International Nuclear Information System (INIS)

    This work is the first phase of an overall study of diagnosis limited to problems of monitoring the operating state; this makes it possible to show all that pattern recognition methods bring at the processing level. The present problem is the checking of control operations. The analysis of the state of the reactor gives a decision which is compared with the history of the control operations; if there is no correspondence, the state subjected to the analysis is declared 'abnormal'. The system subjected to the analysis is described and the problem to solve is defined. The Gaussian parametric approach and methods to evaluate the error probability are then treated, followed by non-parametric methods; an on-line detection scheme has been tested experimentally. Finally, a non-linear transformation has been studied to reduce the error probability obtained previously. All the methods presented have been tested and compared using a quality index: the error probability.
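
    As a rough illustration of the Gaussian parametric stage, the sketch below fits class-conditional Gaussians to "normal" and "abnormal" feature vectors and estimates the error probability on held-out data. It is our generic reconstruction, not the system described in the record.

        import numpy as np
        from scipy.stats import multivariate_normal

        def fit_gaussian(X):
            # X: (n_samples, n_features) training matrix for one class.
            return multivariate_normal(mean=X.mean(axis=0), cov=np.cov(X.T))

        def classify(x, g_normal, g_abnormal):
            return "normal" if g_normal.pdf(x) >= g_abnormal.pdf(x) else "abnormal"

        def error_probability(Xn_test, Xa_test, g_normal, g_abnormal):
            """Fraction of held-out samples assigned to the wrong class."""
            errs  = sum(classify(x, g_normal, g_abnormal) != "normal"   for x in Xn_test)
            errs += sum(classify(x, g_normal, g_abnormal) != "abnormal" for x in Xa_test)
            return errs / (len(Xn_test) + len(Xa_test))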

  10. Automatable Evaluation Method Oriented toward Behaviour Believability for Video Games

    CERN Document Server

    Tencé, Fabien

    2010-01-01

    Classic evaluation methods for believable agents are time-consuming because they involve many humans judging the agents. They are well suited to validating work on new believable behaviour models. However, during implementation, numerous experiments can help to improve agents' believability. We propose a method which aims at assessing how much an agent's behaviour looks like humans' behaviours. By representing behaviours with vectors, we can store data computed for humans and then evaluate as many agents as needed without further need of humans. We present a test experiment which shows that even a simple evaluation following our method can reveal differences between quite believable agents and humans. This method seems promising although, as shown in our experiment, the analysis of results can be difficult.
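
    The vector representation makes "evaluate without further humans" a simple similarity computation; the cosine-based score below is an assumed stand-in, since the record does not specify the metric or features.

        import numpy as np

        def believability_score(agent_vec, human_vecs):
            """Mean cosine similarity of an agent's behaviour vector to
            stored human behaviour vectors (closer to 1.0 = more human-like)."""
            h = np.asarray(human_vecs, dtype=float)   # shape (n_humans, d)
            a = np.asarray(agent_vec, dtype=float)    # shape (d,)
            sims = (h @ a) / (np.linalg.norm(h, axis=1) * np.linalg.norm(a))
            return float(sims.mean())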

  11. Automatic Sampling with the Ratio-of-uniforms Method

    OpenAIRE

    Leydold, Josef

    1999-01-01

    Applying the ratio-of-uniforms method for generating random variates results in very efficient, fast and easy-to-implement algorithms. However, parameters for every particular type of density must be precalculated analytically. In this paper we show that the ratio-of-uniforms method is also useful for the design of a black-box algorithm suitable for a large class of distributions, including all those with log-concave densities. Using polygonal envelopes and squeezes results in an algorithm that is ...
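
    For orientation, the basic (non-black-box) ratio-of-uniforms recipe for a standard normal looks as follows: draw (u, v) uniformly on a bounding box and accept x = v/u whenever u <= sqrt(f(v/u)), with f(x) = exp(-x^2/2) the unnormalized density, which reduces to the logarithmic test in the code.

        import numpy as np

        def rou_normal(n, rng=np.random.default_rng(0)):
            out = []
            b = np.sqrt(2.0 / np.e)          # |v| bound for the normal density
            while len(out) < n:
                u = rng.uniform(0.0, 1.0)    # u bound: sup sqrt(f) = 1
                v = rng.uniform(-b, b)
                # u <= sqrt(f(v/u))  <=>  v^2 <= -4 u^2 ln(u)
                if u > 0.0 and v * v <= -4.0 * u * u * np.log(u):
                    out.append(v / u)
            return np.array(out)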

  12. The Automatic Method of EEG State Classification by Using Self-Organizing Map

    Science.gov (United States)

    Tamura, Kazuhiro; Shimada, Takamasa; Saito, Yoichi

    In psychiatry, the sleep stage is one of the most important pieces of evidence for diagnosing mental disease. However, when doctors diagnose the sleep stage, much labor and skill are required, and a quantitative and objective method is needed for more accurate diagnosis. For this reason, an automatic diagnosis system must be developed. In this paper, we propose an automatic sleep stage diagnosis method using Self-Organizing Maps (SOM). The neighborhood learning of a SOM makes input data with similar features produce outputs close to each other; this property is effective for automatically producing an understandable classification of complex input data. We applied an Elman-type feedback SOM to the EEG of not only normal subjects but also subjects suffering from disease. The spectrum of characteristic waves in the EEG of diseased subjects often differs from that of normal subjects, so it is difficult to classify the EEG of diseased subjects with rules made for normal subjects. The Elman-type feedback SOM, on the other hand, classifies the EEG with the features the data contain, and the classifying rule is made automatically, so even the EEG of diseased subjects can be classified automatically. This Elman-type feedback SOM also has context units for diagnosing sleep stages considering the contextual information of the EEG. Experimental results indicate that the proposed method is able to achieve sleep stage judgment in agreement with the doctor's diagnosis.

  13. A cell extraction method for oily sediments

    Directory of Open Access Journals (Sweden)

    Michael Lappé

    2011-11-01

    Full Text Available Hydrocarbons can be found in many different habitats and represent an important carbon source for microbes. As fossil fuels they are also an important economic resource; through natural seepage or accidental release they can also be major pollutants. DNA-specific stains and molecular probes bind to hydrocarbons, causing massive background fluorescence and thereby hampering cell enumeration. The cell extraction procedure of Kallmeyer et al. (2008) separates the cells from the sediment matrix. In principle, this technique can also be used to separate cells from oily sediments, but it is not optimized for this application. Here we present a modified extraction method in which the hydrocarbons are removed prior to cell extraction. Due to the reduced background fluorescence the microscopic image becomes clearer, making cell identification and enumeration much easier. Consequently, the resulting cell counts from samples treated according to our new protocol are significantly higher than those treated according to Kallmeyer et al. (2008). We tested different amounts of a variety of solvents for their ability to remove hydrocarbons and found that n-hexane and - in samples containing more biodegraded oils - methanol delivered the best results. However, as solvents also tend to lyse cells, it was important to find the optimum solvent-to-sample ratio, at which hydrocarbon extraction is maximised and cell lysis minimized. A ratio between slurry and solvent of 1:2 to 1:5 delivered the highest cell counts without lysing too many cells. The method provided reproducibly good results on samples from very different environments, both marine and terrestrial.

  14. Unsupervised Threshold for Automatic Extraction of Dolphin Dorsal Fin Outlines from Digital Photographs in DARWIN (Digital Analysis and Recognition of Whale Images on a Network)

    CERN Document Server

    Hale, Scott A

    2012-01-01

    At least two software packages---DARWIN, Eckerd College, and FinScan, Texas A&M---exist to facilitate the identification of cetaceans---whales, dolphins, porpoises---based upon the naturally occurring features along the edges of their dorsal fins. Such identification is useful for biological studies of population, social interaction, migration, etc. The process whereby fin outlines are extracted in current fin-recognition software packages is manually intensive and represents a major user input bottleneck: it is both time consuming and visually fatiguing. This research aims to develop automated methods (employing unsupervised thresholding and morphological processing techniques) to extract cetacean dorsal fin outlines from digital photographs thereby reducing manual user input. Ideally, automatic outline generation will improve the overall user experience and improve the ability of the software to correctly identify cetaceans. Various transformations from color to gray space were examined to determine whi...
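
    A generic unsupervised-threshold-plus-morphology pipeline of the kind described can be sketched with scikit-image; this is not DARWIN's code, and the gray transform, mask polarity and structuring-element size are assumptions.

        import numpy as np
        from skimage import color, filters, morphology

        def fin_mask(rgb_image: np.ndarray) -> np.ndarray:
            gray = color.rgb2gray(rgb_image)       # one color-to-gray transform
            t = filters.threshold_otsu(gray)       # unsupervised (Otsu) threshold
            mask = gray < t                        # assume dark fin on light water
            mask = morphology.binary_opening(mask, morphology.disk(3))  # remove specks
            mask = morphology.binary_closing(mask, morphology.disk(3))  # fill gaps
            return mask                            # fin outline = mask boundary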

  15. Analysis of Fiber deposition using Automatic Image Processing Method

    Science.gov (United States)

    Belka, M.; Lizal, F.; Jedelsky, J.; Jicha, M.

    2013-04-01

    Fibers are a permanent threat to human health. They have the ability to penetrate deep into the human lung, deposit there and cause health hazards, e.g. lung cancer. An experiment was carried out to gain more data about the deposition of fibers. Monodisperse glass fibers were delivered into a realistic model of human airways with an inspiratory flow rate of 30 l/min. The replica included the human airways from the oral cavity up to the seventh generation of branching. After the delivery, deposited fibers were rinsed from the model and placed on nitrocellulose filters. A novel method was established for deposition data acquisition, based on the principle of image analysis. The images were captured by a high-definition camera attached to a phase contrast microscope. Results of the new method were compared with the standard PCM method, which follows methodology NIOSH 7400, and a good match was found. The new method was found applicable for the evaluation of fibers, and the deposition fraction and deposition efficiency were calculated afterwards.

  16. Analysis of Fiber deposition using Automatic Image Processing Method

    Directory of Open Access Journals (Sweden)

    Jicha M.

    2013-04-01

    Full Text Available Fibers are a permanent threat to human health. They have the ability to penetrate deep into the human lung, deposit there and cause health hazards, e.g. lung cancer. An experiment was carried out to gain more data about the deposition of fibers. Monodisperse glass fibers were delivered into a realistic model of human airways with an inspiratory flow rate of 30 l/min. The replica included the human airways from the oral cavity up to the seventh generation of branching. After the delivery, deposited fibers were rinsed from the model and placed on nitrocellulose filters. A novel method was established for deposition data acquisition, based on the principle of image analysis. The images were captured by a high-definition camera attached to a phase contrast microscope. Results of the new method were compared with the standard PCM method, which follows methodology NIOSH 7400, and a good match was found. The new method was found applicable for the evaluation of fibers, and the deposition fraction and deposition efficiency were calculated afterwards.

  17. Virgin almond oil: Extraction methods and composition

    Directory of Open Access Journals (Sweden)

    Roncero, J. M.

    2016-09-01

    Full Text Available In this paper the extraction methods of virgin almond oil and its chemical composition are reviewed. The most common methods for obtaining the oil are solvent extraction, extraction with supercritical fluids (CO2) and pressure systems (hydraulic and screw presses). The best industrial performance, but also the worst oil quality, is achieved by using solvents. Oils obtained by this method cannot be considered virgin oils as they are obtained by chemical treatments. Supercritical fluid extraction results in higher quality oils but at a very high price. Extraction by pressing becomes the best option, achieving high quality oils at an affordable price. With regard to chemical composition, almond oil is characterized by its low content of saturated fatty acids and the predominance of monounsaturated ones, especially oleic acid. Furthermore, almond oil contains antioxidants and fat-soluble bioactive compounds that make it an oil with interesting nutritional and cosmetic properties.

  18. Automatic ECG wave extraction in long-term recordings using Gaussian mesa function models and nonlinear probability estimators.

    Science.gov (United States)

    Dubois, Rémi; Maison-Blanche, Pierre; Quenet, Brigitte; Dreyfus, Gérard

    2007-12-01

    This paper describes the automatic extraction of the P, Q, R, S and T waves of electrocardiographic recordings (ECGs), through the combined use of a new machine-learning algorithm termed generalized orthogonal forward regression (GOFR) and of a specific parameterized function termed the Gaussian mesa function (GMF). GOFR breaks up the heartbeat signal into Gaussian mesa functions, in such a way that each wave is modeled by a single GMF; the model thus generated is easily interpretable by the physician. GOFR is an essential ingredient in a global procedure that locates the R wave after some simple pre-processing, extracts the characteristic shape of each heartbeat, assigns P, Q, R, S and T labels through automatic classification, discriminates normal beats (NB) from abnormal beats (AB), and extracts features for diagnosis. The efficiency of the detection of the QRS complex, and of the discrimination of NB from AB, is assessed on the MIT and AHA databases; the labeling of the P and T waves is validated on the QTDB database. PMID:17997186
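
    The record does not give the GMF's exact parameterization; the sketch below shows one plausible "mesa" shape (a flat top with independent Gaussian flanks), purely to make the idea of modeling one wave per function concrete. All parameters are illustrative assumptions.

        import numpy as np

        def gaussian_mesa(t, mu, half_width, sigma_l, sigma_r, amp=1.0):
            """Flat top on [mu-half_width, mu+half_width], Gaussian flanks
            with independent left/right widths (assumed illustrative form)."""
            y = np.ones_like(t, dtype=float)
            left, right = t < mu - half_width, t > mu + half_width
            y[left]  = np.exp(-0.5 * ((t[left]  - (mu - half_width)) / sigma_l) ** 2)
            y[right] = np.exp(-0.5 * ((t[right] - (mu + half_width)) / sigma_r) ** 2)
            return amp * y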

  19. A new method for automatic discontinuity traces sampling on rock mass 3D model

    Science.gov (United States)

    Umili, G.; Ferrero, A.; Einstein, H. H.

    2013-02-01

    A new automatic method for discontinuity trace mapping and sampling on a rock mass digital model is described in this work. The implemented procedure automatically identifies discontinuity traces on a Digital Surface Model: traces are detected directly as surface breaklines, by means of the maximum and minimum principal curvature values of the vertices that constitute the model surface. The color influence and user errors that usually characterize trace mapping on images are eliminated. Trace sampling procedures based on circular windows and circular scanlines have also been implemented: they are used to infer trace data and to calculate values of mean trace length, expected discontinuity diameter and intensity of rock discontinuities. The method is tested on a case study: results obtained by applying the automatic procedure on the DSM of a rock face are compared to those obtained by performing a manual sampling on the orthophotograph of the same rock face.

  20. A semi-automatic method for peak and valley detection in free-breathing respiratory waveforms

    International Nuclear Information System (INIS)

    Existing commercial software often inadequately determines respiratory peaks for patients in respiration-correlated computed tomography. A semi-automatic method was developed for peak and valley detection in free-breathing respiratory waveforms. First, the waveform is separated into breath cycles by identifying the intercepts of a moving average curve with the inspiration and expiration branches of the waveform. Peaks and valleys are then defined, respectively, as the maximum and minimum between pairs of alternating inspiration and expiration intercepts. Finally, automatic corrections and manual user interventions are employed. On average, for each of the 20 patients, 99% of 307 peaks and valleys were automatically detected in 2.8 s. The method was robust for bellows waveforms with large variations.
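
    Our reading of the moving-average-intercept scheme, as a numpy sketch (the window length and details are assumptions, not the validated implementation): crossings of the waveform with its moving average delimit half-cycles, and extrema between alternating crossings are the peaks and valleys.

        import numpy as np

        def peaks_valleys(w: np.ndarray, win: int = 200):
            avg = np.convolve(w, np.ones(win) / win, mode="same")  # moving average
            above = w > avg
            crossings = np.flatnonzero(np.diff(above.astype(int))) + 1
            peaks, valleys = [], []
            for a, b in zip(crossings[:-1], crossings[1:]):
                seg = slice(a, b)
                if above[a]:                          # inspiration branch -> peak
                    peaks.append(a + int(np.argmax(w[seg])))
                else:                                 # expiration branch -> valley
                    valleys.append(a + int(np.argmin(w[seg])))
            return np.array(peaks), np.array(valleys)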

  1. Comparison of modified automatic Dumas method and the traditional Kjeldahl method for nitrogen determination in infant food.

    Science.gov (United States)

    Bellomonte, G; Costantini, A; Giammarioli, S

    1987-01-01

    This study compares two methods for determining nitrogen and protein in various types of infant food: the Kjeldahl method, developed in 1883, which is time-consuming and labor-intensive, and a newer automatic method based on the Dumas method. In each category of infant food considered, the results obtained by the two methods are shown to be comparable; however, the modified Dumas method is quicker, easier, and does not pollute the laboratory environment.

  2. Hierarchical Algorithm in DTM Generation and Automatic Extraction of Road from LIDAR Data

    Science.gov (United States)

    Hui-ying, L.; Yu-jun, X.; Zhi, W.; Yi-nan, L.

    2012-07-01

    Growing demand for efficient land use above and below the ground is motivating cadastre and land management systems to move from traditional 2D systems toward three-dimensional ones. Airborne laser technology offers direct acquisition of dense and accurate 3D data. In order to obtain 3D roads, this paper proposes a hierarchical algorithm to extract terrain points from LIDAR data. We stratify the raw LiDAR data according to height and distinguish terrain points from non-terrain points by connectivity. A road network exhibits the morphological characteristics of a network structure: continuous strips of a certain length with small differences in intensity. All this information, including elevation, intensity, the morphological characteristics and other local features, is used to extract the road network from the DTM. A local morphological filtering method is implemented to find clear boundaries and rich details of the road profile. Finally, results for this approach are shown and evaluated.

  3. Automatic teleaudiometry: a low-cost method for auditory screening

    Directory of Open Access Journals (Sweden)

    Campelo, Victor Eulálio Sousa

    2010-03-01

    Full Text Available Introduction: The benefits of auditory screening have been demonstrated; however, screening programs have been restricted to large centers. Objectives: (a) to develop a remote auditory screening method; (b) to test its accuracy and compare it to the screening audiometry test (AV). Method: The teleaudiometry (TA) setup consists of purpose-built software installed on a computer with TDH39 headphones. A serial study was conducted on 73 individuals between 17 and 50 years of age, 57% of them female, randomly selected from among patients and companions at the Hospital das Clínicas. After a symptom questionnaire and otoscopy, the individuals performed the TA and AV tests, with screening at 20 dB at the frequencies of 1, 2 and 4 kHz following the ASHA (1997) protocol, and the gold-standard test of pure-tone audiometry in a soundproof booth, in random order. Results: The TA lasted on average 125±11 s and the AV 65±18 s; 69 individuals (94.5%) declared the TA easy or very easy to perform, and 61 (83.6%) considered the AV easy or very easy. The accuracy results of the TA and AV were, respectively: sensitivity (86.7%/86.7%), specificity (75.9%/72.4%), negative predictive value (95.7%/95.5%) and positive predictive value (48.1%/55.2%). Conclusion: Teleaudiometry showed itself to be a good option as an auditory screening method, with accuracy close to that of screening audiometry. In comparison with that method, teleaudiometry presented similar sensitivity, higher specificity and negative predictive value, a longer test time and a lower positive predictive value.

  4. An Automatic Interference Recognition Method in Spread Spectrum Communication System

    Institute of Scientific and Technical Information of China (English)

    YANG Xiao-ming; TAO Ran

    2007-01-01

    An algorithm to detect and recognize interferences embedded in a direct-sequence spread spectrum (DSSS) communication system is proposed. Based on Welch's averaged modified periodogram method and the fractional Fourier transform (FRFT), the paper proposes a decision-tree-based algorithm in which a set of decision criteria for identifying different types of interference is developed. Simulation results demonstrate that the proposed algorithm provides a high recognition rate and is robust for various ISR and SNR values.
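
    Welch's averaged modified periodogram is available directly in SciPy; the toy check below flags a strong narrowband (tone-like) component, with an invented decision threshold standing in for the paper's decision tree.

        import numpy as np
        from scipy.signal import welch

        def tone_interference_present(x, fs, factor=10.0):
            """True if the Welch PSD shows a spectral spike well above the
            median noise floor (threshold 'factor' is an assumption)."""
            f, pxx = welch(x, fs=fs, nperseg=1024)
            return bool(pxx.max() > factor * np.median(pxx))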

  5. A cell extraction method for oily sediments

    Science.gov (United States)

    Lappé, M.; Kallmeyer, J.

    2012-04-01

    Hydrocarbons can be found in many different habitats and represent an important carbon source for microbes. As fossil fuels, they are an important economic resource and, through natural seepage or accidental release, they can be major pollutants. Oil sands from Alberta, Canada, and samples from the seafloor of the Gulf of Mexico represent typical examples of either natural or anthropogenically affected oily sediments. DNA-specific stains and molecular probes bind to hydrocarbons, causing massive background fluorescence and thereby hampering cell enumeration. The cell extraction procedure of Kallmeyer et al. (2008) separates the cells from the sediment matrix, producing a sediment-free cell extract that can then be used for subsequent staining and cell enumeration under a fluorescence microscope. In principle, this technique can also be used to separate cells from oily sediments, but it was not originally optimized for this application and does not provide satisfactory results. Here we present a modified extraction method in which the hydrocarbons are removed prior to cell extraction by a solvent treatment. Due to the reduced background fluorescence the microscopic image becomes clearer, making cell identification and enumeration much easier. Consequently, the resulting cell counts from oily samples treated according to our new protocol were significantly higher than those treated according to Kallmeyer et al. (2008). We tested different amounts of a variety of solvents for their ability to remove hydrocarbons and found that n-hexane and - in samples containing more biodegraded oils - methanol delivered the best results. Because solvents also tend to lyse cells, it was important to find the optimum solvent-to-sample ratio, at which the positive effect of hydrocarbon extraction overcomes the negative effect of cell lysis. A volumetric ratio of 1:2 to 1:5 between a formalin-fixed sediment slurry and solvent delivered the highest cell counts. Extraction

  6. A new method for the automatic calculation of prosody

    International Nuclear Information System (INIS)

    An algorithm is presented for the calculation of the prosodic parameters for speech synthesis. It uses the melodic patterns, composed of rising and falling slopes, suggested by G. CAELEN, and rests on: 1. an analysis into units of meaning to determine a melodic pattern; 2. the calculation of the numeric values of the prosodic variations of each syllable; 3. the use of a table of vocalic values for the three parameters for each vowel according to its consonantal environment, and of a table of standard durations for consonants. This method was applied in the 'SARA' synthesis program with satisfactory results. (author)

  7. Automatic extraction analysis of the anatomical functional area for normal brain 18F-FDG PET imaging

    International Nuclear Information System (INIS)

    Using self-designed software for the automatic extraction of brain functional areas, the grey scale distribution of 18F-FDG imaging and the relationships between the 18F-FDG accumulation of each brain anatomic functional area and the injected dose of 18F-FDG, the glucose level, the age, etc., were studied. According to the Talairach coordinate system, after rotation, drift and plastic deformation, the 18F-FDG PET images were registered to the Talairach coordinate atlas, and the ratios between the average grey value of each brain anatomic functional area and that of the whole brain were calculated. Furthermore, the relationships between the 18F-FDG accumulation of every brain anatomic functional area and the injected dose, the glucose level and the age were tested using a multiple stepwise regression model. After image registration, smoothing and extraction, the main cerebral cortex regions of the 18F-FDG PET brain images could be successfully localized and extracted, such as the frontal lobe, parietal lobe, occipital lobe, temporal lobe, cerebellum, brain ventricles, thalamus and hippocampus. The average ratios to the inner reference of every brain anatomic functional area were 1.01 ± 0.15. By multiple stepwise regression, with the exception of the thalamus and hippocampus, the grey scale of all brain functional areas was negatively correlated with age, but showed no correlation with blood glucose or dose in any area. For 18F-FDG PET imaging, the brain functional area extraction program could automatically delineate most of the cerebral cortical areas and also support brain blood flow and metabolism studies, but the extraction of more detailed areas needs further investigation.

  8. Accuracy of structure-based sequence alignment of automatic methods

    Directory of Open Access Journals (Sweden)

    Lee Byungkook

    2007-09-01

    Full Text Available Abstract Background Accurate sequence alignments are essential for homology searches and for building three-dimensional structural models of proteins. Since structure is better conserved than sequence, structure alignments have been used to guide sequence alignments and are commonly used as the gold standard for sequence alignment evaluation. Nonetheless, as far as we know, there is no report of a systematic evaluation of pairwise structure alignment programs in terms of the sequence alignment accuracy. Results In this study, we evaluate CE, DaliLite, FAST, LOCK2, MATRAS, SHEBA and VAST in terms of the accuracy of the sequence alignments they produce, using sequence alignments from NCBI's human-curated Conserved Domain Database (CDD) as the standard of truth. We find that 4 to 9% of the residues on average are either not aligned or aligned with more than 8 residues of shift error and that an additional 6 to 14% of residues on average are misaligned by 1–8 residues, depending on the program and the data set used. The fraction of correctly aligned residues generally decreases as the sequence similarity decreases or as the RMSD between the Cα positions of the two structures increases. It varies significantly across CDD superfamilies, whether shift error is allowed or not. Also, alignments with different shift errors occur between proteins within the same CDD superfamily, leading to inconsistent alignments between superfamily members. In general, residue pairs that are more than 3.0 Å apart in the reference alignment are heavily (>= 25% on average) misaligned in the test alignments. In addition, each method shows a different pattern of relative weaknesses for different SCOP classes. CE gives relatively poor results for β-sheet-containing structures (all-β, α/β, and α+β classes), DaliLite for the "others" class where all but the major four classes are combined, and LOCK2 and VAST for the all-β and "others" classes. Conclusion When the sequence

  9. A simple multi-scale Gaussian smoothing-based strategy for automatic chromatographic peak extraction.

    Science.gov (United States)

    Fu, Hai-Yan; Guo, Jun-Wei; Yu, Yong-Jie; Li, He-Dong; Cui, Hua-Peng; Liu, Ping-Ping; Wang, Bing; Wang, Sheng; Lu, Peng

    2016-06-24

    Peak detection is a critical step in chromatographic data analysis. In the present work, we developed a multi-scale Gaussian smoothing-based strategy for accurate peak extraction. The strategy consists of three stages: background drift correction, peak detection, and peak filtration. Background drift correction was implemented using a moving window strategy. The new peak detection method is a variant of the system used by the well-known MassSpecWavelet, i.e., chromatographic peaks are found at local maximum values under various smoothing window scales. Peaks can therefore be detected through the ridge lines of maximum values across these window scales, and signals that monotonically increase/decrease around the peak position can be treated as part of the peak. Instrumental noise was estimated after peak elimination, and a peak filtration strategy was performed to remove peaks with signal-to-noise ratios smaller than 3. The performance of our method was evaluated using two complex datasets: essential oil samples for quality control obtained from gas chromatography, and tobacco plant samples for metabolic profiling analysis obtained from gas chromatography coupled with mass spectrometry. The results confirmed the validity of the developed method.
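
    The ridge-line idea is easy to sketch. The toy Python snippet below (an approximation for illustration, not the authors' implementation) keeps only the local maxima that persist across several Gaussian smoothing scales and then filters the survivors at a signal-to-noise ratio of 3:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelextrema

def detect_peaks(signal, scales=(1, 2, 4, 8), snr_min=3.0, tol=3):
    """Keep maxima that reappear (within tol points) at every smoothing
    scale -- a crude 'ridge line' -- then apply an SNR >= 3 filter."""
    maxima = [set(argrelextrema(gaussian_filter1d(signal, s), np.greater)[0])
              for s in scales]
    ridge = [p for p in sorted(maxima[0])
             if all(any(abs(p - q) <= tol for q in m) for m in maxima[1:])]
    noise = np.std(np.diff(signal)) / np.sqrt(2)   # crude noise estimate
    return [p for p in ridge if signal[p] / (noise + 1e-12) >= snr_min]

# toy chromatogram: two Gaussian peaks plus white noise
x = np.arange(500)
sig = 100 * np.exp(-(x - 150) ** 2 / 50) + 60 * np.exp(-(x - 350) ** 2 / 80)
sig += np.random.default_rng(1).normal(0, 1.0, x.size)
print(detect_peaks(sig))   # indices near 150 and 350
```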

  10. Development of automatic extraction of the corpus callosum from magnetic resonance imaging of the head and examination of the early dementia objective diagnostic technique in feature analysis

    International Nuclear Information System (INIS)

    We examined the objective diagnosis of dementia based on changes in the corpus callosum. We examined midsagittal head MR images of 17 early dementia patients (2 men and 15 women; mean age, 77.2±3.3 years) and 18 healthy elderly controls (2 men and 16 women; mean age, 73.8±6.5 years), 35 subjects altogether. First, the corpus callosum was automatically extracted from the MR images. Next, early dementia was compared with the healthy elderly individuals using 5 features of the straight-line methods, 5 features of the Run-Length Matrix, and 6 features of the Co-occurrence Matrix from the corpus callosum. Automatic extraction of the corpus callosum showed an accuracy rate of 84.1±3.7%. A statistically significant difference was found in 6 of the 16 features between early dementia patients and healthy elderly controls. Discriminant analysis using the 6 features demonstrated a sensitivity of 88.2% and specificity of 77.8%, with an overall accuracy of 82.9%. These results indicate that feature analysis based on changes in the corpus callosum can be used as an objective diagnostic technique for early dementia. (author)

  11. Method and apparatus for automatic control of a humanoid robot

    Science.gov (United States)

    Abdallah, Muhammad E (Inventor); Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Reiland, Matthew J (Inventor); Sanders, Adam M (Inventor)

    2013-01-01

    A robotic system includes a humanoid robot having a plurality of joints adapted for force control with respect to an object acted upon by the robot, a graphical user interface (GUI) for receiving an input signal from a user, and a controller. The GUI provides the user with intuitive programming access to the controller. The controller controls the joints using an impedance-based control framework, which provides object-level, end-effector-level, and/or joint-space-level control of the robot in response to the input signal. A method for controlling the robotic system includes receiving the input signal via the GUI, e.g., a desired force, and then processing the input signal using a host machine to control the joints via an impedance-based control framework. The framework provides object-level, end-effector-level, and/or joint-space-level control of the robot, and allows for a function-based GUI to simplify implementation of a myriad of operating modes.
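
    The patent abstract gives no equations, but the kind of Cartesian impedance law such frameworks are built on can be sketched as follows (a generic textbook formulation with hypothetical stiffness and damping values, not the inventors' code):

```python
import numpy as np

def impedance_wrench(x, x_dot, x_des, x_des_dot, K, D):
    """Textbook impedance law: command a force proportional to position
    and velocity error, so the end effector behaves like a spring-damper."""
    return K @ (x_des - x) + D @ (x_des_dot - x_dot)

K = np.diag([300.0, 300.0, 300.0])   # N/m, hypothetical stiffness
D = np.diag([30.0, 30.0, 30.0])      # N*s/m, hypothetical damping
f = impedance_wrench(np.zeros(3), np.zeros(3),
                     np.array([0.01, 0.0, 0.0]), np.zeros(3), K, D)
print(f)   # -> [3. 0. 0.], a gentle pull toward the 1 cm target offset
```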

  12. Concept of automatic programming of NC machine for metal plate cutting by genetic algorithm method

    Directory of Open Access Journals (Sweden)

    B. Vaupotic

    2005-12-01

    Full Text Available Purpose: In this paper the concept of automatic programming of NC machines for metal plate cutting by the genetic algorithm method is presented.Design/methodology/approach: The paper was limited to automatic creation of NC programs for two-dimensional cutting of material by means of adaptive heuristic search algorithms.Findings: Automatic creation of NC programs in laser cutting of materials combines CAD concepts, the recognition of features, and the creation and optimization of NC programs. The proposed intelligent system is capable of automatically recognizing the nesting of products in the layout and determining the incisions and sequences of cuts forming the laid-out products. The position of incisions is determined at the relevant places on the cut. The system is capable of finding the shortest path between individual cuts and recording the NC program.Research limitations/implications: It would be appropriate to orient future research towards an improved system for three-dimensional cutting with optional determination of incision positions, with the capability to sense collisions and to optimize speed and acceleration during cutting.Practical implications: The proposed system assures automatic preparation of the NC program without an NC programmer.Originality/value: The proposed concept shows a high degree of universality, efficiency and reliability and can be simply adapted to other NC machines.
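
    As a hedged illustration of the shortest-path step, the toy sketch below orders hypothetical cut entry points with a small evolutionary search (elitist selection plus segment-reversal mutation); the actual system's encoding and genetic operators are not described in the abstract:

```python
import random

def path_length(order, pts):
    """Total rapid-traverse distance when visiting cut entry points in order."""
    return sum(((pts[a][0] - pts[b][0]) ** 2 + (pts[a][1] - pts[b][1]) ** 2) ** 0.5
               for a, b in zip(order, order[1:]))

def ga_cut_sequence(pts, pop_size=60, generations=300, seed=0):
    rng = random.Random(seed)
    n = len(pts)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: path_length(o, pts))
        elite = pop[: pop_size // 3]            # keep the best third
        children = []
        while len(elite) + len(children) < pop_size:
            child = rng.choice(elite)[:]
            i, j = sorted(rng.sample(range(n), 2))
            child[i:j] = reversed(child[i:j])   # mutation: reverse a segment
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda o: path_length(o, pts))

cuts = [(0, 0), (5, 1), (1, 4), (6, 5), (2, 2)]   # hypothetical entry points
best = ga_cut_sequence(cuts)
print(best, round(path_length(best, cuts), 2))
```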

  13. An unsupervised text mining method for relation extraction from biomedical literature.

    Directory of Open Access Journals (Sweden)

    Changqin Quan

    Full Text Available The wealth of interaction information provided in biomedical articles motivated the implementation of text mining approaches to automatically extract biomedical relations. This paper presents an unsupervised method based on pattern clustering and sentence parsing to deal with biomedical relation extraction. The pattern clustering algorithm is based on the polynomial kernel method, which identifies interaction words from unlabeled data; these interaction words are then used in relation extraction between entity pairs. Dependency parsing and phrase structure parsing are combined for relation extraction. Based on the semi-supervised KNN algorithm, we extend the proposed unsupervised approach to a semi-supervised approach by combining pattern clustering, dependency parsing and phrase structure parsing rules. We evaluated the approaches on two different tasks: (1) protein-protein interaction extraction, and (2) gene-suicide association extraction. The evaluation of task (1) on the benchmark dataset (AImed corpus) showed that our proposed unsupervised approach outperformed three supervised methods (rule-based, SVM-based, and kernel-based, respectively). The proposed semi-supervised approach is superior to the existing semi-supervised methods. The evaluation of gene-suicide association extraction on a smaller dataset from the Genetic Association Database and a larger dataset from publicly available PubMed showed that the proposed unsupervised and semi-supervised methods achieved much higher F-scores than the co-occurrence-based method.

  14. Automatic data generation scheme for finite-element method /FEDGE/ - Computer program

    Science.gov (United States)

    Akyuz, F.

    1970-01-01

    Algorithm provides for automatic input data preparation for the analysis of continuous domains in the fields of structural analysis, heat transfer, and fluid mechanics. The computer program utilizes the natural coordinate systems concept and the finite element method for data generation.

  15. Ceramography and segmentation of polycristalline ceramics: application to grain size analysis by automatic methods

    Energy Technology Data Exchange (ETDEWEB)

    Arnould, X.; Coster, M.; Chermant, J.L.; Chermant, L. [LERMAT, ISMRA, Caen (France); Chartier, T. [SPCTS, ENSCI, Limoges (France)

    2002-07-01

    Determining the mean grain size of ceramics is a very important problem in the ceramic industry. Specific segmentation methods are presented to analyse, in an automatic way, the granulometry and morphological parameters of ceramic materials. The example presented concerns cerine materials. Such investigations yield important information on the sintering process. (orig.)

  16. EnvMine: A text-mining system for the automatic extraction of contextual information

    Directory of Open Access Journals (Sweden)

    de Lorenzo Victor

    2010-06-01

    Full Text Available Abstract Background For ecological studies, it is crucial to count on adequate descriptions of the environments and samples being studied. Such a description must be done in terms of their physicochemical characteristics, allowing a direct comparison between different environments that would be difficult to do otherwise. Also the characterization must include the precise geographical location, to make possible the study of geographical distributions and biogeographical patterns. Currently, there is no schema for annotating these environmental features, and these data have to be extracted from textual sources (published articles). So far, this had to be performed by manual inspection of the corresponding documents. To facilitate this task, we have developed EnvMine, a set of text-mining tools devoted to retrieve contextual information (physicochemical variables and geographical locations) from textual sources of any kind. Results EnvMine is capable of retrieving the physicochemical variables cited in the text, by means of the accurate identification of their associated units of measurement. In this task, the system achieves a recall (percentage of items retrieved) of 92% with less than 1% error. Also a Bayesian classifier was tested for distinguishing parts of the text describing environmental characteristics from others dealing with, for instance, experimental settings. Regarding the identification of geographical locations, the system takes advantage of existing databases such as GeoNames to achieve 86% recall with 92% precision. The identification of a location includes also the determination of its exact coordinates (latitude and longitude), thus allowing the calculation of distances between the individual locations. Conclusion EnvMine is a very efficient method for extracting contextual information from different text sources, like published articles or web pages. This tool can help in determining the precise location and physicochemical
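
    The unit-based identification of physicochemical variables can be sketched in a few lines of Python (with a deliberately tiny, hypothetical unit lexicon; EnvMine's actual dictionaries and disambiguation rules are far richer):

```python
import re

UNITS = r"(?:°C|g/L|mM|km|m|%)"        # tiny hypothetical unit lexicon
VAR_RE = re.compile(rf"(\d+(?:\.\d+)?)\s*({UNITS})")
COORD_RE = re.compile(r"(\d+(?:\.\d+)?)°\s*([NS]),?\s*(\d+(?:\.\d+)?)°\s*([EW])")

text = ("Samples were taken at 36.7° N, 3.1° W from a hypersaline pond "
        "(salinity 22 %, temperature 41 °C, depth 0.3 m).")

print(VAR_RE.findall(text))     # [('22', '%'), ('41', '°C'), ('0.3', 'm')]
print(COORD_RE.findall(text))   # [('36.7', 'N', '3.1', 'W')]
```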

  17. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction

    CERN Document Server

    Jonnalagadda, Siddhartha

    2011-01-01

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. The tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested the impact of our tool on the task of PPI extraction: it improved the F-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.

  18. Automatic Extraction of Building Roof Planes from Airborne LIDAR Data Applying AN Extended 3d Randomized Hough Transform

    Science.gov (United States)

    Maltezos, Evangelos; Ioannidis, Charalabos

    2016-06-01

    This study aims to extract building roof planes automatically from airborne LIDAR data by applying an extended 3D Randomized Hough Transform (RHT). The proposed methodology consists of three main steps, namely detection of building points, plane detection and refinement. For the detection of the building points, the vegetated areas are first segmented from the scene content and the bare earth is extracted afterwards. The automatic plane detection of each building is performed by applying extensions of the RHT, associated with additional constraint criteria during the random selection of the 3 points aiming at optimum adaptation to the building rooftops, as well as a simple design of the accumulator that efficiently detects the prominent planes. The refinement of the plane detection is conducted based on the relationship between neighbouring planes, the locality of the point and the use of additional information. An indicative experimental comparison is implemented to verify the advantages of the extended RHT over the 3D Standard Hough Transform (SHT), and the sensitivity of the proposed extensions and accumulator design is examined in view of quality and computational time compared to the default RHT. Further, a comparison between the extended RHT and RANSAC is carried out. The plane detection results illustrate the potential of the proposed extended RHT in terms of robustness and efficiency for several applications.
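
    A toy version of the randomized Hough voting scheme, without the paper's additional constraint criteria or the refinement stage, can be sketched as follows:

```python
import numpy as np

def rht_plane(points, iters=2000, ang_res=5.0, d_res=0.1, seed=0):
    """Toy 3D randomized Hough transform: sample 3 points, form the plane
    through them, and vote in a quantized (theta, phi, d) accumulator."""
    rng = np.random.default_rng(seed)
    acc = {}
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:            # degenerate (collinear) sample
            continue
        n /= norm
        if n[2] < 0:               # fix the normal's sign for stable binning
            n = -n
        theta = np.degrees(np.arccos(np.clip(n[2], -1, 1)))
        phi = np.degrees(np.arctan2(n[1], n[0])) % 360
        d = np.dot(n, p0)
        key = (round(theta / ang_res), round(phi / ang_res), round(d / d_res))
        acc[key] = acc.get(key, 0) + 1
    return max(acc, key=acc.get)   # the most-voted plane bin

# toy roof: noisy points on the plane z = 0.5x + 1
rng = np.random.default_rng(1)
xy = rng.uniform(0, 10, (500, 2))
z = 0.5 * xy[:, 0] + 1 + rng.normal(0, 0.01, 500)
print(rht_plane(np.column_stack([xy, z])))
```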

  19. Green technology approach towards herbal extraction method

    Science.gov (United States)

    Mutalib, Tengku Nur Atiqah Tengku Ab; Hamzah, Zainab; Hashim, Othman; Mat, Hishamudin Che

    2015-05-01

    The aim of the present study was to compare maceration of selected herbs using green and non-green solvents. Water and d-limonene are green solvents, while chloroform and ethanol are non-green solvents. The selected herbs were Clinacanthus nutans leaf and stem, Orthosiphon stamineus leaf and stem, Sesbania grandiflora leaf, Pluchea indica leaf, Morinda citrifolia leaf and Citrus hystrix leaf. The extracts were compared by determination of total phenolic content. Total phenols were analyzed using a spectrophotometric technique based on the Folin-Ciocalteu reagent. Gallic acid was used as the standard compound and total phenols were expressed as mg/g gallic acid equivalent (GAE). The most suitable and effective solvent was water, which produced the highest total phenol content compared to the other solvents. Among the selected herbs, Orthosiphon stamineus leaves contained the highest total phenols at 9.087 mg/g.

  20. Automatic extraction of the mid-sagittal plane using an ICP variant

    Science.gov (United States)

    Fieten, Lorenz; Eschweiler, Jörg; de la Fuente, Matías; Gravius, Sascha; Radermacher, Klaus

    2008-03-01

    Precise knowledge of the mid-sagittal plane is important for the assessment and correction of several deformities. Furthermore, the mid-sagittal plane can be used for the definition of standardized coordinate systems such as pelvis or skull coordinate systems. A popular approach for mid-sagittal plane computation is based on the selection of anatomical landmarks located either directly on the plane or symmetrically to it. However, the manual selection of landmarks is a tedious, time-consuming and error-prone task, which requires great care. In order to overcome this drawback, previously it was suggested to use the iterative closest point (ICP) algorithm: After an initial mirroring of the data points on a default mirror plane, the mirrored data points should be registered iteratively to the model points using rigid transforms. Finally, a reflection transform approximating the cumulative transform could be extracted. In this work, we present an ICP variant for the iterative optimization of the reflection parameters. It is based on a closed-form solution to the least-squares problem of matching data points to model points using a reflection. In experiments on CT pelvis and skull datasets our method showed a better ability to match homologous areas.
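
    The closed-form step can be sketched with an SVD, in the style of the Kabsch/Umeyama solution but with the determinant forced to -1 so that the optimal orthogonal transform is a reflection (a minimal sketch, not the authors' implementation):

```python
import numpy as np

def fit_reflection(A, B):
    """Least-squares orthogonal transform with det = -1 (a reflection)
    mapping point set A onto B, via the usual SVD closed form."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)              # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(3)
    D[2, 2] = -np.linalg.det(Vt.T @ U.T)   # force det(Q) = -1
    Q = Vt.T @ D @ U.T
    t = cb - Q @ ca
    return Q, t

# toy test: mirror a point cloud across the plane x = 0 and recover it
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 3))
M = np.diag([-1.0, 1.0, 1.0])              # true reflection
B = A @ M.T
Q, t = fit_reflection(A, B)
print(np.round(Q, 3), np.round(t, 3))      # ~M and ~0
```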

  1. Development of automatic blood extraction device with a micro-needle for blood-sugar level measurement

    Science.gov (United States)

    Kawanaka, Kaichiro; Uetsuji, Yasutomo; Tsuchiya, Kazuyoshi; Nakamachi, Eiji

    2008-12-01

    In this study, a portable HMS (Health Monitoring System) device is newly developed. Its features are (1) puncturing a blood vessel with a minimally invasive micro-needle, (2) extracting and transferring human blood, and (3) measuring the blood glucose level. This miniature SMBG (Self-Monitoring of Blood Glucose) device employs a syringe reciprocal blood extraction system equipped with an electro-mechanical control unit for accurate and steady operation. The device consists of (a) a disposable syringe unit, (b) a non-disposable body unit, and (c) a glucose enzyme sensor. The syringe unit consists of the syringe itself, its cover, a piston and a titanium alloy micro-needle whose inner diameter is about 100 µm. The body unit consists of a linear-driven stepping motor, a piston jig, which connects directly to the shaft of the stepping motor, and a syringe jig, which is driven by combining with the piston jig and a slider, which fixes the syringe jig. The thrust required to drive the slider is designed to be greater than the blood extraction force. Because of this driving mechanism, the automatic blood extraction and discharging processes are completed by only one linear-driven stepping motor. The miniature SMBG device was experimentally confirmed to achieve more than 90% volumetric efficiency at a piston driving speed of 1.0 mm/s. Further, the blood sugar level was successfully measured using the glucose enzyme sensor.

  2. Combination of automatic HPLC-RIA method for determination of estrone and estradiol in serum.

    Science.gov (United States)

    Yasui, T; Yamada, M; Kinoshita, H; Uemura, H; Yoneda, N; Irahara, M; Aono, T; Sunahara, S; Mito, Y; Kurimoto, F; Hata, K

    1999-01-01

    We developed a highly sensitive assay for estrone and 17 beta-estradiol in serum. Estrone and 17 beta-estradiol, obtained by solid-phase extraction using a Sep-Pak tC18 cartridge, were purified by high-performance liquid chromatography (HPLC). Quantitation of estrone and 17 beta-estradiol was carried out by radioimmunoassay. Notably, this automatic system of extraction and HPLC succeeded in analyzing 80 samples a week. Intra-assay coefficients of variation (CV) for estrone and 17 beta-estradiol ranged from 19.5 to 28.7% and from 8.5 to 13.7%, respectively. The minimum detectable doses for estrone and 17 beta-estradiol were 1.04 pg/ml and 0.64 pg/ml, respectively. The serum levels of 17 beta-estradiol measured by our method correlated strongly with those obtained by gas chromatography-mass spectrometry (GC-MS). The serum levels of estrone and 17 beta-estradiol in 154 peri- and postmenopausal women were estimated to be between 15 and 27 pg/ml and between 3.5 and 24.0 pg/ml, respectively, while the serum level of 17 beta-estradiol in postmenopausal women, in particular, was estimated to be from 3.5 to 6.3 pg/ml. For postmenopausal women who suffered from vasomotor symptoms, the mean levels of estrone and 17 beta-estradiol at 12 to 18 hours after treatment with daily 0.625 mg conjugated equine estrogen (CEE) and 2.5 mg medroxyprogesterone acetate (MPA) were 135.0 and 21.3 pg/ml at 12 months, respectively. On the other hand, the levels of estrone and 17 beta-estradiol at 12 to 18 hours after treatment with CEE and MPA every other day were 73.4 and 15.3 pg/ml, respectively. These highly sensitive assays for estrone and 17 beta-estradiol are useful for measuring low levels of estrogen in postmenopausal women and for monitoring estrogen levels in women receiving CEE as hormone replacement therapy. PMID:10633293

  3. Semi-automatic version of the potentiometric titration method for characterization of uranium compounds.

    Science.gov (United States)

    Cristiano, Bárbara F G; Delgado, José Ubiratan; da Silva, José Wanderley S; de Barros, Pedro D; de Araújo, Radier M S; Dias, Fábio C; Lopes, Ricardo T

    2012-09-01

    The potentiometric titration method was used for characterization of uranium compounds to be applied in intercomparison programs. The method is applied with traceability assured using a potassium dichromate primary standard. A semi-automatic version was developed to reduce the analysis time and the operator variation. The standard uncertainty in determining the total concentration of uranium was around 0.01%, which is suitable for uranium characterization and compatible with those obtained by manual techniques. PMID:22406220

  4. On the method of the automatic modeling in hydraulic pipe networks

    Institute of Scientific and Technical Information of China (English)

    孙以泽; 徐本洲; 王祖温

    2003-01-01

    In this paper, the dynamic characteristics of pipes are analyzed with a frequency-domain method, and a simple and practical description method is put forward. By establishing the model library beforehand, the modeling of the pipe network is completed automatically, and the impedance characteristics of the pipe network can be accurately calculated, achieving a reasonable configuration of the pipe network so as to decrease the pressure pulsation.

  5. Novel methods for 3-D semi-automatic mapping of fracture geometry at exposed rock faces

    OpenAIRE

    Feng, Quanhong

    2001-01-01

    To analyse the influence of fractures on the hydraulic and mechanical behaviour of fractured rock masses, it is essential to characterise fracture geometry at exposed rock faces. This thesis describes three semi-automatic methods for measuring and quantifying geometrical parameters of fractures, and aims to offer a novel approach to the traditional mapping methods. Three techniques, i.e. geodetic total station, close-range photogrammetry and 3-D laser scanner, are used in this study for measurement of f...

  6. Automatic electricity markets data extraction for realistic multi-agent simulations

    DEFF Research Database (Denmark)

    Pereira, Ivo F.; Sousa, Tiago M.; Praca, Isabel;

    2014-01-01

    markets data available on-line; capability of dealing with different file formats and types, some of them inserted by the user, resulting from information obtained not on-line but based on the possible collaboration with market entities; definition and implementation of a database gathering information from different market sources, even including different market types; machine learning approach for automatic definition of the download periodicity of new information available on-line. This is a crucial tool to go a step forward in electricity markets simulation, since the integration of this database...

  7. An atlas-based fuzzy connectedness method for automatic tissue classification in brain MRI

    Institute of Scientific and Technical Information of China (English)

    ZHOU Yongxin; BAI Jing

    2006-01-01

    A framework incorporating a subject-registered atlas into the fuzzy connectedness (FC) method is proposed for the automatic tissue classification of 3D images of brain MRI. The pre-labeled atlas is first registered onto the subject to provide an initial approximate segmentation. The initial segmentation is used to estimate the intensity histograms of gray matter and white matter. Based on the estimated intensity histograms, multiple seed voxels are assigned to each tissue automatically. The normalized intensity histograms are utilized in the FC method as the intensity probability density function (PDF) directly. Relative fuzzy connectedness technique is adopted in the final classification of gray matter and white matter. Experimental results based on the 20 data sets from IBSR are included, as well as comparisons of the performance of our method with that of other published methods. This method is fully automatic and operator-independent. Therefore, it is expected to find wide applications, such as 3D visualization, radiation therapy planning, and medical database construction.

  8. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    Science.gov (United States)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed to assess the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using a leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
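
    The univariate box-whisker test is simple to sketch: the snippet below flags subjects whose (hypothetical) segmentation-quality feature falls outside the Tukey fences:

```python
import numpy as np

def tukey_outliers(values, k=1.5):
    """Flag values outside the box-whisker fences Q1 - k*IQR, Q3 + k*IQR."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [(i, v) for i, v in enumerate(values) if v < lo or v > hi]

# toy feature: an overlap-like quality score per subject, one obvious failure
scores = [0.91, 0.89, 0.93, 0.90, 0.88, 0.92, 0.41, 0.90]
print(tukey_outliers(scores))   # -> [(6, 0.41)]
```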

  9. NeurphologyJ: An automatic neuronal morphology quantification method and its application in pharmacological discovery

    Directory of Open Access Journals (Sweden)

    Huang Hui-Ling

    2011-06-01

    Full Text Available Abstract Background Automatic quantification of neuronal morphology from images of fluorescence microscopy plays an increasingly important role in high-content screenings. However, there exist very few freeware tools and methods which provide automatic neuronal morphology quantification for pharmacological discovery. Results This study proposes an effective quantification method, called NeurphologyJ, capable of automatically quantifying neuronal morphologies such as soma number and size, neurite length, and neurite branching complexity (which is highly related to the numbers of attachment points and ending points). NeurphologyJ is implemented as a plugin to ImageJ, an open-source Java-based image processing and analysis platform. The high performance of NeurphologyJ arises mainly from an elegant image enhancement method, which allows some morphology operations of image processing to be applied efficiently. We evaluated NeurphologyJ by comparing it with both the computer-aided manual tracing method NeuronJ and an existing ImageJ-based plugin method NeuriteTracer. Our results reveal that NeurphologyJ is comparable to NeuronJ: the correlation coefficient between the estimated neurite lengths is as high as 0.992. NeurphologyJ can accurately measure neurite length, soma number, neurite attachment points, and neurite ending points from a single image. Furthermore, the quantification result of nocodazole perturbation is consistent with its known inhibitory effect on neurite outgrowth. We were also able to calculate the IC50 of nocodazole using NeurphologyJ. This reveals that NeurphologyJ is effective enough to be utilized in applications of pharmacological discoveries. Conclusions This study proposes an automatic and fast neuronal quantification method NeurphologyJ. The ImageJ plugin with support for batch processing is easily customized for dealing with high-content screening applications. The source codes of NeurphologyJ (interactive and high

  10. Free Model of Sentence Classifier for Automatic Extraction of Topic Sentences

    OpenAIRE

    M.L. Khodra; D.H. Widyantoro; E.A. Aziz; B.R. Trilaksono

    2011-01-01

    This research employs a free model that uses only sentential features, without paragraph context, to extract the topic sentences of a paragraph. For finding the optimal combination of features, corpus-based classification is used to construct a sentence classifier as the model. The sentence classifier is trained by using Support Vector Machine (SVM). The experiment shows that position and meta-discourse features are more important than syntactic features for extracting topic sentences, and the best perfor...

  11. Automatic extraction of semantic relations between medical entities: a rule based approach

    OpenAIRE

    Ben Abacha Asma; Zweigenbaum Pierre

    2011-01-01

    Abstract Background Information extraction is a complex task which is necessary to develop high-precision information retrieval tools. In this paper, we present the platform MeTAE (Medical Texts Annotation and Exploration). MeTAE allows (i) extracting and annotating medical entities and relationships from medical texts and (ii) exploring semantically the produced RDF annotations. Results Our annotation approach relies on linguistic patterns and domain knowledge and consists of two steps: (i) r...

  12. Effect of Temperature on the Color of Natural Dyes Extracted Using Pressurized Hot Water Extraction Method

    OpenAIRE

    Nursyamirah A. Razak; Siti M. Tumin; Ruziyati Tajuddin

    2011-01-01

    Problem statement: Traditionally, extraction of natural dyes by boiling produced only a single tone of colorant/dye and involved plenty of water over several hours of extraction time. A new, modern extraction technique should be introduced, especially to textile dyers, so that a variety of colorant tones can be produced in a shorter time with less consumption of water. Approach: This study demonstrated Pressurized Hot Water Extraction (PHWE) as a new technique to extract colorants...

  13. Semi-automatic extraction of sectional view from point clouds - The case of Ottmarsheim's abbey-church

    Science.gov (United States)

    Landes, T.; Bidino, S.; Guild, R.

    2014-06-01

    Today, elevations or sectional views of buildings are often produced from terrestrial laser scanning. However, due to the amount of data to process, and because usually 2D maps are required by customers, the 3D point cloud is often degraded into 2D slices. In a sectional view, not only the portions of the object which are intersected by the cutting plane but also edges and contours of other parts of the object which are visible behind the cutting plane are represented. To avoid tedious manual drawing, the aim of this work is to propose a semi-automatic approach for creating sectional views by point cloud processing. The extraction of sectional views requires, as a first step, the segmentation of the point cloud into planar and non-planar entities. Since arches, vaults and columns can be found in cultural heritage buildings, the position and the direction of the sectional view must be taken into account before contour extraction. Indeed, the edges of surfaces of revolution depend on the chosen view. The developed extraction approach is detailed based on point clouds acquired inside and outside churches. The resulting sectional view has been evaluated in a qualitative and quantitative way by comparing it with a reference sectional view made by hand. A mean deviation of 3 cm between both sections proves that the proposed approach is promising. Regarding the processing time, despite a few manual corrections, it has saved 40% of the time required for manual drawing.
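
    The core slicing operation behind such sectional views, keeping only the points within a thin slab around the cutting plane, can be sketched as follows (a minimal sketch; the paper's segmentation of planar/non-planar entities and the contour extraction are not reproduced):

```python
import numpy as np

def section_points(cloud, plane_point, plane_normal, thickness=0.02):
    """Keep the points of a cloud lying within +/- thickness/2 of a cutting
    plane defined by a point and a normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    dist = (cloud - plane_point) @ n          # signed distance to the plane
    return cloud[np.abs(dist) <= thickness / 2]

# toy cloud: random points in a 10 m cube, cut by the vertical plane y = 5
cloud = np.random.default_rng(0).uniform(0, 10, (100000, 3))
slab = section_points(cloud, np.array([0.0, 5.0, 0.0]),
                      np.array([0.0, 1.0, 0.0]))
print(slab.shape)   # roughly 0.2 % of the points survive
```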

  14. Method of automatic image registration of three-dimensional range of archaeological restoration

    International Nuclear Information System (INIS)

    We propose an automatic registration system for reconstruction of various positions of a large object based on a static structured light pattern. The system combines the technology of stereo vision, a structured light pattern, the positioning system of the vision sensor, and an algorithm that simplifies the process of finding correspondences for the modeling of large objects. A new structured light pattern based on the Kautz sequence is proposed; using this pattern as a static pattern, a new registration method is implemented. (Author)

  15. Automatically classifying sentences in full-text biomedical articles into Introduction, Methods, Results and Discussion

    OpenAIRE

    Agarwal, Shashank; Yu, Hong

    2009-01-01

    Biomedical texts can be typically represented by four rhetorical categories: Introduction, Methods, Results and Discussion (IMRAD). Classifying sentences into these categories can benefit many other text-mining tasks. Although many studies have applied different approaches for automatically classifying sentences in MEDLINE abstracts into the IMRAD categories, few have explored the classification of sentences that appear in full-text biomedical articles. We first evaluated whether sentences in...

  16. A Method for Automatic Identification of Reliable Heart Rates Calculated from ECG and PPG Waveforms

    OpenAIRE

    Yu, Chenggang; Liu, Zhenqiu; McKenna, Thomas; Reisner, Andrew T.; Reifman, Jaques

    2006-01-01

    Objective: The development and application of data-driven decision-support systems for medical triage, diagnostics, and prognostics pose special requirements on physiologic data. In particular, the data must be reliable in order to produce meaningful results. The authors describe a method that automatically estimates the reliability of reference heart rates (HRr) derived from electrocardiogram (ECG) waveforms and photoplethysmogram (PPG) waveforms recorded by vital-signs monitors. The reliabilit...

  17. Manual versus automatic bladder wall thickness measurements: a method comparison study

    OpenAIRE

    Oelke, M.; Mamoulakis, C; Ubbink, D T; Rosette, de la, J.J.M.C.H.; Wijkstra, H.

    2009-01-01

    Purpose To compare repeatability and agreement of conventional ultrasound bladder wall thickness (BWT) measurements with automatically obtained BWT measurements by the BVM 6500 device. Methods Adult patients with lower urinary tract symptoms, urinary incontinence, or postvoid residual urine were urodynamically assessed. During two subsequent cystometry sessions the infusion pump was temporarily stopped at 150 and 250 ml bladder filling to measure BWT with conventional ultrasound and the BVM 6...

  18. Technical characterization by image analysis: an automatic method of mineralogical studies

    International Nuclear Information System (INIS)

    The application of a fully automated modern image analysis method for the study of grain size distribution, modal assays, degree of liberation and mineralogical associations is discussed. The image analyser is interfaced with a scanning electron microscope and an energy-dispersive X-ray analyser. The image generated by backscattered electrons is analysed automatically, and the system has been used in assessment studies of applied mineralogy as well as in process control in the mining industry. (author)

  19. Antioxidant and Antibacterial Assays on Polygonum minus Extracts: Different Extraction Methods

    OpenAIRE

    Norsyamimi Hassim; Masturah Markom; Nurina Anuar; Kurnia Harlina Dewi; Syarul Nataqain Baharum; Normah Mohd Noor

    2015-01-01

    The effect of solvent type and extraction method was investigated to study the antioxidant and antibacterial activity of Polygonum minus. Two extraction methods were used: solvent extraction using a Soxhlet apparatus and supercritical fluid extraction (SFE). The antioxidant capacity was evaluated using the ferric reducing/antioxidant power (FRAP) assay and the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical-scavenging assay. The highest polyphenol content was obtained from the m...

  20. Free Model of Sentence Classifier for Automatic Extraction of Topic Sentences

    Directory of Open Access Journals (Sweden)

    M.L. Khodra

    2011-04-01

    Full Text Available This research employs a free model that uses only sentential features, without paragraph context, to extract the topic sentences of a paragraph. For finding the optimal combination of features, corpus-based classification is used to construct a sentence classifier as the model. The sentence classifier is trained by using Support Vector Machine (SVM). The experiment shows that position and meta-discourse features are more important than syntactic features for extracting topic sentences, and the best performer (80.68%) is the SVM classifier with all features.
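
    A minimal sketch of such a sentence classifier, with made-up feature vectors standing in for the paper's position, meta-discourse, and syntactic features, might look like this in scikit-learn:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# hypothetical sentential features per sentence:
# [relative position in paragraph (0-1), meta-discourse cue present (0/1),
#  sentence length in tokens]
X = [[0.0, 1, 18], [0.5, 0, 9], [1.0, 0, 12],
     [0.0, 0, 22], [0.3, 1, 15], [0.9, 0, 7]]
y = [1, 0, 0, 1, 1, 0]   # 1 = topic sentence

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict([[0.0, 1, 20]]))   # early, cue-bearing sentence -> likely 1
```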

  1. An automatic segmentation method for building facades from vehicle-borne LiDAR point cloud data based on fundamental geographical data

    Science.gov (United States)

    Li, Yongqiang; Mao, Jie; Cai, Lailiang; Zhang, Xitong; Li, Lixue

    2016-03-01

    In this paper, the author proposes a segmentation method based on fundamental geographic data. The algorithm is described as follows: First, convert the coordinate system of the fundamental geographic data to that of the vehicle-borne LiDAR point cloud through some data preprocessing work, so that the two coordinate systems are aligned. Second, simplify the features of the fundamental geographic data, extract effective contour information of the buildings, then set a suitable buffer threshold value for the building contours, and segment out the point cloud data of the building facades automatically. Third, adopt a reasonable quality assessment mechanism to check and evaluate the segmentation results and control their quality. Experiments show that the proposed method is simple and effective. The method also has reference value for the automatic segmentation of surface features from other types of point clouds.

  2. A new method for the automatic interpretation of Schlumberger and Wenner sounding curves

    Science.gov (United States)

    Zohdy, A.A.R.

    1989-01-01

    A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples. -Author

  3. Automatic Extraction of Three Dimensional Prismatic Machining Features from CAD Model

    Directory of Open Access Journals (Sweden)

    B.V. Sudheer Kumar

    2011-12-01

    Full Text Available Machining feature recognition provides the necessary platform for computer-aided process planning (CAPP) and plays a key role in the integration of computer-aided design (CAD) and computer-aided manufacturing (CAM). This paper presents a new methodology for extracting features from the geometrical data of the CAD model present in the form of Virtual Reality Modeling Language (VRML) files. First, the point cloud is separated into the available number of horizontal cross sections. Each cross section consists of a 2D point cloud. Then, a collection of points represented by a set of feature points is derived for each slice, describing the cross section accurately and providing the basis for feature extraction. These extracted manufacturing features give the necessary information regarding the manufacturing activities needed to manufacture the part. Software in the Microsoft Visual C++ environment is developed to recognize the features, where geometric information of the part is extracted from the CAD model. Using this data, an output file, i.e. a text file, is generated, which lists all the machinable features present in the part. This process has been tested on various parts and successfully extracted all the features

  4. An efficient method of key-frame extraction based on a cluster algorithm.

    Science.gov (United States)

    Zhang, Qiang; Yu, Shao-Pei; Zhou, Dong-Sheng; Wei, Xiao-Peng

    2013-12-18

    This paper proposes a novel method of key-frame extraction for use with motion capture data. This method is based on an unsupervised cluster algorithm. First, the motion sequence is clustered into two classes by the similarity distance of the adjacent frames so that the thresholds needed in the next step can be determined adaptively. Second, a dynamic cluster algorithm called ISODATA is used to cluster all the frames and the frames nearest to the center of each class are automatically extracted as key-frames of the sequence. Unlike many other clustering techniques, the present improved cluster algorithm can automatically address different motion types without any need for specified parameters from users. The proposed method is capable of summarizing motion capture data reliably and efficiently. The present work also provides a meaningful comparison between the results of the proposed key-frame extraction technique and other previous methods. These results are evaluated in terms of metrics that measure reconstructed motion and the mean absolute error value, which are derived from the reconstructed data and the original data.
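
    The cluster-and-pick-centre idea can be roughly sketched as below; note that KMeans stands in for ISODATA here (ISODATA additionally splits and merges clusters as it runs, which is what removes the need for a user-specified cluster count):

```python
import numpy as np
from sklearn.cluster import KMeans

def key_frames(frames, n_clusters=4, seed=0):
    """Cluster pose vectors and return, per cluster, the index of the frame
    nearest the cluster centre (KMeans stands in for ISODATA here)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(frames)
    keys = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(frames[idx] - km.cluster_centers_[c], axis=1)
        keys.append(int(idx[np.argmin(d)]))
    return sorted(keys)

# toy 'motion': 200 frames of 10 joint angles drifting through 4 poses
rng = np.random.default_rng(0)
poses = rng.normal(size=(4, 10)) * 3
frames = np.vstack([p + rng.normal(0, 0.1, (50, 10)) for p in poses])
print(key_frames(frames))   # one representative frame per pose
```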

  5. A new automatic design method to develop multilayer thin film devices for high power laser applications

    International Nuclear Information System (INIS)

    Optical thin film devices play a major role in many areas of frontier technology, from the development of various laser systems to the design of complex precision optical systems. The design and development of these devices is particularly challenging when they are meant for high power laser applications. In these cases, besides the desired optical characteristics, the devices are expected to satisfy a whole range of different needs such as high damage threshold, durability, etc. In the present work a novel, completely automatic design method based on the Modified Complex Method has been developed for the design of high power thin film devices. Unlike most other methods, it does not need a suitable starting design; a quarterwave design is sufficient to start with. If required, it is capable of generating its own starting design. The computer code of the method is very simple to implement. This report discusses this novel automatic design method and presents various practicable output designs generated by it. The efficiency of the method relative to other powerful methods is presented for the design of a broadband IR antireflection coating. The method is also coupled with 2D and 3D electric field analysis programmes to produce high damage threshold designs. Some experimental devices developed using such designs are also presented in the report. (author). 36 refs., 41 figs

  6. Brazil nut sorting for aflatoxin prevention: a comparison between automatic and manual shelling methods

    Directory of Open Access Journals (Sweden)

    Ariane Mendonça Pacheco

    2013-06-01

    Full Text Available The impact of automatic and manual shelling methods during manual/visual sorting of different batches of Brazil nuts from the 2010 and 2011 harvests was evaluated in order to investigate aflatoxin prevention. The samples were tested as follows: in-shell, shell, shelled, and pieces, in order to evaluate the moisture content (mc), water activity (Aw), and total aflatoxin (LOD = 0.3 µg/kg and LOQ = 0.85 µg/kg) at the Brazil nut processing plant. The aflatoxin results obtained for the manually shelled nut samples ranged from 3.0 to 60.3 µg/g, and from 2.0 to 31.0 µg/g for the automatically shelled samples. All samples showed mc levels below the limit of 15%; on the other hand, shelled samples from both harvests showed Aw levels above the limit. There were no significant differences between the manual and automatic shelling results during the sorting stages. On the other hand, visual sorting was effective in decreasing the aflatoxin contamination in both methods.

  7. A fast automatic target detection method for detecting ships in infrared scenes

    Science.gov (United States)

    Özertem, Kemal Arda

    2016-05-01

    Automatic target detection in infrared scenes is a vital task for many application areas like defense, security and border surveillance. For anti-ship missiles, having a fast and robust ship detection algorithm is crucial for overall system performance. In this paper, a straightforward yet effective ship detection method for infrared scenes is introduced. First, morphological grayscale reconstruction is applied to the input image, followed by automatic thresholding of the background-suppressed image. For the segmentation step, connected component analysis is employed to obtain target candidate regions. At this point, the detection is still vulnerable to outliers such as small objects with relatively high intensity values or clouds. To deal with this drawback, a post-processing stage with two different methods is introduced. First, noisy detection results are rejected with respect to target size. Second, the waterline is detected using the Hough transform, and detection results located above the waterline, with a small margin, are rejected. After the post-processing stage, undesired holes may still remain, which cause one object to be detected as multiple objects or prevent an object from being detected as a whole. To improve the detection performance, another automatic thresholding is applied only to the target candidate regions. Finally, the two detection results are fused and the post-processing stage is repeated to obtain the final detection result. The performance of the overall methodology is tested with real-world infrared test data.
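
    The first two stages, background suppression by grayscale reconstruction followed by thresholding and connected-component analysis, can be approximated with scikit-image as follows (a toy sketch; the h parameter, minimum area, and all post-processing stages are assumptions, not values from the paper):

```python
import numpy as np
from skimage.morphology import reconstruction
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def detect_bright_targets(img, h=0.3, min_area=20):
    """Suppress the background with morphological grayscale reconstruction
    (h-dome style), threshold the residue, and return candidate regions."""
    seed = np.clip(img - h, 0, None)        # marker = image minus h
    background = reconstruction(seed, img)   # reconstruction by dilation
    residue = img - background               # bright structures only
    mask = residue > threshold_otsu(residue)
    regions = [r for r in regionprops(label(mask)) if r.area >= min_area]
    return [r.bbox for r in regions]

# toy IR scene: dark sea, one bright ship-like blob
img = np.full((120, 160), 0.2)
img[60:70, 40:80] = 0.9
img += np.random.default_rng(0).normal(0, 0.01, img.shape)
print(detect_bright_targets(img))   # one bounding box around the 'ship'
```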

  8. An automatic method to generate domain-specific investigator networks using PubMed abstracts

    Directory of Open Access Journals (Sweden)

    Gwinn Marta

    2007-06-01

    Full Text Available Abstract Background Collaboration among investigators has become critical to scientific research. This includes ad hoc collaboration established through personal contacts as well as formal consortia established by funding agencies. Continued growth in online resources for scientific research and communication has promoted the development of highly networked research communities. Extending these networks globally requires identifying additional investigators in a given domain, profiling their research interests, and collecting current contact information. We present a novel strategy for building investigator networks dynamically and producing detailed investigator profiles using data available in PubMed abstracts. Results We developed a novel strategy to obtain detailed investigator information by automatically parsing the affiliation string in PubMed records. We illustrated the results by using a published literature database in human genome epidemiology (HuGE Pub Lit) as a test case. Our parsing strategy extracted country information from 92.1% of the affiliation strings in a random sample of PubMed records and in 97.0% of HuGE records, with accuracies of 94.0% and 91.0%, respectively. Institution information was parsed from 91.3% of the general PubMed records (accuracy 86.8%) and from 94.2% of HuGE PubMed records (accuracy 87.0%). We demonstrated the application of our approach to dynamic creation of investigator networks by creating a prototype information system containing a large database of PubMed abstracts relevant to human genome epidemiology (HuGE Pub Lit), indexed using PubMed medical subject headings converted to Unified Medical Language System concepts. Our method was able to identify 70–90% of the investigators/collaborators in three different human genetics fields; it also successfully identified 9 of 10 genetics investigators within the PREBIC network, an existing preterm birth research network. Conclusion We successfully created a
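
    A heuristic affiliation parse of the kind described, taking the first comma-separated field as the institution and the last as the country, can be sketched in a few lines (with a deliberately tiny country list; the real system presumably relies on a much fuller gazetteer and additional rules):

```python
import re

COUNTRIES = {"USA", "United Kingdom", "Japan", "Canada"}  # tiny demo list

def parse_affiliation(affil):
    """Heuristic parse of a PubMed affiliation string: the institution is
    usually the first comma-separated field, the country the last one."""
    parts = [p.strip() for p in affil.split(",")]
    institution = parts[0]
    last = re.sub(r"\.$|;.*$", "", parts[-1]).strip()  # drop trailing dot/email
    country = last if last in COUNTRIES else None
    return institution, country

affil = ("Department of Epidemiology, School of Public Health, "
         "University of Michigan, Ann Arbor, USA.")
print(parse_affiliation(affil))
# -> ('Department of Epidemiology', 'USA')
```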

  9. GDRMS: a system for automatic extraction of the disease-centre relation

    Science.gov (United States)

    Yang, Ronggen; Zhang, Yue; Gong, Lejun

    2012-01-01

    With the rapid increase of biomedical literature, the deluge of new articles is leading to information overload. Extracting the available knowledge from the huge amount of biomedical literature has become a major challenge. GDRMS is a tool that extracts the relationships between disease and gene, and between gene and gene, from the biomedical literature using text mining technology. It is a rule-based system which also provides disease-centre network visualization, constructs a disease-gene database, and offers a gene engine for understanding gene function. The main focus of GDRMS is to provide the research community with a valuable opportunity to explore the relationship between disease and gene in the etiology of disease.

  10. Automatic extraction of PIOPED interpretations from ventilation/perfusion lung scan reports.

    OpenAIRE

    Fiszman, M.; Haug, P. J.; Frederick, P. R.

    1998-01-01

    Free-text documents are the main type of data produced by a radiology department in a hospital information system. While this type of data is readily accessible for clinical data review, it cannot be accessed by other applications to perform medical decision support, quality assurance, and outcome studies. In an attempt to solve this problem, natural language processing systems have been developed and tested against chest x-ray reports to extract relevant clinical information and make it acc...

  11. Optimization-based Method for Automated Road Network Extraction

    Energy Technology Data Exchange (ETDEWEB)

    Xiong, D

    2001-09-18

    Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper reviews the literature on road extraction and describes a study of an optimization-based method for automated road network extraction.

  12. Automatic Crack Detection and Classification Method for Subway Tunnel Safety Monitoring

    Directory of Open Access Journals (Sweden)

    Wenyu Zhang

    2014-10-01

    Full Text Available Cracks are an important indicator reflecting the safety status of infrastructures. This paper presents an automatic crack detection and classification methodology for subway tunnel safety monitoring. With the application of high-speed complementary metal-oxide-semiconductor (CMOS) industrial cameras, the tunnel surface can be captured and stored in digital images. In the next step, local dark regions with potential crack defects are segmented from the original gray-scale images by utilizing morphological image processing techniques and thresholding operations. In the feature extraction process, we present a distance-histogram-based shape descriptor that effectively describes the spatial shape difference between cracks and other irrelevant objects. Along with other features, the classification results successfully remove over 90% of misidentified objects. Also, compared with the original gray-scale images, over 90% of the crack length is preserved in the final output binary images. The proposed approach was tested on safety monitoring for Beijing Subway Line 1. The experimental results revealed rules for parameter settings and also showed that the proposed approach is effective and efficient for automatic crack detection and classification.
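
    The distance-histogram shape descriptor can be sketched as follows (one plausible reading of the idea, not the authors' exact formulation): elongated cracks spread their centroid-to-contour distances across many bins, while compact blobs concentrate them near the maximum.

```python
import numpy as np

def distance_histogram(contour, bins=16):
    """Shape descriptor: normalised histogram of centroid-to-contour
    distances. Elongated cracks spread mass across bins; compact blobs
    concentrate it."""
    c = contour.mean(axis=0)
    d = np.linalg.norm(contour - c, axis=1)
    d /= d.max() + 1e-12                      # scale invariance
    hist, _ = np.histogram(d, bins=bins, range=(0, 1))
    return hist / hist.sum()

# toy shapes: a thin 'crack' line vs. a circular blob
t = np.linspace(0, 1, 200)
crack = np.column_stack([t * 100, t * 3])     # long thin segment
blob = np.column_stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
print(distance_histogram(crack).round(2))     # spread across bins
print(distance_histogram(blob).round(2))      # mass piles into the last bin
```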

  13. An Automatic Cycle-Slip Processing Method and Its Precision Analysis

    Institute of Scientific and Technical Information of China (English)

    ZHENG Zuoya; LU Xiushan

    2006-01-01

    On the basis of analyzing and researching current algorithms for cycle-slip detection and correction, a new method is put forward in this paper: a reasonable cycle-slip detection condition and algorithm, with the corresponding program COMPRE (COMpass PRE-processing), to detect and correct cycle slips automatically. A comparison with the GIPSY and GAMIT software packages shows that this method is effective and credible for cycle-slip detection and correction in GPS data pre-processing.

  14. A method of applying two-pump system in automatic transmissions for energy conservation

    Directory of Open Access Journals (Sweden)

    Peng Dong

    2015-06-01

    Full Text Available In order to improve hydraulic efficiency, modern automatic transmissions tend to apply an electric oil pump in their hydraulic system. The electric oil pump can support the mechanical oil pump for cooling, lubrication, and maintaining the line pressure at low engine speeds. In addition, the start–stop function can be realized by means of the electric oil pump; thus, fuel consumption can be further reduced. This article proposes a method of applying a two-pump system (one electric oil pump and one mechanical oil pump) in automatic transmissions based on forward driving simulation. A mathematical model for calculating the transmission power loss is developed. The power loss transfers to heat, which requires oil flow for cooling and lubrication. A leakage model is developed to calculate the leakage of the hydraulic system. In order to satisfy the flow requirement, a flow-based control strategy for the electric oil pump is developed. Simulation results for different driving cycles show that there is an optimal combination of electric and mechanical oil pump sizes with respect to energy conservation. Besides, the two-pump system can also satisfy the requirements of the start–stop function. This research is extremely valuable for the forward design of a two-pump system in automatic transmissions with respect to energy conservation and the start–stop function.

  15. Automatic Target Recognition in Synthetic Aperture Sonar Images Based on Geometrical Feature Extraction

    Directory of Open Access Journals (Sweden)

    J. Del Rio Vera

    2009-01-01

    Full Text Available This paper presents a new supervised classification approach for automated target recognition (ATR) in SAS images. The recognition procedure starts with a novel segmentation stage based on the Hilbert transform. A number of geometrical features are then extracted and used to classify observed objects against a previously compiled database of target and non-target features. The proposed approach has been tested on a set of 1528 simulated images created by the NURC SIGMAS sonar model, achieving up to 95% classification accuracy.

  16. Terrain-driven unstructured mesh development through semi-automatic vertical feature extraction

    Science.gov (United States)

    Bilskie, Matthew V.; Coggin, David; Hagen, Scott C.; Medeiros, Stephen C.

    2015-12-01

    A semi-automated vertical feature terrain extraction algorithm is described and applied to a two-dimensional, depth-integrated, shallow water equation inundation model. The extracted features describe what are commonly sub-mesh scale elevation details (ridges and valleys), which may be ignored in standard practice because adequate mesh resolution cannot be afforded. The extraction algorithm is semi-automated, requires minimal human intervention, and is reproducible. A lidar-derived digital elevation model (DEM) of coastal Mississippi and Alabama serves as the source data for the vertical feature extraction. Unstructured mesh nodes and element edges are aligned to the vertical features, and an interpolation algorithm aimed at minimizing topographic elevation error assigns elevations to mesh nodes via the DEM. The end result is a mesh that accurately represents the bare earth surface as derived from lidar, with element resolution in the floodplain ranging from 15 m to 200 m. To examine the influence of the inclusion of vertical features on overland flooding, two additional meshes were developed, one without crest elevations of the features and another with vertical features withheld. All three meshes were incorporated into a SWAN+ADCIRC model simulation of Hurricane Katrina. Each of the three models resulted in similar validation statistics when compared to observed time-series water levels at gages and post-storm collected high water marks. Simulated water level peaks yielded an R2 of 0.97 and upper and lower 95% confidence intervals of ∼ ± 0.60 m. From the validation at the gages and HWM locations, it was not clear which of the three model experiments performed best in terms of accuracy. Inundation extents from the three model results were compared to debris lines derived from NOAA post-event aerial imagery, and the mesh including vertical features showed higher accuracy. The comparison of model results to debris lines demonstrates that additional

  17. An automatic procedure to extract galaxy clusters from CRoNaRio catalogues

    CERN Document Server

    Puddu, E; Longo, G; Paolillo, M; Scaramella, R; Testa, V; Gal, R R; De Carvalho, R R; Djorgovski, S G

    1999-01-01

    We present preliminary results of a simple peak finding algorithm applied to catalogues of galaxies extracted from the Second Palomar Sky Survey in the framework of the CRoNaRio project. All previously known Abell and Zwicky clusters in a test region of 5x5 sq. deg. are recovered, and new candidate clusters are also detected. This algorithm represents an alternative way of searching for galaxy clusters to that implemented and tested at Caltech on the same type of data (Gal et al. 1998).
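
    The record calls the method only a "simple peak finding algorithm"; a generic surface-density peak detector in that spirit is sketched below. Bin count, smoothing scale and the significance threshold are illustrative assumptions.

```python
# Generic density-peak finder over galaxy coordinates (illustrative parameters).
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def find_cluster_candidates(ra, dec, bins=200, sigma=2.0, nsigma=3.0):
    density, xedges, yedges = np.histogram2d(ra, dec, bins=bins)
    smooth = gaussian_filter(density, sigma)
    background, spread = smooth.mean(), smooth.std()
    # A pixel is a candidate if it is a local maximum and significantly dense.
    peaks = (smooth == maximum_filter(smooth, size=5)) & \
            (smooth > background + nsigma * spread)
    ix, iy = np.nonzero(peaks)
    return xedges[ix], yedges[iy]   # approximate candidate positions (bin edges)
```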

  18. Automatic Building Extraction and Roof Reconstruction in 3k Imagery Based on Line Segments

    Science.gov (United States)

    Köhn, A.; Tian, J.; Kurz, F.

    2016-06-01

    We propose an image processing workflow to extract rectangular building footprints using georeferenced stereo-imagery and a derived digital surface model (DSM) product. The approach applies a line segment detection procedure to the imagery and subsequently verifies the identified line segments individually to create a footprint on the basis of the DSM. The footprint is further optimized by morphological filtering. Towards the realization of 3D models, we decompose the produced footprint and generate a 3D point cloud from DSM height information. By utilizing the robust RANSAC plane fitting algorithm, the roof structure can be correctly reconstructed. In the experimental part, the proposed approach was applied to 3K aerial imagery.
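
    RANSAC plane fitting, named above as the roof reconstruction step, has a compact generic form; the NumPy sketch below is that generic algorithm, not the authors' implementation. Iteration count and inlier tolerance are illustrative.

```python
# Generic RANSAC plane fit for an (N, 3) roof point cloud.
import numpy as np

def ransac_plane(points, iters=500, tol=0.05, seed=0):
    """Fit a plane n.x + d = 0; returns (n, d, inlier_mask)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:                 # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n @ p0
        inliers = np.abs(points @ n + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    if best_model is None:
        raise ValueError("no valid plane found")
    return best_model[0], best_model[1], best_inliers
```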

  19. Design of a Direction-of-Arrival Estimation Method Used for an Automatic Bearing Tracking System

    Science.gov (United States)

    Guo, Feng; Liu, Huawei; Huang, Jingchang; Zhang, Xin; Zu, Xingshui; Li, Baoqing; Yuan, Xiaobing

    2016-01-01

    In this paper, we introduce a sub-band direction-of-arrival (DOA) estimation method suitable for employment within an automatic bearing tracking system. Inspired by the magnitude-squared coherence (MSC), we extend the MSC to the sub-band level and propose the sub-band magnitude-squared coherence (SMSC) to measure the coherence between the frequency sub-bands of wideband signals. Then, we design a sub-band DOA estimation method which chooses a sub-band from the wideband signals by SMSC for the bearing tracking system. The simulations demonstrate that the sub-band method offers a good tradeoff between wideband methods and narrowband methods in terms of estimation accuracy, spatial resolution, and computational cost. The proposed method was also tested in the field environment with the bearing tracking system, where it also showed good performance. PMID:27455267
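
    The record does not reproduce the exact SMSC definition; the sketch below conveys the underlying idea using SciPy's ordinary magnitude-squared coherence averaged within each candidate sub-band, with the most coherent band selected. Band edges and segment length are illustrative, and the paper's SMSC may differ.

```python
# Sub-band selection by averaged magnitude-squared coherence (illustrative).
import numpy as np
from scipy.signal import coherence

def best_subband(x, y, fs, band_edges):
    """x, y: two sensor channels; band_edges: e.g. [(100, 500), (500, 1000)] Hz."""
    f, cxy = coherence(x, y, fs=fs, nperseg=1024)
    scores = []
    for lo, hi in band_edges:
        sel = (f >= lo) & (f < hi)
        scores.append(cxy[sel].mean() if sel.any() else 0.0)
    return band_edges[int(np.argmax(scores))], scores
```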

  20. Histogram of Intensity Feature Extraction for Automatic Plastic Bottle Recycling System Using Machine Vision

    Directory of Open Access Journals (Sweden)

    Suzaimah Ramli

    2008-01-01

    Full Text Available Currently, many recycling activities adopt manual sorting for plastic recycling, relying on plant personnel who visually identify and pick plastic bottles as they travel along the conveyor belt. These bottles are then sorted into the respective containers. Manual sorting may not be a suitable option for recycling facilities of high throughput. It has also been noted that high turnover among sorting line workers causes difficulties in achieving consistency in the plastic separation process. As a result, an intelligent system for automated sorting is greatly needed to replace the manual sorting system. The core components of machine vision for this intelligent sorting system are image recognition and classification. In this research, the overall plastic bottle sorting system is described. Additionally, the feature extraction algorithm used is discussed in detail, since it is the core component of the overall system that determines the success rate. The performance of the proposed feature extraction was evaluated in terms of classification accuracy, and the results obtained showed an accuracy of more than 80%.
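
    A histogram-of-intensity feature vector is simple to state precisely; the sketch below pairs it with a nearest-centroid classifier as a stand-in for the system's (unspecified) classifier. The bin count and the classifier choice are assumptions.

```python
# Intensity-histogram features plus a minimal nearest-centroid classifier.
import numpy as np

def intensity_histogram(gray_image, bins=32):
    hist, _ = np.histogram(gray_image, bins=bins, range=(0, 256))
    return hist / hist.sum()              # normalise so image size cancels out

def nearest_centroid(train_feats, train_labels, feat):
    labels = sorted(set(train_labels))
    centroids = [np.mean([f for f, l in zip(train_feats, train_labels) if l == c],
                         axis=0) for c in labels]
    dists = [np.linalg.norm(feat - c) for c in centroids]
    return labels[int(np.argmin(dists))]
```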

  1. Design of automatic control system for the precipitation of bromelain from the extract of pineapple wastes

    Directory of Open Access Journals (Sweden)

    Flavio Vasconcelos da Silva

    2010-12-01

    Full Text Available In this work, bromelain was recovered from ground pineapple stem and rind by means of precipitation with alcohol at low temperature. Bromelain is the name of a group of powerful protein-digesting, or proteolytic, enzymes that are particularly useful for reducing muscle and tissue inflammation and as a digestive aid. Temperature control is crucial to avoid irreversible protein denaturation and consequently to improve the quality of the enzyme recovered. The process was carried out alternately in two fed-batch pilot tanks: a glass tank and a stainless steel tank. Aliquots containing 100 mL of pineapple aqueous extract were fed into the tank. Inside the jacketed tank, the protein was exposed to unsteady operating conditions during the addition of the precipitating agent (ethanol 99.5%), because the dilution ratio of aqueous extract to ethanol and the heat transfer area changed. The coolant flow rate was manipulated through a variable speed pump. Fine-tuned conventional and adaptive PID controllers were implemented on-line using a fieldbus digital control system. The processing performance efficiency was enhanced, and so was the quality (enzyme activity) of the product.
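
    The record names conventional and adaptive PID controllers manipulating the coolant flow rate; a textbook discrete PID loop of that kind is sketched below. Gains, output limits, the sign convention and the plant interface are all illustrative assumptions, not the fieldbus implementation described.

```python
# Textbook discrete PID for coolant flow (illustrative gains and limits).
class PID:
    def __init__(self, kp, ki, kd, out_min=0.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        # error > 0 means the tank is hotter than the setpoint, so the
        # controller opens the coolant valve further (cooling actuator).
        error = measurement - setpoint
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * deriv
        return min(max(u, self.out_min), self.out_max)  # saturate pump command
```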

  2. Automatic Extraction and Recognition of Numbers in Topographic Maps

    Institute of Scientific and Technical Information of China (English)

    徐战武; 张涛; 刘肖琳

    2001-01-01

    Automatic vectorization of scanned topographic maps is an important and difficult problem in the GIS field that urgently needs to be solved. A topographic map includes a large number of numeric annotations in various fonts, which indicate attributes of terrain features, and extracting and recognizing these numbers correctly is an important part of map processing. This paper analyzes the shortcomings of existing extraction methods and presents a new algorithm for the automatic extraction and recognition of numeric annotations. The algorithm first identifies candidate digits according to prior knowledge of their size, then recognizes true digits with a BP neural network of OCON (one-class-one-network) structure, and finally extracts extended digits using neighborhood relations. Experiments show that the algorithm is fast, efficient and reliable.

  3. Automatic selection of preprocessing methods for improving predictions on mass spectrometry protein profiles.

    Science.gov (United States)

    Pelikan, Richard C; Hauskrecht, Milos

    2010-11-13

    Mass spectrometry proteomic profiling has potential to be a useful clinical screening tool. One obstacle is providing a standardized method for preprocessing the noisy raw data. We have developed a system for automatically determining a set of preprocessing methods among several candidates. Our system's automated nature relieves the analyst of the need to be knowledgeable about which methods to use on any given dataset. Each stage of preprocessing is approached with many competing methods. We introduce metrics which are used to balance each method's attempts to correct noise versus preserving valuable discriminative information. We demonstrate the benefit of our preprocessing system on several SELDI and MALDI mass spectrometry datasets. Downstream classification is improved when using our system to preprocess the data.

  4. Methods for microbial DNA extraction from soil for PCR amplification

    Directory of Open Access Journals (Sweden)

    Yeates C

    1998-01-01

    Full Text Available Amplification of DNA from soil is often inhibited by co-purified contaminants. A rapid, inexpensive, large-scale DNA extraction method involving minimal purification has been developed that is applicable to various soil types (1). The DNA is also suitable for PCR amplification using various DNA targets. DNA was extracted from 100 g of soil using direct lysis with glass beads and SDS, followed by potassium acetate precipitation, polyethylene glycol precipitation, phenol extraction and isopropanol precipitation. This method was compared to other DNA extraction methods with regard to DNA purity and size.

  5. The effect of extraction method on antioxidant activity of Atractylis babelii Hochr. leaves and flowers extracts

    OpenAIRE

    Khadidja Boudebaz; Samira Nia, Malika; Trabelsi Ayadi; Jamila Kalthoum Cherif

    2015-01-01

    In this study, leaves and flowers of Atractylis babelii were chosen to investigate their antioxidant activities. Thus, a comparison between the antioxidant properties of ethanolic crude extracts obtained by two extraction methods, maceration and Soxhlet extraction, was performed using two different tests: the DPPH and ABTS radical assays. Besides, total polyphenol, flavonoid and condensed tannin contents were determined in leaves and flowers of Atractylis babelii by colorimetric methods. The resu...

  6. Effect of Temperature on the Color of Natural Dyes Extracted Using Pressurized Hot Water Extraction Method

    Directory of Open Access Journals (Sweden)

    Nursyamirah A. Razak

    2011-01-01

    Full Text Available Problem statement: Traditionally, extraction of natural dyes by boiling produced only a single tone of colorant/dye, consumed plenty of water and took several hours of extraction time. A modern extraction technique should be introduced, especially to textile dyers, so that a variety of tones of colorants can be produced in a shorter time with less consumption of water. Approach: This study demonstrated Pressurized Hot Water Extraction (PHWE) as a new technique to extract colorants from a selected plant, i.e., the Xylocarpus moluccensis species, which can be found abundantly in Peninsular Malaysia. Colorant from the heartwood of Xylocarpus moluccensis was extracted at different elevated temperatures, from 50°C up to 150°C, using the PHWE technique, and the extracts obtained were compared to those obtained via the boiling method at 100°C. The color strength of the dye extracts was then analyzed using a UV-Visible spectrophotometer and a Video Spectral Comparator (VSC 5000). The effect of the extraction temperatures on the color of extracts obtained by PHWE was also investigated. Results: The colorimetric data obtained from the VSC readings exhibited the exact tone of colors found in anthraquinone. The UV-Visible spectra also show higher absorbance for natural dyes extracted via PHWE compared to those obtained by the boiling method. Conclusion: By using PHWE at different elevated temperatures, different tones of colorants can be produced from one single source in a shorter time with less consumption of water.

  7. An Automatic Method for Geometric Segmentation of Masonry Arch Bridges for Structural Engineering Purposes

    Science.gov (United States)

    Riveiro, B.; DeJong, M.; Conde, B.

    2016-06-01

    Despite the tremendous advantages of laser scanning technology for the geometric characterization of built constructions, there are important limitations preventing more widespread implementation in the structural engineering domain. Even though the technology provides extensive and accurate information to perform structural assessment and health monitoring, many people are resistant to the technology due to the processing times involved. Thus, new methods that can automatically process LiDAR data and subsequently provide an automatic and organized interpretation are required. This paper presents a new method for fully automated point cloud segmentation of masonry arch bridges. The method efficiently creates segmented, spatially related and organized point clouds, each of which contains the relevant geometric data for a particular component (pier, arch, spandrel wall, etc.) of the structure. The segmentation procedure comprises a heuristic approach for the separation of the different vertical walls, after which image processing tools adapted to voxel structures allow the efficient segmentation of the main structural elements of the bridge. The proposed methodology provides the essential processed data required for structural assessment of masonry arch bridges based on geometric anomalies. The method is validated using a representative sample of masonry arch bridges in Spain.

  8. DEVELOPMENT AND METHOD VALIDATION OF AESCULUS HIPPOCASTANUM EXTRACT

    OpenAIRE

    Biradar sanjivkumar; Dhumansure Rajkumar; Patil Mallikarjun; Biradar Karankumar; K Sreenivasa Rao

    2012-01-01

    Aesculus hippocastanum is highly regarded for its medicinal properties in the indigenous system of medicine. The objectives of the present study include the validation of Aesculus hippocastanum extract. An authenticated extract of the seeds of the plant was collected and the method was developed for the validation. In this, the extract was subjected to checks of accuracy, precision, linearity and specificity. For the validation a UV spectrophotometer was used. The proposed UV validation method for ...

  9. A Robust Visual-Feature-Extraction Method in Public Environment

    OpenAIRE

    カ, ゴセー; Hua, Gangchen

    2015-01-01

    In this study we describe a new feature extracting method that can extract robust features from a sequence of images and also performs satisfactorily in a highly dynamic environment. This method is based on the geometric structure of matched local feature points. When compared with other previous methods, the proposed method is more accurate in appearance-only simultaneous localization and mapping (SLAM). When compared to position-invariant robust features, the proposed method is more suitabl...

  10. Study on Rear-end Real-time Data Quality Control Method of Regional Automatic Weather Station

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    [Objective] The aim was to study a rear-end real-time data quality control method for regional automatic weather stations. [Method] The basic content and steps of rear-end real-time data quality control for regional automatic weather stations were introduced. Each element was treated with a systematic quality control procedure. The state of rear-end real-time data from regional meteorological stations in Guangxi was expounded. Combining with relevant elements and linear changes, improvement based on traditiona...

  11. A method of automatic recognition of airport in complex environment from remote sensing image

    Science.gov (United States)

    Hao, Qiwei; Ni, Guoqiang; Guo, Pan; Chen, Xiaomei; Tang, Yi

    2009-11-01

    In this paper, a new method is proposed for airport recognition in complex environments. The algorithm takes full advantage of the essential characteristics of the airport target. Structural characteristics of the airport are used to establish the hypothesis-generation process. An improved Hough transform (HT) is used to pick out the straight lines that represent the actual position and direction of runways. Morphological processing is used to remove road segments and isolated points. Finally, we carefully combine these segments to describe the whole airport area, and automatic recognition of the airport target is thus realized.
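
    The runway hypothesis step rests on straight-line detection; a minimal OpenCV sketch of that step is shown below, using the standard probabilistic Hough transform rather than the authors' improved HT. Canny and Hough parameters are illustrative.

```python
# Straight-line runway hypotheses via Canny + probabilistic Hough (illustrative).
import cv2
import numpy as np

def candidate_runway_lines(gray, min_len=200):
    """gray: 8-bit image. Returns a list of (x1, y1, x2, y2) line segments."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=min_len, maxLineGap=10)
    if lines is None:
        return []
    return [tuple(l[0]) for l in lines]   # long lines = runway hypotheses
```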

  12. The method of measurement system software automatic validation using business rules management system

    Science.gov (United States)

    Zawistowski, Piotr

    2015-09-01

    The method of measurement system software automatic validation using a business rules management system (BRMS) is discussed in this paper. The article contains a description of the new approach to measurement system execution validation, a description of the implementation of the system that supports this validation, and examples documenting the correctness of the approach. In the new approach, BRMSs are used for measurement system execution validation. Such systems have not previously been used for software execution validation, nor for measurement systems. The benefits of using them for these purposes are discussed as well.

  13. Method of Measuring Fixture Automatic Design and Assembly for Auto-Body Part

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A method for the automatic assembly of 3-D measuring fixtures for auto-body parts is presented. A locating constraint mapping technique and assembly rule-based reasoning are applied. Algorithms for calculating the position and pose of the part model, the fixture configuration and the fixture elements in the virtual auto-body assembly space are given. Fixture elements are transformed from their own coordinate systems to the assembly space with homogeneous transformation matrices, as sketched below. Based on the secondary development technique of Unigraphics (UG), the automated assembly is implemented with application program interface (API) functions. Finally, the automated assembly of a measuring fixture for a rear longeron is implemented as a case study.
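
    The homogeneous transformation named above is standard; the NumPy sketch below shows it independently of the UG/API environment the paper works in.

```python
# 4x4 homogeneous transform: fixture element coordinates -> assembly space.
import numpy as np

def homogeneous(R, t):
    """R: (3, 3) rotation, t: (3,) translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transform_points(T, pts):
    """pts: (N, 3) points in the fixture element's own frame -> assembly frame."""
    homo = np.hstack([pts, np.ones((len(pts), 1))])
    return (homo @ T.T)[:, :3]
```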

  14. An automatic method for atom identification in scanning tunnelling microscopy images of Fe-chalcogenide superconductors.

    Science.gov (United States)

    Perasso, A; Toraci, C; Massone, A M; Piana, M; Gerbi, A; Buzio, R; Kawale, S; Bellingeri, E; Ferdeghini, C

    2015-12-01

    We describe a computational approach for the automatic recognition and classification of atomic species in scanning tunnelling microscopy images. The approach is based on a pipeline of image processing methods in which the classification step is performed by means of a fuzzy clustering algorithm. As a representative example, we use the computational tool to characterize the nanoscale phase separation in thin films of the Fe-chalcogenide superconductor FeSexTe1-x, starting from synthetic data sets and experimental topographies. We quantify the stoichiometry fluctuations on length scales from tens of nanometres down to a few nanometres. PMID:26291960

  15. COMPARISON OF RNA EXTRACTION METHODS FOR Passiflora edulis SIMS LEAVES

    Directory of Open Access Journals (Sweden)

    ANNY CAROLYNE DA LUZ

    2016-02-01

    Full Text Available ABSTRACT Functional genomic analyses require intact RNA; however, Passiflora edulis leaves are rich in secondary metabolites that interfere with RNA extraction, primarily by promoting oxidative processes and by precipitating with nucleic acids. This study aimed to analyse three RNA extraction methods - Concert™ Plant RNA Reagent (Invitrogen, Carlsbad, CA, USA), TRIzol® Reagent (Invitrogen) and TRIzol® Reagent (Invitrogen)/ice - commercial products specifically designed to extract RNA, and to determine which method is the most effective for extracting RNA from the leaves of passion fruit plants. In contrast to the RNA extracted using the other two methods, the RNA extracted using TRIzol® Reagent (Invitrogen) did not have acceptable A260/A280 and A260/A230 ratios and did not have ideal concentrations. Agarose gel electrophoresis showed a strong DNA band for all of the Concert™ method extractions but not for the TRIzol® and TRIzol®/ice methods. The TRIzol® method resulted in smears during electrophoresis. Due to its low levels of DNA contamination, ideal A260/A280 and A260/A230 ratios and superior sample integrity, RNA from the TRIzol®/ice method was used for reverse transcription-polymerase chain reaction (RT-PCR), and the resulting amplicons were highly similar. We conclude that TRIzol®/ice is the preferred method of RNA extraction for P. edulis leaves.

  16. A CAD based automatic modeling method for primitive solid based Monte Carlo calculation geometry

    International Nuclear Information System (INIS)

    The Multi-Physics Coupling Analysis Modeling Program (MCAM), developed by the FDS Team, China, is an advanced modeling tool aiming to solve the modeling challenges of multi-physics coupling simulation. An automatic modeling method for SuperMC, the Super Monte Carlo Calculation Program for Nuclear and Radiation Processes, was recently developed and integrated into MCAM 5.2. This method can convert bidirectionally between a CAD model and a SuperMC input file. When converting from CAD model to SuperMC model, the CAD model is decomposed into a set of convex solids, and the corresponding SuperMC basic convex solids are generated and output. When converting from SuperMC model to CAD model, the basic primitive solids are created and the related operations are performed according to the SuperMC model. The method was benchmarked with the ITER benchmark model. The results showed that the method is correct and effective. (author)

  17. Evaluation of in vitro antioxidant potential of different polarities stem crude extracts by different extraction methods of Adenium obesum

    OpenAIRE

    Mohammad Amzad Hossain; Tahiya Hilal Ali Alabri; Amira Hamood Salim Al Musalami; Md. Sohail Akhtar; Sadri Said

    2014-01-01

    Objective: To select the best extraction method for isolating antioxidant compounds from the stems of Adenium obesum. Methods: Two extraction methods were used: Soxhlet and maceration. Methanol was used as the solvent for both methods. The methanol crude extract was defatted with water and extracted successively with hexane, chloroform, ethyl acetate and butanol solvents. The antioxidant potential of all crude extracts was determined by using 1, 1-diphenyl...

  18. A semi-automatic method for developing an anthropomorphic numerical model of dielectric anatomy by MRI

    Energy Technology Data Exchange (ETDEWEB)

    Mazzurana, M [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Sandrini, L [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Vaccari, A [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Malacarne, C [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Cristoforetti, L [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Pontalti, R [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy)

    2003-10-07

    Complex permittivity values have a dominant role in the overall consideration of interaction between radiofrequency electromagnetic fields and living matter, and in related applications such as electromagnetic dosimetry. There are still some concerns about the accuracy of published data and about their variability due to the heterogeneous nature of biological tissues. The aim of this study is to provide an alternative semi-automatic method by which numerical dielectric human models for dosimetric studies can be obtained. Magnetic resonance imaging (MRI) tomography was used to acquire images. A new technique was employed to correct nonuniformities in the images, and frequency-dependent transfer functions were used to correlate image intensity with complex permittivity. The proposed method provides frequency-dependent models in which permittivity and conductivity vary with continuity, even in the same tissue, reflecting the intrinsic realistic spatial dispersion of such parameters. The human model is tested with an FDTD (finite difference time domain) algorithm at different frequencies; the results of layer-averaged and whole-body-averaged SAR (specific absorption rate) are compared with published work, and reasonable agreement has been found. Due to the short time needed to obtain a whole body model, this semi-automatic method may be suitable for efficient study of various conditions that can determine large differences in the SAR distribution, such as body shape, posture, fat-to-muscle ratio, height and weight.

  19. A semi-automatic method for developing an anthropomorphic numerical model of dielectric anatomy by MRI

    International Nuclear Information System (INIS)

    Complex permittivity values have a dominant role in the overall consideration of interaction between radiofrequency electromagnetic fields and living matter, and in related applications such as electromagnetic dosimetry. There are still some concerns about the accuracy of published data and about their variability due to the heterogeneous nature of biological tissues. The aim of this study is to provide an alternative semi-automatic method by which numerical dielectric human models for dosimetric studies can be obtained. Magnetic resonance imaging (MRI) tomography was used to acquire images. A new technique was employed to correct nonuniformities in the images, and frequency-dependent transfer functions were used to correlate image intensity with complex permittivity. The proposed method provides frequency-dependent models in which permittivity and conductivity vary with continuity, even in the same tissue, reflecting the intrinsic realistic spatial dispersion of such parameters. The human model is tested with an FDTD (finite difference time domain) algorithm at different frequencies; the results of layer-averaged and whole-body-averaged SAR (specific absorption rate) are compared with published work, and reasonable agreement has been found. Due to the short time needed to obtain a whole body model, this semi-automatic method may be suitable for efficient study of various conditions that can determine large differences in the SAR distribution, such as body shape, posture, fat-to-muscle ratio, height and weight.

  20. [An automatic non-invasive method for the measurement of systolic, diastolic and mean blood pressure].

    Science.gov (United States)

    Morel, D; Suter, P

    1981-01-01

    A new automatic apparatus for the measurement of arterial pressure by a non-invasive technique was compared with direct intra-arterial measurement in 20 adult patients in a surgical intensive care unit. The apparatus works on the principle of oscillometry. Blood pressure is determined with a microprocessor by analysis of the amplitude of the oscillations produced by a cuff which is inflated and then deflated automatically. Mean arterial pressure corresponds to the maximum oscillation amplitude; systolic and diastolic pressures are deduced by extrapolation to zero of the amplitudes on either side of the maximum reading. Mean arterial pressure (AP) proved to be very reliable within the limits studied, 8.0 - 14.7 kPa (60 - 110 mmHg), with a difference between mean direct AP and indirect AP of 0.09 +/- 0.9 kPa SD (0.71 +/- 7 mmHg) and a coefficient of linear correlation between the two methods of r = 0.82. The non-invasive technique determined systolic arterial pressure (sAP) less reliably than AP when compared with the invasive technique, with a tendency to flatten the extreme values; the correlation coefficient here was 0.68. Finally, diastolic arterial pressure (dAP) showed a better degree of agreement, though with a difference between mean indirect and mean direct values of 1.0 +/- 0.8 kPa (7.6 +/- 6.0 mmHg). These results indicate a good degree of agreement between the two methods for measurements of mean arterial pressure, clinically the most important. Measurements of diastolic and, above all, systolic pressure seemed to be less in agreement. This difference could be due to an error of determination in the automatic apparatus tested or to the peripheral site (radial artery) of the intra-arterial catheter used, itself distorting the brachial arterial pressure. PMID:6113805
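
    The oscillometric principle described (mean pressure at maximum oscillation amplitude, systolic and diastolic deduced from the amplitude envelope) can be illustrated in a few lines. The fixed-ratio variant below is a textbook simplification, not the microprocessor algorithm of the apparatus tested; the 0.55/0.85 ratios are typical published values.

```python
# Toy fixed-ratio oscillometric estimator (not the device's algorithm).
import numpy as np

def oscillometric_bp(cuff_pressure, osc_amplitude, sys_ratio=0.55, dia_ratio=0.85):
    """cuff_pressure: monotonically decreasing cuff pressures during deflation;
    osc_amplitude: oscillation amplitude at each cuff pressure."""
    k = int(np.argmax(osc_amplitude))
    mean_ap, amax = cuff_pressure[k], osc_amplitude[k]
    # Systolic: on the high-pressure side, where the amplitude first reaches
    # sys_ratio of the maximum.
    above = np.where(osc_amplitude[:k] >= sys_ratio * amax)[0]
    sys_ap = cuff_pressure[above[0]] if len(above) else cuff_pressure[0]
    # Diastolic: on the low-pressure side, where it falls to dia_ratio of max.
    below = k + np.where(osc_amplitude[k:] <= dia_ratio * amax)[0]
    dia_ap = cuff_pressure[below[0]] if len(below) else cuff_pressure[-1]
    return sys_ap, mean_ap, dia_ap
```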

  1. Robust methods for automatic image-to-world registration in cone-beam CT interventional guidance

    Energy Technology Data Exchange (ETDEWEB)

    Dang, H.; Otake, Y.; Schafer, S.; Stayman, J. W.; Kleinszig, G.; Siewerdsen, J. H. [Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21202 (United States); Siemens Healthcare XP Division, Erlangen 91052 (Germany); Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21202 (United States)

    2012-10-15

    Purpose: Real-time surgical navigation relies on accurate image-to-world registration to align the coordinate systems of the image and patient. Conventional manual registration can present a workflow bottleneck and is prone to manual error and intraoperator variability. This work reports alternative means of automatic image-to-world registration, each method involving an automatic registration marker (ARM) used in conjunction with C-arm cone-beam CT (CBCT). The first involves a Known-Model registration method in which the ARM is a predefined tool, and the second is a Free-Form method in which the ARM is freely configurable. Methods: Studies were performed using a prototype C-arm for CBCT and a surgical tracking system. A simple ARM was designed with markers comprising a tungsten sphere within infrared reflectors to permit detection of markers both in x-ray projections and by an infrared tracker. The Known-Model method exercised a predefined specification of the ARM in combination with 3D-2D registration to estimate the transformation that yields the optimal match between the forward projection of the ARM and the measured projection images. The Free-Form method localizes markers individually in projection data by a robust Hough transform approach extended from previous work, backprojected to 3D image coordinates based on C-arm geometric calibration. Image-domain point sets were transformed to world coordinates by rigid-body point-based registration. The robustness and registration accuracy of each method were tested in comparison to manual registration across a range of body sites (head, thorax, and abdomen) of interest in CBCT-guided surgery, including cases with interventional tools in the radiographic scene. Results: The automatic methods exhibited similar target registration error (TRE) and were comparable or superior to manual registration for placement of the ARM within ~200 mm of C-arm isocenter. Marker localization in projection data was robust across all
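
    Rigid-body point-based registration, the final step named in the Methods, has a classical closed-form solution via the SVD; a generic NumPy sketch of it, with a TRE helper, follows. This is the standard Procrustes/Kabsch construction, not necessarily the authors' implementation.

```python
# Closed-form rigid registration of matched (N, 3) point sets, plus TRE.
import numpy as np

def rigid_register(P, Q):
    """Find R, t minimising ||R @ p_i + t - q_i|| over matched points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

def target_registration_error(R, t, targets_img, targets_world):
    """TRE at target points not used to compute the registration."""
    mapped = targets_img @ R.T + t
    return np.linalg.norm(mapped - targets_world, axis=1)
```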

  2. DEVELOPMENT AND METHOD VALIDATION OF AESCULUS HIPPOCASTANUM EXTRACT

    Directory of Open Access Journals (Sweden)

    Biradar sanjivkumar

    2012-07-01

    Full Text Available Aesculus hippocastanum is highly regarded for its medicinal properties in the indigenous system of medicine. The objectives of the present study include the validation of Aesculus hippocastanum extract. An authenticated extract of the seeds of the plant was collected and the method was developed for the validation. In this, the extract was subjected to checks of accuracy, precision, linearity and specificity. For the validation a UV spectrophotometer was used. The proposed UV validation method for the extract is accurate, precise, linear, specific and within range. Further isolation and in-vitro studies are needed.

  3. Mining Patent Knowledge for Automatic Keyword Extraction

    Institute of Scientific and Technical Information of China (English)

    陈忆群; 周如旗; 朱蔚恒; 李梦婷; 印鉴

    2016-01-01

    ...expression and professional authority. This paper uses a patent data set as the external knowledge repository serving keyword extraction. An algorithm is designed to construct a background knowledge repository based on the patent data set, and a method for automatic keyword extraction with novel word features is provided. The paper discusses the characteristics of patent data and mines the relations between different patent files to construct a background knowledge repository for the target document, finally achieving keyword extraction. The related patent files of the target document are used to construct the background knowledge repository. Information on patent inventors, assignees, citations and classifications is used to mine the hidden knowledge and relationships between different patent files, and the related knowledge is imported to extend the background knowledge repository. Novel word features are derived according to the different background knowledge supplied by the patent data. The word features reflecting the document's background knowledge offer valuable indications of individual words' importance in the target document. The keyword extraction problem can then be regarded as a classification problem, and a support vector machine (SVM) is used to extract the keywords. Experiments have been done using a patent data set and an open data set. Experimental results have proved that, using these novel word features, the approach achieves performance in keyword extraction superior to other state-of-the-art approaches.

  4. Effects of Different Extraction Methods and Conditions on the Phenolic Composition of Mate Tea Extracts

    Directory of Open Access Journals (Sweden)

    Jelena Vladic

    2012-03-01

    Full Text Available A simple and rapid HPLC method for the determination of chlorogenic acid (5-O-caffeoylquinic acid) in mate tea extracts was developed and validated. The chromatography used isocratic elution with a mobile phase of aqueous 1.5% acetic acid-methanol (85:15, v/v). The flow rate was 0.8 mL/min, with detection by UV at 325 nm. The method showed good selectivity, accuracy, repeatability and robustness, with a detection limit of 0.26 mg/L and recovery of 97.76%. The developed method was applied to the determination of chlorogenic acid in mate tea extracts obtained by ethanol extraction and by liquid carbon dioxide extraction with ethanol as co-solvent. Different ethanol concentrations were used (40, 50 and 60%, v/v), and liquid CO2 extraction was performed at different pressures (50 and 100 bar) and constant temperature (27 ± 1 °C). A significant influence of extraction methods, conditions and solvent polarity on the chlorogenic acid content, antioxidant activity and total phenolic and flavonoid content of mate tea extracts was established. The most efficient extraction solvent was liquid CO2 with aqueous ethanol (40%) as co-solvent at an extraction pressure of 100 bar.

  5. Comparison of Methods for Protein Extraction from Pine Needles

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Extraction of proteins from pine needles for proteomic analysis has long been a challenge for scientists. We compared three different protein extraction methods, using sucrose, Tris-HCl and trichloroacetic acid (TCA)/acetone (the TCA method), to determine their efficiency in separating pine needle proteins by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and two-dimensional PAGE (2D-PAGE). Proteins were then separated by SDS-PAGE. Among the three methods, the method using sucrose extraction buffer showed the highest efficiency and highest quality in separating proteins. In addition, clearer and more stable bands were detected by SDS-PAGE using sucrose extraction buffer. When the proteins extracted using sucrose extraction buffer were separated by 2D-PAGE, more than 300 protein spots, with isoelectric points (pI) ranging from 4.0 to 7.0 and molecular weights (MW) from 6.5 to 97.4 kD, were observed. This confirmed that the method with sucrose extraction buffer is an efficient and reliable method for extracting proteins from pine needles.

  6. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    Science.gov (United States)

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short duration visual cues which draw attention to solution-relevant information, and aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver away from an impasse and toward organizing and integrating problem information also greatly facilitate arriving at correct solutions.

  7. Linking attentional processes and conceptual problem solving: Visual cues facilitate the automaticity of extracting relevant information from diagrams

    Directory of Open Access Journals (Sweden)

    Amy eRouinfar

    2014-09-01

    Full Text Available This study investigated links between lower-level visual attention processes and higher-level problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. The study produced two major findings. First, short duration visual cues can improve problem solving performance on a variety of insight physics problems, including transfer problems not sharing the surface features of the training problems but instead sharing the underlying solution path. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem. Instead, the cueing effects were caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, these short duration visual cues, when administered repeatedly over multiple training problems, resulted in participants becoming more efficient at extracting the relevant information on the transfer problem, showing that such cues can improve the automaticity with which solvers extract relevant information from a problem. Both of these results converge on the conclusion that lower-order visual processes driven by attentional cues can influence higher-order cognitive processes.

  8. A New Automatic Method to Identify Galaxy Mergers I. Description and Application to the STAGES Survey

    CERN Document Server

    Hoyos, Carlos; Gray, Meghan E; Maltby, David T; Bell, Eric F; Barazza, Fabio D; Boehm, Asmus; Haussler, Boris; Jahnke, Knud; Jogee, Sharda; Lane, Kyle P; McIntosh, Daniel H; Wolf, Christian

    2011-01-01

    We present an automatic method to identify galaxy mergers using the morphological information contained in the residual images of galaxies after the subtraction of a Sersic model. The removal of the bulk signal from the host galaxy light is done with the aim of detecting the fainter minor mergers. The specific morphological parameters that are used in the merger diagnostic suggested here are the Residual Flux Fraction and the asymmetry of the residuals. The new diagnostic has been calibrated and optimized so that the resulting merger sample is very complete. However, the contamination by non-mergers is also high. If the same optimization method is adopted for combinations of other structural parameters such as the CAS system, the merger indicator we introduce yields merger samples of equal or higher statistical quality than the samples obtained through the use of other structural parameters. We explore the ability of the method presented here to select minor mergers by identifying a sample of visually classif...
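
    The two residual-image statistics named above, the Residual Flux Fraction and the asymmetry of the residuals, admit short formulations; the NumPy sketch below uses plausible normalisations, which may differ from the paper's exact definitions.

```python
# Hedged versions of the residual-image merger diagnostics.
import numpy as np

def residual_flux_fraction(residual, galaxy):
    """Residual flux relative to the galaxy flux after Sersic subtraction."""
    return np.abs(residual).sum() / np.abs(galaxy).sum()

def residual_asymmetry(residual):
    """Compare the residual with its 180-degree rotation about the centre."""
    rotated = np.rot90(residual, 2)
    return np.abs(residual - rotated).sum() / (2.0 * np.abs(residual).sum())
```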

  9. Automatic inspection of electron beam weld for stainless steel using phased array method

    International Nuclear Information System (INIS)

    The CEA non-destructive testing laboratory at Valduc implements various inspection techniques (radiography, leak testing with helium tracer gas, ultrasonics...) to check the quality of welds and the soundness of materials. To fully control the manufacture of the welds and to detect any anomaly during the manufacturing process (lacks of penetration, joining defects, porosities...), it developed, in partnership with the company METALSCAN, an ultrasonic phased array imaging technique designed for the complete and automatic inspection of homogeneous stainless steel welds produced by electron beam. To achieve this goal, an acoustic simulation study with the CIVA software was undertaken in order to determine the optimal characteristics of the phased array probes (their number and their placement). The developed method can, on the one hand, locate lacks of fusion in the weld equivalent to flat-bottom holes 0.5 mm in diameter and, on the other hand, detect lacks of penetration of 0.1 mm. In order to ensure perfect reproducibility of the inspections, a mechanical system that rotates the part allows the whole weld to be inspected. The results are then analyzed automatically using application software ensuring the traceability of the inspections. The method was first validated using reference parts, then brought into service after the results obtained on real defects were confronted with other techniques (radiography and metallographic characterization). (authors)

  10. Methods of automatic ontology building

    Institute of Scientific and Technical Information of China (English)

    解峥; 王盼卿; 彭成

    2015-01-01

    Ontology-based information integration is the most effective way to resolve semantic heterogeneity, but traditional ontology construction requires a great deal of manpower and material resources. Automatic ontology building, achieved with the help of artificial intelligence techniques and knowledge bases such as WordNet, will save substantial costs and is the focus of present and future research on ontology construction. This paper summarizes the mainstream methods of automatic ontology building in use today and draws out the main directions in which automatic ontology building technology will develop.

  11. Method for Real Time Text Extraction of Digital Manga Comic

    Directory of Open Access Journals (Sweden)

    Kohei Arai, Herman Tolle

    2011-08-01

    Full Text Available Manga is a popular item in Japan and also in the rest of the world. Hundreds of manga are printed every day in Japan, and some printed manga books have been digitized into web manga. People then translate the Japanese text in manga into other languages - in the conventional way - to share the pleasure of reading manga over the internet. In this paper, we propose an automatic method to detect and extract Japanese characters within a manga comic page for an online language translation process. The Japanese character text extraction method is based on our comic frame content extraction method using a blob extraction function. Experimental results from 15 comic pages show that our proposed method has 100% accuracy in flat comic frame extraction and comic balloon detection, and 93.75% accuracy in Japanese character text extraction.

  12. Scale parameter-estimating method for adaptive fingerprint pore extraction model

    Science.gov (United States)

    Yi, Yao; Cao, Liangcai; Guo, Wei; Luo, Yaping; He, Qingsheng; Jin, Guofan

    2011-11-01

    Sweat pores and other level 3 features have been proven to provide more discriminatory information about fingerprint characteristics, which is useful for personal identification, especially in law enforcement applications. With the advent of high resolution (>=1000 ppi) fingerprint scanning equipment, sweat pores are attracting increasing attention in automatic fingerprint identification systems (AFIS), where the extraction of pores is a critical step. This paper presents a scale parameter-estimating method for a filtering-based pore extraction procedure. Pores are manually extracted from a 1000 ppi grey-level fingerprint image. The size and orientation of each detected pore are extracted together with the local ridge width and orientation. The quantitative relation between the pore parameters (size and orientation) and the local image parameters (ridge width and orientation) is statistically obtained. The pores are then extracted by filtering the fingerprint image with the new pore model, whose parameters are determined by the local image parameters and the statistically established relation. Experiments conducted on high resolution fingerprints indicate that the new pore model gives good performance in pore extraction.

  13. Automatic Sleep Staging using Multi-dimensional Feature Extraction and Multi-kernel Fuzzy Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Yanjun Zhang

    2014-01-01

    Full Text Available This paper employed clinical Polysomnographic (PSG) data, mainly including all-night Electroencephalogram (EEG), Electrooculogram (EOG) and Electromyogram (EMG) signals of subjects, and adopted the American Academy of Sleep Medicine (AASM) clinical staging manual as the standard to realize automatic sleep staging. The authors extracted eighteen different features of the EEG, EOG and EMG in the time and frequency domains to construct feature vectors, according to the existing literature as well as clinical experience. By adopting self-learning on sleep samples, the linear combination weights and the parameters of the multiple kernels of the fuzzy support vector machine (FSVM) were learned, and the multi-kernel FSVM (MK-FSVM) was constructed. The overall agreement between the experts' scores and the results presented was 82.53%. Compared with previous results, the accuracy of N1 was improved to some extent while the accuracies of the other stages were approximate, which well reflected the sleep structure. The staging algorithm proposed in this paper is transparent and worth further investigation.
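
    The eighteen features themselves are not listed in the record; as one concrete example of the kind of time/frequency feature extraction described, the sketch below computes relative EEG band powers from a Welch spectrum. The band edges are the usual clinical ones, and the segment length is an assumption.

```python
# Relative EEG band-power features from one sleep-staging epoch (illustrative).
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epoch, fs):
    f, pxx = welch(epoch, fs=fs, nperseg=min(len(epoch), 4 * int(fs)))
    total = np.trapz(pxx, f)
    feats = []
    for lo, hi in BANDS.values():
        sel = (f >= lo) & (f < hi)
        feats.append(np.trapz(pxx[sel], f[sel]) / total)  # relative band power
    return np.array(feats)
```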

  14. Improved method for the feature extraction of laser scanner using genetic clustering

    Institute of Scientific and Technical Information of China (English)

    Yu Jinxia; Cai Zixing; Duan Zhuohua

    2008-01-01

    Feature extraction from range images provided by a ranging sensor is a key issue in pattern recognition. To automatically extract environmental features sensed by a 2D laser scanner, an improved method based on genetic clustering, VGA-clustering, is presented. By integrating the spatial neighbouring information of the range data into the fuzzy clustering algorithm, a weighted fuzzy clustering algorithm (WFCA) is introduced in place of the standard clustering algorithm to realize feature extraction from the laser scanner. Since the clustering number is unknown in advance, several validation index functions are used to estimate the validity of different clustering algorithms, and one validation index is selected as the fitness function of the genetic algorithm so as to determine the accurate clustering number automatically. At the same time, an improved genetic algorithm, IVGA, is proposed on the basis of VGA to overcome the local optima of the clustering algorithm; it is implemented by increasing the population diversity and improving the elitist genetic operators so as to enhance the local search capacity and quicken the convergence speed. Comparison with other algorithms demonstrates the effectiveness of the introduced algorithm.
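
    The weighted fuzzy clustering algorithm (WFCA) is not specified in the record; the standard fuzzy c-means it extends is sketched below, with the spatial-neighbourhood weighting left out.

```python
# Standard fuzzy c-means (the WFCA adds spatial-neighbourhood weighting).
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """X: (N, d) data. Returns cluster centres and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)       # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        inv = 1.0 / d ** (2.0 / (m - 1.0))   # classic FCM membership update
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centres, U_new
        U = U_new
    return centres, U
```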

  15. Effect of Extraction Methods on Polysaccharide of Clitocybe maxima Stipe

    OpenAIRE

    Junchen Chen; Pufu Lai; Hengsheng Shen; Hengguang Zhen; Rutao Fang

    2013-01-01

    Clitocybe maxima (Gartn. ex Mey. Fr.) Quél. is a popular edible fungus species. Its stipe accounts for about 45% of the entire fruiting body biomass but is regarded as a low-value byproduct. To increase its value-added utilization, three extraction methods (hot water, microwave-assisted and complex-enzyme-hydrolysis-assisted) were conducted. The extraction effects on the polysaccharide of the Clitocybe maxima stipe were compared and the processing conditions of the extractions were optimized. The content o...

  16. Method and apparatus for continuous flow injection extraction analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hartenstein, Steven D. (Idaho Falls, ID); Siemer, Darryl D. (Idaho Falls, ID)

    1992-01-01

    A method and apparatus for a continuous flow injection batch extraction analysis system is disclosed, employing extraction of a component of a first liquid into a second liquid which is a solvent for the component of the first liquid and is immiscible with the first liquid, and separation of the first liquid from the second liquid subsequent to extraction of the component of the first liquid.

  17. Methods for microbial DNA extraction from soil for PCR amplification

    OpenAIRE

    Yeates C; Gillings, MR; Davison AD; Altavilla N; Veal DA

    1998-01-01

    Amplification of DNA from soil is often inhibited by co-purified contaminants. A rapid, inexpensive, large-scale DNA extraction method involving minimal purification has been developed that is applicable to various soil types (1). DNA is also suitable for PCR amplification using various DNA targets. DNA was extracted from 100g of soil using direct lysis with glass beads and SDS followed by potassium acetate precipitation, polyethylene glycol precipitation, phenol extraction and isopropanol pr...

  18. EXTRACT

    DEFF Research Database (Denmark)

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra;

    2016-01-01

    therefore developed an interactive annotation tool, EXTRACT, which helps curators identify and extract standard-compliant terms for annotation of metagenomic records and other samples. Behind its web-based user interface, the system combines published methods for named entity recognition of environment...... and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15-25% and helps curators to detect terms that would otherwise have been missed. Database URL: https://extract.hcmr.gr/....

  19. Semi-automatic template matching based extraction of hyperbolic signatures in ground-penetrating radar images

    Science.gov (United States)

    Sagnard, Florence; Tarel, Jean-Philippe

    2015-04-01

    In civil engineering applications, ground-penetrating radar (GPR) is one of the main non-destructive techniques, based on the refraction and reflection of electromagnetic waves, used to probe the underground and in particular to detect damage (cracks, delaminations, texture changes…) and buried objects (utilities, rebars…). A UWB ground-coupled radar operating in the frequency band [0.46;4] GHz and made of bowtie slot antennas has been used because, compared to an air-launched radar, it increases the energy transfer of electromagnetic radiation into the sub-surface and the penetration depth. This paper proposes an original adaptation of the generic template matching algorithm to GPR images to recognize, localize and parametrically characterize a specific pattern associated with a hyperbola signature in the two main polarizations. The processing of a radargram (B-scan) is based on four main steps. The first step consists of pre-processing and scaling. The second step uses template matching to isolate and localize individual hyperbola signatures in an environment containing unwanted reflections, noise and overlapping signatures. The algorithm requires generating and collecting a set of reference hyperbola templates, each made of a small reflection pattern in the vicinity of the apex, in order to further analyze multiple time signals of embedded targets in an image. The standard Euclidean distance between the shifted template and a local zone in the radargram yields a map of distances. A user-defined threshold allows a reduced number of zones with a high similarity measure to be selected. In the third step, each zone is analyzed to detect the minimum or maximum discrete amplitudes belonging to the first arrival times of a hyperbola signature. In the fourth step, the extracted discrete data (i,j) are fitted by a parametric hyperbola model based on the straight ray path hypothesis and using a constrained least-squares criterion associated with parameter ranges, that are the position, the
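
    The distance-map stage of the second step reduces to a sliding-window comparison; the unoptimized NumPy sketch below makes that concrete (a production version would vectorize or use FFT-based correlation).

```python
# Naive Euclidean-distance map between a B-scan and a hyperbola template.
import numpy as np

def distance_map(bscan, template):
    th, tw = template.shape
    H, W = bscan.shape
    out = np.full((H - th + 1, W - tw + 1), np.inf)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = bscan[i:i + th, j:j + tw]
            out[i, j] = np.linalg.norm(patch - template)
    return out

def detections(dmap, threshold):
    """Positions whose distance falls under the user-defined threshold."""
    return np.argwhere(dmap < threshold)   # candidate apex neighbourhoods
```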

  20. A realistic assessment of methods for extracting gene/protein interactions from free text

    Directory of Open Access Journals (Sweden)

    Shepherd Adrian J

    2009-07-01

    Full Text Available Abstract Background The automated extraction of gene and/or protein interactions from the literature is one of the most important targets of biomedical text mining research. In this paper we present a realistic evaluation of gene/protein interaction mining relevant to potential non-specialist users. Hence we have specifically avoided methods that are complex to install or require reimplementation, and we coupled our chosen extraction methods with a state-of-the-art biomedical named entity tagger. Results Our results show: that performance across different evaluation corpora is extremely variable; that the use of tagged (as opposed to gold standard) gene and protein names has a significant impact on performance, with a drop in F-score of over 20 percentage points being commonplace; and that a simple keyword-based benchmark algorithm, when coupled with a named entity tagger, outperforms two of the tools most widely used to extract gene/protein interactions. Conclusion In terms of availability, ease of use and performance, the potential non-specialist user community interested in automatically extracting gene and/or protein interactions from free text is poorly served by current tools and systems. The public release of extraction tools that are easy to install and use, and that achieve state-of-the-art levels of performance, should be treated as a high priority by the biomedical text mining community.

  1. An automatic registration method for frameless stereotaxy, image guided surgery, and enhanced reality visualization

    International Nuclear Information System (INIS)

    There is a need for frameless guidance systems to help surgeons plan the exact location for incisions, to define the margins of tumors, and to precisely identify locations of neighboring critical structures. The authors have developed an automatic technique for registering clinical data, such as segmented magnetic resonance imaging (MRI) or computed tomography (CT) reconstructions, with any view of the patient on the operating table. They demonstrate on the specific example of neurosurgery. The method enables a visual mix of live video of the patient and the segmented three-dimensional (3-D) MRI or CT model. This supports enhanced reality techniques for planning and guiding neurosurgical procedures and allows them to interactively view extracranial or intracranial structures nonintrusively. Extensions of the method include image guided biopsies, focused therapeutic procedures, and clinical studies involving change detection over time sequences of images

  2. Prenatal express-diagnosis by the method of QF-PCR and automatic microelectroforesis with microarrays

    Institute of Scientific and Technical Information of China (English)

    Zaporozhan VN; Bubnov VV; Marichereda VG; Verbitskaya TG; Belous OB

    2011-01-01

    Modern molecular-genetic methods have been actively implemented in medical practice. They improve diagnostic accuracy, help to prognosticate the course of oncological diseases, optimize the results of prenatal diagnosis, decrease mothers' anxiety and improve the clinical outcomes of pregnancy. Various traditional approaches are used, e.g. karyotyping and FISH, as well as more contemporary ones: real-time PCR, comparative genomic hybridization (CGH) or chromosomal microarray analysis (CMA), and quantitative fluorescent PCR (QF-PCR). For express diagnosis of trisomy of chromosomes 21 and 18, QF-PCR technology was used with subsequent quantitative analysis on automatic capillary microelectrophoresis with Experion DNA 1K microarrays. The diagnostic accuracy of QF-PCR was found to be comparable with existing routine methods, but it has some advantages, including rapidity, and can be recommended for implementation in practical medicine.

  3. An adaptive spatial clustering method for automatic brain MR image segmentation

    Institute of Scientific and Technical Information of China (English)

    Jingdan Zhang; Daoqing Dai

    2009-01-01

    In this paper, an adaptive spatial clustering method is presented for automatic brain MR image segmentation, based on a competitive learning algorithm, the self-organizing map (SOM). We use a pattern recognition approach in terms of feature generation and classifier design. Firstly, a multi-dimensional feature vector is constructed using local spatial information. Then, an adaptive spatial growing hierarchical SOM (ASGHSOM) is proposed as the classifier, which is an extension of SOM, fusing multi-scale segmentation with the competitive learning clustering algorithm to overcome the problem of overlapping grey-scale intensities on boundary regions. Furthermore, an adaptive spatial distance is integrated with ASGHSOM, in which local spatial information is considered in the clustering process to reduce the noise effect and the classification ambiguity. Our proposed method is validated by extensive experiments using both simulated and real MR data with varying noise levels, and is compared with state-of-the-art algorithms.

  4. Quantitative Study on Nonmetallic Inclusion Particles in Steels by Automatic Image Analysis With Extreme Values Method

    Institute of Scientific and Technical Information of China (English)

    Cássio Barbosa; José Brant de Campos; Jôneo Lopes do Nascimento; Iêda Maria Vieira Caminha

    2009-01-01

    The presence of nonmetallic inclusion particles, which appear during the steelmaking process, is harmful to the properties of steels, mainly as a function of aspects such as the size, volume fraction, shape, and distribution of these particles. The automatic image analysis technique is one of the most important tools for the quantitative determination of these parameters. The classical Student approach and the Extreme Values Method (EVM) were used for inclusion size and shape determination and for evaluating the distance between inclusion particles. The results thus obtained indicated that there were significant differences in the characteristics of the inclusion particles in the analyzed products. The two methods achieved results with some differences, indicating that EVM could be used as a faster and more reliable statistical methodology.
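
    As a hedged illustration of the extreme-values side of such an analysis (the data and return period below are invented, and the paper's exact EVM variant is not reproduced): fit a Gumbel distribution to the largest inclusion found in each inspected field and extrapolate the characteristic maximum inclusion size, in the spirit of ASTM-style extreme-value inclusion rating.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      # Stand-in data: largest inclusion size (um) in each of 24 fields.
      max_sizes = rng.gumbel(loc=12.0, scale=3.0, size=24)

      loc, scale = stats.gumbel_r.fit(max_sizes)
      T = 1000.0  # return period (number of reference fields), illustrative
      L_T = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc, scale)
      print(f"characteristic largest inclusion over {T:.0f} fields: {L_T:.1f} um")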

  5. Towards Automatic Extraction of Social Networks of Organizations in PubMed Abstracts

    CERN Document Server

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2010-01-01

    Social Network Analysis (SNA) of organizations can attract great interest from government agencies and scientists for its ability to boost translational research and accelerate the process of converting research to care. For SNA of a particular disease area, we need to identify the key research groups in that area by mining the affiliation information from PubMed. This not only involves recognizing the organization names in the affiliation string, but also resolving ambiguities to associate each article with a unique organization. We present here a process of normalization that involves clustering based on local sequence alignment metrics and local learning based on finding connected components. We demonstrate the application of the method by analyzing organizations involved in angiogenesis treatment, and show the utility of the results for researchers in the pharmaceutical and biotechnology industries or national funding agencies.

  6. Automatic Method for Controlling the Iodine Adsorption Number in Carbon Black Oil Furnaces

    Directory of Open Access Journals (Sweden)

    Zečević, N.

    2008-12-01

    Full Text Available There are numerous inlet process factors in carbon black oil furnaces which must be continuously and automatically adjusted to ensure stable quality of the final product. The six most important inlet process factors in carbon black oil furnaces are:
    1. volume flow of process air for combustion;
    2. temperature of process air for combustion;
    3. volume flow of natural gas to supply the heat necessary for the thermal conversion of the hydrocarbon oil feedstock into oil-furnace carbon black;
    4. mass flow rate of the hydrocarbon oil feedstock;
    5. type and quantity of the additive used to adjust the structure of the oil-furnace carbon black;
    6. quantity and position of the quench water used to cool the oil-furnace carbon black reaction.
    The adsorption capacity of oil-furnace carbon black is controlled through the mass flow rate of the hydrocarbon feedstock, which is the most important inlet process factor. In the industrial process, the adsorption capacity of oil-furnace carbon black is determined by laboratory analysis of the iodine adsorption number. A continuous, automatic method for controlling the iodine adsorption number in carbon black oil furnaces is presented, aimed at the most efficient possible control of adsorption capacity. The proposed method reveals the correlation between the qualitative-quantitative composition of the process tail gases in the production of oil-furnace carbon black and the ratio of combustion air to hydrocarbon feedstock. It is shown that this ratio depends on the adsorption capacity, summarized by the iodine adsorption number, with respect to the BMCI index of the hydrocarbon oil feedstock. The mentioned correlation can be seen in Figures 1 to 4. Of the whole composition of the process tail gases, the best correlation for continuous automatic control of the iodine adsorption number is shown by the volume fraction of methane. The volume fraction of methane in the

  7. An automatic seismic signal detection method based on fourth-order statistics and applications

    Institute of Scientific and Technical Information of China (English)

    Liu Xi-Qiang; Cai Yin; Zhao Rui; Zhao Yin-Gang; Qu Bao-An; Feng Zhi-Jun; Li Hong

    2014-01-01

    Real-time, automatic, and accurate determination of seismic signals is critical for rapid earthquake reporting and early warning. In this study, we present a correction trigger function (CTF) for automatically detecting regional seismic events and a fourth-order statistics algorithm with the Akaike information criterion (AIC) for determining the direct wave phase, based on the differences, or changes, in energy, frequency, and amplitude of the direct P- or S-wave signal and noise. Simulations suggest that the proposed fourth-order statistics achieve high resolution even for weak signals and for noise variations of different amplitude, frequency, and polarization characteristics. To improve the precision of establishing the S-wave onset, first a specific segment of the P-wave seismogram is selected and the polarization characteristics of the data are obtained. Second, the S-wave seismograms that contain the specific P-wave segment are analyzed by S-wave polarization filtering. Finally, the S-wave phase onset times are estimated. The proposed algorithm was used to analyze regional earthquake data from the Shandong Seismic Network. The results suggest that compared with conventional methods, the proposed algorithm greatly decreased false and missed earthquake triggers, and improved the detection precision of direct P- and S-wave phases.
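
    A compact sketch of an AIC-based onset picker of the kind the abstract combines with fourth-order statistics (a standard Maeda-style formulation; the paper's CTF and windowing details are not reproduced here):

      import numpy as np

      def aic_pick(x):
          # AIC(k) = k*log(var(x[:k])) + (N-k-1)*log(var(x[k:]));
          # the global minimum marks the most likely phase onset.
          x = np.asarray(x, dtype=float)
          N = len(x)
          aic = np.full(N, np.inf)
          for k in range(2, N - 2):
              v1, v2 = np.var(x[:k]), np.var(x[k:])
              if v1 > 0.0 and v2 > 0.0:
                  aic[k] = k * np.log(v1) + (N - k - 1) * np.log(v2)
          return int(np.argmin(aic))

    In practice a kurtosis (fourth-order) detector first flags a window containing the emergent phase, and the AIC minimum inside that window refines the pick.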

  8. Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry

    Science.gov (United States)

    Meier, Raphael; Knecht, Urspeter; Loosli, Tina; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2016-01-01

    Information about the size of a tumor and its temporal evolution is needed for diagnosis as well as treatment of brain tumor patients. The aim of the study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal Magnetic Resonance (MR) Imaging data of 14 patients with newly diagnosed glioblastoma encompassing 64 MR acquisitions, ranging from preoperative up to 12 month follow-up images, was analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83–0.96, p < 0.001) were observed between volumetric estimates of BraTumIA and of each of the human raters for the contrast-enhancing (CET) and non-enhancing T2-hyperintense tumor compartments (NCE-T2). A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the human raters. In summary, BraTumIA generated volumetric trend curves of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute manual volumetric follow-up of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments. PMID:27001047

  9. A semi-automatic computer-aided method for surgical template design

    Science.gov (United States)

    Chen, Xiaojun; Xu, Lu; Yang, Yue; Egger, Jan

    2016-02-01

    This paper presents a generalized integrated framework for semi-automatic surgical template design. Several algorithms were implemented, including mesh segmentation, offset surface generation, collision detection and ruled surface generation, and special software named TemDesigner was developed. With a simple user interface, a customized template can be semi-automatically designed according to the preoperative plan. Firstly, mesh segmentation based on a signed vertex scalar is utilized to partition the inner surface from the input surface mesh, based on the indicated point loop. Then, the offset surface of the inner surface is obtained by contouring the distance field of the inner surface, and segmented to generate the outer surface. A ruled surface is employed to connect the inner and outer surfaces. Finally, drilling tubes are generated according to the preoperative plan through collision detection and merging. The framework has been applied to template design for various kinds of surgery, including oral implantology, cervical pedicle screw insertion, iliosacral screw insertion and osteotomy, demonstrating the efficiency, functionality and generality of our method.

  10. An efficient method for DNA extraction from Cladosporioid fungi

    NARCIS (Netherlands)

    Moslem, M.A.; Bahkali, A.H.; Abd-Elsalam, K.A.; Wit, de P.J.G.M.

    2010-01-01

    We developed an efficient method for DNA extraction from Cladosporioid fungi, which are important fungal plant pathogens. The cell wall of Cladosporioid fungi is often melanized, which makes it difficult to extract DNA from their cells. In order to overcome this we grew these fungi for three days on

  11. An Improved Method for Extraction and Separation of Photosynthetic Pigments

    Science.gov (United States)

    Katayama, Nobuyasu; Kanaizuka, Yasuhiro; Sudarmi, Rini; Yokohama, Yasutsugu

    2003-01-01

    The method for extracting and separating hydrophobic photosynthetic pigments proposed by Katayama "et al." ("Japanese Journal of Phycology," 42, 71-77, 1994) has been improved to introduce it to student laboratories at the senior high school level. Silica gel powder was used for removing water from fresh materials prior to extracting pigments by a…

  12. A RAPID PCR-QUALITY DNA EXTRACTION METHOD IN FISH

    Institute of Scientific and Technical Information of China (English)

    LI Zhong; LIANG Hong-Wei; ZOU Gui-Wei

    2012-01-01

    PCR is a generally preferred method for biological research in fish, and previous research has enabled PCR-quality DNA templates to be extracted and purified in laboratories [1-4]. A problem shared by these procedures is the long time spent waiting for tissue digestion. The excessive time spent on PCR-quality DNA extraction restricts the efficiency of PCR assays, especially in large-scale PCR amplification, such as SSR-based genetic map construction [5,6], identification of germplasm resources [7,8] and evolution research [9,10]. In this study, a stable and rapid PCR-quality DNA extraction method using a modified alkaline lysis protocol was developed. Extracting DNA for PCR takes only approximately 25 minutes. This stable and rapid DNA extraction method can save considerable laboratory time.

  13. Contrast-based fully automatic segmentation of white matter hyperintensities: method and validation.

    Directory of Open Access Journals (Sweden)

    Thomas Samaille

    Full Text Available White matter hyperintensities (WMH) on T2 or FLAIR sequences have been commonly observed on MR images of elderly people. They have been associated with various disorders and have been shown to be a strong risk factor for stroke and dementia. WMH studies usually require visual evaluation of WMH load or time-consuming manual delineation. This paper introduces WHASA (White matter Hyperintensities Automated Segmentation Algorithm), a new method for automatically segmenting WMH from FLAIR and T1 images in multicentre studies. Contrary to previous approaches that were based on intensities, this method relies on contrast: nonlinear diffusion filtering alternated with watershed segmentation to obtain piecewise constant images with increased contrast between WMH and surrounding tissues. WMH are then selected based on a subject-dependent, automatically computed threshold and anatomical information. WHASA was evaluated on 67 patients from two studies, acquired on six different MRI scanners and displaying a wide range of lesion load. Accuracy of the segmentation was assessed through volume and spatial agreement measures with respect to manual segmentation; an intraclass correlation coefficient (ICC) of 0.96 and a mean similarity index (SI) of 0.72 were obtained. WHASA was compared to four other approaches: Freesurfer and a thresholding approach as unsupervised methods; k-nearest neighbours (kNN) and support vector machines (SVM) as supervised ones. For the latter, the influence of the training set was also investigated. WHASA clearly outperformed both unsupervised methods, while performing at least as well as the supervised approaches (ICC range: 0.87-0.91 for kNN; 0.89-0.94 for SVM; mean SI: 0.63-0.71 for kNN, 0.67-0.72 for SVM), and did not need any training set.
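
    The contrast-increasing diffusion step can be illustrated with a classic Perona-Malik scheme (a generic stand-in: WHASA's exact filter and its alternation with watershed are not reproduced, and all parameters are illustrative):

      import numpy as np

      def perona_malik(img, n_iter=20, kappa=30.0, gamma=0.2):
          # Edge-preserving diffusion: smooths inside regions while the
          # conduction term exp(-(grad/kappa)^2) blocks flow across strong
          # edges, pushing the image toward a piecewise-constant appearance.
          u = img.astype(float).copy()
          for _ in range(n_iter):
              total = np.zeros_like(u)
              for axis, shift in ((0, -1), (0, 1), (1, -1), (1, 1)):
                  d = np.roll(u, shift, axis=axis) - u
                  total += np.exp(-(d / kappa) ** 2) * d
              u += gamma * total
          return u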

  14. ISS Contingency Attitude Control Recovery Method for Loss of Automatic Thruster Control

    Science.gov (United States)

    Bedrossian, Nazareth; Bhatt, Sagar; Alaniz, Abran; McCants, Edward; Nguyen, Louis; Chamitoff, Greg

    2008-01-01

    In this paper, the attitude control issues associated with International Space Station (ISS) loss of automatic thruster control capability are discussed and methods for attitude control recovery are presented. This scenario was experienced recently during Shuttle mission STS-117 and ISS Stage 13A in June 2007 when the Russian GN&C computers, which command the ISS thrusters, failed. Without automatic propulsive attitude control, the ISS would not be able to regain attitude control after the Orbiter undocked. The core issues associated with recovering long-term attitude control using CMGs are described as well as the systems engineering analysis to identify recovery options. It is shown that the recovery method can be separated into a procedure for rate damping to a safe harbor gravity gradient stable orientation and a capability to maneuver the vehicle to the necessary initial conditions for long term attitude hold. A manual control option using Soyuz and Progress vehicle thrusters is investigated for rate damping and maneuvers. The issues with implementing such an option are presented and the key issue of closed-loop stability is addressed. A new non-propulsive alternative to thruster control, Zero Propellant Maneuver (ZPM) attitude control method is introduced and its rate damping and maneuver performance evaluated. It is shown that ZPM can meet the tight attitude and rate error tolerances needed for long term attitude control. A combination of manual thruster rate damping to a safe harbor attitude followed by a ZPM to Stage long term attitude control orientation was selected by the Anomaly Resolution Team as the alternate attitude control method for such a contingency.

  15. Extracting natural dyes from wool—an evaluation of extraction methods

    OpenAIRE

    Manhita, Ana; Ferreira, Teresa; Candeias, António; Barrocas Dias, Cristina

    2011-01-01

    The efficiency of eight different procedures used for the extraction of natural dyes was evaluated using contemporary wool samples dyed with cochineal, madder, woad, weld, brazilwood and logwood. Comparison was made based on the LC-DAD peak areas of the natural dye’s main components which had been extracted from the wool samples. Among the tested methods, an extraction procedure with Na2EDTA in water/DMF (1:1, v/v) proved to be the most suitable for the extraction of the studied dyes, ...

  16. An interactive tool for semi-automatic feature extraction of hyperspectral data

    Science.gov (United States)

    Kovács, Zoltán; Szabó, Szilárd

    2016-09-01

    The spectral reflectance of the surface provides valuable information about the environment, which can be used to identify objects (e.g. land cover classification) or to estimate quantities of substances (e.g. biomass). We aimed to develop an MS Excel add-in - Hyperspectral Data Analyst (HypDA) - for multipurpose quantitative analysis of spectral data, written in the VBA programming language. HypDA was designed to calculate spectral indices from spectral data with user-defined formulas (in all possible combinations involving a maximum of 4 bands) and to find the best correlations between these indices and the quantitative attribute data of the same objects. Different types of regression models reveal the relationships, and the best results are saved in a worksheet. Qualitative variables can also be involved in the analysis, carried out with separability and hypothesis testing, i.e. to find the wavelengths responsible for separating data into predefined groups. HypDA can be used both with hyperspectral imagery and with spectrometer measurements. This bivariate approach requires significantly fewer observations than popular multivariate methods; it can therefore be applied to a wide range of research areas.
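
    A sketch of the core search HypDA automates, transplanted from VBA to Python for brevity (function and variable names are invented): brute-force all band pairs, build a normalized-difference index from each, and keep the one best correlated with the attribute of interest.

      import numpy as np

      def best_band_pair(spectra, target):
          # spectra: (n_samples, n_bands); target: (n_samples,) attribute.
          n_bands = spectra.shape[1]
          best_r, best_pair = 0.0, None
          for i in range(n_bands):
              for j in range(i + 1, n_bands):
                  denom = spectra[:, i] + spectra[:, j]
                  index = (spectra[:, i] - spectra[:, j]) / np.where(denom == 0.0, 1.0, denom)
                  r = abs(np.corrcoef(index, target)[0, 1])
                  if r > best_r:
                      best_r, best_pair = r, (i, j)
          return best_pair, best_r

    HypDA additionally explores formulas with up to four bands and several regression models; the two-band case above shows the pattern.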

  17. A method for extracting $\cos\alpha$

    CERN Document Server

    Grinstein, B; Rothstein, I Z; Grinstein, Benjamin; Nolte, Detlef R.; Rothstein, Ira Z.

    2000-01-01

    We show that it is possible to extract the weak mixing angle alpha via a measurement of the rate for B^+(-) -> \pi^+(-) e^+e^-. The sensitivity to cos(alpha) results from the interference between the long and short distance contributions. The short distance contribution can be computed, using heavy quark symmetry, in terms of semi-leptonic form factors. More importantly, we show that, using Ward identities and a short distance operator product expansion, the long distance contribution can be calculated without recourse to light cone wave functions when the invariant mass of the lepton pair, q^2, is much larger than \Lambda_{QCD}^2. We find that for q^2 > 2 GeV^2 the branching fraction is approximately 1 x 10^{-8} |V_{td}/0.008|^2. The shape of the differential rate is very sensitive to the value of cos(alpha) at small values of q^2, with dGamma/dq^2 varying up to 50% in the interval -1 < cos(alpha) < 1 at q^2 = 2 GeV^2. The size of the variation depends upon the ratio V_{ub}/V_{td}.

  18. Noncontact optical imaging in mice with full angular coverage and automatic surface extraction

    Science.gov (United States)

    Meyer, Heiko; Garofalakis, Anikitos; Zacharakis, Giannis; Psycharakis, Stylianos; Mamalaki, Clio; Kioussis, Dimitris; Economou, Eleftherios N.; Ntziachristos, Vasilis; Ripoll, Jorge

    2007-06-01

    During the past decade, optical imaging combined with tomographic approaches has proved its potential in offering quantitative three-dimensional spatial maps of chromophore or fluorophore concentration in vivo. Due to its direct application in biology and biomedicine, diffuse optical tomography (DOT) and its fluorescence counterpart, fluorescence molecular tomography (FMT), have benefited from an increase in devoted research and new experimental and theoretical developments, giving rise to a new imaging modality. The most recent advances in FMT and DOT are based on the capability of collecting large data sets by using CCDs as detectors, and on the ability to include multiple projections through recently developed noncontact approaches. For these to be implemented, we have developed an imaging setup that enables three-dimensional imaging of arbitrary shapes in fluorescence or absorption mode that is appropriate for small animal imaging. This is achieved by implementing a noncontact approach both for sources and detectors and coregistering surface geometry measurements using the same CCD camera. A thresholded shadowgrammetry approach is applied to the geometry measurements to retrieve the surface mesh. We present the evaluation of the system and method in recovering three-dimensional surfaces from phantom data and live mice. The approach is used to map the measured in vivo fluorescence data onto the tissue surface by making use of the free-space propagation equations, as well as to reconstruct fluorescence concentrations inside highly scattering tissuelike phantom samples. Finally, the potential use of this setup for in vivo small animal imaging and its impact on biomedical research is discussed.

  19. A Fast and Fully Automatic Method for Cerebrovascular Segmentation on Time-of-Flight (TOF) MRA Image

    OpenAIRE

    Gao, Xin; Uchiyama, Yoshikazu; Zhou, Xiangrong; HARA, TAKESHI; Asano, Takahiko; Fujita, Hiroshi

    2010-01-01

    The precise three-dimensional (3-D) segmentation of cerebral vessels from magnetic resonance angiography (MRA) images is essential for the detection of cerebrovascular diseases (e.g., occlusion, aneurysm). The complex 3-D structure of cerebral vessels and the low contrast of thin vessels in MRA images make precise segmentation difficult. We present a fast, fully automatic segmentation algorithm based on statistical model analysis and improved curve evolution for extracting the 3-D cerebral ve...

  20. A comparison of DNA extraction methods using Petunia hybrida tissues.

    Science.gov (United States)

    Tamari, Farshad; Hinkley, Craig S; Ramprashad, Naderia

    2013-09-01

    Extraction of DNA from plant tissue is often problematic, as many plants contain high levels of secondary metabolites that can interfere with downstream applications, such as PCR. Removal of these secondary metabolites usually requires further purification of the DNA using organic solvents or other toxic substances. In this study, we have compared two methods of DNA purification: the cetyltrimethylammonium bromide (CTAB) method, which uses the ionic detergent hexadecyltrimethylammonium bromide and chloroform-isoamyl alcohol, and the Edwards method, which uses the anionic detergent SDS and isopropyl alcohol. Our results show that the Edwards method works better than the CTAB method for extracting DNA from tissues of Petunia hybrida. For six of the eight tissues, the Edwards method yielded more DNA than the CTAB method. In four of the tissues, this difference was statistically significant, and the Edwards method yielded 27-80% more DNA than the CTAB method. Among the different tissues tested, we found that buds, 4 days before anthesis, had the highest DNA concentrations and that buds and reproductive tissue, in general, yielded higher DNA concentrations than other tissues. In addition, DNA extracted using the Edwards method was more consistently PCR-amplified than CTAB-extracted DNA. Based on these results, we recommend using the Edwards method to extract DNA from plant tissues and using buds and reproductive structures for the highest DNA yields. PMID:23997658

  1. Using Nanoinformatics Methods for Automatically Identifying Relevant Nanotoxicology Entities from the Literature

    Directory of Open Access Journals (Sweden)

    Miguel García-Remesal

    2013-01-01

    Full Text Available Nanoinformatics is an emerging research field that uses informatics techniques to collect, process, store, and retrieve data, information, and knowledge on nanoparticles, nanomaterials, and nanodevices and their potential applications in health care. In this paper, we have focused on the solutions that nanoinformatics can provide to facilitate nanotoxicology research. For this, we have taken a computational approach to automatically recognize and extract nanotoxicology-related entities from the scientific literature. The desired entities belong to four different categories: nanoparticles, routes of exposure, toxic effects, and targets. The entity recognizer was trained using a corpus that we specifically created for this purpose and was validated by two nanomedicine/nanotoxicology experts. We evaluated the performance of our entity recognizer using 10-fold cross-validation. The precisions range from 87.6% (targets) to 93.0% (routes of exposure), while recall values range from 82.6% (routes of exposure) to 87.4% (toxic effects). These results prove the feasibility of using computational approaches to reliably perform different named entity recognition (NER)-dependent tasks, such as for instance augmented reading or semantic searches. This research is a “proof of concept” that can be expanded to stimulate further developments that could assist researchers in managing data, information, and knowledge at the nanolevel, thus accelerating research in nanomedicine.

  2. Accurate and automatic extrinsic calibration method for blade measurement system integrated by different optical sensors

    Science.gov (United States)

    He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu

    2014-11-01

    Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and an omnipotent sensor that can handle complex inspection tasks in an accurate and effective way hardly exists. The prevailing solution is to integrate multiple sensors and take advantage of their strengths. To obtain a holistic 3D profile, the data from the different sensors should be registered into a coherent coordinate system. However, some complex-shaped objects, such as blades, have thin-wall features for which the ICP registration method becomes unstable. It is therefore very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system integrating different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be optimally moved to any desired position at the object's surface. To simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation is used to roughly align the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and the Generalized Gauss-Markoff model is then used to estimate the optimal transformation parameters. The experiments show the measurement result of a blade, where several sampled patches are merged into one point cloud, verifying the performance of the proposed method.

  3. Comparison of four methods of DNA extraction from rice

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Polyphenols, terpenes, and resins make it difficult to obtain high-quality genomic DNA from rice. Four extraction methods were compared in our study, and CTAB precipitation was the most practical one.

  4. Automatic detection method for mura defects on display film surface using modified Weber's law

    Science.gov (United States)

    Kim, Myung-Muk; Lee, Seung-Ho

    2014-07-01

    We propose a method that automatically detects mura defects on display film surfaces using a modified version of Weber's law. The proposed method detects mura defects regardless of their properties and shapes by identifying regions perceived by human vision as mura, using pixel brightness and the image distribution ratio of mura in the image histogram. The proposed detection method comprises five stages. In the first stage, the display film surface image is acquired and a gray-level shift is performed. In the second and third stages, the image histogram is acquired and analyzed, respectively. In the fourth stage, the mura range is acquired. This is followed by postprocessing in the fifth stage. Evaluations of the proposed method conducted using 200 display film mura image samples indicate a maximum detection rate of ~95.5%. Further, the results of applying the Semu index for luminance mura in flat panel display (FPD) image quality inspection indicate that the proposed method is more reliable than a popular conventional method.
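
    A minimal sketch of the Weber-contrast idea behind such a detector (the paper's modified law, histogram analysis, and post-processing are not reproduced; the background filter size and threshold below are illustrative):

      import numpy as np
      from scipy.ndimage import uniform_filter

      def mura_candidates(gray, weber_k=0.02, bg_size=65):
          # Estimate a smooth background, then flag pixels whose Weber
          # contrast |I - I_b| / I_b exceeds a just-noticeable fraction.
          g = gray.astype(float)
          background = uniform_filter(g, size=bg_size)
          weber = np.abs(g - background) / np.maximum(background, 1.0)
          return weber > weber_k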

  5. An improved automatic detection method for earthquake-collapsed buildings from ADS40 image

    Institute of Scientific and Technical Information of China (English)

    GUO HuaDong; LU LinLin; MA JianWen; PESARESI Martino; YUAN FangYan

    2009-01-01

    Earthquake-collapsed building identification is important in earthquake damage assessment and provides evidence for mapping seismic intensity. After the May 12th Wenchuan major earthquake occurred, experts from CEODE and IPSC collaborated to make a rapid earthquake damage assessment. A crucial task was to identify collapsed buildings from ADS40 images in the earthquake region. The difficulty was to differentiate collapsed buildings from concrete bridges, dry gravels, and landslide-induced rolling stones, since they had a similar gray-level range in the image. Based on the IPSC method, an improved automatic identification technique was developed and tested in the study area, a portion of Beichuan County. Final results showed that the technique's accuracy was over 95%. The procedures and results of this experiment are presented in this article. The theory underlying this technique indicates that it could be applied to the identification of buildings collapsed by other disasters.

  6. Research on automatic current sharing control methods for control power supply

    Directory of Open Access Journals (Sweden)

    Dai Xian Bin

    2016-01-01

    Full Text Available High-power switching devices in a control power supply have different saturated forward voltage drops and inconsistent turn-on/turn-off times, which leads to inconsistency in the external characteristics of inverter modules operating in parallel. Modules with better external characteristics carry more current and become overloaded, while modules with worse external characteristics remain lightly loaded; this increases the thermal stress of the modules carrying more current and affects the service life of the high-power switching devices. Based on simulation analysis of the small-signal model of the control power supply's automatic current-sharing method, the characteristics of the current-sharing loop control can be identified, namely the slow response speed of the current-sharing loop, which is beneficial for improving the stability of the entire control power supply system.
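
    A toy sketch of one way such a current-sharing loop can be realized (an average-current-share PI trim; the gains and structure are illustrative assumptions, not the paper's design): each module compares its output current with the bus average and slowly trims its voltage reference, the deliberately low bandwidth matching the slow response noted above.

      class CurrentShareLoop:
          """Per-module PI trim for average current sharing (illustrative)."""

          def __init__(self, kp=0.01, ki=0.5):
              self.kp, self.ki = kp, ki
              self.integ = 0.0

          def step(self, i_module, i_avg, dt):
              err = i_avg - i_module        # > 0: this module is under-loaded
              self.integ += err * dt
              return self.kp * err + self.ki * self.integ  # reference trim (V)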

  7. Transducer-actuator systems and methods for performing on-machine measurements and automatic part alignment

    Energy Technology Data Exchange (ETDEWEB)

    Barkman, William E.; Dow, Thomas A.; Garrard, Kenneth P.; Marston, Zachary

    2016-07-12

    Systems and methods for performing on-machine measurements and automatic part alignment, including: a measurement component operable for determining the position of a part on a machine; and an actuation component operable for adjusting the position of the part by contacting the part with a predetermined force responsive to the determined position of the part. The measurement component consists of a transducer. The actuation component consists of a linear actuator. Optionally, the measurement component and the actuation component consist of a single linear actuator operable for contacting the part with a first lighter force for determining the position of the part and with a second harder force for adjusting the position of the part. The actuation component is utilized in a substantially horizontal configuration and the effects of gravitational drop of the part are accounted for in the force applied and the timing of the contact.

  8. Evaluation of a Meta-1-based automatic indexing method for medical documents.

    Science.gov (United States)

    Wagner, M M; Cooper, G F

    1992-08-01

    This paper describes MetaIndex, an automatic indexing program that creates symbolic representations of documents for the purpose of document retrieval. MetaIndex uses a simple transition network parser to recognize a language that is derived from the set of main concepts in the Unified Medical Language System Metathesaurus (Meta-1). MetaIndex uses a hierarchy of medical concepts, also derived from Meta-1, to represent the content of documents. The goal of this approach is to improve document retrieval performance by better representation of documents. An evaluation method is described, and the performance of MetaIndex on the task of indexing the Slice of Life medical image collection is reported.

  9. Forward gated-diode method for parameter extraction of MOSFETs*

    Institute of Scientific and Technical Information of China (English)

    Zhang Chenfei; Ma Chenyue; Guo Xinjie; Zhang Xiufang; He Jin; Wang Guozeng; Yang Zhang; Liu Zhiwei

    2011-01-01

    The forward gated-diode method is used to extract the dielectric oxide thickness and body doping concentration of MOSFETs, especially when both variables are previously unknown. First, the dielectric oxide thickness and the body doping concentration as a function of the forward gated-diode peak recombination-generation (R-G) current are derived from the device physics. Then the peak R-G current characteristics of MOSFETs with different dielectric oxide thicknesses and body doping concentrations are simulated with ISE-Dessis for parameter extraction. The results from the simulation data demonstrate excellent agreement with those extracted by the forward gated-diode method.

  10. A Robust Digital Watermark Extracting Method Based on Neural Network

    Institute of Scientific and Technical Information of China (English)

    GUO Lihua; YANG Shutang; LI Jianhua

    2003-01-01

    Since watermark removal software, such as StirMark, has succeeded in washing watermarks away for most of the known watermarking systems, it is necessary to improve the robustness of watermarking systems. A watermark extraction method based on the error back-propagation (BP) neural network is presented in this paper, which can efficiently improve the robustness of watermarking systems. Experiments show that even if the watermarking systems are attacked by the StirMark software, the extraction method based on the neural network can still efficiently extract the whole watermark information.

  11. A PCR amplification method without DNA extraction.

    Science.gov (United States)

    Li, Hongwei; Xu, Haiyue; Zhao, Chunjiang; Sulaiman, Yiming; Wu, Changxin

    2011-02-01

    To develop a simple and inexpensive method for direct PCR amplification of animal DNA from tissues, we optimized different components and their concentrations in lysis buffer systems. Finally, we arrived at an optimized buffer system composed of 10 mmol tris(hydroxymethyl)aminomethane (Tris)-Cl (pH 8.0), 2 mmol ethylenediaminetetraacetic acid (EDTA) (pH 8.0), 0.2 mol NaCl and 200 μg/mL Proteinase K. Interestingly, the optimized buffer is also very effective when working with common human sample types, including blood, buccal cells and hair. The direct PCR method requires fewer reagents (Tris-Cl, EDTA, Proteinase K and NaCl) and less incubation time (only 35 min). The cost of treating each sample is less than $0.02, and all steps can be completed on a thermal cycler in a 96-well format. The proposed method will therefore significantly improve high-throughput PCR-based molecular assays in animal systems and in common human sample types.

  12. Extracting natural dyes from wool--an evaluation of extraction methods.

    Science.gov (United States)

    Manhita, Ana; Ferreira, Teresa; Candeias, António; Dias, Cristina Barrocas

    2011-05-01

    The efficiency of eight different procedures used for the extraction of natural dyes was evaluated using contemporary wool samples dyed with cochineal, madder, woad, weld, brazilwood and logwood. Comparison was made based on the LC-DAD peak areas of the natural dye's main components which had been extracted from the wool samples. Among the tested methods, an extraction procedure with Na(2)EDTA in water/DMF (1:1, v/v) proved to be the most suitable for the extraction of the studied dyes, which presented a wide range of chemical structures. The identification of the natural dyes used in the making of an eighteenth century Arraiolos carpet was possible using the Na(2)EDTA/DMF extraction of the wool embroidery samples and an LC-DAD-MS methodology. The effectiveness of the Na(2)EDTA/DMF extraction method was particularly observed in the extraction of weld dye components. Nine flavone derivatives previously identified in weld extracts could be identified in a single historical sample, confirming the use of this natural dye in the making of Arraiolos carpets. Indigo and brazilwood were also identified in the samples, and despite the fact that these natural dyes were referred in the historical recipes of Arraiolos dyeing, it is the first time that the use of brazilwood is confirmed. Mordant analysis by ICP-MS identified the widespread use of alum in the dyeing process, but in some samples with darker hues, high amounts of iron were found instead. PMID:21416400

  13. A Circular Statistical Method for Extracting Rotation Measures

    Indian Academy of Sciences (India)

    S. Sarala; Pankaj Jain

    2002-03-01

    We propose a new method for the extraction of Rotation Measures from spectral polarization data. The method is based on maximum likelihood analysis and takes into account the circular nature of the polarization data. The method is unbiased and statistically more efficient than the standard χ² procedure.
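
    A hedged sketch of a circular maximum-likelihood RM fit of the kind described (the paper's exact likelihood is not reproduced; this uses the standard device that polarization angles are defined modulo pi, so the objective depends only on the cosine of twice the residual):

      import numpy as np
      from scipy.optimize import minimize

      def fit_rm(lam2, chi_obs):
          # Model: chi(lambda^2) = chi0 + RM * lambda^2.
          def neg_loglike(params):
              chi0, rm = params
              return -np.sum(np.cos(2.0 * (chi_obs - chi0 - rm * lam2)))
          res = minimize(neg_loglike, x0=[0.0, 0.0], method="Nelder-Mead")
          return res.x  # [chi0 (rad), RM (rad m^-2) if lam2 is in m^2]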

  14. A new automatic method for registering point clouds

    Institute of Scientific and Technical Information of China (English)

    周绍光; 田慧; 李浩

    2012-01-01

    Registration of point clouds plays an essential role in processing the data acquired with a 3D laser scanner. A traditional semi-automatic registration scheme based on targets needs to scan each target separately. In this paper, we develop a new automatic registration method that converts the point cloud of each single station to a two-dimensional image by the central projection principle. It utilizes digital image processing technology to extract targets automatically and calculates the coordinates of their center points with photogrammetry, so as to achieve automatic point cloud registration. Experimental results show the effectiveness and reliability of this method.

  15. An Automatic Statistical Method to detect the Breast Border in a Mammogram

    Directory of Open Access Journals (Sweden)

    Wai Tak (Arthur) Hung

    2007-03-01

    Full Text Available Segmentation is an image processing technique to divide an image into several meaningful objects. Edge enhancement and border detection are important components of image segmentation. A mammogram is a soft x-ray of a woman's breast, which is read by radiologists to detect breast cancer. Recently, digital mammography has also become available. In order to do computer-aided detection on a mammogram, the image has to be either in digital form or digitized. A preprocessing step for a digital/digitized mammogram is to detect the breast border so as to minimize the area to be searched for breast lesions. An enclosed curve is used to define the breast area. In this paper we propose a modified measure of class separability and use it to select the best segmentation result objectively, which leads to an improved border detection method. This new method is then used to analyze a test set of 35 mammograms. The breast borders of these 35 mammograms were also traced manually twice to test their repeatability using Hung's method [1]. The borders obtained from the proposed automatic border detection method are shown to be of better quality than the corresponding ones traced manually.
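
    A small sketch of a between-class separability measure that can score candidate breast/background segmentations (an Otsu-style criterion used generically; the paper's specific modification is not reproduced):

      import numpy as np

      def separability(gray, mask):
          # Between-class variance of the two-class split, normalized by
          # total variance; the segmentation that maximizes it wins.
          inside, outside = gray[mask], gray[~mask]
          if inside.size == 0 or outside.size == 0:
              return 0.0
          w1 = inside.size / gray.size
          w2 = outside.size / gray.size
          mu = gray.mean()
          between = w1 * (inside.mean() - mu) ** 2 + w2 * (outside.mean() - mu) ** 2
          return between / max(gray.var(), 1e-12)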

  16. A Novel Neural Network Based Method Developed for Digit Recognition Applied to Automatic Speed Sign Recognition

    Directory of Open Access Journals (Sweden)

    Hanene Rouabeh

    2016-02-01

    Full Text Available This paper presents a new hybrid technique for digit recognition applied to the speed limit sign recognition task. The complete recognition system consists of the detection and recognition of the speed signs in RGB images. A pretreatment is applied to extract the pictogram from a detected circular road sign, and then the task discussed in this work is employed to recognize digit candidates. To achieve a compromise between performance, reduced execution time and optimized memory resources, the developed method is based on the joint use of a neural network and a decision tree. A simple network is employed first to classify the extracted candidates into three classes, and a small decision tree is then charged with determining the exact information. This combination is used to reduce the size of the network as well as memory resource utilization. The evaluation of the technique and the comparison with existing methods show its effectiveness.

  17. A Karnaugh-Map based fingerprint minutiae extraction method

    Directory of Open Access Journals (Sweden)

    Sunil Kumar Singla

    2010-07-01

    Full Text Available Fingerprint is one of the most promising methods among all the biometric techniques and has been used for personal authentication for a long time because of its wide acceptance and reliability. Features (minutiae) are extracted from the fingerprint in question and are compared with the features already stored in the database for authentication. Crossing number (CN) is the most commonly used minutiae extraction method for fingerprints. In this paper, a new Karnaugh-Map based fingerprint minutiae extraction method has been proposed and discussed. In the proposed algorithm the 8 neighbors of a pixel in a 3×3 window are arranged as 8 bits of a byte and the corresponding hexadecimal (hex) value is calculated. These hex values are simplified using the standard Karnaugh-Map (K-map) technique to obtain the minimized logical expression. Experiments conducted on the FVC2002/Db1_a database reveal that the developed method is better than the crossing number (CN) method.
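
    For contrast with the K-map approach, the baseline crossing-number method it is compared against can be sketched in a few lines (the standard formulation on a one-pixel-wide binary skeleton):

      import numpy as np

      def crossing_number_minutiae(skel):
          # CN = 0.5 * sum |P_i - P_{i+1}| over the 8 neighbours taken in
          # circular order; CN == 1 marks a ridge ending, CN == 3 a bifurcation.
          nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
          endings, bifurcations = [], []
          for r in range(1, skel.shape[0] - 1):
              for c in range(1, skel.shape[1] - 1):
                  if not skel[r, c]:
                      continue
                  p = [int(skel[r + dr, c + dc]) for dr, dc in nbrs]
                  cn = sum(abs(p[i] - p[(i + 1) % 8]) for i in range(8)) // 2
                  if cn == 1:
                      endings.append((r, c))
                  elif cn == 3:
                      bifurcations.append((r, c))
          return endings, bifurcations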

  18. Fully automatic lung segmentation and rib suppression methods to improve nodule detection in chest radiographs.

    Science.gov (United States)

    Soleymanpour, Elaheh; Pourreza, Hamid Reza; Ansaripour, Emad; Yazdi, Mehri Sadooghi

    2011-07-01

    Computer-aided diagnosis (CAD) systems can assist radiologists in several diagnostic tasks. Lung segmentation is one of the mandatory steps for initial detection of lung cancer in posterior-anterior chest radiographs. On the other hand, many CAD schemes in projection chest radiography may benefit from the suppression of the bony structures that overlay the lung fields, e.g. ribs. The original images are enhanced by adaptive contrast equalization and non-linear filtering. An initial estimate of the lung area is then obtained based on morphological operations and improved by growing this region to find the accurate final contour; for rib suppression, an oriented spatial Gabor filter is used. The proposed method was tested on a publicly available database of 247 chest radiographs. Results show that the method performed very well, with an accuracy of 96.25% for lung segmentation; we also show improved conspicuity of lung nodules after rib suppression, assessed with local nodule contrast measures. Because no additional radiation exposure or specialized equipment is required, the method could also be applied to bedside portable chest x-rays. In addition to the simplicity of these fully automatic methods, the lung segmentation and rib suppression algorithms perform accurately with low computation time and robustness to noise because of the suitable enhancement procedure.
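
    The rib-suppression idea can be sketched with scikit-image's Gabor filter (a loose stand-in: the paper's filter design, orientations and subtraction weight are not reproduced, and all parameters below are assumptions):

      import numpy as np
      from skimage.filters import gabor

      def rib_suppressed(image, frequency=0.05, thetas=(0.35, 0.5, 0.65), weight=0.8):
          # Accumulate responses of Gabor filters oriented roughly along the
          # posterior ribs, then subtract a fraction of that oriented energy.
          response = np.zeros_like(image, dtype=float)
          for theta in thetas:
              real, _ = gabor(image, frequency=frequency, theta=theta)
              response += real
          return image - weight * response / len(thetas)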

  19. Method for improved extraction of DNA from Nocardia asteroides.

    OpenAIRE

    Loeffelholz, M. J.; Scholl, D R

    1989-01-01

    In a variation of standard DNA extraction methods, Nocardia asteroides was repeatedly exposed to sodium dodecyl sulfate at 60 degrees C for 30 min; each extraction was followed by centrifugation, removal of the nucleic acid-rich supernatant, and suspension of the cell pellet in fresh sodium dodecyl sulfate. The pooled supernatants contained a substantially higher amount of DNA than the first supernatant alone. The possible implications of this procedure on the development of DNA probes are di...

  20. Automatic generation of a view to geographical database

    OpenAIRE

    Dunkars, Mats

    2001-01-01

    This thesis concerns object oriented modelling and automatic generalisation of geographic information. The focus however is not on traditional paper maps, but on screen maps that are automatically generated from a geographical database. Object oriented modelling is used to design screen maps that are equipped with methods that automatically extracts information from a geographical database, generalises the information and displays it on a screen. The thesis consists of three parts: a theoreti...

  1. Airway Segmentation and Centerline Extraction from Thoracic CT - Comparison of a New Method to State of the Art Commercialized Methods.

    Directory of Open Access Journals (Sweden)

    Pall Jens Reynisson

    centerlines. Reference segmentation comparison averages and standard deviations for MPM and TSF correspond to the literature. The TSF is able to segment the airways and extract the centerlines in one single step. The number of branches found is lower for the TSF method than in Mimics. OsiriX demands the highest number of clicks to process the data; the segmentation is often sparse, and extracting the centerline requires the use of another software system. Two of the software systems, the TSF method and the MPM, performed satisfactorily with respect to use in preprocessing CT images for navigated bronchoscopy. According to the reference segmentation, both TSF and MPM are comparable with other segmentation methods. The level of automaticity, the resulting high number of branches, and the fact that both the centerline and the surface of the airways were extracted are requirements we considered particularly important. The in-house method has the advantage of being an integrated part of a navigation platform for bronchoscopy, whilst the other methods can be considered preprocessing tools for a navigation system.

  2. Automatic segmentation of 4D cardiac MR images for extraction of ventricular chambers using a spatio-temporal approach

    Science.gov (United States)

    Atehortúa, Angélica; Zuluaga, Maria A.; Ourselin, Sébastien; Giraldo, Diana; Romero, Eduardo

    2016-03-01

    An accurate ventricular function quantification is important to support the evaluation, diagnosis and prognosis of several cardiac pathologies. However, expert heart delineation, specifically for the right ventricle, is a time-consuming task with high inter- and intra-observer variability. A fully automatic 3D+time heart segmentation framework is herein proposed for short-axis cardiac MRI sequences. This approach estimates the heart using exclusively information from the sequence itself, without tuning any parameters. The proposed framework uses a coarse-to-fine approach, which starts by localizing the heart via spatio-temporal analysis, followed by a segmentation of the basal heart that is then propagated to the apex by using a non-rigid registration strategy. The obtained volume is then refined by estimating the ventricular muscle by locally searching a prior endocardium-pericardium intensity pattern. The proposed framework was applied to 48 patient datasets supplied by the organizers of the MICCAI 2012 Right Ventricle segmentation challenge. Results show the robustness, efficiency and competitiveness of the proposed method both in terms of accuracy and computational load.

  3. UMLS-based automatic image indexing.

    Science.gov (United States)

    Sneiderman, Charles Alan; Demner-Fushman, Dina; Fung, Kin Wah; Bray, Bruce

    2008-01-01

    To date, most accurate image retrieval techniques rely on textual descriptions of images. Our goal is to automatically generate indexing terms for an image extracted from a biomedical article by identifying Unified Medical Language System (UMLS) concepts in the image caption and its discussion in the text. In a pilot evaluation of the suggested image indexing method by five physicians, a third of the automatically identified index terms were found suitable for indexing.

  4. A New Method to Extract Text from Natural Scenes

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    This paper presents a new method for text detection, location and binarization from natural scenes. Several morphological steps are used to detect the general position of the text, including English, Chinese and Japanese characters. Next, bounding boxes are processed by a new "Expand, Break and Merge" (EBM) method to get the precise text areas. Finally, text is binarized by a hybrid method based on Otsu and Niblack. This new approach can extract different kinds of text from complicated natural scenes. It is insensitive to noise, distortion, and text orientation. It also has good performance on extracting texts of various sizes.
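
    A hedged sketch of an Otsu+Niblack hybrid binarization (how the two thresholds are actually combined in the paper is not specified here, so a simple blend is assumed; window size and weights are illustrative):

      from skimage.filters import threshold_otsu, threshold_niblack

      def hybrid_binarize(gray, window_size=25, k=0.2, mix=0.5):
          # Blend the global Otsu threshold with Niblack's local threshold
          # surface, then binarize; mix=1 is pure Otsu, mix=0 pure Niblack.
          t_global = threshold_otsu(gray)
          t_local = threshold_niblack(gray, window_size=window_size, k=k)
          return gray > (mix * t_global + (1.0 - mix) * t_local)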

  5. Comparison of extraction methods for analysis of flavonoids in onions

    OpenAIRE

    Soeltoft, Malene; Knuthsen, Pia; Nielsen, John

    2008-01-01

    Onions are known to contain high levels of flavonoids, and a comparison of the efficiency, reproducibility and detection limits of various extraction methods has been made in order to develop fast and reliable analytical methods for the analysis of flavonoids in onions. Conventional and classical methods are time- and solvent-consuming, and the presence of light and oxygen during sample preparation facilitates degradation reactions. Thus, classical methods were compared with microwave (irradiatio...

  6. Automatic Calibration Method of Voxel Size for Cone-beam 3D-CT Scanning System

    CERN Document Server

    Yang, Min; Liu, Yipeng; Men, Fanyong; Li, Xingdong; Liu, Wenli; Wei, Dongbo

    2013-01-01

    For a cone-beam three-dimensional computed tomography (3D-CT) scanning system, voxel size is an important indicator to guarantee the accuracy of data analysis and feature measurement based on 3D-CT images. Meanwhile, the voxel size changes with the movement of the rotary table along the X-ray direction. In order to realize automatic calibration of the voxel size, a new easily implemented method is proposed. According to this method, several projections of a spherical phantom are captured at different imaging positions and the corresponding voxel size values are calculated by non-linear least squares fitting. From these fitted values, a linear equation is obtained, which reflects the relationship between the rotary table displacement distance from its nominal zero position and the voxel size. Finally, the linear equation is imported into the calibration module of the 3D-CT scanning system, and when the rotary table is moving along the X-ray direction, the accurate value of the voxel size is dynamically expo...
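
    The resulting calibration reduces to a one-dimensional linear fit, sketched below with invented numbers (the displacements and fitted voxel sizes are illustrative only):

      import numpy as np

      # Table displacement from nominal zero (mm) vs. voxel size (mm)
      # fitted from the sphere-phantom projections at each position.
      d = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
      v = np.array([0.080, 0.084, 0.088, 0.092, 0.096])

      slope, intercept = np.polyfit(d, v, 1)  # v = slope * d + intercept

      def voxel_size(displacement_mm):
          # Dynamic voxel size at an arbitrary table position.
          return slope * displacement_mm + intercept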

  7. Automatic Method for Identifying Photospheric Bright Points and Granules Observed by Sunrise

    CERN Document Server

    Javaherian, Mohsen; Amiri, Ali; Ziaei, Shervin

    2014-01-01

    In this study, we propose methods for the automatic detection of photospheric features (bright points and granules) from ultraviolet (UV) radiation, using a feature-based classifier. The methods use quiet-Sun observations in 214 nm and 525 nm images taken by Sunrise on 9 June 2009. Region-growing and mean-shift procedures are applied to segment the bright points (BPs) and granules, respectively. Zernike moments of each region are computed. The Zernike moments of BPs, granules, and other features are distinctive enough to be separated using a support vector machine (SVM) classifier. The size distribution of BPs can be fitted with a power law of slope -1.5. The peak value of granule sizes is found to be about 0.5 arcsec^2. The mean filling factor of BPs is 0.01, and for granules it is 0.51. There is a critical scale for granules, such that small granules with sizes smaller than 2.5 arcsec^2 cover a wide range of brightness, while the brightness of large granules approaches unity. The mean...
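
    The Zernike-moments-plus-SVM pipeline can be sketched compactly. The following is a toy illustration rather than the Sunrise pipeline itself: the masks and labels are synthetic stand-ins, and the mahotas library is assumed for the moment computation:

```python
import numpy as np
import mahotas.features
from sklearn.svm import SVC

def region_descriptor(region_mask, degree=8):
    """Zernike moments of one segmented region (BP or granule candidate)."""
    radius = max(region_mask.shape) // 2
    return mahotas.features.zernike_moments(region_mask.astype(float),
                                            radius, degree=degree)

# Synthetic stand-ins for segmented, expert-labelled training regions
rng = np.random.default_rng(0)
training_masks = [rng.random((32, 32)) > 0.5 for _ in range(20)]
training_labels = rng.integers(0, 2, 20)    # 0 = granule, 1 = bright point

X = np.vstack([region_descriptor(m) for m in training_masks])
clf = SVC(kernel="rbf").fit(X, training_labels)
prediction = clf.predict(region_descriptor(training_masks[0])[None, :])
```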

  8. Applications of automatic mesh generation and adaptive methods in computational medicine

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, J.A.; Macleod, R.S. [Univ. of Utah, Salt Lake City, UT (United States); Johnson, C.R.; Eason, J.C. [Duke Univ., Durham, NC (United States)

    1995-12-31

    Important problems in Computational Medicine exist that can benefit from the implementation of adaptive mesh refinement techniques. Biological systems are so inherently complex that only efficient models running on state-of-the-art hardware can begin to simulate reality. To tackle the complex geometries associated with medical applications, we present a general-purpose mesh generation scheme based upon the Delaunay tessellation algorithm and an iterative point generator. In addition, automatic two- and three-dimensional adaptive mesh refinement methods are presented that are derived from local and global estimates of the finite element error. Mesh generation and adaptive refinement techniques are utilized to obtain accurate approximations of bioelectric fields within anatomically correct models of the heart and human thorax. Specifically, we explore the simulation of cardiac defibrillation and the general forward and inverse problems in electrocardiography (ECG). Comparisons between uniform and adaptive refinement techniques are made to highlight the computational efficiency and accuracy of adaptive methods in the solution of field problems in computational medicine.
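
    As a schematic of the Delaunay-plus-iterative-point-generation idea (the error indicator below is a toy function standing in for a finite-element error estimate, and the tolerance is arbitrary):

```python
import numpy as np
from scipy.spatial import Delaunay

def refine(points, err, tol=1e-2, rounds=3):
    """Insert the centroid of every triangle whose estimated error exceeds
    tol, then re-tessellate; iterate a few rounds. This loosely mirrors a
    Delaunay tessellation driven by an iterative point generator."""
    for _ in range(rounds):
        tri = Delaunay(points)
        centroids = points[tri.simplices].mean(axis=1)
        bad = np.array([err(c) > tol for c in centroids])
        if not bad.any():
            break
        points = np.vstack([points, centroids[bad]])
    return Delaunay(points)

# toy error indicator peaked near (0.5, 0.5), standing in for a local
# finite-element error estimate of the bioelectric field solution
pts = np.random.rand(30, 2)
mesh = refine(pts, lambda c: np.exp(-10 * np.linalg.norm(c - 0.5)))
```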

  9. An object-based classification method for automatic detection of lunar impact craters from topographic data

    Science.gov (United States)

    Vamshi, Gasiganti T.; Martha, Tapas R.; Vinod Kumar, K.

    2016-05-01

    Identification of impact craters is a primary requirement for studying past geological processes such as impact history. They are also used as proxies for measuring relative ages of various planetary or satellite bodies and help in understanding the evolution of planetary surfaces. In this paper, we present a new method using the object-based image analysis (OBIA) technique to detect impact craters of a wide range of sizes from topographic data. Multiresolution image segmentation of digital terrain models (DTMs) available from NASA's LRO mission was carried out to create objects. Subsequently, objects were classified into impact craters using shape and morphometric criteria, resulting in 95% detection accuracy. The methodology, developed in a training area in parts of Mare Imbrium in the form of a knowledge-based ruleset, detected impact craters with 90% accuracy when applied to another area. The minimum and maximum sizes (diameters) of impact craters detected in parts of Mare Imbrium by our method are 29 m and 1.5 km, respectively. Diameters of automatically detected impact craters show good correlation (R2 > 0.85) with the diameters of manually detected impact craters.

  10. BMAA extraction of cyanobacteria samples: which method to choose?

    Science.gov (United States)

    Lage, Sandra; Burian, Alfred; Rasmussen, Ulla; Costa, Pedro Reis; Annadotter, Heléne; Godhe, Anna; Rydberg, Sara

    2016-01-01

    β-N-Methylamino-L-alanine (BMAA), a neurotoxin reportedly produced by cyanobacteria, diatoms and dinoflagellates, is proposed to be linked to the development of neurological diseases. BMAA has been found in aquatic and terrestrial ecosystems worldwide, both in its phytoplankton producers and in several invertebrate and vertebrate organisms that bioaccumulate it. LC-MS/MS is the most frequently used analytical technique in BMAA research due to its high selectivity, though consensus is lacking as to the best extraction method to apply. This study accordingly surveys the efficiency of three extraction methods regularly used in BMAA research to extract BMAA from cyanobacteria samples. The results obtained provide insights into possible reasons for the BMAA concentration discrepancies in previous publications. In addition, in accordance with the method validation guidelines for analysing cyanotoxins, the TCA protein precipitation method, followed by AQC derivatization and LC-MS/MS analysis, is now validated for extracting protein-bound (after protein hydrolysis) and free BMAA from the cyanobacteria matrix. BMAA biological variability was also tested through the extraction of diatom and cyanobacteria species, revealing a high variance in BMAA levels (0.0080-2.5797 μg g⁻¹ DW).

  11. Influence of Extraction Methods on the Yield of Steviol Glycosides and Antioxidants in Stevia rebaudiana Extracts.

    Science.gov (United States)

    Periche, Angela; Castelló, Maria Luisa; Heredia, Ana; Escriche, Isabel

    2015-06-01

    This study evaluated the application of ultrasound techniques and microwave energy, compared to conventional extraction methods (high temperatures at atmospheric pressure), for the solid-liquid extraction of steviol glycosides (sweeteners) and antioxidants (total phenols, flavonoids and antioxidant capacity) from dehydrated Stevia leaves. Different temperatures (from 50 to 100 °C), times (from 1 to 40 min) and microwave powers (1.98 and 3.30 W/g extract) were used. There were great differences in the resulting yields according to the treatments applied. Steviol glycosides and antioxidants were negatively correlated; therefore, there is no single treatment suitable for obtaining the highest yield of both groups of compounds simultaneously. The greatest yield of steviol glycosides was obtained with microwave energy (3.30 W/g extract, 2 min), whereas the conventional method (90 °C, 1 min) was the most suitable for antioxidant extraction. Consequently, the best process depends on the subsequent use (sweetener or antioxidant) of the aqueous extract of Stevia leaves.

  13. Microscale extraction method for HPLC carotenoid analysis in vegetable matrices

    Directory of Open Access Journals (Sweden)

    Sidney Pacheco

    2014-10-01

    In order to generate simple, efficient analytical methods that are also fast, clean and economical, and capable of producing reliable results for a large number of samples, a microscale extraction method for the analysis of carotenoids in vegetable matrices was developed. The efficiency of this adapted method was checked by comparing the results obtained from vegetable matrices in terms of extraction equivalence, time required and reagents. Six matrices were used: tomato (Solanum lycopersicum L.), carrot (Daucus carota L.), orange-fleshed sweet potato (Ipomoea batatas (L.) Lam.), pumpkin (Cucurbita moschata Duch.), watermelon (Citrullus lanatus (Thunb.) Matsum. & Nakai) and sweet potato (Ipomoea batatas (L.) Lam.) flour. Quantification of the total carotenoids was made by spectrophotometry. Quantification and determination of carotenoid profiles were performed by high-performance liquid chromatography with photodiode array detection. Microscale extraction was faster, cheaper and cleaner than the commonly used method, and advantageous for analytical laboratories.

  14. THE METHODS OF EXTRACTING WATER INFORMATION FROM SPOT IMAGE

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Some techniques and methods for deriving water information from SPOT-4 (XI) imagery were investigated and discussed in this paper. An algorithm of decision-tree (DT) classification, which includes several classifiers based on the spectral response characteristics of water bodies and other objects, was developed and put forward to delineate water bodies. Another decision-tree classification algorithm, based on both spectral characteristics and auxiliary information from DEM and slope (DTDS), was also designed for water body extraction. In addition, the supervised maximum-likelihood classification (MLC) method and the unsupervised iterative self-organizing data analysis technique (ISODATA) were used to extract water bodies for comparison purposes. An index was designed and used to assess the accuracy of the different methods adopted in the research. Results have shown that water extraction accuracy varied with the techniques applied: it was low using ISODATA, very high using the DT algorithm, and higher still using both DTDS and MLC.

  15. METHOD TO EXTRACT BLEND SURFACE FEATURE IN REVERSE ENGINEERING

    Institute of Scientific and Technical Information of China (English)

    Lü Zhen; Ke Yinglin; Sun Qing; Kelvin W; Huang Xiaoping

    2003-01-01

    A new method for the extraction of blend surface features is presented. It contains two steps: segmentation and recovery of the parametric representation of the blend. The segmentation separates the points in the blend region from the rest of the input point cloud through sampling of point data, estimation of local surface curvature properties and comparison of maximum curvature values. The recovery of the parametric representation generates a set of profile curves by marching throughout the blend and fitting cylinders. Compared with existing approaches to blend surface feature extraction, the proposed method reduces the requirement for user interaction and is capable of extracting blend surfaces with either constant or variable radius. Application examples are presented to verify the proposed method.

  16. Spectrophotometric validation of assay method for selected medicinal plant extracts

    Directory of Open Access Journals (Sweden)

    Matthew Arhewoh

    2014-09-01

    Objective: To develop UV spectrophotometric assay validation methods for some selected medicinal plant extracts. Methods: Dried, powdered leaves of Annona muricata (AM) and Andrographis paniculata (AP), as well as seeds of Garcinia kola (GK) and Hunteria umbellata (HU), were separately subjected to maceration using distilled water. Different concentrations of the extracts were scanned spectrophotometrically to obtain the wavelengths of maximum absorbance. The different extracts were then subjected to validation studies following international guidelines at the respective wavelengths obtained. Results: The results showed linearity at peak wavelengths of maximum absorbance of 292, 280, 274 and 230 nm for GK, HU, AM and AP, respectively. The calibration curves for the different concentrations of the extracts gave R2 values ranging from 0.9831 for AM to 0.9996 for AP. The inter-day and intra-day precision study showed that the relative standard deviation (%) was ≤ 10% for all the extracts. Conclusion: The aqueous extracts and isolates of these plants can be assayed and monitored using these wavelengths.

  17. Analysis of medicinal plant extracts by neutron activation method

    International Nuclear Information System (INIS)

    This dissertation presents the results of the analysis of medicinal plant extracts using the neutron activation method. Instrumental neutron activation analysis was applied to the determination of the elements Al, Br, Ca, Ce, Cl, Cr, Cs, Fe, K, La, Mg, Mn, Na, Rb, Sb, Sc and Zn in medicinal extracts obtained from Achyrocline satureioides DC, Casearia sylvestris, Centella asiatica, Citrus aurantium L., Solanum lycocarpum, Solidago microglossa, Stryphnodendron barbatiman and Zingiber officinale R. plants. The elements Hg and Se were determined using radiochemical separation, by means of retention of Se on an HMD inorganic exchanger and solvent extraction of Hg with a bismuth diethyldithiocarbamate solution. Precision and accuracy of the results were evaluated by analysing reference materials. The therapeutic action of some elements found in the plant extracts analyzed is briefly discussed.

  18. Automatic method for synchronizing workpiece frames in twin-robot nondestructive testing system

    Science.gov (United States)

    Lu, Zongxing; Xu, Chunguang; Pan, Qinxue; Meng, Fanwu; Li, Xinliang

    2015-07-01

    The workpiece frames relative to each robot base frame should be known in advance for the proper operation of a twin-robot nondestructive testing system. However, when the two robots are separated from the workpieces, they cannot reach the same point to complete the process of workpiece frame positioning. Thus, a new method is proposed to solve the problem of coincidence between workpiece frames. The transformation between the two robot base frames is obtained by measuring the coordinate values of three non-collinear calibration points. The relationship between the workpiece frame and the slave robot base frame is then determined according to the known transformation between the two robot base frames, as well as the relationship between the workpiece frame and the master robot base frame. Only one robot is required to actually measure the coordinate values of the calibration points on the workpiece. This is beneficial when one of the robots cannot reach and measure the calibration points. The coordinate values of the calibration points are derived by driving the robot hand to the points and recording the values of the tool center point (TCP) coordinates. The translation and rotation matrices relate either the two robot base frames, or the workpiece and the master robot; the coordinates are solved using the measured values of the calibration points according to the Cartesian transformation principle. An optimal method is developed, based on the exponential mapping of Lie algebra, to ensure that the rotation matrix is orthogonal. Experimental results show that this method involves fewer steps and offers significant advantages in terms of operation and time saving. A method to synchronize workpiece frames in a twin-robot system automatically is thus presented.
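
    The core computation, a rigid transform between the two base frames from three or more non-collinear calibration points, can be sketched with the SVD-based Kabsch solution, which guarantees an orthogonal rotation, the same property the paper secures via its Lie-algebra exponential-map optimisation. This is a generic sketch, not the authors' code:

```python
import numpy as np

def base_frame_transform(p_master, p_slave):
    """Rigid transform with p_slave ~ R @ p_master + t, estimated from three
    (or more) non-collinear calibration points expressed in both robot base
    frames (rows of the (N, 3) input arrays)."""
    cm, cs = p_master.mean(axis=0), p_slave.mean(axis=0)
    H = (p_master - cm).T @ (p_slave - cs)      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # reflection guard keeps det(R) = +1, i.e. a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cs - R @ cm
    return R, t
```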

  20. A HYBRID METHOD FOR AUTOMATIC SPEECH RECOGNITION PERFORMANCE IMPROVEMENT IN REAL WORLD NOISY ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Urmila Shrawankar

    2013-01-01

    It is a well-known fact that speech recognition systems perform well when used in conditions similar to those used to train the acoustic models; mismatches degrade the performance. In adverse environments it is very difficult to predict the category of noise in advance in the case of real-world environmental noise, and difficult to achieve environmental robustness. After a rigorous experimental study, it was observed that no single method is available that will clean noisy speech corrupted by real, natural environmental (mixed) noise while also preserving its quality. It was also observed that back-end techniques alone are not sufficient to improve the performance of a speech recognition system: it is necessary to implement performance improvement techniques at every step of the back-end as well as the front-end of the Automatic Speech Recognition (ASR) model. Current recognition systems address this problem using a technique called adaptation. This study presents an experimental study with two aims. The first is to implement a hybrid method that cleans the speech signal as much as possible using combinations of filters and enhancement techniques. The second is to develop a method for training all categories of noise that can adapt the acoustic models to a new environment, helping to improve the performance of the speech recognizer under real-world mismatched conditions. The experiments confirm that hybrid adaptation methods improve ASR performance at both levels: signal-to-noise ratio (SNR) as well as word recognition accuracy in real-world noisy environments.

  1. Effect of Extraction Method on the Phenolic and Cyanogenic Glucoside Profile of Flaxseed Extracts and their Antioxidant Capacity

    OpenAIRE

    Waszkowiak, Katarzyna; Gliszczyńska-Świgło, Anna; Barthet, Veronique; Skręty, Joanna

    2015-01-01

    The application of flaxseed extracts as food ingredients is a subject of interest to food technologists and nutritionists. Therefore, the influence of the extraction method on the content and composition of beneficial compounds, as well as anti-nutrients, is important. In this study, the effects of two solvent extraction methods, aqueous and 60 % ethanolic, on the phenolic and cyanogenic glucoside profiles of flaxseed extract were determined and compared. The impact of extracted phenolic compounds o...

  2. Optimization of the Phenol-Chloroform Silica Method in Ancient Bone DNA Extraction

    Directory of Open Access Journals (Sweden)

    Morteza Sadeghi

    2014-04-01

    Introduction: DNA extraction from ancient bone tissue is currently very difficult, and the phenol-chloroform silica method is one of the methods used for this aim. The purpose of this study was to optimize this method. Methods: DNA from 62 bone tissue samples (3-11 years old on average) was first extracted with the phenol-chloroform silica method; then, after changing some parameters of the method, the extracted DNA was amplified at eight polymorphic regions, including FES, F13, D13S317, D16, D5S818, vWA and CD4. Results from samples obtained by the two methods were compared on acrylamide gel. Results: The average PCR yield of the new method and the common method at the eight polymorphic regions was 75%, 78%, 81%, 76%, 85%, 71%, 89%, 86% and 64%, 39%, 70%, 49%, 68%, 76%, 71% and 28%, respectively. The average DNA yield of the optimized method (in 35l silica density) and the common method was 267.5 µg/ml with 1.12 purity and 192.76 µg/ml with 0.84 purity, respectively. Conclusions: According to the findings of this study, it is estimated that longer EDTA incubation is an efficient agent in removing calcium, and that an adequate density of silica particles can be efficient in the removal of PCR inhibitors.

  3. Comparison of DNA and RNA extraction methods for mummified tissues.

    Science.gov (United States)

    Konomi, Nami; Lebwohl, Eve; Zhang, David

    2002-12-01

    Nucleic acids extracted from mummified tissues are valuable materials for the study of ancient human beings. Significant difficulty in extracting nucleic acids from mummified tissues has been reported, due to chemical modification and degradation. The goal of this study was to determine which method is more efficient for DNA and RNA extraction from mummified tissues. Twelve mummy specimens were analyzed with 9 different nucleic acid extraction methods, including guanidinium thiocyanate (GTC) and proteinase K/detergent based methods, prepared in our laboratory or purchased. Glyceraldehyde 3-phosphate dehydrogenase DNA and beta-actin RNA were used as markers for the presence of DNA and RNA, respectively, adequate for PCR and RT-PCR amplification. Our results show that 5 M GTC is more efficient at releasing nucleic acids from mummified tissue than proteinase K/detergent, and that phenol/chloroform extraction with an additional chloroform step is more efficient than phenol/chloroform alone. We were able to isolate DNA from all 12 specimens and RNA from 8 of 12 specimens, and the nucleic acids were sufficient for PCR and RT-PCR analysis. We further tested for hepatitis viruses, including hepatitis B virus, hepatitis C virus, hepatitis G virus, and TT virus DNA, and failed to detect these viruses in all 12 specimens.

  4. Correction method for line extraction in vision measurement.

    Directory of Open Access Journals (Sweden)

    Mingwei Shao

    Over-exposure and perspective distortion are two of the main factors underlying inaccurate feature extraction. First, based on Steger's method, we propose a method for correcting curvilinear structures (lines) extracted from over-exposed images. A new line model based on the Gaussian line profile is developed, and its description in the scale space is provided. The line position is analytically determined by the zero crossing of its first-order derivative, and the bias due to convolution with the normal Gaussian kernel function is eliminated on the basis of the related description. The model accounts for over-exposure features and is capable of detecting the line position in an over-exposed image. Simulations and experiments show that the proposed method is not significantly affected by the exposure level and is suitable for correcting lines extracted from an over-exposed image. In our experiments, the corrected result was more precise than the uncorrected result by around 45.5%. Second, we analyze perspective distortion, which is inevitable during line extraction owing to the projective camera model. The perspective distortion can be rectified on the basis of the bias, expressed as a function of the related parameters. The properties of the proposed model and its application to vision measurement are discussed. In practice, the proposed model can be adopted to correct line extraction according to specific requirements by employing suitable parameters.
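
    The underlying Steger-style localisation, in which the line centre is the zero crossing of the first derivative in scale space, can be illustrated on a 1-D profile taken across the line. This is a generic sketch; the paper's over-exposure bias correction is not reproduced:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def line_center_subpixel(profile, sigma=2.0):
    """Sub-pixel line position on a 1-D intensity profile across the line:
    locate the strongest curvature (second Gaussian derivative), then take
    one Newton step toward the zero crossing of the first derivative."""
    p = np.asarray(profile, float)
    d1 = gaussian_filter1d(p, sigma, order=1)
    d2 = gaussian_filter1d(p, sigma, order=2)
    i = np.argmax(np.abs(d2))           # ridge/valley candidate pixel
    return i - d1[i] / d2[i]            # x* where d1(x*) = 0
```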

  5. Automatic leveling procedure by use of the spring method in measurement of three-dimensional surface roughness

    Science.gov (United States)

    Kurokawa, Syuhei; Ariura, Yasutsune; Yamamoto, Tatsuyuki

    2008-12-01

    Leveling of specimen surfaces is very important in the measurement of surface roughness. If the surface is not leveled, the measured roughness is strongly distorted and the vertical measurement range is reduced. It is convenient to use an automatic leveling procedure instead of manual leveling, which needs a longer adjustment time. For automatic leveling, a new algorithm is proposed, named the spring method, which is superior to the least-squares method. The spring method has the advantage that only a subset of tentative data points is used to calculate the surface inclination, so the obtained results are less influenced by, for example, local pits. As examples, the spring method was applied to actual engineered surfaces (milled, shot-peened, and ground surfaces) and to an artificially ditched surface. The method computed the surface inclinations well, and consequently the specimen surfaces were leveled with less distortion, so that a large vertical measurement range could be achieved. It is also found that the least-squares method is a special case of the spring method in which all sampling data points are used. That means the spring method is a comprehensive procedure that includes the least-squares method, and it should prove a very strong and robust automatic leveling algorithm.
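
    The least-squares special case mentioned above, and the subset idea behind the spring method, can be sketched as a plane fit. The mask-based point selection here is a simplification of the actual spring algorithm:

```python
import numpy as np

def level_surface(z, use_mask=None):
    """Remove tilt by a least-squares plane fit z ~ a*x + b*y + c.
    Passing use_mask restricts the fit to a subset of tentative points
    (so pits get less influence, the spring method's key idea); with
    use_mask=None this reduces to ordinary least-squares levelling."""
    ny, nx = z.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    m = np.ones_like(z, dtype=bool) if use_mask is None else use_mask
    A = np.column_stack([x[m], y[m], np.ones(m.sum())])
    coef, *_ = np.linalg.lstsq(A, z[m], rcond=None)
    return z - (coef[0] * x + coef[1] * y + coef[2])
```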

  6. Self-organizing criticality and the method of automatic search of critical points

    International Nuclear Information System (INIS)

    We discuss the method of automatic search of critical points (MASCP) in the context of self-organizing criticality (SOC). The system analyzed is a contact process that presents a non-equilibrium phase transition between two states: an active state and an inactive state (the so-called absorbing state). The lattice sites represent infected and healthy individuals. We apply the MASCP technique to the propagation of an epidemic on a one-dimensional lattice at criticality (space domain), and use it to study SOC behavior. The time series of the density of infected individuals is analyzed using two complementary tools: Fourier analysis and detrended fluctuation analysis. We find numerical evidence that the time evolution that drives the system to the critical point in MASCP is not a SOC problem, but Gaussian noise. A SOC problem is characterized by an interaction-dominated system that goes spontaneously to the critical point. In fact, MASCP goes by itself to a stationary point, but it is not an interaction-dominated process; rather, it is a mean-field interaction process.
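
    Detrended fluctuation analysis, one of the two tools applied to the infected-density time series, can be sketched as follows. This is generic first-order DFA with arbitrary window sizes (the series must be at least as long as the largest scale), not the authors' implementation:

```python
import numpy as np

def dfa(x, scales=(16, 32, 64, 128, 256)):
    """First-order detrended fluctuation analysis: fluctuation F(n) versus
    window size n. The log-log slope alpha distinguishes uncorrelated
    (Gaussian) noise, alpha ~ 0.5, from long-range correlated dynamics."""
    y = np.cumsum(np.asarray(x, float) - np.mean(x))    # integrated profile
    F = []
    for n in scales:
        segs = len(y) // n
        resid = 0.0
        for s in range(segs):
            w = y[s * n:(s + 1) * n]
            t = np.arange(n)
            c = np.polyfit(t, w, 1)                     # local linear trend
            resid += np.sum((w - np.polyval(c, t)) ** 2)
        F.append(np.sqrt(resid / (segs * n)))
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    return np.array(F), alpha
```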

  7. Photoplethysmography-Based Method for Automatic Detection of Premature Ventricular Contractions.

    Science.gov (United States)

    Solosenko, Andrius; Petrenas, Andrius; Marozas, Vaidotas

    2015-10-01

    This work introduces a method for the detection of premature ventricular contractions (PVCs) in the photoplethysmogram (PPG). The method relies on 6 features characterising PPG pulse power and peak-to-peak intervals. A sliding-window approach is applied to extract the features, which are then normalized with respect to an estimated heart rate. An artificial neural network with either linear or non-linear outputs was investigated as the feature classifier. PhysioNet databases, namely the MIMIC II and the MIMIC, were used for training and testing, respectively. After annotating the PPGs with respect to a synchronously recorded electrocardiogram, two main types of PVCs were distinguished: with and without an observable PPG pulse. The obtained sensitivity and specificity values for the two PVC types were 92.4/99.9% and 93.2/99.9%, respectively. The achieved high classification results form a basis for reliable PVC detection using a less obtrusive approach than electrocardiography-based detection methods.

  8. SU-E-I-24: Method for CT Automatic Exposure Control Verification

    Energy Technology Data Exchange (ETDEWEB)

    Gracia, M; Olasolo, J; Martin, M; Bragado, L; Gallardo, N; Miquelez, S; Maneru, F; Lozares, S; Pellejero, S; Rubio, A [Complejo Hospitalario de Navarra, Pamplona, Navarra (Spain)

    2015-06-15

    Purpose: Design of a phantom and a simple method for automatic exposure control (AEC) verification in CT. This verification is included in the Spanish computed tomography (CT) Quality Assurance Protocol. Methods: The phantom is assembled from the head and body phantoms used for CTDI measurement, plus PMMA plates (35×35 cm2) of 10 cm thickness. Thereby, three different thicknesses along the longitudinal axis are obtained, which permit evaluation of the longitudinal AEC performance. In addition, the existing asymmetry in the PMMA layers helps to assess angular and 3D AEC operation. The recent acquisition in our hospital (August 2014) of a Nomex electrometer (PTW), together with a 10 cm pencil ionization chamber, allowed dose rate to be recorded as a function of time. Measurements with this chamber fixed at 0° and 90° on the gantry were made on five multidetector CTs from the principal manufacturers. Results: Individual analysis of the measurements shows dose-rate variation as a function of phantom thickness. The comparative analysis shows that the dose rate is kept constant in the head-and-neck phantom, while the PMMA phantom exhibits an abrupt variation between both results, with greater values at 90°, as the thickness of the phantom in that direction is 3.5 times larger than in the perpendicular one. Conclusion: The proposed method is simple, quick and reproducible. The results obtained allow a qualitative evaluation of the AEC and are consistent with the expected behavior. A line of future development is to study quantitatively the intensity modulation and image quality parameters, and possibly to carry out a comparative study between different manufacturers.

  9. Automatic diagnosis for prostate cancer using run-length matrix method

    Science.gov (United States)

    Sun, Xiaoyan; Chuang, Shao-Hui; Li, Jiang; McKenzie, Frederic

    2009-02-01

    Prostate cancer is the most common type of cancer and the second leading cause of cancer death among men in the US. Quantitative assessment of prostate histology provides potential automatic classification of prostate lesions and prediction of response to therapy. Traditionally, prostate cancer diagnosis is made by the analysis of prostate-specific antigen (PSA) levels and histopathological images of biopsy samples under the microscope. In this application, we utilize a texture analysis method based on the run-length matrix for identifying tissue abnormalities in prostate histology. A tissue sample was collected from a radical prostatectomy, H&E fixed, and assessed by a pathologist as normal tissue or prostatic carcinoma (PCa). The sample was subsequently digitized at 50X magnification. We divided the digitized image into sub-regions of 20 × 20 pixels and classified each sub-region as normal or PCa by a texture analysis method. In the texture analysis, we computed texture features for each sub-region based on the gray-level run-length matrix (GL-RLM). Those features include LGRE, HGRE and RPC from the run-length matrix, plus the mean and standard deviation of the pixel intensity. We utilized a feature selection algorithm to select a set of effective features and used a multi-layer perceptron (MLP) classifier to distinguish normal from PCa. In total, the whole histological image was divided into 42 PCa and 6280 normal regions. Three-fold cross-validation results show that the proposed method achieves an average classification accuracy of 89.5%, with a sensitivity and specificity of 90.48% and 89.49%, respectively.
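
    Two of the run-length features named above, LGRE and HGRE, can be computed directly from horizontal runs. A minimal sketch follows; the quantization into 16 gray levels is an assumption, not the paper's setting:

```python
import numpy as np

def glrlm_features(img, levels=16):
    """Row-wise gray-level run-length statistics: low and high gray-level
    run emphasis (LGRE, HGRE), two of the GL-RLM texture features."""
    q = np.floor(img.astype(float) / img.max() * (levels - 1)).astype(int)
    runs = []                                   # (gray level, run length)
    for row in q:
        start = 0
        for j in range(1, len(row) + 1):
            if j == len(row) or row[j] != row[start]:
                runs.append((row[start] + 1, j - start))   # 1-based level
                start = j
    g = np.array([lvl for lvl, _ in runs], float)
    n_runs = len(runs)
    lgre = np.sum(1.0 / g**2) / n_runs          # emphasizes dark runs
    hgre = np.sum(g**2) / n_runs                # emphasizes bright runs
    return lgre, hgre

# e.g. on one random 20x20 sub-region:
lgre, hgre = glrlm_features(np.random.randint(0, 256, (20, 20)))
```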

  10. Comparison of RNA extraction methods in Thai aromatic coconut water

    Directory of Open Access Journals (Sweden)

    Nopporn Jaroonchon

    2015-10-01

    Many studies have reported that the nucleic acid in coconut water is in free form and at very low yields, which makes it difficult to process in molecular studies. Our research attempted to compare two extraction methods to obtain a higher yield of total RNA from aromatic coconut water, and to monitor its change at various fruit stages. The first method used ethanol and sodium acetate as reagents; the second method used lithium chloride. We found that extraction using only lithium chloride gave a higher total RNA yield than the method using ethanol to precipitate the nucleic acid. In addition, the total RNA from both methods could be used in the amplification of the betaine aldehyde dehydrogenase 2 (Badh2) gene, which is involved in coconut aroma biosynthesis, and could be used for further studies as we expected. From the molecular study, the nucleic acid found in coconut water increased with fruit age.

  11. Automatic Mapping Extraction from Multiecho T2-Star Weighted Magnetic Resonance Images for Improving Morphological Evaluations in Human Brain

    Directory of Open Access Journals (Sweden)

    Shaode Yu

    2013-01-01

    Mapping extraction is useful in medical image analysis. Similarity coefficient mapping (SCM) replaced the signal response to the time course in tissue similarity mapping with the signal response to TE changes in multiecho T2-star weighted magnetic resonance imaging without contrast agent. Since different tissues have different sensitivities to the reference signal, a new algorithm is proposed that adds a sensitivity index to SCM. It generates two mappings: one measures relative signal strength (SSM) and the other depicts fluctuation magnitude (FMM). Meanwhile, the new method adaptively generates a proper reference signal by maximizing the sum of the contrast index (CI) from SSM and FMM, without manual delineation. Based on four groups of images from multiecho T2-star weighted magnetic resonance imaging, the capacity of SSM and FMM to enhance image contrast and morphological evaluation is validated. The average contrast improvement index (CII) of SSM is 1.57, 1.38, 1.34, and 1.41; the average CII of FMM is 2.42, 2.30, 2.24, and 2.35. Visual analysis of regions of interest demonstrates that SSM and FMM show better morphological structure than the original images, T2-star mapping and SCM. These extracted mappings can be further applied in information fusion, signal investigation, and tissue segmentation.

  12. A semi-automatic method to determine electrode positions and labels from gel artifacts in EEG/fMRI-studies.

    Science.gov (United States)

    de Munck, Jan C; van Houdt, Petra J; Verdaasdonk, Ruud M; Ossenblok, Pauly P W

    2012-01-01

    The analysis of simultaneous EEG and fMRI data is generally based on the extraction of regressors of interest from the EEG, which are correlated to the fMRI data in a general linear model setting. In more advanced approaches, the spatial information of the EEG is also exploited by assuming underlying dipole models. In this study, we present a semi-automatic and efficient method to determine electrode positions from electrode gel artifacts, facilitating the integration of EEG and fMRI in future EEG/fMRI data models. In order to visualize all electrode artifacts simultaneously in a single view, a surface rendering of the structural MRI is made using a skin triangular mesh model as the reference surface, which is expanded to a "pancake view". The electrodes are then marked with a simple mouse click for each electrode. Using the geometry of the skin surface and its transformation to the pancake view, the 3D coordinates of the electrodes are reconstructed in the MRI coordinate frame. The electrode labels are attached to the electrode positions by fitting a template grid of the electrode cap in which the labels are known. The correspondence problem between template and sample electrodes is solved by minimizing a cost function over rotations, shifts and scalings of the template grid. The crucial step here is to use the solution of the so-called "Hungarian algorithm" as the cost function, which makes it possible to identify the electrode artifacts in arbitrary order. The template electrode grid has to be constructed only once for each cap configuration. In our implementation of this method, the whole procedure can be performed within 15 min, including import of the MRI, surface reconstruction and transformation, electrode identification and fitting to the template. The method is robust in the sense that an electrode template created for one subject can be used without identification errors for another subject for whom the same EEG cap was used. Furthermore, the method appears to be
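
    The assignment step, matching clicked electrode artifacts to template labels in arbitrary order, is exactly what scipy's Hungarian-algorithm solver provides. The sketch below covers only that inner step; the outer optimisation over rotations, shifts and scalings of the template is omitted:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def label_electrodes(detected_xy, template_xy, labels):
    """Match detected gel-artifact positions to template electrodes by
    minimum total distance. The optimal 1-to-1 assignment lets artifacts be
    identified in arbitrary order; the returned total cost is the quantity
    the outer template fit would minimize."""
    cost = np.linalg.norm(detected_xy[:, None, :] - template_xy[None, :, :],
                          axis=2)
    rows, cols = linear_sum_assignment(cost)    # Hungarian algorithm
    assignment = {int(r): labels[c] for r, c in zip(rows, cols)}
    return assignment, cost[rows, cols].sum()
```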

  13. Calculation of radon concentration in water by toluene extraction method

    Energy Technology Data Exchange (ETDEWEB)

    Saito, Masaaki [Tokyo Metropolitan Isotope Research Center (Japan)

    1997-02-01

    The Noguchi method and the Horiuchi method have been used to calculate the radon concentration in water. Both methods have two problems in their original form: the calculated concentration changes with the extraction temperature, owing to incorrect solubility data, and the calculated concentrations are smaller than the correct values because the radon calculation equation does not conform to gas-liquid equilibrium theory. However, the two problems are solved by improving the radon equation. I present the Noguchi-Saito equation and the constant B of the Horiuchi-Saito equation. The results calculated by the improved method showed about 10% error. (S.Y.)

  14. Development of 99mTc extraction-recovery by solvent extraction method

    International Nuclear Information System (INIS)

    99mTc is used as a radiopharmaceutical in the medical field for diagnosis, and is produced from 99Mo, its parent nuclide. In this study, solvent extraction with MEK was selected, and preliminary experiments were carried out using Re instead of 99mTc. Two tests were carried out: one was a Re extraction test with MEK from a Re-Mo solution; the other was a Re recovery test from the Re-MEK. As to the Re extraction test, it was clear that the Re extraction yield was more than 90%. Two kinds of Re recovery tests were carried out: an evaporation method using an evaporator, and an adsorption/elution method using an alumina column. With the evaporation method, the Re concentration in the collected solution increased more than 150 times. With the adsorption/elution method, the Re concentration in the eluted solution increased more than 20 times. (author)

  15. An automatic Planetary Boundary Layer height retrieval method with compact EZ backscattering Lidar

    Science.gov (United States)

    Loaec, S.; Sauvage, L.; Boquet, M.; Lolli, S.; Rouget, V.

    2009-09-01

    Large, strongly urbanized cities around the world are often exposed to atmospheric pollution events. To understand the chemical and physical processes taking place in these areas, it is necessary to describe correctly the Planetary Boundary Layer (PBL) dynamics and the evolution of the PBL height. For these purposes, a compact and rugged eye-safe UV lidar, the EZLIDAR™, was developed jointly by CEA/LMD and LEOSPHERE (France) to study and investigate the structural and optical properties of clouds and aerosols and the PBL time evolution. EZLIDAR™ has been validated against different remote and in-situ instruments, such as the MPL Type-4 lidar manufactured by NASA at the ARM/SGP site and the LNA (Lidar Nuage Aerosol) at the Laboratoire de Meteorologie Dynamique LMD (France), and during several intercomparison campaigns. The EZLIDAR™ algorithm retrieves the PBL height automatically in real time. The method is based on the detection of the slope of the signal, linked to a sharp change in the aerosol concentration. Once detected, the different layers are filtered over a 15-min sample and classified as nocturnal, convective or residual layers, depending on the time and date. This method has been validated against retrievals by the STRAT algorithm from data acquired at IPSL, France, showing 95% correlation. In this paper, the results are presented of the intercomparison campaign that took place in Orleans, France and Mace Head, Ireland in the framework of the ICOS (Integrated Carbon Observation System) project, where the EZ Lidar™ worked under all weather conditions (clear sky, fog, low clouds) during the whole month of October 2008. Moreover, thanks to its 3D scanning capability, the EZLIDAR was able to provide the variability of the PBL height around the site, enabling scientists to estimate the flux intensities that play a key role in the radiative transfer budget and in the dispersion of atmospheric pollutants.
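
    The slope criterion described above amounts to locating the sharpest decrease of the backscatter profile with height. A minimal sketch of that gradient step follows; it is not LEOSPHERE's algorithm, which adds 15-min filtering and layer classification:

```python
import numpy as np

def pbl_height(z, signal):
    """Slope criterion sketch: take the PBL top at the height of the
    strongest negative gradient of the (range-corrected) backscatter
    profile. z (m) and signal are 1-D arrays of equal length."""
    return z[np.argmin(np.gradient(signal, z))]
```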

  16. Extraction Methods of Spanish Broom (Spartium junceum L.)

    Directory of Open Access Journals (Sweden)

    Drago Katović

    2011-12-01

    The effects of different extraction methods on Spanish Broom shoots were measured and compared, with the purpose of obtaining a composite material. The content of cellulose, lignin, pentosan and ash in the Spanish Broom fibers was determined, and SEM analyses were performed.

  17. Single corn kernel aflatoxin B1 extraction and analysis method

    Science.gov (United States)

    Aflatoxins are highly carcinogenic compounds produced by the fungus Aspergillus flavus, a phytopathogenic fungus that commonly infects crops such as cotton, peanuts, and maize. The goal was to design an effective sample preparation and analysis method for the extraction of afla...

  18. Facial Feature Extraction Method Based on Coefficients of Variances

    Institute of Scientific and Technical Information of China (English)

    Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang

    2007-01-01

    Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks. As a consequence, many LDA-based facial feature extraction techniques have been proposed to deal with the problem. The Nullspace Method is one of the most effective among them: it tries to find a set of discriminant vectors that maximize the between-class scatter in the null space of the within-class scatter matrix. The calculation of its discriminant vectors involves performing singular value decomposition on a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of the coefficient of variance from statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV), in this paper. Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.

  19. Spindle extraction method for ISAR image based on Radon transform

    Science.gov (United States)

    Wei, Xia; Zheng, Sheng; Zeng, Xiangyun; Zhu, Daoyuan; Xu, Gaogui

    2015-12-01

    In this paper, a method for extracting the spindle (major axis) of a target in an inverse synthetic aperture radar (ISAR) image is proposed, based on the Radon transform. First, the Radon transform is used to detect all straight lines that are collinear with line segments in the image. Then, a Sobel operator is used to detect the image contour. Finally, all intersections of each straight line with the image contour are found; the two intersections with the maximum distance between them are the two ends of that line segment, and the longest of all line segments is the spindle of the target. To evaluate the proposed spindle extraction method, one hundred simulated ISAR images, rotated counterclockwise by 0, 10, 20, 30 and 40 degrees, were used in experiments; the detection results are closer to the real spindle of the target than those of the method based on the Hough transform.
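
    The first step, detecting dominant straight lines with the Radon transform, can be sketched with scikit-image. This is a generic illustration, not the authors' implementation:

```python
import numpy as np
from skimage.transform import radon

def dominant_line(binary_img):
    """Peak of the Radon sinogram: the projection angle and radial offset
    at which the image mass is most concentrated, i.e. the strongest
    straight line in the (binary) ISAR image."""
    theta = np.arange(180.0)
    sino = radon(binary_img.astype(float), theta=theta)
    offset_idx, angle_idx = np.unravel_index(np.argmax(sino), sino.shape)
    return theta[angle_idx], offset_idx      # line angle (deg), radial bin
```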

  20. Fast Marching and Runge-Kutta Based Method for Centreline Extraction of Right Coronary Artery in Human Patients.

    Science.gov (United States)

    Cui, Hengfei; Wang, Desheng; Wan, Min; Zhang, Jun-Mei; Zhao, Xiaodan; Tan, Ru San; Huang, Weimin; Xiong, Wei; Duan, Yuping; Zhou, Jiayin; Luo, Tong; Kassab, Ghassan S; Zhong, Liang

    2016-06-01

    CT angiography (CTA) is a clinically indicated test for the assessment of coronary luminal stenosis that requires centerline extraction. There is currently no centerline extraction algorithm that is automatic, real-time and very accurate. Therefore, we sought to (i) develop a hybrid approach by incorporating fast marching and Runge-Kutta based methods for the extraction of coronary artery centerlines from CTA; (ii) evaluate the accuracy of the present method compared to Van's method, using ground truth centerlines as a reference; (iii) evaluate the coronary lumen area from our centerline method in comparison with intravascular ultrasound (IVUS) as the standard of reference. The proposed method was found to be more computationally efficient, and performed better than Van's method in terms of overlap measures (i.e., OV: [Formula: see text] vs. [Formula: see text]; OF: [Formula: see text] vs. [Formula: see text]; and OT: [Formula: see text] vs. [Formula: see text], all [Formula: see text]). In comparison with the IVUS-derived coronary lumen area, the proposed approach was more accurate than Van's method. This hybrid approach, incorporating fast marching and Runge-Kutta based methods, can offer fast and accurate extraction of the centerline as well as the lumen area, and may garner wider clinical potential as a real-time coronary stenosis assessment tool.
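
    A rough sketch of how fast marching and Runge-Kutta descent combine for centerline tracing, here in 2-D with the scikit-fmm library assumed for the travel-time computation. The seed/end handling and the speed map are simplifications, not the paper's method:

```python
import numpy as np
import skfmm
from scipy.ndimage import map_coordinates

def centerline(speed, seed, end, step=0.5, n_max=10000):
    """2-D sketch: fast-marching travel time T from the distal seed point,
    then 4th-order Runge-Kutta descent of grad(T) from the proximal end
    back to the seed, tracing an approximate centreline. `speed` should be
    large inside the vessel and small (but positive) outside."""
    phi = np.ones_like(speed)
    phi[seed] = -1                              # travel-time source
    T = np.asarray(skfmm.travel_time(phi, speed))
    gy, gx = np.gradient(T)

    def v(p):                                   # unit descent direction at p
        g = np.array([map_coordinates(gy, [[p[0]], [p[1]]], order=1)[0],
                      map_coordinates(gx, [[p[0]], [p[1]]], order=1)[0]])
        n = np.linalg.norm(g)
        return -g / n if n > 0 else g

    p = np.array(end, float)
    path = [p.copy()]
    upper = np.array(T.shape) - 1.0
    for _ in range(n_max):
        k1 = v(p)
        k2 = v(p + 0.5 * step * k1)
        k3 = v(p + 0.5 * step * k2)
        k4 = v(p + step * k3)                   # classic RK4 step
        p = np.clip(p + (step / 6.0) * (k1 + 2*k2 + 2*k3 + k4), 0, upper)
        path.append(p.copy())
        if T[tuple(np.round(p).astype(int))] < step:   # reached the seed
            break
    return np.array(path)
```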

  1. 10 CFR Appendix J to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes...

    Science.gov (United States)

    2010-01-01

    ... with an adaptive control system. Therefore, pursuant to 10 CFR 430.27, a waiver must be obtained to... adaptive control systems, must submit a petition for waiver pursuant to 10 CFR 430.27 to establish an... of Automatic and Semi-Automatic Clothes Washers J Appendix J to Subpart B of Part 430...

  2. Automatic road centerline extraction based on adaptive image segmentation

    Institute of Scientific and Technical Information of China (English)

    TANG Ruihua

    2015-01-01

    Roads are among the most important infrastructure captured in urban geospatial information, and automatic, fast road extraction from high-resolution imagery is an important way to update the urban road network quickly. Based on an analysis of the basic characteristics of roads, a morphological analysis algorithm with adaptive structuring elements is chosen to extract the initial road regions. Shape indices such as area and the length-to-width ratio (referred to as L/B) are then introduced to obtain more precise road information. Finally, the Hilditch thinning algorithm is applied and the result is optimized. The experiments indicate that the road extraction process requires no manually set parameters and yields road centerlines with high completeness and correctness.
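
    The shape filtering and thinning stages can be sketched with scikit-image; skeletonize() is used here as a stand-in for the Hilditch algorithm named above, and the area/elongation thresholds are placeholders:

```python
import numpy as np
from skimage.measure import label, regionprops
from skimage.morphology import skeletonize

def road_centerline(road_mask, min_area=500, min_elongation=4.0):
    """Keep segmented regions that look road-like (area and length-to-width
    constraints, cf. the L/B ratio above), then thin them to one-pixel
    centrelines."""
    keep = np.zeros_like(road_mask, dtype=bool)
    for r in regionprops(label(road_mask)):
        elongation = r.major_axis_length / max(r.minor_axis_length, 1.0)
        if r.area >= min_area and elongation >= min_elongation:
            keep[r.coords[:, 0], r.coords[:, 1]] = True
    return skeletonize(keep)
```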

  3. Research of Anti-Noise Image Salient Region Extraction Method

    Directory of Open Access Journals (Sweden)

    Bing XU

    2014-01-01

    Existing image salient region extraction technology is mostly suited to processing noise-free images, and there is a lack of studies on the impact of noise on images. In this study, an adaptive kernel function was employed in image salient region detection. The salient property of a region was determined by the dissimilarities between the pixels of the image region and its surroundings, where the dissimilarity was measured as a decreasing function associated with adaptive kernel regression. The proposed algorithm used a multi-scale fusion method to obtain the salient regions of the whole image. As the adaptive kernel function has strong anti-noise characteristics, the proposed algorithm inherits the same robustness. A numerical simulation experiment was conducted on salient region extraction from images with and without noise. A comparison between this study's results and two existing salient region extraction methods revealed that the proposed method was superior in its extraction accuracy of image salient regions and could reduce the interference of image noise.

  4. New Multipole Method for 3-D Capacitance Extraction

    Institute of Scientific and Technical Information of China (English)

    Zhao-Zhi Yang; Ze-Yi Wang

    2004-01-01

    This paper describes an efficient improvement of the multipole-accelerated boundary element method for 3-D capacitance extraction. The overall relations between the positions of 2-D boundary elements are considered, instead of only the relations between the center points of the elements, and a new method of cube partitioning is introduced. Numerical results are presented to demonstrate that the method is accurate and has nearly linear computational growth, O(n), where n is the number of panels/boundary elements. The proposed method is more accurate and much faster than FastCap.

  5. A wavelet based method for automatic detection of slow eye movements: a pilot study.

    Science.gov (United States)

    Magosso, Elisa; Provini, Federica; Montagna, Pasquale; Ursino, Mauro

    2006-11-01

    Electro-oculographic (EOG) activity during the wake-sleep transition is characterized by the appearance of slow eye movements (SEMs). The present work describes an algorithm for the automatic localisation of SEM events in EOG recordings. The algorithm is based on a wavelet multiresolution analysis of the difference between the right and left EOG tracings, and includes three main steps: (i) wavelet decomposition down to 10 detail levels (i.e., 10 scales), using the Daubechies order-4 wavelet; (ii) computation of energy in 0.5 s time steps at each level of decomposition; (iii) construction of a non-linear discriminant function expressing the relative energy of high-scale details with respect to both high- and low-scale details. The main assumption is that the value of the discriminant function increases above a given threshold during SEM episodes, due to energy redistribution toward higher scales. Ten EOG recordings from ten male patients with obstructive sleep apnea syndrome were used. All tracings included a period from pre-sleep wakefulness to stage 2 sleep. Two experts inspected the tracings separately to score SEMs, and a reference set of SEMs (gold standard) was obtained by joint examination by both experts. Parameters of the discriminant function were assigned on three tracings (design set) to minimize the disagreement between the system's classification and the classification by the two experts; the algorithm was then tested on the remaining seven tracings (test set). Results show that the agreement between the algorithm and the gold standard was 80.44+/-4.09%, the sensitivity of the algorithm was 67.2+/-7.37% and the selectivity 83.93+/-8.65%. However, most errors were not caused by an inability of the system to detect intervals with SEM activity against NON-SEM intervals, but were due to a different localisation of the beginning and end of some SEM episodes. The proposed method may be a valuable tool for computerized EOG analysis.
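
    A sketch of steps (i)-(iii) with the PyWavelets library; the scale split, sampling rate and normalisation details are assumptions, and the paper's exact discriminant is not reproduced:

```python
import numpy as np
import pywt

def sem_discriminant(eog_diff, fs=128, win=0.5, split=6, level=10):
    """Score each 0.5-s step by the share of high-scale (slow) detail
    energy in a db4 decomposition of the right-minus-left EOG; values
    rising above a threshold flag candidate SEM episodes. `split` is the
    number of coarsest detail levels treated as high-scale."""
    details = pywt.wavedec(eog_diff, "db4", level=level)[1:]  # coarse->fine
    step = int(win * fs)
    score = np.zeros(len(eog_diff) // step)
    for t in range(len(score)):
        e = np.zeros(level)
        for i, d in enumerate(details):
            dec = 2 ** (level - i)              # decimation at this level
            a = t * step // dec
            b = max(a + 1, (t + 1) * step // dec)
            e[i] = np.sum(d[a:b] ** 2)          # detail energy in the step
        score[t] = e[:split].sum() / (e.sum() + 1e-12)
    return score
```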

  6. Chlorobiphenyls in sewage sludge; comparison of extraction methods

    Energy Technology Data Exchange (ETDEWEB)

    Rahman, M.S. [Institute of Food and Radiation Biology, Bangladesh Atomic Energy Commission, Dhaka (Bangladesh); Parreno, M. [National Polytechnical School, Quito (Ecuador); Bossi, R. [National Environmental Research Institute, Roskilde (Denmark); Paya-Perez, A.B.; Larsen, B. [Environment Institute, EC Joint Research Center, Ispra (Italy)

    1998-03-01

    Six extraction methods for the analysis of PCBs (CB-28, CB-52, CB-101, CB-118, CB-138, CB-153 and CB-180) in sewage sludge were tested. A certified reference material (CRM 392) was used to evaluate the performance of the methods. Soxhlet/Dean-Stark with toluene as solvent, Soxhlet with hexane:acetone (2:3), cold digestion/saponification with 2 mol/L KOH in methanol followed by partition with hexane, and sonicated liquid-solid extraction with hexane:acetone (1:1) produced accurate results (97%, 93%, 104%, and 88%, respectively) with acceptable precision (6.2%, 6.8%, 15% and 12%, respectively). Results in close agreement with the certified values for all congeners were obtained by treatment with BF₃-methanol prior to partition with dichloromethane; however, this is a tedious procedure and involves the use of hazardous compounds. Cyclic steam distillation produced results with an accuracy of around 80% and good precision (5.2%). The very low consumption of solvents and other expensive chemicals by this technique, and the possibility of analyzing the extract directly without clean-up, make it an interesting alternative to the more sophisticated methods. Column elution with dichloromethane was found to be less efficient (61%), but it is a rapid, direct method with low solvent consumption and may therefore serve as a screening method. (orig.)

  7. Inter-watershed and Its Automatic Extraction Based on DEMs

    Institute of Scientific and Technical Information of China (English)

    SUN Jianwei; TANG Guo'an

    2013-01-01

    Watershed delineation based on DEMs in a GIS environment is fundamental to hydrological analysis, but the difference between ordinary watersheds and inter-watershed areas has long been ignored, and the two have not been distinguished in their attributes. This paper argues that watershed partitioning must be exhaustive, and clarifies the concept and basic characteristics of inter-watersheds (including their number and area, spatial distribution, and spatial form). Accordingly, a method for extracting inter-watersheds automatically from DEMs is introduced, with the hilly and gully region of the Loess Plateau in northern Shaanxi as a case study. Experimental results show that inter-watersheds can be extracted accurately and quickly after a comprehensive consideration of the flow accumulation threshold, terrain features and the boundary effects of the data. Generally speaking, flat topography may lead to wrongly positioned watershed outlets, which can be corrected with the help of river DLG data. And if the boundary of the data is not a dividing crest, it is difficult to determine whether a watershed whose dividing line consists of the data boundary is an inter-watershed or not. A comparison of inter-watersheds and ordinary watersheds in terms of spatial form, spatial distribution and hydrological characteristics, based on the experimental results, shows that inter-watershed performs

  8. An efficient and cost-effective method for DNA extraction from athalassohaline soil using a newly formulated cell extraction buffer

    OpenAIRE

    Narayan, Avinash; Jain, Kunal; Shah, Amita R.; Madamwar, Datta

    2016-01-01

    The present study describes a rapid and efficient indirect-lysis method for environmental DNA extraction from athalassohaline soil using a newly formulated cell extraction buffer. The available methods are mostly based on direct lysis, which leads to DNA shearing and to co-extraction of extracellular DNA that distorts community and functional analysis. Moreover, during extraction of DNA by direct lysis from athalassohaline soil, it was observed that, upon addition of polyethylene glycol (PEG...

  9. A new method for stable lead isotope extraction from seawater

    Energy Technology Data Exchange (ETDEWEB)

    Zurbrick, Cheryl M., E-mail: CZurbric@ucsc.edu [WIGS, Department of Microbiology and Environmental Toxicology, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States); Gallon, Céline [Institute of Marine Sciences, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States); Flegal, A. Russell [WIGS, Department of Microbiology and Environmental Toxicology, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States); Institute of Marine Sciences, University of California Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States)

    2013-10-24

    Graphical abstract: -- Highlights: •We present a relatively fast (2.5–6.5 h), semi-automated system to extract Pb from seawater. •Extraction requires few chemicals and has a relatively low blank (0.7 pmol kg{sup −1}). •We compare analyses of Pb isotopes by HR ICP-MS with those by MC-ICP-MS. -- Abstract: A new technique for stable lead (Pb) isotope extraction from seawater is established using Toyopearl AF-Chelate 650 M{sup ®} resin (Tosoh Bioscience LLC). This new method is advantageous because it is semi-automated and relatively fast; in addition it introduces a relatively low blank by minimizing the volume of chemicals used in the extraction. Subsequent analyses by HR ICP-MS have a good relative external precision (2σ) of 3.5‰ for {sup 206}Pb/{sup 207}Pb, while analyses by MC-ICP-MS have a better relative external precision of 0.6‰. However, Pb sample concentrations limit MC-ICP-MS analyses to {sup 206}Pb, {sup 207}Pb, and {sup 208}Pb. The method was validated by processing the common Pb isotope reference material NIST SRM-981 and several GEOTRACES intercalibration samples, followed by analyses by HR ICP-MS, all of which showed good agreement with previously reported values.

  10. Evaluation of in vitro antioxidant potential of different polarities stem crude extracts by different extraction methods of Adenium obesum

    Institute of Scientific and Technical Information of China (English)

    Mohammad Amzad Hossain; Tahiya Hilal Ali Alabri; Amira Hamood Salim Al Musalami; Md. Sohail Akhtar; Sadri Said

    2014-01-01

    Objective: To select the best extraction method for isolating antioxidant compounds from the stems of Adenium obesum. Methods: Two extraction methods were used, Soxhlet and maceration, with methanol as the solvent in both. The methanol crude extract was defatted with water and extracted successively with hexane, chloroform, ethyl acetate and butanol. The antioxidant potential of all crude extracts was determined using the 1,1-diphenyl-2-picrylhydrazyl (DPPH) method. Results: The extraction yield by the Soxhlet method was higher than by maceration. The antioxidant potential of methanol and its derived fractions obtained by Soxhlet extraction was highest in the ethyl acetate and lowest in the hexane crude extract, in the order ethyl acetate>butanol>water>chloroform>methanol>hexane. By maceration, the antioxidant potential was highest in butanol and lowest in hexane, in the order butanol>methanol>chloroform>water>ethyl acetate>hexane. Conclusions: The results show that the antioxidant activity of the isolated compounds depends on the extraction method and the conditions of extraction.
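
    For reference, the DPPH assay used here reports antioxidant potential as the percent decrease in DPPH absorbance caused by the extract. A minimal sketch of that arithmetic; 517 nm is the usual assay wavelength, and the absorbance values below are hypothetical:

    def dpph_inhibition(a_control, a_sample):
        """Percent DPPH radical scavenging; higher means stronger antioxidant."""
        return (a_control - a_sample) / a_control * 100.0

    # Hypothetical absorbances at 517 nm for two crude extracts.
    print(dpph_inhibition(0.92, 0.31))  # e.g. ethyl acetate fraction -> ~66.3%
    print(dpph_inhibition(0.92, 0.78))  # e.g. hexane fraction -> ~15.2%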

  11. Review on oil extraction methods from microalgae

    Institute of Scientific and Technical Information of China (English)

    贺赐安; 余旭亚; 赵鹏; 王琳

    2012-01-01

    Fast and effective oil extraction from microalgae is a key process constraining the development of microalgae as a source of biomass energy. The application of microalgae oils in developing biodiesel and in screening bioactive compounds is introduced. The choice of organic solvent and the cell-disruption treatments used in the oil extraction process are reviewed, and research progress on extraction methods such as ultrasonic- or microwave-assisted extraction, pressurized liquid extraction, automatic acid hydrolysis extraction and enzymatic hydrolysis extraction is summarized.

  12. Battery-powered transport systems. Possible methods of automatically charging drive batteries

    Energy Technology Data Exchange (ETDEWEB)

    1981-03-01

    In modern driverless transport systems, not only easy maintenance of the drive battery but also automatic charging during standstill periods is important. Some systems are presented; one in which 100 batteries can be charged at the same time is pointed out in particular.

  13. Automatic all position welding for horizontally fixed tubes by tungsten inert gas arc welding method

    International Nuclear Information System (INIS)

    The welding of fixed tubes is mostly all-position welding in restricted places; accordingly, much skill is required. The automation of welding is necessary because of the requirement for reliable welded joints, the difficulty of securing skilled workers, and welding quality. Automatic welders for the TIG welding of tubes have been developed and produced by Mitsubishi Electric Corp. and applied to various purposes. TIG welding is advantageous for the automatic welding of tubes because the backside bead can be formed stably, spatter does not arise, welding is stable for every metal, and the mechanism of the automatic welders is simple; however, it is not suitable for welding zinc-plated tubes, and the deposition rate is relatively small. It is applied to the welding of boiler tubes, nuclear energy equipment and piping, chemical equipment and piping, and aluminum piping. The specifications and construction of the TIG tube welders are shown. Joint preparation and the control of welding conditions are important for guaranteeing the welding results in automatic welding, so sufficient consultation with welder makers on these points is required. The welding defects apt to arise are badly formed backside beads, blowholes, and insufficient melting of intermediate layers, and countermeasures against them must be taken. (Kako, I.)

  14. THE METHODS OF EXTRACTING WATER INFORMATION FROM SPOT IMAGE

    Institute of Scientific and Technical Information of China (English)

    DU Jin-kang; FENG Xue-zhi; et al.

    2002-01-01

    Some techniques and methods for deriving water information from SPOT-4 (XI) imagery are investigated and discussed in this paper. A decision-tree (DT) classification algorithm, comprising several classifiers based on the spectral response characteristics of water bodies and other objects, was developed to delineate water bodies. Another decision-tree algorithm based on both spectral characteristics and auxiliary DEM and slope information (DTDS) was also designed for water-body extraction. In addition, the supervised maximum-likelihood classification (MLC) method and the unsupervised interactive self-organizing data analysis technique (ISODATA) were used to extract water bodies for comparison. An index was designed and used to assess the accuracy of the different methods adopted in the research. Results show that water extraction accuracy varied with the technique applied: it was low using ISODATA, very high using the DT algorithm, and higher still using DTDS and MLC.
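
    A decision-tree classifier of this kind boils down to a cascade of spectral rules. The sketch below is a toy stand-in, not the paper's rule set: the NDWI form (green vs. NIR) is standard, but every threshold here is hypothetical:

    import numpy as np

    def water_mask(green, red, nir, swir):
        """Toy rule cascade for water delineation on multispectral bands."""
        ndwi = (green - nir) / (green + nir + 1e-9)
        mask = ndwi > 0.0        # water reflects more green than NIR
        mask &= swir < 0.1       # water absorbs strongly in the SWIR
        mask &= nir < red        # suppress bright vegetation and soil
        return mask

    bands = [np.random.rand(64, 64).astype(np.float32) for _ in range(4)]
    mask = water_mask(*bands)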

  15. SAR Data Fusion Imaging Method Oriented to Target Feature Extraction

    Directory of Open Access Journals (Sweden)

    Yang Wei

    2015-02-01

    To address the difficulty of extracting target outlines precisely when the variation of target scattering characteristics is neglected during the processing of high-resolution space-borne SAR data, a novel fusion imaging method oriented to target feature extraction is proposed. Firstly, several important aspects that affect target feature extraction and SAR image quality are analyzed, including the curved orbit, the stop-and-go approximation, atmospheric delay, and high-order residual phase error, and the corresponding compensation methods are addressed. Based on this analysis, a mathematical model of the SAR echo combined with the target space-time spectrum is established to explain the space-time-frequency variation of the target scattering characteristics. Moreover, a fusion imaging strategy and method for high-resolution, ultra-large observation-angle conditions are put forward to improve SAR quality by fusion processing in the range-Doppler and image domains. Finally, simulations based on typical military targets are used to verify the effectiveness of the fusion imaging method.

  16. A method of automatically registering point cloud data based on range images

    Institute of Scientific and Technical Information of China (English)

    田慧; 周绍光; 李浩

    2012-01-01

    Point cloud registration plays an essential role in processing data acquired with 3D laser scanners. One traditional registration scheme is based on targets, which must be scanned separately at each station and registered semi-automatically. This paper presents a registration strategy that converts single-station point clouds to range images by the central projection principle, uses digital image processing to extract the targets automatically, fits the coordinates of their center points, and applies photogrammetric principles to register the point clouds automatically. Experiments demonstrate the effectiveness of the method.
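
    The central-projection step, which turns a scanner-centred point cloud into a range image that 2D image processing can work on, can be sketched as follows; the angular resolutions are hypothetical and the paper's projection details may differ:

    import numpy as np

    def to_range_image(points, h_res=0.25, v_res=0.25):
        """Bin scanner-centred points (N x 3) into an azimuth/elevation range image."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        rng = np.sqrt(x**2 + y**2 + z**2)
        az = np.degrees(np.arctan2(y, x)) + 180.0                      # 0..360 deg
        el = np.degrees(np.arcsin(z / np.maximum(rng, 1e-9))) + 90.0   # 0..180 deg
        n_cols, n_rows = int(360 / h_res), int(180 / v_res)
        cols = (az / h_res).astype(int) % n_cols
        rows = (el / v_res).astype(int) % n_rows
        img = np.full((n_rows, n_cols), np.nan)
        img[rows, cols] = rng        # each pixel stores the measured range
        return img

    img = to_range_image(np.random.randn(10000, 3))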

  17. Establishing a novel automated magnetic bead-based method for the extraction of DNA from a variety of forensic samples.

    Science.gov (United States)

    Witt, Sebastian; Neumann, Jan; Zierdt, Holger; Gébel, Gabriella; Röscheisen, Christiane

    2012-09-01

    Automated systems have been increasingly utilized for DNA extraction by many forensic laboratories to handle growing numbers of forensic casework samples while minimizing the risk of human errors and assuring high reproducibility. The step towards automation however is not easy: The automated extraction method has to be very versatile to reliably prepare high yields of pure genomic DNA from a broad variety of sample types on different carrier materials. To prevent possible cross-contamination of samples or the loss of DNA, the components of the kit have to be designed in a way that allows for the automated handling of the samples with no manual intervention necessary. DNA extraction using paramagnetic particles coated with a DNA-binding surface is predestined for an automated approach. For this study, we tested different DNA extraction kits using DNA-binding paramagnetic particles with regard to DNA yield and handling by a Freedom EVO(®)150 extraction robot (Tecan) equipped with a Te-MagS magnetic separator. Among others, the extraction kits tested were the ChargeSwitch(®)Forensic DNA Purification Kit (Invitrogen), the PrepFiler™Automated Forensic DNA Extraction Kit (Applied Biosystems) and NucleoMag™96 Trace (Macherey-Nagel). After an extensive test phase, we established a novel magnetic bead extraction method based upon the NucleoMag™ extraction kit (Macherey-Nagel). The new method is readily automatable and produces high yields of DNA from different sample types (blood, saliva, sperm, contact stains) on various substrates (filter paper, swabs, cigarette butts) with no evidence of a loss of magnetic beads or sample cross-contamination.

  18. Development of an Analytical Method Based on Temperature Controlled Solid-Liquid Extraction Using an Ionic Liquid as Solid Solvent

    Directory of Open Access Journals (Sweden)

    Zhongwei Pan

    2015-12-01

    In the present paper, an analytical method based on temperature-controlled solid-liquid extraction (TC-SLE), utilizing a synthesized ionic liquid, N-butylpyridinium hexafluorophosphate ([BPy]PF6), as solid solvent and phenanthroline (PT) as extractant, was developed to determine micro levels of Fe2+ in tea by PT spectrophotometry. TC-SLE was carried out in two continuous steps: Fe2+ is completely extracted by PT-[BPy]PF6 or back-extracted at 80 °C, and the two phases separate automatically on cooling to room temperature. Back-extraction of Fe2+ requires 2 mol/L HNO3 as stripping agent, and the whole process was monitored by PT spectrophotometry at room temperature. The extracted species is neutral Fe(PT)mCl2 (m = 1) according to slope analysis of the Fe2+-[BPy]PF6-PT TC-SLE system. The calibration curve was Y = 0.20856X − 0.000775 (correlation coefficient = 0.99991), the linear calibration range was 0.10–4.50 μg/mL, and the limit of detection for Fe2+ was 7.0 × 10−2 μg/mL. With this method, the Fe2+ contents of Tieguanyin tea were determined with RSDs (n = 5) of 3.05% and recoveries in the range 90.6%–108.6%.
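
    With the reported calibration line, an unknown Fe2+ concentration follows by simple inversion. A minimal sketch using the paper's coefficients and linear range; the absorbance value is hypothetical:

    SLOPE, INTERCEPT = 0.20856, -0.000775    # reported line: Y = aX + b

    def fe_concentration(absorbance):
        """Back-calculate Fe2+ (ug/mL) from absorbance via the calibration line."""
        conc = (absorbance - INTERCEPT) / SLOPE
        if not (0.10 <= conc <= 4.50):       # reported linear range
            raise ValueError("outside the validated linear range")
        return conc

    print(round(fe_concentration(0.417), 3))   # ~2.003 ug/mL (hypothetical reading)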

  19. Cyclopamine bioactivity by extraction method from Veratrum californicum.

    Science.gov (United States)

    Turner, Matthew W; Cruz, Roberto; Mattos, Jared; Baughman, Nic; Elwell, Jordan; Fothergill, Jenny; Nielsen, Anna; Brookhouse, Jessica; Bartlett, Ashton; Malek, Petr; Pu, Xinzhu; King, Matthew D; McDougal, Owen M

    2016-08-15

    Veratrum californicum, commonly referred to as corn lily or Californian false hellebore, grows in high mountain meadows and produces the steroidal alkaloid cyclopamine, a potent inhibitor of the Hedgehog (Hh) signaling pathway. The Hh pathway is a crucial regulator of many fundamental processes during vertebrate embryonic development; however, constitutive activation of the Hh pathway contributes to the progression of various cancers. In the present study, a direct correlation was made between the extraction efficiency for cyclopamine from root and rhizome by eight methods and the associated biological activity in Shh-Light II cells using the Dual-Glo® Luciferase Assay System. Alkaloid recovery ranged from 0.39 to 8.03 mg/g, with an ethanol soak determined to be the superior method for obtaining biologically active cyclopamine. Acidic ethanol and supercritical extractions yielded degraded or contaminated cyclopamine with lower antagonistic activity towards Hh signaling. PMID:27338657

  20. Integrated Phoneme Subspace Method for Speech Feature Extraction

    Directory of Open Access Journals (Sweden)

    Park Hyunsin

    2009-01-01

    Speech feature extraction has been a key focus in robust speech recognition research. In this work, we discuss data-driven linear feature transformations applied to feature vectors in the logarithmic mel-frequency filter bank domain. The transformations are based on principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA). Furthermore, this paper introduces a new feature extraction technique that collects the correlation information among phoneme subspaces and reconstructs the feature space to represent phonemic information efficiently. The proposed speech feature vector is generated by projecting an observed vector onto an integrated phoneme subspace (IPS) based on PCA or ICA. The performance of the new feature was evaluated for isolated-word speech recognition. The proposed method provided higher recognition accuracy than conventional methods in clean and reverberant environments.
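
    One plausible reading of the PCA-based IPS construction: estimate a small PCA basis per phoneme, stack and re-orthogonalise the bases, and project each observed frame onto the result. This sketch is an interpretation, not the authors' code; the feature dimension, subspace size and random data are placeholders:

    import numpy as np

    def ips_projection(frames_by_phoneme, x, k=5):
        """Project feature vector x onto an integrated phoneme subspace (sketch)."""
        bases = []
        for frames in frames_by_phoneme.values():      # (n_i x d) log-mel frames
            centred = frames - frames.mean(axis=0)
            _, _, vt = np.linalg.svd(centred, full_matrices=False)
            bases.append(vt[:k])                       # top-k principal directions
        q, _ = np.linalg.qr(np.vstack(bases).T)        # orthonormal integrated basis
        return q.T @ x                                 # new feature vector

    feats = {p: np.random.randn(200, 24) for p in ("a", "i", "u")}
    z = ips_projection(feats, np.random.randn(24))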

  1. One-step column chromatographic extraction with gradient elution followed by automatic separation of volatiles, flavonoids and polysaccharides from Citrus grandis.

    Science.gov (United States)

    Han, Han-Bing; Li, Hui; Hao, Rui-Lin; Chen, Ya-Fei; Ni, He; Li, Hai-Hang

    2014-02-15

    Citrus grandis Tomentosa is widely used in traditional Chinese medicine and health foods. Its functional components include volatiles, flavonoids and polysaccharides, which cannot be effectively extracted by traditional methods. A column chromatographic extraction with gradient elution was developed for one-step extraction of all these bioactive substances from C. grandis. Dried material was loaded into a column with petroleum ether:ethanol (8:2, PE) and sequentially eluted with 2-fold PE, 3-fold ethanol:water (6:4) and 8-fold water. The eluates were separated into an ether fraction containing the volatiles and an ethanol-water fraction containing the flavonoids and polysaccharides; the latter was further separated by precipitating the polysaccharides with 80% ethanol. Through this procedure, the volatiles, flavonoids and polysaccharides in C. grandis were simultaneously extracted at 98% extraction rates and simply separated at recovery rates above 95%. The method provides a simple and highly efficient extraction and separation of a wide range of bioactive substances.

  2. Detecting and extracting clusters in atom probe data: A simple, automated method using Voronoi cells

    International Nuclear Information System (INIS)

    The analysis of the formation of clusters in solid solutions is one of the most common uses of atom probe tomography. Here, we present a method where we use the Voronoi tessellation of the solute atoms and its geometric dual, the Delaunay triangulation to test for spatial/chemical randomness of the solid solution as well as extracting the clusters themselves. We show how the parameters necessary for cluster extraction can be determined automatically, i.e. without user interaction, making it an ideal tool for the screening of datasets and the pre-filtering of structures for other spatial analysis techniques. Since the Voronoi volumes are closely related to atomic concentrations, the parameters resulting from this analysis can also be used for other concentration based methods such as iso-surfaces. - Highlights: • Cluster analysis of atom probe data can be significantly simplified by using the Voronoi cell volumes of the atomic distribution. • Concentration fields are defined on a single atomic basis using Voronoi cells. • All parameters for the analysis are determined by optimizing the separation probability of bulk atoms vs clustered atoms

  3. Detecting and extracting clusters in atom probe data: A simple, automated method using Voronoi cells

    Energy Technology Data Exchange (ETDEWEB)

    Felfer, P., E-mail: peter.felfer@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia); Ceguerra, A.V., E-mail: anna.ceguerra@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia); Ringer, S.P., E-mail: simon.ringer@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia); Cairney, J.M., E-mail: julie.cairney@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia)

    2015-03-15

    The analysis of the formation of clusters in solid solutions is one of the most common uses of atom probe tomography. Here, we present a method where we use the Voronoi tessellation of the solute atoms and its geometric dual, the Delaunay triangulation to test for spatial/chemical randomness of the solid solution as well as extracting the clusters themselves. We show how the parameters necessary for cluster extraction can be determined automatically, i.e. without user interaction, making it an ideal tool for the screening of datasets and the pre-filtering of structures for other spatial analysis techniques. Since the Voronoi volumes are closely related to atomic concentrations, the parameters resulting from this analysis can also be used for other concentration based methods such as iso-surfaces. - Highlights: • Cluster analysis of atom probe data can be significantly simplified by using the Voronoi cell volumes of the atomic distribution. • Concentration fields are defined on a single atomic basis using Voronoi cells. • All parameters for the analysis are determined by optimizing the separation probability of bulk atoms vs clustered atoms.
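
    The central quantity here, the volume of each solute atom's Voronoi cell, is easy to obtain with scipy, and small cells flag locally high solute concentration. A minimal sketch; the percentile cut at the end is illustrative only, whereas the paper determines its separation parameters automatically:

    import numpy as np
    from scipy.spatial import Voronoi, ConvexHull

    def voronoi_volumes(solute_xyz):
        """Voronoi cell volume per solute atom (np.inf for unbounded border cells)."""
        vor = Voronoi(solute_xyz)
        vols = np.full(len(solute_xyz), np.inf)
        for i, region_idx in enumerate(vor.point_region):
            region = vor.regions[region_idx]
            if region and -1 not in region:            # bounded cells only
                vols[i] = ConvexHull(vor.vertices[region]).volume
        return vols

    pts = np.random.rand(500, 3)                       # stand-in solute positions
    vols = voronoi_volumes(pts)
    dense = vols < np.percentile(vols[np.isfinite(vols)], 10)   # illustrative cut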

  4. Extractive method for obtaining gas inclusions from ice

    International Nuclear Information System (INIS)

    Knowledge of the chemical composition of the gases included in ice is doubtless important for glaciological investigations of firn and ice. A method for the quantitative extraction of gases from about 30 kg of ice under vacuum is presented in this paper. The procedure was tested with ice cores from a thermoelectrical drill hole near the Soviet Antarctic station Novolazarevskaya. The chemical compositions of the inclusion gases and the specific gas contents of six horizons are presented in a table and several graphs. (author)

  5. An Iterative Method for Extracting Chinese Unknown Words

    Institute of Scientific and Technical Information of China (English)

    HE Shan; ZHU Jie

    2001-01-01

    An iterative method for extracting unknown words from a Chinese text corpus is proposed in this paper. Unlike traditional non-iterative segmentation-detection approaches, which use only known words for segmentation, the proposed method iteratively extracts new words and adds them into the lexicon. The augmented dictionary, which includes known words and potential unknown words, is then used in the next iteration to re-segment the input corpus. Experiments show that both the precision and recall rates of segmentation are improved.
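
    The iterate-segment-augment loop can be sketched with greedy maximum matching standing in for the segmenter; promoting frequent adjacent single-character pairs is a simplification of the paper's new-word detection, and min_freq/max_len are hypothetical:

    from collections import Counter

    def segment(text, lexicon, max_len=4):
        """Greedy forward maximum matching against the current lexicon."""
        tokens, i = [], 0
        while i < len(text):
            for L in range(min(max_len, len(text) - i), 0, -1):
                if L == 1 or text[i:i + L] in lexicon:
                    tokens.append(text[i:i + L])
                    i += L
                    break
        return tokens

    def extract_unknown_words(corpus, lexicon, iterations=5, min_freq=3):
        """Iteratively promote frequent adjacent single-character pairs to words."""
        lexicon = set(lexicon)
        for _ in range(iterations):
            tokens = segment(corpus, lexicon)
            pairs = Counter(a + b for a, b in zip(tokens, tokens[1:])
                            if len(a) == 1 and len(b) == 1)
            new = {w for w, n in pairs.items() if n >= min_freq}
            if not new:
                break
            lexicon |= new      # augmented dictionary for the next pass
        return lexicon

    print(extract_unknown_words("新词发现新词提取新词统计", {"发现", "提取", "统计"}))
    # "新词" is promoted to the lexicon after the first pass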

  6. Methods for extracting social network data from chatroom logs

    Science.gov (United States)

    Osesina, O. Isaac; McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.; Bartley, Cecilia; Tudoreanu, M. Eduard

    2012-06-01

    Identifying social network (SN) links within computer-mediated communication platforms without explicit relations among users poses challenges to researchers. Our research aims to extract SN links in internet chat where multiple users engage in synchronous, overlapping conversations all displayed in a single stream. We approached this problem using three methods that build on previous research: response-time analysis builds on the temporal proximity of chat messages; word-context usage builds on keyword analysis; and direct addressing infers links by identifying the intended message recipient from the screen name (nickname) referenced in the message [1]. Our analysis of word usage within the chat stream also provides contexts for the extracted SN links. To test the capability of our methods, we used publicly available data from Internet Relay Chat (IRC), a real-time computer-mediated communication (CMC) tool used by millions of people around the world. The extraction performance of the individual methods and their hybrids was assessed relative to a ground truth (determined a priori via manual scoring).
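
    Two of the three cues, direct addressing and response-time proximity, combine naturally into a link counter. A toy sketch, not the authors' pipeline; the 30-second response window is hypothetical:

    from collections import Counter

    def chat_links(messages, nicknames, window=30.0):
        """Count inferred (speaker, addressee) links from (time, speaker, text) logs."""
        links, prev = Counter(), None
        for ts, speaker, text in messages:
            # Direct addressing: a referenced nickname names the recipient.
            addressee = next((n for n in nicknames
                              if n != speaker and n in text.split()), None)
            # Response time: otherwise credit the previous speaker if close enough.
            if addressee is None and prev is not None:
                p_ts, p_speaker = prev
                if p_speaker != speaker and ts - p_ts <= window:
                    addressee = p_speaker
            if addressee:
                links[(speaker, addressee)] += 1
            prev = (ts, speaker)
        return links

    log = [(0.0, "ann", "anyone around?"), (4.2, "bob", "yes ann"), (9.8, "ann", "great")]
    print(chat_links(log, {"ann", "bob", "cat"}))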

  7. Evaluation of DNA extraction methods for freshwater eukaryotic microalgae.

    Science.gov (United States)

    Eland, Lucy E; Davenport, Russell; Mota, Cesar R

    2012-10-15

    The use of molecular methods to investigate microalgal communities of natural and engineered freshwater resources is in its infancy, with the majority of previous studies carried out by microscopy. Inefficient or differential DNA extraction of microalgal community members can lead to bias in downstream community analysis. Three commercially available DNA extraction kits were tested on a range of pure-culture freshwater algal species with diverse cell walls and on mixed algal cultures taken from eutrophic waste stabilization ponds (WSP). DNA yield and quality were evaluated, along with DNA suitability for amplification of 18S rRNA gene fragments by polymerase chain reaction (PCR). The Qiagen DNeasy(®) Blood and Tissue kit (QBT) was found to give the highest DNA yields and quality. Denaturing gradient gel electrophoresis (DGGE) was used to assess the diversity of the communities from which DNA was extracted; no significant differences were found among kits when assessing diversity. QBT is recommended for use with WSP samples, a conclusion confirmed by further testing on communities from two tropical WSP systems. The fixation of microalgal samples with ethanol prior to DNA extraction was found to reduce yields as well as diversity and is not recommended.

  8. A hybrid method for pancreas extraction from CT image based on level set methods.

    Science.gov (United States)

    Jiang, Huiyan; Tan, Hanqing; Fujita, Hiroshi

    2013-01-01

    This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require the initial contour to be located near the final boundary of the object, suffer from leakage into the tissues neighbouring the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to overcome the sensitivity of level set methods to the initial contour location, and a modified distance-regularized level set method, which extracts the pancreas accurately. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcomings of oversegmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluation results demonstrate that our method outperforms the other methods, achieving higher accuracy and less false segmentation in pancreas extraction.

  9. A moving foreground objects extraction method under camouflage effect

    Science.gov (United States)

    Zhu, Zhen-zhen; Li, Jing-yue; Yang, Si-si; Zhou, Hong

    2015-07-01

    This paper discusses the problem of segmenting foreground objects with apertures or discontinuities under a camouflage effect, introducing an optical physics model into foreground detection. A moving foreground object extraction method based on color invariants is proposed, in which color invariants are used as descriptors to model the background and perform the foreground segmentation. It makes full use of color spectral information and spatial configuration. Experimental results demonstrate that the proposed method performs well in various situations of color similarity and meets real-time performance requirements.

  10. An automatic 3D CAD model errors detection method of aircraft structural part for NC machining

    Directory of Open Access Journals (Sweden)

    Bo Huang

    2015-10-01

    Feature-based NC machining, which requires high-quality 3D CAD models, is widely used in machining aircraft structural parts. However, there has been little research on automatically detecting CAD model errors, so the user has to check for them manually, with great effort, before NC programming. This paper proposes an automatic CAD model error detection approach for aircraft structural parts. First, the base faces are identified based on the reference directions corresponding to the machining coordinate systems. Then, the CAD models are partitioned into multiple local regions based on the base faces. Finally, the CAD model error types are evaluated based on heuristic rules. A prototype system based on CATIA has been developed to verify the effectiveness of the proposed approach.

  11. Comparison of HMM and DTW methods in automatic recognition of pathological phoneme pronunciation

    OpenAIRE

    Wielgat, Robert; Zielinski, Tomasz P.; Swietojanski, Pawel; Zoladz, Piotr; Król, Daniel; Wozniak, Tomasz; Grabias, Stanislaw

    2007-01-01

    In this paper, the recently proposed Human Factor Cepstral Coefficients (HFCC) are used for automatic recognition of pathological phoneme pronunciation in the speech of impaired children, and the efficiency of this approach is compared to that of the standard Mel-Frequency Cepstral Coefficients (MFCC) as the feature vector. Both dynamic time warping (DTW), working on whole words or embedded phoneme patterns, and hidden Markov models (HMM) are used as classifiers in the presented research. Obtained resul...

  12. Automatic Method for Controlling the Iodine Adsorption Number in Carbon Black Oil Furnaces

    OpenAIRE

    Zečević, N.

    2008-01-01

    There are numerous different inlet process factors in carbon black oil furnaces which must be continuously and automatically adjusted to ensure stable quality of the final product. The six most important inlet process factors in carbon black oil furnaces are: 1. the volume flow of process air for combustion; 2. the temperature of process air for combustion; 3. the volume flow of natural gas to ensure the heat necessary for the thermal conversion reaction of the hydrocarbon oil feedstock in oil-furnace carbon blac...

  13. Biodiesel Production from Microalgae by Extraction – Transesterification Method

    Directory of Open Access Journals (Sweden)

    Nguyen Thi Phuong Thao

    2013-11-01

    The environmental impact of using petroleum fuels has led to a quest for a suitable alternative fuel source. In this study, microalgae were explored as a highly promising feedstock for biodiesel production. First, algal oil is extracted from algal biomass using an organic solvent (n-hexane); lipids can make up to 60% of microalgal weight. Biodiesel is then created through transesterification, a chemical reaction between the algal oil and an alcohol (methanol) with a strong acid (such as H2SO4) as the catalyst. The extraction-transesterification method resulted in a high biodiesel yield (10% of algal biomass) and high FAME content (5.2% of algal biomass). Biodiesel production from microalgae was studied through experimental investigation of transesterification conditions such as reaction time, methanol-to-oil ratio and catalyst dosage, which are deemed to have the main impact on reaction conversion efficiency. The parameters characterizing the purified biodiesel, such as free glycerin, total glycerin, flash point and sulfur content, were analyzed according to the ASTM standard. Doi: http://dx.doi.org/10.12777/wastech.1.1.6-9 Citation: Thao, N.T.P., Tin, N.T., and Thanh, B.X. 2013. Biodiesel Production from Microalgae by Extraction – Transesterification Method. Waste Technology 1(1):6-9. Doi: http://dx.doi.org/10.12777/wastech.1.1.6-9

  14. A comparison of single channel fetal ECG extraction methods.

    Science.gov (United States)

    Behar, Joachim; Johnson, Alistair; Clifford, Gari D; Oster, Julien

    2014-06-01

    The abdominal electrocardiogram (ECG) provides a non-invasive method for monitoring fetal cardiac activity in pregnant women. However, the temporal and frequency overlap between the fetal ECG (FECG), the maternal ECG (MECG) and noise results in a challenging source separation problem. This work compares temporal extraction methods for extracting the fetal signal and estimating fetal heart rate. A novel method for MECG cancellation using an echo state neural network (ESN) based filtering approach was compared with the least mean squares (LMS) and recursive least squares (RLS) adaptive filters and template subtraction (TS) techniques. Analysis was performed using real signals from two databases comprising a total of 4 h 22 min of data from nine pregnant women with 37,452 reference fetal beats. The effect of preprocessing the signals was empirically evaluated. The results demonstrate that the ESN-based algorithm performs best on the test data, with an F1 measure of 90.2% compared to the LMS (87.9%), RLS (88.2%) and TS (89.3%) techniques. Results suggest that a higher baseline-wander high-pass cut-off frequency than traditionally used for FECG analysis significantly increases performance for all evaluated methods. Open source code for the benchmark methods is made available to allow comparison and reproducibility on the public domain data. PMID:24604619
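
    Among the benchmarked techniques, the LMS adaptive filter is the simplest to write down: a maternal (chest) reference predicts the maternal component in the abdominal lead, and the residual approximates the FECG. A minimal sketch; the filter order, step size and synthetic signals are hypothetical, not the paper's settings:

    import numpy as np

    def lms_mecg_cancel(abdominal, maternal_ref, order=16, mu=0.01):
        """LMS cancellation of the maternal ECG; returns the FECG-dominated residual."""
        w = np.zeros(order)
        residual = np.zeros_like(abdominal)
        for n in range(order, len(abdominal)):
            x = maternal_ref[n - order:n][::-1]   # most recent reference samples first
            e = abdominal[n] - w @ x              # remove predicted maternal part
            w += 2 * mu * e * x                   # LMS weight update
            residual[n] = e
        return residual

    t = np.arange(0, 10, 1 / 250.0)                    # 250 Hz, 10 s
    mecg = np.sin(2 * np.pi * 1.2 * t)                 # crude maternal stand-in
    fecg = 0.2 * np.sin(2 * np.pi * 2.3 * t)           # crude fetal stand-in
    recovered = lms_mecg_cancel(mecg + fecg, mecg)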

  15. Automatic Road Marking Detection and Extraction Based on LiDAR Point Clouds from Vehicle-Borne MMS

    Institute of Scientific and Technical Information of China (English)

    邹晓亮; 缪剑; 郭锐增; 李星全; 赵桂华

    2012-01-01

    This research focuses on road-surface LiDAR point clouds acquired by the vehicle-borne mobile mapping system LandMark. An automatic road marking detection and extraction method is proposed: combining the LiDAR echo reflectance, scan angle and measured range with the attributes of traffic markings, the road-marking points are extracted from the point cloud; the markings are then fitted with a least-squares polynomial and a CAD outline map is generated, achieving automatic recognition of road markings. Experiments on road-surface point clouds acquired by the Sick laser scanner mounted on the LandMark system demonstrate the feasibility and effectiveness of the method.
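
    The reflectance-threshold-plus-least-squares pipeline can be sketched as follows. The intensity cut and polynomial degree are hypothetical, and the actual method also exploits scan angle and range:

    import numpy as np

    def extract_marking(points_xy, intensity, intensity_min=0.6, degree=3):
        """Keep highly retro-reflective returns and fit a marking line (sketch)."""
        marks = points_xy[intensity > intensity_min]   # road paint is retro-reflective
        coeffs = np.polyfit(marks[:, 0], marks[:, 1], degree)  # least-squares y = f(x)
        xs = np.linspace(marks[:, 0].min(), marks[:, 0].max(), 200)
        return np.column_stack([xs, np.polyval(coeffs, xs)])   # fitted polyline

    pts = np.random.rand(5000, 2) * [50.0, 3.0]        # stand-in ground points (x, y)
    inten = np.random.rand(5000)                       # stand-in normalized intensity
    polyline = extract_marking(pts, inten)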

  16. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    Science.gov (United States)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for imaging cancer in vivo. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, the Metabolic Tumor Volume (MTV) has gained an important role, particularly considering the development of patient-personalized radiotherapy treatment with non-homogeneous dose delivery. Different image processing methods have been developed to define the MTV. The proposed PET segmentation strategies have mostly been validated under ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV that is feasible in clinical practice; and 2) to develop a strategy for obtaining anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV with a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ phantom. Validation was performed both under ideal conditions (spherical objects with uniform radioactivity concentration) and non-ideal conditions (non-spherical objects with non-uniform radioactivity concentration). The strategy for obtaining a phantom with synthetic realistic lesions (with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
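
    The threshold-plus-k-means combination the record describes might look like the following. This is an interpretation, not the published algorithm: the background comes from the lower of two k-means intensity clusters, and the 0.42 peak fraction is a common literature choice rather than the paper's calibrated value:

    import numpy as np
    from sklearn.cluster import KMeans

    def metabolic_tumor_volume(suv_voi, voxel_ml, frac=0.42):
        """Adaptive-threshold MTV (ml) from a PET VOI; returns (volume, threshold)."""
        centres = KMeans(n_clusters=2, n_init=10).fit(
            suv_voi.reshape(-1, 1)).cluster_centers_.ravel()
        background, peak = np.sort(centres)[0], suv_voi.max()
        thr = background + frac * (peak - background)
        return (suv_voi >= thr).sum() * voxel_ml, thr

    voi = np.random.gamma(2.0, 1.0, (32, 32, 16))      # stand-in SUV volume
    mtv_ml, thr = metabolic_tumor_volume(voi, voxel_ml=0.064)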

  17. Optimization of Microwave Assisted Extraction of Andrographolide from Andrographis paniculata and its Comparison with Refluxation Extraction Method

    Directory of Open Access Journals (Sweden)

    Manvitha Mohan

    2013-05-01

    A new method using microwave-assisted extraction (MAE) has been developed for the extraction of andrographolide from Andrographis paniculata, a well-known plant of Ayurveda, also called the king of bitters because of the different bitter principles present in different parts of the plant, which exhibits a wide spectrum of biological activities. The MAE conditions, namely irradiation time, temperature and coarseness of the powder, were optimized by means of an orthogonal array design, and the results suggested that the selected parameters were statistically significant. A comparative study of the yield of andrographolide from extracts prepared by microwave-assisted extraction and by refluxation, using methanol and water as solvents, was carried out; the amount of andrographolide was estimated by an HPTLC method. The results indicate that the extracts prepared by microwave-assisted extraction contained more andrographolide than those prepared by the refluxation method.

  18. New learning subspace method for image feature extraction

    Institute of Scientific and Technical Information of China (English)

    CAO Jian-hai; LI Long; LU Chang-hou

    2006-01-01

    A new method, the Windows Minimum/Maximum Module Learning Subspace Method (WMMLSM), is presented for image feature extraction. The WMMLSM is insensitive to the order of the training samples and can effectively regulate the radical vectors of an image feature subspace by selecting the study samples for the subspace iterative learning algorithm, so it can improve the robustness and generalization capacity of a pattern subspace and enhance the recognition rate of a classifier. At the same time, a pattern subspace is built by the PCA method. A classifier based on the WMMLSM was successfully applied to recognize pressed characters in gray-scale images. The results indicate that the correct recognition rate with the WMMLSM is higher than that with the Average Learning Subspace Method, and that both the training speed and the classification speed are improved. The new method is more applicable and efficient.

  19. Advanced Extraction Methods for Actinide/Lanthanide Separations

    Energy Technology Data Exchange (ETDEWEB)

    Scott, M.J.

    2005-12-01

    The separation of An(III) ions from chemically similar Ln(III) ions is perhaps one of the most difficult problems encountered during the processing of nuclear waste. In the 3+ oxidation state, the metal ions have an identical charge and roughly the same ionic radius; they differ strictly in the relative energies of their f- and d-orbitals, and to separate them, ligands will need to be developed that take advantage of this small but important distinction. The extraction of uranium and plutonium from nitric acid solution can be performed quantitatively with TBP (tributyl phosphate); commercially, this process has found wide use in the PUREX (plutonium uranium extraction) reprocessing method. The TRUEX (transuranium extraction) process is further used to co-extract the trivalent lanthanide and actinide ions from the HLLW generated during PUREX extraction; this method uses CMPO [(N,N-diisobutylcarbamoylmethyl)octylphenylphosphine oxide] intermixed with TBP as a synergistic agent. However, the final separation of trivalent actinides from trivalent lanthanides remains a challenging task. In TRUEX nitric acid solution, the Am(III) ion is coordinated by three CMPO molecules and three nitrate anions. Taking inspiration from these data and previous work with calix[4]arene systems, researchers on this project have developed a C3-symmetric tris-CMPO ligand system using a triphenoxymethane platform as a base. The triphenoxymethane ligand systems have many advantages for the preparation of complex ligand systems: the compounds are very easy to prepare, and their steric and solubility properties can be tuned over an extreme range by including different alkoxy and alkyl groups, such as methoxy, ethoxy, t-butoxy, methyl, octyl, or t-pentyl, at the ortho- and para-positions of the aryl rings. The triphenoxymethane ligand system shows promise as an improved extractant for both tetravalent and trivalent actinide recoveries from

  20. Automatic methods for long-term tracking and the detection and decoding of communication dances in honeybees

    Directory of Open Access Journals (Sweden)

    Fernando Wario

    2015-09-01

    The honeybee waggle dance communication system is an intriguing example of abstract animal communication and has been investigated thoroughly over the last seven decades. Typically, observables such as durations or angles are extracted manually, directly from the observation hive or from video recordings, to quantify dance properties, particularly to determine where bees have foraged. In recent years, biology has profited from automation, improving measurement precision, removing human bias, and accelerating data collection. As a further step, we have developed technologies to track all individuals of a honeybee colony and to detect and decode communication dances automatically. In strong contrast to conventional approaches that focus on a small subset of hive life, whether in time, space, or animal identity, our more inclusive system will help the understanding of the dance comprehensively in its spatial, temporal, and social context. In this contribution, we present full specifications of the recording setup and of the software for automatic recognition and decoding of tags and dances, and we discuss potential research directions that may benefit from automation. Lastly, to exemplify the power of the methodology, we show experimental data and respective analyses for a continuous experimental recording nine weeks in duration.

  1. Use of stochastic methods for robust parameter extraction from impedance spectra

    Energy Technology Data Exchange (ETDEWEB)

    Bueschel, Paul, E-mail: paul.bueschel@etit.tu-chemnitz.de; Troeltzsch, Uwe; Kanoun, Olfa

    2011-09-30

    The fitting of impedance models to measured data is an essential step in impedance spectroscopy (IS). Due to the often complicated, nonlinear models, the large number of parameters, the large search spaces and the presence of noise, automated determination of the unknown parameters is a challenging task. The stronger the nonlinear behavior of a model, the weaker the convergence of the corresponding regression, and the probability of becoming trapped in local minima during parameter extraction increases. For fast measurements or automatic measurement systems, these problems become the limiting factors of use. We compared the usability of stochastic algorithms (evolutionary algorithms, simulated annealing and the particle filter) with the widely used tool LEVM for parameter extraction in IS. The comparison is based on a reference model by J.R. Macdonald and on a battery model used with noisy measurement data. The results show different performance of the algorithms for these two problems, depending on the search space and the model used for optimization; the particle filter delivered the most reliable results for both models, even for the ill-posed battery model.
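
    As an illustration of stochastic fitting in IS, the sketch below fits a simple R_s + (R_ct || C) circuit, not Macdonald's reference model or the paper's battery model, with scipy's dual_annealing, a simulated-annealing variant; the bounds are an invented search space:

    import numpy as np
    from scipy.optimize import dual_annealing

    def z_model(p, omega):
        """Series resistance plus a parallel R-C arc: Z = Rs + Rct / (1 + j*w*Rct*C)."""
        rs, rct, c = p
        return rs + rct / (1 + 1j * omega * rct * c)

    def fit_spectrum(omega, z_meas):
        """Global least-squares fit; annealing tolerates poor starting values."""
        def cost(p):
            return np.sum(np.abs(z_model(p, omega) - z_meas) ** 2)
        bounds = [(1e-3, 1e3), (1e-3, 1e4), (1e-9, 1e-1)]
        return dual_annealing(cost, bounds).x

    omega = np.logspace(-1, 4, 40)
    z = z_model((10.0, 100.0, 1e-5), omega) + np.random.normal(0, 0.3, omega.size)
    print(fit_spectrum(omega, z))    # should land near (10, 100, 1e-5)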

  2. A Robust Rigid Skeleton Extraction Method from Noisy Visual Hull Model

    Directory of Open Access Journals (Sweden)

    Xiaojun Wu

    2015-04-01

    Existing skeleton extraction algorithms cannot obtain a satisfactory skeleton from a coarse and noisy model, let alone the joints' central positions needed for a rigid skeleton in a markerless motion capture system. To solve this problem, we propose a rigid skeleton extraction algorithm for noisy visual hull models with phantom volumes. Firstly, we reconstruct the subject's visual hull and the corresponding volumetric model from a multiple-view synchronized video sequence. Secondly, the curve skeleton of the volume model is computed based on the theory of repulsive force fields. Thirdly, we propose a criterion for linking the different limbs of the curve skeleton using a back-tracking method, obtaining the distance and angle threshold values adaptively with a binary search algorithm. Finally, after obtaining a smooth curve skeleton, we determine the joints' central positions in the skeleton using a priori information about the human body to form a rigid skeleton. Experimental results show that the proposed algorithm obtains a desirable rigid skeleton with good robustness and low sensitivity to noise, using an automatic procedure.

  3. Chloroform extraction of iodine in seawater: method development

    Science.gov (United States)

    Seidler, H. B.; Glimme, A.; Tumey, S.; Guilderson, T. P.

    2012-12-01

    While 129I poses little to no radiological health hazard, the isotopic ratio of 129I to stable iodine is very useful as a nearly conservative tracer for ocean mixing processes. The unfortunate disaster at the Fukushima Daiichi nuclear power plant released many radioactive materials into the environment, including 129I, and this release allows oceanic processes to be studied by tracking 129I. However, with such low iodine (~0.5 micromolar) and 129I concentrations [...], the research worked towards maximum efficiency of the process while boosting the recovery of iodine. During development, we assessed each methodological change qualitatively using a color scale (I2 in CHCl3) and quantitatively using Inductively Coupled Plasma Mass Spectrometry (ICP-MS). The "optimized method" yielded a 20-40% increase in iodine recovery compared to the base method (80-85% recovery vs. 60%). Lastly, the "optimized method" was tested by AMS for fractionation of the extracted iodine.

  4. Method of automatic tuning of preset coefficient of electron gain of photoelectron multiplier

    CERN Document Server

    Smirnov, O Yu

    2002-01-01

    The paper describes a technique for tuning the preset electron gain coefficient of a photoelectron multiplier (PEM), ensuring high accuracy with minimal operator involvement. After a rough setting of the PEM voltage, the automatic system tunes the high voltage so that the electron gain coefficient of the PEM corresponds to the preset one within the required accuracy (up to 2%). The technique was used efficiently to tune two thousand PEMs for the Borexino solar neutrino detector at the Gran Sasso National Laboratory, Italy.

  5. Data base structure and Management for Automatic Calculation of 210Pb Dating Methods Applying Different Models

    International Nuclear Information System (INIS)

    The introduction of macros in the calculation sheets allows the automatic application of various dating models using unsupported 210Pb data from a data base. The calculation books that contain the models have been modified to permit the implementation of these macros. The Marine and Aquatic Radioecology Group of CIEMAT (MARG) will be involved in new European projects, so new models have been developed. This report contains a detailed description of: a) the newly implemented macros, b) the design of a dating menu in the calculation sheet, and c) the organization and structure of the data base. (Author) 4 refs.
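
    One of the standard unsupported-210Pb models such macros implement is the CRS (constant rate of supply) model, in which the age of a layer follows from the inventory of unsupported 210Pb below it: t = (1/lambda) * ln(A0/A). A minimal sketch, not the report's macros; the profile arrays below are hypothetical inputs:

    import numpy as np

    LAMBDA_PB210 = np.log(2) / 22.3     # 210Pb decay constant, 1/yr

    def crs_ages(unsupported_bq_kg, dry_mass_kg_m2):
        """CRS ages (years before coring) per slice, top of the core first."""
        inventory = unsupported_bq_kg * dry_mass_kg_m2       # Bq/m2 per slice
        below = np.cumsum(inventory[::-1])[::-1]             # inventory from slice i down
        return np.log(below[0] / below) / LAMBDA_PB210       # t = ln(A0/A) / lambda

    activity = np.array([120.0, 95.0, 70.0, 45.0, 20.0])     # hypothetical profile
    mass = np.array([2.0, 2.1, 2.2, 2.3, 2.4])
    print(np.round(crs_ages(activity, mass), 1))             # monotonically older ages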

  6. Method of extracting heat from dry geothermal reservoirs

    Science.gov (United States)

    Potter, R.M.; Robinson, E.S.; Smith, M.C.

    1974-01-22

    Hydraulic fracturing is used to interconnect two or more holes that penetrate a previously dry geothermal reservoir, and to produce within the reservoir a sufficiently large heat-transfer surface so that heat can be extracted from the reservoir at a usefully high rate by a fluid entering it through one hole and leaving it through another. Introduction of a fluid into the reservoir to remove heat from it and establishment of natural (unpumped) convective circulation through the reservoir to accomplish continuous heat removal are important and novel features of the method. (auth)

  7. A Statistical Approach to Automatic Speech Summarization

    Science.gov (United States)

    Hori, Chiori; Furui, Sadaoki; Malkin, Rob; Yu, Hua; Waibel, Alex

    2003-12-01

    This paper proposes a statistical approach to automatic speech summarization. In our method, a set of words maximizing a summarization score indicating the appropriateness of summarization is extracted from automatically transcribed speech and then concatenated to create a summary. The extraction process is performed using a dynamic programming (DP) technique based on a target compression ratio. In this paper, we demonstrate how an English news broadcast transcribed by a speech recognizer is automatically summarized. We adapted our method, which was originally proposed for Japanese, to English by modifying the model for estimating word concatenation probabilities based on a dependency structure in the original speech given by a stochastic dependency context free grammar (SDCFG). We also propose a method of summarizing multiple utterances using a two-level DP technique. The automatically summarized sentences are evaluated by summarization accuracy based on a comparison with a manual summary of speech that has been correctly transcribed by human subjects. Our experimental results indicate that the method we propose can effectively extract relatively important information and remove redundant and irrelevant information from English news broadcasts.
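
    The word-extraction DP can be sketched as a table where best[j][k] is the best score of a k-word summary ending at word j, with predecessors remembered for backtracking. The word and concatenation scores below are toy stand-ins for the paper's significance, linguistic and word-concatenation models:

    import numpy as np

    def dp_summarize(scores, link, m_keep):
        """Pick m_keep words (original order kept) maximising word + link scores."""
        n = len(scores)
        best = np.full((n, m_keep + 1), -1e18)
        back = np.full((n, m_keep + 1), -1, dtype=int)
        best[:, 1] = scores                      # any single word starts a summary
        for j in range(n):
            for k in range(2, m_keep + 1):
                for i in range(j):               # previous kept word
                    cand = best[i, k - 1] + link[i][j] + scores[j]
                    if cand > best[j, k]:
                        best[j, k], back[j, k] = cand, i
        j, picked = int(np.argmax(best[:, m_keep])), []
        for k in range(m_keep, 0, -1):
            picked.append(j)
            j = back[j, k]
        return picked[::-1]

    scores = np.array([3.0, 1.0, 2.5, 0.5, 2.0])            # toy word importance
    link = [[0.5] * 5 for _ in range(5)]                    # toy concatenation scores
    print(dp_summarize(scores, link, 3))                    # -> [0, 2, 4]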

  8. A Statistical Approach to Automatic Speech Summarization

    Directory of Open Access Journals (Sweden)

    Chiori Hori

    2003-02-01

    This paper proposes a statistical approach to automatic speech summarization. In our method, a set of words maximizing a summarization score indicating the appropriateness of summarization is extracted from automatically transcribed speech and then concatenated to create a summary. The extraction process is performed using a dynamic programming (DP) technique based on a target compression ratio. In this paper, we demonstrate how an English news broadcast transcribed by a speech recognizer is automatically summarized. We adapted our method, which was originally proposed for Japanese, to English by modifying the model for estimating word concatenation probabilities based on a dependency structure in the original speech given by a stochastic dependency context free grammar (SDCFG). We also propose a method of summarizing multiple utterances using a two-level DP technique. The automatically summarized sentences are evaluated by summarization accuracy based on a comparison with a manual summary of speech that has been correctly transcribed by human subjects. Our experimental results indicate that the method we propose can effectively extract relatively important information and remove redundant and irrelevant information from English news broadcasts.

  9. Quality and characteristics of ginseng seed oil treated using different extraction methods

    OpenAIRE

    Lee, Myung-Hee; Kim, Sung-Soo; Cho, Chang-Won; Choi, Sang-Yoon; In, Gyo; Kim, Kyung-Tack

    2013-01-01

    Ginseng seed oil was prepared using compressed, solvent, and supercritical fluid extraction methods of ginseng seeds, and the extraction yield, color, phenolic compounds, fatty acid contents, and phytosterol contents of the ginseng seed oil were analyzed. Yields were different depending on the roasting pretreatment and extraction method. Among the extraction methods, the yield of ginseng seed oil from supercritical fluid extraction under the conditions of 500 bar and 65℃ was the highest, at 1...

  10. Coconut oil extraction by the traditional Java method : An investigation of its potential application in aqueous Jatropha oil extraction

    NARCIS (Netherlands)

    Marasabessy, Ahmad; Moeis, Maelita R.; Sanders, Johan P. M.; Weusthuis, Ruud A.

    2010-01-01

    A traditional Java method of coconut oil extraction assisted by paddy crabs was investigated to find out if crabs or crab-derived components can be used to extract oil from Jatropha curcas seed kernels. Using the traditional Java method the addition of crab paste liberated 54% w w(-1) oil from grate

  11. Coconut oil extraction by the Java method: An investigation of its potential application in aqueous Jatropha oil extraction

    NARCIS (Netherlands)

    Marasabessy, A.; Moeis, M.R.; Sanders, J.P.M.; Weusthuis, R.A.

    2010-01-01

    A traditional Java method of coconut oil extraction assisted by paddy crabs was investigated to find out if crabs or crab-derived components can be used to extract oil from Jatropha curcas seed kernels. Using the traditional Java method the addition of crab paste liberated 54% w w-1 oil from grated

  12. Method Specific Calibration Corrects for DNA Extraction Method Effects on Relative Telomere Length Measurements by Quantitative PCR

    Science.gov (United States)

    Holland, Rebecca; Underwood, Sarah; Fairlie, Jennifer; Psifidi, Androniki; Ilska, Joanna J.; Bagnall, Ainsley; Whitelaw, Bruce; Coffey, Mike; Banos, Georgios; Nussey, Daniel H.

    2016-01-01

    Telomere length (TL) is increasingly being used as a biomarker in epidemiological, biomedical and ecological studies. A wide range of DNA extraction techniques have been used in telomere experiments and recent quantitative PCR (qPCR) based studies suggest that the choice of DNA extraction method may influence average relative TL (RTL) measurements. Such extraction method effects may limit the use of historically collected DNA samples extracted with different methods. However, if extraction method effects are systematic an extraction method specific (MS) calibrator might be able to correct for them, because systematic effects would influence the calibrator sample in the same way as all other samples. In the present study we tested whether leukocyte RTL in blood samples from Holstein Friesian cattle and Soay sheep measured by qPCR was influenced by DNA extraction method and whether MS calibration could account for any observed differences. We compared two silica membrane-based DNA extraction kits and a salting out method. All extraction methods were optimized to yield enough high quality DNA for TL measurement. In both species we found that silica membrane-based DNA extraction methods produced shorter RTL measurements than the non-membrane-based method when calibrated against an identical calibrator. However, these differences were not statistically detectable when a MS calibrator was used to calculate RTL. This approach produced RTL measurements that were highly correlated across extraction methods (r > 0.76) and had coefficients of variation lower than 10% across plates of identical samples extracted by different methods. Our results are consistent with previous findings that popular membrane-based DNA extraction methods may lead to shorter RTL measurements than non-membrane-based methods. However, we also demonstrate that these differences can be accounted for by using an extraction method-specific calibrator, offering researchers a simple means of accounting for
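
    The method-specific calibration amounts to expressing each sample's telomere and single-copy-gene signals relative to a calibrator extracted with the same method, so that systematic method effects cancel in the ratio. A sketch of the ΔCt arithmetic; the Ct values are hypothetical and amplification efficiencies are idealised to 2.0:

    def rtl(ct_telo, ct_single, cal_ct_telo, cal_ct_single, e_telo=2.0, e_single=2.0):
        """Relative telomere length against a method-specific (MS) calibrator."""
        telo = e_telo ** (cal_ct_telo - ct_telo)           # telomere signal vs calibrator
        single = e_single ** (cal_ct_single - ct_single)   # single-copy gene vs calibrator
        return telo / single

    # Same hypothetical sample, two extraction methods, each with its own calibrator:
    print(rtl(12.1, 18.0, 12.6, 18.2))   # membrane kit        -> ~1.23
    print(rtl(11.8, 17.9, 12.3, 18.1))   # salting-out method  -> ~1.23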

  13. Research on a Chinese Text Automatic Classification Method

    Institute of Scientific and Technical Information of China (English)

    尹桂秀

    2002-01-01

    This article introduces a Chinese text automatic classification method, including its principle and classification process. The article focuses on some key theoretical problems, such as word classification, keyword collection and keyword matching.

  14. Alternative and efficient extraction methods for marine-derived compounds.

    Science.gov (United States)

    Grosso, Clara; Valentão, Patrícia; Ferreres, Federico; Andrade, Paula B

    2015-05-01

    Marine ecosystems cover more than 70% of the globe's surface. These habitats are occupied by a great diversity of marine organisms that produce highly structurally diverse metabolites as a defense mechanism. In recent decades, these metabolites have been extracted and isolated in order to test them in different bioassays and assess their potential to fight human diseases. Since traditional extraction techniques are both solvent- and time-consuming, this review emphasizes alternative extraction techniques, such as supercritical fluid extraction, pressurized solvent extraction, microwave-assisted extraction, ultrasound-assisted extraction, pulsed electric field-assisted extraction, enzyme-assisted extraction, and extraction with switchable solvents and ionic liquids, applied in the search for marine compounds. Only studies published in the 21st century are considered.

  15. Spectra in the chaotic region: Methods for extracting dynamic information

    Energy Technology Data Exchange (ETDEWEB)

    Gomez Llorente, J.M.; Zakrzewski, J.; Taylor, H.S.; Kulander, K.C.

    1989-02-01

    Nonlinear dynamics is applied to chaotic unassignable atomic and molecular spectra with the aim of extracting detailed information about regular dynamic motions that exist over short intervals of time. It is shown how this motion can be extracted from high resolution spectra by doing low resolution studies or by Fourier transforming limited regions of the spectrum. These motions mimic those of periodic orbits (PO) and are inserts into the dominant chaotic motion. Considering these inserts and the PO as a dynamically decoupled region of space, resonant scattering theory and stabilization methods enable us to compute ladders of resonant states which interact with the chaotic quasicontinuum computed in principle from basis sets placed off the PO. The interaction of the resonances with the quasicontinuum explains the low resolution spectra seen in such experiments. It also allows one to associate low resolution features with a particular PO. The motion on the PO thereby supplies the molecular movements whose quantization causes the low resolution spectra. Characteristic properties of the periodic orbit based resonances are discussed. The method is illustrated on the photoabsorption spectrum of the hydrogen atom in a strong magnetic field and on the photodissociation spectrum of H3+. Other molecular systems which are currently under investigation using this formalism are also mentioned.
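
    The windowed-Fourier-transform step lends itself to a small numerical illustration: a regular ladder of lines with spacing dE, buried in an otherwise irregular spectrum, produces a sharp recurrence peak at frequency 1/dE when only the region containing it is transformed. A sketch on synthetic data (all units arbitrary, numbers invented):

      import numpy as np

      # Synthetic spectrum: irregular background lines plus a regular
      # ladder (spacing 0.5) between energies 40 and 60.
      energies = np.linspace(0.0, 100.0, 4096)
      rng = np.random.default_rng(0)
      lines = np.concatenate([rng.uniform(0, 100, 300),
                              np.arange(40, 60, 0.5)])
      intensity = np.exp(-((energies[:, None] - lines) ** 2) / 0.005).sum(axis=1)

      # Fourier transform only the windowed region: the ladder shows up
      # as a peak near frequency 1/spacing = 2.0 (cycles per energy unit).
      mask = (energies >= 40.0) & (energies <= 60.0)
      windowed = (intensity[mask] - intensity[mask].mean()) * np.hanning(mask.sum())
      power = np.abs(np.fft.rfft(windowed))
      freqs = np.fft.rfftfreq(mask.sum(), d=energies[1] - energies[0])
      print("dominant recurrence frequency:", freqs[power[1:].argmax() + 1])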

  16. A Method for Automatic Thread Demoulding Using Step Motor and Servo Motor with Synchronization between the Two Systems in Injection Mould

    OpenAIRE

    Guo-Wei Chang; Jun-Min Yang

    2013-01-01

    This study offers a method for automatic thread demoulding in injection moulds using servo control combined with mechanical structures. In injection moulding, a common approach to thread demoulding is to use a hydraulic motor to drive gears and threaded cores so that injected plastic parts with threads are taken off the cores automatically. The thread demoulding is completed by two movements. One of them is realized by the rotation of the threaded cores driven by hydraulic mot...
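
    Although the abstract is truncated, the synchronization constraint behind any thread-demoulding scheme is simple to state: the relative axial travel between part and rotating threaded core must be exactly one thread pitch per revolution, or the moulded thread is stripped. A minimal sketch of that kinematic relation, with invented numbers:

      def required_axial_speed(core_rpm, pitch_mm):
          """Axial retreat speed (mm/s) that keeps the part and the
          rotating threaded core synchronized: one pitch per revolution."""
          return (core_rpm / 60.0) * pitch_mm

      pitch_mm = 1.5   # hypothetical thread pitch
      core_rpm = 120   # hypothetical core speed
      print(required_axial_speed(core_rpm, pitch_mm), "mm/s")  # 3.0 mm/s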

  17. Discriminative tonal feature extraction method in Mandarin speech recognition

    Institute of Scientific and Technical Information of China (English)

    HUANG Hao; ZHU Jie

    2007-01-01

    To utilize the supra-segmental nature of Mandarin tones, this article proposes a feature extraction method for hidden Markov model (HMM)-based tone modeling. The method uses linear transforms to project the F0 (fundamental frequency) features of neighboring syllables as compensations and adds them to the original F0 features of the current syllable. The transforms are discriminatively trained using an objective function termed "minimum tone error", a smooth approximation of tone recognition accuracy. Experiments show that the new tonal features achieve a 3.82% improvement in tone recognition rate over the baseline, a maximum-likelihood-trained HMM on the normal F0 features. Further experiments show that discriminative HMM training on the new features is 8.78% better than the baseline.
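
    The feature construction can be sketched directly from this description: the current syllable's F0 vector plus linear projections of its neighbors' F0 vectors. In the paper the projection matrices are trained under the minimum-tone-error criterion; in the sketch below they are random placeholders, and the feature dimension is an assumption.

      import numpy as np

      DIM = 5                                 # assumed F0 feature dimension
      rng = np.random.default_rng(0)
      W_prev = rng.normal(scale=0.1, size=(DIM, DIM))  # placeholder transforms;
      W_next = rng.normal(scale=0.1, size=(DIM, DIM))  # trained discriminatively
                                                       # in the actual method

      def compensated_f0(f0_prev, f0_cur, f0_next):
          """Current syllable's F0 features plus projected neighbor features."""
          return f0_cur + W_prev @ f0_prev + W_next @ f0_next

      prev, cur, nxt = (rng.normal(size=DIM) for _ in range(3))
      print(compensated_f0(prev, cur, nxt))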

  18. Progressive extraction method applied to isotopic exchange of carbon-14

    International Nuclear Information System (INIS)

    Isotopic exchange in natural settings is essentially an irreversible process, so that it progresses continuously until there is complete isotopic equilibrium. In soils, this process involves interaction between isotopes in the liquid and solid phases, and complete isotopic equilibrium may take a very long time. Measurements after partial isotopic exchange have been used to characterize the labile fraction of elements in soils. We describe a method to characterize the extent of isotopic exchange, with application here to the incorporation of inorganic carbon-14 (14C) into mineral carbonates and organic matter in soils. The procedure uses a continuous addition of extractant (acid or H2O2 in the examples presented here), coupled with sequential sampling. The method has been applied to demonstrate the degree of isotopic exchange in soil. The same strategy could be applied to many other elements, including plant nutrients. (author)
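
    Numerically, the sequential sampling reduces to a cumulative-recovery calculation: each successive extract is assayed for 14C, and recovery relative to the label added tracks how far exchange has progressed, with a plateau short of 100% indicating label incorporated into the exchanged pools. A sketch with invented activities:

      # Cumulative 14C recovery across sequential extracts;
      # all activities below are invented illustrative values.
      fraction_activities_bq = [120.0, 85.0, 40.0, 18.0, 7.0]
      total_added_bq = 400.0

      recovered = 0.0
      for step, activity in enumerate(fraction_activities_bq, start=1):
          recovered += activity
          print(f"extract {step}: cumulative recovery "
                f"{recovered / total_added_bq:.1%}")
      # the shortfall from 100% is label retained in the
      # carbonate/organic pools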

  19. Extraction, chromatographic and mass spectrometric methods for lipid analysis.

    Science.gov (United States)

    Pati, Sumitra; Nie, Ben; Arnold, Robert D; Cummings, Brian S

    2016-05-01

    Lipids make up a diverse subset of biomolecules that are responsible for mediating a variety of structural and functional properties as well as modulating cellular functions such as trafficking, regulation of membrane proteins and subcellular compartmentalization. In particular, phospholipids are the main constituents of biological membranes and play major roles in cellular processes like transmembrane signaling and structural dynamics. The chemical and structural variety of lipids makes analysis using a single experimental approach quite challenging. Research in the field relies on the use of multiple techniques to detect and quantify components of cellular lipidomes as well as determine structural features and cellular organization. Understanding these features can allow researchers to elucidate the biochemical mechanisms by which lipid-lipid and/or lipid-protein interactions take place within the conditions of study. Herein, we provide an overview of essential methods for the examination of lipids, including extraction methods, chromatographic techniques and approaches for mass spectrometric analysis.

  20. Various Extraction Methods for Obtaining Stilbenes from Grape Cane of Vitis vinifera L.

    OpenAIRE

    Ivo Soural; Naděžda Vrchotová; Jan Tříska; Josef Balík; Štěpán Horník; Petra Cuřínová; Jan Sýkora

    2015-01-01

    Grape cane, leaves and grape marc are waste products from viticulture, which can be used to obtain secondary stilbene derivatives with high antioxidant value. The presented work compares several extraction methods: maceration at laboratory temperature, extraction at elevated temperature, fluidized-bed extraction, Soxhlet extraction, microwave-assisted extraction, and accelerated solvent extraction. To obtain trans-resveratrol, trans-ε-viniferin and r2-viniferin from grape cane of the V. ...