WorldWideScience

Sample records for automatable method extract

  1. An automatic rat brain extraction method based on a deformable surface model.

    Science.gov (United States)

    Li, Jiehua; Liu, Xiaofeng; Zhuo, Jiachen; Gullapalli, Rao P; Zara, Jason M

    2013-08-15

    The extraction of the brain from the skull in medical images is a necessary first step before image registration or segmentation. While pre-clinical MR imaging studies on small animals, such as rats, are increasing, fully automatic image processing techniques specific to small animal studies remain lacking. In this paper, we present an automatic rat brain extraction method, the Rat Brain Deformable model method (RBD), which adapts the popular human Brain Extraction Tool (BET) by incorporating information on the brain geometry and MR image characteristics of the rat brain. The robustness of the method was demonstrated on T2-weighted MR images of 64 rats and compared with other brain extraction methods (BET, PCNN, PCNN-3D). The results demonstrate that RBD reliably extracts the rat brain with high accuracy (>92% volume overlap) and is robust against signal inhomogeneity in the images. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Development of automatic extraction method of left ventricular contours on long axis view MR cine images

    International Nuclear Information System (INIS)

    Utsunomiya, Shinichi; Iijima, Naoto; Yamasaki, Kazunari; Fujita, Akinori

    1995-01-01

    In MRI cardiac function analysis, left ventricular volume curves and diagnostic parameters are obtained by extracting the left ventricular cavities as regions of interest (ROI) from long axis view MR cine images. The ROI extraction previously had to be done manually, because automating the extraction is difficult. A long axis view left ventricular contour consists of a cardiac wall part and an aortic valve part. The difficulty noted above is due to the decline of contrast on the cardiac wall part and the disappearance of edges on the aortic valve part. In this paper, we report a new automatic extraction method for long axis view MR cine images, which needs only 3 manually indicated points on the first image to extract all the contours from the entire sequence of images. First, candidate points of a contour are detected by edge detection. Then, by selecting the best matched combination of candidate points with Dynamic Programming, the cardiac wall part is automatically extracted. The aortic valve part is manually extracted for the first image by indicating both of its end points, and is automatically extracted for the rest of the images by utilizing the aortic valve motion characteristics throughout a cardiac cycle. (author)

  3. 2D automatic body-fitted structured mesh generation using advancing extraction method

    Science.gov (United States)

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries, which have the hierarchical tree-like...

  4. Study of Burn Scar Extraction Automatically Based on Level Set Method using Remote Sensing Data

    Science.gov (United States)

    Liu, Yang; Dai, Qin; Liu, JianBo; Liu, ShiBin; Yang, Jin

    2014-01-01

    Burn scar extraction using remote sensing data is an efficient way to precisely evaluate burn area and measure vegetation recovery. Traditional burn scar extraction methodologies perform poorly on burn scar images with blurred and irregular edges. To address these issues, this paper proposes an automatic method to extract burn scars based on the Level Set Method (LSM). This method utilizes the advantages of different features in remote sensing images and considers the practical need to extract the burn scar rapidly and automatically. The approach integrates Change Vector Analysis (CVA), the Normalized Difference Vegetation Index (NDVI) and the Normalized Burn Ratio (NBR) to obtain a difference image, and modifies the conventional Chan-Vese (C-V) level set model with a new initial curve derived from a binary image obtained by applying the K-means method to the fitting errors of two near-infrared band images. Landsat 5 TM and Landsat 8 OLI data sets are used to validate the proposed method. Comparisons with the conventional C-V model, the Otsu algorithm, and the Fuzzy C-means (FCM) algorithm show that the proposed approach can extract the outline curve of the fire burn scar effectively and accurately. The method has higher extraction accuracy and lower algorithmic complexity than the conventional C-V model. PMID:24503563
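
    As a rough illustration of the index computation and initialization steps described above, the sketch below computes NDVI/NBR difference images and derives a binary mask with K-means that could seed a level-set initial curve. The band-dictionary layout, the use of NDVI/NBR differences as clustering features, and the cluster-selection heuristic are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch: difference indices plus a K-means-derived binary mask that
# could seed a level-set initial curve. Inputs are float band arrays; the
# cluster-selection heuristic is an assumption for illustration.
import numpy as np
from sklearn.cluster import KMeans

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-9)

def nbr(nir, swir):
    return (nir - swir) / (nir + swir + 1e-9)

def initial_burn_mask(pre, post):
    """pre/post: dicts of 2D float arrays keyed by 'red', 'nir', 'swir'."""
    d_ndvi = ndvi(pre["nir"], pre["red"]) - ndvi(post["nir"], post["red"])
    d_nbr = nbr(pre["nir"], pre["swir"]) - nbr(post["nir"], post["swir"])
    features = np.stack([d_ndvi.ravel(), d_nbr.ravel()], axis=1)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(features)
    mask = labels.reshape(d_ndvi.shape)
    # Assume the cluster with the larger mean NBR drop is the burn scar.
    if d_nbr[mask == 0].mean() > d_nbr[mask == 1].mean():
        mask = 1 - mask
    return mask.astype(np.uint8)
```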

  5. 2D automatic body-fitted structured mesh generation using advancing extraction method

    Science.gov (United States)

    Zhang, Yaoxin; Jia, Yafei

    2018-01-01

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries that have a hierarchical tree-like topography with extrusion-like structures (i.e., branches or tributaries) and intrusion-like structures (i.e., peninsulas or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the block boundary of each sub-domain, in the shape of a convex polygon, can be extracted at each level in an advancing scheme. In this paper, several examples are used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation, as well as the implementation of the method.

  6. A dual growing method for the automatic extraction of individual trees from mobile laser scanning data

    Science.gov (United States)

    Li, Lin; Li, Dalin; Zhu, Haihong; Li, You

    2016-10-01

    Street trees interlaced with other objects in cluttered point clouds of urban scenes inhibit the automatic extraction of individual trees. This paper proposes a method for the automatic extraction of individual trees from mobile laser scanning data, according to the general constitution of trees. The two components of each individual tree - a trunk and a crown - can be extracted by the dual growing method. This method consists of coarse classification, through which most artifacts are removed; the automatic selection of appropriate seeds for individual trees, by which the common manual initial setting is avoided; a dual growing process that separates one tree from others by circumscribing a trunk within an adaptive growing radius and segmenting a crown in constrained growing regions; and a refining process that separates a single trunk from other interlaced objects. The method is verified on two datasets with over 98% completeness and over 96% correctness. The low mean absolute percentage errors in capturing the morphological parameters of individual trees indicate that this method can output individual trees with high precision.

  7. An automatic glioma grading method based on multi-feature extraction and fusion.

    Science.gov (United States)

    Zhan, Tianming; Feng, Piaopiao; Hong, Xunning; Lu, Zhenyu; Xiao, Liang; Zhang, Yudong

    2017-07-20

    An accurate assessment of tumor malignancy grade in the preoperative situation is important for clinical management. However, the manual grading of gliomas from MRIs is a tiresome and time-consuming task for radiologists. Thus, it is a priority to design an automatic and effective computer-aided diagnosis (CAD) tool to assist radiologists in grading gliomas. Our aim is to design an automatic computer-aided diagnosis for grading gliomas using multi-sequence magnetic resonance imaging. The proposed method consists of two steps: (1) the features of high and low grade gliomas are extracted from multi-sequence magnetic resonance images, and (2) a KNN classifier is trained to grade the gliomas. In the feature extraction step, the intensity, volume, and local binary patterns (LBP) of the gliomas are extracted, and PCA is used to reduce the data dimension. The proposed "Intensity-Volume-LBP-PCA-KNN" method is validated on the MICCAI 2015 BraTS challenge dataset, and an average grading accuracy of 87.59% is obtained. The proposed method is effective for automatically grading gliomas and can be applied to real situations.
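
    A minimal sketch of an Intensity-Volume-LBP-PCA-KNN style pipeline, assuming pre-segmented tumor masks, is given below; the chosen statistics, LBP settings, PCA components and neighbor count are illustrative choices rather than the authors' exact configuration.

```python
# Hedged sketch of an intensity/volume/LBP feature vector fed through
# PCA and a KNN classifier. All parameter choices are illustrative.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def tumor_features(image, mask, n_bins=10):
    """image: 2D MR slice, mask: binary tumor mask of the same shape."""
    voxels = image[mask > 0]
    intensity = [voxels.mean(), voxels.std()]
    volume = [float(mask.sum())]                 # slice-wise area as a volume proxy
    lbp = local_binary_pattern(image, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp[mask > 0], bins=n_bins, range=(0, n_bins))
    texture = (hist / max(hist.sum(), 1)).tolist()
    return np.array(intensity + volume + texture)

# X: one feature vector per case, y: 0 = low grade, 1 = high grade
grading_clf = make_pipeline(StandardScaler(),
                            PCA(n_components=5),
                            KNeighborsClassifier(n_neighbors=5))
# grading_clf.fit(X_train, y_train); grading_clf.predict(X_test)
```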

  8. Developing an Intelligent Automatic Appendix Extraction Method from Ultrasonography Based on Fuzzy ART and Image Processing

    Directory of Open Access Journals (Sweden)

    Kwang Baek Kim

    2015-01-01

    Full Text Available Ultrasound examination (US) plays a key role in the diagnosis and management of patients with clinically suspected appendicitis, which is the most common abdominal surgical emergency. Among the various sonographic findings of appendicitis, the outer diameter of the appendix is the most important. Therefore, clear delineation of the appendix on US images is essential. In this paper, we propose a new intelligent method to extract the appendix automatically from abdominal sonographic images as a basic building block for developing such an intelligent tool for medical practitioners. Knowing that the appendix is located in the lower organ area below the bottom fascia line, we conduct a series of image processing techniques to find the fascia line correctly. We then apply the fuzzy ART learning algorithm to the organ area in order to extract the appendix accurately. The experiment verifies that the proposed method is highly accurate (successful in 38 out of 40 cases) in extracting the appendix.

  9. AN EFFICIENT METHOD FOR AUTOMATIC ROAD EXTRACTION BASED ON MULTIPLE FEATURES FROM LiDAR DATA

    Directory of Open Access Journals (Sweden)

    Y. Li

    2016-06-01

    Full Text Available Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also has disadvantages for object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering to separate road points from ground points, (2) local principal component analysis with least squares fitting to extract the primitives of road centerlines, and (3) hierarchical grouping to connect primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and the Hough transform), proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark dataset provided by ISPRS for the “Urban Classification and 3D Building Reconstruction” project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction using LiDAR data.
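
    As a sketch of the local principal component analysis step only (step 2 above), the snippet below estimates a dominant direction for each detected road center point and keeps strongly linear neighborhoods as centerline primitives. The neighborhood size and linearity threshold are assumptions for illustration.

```python
# Hedged sketch of local PCA over candidate road center points.
import numpy as np
from scipy.spatial import cKDTree

def centerline_primitives(points, k=15, linearity=0.9):
    """points: (N, 2) array of candidate road center points (x, y)."""
    tree = cKDTree(points)
    primitives = []
    for p in points:
        _, idx = tree.query(p, k=k)
        nbrs = points[idx] - points[idx].mean(axis=0)
        cov = nbrs.T @ nbrs / k
        evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
        if evals.sum() > 0 and evals[1] / evals.sum() >= linearity:
            primitives.append((p, evecs[:, 1]))   # point and its dominant direction
    return primitives
```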

  10. The Fractal Patterns of Words in a Text: A Method for Automatic Keyword Extraction.

    Science.gov (United States)

    Najafi, Elham; Darooneh, Amir H

    2015-01-01

    A text can be considered as a one-dimensional array of words. The locations of each word type in this array form a fractal pattern with a certain fractal dimension. We observe that important words responsible for conveying the meaning of a text have dimensions considerably different from one, while the fractal dimensions of unimportant words are close to one. We introduce an index quantifying the importance of the words in a given text using their fractal dimensions and then rank them according to their importance. This index measures the difference between the fractal pattern of a word in the original text and in a shuffled version. Because the shuffled text is meaningless (i.e., words have no importance), the difference between the original and shuffled text can be used to ascertain the degree of fractality. The degree of fractality may be used for automatic keyword detection. Words with a degree of fractality higher than a threshold value are taken as the retrieved keywords of the text. We measure the efficiency of our method for keyword extraction by comparing our proposed method with two other well-known methods of automatic keyword extraction.
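
    The sketch below illustrates the idea in simplified form: estimate a box-counting dimension for each word's positions and compare it against a shuffled copy of the text. The dimension estimator, scales and frequency cutoff are illustrative assumptions, not the authors' exact formulation.

```python
# Hedged sketch: degree of fractality of words via a simple box-counting
# dimension on word positions, compared between the original and a shuffle.
import random
import re
from collections import defaultdict
import numpy as np

def word_positions(tokens):
    pos = defaultdict(list)
    for i, w in enumerate(tokens):
        pos[w].append(i)
    return pos

def box_counting_dim(positions, length, scales=(2, 4, 8, 16, 32)):
    counts = []
    for s in scales:
        box = length / s
        occupied = {int(p // box) for p in positions}
        counts.append(len(occupied))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)  # log-log slope
    return slope

def degree_of_fractality(text, min_count=5):
    tokens = re.findall(r"[a-z]+", text.lower())
    shuffled = tokens[:]
    random.shuffle(shuffled)
    orig, shuf = word_positions(tokens), word_positions(shuffled)
    scores = {}
    for w, p in orig.items():
        if len(p) >= min_count:
            d_orig = box_counting_dim(p, len(tokens))
            d_shuf = box_counting_dim(shuf[w], len(tokens))
            scores[w] = abs(d_orig - d_shuf)      # larger difference = more "keyword-like"
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```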

  11. Cluster based statistical feature extraction method for automatic bleeding detection in wireless capsule endoscopy video.

    Science.gov (United States)

    Ghosh, Tonmoy; Fattah, Shaikh Anowarul; Wahid, Khan A; Zhu, Wei-Ping; Ahmad, M Omair

    2018-03-01

    Wireless capsule endoscopy (WCE) is capable of imaging the entire gastrointestinal tract, at the expense of an exhaustive reviewing process for detecting bleeding disorders. The main objective is to develop an automatic method for identifying the bleeding frames and zones from WCE video. Different statistical features are extracted from the overlapping spatial blocks of the preprocessed WCE image in a transformed color plane containing the green-to-red pixel ratio. The unique idea of the proposed method is to first perform unsupervised clustering of the different blocks to obtain two clusters and then extract cluster-based features (CBFs). Finally, a global feature consisting of the CBFs and a differential CBF is used to detect bleeding frames via supervised classification. In order to handle continuous WCE video, a post-processing scheme is introduced utilizing the feature trends in neighboring frames. The CBF along with some morphological operations is employed to identify bleeding zones. Based on extensive experimentation on several WCE videos, it is found that the proposed method offers significantly better performance than some existing methods in terms of bleeding detection accuracy, sensitivity, specificity and precision in bleeding zone detection. The bleeding detection performance obtained by using the proposed CBF-based global feature is better than that of the feature extracted from the non-clustered image. The proposed method can reduce the burden on physicians in investigating WCE video to detect bleeding frames and zones with a high level of accuracy. Copyright © 2018 Elsevier Ltd. All rights reserved.
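
    A toy sketch of the block-clustering idea, assuming an RGB frame array: compute the green-to-red ratio plane, gather simple block statistics, cluster the blocks into two groups and form a cluster-based feature vector. Block size, stride and the chosen statistics are illustrative, not the paper's exact features.

```python
# Hedged sketch of cluster-based features (CBFs) from one WCE frame.
import numpy as np
from sklearn.cluster import KMeans

def cluster_based_features(frame_rgb, block=32, stride=16):
    """frame_rgb: (H, W, 3) uint8 array assumed to be in RGB order."""
    r = frame_rgb[..., 0].astype(float) + 1e-6
    g = frame_rgb[..., 1].astype(float)
    ratio = g / r                                   # green-to-red pixel ratio plane
    stats = []
    h, w = ratio.shape
    for y in range(0, h - block + 1, stride):       # overlapping spatial blocks
        for x in range(0, w - block + 1, stride):
            blk = ratio[y:y + block, x:x + block]
            stats.append([blk.mean(), blk.std(), blk.min(), blk.max()])
    stats = np.array(stats)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(stats)
    c0 = stats[labels == 0].mean(axis=0)
    c1 = stats[labels == 1].mean(axis=0)
    # Global feature: per-cluster means plus their difference (a differential CBF).
    return np.concatenate([c0, c1, c0 - c1])
```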

  12. A method for automatically extracting infectious disease-related primers and probes from the literature

    Directory of Open Access Journals (Sweden)

    Pérez-Rey David

    2010-08-01

    Full Text Available Abstract Background Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and prescription of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences. Therefore, it is becoming increasingly important for researchers to be able to navigate this information. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections, (2) detect the candidate sequences using a set of finite state machine-based recognizers, (3) refine problem sequences using a rule-based expert system, and (4) annotate the extracted sequences with their related organism/gene information. Results We tested our approach using a test set composed of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The results of the evaluation show that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose and prescribe treatments for different infectious diseases. In addition, the proposed method can be expanded to detect and extract other biological sequences from the literature. The extracted information can also be used to readily update available primer/probe databases or to create new databases from scratch.
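
    As a toy stand-in for the sequence-recognition phase (step 2 above), the pattern below flags runs of nucleotide letters long enough to be candidate primer/probe sequences; the real recognizers described in the record are finite state machines followed by a rule-based refinement step. The regex and the length cutoff are illustrative assumptions.

```python
# Hedged sketch: naive candidate primer/probe sequence detection with a regex.
import re

# Runs of nucleotide letters, optionally broken by spaces or hyphens.
CANDIDATE = re.compile(r"\b[ACGTUacgtu][ACGTUacgtu\s\-]{15,}[ACGTUacgtu]\b")

def candidate_sequences(text, min_len=18):
    hits = []
    for m in CANDIDATE.finditer(text):
        seq = re.sub(r"[\s\-]", "", m.group()).upper()
        if len(seq) >= min_len:
            hits.append(seq)
    return hits

print(candidate_sequences("The forward primer 5'-AGT CTA GGA TCC AAA GTC AT-3' was used."))
# expected: ['AGTCTAGGATCCAAAGTCAT']
```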

  13. A method for automatic feature points extraction of human vertebrae three-dimensional model

    Science.gov (United States)

    Wu, Zhen; Wu, Junsheng

    2017-05-01

    A method for the automatic extraction of the feature points of a human vertebrae three-dimensional model is presented. First, a statistical model of vertebrae feature points is established based on the results of manual vertebrae feature point extraction. Then, anatomical axial analysis of the vertebrae model is performed according to the physiological and morphological characteristics of the vertebrae. Using the axial information obtained from the analysis, a projection relationship between the statistical model and the vertebrae model from which feature points are to be extracted is established. According to the projection relationship, the statistical model is matched with the vertebrae model to obtain the estimated positions of the feature points. Finally, by analyzing the curvature in a spherical neighborhood around the estimated position of each feature point, the final position of the feature point is obtained. According to the benchmark results on multiple test models, the mean relative errors of the feature point positions are less than 5.98%. At more than half of the positions, the error rate is less than 3%, and the minimum mean relative error is 0.19%, which verifies the effectiveness of the method.

  14. DEVELOPMENT OF AUTOMATIC EXTRACTION METHOD FOR ROAD UPDATE INFORMATION BASED ON PUBLIC WORK ORDER OUTLOOK

    Science.gov (United States)

    Sekimoto, Yoshihide; Nakajo, Satoru; Minami, Yoshitaka; Yamaguchi, Syohei; Yamada, Harutoshi; Fuse, Takashi

    Recently, the disclosure of statistical data representing the financial effects or burden of public works, through the web sites of national and local governments, has made it possible to discuss macroscopic financial trends. However, it is still difficult to grasp, nationwide, how each location has been changed by public works. The purpose of this research is to reasonably collect the road update information provided by various road managers, in order to realize efficient updating of various maps such as car navigation maps. In particular, we develop a system that automatically extracts the relevant public works from the public work order outlooks released by each local government and registers summaries, including position information, to a database, combining several web mining technologies. Finally, we collect and register several tens of thousands of records from web sites all over Japan, and confirm the feasibility of our method.

  15. A semi-automatic method for extracting thin line structures in images as rooted tree network

    Energy Technology Data Exchange (ETDEWEB)

    Brazzini, Jacopo [Los Alamos National Laboratory; Dillard, Scott [Los Alamos National Laboratory; Soille, Pierre [EC - JRC

    2010-01-01

    This paper addresses the problem of semi-automatic extraction of line networks in digital images - e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method derived from morphological and hydrological concepts, consisting of minimum cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. The geodesic propagation from a given seed with this metric is then combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.

  16. A Method for Automatic Extracting Intracranial Region in MR Brain Image

    Science.gov (United States)

    Kurokawa, Keiji; Miura, Shin; Nishida, Makoto; Kageyama, Yoichi; Namura, Ikuro

    It is well known that the temporal lobe in MR brain images is used for estimating the grade of Alzheimer-type dementia. However, it is difficult to use only the temporal lobe region for this estimation. From the standpoint of supporting medical specialists, this paper proposes a data processing approach for the automatic extraction of the intracranial region from the MR brain image. The method eliminates the cranium region with the Laplacian histogram method and the brainstem using feature points related to the observations given by a medical specialist. In order to examine the usefulness of the proposed approach, the percentage of the temporal lobe in the intracranial region was calculated. As a result, the percentage of the temporal lobe in the intracranial region across dementia grades was in agreement with the visual standards of temporal lobe atrophy given by the medical specialist. It became clear that the intracranial region extracted by the proposed method is suitable for estimating the grade of Alzheimer-type dementia.

  17. Feature extraction and descriptor calculation methods for automatic georeferencing of Philippines' first microsatellite imagery

    Science.gov (United States)

    Tupas, M. E. A.; Dasallas, J. A.; Jiao, B. J. D.; Magallon, B. J. P.; Sempio, J. N. H.; Ramos, M. K. F.; Aranas, R. K. D.; Tamondong, A. M.

    2017-10-01

    The FAST-SIFT corner detector and descriptor extractor combination was used to automatically georeference DIWATA-1 Spaceborne Multispectral Imager (SMI) images. The Features from Accelerated Segment Test (FAST) algorithm detects corners or keypoints in an image, and these robustly detected keypoints have well-defined positions. Descriptors were computed using the Scale-Invariant Feature Transform (SIFT) extractor. The FAST-SIFT method effectively matched SMI same-subscene images detected by the NIR sensor. The method was also tested in stitching NIR images with varying subscenes swept by the camera. The slave images were matched to the master image, and the keypoints served as the ground control points. Keypoints are matched based on their descriptor vectors; nearest-neighbor matching is employed using a metric distance between the descriptors, including Euclidean and city block distances, among others. Rough matching outputs not only the correct matches but also faulty matches. A previous work in automatic georeferencing incorporates a geometric restriction; in this work, we applied a simplified version of that learning method. Random sample consensus (RANSAC) was used to eliminate fall-out matches and ensure the accuracy of the feature points from which the transformation parameters were derived: it identifies whether a point fits the transformation function and returns the inlier matches. The transformation matrix was solved using Affine, Projective, and Polynomial models. The accuracy of the automatic georeferencing method was determined by calculating the RMSE of randomly selected interest points between the master image and the transformed slave image.
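
    A hedged OpenCV sketch of the FAST detector + SIFT descriptor + RANSAC workflow outlined above follows. The file names, FAST threshold, matcher settings and the choice of an affine model are assumptions for illustration, not the authors' actual pipeline.

```python
# Hedged sketch: FAST corners, SIFT descriptors, brute-force matching,
# RANSAC rejection of faulty matches, and warping of the slave image.
import cv2
import numpy as np

master = cv2.imread("master_band.tif", cv2.IMREAD_GRAYSCALE)   # hypothetical files
slave = cv2.imread("slave_band.tif", cv2.IMREAD_GRAYSCALE)

fast = cv2.FastFeatureDetector_create(threshold=25)
sift = cv2.SIFT_create()

kp_m = fast.detect(master, None)
kp_s = fast.detect(slave, None)
kp_m, des_m = sift.compute(master, kp_m)   # SIFT descriptors at FAST corners
kp_s, des_s = sift.compute(slave, kp_s)

matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_s, des_m)
src = np.float32([kp_s[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_m[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC keeps inlier matches and fits the transformation on them only.
M, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                  ransacReprojThreshold=3.0)
registered = cv2.warpAffine(slave, M, (master.shape[1], master.shape[0]))
```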

  18. Comparison of sample preparation methods for reliable plutonium and neptunium urinalysis using automatic extraction chromatography

    DEFF Research Database (Denmark)

    Qiao, Jixin; Xu, Yihong; Hou, Xiaolin

    2014-01-01

    This paper describes the improvement and comparison of analytical methods for the simultaneous determination of trace-level plutonium and neptunium in urine samples by inductively coupled plasma mass spectrometry (ICP-MS). Four sample pre-concentration techniques, including calcium phosphate, iron hydroxide and manganese dioxide co-precipitation and evaporation, were compared, and the applicability of the different techniques was discussed in order to evaluate and establish the optimal method for an in vivo radioassay program. The analytical results indicate that the various sample pre-… …-precipitation step; yet, the occurrence of sulfur compounds in the processed sample deteriorated the analytical performance of the ensuing extraction chromatographic separation, with chemical yields of…

  19. Automatic Contour Extraction from 2D Image

    Directory of Open Access Journals (Sweden)

    Panagiotis GIOANNIS

    2011-03-01

    Full Text Available Aim: To develop a method for automatic contour extraction from a 2D image. Material and Method: The method is divided into two basic parts, where the user initially chooses the starting point and the threshold. Finally, the method is applied to computed tomography images of bone. Results: An interesting method is developed which can lead to successful boundary extraction in 2D images. Specifically, data extracted from computed tomography images can be used for 2D bone reconstruction. Conclusions: We believe that such an algorithm, or part of it, can be applied in several other applications for shape feature extraction in medical image analysis and in computer graphics generally.

  20. Automatic detection of wheezes by evaluation of multiple acoustic feature extraction methods and C-weighted SVM

    Science.gov (United States)

    Sosa, Germán. D.; Cruz-Roa, Angel; González, Fabio A.

    2015-01-01

    This work addresses the problem of lung sound classification, in particular the problem of distinguishing between wheeze and normal sounds. Wheezing sound detection is an important step in associating lung sounds with an abnormal state of the respiratory system, usually associated with tuberculosis or other chronic obstructive pulmonary diseases (COPD). The paper presents an approach for automatic lung sound classification which uses different state-of-the-art sound features in combination with a C-weighted support vector machine (SVM) classifier that works better for unbalanced data. The feature extraction methods used here are commonly applied in speech recognition and related problems because they capture the most informative spectral content from the original signals. The evaluated methods were: the Fourier transform (FT), wavelet decomposition using a Wavelet Packet Transform filter bank (WPT) and Mel Frequency Cepstral Coefficients (MFCC). For comparison, we evaluated and contrasted the proposed approach against previous works using different combinations of features and/or classifiers. The different methods were evaluated on a set of lung sounds including normal and wheezing sounds. A leave-two-out per-case cross-validation approach was used, which, in each fold, chooses as the validation set a pair of cases, one including normal sounds and the other including wheezing sounds. Experimental results were reported in terms of traditional classification performance measures: sensitivity, specificity and balanced accuracy. Our best results using the suggested approach, a C-weighted SVM with MFCC, achieve 82.1% balanced accuracy, the best result reported for this problem so far. These results suggest that supervised classifiers based on kernel methods are able to learn better models for this challenging classification problem, even using the same feature extraction methods.
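
    A minimal sketch of MFCC features combined with a class-weighted SVM, in the spirit of the C-weighted classifier described above. The file handling, sampling rate, MFCC settings and the 'balanced' weighting are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch: MFCC summary features per recording + class-weighted SVM.
import librosa
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def mfcc_features(path, sr=8000, n_mfcc=13):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Summarise frame-wise coefficients by their mean and std over time.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# files: list of wav paths, labels: 1 = wheeze, 0 = normal (hypothetical data)
# X = np.array([mfcc_features(f) for f in files]); y = np.array(labels)

# 'balanced' re-weights the C penalty inversely to class frequency,
# which suits the unbalanced wheeze/normal data described above.
clf = SVC(kernel="rbf", C=10.0, class_weight="balanced")
# scores = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
```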

  1. Automatic Feature Extraction from Planetary Images

    Science.gov (United States)

    Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.

    2010-01-01

    With the launch of several planetary missions in the last decade, a large number of planetary images has already been acquired and many more will be available for analysis in the coming years. The image data need to be analyzed, preferably by automatic processing techniques, because of the huge amount of data. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data, which often present low contrast and uneven illumination characteristics. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not been addressed yet. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including a watershed segmentation and the generalized Hough Transform. The method has many applications, among which is image registration, and it can be applied to arbitrary planetary images.

  2. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.

    Science.gov (United States)

    Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.

  3. Automatic target extraction in complicated background for camera calibration

    Science.gov (United States)

    Guo, Xichao; Wang, Cheng; Wen, Chenglu; Cheng, Ming

    2016-03-01

    In order to perform highly precise camera calibration against a complex background, a novel planar composite target design and the corresponding automatic extraction algorithm are presented. Unlike other commonly used target designs, the proposed target simultaneously contains the feature point coordinates and feature point serial numbers. Based on the original target, templates are then prepared by three geometric transformations and used as the input for template matching based on shape context. Finally, parity check and region growing methods are used to extract the target as the final result. The experimental results show that the proposed method for automatic extraction and recognition of the proposed target is effective, accurate and reliable.

  4. Automatic extraction of left ventricle in SPECT myocardial perfusion imaging

    International Nuclear Information System (INIS)

    Liu Li; Zhao Shujun; Yao Zhiming; Wang Daoyu

    1999-01-01

    An automatic method of extracting the left ventricle from SPECT myocardial perfusion data was introduced. This method was based on a least squares analysis of the positions of all short-axis slice pixels with respect to the half sphere-cylinder myocardial model, and used an iterative reconstruction technique to automatically remove non-left-ventricular tissue from the perfusion images. This technique thereby provided the basis for further quantitative analysis.

  5. Automatic fault extraction using a modified ant-colony algorithm

    International Nuclear Information System (INIS)

    Zhao, Junsheng; Sun, Sam Zandong

    2013-01-01

    The basis of automatic fault extraction is seismic attributes, such as the coherence cube, in which a fault is typically identified by minimum values. The biggest challenge in automatic fault extraction is noise, including noise in the seismic data. However, a fault has better spatial continuity in a certain direction, which makes it quite different from noise. Considering this characteristic, a modified ant-colony algorithm is introduced into automatic fault identification and tracking, where the gradient direction and direction consistency are used as constraints. Numerical model test results show that this method is feasible and effective for automatic fault extraction and noise suppression. Application to field data further illustrates its validity and superiority. (paper)

  6. Automatic River Network Extraction from LIDAR Data

    Science.gov (United States)

    Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.

    2016-06-01

    The National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) within the hydrography theme. The goal is to obtain an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: hydrological terrain model generation with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network); finally, the production was launched. The key points of this work have been managing a big data environment of more than 160,000 LiDAR data files, the infrastructure to store (up to 40 TB between results and intermediate files) and process the data using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months, as well as the software stability (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and, finally, the management of human resources. The result of this production has been an accurate automatically extracted river network for the whole country, with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.

  7. AUTOMATIC RIVER NETWORK EXTRACTION FROM LIDAR DATA

    Directory of Open Access Journals (Sweden)

    E. N. Maderal

    2016-06-01

    Full Text Available The National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) within the hydrography theme. The goal is to obtain an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: hydrological terrain model generation with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network); finally, the production was launched. The key points of this work have been managing a big data environment of more than 160,000 LiDAR data files, the infrastructure to store (up to 40 TB between results and intermediate files) and process the data using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months, as well as the software stability (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and, finally, the management of human resources. The result of this production has been an accurate automatically extracted river network for the whole country, with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.

  8. Automatic extraction of legal concepts and definitions

    NARCIS (Netherlands)

    Winkels, R.; Hoekstra, R.

    2012-01-01

    In this paper we present the results of an experiment in automatic concept and definition extraction from written sources of law using relatively simple natural language and standard semantic web technology. The software was tested on six laws from the tax domain.

  9. Automatically extracting class diagrams from spreadsheets

    NARCIS (Netherlands)

    Hermans, F.; Pinzger, M.; Van Deursen, A.

    2010-01-01

    The use of spreadsheets to capture information is widespread in industry. Spreadsheets can thus be a rich source of domain information. We propose to automatically extract this information and transform it into class diagrams. The resulting class diagram can be used by software engineers to

  10. A Novel Characteristic Frequency Bands Extraction Method for Automatic Bearing Fault Diagnosis Based on Hilbert Huang Transform.

    Science.gov (United States)

    Yu, Xiao; Ding, Enjie; Chen, Chunxu; Liu, Xiaoming; Li, Li

    2015-11-03

    Because roller element bearing (REB) failures cause unexpected machinery breakdowns, their fault diagnosis has attracted considerable research attention. Established fault feature extraction methods focus on statistical characteristics of the vibration signal, an approach that loses sight of the continuous waveform features. Considering this weakness, this article proposes a novel feature extraction method for frequency bands, named Window Marginal Spectrum Clustering (WMSC), to select salient features from the marginal spectrum of vibration signals obtained by the Hilbert-Huang Transform (HHT). In WMSC, a sliding window is used to divide an entire HHT marginal spectrum (HMS) into window spectra, following which the Rand Index (RI) clustering criterion is used to evaluate each window. The windows returning higher RI values are selected to construct characteristic frequency bands (CFBs). Next, a hybrid REB fault diagnosis model is constructed, named after its elements HHT-WMSC-SVM (support vector machine). The effectiveness of HHT-WMSC-SVM is validated by running a series of experiments on REB defect datasets from the Bearing Data Center of Case Western Reserve University (CWRU). The test results evidence three major advantages of the novel method. First, the fault classification accuracy of the HHT-WMSC-SVM model is higher than that of HHT-SVM and ST-SVM, a method that combines statistical characteristics with SVM. Second, with Gauss white noise added to the original REB defect dataset, the HHT-WMSC-SVM model maintains high classification accuracy, while the classification accuracy of the ST-SVM and HHT-SVM models is significantly reduced. Third, the fault classification accuracy of HHT-WMSC-SVM can exceed 95% for a Pmin range of 500-800 and an m range of 50-300 on the REB defect dataset with Gauss white noise added at a signal-to-noise ratio (SNR) of 5. Experimental results indicate that the proposed WMSC method yields a high REB fault classification accuracy and a

  11. Toward an automatic method for extracting cancer- and other disease-related point mutations from the biomedical literature.

    Science.gov (United States)

    Doughty, Emily; Kertesz-Farkas, Attila; Bodenreider, Olivier; Thompson, Gary; Adadey, Asa; Peterson, Thomas; Kann, Maricel G

    2011-02-01

    A major goal of biomedical research in personalized medicine is to find relationships between mutations and their corresponding disease phenotypes. However, most of the disease-related mutational data are currently buried in the biomedical literature in textual form and lack the structure necessary to allow easy retrieval and visualization. We introduce a high-throughput computational method for the identification of relevant disease mutations in PubMed abstracts, applied to prostate cancer (PCa) and breast cancer (BCa) mutations. We developed the extractor of mutations (EMU) tool to identify mutations and their associated genes. We benchmarked EMU against MutationFinder, a tool to extract point mutations from text. Our results show that both methods achieve comparable performance on two manually curated datasets. We also benchmarked EMU's performance for extracting the complete mutational information and phenotype. Remarkably, we show that one of the steps in our approach, a filter based on sequence analysis, increases the precision for that task from 0.34 to 0.59 (PCa) and from 0.39 to 0.61 (BCa). We also show that this high-throughput approach can be extended to other diseases. Our method improves the current status of disease-mutation databases by significantly increasing the number of annotated mutations. We found 51 and 128 mutations manually verified to be related to PCa and BCa, respectively, that are not currently annotated for these cancer types in the OMIM or Swiss-Prot databases. EMU's retrieval performance represents a 2-fold improvement in the number of annotated mutations for PCa and BCa. We further show that our method can benefit from full-text analysis once there is an increase in the Open Access availability of full-text articles. Freely available at: http://bioinf.umbc.edu/EMU/ftp/
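
    As a simplified illustration of the kind of pattern matching a point-mutation extractor relies on (not EMU's actual rule set), the regexes below pick up the common wild-type/position/mutant notations; a real system such as EMU adds gene association and a sequence-based filter on top.

```python
# Hedged sketch: regexes for one-letter (R273H) and three-letter (Arg273His)
# point-mutation mentions in free text.
import re

AA1 = "ACDEFGHIKLMNPQRSTVWY"
AA3 = ("Ala|Arg|Asn|Asp|Cys|Gln|Glu|Gly|His|Ile|Leu|Lys|Met|"
       "Phe|Pro|Ser|Thr|Trp|Tyr|Val")

ONE_LETTER = re.compile(rf"\b([{AA1}])(\d+)([{AA1}])\b")
THREE_LETTER = re.compile(rf"\b({AA3})(\d+)({AA3})\b", re.IGNORECASE)

def find_point_mutations(text):
    hits = [(m.group(1), int(m.group(2)), m.group(3))
            for m in ONE_LETTER.finditer(text)]
    hits += [(m.group(1), int(m.group(2)), m.group(3))
             for m in THREE_LETTER.finditer(text)]
    return hits

print(find_point_mutations("The R273H and Gly12Asp substitutions were studied."))
# expected: [('R', 273, 'H'), ('Gly', 12, 'Asp')]
```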

  12. SU-F-R-05: Multidimensional Imaging Radiomics-Geodesics: A Novel Manifold Learning Based Automatic Feature Extraction Method for Diagnostic Prediction in Multiparametric Imaging

    International Nuclear Information System (INIS)

    Parekh, V; Jacobs, MA

    2016-01-01

    Purpose: Multiparametric radiological imaging is used for diagnosis in patients. Potentially, extracting useful features specific to a patient’s pathology would be a crucial step towards personalized medicine and assessing treatment options. In order to automatically extract features directly from multiparametric radiological imaging datasets, we developed an advanced unsupervised machine learning algorithm called the multidimensional imaging radiomics-geodesics (MIRaGe) algorithm. Methods: Seventy-six breast tumor patients who underwent 3T breast MRI were used for this study. We tested the MIRaGe algorithm by extracting features for the classification of breast tumors as benign or malignant. The MRI parameters used were T1-weighted, T2-weighted, dynamic contrast enhanced MR imaging (DCE-MRI) and diffusion weighted imaging (DWI). The MIRaGe algorithm extracted the radiomics-geodesics features (RGFs) from the multiparametric MRI datasets. This enables our method to learn the intrinsic manifold representations corresponding to the patients. To determine the informative RGFs, a modified Isomap algorithm (t-Isomap) was created for a radiomics-geodesics feature space (tRGFS) to avoid overfitting. Final classification was performed using an SVM. The predictive power of the RGFs was tested and validated using k-fold cross validation. Results: The RGFs extracted by the MIRaGe algorithm successfully classified malignant lesions from benign lesions with a sensitivity of 93% and a specificity of 91%. The top 50 RGFs identified as the most predictive by the t-Isomap procedure were consistent with the radiological parameters known to be associated with breast cancer diagnosis and were categorized as kinetic curve characterizing RGFs, wash-in rate characterizing RGFs, wash-out rate characterizing RGFs and morphology characterizing RGFs. Conclusion: In this paper, we developed a novel feature extraction algorithm for multiparametric radiological imaging. The results demonstrated the power of the MIRa

  13. SU-F-R-05: Multidimensional Imaging Radiomics-Geodesics: A Novel Manifold Learning Based Automatic Feature Extraction Method for Diagnostic Prediction in Multiparametric Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, V [The Johns Hopkins University, Computer Science. Baltimore, MD (United States); Jacobs, MA [The Johns Hopkins University School of Medicine, Dept of Radiology and Oncology. Baltimore, MD (United States)

    2016-06-15

    Purpose: Multiparametric radiological imaging is used for diagnosis in patients. Potentially, extracting useful features specific to a patient’s pathology would be a crucial step towards personalized medicine and assessing treatment options. In order to automatically extract features directly from multiparametric radiological imaging datasets, we developed an advanced unsupervised machine learning algorithm called the multidimensional imaging radiomics-geodesics (MIRaGe) algorithm. Methods: Seventy-six breast tumor patients who underwent 3T breast MRI were used for this study. We tested the MIRaGe algorithm by extracting features for the classification of breast tumors as benign or malignant. The MRI parameters used were T1-weighted, T2-weighted, dynamic contrast enhanced MR imaging (DCE-MRI) and diffusion weighted imaging (DWI). The MIRaGe algorithm extracted the radiomics-geodesics features (RGFs) from the multiparametric MRI datasets. This enables our method to learn the intrinsic manifold representations corresponding to the patients. To determine the informative RGFs, a modified Isomap algorithm (t-Isomap) was created for a radiomics-geodesics feature space (tRGFS) to avoid overfitting. Final classification was performed using an SVM. The predictive power of the RGFs was tested and validated using k-fold cross validation. Results: The RGFs extracted by the MIRaGe algorithm successfully classified malignant lesions from benign lesions with a sensitivity of 93% and a specificity of 91%. The top 50 RGFs identified as the most predictive by the t-Isomap procedure were consistent with the radiological parameters known to be associated with breast cancer diagnosis and were categorized as kinetic curve characterizing RGFs, wash-in rate characterizing RGFs, wash-out rate characterizing RGFs and morphology characterizing RGFs. Conclusion: In this paper, we developed a novel feature extraction algorithm for multiparametric radiological imaging. The results demonstrated the power of the MIRa

  14. Extraction method

    International Nuclear Information System (INIS)

    Stary, J.; Kyrs, M.; Navratil, J.; Havelka, S.; Hala, J.

    1975-01-01

    Definitions of the basic terms and relations are given, and the state of knowledge is described regarding the possibilities of extracting elements, oxides, covalently bound halogenides and heteropolyacids. The greatest attention is devoted to a detailed analysis of the extraction of chelates and ion associates using diverse agents. For both types of compounds, detailed separation conditions are given and the effects of the individual factors are listed. Attention is also devoted to extractions using mixtures of organic agents and their synergic effects, and to extractions in non-aqueous solvents. The effects of radiation on extraction and the main types of apparatus used for laboratory extractions are described. (L.K.)

  15. Automatic extraction of planetary image features

    Science.gov (United States)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features such as small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as closed contours in the gradient to be segmented.
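
    A hedged sketch of watershed segmentation applied to a smoothed gradient image (used here as a stand-in for the Canny gradient mentioned in the record) to isolate small, rock-like regions follows. The file name, gradient choice, marker strategy and area limits are illustrative assumptions.

```python
# Hedged sketch: watershed over a gradient image to segment small regions.
import numpy as np
from scipy import ndimage as ndi
from skimage import filters, io, measure, segmentation

image = io.imread("planetary_scene.png", as_gray=True)   # hypothetical input

smoothed = filters.gaussian(image, sigma=2)
gradient = filters.sobel(smoothed)            # Gaussian-smoothed gradient magnitude

# Markers: low-gradient regions become seeds; watershed floods the gradient.
markers, _ = ndi.label(gradient < 0.05)
labels = segmentation.watershed(gradient, markers)

# Keep only small segmented regions as candidate rocks.
candidates = [r for r in measure.regionprops(labels) if 10 < r.area < 500]
print(f"{len(candidates)} candidate rock regions")
```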

  16. Rapid, potentially automatable, method to extract biomarkers for HPLC/ESI/MS/MS to detect and identify BW agents

    Energy Technology Data Exchange (ETDEWEB)

    White, D.C. [Univ. of Tennessee, Knoxville, TN (United States). Center for Environmental Biotechnology]|[Oak Ridge National Lab., TN (United States). Environmental Science Div.; Burkhalter, R.S.; Smith, C. [Univ. of Tennessee, Knoxville, TN (United States). Center for Environmental Biotechnology; Whitaker, K.W. [Microbial Insights, Inc., Rockford, TN (United States)

    1997-12-31

    The program proposes to concentrate on the rapid recovery of signature biomarkers based on automated high-pressure, high-temperature solvent extraction (ASE) and/or supercritical fluid extraction (SFE) to produce lipids, nucleic acids and proteins that are sequentially concentrated and purified in minutes, with yields especially from microeukaryotes, Gram-positive bacteria and spores. Lipids are extracted in proportions greater than with classical one-phase, room-temperature solvent extraction, without major changes in lipid composition. High performance liquid chromatography (HPLC) with or without derivatization, electrospray ionization (ESI) and highly specific detection by mass spectrometry (MS), particularly (MS)^n, provide detection, identification and, because the signature lipid biomarkers are both phenotypic and genotypic biomarkers, insights into the potential infectivity of BW agents. Feasibility has been demonstrated with the detection, identification, and determination of the infectious potential of Cryptosporidium parvum at the sensitivity of a single oocyst (which is unculturable in vitro), and with the accurate identification and prediction of pathogenicity and drug resistance of Mycobacteria spp.

  17. A new generic method for the semi-automatic extraction of river and road networks in low and mid-resolution satellite images

    Energy Technology Data Exchange (ETDEWEB)

    Grazzini, Jacopo [Los Alamos National Laboratory; Dillard, Scott [PNNL; Soille, Pierre [EC JRC

    2010-10-21

    This paper addresses the problem of semi-automatic extraction of road or hydrographic networks in satellite images. For that purpose, we propose an approach combining concepts arising from mathematical morphology and hydrology. The method exploits both the geometrical and the topological characteristics of rivers/roads and their tributaries in order to reconstruct the complete networks. It assumes that the images satisfy the following two general assumptions, which are the minimum conditions for a road/river network to be identifiable and are usually verified in low- to mid-resolution satellite images: (i) visual constraint: most pixels composing the network have a similar spectral signature that is distinguishable from most of the surrounding areas; (ii) geometric constraint: a line is a region that is relatively long and narrow compared with other objects in the image. While this approach fully exploits local (roads/rivers are modeled as elongated regions with a smooth spectral signature in the image and a maximum width) and global (they are structured like a tree) characteristics of the networks, further directional information about the image structures is incorporated. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. The geodesic propagation from a given network seed with this metric is then combined with hydrological operators for overland flow simulation to extract the paths which contain the most line evidence and identify them with the target network.

  18. Automatic building extraction using LiDAR and aerial photographs

    Directory of Open Access Journals (Sweden)

    Melis Uzar

    Full Text Available This paper presents an automatic building extraction approach using LiDAR data and aerial photographs from a multi-sensor system positioned on the same platform. The automatic building extraction approach consists of segmentation, analysis and classification steps based on object-based image analysis. The chessboard, contrast split and multi-resolution segmentation methods were used in the segmentation step. The object primitives determined in segmentation, such as scale parameter, shape, completeness, brightness, and statistical parameters, were used to determine threshold values for classification in the analysis step. The rule-based classification was carried out with decision rules defined from the determined object primitives and fuzzy rules. In this study, hierarchical classification was preferred: first, the vegetation and ground classes were generated, and the building class was then extracted. NDVI, slope and Hough images were generated and used to avoid confusing the building class with other classes. The intensity images generated from the LiDAR data and morphological operations were utilized to improve the accuracy of the building class. The proposed approach achieved an overall accuracy of approximately 93% for the target class in a suburban neighborhood, which was the study area. Moreover, completeness (96.73%) and correctness (95.02%) analyses were performed by comparing the automatically extracted buildings and the reference data.

  19. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    Directory of Open Access Journals (Sweden)

    Haijian Chen

    2015-01-01

    Full Text Available In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOC environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text mining algorithms do not perform well on online course material, we designed the automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity and to design weights that optimize the TF-IDF algorithm output values, and the terms with higher scores are selected as knowledge points. Course documents for “C programming language” were selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate.
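
    A rough sketch of the scoring idea only: TF-IDF term scores weighted by a document-similarity factor in the vector space model, with the top-scoring terms kept as candidate knowledge points. The weighting formula is an assumption for illustration, and a real pipeline for Chinese course material would first run word segmentation and POS tagging as described above.

```python
# Hedged sketch: similarity-weighted TF-IDF scores for candidate knowledge points.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def extract_knowledge_points(documents, top_k=10):
    vec = TfidfVectorizer()
    tfidf = vec.fit_transform(documents)                # (n_docs, n_terms)
    terms = np.array(vec.get_feature_names_out())
    # Weight each document by its mean similarity to the rest of the course.
    doc_weight = cosine_similarity(tfidf).mean(axis=1)
    scores = np.asarray(tfidf.T @ doc_weight).ravel()   # weighted score per term
    top = np.argsort(scores)[::-1][:top_k]
    return list(zip(terms[top], scores[top]))

docs = ["pointers and arrays in c", "for loops and while loops",
        "pointers to functions", "arrays of pointers"]
print(extract_knowledge_points(docs, top_k=5))
```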

  20. Rapid automatic keyword extraction for information retrieval and analysis

    Science.gov (United States)

    Rose, Stuart J [Richland, WA]; Cowley, Wendy E [Richland, WA]; Crow, Vernon L [Richland, WA]; Cramer, Nicholas O [Richland, WA]

    2012-03-06

    Methods and systems for rapid automatic keyword extraction for information retrieval and analysis. Embodiments can include parsing words in an individual document by delimiters, stop words, or both in order to identify candidate keywords. Word scores for each word within the candidate keywords are then calculated based on a function of co-occurrence degree, co-occurrence frequency, or both. Based on a function of the word scores for words within the candidate keyword, a keyword score is calculated for each of the candidate keywords. A portion of the candidate keywords are then extracted as keywords based, at least in part, on the candidate keywords having the highest keyword scores.
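
    A minimal sketch of the co-occurrence scoring the record describes (a RAKE-style procedure) is given below, under an assumed tiny stop-word list and punctuation delimiters.

```python
# Hedged sketch: candidate keywords from delimiters/stop words, word scores
# from co-occurrence degree and frequency, keyword scores as their sum.
import re
from collections import defaultdict

STOP_WORDS = {"the", "of", "and", "a", "an", "in", "for", "to", "is", "are",
              "on", "by", "with", "or", "as", "that", "this", "be", "can"}

def candidate_keywords(text):
    # Split on punctuation delimiters, then on stop words, to get candidates.
    phrases = re.split(r"[.,;:!?()\n]+", text.lower())
    candidates = []
    for phrase in phrases:
        current = []
        for w in re.findall(r"[a-z0-9]+", phrase):
            if w in STOP_WORDS:
                if current:
                    candidates.append(current)
                current = []
            else:
                current.append(w)
        if current:
            candidates.append(current)
    return candidates

def rake_scores(text):
    candidates = candidate_keywords(text)
    freq, degree = defaultdict(int), defaultdict(int)
    for cand in candidates:
        for w in cand:
            freq[w] += 1
            degree[w] += len(cand)          # co-occurrence degree within the candidate
    word_score = {w: degree[w] / freq[w] for w in freq}
    keyword_score = {" ".join(c): sum(word_score[w] for w in c) for c in candidates}
    return sorted(keyword_score.items(), key=lambda kv: kv[1], reverse=True)
```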

  1. Automatic sentence extraction for the detection of scientific paper relations

    Science.gov (United States)

    Sibaroni, Y.; Prasetiyowati, S. S.; Miftachudin, M.

    2018-03-01

    Relations between scientific papers are very useful for researchers who want to see the interconnections between papers quickly. By observing inter-article relationships, researchers can identify, among other things, the weaknesses of existing research, the performance improvements achieved to date, and the tools or data typically used in research in specific fields. So far, methods that have been developed to detect paper relations include machine learning and rule-based methods. However, a problem still arises in the process of extracting sentences from scientific paper documents, which is still done manually. This manual process makes the detection of scientific paper relations slow and inefficient. To overcome this problem, this study performs automatic sentence extraction, while the paper relations are identified based on the citation sentences. The performance of the built system is then compared with that of the manual extraction system. The analysis results suggest that automatic sentence extraction achieves a very high level of performance in the detection of paper relations, close to that of manual sentence extraction.

  2. Automatic Knowledge Extraction and Knowledge Structuring for a National Term Bank

    DEFF Research Database (Denmark)

    Lassen, Tine; Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2011-01-01

    This paper gives an introduction to the plans and ongoing work in a project, the aim of which is to develop methods for automatic knowledge extraction and automatic construction and updating of ontologies. The project also aims at developing methods for automatic merging of terminological data fr...... various existing sources, as well as methods for target group oriented knowledge dissemination. In this paper, we mainly focus on the plans for automatic knowledge extraction and knowledge structuring that will result in ontologies for a national term bank....

  3. Automatic extraction of drug indications from FDA drug labels.

    Science.gov (United States)

    Khare, Ritu; Wei, Chih-Hsuan; Lu, Zhiyong

    2014-01-01

    Extracting computable indications, i.e. drug-disease treatment relationships, from narrative drug resources is the key for building a gold standard drug indication repository. The two steps to the extraction problem are disease named-entity recognition (NER) to identify disease mentions from a free-text description and disease classification to distinguish indications from other disease mentions in the description. While there exist many tools for disease NER, disease classification is mostly achieved through human annotations. For example, we recently resorted to human annotations to prepare a corpus, LabeledIn, capturing structured indications from the drug labels submitted to FDA by pharmaceutical companies. In this study, we present an automatic end-to-end framework to extract structured and normalized indications from FDA drug labels. In addition to automatic disease NER, a key component of our framework is a machine learning method that is trained on the LabeledIn corpus to classify the NER-computed disease mentions as "indication vs. non-indication." Through experiments with 500 drug labels, our end-to-end system delivered 86.3% F1-measure in drug indication extraction, with 17% improvement over baseline. Further analysis shows that the indication classifier delivers a performance comparable to human experts and that the remaining errors are mostly due to disease NER (more than 50%). Given its performance, we conclude that our end-to-end approach has the potential to significantly reduce human annotation costs.

  4. Automatic extraction of syntactic patterns for dependency parsing in noun phrase chunks

    Directory of Open Access Journals (Sweden)

    Mihaela Colhon

    2014-05-01

    Full Text Available In this article we present a method for automatic extraction of syntactic patterns that are used to develop a dependency parsing method. The patterns have been extracted from a corpus automatically annotated for tokens, sentences’ borders, parts of speech and noun phrases, and manually annotated for dependency relations between words. The evaluation shows promising results in the case of an order-free language.

  5. Automatic information extraction from unstructured mammography reports using distributed semantics.

    Science.gov (United States)

    Gupta, Anupama; Banerjee, Imon; Rubin, Daniel L

    2018-02-01

    To date, the methods developed for automated extraction of information from radiology reports are mainly rule-based or dictionary-based, and, therefore, require substantial manual effort to build these systems. Recent efforts to develop automated systems for entity detection have been undertaken, but little work has been done to automatically extract relations and their associated named entities in narrative radiology reports with comparable accuracy to rule-based methods. Our goal is to extract relations in an unsupervised way from radiology reports without specifying prior domain knowledge. We propose a hybrid approach for information extraction that combines a dependency-based parse tree with distributed semantics for generating structured information frames about particular findings/abnormalities from the free-text mammography reports. The proposed IE system obtains an F1-score of 0.94 in terms of completeness of the content in the information frames, which outperforms a state-of-the-art rule-based system in this domain by a significant margin. The proposed system can be leveraged in a variety of applications, such as decision support and information retrieval, and may also easily scale to other radiology domains, since there is no need to tune the system with hand-crafted information extraction rules. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Temporally rendered automatic cloud extraction (TRACE) system

    Science.gov (United States)

    Bodrero, Dennis M.; Yale, James G.; Davis, Roger E.; Rollins, John M.

    1999-10-01

    Smoke/obscurant testing requires that 2D cloud extent be extracted from visible and thermal imagery. These data are used alone or in combination with 2D data from other aspects to make 3D calculations of cloud properties, including dimensions, volume, centroid, travel, and uniformity. Determining cloud extent from imagery has historically been a time-consuming manual process. To reduce time and cost associated with smoke/obscurant data processing, automated methods to extract cloud extent from imagery were investigated. The TRACE system described in this paper was developed and implemented at U.S. Army Dugway Proving Ground, UT by the Science and Technology Corporation--Acuity Imaging Incorporated team with Small Business Innovation Research funding. TRACE uses dynamic background subtraction and 3D fast Fourier transform as primary methods to discriminate the smoke/obscurant cloud from the background. TRACE has been designed to run on a PC-based platform using Windows. The PC-Windows environment was chosen for portability, to give TRACE the maximum flexibility in terms of its interaction with peripheral hardware devices such as video capture boards, removable media drives, network cards, and digital video interfaces. Video for Windows provides all of the necessary tools for the development of the video capture utility in TRACE and allows for interchangeability of video capture boards without any software changes. TRACE is designed to take advantage of future upgrades in all aspects of its component hardware. A comparison of cloud extent determined by TRACE with manual method is included in this paper.
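
    As a rough illustration of the dynamic background subtraction step (a generic running-average scheme, not the actual TRACE implementation), the sketch below models a slowly varying background and thresholds the frame difference to isolate the cloud region; the video file name, learning rate and threshold are placeholder assumptions.

```python
# Hedged sketch: running-average background subtraction on a video sequence.
# "smoke_test.avi", alpha and thresh are illustrative placeholders.
import cv2
import numpy as np

cap = cv2.VideoCapture("smoke_test.avi")
alpha, thresh = 0.02, 25          # background learning rate, difference threshold
background = None

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if background is None:
        background = gray.copy()
        continue
    # Update the slowly varying background model and extract the foreground cloud.
    background = (1 - alpha) * background + alpha * gray
    diff = cv2.absdiff(gray, background)
    cloud_mask = (diff > thresh).astype(np.uint8) * 255
    # Cloud extent could now be measured from cloud_mask (e.g. bounding box, area).
cap.release()
```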

  7. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    Science.gov (United States)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  8. Automatically Extracting Typical Syntactic Differences from Corpora

    NARCIS (Netherlands)

    Wiersma, Wybo; Nerbonne, John; Lauttamus, Timo

    We develop an aggregate measure of syntactic difference for automatically finding common syntactic differences between collections of text. With the use of this measure, it is possible to mine for differences between, for example, the English of learners and natives, or between related dialects. If

  9. Reference region automatic extraction in dynamic [(11)C]PIB.

    Science.gov (United States)

    Ikoma, Yoko; Edison, Paul; Ramlackhansingh, Anil; Brooks, David J; Turkheimer, Federico E

    2013-11-01

    The positron emission tomography (PET) radiotracer [(11)C]Pittsburgh Compound B (PIB) is a marker of amyloid plaque deposition in brain, and binding potential is usually quantified using the cerebellum as a reference where the specific binding is negligible. The use of the cerebellum as a reference, however, has been questioned by the reported cerebellar [(11)C]PIB retention in familial Alzheimer's disease (AD) subjects. In this work, we developed a supervised clustering procedure for the automatic extraction of a reference region in [(11)C]PIB studies. Supervised clustering models each gray matter voxel as the linear combination of three predefined kinetic classes, normal and lesion gray matter, and blood pool, and extract reference voxels in which the contribution of the normal gray matter class is high. In the validation with idiopathic AD subjects, supervised clustering extracted reference voxels mostly in the cerebellum that indicated little specific [(11)C]PIB binding, and total distribution volumes of the extracted region were lower than those of the cerebellum. Next, the methodology was applied to the familial AD cohort where the cerebellar amyloid load had been demonstrated previously, resulting in higher binding potential compared with that obtained with the cerebellar reference. The supervised clustering method is a useful tool for the accurate quantification of [(11)C]PIB studies.
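
    The supervised clustering step models each gray matter voxel's time-activity curve (TAC) as a linear combination of predefined kinetic class curves and keeps voxels dominated by the normal gray matter class. A minimal sketch of that idea, assuming the three class TACs and the voxel TACs are already available as arrays and using an illustrative dominance threshold, could use non-negative least squares:

```python
# Hedged sketch: fit each voxel TAC as a non-negative combination of three
# predefined kinetic class TACs and keep voxels dominated by normal gray matter.
# The random data and the 0.9 threshold are illustrative assumptions.
import numpy as np
from scipy.optimize import nnls

n_frames = 30
classes = np.random.rand(n_frames, 3)      # columns: normal GM, lesion GM, blood pool
voxel_tacs = np.random.rand(1000, n_frames)

reference_voxels = []
for i, tac in enumerate(voxel_tacs):
    coeffs, _ = nnls(classes, tac)         # coefficients are >= 0 by construction
    weights = coeffs / (coeffs.sum() + 1e-12)
    if weights[0] > 0.9:                   # normal gray matter class dominates
        reference_voxels.append(i)

print(f"{len(reference_voxels)} voxels selected for the reference region")
```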

  10. Automatic Extraction of Road Markings from Mobile Laser Scanning Data

    Science.gov (United States)

    Ma, H.; Pei, Z.; Wei, Z.; Zhong, R.

    2017-09-01

    Road markings, as critical features of the high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, play an important role in providing guidance and information to moving cars. A mobile laser scanning (MLS) system is an effective way to obtain 3D information on the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method; the basic assumption of the method is that the road surface is smooth, so points with small elevation differences from their neighborhood are considered to be ground points. The ground points are then partitioned into a set of profiles according to the trajectory data. The intensity histogram of the points in each profile is generated to find intensity jumps above a threshold that varies inversely with the laser distance. The separated points are used as seed points for intensity-based region growing to obtain complete road markings. A point cloud template-matching method is used to refine the road marking candidates by removing noise clusters with low correlation coefficients. In an experiment on an MLS point set covering about 2 kilometres of a city center, the method provides a promising solution to road marking extraction from MLS data.
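
    The neighborhood elevation consistency filter described above can be sketched very compactly. The following illustration (not the authors' code) treats the point cloud as an N×3 array, searches XY neighborhoods with a k-d tree, and keeps points whose local elevation spread is small; the file name, search radius and elevation threshold are placeholder assumptions.

```python
# Hedged sketch: neighborhood elevation-consistency filter for ground points.
# The file name, radius and elevation threshold are illustrative placeholders.
import numpy as np
from scipy.spatial import cKDTree

points = np.loadtxt("mls_points.xyz")        # placeholder file: columns x, y, z
tree = cKDTree(points[:, :2])                # search neighbors in the XY plane

radius, dz_max = 0.5, 0.05                   # metres
ground_mask = np.zeros(len(points), dtype=bool)
for i, p in enumerate(points):
    idx = tree.query_ball_point(p[:2], r=radius)
    dz = points[idx, 2].max() - points[idx, 2].min()
    ground_mask[i] = dz < dz_max             # smooth neighborhood -> ground point

ground_points = points[ground_mask]
print(f"{ground_mask.sum()} of {len(points)} points classified as ground")
```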

  11. AUTOMATIC EXTRACTION OF ROAD MARKINGS FROM MOBILE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    H. Ma

    2017-09-01

    Full Text Available Road markings, as critical features of the high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, play an important role in providing guidance and information to moving cars. A mobile laser scanning (MLS) system is an effective way to obtain 3D information on the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method; the basic assumption of the method is that the road surface is smooth, so points with small elevation differences from their neighborhood are considered to be ground points. The ground points are then partitioned into a set of profiles according to the trajectory data. The intensity histogram of the points in each profile is generated to find intensity jumps above a threshold that varies inversely with the laser distance. The separated points are used as seed points for intensity-based region growing to obtain complete road markings. A point cloud template-matching method is used to refine the road marking candidates by removing noise clusters with low correlation coefficients. In an experiment on an MLS point set covering about 2 kilometres of a city center, the method provides a promising solution to road marking extraction from MLS data.

  12. Automatically extracting functionally equivalent proteins from SwissProt

    Directory of Open Access Journals (Sweden)

    Martin Andrew CR

    2008-10-01

    Full Text Available Abstract Background There is a frequent need to obtain sets of functionally equivalent homologous proteins (FEPs) from different species. While it is usually the case that orthology implies functional equivalence, this is not always true; therefore datasets of orthologous proteins are not appropriate. The information relevant to extracting FEPs is contained in databanks such as UniProtKB/Swiss-Prot, and a manual analysis of these data allows FEPs to be extracted on a one-off basis. However, there has been no resource allowing the easy, automatic extraction of groups of FEPs – for example, all instances of protein C. We have developed FOSTA, an automatically generated database of FEPs annotated as having the same function in UniProtKB/Swiss-Prot which can be used for large-scale analysis. The method builds a candidate list of homologues and filters out functionally diverged proteins on the basis of functional annotations using a simple text mining approach. Results Large scale evaluation of our FEP extraction method is difficult as there is no gold-standard dataset against which the method can be benchmarked. However, a manual analysis of five protein families confirmed a high level of performance. A more extensive comparison with two manually verified functional equivalence datasets also demonstrated very good performance. Conclusion In summary, FOSTA provides an automated analysis of annotations in UniProtKB/Swiss-Prot to enable groups of proteins already annotated as functionally equivalent to be extracted. Our results demonstrate that the vast majority of UniProtKB/Swiss-Prot functional annotations are of high quality, and that FOSTA can interpret annotations successfully. Where FOSTA is not successful, we are able to highlight inconsistencies in UniProtKB/Swiss-Prot annotation. Most of these would have presented equal difficulties for manual interpretation of annotations. We discuss limitations and possible future extensions to FOSTA, and

  13. Automatic Statistics Extraction for Amateur Soccer Videos

    NARCIS (Netherlands)

    Gemert, J.C. van; Schavemaker, J.G.M.; Bonenkamp, C.W.B.

    2014-01-01

    Amateur soccer statistics have interesting applications such as providing insights to improve team performance, individual coaching, monitoring team progress and personal or team entertainment. Professional soccer statistics are extracted with labor intensive expensive manual effort which is not

  14. Automatic seamless image mosaic method based on SIFT features

    Science.gov (United States)

    Liu, Meiying; Wen, Desheng

    2017-02-01

    An automatic seamless image mosaic method based on SIFT features is proposed. First, the scale-invariant feature transform (SIFT) algorithm is used for feature extraction and matching, achieving sub-pixel precision in feature extraction. Then, the transformation matrix H is computed with an improved PROSAC algorithm; compared with RANSAC, the computational efficiency is higher and more inliers are retained. The matrix H is then refined with the Levenberg-Marquardt (LM) algorithm. Finally, the image mosaic is completed with a smoothing algorithm. The method runs automatically and avoids the disadvantages of traditional image mosaic methods under different scale and illumination conditions. Experimental results show that the mosaic quality is good and the algorithm is very stable, making it highly valuable in practice.
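
    A compact illustration of the SIFT matching and robust homography stages is shown below. It uses OpenCV's stock RANSAC estimator as a stand-in for the improved PROSAC and LM refinement described in the abstract, and the image file names and thresholds are placeholders.

```python
# Hedged sketch: SIFT matching + RANSAC homography + simple warp, as a stand-in
# for the PROSAC/LM pipeline described above. File names are placeholders.
import cv2
import numpy as np

img1 = cv2.imread("left.png")
img2 = cv2.imread("right.png")

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)

# Ratio-test matching of SIFT descriptors.
matcher = cv2.BFMatcher()
matches = [m for m, n in matcher.knnMatch(d1, d2, k=2) if m.distance < 0.75 * n.distance]

src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

# Warp the second image into the first image's frame and paste the first on top.
h, w = img1.shape[:2]
mosaic = cv2.warpPerspective(img2, H, (w * 2, h))
mosaic[:h, :w] = img1
cv2.imwrite("mosaic.png", mosaic)
```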

  15. A Risk Assessment System with Automatic Extraction of Event Types

    Science.gov (United States)

    Capet, Philippe; Delavallade, Thomas; Nakamura, Takuya; Sandor, Agnes; Tarsitano, Cedric; Voyatzi, Stavroula

    In this article we describe the joint effort of experts in linguistics, information extraction and risk assessment to integrate EventSpotter, an automatic event extraction engine, into ADAC, an automated early warning system. By detecting as early as possible weak signals of emerging risks ADAC provides a dynamic synthetic picture of situations involving risk. The ADAC system calculates risk on the basis of fuzzy logic rules operated on a template graph whose leaves are event types. EventSpotter is based on a general purpose natural language dependency parser, XIP, enhanced with domain-specific lexical resources (Lexicon-Grammar). Its role is to automatically feed the leaves with input data.

  16. Accurate and Automatic Building Roof Extraction Using Neighborhood Information of Point Clouds

    Directory of Open Access Journals (Sweden)

    ZHAO Chuan

    2017-09-01

    Full Text Available High-accuracy building roof extraction from LiDAR data is the key to building topological relationships between roofs and reconstructing buildings. Aiming at the poor adaptability and low extraction precision of existing roof extraction methods for complex buildings, an accurate and automatic building roof extraction method using neighborhood information of point clouds is proposed. Point cloud features are calculated by principal component analysis, and reliable seed points are selected after feature histogram construction. Initial roof surfaces are extracted quickly and precisely by the proposed local normal vector distribution density-based spatial clustering of applications with noise (LNVD-DBSCAN). The roof competition problem is solved effectively by a polling model based on neighborhood information. Experimental results show that the proposed method can extract building roofs automatically and precisely and adapts well to buildings of different complexity, providing reliable roof information for building reconstruction.
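
    A rough sketch of the first two stages (per-point normals via principal component analysis of local neighborhoods, then density-based clustering) is given below. It uses scikit-learn's standard DBSCAN as a stand-in for the LNVD-DBSCAN variant proposed in the paper, and the file name, neighborhood size and clustering parameters are illustrative assumptions.

```python
# Hedged sketch: PCA normals on k-nearest neighborhoods + standard DBSCAN,
# as a stand-in for the LNVD-DBSCAN roof segmentation described above.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

points = np.loadtxt("roof_points.xyz")         # placeholder: N x 3 LiDAR points
tree = cKDTree(points)
_, knn = tree.query(points, k=15)

normals = np.empty_like(points)
for i, idx in enumerate(knn):
    nbrs = points[idx] - points[idx].mean(axis=0)
    # The right singular vector with the smallest singular value is the local normal.
    _, _, vt = np.linalg.svd(nbrs, full_matrices=False)
    n = vt[-1]
    normals[i] = n if n[2] >= 0 else -n        # orient normals upward

# Cluster points by normal direction to separate planar roof facets.
labels = DBSCAN(eps=0.05, min_samples=20).fit_predict(normals)
print(f"{labels.max() + 1} candidate roof facets found")
```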

  17. Automatic Definition Extraction and Crossword Generation From Spanish News Text

    Directory of Open Access Journals (Sweden)

    Jennifer Esteche

    2017-08-01

    Full Text Available This paper describes the design and implementation of a system that takes Spanish texts and generates crosswords (board and definitions) in a fully automatic way using definitions extracted from those texts. Our solution divides the problem into two parts: a definition extraction module that applies pattern matching, implemented in Python, and a crossword generation module that uses a greedy strategy, implemented in Prolog. The system achieves 73% precision and builds crosswords similar to those built by humans.
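
    As an illustration of the pattern-matching idea behind the definition extraction module (a single toy pattern, not the authors' actual pattern set), a Spanish copular pattern of the form "X es un/una Y" could be captured as follows:

```python
# Hedged sketch: one simple Spanish definitional pattern ("X es un/una Y"),
# illustrating the pattern-matching approach; real systems use many patterns.
import re

PATTERN = re.compile(r"(?P<term>[A-ZÁÉÍÓÚÑ][\wáéíóúñ]+) es (?:un|una) (?P<definition>[^.]+)\.")

text = ("Montevideo es una ciudad ubicada en la costa sur de Uruguay. "
        "Python es un lenguaje de programación interpretado.")

for m in PATTERN.finditer(text):
    print(m.group("term"), "->", m.group("definition"))
```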

  18. An enhanced model for automatically extracting topic phrase from ...

    African Journals Online (AJOL)

    The key benefit foreseen from this automatic document classification is not only related to search engines, but also to many other fields like, document organization, text filtering and semantic index managing. Key words: Keyphrase extraction, machine learning, search engine snippet, document classification, topic tracking ...

  19. Automatically extracting information needs from complex clinical questions.

    Science.gov (United States)

    Cao, Yong-gang; Cimino, James J; Ely, John; Yu, Hong

    2010-12-01

    Clinicians pose complex clinical questions when seeing patients, and identifying the answers to those questions in a timely manner helps improve the quality of patient care. We report here on two natural language processing models, namely, automatic topic assignment and keyword identification, that together automatically and effectively extract information needs from ad hoc clinical questions. Our study is motivated in the context of developing the larger clinical question answering system AskHERMES (Help clinicians to Extract and aRticulate Multimedia information for answering clinical quEstionS). We developed supervised machine-learning systems to automatically assign predefined general categories (e.g. etiology, procedure, and diagnosis) to a question. We also explored both supervised and unsupervised systems to automatically identify keywords that capture the main content of the question. We evaluated our systems on 4654 annotated clinical questions that were collected in practice. We achieved an F1 score of 76.0% for the task of general topic classification and 58.0% for keyword extraction. Our systems have been implemented into the larger question answering system AskHERMES. Our error analyses suggested that inconsistent annotation in our training data has hurt both question analysis tasks. Our systems, available at http://www.askhermes.org, can automatically extract information needs from both short (fewer than 20 word tokens) and long questions, and from both well-structured and ill-formed questions. We speculate that the performance of general topic classification and keyword extraction can be further improved if consistently annotated data are made available. Copyright © 2010 Elsevier Inc. All rights reserved.

  20. [A novel spectrum feature extraction method].

    Science.gov (United States)

    Li, Xiang-Ru; Feng, Chun-Ming; Wang, Yong-Jun; Lu, Yu

    2011-10-01

    The present paper focuses on the celestial spectra feature extraction problem, which is a key procedure in automatic spectra classification. By extracting features, the authors can reduce redundancy, alleviate noise influence, and improve accuracy and efficiency in spectra classification. The authors introduce a novel feature analysis framework, STP (space transformation and partition), which focuses on the essential operations in feature extraction: decomposing and reorganizing spectrum components, alleviating noise influence, and eliminating redundancy. Based on STP, most of the available feature extraction methods can be analyzed, for example the unsupervised methods principal component analysis (PCA) and the wavelet transform, and the supervised methods support vector machine (SVM), relevance vector machine (RVM), linear discriminant analysis (LDA), etc. The authors also propose a novel feature extraction method whose outstanding characteristics are its simplicity and efficiency. Research shows that in some cases it is sufficient to extract features by the proposed method, and it is not necessary to use more sophisticated methods, which are usually more computationally complex. The proposed method is evaluated in classifying Galaxy and QSO spectra, which are disturbed by redshift and are representative in automatic spectra classification research. The results are practical and helpful for gaining novel insight into traditional feature extraction methods and designing more efficient spectrum classification methods.

  1. Sensitive, automatic method for the determination of diazepam and its five metabolites in human oral fluid by online solid-phase extraction and liquid chromatography with tandem mass spectrometry.

    Science.gov (United States)

    Jiang, Fengli; Rao, Yulan; Wang, Rong; Johansen, Sys Stybe; Ni, Chunfang; Liang, Chen; Zheng, Shuiqing; Ye, Haiying; Zhang, Yurong

    2016-05-01

    A novel and simple online solid-phase extraction liquid chromatography-tandem mass spectrometry method was developed and validated for the simultaneous determination of diazepam and its five metabolites including nordazepam, oxazepam, temazepam, oxazepam glucuronide, and temazepam glucuronide in human oral fluid. Human oral fluid was obtained using the Salivette(®) collection device, and 100 μL of oral fluid samples were loaded onto HySphere Resin GP cartridge for extraction. Analytes were separated on a Waters Xterra C18 column and quantified by liquid chromatography with tandem mass spectrometry using the multiple reaction monitoring mode. The whole procedure was automatic, and the total run time was 21 min. The limit of detection was in the range of 0.05-0.1 ng/mL for all analytes. The linearity ranged from 0.25 to 250 ng/mL for oxazepam, and 0.1 to 100 ng/mL for the other five analytes. Intraday and interday precision for all analytes was 0.6-12.8 and 1.0-9.2%, respectively. Accuracy ranged from 95.6 to 114.7%. Method recoveries were in the range of 65.1-80.8%. This method was fully automated, simple, and sensitive. Authentic oral fluid samples collected from two volunteers after consuming a single oral dose of 10 mg diazepam were analyzed to demonstrate the applicability of this method. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Automatic extraction of forward stroke volume using dynamic 11C-acetate PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik

    , potentially introducing bias if measured with a separate modality. The aim of this study was to develop and validate methods for automatically extracting FSV directly from the dynamic PET used for measuring oxidative metabolism. Methods: 16 subjects underwent a dynamic 27 min PET scan on a Siemens Biograph...... TruePoint 64 PET/CT scanner after bolus injection of 399±27 MBq of 11C-acetate. The LV-aortic time-activity curve (TAC) was extracted automatically from dynamic PET data using cluster analysis. The first-pass peak was derived by automatic extrapolation of the down-slope of the TAC. FSV...... was then calculated as the injected dose divided by the product of heart rate and the area under the curve of the first-pass peak. Gold standard FSV was measured in the left ventricular outflow tract by cardiovascular magnetic resonance using phase-contrast velocity mapping within two weeks of PET imaging. Results...

  3. AUTOMATIC EXTRACTION OF BUILDING OUTLINE FROM HIGH RESOLUTION AERIAL IMAGERY

    Directory of Open Access Journals (Sweden)

    Y. Wang

    2016-06-01

    Full Text Available In this paper, a new approach for automated extraction of building boundary from high resolution imagery is proposed. The proposed approach uses both geometric and spectral properties of a building to detect and locate buildings accurately. It consists of automatic generation of high quality point cloud from the imagery, building detection from point cloud, classification of building roof and generation of building outline. Point cloud is generated from the imagery automatically using semi-global image matching technology. Buildings are detected from the differential surface generated from the point cloud. Further classification of building roof is performed in order to generate accurate building outline. Finally classified building roof is converted into vector format. Numerous tests have been done on images in different locations and results are presented in the paper.

  4. Automatic Centerline Extraction of Covered Roads by Surrounding Objects from High Resolution Satellite Images

    Science.gov (United States)

    Kamangir, H.; Momeni, M.; Satari, M.

    2017-09-01

    This paper presents an automatic method to extract road centerline networks from high and very high resolution satellite images. It addresses the automated extraction of roads covered by multiple natural and artificial objects such as trees, vehicles, and the shadows of buildings or trees. To achieve precise road extraction, the method implements three stages: classification of the images with a maximum likelihood algorithm to categorize them into the classes of interest; a modification process on the classified images using connected components and morphological operators to extract the pixels of the desired objects by removing undesirable pixels from each class; and, finally, line extraction based on the RANSAC algorithm. To evaluate the performance of the proposed method, the generated results are compared with a ground-truth road map as reference. Evaluation on representative test images shows completeness values ranging between 77% and 93%.
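
    The final stage fits lines with RANSAC. Below is a small generic RANSAC line-fitting sketch over 2D candidate road pixels (synthetic data, illustrative iteration count and inlier tolerance; not the paper's implementation).

```python
# Hedged sketch: generic RANSAC line fitting over 2D candidate road pixels.
# The synthetic data, iteration count and inlier tolerance are illustrative.
import numpy as np

def ransac_line(points, n_iter=500, tol=2.0, seed=0):
    rng = np.random.default_rng(seed)
    best_mask, best_pair = None, None
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), size=2, replace=False)]
        dx, dy = p2 - p1
        norm = np.hypot(dx, dy)
        if norm == 0:
            continue
        # Perpendicular distance of every point to the line through p1 and p2.
        dist = np.abs((points[:, 0] - p1[0]) * dy - (points[:, 1] - p1[1]) * dx) / norm
        mask = dist < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_pair = mask, (p1, p2)
    return best_pair, best_mask

# Synthetic example: noisy pixels along a road plus random clutter.
rng = np.random.default_rng(1)
x = np.arange(100.0)
road = np.stack([x, 0.5 * x + rng.normal(0, 1, 100)], axis=1)
clutter = rng.uniform(0, 100, size=(40, 2))
(p1, p2), mask = ransac_line(np.vstack([road, clutter]))
print(f"inliers: {mask.sum()} of {mask.size}")
```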

  5. Automatic archaeological feature extraction from satellite VHR images

    Science.gov (United States)

    Jahjah, Munzer; Ulivieri, Carlo

    2010-05-01

    Archaeological applications need a methodological approach on a variable scale able to satisfy the intra-site (excavation) and the inter-site (survey, environmental research). The increased availability of high resolution and micro-scale data has substantially favoured archaeological applications and the consequent use of GIS platforms for reconstruction of archaeological landscapes based on remotely sensed data. Feature extraction of multispectral remotely sensing image is an important task before any further processing. High resolution remote sensing data, especially panchromatic, is an important input for the analysis of various types of image characteristics; it plays an important role in the visual systems for recognition and interpretation of given data. The methods proposed rely on an object-oriented approach based on a theory for the analysis of spatial structures called mathematical morphology. The term "morphology" stems from the fact that it aims at analysing object shapes and forms. It is mathematical in the sense that the analysis is based on the set theory, integral geometry, and lattice algebra. Mathematical morphology has proven to be a powerful image analysis technique; two-dimensional grey tone images are seen as three-dimensional sets by associating each image pixel with an elevation proportional to its intensity level. An object of known shape and size, called the structuring element, is then used to investigate the morphology of the input set. This is achieved by positioning the origin of the structuring element to every possible position of the space and testing, for each position, whether the structuring element either is included or has a nonempty intersection with the studied set. The shape and size of the structuring element must be selected according to the morphology of the searched image structures. Other two feature extraction techniques were used, eCognition and ENVI module SW, in order to compare the results. These techniques were
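
    As a concrete illustration of how a structuring element probes image shapes in mathematical morphology (a generic grayscale example, not the archaeological processing chain above), an opening and a top-hat transform with a disk-shaped element can be computed as follows; the input file name and element size are placeholder assumptions.

```python
# Hedged sketch: grayscale opening and top-hat with a disk structuring element,
# illustrating how a structuring element probes image structures.
import cv2

image = cv2.imread("panchromatic.tif", cv2.IMREAD_GRAYSCALE)

# Disk-shaped structuring element; its size should match the searched structures.
selem = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))

opened = cv2.morphologyEx(image, cv2.MORPH_OPEN, selem)    # removes bright features smaller than the disk
tophat = cv2.morphologyEx(image, cv2.MORPH_TOPHAT, selem)  # keeps exactly those small bright features

cv2.imwrite("opened.png", opened)
cv2.imwrite("tophat.png", tophat)
```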

  6. Automatic Glaucoma Detection Based on Optic Disc Segmentation and Texture Feature Extraction

    Directory of Open Access Journals (Sweden)

    Maíla de Lima Claro

    2016-08-01

    Full Text Available The use of digital image processing techniques is prominent in medical settings for the automatic diagnosis of diseases. Glaucoma is the second leading cause of blindness in the world and has no cure. Currently, there are treatments to prevent vision loss, but the disease must be detected in its early stages. Thus, the objective of this work is to develop an automatic detection method for Glaucoma in retinal images. The methodology used in the study was: acquisition of an image database, Optic Disc segmentation, texture feature extraction in different color models, and classification of the images as glaucomatous or not. We obtained an accuracy of 93%.

  7. Method to extract uranium

    International Nuclear Information System (INIS)

    Barreiro, A.J.; Hollemann, R.A.; Lyon, W.L.; Randell, C.C.

    1978-01-01

    The invented method for cleaning commercial wet-process phosphoric acid - as a pretreatment stage in the recovery of uranium - can be carried out at low capital and operational costs and requires little maintenance. In order to recover the 0.2 g of uranium contained in 1 litre of phosphoric acid solution obtained from the wet process, the solution is first largely clarified and cleaned. A cleaning agent, essentially a hydrocarbon with a boiling point between 150 and 300°C, reacts with the remaining slurry-forming impurities in the acid and enables these to be separated off, together with the slurry, in a solvent-extraction mixer-separator. Kerosine is used as the cleaning agent for wet-process phosphoric acid with humus impurities. The same process can also be used with oxidative or reductive extraction separation methods to obtain uranium from industrial crude wet-process phosphoric acid solutions. Comparing flow sheets of a known uranium separation method and of the new method according to the invention shows the simplification attained and the probable cost reduction. (RW) [de

  8. Actinide extraction methods

    Science.gov (United States)

    Peterman, Dean R [Idaho Falls, ID; Klaehn, John R [Idaho Falls, ID; Harrup, Mason K [Idaho Falls, ID; Tillotson, Richard D [Moore, ID; Law, Jack D [Pocatello, ID

    2010-09-21

    Methods of separating actinides from lanthanides are disclosed. A regio-specific/stereo-specific dithiophosphinic acid having organic moieties is provided in an organic solvent that is then contacted with an acidic medium containing an actinide and a lanthanide. The method can extend to separating actinides from one another. Actinides are extracted as a complex with the dithiophosphinic acid. Separation compositions include an aqueous phase, an organic phase, dithiophosphinic acid, and at least one actinide. The compositions may include additional actinides and/or lanthanides. A method of producing a dithiophosphinic acid comprising at least two organic moieties selected from aromatics and alkyls, each moiety having at least one functional group is also disclosed. A source of sulfur is reacted with a halophosphine. An ammonium salt of the dithiophosphinic acid product is precipitated out of the reaction mixture. The precipitated salt is dissolved in ether. The ether is removed to yield the dithiophosphinic acid.

  9. Automatic Annotation Method on Learners' Opinions in Case Method Discussion

    Science.gov (United States)

    Samejima, Masaki; Hisakane, Daichi; Komoda, Norihisa

    2015-01-01

    Purpose: The purpose of this paper is to annotate an attribute of a problem, a solution or no annotation on learners' opinions automatically for supporting the learners' discussion without a facilitator. The case method aims at discussing problems and solutions in a target case. However, the learners miss discussing some of problems and solutions.…

  10. Automatic extraction of forward stroke volume using dynamic PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik Stubkjær

    2015-01-01

    Background The aim of this study was to develop and validate an automated method for extracting forward stroke volume (FSV) using indicator dilution theory directly from dynamic positron emission tomography (PET) studies for two different tracers and scanners. Methods 35 subjects underwent...... a dynamic 11 C-acetate PET scan on a Siemens Biograph TruePoint-64 PET/CT (scanner I). In addition, 10 subjects underwent both dynamic 15 O-water PET and 11 C-acetate PET scans on a GE Discovery-ST PET/CT (scanner II). The left ventricular (LV)-aortic time-activity curve (TAC) was extracted automatically...... from PET data using cluster analysis. The first-pass peak was isolated by automatic extrapolation of the downslope of the TAC. FSV was calculated as the injected dose divided by the product of heart rate and the area under the curve of the first-pass peak. Gold standard FSV was measured using phase...
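
    The FSV computation in these records follows indicator dilution theory: the forward stroke volume is the injected dose divided by the product of heart rate and the area under the isolated first-pass peak of the LV-aortic TAC. A purely numerical sketch of that final step (synthetic numbers, without the cluster-analysis and extrapolation stages) is shown below; units must be kept consistent in any real analysis.

```python
# Hedged sketch: indicator-dilution forward stroke volume from an already
# isolated first-pass peak. All numbers are synthetic placeholders.
import numpy as np

injected_dose = 400e6                     # Bq
heart_rate = 65 / 60.0                    # beats per second
t = np.linspace(0, 30, 301)               # seconds
first_pass_tac = 9e5 * np.exp(-((t - 12.0) ** 2) / (2 * 4.0))   # Bq/mL, synthetic peak

auc = np.trapz(first_pass_tac, t)         # Bq*s/mL
fsv_ml = injected_dose / (heart_rate * auc)
print(f"Forward stroke volume ~ {fsv_ml:.1f} mL")
```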

  11. Automatic Metadata Extraction - The High Energy Physics Use Case

    CERN Document Server

    Boyd, Joseph; Rajman, Martin

    Automatic metadata extraction (AME) of scientific papers has been described as one of the hardest problems in document engineering. Heterogeneous content, varying style, and unpredictable placement of article components render the problem inherently indeterministic. Conditional random fields (CRF), a machine learning technique, can be used to classify document metadata amidst this uncertainty, annotating document contents with semantic labels. High energy physics (HEP) papers, such as those written at CERN, have unique content and structural characteristics, with scientific collaborations of thousands of authors altering article layouts dramatically. The distinctive qualities of these papers necessitate the creation of specialised datasets and model features. In this work we build an unprecedented training set of HEP papers and propose and evaluate a set of innovative features for CRF models. We build upon state-of-the-art AME software, GROBID, a tool coordinating a hierarchy of CRF models in a full document ...

  12. Image-based mobile service: automatic text extraction and translation

    Science.gov (United States)

    Berclaz, Jérôme; Bhatti, Nina; Simske, Steven J.; Schettino, John C.

    2010-01-01

    We present a new mobile service for the translation of text from images taken by consumer-grade cell-phone cameras. Such capability represents a new paradigm for users where a simple image provides the basis for a service. The ubiquity and ease of use of cell-phone cameras enables acquisition and transmission of images anywhere and at any time a user wishes, delivering rapid and accurate translation over the phone's MMS and SMS facilities. Target text is extracted completely automatically, requiring no bounding box delineation or related user intervention. The service uses localization, binarization, text deskewing, and optical character recognition (OCR) in its analysis. Once the text is translated, an SMS message is sent to the user with the result. Further novelties include that no software installation is required on the handset, any service provider or camera phone can be used, and the entire service is implemented on the server side.

  13. EXIF Custom: Automatic image metadata extraction for Scratchpads and Drupal

    Directory of Open Access Journals (Sweden)

    Ed Baker

    2013-09-01

    Full Text Available Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions such as Adobe Bridge and online tools such as Flickr also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to manually re-enter this information if they wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated fields in the Scratchpads image form using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads, and it is therefore possible to upload hundreds of images easily with automatic metadata (EXIF, XMP and IPTC) extraction and mapping.
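
    Reading embedded EXIF metadata of the kind the module maps to Scratchpads fields can be illustrated with a few lines of Python using the Pillow library (an illustration only; the actual module is Drupal code). The file name is a placeholder.

```python
# Hedged sketch: reading embedded EXIF tags with Pillow, to illustrate the kind
# of metadata the Drupal module maps to Scratchpads fields.
from PIL import Image
from PIL.ExifTags import TAGS

with Image.open("specimen_photo.jpg") as img:
    exif = img.getexif()

for tag_id, value in exif.items():
    name = TAGS.get(tag_id, tag_id)       # translate numeric tag IDs to names
    print(f"{name}: {value}")
```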

  14. Microbial diversity in fecal samples depends on DNA extraction method

    DEFF Research Database (Denmark)

    Mirsepasi, Hengameh; Persson, Søren; Struve, Carsten

    2014-01-01

    was to evaluate two different DNA extraction methods in order to choose the most efficient method for studying intestinal bacterial diversity using Denaturing Gradient Gel Electrophoresis (DGGE). FINDINGS: In this study, a semi-automatic DNA extraction system (easyMag®, BioMérieux, Marcy I'Etoile, France...

  15. Automatic temperature control method of shipping can

    International Nuclear Information System (INIS)

    Nishikawa, Kaoru.

    1992-01-01

    The present invention provides a method of rapidly and accurately controlling the temperature of a shipping can, which is used upon shipping inspection for a nuclear fuel assembly. That is, a measured temperature value of the shipping can is converted to a gas pressure setting value in a jacket of the shipping can by conducting a predetermined logic calculation by using a fuzzy logic. A gas pressure control section compares the pressure setting value of a fuzzy estimation section and the measured value of the gas pressure in the jacket of the shipping can, and conducts air supply or exhaustion of the jacket gas so as to adjust the measured value with the setting value. These fuzzy estimation section and gas pressure control section control the gas pressure in the jacket of the shipping can to control the water level in the jacket. As a result, the temperature of the shipping can is controlled. With such procedures, since the water level in the jacket can be controlled directly and finely, temperature of the shipping can is automatically controlled rapidly and accurately compared with a conventional case. (I.S.)

  16. Automatic extraction of forward stroke volume using dynamic PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik

    Background: Dynamic PET can be used to extract forward stroke volume (FSV) by the indicator dilution principle. The technique employed can be automated and is in theory independent on the tracer used and may therefore be added to any dynamic cardiac PET protocol. The aim of this study was to vali......Background: Dynamic PET can be used to extract forward stroke volume (FSV) by the indicator dilution principle. The technique employed can be automated and is in theory independent on the tracer used and may therefore be added to any dynamic cardiac PET protocol. The aim of this study...... was to validate automated methods for extracting FSV directly from dynamic PET studies for two different tracers and to examine potential scanner hardware bias. Methods: 21 subjects underwent a dynamic 27 min 11C-acetate PET scan on a Siemens Biograph TruePoint 64 PET/CT scanner (scanner I). In addition, 8...... subjects underwent a dynamic 6 min 15O-water PET scan followed by a 27 min 11C-acetate PET scan on a GE Discovery ST PET/CT scanner (scanner II). The LV-aortic time-activity curve (TAC) was extracted automatically from dynamic PET data using cluster analysis. The first-pass peak was isolated by automatic...

  17. Detection of fiducial gold markers for automatic on-line megavoltage position verification using a marker extraction kernel (MEK)

    NARCIS (Netherlands)

    Nederveen, A.; Lagendijk, J.; Hofman, P.

    2000-01-01

    PURPOSE: In this study automatic detection of implanted gold markers in megavoltage portal images for on-line position verification was investigated. METHODS AND MATERIALS: A detection method for fiducial gold markers, consisting of a marker extraction kernel (MEK), was developed. The detection

  18. Automatic and unsupervised snore sound extraction from respiratory sound signals.

    Science.gov (United States)

    Azarbarzin, Ali; Moussavi, Zahra M K

    2011-05-01

    In this paper, an automatic and unsupervised snore detection algorithm is proposed. The respiratory sound signals of 30 patients with different levels of airway obstruction were recorded by two microphones: one placed over the trachea (the tracheal microphone), and the other was a freestanding microphone (the ambient microphone). All the recordings were done simultaneously with full-night polysomnography during sleep. The sound activity episodes were identified using the vertical box (V-Box) algorithm. The 500-Hz subband energy distribution and principal component analysis were used to extract discriminative features from sound episodes. An unsupervised fuzzy C-means clustering algorithm was then deployed to label the sound episodes as either snore or no-snore class, which could be breath sound, swallowing sound, or any other noise. The algorithm was evaluated using manual annotation of the sound signals. The overall accuracy of the proposed algorithm was found to be 98.6% for tracheal sounds recordings, and 93.1% for the sounds recorded by the ambient microphone. © 2011 IEEE

  19. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in a department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary consisted of 11 files, one for the organ code and the others for the pathology codes. The organ code was obtained by typing the organ name or the code number itself from among the upper- and lower-level codes of the selected one that were simultaneously displayed on the screen. According to the first digit of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to the organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by the same program, and incorporation of this program into other data processing programs was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for other programs. Therefore, this program can be used for automation of routine work in the department of radiology.

  20. Automatic extraction of forward stroke volume using dynamic PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik

    subjects underwent a dynamic 6 min 15O-water PET scan followed by a 27 min 11C-acetate PET scan on a GE Discovery ST PET/CT scanner (scanner II). The LV-aortic time-activity curve (TAC) was extracted automatically from dynamic PET data using cluster analysis. The first-pass peak was isolated by automatic......Background: Dynamic PET can be used to extract forward stroke volume (FSV) by the indicator dilution principle. The technique employed can be automated and is in theory independent on the tracer used and may therefore be added to any dynamic cardiac PET protocol. The aim of this study...... was to validate automated methods for extracting FSV directly from dynamic PET studies for two different tracers and to examine potential scanner hardware bias. Methods: 21 subjects underwent a dynamic 27 min 11C-acetate PET scan on a Siemens Biograph TruePoint 64 PET/CT scanner (scanner I). In addition, 8...

  1. Automatic Extraction of Contours of Buildings on Oblique View Maps Based on 3D City Models

    Directory of Open Access Journals (Sweden)

    ZHU Yuanyuan

    2015-09-01

    Full Text Available To address the problem that manual extraction of building contours on oblique view maps is expensive and ineffective, with low accuracy and coarse detail, we present a method for the automatic extraction of building contours on oblique view maps based on 3D city models. We employ depth buffers to obtain a building object's color buffers, taking into account occlusion by other buildings and the existence of building groups, and then trace the building contours from the color buffers. In order to keep the occlusion consistent and to match the traced contours with the map, we propose loading the 3D city models block by block on the projection plane. Finally, the validity and feasibility of this method are demonstrated through experiments on 3D city models of Wuhan.

  2. THE FACE EXTRACTION METHOD FOR MOBILE DEVICES

    Directory of Open Access Journals (Sweden)

    Viktor Borodin

    2013-10-01

    Full Text Available The problem of automatic face recognition in images is considered. A method for extracting the face ellipse from a photograph and methods for extracting special points of the face are proposed.

  3. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    OpenAIRE

    ÖZEL, Selma Ayşe; SARAÇ, Esra

    2016-01-01

    Cyberbullying is defined as an aggressive, intentional action against a defenseless person by using the Internet, or other electronic contents. Researchers have found that many of the bullying cases have tragically ended in suicides; hence automatic detection of cyberbullying has become important. In this study we show the effects of feature extraction, feature selection, and classification methods that are used, on the performance of automatic detection of cyberbullying. To perform the exper...

  4. Multigrid method for integral equations and automatic programs

    Science.gov (United States)

    Lee, Hosae

    1993-01-01

    Several iterative algorithms based on multigrid methods are introduced for solving linear Fredholm integral equations of the second kind. Automatic programs based on these algorithms are introduced using Simpson's rule and the piecewise Gaussian rule for numerical integration.

  5. Research of x-ray automatic image mosaic method

    Science.gov (United States)

    Liu, Bin; Chen, Shunan; Guo, Lianpeng; Xu, Wanpeng

    2013-10-01

    Image mosaicking has wide application value in medical image analysis; it spatially matches a series of mutually overlapping images and builds a seamless, high-quality image with high resolution and a large field of view. In this paper, grayscale-cut pseudo-color enhancement was first used to map gray values to pseudo-color, and SIFT features were then extracted from the images. Next, using the normalized cross-correlation (NCC) similarity measure, the RANSAC (Random Sample Consensus) method was applied to exclude false feature points and complete the exact matching of feature points. Finally, seamless mosaicking and color fusion were completed using multi-level wavelet decomposition. Experiments show that the method effectively improves the precision and automation of medical image mosaicking and provides an effective technical approach for automatic medical image mosaicking.
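
    A rough sketch of the final wavelet-based fusion step is given below. It applies a generic rule (average the approximation coefficients, keep the larger-magnitude detail coefficients) to two already-aligned grayscale images with PyWavelets; this is not necessarily the exact fusion rule of the paper, and the file names, wavelet and decomposition level are placeholder assumptions.

```python
# Hedged sketch: fuse two aligned grayscale images by averaging approximation
# coefficients and taking max-magnitude detail coefficients.
import numpy as np
import pywt
import cv2

a = cv2.imread("xray_left.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
b = cv2.imread("xray_right.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

ca = pywt.wavedec2(a, "db2", level=3)
cb = pywt.wavedec2(b, "db2", level=3)

fused = [(ca[0] + cb[0]) / 2.0]                       # average the approximations
for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
    fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                       for x, y in ((ha, hb), (va, vb), (da, db))))

result = pywt.waverec2(fused, "db2")[:a.shape[0], :a.shape[1]]
cv2.imwrite("fused.png", np.clip(result, 0, 255).astype(np.uint8))
```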

  6. Automatic cell object extraction of red tide algae in microscopic images

    Science.gov (United States)

    Yu, Kun; Ji, Guangrong; Zheng, Haiyong

    2017-03-01

    Extracting the cell objects of red tide algae is the most important step in the construction of an automatic microscopic image recognition system for harmful algal blooms. This paper describes a set of composite methods for the automatic segmentation of cells of red tide algae from microscopic images. Depending on the existence of setae, we classify the common marine red tide algae into non-setae algae species and Chaetoceros, and design segmentation strategies for these two categories according to their morphological characteristics. In view of the varied forms and fuzzy edges of non-setae algae, we propose a new multi-scale detection algorithm for algal cell regions based on border- correlation, and further combine this with morphological operations and an improved GrabCut algorithm to segment single-cell and multicell objects. In this process, similarity detection is introduced to eliminate the pseudo cellular regions. For Chaetoceros, owing to the weak grayscale information of their setae and the low contrast between the setae and background, we propose a cell extraction method based on a gray surface orientation angle model. This method constructs a gray surface vector model, and executes the gray mapping of the orientation angles. The obtained gray values are then reconstructed and linearly stretched. Finally, appropriate morphological processing is conducted to preserve the orientation information and tiny features of the setae. Experimental results demonstrate that the proposed methods can effectively remove noise and accurately extract both categories of algae cell objects possessing a complete shape, regular contour, and clear edge. Compared with other advanced segmentation techniques, our methods are more robust when considering images with different appearances and achieve more satisfactory segmentation effects.
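
    The improved GrabCut segmentation the authors build on starts from OpenCV's standard GrabCut. A bare-bones call (plain rectangle-initialised GrabCut, not the improved variant or the border-correlation detector described above) looks like this; the file name and rectangle are placeholders.

```python
# Hedged sketch: plain OpenCV GrabCut initialised with a bounding rectangle,
# as a stand-in for the improved GrabCut used in the paper.
import cv2
import numpy as np

image = cv2.imread("algae_microscope.png")
mask = np.zeros(image.shape[:2], np.uint8)
rect = (50, 50, 300, 300)                     # (x, y, w, h) around a candidate cell region

bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)
cv2.grabCut(image, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep definite and probable foreground pixels as the cell object.
cell_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
cell = cv2.bitwise_and(image, image, mask=cell_mask)
cv2.imwrite("cell_object.png", cell)
```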

  7. Automatic feature extraction in large fusion databases by using deep learning approach

    International Nuclear Information System (INIS)

    Farias, Gonzalo; Dormido-Canto, Sebastián; Vega, Jesús; Rattá, Giuseppe; Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín

    2016-01-01

    Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The problem dimensionality can be reduced enormously when selecting suitable attributes. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in deep learning approach have proposed an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to get more robust and accurate classifiers in comparison to previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the following steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features. In this sense, the process to select appropriate techniques normally takes a long time. This work describes the use of recent advances in deep learning approach in order to find a good feature representation automatically. The implementation of a special neural network called sparse autoencoder and its application to two classification problems of the TJ-II fusion database is shown in detail. Results have shown that it is possible to get robust classifiers with a high successful rate, in spite of the fact that the feature space is reduced to less than 0.02% from the original one.
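
    A minimal sparse autoencoder of the kind described — a single hidden code layer with an L1 activity penalty encouraging sparse activations — might be sketched in Keras as follows. The layer sizes, penalty weight and random data are illustrative assumptions, not the TJ-II setup.

```python
# Hedged sketch: a small sparse autoencoder with an L1 activity penalty on the
# hidden code. Sizes, penalty and data are illustrative placeholders.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, regularizers

input_dim, code_dim = 1024, 64
inputs = keras.Input(shape=(input_dim,))
code = layers.Dense(code_dim, activation="relu",
                    activity_regularizer=regularizers.l1(1e-5))(inputs)  # sparsity penalty
outputs = layers.Dense(input_dim, activation="sigmoid")(code)

autoencoder = keras.Model(inputs, outputs)
encoder = keras.Model(inputs, code)                 # the learned feature extractor
autoencoder.compile(optimizer="adam", loss="mse")

x = np.random.rand(500, input_dim).astype("float32")   # placeholder signals in [0, 1]
autoencoder.fit(x, x, epochs=5, batch_size=32, verbose=0)
features = encoder.predict(x)                           # automatically learned features
print(features.shape)
```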

  8. Automatic feature extraction in large fusion databases by using deep learning approach

    Energy Technology Data Exchange (ETDEWEB)

    Farias, Gonzalo, E-mail: gonzalo.farias@ucv.cl [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile); Dormido-Canto, Sebastián [Departamento de Informática y Automática, UNED, Madrid (Spain); Vega, Jesús; Rattá, Giuseppe [Asociación EURATOM/CIEMAT Para Fusión, CIEMAT, Madrid (Spain); Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile)

    2016-11-15

    Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The problem dimensionality can be reduced enormously when selecting suitable attributes. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in deep learning approach have proposed an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to get more robust and accurate classifiers in comparison to previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the following steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features. In this sense, the process to select appropriate techniques normally takes a long time. This work describes the use of recent advances in deep learning approach in order to find a good feature representation automatically. The implementation of a special neural network called sparse autoencoder and its application to two classification problems of the TJ-II fusion database is shown in detail. Results have shown that it is possible to get robust classifiers with a high successful rate, in spite of the fact that the feature space is reduced to less than 0.02% from the original one.

  9. Automatic extraction of property norm-like data from large text corpora.

    Science.gov (United States)

    Kelly, Colin; Devereux, Barry; Korhonen, Anna

    2014-01-01

    Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car--petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, yielding concept-relation-feature triples (e.g., car be fast, car require petrol, car cause pollution), which approximate property-based conceptual representations. Our novel method extracts candidate triples from parsed corpora (Wikipedia and the British National Corpus) using syntactically and grammatically motivated rules, then reweights triples with a linear combination of their frequency and four statistical metrics. We assess our system output in three ways: lexical comparison with norms derived from human-generated property norm data, direct evaluation by four human judges, and a semantic distance comparison with both WordNet similarity data and human-judged concept similarity ratings. Our system offers a viable and performant method of plausible triple extraction: Our lexical comparison shows comparable performance to the current state-of-the-art, while subsequent evaluations exhibit the human-like character of our generated properties.
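
    A hedged sketch of the reweighting idea described above: candidate triples are scored by a linear combination of their raw frequency and a statistical association metric. Pointwise mutual information is used here purely as an example; the weights, metric and toy triples are not the authors' exact choices.

```python
# Toy triple-reweighting sketch: score = w_freq * frequency + w_pmi * PMI.
import math
from collections import Counter

triples = [("car", "require", "petrol")] * 5 + [("car", "be", "fast")] * 3 + \
          [("car", "cause", "pollution")] * 2 + [("dog", "require", "petrol")]

freq = Counter(triples)
concept_counts = Counter(t[0] for t in triples)
feature_counts = Counter(t[1:] for t in triples)
N = len(triples)

def pmi(triple):
    joint = freq[triple] / N
    indep = (concept_counts[triple[0]] / N) * (feature_counts[triple[1:]] / N)
    return math.log2(joint / indep)

w_freq, w_pmi = 0.5, 0.5                      # illustrative mixing weights
scored = sorted(set(triples),
                key=lambda t: w_freq * freq[t] + w_pmi * pmi(t),
                reverse=True)
print(scored[:3])                              # highest-scoring candidate triples
```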

  10. Automatic Classification of Marine Mammals with Speaker Classification Methods.

    Science.gov (United States)

    Kreimeyer, Roman; Ludwig, Stefan

    2016-01-01

    We present an automatic acoustic classifier for marine mammals based on human speaker classification methods, as an element of a passive acoustic monitoring (PAM) tool. This work is part of the Protection of Marine Mammals (PoMM) project under the framework of the European Defense Agency (EDA), joined by the Research Department for Underwater Acoustics and Geophysics (FWG), Bundeswehr Technical Centre (WTD 71) and Kiel University. The automatic classifier should support sonar operators in the risk mitigation process before and during sonar exercises by providing reliable classification results.

  11. Statistical Analysis of Automatic Seed Word Acquisition to Improve Harmful Expression Extraction in Cyberbullying Detection

    Directory of Open Access Journals (Sweden)

    Suzuha Hatakeyama

    2016-04-01

    We study the social problem of cyberbullying, defined as a new form of bullying that takes place in the Internet space. This paper proposes a method for the automatic acquisition of seed words to improve the performance of the original method for cyberbullying detection by Nitta et al. [1]. We conduct an experiment in exactly the same settings and find that the method, based on a Web mining technique, has lost over 30 percentage points of its performance since being proposed in 2013. Thus, we hypothesize on the reasons for the decrease in performance and propose a number of improvements, from which we experimentally choose the best one. Furthermore, we collect several seed word sets using different approaches and evaluate their precision. We find that the influential factor in the extraction of harmful expressions is not the number of seed words, but the way the seed words were collected and filtered.

  12. Automatic extraction of pavement markings on streets from point cloud data of mobile LiDAR

    Science.gov (United States)

    Gao, Yang; Zhong, Ruofei; Tang, Tao; Wang, Liuzhao; Liu, Xianlin

    2017-08-01

    Pavement markings provide an important foundation as they help to keep road users safe. Accurate and comprehensive information about pavement markings assists road regulators and is useful in developing driverless technology. Mobile light detection and ranging (LiDAR) systems offer new opportunities to collect and process accurate pavement marking information. Mobile LiDAR systems can directly obtain the three-dimensional (3D) coordinates of an object, thus providing spatial data and the intensity of 3D objects in a fast and efficient way. The RGB attribute information of the data points can be obtained from the panoramic camera in the system. In this paper, we present a novel method to automatically extract pavement markings using multiple attributes of the laser scanning point cloud from mobile LiDAR data. This method utilizes the differential grayscale of the RGB color, the laser pulse reflection intensity, and the differential intensity to identify and extract pavement markings. We utilized point cloud density to remove noise and used morphological operations to eliminate errors. In the application, we tested our method on different sections of roads in Beijing, China, and Buffalo, NY, USA. The results indicated that both correctness (p) and completeness (r) were higher than 90%. The method of this research can be applied to extract pavement markings from the huge point clouds produced by mobile LiDAR.
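
    The following is an illustrative Python sketch of the thresholding-plus-morphology part of such a pipeline on a rasterized intensity grid; the synthetic data, thresholds and structuring elements are assumptions, not the parameters used in the paper.

```python
# Sketch: high-reflectance thresholding, morphological cleaning, and removal of
# small components on a rasterized intensity grid standing in for LiDAR returns.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
intensity = rng.normal(30, 5, (200, 200))        # stand-in road-surface returns
intensity[90:110, 20:180] += 60                  # bright stripe ~ lane marking

candidate = intensity > intensity.mean() + 2 * intensity.std()          # bright points
cleaned = ndimage.binary_opening(candidate, structure=np.ones((3, 3)))  # drop speckle
labels, n = ndimage.label(cleaned)
sizes = ndimage.sum(cleaned, labels, range(1, n + 1))
markings = np.isin(labels, np.nonzero(sizes > 50)[0] + 1)               # keep large blobs
print("marking pixels:", int(markings.sum()))
```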

  13. Image Processing Method for Automatic Discrimination of Hoverfly Species

    Directory of Open Access Journals (Sweden)

    Vladimir Crnojević

    2014-01-01

    An approach to automatic hoverfly species discrimination based on detection and extraction of vein junctions in the wing venation patterns of insects is presented in the paper. The dataset used in our experiments consists of high-resolution microscopic wing images of several hoverfly species collected over a relatively long period of time at different geographic locations. Junctions are detected using a combination of the well-known HOG (histograms of oriented gradients) and a robust version of the recently proposed CLBP (complete local binary pattern). These features are used to train an SVM classifier to detect junctions in wing images. Once the junctions are identified, they are used to extract statistics characterizing the constellations of these points. Such simple features can be used to automatically discriminate the four selected hoverfly species with a polynomial-kernel SVM and achieve high classification accuracy.

  14. Automatic Pole and Q-Value Extraction for RF Structures

    Energy Technology Data Exchange (ETDEWEB)

    C. Potratz, H.-W. Glock, U. van Rienen, F. Marhauser

    2011-09-01

    The experimental characterization of RF structures such as accelerating cavities often demands measuring the resonant frequencies of eigenmodes and the corresponding (loaded) Q-values over a wide spectral range. A common procedure for determining the Q-values is the -3 dB method, which works well for isolated poles, but may not be directly applicable in the case of multiple poles residing in close proximity (e.g. for adjacent transverse modes differing by polarization). Although alternative methods may be used in such cases, this often comes at the expense of inherent systematic errors. We have developed an automation algorithm which not only speeds up the measurement time significantly, but is also able to extract eigenfrequencies and Q-values both for well-isolated and overlapping poles. At the same time, the measurement accuracy may be improved as a major benefit. To utilize this procedure, only complex scattering parameters have to be recorded for the spectral range of interest. In this paper we present the proposed algorithm applied to experimental data recorded for superconducting higher-order-mode damped multi-cell cavities as an application of high importance.
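
    As a minimal illustration of the baseline -3 dB method mentioned above (for a single, well-isolated resonance), the sketch below estimates the loaded Q from the half-power bandwidth of a synthetic |S21| trace; the Lorentzian stand-in and frequency grid are invented, and the proposed algorithm for overlapping poles is not reproduced here.

```python
# -3 dB (half-power bandwidth) Q estimation for one isolated resonance.
import numpy as np

f = np.linspace(1.297e9, 1.303e9, 20001)             # Hz, measurement grid (assumed)
f0_true, Q_true = 1.3e9, 2.0e4
s21 = 1.0 / np.sqrt(1.0 + (2.0 * Q_true * (f - f0_true) / f0_true) ** 2)  # synthetic |S21|

mag_db = 20 * np.log10(np.abs(s21))
i_peak = np.argmax(mag_db)
half = mag_db[i_peak] - 3.0                            # -3 dB level
above = np.nonzero(mag_db >= half)[0]                  # band around the resonance
f_lo, f_hi = f[above[0]], f[above[-1]]
Q_est = f[i_peak] / (f_hi - f_lo)                      # loaded Q = f0 / bandwidth
print(f"f0 = {f[i_peak]:.6e} Hz, loaded Q ~ {Q_est:.0f}")
```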

  15. ARSENAL: Automatic Requirements Specification Extraction from Natural Language

    OpenAIRE

    Ghosh, Shalini; Elenius, Daniel; Li, Wenchao; Lincoln, Patrick; Shankar, Natarajan; Steiner, Wilfried

    2014-01-01

    Requirements are informal and semi-formal descriptions of the expected behavior of a complex system from the viewpoints of its stakeholders (customers, users, operators, designers, and engineers). However, for the purpose of design, testing, and verification for critical systems, we can transform requirements into formal models that can be analyzed automatically. ARSENAL is a framework and methodology for systematically transforming natural language (NL) requirements into analyzable formal mo...

  16. Automatic Extraction of High-Resolution Rainfall Series from Rainfall Strip Charts

    Science.gov (United States)

    Saa-Requejo, Antonio; Valencia, Jose Luis; Garrido, Alberto; Tarquis, Ana M.

    2015-04-01

    Soil erosion is a complex phenomenon involving the detachment and transport of soil particles, storage and runoff of rainwater, and infiltration. The relative magnitude and importance of these processes depends on a host of factors, including climate, soil, topography, and cropping and land management practices, among others. Most models for soil erosion or hydrological processes need an accurate storm characterization. However, these data are not always available, and in some cases indirect models are generated to fill this gap. In Spain, rain intensity data for time periods of less than 24 hours go back to 1924, and many studies are limited by their availability. In many cases these data are stored on rainfall strip charts at the meteorological stations but have not been transferred into numerical form. To overcome this deficiency in the raw data, a process of information extraction from large numbers of rainfall strip charts has been implemented by means of computer software. The method, based on van Piggelen et al. (2011), largely automates the labour-intensive extraction work. It consists of the following five basic steps: 1) scanning the charts to high-resolution digital images, 2) manually and visually registering relevant meta-information from the charts and pre-processing, 3) applying automatic curve extraction software in a batch process to determine the coordinates of cumulative rainfall lines on the images (main step), 4) post-processing the curves that were not correctly determined in step 3, and 5) aggregating the cumulative rainfall in pixel coordinates to the desired time resolution. A colour detection procedure is introduced that automatically separates the background of the charts and rolls from the grid, and subsequently the rainfall curve. The rainfall curve is detected by minimization of a cost function. Some utilities have been added to improve on the previous work and automate some auxiliary processes: readjusting the bands properly, merging bands when
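
    A toy sketch of the core curve-extraction step (steps 3 and 5 above): ink pixels are isolated by thresholding, a representative row is taken per column, and pixel coordinates are converted to cumulative rainfall. The chart geometry, scale and per-column rule are assumptions and are much simpler than the colour separation and cost-function minimization used in the actual method.

```python
# Toy strip-chart digitization: threshold the ink, trace one row per column,
# convert pixel coordinates to cumulative rainfall (all geometry is invented).
import numpy as np

rng = np.random.default_rng(2)
h, w = 400, 1440                                   # pixels; 1 column per minute (assumed)
img = np.full((h, w), 255, dtype=np.uint8)         # white chart background
true_row = (399 - np.cumsum(rng.integers(0, 2, w))).clip(0, 399)
img[true_row, np.arange(w)] = 20                   # dark ink trace

curve_mask = img < 100                             # "ink" pixels
rows = np.array([np.median(np.nonzero(curve_mask[:, c])[0])
                 if curve_mask[:, c].any() else np.nan
                 for c in range(w)])
mm_per_pixel = 10.0 / h                            # assumed chart scale (10 mm full height)
cum_rain_mm = (h - 1 - rows) * mm_per_pixel        # higher on chart = more rain
rain_per_min = np.diff(np.nan_to_num(cum_rain_mm))
print("total rainfall (mm):", np.nanmax(cum_rain_mm))
print("peak 1-minute increment (mm):", rain_per_min.max())
```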

  17. Automatic extraction and identification of users' responses in Facebook medical quizzes.

    Science.gov (United States)

    Rodríguez-González, Alejandro; Menasalvas Ruiz, Ernestina; Mayer Pujadas, Miguel A

    2016-04-01

    In the last few years the use of social media in medicine has grown exponentially, providing a new area of research based on the analysis and use of Web 2.0 capabilities. In addition, the use of social media in medical education is a subject of particular interest which has been addressed in several studies. One example of this application is the medical quizzes of The New England Journal of Medicine (NEJM), which regularly publishes a set of questions through its Facebook timeline. We present an approach for the automatic extraction of medical quizzes and their associated answers on a Facebook platform by means of a set of computer-based methods and algorithms. We have developed a tool for the extraction and analysis of medical quizzes stored on the Facebook timeline of the NEJM Facebook page, based on a set of computer-based methods and algorithms implemented in Java. The system is divided into two main modules: Crawler and Data retrieval. The system was launched on December 31, 2014 and crawled through a total of 3004 valid posts and 200,081 valid comments. The first post was dated July 23, 2009 and the last one December 30, 2014. 285 quizzes were analyzed, with 32,780 different users providing answers to the aforementioned quizzes. Of the 285 quizzes, patterns were found in 261 (91.58%). From these 261 quizzes where trends were found, we saw that users follow trends of incorrect answers in 13 quizzes and trends of correct answers in 248. This tool is capable of automatically identifying the correct and wrong answers to a quiz provided in text format on Facebook posts, with a small rate of false negative cases, and this approach could be applicable to the extraction and analysis of other sources on the Internet after some adaptation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Sensitive, automatic method for the determination of diazepam and its five metabolites in human oral fluid by online solid-phase extraction and liquid chromatography with tandem mass spectrometry

    DEFF Research Database (Denmark)

    Jiang, Fengli; Rao, Yulan; Wang, Rong

    2016-01-01

    A novel and simple online solid-phase extraction liquid chromatography-tandem mass spectrometry method was developed and validated for the simultaneous determination of diazepam and its five metabolites including nordazepam, oxazepam, temazepam, oxazepam glucuronide, and temazepam glucuronide...... in human oral fluid. Human oral fluid was obtained using the Salivette(®) collection device, and 100 μL of oral fluid samples were loaded onto HySphere Resin GP cartridge for extraction. Analytes were separated on a Waters Xterra C18 column and quantified by liquid chromatography with tandem mass....... Intraday and interday precision for all analytes was 0.6-12.8 and 1.0-9.2%, respectively. Accuracy ranged from 95.6 to 114.7%. Method recoveries were in the range of 65.1-80.8%. This method was fully automated, simple, and sensitive. Authentic oral fluid samples collected from two volunteers after...

  19. Semi-automatic Term Extraction for the African Languages, with ...

    African Journals Online (AJOL)

    rbr

    and extract potential terms from electronic corpora, is known as (semi-)automatic term extraction. In the great majority of the current approaches, characteristics of a special-language corpus are compared to those of a general-language corpus. In all approaches, humans remain the final arbiters, and must decide whether ...

  20. Evaluation and analysis of term scoring methods for term extraction

    NARCIS (Netherlands)

    Verberne, S.; Sappelli, M.; Hiemstra, D.; Kraaij, W.

    2016-01-01

    We evaluate five term scoring methods for automatic term extraction on four different types of text collections: personal document collections, news articles, scientific articles and medical discharge summaries. Each collection has its own use case: author profiling, boolean query term suggestion,
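
    As a small illustration of what a term-scoring method can look like, the sketch below ranks candidate terms by how much more frequent they are in a domain collection than in a general background collection; the corpora and the smoothed frequency-ratio score are illustrative, not one of the five methods evaluated in the paper.

```python
# Toy term scoring: relative-frequency ratio between a domain corpus and a
# background corpus, with add-alpha smoothing for unseen background terms.
from collections import Counter

domain = "the hippocampus segmentation method uses hippocampus atlases".split()
background = "the method uses the data and the method works".split()

d, b = Counter(domain), Counter(background)
Nd, Nb = sum(d.values()), sum(b.values())

def score(term, alpha=0.5):
    return (d[term] / Nd) / ((b[term] + alpha) / (Nb + alpha))

ranked = sorted(d, key=score, reverse=True)
print(ranked[:3])    # domain-specific words such as "hippocampus" rank highest
```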

  1. Automatic extraction of gene ontology annotation and its correlation with clusters in protein networks

    Directory of Open Access Journals (Sweden)

    Mazo Ilya

    2007-07-01

    Background: Uncovering the cellular roles of a protein is a task of tremendous importance and complexity that requires dedicated experimental work as well as often sophisticated data mining and processing tools. Protein functions, often referred to as its annotations, are believed to manifest themselves through the topology of the networks of inter-protein interactions. In particular, there is a growing body of evidence that proteins performing the same function are more likely to interact with each other than with proteins with other functions. However, since functional annotation and protein network topology are often studied separately, the direct relationship between them has not been comprehensively demonstrated. In addition to having general biological significance, such a demonstration would further validate the data extraction and processing methods used to compose protein annotation and protein-protein interaction datasets. Results: We developed a method for the automatic extraction of protein functional annotation from scientific text based on Natural Language Processing (NLP) technology. For the protein annotation extracted from the entire PubMed, we evaluated the precision and recall rates, and compared the performance of the automatic extraction technology to that of the manual curation used in public Gene Ontology (GO) annotation. In the second part of our presentation, we report a large-scale investigation into the correspondence between communities in literature-based protein networks and GO annotation groups of functionally related proteins. We found a comprehensive two-way match: proteins within biological annotation groups form significantly denser linked network clusters than expected by chance and, conversely, densely linked network communities exhibit a pronounced non-random overlap with GO groups. We also expanded the publicly available GO biological process annotation using the relations extracted by our NLP technology.

  2. A comparison of accurate automatic hippocampal segmentation methods.

    Science.gov (United States)

    Zandifar, Azar; Fonov, Vladimir; Coupé, Pierrick; Pruessner, Jens; Collins, D Louis

    2017-07-15

    The hippocampus is one of the first brain structures affected by Alzheimer's disease (AD). While many automatic methods for hippocampal segmentation exist, few studies have compared them on the same data. In this study, we compare four fully automated hippocampal segmentation methods in terms of their conformity with manual segmentation and their ability to be used as an AD biomarker in clinical settings. We also apply error correction to the four automatic segmentation methods, and complete a comprehensive validation to investigate differences between the methods. The effect size and classification performance is measured for AD versus normal control (NC) groups and for stable mild cognitive impairment (sMCI) versus progressive mild cognitive impairment (pMCI) groups. Our study shows that the nonlinear patch-based segmentation method with error correction is the most accurate automatic segmentation method and yields the most conformity with manual segmentation (κ=0.894). The largest effect size between AD versus NC and sMCI versus pMCI is produced by FreeSurfer with error correction. We further show that, using only hippocampal volume, age, and sex as features, the area under the receiver operating characteristic curve reaches up to 0.8813 for AD versus NC and 0.6451 for sMCI versus pMCI. However, the automatic segmentation methods are not significantly different in their performance. Copyright © 2017. Published by Elsevier Inc.

  3. Comparison of mentha extracts obtained by different extraction methods

    Directory of Open Access Journals (Sweden)

    Milić Slavica

    2006-01-01

    The different methods of mentha extraction, such as steam distillation, extraction by methylene chloride (Soxhlet extraction) and supercritical fluid extraction (SFE) by carbon dioxide (CO2), were investigated. SFE by CO2 was performed at a pressure of 100 bar and a temperature of 40°C. The extraction yield, as well as the qualitative and quantitative composition of the obtained extracts, determined by the GC-MS method, were compared.

  4. Comparison of automatic and visual methods used for image segmentation in Endodontics: a microCT study.

    Science.gov (United States)

    Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz

    2017-01-01

    To calculate root canal volume and surface area in microCT images, image segmentation by selecting threshold values is required, which can be done by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is done entirely by computer algorithms. The aims were to compare visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and the threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between the visual and automatic segmentation methods regarding root canal volume measurements (p=0.93) and root canal surface (p=0.79). Although both visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.

  5. Automatization of laboratory extraction installation intended for investigations in the field of reprocessing of spent fuels

    International Nuclear Information System (INIS)

    Vznuzdaev, E.A.; Galkin, B.Ya.; Gofman, F.Eh.

    1981-01-01

    An automated test stand for solving the problem of optimum control of the technological extraction process in spent fuel reprocessing, by means of an automated control system based on computer technology, is described in the paper. Preliminary experiments conducted on the stand with spent fuel from a WWER-440 reactor have shown the high efficiency of automation and the possibility of conducting technological investigations in a short period of time while obtaining much information that cannot be obtained with an ordinary organisation of the work. [ru]

  6. Effect of Feature Extraction on Automatic Sleep Stage Classification by Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Prucnal Monika

    2017-06-01

    EEG signal-based sleep stage classification facilitates an initial diagnosis of sleep disorders. The aim of this study was to compare the efficiency of three methods for feature extraction: power spectral density (PSD), discrete wavelet transform (DWT) and empirical mode decomposition (EMD), in the automatic classification of sleep stages by an artificial neural network (ANN). 13650 30-second EEG epochs from the PhysioNet database, representing five sleep stages (W, N1-N3 and REM), were transformed into feature vectors using the aforementioned methods and principal component analysis (PCA). Three feed-forward ANNs with the same optimal structure (12 input neurons, 23 + 22 neurons in two hidden layers and 5 output neurons) were trained using the three sets of features, each obtained with one of the compared methods. Calculating the PSD from EEG epochs in frequency sub-bands corresponding to the brain waves (81.1% accuracy for the testing set, compared with 74.2% for DWT and 57.6% for EMD) appeared to be the most effective feature extraction method for the analysed problem.
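
    The PSD-based feature extraction that performed best above can be sketched as follows: Welch power in the classical EEG frequency bands for one 30-second epoch, log-transformed into a feature vector for the ANN. The sampling rate, band edges and toy signal are assumptions.

```python
# Band-power (PSD) features for one 30-second EEG epoch via Welch's method.
import numpy as np
from scipy.signal import welch

fs = 100.0                                   # Hz (assumed sampling rate)
t = np.arange(0, 30, 1 / fs)
epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(4).normal(size=t.size)

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "sigma": (12, 16), "beta": (16, 30)}
freqs, psd = welch(epoch, fs=fs, nperseg=int(4 * fs))

features = [psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands.values()]
features = np.log(np.array(features) + 1e-12)     # log band powers -> ANN input vector
print(dict(zip(bands, features.round(2))))
```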

  7. Automatic extraction of Manhattan-World building masses from 3D laser range scans.

    Science.gov (United States)

    Vanegas, Carlos A; Aliaga, Daniel G; Benes, Bedrich

    2012-10-01

    We propose a novel approach for the reconstruction of urban structures from 3D point clouds with an assumption of Manhattan World (MW) building geometry; i.e., the predominance of three mutually orthogonal directions in the scene. Our approach works in two steps. First, the input points are classified according to the MW assumption into four local shape types: walls, edges, corners, and edge corners. The classified points are organized into a connected set of clusters from which a volume description is extracted. The MW assumption allows us to robustly identify the fundamental shape types, describe the volumes within the bounding box, and reconstruct visible and occluded parts of the sampled structure. We show results of our reconstruction that has been applied to several synthetic and real-world 3D point data sets of various densities and from multiple viewpoints. Our method automatically reconstructs 3D building models from up to 10 million points in 10 to 60 seconds.

  8. An Automatic High Efficient Method for Dish Concentrator Alignment

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2014-01-01

    for the alignment of faceted solar dish concentrator. The isosceles triangle configuration of facet’s footholds determines a fixed relation between light spot displacements and foothold movements, which allows an automatic determination of the amount of adjustments. Tests on a 25 kW Stirling Energy System dish concentrator verify the feasibility, accuracy, and efficiency of our method.

  9. METHOD FOR AUTOMATIC RAISING AND LEVELING OF SUPPORT PLATFORM

    OpenAIRE

    A. G. Stryzhniou

    2017-01-01

    The paper presents a method for the automatic raising and leveling of a support platform that differs from others in its simplicity and versatility. The method includes four phases of raising and leveling, during which the performance capabilities of the system are defined and the soil condition is tested. In addition, the current condition of the system is controlled and corrected, with control parameters issued to the control panel. The method can be used not only for static, but also for dynamic leveling...

  10. Automatic content extraction of filled-form images based on clustering component block projection vectors

    Science.gov (United States)

    Peng, Hanchuan; He, Xiaofeng; Long, Fuhui

    2003-12-01

    Automatic understanding of document images is a hard problem. Here we consider a sub-problem, automatically extracting content from filled form images. Without pre-selected templates or sophisticated structural/semantic analysis, we propose a novel approach based on clustering the component-block-projection-vectors. By combining spectral clustering and minimal spanning tree clustering, we generate highly accurate clusters, from which the adaptive templates are constructed to extract the filled-in content. Our experiments show this approach is effective for a set of 1040 US IRS tax form images belonging to 208 types.

  11. A fast and automatic mosaic method for high-resolution satellite images

    Science.gov (United States)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

    We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed according to the geographical locations of the reference and mosaic images, and feature points are extracted from the overlapped region of both images by a scale-invariant feature transform (SIFT) algorithm. Then, the RANSAC method is used to match the feature points of both images. Finally, the two images are fused into a seamless panoramic image by a simple linear weighted fusion method or another method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested with Worldview-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
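
    A short OpenCV sketch of the SIFT-plus-RANSAC matching step described above (the overlap-rectangle computation and the final blending over the full images are omitted); the file names and ratio-test threshold are placeholders, not the paper's settings.

```python
# SIFT feature matching and RANSAC homography estimation with OpenCV.
import cv2
import numpy as np

ref = cv2.imread("reference_overlap.tif", cv2.IMREAD_GRAYSCALE)   # placeholder paths
mos = cv2.imread("mosaic_overlap.tif", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref, None)
kp2, des2 = sift.detectAndCompute(mos, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.7 * n.distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(dst, src, cv2.RANSAC, 3.0)   # map mosaic -> reference frame

warped = cv2.warpPerspective(mos, H, (ref.shape[1], ref.shape[0]))
blended = cv2.addWeighted(ref, 0.5, warped, 0.5, 0)           # simple linear weighted fusion
```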

  12. Automatic extraction of semantic relations between medical entities: a rule based approach

    Directory of Open Access Journals (Sweden)

    Ben Abacha Asma

    2011-10-01

    Background: Information extraction is a complex task which is necessary to develop high-precision information retrieval tools. In this paper, we present the platform MeTAE (Medical Texts Annotation and Exploration). MeTAE allows (i) extracting and annotating medical entities and relationships from medical texts and (ii) exploring semantically the produced RDF annotations. Results: Our annotation approach relies on linguistic patterns and domain knowledge and consists of two steps: (i) recognition of medical entities and (ii) identification of the correct semantic relation between each pair of entities. The first step is achieved by an enhanced use of MetaMap, which improves the precision obtained by MetaMap by 19.59% in our evaluation. The second step relies on linguistic patterns which are built semi-automatically from a corpus selected according to semantic criteria. We evaluate our system's ability to identify medical entities of 16 types. We also evaluate the extraction of treatment relations between a treatment (e.g. medication) and a problem (e.g. disease): we obtain 75.72% precision and 60.46% recall. Conclusions: According to our experiments, using an external sentence segmenter and noun phrase chunker may improve the precision of MetaMap-based medical entity recognition. Our pattern-based relation extraction method obtains good precision and recall with respect to related works. A more precise comparison with related approaches remains difficult, however, given the differences in corpora and in the exact nature of the extracted relations. The selection of MEDLINE articles through queries related to known drug-disease pairs enabled us to obtain a more focused corpus of relevant examples of treatment relations than a more general MEDLINE query.
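
    A toy illustration of the pattern-based relation extraction idea: once entities have been recognized, a small set of lexical patterns links a treatment to a problem. The sentence, entity labels and patterns below are invented examples, not the MeTAE rule set.

```python
# Minimal pattern-based treatment-relation extraction between recognized entities.
import re

sentence = "Metformin is used to treat type 2 diabetes in obese patients."
entities = {"Metformin": "Treatment", "type 2 diabetes": "Problem"}  # assumed NER output

patterns = [
    r"{t}\s+is used to treat\s+{p}",
    r"{t}\s+reduces the risk of\s+{p}",
]

treatments = [e for e, c in entities.items() if c == "Treatment"]
problems = [e for e, c in entities.items() if c == "Problem"]

for t in treatments:
    for p in problems:
        for pat in patterns:
            if re.search(pat.format(t=re.escape(t), p=re.escape(p)), sentence):
                print(f"treats({t}, {p})")
```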

  13. Automatic intra-modality brain image registration method

    International Nuclear Information System (INIS)

    Whitaker, J.M.; Ardekani, B.A.; Braun, M.

    1996-01-01

    Full text: Registration of 3D brain images of the same or different subjects has potential importance in clinical diagnosis, treatment planning and neurological research. The broad aim of our work is to produce an automatic and robust intra-modality brain image registration algorithm for intra-subject and inter-subject studies. Our algorithm is composed of two stages. Initial alignment is achieved by finding the values of nine transformation parameters (representing translation, rotation and scale) that minimise the non-overlapping regions of the head. This is achieved by minimisation of the sum of the exclusive OR of two binary head images, produced using the head extraction procedure described by Ardekani et al. (J Comput Assist Tomogr, 19:613-623, 1995). The initial alignment successfully determines the scale parameters and the gross translation and rotation parameters. Fine alignment uses an objective function described for inter-modality registration in Ardekani et al. (ibid.). The algorithm segments one of the images to be aligned into a set of connected components using K-means clustering. Registration is achieved by minimising the K-means variance of the segmentation induced in the other image. The similarity of images of the same modality makes the method attractive for intra-modality registration. A 3D MR image with voxel dimensions of 2×2×6 mm was misaligned. The registered image shows visually accurate registration. The average displacement of a pixel from its correct location was measured to be 3.3 mm. The algorithm was tested on intra-subject MR images and was found to produce good qualitative results. Using the data available, the algorithm produced promising qualitative results in intra-subject registration. Further work is necessary for its application to inter-subject registration, due to the large variability in brain structure between subjects. Clinical evaluation of the algorithm for selected applications is required.

  14. Infrared Cephalic-Vein to Assist Blood Extraction Tasks: Automatic Projection and Recognition

    Science.gov (United States)

    Lagüela, S.; Gesto, M.; Riveiro, B.; González-Aguilera, D.

    2017-05-01

    The thermal infrared band is not commonly used in photogrammetric and computer vision algorithms, mainly due to the low spatial resolution of this type of imagery. However, this band captures sub-superficial information, extending the capabilities of the visible bands with regard to applications. This fact is especially important in biomedicine and biometrics, allowing the geometric characterization of interior organs and pathologies with photogrammetric principles, as well as automatic identification and labelling using computer vision algorithms. This paper presents advances in close-range photogrammetry and computer vision applied to thermal infrared imagery, with the final application being Augmented Reality, in order to widen its use in the biomedical field. In this case, the thermal infrared image of the arm is acquired and simultaneously projected onto the arm, together with the identification label of the cephalic vein. In this way, blood analysts are assisted in finding the vein for blood extraction, especially in those cases where identification by the human eye is a complex task. Vein recognition is performed based on the Gaussian temperature distribution in the area of the vein, while the calibration between projector and thermographic camera is carried out through feature extraction and pattern recognition. The method is validated through its application to a set of volunteers of different ages and genders, so that different conditions of body temperature and vein depth are covered, demonstrating the applicability and reproducibility of the method.

  15. A Method of Generating Indoor Map Spatial Data Automatically from Architectural Plans

    Directory of Open Access Journals (Sweden)

    SUN Weixin

    2016-06-01

    Taking architectural plans as the data source, we propose a method which can automatically generate indoor map spatial data. Firstly, referring to the spatial data demands of indoor maps, we analyze the basic characteristics of architectural plans and introduce the concepts of wall segment, adjoining node and adjoining wall segment, on the basis of which the workflow of automatic indoor map spatial data generation is established. Then, according to the adjoining relation between wall lines at their intersection with a column, we construct a repair method for wall connectivity in relation to the column. Using gradual expansion and graphic reasoning to judge the local feature type of the wall symbol at both sides of a door or window, and by updating the enclosing rectangle of the door or window, we develop a repair method for wall connectivity in relation to the door or window, together with a method to transform doors and windows into indoor map point features. Finally, on the basis of the geometric relation between the median lines of adjoining wall segments, a wall center-line extraction algorithm is presented. Taking one exhibition hall's architectural plan as an example, we performed an experiment; the results show that the proposed methods cope well with various complex situations and realize effective automatic extraction of indoor map spatial data.

  16. Automatic extraction of ontological relations from Arabic text

    Directory of Open Access Journals (Sweden)

    Mohammed G.H. Al Zamil

    2014-12-01

    The proposed methodology has been designed to analyze Arabic text using lexical semantic patterns of the Arabic language according to a set of features. Next, the features are abstracted and enriched with formal descriptions for the purpose of generalizing the resulting rules. The rules then form a classifier that accepts Arabic text, analyzes it, and displays the related concepts labeled with their designated relationships. Moreover, to resolve the ambiguity of homonyms, a set of machine translation, text mining, and part-of-speech tagging algorithms have been reused. We performed extensive experiments to measure the effectiveness of our proposed tools. The results indicate that our proposed methodology is promising for automating the process of extracting ontological relations.

  17. Multiple Adaptive Neuro-Fuzzy Inference System with Automatic Features Extraction Algorithm for Cervical Cancer Recognition

    Directory of Open Access Journals (Sweden)

    Mohammad Subhi Al-batah

    2014-01-01

    To date, cancer of uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods (i.e., Pap smear and liquid-based cytology (LBC)) to screen for cervical cancer are time-consuming and dependent on the skill of the cytopathologist and thus are rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for recognition process. The MANFIS contains a set of ANFIS models which are arranged in parallel combination to produce a model with multi-input-multioutput structure. The system is capable of classifying cervical cell image into three groups, namely, normal, low-grade squamous intraepithelial lesion (LSIL) and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove the capability of the AFE algorithm to be as effective as the manual extraction by human experts, while the proposed MANFIS produces a good classification performance with 94.2% accuracy.

  18. Multiple adaptive neuro-fuzzy inference system with automatic features extraction algorithm for cervical cancer recognition.

    Science.gov (United States)

    Al-batah, Mohammad Subhi; Isa, Nor Ashidi Mat; Klaib, Mohammad Fadel; Al-Betar, Mohammed Azmi

    2014-01-01

    To date, cancer of uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods (i.e., Pap smear and liquid-based cytology (LBC)) to screen for cervical cancer are time-consuming and dependent on the skill of the cytopathologist and thus are rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for recognition process. The MANFIS contains a set of ANFIS models which are arranged in parallel combination to produce a model with multi-input-multioutput structure. The system is capable of classifying cervical cell image into three groups, namely, normal, low-grade squamous intraepithelial lesion (LSIL) and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove the capability of the AFE algorithm to be as effective as the manual extraction by human experts, while the proposed MANFIS produces a good classification performance with 94.2% accuracy.

  19. Automatic Vertebral Column Extraction by Whole-Body Bone SPECT Scan

    Directory of Open Access Journals (Sweden)

    Sheng-Fang Huang

    2013-01-01

    Bone extraction and division can enhance the accuracy of diagnoses based on whole-body bone SPECT data. This study developed a method for using conventional SPECT for automatic recognition of the vertebral column. A novel feature of the proposed approach is a "bone graph" image description method that represents the connectivity between image regions, facilitating the manipulation of morphological relationships in the skeleton before surgery. By tracking the paths shown in the bone graph, skeletal structures can be identified by performing morphological operations. The performance of the method was evaluated quantitatively and qualitatively by two experienced nuclear medicine physicians. Datasets for whole-body bone SPECT scans in 46 lung cancer patients with bone metastasis were obtained with Tc-99m MDP. The algorithm successfully segmented vertebrae in the thoracolumbar spine. The quantitative assessment shows that the segmentation method achieved average TP, FP, and FN rates of 95.1%, 9.1%, and 4.9%. The qualitative evaluation shows an average acceptance rate of 83%, where the data for the acceptable and unacceptable groups had a Cronbach's alpha value of 0.718, which indicated reasonable internal consistency and reliability.

  20. An Automatic Shadow Detection Method for VHR Remote Sensing Orthoimagery

    Directory of Open Access Journals (Sweden)

    Qiongjie Wang

    2017-05-01

    The application potential of very high resolution (VHR) remote sensing imagery has been boosted by recent developments in the data acquisition and processing ability of aerial photogrammetry. However, shadows in images contribute to problems such as incomplete spectral information, lower intensity brightness, and fuzzy boundaries, which seriously affect the efficiency of the image interpretation. In this paper, to address these issues, a simple and automatic method of shadow detection is presented. The proposed method combines the advantages of the property-based and geometric-based methods to automatically detect the shadowed areas in VHR imagery. A geometric model of the scene and the solar position are used to delineate the shadowed and non-shadowed areas in the VHR image. A matting method is then applied to the image to refine the shadow mask. Different types of shadowed aerial orthoimages were used to verify the effectiveness of the proposed shadow detection method, and the results were compared with the results obtained by two state-of-the-art methods. The overall accuracy of the proposed method on the three tests was around 90%, confirming the effectiveness and robustness of the new method for detecting fine shadows, without any human input. The proposed method also performs better in detecting shadows in areas with water than the other two methods.

  1. Automatic counting method for complex overlapping erythrocytes based on seed prediction in microscopic imaging

    Directory of Open Access Journals (Sweden)

    Xudong Wei

    2016-09-01

    Blood cell counting is an important medical test that helps medical staff diagnose various symptoms and diseases. An automatic segmentation method for complex overlapping erythrocytes based on seed prediction in microscopic imaging is proposed. The four main innovations of this research are as follows: (1) Regions of erythrocytes are extracted rapidly and accurately based on the G component. (2) The K-means algorithm is applied to edge detection of overlapping erythrocytes. (3) Traces of the erythrocytes' biconcave shape are utilized to predict each erythrocyte's position in overlapping clusters. (4) A new automatic counting method aimed at complex overlapping erythrocytes is presented. The experimental results show that the proposed method is efficient and accurate with very little running time. The average accuracy of the proposed method reaches 97.0%.

  2. Sleep Spindles as an Electrographic Element: Description and Automatic Detection Methods

    Directory of Open Access Journals (Sweden)

    Dorothée Coppieters ’t Wallant

    2016-01-01

    Sleep spindle is a peculiar oscillatory brain pattern which has been associated with a number of sleep (isolation from exteroceptive stimuli, memory consolidation) and individual characteristics (intellectual quotient). Oddly enough, the definition of a spindle is both incomplete and restrictive. In consequence, there is no consensus about how to detect spindles. Visual scoring is cumbersome and user dependent. To analyze spindle activity in a more robust way, automatic sleep spindle detection methods are essential. Various algorithms were developed, depending on individual research interest, which hampers direct comparisons and meta-analyses. In this review, sleep spindle is first defined physically and topographically. From this general description, we tentatively extract the main characteristics to be detected and analyzed. A nonexhaustive list of automatic spindle detection methods is provided along with a description of their main processing principles. Finally, we propose a technique to assess the detection methods in a robust and comparable way.

  3. Sleep Spindles as an Electrographic Element: Description and Automatic Detection Methods.

    Science.gov (United States)

    Coppieters 't Wallant, Dorothée; Maquet, Pierre; Phillips, Christophe

    2016-01-01

    Sleep spindle is a peculiar oscillatory brain pattern which has been associated with a number of sleep (isolation from exteroceptive stimuli, memory consolidation) and individual characteristics (intellectual quotient). Oddly enough, the definition of a spindle is both incomplete and restrictive. In consequence, there is no consensus about how to detect spindles. Visual scoring is cumbersome and user dependent. To analyze spindle activity in a more robust way, automatic sleep spindle detection methods are essential. Various algorithms were developed, depending on individual research interest, which hampers direct comparisons and meta-analyses. In this review, sleep spindle is first defined physically and topographically. From this general description, we tentatively extract the main characteristics to be detected and analyzed. A nonexhaustive list of automatic spindle detection methods is provided along with a description of their main processing principles. Finally, we propose a technique to assess the detection methods in a robust and comparable way.
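
    Many of the detectors surveyed share a common core, which the following minimal sketch illustrates: band-pass the EEG in the sigma band, compute a smoothed amplitude envelope, and flag samples exceeding an amplitude criterion. The signal, filter band and threshold are illustrative only and do not correspond to any specific published detector.

```python
# Minimal sigma-band envelope thresholding, a common core of spindle detectors.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 100.0                                                      # Hz (assumed)
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(6)
eeg = rng.normal(0, 10, t.size)
eeg[1000:1100] += 30 * np.sin(2 * np.pi * 13 * t[1000:1100])    # injected 1-s "spindle"

b, a = butter(4, [11 / (fs / 2), 16 / (fs / 2)], btype="band")  # sigma band 11-16 Hz
sigma = filtfilt(b, a, eeg)
envelope = np.abs(hilbert(sigma))                               # instantaneous amplitude
above = envelope > 3 * envelope.mean()                          # amplitude criterion
print("samples flagged as spindle activity:", int(above.sum()))
```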

  4. Automatic Web Data Extraction Based on Genetic Algorithms and Regular Expressions

    Science.gov (United States)

    Barrero, David F.; Camacho, David; R-Moreno, María D.

    Data extraction from the World Wide Web is a well-known, unsolved, and critical problem when complex information systems are designed. These problems are related to the extraction, management and reuse of the huge amount of Web data available. These data usually have high heterogeneity and volatility and low quality (i.e. format and content mistakes), so it is quite hard to build reliable systems. This chapter proposes an Evolutionary Computation approach to the problem of automatically learning software entities based on Genetic Algorithms and regular expressions. These entities, also called wrappers, will be able to extract some kinds of Web data structures from examples.
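
    A very small illustration of the idea under a deliberately simplified representation: each individual is a fixed-length sequence of regex fragments, fitness rewards matching positive example strings and penalizes matching negative ones, and one-point crossover and mutation evolve the population. The alphabet, examples and parameters are toy choices, not the chapter's actual wrapper representation.

```python
# Toy genetic algorithm that evolves a regular expression from examples.
import random
import re

random.seed(0)
POS = ["2019-05-01", "2021-11-30"]                 # strings the wrapper should capture
NEG = ["hello world", "price: 42", "31/12/2020"]
ALPHABET = [r"\d", r"\d{2}", r"\d{4}", "-", "/", r"\w+", " "]
LENGTH = 5

def random_individual():
    return [random.choice(ALPHABET) for _ in range(LENGTH)]

def fitness(ind):
    pattern = "^" + "".join(ind) + "$"
    try:
        rx = re.compile(pattern)
    except re.error:
        return -1
    return sum(bool(rx.match(s)) for s in POS) - sum(bool(rx.match(s)) for s in NEG)

pop = [random_individual() for _ in range(50)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    while len(children) < 40:
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, LENGTH)
        child = a[:cut] + b[cut:]                  # one-point crossover
        if random.random() < 0.3:                  # mutation
            child[random.randrange(LENGTH)] = random.choice(ALPHABET)
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("best pattern:", "^" + "".join(best) + "$", "fitness:", fitness(best))
```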

  5. Machine learning based sample extraction for automatic speech recognition using dialectal Assamese speech.

    Science.gov (United States)

    Agarwalla, Swapna; Sarma, Kandarpa Kumar

    2016-06-01

    Automatic Speaker Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With the assimilation of emerging concepts like big data and the Internet of Things (IoT) as extended elements of HCI, ASR techniques are passing through a paradigm shift. Of late, learning-based techniques have started to receive greater attention from research communities related to ASR, owing to the fact that the former possess a natural ability to mimic biological behavior and thereby aid ASR modeling and processing. Current learning-based ASR techniques are evolving further with the incorporation of big data and IoT-like concepts. Here, in this paper, we report certain approaches based on machine learning (ML) used for the extraction of relevant samples from a big data space, and apply them to ASR using certain soft computing techniques for Assamese speech with dialectal variations. A class of ML techniques comprising the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms, using raw speech, extracted features and frequency-domain forms, is considered. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, from a large storage, relevant samples are selected and assimilated. Next, a few conventional methods are used for feature extraction of a few selected types. The features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker and gender variations in dialectal Assamese speech. The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time

  6. Automatic extraction of faults and fractal analysis from remote sensing data

    Directory of Open Access Journals (Sweden)

    R. Gloaguen

    2007-01-01

    Object-based classification is a promising technique for image classification. Unlike pixel-based methods, which only use the measured radiometric values, object-based techniques can also use shape and context information of scene textures. These extra degrees of freedom provided by the objects allow the automatic identification of geological structures. In this article, we present an evaluation of object-based classification in the context of the extraction of geological faults. Digital elevation models and radar data of an area near Lake Magadi (Kenya) have been processed. We then determine the statistics of the fault populations. The fractal dimensions of the fault populations are similar to the fractal dimensions directly measured on remote sensing images of the study area using power spectra (PSD) and variograms. These methods allow unbiased statistics of faults and help us to understand the evolution of the fault systems in extensional domains. Furthermore, the direct analysis of image texture is a good indicator of the fault statistics and allows us to classify the intensity and type of deformation. We propose that extensional fault networks can be modeled by an iterative function system (IFS).

  7. A Simple and Automatic Method for Locating Surgical Guide Hole

    Science.gov (United States)

    Li, Xun; Chen, Ming; Tang, Kai

    2017-12-01

    Restoration-driven surgical guides are widely used in implant surgery. This study aims to provide a simple and valid method for automatically locating the surgical guide hole, which can reduce the dependence on the operator's experience and improve the design efficiency and quality of the surgical guide. Little literature can be found on this topic, and this paper proposes a novel and simple method to solve the problem. In this paper, a local coordinate system for each objective tooth is geometrically constructed in a CAD system. This coordinate system represents the dental anatomical features well, and the center axis of the objective tooth (which coincides with the corresponding guide hole axis) can be quickly evaluated in this coordinate system, completing the location of the guide hole. The proposed method has been verified by comparison against two types of benchmarks: manual operation by one skilled doctor with over 15 years of experience (used in most hospitals) and the automatic procedure of the popular commercial package Simplant (used in few hospitals). Both the benchmarks and the proposed method are analyzed in terms of their stress distribution when chewing and biting. The stress distribution is visually shown and plotted as a graph. The results show that the proposed method has a much better stress distribution than the manual operation and a slightly better one than Simplant, which will significantly reduce the risk of cervical margin collapse and extend the wear life of the restoration.

  8. Automatic Detection of Acromegaly From Facial Photographs Using Machine Learning Methods.

    Science.gov (United States)

    Kong, Xiangyi; Gong, Shun; Su, Lijuan; Howard, Newton; Kong, Yanguo

    2018-01-01

    Automatic early detection of acromegaly is theoretically possible from facial photographs, which can lessen the prevalence and increase the cure probability. In this study, several popular machine learning algorithms were trained on a retrospective development dataset consisting of 527 acromegaly patients and 596 normal subjects. We first used OpenCV to detect the face bounding rectangle box, and then cropped and resized it to the same pixel dimensions. From the detected faces, the locations of facial landmarks, which are potential clinical indicators, were extracted. Frontalization was then adopted to synthesize frontal facing views to improve the performance. Several popular machine learning methods, including LM, KNN, SVM, RT, CNN, and EM, were used to automatically identify acromegaly from the detected facial photographs, extracted facial landmarks, and synthesized frontal faces. The trained models were evaluated using a separate dataset, of which half were diagnosed as acromegaly by growth hormone suppression test. The best result of our proposed methods showed a PPV of 96%, a NPV of 95%, a sensitivity of 96% and a specificity of 96%. Artificial intelligence can automatically detect acromegaly early, with high sensitivity and specificity. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
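
    The face-detection and normalization step described above can be sketched with OpenCV as follows; a stock Haar cascade stands in for whatever detector the authors used, and the image path and target size are placeholders.

```python
# Detect the face bounding box and crop/resize it to fixed pixel dimensions.
import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
img = cv2.imread("photo.jpg")                      # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    face = cv2.resize(img[y:y + h, x:x + w], (128, 128))   # same dimensions for all subjects
    cv2.imwrite("face_128.png", face)
```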

  9. Geological lineament mapping in arid area by semi-automatic extraction from satellite images: example at the El Kseïbat region (Algerian Sahara)

    Energy Technology Data Exchange (ETDEWEB)

    Hammad, N.; Djidel, M.; Maabedi, N.

    2016-07-01

    Geologists in charge of detailed lineament mapping in arid and desert areas face the extent of the land and the abundance of eolian deposits. This study presents a semi-automatic approach to lineament extraction, which differs from other methods, such as automatic extraction and manual extraction, by being both fast and objective. It consists of a series of digital processing steps (textural and spatial filtering, binarization by thresholding, mathematical morphology, etc.) applied to a Landsat 7 ETM+ scene. This semi-automatic approach has produced a detailed map of lineaments, while taking account of the tectonic directions recognized in the region. It helps mitigate the effect of dune deposits to meet the specifications of an arid environment. The visual validation of these linear structures, by geoscientists and field data, allowed the identification of the majority of structural lineaments, or at least those verified as geological. (Author)

  10. Automatic extraction of road features in urban environments using dense ALS data

    Science.gov (United States)

    Soilán, Mario; Truong-Hong, Linh; Riveiro, Belén; Laefer, Debra

    2018-02-01

    This paper describes a methodology that automatically extracts semantic information from urban ALS data for urban parameterization and road network definition. First, building façades are segmented from the ground surface by combining knowledge-based information with both voxel and raster data. Next, heuristic rules and unsupervised learning are applied to the ground surface data to distinguish sidewalk and pavement points as a means for curb detection. Then radiometric information was employed for road marking extraction. Using high-density ALS data from Dublin, Ireland, this fully automatic workflow was able to generate a F-score close to 95% for pavement and sidewalk identification with a resolution of 20 cm and better than 80% for road marking detection.

  11. A method for automatically constructing the initial contour of the common carotid artery

    Directory of Open Access Journals (Sweden)

    Yara Omran

    2013-10-01

    In this article we propose a novel method to automatically set the initial contour that is used by the active contours algorithm. The proposed method exploits accumulative intensity profiles to locate the points on the arterial wall. The intensity profiles of sections that intersect the artery show distinguishable characteristics that make it possible to recognize them from the profiles of sections that do not intersect the artery walls. The proposed method is applied to ultrasound images of the transverse section of the common carotid artery, but it can be extended to be used on images of the longitudinal section. The intensity profiles are classified using the support vector machine algorithm, and the results of different kernels are compared. The features used for the classification are basically statistical features of the intensity profiles. The echogenicity of the arterial lumen gives the profiles that intersect the artery a special shape that helps in recognizing these profiles from other, general profiles. Outlining the arterial walls may seem a classic task in image processing; however, most of the methods used to outline the artery start from a manual, or semi-automatic, initial contour. The proposed method is highly valuable in automating the entire process of artery detection and segmentation.

  12. A New Automatic Method of Urban Areas Mapping in East Asia from LANDSAT Data

    Science.gov (United States)

    XU, R.; Jia, G.

    2012-12-01

    Cities, as places where human activities are concentrated, account for a small percentage of global land cover but are frequently cited as chief drivers of, and solutions to, climate, biogeochemical, and hydrological processes at local, regional, and global scales. Accompanying uncontrolled economic growth, urban sprawl has been attributed to the accelerating integration of East Asia into the world economy and has involved dramatic changes in urban form and land use. To understand the impact of urban extent on biogeophysical processes, reliable mapping of built-up areas is particularly essential for eastern cities, whose urban landscapes are characterized by smaller patches, greater fragmentation, and a lower fraction of natural cover than in the West. Segmentation of urban land from other land-cover types in remote sensing imagery can be done by standard classification processes as well as by logic-rule calculations based on spectral indices and their derivatives. Efforts to establish such a logic rule that requires no threshold for automatic mapping are highly worthwhile. Existing automatic methods are reviewed, and a proposed approach is then introduced, including the calculation of a new index and an improved logic rule. Following this, the existing automatic methods and the proposed approach are compared in a common context. Afterwards, the proposed approach is tested separately on large-, medium-, and small-scale cities in East Asia selected from different LANDSAT images. The results are promising, as the approach can efficiently segment urban areas even in the presence of more complex eastern cities. Key words: urban extraction; automatic method; logic rule; LANDSAT images; East Asia.

  13. METHOD FOR AUTOMATIC RAISING AND LEVELING OF SUPPORT PLATFORM

    Directory of Open Access Journals (Sweden)

    A. G. Stryzhniou

    2017-01-01

    Full Text Available The paper presents a method for automatic raising and leveling of a support platform that differs from others in its simplicity and versatility. The method includes four phases of raising and leveling, during which the performance capabilities of the system are determined and the soil condition is tested. In addition, the current state of the system is monitored and corrected, with control parameters issued to the control panel. The method can be used not only for static but also for dynamic leveling systems, such as active suspension. The method assumes identification and dynamic testing of the reference units. Synchronization of the reference units' movement is implemented to avoid dangerous skewing of the support platform. Recommendations for system implementation and experimental model identification of the support platform are presented.

  14. Automatic Extraction of Figures from Scientific Publications in High-Energy Physics

    Directory of Open Access Journals (Sweden)

    Piotr Adam Praczyk

    2013-12-01

    Full Text Available Plots and figures play an important role in the process of understanding a scientific publication, providing overviews of large amounts of data or ideas that are difficult to present intuitively using only the text. The state of the art in digital libraries, serving as gateways to knowledge encoded in scholarly writings, does not take full advantage of the graphical content of documents. Enabling machines to automatically unlock the meaning of scientific illustrations would allow immense improvements in the way scientists work and knowledge is processed. In this paper we present a novel solution for the initial problem of processing graphical content: obtaining figures from scholarly publications stored in PDF format. Our method relies on vector properties of documents and, as such, does not introduce the additional errors characteristic of methods based on raster image processing. Emphasis has been placed on correctly processing documents in High Energy Physics. The described approach makes a distinction between different classes of objects appearing in PDF documents and uses spatial clustering techniques to group objects into larger logical entities. A number of heuristics allow the rejection of incorrect figure candidates and the extraction of different types of metadata.

  15. Semi-Automatic Rating Method for Neutrophil Alkaline Phosphatase Activity.

    Science.gov (United States)

    Sugano, Kanae; Hashi, Kotomi; Goto, Misaki; Nishi, Kiyotaka; Maeda, Rie; Kono, Keigo; Yamamoto, Mai; Okada, Kazunori; Kaga, Sanae; Miwa, Keiko; Mikami, Taisei; Masauzi, Nobuo

    2017-01-01

    The neutrophil alkaline phosphatase (NAP) score is a valuable test for the diagnosis of myeloproliferative neoplasms, but it is still rated manually. We therefore developed a semi-automatic rating method using Photoshop® and Image-J, called NAP-PS-IJ. Neutrophil alkaline phosphatase staining was performed with Tomonaga's method on films of peripheral blood taken from three healthy volunteers. At least 30 neutrophils with NAP scores from 0 to 5+ were observed and imaged. The area outside each neutrophil was removed with Image-J, and the images were binarized with two different procedures (P1 and P2) using Photoshop®. The NAP-positive area (NAP-PA) and NAP-positive granule count (NAP-PGC) were then measured with Image-J. The NAP-PA in images binarized with P1 differed significantly (P < 0.05) between images with NAP scores from 0 to 3+ (group 1) and those from 4+ to 5+ (group 2). The original images in group 1 were binarized with P2; their NAP-PGC differed significantly (P < 0.05) among all four NAP score groups. The mean NAP-PGC obtained with NAP-PS-IJ correlated well (r = 0.92, P < 0.001) with the results of human examiners. The sensitivity and specificity of NAP-PS-IJ were 60% and 92%, respectively, so it may be considered a prototype method for fully automatic NAP score rating. © 2016 Wiley Periodicals, Inc.
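
    As a rough illustration of the measurement step, the following sketch binarizes a cropped neutrophil image with two thresholds standing in for the P1/P2 procedures (the threshold values and minimum granule size are assumptions) and measures the NAP-positive area and granule count with scikit-image rather than Photoshop®/Image-J:

    ```python
    from skimage import io, measure, morphology

    cell = io.imread("neutrophil_crop.png", as_gray=True)   # background already removed

    binary_p1 = cell > 0.55          # assumed stand-in for procedure P1
    nap_pa = binary_p1.sum()         # NAP-positive area in pixels

    binary_p2 = cell > 0.70          # assumed stand-in for procedure P2 (granules)
    binary_p2 = morphology.remove_small_objects(binary_p2, min_size=3)
    nap_pgc = measure.label(binary_p2).max()    # NAP-positive granule count

    print(f"NAP-PA = {nap_pa} px, NAP-PGC = {nap_pgc}")
    ```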

  16. Automatic heart positioning method in computed tomography scout images.

    Science.gov (United States)

    Li, Hong; Liu, Kaihua; Sun, Hang; Bao, Nan; Wang, Xu; Tian, Shi; Qi, Shouliang; Kang, Yan

    2014-01-01

    Computed tomography (CT) radiation dose can be reduced significantly by region-of-interest (ROI) CT scanning. Automatically positioning the heart in CT scout images is an essential step toward realizing ROI CT scans of the heart. This paper proposes a fully automatic heart positioning method for CT scout images, including the anteroposterior (A-P) scout image and the lateral scout image. The key steps are to determine the feature points of the heart and obtain part of the heart boundary on the A-P scout image, then transform this partial boundary into a polar coordinate system and obtain the whole boundary of the heart by slanted-ellipse curve fitting. For heart positioning on the lateral image, the top and bottom boundaries obtained from the A-P image can be inherited. The proposed method was tested on a clinical routine dataset of 30 cases (30 A-P scout images and 30 lateral scout images). Experimental results show that 26 cases achieved a very good positioning result of the heart in both the A-P and lateral scout images. The method may be helpful for ROI CT scanning of the heart.

  17. An Automatic Detection Method of Nanocomposite Film Element Based on GLCM and Adaboost M1

    Directory of Open Access Journals (Sweden)

    Hai Guo

    2015-01-01

    Full Text Available An automatic detection model adopting pattern recognition technology is proposed in this paper; it can measure the elemental composition of nanocomposite films. Gray level co-occurrence matrix (GLCM) features are extracted from different types of surface morphology images of the film; dimension reduction is then handled by principal component analysis (PCA). The film element can thus be identified by an Adaboost M1 strong classifier built from ten decision-tree classifiers. The experimental results show that this model is superior to SVM (support vector machine), NN, and BayesNet models. The proposed method can be widely applied to the automatic detection not only of nanocomposite film elements but also of other nanocomposite material elements.
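
    A hedged sketch of the described pipeline using scikit-image and scikit-learn, where `film_images` (uint8 grayscale arrays) and `element_labels` are assumed inputs and the GLCM distances, angles, and PCA size are illustrative choices:

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops   # skimage >= 0.19
    from sklearn.decomposition import PCA
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.pipeline import make_pipeline

    def glcm_features(img_u8):
        """Texture descriptors from the gray level co-occurrence matrix."""
        glcm = graycomatrix(img_u8, distances=[1, 2], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    X = np.array([glcm_features(img) for img in film_images])
    clf = make_pipeline(
        PCA(n_components=8),                                      # dimension reduction
        AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),   # AdaBoost M1 over
                           n_estimators=10),                      # ten tree classifiers
    )
    clf.fit(X, element_labels)
    ```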

  18. Developing Automatic Multi-Objective Optimization Methods for Complex Actuators

    Directory of Open Access Journals (Sweden)

    CHIS, R.

    2017-11-01

    Full Text Available This paper presents the analysis and multi-objective optimization of a magnetic actuator. By varying just 8 parameters of the magnetic actuator's model, the design space grows to more than 6 million configurations. Moreover, the 8 objectives that must be optimized are conflicting and generate a huge objective space as well. To cope with this complexity, we use advanced heuristic methods for Automatic Design Space Exploration. The FADSE tool is an Automatic Design Space Exploration framework that includes different state-of-the-art multi-objective meta-heuristics for solving NP-hard problems, which we used for the analysis and optimization of the COMSOL and MATLAB model of the magnetic actuator. We show that using a state-of-the-art genetic multi-objective algorithm, response surface modelling methods, and some machine learning techniques, the timing complexity of the design space exploration can be reduced while still taking objective constraints into consideration, so that various Pareto-optimal configurations can be found. Using our approach, we were able to decrease the simulation time by at least a factor of 10 compared to a run that performs all the simulations, while keeping prediction errors around 1%.

  19. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    Directory of Open Access Journals (Sweden)

    Esra SARAÇ

    2016-12-01

    Full Text Available Cyberbullying is defined as an aggressive, intentional action against a defenseless person carried out over the Internet or other electronic media. Researchers have found that many bullying cases have tragically ended in suicides; hence automatic detection of cyberbullying has become important. In this study we show the effects of the feature extraction, feature selection, and classification methods used on the performance of automatic cyberbullying detection. The experiments use the FormSpring.me dataset and investigate the effects of preprocessing methods; several classifiers, such as C4.5, Naïve Bayes, kNN, and SVM; and the information gain and chi-square feature selection methods. Experimental results indicate that the best classification results are obtained when alphabetic tokenization, no stemming, and no stopword removal are applied. Using feature selection also improves cyberbullying detection performance. When the classifiers are compared, C4.5 performs best on the dataset used.
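
    A hedged sketch of this kind of experimental setup with scikit-learn, assuming `posts` and `labels` hold the annotated texts; the alphabetic token pattern, k=500 feature selection, and the CART tree standing in for C4.5 are illustrative assumptions:

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.feature_selection import SelectKBest, chi2
    from sklearn.pipeline import make_pipeline
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import LinearSVC
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    # posts, labels: texts and 0/1 bullying annotations (e.g. from FormSpring.me)
    for name, clf in [("C4.5-like tree", DecisionTreeClassifier()),
                      ("Naive Bayes", MultinomialNB()),
                      ("kNN", KNeighborsClassifier()),
                      ("SVM", LinearSVC())]:
        pipe = make_pipeline(
            CountVectorizer(token_pattern=r"[a-zA-Z]+"),   # alphabetic tokens, no stemming
            SelectKBest(chi2, k=500),                      # chi-square feature selection
            clf,
        )
        print(name, cross_val_score(pipe, posts, labels, cv=5).mean())
    ```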

  20. Automatic numerical integration methods for Feynman integrals through 3-loop

    International Nuclear Information System (INIS)

    De Doncker, E; Olagbemi, O; Yuasa, F; Ishikawa, T; Kato, K

    2015-01-01

    We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The Dqags algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infra-red) or UV (ultra-violet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities. (paper)
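
    As a toy illustration of the iterated adaptive-integration strategy (not one of the paper's 3-loop integrands), a 2-D integral can be evaluated by nesting two adaptive 1-D QUADPACK calls via SciPy:

    ```python
    from scipy.integrate import quad

    def inner(x):
        # integrate over y for fixed x with an adaptive 1-D rule (QAGS-like)
        val, _ = quad(lambda y: 1.0 / (x + y + 0.1) ** 2, 0.0, 1.0)
        return val

    outer, err = quad(inner, 0.0, 1.0)   # iterate the adaptive rule over x
    print(outer, err)
    ```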

  1. Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation.

    Science.gov (United States)

    Pereira, Sérgio; Meier, Raphael; McKinley, Richard; Wiest, Roland; Alves, Victor; Silva, Carlos A; Reyes, Mauricio

    2018-02-01

    Machine learning systems are achieving better performances at the cost of becoming increasingly complex. However, because of that, they become less interpretable, which may cause distrust in the end-user of the system. This is especially important as these systems are pervasively being introduced to critical domains, such as the medical field. Representation Learning techniques are general methods for automatic feature computation. Nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning, and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding if the system learned the relevant relations in the data correctly, while the latter focuses on predictions performed at the voxel and patient level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology in brain tumor segmentation and penumbra estimation in ischemic stroke lesions. We show the ability of the proposed methodology to unveil information regarding relationships between imaging modalities and extracted features and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images. Copyright © 2017 Elsevier B.V. All rights reserved.
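
    A minimal sketch of the two-stage RBM-plus-Random-Forest design with scikit-learn, assuming `X` holds patch intensities scaled to [0, 1] and `y` the voxel labels; hyperparameters are illustrative, not the authors' settings:

    ```python
    from sklearn.neural_network import BernoulliRBM
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.pipeline import Pipeline

    # X: voxel-wise patches scaled to [0, 1]; y: lesion / non-lesion labels
    model = Pipeline([
        ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20)),  # unsupervised features
        ("rf", RandomForestClassifier(n_estimators=200)),                       # supervised classifier
    ])
    model.fit(X, y)

    # Forest feature importances can be mapped back to RBM hidden units for a
    # global-level interpretation of what the learned features capture.
    print(model.named_steps["rf"].feature_importances_[:5])
    ```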

  2. Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition

    Science.gov (United States)

    Kim, Jonghwa; André, Elisabeth

    This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. For emotion recognition, little attention has been paid so far to physiological signals compared to audio-visual emotion channels such as facial expression or speech. All essential stages of an automatic recognition system using biosignals are discussed, from recording a physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity, and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best extracted features are specified in detail and their effectiveness is proven by emotion recognition results.

  3. Automatic speech recognition (zero crossing method). Automatic recognition of isolated vowels

    International Nuclear Information System (INIS)

    Dupeyrat, Benoit

    1975-01-01

    This note describes a recognition method for isolated vowels, using a preprocessing of the vocal signal. The processing extracts the extrema of the vocal signal and the time intervals separating them (zero-crossing distances of the first derivative of the signal). Vowel recognition uses normalized histograms of the values of these intervals. The program determines a distance between the histogram of the sound to be recognized and model histograms built during a learning phase. The results, processed in real time by a minicomputer, are relatively independent of the speaker, provided the fundamental frequency does not vary too much (i.e. speakers of the same sex). (author) [fr
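
    A hedged sketch of the preprocessing and matching steps, with an assumed bin count and interval range, and per-vowel model histograms built beforehand in a learning phase:

    ```python
    import numpy as np

    def interval_histogram(signal, bins=32, max_interval=200):
        """Normalized histogram of sample intervals between signal extrema."""
        d = np.diff(signal)
        extrema = np.where(np.diff(np.sign(d)) != 0)[0]   # zero crossings of the derivative
        intervals = np.diff(extrema)
        hist, _ = np.histogram(intervals, bins=bins, range=(0, max_interval))
        return hist / max(hist.sum(), 1)

    def classify(signal, models):
        """models: dict vowel -> reference histogram built during learning."""
        h = interval_histogram(signal)
        return min(models, key=lambda v: np.abs(h - models[v]).sum())  # L1 histogram distance
    ```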

  4. An Automatic Cloud Detection Method for ZY-3 Satellite

    Directory of Open Access Journals (Sweden)

    CHEN Zhenwei

    2015-03-01

    Full Text Available Automatic cloud detection in optical satellite remote sensing images is a significant step in the production system of satellite products. For the browse images cataloged by the ZY-3 satellite, a tree discriminant structure is adopted to carry out cloud detection. The image is divided into sub-images and their features are extracted to perform classification between clouds and ground. However, due to the high complexity of clouds and surfaces and the low resolution of browse images, traditional classification algorithms based on image features have great limitations. In view of this problem, a prior enhancement of the original sub-images before classification is put forward in this paper to widen the texture difference between clouds and surfaces. Afterwards, with the second moment and first difference of the images, the feature vectors are extended in multi-scale space, and the cloud proportion in the image is estimated through comprehensive analysis. The presented cloud detection algorithm has already been applied to the ZY-3 application system project, and practical experimental results indicate that this algorithm is capable of significantly improving the accuracy of cloud detection.

  5. Bug Forecast: A Method for Automatic Bug Prediction

    Science.gov (United States)

    Ferenc, Rudolf

    In this paper we present an approach and a toolset for automatic bug prediction during software development and maintenance. The toolset extends the Columbus source code quality framework, which is able to integrate into the regular builds, analyze the source code, calculate different quality attributes like product metrics and bad code smells; and monitor the changes of these attributes. The new bug forecast toolset connects to the bug tracking and version control systems and assigns the reported and fixed bugs to the source code classes from the past. It then applies machine learning methods to learn which values of which quality attributes typically characterized buggy classes. Based on this information it is able to predict bugs in current and future versions of the classes.

  6. Vortex flows in the solar chromosphere. I. Automatic detection method

    Science.gov (United States)

    Kato, Y.; Wedemeyer, S.

    2017-05-01

    Solar "magnetic tornadoes" are produced by rotating magnetic field structures that extend from the upper convection zone and the photosphere to the corona of the Sun. Recent studies show that these kinds of rotating features are an integral part of atmospheric dynamics and occur on a large range of spatial scales. A systematic statistical study of magnetic tornadoes is a necessary next step towards understanding their formation and their role in mass and energy transport in the solar atmosphere. For this purpose, we develop a new automatic detection method for chromospheric swirls, meaning the observable signature of solar tornadoes or, more generally, chromospheric vortex flows and rotating motions. Unlike existing studies that rely on visual inspections, our new method combines a line integral convolution (LIC) imaging technique and a scalar quantity that represents a vortex flow on a two-dimensional plane. We have tested two detection algorithms, based on the enhanced vorticity and vorticity strength quantities, by applying them to three-dimensional numerical simulations of the solar atmosphere with CO5BOLD. We conclude that the vorticity strength method is superior compared to the enhanced vorticity method in all aspects. Applying the method to a numerical simulation of the solar atmosphere reveals very abundant small-scale, short-lived chromospheric vortex flows that have not been found previously by visual inspection.

  7. EXTRACTIVE SPECTROPHOTOMETRIC METHOD FOR THE ...

    African Journals Online (AJOL)

    B. S. Chandravanshi

    ketone (MIBK) extractable yellow nickel(II)-methyldithiocarbamate complex at 380 nm through the reaction with carbon disulfide and ... health due to direct exposure or through residues in the food and drinking water [10]. Carbaryl ... methylamine and its subsequent reaction with carbon disulfide and nickel(II) acetate to form.

  8. Realtime automatic metal extraction of medical x-ray images for contrast improvement

    Science.gov (United States)

    Prangl, Martin; Hellwagner, Hermann; Spielvogel, Christian; Bischof, Horst; Szkaliczki, Tibor

    2006-03-01

    This paper focuses on an approach for real-time metal extraction of x-ray images taken from modern x-ray machines like C-arms. Such machines are used for vessel diagnostics, surgical interventions, as well as cardiology, neurology and orthopedic examinations. They are very fast in taking images from different angles. For this reason, manual adjustment of contrast is infeasible and automatic adjustment algorithms have been applied to try to select the optimal radiation dose for contrast adjustment. Problems occur when metallic objects, e.g., a prosthesis or a screw, are in the absorption area of interest. In this case, the automatic adjustment mostly fails because the dark, metallic objects lead the algorithm to overdose the x-ray tube. This outshining effect results in overexposed images and bad contrast. To overcome this limitation, metallic objects have to be detected and extracted from images that are taken as input for the adjustment algorithm. In this paper, we present a real-time solution for extracting metallic objects of x-ray images. We will explore the characteristic features of metallic objects in x-ray images and their distinction from bone fragments which form the basis to find a successful way for object segmentation and classification. Subsequently, we will present our edge based real-time approach for successful and fast automatic segmentation and classification of metallic objects. Finally, experimental results on the effectiveness and performance of our approach based on a vast amount of input image data sets will be presented.

  9. ScholarLens: extracting competences from research publications for the automatic generation of semantic user profiles

    Directory of Open Access Journals (Sweden)

    Bahar Sateli

    2017-07-01

    Full Text Available Motivation Scientists increasingly rely on intelligent information systems to help them in their daily tasks, in particular for managing research objects, like publications or datasets. The relatively young research field of Semantic Publishing has been addressing the question how scientific applications can be improved through semantically rich representations of research objects, in order to facilitate their discovery and re-use. To complement the efforts in this area, we propose an automatic workflow to construct semantic user profiles of scholars, so that scholarly applications, like digital libraries or data repositories, can better understand their users’ interests, tasks, and competences, by incorporating these user profiles in their design. To make the user profiles sharable across applications, we propose to build them based on standard semantic web technologies, in particular the Resource Description Framework (RDF) for representing user profiles and Linked Open Data (LOD) sources for representing competence topics. To avoid the cold start problem, we suggest to automatically populate these profiles by analyzing the publications (co-authored by the users), which we hypothesize reflect their research competences. Results We developed a novel approach, ScholarLens, which can automatically generate semantic user profiles for authors of scholarly literature. For modeling the competences of scholarly users and groups, we surveyed a number of existing linked open data vocabularies. In accordance with the LOD best practices, we propose an RDF Schema (RDFS) based model for competence records that reuses existing vocabularies where appropriate. To automate the creation of semantic user profiles, we developed a complete, automated workflow that can generate semantic user profiles by analyzing full-text research articles through various natural language processing (NLP) techniques. In our method, we start by processing a set of research articles for a

  10. METHOD OF RARE TERM CONTRASTIVE EXTRACTION FROM NATURAL LANGUAGE TEXTS

    Directory of Open Access Journals (Sweden)

    I. A. Bessmertny

    2017-01-01

    Full Text Available The paper considers the problem of automatic domain term extraction from a document corpus by means of a contrast collection. Existing contrastive methods successfully extract frequently used terms but mishandle rare terms, which can impoverish the resulting thesaurus. Point-wise mutual information is one of the known statistical methods of term extraction, and it finds rare terms successfully; however, it also extracts many false terms. The proposed approach applies point-wise mutual information for rare term extraction and then filters candidates by the criterion of joint occurrence with the other candidates. We build a documents-by-terms matrix that is subjected to singular value decomposition to eliminate noise and reveal strong interconnections. We then pass to the resulting terms-by-terms matrix, which reproduces the strength of the interconnections between words. This approach was validated on a document collection from the "Geology" domain with contrast documents from topics such as "Politics", "Culture", "Economics", and "Accidents" drawn from several Internet resources. The experimental results demonstrate the viability of this method for rare term extraction.
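
    A loose sketch of the idea, assuming `D` is the documents-by-terms count matrix restricted to PMI-selected candidates; the log-ratio score and the rank-k association matrix are simplified stand-ins for the paper's exact formulation:

    ```python
    import numpy as np

    def contrast_score(count_domain, total_domain, count_contrast, total_contrast):
        """Simplified PMI-style log-ratio of relative frequencies (domain vs. contrast)."""
        p_domain = (count_domain + 1) / (total_domain + 1)
        p_contrast = (count_contrast + 1) / (total_contrast + 1)
        return np.log2(p_domain / p_contrast)

    # D: documents-by-terms count matrix over the PMI-selected candidate terms
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    k = 20                                          # assumed number of retained components
    T = (Vt[:k].T * (s[:k] ** 2)) @ Vt[:k]          # low-rank approx. of D.T @ D: terms-by-terms
    keep = T.sum(axis=1) > np.median(T.sum(axis=1)) # drop weakly co-occurring candidates
    ```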

  11. Real-Time Automatic Fetal Brain Extraction in Fetal MRI by Deep Learning

    OpenAIRE

    Salehi, Seyed Sadegh Mohseni; Hashemi, Seyed Raein; Velasco-Annis, Clemente; Ouaalam, Abdelhakim; Estroff, Judy A.; Erdogmus, Deniz; Warfield, Simon K.; Gholipour, Ali

    2017-01-01

    Brain segmentation is a fundamental first step in neuroimage analysis. In the case of fetal MRI, it is particularly challenging and important due to the arbitrary orientation of the fetus, organs that surround the fetal head, and intermittent fetal motion. Several promising methods have been proposed but are limited in their performance in challenging cases and in real-time segmentation. We aimed to develop a fully automatic segmentation method that independently segments sections of the feta...

  12. A hybrid semi-automatic method for liver segmentation based on level-set methods using multiple seed points.

    Science.gov (United States)

    Yang, Xiaopeng; Yu, Hee Chul; Choi, Younggeun; Lee, Wonsup; Wang, Baojian; Yang, Jaedo; Hwang, Hongpil; Kim, Ji Hyun; Song, Jisoo; Cho, Baik Hwan; You, Heecheon

    2014-01-01

    The present study developed a hybrid semi-automatic method to extract the liver from abdominal computerized tomography (CT) images. The proposed hybrid method consists of a customized fast-marching level-set method for detection of an optimal initial liver region from multiple seed points selected by the user, and a threshold-based level-set method for extraction of the actual liver region based on the initial liver region. The performance of the hybrid method was compared with that of the 2D region growing method implemented in OsiriX using abdominal CT datasets of 15 patients. The hybrid method showed a significantly higher accuracy in liver extraction (similarity index, SI = 97.6 ± 0.5%; false positive error, FPE = 2.2 ± 0.7%; false negative error, FNE = 2.5 ± 0.8%; average symmetric surface distance, ASD = 1.4 ± 0.5 mm) than the 2D region growing method (SI = 94.0 ± 1.9%; FPE = 5.3 ± 1.1%; FNE = 6.5 ± 3.7%; ASD = 6.7 ± 3.8 mm). The total liver extraction time per CT dataset of the hybrid method (77 ± 10 s) is significantly less than that of the 2D region growing method (575 ± 136 s). The interaction time per CT dataset between the user and the computer for the hybrid method (28 ± 4 s) is significantly shorter than for the 2D region growing method (484 ± 126 s). The proposed hybrid method was found preferable for liver segmentation in preoperative virtual liver surgery planning. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  13. A model based method for automatic facial expression recognition

    NARCIS (Netherlands)

    Kuilenburg, H. van; Wiering, M.A.; Uyl, M. den

    2006-01-01

    Automatic facial expression recognition is a research topic with interesting applications in the field of human-computer interaction, psychology and product marketing. The classification accuracy for an automatic system which uses static images as input is however largely limited by the image

  14. Image Segmentation Method Using Thresholds Automatically Determined from Picture Contents

    Directory of Open Access Journals (Sweden)

    Yuan Been Chen

    2009-01-01

    Full Text Available Image segmentation has become an indispensable task in many image and video applications. This work develops an image segmentation method based on a modified edge-following scheme in which different thresholds are automatically determined according to areas with varied contents in a picture, thus yielding suitable segmentation results in different areas. First, the iterative threshold selection technique is modified to calculate the initial-point threshold of the whole image or of a particular block. Second, a quad-tree decomposition that starts from the whole image employs the gray-level gradient characteristics of the currently processed block to decide whether further decomposition is needed. After the quad-tree decomposition, the initial-point threshold in each decomposed block is adopted to determine initial points. Additionally, the contour threshold is determined based on the histogram of gradients in each decomposed block. In particular, contour thresholds can eliminate inappropriate contours to increase the accuracy of the search and minimize the required searching time. Finally, the edge-following method is modified and then conducted based on the initial points and contour thresholds to find contours precisely and rapidly. Using the Berkeley segmentation data set with realistic images, the proposed method is demonstrated to require the least computational time while achieving fairly good segmentation performance on various image types.
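
    A minimal sketch of the iterative (ISODATA-style) threshold selection that such an initial-point threshold computation builds on, with an assumed convergence tolerance:

    ```python
    import numpy as np

    def iterative_threshold(block, tol=0.5):
        """Iterative threshold selection on one image block (gray-level array)."""
        t = block.mean()                          # start from the block mean
        while True:
            low, high = block[block <= t], block[block > t]
            if low.size == 0 or high.size == 0:
                return t
            t_new = 0.5 * (low.mean() + high.mean())   # midpoint of the two class means
            if abs(t_new - t) < tol:
                return t_new
            t = t_new
    ```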

  15. Electronic Nose Feature Extraction Methods: A Review.

    Science.gov (United States)

    Yan, Jia; Guo, Xiuzhen; Duan, Shukai; Jia, Pengfei; Wang, Lidan; Peng, Chao; Zhang, Songlin

    2015-11-02

    Many research groups in academia and industry are focusing on the performance improvement of electronic nose (E-nose) systems mainly involving three optimizations, which are sensitive material selection and sensor array optimization, enhanced feature extraction methods and pattern recognition method selection. For a specific application, the feature extraction method is a basic part of these three optimizations and a key point in E-nose system performance improvement. The aim of a feature extraction method is to extract robust information from the sensor response with less redundancy to ensure the effectiveness of the subsequent pattern recognition algorithm. Many kinds of feature extraction methods have been used in E-nose applications, such as extraction from the original response curves, curve fitting parameters, transform domains, phase space (PS) and dynamic moments (DM), parallel factor analysis (PARAFAC), energy vector (EV), power density spectrum (PSD), window time slicing (WTS) and moving window time slicing (MWTS), moving window function capture (MWFC), etc. The object of this review is to provide a summary of the various feature extraction methods used in E-noses in recent years, as well as to give some suggestions and new inspiration to propose more effective feature extraction methods for the development of E-nose technology.

  16. Automatic segmentation of MRI head images by 3-D region growing method which utilizes edge information

    International Nuclear Information System (INIS)

    Jiang, Hao; Suzuki, Hidetomo; Toriwaki, Jun-ichiro

    1991-01-01

    This paper presents a 3-D segmentation method that automatically extracts soft tissue from multi-sliced MRI head images. MRI produces a sequence of two-dimensional (2-D) images which contain three-dimensional (3-D) information about organs. To utilize such information we need effective algorithms to treat 3-D digital images and to extract the organs and tissues of interest. We developed a method to extract the brain from MRI images which uses a region growing procedure and integrates information on the uniformity of gray levels with information on the presence of edge segments in the local area around the pixel of interest. First we generate a kernel region, which is a part of the brain tissue, by simple thresholding. Then we grow the region by means of a region growing algorithm under the control of 3-D edge existence to obtain the region of the brain. Our method is rather simple because it uses basic 3-D image processing techniques like spatial differences. It is robust against variation of gray levels inside a tissue since it also refers to edge information during region growing. Therefore, the method is flexible enough to be applicable to the segmentation of other images, including soft tissues which have complicated shapes and fluctuating gray levels. (author)
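
    A hedged sketch of edge-constrained 3-D region growing, assuming a `volume` array, a precomputed boolean `edge_mask` (e.g. from a 3-D spatial difference), a seed voxel inside the kernel region, and an illustrative intensity tolerance:

    ```python
    import numpy as np
    from collections import deque

    def grow_region(volume, edge_mask, seed, tol=30):
        """Grow a region from `seed`, blocked at voxels flagged as edges."""
        region = np.zeros(volume.shape, dtype=bool)
        region[seed] = True
        ref = float(volume[seed])
        queue = deque([seed])
        offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]  # 6-connectivity
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not region[n]:
                    if not edge_mask[n] and abs(float(volume[n]) - ref) < tol:
                        region[n] = True
                        queue.append(n)
        return region
    ```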

  17. Newer methods of extraction of teeth

    OpenAIRE

    MHendra Chandha

    2016-01-01

    Atraumatic extraction methods are deemed to be important to minimize alveolar bone loss after tooth extraction. With the advent of such techniques, exodontia is no more a dreaded procedure in anxious patients. Newer system and techniques for extraction of teeth have evolved in the recent few decades. This article reviews and discusses new techniques to make simple and complex exodontias more efficient with improved patient outcomes. This includes physics forceps, powered periotome, piezosurge...

  18. A simple method for automatic measurement of excitation functions

    International Nuclear Information System (INIS)

    Ogawa, M.; Adachi, M.; Arai, E.

    1975-01-01

    An apparatus has been constructed to perform the sequence control of a beam-analysing magnet for automatic excitation function measurements. This device is also applied to the feedback control of the magnet to lock the beam energy. (Auth.)

  19. Semi-automatic road extraction from very high resolution remote sensing imagery by RoadModeler

    Science.gov (United States)

    Lu, Yao

    Accurate and up-to-date road information is essential for both effective urban planning and disaster management. Today, very high resolution (VHR) imagery acquired by airborne and spaceborne imaging sensors is the primary source for the acquisition of spatial information on increasingly growing road networks. Given the increased availability of aerial and satellite images, it is necessary to develop computer-aided techniques to improve the efficiency and reduce the cost of road extraction tasks. Therefore, automation of image-based road extraction is a very active research topic. This thesis deals with the development and implementation aspects of a semi-automatic road extraction strategy, which includes two key approaches: multidirectional and single-direction road extraction. It requires a human operator to initialize a seed circle on a road and specify an extraction approach before the road is extracted by automatic algorithms using multiple vision cues. The multidirectional approach is used to detect roads with different materials, widths, intersection shapes, and degrees of noise, but it sometimes also interprets parking lots as road areas. Unlike the multidirectional approach, the single-direction approach can detect roads with few mistakes, but each seed circle can only be used to detect one road. In accordance with this strategy, a RoadModeler prototype was developed. Both aerial and GeoEye-1 satellite images of seven different types of scenes with various road shapes in rural, downtown, and residential areas were used to evaluate the performance of the RoadModeler. The experimental results demonstrated that the RoadModeler is reliable and easy to use by a non-expert operator, and that it performs much better than object-oriented classification. Its average road completeness, correctness, and quality achieved 94%, 97%, and 94%, respectively. These results are higher than those of Hu et al. (2007), which are 91%, 90%, and 85

  20. Automatic Parameter Extraction Technique for MOS Structures by C-V Characterization Including the Effects of Interface States

    Science.gov (United States)

    Ryazantsev, D. V.; Grudtsov, V. P.

    2016-10-01

    An automatic MOS structure parameter extraction algorithm accounting for quantum effects has been developed and applied in the semiconductor device analyzer Agilent B1500A. Parameter extraction is based on matching the experimental C-V data with numerical modeling results. The algorithm is used to extract the parameters of test MOS structures with ultrathin gate dielectrics. The applicability of the algorithm for the determination of distribution function of DOS and finding the donor defect level in silicon is shown.

  1. Automatic Parameter Extraction Technique for MOS Structures by C-V Characterization Including the Effects of Interface States

    Directory of Open Access Journals (Sweden)

    Ryazantsev D. V.

    2016-10-01

    Full Text Available An automatic MOS structure parameter extraction algorithm accounting for quantum effects has been developed and applied in the semiconductor device analyzer Agilent B1500A. Parameter extraction is based on matching the experimental C-V data with numerical modeling results. The algorithm is used to extract the parameters of test MOS structures with ultrathin gate dielectrics. The applicability of the algorithm for the determination of distribution function of DOS and finding the donor defect level in silicon is shown.

  2. Virgin almond oil: Extraction methods and composition

    Energy Technology Data Exchange (ETDEWEB)

    Roncero, J.M.; Alvarez-Orti, M.; Pardo-Gimenez, A.; Gomez, R.; Rabadan, A.; Pardo, J.E.

    2016-07-01

    In this paper the extraction methods for virgin almond oil and its chemical composition are reviewed. The most common methods for obtaining the oil are solvent extraction, extraction with supercritical fluids (CO2), and pressure systems (hydraulic and screw presses). The best industrial yield, but also the worst oil quality, is achieved by using solvents. Oils obtained by this method cannot be considered virgin oils as they are obtained by chemical treatments. Supercritical fluid extraction results in higher quality oils but at a very high price. Extraction by pressing is therefore the best option for achieving high quality oils at an affordable price. With regard to its chemical composition, almond oil is characterized by a low content of saturated fatty acids and the predominance of monounsaturated fatty acids, especially oleic acid. Furthermore, almond oil contains antioxidants and fat-soluble bioactive compounds that make it an oil with interesting nutritional and cosmetic properties.

  3. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    Directory of Open Access Journals (Sweden)

    A. Bellakaout

    2016-06-01

    Full Text Available Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information from the surface or terrain; LiDAR is becoming one of the important tools in the geosciences for studying objects and the earth's surface. Classification of LiDAR data to extract ground, vegetation, and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of different derived data for geographical information systems (GIS), mapping, navigation, etc. Regardless of what the scan data will be used for, an automatic process is greatly required to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items: buildings, trees, roads, linear objects, and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationships and height variation analysis are adopted to preliminarily segment the entire point cloud into upper and lower contours, uniform surfaces, non-uniform surfaces, linear objects, and others. This primary classification is used on the one hand to identify the upper and lower part of each building in an urban scene, needed to model building façades, and on the other hand to extract the point cloud of uniform surfaces, which contains roofs, roads, and ground, used in the second phase of classification. A second algorithm is developed to segment the uniform surfaces into building roofs, roads, and ground; this second phase of classification is also based on topological relationships and height variation analysis. The proposed approach has been tested on two areas: the first is a housing complex and the second is a primary school. The proposed approach led to successful classification of the building, vegetation, and road classes.

  4. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    Science.gov (United States)

    Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.

    2016-06-01

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information from the surface or terrain; LiDAR is becoming one of the important tools in the geosciences for studying objects and the earth's surface. Classification of LiDAR data to extract ground, vegetation, and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of different derived data for geographical information systems (GIS), mapping, navigation, etc. Regardless of what the scan data will be used for, an automatic process is greatly required to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items: buildings, trees, roads, linear objects, and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationships and height variation analysis are adopted to preliminarily segment the entire point cloud into upper and lower contours, uniform surfaces, non-uniform surfaces, linear objects, and others. This primary classification is used on the one hand to identify the upper and lower part of each building in an urban scene, needed to model building façades, and on the other hand to extract the point cloud of uniform surfaces, which contains roofs, roads, and ground, used in the second phase of classification. A second algorithm is developed to segment the uniform surfaces into building roofs, roads, and ground; this second phase of classification is also based on topological relationships and height variation analysis. The proposed approach has been tested on two areas: the first is a housing complex and the second is a primary school. The proposed approach led to successful classification of the building, vegetation, and road classes.

  5. A FUZZY AUTOMATIC CAR DETECTION METHOD BASED ON HIGH RESOLUTION SATELLITE IMAGERY AND GEODESIC MORPHOLOGY

    Directory of Open Access Journals (Sweden)

    N. Zarrinpanjeh

    2017-09-01

    Full Text Available Automatic car detection and recognition from aerial and satellite images is mostly practiced for the purpose of easy and fast traffic monitoring in cities and rural areas, where direct approaches have proved to be costly and inefficient. Towards the goal of automatic car detection, and in parallel with many other published solutions, in this paper morphological operators, specifically geodesic dilation, are studied and applied to GeoEye-1 images to extract car items in accordance with available vector maps. The results of geodesic dilation are then segmented and labeled to generate primitive car items, which are introduced to a fuzzy decision-making system to be verified. The verification inspects the major and minor axes of each region and the orientation of the cars with respect to the road direction. The proposed method is implemented and tested using GeoEye-1 pansharpened imagery. From the generated results, it is observed that the proposed method is successful, with an overall accuracy of 83%. It is also concluded that the results are sensitive to the quality of the available vector map; to overcome the shortcomings of this method, it is recommended to consider spectral information in the process of hypothesis verification.
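
    For illustration, geodesic (grayscale) reconstruction by dilation is available in scikit-image; the sketch below uses it for an h-dome-style extraction of bright, car-sized blobs restricted to a rasterized road mask. The file names, offsets, and size thresholds are assumptions, and the paper's fuzzy verification is reduced here to a simple axis-length check:

    ```python
    from skimage import io, morphology, measure

    pan = io.imread("geoeye_pan.tif").astype(float)      # hypothetical pansharpened band
    road_mask = io.imread("roads_rasterized.png") > 0    # rasterized vector map (assumed)

    marker = pan - 40.0                                  # assumed contrast offset (h-dome marker)
    reconstructed = morphology.reconstruction(marker, pan, method="dilation")  # geodesic dilation
    domes = pan - reconstructed                          # bright compact blobs (car candidates)
    domes[~road_mask] = 0                                # keep only candidates on mapped roads

    candidates = measure.label(domes > 10.0)             # assumed dome-height threshold
    cars = [r for r in measure.regionprops(candidates)
            if 4 < r.major_axis_length < 30 and 2 < r.minor_axis_length < 15]  # assumed car-like axes (px)
    ```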

  6. a Fuzzy Automatic CAR Detection Method Based on High Resolution Satellite Imagery and Geodesic Morphology

    Science.gov (United States)

    Zarrinpanjeh, N.; Dadrassjavan, F.

    2017-09-01

    Automatic car detection and recognition from aerial and satellite images is mostly practiced for the purpose of easy and fast traffic monitoring in cities and rural areas, where direct approaches have proved to be costly and inefficient. Towards the goal of automatic car detection, and in parallel with many other published solutions, in this paper morphological operators, specifically geodesic dilation, are studied and applied to GeoEye-1 images to extract car items in accordance with available vector maps. The results of geodesic dilation are then segmented and labeled to generate primitive car items, which are introduced to a fuzzy decision-making system to be verified. The verification inspects the major and minor axes of each region and the orientation of the cars with respect to the road direction. The proposed method is implemented and tested using GeoEye-1 pansharpened imagery. From the generated results, it is observed that the proposed method is successful, with an overall accuracy of 83%. It is also concluded that the results are sensitive to the quality of the available vector map; to overcome the shortcomings of this method, it is recommended to consider spectral information in the process of hypothesis verification.

  7. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications

    Directory of Open Access Journals (Sweden)

    Francesco Nex

    2009-05-01

    Full Text Available In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A2 SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.

  8. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications.

    Science.gov (United States)

    Lingua, Andrea; Marenchino, Davide; Nex, Francesco

    2009-01-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A(2) SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.

  9. Chemical name extraction based on automatic training data generation and rich feature set.

    Science.gov (United States)

    Yan, Su; Spangler, W Scott; Chen, Ying

    2013-01-01

    The automation of extracting chemical names from text has significant value to biomedical and life science research. A major barrier in this task is the difficulty of getting a sizable and good quality data to train a reliable entity extraction model. Another difficulty is the selection of informative features of chemical names, since comprehensive domain knowledge on chemistry nomenclature is required. Leveraging random text generation techniques, we explore the idea of automatically creating training sets for the task of chemical name extraction. Assuming the availability of an incomplete list of chemical names, called a dictionary, we are able to generate well-controlled, random, yet realistic chemical-like training documents. We statistically analyze the construction of chemical names based on the incomplete dictionary, and propose a series of new features, without relying on any domain knowledge. Compared to state-of-the-art models learned from manually labeled data and domain knowledge, our solution shows better or comparable results in annotating real-world data with less human effort. Moreover, we report an interesting observation about the language for chemical names. That is, both the structural and semantic components of chemical names follow a Zipfian distribution, which resembles many natural languages.

  10. AUTOMATIC ROOFTOP EXTRACTION IN STEREO IMAGERY USING DISTANCE AND BUILDING SHAPE REGULARIZED LEVEL SET EVOLUTION

    Directory of Open Access Journals (Sweden)

    J. Tian

    2017-05-01

    Full Text Available Automatic rooftop extraction is one of the most challenging problems in remote sensing image analysis. Classical 2D image processing techniques are expensive due to the high number of features required to locate buildings. This problem can be avoided when 3D information is available. In this paper, we show how to fuse the spectral and height information of stereo imagery to achieve efficient and robust rooftop extraction. In the first step, the digital terrain model (DTM), and in turn the normalized digital surface model (nDSM), is generated using a new step-edge approach. In the second step, the initial building locations and rooftop boundaries are derived by removing the low-level pixels and the high-level pixels with a higher probability of being trees or shadows. This boundary then serves as the initial level set function, which is further refined to fit the best possible boundaries through distance-regularized level-set curve evolution. During the fitting procedure, an edge-based active contour model is adopted and implemented using edge indicators extracted from the panchromatic image. The performance of the proposed approach is tested using WorldView-2 satellite data captured over Munich.

  11. A chest-shape target automatic detection method based on Deformable Part Models

    Science.gov (United States)

    Zhang, Mo; Jin, Weiqi; Li, Li

    2016-10-01

    Automatic weapon platforms are an important research direction at home and abroad; they need to search quickly for the object to be engaged against a complex background. Fast detection of a given target is therefore the foundation of further tasks. Considering that the chest-shape target is a common target in shooting practice, this paper treats the chest-shape target as the object of interest and studies an automatic target detection method based on Deformable Part Models. The algorithm computes Histograms of Oriented Gradients (HOG) features of the target and trains a model using a latent-variable Support Vector Machine (SVM); in this model, the target image is divided into several parts to obtain a root filter and part filters. Finally, the algorithm detects the target over the HOG feature pyramid with a sliding-window method. The running time for extracting the HOG pyramid can be shortened by 36% using a lookup table. The results indicate that this algorithm can detect the chest-shape target in natural environments, indoors or outdoors. The true positive rate of detection reaches 76% with many hard samples, and the false positive rate approaches 0. Running on a PC (Intel(R) Core(TM) i5-4200H CPU) with C++ code, the detection time for images with a resolution of 640 × 480 is 2.093 s. Given TI's runtime library support for image pyramids and convolution on the DM642 and other hardware, the detection algorithm is expected to be implementable on a hardware platform and has application prospects in actual systems.
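
    A hedged Python sketch of the sliding-window scoring loop (the paper's implementation is in C++), assuming a pre-trained linear SVM `clf` over HOG features and omitting the deformable part filters and pyramid levels of the full DPM model:

    ```python
    from skimage.feature import hog

    def detect(gray, clf, win=(128, 64), stride=16, thresh=0.5):
        """Score HOG features of each window with a linear SVM and keep detections."""
        detections = []
        H, W = gray.shape
        for y in range(0, H - win[0], stride):
            for x in range(0, W - win[1], stride):
                patch = gray[y:y + win[0], x:x + win[1]]
                f = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                        cells_per_block=(2, 2))
                score = clf.decision_function([f])[0]
                if score > thresh:
                    detections.append((x, y, score))
        return detections
    ```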

  12. Deep Learning Methods for Underwater Target Feature Extraction and Recognition

    Directory of Open Access Journals (Sweden)

    Gang Hu

    2018-01-01

    Full Text Available The classification and recognition of underwater acoustic signals have always been important research topics in the field of underwater acoustic signal processing. Currently, wavelet transforms, the Hilbert-Huang transform, and Mel frequency cepstral coefficients are used for underwater acoustic signal feature extraction. In this paper, a method for feature extraction and identification of underwater noise data based on CNN and ELM is proposed. An automatic feature extraction method for underwater acoustic signals is proposed using a deep convolutional network, and an underwater target recognition classifier is based on an extreme learning machine. Although convolutional neural networks can perform both feature extraction and classification, their classification function mainly relies on a fully connected layer trained by gradient descent; its generalization ability is limited and suboptimal, so an extreme learning machine (ELM) was used in the classification stage. First, the CNN learns deep and robust features, after which the fully connected layers are removed. Then an ELM fed with the CNN features is used as the classifier to conduct the classification. Experiments on an actual dataset of civil ships achieved a 93.04% recognition rate; compared to traditional Mel frequency cepstral coefficients and Hilbert-Huang features, the recognition rate is greatly improved.
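
    A minimal sketch of the ELM classification stage, assuming `cnn_features` is an (n_samples, d) array exported from the truncated CNN and `labels_onehot` the one-hot targets; the hidden-layer size and regularization are illustrative:

    ```python
    import numpy as np

    class ELM:
        def __init__(self, n_hidden=512, seed=0):
            self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

        def fit(self, X, y_onehot, reg=1e-3):
            d = X.shape[1]
            self.W = self.rng.normal(size=(d, self.n_hidden))   # random input weights (kept fixed)
            self.b = self.rng.normal(size=self.n_hidden)
            H = np.tanh(X @ self.W + self.b)                     # hidden-layer activations
            # Output weights by regularized least squares -- no gradient descent.
            self.beta = np.linalg.solve(H.T @ H + reg * np.eye(self.n_hidden), H.T @ y_onehot)
            return self

        def predict(self, X):
            return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

    # Usage (assumed data): elm = ELM().fit(cnn_features, labels_onehot)
    #                       predictions = elm.predict(test_features)
    ```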

  13. AN ITERATIVE SEGMENTATION METHOD FOR REGION OF INTEREST EXTRACTION

    Directory of Open Access Journals (Sweden)

    Volkan CETIN

    2013-01-01

    Full Text Available In this paper, a method is presented for applications that include mammographic image segmentation and region of interest extraction. Segmentation is a very critical and difficult stage to accomplish in computer-aided detection systems. Although the presented segmentation method was developed for mammographic images, it can be used for any medical image that shares the same statistical characteristics as mammograms. Fundamentally, the method consists of iterative automatic thresholding and masking operations, which are applied to the original or enhanced mammograms. The effect of image enhancement on the segmentation process was also observed; a version of histogram equalization was applied to the images for enhancement. Finally, the results show that the enhanced version of the proposed segmentation method is preferable because of its better success rate.
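
    One simple way to realize "iterative automatic thresholding and masking" is to re-estimate a threshold only on the pixels still inside the current mask and tighten the mask at each pass. The sketch below uses Otsu's criterion and a fixed number of iterations as assumptions; it is an interpretation of the idea, not the paper's exact procedure.

```python
import numpy as np
from skimage.filters import threshold_otsu

def iterative_roi_mask(image, n_iter=3):
    """Iteratively threshold and mask an image to isolate a bright region of interest."""
    mask = np.ones(image.shape, dtype=bool)
    for _ in range(n_iter):
        if not mask.any():
            break
        t = threshold_otsu(image[mask])   # threshold estimated only inside the current mask
        mask = mask & (image > t)         # keep the brighter region (e.g. breast tissue)
    return mask
```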

  14. A New Dataset of Automatically Extracted Structure of Arms and Bars in Spiral Galaxies

    Science.gov (United States)

    Hayes, Wayne B.; Davis, D.

    2012-05-01

    We present an algorithm capable of automatically extracting quantitative structure (bars and arms) from images of spiral galaxies. We have run the algorithm on 30,000 galaxies and compared the results to human classifications generously provided pre-publication by the Galaxy Zoo 2 team. In all available measures, our algorithm agrees with the humans about as well as they agree with each other. In addition we provide objective, quantitative measures not available in human classifications. We provide a preliminary analysis of this dataset to see how the properties of arms and bars vary as a function of basic variables such as environment, redshift, absolute magnitude, and color. We also show how structure can vary across wavebands as well as along and across individual arms and bars. Finally, we present preliminary results of a measurement of the total angular momentum present in our observed set of galaxies with an eye towards determining if there is a preferred "handedness" in the universe.

  15. R and D on automatic modeling methods for Monte Carlo codes FLUKA

    International Nuclear Information System (INIS)

    Wang Dianxi; Hu Liqin; Wang Guozhong; Zhao Zijia; Nie Fanzhi; Wu Yican; Long Pengcheng

    2013-01-01

    FLUKA is a fully integrated particle physics Monte Carlo simulation package. It is necessary to create geometry models before calculation; however, describing geometry models manually is time-consuming and error-prone. This study developed an automatic modeling method which can automatically convert computer-aided design (CAD) geometry models into FLUKA models. The conversion program was integrated into the CAD/image-based automatic modeling program for nuclear and radiation transport simulation (MCAM). Its correctness has been demonstrated. (authors)

  16. An Automatic Unpacking Method for Computer Virus Effective in the Virus Filter Based on Paul Graham's Bayesian Theorem

    Science.gov (United States)

    Zhang, Dengfeng; Nakaya, Naoshi; Koui, Yuuji; Yoshida, Hitoaki

    Recently, the appearance frequency of computer virus variants has increased. Updating virus information with the normal pattern-matching method is increasingly unable to keep up with the speed at which viruses appear, since it takes time to extract the characteristic patterns for each virus. Therefore, a rapid, automatic virus detection algorithm using static code analysis is necessary. However, recent computer viruses are almost always compressed and obfuscated, and it is difficult to determine the characteristics of the binary code from obfuscated computer viruses. Therefore, this paper proposes a method that unpacks compressed computer viruses automatically, independent of the compression format. The proposed method unpacks the common compression formats accurately 80% of the time, and unknown compression formats can also be unpacked. The proposed method is effective against unknown viruses when combined with an existing known-virus detection system such as Paul Graham's Bayesian virus filter.

  17. An automatic and accurate method of full heart segmentation from CT image based on linear gradient model

    Science.gov (United States)

    Yang, Zili

    2017-07-01

    Heart segmentation is an important auxiliary method in the diagnosis of many heart diseases, such as coronary heart disease and atrial fibrillation, and in the planning of tumor radiotherapy. Most existing methods for full heart segmentation treat the heart as a whole and cannot accurately extract the bottom of the heart. In this paper, we propose a new method based on a linear gradient model to segment the whole heart from CT images automatically and accurately. Twelve cases were used to evaluate the method; accurate segmentation results were achieved and confirmed by clinical experts. The results can provide reliable clinical support.

  18. Method of automatic detection of tumors in mammogram

    Science.gov (United States)

    Xie, Mei; Ma, Zheng

    2001-09-01

    Prevention and early diagnosis of tumors in mammograms are of foremost importance. Unfortunately, these images are often corrupted by film noise and the background texture of the images, which prevents isolation of the target information from the background and often causes the suspicious area to be analyzed inaccurately. In order to achieve more accurate detection and segmentation of tumors, the quality of the images needs to be improved, including suppressing noise and enhancing contrast. This paper presents a new adaptive histogram threshold approach for segmentation of suspicious mass regions in digitized images. The method uses multi-scale wavelet decomposition and a threshold selection criterion based on a transformed image's histogram. This separation can help eliminate background noise and discriminate objects of different sizes and shapes. The tumors are extracted using an adaptive Bayesian classifier. We demonstrate that the proposed method can greatly improve the accuracy of tumor detection.
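
    A minimal illustration of the "multi-scale wavelet decomposition plus histogram-based threshold" idea is given below, using PyWavelets and Otsu's criterion as a stand-in for the paper's adaptive histogram threshold. The wavelet family, decomposition level and thresholding rule are assumptions made for the sketch.

```python
import numpy as np
import pywt
from skimage.filters import threshold_otsu

def suspicious_region_mask(image, wavelet="db2", level=3):
    """Segment candidate mass regions from a coarse (multi-scale) version of a mammogram."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Keep the coarse approximation; zero the detail sub-bands to suppress noise/texture.
    coeffs = [coeffs[0]] + [tuple(np.zeros_like(d) for d in lvl) for lvl in coeffs[1:]]
    smooth = pywt.waverec2(coeffs, wavelet)[:image.shape[0], :image.shape[1]]
    return smooth > threshold_otsu(smooth)   # threshold picked from the transformed image's histogram
```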

  19. Automatic extraction of soft tissues from 3D MRI head images using model driven analysis

    International Nuclear Information System (INIS)

    Jiang, Hao; Yamamoto, Shinji; Imao, Masanao.

    1995-01-01

    This paper presents an automatic extraction system (called TOPS-3D: Top Down Parallel Pattern Recognition System for 3D Images) for soft tissues from 3D MRI head images using a model-driven analysis algorithm. As in the construction of the system TOPS we developed previously, two concepts were considered in the design of TOPS-3D. One is a hierarchical reasoning structure that uses model information at a higher level; the other is a parallel image-processing structure used to extract plural candidate regions for a destination entity. The new points of TOPS-3D are as follows. (1) TOPS-3D is a three-dimensional image analysis system including 3D model construction and 3D image processing techniques. (2) A technique is proposed to increase connectivity between knowledge processing at the higher level and image processing at the lower level. The technique is realized by applying the opening operation of mathematical morphology, in which a structural model function defined at the higher level by knowledge representation is used directly as the filter function of the opening operation in the lower-level image processing. The system TOPS-3D applied to 3D MRI head images consists of three levels: the first and second levels form the reasoning part, and the third level is the image-processing part. In experiments, we applied 5 samples of 3D MRI head images of size 128 x 128 x 128 voxels to TOPS-3D to extract the regions of soft tissues such as the cerebrum, cerebellum and brain stem. The experimental results show that the system is robust to variation of the input data thanks to the model information, and that the position and shape of the soft tissues are extracted in correspondence with the anatomical structure. (author)
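
    The link between the knowledge level and the image-processing level, using a model-derived structuring element as the filter of a morphological opening, can be illustrated as follows. The spherical footprint and its radius are hypothetical stand-ins for the paper's structural model function; this is a sketch, not the TOPS-3D implementation.

```python
import numpy as np
from scipy import ndimage

def ball_structuring_element(radius):
    """Spherical footprint standing in for a structural model function
    defined at the knowledge level (the radius is a hypothetical model value)."""
    z, y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1, -radius:radius + 1]
    return (x**2 + y**2 + z**2) <= radius**2

def model_driven_opening(volume, radius=5):
    """Grey-scale opening of a 3D MR volume with the model-derived footprint,
    suppressing structures thinner than the modelled tissue."""
    return ndimage.grey_opening(volume, footprint=ball_structuring_element(radius))
```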

  20. Automatic detecting method of LED signal lamps on fascia based on color image

    Science.gov (United States)

    Peng, Xiaoling; Hou, Wenguang; Ding, Mingyue

    2009-10-01

    Instrument display panels are one of the most important parts of automobiles. Automatic detection of LED signal lamps is critical to ensure the reliability of automobile systems. In this paper, an automatic detection method was developed which inspects three aspects: the shape of the LED lamps, the color of the LED lamps, and defect spots inside the lamps. Hundreds of fascias were inspected with the automatic detection algorithm. The algorithm is fast enough to satisfy the real-time requirements of the system, and the detection results were demonstrated to be stable and accurate.

  1. Newer methods of extraction of teeth

    Directory of Open Access Journals (Sweden)

    MHendra Chandha

    2016-06-01

    Full Text Available Atraumatic extraction methods are deemed important to minimize alveolar bone loss after tooth extraction. With the advent of such techniques, exodontia is no longer a dreaded procedure for anxious patients. Newer systems and techniques for the extraction of teeth have evolved in recent decades. This article reviews and discusses new techniques that make simple and complex exodontia more efficient, with improved patient outcomes. These include physics forceps, the powered periotome, piezosurgery, the Benex extractor, sonic instruments for bone surgery, and lasers.

  2. Automatic Extraction of Road Surface and Curbstone Edges from Mobile Laser Scanning Data

    Science.gov (United States)

    Miraliakbari, A.; Hahn, M.; Sok, S.

    2015-05-01

    We present a procedure for automatic extraction of the road surface from geo-referenced mobile laser scanning data. The basic assumption of the procedure is that the road surface is smooth and bounded by curbstones. Two variants of jump detection are investigated for detecting curbstone edges, one based on height differences and the other based on histograms of the height data. Region growing algorithms are proposed which use the irregular laser point cloud. Two- and four-neighbourhood growing strategies utilize the two height criteria for examining the neighbourhood. Both height criteria rely on an assumption about the minimum height of a low curbstone. Road boundaries with lower or no jumps will not stop the region growing process; in contrast, objects on the road can terminate it. Therefore further processing, such as bridging gaps between detected road boundary points and removing wrongly detected curbstone edges, is necessary. Road boundaries are finally approximated by splines. Experiments are carried out on a ca. 2 km network of small streets located in the neighbourhood of the University of Applied Sciences in Stuttgart. For accuracy assessment of the extracted road surfaces, ground truth measurements are digitized manually from the laser scanner data. Completeness and correctness values of the region growing results between 92% and 95% are achieved.
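
    A simplified version of the height-difference criterion and four-neighbourhood region growing is sketched below on a gridded height map. The paper works on the irregular point cloud, so this raster version and the assumed ~7 cm minimum curb height are illustrative only.

```python
import numpy as np
from collections import deque

def grow_road_surface(height, seed, max_step=0.07):
    """4-neighbourhood region growing over a gridded height map.

    Growth stops where the height difference to a neighbour exceeds `max_step`
    (an assumed minimum curbstone height); `seed` is a (row, col) cell on the road.
    """
    rows, cols = height.shape
    road = np.zeros((rows, cols), dtype=bool)
    queue = deque([seed])
    road[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not road[nr, nc]:
                if abs(height[nr, nc] - height[r, c]) < max_step:   # no curbstone jump
                    road[nr, nc] = True
                    queue.append((nr, nc))
    return road
```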

  3. Automatically extracting sentences from Medline citations to support clinicians' information needs.

    Science.gov (United States)

    Jonnalagadda, Siddhartha Reddy; Del Fiol, Guilherme; Medlin, Richard; Weir, Charlene; Fiszman, Marcelo; Mostafa, Javed; Liu, Hongfang

    2013-01-01

    Online health knowledge resources contain answers to most of the information needs raised by clinicians in the course of care. However, significant barriers limit the use of these resources for decision-making, especially clinicians' lack of time. In this study we assessed the feasibility of automatically generating knowledge summaries for a particular clinical topic composed of relevant sentences extracted from Medline citations. The proposed approach combines information retrieval and semantic information extraction techniques to identify relevant sentences from Medline abstracts. We assessed this approach in two case studies on the treatment alternatives for depression and Alzheimer's disease. A total of 515 of 564 (91.3%) sentences retrieved in the two case studies were relevant to the topic of interest. About one-third of the relevant sentences described factual knowledge or a study conclusion that can be used for supporting information needs at the point of care. The high rate of relevant sentences is desirable, given that clinicians' lack of time is one of the main barriers to using knowledge resources at the point of care. Sentence rank was not significantly associated with relevancy, possibly due to most sentences being highly relevant. Sentences located closer to the end of the abstract and sentences with treatment and comparative predications were likely to be conclusive sentences. Our proposed technical approach to helping clinicians meet their information needs is promising. The approach can be extended for other knowledge resources and information need types.

  4. Extractive spectrophotometric method for the determination of ...

    African Journals Online (AJOL)

    In view of the potential hazards associated with the widespread use of the carbaryl insecticide, a new, simple extractive spectrophotometric method has been developed for its determination in environmental samples, viz. soil, water and foodstuffs, for its safer and more effective use. The proposed method is based on the ...

  5. Automatic spot preparation and image processing of paper microzone-based assays for analysis of bioactive compounds in plant extracts.

    Science.gov (United States)

    Vaher, M; Borissova, M; Seiman, A; Aid, T; Kolde, H; Kazarjan, J; Kaljurand, M

    2014-01-15

    The colorimetric determination of the concentration of phytochemicals in plant extract samples using an automatic spotting system, a mobile phone camera and a computer with custom software for quantification is described. Method automation was achieved by using a robotic system for spotting. The instrument was set to dispense the appropriate aliquots of the reagents and sample onto a Whatman paper sheet. Spots were photographed and analysed by ImageJ software or by applying the developed MATLAB-based algorithm. The developed assay was found to be effective, with a linear response in the concentration range of 0.03-0.25 g/L for polyphenols. The detection limit of the proposed method is below 0.03 g/L. Paper microzone-based assays for flavonoids and amino acids/peptides were also developed and evaluated as applicable. Comparison with conventional PμZP methods demonstrates that both methods yield similar results. At the same time, the proposed method has an attractive advantage in analysis time and repeatability/reproducibility. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Automatic Extraction of Small Spatial Plots from Geo-Registered UAS Imagery

    Science.gov (United States)

    Cherkauer, Keith; Hearst, Anthony

    2015-04-01

    Accurate extraction of spatial plots from high-resolution imagery acquired by Unmanned Aircraft Systems (UAS) is a prerequisite for accurate assessment of experimental plots in many geoscience fields. If the imagery is correctly geo-registered, then it may be possible to accurately extract plots from the imagery based on their map coordinates. To test this approach, a UAS was used to acquire visual imagery of 5 ha of soybean fields containing 6.0 m2 plots in a complex planting scheme. Sixteen artificial targets were set up in the fields before the flights, and different spatial configurations of 0 to 6 targets were used as Ground Control Points (GCPs) for geo-registration, resulting in a total of 175 geo-registered image mosaics with a broad range of geo-registration accuracies. Geo-registration accuracy was quantified based on the horizontal Root Mean Squared Error (RMSE) of targets used as checkpoints. Twenty test plots were extracted from the geo-registered imagery. Plot extraction accuracy was quantified as the percentage of the desired plot area that was extracted. It was found that using 4 GCPs along the perimeter of the field minimized the horizontal RMSE and enabled a plot extraction accuracy of at least 70%, with a mean plot extraction accuracy of 92%. The methods developed are suitable for work in many fields where replicates across time and space are necessary to quantify variability.

  7. Automaticity of multiplication facts with cognitive behavioral method

    OpenAIRE

    Ferlin, Sara

    2017-01-01

    Slovenian students are achieving good results in math, yet their attitude toward the subject remains negative. The automaticity of multiplication facts is one of the main learning objectives in 4th grade math. If a student does not automate multiplication, he or she may solve assignments at a slower rate and make mistakes in the process. Failure may contribute to a change in their attitude toward multiplication and, later on, math. This can be avoided by effectively addressing the issue. On ...

  8. Extracting contextual information in digital imagery: applications to automatic target recognition and mammography

    Science.gov (United States)

    Spence, Clay D.; Sajda, Paul; Pearson, John C.

    1996-02-01

    An important problem in image analysis is finding small objects in large images. The problem is challenging because (1) searching a large image is computationally expensive, and (2) small targets (on the order of a few pixels in size) have relatively few distinctive features which enable them to be distinguished from non-targets. To overcome these challenges we have developed a hierarchical neural network (HNN) architecture which combines multi-resolution pyramid processing with neural networks. The advantages of the architecture are: (1) both neural network training and testing can be done efficiently through coarse-to-fine techniques, and (2) such a system is capable of learning low-resolution contextual information to facilitate the detection of small target objects. We have applied this neural network architecture to two problems in which contextual information appears to be important for detecting small targets. The first problem is one of automatic target recognition (ATR), specifically the problem of detecting buildings in aerial photographs. The second problem focuses on a medical application, namely searching mammograms for microcalcifications, which are cues for breast cancer. Receiver operating characteristic (ROC) analysis suggests that the hierarchical architecture improves the detection accuracy for both the ATR and microcalcification detection problems, reducing false positive rates by a significant factor. In addition, we have examined the hidden units at various levels of the processing hierarchy and found what appears to be representations of road location (for the ATR example) and ductal/vasculature location (for mammography), both of which are in agreement with the contextual information used by humans to find these classes of targets. We conclude that this hierarchical neural network architecture is able to automatically extract contextual information in imagery and utilize it for target detection.

  9. CURRENT STATE ANALYSIS OF AUTOMATIC BLOCK SYSTEM DEVICES, METHODS OF ITS SERVICE AND MONITORING

    Directory of Open Access Journals (Sweden)

    A. M. Beznarytnyy

    2014-01-01

    Full Text Available Purpose. To develop a formalized description of the numerical-code automatic block system based on an analysis of its characteristic failures and of its maintenance procedure. Methodology. Theoretical and analytical methods were used for this research. Findings. Typical failures of automatic block systems were analyzed and the basic reasons for their occurrence were identified. It was determined that the majority of failures occur due to defects in the maintenance system. Advantages and disadvantages of the current service technology for the automatic block system were analyzed, and the tasks that can be automated by means of technical diagnostics were identified. A formal description of the numerical-code automatic block system as a graph in the state space of the system was developed. Originality. A state graph of the numerical-code automatic block system that takes into account the gradual transition from the serviceable condition to loss of efficiency was proposed. It allows selecting diagnostic information according to attributes and increasing the effectiveness of recovery operations in case of a malfunction. Practical value. The results of the analysis and the proposed state graph can be used as the basis for the development of new diagnostic devices for the automatic block system, which in turn will improve the efficiency and maintenance of automatic block system devices in general.

  10. eCTG: an automatic procedure to extract digital cardiotocographic signals from digital images.

    Science.gov (United States)

    Sbrollini, Agnese; Agostinelli, Angela; Marcantoni, Ilaria; Morettini, Micaela; Burattini, Luca; Di Nardo, Francesco; Fioretti, Sandro; Burattini, Laura

    2018-03-01

    Cardiotocography (CTG), consisting of the simultaneous recording of fetal heart rate (FHR) and maternal uterine contractions (UC), is a popular clinical test to assess fetal health status. Typically, CTG machines provide paper reports that are visually interpreted by clinicians. Consequently, visual CTG interpretation depends on the clinician's experience and has poor reproducibility. The lack of databases containing digital CTG signals has limited the number and importance of retrospective studies aimed at setting up procedures for automatic CTG analysis that could counteract the subjectivity of visual CTG interpretation. To help overcome this problem, this study proposes an electronic procedure, termed eCTG, to extract digital CTG signals from digital CTG images, possibly obtainable by scanning paper CTG reports. eCTG was specifically designed to extract digital CTG signals from digital CTG images. It includes four main steps: pre-processing, Otsu's global thresholding, signal extraction and signal calibration. It was validated on the "CTU-UHB Intrapartum Cardiotocography Database" by Physionet, which contains digital signals of 552 CTG recordings. Using MATLAB, each signal was plotted and saved as a digital image that was then submitted to eCTG. Digital CTG signals extracted by eCTG were eventually compared to the corresponding signals directly available in the database. The comparison was made in terms of signal similarity (evaluated by the correlation coefficient ρ and the mean signal error MSE) and clinical features (including FHR baseline and variability; number, amplitude and duration of tachycardia, bradycardia, acceleration and deceleration episodes; number of early, variable, late and prolonged decelerations; and UC number, amplitude, duration and period). The value of ρ between eCTG and reference signals was 0.85 (P < 10^-560) for FHR and 0.97 (P < 10^-560) for UC. On average, the MSE value was 0.00 for both FHR and UC. No CTG feature
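
    The thresholding-plus-extraction idea can be illustrated with a column-wise trace reader: after Otsu's global thresholding, each image column is reduced to the mean row index of its trace pixels and mapped to physical units using chart calibration values. This is a simplified stand-in for the eCTG pipeline, with hypothetical calibration arguments, not the published implementation.

```python
import numpy as np
from skimage.filters import threshold_otsu

def trace_from_image(gray, y_max_value, y_min_value):
    """Recover a 1-D signal from a scanned trace image (dark ink on light paper).

    y_max_value / y_min_value: physical values at the top and bottom of the chart
    (assumed to be known from the grid calibration).
    """
    binary = gray < threshold_otsu(gray)            # Otsu's global threshold
    rows, cols = binary.shape
    signal = np.full(cols, np.nan)
    for c in range(cols):
        idx = np.flatnonzero(binary[:, c])
        if idx.size:
            frac = idx.mean() / (rows - 1)          # 0 = top of chart, 1 = bottom
            signal[c] = y_max_value - frac * (y_max_value - y_min_value)
    return signal
```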

  11. Automatic methods for alveolar bone loss degree measurement in periodontitis periapical radiographs.

    Science.gov (United States)

    Lin, P L; Huang, P Y; Huang, P W

    2017-09-01

    Periodontitis involves progressive loss of alveolar bone around the teeth. Hence, automatic alveolar bone loss measurement in periapical radiographs can assist dentists in diagnosing the disease. In this paper, we propose an automatic length-based alveolar bone loss measurement system with emphasis on a cementoenamel junction (CEJ) localization method, CEJ_LG. The bone loss measurement system first adopts the methods TSLS and ABLifBm, which we presented previously, to extract tooth contours and bone loss areas from periodontitis radiograph images. It then applies the proposed methods to locate the positions of the CEJ, the alveolar crest (ALC), and the apex of the tooth root (APEX). Finally, the system computes the ratio of the distance between the CEJ and ALC to the distance between the CEJ and APEX as the degree of bone loss for that tooth. The method CEJ_LG first obtains the gradient of the tooth image and then detects the border between the lower enamel and dentin (EDB) from the gradient image. Finally, the method identifies a point on the tooth contour that is horizontally closest to the EDB. Experimental results on 18 tooth images segmented from 12 periodontitis periapical radiographs, including 8 views of upper-jaw teeth and 10 views of lower-jaw teeth, show that 53% of the localized CEJs are within 3 pixels deviation (∼0.15 mm) from the positions marked by dentists and 90% have a deviation of less than 9 pixels (∼0.44 mm). For the degree of alveolar bone loss, more than half of the measurements using our system deviate less than 10% from the ground truth, and all measurements are within 25% deviation from the ground truth. Our results suggest that the proposed automatic system can effectively estimate the degree of horizontal alveolar bone loss in periodontitis radiograph images. We believe that our proposed system, if implemented in routine clinical practice, can serve as a valuable tool for early and accurate
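
    Once the three landmarks are located, the final ratio is a simple distance computation; a minimal sketch follows, assuming the CEJ, ALC and APEX positions are given as pixel coordinates.

```python
import numpy as np

def bone_loss_degree(cej, alc, apex):
    """Length-based bone loss: distance(CEJ, ALC) / distance(CEJ, APEX).

    Points are (row, col) pixel coordinates; a value near 0 means little loss
    and a value near 1 means loss extending down to the root apex.
    """
    cej, alc, apex = map(np.asarray, (cej, alc, apex))
    return np.linalg.norm(alc - cej) / np.linalg.norm(apex - cej)
```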

  12. Calibration of three rainfall simulators with automatic measurement methods

    Science.gov (United States)

    Roldan, Margarita

    2010-05-01

    Rainfall erosivity is the potential ability of rain to cause erosion. It is a function of the physical characteristics of rainfall (Hudson, 1971). Most expressions describing erosivity are related to kinetic energy or momentum, and thus to drop mass or size and fall velocity. Therefore, research on the factors determining erosivity leads to the need to study the relation between fall height and fall velocity for different drop sizes generated in a rainfall simulator (Epema G.F. and Riezebos H.Th., 1983). Rainfall simulators are one of the most widely used tools in erosion studies and are used to determine fall velocity and drop size, since they allow repeated and multiple measurements. The main reason for using rainfall simulation as a research tool is to reproduce in a controlled way the behaviour expected in the natural environment. However, on many occasions when simulated rain is compared with natural rain, there is a lack of correspondence between the two, which can cast doubt on the validity of the data because the characteristics of natural rain are not adequately represented in rainfall simulation research (Dunkerley D., 2008). Rainfall simulations often have high rain rates that do not resemble natural rain events, and such measurements are not comparable. Besides, the intensity is related to the kinetic energy which

  13. A novel method for automatically locating the pylorus in the wireless capsule endoscopy.

    Science.gov (United States)

    Zhou, Shangbo; Yang, Han; Siddique, Muhammad Abubakar; Xu, Jie; Zhou, Ping

    2017-02-01

    Wireless capsule endoscopy (WCE) is a non-invasive technique used to examine the interior of the digestive tract. Generally, the digestive tract can be divided into four segments: the entrance, the stomach, the small intestine, and the large intestine. The stomach and the small intestine have a higher risk of infection than the other segments. In order to locate a diseased organ, an appropriate classification of the WCE images is necessary. In this article, a novel method is proposed for automatically locating the pylorus in WCE. The location of the pylorus is determined on two levels: a rough level and a refined level. At the rough level, a short-term color change at the boundary between the stomach and intestine helps find approximately 70-150 candidate positions. At the refined level, an improved Weber local descriptor (WLD) feature extraction method is designed for gray-scale images. Compared to the original WLD calculation, the method for computing the differential excitation is improved to give a higher level of robustness. A K-nearest neighbor (KNN) classifier is incorporated to segment the images around the approximate position into different regions. The proposed algorithm locates the three most probable positions of the pylorus that were marked by the clinician. The experimental results indicate that the proposed method is effective.

  14. Automatic Detection of Microaneurysms in Color Fundus Images using a Local Radon Transform Method

    Directory of Open Access Journals (Sweden)

    Hamid Reza Pourreza

    2009-03-01

    Full Text Available Introduction: Diabetic retinopathy (DR) is one of the most serious and most frequent eye diseases in the world and the most common cause of blindness in adults between 20 and 60 years of age. After 15 years of diabetes, about 2% of diabetic patients are blind and 10% suffer from vision impairment due to DR complications. This paper addresses the automatic detection of microaneurysms (MA) in color fundus images, which plays a key role in computer-assisted early diagnosis of diabetic retinopathy. Materials and Methods: The algorithm can be divided into three main steps. The purpose of the first step, pre-processing, is background normalization and contrast enhancement of the images. The second step aims to detect candidates, i.e., all patterns possibly corresponding to MA, which is achieved using a local Radon transform. Then, features are extracted, which are used in the last step to automatically classify the candidates into real MA or other objects using the SVM method. A database of 100 annotated images was used to test the algorithm, and the algorithm was compared to manually obtained gradings of these images. Results: The sensitivity of diagnosis for DR was 100%, with a specificity of 90%, and the sensitivity of precise MA localization was 97%, with an average of 5 false positives per image. Discussion and Conclusion: The sensitivity and specificity of this algorithm make it one of the best methods in this field. Using the local Radon transform in this algorithm reduces the noise sensitivity of MA detection in retinal image analysis.
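
    The local Radon transform can be used to characterize how directional a small fundus patch is: vessel segments respond strongly at one orientation, whereas microaneurysm-like blobs respond nearly equally at all orientations. The sketch below computes such a directionality measure; the patch size, angle sampling and the way the measure would be thresholded are assumptions, not the paper's exact candidate-detection rule.

```python
import numpy as np
from skimage.transform import radon

def radon_directionality(patch, n_angles=36):
    """Directional response of a small fundus sub-image via the Radon transform.

    Returns a ratio close to 1 for isotropic (MA-like) blobs and much larger
    for elongated structures such as vessel segments.
    """
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(patch - patch.mean(), theta=theta, circle=False)
    profile = np.abs(sinogram).max(axis=0)           # peak response per projection angle
    return profile.max() / (profile.mean() + 1e-9)
```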

  15. Recent developments in automatic solid-phase extraction with renewable surfaces exploiting flow-based approaches

    DEFF Research Database (Denmark)

    Miró, Manuel; Hartwell, Supaporn Kradtap; Jakmunee, Jaroon

    2008-01-01

    Solid-phase extraction (SPE) is the most versatile sample-processing method for removal of interfering species and/or analyte enrichment. Although significant advances have been made over the past two decades in automating the entire analytical protocol involving SPE via flow-injection approaches...

  16. Comparison of pyrethrins extraction methods efficiencies

    African Journals Online (AJOL)


    2010-05-03

    ... extraction treatment using solvents with lower cost and toxicity, and an adequate method for the identification and separation of the active compounds (pyrethrins), with possible application in enterprises or industry. ... HPLC, normal-phase high-performance liquid chromatography; GC, gas-liquid ...

  17. Automatic Object-Oriented, Spectral-Spatial Feature Extraction Driven by Tobler’s First Law of Geography for Very High Resolution Aerial Imagery Classification

    Directory of Open Access Journals (Sweden)

    Zhiyong Lv

    2017-03-01

    Full Text Available Aerial image classification has become popular and has attracted extensive research efforts in recent decades. The main challenge lies in the very high spatial resolution but relatively insufficient spectral information of such imagery. To this end, spatial-spectral feature extraction is a popular strategy for classification. However, parameter determination for such feature extraction is usually time-consuming and depends excessively on experience. In this paper, an automatic spatial feature extraction approach based on cross-analysis of image raster and segment vector data is proposed for the classification of very high spatial resolution (VHSR) aerial imagery. First, multi-resolution segmentation is used to generate strongly homogeneous image objects and to extract the corresponding vectors. Then, to automatically explore the region of a ground target, two rules, derived from Tobler's First Law of Geography (TFL) and a topological relationship of the vector data, are integrated to constrain the extension of a region around a central object. Third, the shape and size of the extended region are described. A final classification map is obtained through a supervised classifier using shape, size, and spectral features. Experiments on three real aerial images of VHSR (0.1 to 0.32 m) are conducted to evaluate the effectiveness and robustness of the proposed approach. Comparisons with state-of-the-art methods demonstrate the superiority of the proposed method in VHSR image classification.

  18. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    Directory of Open Access Journals (Sweden)

    Fasahat Ullah Siddiqui

    2016-07-01

    Full Text Available Existing automatic building extraction methods are not effective in extracting buildings that are small in size or have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation; the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane, whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes, with transparent or opaque roofs, can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of the proposed method does not require any manual setting, and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state

  19. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery.

    Science.gov (United States)

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-07-19

    Existing automatic building extraction methods are not effective in extracting buildings that are small in size or have transparent roofs. The application of a large area threshold prohibits detection of small buildings, and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation; the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane, whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes, with transparent or opaque roofs, can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of the proposed method does not require any manual setting, and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state-of-the-art building

  20. Automatic crack detection method for loaded coal in vibration failure process.

    Directory of Open Access Journals (Sweden)

    Chengwu Li

    Full Text Available In the coal mining process, the destabilization of a loaded coal mass is a prerequisite for coal and rock dynamic disasters, and surface cracks of the coal and rock mass are important indicators reflecting the current state of the coal body. The detection of surface cracks in the coal body therefore plays an important role in coal mine safety monitoring. In this paper, a method for detecting the surface cracks of loaded coal during a vibration failure process is proposed, based on the characteristics of the surface cracks of coal and a support vector machine (SVM). A large number of crack images were obtained by establishing a vibration-induced failure test system with an industrial camera. Histogram equalization and a hysteresis threshold algorithm were used to reduce the noise and emphasize the cracks; then, 600 images and regions, including cracks and non-cracks, were manually labelled. In the crack feature extraction stage, eight features of the cracks are extracted to distinguish cracks from other objects. Finally, a crack identification model with an accuracy over 95% was trained by inputting the labelled sample images into the SVM classifier. The experimental results show that the proposed algorithm has higher accuracy than the conventional algorithm and can effectively and automatically identify cracks on the surface of the coal and rock mass.
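
    The pre-processing and classification stages described above can be sketched with standard library calls: histogram equalization followed by hysteresis thresholding to obtain a crack mask, and an SVM trained on per-region feature vectors. The threshold values, the RBF kernel and its parameters are assumptions for illustration; the eight crack features themselves are not reproduced here.

```python
import numpy as np
from skimage import exposure, filters
from sklearn import svm

def crack_mask(gray, low=0.2, high=0.5):
    """Equalize the intensities, then hysteresis-threshold the inverted image
    so that dark cracks become the foreground (threshold values are assumed)."""
    eq = exposure.equalize_hist(gray)
    return filters.apply_hysteresis_threshold(1.0 - eq, low, high)

def train_crack_classifier(features, labels):
    """Train an SVM on per-region crack descriptors (e.g. 8-dimensional feature vectors)."""
    clf = svm.SVC(kernel="rbf", C=1.0)
    return clf.fit(features, labels)
```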

  1. Automatically extracting cancer disease characteristics from pathology reports into a Disease Knowledge Representation Model.

    Science.gov (United States)

    Coden, Anni; Savova, Guergana; Sominsky, Igor; Tanenblatt, Michael; Masanz, James; Schuler, Karin; Cooper, James; Guan, Wei; de Groen, Piet C

    2009-10-01

    We introduce an extensible and modifiable knowledge representation model to represent cancer disease characteristics in a comparable and consistent fashion. We describe a system, MedTAS/P, which automatically instantiates the knowledge representation model from free-text pathology reports. MedTAS/P is based on an open-source framework and its components use natural language processing principles, machine learning and rules to discover and populate elements of the model. To validate the model and measure the accuracy of MedTAS/P, we developed a gold-standard corpus of manually annotated colon cancer pathology reports. MedTAS/P achieves F1-scores of 0.97-1.0 for instantiating classes in the knowledge representation model such as histologies or anatomical sites, and F1-scores of 0.82-0.93 for primary tumors or lymph nodes, which require the extraction of relations. An F1-score of 0.65 is reported for metastatic tumors, a lower score predominantly due to a very small number of instances in the training and test sets.

  2. Localized Segment Based Processing for Automatic Building Extraction from LiDAR Data

    Science.gov (United States)

    Parida, G.; Rajan, K. S.

    2017-05-01

    The current methods of object segmentation, extraction and classification of aerial LiDAR data involve manual and tedious work. This work proposes a technique for object segmentation from LiDAR data. A bottom-up, geometric rule-based approach was used initially to devise a way to segment buildings from the LiDAR datasets. For curved wall surfaces, comparison of localized surface normals was done to segment buildings. The algorithm has been applied to both synthetic datasets and a real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of the building objects from a given scene for the synthetic datasets and promising results for the real-world data. An advantage of the proposed work is that it does not depend on any form of data other than LiDAR. It is an unsupervised method of building segmentation and thus requires no model training, as needed in supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than focusing on the roof. This focus on extracting the walls to reconstruct the buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to obtain 2D footprints of the buildings, with further scope to generate 3D models. Thus, the proposed method can be used as a tool to obtain footprints of buildings in urban landscapes, helping in urban planning and the smart cities endeavour.

  3. Automatic building extraction from LiDAR data fusion of point and grid-based features

    Science.gov (United States)

    Du, Shouji; Zhang, Yunsheng; Zou, Zhengrong; Xu, Shenghua; He, Xue; Chen, Siyang

    2017-08-01

    This paper proposes a method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features. To accurately discriminate buildings from vegetation, a point feature based on the variance of normal vectors is proposed. For robust building extraction, a graph cuts algorithm is employed to combine the used features and to consider the neighborhood context information. As the grid feature computation and the graph cuts algorithm are performed on a grid structure, a feature-retained DSM interpolation method is also proposed in this paper. The proposed method is validated on the benchmark ISPRS Test Project on Urban Classification and 3D Building Reconstruction and compared to state-of-the-art methods. The evaluation shows that the proposed method obtains promising results both at the area level and at the object level. The method is further applied to the entire ISPRS dataset and to a real dataset of the city of Wuhan. The results show a completeness of 94.9% and a correctness of 92.2% at the per-area level for the former dataset, and a completeness of 94.4% and a correctness of 95.8% for the latter. The proposed method has good potential for large-size LiDAR data.
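
    The point feature based on the variance of normal vectors exploits the fact that roof points lie on locally planar patches (neighbouring normals agree), while vegetation returns scattered normals. A minimal sketch is given below; normal estimation is assumed to have been done beforehand, and the search radius is an assumed value, not the paper's setting.

```python
import numpy as np
from scipy.spatial import cKDTree

def normal_variance_feature(points, normals, radius=1.0):
    """Per-point variance of neighbouring normal vectors (low for roofs, high for trees).

    points  : (N, 3) array of LiDAR point coordinates
    normals : (N, 3) array of unit normals estimated per point
    """
    tree = cKDTree(points)
    feature = np.zeros(len(points))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=radius)
        n = normals[idx]
        feature[i] = np.var(n - n.mean(axis=0))   # total variance of the neighbourhood normals
    return feature
```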

  4. Comparison of Landsat-8, ASTER and Sentinel 1 satellite remote sensing data in automatic lineaments extraction: A case study of Sidi Flah-Bouskour inlier, Moroccan Anti Atlas

    Science.gov (United States)

    Adiri, Zakaria; El Harti, Abderrazak; Jellouli, Amine; Lhissou, Rachid; Maacha, Lhou; Azmi, Mohamed; Zouhair, Mohamed; Bachaoui, El Mostafa

    2017-12-01

    Lineament mapping occupies an important place in several fields, including geology, hydrogeology and topography. With the help of remote sensing techniques, lineaments can be better identified thanks to strong advances in the available data and methods, which has made it possible to go beyond the usual classical procedures and achieve more precise results. The aim of this work is to compare ASTER, Landsat-8 and Sentinel 1 sensor data for automatic lineament extraction. In addition to the image data, the approach followed includes the use of the pre-existing geological map, the Digital Elevation Model (DEM) and ground truth. Through a fully automatic approach consisting of a combination of an edge detection algorithm and a line-linking algorithm, we found the optimal parameters for automatic lineament extraction in the study area. Thereafter, the comparison and validation of the obtained results showed that the Sentinel 1 data are more efficient in the restitution of lineaments, indicating the superior performance of radar data over optical data in this kind of study.
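
    The "edge detection followed by line linking" pipeline can be illustrated with a Canny detector and the probabilistic Hough transform on a single band. These two algorithms stand in for the modules of the automatic approach, and the parameter values are placeholders to be tuned per sensor, not the optima reported in the paper.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import probabilistic_hough_line

def extract_lineaments(band, sigma=2.0, line_length=50, line_gap=5):
    """Detect edges on a satellite band and link them into straight line segments."""
    edges = canny(band.astype(float), sigma=sigma)            # edge detection
    return probabilistic_hough_line(edges, threshold=10,      # line linking
                                    line_length=line_length, line_gap=line_gap)
```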

  5. Extraction: a system for automatic eddy current diagnosis of steam generator tubes in nuclear power plants

    International Nuclear Information System (INIS)

    Georgel, B.; Zorgati, R.

    1994-01-01

    Improving the speed and quality of eddy current non-destructive testing of steam generator tubes leads to automating all processes that contribute to diagnosis. This paper describes how we use signal processing, pattern recognition and artificial intelligence to build a software package that is able to automatically provide an efficient diagnosis. (authors). 2 figs., 5 refs

  6. Automatic Extraction System for Common Artifacts in EEG Signals Based on Evolutionary Stone’s BSS Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmed Kareem Abdullah

    2014-01-01

    Full Text Available An automatic artifact extraction system is proposed based on a hybridization of Stone's BSS and a genetic algorithm, called the evolutionary Stone's BSS algorithm (ESBSS). The original Stone's BSS uses the short- and long-term half-life parameters as constant values; changes in these parameters directly affect the separated signals, and there is no direct way to determine the best parameters. The genetic algorithm is a suitable technique to overcome this problem by randomly searching for the optimum half-life parameters in Stone's BSS. The proposed system is used to automatically extract common artifacts, such as ocular and heartbeat artifacts, from EEG mixtures without prejudice to the data; in addition, no notch filter is used in the proposed system, in order not to lose any useful information.

  7. Optical Methods For Automatic Rating Of Engine Test Components

    Science.gov (United States)

    Pritchard, James R.; Moss, Brian C.

    1989-03-01

    In recent years, increasing commercial and legislative pressure on automotive engine manufacturers, including increased oil drain intervals, cleaner exhaust emissions and high specific power outputs, have led to increasing demands on lubricating oil performance. Lubricant performance is defined by bench engine tests run under closely controlled conditions. After test, engines are dismantled and the parts rated for wear and accumulation of deposit. This rating must be consistently carried out in laboratories throughout the world in order to ensure lubricant quality meeting the specified standards. To this end, rating technicians evaluate components, following closely defined procedures. This process is time consuming, inaccurate and subject to drift, requiring regular recalibration of raters by means of international rating workshops. This paper describes two instruments for automatic rating of engine parts. The first uses a laser to determine the degree of polishing of the engine cylinder bore, caused by the reciprocating action of piston. This instrument has been developed to prototype stage by the NDT Centre at Harwell under contract to Exxon Chemical, and is planned for production within the next twelve months. The second instrument uses red and green filtered light to determine the type, quality and position of deposit formed on the piston surfaces. The latter device has undergone feasibility study, but no prototype exists.

  8. Automatic extraction of the cingulum bundle in diffusion tensor tract-specific analysis. Feasibility study in Parkinson's disease with and without dementia

    International Nuclear Information System (INIS)

    Ito, Kenji; Masutani, Yoshitaka; Suzuki, Yuichi; Ino, Kenji; Kunimatsu, Akira; Ohtomo, Kuni; Kamagata, Koji; Yasmin, Hasina; Aoki, Shigeki

    2013-01-01

    Tract-specific analysis (TSA) measures diffusion parameters along a specific fiber that has been extracted by fiber tracking using manual regions of interest (ROIs), but TSA is limited by its requirement for manual operation, poor reproducibility, and high time consumption. We aimed to develop a fully automated extraction method for the cingulum bundle (CB) and to apply the method to TSA in neurobehavioral disorders such as Parkinson's disease (PD). We introduce the voxel classification (VC) and auto diffusion tensor fiber-tracking (AFT) methods of extraction. The VC method directly extracts the CB, skipping the fiber-tracking step, whereas the AFT method uses fiber tracking from automatically selected ROIs. We compared the results of VC and AFT to those obtained by manual diffusion tensor fiber tracking (MFT) performed by 3 operators. We quantified the Jaccard similarity index among the 3 methods in data from 20 subjects (10 normal controls [NC] and 10 patients with Parkinson's disease dementia [PDD]). We used all 3 extraction methods (VC, AFT, and MFT) to calculate the fractional anisotropy (FA) values of the anterior and posterior CB for 15 NC subjects, 15 with PD, and 15 with PDD. The Jaccard index between results of AFT and MFT, 0.72, was similar to the inter-operator Jaccard index of MFT. However, the Jaccard indices between VC and MFT and between VC and AFT were lower. Consequently, the VC method classified among 3 different groups (NC, PD, and PDD), whereas the others classified only 2 different groups (NC, PD or PDD). For TSA in Parkinson's disease, the VC method can be more useful than the AFT and MFT methods for extracting the CB. In addition, the results of patient data analysis suggest that a reduction of FA in the posterior CB may represent a useful biological index for monitoring PD and PDD. (author)

  9. Purging Musical Instrument Sample Databases Using Automatic Musical Instrument Recognition Methods

    OpenAIRE

    Livshin , Arie; Rodet , Xavier

    2009-01-01

    Compilation of musical instrument sample databases requires careful elimination of badly recorded samples and validation of sample classification into correct categories. This paper introduces algorithms for automatic removal of bad instrument samples using Automatic Musical Instrument Recognition and Outlier Detection techniques. Best evaluation results on a methodically contaminated sound database are achieved using the introdu...

  10. Review of Automatic Feature Extraction from High-Resolution Optical Sensor Data for UAV-Based Cadastral Mapping

    OpenAIRE

    Sophie Crommelinck; Rohan Bennett; Markus Gerke; Francesco Nex; Michael Ying Yang; George Vosselman

    2016-01-01

    Unmanned Aerial Vehicles (UAVs) have emerged as a rapid, low-cost and flexible acquisition system that appears feasible for application in cadastral mapping: high-resolution imagery, acquired using UAVs, enables a new approach for defining property boundaries. However, UAV-derived data are arguably not exploited to their full potential: based on UAV data, cadastral boundaries are visually detected and manually digitized. A workflow that automatically extracts boundary features from UAV data cou...

  11. Automatic Morphological Sieving: Comparison between Different Methods, Application to DNA Ploidy Measurements

    Directory of Open Access Journals (Sweden)

    Christophe Boudry

    1999-01-01

    Full Text Available The aim of the present study is to propose alternative automatic methods to the time-consuming interactive sorting of elements for DNA ploidy measurements. One archival brain tumour and two archival breast carcinomas were studied, corresponding to 7120 elements (3764 nuclei, 3356 debris and aggregates). Three automatic classification methods were tested to eliminate debris and aggregates from DNA ploidy measurements: mathematical morphology (MM), multiparametric analysis (MA) and neural network (NN). Performances were evaluated by reference to interactive sorting. The percentages of debris and aggregates automatically removed reach 63, 75 and 85% for the MM, MA and NN methods, respectively, with false positive rates of 6, 21 and 25%. Information about DNA ploidy abnormalities was globally preserved after automatic elimination of debris and aggregates by the MM and MA methods, as opposed to the NN method, showing that automatic classification methods can offer alternatives to tedious interactive elimination of debris and aggregates for DNA ploidy measurements of archival tumours.

  12. Method for Extracting and Sequestering Carbon Dioxide

    Energy Technology Data Exchange (ETDEWEB)

    Rau, Gregory H.; Caldeira, Kenneth G.

    2005-05-10

    A method and apparatus to extract and sequester carbon dioxide (CO2) from a stream or volume of gas wherein said method and apparatus hydrates CO2, and reacts the resulting carbonic acid with carbonate. Suitable carbonates include, but are not limited to, carbonates of alkali metals and alkaline earth metals, preferably carbonates of calcium and magnesium. Waste products are metal cations and bicarbonate in solution or dehydrated metal salts, which when disposed of in a large body of water provide an effective way of sequestering CO2 from a gaseous environment.

  13. An adaptive and fully automatic method for estimating the 3D position of bendable instruments using endoscopic images.

    Science.gov (United States)

    Cabras, Paolo; Nageotte, Florent; Zanne, Philippe; Doignon, Christophe

    2017-12-01

    Flexible bendable instruments are key tools for performing surgical endoscopy. Being able to measure the 3D position of such instruments can be useful for various tasks, such as controlling automatically robotized instruments and analyzing motions. An automatic method is proposed to infer the 3D pose of a single bending section instrument, using only the images provided by a monocular camera embedded at the tip of the endoscope. The proposed method relies on colored markers attached onto the bending section. The image of the instrument is segmented using a graph-based method and the corners of the markers are extracted by detecting the color transitions along Bézier curves fitted on edge points. These features are accurately located and then used to estimate the 3D pose of the instrument using an adaptive model that takes into account the mechanical play between the instrument and its housing channel. The feature extraction method provides good localization of marker corners with images of the in vivo environment despite sensor saturation due to strong lighting. The RMS error on estimation of the tip position of the instrument for laboratory experiments was 2.1, 1.96, and 3.18 mm in the x, y and z directions, respectively. Qualitative analysis in the case of in vivo images shows the ability to correctly estimate the 3D position of the instrument tip during real motions. The proposed method provides an automatic and accurate estimation of the 3D position of the tip of a bendable instrument in realistic conditions, where standard approaches fail. Copyright © 2017 John Wiley & Sons, Ltd.

  14. Characterization of polycyclic aromatic hydrocarbons (PAHs) on lime spray dryer (LSD) ash using different extraction methods.

    Science.gov (United States)

    Sun, Ping; Weavers, Linda K; Taerakul, Panuwat; Walker, Harold W

    2006-01-01

    In this study, traditional Soxhlet, automatic Soxhlet and ultrasonic extraction techniques were employed to determine the speciation and concentration of polycyclic aromatic hydrocarbons (PAHs) on lime spray dryer (LSD) ash samples collected from the baghouse of a spreader stoker boiler. To test the efficiencies of different extraction methods, LSD ash samples were doped with a mixture of 16 US EPA specified PAHs to measure the matrix spike recoveries. The results showed that the spike recoveries of PAHs were different using these three extraction methods with dichloromethane (DCM) as the solvent. Traditional Soxhlet extraction achieved slightly higher recoveries than automatic Soxhlet and ultrasonic extraction. Different solvents including toluene, DCM:acetone (1:1 V/V) and hexane:acetone (1:1 V/V) were further examined to optimize the recovery using ultrasonic extraction. Toluene achieved the highest spike recoveries of PAHs at a spike level of 10 µg/kg. When the spike level was increased to 50 µg/kg, the spike recoveries of PAHs also correspondingly increased. Although the type and concentration of PAHs detected on LSD ash samples by different extraction methods varied, the concentration of each detected PAH was consistently low, at µg/kg levels.

  15. Method for automatic control rod operation using rule-based control

    International Nuclear Information System (INIS)

    Kinoshita, Mitsuo; Yamada, Naoyuki; Kiguchi, Takashi

    1988-01-01

    An automatic control rod operation method using rule-based control is proposed. Its features are as follows: (1) a production system to recognize plant events, determine control actions and realize fast inference (fast selection of a suitable production rule); (2) use of the fuzzy control technique to determine quantitative control variables. The method's performance was evaluated by simulation tests on automatic control rod operation at a BWR plant start-up. The results were as follows: (1) the performance, which relates to stabilization of controlled variables and the time required for reactor start-up, was superior to that of other methods such as PID control and program control; (2) the processing time to select and interpret the suitable production rule, which was the same as that required for event recognition or determination of a control action, was short enough (below 1 s) for real-time control. The results showed that the method is effective for automatic control rod operation. (author)
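
    A toy sketch of the combination of production rules and fuzzy control described above; the rule, the membership breakpoints and the plant-state fields are hypothetical examples, not values from the cited BWR study.

        # Illustrative only: one production rule plus a fuzzy mapping from power
        # deviation to a rod-speed command. All names and numbers are invented.
        def triangular(x, a, b, c):
            """Triangular membership function with peak at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fuzzy_rod_speed(power_error):
            """Map power deviation (% of setpoint) to a rod-speed command."""
            small = triangular(power_error, -1.0, 0.0, 1.0)
            medium = triangular(power_error, 0.5, 2.0, 3.5)
            large = triangular(power_error, 3.0, 5.0, 7.0)
            # Consequent speeds (arbitrary units) combined by weighted average.
            speeds = {0.0: small, 0.5: medium, 1.0: large}
            total = sum(speeds.values())
            return 0.0 if total == 0 else sum(s * w for s, w in speeds.items()) / total

        def startup_rule(plant_state):
            """A production rule of the 'recognize event -> choose action' kind."""
            if plant_state["mode"] == "startup" and plant_state["period_s"] < 30:
                return {"action": "hold_rods"}   # period too short: stop withdrawal
            return {"action": "withdraw", "speed": fuzzy_rod_speed(plant_state["power_error"])}

        print(startup_rule({"mode": "startup", "period_s": 60, "power_error": 2.0}))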

  16. Application of a semi-automatic cartilage segmentation method for biomechanical modeling of the knee joint.

    Science.gov (United States)

    Liukkonen, Mimmi K; Mononen, Mika E; Tanska, Petri; Saarakkala, Simo; Nieminen, Miika T; Korhonen, Rami K

    2017-10-01

    Manual segmentation of articular cartilage from knee joint 3D magnetic resonance images (MRI) is a time consuming and laborious task. Thus, automatic methods are needed for faster and reproducible segmentations. In the present study, we developed a semi-automatic segmentation method based on radial intensity profiles to generate 3D geometries of knee joint cartilage which were then used in computational biomechanical models of the knee joint. Six healthy volunteers were imaged with a 3T MRI device and their knee cartilages were segmented both manually and semi-automatically. The values of cartilage thicknesses and volumes produced by these two methods were compared. Furthermore, the influences of possible geometrical differences on cartilage stresses and strains in the knee were evaluated with finite element modeling. The semi-automatic segmentation and 3D geometry construction of one knee joint (menisci, femoral and tibial cartilages) was approximately two times faster than with manual segmentation. Differences in cartilage thicknesses, volumes, contact pressures, stresses, and strains between segmentation methods in femoral and tibial cartilage were mostly insignificant (p > 0.05) and random, i.e. there were no systematic differences between the methods. In conclusion, the devised semi-automatic segmentation method is a quick and accurate way to determine cartilage geometries; it may become a valuable tool for biomechanical modeling applications with large patient groups.

  17. Automatic 3D segmentation of the kidney in MR images using wavelet feature extraction and probability shape model

    Science.gov (United States)

    Akbari, Hamed; Fei, Baowei

    2012-02-01

    Numerical estimation of the size of the kidney is useful in evaluating conditions of the kidney, especially, when serial MR imaging is performed to evaluate the kidney function. This paper presents a new method for automatic segmentation of the kidney in three-dimensional (3D) MR images, by extracting texture features and statistical matching of geometrical shape of the kidney. A set of Wavelet-based support vector machines (W-SVMs) is trained on the MR images. The W-SVMs capture texture priors of MRI for classification of the kidney and non-kidney tissues in different zones around the kidney boundary. In the segmentation procedure, these W-SVMs are trained to tentatively label each voxel around the kidney model as a kidney or non-kidney voxel by texture matching. A probability kidney model is created using 10 segmented MRI data. The model is initially localized based on the intensity profiles in three directions. The weight functions are defined for each labeled voxel for each Wavelet-based, intensity-based, and model-based label. Consequently, each voxel has three labels and three weights for the Wavelet feature, intensity, and probability model. Using a 3D edge detection method, the model is re-localized and the segmented kidney is modified based on a region growing method in the model region. The probability model is re-localized based on the results and this loop continues until the segmentation converges. Experimental results with mouse MRI data show the good performance of the proposed method in segmenting the kidney in MR images.

  18. Automatic extraction of protein point mutations using a graph bigram association.

    Directory of Open Access Journals (Sweden)

    Lawrence C Lee

    2007-02-01

    Full Text Available Protein point mutations are an essential component of the evolutionary and experimental analysis of protein structure and function. While many manually curated databases attempt to index point mutations, most experimentally generated point mutations and the biological impacts of the changes are described in the peer-reviewed published literature. We describe an application, Mutation GraB (Graph Bigram), that identifies, extracts, and verifies point mutations from biomedical literature. The principal problem of point mutation extraction is to link the point mutation with its associated protein and organism of origin. Our algorithm uses a graph-based bigram traversal to identify these relevant associations and exploits the Swiss-Prot protein database to verify this information. The graph bigram method is different from other models for point mutation extraction in that it incorporates frequency and positional data of all terms in an article to drive the point mutation-protein association. Our method was tested on 589 articles describing point mutations from the G protein-coupled receptor (GPCR), tyrosine kinase, and ion channel protein families. We evaluated our graph bigram metric against a word-proximity metric for term association on datasets of full-text literature in these three different protein families. Our testing shows that the graph bigram metric achieves a higher F-measure for the GPCRs (0.79 versus 0.76), protein tyrosine kinases (0.72 versus 0.69), and ion channel transporters (0.76 versus 0.74). Importantly, in situations where more than one protein can be assigned to a point mutation and disambiguation is required, the graph bigram metric achieves a precision of 0.84 compared with the word distance metric precision of 0.73. We believe the graph bigram search metric to be a significant improvement over previous search metrics for point mutation extraction and to be applicable to text-mining applications requiring the association of words.

  19. A cell extraction method for oily sediments

    Directory of Open Access Journals (Sweden)

    Michael eLappé

    2011-11-01

    Full Text Available Hydrocarbons can be found in many different habitats and represent an important carbon source for microbes. As fossil fuels they are an important economic resource; through natural seepage or accidental release they can also be major pollutants. DNA-specific stains and molecular probes bind to hydrocarbons, causing massive background fluorescence and thereby hampering cell enumeration. The cell extraction procedure of Kallmeyer et al. (2008) separates the cells from the sediment matrix. In principle, this technique can also be used to separate cells from oily sediments, but it is not optimized for this application. Here we present a modified extraction method in which the hydrocarbons are removed prior to cell extraction. Due to the reduced background fluorescence the microscopic image becomes clearer, making cell identification and enumeration much easier. Consequently, the resulting cell counts from samples treated according to our new protocol are significantly higher than those treated according to Kallmeyer et al. (2008). We tested different amounts of a variety of solvents for their ability to remove hydrocarbons and found that n-hexane and, in samples containing more biodegraded oils, methanol delivered the best results. However, as solvents also tend to lyse cells, it was important to find the optimum solvent to sample ratio, at which hydrocarbon extraction is maximized and cell lysis minimized. A ratio between slurry and solvent of 1:2 to 1:5 delivered the highest cell counts without lysing too many cells. The method provided reproducibly good results on samples from very different environments, both marine and terrestrial.

  20. Influence of extraction methods on the hepatotoxicity of Azadirachta ...

    African Journals Online (AJOL)

    The influence of three extraction methods, cold aqueous (CA), hot aqueous (HA) and alcoholic extraction (AE), on the hepatotoxic effect of Azadirachta indica bark extract (ABC) was investigated using albino rats. A total of forty-eight rats were divided equally into three groups of sixteen rats, one group for each extraction method.

  1. Automatic segmentation method of pelvic floor levator hiatus in ultrasound using a self-normalizing neural network.

    Science.gov (United States)

    Bonmati, Ester; Hu, Yipeng; Sindhwani, Nikhil; Dietz, Hans Peter; D'hooge, Jan; Barratt, Dean; Deprest, Jan; Vercauteren, Tom

    2018-04-01

    Segmentation of the levator hiatus in ultrasound allows the extraction of biometrics, which are of importance for pelvic floor disorder assessment. We present a fully automatic method using a convolutional neural network (CNN) to outline the levator hiatus in a two-dimensional image extracted from a three-dimensional ultrasound volume. In particular, our method uses a recently developed scaled exponential linear unit (SELU) as a nonlinear self-normalizing activation function, which for the first time has been applied in medical imaging with CNN. SELU has important advantages such as being parameter-free and mini-batch independent, which may help to overcome memory constraints during training. A dataset with 91 images from 35 patients during Valsalva, contraction, and rest, all labeled by three operators, is used for training and evaluation in a leave-one-patient-out cross validation. Results show a median Dice similarity coefficient of 0.90 with an interquartile range of 0.08, with equivalent performance to the three operators (with a Williams' index of 1.03), and outperforming a U-Net architecture without the need for batch normalization. We conclude that the proposed fully automatic method achieved equivalent accuracy in segmenting the pelvic floor levator hiatus compared to a previous semiautomatic approach.
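
    For reference, the SELU activation mentioned above has a simple closed form; a minimal sketch using the standard constants from Klambauer et al. (2017), which the paper is assumed to use, is:

        import numpy as np

        # Scaled exponential linear unit (SELU); the two constants are the standard
        # published values and the activation is parameter-free, as the abstract notes.
        ALPHA = 1.6732632423543772
        SCALE = 1.0507009873554805

        def selu(x):
            x = np.asarray(x, dtype=float)
            return SCALE * np.where(x > 0, x, ALPHA * (np.exp(x) - 1.0))

        print(selu([-2.0, 0.0, 2.0]))   # negative inputs saturate near -SCALE*ALPHA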

  2. Effect of extraction methods on the chemical components and taste quality of green tea extract.

    Science.gov (United States)

    Xu, Yong-Quan; Ji, Wei-Bin; Yu, Peigen; Chen, Jian-Xin; Wang, Fang; Yin, Jun-Feng

    2018-05-15

    The physicochemical properties of tea extracts are significantly affected by the extraction method. The aim of this study was to compare the effects of static and dynamic extractions on the concentrations of chemical components and taste quality of green tea extracts. Our results show that extraction of chemical components using static extraction follows a pseudo-second-order reaction, while that of dynamic extraction follows a first-order reaction. The concentrations of the solids, polyphenols, and free amino acids in green tea extract prepared by dynamic extraction were much higher, although the overall yields were not significantly different between the two extraction methods. Green tea extracts obtained via dynamic extraction were of lower bitterness and astringency, as well as higher intensities of umami and overall acceptability. These results suggest that dynamic extraction is more suitable for the processing of green tea concentrate because of the higher concentration of green tea extract. Copyright © 2017 Elsevier Ltd. All rights reserved.
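
    For illustration, the two kinetic forms mentioned above can be written as simple model functions; the symbols and numerical values below are placeholders rather than fitted parameters from the study.

        import numpy as np

        # First-order and pseudo-second-order extraction kinetics in their common
        # forms; c_eq is the equilibrium concentration, k1 and k2 are rate constants.
        def first_order(t, c_eq, k1):
            return c_eq * (1.0 - np.exp(-k1 * t))

        def pseudo_second_order(t, c_eq, k2):
            return (c_eq**2 * k2 * t) / (1.0 + c_eq * k2 * t)

        t = np.linspace(0, 30, 7)          # extraction time, min (illustrative)
        print(first_order(t, c_eq=5.0, k1=0.2))
        print(pseudo_second_order(t, c_eq=5.0, k2=0.05))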

  3. Virgin almond oil: Extraction methods and composition

    Directory of Open Access Journals (Sweden)

    Roncero, J. M.

    2016-09-01

    Full Text Available In this paper the extraction methods of virgin almond oil and its chemical composition are reviewed. The most common methods for obtaining the oil are solvent extraction, extraction with supercritical fluids (CO2) and pressure systems (hydraulic and screw presses). The best industrial performance, but also the worst oil quality, is achieved by using solvents. Oils obtained by this method cannot be considered virgin oils as they are obtained by chemical treatments. Supercritical fluid extraction results in higher quality oils but at a very high price. Extraction by pressing becomes the best option to achieve high quality oils at an affordable price. With regards to chemical composition, almond oil is characterized by its low content of saturated fatty acids and the predominance of monounsaturated fatty acids, especially oleic acid. Furthermore, almond oil contains antioxidants and fat-soluble bioactive compounds that make it an oil with interesting nutritional and cosmetic properties.

  4. Automatically extracting clinically useful sentences from UpToDate to support clinicians’ information needs

    Science.gov (United States)

    Mishra, Rashmi; Fiol, Guilherme Del; Kilicoglu, Halil; Jonnalagadda, Siddhartha; Fiszman, Marcelo

    2013-01-01

    Clinicians raise several information needs in the course of care. Most of these needs can be met by online health knowledge resources such as UpToDate. However, finding relevant information in these resources often requires significant time and cognitive effort. Objective: To design and assess algorithms for extracting from UpToDate the sentences that represent the most clinically useful information for patient care decision making. Methods: We developed algorithms based on semantic predications extracted with SemRep, a semantic natural language processing parser. Two algorithms were compared against a gold standard composed of UpToDate sentences rated in terms of clinical usefulness. Results: Clinically useful sentences were strongly correlated with predication frequency (correlation= 0.95). The two algorithms did not differ in terms of top ten precision (53% vs. 49%; p=0.06). Conclusions: Semantic predications may serve as the basis for extracting clinically useful sentences. Future research is needed to improve the algorithms. PMID:24551389

  5. Development of an automatic evaluation method for patient positioning error.

    Science.gov (United States)

    Kubota, Yoshiki; Tashiro, Mutsumi; Shinohara, Ayaka; Abe, Satoshi; Souda, Saki; Okada, Ryosuke; Ishii, Takayoshi; Kanai, Tatsuaki; Ohno, Tatsuya; Nakano, Takashi

    2015-07-08

    Highly accurate radiotherapy needs highly accurate patient positioning. At our facility, patient positioning is manually performed by radiology technicians. After the positioning, positioning error is measured by manually comparing some positions on a digital radiography image (DR) to the corresponding positions on a digitally reconstructed radiography image (DRR). This method is prone to error and can be time-consuming because of its manual nature. Therefore, we propose an automated measuring method for positioning error to improve patient throughput and achieve higher reliability. The error between a position on the DR and a position on the DRR was calculated to determine the best matched position using the block-matching method. The zero-mean normalized cross correlation was used as our evaluation function, and the Gaussian weight function was used to increase importance as the pixel position approached the isocenter. The accuracy of the calculation method was evaluated using pelvic phantom images, and the method's effectiveness was evaluated on images of prostate cancer patients before the positioning, comparing them with the results of radiology technicians' measurements. The root mean square error (RMSE) of the calculation method for the pelvic phantom was 0.23 ± 0.05 mm. The coefficients between the calculation method and the measurement results of the technicians were 0.989 for the phantom images and 0.980 for the patient images. The RMSE of the total evaluation results of positioning for prostate cancer patients using the calculation method was 0.32 ± 0.18 mm. Using the proposed method, we successfully measured residual positioning errors. The accuracy and effectiveness of the method was evaluated for pelvic phantom images and images of prostate cancer patients. In the future, positioning for cancer patients at other sites will be evaluated using the calculation method. Consequently, we expect an improvement in treatment throughput for these other sites.
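
    The matching step described above, zero-mean normalized cross-correlation weighted by a Gaussian centred on the isocenter, can be sketched as follows; the patch handling, sigma and search range are illustrative assumptions, not the study's actual settings.

        import numpy as np

        def gaussian_weights(shape, center, sigma):
            """Pixel weights that grow as the position approaches the isocenter."""
            ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
            d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
            return np.exp(-d2 / (2.0 * sigma ** 2))

        def weighted_zncc(a, b, w):
            """Weighted zero-mean normalized cross-correlation of two patches."""
            wa = a - np.average(a, weights=w)
            wb = b - np.average(b, weights=w)
            den = np.sqrt(np.sum(w * wa ** 2) * np.sum(w * wb ** 2))
            return np.sum(w * wa * wb) / den if den > 0 else 0.0

        def best_shift(dr, drr, w, search=5):
            """Exhaustive block matching; dr is assumed padded by `search` pixels
            on each side relative to drr so every candidate shift fits."""
            h, wdt = drr.shape
            best, best_score = (0, 0), -np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    patch = dr[search + dy:search + dy + h, search + dx:search + dx + wdt]
                    score = weighted_zncc(patch, drr, w)
                    if score > best_score:
                        best_score, best = score, (dy, dx)
            return best, best_score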

  6. Providing Automatic Support for Heuristic Rules of Methods

    NARCIS (Netherlands)

    Tekinerdogan, B.; Aksit, Mehmet; Demeyer, Serge; Bosch, H.G.P.; Bosch, Jan

    In method-based software development, software engineers create artifacts based on the heuristic rules of the adopted method. Most CASE tools, however, do not actively assist software engineers in applying the heuristic rules. To provide an active support, the rules must be formalized, implemented

  7. A multiparametric automatic method to monitor long-term reproducibility in digital mammography: results from a regional screening programme.

    Science.gov (United States)

    Gennaro, G; Ballaminut, A; Contento, G

    2017-09-01

    This study aims to illustrate a multiparametric automatic method for monitoring the long-term reproducibility of digital mammography systems, and its application on a large scale. Twenty-five digital mammography systems employed within a regional screening programme were controlled weekly using the same type of phantom, whose images were analysed by an automatic software tool. To assess system reproducibility levels, 15 image quality indices (IQIs) were extracted and compared with the corresponding indices previously determined by a baseline procedure. The coefficients of variation (COVs) of the IQIs were used to assess the overall variability. A total of 2553 phantom images were collected from the 25 digital mammography systems from March 2013 to December 2014. Most of the systems showed excellent image quality reproducibility over the surveillance interval, with mean variability below 5%. Variability of each IQI was below 5%, with the exception of one index associated with the smallest phantom objects (0.25 mm), which was below 10%. The method applied for reproducibility tests (multi-detail phantoms, a cloud-based automatic software tool measuring multiple image quality indices, and statistical process control) was proven to be effective and applicable on a large scale and to any type of digital mammography system. • Reproducibility of mammography image quality should be monitored by appropriate quality controls. • Use of automatic software tools allows image quality evaluation by multiple indices. • System reproducibility can be assessed comparing current index value with baseline data. • Overall system reproducibility of modern digital mammography systems is excellent. • The method proposed and applied is cost-effective and easily scalable.
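
    To make the reproducibility bookkeeping concrete, a toy example of the COV computation against a baseline is sketched below; the index names and numbers are invented, not taken from the screening programme.

        import numpy as np

        # Weekly image-quality indices (IQIs) compared against baseline values;
        # coefficient of variation (COV) and mean drift are reported per index.
        baseline = {"cnr_0.25mm": 1.8, "mtf50": 6.2, "noise": 0.012}
        weekly = {
            "cnr_0.25mm": [1.75, 1.82, 1.79, 1.86],
            "mtf50":      [6.10, 6.25, 6.18, 6.22],
            "noise":      [0.0118, 0.0121, 0.0123, 0.0119],
        }

        for name, values in weekly.items():
            v = np.asarray(values)
            cov = 100.0 * v.std(ddof=1) / v.mean()               # % variability
            drift = 100.0 * (v.mean() - baseline[name]) / baseline[name]
            print(f"{name}: COV = {cov:.1f}%, mean drift vs baseline = {drift:+.1f}%")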

  8. Automatic diagnostic methods of nuclear reactor collected signals

    International Nuclear Information System (INIS)

    Lavison, P.

    1978-03-01

    This work is the first phase of an overall study of diagnosis, limited here to problems of monitoring the operating state; it shows what pattern recognition methods bring at the processing level. The present problem is the identification of control operations. The analysis of the state of the reactor gives a decision which is compared with the history of the control operations; if there is no correspondence, the state under analysis is declared 'abnormal'. The system under analysis is described and the problem to be solved is defined. The Gaussian parametric approach and methods to evaluate the error probability are then treated. Non-parametric methods follow, and an on-line detection scheme was tested experimentally. Finally, a non-linear transformation was studied to reduce the error probability obtained previously. All the methods presented have been tested and compared with a quality index: the error probability. [fr]

  9. Auto-OBSD: Automatic parameter selection for reliable Oscillatory Behavior-based Signal Decomposition with an application to bearing fault signature extraction

    Science.gov (United States)

    Huang, Huan; Baddour, Natalie; Liang, Ming

    2017-03-01

    Bearing signals are often contaminated by in-band interferences and random noise. Oscillatory Behavior-based Signal Decomposition (OBSD) is a new technique which decomposes a signal according to its oscillatory behavior, rather than frequency or scale. Due to the low oscillatory transients of bearing fault-induced signals, the OBSD can be used to effectively extract bearing fault signatures from a blurred signal. However, the quality of the result highly relies on the selection of method-related parameters. Such parameters are often subjectively selected and a systematic approach has not been reported in the literature. As such, this paper proposes a systematic approach to automatic selection of OBSD parameters for reliable extraction of bearing fault signatures. The OBSD utilizes the idea of Morphological Component Analysis (MCA) that optimally projects the original signal to low oscillatory wavelets and high oscillatory wavelets established via the Tunable Q-factor Wavelet Transform (TQWT). In this paper, the effects of the selection of each parameter on the performance of the OBSD for bearing fault signature extraction are investigated. It is found that some method-related parameters can be fixed at certain values due to the nature of bearing fault-induced impulses. To adaptively tune the remaining parameters, index-guided parameter selection algorithms are proposed. A Convergence Index (CI) is proposed and a CI-guided self-tuning algorithm is developed to tune the convergence-related parameters, namely, penalty factor and number of iterations. Furthermore, a Smoothness Index (SI) is employed to measure the effectiveness of the extracted low oscillatory component (i.e. bearing fault signature). It is shown that a minimum SI implies an optimal result with respect to the adjustment of relevant parameters. Thus, two SI-guided automatic parameter selection algorithms are also developed to specify two other parameters, i.e., Q-factor of high-oscillatory wavelets and

  10. An automatic method to quantify the vibration properties of human vocal folds via videokymography

    NARCIS (Netherlands)

    Qiu, QJ; Schutte, HK; Gu, L; Yu, QL

    2003-01-01

    The study offers an automatic quantitative method to obtain vibration properties of human vocal folds via videokymography. The presented method is based on image processing, combining an active contour model with a genetic algorithm to improve detection precision and processing speed, and can

  11. Support subspaces method for synthetic aperture radar automatic target recognition

    Directory of Open Access Journals (Sweden)

    Vladimir Fursov

    2016-09-01

    Full Text Available This article offers a new object recognition approach that gives high quality results on synthetic aperture radar images. The approach includes image preprocessing, clustering and recognition stages. At the image preprocessing stage, we compute the mass centre of object images for better image matching. A conjugation index of a recognition vector is used as a distance function at the clustering and recognition stages. We suggest a construction of so-called support subspaces, which provide high recognition quality with a significant dimension reduction. The results of the experiments demonstrate that the proposed method provides higher recognition quality (97.8%) than such methods as support vector machine (95.9%), deep learning based on a multilayer auto-encoder (96.6%) and adaptive boosting (96.1%). The proposed method is stable for objects processed from different angles.

  12. Statistical and neural net methods for automatic glaucoma diagnosis determination

    Czech Academy of Sciences Publication Activity Database

    Pluháček, F.; Pospíšil, Jaroslav

    2004-01-01

    Roč. 1, č. 2 (2004), s. 12-24 ISSN 1644-3608 Institutional research plan: CEZ:AV0Z1010921 Keywords : glaucoma * diagnostic methods * pallor * image analysis * statistical evaluation Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.375, year: 2004

  13. A comparison of five extraction methods for extracellular polymeric ...

    African Journals Online (AJOL)

    Two physical methods (centrifugation and ultrasonication) and 3 chemical methods (extraction with EDTA, extraction with formaldehyde, and extraction with formaldehyde plus NaOH) for extraction of EPS from alga-bacteria biofilm were assessed. Pretreatment with ultrasound at low intensity doubled the EPS yield without ...

  14. Mitosis Counting in Breast Cancer: Object-Level Interobserver Agreement and Comparison to an Automatic Method.

    Science.gov (United States)

    Veta, Mitko; van Diest, Paul J; Jiwa, Mehdi; Al-Janabi, Shaimaa; Pluim, Josien P W

    2016-01-01

    Tumor proliferation speed, most commonly assessed by counting of mitotic figures in histological slide preparations, is an important biomarker for breast cancer. Although mitosis counting is routinely performed by pathologists, it is a tedious and subjective task with poor reproducibility, particularly among non-experts. Inter- and intraobserver reproducibility of mitosis counting can be improved when a strict protocol is defined and followed. Previous studies have examined only the agreement in terms of the mitotic count or the mitotic activity score. Studies of the observer agreement at the level of individual objects, which can provide more insight into the procedure, have not been performed thus far. The development of automatic mitosis detection methods has received large interest in recent years. Automatic image analysis is viewed as a solution for the problem of subjectivity of mitosis counting by pathologists. In this paper we describe the results from an interobserver agreement study between three human observers and an automatic method, and make two unique contributions. For the first time, we present an analysis of the object-level interobserver agreement on mitosis counting. Furthermore, we train an automatic mitosis detection method that is robust with respect to staining appearance variability and compare it with the performance of expert observers on an "external" dataset, i.e. on histopathology images that originate from pathology labs other than the pathology lab that provided the training data for the automatic method. The object-level interobserver study revealed that pathologists often do not agree on individual objects, even if this is not reflected in the mitotic count. The disagreement is larger for objects from smaller size, which suggests that adding a size constraint in the mitosis counting protocol can improve reproducibility. The automatic mitosis detection method can perform mitosis counting in an unbiased way, with substantial

  15. Design and implementation of a control automatic module for the volume extraction of a 99mTc generator

    International Nuclear Information System (INIS)

    Lopez, Yon; Urquizo, Rafael; Gago, Javier; Mendoza, Pablo

    2014-01-01

    A module for the automatic extraction of volumes from 0.05 mL to 1 mL has been developed using a 3D printer, with acrylonitrile butadiene styrene (ABS) as the base material. The design allows automation of the 99mTc eluate input and ejection processes in the 99Mo/99mTc generator prototype; use in other systems is feasible due to its high degree of versatility, depending on the selection of the main components: a precision syringe and a multi-way solenoid valve. An accuracy equivalent to that of commercial equipment has been obtained, but at lower cost. This article describes the mechanical design, the design calculations of the movement mechanism, the electronics and the automatic syringe dispenser control. (authors).

  16. Automatic on-line solid-phase extraction with ultra-high performance liquid chromatography and tandem mass spectrometry for the determination of ten antipsychotics in human plasma.

    Science.gov (United States)

    Zhong, Qisheng; Shen, Lingling; Liu, Jiaqi; Yu, Dianbao; Li, Simin; Li, Zhiru; Yao, Jinting; Huang, Taohong; Kawano, Shin-Ichi; Hashi, Yuki; Zhou, Ting

    2016-06-01

    An automatic on-line solid-phase extraction with ultra-high performance liquid chromatography and tandem mass spectrometry method was developed for the simultaneous determination of ten antipsychotics in human plasma. The plasma sample after filtration was injected directly into the system without any pretreatment. A Shim-pack MAYI-C8 (G) column was used as a solid-phase extraction column, and all the analytes were separated on a Shim-pack XR-ODS III column with a mobile phase consisting of 0.1% v/v formic acid in water with 5 mM ammonium acetate and acetonitrile. The method features were systematically investigated, including extraction conditions, desorption conditions, the equilibration solution, the valve switching time, and the dilution for column-head stacking. Under the optimized conditions, the whole analysis procedure took only 10 min. The limits of quantitation were in the range of 0.00321-2.75 μg/L and the recoveries ranged from 75.9 to 122%. Compared with the off-line ultra-high performance liquid chromatography and the reported methods, this validated on-line method showed significant advantages such as minimal pretreatment, shortest analysis time, and highest sensitivity. The results indicated that this automatic on-line method was rapid, sensitive, and reliable for the determination of antipsychotics in plasma and could be extended to other target analytes in biological samples. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. A METHOD OF AUTOMATIC DETERMINATION OF THE NUMBER OF THE ELECTRICAL MOTORS SIMULTANEOUSLY WORKING IN GROUP

    Directory of Open Access Journals (Sweden)

    A. V. Voloshko

    2016-11-01

    Full Text Available Purpose. To propose a method for the automatic determination of the number of operating high-voltage electric motors in a group of the same type, based on the analysis of power consumption data obtained from the electric power meters installed at the motor connections. Results. An algorithm was developed for automatically determining the number of working electric motors in a group, based on determining the minimum motor power value at which a motor is considered to be on. Originality. For the first time, a method for the automatic determination of the number of working high-voltage motors of the same type in a group was proposed. Practical value. The obtained results may be used for the introduction of automated accounting of the running time of each motor and for calculating the parameters of an equivalent induction or synchronous motor.
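
    A minimal sketch of the counting idea, under the assumption that a fixed minimum-power threshold separates a running motor from an idle connection; the threshold and readings below are invented.

        # A motor is considered "on" when its metered active power exceeds a
        # minimum threshold; P_MIN_KW and the sample readings are illustrative.
        P_MIN_KW = 15.0

        def count_running(readings_kw):
            """readings_kw: latest active-power reading per motor in the group."""
            return sum(1 for p in readings_kw if p >= P_MIN_KW)

        group = [0.4, 182.0, 175.5, 0.0, 168.9]    # five metered motors
        print(count_running(group))                 # -> 3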

  18. A new method for the automatic calculation of prosody

    International Nuclear Information System (INIS)

    GUIDINI, Annie

    1981-01-01

    An algorithm is presented for the calculation of prosodic parameters for speech synthesis. It uses the melodic patterns, composed of rising and falling slopes, suggested by G. CAELEN, and rests on: (1) an analysis into units of meaning to determine a melodic pattern; (2) the calculation of the numeric values of the prosodic variations of each syllable; (3) the use of a table of vocalic values for the three parameters for each vowel according to the consonantal environment, and of a table of standard durations for consonants. This method was applied in the 'SARA' synthesis program with satisfactory results. (author) [fr]

  19. Method for automatically evaluating a transition from a batch manufacturing technique to a lean manufacturing technique

    Science.gov (United States)

    Ivezic, Nenad; Potok, Thomas E.

    2003-09-30

    A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.
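
    A hypothetical sketch of the evaluation flow described above, with invented step names, fields and a simple value-added-time ratio standing in for the patent's unspecified process metrics.

        # Process-step parameters are collected, compiled, and summarised into
        # process metrics; everything here is an illustrative assumption.
        steps = [
            {"name": "stamping",   "cycle_time_min": 2.0, "queue_time_min": 30.0, "value_added": True},
            {"name": "inspection", "cycle_time_min": 1.0, "queue_time_min": 15.0, "value_added": False},
            {"name": "assembly",   "cycle_time_min": 4.0, "queue_time_min": 45.0, "value_added": True},
        ]

        def process_metrics(steps):
            total = sum(s["cycle_time_min"] + s["queue_time_min"] for s in steps)
            value_added = sum(s["cycle_time_min"] for s in steps if s["value_added"])
            return {"total_lead_time_min": total,
                    "value_added_ratio": value_added / total,
                    "step_count": len(steps)}

        print(process_metrics(steps))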

  20. A comparative study of Averrhoa bilimbi extraction method

    Science.gov (United States)

    Zulhaimi, H. I.; Rosli, I. R.; Kasim, K. F.; Akmal, H. Muhammad; Nuradibah, M. A.; Sam, S. T.

    2017-09-01

    In recent years, bioactive compounds in plants have become a focus of interest in the food and pharmaceutical market, leading to research into effective technologies for extracting bioactive substances. This study therefore focuses on the extraction of Averrhoa bilimbi by two different extraction techniques, namely maceration and ultrasound-assisted extraction. Several plant parts of Averrhoa bilimbi were taken as extraction samples: fruits, leaves and twigs. Different solvents, such as methanol, ethanol and distilled water, were utilized in the process. Fruit extracts gave the highest extraction yield compared to other plant parts. Ethanol and distilled water played a more significant role than methanol for all plant parts and both extraction techniques. The results also show that ultrasound-assisted extraction gave results comparable to maceration. Moreover, its shorter extraction time is useful in terms of implementation in industry.

  1. Accuracy of structure-based sequence alignment of automatic methods

    Directory of Open Access Journals (Sweden)

    Lee Byungkook

    2007-09-01

    Full Text Available Abstract Background Accurate sequence alignments are essential for homology searches and for building three-dimensional structural models of proteins. Since structure is better conserved than sequence, structure alignments have been used to guide sequence alignments and are commonly used as the gold standard for sequence alignment evaluation. Nonetheless, as far as we know, there is no report of a systematic evaluation of pairwise structure alignment programs in terms of the sequence alignment accuracy. Results In this study, we evaluate CE, DaliLite, FAST, LOCK2, MATRAS, SHEBA and VAST in terms of the accuracy of the sequence alignments they produce, using sequence alignments from NCBI's human-curated Conserved Domain Database (CDD) as the standard of truth. We find that 4 to 9% of the residues on average are either not aligned or aligned with more than 8 residues of shift error and that an additional 6 to 14% of residues on average are misaligned by 1–8 residues, depending on the program and the data set used. The fraction of correctly aligned residues generally decreases as the sequence similarity decreases or as the RMSD between the Cα positions of the two structures increases. It varies significantly across CDD superfamilies whether shift error is allowed or not. Also, alignments with different shift errors occur between proteins within the same CDD superfamily, leading to inconsistent alignments between superfamily members. In general, residue pairs that are more than 3.0 Å apart in the reference alignment are heavily (>= 25% on average) misaligned in the test alignments. In addition, each method shows a different pattern of relative weaknesses for different SCOP classes. CE gives relatively poor results for β-sheet-containing structures (all-β, α/β, and α+β classes), DaliLite for the "others" class where all but the major four classes are combined, and LOCK2 and VAST for the all-β and "others" classes. Conclusion When the sequence
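
    The shift-error accounting used in this kind of evaluation can be sketched as follows; the alignment representation and the 8-residue tolerance mirror the description above, while the data are made up for illustration.

        # Alignments are given as dictionaries mapping residue indices of protein A
        # to indices of protein B; the reference plays the role of the CDD alignment.
        def shift_errors(reference, test):
            errors = {}
            for a_res, b_ref in reference.items():
                b_test = test.get(a_res)              # None = residue left unaligned
                errors[a_res] = None if b_test is None else abs(b_test - b_ref)
            return errors

        def summarize(errors, max_shift=8):
            n = len(errors)
            correct = sum(1 for e in errors.values() if e == 0)
            small_shift = sum(1 for e in errors.values() if e is not None and 0 < e <= max_shift)
            bad = n - correct - small_shift           # unaligned or shifted by > max_shift
            return correct / n, small_shift / n, bad / n

        ref = {10: 12, 11: 13, 12: 14, 13: 15}
        tst = {10: 12, 11: 14, 12: 14}                # one 1-residue shift, one missing
        print(summarize(shift_errors(ref, tst)))      # -> (0.5, 0.25, 0.25)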

  2. An unsupervised text mining method for relation extraction from biomedical literature.

    Science.gov (United States)

    Quan, Changqin; Wang, Meng; Ren, Fuji

    2014-01-01

    The wealth of interaction information provided in biomedical articles motivated the implementation of text mining approaches to automatically extract biomedical relations. This paper presents an unsupervised method based on pattern clustering and sentence parsing to deal with biomedical relation extraction. Pattern clustering algorithm is based on Polynomial Kernel method, which identifies interaction words from unlabeled data; these interaction words are then used in relation extraction between entity pairs. Dependency parsing and phrase structure parsing are combined for relation extraction. Based on the semi-supervised KNN algorithm, we extend the proposed unsupervised approach to a semi-supervised approach by combining pattern clustering, dependency parsing and phrase structure parsing rules. We evaluated the approaches on two different tasks: (1) Protein-protein interactions extraction, and (2) Gene-suicide association extraction. The evaluation of task (1) on the benchmark dataset (AImed corpus) showed that our proposed unsupervised approach outperformed three supervised methods. The three supervised methods are rule based, SVM based, and Kernel based separately. The proposed semi-supervised approach is superior to the existing semi-supervised methods. The evaluation on gene-suicide association extraction on a smaller dataset from Genetic Association Database and a larger dataset from publicly available PubMed showed that the proposed unsupervised and semi-supervised methods achieved much higher F-scores than co-occurrence based method.

  3. An unsupervised text mining method for relation extraction from biomedical literature.

    Directory of Open Access Journals (Sweden)

    Changqin Quan

    Full Text Available The wealth of interaction information provided in biomedical articles motivated the implementation of text mining approaches to automatically extract biomedical relations. This paper presents an unsupervised method based on pattern clustering and sentence parsing to deal with biomedical relation extraction. Pattern clustering algorithm is based on Polynomial Kernel method, which identifies interaction words from unlabeled data; these interaction words are then used in relation extraction between entity pairs. Dependency parsing and phrase structure parsing are combined for relation extraction. Based on the semi-supervised KNN algorithm, we extend the proposed unsupervised approach to a semi-supervised approach by combining pattern clustering, dependency parsing and phrase structure parsing rules. We evaluated the approaches on two different tasks: (1) Protein-protein interactions extraction, and (2) Gene-suicide association extraction. The evaluation of task (1) on the benchmark dataset (AImed corpus) showed that our proposed unsupervised approach outperformed three supervised methods. The three supervised methods are rule based, SVM based, and Kernel based separately. The proposed semi-supervised approach is superior to the existing semi-supervised methods. The evaluation on gene-suicide association extraction on a smaller dataset from Genetic Association Database and a larger dataset from publicly available PubMed showed that the proposed unsupervised and semi-supervised methods achieved much higher F-scores than co-occurrence based method.

  4. A Framework for the Development of Automatic DFA Method to Minimize the Number of Components and Assembly Reorientations

    Science.gov (United States)

    Alfadhlani; Samadhi, T. M. A. Ari; Ma’ruf, Anas; Setiasyah Toha, Isa

    2018-03-01

    Assembly is a part of the manufacturing process that must be considered at the product design stage. Design for Assembly (DFA) is a method to evaluate product design in order to make it simpler, easier and quicker to assemble, so that assembly cost is reduced. This article discusses a framework for developing a computer-based DFA method. The method is expected to aid product designers in extracting data, evaluating the assembly process, and providing recommendations for product design improvement. These three tasks should be performed without interactive processing or user intervention, so that the product design evaluation can be done automatically. The input to the proposed framework is a 3D solid engineering drawing. Product design evaluation is performed by: minimizing the number of components; generating assembly sequence alternatives; selecting the best assembly sequence based on the minimum number of assembly reorientations; and providing suggestions for design improvement.

  5. Automatic ultrasonic image analysis method for defect detection

    International Nuclear Information System (INIS)

    Magnin, I.; Perdrix, M.; Corneloup, G.; Cornu, B.

    1987-01-01

    Ultrasonic examination of austenitic steel weld seams raises well-known problems of interpreting signals perturbed by this type of material. The JUKEBOX ultrasonic imaging system developed at the Cadarache Nuclear Research Center provides a major improvement in the general area of defect localization and characterization, based on processing overall images obtained by (X, Y) scanning. (X, time) images are formed by juxtaposing input signals. A series of parallel images shifted on the Y-axis is also available. The authors present a novel defect detection method based on analysing the timeline positions of the maxima and minima recorded on (X, time) images. This position is statistically stable when a defect is encountered, and is random enough under spurious noise conditions to constitute a discriminating parameter. The investigation involves calculating the trace variance: this parameter is then taken into account for detection purposes. Correlation with parallel images enhances detection reliability. A significant increase in the signal-to-noise ratio during tests on artificial defects is shown.
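
    The trace-variance criterion can be illustrated with synthetic data: the time index of the strongest echo is located at every X position, and the variance of those positions separates a stable defect echo from structural noise. All numbers below are synthetic, not JUKEBOX data.

        import numpy as np

        def extrema_position_variance(image):
            """image: 2D array indexed [x, time]; variance of the per-X peak times."""
            peak_times = np.argmax(np.abs(image), axis=1)
            return np.var(peak_times)

        rng = np.random.default_rng(0)
        noise_img = rng.normal(size=(64, 256))                 # pure structural noise
        defect_img = noise_img.copy()
        defect_img[:, 120] += 6.0                              # stable echo at t = 120

        print(extrema_position_variance(noise_img))            # large: random peak positions
        print(extrema_position_variance(defect_img))           # near zero: stable peak position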

  6. Method and apparatus for automatic control of a humanoid robot

    Science.gov (United States)

    Abdallah, Muhammad E (Inventor); Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Reiland, Matthew J (Inventor); Sanders, Adam M (Inventor)

    2013-01-01

    A robotic system includes a humanoid robot having a plurality of joints adapted for force control with respect to an object acted upon by the robot, a graphical user interface (GUI) for receiving an input signal from a user, and a controller. The GUI provides the user with intuitive programming access to the controller. The controller controls the joints using an impedance-based control framework, which provides object level, end-effector level, and/or joint space-level control of the robot in response to the input signal. A method for controlling the robotic system includes receiving the input signal via the GUI, e.g., a desired force, and then processing the input signal using a host machine to control the joints via an impedance-based control framework. The framework provides object level, end-effector level, and/or joint space-level control of the robot, and allows for functional-based GUI to simplify implementation of a myriad of operating modes.

  7. Using the Echo Nest's automatically extracted music features for a musicological purpose

    DEFF Research Database (Denmark)

    Andersen, Jesper Steen

    2014-01-01

    This paper sums up the preliminary observations and challenges encountered during my first engagement with the music intelligence company Echo Nest's automatically derived data on more than 35 million songs. The overall purpose is to investigate whether musicologists can draw benefit from Echo Nest

  8. Gas chromatographic determination of N-nitrosamines in beverages following automatic solid-phase extraction.

    Science.gov (United States)

    Jurado-Sánchez, Beatriz; Ballesteros, Evaristo; Gallego, Mercedes

    2007-11-28

    A semiautomatic method for the determination of seven N-nitrosamines in beverages by gas chromatography with nitrogen-phosphorus detection is proposed. Beverage samples are aspirated into a solid-phase extraction module for preconcentration and cleanup. The influence of the experimental conditions was examined by using various sorbents among which LiChrolut EN was found to provide quantitative elution and the highest preconcentration factors of all. The proposed method is sensitive, with limits of detection between 7 and 33 ng/kg, and precise, with relative standard deviations from 4.3% to 6.0%. The recoveries of N-nitrosamines from beverage samples spiked with 0.5 or 1 microg/kg concentrations of these compounds ranged from 95% to 102%. The method was successfully applied to the determination of residues of the studied N-nitrosamines in beverages including beer, wine, liquor, whisky, cognac, rum, vodka, grape juice, cider, tonic water, and soft drinks. The analytes were only detected in beer samples, positives being confirmed by gas chromatography coupled with impact ionization mass spectrometry.

  9. The development of an automatic scanning method for CR-39 neutron dosimeter

    International Nuclear Information System (INIS)

    Tawara, Hiroko; Miyajima, Mitsuhiro; Sasaki, Shin-ichi; Hozumi, Ken-ichi

    1989-01-01

    A method of measuring low-level neutron dose has been developed with CR-39 track detectors using an automatic scanning system. The system is composed of an optical microscope with a video camera, an image processor and a personal computer. The focus point of the microscope and the X-Y stage are controlled from the computer. From the results of automatic measurements, the minimum detectable neutron dose is estimated at 4.6 mrem in a uniform neutron field with an energy spectrum equivalent to that of an Am-Be source. (author)

  10. Simple Methods for Scanner Drift Normalization Validated for Automatic Segmentation of Knee Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Dam, Erik Bjørnager

    2018-01-01

    Scanner drift is a well-known magnetic resonance imaging (MRI) artifact characterized by gradual signal degradation and scan intensity changes over time. In addition, hardware and software updates may imply abrupt changes in signal. The combined effects are particularly challenging for automatic...... for segmentation of knee MRI using the fully automatic KneeIQ framework. The validation included a total of 1975 scans from both high-field and low-field MRI. The results demonstrated that the pre-processing method denoted Atlas Affine Normalization significantly removed scanner drift effects and ensured...

  11. Method and apparatus for mounting or dismounting a semi-automatic twist-lock

    NARCIS (Netherlands)

    Klein Breteler, A.J.; Tekeli, G.

    2001-01-01

    The invention relates to a method for mounting or dismounting a semi-automatic twistlock at a corner of a deck container, wherein the twistlock is mounted or dismounted on a quayside where a ship may be docked for loading or unloading, in a loading or unloading terminal installed on the quayside,

  12. Assessment of automatic segmentation of teeth using a watershed-based method.

    Science.gov (United States)

    Galibourg, Antoine; Dumoncel, Jean; Telmon, Norbert; Calvet, Adèle; Michetti, Jérôme; Maret, Delphine

    2018-01-01

    Tooth 3D automatic segmentation (AS) is being actively developed in research and clinical fields. Here, we assess the effect of automatic segmentation using a watershed-based method on the accuracy and reproducibility of 3D reconstructions in volumetric measurements by comparing it with a semi-automatic segmentation (SAS) method that has already been validated. The study sample comprised 52 teeth, scanned with micro-CT (41 µm voxel size) and CBCT (76, 200 and 300 µm voxel sizes). Each tooth was segmented by AS based on a watershed method and by SAS. For all surface reconstructions, volumetric measurements were obtained and analysed statistically. Surfaces were then aligned using the SAS surfaces as the reference. The topography of the geometric discrepancies was displayed by using a colour map allowing the maximum differences to be located. AS reconstructions showed similar tooth volumes when compared with SAS for the 41 µm voxel size. A difference in volumes was observed, and increased with the voxel size for CBCT data. The maximum differences were mainly found at the cervical margins and incisal edges but the general form was preserved. Micro-CT, a modality used in dental research, provides data that can be segmented automatically, which is time-saving. AS with CBCT data enables the general form of the region of interest to be displayed. However, our AS method can still be used for metrically reliable measurements in the field of clinical dentistry if some manual refinements are applied.
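
    A minimal sketch of the volumetric comparison between the automatic and semi-automatic masks, with a Dice overlap added for completeness; the masks and the voxel size are placeholders, not study data.

        import numpy as np

        def volume_mm3(mask, voxel_mm):
            """Volume as voxel count times voxel volume."""
            return mask.sum() * voxel_mm**3

        def dice(a, b):
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        rng = np.random.default_rng(1)
        sas = rng.random((60, 60, 60)) > 0.7        # reference (semi-automatic) mask
        auto = sas.copy(); auto[:2] = False         # automatic mask, slightly smaller

        voxel = 0.076                                # mm, illustrative CBCT voxel size
        print(f"SAS volume: {volume_mm3(sas, voxel):.2f} mm3")
        print(f"AS  volume: {volume_mm3(auto, voxel):.2f} mm3")
        print(f"Dice overlap: {dice(sas, auto):.3f}")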

  13. Extraction Methods for the Isolation of Isoflavonoids from Plant Material

    Directory of Open Access Journals (Sweden)

    Blicharski Tomasz

    2017-03-01

    Full Text Available The purpose of this review is to describe and compare selected traditional and modern extraction methods employed in the isolation of isoflavonoids from plants. Conventional methods such as maceration, percolation, or Soxhlet extraction are still frequently used in phytochemical analysis. Despite their flexibility, traditional extraction techniques have significant drawbacks, including the need for a significant investment of time, energy, and starting material, and a requirement for large amounts of potentially toxic solvents. Moreover, these techniques are difficult to automate, produce considerable amounts of waste and pose a risk of degradation of thermolabile compounds. Modern extraction methods, such as ultrasound-assisted extraction, microwave-assisted extraction, accelerated solvent extraction, supercritical fluid extraction, and negative pressure cavitation extraction, can be regarded as remedies for the aforementioned problems. This manuscript discusses the use of the most relevant extraction techniques in the process of isolation of isoflavonoids, secondary metabolites that have been found to have a plethora of biological and pharmacological activities.

  14. Development of automatic extraction of the corpus callosum from magnetic resonance imaging of the head and examination of the early dementia objective diagnostic technique in feature analysis

    International Nuclear Information System (INIS)

    Kodama, Naoki; Kaneko, Tomoyuki

    2005-01-01

    We examined the objective diagnosis of dementia based on changes in the corpus callosum. We examined midsagittal head MR images of 17 early dementia patients (2 men and 15 women; mean age, 77.2±3.3 years) and 18 healthy elderly controls (2 men and 16 women; mean age, 73.8±6.5 years), 35 subjects altogether. First, the corpus callosum was automatically extracted from the MR images. Next, early dementia patients were compared with the healthy elderly controls using 5 features of the straight-line method, 5 features of the Run-Length Matrix, and 6 features of the Co-occurrence Matrix computed from the corpus callosum. Automatic extraction of the corpus callosum showed an accuracy rate of 84.1±3.7%. A statistically significant difference was found in 6 of the 16 features between early dementia patients and healthy elderly controls. Discriminant analysis using the 6 features demonstrated a sensitivity of 88.2% and specificity of 77.8%, with an overall accuracy of 82.9%. These results indicate that feature analysis based on changes in the corpus callosum can be used as an objective diagnostic technique for early dementia. (author)

  15. EnvMine: A text-mining system for the automatic extraction of contextual information

    Directory of Open Access Journals (Sweden)

    de Lorenzo Victor

    2010-06-01

    Full Text Available Abstract Background For ecological studies, it is crucial to count on adequate descriptions of the environments and samples being studied. Such a description must be done in terms of their physicochemical characteristics, allowing a direct comparison between different environments that would be difficult to do otherwise. Also the characterization must include the precise geographical location, to make possible the study of geographical distributions and biogeographical patterns. Currently, there is no schema for annotating these environmental features, and these data have to be extracted from textual sources (published articles). So far, this had to be performed by manual inspection of the corresponding documents. To facilitate this task, we have developed EnvMine, a set of text-mining tools devoted to retrieve contextual information (physicochemical variables and geographical locations) from textual sources of any kind. Results EnvMine is capable of retrieving the physicochemical variables cited in the text, by means of the accurate identification of their associated units of measurement. In this task, the system achieves a recall (percentage of items retrieved) of 92% with less than 1% error. Also a Bayesian classifier was tested for distinguishing parts of the text describing environmental characteristics from others dealing with, for instance, experimental settings. Regarding the identification of geographical locations, the system takes advantage of existing databases such as GeoNames to achieve 86% recall with 92% precision. The identification of a location includes also the determination of its exact coordinates (latitude and longitude), thus allowing the calculation of distance between the individual locations. Conclusion EnvMine is a very efficient method for extracting contextual information from different text sources, like published articles or web pages. This tool can help in determining the precise location and physicochemical
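
    A rough sketch of the unit-driven retrieval idea, pairing numbers with known units of measurement and matching coordinates separately; the patterns and the sample sentence are illustrative only and are not EnvMine's actual rules.

        import re

        # A number followed by a recognised unit counts as a physicochemical
        # variable; latitude/longitude pairs are matched with a separate pattern.
        UNIT = r"(°C|pH|psu|mM|µM|mg/L|m)"
        VARIABLE = re.compile(r"(\d+(?:\.\d+)?)\s*" + UNIT)
        COORDS = re.compile(r"(\d+(?:\.\d+)?)\s*°\s*([NS]),\s*(\d+(?:\.\d+)?)\s*°\s*([EW])")

        text = ("Samples were taken at 36.5 °N, 6.2 °W from a brine at 42.0 °C, "
                "salinity 38 psu and depth 2100 m.")
        print(VARIABLE.findall(text))   # [('42.0', '°C'), ('38', 'psu'), ('2100', 'm')]
        print(COORDS.findall(text))     # [('36.5', 'N', '6.2', 'W')]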

  16. A Classification Method of Inquiry E-mails for Describing FAQ with Automatic Setting Mechanism of Judgment Thresholds

    Science.gov (United States)

    Tsuda, Yuki; Akiyoshi, Masanori; Samejima, Masaki; Oka, Hironori

    In this paper the authors propose a classification method of inquiry e-mails for describing FAQ (Frequently Asked Questions) and automatic setting mechanism of judgment thresholds. In this method, a dictionary used for classification of inquiries is generated and updated automatically by statistical information of characteristic words in clusters, and inquiries are classified correctly to each proper cluster by using the dictionary. Threshold values are automatically set by using statistical information.
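
    A hedged sketch of the approach described above: per-cluster dictionaries built from characteristic word statistics, and a judgment threshold derived automatically from the score distribution of each cluster's own inquiries. The scoring rule and the mean-minus-standard-deviation threshold are assumptions, not the authors' formula.

    ```python
    # Sketch of dictionary-based inquiry classification with statistically set thresholds.
    # Scoring rule and threshold formula are illustrative assumptions.
    from collections import Counter
    import statistics

    clusters = {
        "password": ["I forgot my password", "cannot reset my password", "password expired again"],
        "billing":  ["invoice amount is wrong", "was charged twice", "need a copy of the invoice"],
    }

    def characteristic_words(docs, top=5):
        counts = Counter(w.lower() for d in docs for w in d.split())
        return {w for w, _ in counts.most_common(top)}

    def score(text, vocab):
        words = set(text.lower().split())
        return len(words & vocab) / max(len(vocab), 1)

    def cluster_threshold(docs, vocab):
        scores = [score(d, vocab) for d in docs]
        return max(statistics.mean(scores) - statistics.pstdev(scores), 0.0)

    dictionaries = {c: characteristic_words(docs) for c, docs in clusters.items()}
    thresholds = {c: cluster_threshold(docs, dictionaries[c]) for c, docs in clusters.items()}

    def classify(text):
        best = max(dictionaries, key=lambda c: score(text, dictionaries[c]))
        if score(text, dictionaries[best]) >= thresholds[best]:
            return best
        return "unclassified (new FAQ candidate)"

    print(classify("my password reset link expired"))
    ```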

  17. Using Probe Vehicle Data for Automatic Extraction of Road Traffic Parameters

    Directory of Open Access Journals (Sweden)

    Roman Popescu Maria Alexandra

    2016-12-01

    Through this paper the author aims to study and find solutions for the automatic detection of traffic light positions and the automatic calculation of waiting time at traffic lights. The first objective mainly serves the road transportation field, because it removes the need for collaboration with local authorities to establish a national network of traffic lights. The second objective is important not only for companies providing navigation solutions, but especially for authorities, institutions and companies operating road traffic management systems. Real-time dynamic determination of traffic queue length and of waiting time at traffic lights allows the creation of dynamic, intelligent and flexible systems adapted to actual traffic conditions rather than to generic, theoretical models. Thus, cities can approach the Smart City concept by making road transport more efficient and environmentally friendly, as promoted in Europe through the Horizon 2020 Smart Cities and Urban Mobility initiative.

  18. Commissioning (Method Development) of the Panasonic UD-794 Automatic Thermoluminescent Dosemeter (TLD) Irradiator

    International Nuclear Information System (INIS)

    McKittrick Leo

    2005-08-01

    This study is presented in two parts. Part 1, the Literature Survey, examines the history, theory and application of TL dosimetry. A general overview of the thermoluminescent dosemeter is presented together with a complete in-depth look at the Panasonic UD-716 TLD Reader. The irradiation and calibration of TL dosemeters is also examined together with an overview of past papers and research carried out on related topics. Part 2 documents the study of commissioning the Panasonic UD-794 Automatic TLD Irradiator; this part includes: methods and procedures used with several dosimetry instruments and materials; method development with the irradiator; results and findings from the irradiator compared to a certified method of irradiation, using the methods developed; investigations of irradiator parameters and features; and conclusions drawn from carrying out the study of commissioning the Panasonic UD-794 Automatic TLD Irradiator.

  19. Validation of the ICU-DaMa tool for automatically extracting variables for minimum dataset and quality indicators: The importance of data quality assessment.

    Science.gov (United States)

    Sirgo, Gonzalo; Esteban, Federico; Gómez, Josep; Moreno, Gerard; Rodríguez, Alejandro; Blanch, Lluis; Guardiola, Juan José; Gracia, Rafael; De Haro, Lluis; Bodí, María

    2018-04-01

    Big data analytics promise insights into healthcare processes and management, improving outcomes while reducing costs. However, data quality is a major challenge for reliable results. Business process discovery techniques and an associated data model were used to develop a data management tool, ICU-DaMa, for extracting variables essential for overseeing the quality of care in the intensive care unit (ICU). To determine the feasibility of using ICU-DaMa to automatically extract variables for the minimum dataset and ICU quality indicators from the clinical information system (CIS). The Wilcoxon signed-rank test and Fisher's exact test were used to compare the values extracted from the CIS with ICU-DaMa for 25 variables from all patients treated in a polyvalent ICU during a two-month period against the gold standard of values manually extracted by two trained physicians. Discrepancies with the gold standard were classified into plausibility, conformance, and completeness errors. Data from 149 patients were included. Although there were no significant differences between the automatic method and the manual method, we detected differences in values for five variables, including one plausibility error and two conformance and completeness errors. Plausibility: 1) Sex, ICU-DaMa incorrectly classified one male patient as female (error generated by the Hospital's Admissions Department). Conformance: 2) Reason for isolation, ICU-DaMa failed to detect a human error in which a professional misclassified a patient's isolation. 3) Brain death, ICU-DaMa failed to detect another human error in which a professional likely entered two mutually exclusive values related to the death of the patient (brain death and controlled donation after circulatory death). Completeness: 4) Destination at ICU discharge, ICU-DaMa incorrectly classified two patients because a professional failed to fill out the patient discharge form when the patients died. 5) Length of continuous renal replacement

  20. Nouvelle méthode d'extraction automatique de routes dans des images satellitaires (A new method for automatic road extraction from satellite images)

    Science.gov (United States)

    Hemiari, Gholamabbas

    In the present thesis, a new automatic method to extract roads from satellite imagery is proposed. This new method, called Tridimensional Multilayer (3DM), is part of the global methods of linear feature extraction and is based on the Radon transform concept. The 3DM method simultaneously eliminates the three restrictions of the linear Radon transform for line extraction. This method allows the extraction of lines with different lengths and curvatures even in a noisy context. The 3DM method also allows a geometrical database to be established for the extracted lines, such as their lengths and endpoints. This database can be integrated into a Geographic Information System (GIS) and can be used in diverse applications. The methodological approach of this study is divided into two phases: mathematical and algorithmic developments. In the first phase, we generalized the Radon transform to a continuous second-degree polynomial function (Tridimensional Radon Transform, 3DRT) for extracting lines with different curvatures. The second phase consists first of elaborating a new concept of acquisition and analysis of information adapted to the methods of linear feature extraction (Multilayer Method, MM). Then, we developed the 3DM method by combining 3DRT and MM. The 3DM method was applied to a binary noisy image to extract the lines that represent roads with different lengths and the river borders with different curvatures. The performance of the 3DM method was evaluated by comparing the result with the reference image (the input image without noise). The evaluation of the 3DM method shows that 88% of the lines are correctly extracted, while the percentage of omitted lines is 12% and committed lines reach 4%. The extraction success rate of this method is consequently quantified at 82%. These measurements show the improvement brought by the 3DM method in the extraction of the different curved lines. Implementation of the 3DM method onto images obtained by
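
    For orientation, the following sketch shows line detection with the classical, straight-line Radon transform, the restricted case that the 3DRT/3DM approach above generalizes to curved, finite-length features. It is an illustration using scikit-image, not the author's implementation.

    ```python
    # Classical (straight-line) Radon transform line detection: each straight line in the
    # image maps to a peak in (rho, theta) space. Illustrative only; the 3DM method above
    # generalizes this to curved lines of finite length.
    import numpy as np
    from skimage.transform import radon

    image = np.zeros((100, 100))
    image[20:80, 50] = 1.0                       # a vertical "road" segment
    image += 0.1 * np.random.rand(*image.shape)  # mild noise

    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    sinogram = radon(image, theta=theta)

    rho_idx, theta_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
    print(f"strongest line at theta = {theta[theta_idx]:.1f} deg, rho index = {rho_idx}")
    ```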

  1. Extraction of human genomic DNA from whole blood using a magnetic microsphere method.

    Science.gov (United States)

    Gong, Rui; Li, Shengying

    2014-01-01

    With the rapid development of molecular biology and the life sciences, magnetic extraction is a simple, automatic, and highly efficient method for separating biological molecules, performing immunoassays, and other applications. Human blood is an ideal source of human genomic DNA. Extracting genomic DNA by traditional methods is time-consuming, and phenol and chloroform are toxic reagents that endanger health. Therefore, it is necessary to find a more convenient and efficient method for obtaining human genomic DNA. In this study, we developed urea-formaldehyde resin magnetic microspheres and magnetic silica microspheres for extraction of human genomic DNA. First, a magnetic microsphere suspension was prepared and used to extract genomic DNA from fresh whole blood, frozen blood, dried blood, and trace blood. Second, DNA content and purity were measured by agarose electrophoresis and ultraviolet spectrophotometry. The human genomic DNA extracted from whole blood was then subjected to polymerase chain reaction analysis to further confirm its quality. The results of this study lay a good foundation for future research and development of a high-throughput and rapid extraction method for extracting genomic DNA from various types of blood samples.

  2. Development of automatic blood extraction device with a micro-needle for blood-sugar level measurement

    Science.gov (United States)

    Kawanaka, Kaichiro; Uetsuji, Yasutomo; Tsuchiya, Kazuyoshi; Nakamachi, Eiji

    2008-12-01

    In this study, a portable type HMS (Health Monitoring System) device is newly developed. Its features are 1) puncturing a blood vessel using a minimally invasive micro-needle, 2) extracting and transferring human blood, and 3) measuring the blood glucose level. This miniature SMBG (Self-Monitoring of Blood Glucose) device employs a syringe reciprocal blood extraction system equipped with an electro-mechanical control unit for accurate and steady operation. The device consists of a) a disposable syringe unit, b) a non-disposable body unit, and c) a glucose enzyme sensor. The syringe unit consists of the syringe itself, its cover, a piston and a titanium alloy micro-needle whose inner diameter is about 100 µm. The body unit consists of a linear driven-type stepping motor, a piston jig, which connects directly to the shaft of the stepping motor, a syringe jig, which is driven in combination with the piston jig, and a slider, which fixes the syringe jig. The thrust required to drive the slider is designed to be greater than the blood extraction force. Because of this driving mechanism, the automatic blood extraction and discharging processes are completed by only one linear driven-type stepping motor. The miniature SMBG device was experimentally confirmed to achieve more than 90% volumetric efficiency at a piston driving speed of 1.0 mm/s. Further, the blood sugar level was measured successfully using the glucose enzyme sensor.

  3. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    Science.gov (United States)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed to assess the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using a leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
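
    A sketch of the classifier comparison described above: the four classifiers evaluated with leave-one-out cross-validation on quality-assurance features. The feature values below are synthetic placeholders, not the study's actual measurements.

    ```python
    # Sketch of the second QA method above: LDA, logistic regression, SVM and random forest
    # compared with leave-one-out cross-validation. Data are synthetic placeholders.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.linear_model import LogisticRegression
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(249, 3))                                   # e.g. volume, mean FA, boundary smoothness
    y = (X[:, 0] + 0.5 * rng.normal(size=249) > 1.2).astype(int)    # 1 = segmentation failure

    classifiers = {
        "LDA": LinearDiscriminantAnalysis(),
        "LR": LogisticRegression(max_iter=1000),
        "SVM": SVC(),
        "RFC": RandomForestClassifier(n_estimators=100, random_state=0),
    }
    for name, clf in classifiers.items():
        acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
        print(f"{name}: leave-one-out accuracy = {acc:.3f}")
    ```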

  4. System and method for free-boundary surface extraction

    KAUST Repository

    Algarni, Marei

    2017-10-26

    A method of extracting surfaces in three-dimensional data includes receiving as inputs three-dimensional data and a seed point p located on a surface to be extracted. The method further includes propagating a front outwardly from the seed point p and extracting a plurality of ridge curves based on the propagated front. A surface boundary is detected based on a comparison of distances between adjacent ridge curves and the desired surface is extracted based on the detected surface boundary.

  5. Optimization strategies of in-tube extraction (ITEX) methods

    OpenAIRE

    Laaks, Jens; Jochmann, Maik A.; Schilling, Beat; Schmidt, Torsten C.

    2015-01-01

    Microextraction techniques, especially dynamic techniques like in-tube extraction (ITEX), can require an extensive method optimization procedure. This work summarizes the experiences from several methods and gives recommendations for the setting of proper extraction conditions to minimize experimental effort. Therefore, the governing parameters of the extraction and injection stages are discussed. This includes the relative extraction efficiencies of 11 kinds of sorbent tubes, either commerci...

  6. High-accuracy automatic classification of Parkinsonian tremor severity using machine learning method.

    Science.gov (United States)

    Jeon, Hyoseon; Lee, Woongwoo; Park, Hyeyoung; Lee, Hong Ji; Kim, Sang Kyong; Kim, Han Byul; Jeon, Beomseok; Park, Kwang Suk

    2017-10-31

    Although clinical aspirations for new technology to accurately measure and diagnose Parkinsonian tremors exist, automatic scoring of tremor severity using machine learning approaches has not yet been employed. This study aims to maximize the scientific validity of automatic tremor-severity classification using machine learning algorithms to score Parkinsonian tremor severity in the same manner as the unified Parkinson's disease rating scale (UPDRS) used to rate scores in real clinical practice. Eighty-five PD patients perform four tasks for severity assessment of their resting, resting with mental stress, postural, and intention tremors. The tremor signals are measured using a wristwatch-type wearable device with an accelerometer and gyroscope. Displacement and angle signals are obtained by integrating the acceleration and angular-velocity signals. Nineteen features are extracted from each of the four tremor signals. The optimal feature configuration is decided using the wrapper feature selection algorithm or principal component analysis, and decision tree, support vector machine, discriminant analysis, and k-nearest neighbour algorithms are considered to develop an automatic scoring system for UPDRS prediction. The results are compared to UPDRS ratings assigned by two neurologists. The highest accuracies are 92.3%, 86.2%, 92.1%, and 89.2% for resting, resting with mental stress, postural, and intention tremors, respectively. The weighted Cohen's kappa values are 0.745, 0.635 and 0.633 for resting, resting with mental stress, and postural tremors (almost perfect agreement), and 0.570 for intention tremors (moderate). These results indicate the feasibility of the proposed system as a clinical decision tool for Parkinsonian tremor-severity automatic scoring.
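
    An illustrative sketch of two typical signal features from a wrist-worn sensor and of the agreement metric used above, linearly weighted Cohen's kappa. The two features shown are examples only, not the study's full 19-feature set, and the scores are invented.

    ```python
    # Illustrative sketch: simple tremor features from a 1-D acceleration signal and
    # agreement with clinician UPDRS ratings via linearly weighted Cohen's kappa.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    def tremor_features(acc, fs=100.0):
        """RMS amplitude and dominant frequency of a 1-D acceleration signal."""
        acc = acc - acc.mean()
        rms = np.sqrt(np.mean(acc ** 2))
        spectrum = np.abs(np.fft.rfft(acc))
        freqs = np.fft.rfftfreq(len(acc), d=1.0 / fs)
        return rms, freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

    fs = 100.0
    t = np.arange(0, 10, 1 / fs)
    acc = 0.3 * np.sin(2 * np.pi * 5.0 * t) + 0.05 * np.random.randn(t.size)  # ~5 Hz rest tremor
    print(tremor_features(acc, fs))

    # Agreement between predicted and clinician UPDRS scores (0-4 ordinal scale); toy values.
    predicted = [0, 1, 1, 2, 3, 2, 4, 0, 1, 2]
    clinician = [0, 1, 2, 2, 3, 2, 4, 1, 1, 2]
    print(cohen_kappa_score(clinician, predicted, weights="linear"))
    ```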

  7. A generic method for automatic translation between input models for different versions of simulation codes

    International Nuclear Information System (INIS)

    Serfontein, Dawid E.; Mulder, Eben J.; Reitsma, Frederik

    2014-01-01

    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. Therefore a generic algorithm for producing such automatic translation codes may ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.

  8. Sequential injection system incorporating a micro extraction column for automatic fractionation of metal ions in solid samples

    DEFF Research Database (Denmark)

    Chomchoei, Roongrat; Miró, Manuel; Hansen, Elo Harald

    2005-01-01

    Compared to conventional batch methods, this fully automated approach furthermore offers the potential of a variety of operational extraction protocols. Employing the three-step sequential BCR extraction scheme on a certified homogeneous soil reference material (NIST, SRM 2710), this communication investigates four...

  9. Evaluation of needle trap micro-extraction and automatic alveolar sampling for point-of-care breath analysis.

    Science.gov (United States)

    Trefz, Phillip; Rösner, Lisa; Hein, Dietmar; Schubert, Jochen K; Miekisch, Wolfram

    2013-04-01

    Needle trap devices (NTDs) have shown many advantages such as improved detection limits, reduced sampling time and volume, improved stability, and reproducibility if compared with other techniques used in breath analysis such as solid-phase extraction and solid-phase micro-extraction. Effects of sampling flow (2-30 ml/min) and volume (10-100 ml) were investigated in dry gas standards containing hydrocarbons, aldehydes, and aromatic compounds and in humid breath samples. NTDs contained (single-bed) polymer packing and (triple-bed) combinations of divinylbenzene/Carbopack X/Carboxen 1000. Substances were desorbed from the NTDs by means of thermal expansion and analyzed by gas chromatography-mass spectrometry. An automated CO2-controlled sampling device for direct alveolar sampling at the point-of-care was developed and tested in pilot experiments. Adsorption efficiency for small volatile organic compounds decreased and breakthrough increased when sampling was done with polymer needles from a water-saturated matrix (breath) instead from dry gas. Humidity did not affect analysis with triple-bed NTDs. These NTDs showed only small dependencies on sampling flow and low breakthrough from 1-5 %. The new sampling device was able to control crucial parameters such as sampling flow and volume. With triple-bed NTDs, substance amounts increased linearly with increasing sample volume when alveolar breath was pre-concentrated automatically. When compared with manual sampling, automatic sampling showed comparable or better results. Thorough control of sampling and adequate choice of adsorption material is mandatory for application of needle trap micro-extraction in vivo. The new CO2-controlled sampling device allows direct alveolar sampling at the point-of-care without the need of any additional sampling, storage, or pre-concentration steps.

  10. CAD-based automatic modeling method for Geant4 geometry model through MCAM

    International Nuclear Information System (INIS)

    Wang, D.; Nie, F.; Wang, G.; Long, P.; LV, Z.

    2013-01-01

    Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, the calculation model needs to be established, which can be described using the Geometry Description Markup Language (GDML) or C++. However, it is time-consuming and error-prone to describe models manually in GDML. Automatic modeling methods have been developed recently, but problems remain in most existing modeling programs; in particular, some are not accurate or are adapted only to specific CAD formats. To convert complex CAD models into GDML accurately, a Geant4 Computer Aided Design (CAD) based modeling method was developed for automatically converting CAD geometry models into GDML geometry models. The essence of this method is dealing with CAD models represented by boundary representation (B-REP) and GDML models represented by constructive solid geometry (CSG). At first, the CAD model was decomposed into several simple solids, each having only one closed shell. Each simple solid was then decomposed into a set of convex shells, and the corresponding GDML convex basic solids were generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model was assembled through a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport (MCAM) and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling. (authors)

  11. CAD-based Monte Carlo automatic modeling method based on primitive solid

    International Nuclear Information System (INIS)

    Wang, Dong; Song, Jing; Yu, Shengpeng; Long, Pengcheng; Wang, Yongliang

    2016-01-01

    Highlights: • We develop a method that bi-converts between CAD models and primitive solids. • This method was improved from a conversion method between CAD models and half spaces. • This method was tested with the ITER model, which validated its correctness and efficiency. • This method is integrated in SuperMC and can model for SuperMC and Geant4. - Abstract: The Monte Carlo method has been widely used in nuclear design and analysis, where geometries are described with primitive solids. However, it is time-consuming and error-prone to describe a primitive solid geometry, especially for a complicated model. To reuse the abundant existing CAD models and to model conveniently with CAD modeling tools, an automatic modeling method for accurate, prompt conversion between CAD models and primitive solids is needed. An automatic modeling method for Monte Carlo geometry described by primitive solids was developed which can bi-convert between CAD models and Monte Carlo geometry represented by primitive solids. When converting from a CAD model to a primitive solid model, the CAD model is decomposed into several convex solid sets, and the corresponding primitive solids are then generated and exported. When converting from a primitive solid model to a CAD model, the basic primitive solids are created and the related Boolean operations are performed. This method was integrated in SuperMC and benchmarked with the ITER benchmark model. The correctness and efficiency of this method were demonstrated.

  12. Systems and methods for automatically identifying and linking names in digital resources

    Science.gov (United States)

    Parker, Charles T.; Lyons, Catherine M.; Roston, Gerald P.; Garrity, George M.

    2017-06-06

    The present invention provides systems and methods for automatically identifying name-like strings in digital resources, matching these name-like strings against a set of names held in an expertly curated database, and, for those name-like strings found in said database, enhancing the content by associating additional matter with the name, wherein said matter includes information about the names held within said database and pointers to other digital resources which include the same name and its synonyms.
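
    A toy sketch of the pipeline described above: detect name-like strings with a crude regex, match them against a curated dictionary (including synonyms), and rewrite matches as annotated links. The dictionary entry, URL and regex are invented for illustration and do not reflect the patented system.

    ```python
    # Toy sketch of name identification and linking: regex-detected name-like strings are
    # matched against a curated dictionary and replaced with annotated links.
    import re

    CURATED = {
        "escherichia coli": {"synonyms": ["e. coli"], "url": "https://example.org/names/e-coli"},
    }

    NAME_LIKE = re.compile(r'\b[A-Z][a-z]+ [a-z]+\b|\b[A-Z]\. [a-z]+\b')  # crude binomial / abbreviated forms

    def link_names(text):
        def repl(match):
            key = match.group(0).lower()
            for canonical, entry in CURATED.items():
                if key == canonical or key in entry["synonyms"]:
                    return f'[{match.group(0)}]({entry["url"]})'
            return match.group(0)   # unmatched name-like strings are left unchanged
        return NAME_LIKE.sub(repl, text)

    print(link_names("Strains of Escherichia coli and E. coli K-12 were compared."))
    ```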

  13. Technical characterization by image analysis: an automatic method of mineralogical studies

    International Nuclear Information System (INIS)

    Oliveira, J.F. de

    1988-01-01

    The application of a modern, fully automated image analysis method for the study of grain size distribution, modal assays, degree of liberation and mineralogical associations is discussed. The image analyser is interfaced with a scanning electron microscope and an energy dispersive X-ray analyser. The image generated by backscattered electrons is analysed automatically, and the system has been used in assessment studies of applied mineralogy as well as in process control in the mining industry. (author)

  14. Automatic planning for robots: review of methods and some ideas about structure and learning

    Energy Technology Data Exchange (ETDEWEB)

    Cuena, J.; Salmeron, C.

    1983-01-01

    After a brief review of the problems involved in the design of an automatic planner system, attention is focused on the particular problems that appear when the planner is used to control the actions of a robot. In conclusion, the introduction of learning techniques to improve the efficiency of a planner is suggested, and a method for this, currently under development, is presented. 14 references.

  15. Method of automatic image registration of three-dimensional range of archaeological restoration

    International Nuclear Information System (INIS)

    Garcia, O.; Perez, M.; Morales, N.

    2012-01-01

    We propose an automatic registration system for reconstructing various positions of a large object based on a static structured light pattern. The system combines stereo vision technology, a structured light pattern, the positioning system of the vision sensor, and an algorithm that simplifies the process of finding correspondences for the modeling of large objects. A new structured light pattern based on a Kautz sequence is proposed, and using this pattern as a static pattern, the proposed new registration method is implemented. (Author)

  16. Social network extraction based on Web: 1. Related superficial methods

    Science.gov (United States)

    Khairuddin Matyuso Nasution, Mahyuddin

    2018-01-01

    Often the nature of something affects the methods used to resolve issues related to it. The same holds for methods to extract social networks from the Web, which involve the structured data types differently. This paper reveals several methods of social network extraction from the same source, the Web: the basic superficial method, the underlying superficial method, the description superficial method, and the related superficial methods. In terms of complexity, we derive inequalities between the methods and between their computations. In this case, we find that the same tools can give different results, ranging from the more complex to the simpler: extraction of a social network involving co-occurrence is more complex than extraction using occurrences alone.

  17. Supervised non-negative tensor factorization for automatic hyperspectral feature extraction and target discrimination

    Science.gov (United States)

    Anderson, Dylan; Bapst, Aleksander; Coon, Joshua; Pung, Aaron; Kudenov, Michael

    2017-05-01

    Hyperspectral imaging provides a highly discriminative and powerful signature for target detection and discrimination. Recent literature has shown that considering additional target characteristics, such as spatial or temporal profiles, simultaneously with spectral content can greatly increase classifier performance. Considering these additional characteristics in a traditional discriminative algorithm requires a feature extraction step be performed first. An example of such a pipeline is computing a filter bank response to extract spatial features followed by a support vector machine (SVM) to discriminate between targets. This decoupling between feature extraction and target discrimination yields features that are suboptimal for discrimination, reducing performance. This performance reduction is especially pronounced when the number of features or available data is limited. In this paper, we propose the use of Supervised Nonnegative Tensor Factorization (SNTF) to jointly perform feature extraction and target discrimination over hyperspectral data products. SNTF learns a tensor factorization and a classification boundary from labeled training data simultaneously. This ensures that the features learned via tensor factorization are optimal for both summarizing the input data and separating the targets of interest. Practical considerations for applying SNTF to hyperspectral data are presented, and results from this framework are compared to decoupled feature extraction/target discrimination pipelines.
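
    A sketch of the decoupled baseline the paper contrasts with SNTF: per-pixel spatial filter-bank features followed by an SVM for target discrimination. SNTF itself (joint factorization and classification) is not reproduced here; the data cube, filter scales and target patch are synthetic assumptions.

    ```python
    # Decoupled feature-extraction / discrimination pipeline (the baseline discussed above):
    # Gaussian filter-bank responses per band, stacked as pixel features, then an SVM.
    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    cube = rng.normal(size=(32, 32, 20))          # rows x cols x spectral bands
    cube[10:20, 10:20, :] += 1.0                   # embedded "target" region
    labels = np.zeros((32, 32), dtype=int)
    labels[10:20, 10:20] = 1

    # Filter bank: per-band spatial smoothing at several scales, stacked as features.
    features = np.concatenate(
        [gaussian_filter(cube, sigma=(s, s, 0)) for s in (1, 2, 4)], axis=2)
    X = features.reshape(-1, features.shape[2])
    y = labels.ravel()

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
    clf = SVC().fit(X_tr, y_tr)
    print("pixel-wise accuracy:", clf.score(X_te, y_te))
    ```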

  18. Sparse deconvolution method for ultrasound images based on automatic estimation of reference signals.

    Science.gov (United States)

    Jin, Haoran; Yang, Keji; Wu, Shiwei; Wu, Haiteng; Chen, Jian

    2016-04-01

    Sparse deconvolution is widely used in the field of non-destructive testing (NDT) for improving the temporal resolution. Generally, the reference signals involved in sparse deconvolution are measured from the reflection echoes of standard plane block, which cannot accurately describe the acoustic properties at different spatial positions. Therefore, the performance of sparse deconvolution will deteriorate, due to the deviations in reference signals. Meanwhile, it is inconvenient for automatic ultrasonic NDT using manual measurement of reference signals. To overcome these disadvantages, a modified sparse deconvolution based on automatic estimation of reference signals is proposed in this paper. By estimating the reference signals, the deviations would be alleviated and the accuracy of sparse deconvolution is therefore improved. Based on the automatic estimation of reference signals, regional sparse deconvolution is achievable by decomposing the whole B-scan image into small regions of interest (ROI), and the image dimensionality is significantly reduced. Since the computation time of proposed method has a power dependence on the signal length, the computation efficiency is therefore improved significantly with this strategy. The performance of proposed method is demonstrated using immersion measurement of scattering targets and steel block with side-drilled holes. The results verify that the proposed method is able to maintain the vertical resolution enhancement and noise-suppression capabilities in different scenarios. Copyright © 2016 Elsevier B.V. All rights reserved.
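
    A minimal sketch of sparse deconvolution itself: a convolution (Toeplitz) matrix is built from a reference echo and a sparse reflectivity sequence is recovered with an L1-regularized solver. Here the reference is given explicitly; in the method above it would be estimated automatically from the B-scan region of interest rather than measured on a standard block.

    ```python
    # Sparse deconvolution sketch: y = H x with H built from a reference echo, solved with Lasso.
    import numpy as np
    from scipy.linalg import toeplitz
    from sklearn.linear_model import Lasso

    n = 200
    ref = np.exp(-0.5 * ((np.arange(21) - 10) / 3.0) ** 2) * np.cos(2 * np.pi * 0.25 * np.arange(21))

    # True sparse reflectivity: two closely spaced reflectors.
    x_true = np.zeros(n)
    x_true[[80, 92]] = [1.0, -0.7]

    # Convolution matrix H (zero-padded, same length).
    col = np.r_[ref, np.zeros(n - len(ref))]
    H = toeplitz(col, np.r_[ref[0], np.zeros(n - 1)])
    y = H @ x_true + 0.01 * np.random.randn(n)

    x_hat = Lasso(alpha=0.01, max_iter=10000).fit(H, y).coef_
    print("recovered reflector indices:", np.nonzero(np.abs(x_hat) > 0.1)[0])
    ```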

  19. Development of automatic control method for cryopump system for JT-60 neutral beam injector

    International Nuclear Information System (INIS)

    Shibanuma, Kiyoshi; Akino, Noboru; Dairaku, Masayuki; Ohuchi, Yutaka; Shibata, Takemasa

    1991-10-01

    A cryopump system for the JT-60 neutral beam injector (NBI) is composed of 14 cryopumps with the largest total pumping speed in the world, 20,000 m³/s, which are cooled by liquid helium through a long-distance liquid helium transferline of about 500 m from a helium refrigerator with the largest capacity in Japan, 3000 W at 3.6 K. An automatic control method for the cryopump system has been developed and tested. Features of the automatic control method are as follows. 1) Suppression control of the thermal imbalance during cooling-down of the 14 cryopumps. 2) Stable cooling control of the cryopumps through liquid helium supply to six cryopanels by natural circulation in steady-state mode. 3) Stable liquid helium supply control for the cryopumps from the liquid helium dewar in all operation modes of the cryopumps, considering the helium quantities held in the respective components of the closed helium loop. 4) Stable control of the helium refrigerator against fluctuations in the thermal load from the cryopumps and changes in the operation mode of the cryopumps. In the automatic operation of the cryopump system by the newly developed control method, the cryopump system including the refrigerator was stably operated in all operation modes of the cryopumps; the cool-down of the 14 cryopumps was completed in 16 hours from the start of cool-down of the system, and the cryopumps were stably cooled by natural circulation in steady-state mode. (author)

  20. [A wavelet-transform-based method for the automatic detection of late-type stars].

    Science.gov (United States)

    Liu, Zhong-tian; Zhao, Rrui-zhen; Zhao, Yong-heng; Wu, Fu-chao

    2005-07-01

    The LAMOST project, the world's largest sky survey project, urgently needs an automatic late-type star detection system. However, to our knowledge, no effective methods for automatic late-type star detection have been reported in the literature up to now. The present work is intended to explore possible ways to deal with this issue. Here, by "late-type stars" we mean those stars with strong molecular absorption bands, including oxygen-rich M, L and T type stars and carbon-rich C stars. Based on experimental results, the authors find that after a wavelet transform with 5 scales on the late-type star spectra, the frequency spectrum of the transform coefficients at the 5th scale consistently manifests a unimodal distribution, and the energy of the frequency spectrum is largely concentrated in a small neighborhood centered around the unique peak. However, for the spectra of other celestial bodies, the corresponding frequency spectrum is multimodal and its energy is dispersed. Based on this finding, the authors present a wavelet-transform-based method for the automatic detection of late-type stars. The proposed method is shown by extensive experiments to be practical and robust.
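
    A rough sketch of the detection idea above using PyWavelets: decompose a spectrum over 5 scales, take the frequency spectrum of the coarsest-scale coefficients, and use the number of prominent peaks as a crude unimodality check. The wavelet choice, prominence threshold and peak criterion are assumptions, not the authors' actual settings.

    ```python
    # Sketch: 5-scale wavelet decomposition, FFT of the coarsest-scale coefficients,
    # and a simple peak count as a crude unimodality test for late-type candidates.
    import numpy as np
    import pywt
    from scipy.signal import find_peaks

    def looks_late_type(flux, wavelet="db4", levels=5, prominence=0.2):
        coeffs = pywt.wavedec(flux, wavelet, level=levels)
        c5 = coeffs[0]                                   # approximation at the 5th scale
        spectrum = np.abs(np.fft.rfft(c5 - c5.mean()))
        spectrum /= spectrum.max()
        peaks, _ = find_peaks(spectrum, prominence=prominence)
        return len(peaks) <= 1                           # unimodal -> late-type candidate

    x = np.linspace(0, 1, 1024)
    toy_spectrum = np.exp(-((x - 0.6) / 0.2) ** 2) + 0.02 * np.random.randn(x.size)
    print(looks_late_type(toy_spectrum))
    ```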

  1. Evaluation of urinary cortisol excretion by radioimmunoassay through two methods (extracted and non-extracted)

    International Nuclear Information System (INIS)

    Fonte Kohek, M.B. da; Mendonca, B.B. de; Nicolau, W.

    1993-01-01

    The objective of this paper is to compare the feasibility, sensitivity and specificity of both methods (extracted versus non-extracted) in the diagnosis of hypercortisolism. The Gamma Coat 125I cortisol kit (Clinical Assays, Incstar, USA) was used for both methods, with extraction by methylene chloride in order to measure the extracted cortisol. Thirty-two assays were performed, yielding a sensitivity of 0.1 to 0.47 µg/dl. The intra-run precision varied from 8.29 ± 3.38% and 8.19 ± 4.72% for high and low levels, respectively, for non-extracted cortisol, and 9.72 ± 1.94% and 9.54 ± 44% for high and low levels, respectively, for extracted cortisol. The inter-run precision was 15.98% and 16.15% for the high level of non-extracted and extracted cortisol, respectively. For the low level, it was 17.25% and 18.59% for non-extracted and extracted cortisol, respectively. Basal 24-hour urine samples from 43 normal subjects, 53 obese patients (body mass index > 30) and 53 Cushing's syndrome patients were evaluated. The sensitivity of the methods was similar (100% and 98.1% for the non-extracted and extracted methods, respectively), and the specificity was the same for both methods (100%). A positive correlation between the two methods was observed in all the groups studied, including patients with Cushing's syndrome. (author)

  2. Steroid hormones in environmental matrices: extraction method comparison.

    Science.gov (United States)

    Andaluri, Gangadhar; Suri, Rominder P S; Graham, Kendon

    2017-11-09

    The U.S. Environmental Protection Agency (EPA) has developed methods for the analysis of steroid hormones in water, soil, sediment, and municipal biosolids by HRGC/HRMS (EPA Method 1698). Following the guidelines provided in US-EPA Method 1698, the extraction methods were validated with reagent water and applied to municipal wastewater, surface water, and municipal biosolids using GC/MS/MS for the analysis of the nine most commonly detected steroid hormones. This is the first reported comparison of the separatory funnel extraction (SFE), continuous liquid-liquid extraction (CLLE), and Soxhlet extraction methods developed by the U.S. EPA. Furthermore, a solid phase extraction (SPE) method was also developed in-house for the extraction of steroid hormones from aquatic environmental samples. This study provides valuable information regarding the robustness of the different extraction methods. Statistical analysis of the data showed that SPE-based methods provided better recovery efficiencies and lower variability for the steroid hormones, followed by SFE. The analytical methods developed in-house for extraction of biosolids showed a wide recovery range; however, the variability was low (≤ 7% RSD). Soxhlet extraction and CLLE are lengthy procedures and have been shown to provide highly variable recovery efficiencies. The results of this study provide guidance for better sample preparation strategies in analytical methods for steroid hormone analysis, and SPE adds to the choices available for environmental sample analysis.

  3. Automatically extracting clinically useful sentences from UpToDate to support clinicians' information needs.

    Science.gov (United States)

    Mishra, Rashmi; Del Fiol, Guilherme; Kilicoglu, Halil; Jonnalagadda, Siddhartha; Fiszman, Marcelo

    2013-01-01

    Clinicians raise several information needs in the course of care. Most of these needs can be met by online health knowledge resources such as UpToDate. However, finding relevant information in these resources often requires significant time and cognitive effort. To design and assess algorithms for extracting from UpToDate the sentences that represent the most clinically useful information for patient care decision making. We developed algorithms based on semantic predications extracted with SemRep, a semantic natural language processing parser. Two algorithms were compared against a gold standard composed of UpToDate sentences rated in terms of clinical usefulness. Clinically useful sentences were strongly correlated with predication frequency (correlation= 0.95). The two algorithms did not differ in terms of top ten precision (53% vs. 49%; p=0.06). Semantic predications may serve as the basis for extracting clinically useful sentences. Future research is needed to improve the algorithms.

  4. Leukocyte telomere length variation due to DNA extraction method.

    Science.gov (United States)

    Denham, Joshua; Marques, Francine Z; Charchar, Fadi J

    2014-12-04

    Telomere length is indicative of biological age. Shorter telomeres have been associated with several disease and health states. There are inconsistencies throughout the literature in relative telomere length measured by quantitative PCR (qPCR) when different extraction methods or kits are used. We quantified whole-blood leukocyte telomere length using the telomere to single copy gene (T/S) ratio by qPCR in 20 young (18-25 yrs) men after extracting DNA using three common extraction methods: the Lahiri and Nurnberger (high salt) method, the PureLink Genomic DNA Mini kit (Life Technologies) and the QiaAmp DNA Mini kit (Qiagen). Differences in telomere length of DNA extracted by the three extraction methods were assessed by one-way analysis of variance (ANOVA). DNA purity differed between the extraction methods used (P=0.01). Telomere length was impacted by the DNA extraction method used (P=0.01). Telomeres extracted using the Lahiri and Nurnberger method (mean T/S ratio: 2.43, range: 1.57-3.02) and the PureLink Genomic DNA Mini Kit (mean T/S ratio: 2.57, range: 2.24-2.80) did not differ (P=0.13). Likewise, QiaAmp- and PureLink-extracted telomeres were not statistically different (P=0.14). The Lahiri-extracted telomeres, however, were significantly shorter than those extracted using the QiaAmp DNA Mini Kit (mean T/S ratio: 2.71, range: 2.32-3.02; P=0.003). DNA purity was associated with telomere length. There are discrepancies between the lengths of leukocyte telomeres extracted from the same individuals according to the DNA extraction method used. DNA purity could be responsible for the discrepancy in telomere length, but this will require validation studies. We recommend using the same DNA extraction kit when quantifying leukocyte telomere length by qPCR, or when comparing different cohorts, to avoid erroneous associations between telomere length and traits of interest.

  5. A semi-automatic multiple view texture mapping for the surface model extracted by laser scanning

    Science.gov (United States)

    Zhang, Zhichao; Huang, Xianfeng; Zhang, Fan; Chang, Yongmin; Li, Deren

    2008-12-01

    Laser scanning is an effective way to acquire geometry data of cultural heritage with complex architecture. After generating the 3D model of the object, it is difficult to map textures exactly onto the real object. We therefore take efforts to create seamless texture maps for a virtual heritage model of arbitrary topology. Texture detail is acquired directly from the real object under lighting conditions as uniform as we can make them. After preprocessing, images are then registered on the 3D mesh in a semi-automatic way. We then divide the mesh into mesh patches that overlap with each other according to the valid texture area of each image. An optimal correspondence between mesh patches and sections of the acquired images is built. Then, a smoothing approach based on texture blending is proposed to erase the seams between different images that map onto adjacent mesh patches. The result obtained with a Buddha from the Dunhuang Mogao Grottoes is presented and discussed.

  6. Automatic electricity markets data extraction for realistic multi-agent simulations

    DEFF Research Database (Denmark)

    Pereira, Ivo F.; Sousa, Tiago M.; Praca, Isabel

    2014-01-01

    This paper presents the development of a tool that provides a database with available information from real electricity markets, ensuring the required updating mechanisms. Some important characteristics of this tool are: capability of collecting, analyzing, processing and storing real electricity markets data available on-line; capability of dealing with different file formats and types, some of them inserted by the user, resulting from information obtained not on-line but based on the possible collaboration with market entities; definition and implementation of a database gathering information from different market sources, even including different market types; machine learning approach for automatic definition of downloads periodicity of new information available on-line. This is a crucial tool to go a step forward in electricity markets simulation, since the integration of this database

  7. Semi-automatic extraction of sectional view from point clouds - The case of Ottmarsheim's abbey-church

    Science.gov (United States)

    Landes, T.; Bidino, S.; Guild, R.

    2014-06-01

    Today, elevations or sectional views of buildings are often produced from terrestrial laser scanning. However, due to the amount of data to process and because usually 2D maps are required by customers, the 3D point cloud is often degraded into 2D slices. In a sectional view, not only the portions of the object which are intersected by the cutting plane but also edges and contours of other parts of the object which are visible behind the cutting plane are represented. To avoid the tedious manual drawing, the aim of this work is to propose a semi-automatic approach for creating sectional views by point cloud processing. The extraction of sectional views requires, in a first step, the segmentation of the point cloud into planar and non-planar entities. Since in cultural heritage buildings arches, vaults and columns can be found, the position and the direction of the sectional view must be taken into account before contour extraction. Indeed, the edges of surfaces of revolution depend on the chosen view. The developed extraction approach is detailed based on point clouds acquired inside and outside churches. The resulting sectional view has been evaluated in a qualitative and quantitative way by comparing it with a reference sectional view made by hand. A mean deviation of 3 cm between both sections proves that the proposed approach is promising. Regarding the processing time, despite a few manual corrections, it has saved 40% of the time required for manual drawing.
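
    A minimal sketch of the first step of sectional-view extraction: keep the points of a cloud lying within a tolerance of the cutting plane and project them onto in-plane 2D coordinates. Segmentation into planar/non-planar entities and contour drawing are not shown; the cloud below is a placeholder.

    ```python
    # Select points near a cutting plane and project them to 2D section coordinates.
    import numpy as np

    def section_slice(points, plane_point, plane_normal, tol=0.02):
        n = plane_normal / np.linalg.norm(plane_normal)
        dist = (points - plane_point) @ n                # signed distance to the plane
        kept = points[np.abs(dist) <= tol]
        # Build an orthonormal in-plane basis (u, v) for 2D section coordinates.
        u = np.cross(n, [0.0, 0.0, 1.0])
        if np.linalg.norm(u) < 1e-8:                     # plane normal parallel to z
            u = np.array([1.0, 0.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(n, u)
        rel = kept - plane_point
        return np.c_[rel @ u, rel @ v]

    cloud = np.random.rand(10000, 3) * [10.0, 10.0, 5.0]   # placeholder for a scanned building
    section_2d = section_slice(cloud, plane_point=np.array([5.0, 0.0, 0.0]),
                               plane_normal=np.array([1.0, 0.0, 0.0]), tol=0.05)
    print(section_2d.shape)
    ```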

  8. A method for unsupervised change detection and automatic radiometric normalization in multispectral data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton John

    2011-01-01

    Based on canonical correlation analysis, the iteratively re-weighted multivariate alteration detection (MAD) method is used to successfully perform unsupervised change detection in bi-temporal Landsat ETM+ images covering an area with villages, woods, agricultural fields and open pit mines in North Rhine-Westphalia, Germany. A link to an example with ASTER data to detect change with the same method after the 2005 Kashmir earthquake is given. The method is also used to automatically normalize multitemporal, multispectral Landsat ETM+ data radiometrically. IDL/ENVI, Python and Matlab software
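
    A sketch of the plain MAD transformation underlying the method above: canonical correlation analysis of two co-registered multispectral images, with the MAD variates taken as differences of paired canonical variates. The iterative re-weighting step (IR-MAD) and the radiometric normalization are not reproduced; the image data are synthetic.

    ```python
    # Plain MAD change detection sketch: CCA between two acquisitions, MAD = U - V.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    rows, cols, bands = 64, 64, 6
    t1 = rng.normal(size=(rows, cols, bands))
    t2 = 0.9 * t1 + 0.1 * rng.normal(size=t1.shape)      # mostly unchanged scene
    t2[20:30, 20:30, :] += 2.0                            # a changed patch

    X = t1.reshape(-1, bands)
    Y = t2.reshape(-1, bands)
    cca = CCA(n_components=bands).fit(X, Y)
    U, V = cca.transform(X, Y)
    mad = U - V                                           # MAD variates, one per component
    change_score = np.sum((mad / mad.std(axis=0)) ** 2, axis=1).reshape(rows, cols)
    print("mean change score inside / outside patch:",
          change_score[20:30, 20:30].mean(), change_score[:10, :10].mean())
    ```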

  9. Natural colorants: Pigment stability and extraction yield enhancement via utilization of appropriate pretreatment and extraction methods.

    Science.gov (United States)

    Ngamwonglumlert, Luxsika; Devahastin, Sakamon; Chiewchan, Naphaporn

    2017-10-13

    Natural colorants from plant-based materials have gained increasing popularity due to health consciousness of consumers. Among the many steps involved in the production of natural colorants, pigment extraction is one of the most important. Soxhlet extraction, maceration, and hydrodistillation are conventional methods that have been widely used in industry and laboratory for such a purpose. Recently, various non-conventional methods, such as supercritical fluid extraction, pressurized liquid extraction, microwave-assisted extraction, ultrasound-assisted extraction, pulsed-electric field extraction, and enzyme-assisted extraction have emerged as alternatives to conventional methods due to the advantages of the former in terms of smaller solvent consumption, shorter extraction time, and more environment-friendliness. Prior to the extraction step, pretreatment of plant materials to enhance the stability of natural pigments is another important step that must be carefully taken care of. In this paper, a comprehensive review of appropriate pretreatment and extraction methods for chlorophylls, carotenoids, betalains, and anthocyanins, which are major classes of plant pigments, is provided by using pigment stability and extraction yield as assessment criteria.

  10. A method for the automatic separation of the images of galaxies and stars from measurements made with the COSMOS machine

    International Nuclear Information System (INIS)

    MacGillivray, H.T.; Martin, R.; Pratt, N.M.; Reddish, V.C.; Seddon, H.; Alexander, L.W.G.; Walker, G.S.; Williams, P.R.

    1976-01-01

    A method has been developed which allows the computer to distinguish automatically between the images of galaxies and those of stars from measurements made with the COSMOS automatic plate-measuring machine at the Royal Observatory, Edinburgh. Results have indicated that a 90 to 95 per cent separation between galaxies and stars is possible. (author)

  11. Methods and extractants to evaluate silicon availability for sugarcane.

    Science.gov (United States)

    Crusciol, Carlos Alexandre Costa; de Arruda, Dorival Pires; Fernandes, Adalton Mazetti; Antonangelo, João Arthur; Alleoni, Luís Reynaldo Ferracciú; Nascimento, Carlos Antonio Costa do; Rossato, Otávio Bagiotto; McCray, James Mabry

    2018-01-17

    The correct evaluation of silicon (Si) availability in different soil types is critical in defining the amount of Si to be supplied to crops. This study was carried out to evaluate two methods and five chemical Si extractants in clayey, sandy-loam, and sandy soils cultivated with sugarcane (Saccharum spp. hybrids). Soluble Si was extracted using two extraction methods (conventional and microwave oven) and five Si extractants (CaCl 2 , deionized water, KCl, Na-acetate buffer (pH 4.0), and acetic acid). No single method and/or extractant adequately estimated the Si availability in the soils. Conventional extraction with KCl was no more effective than other methods in evaluating Si availability; however, it had less variation in estimating soluble Si between soils with different textural classes. In the clayey and sandy soils, the Na-acetate buffer (pH 4.0) and acetic acid were effective in evaluating the Si availability in the soil regardless of the extraction methods. The extraction with acetic acid using the microwave oven, however, overestimated the Si availability. In the sandy-loam soil, extraction with deionized water using the microwave oven method was more effective in estimating the Si availability in the soil than the other extraction methods.

  12. An integrated automatic system to evaluate U and Th dynamic lixiviation from solid matrices, and to extract/pre-concentrate leached analytes previous ICP-MS detection.

    Science.gov (United States)

    Ceballos, Melisa Rodas; García-Tenorio, Rafael; Estela, José Manuel; Cerdà, Víctor; Ferrer, Laura

    2017-12-01

    Leached fractions of U and Th from different environmental solid matrices were evaluated by an automatic system enabling the on-line lixiviation and extraction/pre-concentration of these two elements prior to ICP-MS detection. UTEVA resin was used as the selective extraction material. Ten leached fractions, using artificial rainwater (pH 5.4) as the leaching agent, and a residual fraction were analyzed for each sample, allowing the behavior of U and Th to be studied under dynamic lixiviation conditions. Multivariate techniques were employed for the efficient optimization of the independent variables that affect the lixiviation process. The system reached LODs of 0.1 and 0.7 ng kg⁻¹ for U and Th, respectively. The method was satisfactorily validated for three solid matrices by the analysis of a soil reference material (IAEA-375), a certified sediment reference material (BCR-320R) and a phosphogypsum reference material (MatControl CSN-CIEMAT 2008). In addition, environmental samples were analyzed, showing a similar behavior, i.e. the content of radionuclides decreases with successive extractions. In all cases, the cumulative leached fractions of U and Th for the different solid matrices studied (soil, sediment and phosphogypsum) were extremely low, at most 0.05% and 0.005% for U and Th, respectively. However, great variability was observed in terms of the mass concentration released, e.g. between 44 and 13,967 ng U kg⁻¹. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.

    Science.gov (United States)

    Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L

    2010-07-01

    to predict TAG level in the liver. Receiver-operating-characteristics (ROC) analysis was applied to assess the performance and area under the curve (AUC) of predicting TAG and to compare the sensitivity and specificity of the methods. Best speckle-size estimates and overall performance (R2 = 0.71, AUC = 0.94) were achieved by using an SNR-based adaptive automatic-segmentation method (used TAG threshold: 50 mg/g liver wet weight). Automatic segmentation is thus feasible and profitable.

  14. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review

    Directory of Open Access Journals (Sweden)

    Tim Mathes

    2017-11-01

    Background: Our objective was to assess the frequency of data extraction errors and their potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. Methods: We performed a systematic review of methodological literature in PubMed, the Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted into standardized tables by one reviewer and verified by a second. Results: The analysis included six studies: four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had a moderate effect on extraction error rates and effect estimates. Conclusion: The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to gain deeper insights into the influence of different extraction methods.

  15. Semi-automatic Term Extraction for an isiZulu Linguistic Terms ...

    African Journals Online (AJOL)

    Abstract. The University of KwaZulu-Natal (UKZN) is compiling a series of Language for Special Purposes (LSP) dictionaries for various specialized subject domains in line with its language policy and plan. The focus in this paper is the term extraction for words in the linguistics subject domain. This paper advances the use ...

  16. Comparative Analysis of Music Recordings from Western and Non-Western traditions by Automatic Tonal Feature Extraction

    Directory of Open Access Journals (Sweden)

    Emilia Gómez

    2008-09-01

    The automatic analysis of large musical corpora by means of computational models overcomes some limitations of manual analysis, and the unavailability of scores for most existing music makes it necessary to work with audio recordings. Until now, research in this area has focused on music from the Western tradition. Nevertheless, we might ask if the available methods are suitable when analyzing music from other cultures. We present an empirical approach to the comparative analysis of audio recordings, focusing on tonal features and data mining techniques. Tonal features are related to the pitch class distribution, pitch range and employed scale, gamut and tuning system. We provide our initial but promising results obtained when trying to automatically distinguish music from Western and non-Western traditions; we analyze which descriptors are most relevant and study their distribution over 1500 pieces from different traditions and styles. As a result, some feature distributions differ for Western and non-Western music, and the obtained classification accuracy is higher than 80% for different classification algorithms and an independent test set. These results show that automatic description of audio signals together with data mining techniques provides means to characterize huge music collections from different traditions and complement manual musicological analyses.
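
    A sketch of one tonal descriptor family mentioned above, a normalized pitch-class distribution, together with a classifier over such descriptors. Inputs here are symbolic pitch lists and synthetic labels; the study itself computes comparable profiles directly from audio recordings.

    ```python
    # Pitch-class distribution as a tonal descriptor, plus a classifier over such profiles.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def pitch_class_profile(midi_pitches):
        """12-bin normalized histogram of pitch classes."""
        hist = np.bincount(np.asarray(midi_pitches) % 12, minlength=12).astype(float)
        return hist / hist.sum()

    # Synthetic corpora: "diatonic" pieces vs. pieces drawing on a different gamut.
    rng = np.random.default_rng(0)
    diatonic = [0, 2, 4, 5, 7, 9, 11]
    other = [0, 1, 2, 3, 5, 7, 8, 10]
    pieces, labels = [], []
    for _ in range(100):
        pieces.append(pitch_class_profile(rng.choice(diatonic, size=200) + 60)); labels.append(0)
        pieces.append(pitch_class_profile(rng.choice(other, size=200) + 60)); labels.append(1)

    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(pieces[:150], labels[:150])
    print("held-out accuracy:", clf.score(pieces[150:], labels[150:]))
    ```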

  17. Brazil nut sorting for aflatoxin prevention: a comparison between automatic and manual shelling methods

    Directory of Open Access Journals (Sweden)

    Ariane Mendonça Pacheco

    2013-06-01

    The impact of automatic and manual shelling methods during manual/visual sorting of different batches of Brazil nuts from the 2010 and 2011 harvests was evaluated in order to investigate aflatoxin prevention. The samples were tested as follows: in-shell, shell, shelled, and pieces, in order to evaluate the moisture content (mc), water activity (Aw), and total aflatoxin (LOD = 0.3 µg/kg and LOQ = 0.85 µg/kg) at the Brazil nut processing plant. The aflatoxin results for the manually shelled nut samples ranged from 3.0 to 60.3 µg/g, and from 2.0 to 31.0 µg/g for the automatically shelled samples. All samples showed levels of mc below the limit of 15%; on the other hand, shelled samples from both harvests showed levels of Aw above the limit. There were no significant differences between the manual and automatic shelling results during the sorting stages. On the other hand, the visual sorting was effective in decreasing the aflatoxin contamination for both methods.

  18. A novel method for automatic genotyping of microsatellite markers based on parametric pattern recognition.

    Science.gov (United States)

    Johansson, Asa; Karlsson, Patrik; Gyllensten, Ulf

    2003-09-01

    Genetic mapping of loci affecting complex phenotypes in human and other organisms is presently being conducted on a very large scale, using either microsatellite or single nucleotide polymorphism (SNP) markers and by partly automated methods. A critical step in this process is the conversion of the instrument output into genotypes, a time-consuming and error-prone procedure. Errors made during this calling of genotypes will dramatically reduce the ability to map the location of loci underlying a phenotype. Accurate methods for automatic genotype calling are therefore important. Here, we describe novel algorithms for automatic calling of microsatellite genotypes using parametric pattern recognition. The analysis of microsatellite data is complicated both by the occurrence of stutter bands, which arise from Taq polymerase misreading the number of repeats, and by additional bands derived from the non-template dependent addition of a nucleotide to the 3' end of the PCR products. These problems, together with the fact that the lengths of two alleles in a heterozygous individual may differ by only two nucleotides, complicate the development of an automated process. The novel algorithms markedly reduce the need for manual editing and the frequency of miscalls, and compare very favourably with commercially available software for automatic microsatellite genotyping.

  19. Semi-automatic watershed medical image segmentation methods for customized cancer radiation treatment planning simulation

    International Nuclear Information System (INIS)

    Kum Oyeon; Kim Hye Kyung; Max, N.

    2007-01-01

    A cancer radiation treatment planning simulation requires image segmentation to define the gross tumor volume, clinical target volume, and planning target volume. Manual segmentation, which is usual in clinical settings, depends on the operator's experience and may, in addition, change for every trial by the same operator. To overcome this difficulty, we developed semi-automatic watershed medical image segmentation tools using both the top-down watershed algorithm in the Insight Segmentation and Registration Toolkit (ITK) and Vincent-Soille's bottom-up watershed algorithm with region merging. We applied our algorithms to segment two- and three-dimensional head phantom CT data and to find pixel (or voxel) numbers for each segmented area, which are needed for radiation treatment optimization. A semi-automatic method is useful to avoid errors incurred by both human and machine sources, and provides clear and visible information for pedagogical purposes. (orig.)

  20. Automatic flow-through dynamic extraction: A fast tool to evaluate char-based remediation of multi-element contaminated mine soils.

    Science.gov (United States)

    Rosende, María; Beesley, Luke; Moreno-Jimenez, Eduardo; Miró, Manuel

    2016-02-01

    An automatic in-vitro bioaccessibility test based upon dynamic microcolumn extraction in a programmable flow setup is herein proposed as a screening tool to evaluate biochar-based remediation of mine soils contaminated with trace elements, as a compelling alternative to conventional phyto-availability tests. The feasibility of the proposed system was evaluated by extracting the readily bioaccessible pools of As, Pb and Zn in two contaminated mine soils before and after the addition of two biochars (9% (w:w)) of diverse source origin (pine and olive). Bioaccessible fractions under worst-case scenarios were measured using 0.001 mol L(-1) CaCl2 as extractant for mimicking plant uptake, and analysis of the extracts by inductively coupled plasma optical emission spectrometry. The t-test comparison of means revealed efficient metal (mostly Pb and Zn) immobilization by the action of olive pruning-based biochar against the bare (control) soil at the 0.05 significance level. In-vitro flow-through bioaccessibility tests are compared for the first time with in-vivo phyto-toxicity assays in a microcosm soil study. By assessing seed germination and shoot elongation of Lolium perenne in contaminated soils with and without biochar amendments, the dynamic flow-based bioaccessibility data proved to be in good agreement with the phyto-availability tests. Experimental results indicate that the dynamic extraction method is a viable and economical in-vitro tool in risk assessment explorations to evaluate the feasibility of a given biochar amendment for revegetation and remediation of metal contaminated soils in a mere 10 min, against 4 days in the case of phyto-toxicity assays. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. An automatic method to generate domain-specific investigator networks using PubMed abstracts

    Directory of Open Access Journals (Sweden)

    Gwinn Marta

    2007-06-01

    Full Text Available Abstract Background Collaboration among investigators has become critical to scientific research. This includes ad hoc collaboration established through personal contacts as well as formal consortia established by funding agencies. Continued growth in online resources for scientific research and communication has promoted the development of highly networked research communities. Extending these networks globally requires identifying additional investigators in a given domain, profiling their research interests, and collecting current contact information. We present a novel strategy for building investigator networks dynamically and producing detailed investigator profiles using data available in PubMed abstracts. Results We developed a novel strategy to obtain detailed investigator information by automatically parsing the affiliation string in PubMed records. We illustrated the results by using a published literature database in human genome epidemiology (HuGE Pub Lit) as a test case. Our parsing strategy extracted country information from 92.1% of the affiliation strings in a random sample of PubMed records and in 97.0% of HuGE records, with accuracies of 94.0% and 91.0%, respectively. Institution information was parsed from 91.3% of the general PubMed records (accuracy 86.8%) and from 94.2% of HuGE PubMed records (accuracy 87.0%). We demonstrated the application of our approach to dynamic creation of investigator networks by creating a prototype information system containing a large database of PubMed abstracts relevant to human genome epidemiology (HuGE Pub Lit), indexed using PubMed medical subject headings converted to Unified Medical Language System concepts. Our method was able to identify 70–90% of the investigators/collaborators in three different human genetics fields; it also successfully identified 9 of 10 genetics investigators within the PREBIC network, an existing preterm birth research network. Conclusion We successfully created a
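
    The record gives no implementation details, so the following is a hypothetical heuristic rather than the authors' parser; it merely shows how country and institution fields might be pulled from a PubMed affiliation string by stripping e-mail addresses and splitting on commas. The splitting rule and the country list are illustrative assumptions.

      # Minimal sketch of affiliation-string parsing; the heuristic and the country
      # list are illustrative assumptions, not the published method.
      import re

      COUNTRIES = {"USA", "United States", "China", "Japan", "Germany", "France"}

      def parse_affiliation(affiliation):
          # Drop e-mail addresses, then split the remaining string on commas.
          text = re.sub(r"\S+@\S+", "", affiliation).strip(" .")
          parts = [p.strip() for p in text.split(",") if p.strip()]
          country = parts[-1] if parts and parts[-1] in COUNTRIES else None
          institution = parts[0] if parts else None
          return {"institution": institution, "country": country}

      print(parse_affiliation(
          "Department of Genetics, Example University, Boston, USA. jdoe@example.edu"))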

  2. Knickzone Extraction Tool (KET) - A new ArcGIS toolset for automatic extraction of knickzones from a DEM based on multi-scale stream gradients

    Science.gov (United States)

    Zahra, Tuba; Paudel, Uttam; Hayakawa, Yuichi S.; Oguchi, Takashi

    2017-04-01

    Extraction of knickpoints or knickzones from a Digital Elevation Model (DEM) has gained immense significance owing to the increasing implications of knickzones for landform development. However, existing methods for knickzone extraction tend to be subjective or require time-intensive data processing. This paper describes the proposed Knickzone Extraction Tool (KET), a new raster-based Python script deployed in the form of an ArcGIS toolset that automates the process of knickzone extraction and is both faster and more user-friendly. The KET is based on multi-scale analysis of slope gradients along a river course, where any locally steep segment (knickzone) can be extracted as an anomalously high local gradient. We also conducted a comparative analysis of the KET and other contemporary knickzone identification techniques. The relationship between knickzone distribution and morphometric characteristics is also examined through a case study of a mountainous watershed in Japan.
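
    As a rough numerical illustration of the multi-scale gradient idea (not the KET code, which is an ArcGIS toolset), the sketch below flags points along a synthetic stream profile whose short-window gradient is anomalously high relative to the long-window gradient; the window sizes and the threshold factor are assumed values.

      # Minimal sketch: flag knickzone candidates where the short-window stream
      # gradient is anomalously high relative to the long-window gradient.
      import numpy as np

      def local_gradient(elev, dist, half_window):
          grad = np.empty_like(elev)
          for i in range(len(elev)):
              lo, hi = max(0, i - half_window), min(len(elev), i + half_window + 1)
              grad[i] = np.polyfit(dist[lo:hi], elev[lo:hi], 1)[0]   # slope of fitted line
          return grad

      dist = np.linspace(0, 5000, 501)                  # distance downstream (m)
      elev = 800 - 0.05 * dist - 20 * (dist > 2500)     # profile with an artificial step
      g_fine = local_gradient(elev, dist, half_window=2)
      g_coarse = local_gradient(elev, dist, half_window=50)
      knickzone = np.abs(g_fine) > 2.0 * np.abs(g_coarse)   # anomalously steep segments
      print("candidate knickzone points:", np.flatnonzero(knickzone)[:10])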

  3. Comparative exergy analyses of Jatropha curcas oil extraction methods: Solvent and mechanical extraction processes

    International Nuclear Information System (INIS)

    Ofori-Boateng, Cynthia; Keat Teong, Lee; JitKang, Lim

    2012-01-01

    Highlights: ► Exergy analysis detects locations of resource degradation within a process. ► Solvent extraction is six times more exergetically destructive than mechanical extraction. ► Mechanical extraction of jatropha oil is 95.93% exergetically efficient. ► Solvent extraction of jatropha oil is 79.35% exergetically efficient. ► Exergy analysis of oil extraction processes allows room for improvements. - Abstract: Vegetable oil extraction processes are found to be energy intensive. Thermodynamically, any energy intensive process is considered to degrade the most useful part of energy that is available to produce work. This study uses literature values to compare the efficiencies and the degradation of the useful energy within Jatropha curcas oil during oil extraction, taking into account solvent and mechanical extraction methods. According to this study, processing J. curcas seeds into oil upgrades the resource with mechanical extraction but degrades it with solvent extraction. For mechanical extraction, the total internal exergy destroyed is 3006 MJ, which is about six times less than that for solvent extraction (18,072 MJ) per ton of J. curcas oil produced. The pretreatment processes of the J. curcas seeds recorded a total internal exergy destruction of 5768 MJ, accounting for 24% of the total internal exergy destroyed for solvent extraction and 66% for mechanical extraction. The exergetic efficiencies recorded are 79.35% and 95.93% for solvent and mechanical extraction of J. curcas oil, respectively. Hence, mechanical oil extraction processes are more exergetically efficient than solvent extraction processes. Possible improvement methods are also elaborated in this study.

  4. Automatic knowledge extraction in sequencing analysis with multiagent system and grid computing.

    Science.gov (United States)

    González, Roberto; Zato, Carolina; Benito, Rocío; Bajo, Javier; Hernández, Jesús M; De Paz, Juan F; Vera, Vicente; Corchado, Juan M

    2012-12-01

    Advances in bioinformatics have contributed towards a significant increase in available information. Information analysis requires the use of distributed computing systems to best engage the process of data analysis. This study proposes a multiagent system that incorporates grid technology to facilitate distributed data analysis by dynamically incorporating the roles associated with each specific case study. The system was applied to genetic sequencing data to extract relevant information about insertions, deletions or polymorphisms.

  5. Automatic knowledge extraction in sequencing analysis with multiagent system and grid computing

    Directory of Open Access Journals (Sweden)

    González Roberto

    2012-12-01

    Full Text Available Advances in bioinformatics have contributed towards a significant increase in available information. Information analysis requires the use of distributed computing systems to best engage the process of data analysis. This study proposes a multiagent system that incorporates grid technology to facilitate distributed data analysis by dynamically incorporating the roles associated with each specific case study. The system was applied to genetic sequencing data to extract relevant information about insertions, deletions or polymorphisms.

  6. Semi-automatic Term Extraction for an isiZulu Linguistic Terms ...

    African Journals Online (AJOL)

    user

    This paper advances the use of frequency analysis and keyword analysis as strategies to extract terms for the compilation of the dictionary of isiZulu linguistic terms. The study uses the isiZulu National Corpus (INC) of about 1.2 million tokens as a reference corpus as well as an LSP corpus of about 100,000 tokens as a ...

  7. Automatic measuring method of catenary geometric parameters based on laser scanning and imaging

    Science.gov (United States)

    Fu, Luhua; Chang, Songhong; Liu, Changjie

    2018-01-01

    The catenary geometric parameters are important factors that affect the safe operation of the railway. Among them, height of conductor and stagger value are two key parameters. At present, the two parameters are mainly measured by a laser distance sensor and an angle-measuring device with a manual aiming method, which is slow and inefficient. In order to improve the speed and accuracy of catenary geometric parameter detection, a new automatic measuring method for the contact wire's parameters based on laser scanning and imaging is proposed. The DLT method is used to calibrate the parameters of the linear array CCD camera. The direction of the scanning laser beam and the spatial coordinate of the starting point of the beam are calculated by a geometric method. Finally, an equation is established using the calibrated parameters and the image coordinates of the imaging point to solve the spatial coordinate of the measured point on the contact wire, so as to calculate height of conductor and stagger value. Different from the traditional hand-held laser phase measuring method, the new method can measure the catenary geometric parameters automatically without manual aiming. Measurement results show that the accuracy can reach 2 mm.

  8. Methods for microbial DNA extraction from soil for PCR amplification

    Directory of Open Access Journals (Sweden)

    Yeates C

    1998-01-01

    Full Text Available Amplification of DNA from soil is often inhibited by co-purified contaminants. A rapid, inexpensive, large-scale DNA extraction method involving minimal purification has been developed that is applicable to various soil types (1). The DNA is also suitable for PCR amplification using various DNA targets. DNA was extracted from 100 g of soil using direct lysis with glass beads and SDS followed by potassium acetate precipitation, polyethylene glycol precipitation, phenol extraction and isopropanol precipitation. This method was compared to other DNA extraction methods with regard to DNA purity and size.

  9. An automated and simple method for brain MR image extraction

    Directory of Open Access Journals (Sweden)

    Zhu Zixin

    2011-09-01

    Full Text Available Abstract Background The extraction of brain tissue from magnetic resonance head images is an important image processing step for the analyses of neuroimage data. The authors have developed an automated and simple brain extraction method using an improved geometric active contour model. Methods The method uses an improved geometric active contour model which can not only solve the boundary leakage problem but is also less sensitive to intensity inhomogeneity. The method defines the initial function as a binary level set function to improve computational efficiency. The method is applied to both our data and Internet brain MR data provided by the Internet Brain Segmentation Repository. Results The results obtained from our method are compared with manual segmentation results using multiple indices. In addition, the method is compared to two popular methods, Brain Extraction Tool and Model-based Level Set. Conclusions The proposed method can provide automated and accurate brain extraction results with high efficiency.
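
    The paper's improved model is not reproduced here; as a generic illustration of a geometric active contour initialised with a binary level set, the sketch below uses scikit-image's morphological geodesic active contour on a synthetic slice. The library, the synthetic image and all parameter values are assumptions, not the authors' implementation.

      # Minimal sketch: morphological geodesic active contour on one 2D slice.
      # scikit-image is assumed; the slice is a synthetic stand-in for MR data.
      import numpy as np
      from skimage import segmentation

      def extract_mask(slice_img, iterations=100):
          img = (slice_img - slice_img.min()) / (slice_img.max() - slice_img.min() + 1e-12)
          edge_map = segmentation.inverse_gaussian_gradient(img)   # low values at edges
          init = np.zeros(img.shape, dtype=np.int8)                # binary initial level set
          init[5:-5, 5:-5] = 1                                     # start from a large box
          return segmentation.morphological_geodesic_active_contour(
              edge_map, iterations, init_level_set=init, smoothing=2, balloon=-1)

      # Synthetic slice: a bright elliptical "brain" on a darker background.
      yy, xx = np.mgrid[0:96, 0:96]
      slice_img = 20.0 + 80.0 * ((((yy - 48) / 30.0) ** 2 + ((xx - 48) / 38.0) ** 2) < 1)
      mask = extract_mask(slice_img)
      print("extracted area (pixels):", int(mask.sum()))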

  10. A comparison of coronal mass ejections identified by manual and automatic methods

    Directory of Open Access Journals (Sweden)

    S. Yashiro

    2008-10-01

    Full Text Available Coronal mass ejections (CMEs) are related to many phenomena (e.g. flares, solar energetic particles, geomagnetic storms), thus the compiling of event catalogs is important for a global understanding of these phenomena. CMEs have been identified manually for a long time, but in the SOHO era, automatic identification methods are being developed. In order to clarify the advantages and disadvantages of the manual and automatic CME catalogs, we examined the distributions of CME properties listed in the CDAW (manual) and CACTus (automatic) catalogs. Both catalogs are in good agreement on the properties of wide CMEs (width>120°), while there is a significant discrepancy for narrow CMEs (width≤30°): CACTus has a larger number of narrow CMEs than CDAW. We carried out an event-by-event examination of a sample of events and found that the CDAW catalog has missed many narrow CMEs during the solar maximum. Another significant discrepancy was found for fast CMEs (speed>1000 km/s): the majority of the fast CDAW CMEs are wide and originate from low latitudes, while the fast CACTus CMEs are narrow and originate from all latitudes. Event-by-event examination of a sample of events suggests that CACTus has a problem with the detection of fast CMEs.

  11. Research on large spatial coordinate automatic measuring system based on multilateral method

    Science.gov (United States)

    Miao, Dongjing; Li, Jianshuan; Li, Lianfu; Jiang, Yuanlin; Kang, Yao; He, Mingzhao; Deng, Xiangrui

    2015-10-01

    To measure spatial coordinates accurately and efficiently over a large size range, a manipulator automatic measurement system based on the multilateral method is developed. This system is divided into two parts: the coordinate measurement subsystem consists of four laser tracers, and the trajectory generation subsystem is composed of a manipulator and a rail. To ensure that no laser beam break occurs during the measurement process, an optimization function is constructed using the vectors between the laser tracers' measuring centers and the cat's eye reflector's measuring center, and an algorithm to automatically adjust the orientation of the reflector is proposed; with this algorithm, the laser tracers are always able to track the reflector during the entire measurement process. Finally, the proposed algorithm is validated by taking the calibration of a laser tracker as an instance: the actual experiment is conducted in a 5m × 3m × 3.2m range, and the algorithm is used to automatically plan the orientations of the reflector corresponding to the given 24 points. After improving the orientations of a minority of points with adverse angles, the final results are used to control the manipulator's motion. During the actual movement, no beam break occurred. The result shows that the proposed algorithm helps the developed system to measure spatial coordinates over a large range with efficiency.

  12. A method of applying two-pump system in automatic transmissions for energy conservation

    Directory of Open Access Journals (Sweden)

    Peng Dong

    2015-06-01

    Full Text Available In order to improve hydraulic efficiency, modern automatic transmissions tend to apply an electric oil pump in their hydraulic system. The electric oil pump can support the mechanical oil pump for cooling, lubrication, and maintaining the line pressure at low engine speeds. In addition, the start–stop function can be realized by means of the electric oil pump; thus, the fuel consumption can be further reduced. This article proposes a method of applying a two-pump system (one electric oil pump and one mechanical oil pump) in automatic transmissions based on forward driving simulation. A mathematical model for calculating the transmission power loss is developed. The power loss is converted to heat, which requires oil flow for cooling and lubrication. A leakage model is developed to calculate the leakage of the hydraulic system. In order to satisfy the flow requirement, a flow-based control strategy for the electric oil pump is developed. Simulation results of different driving cycles show that there is a best combination of the size of the electric oil pump and the size of the mechanical oil pump with respect to optimal energy conservation. Besides, the two-pump system can also satisfy the requirement of the start–stop function. This research is extremely valuable for the forward design of a two-pump system in automatic transmissions with respect to energy conservation and the start–stop function.

  13. Automatic diagnosis of melanoma using machine learning methods on a spectroscopic system.

    Science.gov (United States)

    Li, Lin; Zhang, Qizhi; Ding, Yihua; Jiang, Huabei; Thiers, Bruce H; Wang, James Z

    2014-10-13

    Early and accurate diagnosis of melanoma, the deadliest type of skin cancer, has the potential to reduce morbidity and mortality rates. However, early diagnosis of melanoma is not trivial even for experienced dermatologists, as it needs sampling and laboratory tests which can be extremely complex and subjective. The accuracy of clinical diagnosis of melanoma is also an issue, especially in distinguishing between melanoma and mole. To solve these problems, this paper presents an approach that makes non-subjective judgements based on quantitative measures for automatic diagnosis of melanoma. Our approach involves image acquisition, image processing, feature extraction, and classification. 187 images (19 malignant melanomas and 168 benign lesions) were collected in a clinic by a spectroscopic device that combines single-scattered, polarized light spectroscopy with multiple-scattered, un-polarized light spectroscopy. After noise reduction and image normalization, features were extracted based on statistical measurements (i.e. mean, standard deviation, mean absolute deviation, L1 norm, and L2 norm) of image pixel intensities to characterize the pattern of melanoma. Finally, these features were fed into certain classifiers to train learning models for classification. We adopted three classifiers - artificial neural network, naïve Bayes, and k-nearest neighbour - to evaluate our approach separately. The naïve Bayes classifier achieved the best performance - 89% accuracy, 89% sensitivity and 89% specificity - and was integrated with our approach in a desktop application running on the spectroscopic system for diagnosis of melanoma. Our work has two strengths. (1) We have used single scattered polarized light spectroscopy and multiple scattered unpolarized light spectroscopy to decipher the multilayered characteristics of human skin. (2) Our approach does not need image segmentation, as we directly probe tiny spots in the lesion skin and the image scans do not involve
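
    To make the feature set concrete, here is a small sketch of the statistical intensity features named in the abstract (mean, standard deviation, mean absolute deviation, L1 and L2 norms) feeding a naïve Bayes classifier; scikit-learn and the synthetic images are assumptions, not the study's spectroscopic data.

      # Minimal sketch: per-image statistical intensity features + naive Bayes.
      # scikit-learn is assumed; the images below are synthetic stand-ins.
      import numpy as np
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import cross_val_score

      def intensity_features(img):
          x = img.astype(float).ravel()
          return np.array([
              x.mean(),
              x.std(),
              np.mean(np.abs(x - x.mean())),   # mean absolute deviation
              np.sum(np.abs(x)),               # L1 norm
              np.sqrt(np.sum(x ** 2)),         # L2 norm
          ])

      rng = np.random.default_rng(0)
      images = [rng.normal(loc=m, scale=1.0, size=(32, 32)) for m in [5.0] * 20 + [6.0] * 20]
      labels = np.array([0] * 20 + [1] * 20)   # 0 = benign, 1 = melanoma (hypothetical)

      X = np.array([intensity_features(img) for img in images])
      scores = cross_val_score(GaussianNB(), X, labels, cv=5)
      print("cross-validated accuracy:", scores.mean())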

  14. THE EFFICIENCY OF RANDOM FOREST METHOD FOR SHORELINE EXTRACTION FROM LANDSAT-8 AND GOKTURK-2 IMAGERIES

    Directory of Open Access Journals (Sweden)

    B. Bayram

    2017-11-01

    Full Text Available Coastal monitoring plays a vital role in environmental planning and hazard management related issues. Since shorelines are fundamental data for environment management, disaster management, coastal erosion studies, modelling of sediment transport and coastal morphodynamics, various techniques have been developed to extract shorelines. Random Forest is one of these techniques and is used in this study for shoreline extraction. This algorithm is a machine learning method based on decision trees. Decision trees analyse classes of training data and create rules for classification. In this study, the Terkos region has been chosen for the proposed method within the scope of the TUBITAK Project (Project No: 115Y718) titled "Integration of Unmanned Aerial Vehicles for Sustainable Coastal Zone Monitoring Model – Three-Dimensional Automatic Coastline Extraction and Analysis: Istanbul-Terkos Example". The Random Forest algorithm has been implemented to extract the shoreline of the Black Sea near the lake from LANDSAT-8 and GOKTURK-2 satellite imageries taken in 2015. The MATLAB environment was used for classification. To obtain land and water-body classes, the Random Forest method has been applied to the NIR bands of LANDSAT-8 (5th band) and GOKTURK-2 (4th band) imageries. Each image has been digitized manually and shorelines obtained for accuracy assessment. According to accuracy assessment results, the Random Forest method is efficient for both medium and high resolution images for shoreline extraction studies.

  15. The Efficiency of Random Forest Method for Shoreline Extraction from LANDSAT-8 and GOKTURK-2 Imageries

    Science.gov (United States)

    Bayram, B.; Erdem, F.; Akpinar, B.; Ince, A. K.; Bozkurt, S.; Catal Reis, H.; Seker, D. Z.

    2017-11-01

    Coastal monitoring plays a vital role in environmental planning and hazard management related issues. Since shorelines are fundamental data for environment management, disaster management, coastal erosion studies, modelling of sediment transport and coastal morphodynamics, various techniques have been developed to extract shorelines. Random Forest is one of these techniques and is used in this study for shoreline extraction. This algorithm is a machine learning method based on decision trees. Decision trees analyse classes of training data and create rules for classification. In this study, the Terkos region has been chosen for the proposed method within the scope of the TUBITAK Project (Project No: 115Y718) titled "Integration of Unmanned Aerial Vehicles for Sustainable Coastal Zone Monitoring Model - Three-Dimensional Automatic Coastline Extraction and Analysis: Istanbul-Terkos Example". The Random Forest algorithm has been implemented to extract the shoreline of the Black Sea near the lake from LANDSAT-8 and GOKTURK-2 satellite imageries taken in 2015. The MATLAB environment was used for classification. To obtain land and water-body classes, the Random Forest method has been applied to the NIR bands of LANDSAT-8 (5th band) and GOKTURK-2 (4th band) imageries. Each image has been digitized manually and shorelines obtained for accuracy assessment. According to accuracy assessment results, the Random Forest method is efficient for both medium and high resolution images for shoreline extraction studies.
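
    For illustration only (the study itself was carried out in MATLAB), the sketch below trains a Random Forest on NIR pixel values with labelled land/water samples and classifies a full band, taking the class boundary as the shoreline; scikit-learn and the synthetic band are assumptions, not the study's imagery.

      # Minimal sketch: land/water classification of a NIR band with a Random Forest.
      # scikit-learn is assumed; the NIR band here is synthetic, not real imagery.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(1)
      nir = np.vstack([rng.normal(0.05, 0.01, (50, 100)),    # water: low NIR reflectance
                       rng.normal(0.30, 0.05, (50, 100))])   # land: higher NIR reflectance

      # Training samples: pixel value -> class (0 = water, 1 = land), as if digitised.
      train_rows = np.r_[0:20, 80:100]
      X_train = nir[train_rows].reshape(-1, 1)
      y_train = np.repeat([0, 1], 20 * 100)

      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
      classified = clf.predict(nir.reshape(-1, 1)).reshape(nir.shape)

      # The shoreline is the boundary between the two classes (row-wise change points).
      shoreline_rows = np.argmax(np.diff(classified, axis=0) != 0, axis=0)
      print("estimated shoreline row per column (first 10):", shoreline_rows[:10])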

  16. Extraction of Roots of Quintics by Division Method

    Science.gov (United States)

    Kulkarni, Raghavendra G.

    2009-01-01

    We describe a method to extract the roots of a reducible quintic over the real field, which makes use of a simple division. A procedure to synthesize such quintics is given and a numerical example is solved to extract the roots of a quintic with the proposed method.

  17. Method of purifying phosphoric acid after solvent extraction

    International Nuclear Information System (INIS)

    Kouloheris, A.P.; Lefever, J.A.

    1979-01-01

    A method of purifying phosphoric acid after solvent extraction is described. The phosphoric acid is contacted with a sorbent which sorbs or takes up the residual amount of organic carrier, and the phosphoric acid is then separated from the organic carrier-laden sorbent. The method is especially suitable for removing residual organic carrier from phosphoric acid after solvent-extraction uranium recovery. (author)

  18. Proposal of the Measurement Method of the Transmission Line Constants by Automatic Oscillograph Utilization

    Science.gov (United States)

    Ooura, Yoshifumi

    The author devised a new method for high-precision measurement of transmission line constants with an automatic oscillograph, and this paper proposes that measurement method. The author made use of the fact that the inherent eigenvector matrices of a transmission line are equal to the eigenvector matrices of its four-terminal constants. The four-terminal constants of the transmission line were calculated from the data (voltage-current records from the automatic oscillograph) of six transmission line system faults, and a method for measuring the transmission line constants from analysis of these four-terminal constants was then devised. Furthermore, the author verified the new method in system fault simulations with an EMTP transmission line system model; the results show that it is a high-accuracy measurement method. From now on, the author will advance the measurement of transmission line constants from actual fault data of the transmission line and its periphery, in cooperation with power system companies.

  19. Automatic Target Recognition in Synthetic Aperture Sonar Images Based on Geometrical Feature Extraction

    Directory of Open Access Journals (Sweden)

    J. Del Rio Vera

    2009-01-01

    Full Text Available This paper presents a new supervised classification approach for automated target recognition (ATR in SAS images. The recognition procedure starts with a novel segmentation stage based on the Hilbert transform. A number of geometrical features are then extracted and used to classify observed objects against a previously compiled database of target and non-target features. The proposed approach has been tested on a set of 1528 simulated images created by the NURC SIGMAS sonar model, achieving up to 95% classification accuracy.

  20. Identifying Basketball Plays from Sensor Data; towards a Low-Cost Automatic Extraction of Advanced Statistics

    DEFF Research Database (Denmark)

    Sangüesa, Adrià Arbués; Moeslund, Thomas B.; Bahnsen, Chris Holmberg

    2017-01-01

    Advanced statistics have proved to be a crucial tool for basketball coaches in order to improve training skills. Indeed, the performance of the team can be further optimized by studying the behaviour of players under certain conditions. In the United States of America, companies such as STATS...... created and meaningful basketball features have been extracted. 97.9% accuracy is obtained using Support Vector Machines when identifying 5 different classic plays: floppy offense, pick and roll, press break, post-up situation and fast breaks. After recognizing these plays in video sequences, advanced...

  1. Automatic contour extraction for multiple objects based on Schroedinger transform of image

    Science.gov (United States)

    Lou, Liantang; Lu, Ling; Li, Liguo; Gao, Wenliang; Li, Lingling; Fu, Zhongliang

    2009-10-01

    Analytical and numerical solutions of the Schroedinger equation satisfied by the propagator P(b, a), which includes the contribution of all paths, are discussed. The Schroedinger transform of an image is first defined. The exterior and interior of objects are obtained from Schroedinger transforms of the original image and its inverse image. Using a brute-force algorithm, the sets of exterior and interior points are thinned. By finding pairs of exterior and interior points with the smallest distance between them, the contours of multiple objects are extracted. Some experiments with simulated and real images are given.

  2. An automatic multigrid method for the solution of sparse linear systems

    Science.gov (United States)

    Shapira, Yair; Israeli, Moshe; Sidi, Avram

    1993-01-01

    An automatic version of the multigrid method for the solution of linear systems arising from the discretization of elliptic PDE's is presented. This version is based on the structure of the algebraic system solely, and does not use the original partial differential operator. Numerical experiments show that for the Poisson equation the rate of convergence of our method is equal to that of classical multigrid methods. Moreover, the method is robust in the sense that its high rate of convergence is conserved for other classes of problems: non-symmetric, hyperbolic (even with closed characteristics) and problems on non-uniform grids. No double discretization or special treatment of sub-domains (e.g. boundaries) is needed. When supplemented with a vector extrapolation method, high rates of convergence are achieved also for anisotropic and discontinuous problems and also for indefinite Helmholtz equations. A new double discretization strategy is proposed for finite and spectral element schemes and is found better than known strategies.
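
    The algebraic, matrix-only construction described above is not reproduced here; purely to illustrate what a single multigrid correction looks like, the sketch below runs a textbook geometric two-grid cycle with weighted-Jacobi smoothing for the 1D Poisson equation. The grid size, sweep counts and damping factor are assumed values.

      # Minimal sketch: two-grid cycles for the 1D Poisson equation -u'' = f.
      # This is a textbook geometric example, not the paper's algebraic method.
      import numpy as np

      def poisson_matrix(n):
          """1D Poisson operator on n interior points of (0, 1), h = 1/(n+1)."""
          h2 = (n + 1) ** 2
          return h2 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

      def weighted_jacobi(A, b, x, sweeps=3, w=2.0 / 3.0):
          d = np.diag(A)
          for _ in range(sweeps):
              x = x + w * (b - A @ x) / d
          return x

      def restrict(r):                      # full-weighting restriction
          return 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

      def prolong(ec, n_fine):              # linear-interpolation prolongation
          e = np.zeros(n_fine)
          e[1::2] = ec
          padded = np.concatenate(([0.0], ec, [0.0]))
          e[0::2] = 0.5 * (padded[:-1] + padded[1:])
          return e

      def two_grid_cycle(A, b, x):
          x = weighted_jacobi(A, b, x)                       # pre-smoothing
          rc = restrict(b - A @ x)                           # coarse residual
          ec = np.linalg.solve(poisson_matrix(rc.size), rc)  # exact coarse solve
          x = x + prolong(ec, b.size)                        # coarse-grid correction
          return weighted_jacobi(A, b, x)                    # post-smoothing

      n = 2 ** 7 - 1
      A, b, x = poisson_matrix(n), np.ones(n), np.zeros(n)
      for k in range(10):
          x = two_grid_cycle(A, b, x)
          print(f"cycle {k + 1}: residual norm = {np.linalg.norm(b - A @ x):.2e}")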

  3. Evaluation of an automatic brain segmentation method developed for neonates on adult MR brain images

    Science.gov (United States)

    Moeskops, Pim; Viergever, Max A.; Benders, Manon J. N. L.; Išgum, Ivana

    2015-03-01

    Automatic brain tissue segmentation is of clinical relevance in images acquired at all ages. The literature presents a clear distinction between methods developed for MR images of infants, and methods developed for images of adults. The aim of this work is to evaluate a method developed for neonatal images in the segmentation of adult images. The evaluated method employs supervised voxel classification in subsequent stages, exploiting spatial and intensity information. Evaluation was performed using images available within the MRBrainS13 challenge. The obtained average Dice coefficients were 85.77% for grey matter, 88.66% for white matter, 81.08% for cerebrospinal fluid, 95.65% for cerebrum, and 96.92% for intracranial cavity, currently resulting in the best overall ranking. The possibility of applying the same method to neonatal as well as adult images can be of great value in cross-sectional studies that include a wide age range.

  4. Automatic segmentation of corpus callosum using Gaussian mixture modeling and Fuzzy C means methods.

    Science.gov (United States)

    İçer, Semra

    2013-10-01

    This paper presents a comparative study of the success and performance of the Gaussian mixture modeling and Fuzzy C means methods in determining the volume and cross-sectional areas of the corpus callosum (CC) using simulated and real MR brain images. The Gaussian mixture model (GMM) utilizes a weighted sum of Gaussian distributions, applying statistical decision procedures to define image classes. In the Fuzzy C means (FCM) method, the image classes are represented by membership functions according to fuzziness information expressing the distance from the cluster centers. In this study, automatic segmentation of the midsagittal section of the CC was achieved from simulated and real brain images. The volume of the CC was obtained using sagittal section areas. To compare the success of the methods, segmentation accuracy, Jaccard similarity and the time required for segmentation were calculated. The results show that the GMM method resulted by a small margin in more accurate segmentation (midsagittal section segmentation accuracy of 98.3% and 97.01% for GMM and FCM, respectively); however, the FCM method resulted in faster segmentation than GMM. With this study, an accurate and automatic segmentation system was developed that offers doctors the opportunity for quantitative comparison in treatment planning and in the diagnosis of diseases affecting the size of the CC. This study can be adapted to perform segmentation of other regions of the brain; thus, it can be put to practical use in the clinic. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
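
    As a generic illustration of intensity-based GMM segmentation (one of the two methods compared, and not the paper's pipeline; the FCM counterpart is omitted), the sketch below clusters the pixel intensities of one synthetic slice into two classes with scikit-learn and reports the cross-sectional area of the brighter class; the library and the data are assumptions.

      # Minimal sketch: two-class intensity segmentation of one slice with a
      # Gaussian mixture model. scikit-learn is assumed; the slice is synthetic.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(2)
      slice_img = rng.normal(50, 10, (64, 64))
      slice_img[20:40, 20:50] = rng.normal(150, 10, (20, 30))   # bright structure

      gmm = GaussianMixture(n_components=2, random_state=0)
      labels = gmm.fit_predict(slice_img.reshape(-1, 1)).reshape(slice_img.shape)

      # Identify the bright-tissue component and report its cross-sectional area.
      bright = int(np.argmax(gmm.means_.ravel()))
      area_pixels = int(np.sum(labels == bright))
      print("segmented cross-sectional area (pixels):", area_pixels)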

  5. AUTOMATIC BUILDING EXTRACTION AND ROOF RECONSTRUCTION IN 3K IMAGERY BASED ON LINE SEGMENTS

    Directory of Open Access Journals (Sweden)

    A. Köhn

    2016-06-01

    Full Text Available We propose an image processing workflow to extract rectangular building footprints using georeferenced stereo-imagery and a derived digital surface model (DSM) product. The approach applies a line segment detection procedure to the imagery and subsequently verifies identified line segments individually to create a footprint on the basis of the DSM. The footprint is further optimized by morphological filtering. Towards the realization of 3D models, we decompose the produced footprint and generate a 3D point cloud from DSM height information. By utilizing the robust RANSAC plane fitting algorithm, the roof structure can be correctly reconstructed. In the experimental part, the proposed approach was applied to 3K aerial imagery.
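
    The workflow code itself is not part of this record; to make the roof-plane step concrete, here is a small RANSAC plane-fitting sketch on a synthetic 3D point cloud, written in plain numpy, with the iteration count and inlier tolerance as assumed values.

      # Minimal sketch: RANSAC fit of a single plane z = a*x + b*y + c to 3D points.
      # Plain numpy; iteration count, tolerance and point cloud are assumptions.
      import numpy as np

      def ransac_plane(points, n_iter=200, tol=0.05, seed=3):
          rng = np.random.default_rng(seed)
          best_inliers, best_params = None, None
          for _ in range(n_iter):
              sample = points[rng.choice(len(points), 3, replace=False)]
              A = np.c_[sample[:, :2], np.ones(3)]
              try:
                  a, b, c = np.linalg.solve(A, sample[:, 2])   # plane through 3 points
              except np.linalg.LinAlgError:
                  continue                                     # degenerate sample
              dist = np.abs(points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c))
              inliers = dist < tol
              if best_inliers is None or inliers.sum() > best_inliers.sum():
                  best_inliers, best_params = inliers, (a, b, c)
          return best_params, best_inliers

      # Synthetic roof points: a tilted plane plus noise and a few outliers.
      rng = np.random.default_rng(4)
      xy = rng.uniform(0, 10, (500, 2))
      z = 0.3 * xy[:, 0] - 0.1 * xy[:, 1] + 5 + rng.normal(0, 0.01, 500)
      z[:20] += rng.uniform(1, 3, 20)                          # outliers
      params, inliers = ransac_plane(np.c_[xy, z])
      print("fitted plane (a, b, c):", np.round(params, 3), "| inliers:", int(inliers.sum()))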

  6. Automatic bone outer contour extraction from B-modes ultrasound images based on local phase symmetry and quadratic polynomial fitting

    Science.gov (United States)

    Karlita, Tita; Yuniarno, Eko Mulyanto; Purnama, I. Ketut Eddy; Purnomo, Mauridhi Hery

    2017-06-01

    Analyzing ultrasound (US) images to get the shapes and structures of particular anatomical regions is an interesting field of study, since US imaging is a non-invasive method to capture internal structures of a human body. However, bone segmentation in US images is still challenging because it is strongly influenced by speckle noise and the images have poor quality. This paper proposes a combination of local phase symmetry and quadratic polynomial fitting methods to extract the bone outer contour (BOC) from two-dimensional (2D) B-mode US images as an initial step of three-dimensional (3D) bone surface reconstruction. Using local phase symmetry, the bone is initially extracted from the US images. The BOC is then extracted by scanning one pixel on the bone boundary in each column of the US image using a first phase feature searching method. Quadratic polynomial fitting is utilized to refine and estimate the pixel locations that fail to be detected during the extraction process. A hole-filling method is then applied, utilizing the polynomial coefficients to fill the gaps with new pixels. The proposed method is able to estimate the new pixel positions and ensures smoothness and continuity of the contour path. Evaluations are done using cow and goat bones by comparing the resulting BOCs with contours produced by manual segmentation and contours produced by Canny edge detection. The evaluation shows that our proposed method produces excellent results, with an average MSE before and after hole filling of 0.65.
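
    To show the hole-filling idea named in the abstract (a quadratic polynomial fitted to detected contour pixels and evaluated where detection failed), here is a small numpy sketch; the per-column contour data are synthetic assumptions, not ultrasound measurements.

      # Minimal sketch: fill gaps in a per-column bone contour with a quadratic fit.
      # Plain numpy; the detected contour values here are synthetic assumptions.
      import numpy as np

      cols = np.arange(100)
      true_contour = 0.02 * (cols - 50) ** 2 + 40          # roughly parabolic bone surface
      detected = true_contour + np.random.default_rng(5).normal(0, 0.5, 100)
      detected[30:40] = np.nan                             # columns where detection failed

      valid = ~np.isnan(detected)
      coeffs = np.polyfit(cols[valid], detected[valid], deg=2)    # quadratic fit
      filled = detected.copy()
      filled[~valid] = np.polyval(coeffs, cols[~valid])           # estimate missing pixels

      print("filled contour rows for columns 30-39:", np.round(filled[30:40], 1))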

  7. Diode parameter extraction by a linear cofactor difference operation method

    Energy Technology Data Exchange (ETDEWEB)

    Ma Chenyue; He Jin; Lin Xinnan [Shenzhen SOC Key Laboratory of Peking University, PKU-HKUST Shenzhen Institute, Hi-Tech Industrial Park South, Shenzhen 518057 (China); Zhang Chenfei; Wang Hao [Key Laboratory of Integrated Microsystems, School of Computer and Information Engineering, Peking University, Shenzhen Graduate School, Shenzhen 518055 (China); Mansun Chan, E-mail: xnlin@szpku.edu.cn, E-mail: hejin@szpku.edu.cn [Department of Electronic and Computer Engineering, Hong Kong University of Science and Technology, Kowloon (Hong Kong)

    2010-11-15

    The linear cofactor difference operator (LCDO) method, a direct parameter extraction method for general diodes, is presented. With the developed LCDO method, the extreme spectral characteristic of the diode voltage-current curves is revealed, and its extreme positions are related to the diode characteristic parameters directly. The method is applied to diodes with different sizes and temperatures, and the related characteristic parameters, such as reverse saturation current, series resistance and non-ideality factor, are extracted directly. The extraction result shows good agreement with the experimental data.

  8. On Young’s modulus profile across anisotropic nonhomogeneous polymeric fibre using automatic transverse interferometric method

    Science.gov (United States)

    Sokkar, T. Z. N.; Shams El-Din, M. A.; El-Tawargy, A. S.

    2012-09-01

    This paper provides the Young's modulus profile across an anisotropic nonhomogeneous polymeric fibre using an accurate transverse interferometric method. A mathematical model based on optical and tensile concepts is presented to calculate the mechanical parameter profiles of fibres. The proposed model, with the aid of a Mach-Zehnder interferometer combined with an automated drawing device, is used to determine the Young's modulus profiles of three drawn polypropylene (PP) fibres (virgin, recycled and virgin/recycled 50/50). The obtained microinterferograms are analyzed automatically using a fringe processor programme to determine the phase distribution.

  9. Comparison of DNA extraction methods for meat analysis.

    Science.gov (United States)

    Yalçınkaya, Burhanettin; Yumbul, Eylem; Mozioğlu, Erkan; Akgoz, Muslum

    2017-04-15

    Preventing adulteration of meat and meat products with less desirable or objectionable meat species is important not only for economic, religious and health reasons but also for fair trade practices; therefore, several methods for identification of meat and meat products have been developed. In the present study, ten different DNA extraction methods, including the Tris-EDTA Method, a modified Cetyltrimethylammonium Bromide (CTAB) Method, Alkaline Method, Urea Method, Salt Method, Guanidinium Isothiocyanate (GuSCN) Method, Wizard Method, Qiagen Method, Zymogen Method and Genespin Method, were examined to determine their relative effectiveness for extracting DNA from meat samples. The results show that the Salt Method is easy to perform, inexpensive and environmentally friendly. Additionally, it has the highest yield among all the isolation methods tested. We suggest this method as an alternative method for DNA isolation from meat and meat products. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Effects of Different Extraction Methods and Conditions on the Phenolic Composition of Mate Tea Extracts

    Directory of Open Access Journals (Sweden)

    Jelena Vladic

    2012-03-01

    Full Text Available A simple and rapid HPLC method for determination of chlorogenic acid (5-O-caffeoylquinic acid) in mate tea extracts was developed and validated. The chromatography used isocratic elution with a mobile phase of aqueous 1.5% acetic acid-methanol (85:15, v/v). The flow rate was 0.8 mL/min and detection was by UV at 325 nm. The method showed good selectivity, accuracy, repeatability and robustness, with a detection limit of 0.26 mg/L and recovery of 97.76%. The developed method was applied for the determination of chlorogenic acid in mate tea extracts obtained by ethanol extraction and liquid carbon dioxide extraction with ethanol as co-solvent. Different ethanol concentrations were used (40, 50 and 60%, v/v), and liquid CO2 extraction was performed at different pressures (50 and 100 bar) and constant temperature (27 ± 1 °C). A significant influence of extraction methods, conditions and solvent polarity on chlorogenic acid content, antioxidant activity and total phenolic and flavonoid content of mate tea extracts was established. The most efficient extraction solvent was liquid CO2 with aqueous ethanol (40%) as co-solvent, using an extraction pressure of 100 bar.

  11. Extraction of lipids from microalgae by ultrasound application: prospection of the optimal extraction method.

    Science.gov (United States)

    Araujo, Glacio S; Matos, Leonardo J B L; Fernandes, Jader O; Cartaxo, Samuel J M; Gonçalves, Luciana R B; Fernandes, Fabiano A N; Farias, Wladimir R L

    2013-01-01

    Microalgae have the ability to grow rapidly, and to synthesize and accumulate large amounts (approximately 20-50% of dry weight) of lipids. A successful and economically viable algae-based oil industry will depend on the selection of appropriate microalgal strains and the selection of the most suitable lipid extraction method. In this paper, five extraction methods were evaluated regarding the extraction of lipids from Chlorella vulgaris: Bligh and Dyer, Chen, Folch, Hara and Radin, and Soxhlet. Furthermore, the addition of silica powder was studied to evaluate the introduction of more shear stress to the system so as to increase the disruption of cell walls. Among the studied methods, the Bligh and Dyer method assisted by ultrasound resulted in the highest extraction of oil from C. vulgaris (52.5% w/w). Addition of silica powder did not improve the extraction of oil. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Impact of different extraction methods on the quality of Dipteryx alata extracts

    Directory of Open Access Journals (Sweden)

    Frederico S. Martins

    2013-05-01

    Full Text Available This study aimed to assess the impact of different extraction methods on the quality of extracts from fruits of Dipteryx alata Vogel, Fabaceae. The major compounds found were lipids, 38.9% (w/w), and proteins, 26.20% (w/w). The residual moisture was 7.20% (w/w), total fiber 14.50% (w/w), minerals 4.10% (w/w) and carbohydrate 9.10% (w/w). The species studied has great potential for producing oil, but the content and type of fatty acids obtained depend on the method of extraction. The Bligh & Dyer method was more selective for unsaturated fatty acids and the Soxhlet method was more selective for saturated fatty acids. The tannin extraction by ultrasound (33.70% w/w) was 13.90% more efficient than extraction by decoction (29% w/w).

  13. Adaptive and automatic red blood cell counting method based on microscopic hyperspectral imaging technology

    Science.gov (United States)

    Liu, Xi; Zhou, Mei; Qiu, Song; Sun, Li; Liu, Hongying; Li, Qingli; Wang, Yiting

    2017-12-01

    Red blood cell counting, as a routine examination, plays an important role in medical diagnoses. Although automated hematology analyzers are widely used, manual microscopic examination by a hematologist or pathologist is still unavoidable, which is time-consuming and error-prone. This paper proposes a full-automatic red blood cell counting method which is based on microscopic hyperspectral imaging of blood smears and combines spatial and spectral information to achieve high precision. The acquired hyperspectral image data of the blood smear in the visible and near-infrared spectral range are firstly preprocessed, and then a quadratic blind linear unmixing algorithm is used to get endmember abundance images. Based on mathematical morphological operation and an adaptive Otsu’s method, a binaryzation process is performed on the abundance images. Finally, the connected component labeling algorithm with magnification-based parameter setting is applied to automatically select the binary images of red blood cell cytoplasm. Experimental results show that the proposed method can perform well and has potential for clinical applications.
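
    The hyperspectral unmixing stage is beyond a short example, but the later steps named in the abstract (Otsu binarization, morphological clean-up, connected-component labelling with a size filter) can be sketched with scikit-image, which is an assumption here; the abundance image below is a synthetic stand-in.

      # Minimal sketch: Otsu threshold + morphology + connected-component counting.
      # scikit-image is assumed; the "abundance image" is a synthetic stand-in.
      import numpy as np
      from skimage.filters import threshold_otsu
      from skimage.morphology import binary_opening, disk
      from skimage.measure import label, regionprops

      rng = np.random.default_rng(6)
      img = rng.normal(0.1, 0.02, (128, 128))
      yy, xx = np.mgrid[0:128, 0:128]
      for cy, cx in [(30, 30), (30, 90), (90, 40), (100, 100)]:   # four "cells"
          img[(yy - cy) ** 2 + (xx - cx) ** 2 < 8 ** 2] = 0.8

      binary = img > threshold_otsu(img)                 # adaptive global threshold
      binary = binary_opening(binary, disk(2))           # remove small speckle
      labelled = label(binary)
      cells = [r for r in regionprops(labelled) if r.area > 50]   # size filter
      print("red blood cell count:", len(cells))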

  14. A semi-automatic semantic method for mapping SNOMED CT concepts to VCM Icons.

    Science.gov (United States)

    Lamy, Jean-Baptiste; Tsopra, Rosy; Venot, Alain; Duclos, Catherine

    2013-01-01

    VCM (Visualization of Concept in Medicine) is an iconic language for representing key medical concepts by icons. However, the use of this language with reference terminologies, such as SNOMED CT, will require the mapping of its icons to the terms of these terminologies. Here, we present and evaluate a semi-automatic semantic method for the mapping of SNOMED CT concepts to VCM icons. Both SNOMED CT and VCM are compositional in nature; SNOMED CT is expressed in description logic and VCM semantics are formalized in an OWL ontology. The proposed method involves the manual mapping of a limited number of underlying concepts from the VCM ontology, followed by automatic generation of the rest of the mapping. We applied this method to the clinical findings of the SNOMED CT CORE subset, and 100 randomly-selected mappings were evaluated by three experts. The results obtained were promising, with 82 of the SNOMED CT concepts correctly linked to VCM icons according to the experts. Most of the errors were easy to fix.

  15. Automatic Tie-point Extraction Based on Multiple-image Matching and Bundle Adjustment of Large Block of Oblique Aerial Images

    Directory of Open Access Journals (Sweden)

    ZHANG Li

    2017-05-01

    Full Text Available Due to advantages such as ease of interpretation, completeness through mitigation of occluded areas, as well as system accessibility, aerial oblique images have found their place in numerous civil applications. However, for these applications high quality orientation data are essential. A fully automatic tie-point extraction procedure is developed to precisely orient large blocks of oblique aerial images, in which a refined ASIFT algorithm and a window-based multiple-viewing image matching (WMVM) method are combined. In this approach, the WMVM method is based on the concept of multi-image matching guided from object space and allows reconstruction of 3D objects by matching all available images simultaneously; a square correlation window in the reference image can be correlated with windows of different size, shape and orientation in the search images. Another key algorithm, the combined bundle adjustment method with gross-error detection and removal, which can be used to simultaneously orient the oblique and nearly-vertical images, is then presented. Finally, through experiments using real oblique images over several test areas, the performance and accuracy of the proposed method are studied and presented.

  16. New extraction method for the analysis of linear alkylbenzene sulfonates in marine organisms. Pressurized liquid extraction versus Soxhlet extraction.

    Science.gov (United States)

    Alvarez-Muñoz, D; Sáez, M; Lara-Martin, P A; Gómez-Parra, A; González-Mazo, E

    2004-10-15

    A new method has been developed for the determination of linear alkylbenzene sulfonates (LAS) from various marine organisms and compared with Soxhlet extraction. The technique applied includes the use of pressurized liquid extraction (PLE) for the extraction stage, preconcentration of the samples, purification by solid-phase extraction (SPE) and analysis by liquid chromatography with fluorescence detection. The spiked concentrations were added to the samples (wet mass of the organisms: Solea senegalensis and Ruditapes semidecussatus), which were homogenized and agitated continuously for 25 h. The samples were extracted by pressurized hot solvent extraction using two different extraction temperatures (100 and 150 degrees C) and by traditional Soxhlet extraction. The best recoveries were obtained employing pressurized hot solvent extraction at 100 degrees C and varied in the range from 66.1 to 101.3%, with a standard deviation of between 2 and 13. The detection limit was between 5 and 15 microg kg(-1) wet mass using HPLC-fluorescence detection. The analytical method developed in this paper has been applied for LAS determination in samples from a flow-through exposure system with the objective of measuring the bioconcentration of this surfactant.

  17. Predictability of monthly temperature and precipitation using automatic time series forecasting methods

    Science.gov (United States)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2018-02-01

    We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
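
    As a small illustration of one of the simpler methods listed (seasonal exponential smoothing evaluated on a 48-month hold-out), the sketch below uses statsmodels on a synthetic monthly temperature series; it is not the study's benchmark code, and the series is invented.

      # Minimal sketch: 48-step-ahead forecast of a monthly series with Holt-Winters
      # exponential smoothing. statsmodels is assumed; the series is synthetic.
      import numpy as np
      import pandas as pd
      from statsmodels.tsa.holtwinters import ExponentialSmoothing

      rng = np.random.default_rng(7)
      months = pd.date_range("1978-01-01", periods=480, freq="MS")     # 40 years
      temps = 15 + 10 * np.sin(2 * np.pi * np.arange(480) / 12) + rng.normal(0, 1, 480)
      series = pd.Series(temps, index=months)

      train, test = series[:-48], series[-48:]                         # last 48 months held out
      model = ExponentialSmoothing(train, trend=None, seasonal="add",
                                   seasonal_periods=12).fit()
      forecast = model.forecast(48)
      rmse = np.sqrt(np.mean((forecast.values - test.values) ** 2))
      print(f"RMSE over the 48-month hold-out: {rmse:.2f}")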

  18. Rational automatic search method for stable docking models of protein and ligand.

    Science.gov (United States)

    Mizutani, M Y; Tomioka, N; Itai, A

    1994-10-21

    An efficient automatic method has been developed for docking a ligand molecule to a protein molecule. The method can construct energetically favorable docking models, considering specific interactions between the two molecules and conformational flexibility in the ligand. In the first stage of docking, likely binding modes are searched and estimated effectively in terms of hydrogen bonds, together with conformations in part of the ligand structure that includes hydrogen bonding groups. After that part is placed in the protein cavity and is optimized, conformations in the remaining part are also examined systematically. Finally, several stable docking models are obtained after optimization of the position, orientation and conformation of the whole ligand molecule. In all the screening processes, the total potential energy including intra- and intermolecular interaction energy, consisting of van der Waals, electrostatic and hydrogen bonding energies, is used as the index. The characteristics of our docking method are high accuracy of the results, fully automatic generation of models and short computational time. The efficiency of the method was confirmed by four docking trials using two enzyme systems. In two attempts to dock methotrexate to dihydrofolate reductase and 2'-GMP to ribonuclease T1, the exact structures of complexes in crystals were reproduced as the most stable docking models, without any assumptions concerning the binding modes and ligand conformations. The most stable docking models of dihydrofolate and trimethoprim, respectively, to dihydrofolate reductase were also in good agreement with those suggested by experiment. In all test cases, it was shown that our method can accurately predict the correct docking structures, discriminating the correct model from incorrect ones. The efficiency of our method was further tested from the viewpoint of ability to predict the relative stability of the docking structures of two triazine derivatives to

  19. Development of an extraction method for perchlorate in soils.

    Science.gov (United States)

    Cañas, Jaclyn E; Patel, Rashila; Tian, Kang; Anderson, Todd A

    2006-03-01

    Perchlorate originates as a contaminant in the environment from its use in solid rocket fuels and munitions. The current US EPA methods for perchlorate determination via ion chromatography using conductivity detection do not include recommendations for the extraction of perchlorate from soil. This study evaluated and identified appropriate conditions for the extraction of perchlorate from clay loam, loamy sand, and sandy soils. Based on the results of this evaluation, soils should be extracted in a dry, ground (mortar and pestle) state with Milli-Q water in a 1:1 soil-to-water ratio and diluted no more than 5-fold before analysis. When sandy soils were extracted in this manner, the calculated method detection limit was 3.5 microg kg(-1). The findings of this study have aided in the establishment of a standardized extraction method for perchlorate in soil.

  20. Automatic method of analysis and measurement of additional parameters of corneal deformation in the Corvis tonometer.

    Science.gov (United States)

    Koprowski, Robert

    2014-11-19

    The method for measuring intraocular pressure using the Corvis tonometer provides a sequence of images of corneal deformation. Deformations of the cornea are recorded using an ultra-high-speed Scheimpflug camera. This paper presents a new and reproducible method of analysis of corneal deformation images that allows for automatic measurement of new features, namely three new parameters unavailable in the original software. The images subjected to processing had a resolution of 200 × 576 × 140 pixels. They were acquired from the Corvis tonometer and from simulation. In total 14,000 2D images were analysed. The image analysis method proposed by the author automatically detects the edge of the cornea and sclera fragments. For this purpose, new methods of image analysis and processing proposed by the author as well as well-known ones, such as the Canny filter, binarization, median filtering etc., have been used. The presented algorithms were implemented in Matlab (version 7.11.0.584-R2010b) with the Image Processing toolbox (version 7.1-R2010b), using both known algorithms for image analysis and processing and those proposed by the author. Owing to the proposed algorithm it is possible to determine three parameters: (1) the degree of the corneal reaction relative to the static position; (2) the corneal length changes; (3) the ratio of amplitude changes to the corneal deformation length. The corneal reaction is smaller by about 30.40% compared to its static position. The change in the corneal length during deformation is very small, approximately 1% of its original length. Parameter (3) enables determination of the applanation points with a correlation of 92% compared to the conventional method for calculating corneal flattening areas. The proposed algorithm provides reproducible results fully automatically within a few seconds per patient using a Core i7 processor. Using the proposed algorithm, it is possible to measure new, additional parameters of corneal deformation, which

  1. The Impact of the Implementation of Edge Detection Methods on the Accuracy of Automatic Voltage Reading

    Science.gov (United States)

    Sidor, Kamil; Szlachta, Anna

    2017-04-01

    The article presents the impact of the edge detection method in the image analysis on the reading accuracy of the measured value. In order to ensure the automatic reading of the measured value by an analog meter, a standard webcam and the LabVIEW programme were applied. NI Vision Development tools were used. The Hough transform was used to detect the indicator. The programme output was compared during the application of several methods of edge detection. Those included: the Prewitt operator, the Roberts cross, the Sobel operator and the Canny edge detector. The image analysis was made for an analog meter indicator with the above-mentioned methods, and the results of that analysis were compared with each other and presented.
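
    As a rough illustration of the comparison described above, the sketch below computes edge maps with the four detectors named in the abstract using OpenCV in Python; it is not the article's LabVIEW/NI Vision pipeline, and the Canny thresholds and the Hough parameters in the closing comment are placeholder values.

```python
import cv2
import numpy as np

def edge_maps(gray):
    """Edge-magnitude images for four common detectors on an 8-bit grayscale frame."""
    prewitt_x = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=np.float32)
    roberts_x = np.array([[1, 0], [0, -1]], dtype=np.float32)
    roberts_y = np.array([[0, 1], [-1, 0]], dtype=np.float32)

    sobel = cv2.magnitude(cv2.Sobel(gray, cv2.CV_32F, 1, 0),
                          cv2.Sobel(gray, cv2.CV_32F, 0, 1))
    prewitt = cv2.magnitude(cv2.filter2D(gray, cv2.CV_32F, prewitt_x),
                            cv2.filter2D(gray, cv2.CV_32F, prewitt_x.T))
    roberts = cv2.magnitude(cv2.filter2D(gray, cv2.CV_32F, roberts_x),
                            cv2.filter2D(gray, cv2.CV_32F, roberts_y))
    canny = cv2.Canny(gray, 50, 150).astype(np.float32)
    return {"sobel": sobel, "prewitt": prewitt, "roberts": roberts, "canny": canny}

# The indicator (needle) could then be located in each edge map with, for example,
# cv2.HoughLines(edge_map.astype(np.uint8), 1, np.pi / 180, threshold=80)
```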

  2. Online measurement method for pulse amplitude in pulsed extraction columns

    International Nuclear Information System (INIS)

    Wang Xinghai; Li Shichang; Chen Jing

    2009-01-01

    Online measurement of pulse amplitude by air purge was studied. The pulse amplitude in a pulsed extraction column was calculated online by measurement of characteristic parameters of the signal's curve. The method can be used for calculation of different pulsed extraction columns. (authors)

  3. Comparison of protein extraction methods suitable for proteomics ...

    African Journals Online (AJOL)

    Jane

    2011-07-27

    An efficient protein extraction method is a prerequisite for successful implementation of proteomics. In this study, seedling roots of Jerusalem artichoke were treated with 250 mM NaCl for 36 h. Subsequently, six different protocols of protein extraction were applied to seedling roots.

  4. Effect of mucin extraction method on some properties of ...

    African Journals Online (AJOL)

    To evaluate the effects of mucin extraction method and plasticizer concentration on the bioadhesive strength and metronidazole release profile from mucin-based mucoadhesive patches. Mucin was extracted from the giant African snail Archachatina marginata by differential precipitation with acetone and alum. Various ...

  5. Comparative study of methods for extraction and purification of ...

    African Journals Online (AJOL)

    DNA extraction from wastewater sludge (COD 50000 and BOD 25000 mg/l) was conducted using nine different methods normally used for environmental samples including a procedure used in this study and the results obtained were compared. The quality of the differently extracted DNAs was subsequently assessed by ...

  6. An efficient method for DNA extraction from Cladosporioid fungi

    NARCIS (Netherlands)

    Moslem, M.A.; Bahkali, A.H.; Abd-Elsalam, K.A.; Wit, de P.J.G.M.

    2010-01-01

    We developed an efficient method for DNA extraction from Cladosporioid fungi, which are important fungal plant pathogens. The cell wall of Cladosporioid fungi is often melanized, which makes it difficult to extract DNA from their cells. In order to overcome this we grew these fungi for three days on

  7. Effects of Extraction Method on the Physicochemical and ...

    African Journals Online (AJOL)

    The effects of improved method of extraction on the physicochemical, mycological and stability of crude Canarium Schweinfurthii fruit oil were studied. The extracted oils were then stored at 25±5oC for 24 months with samples analyzed at 6months interval for; pH, saponification value, acid value, peroxide value and iodine ...

  8. A Circular Statistical Method for Extracting Rotation Measures

    Indian Academy of Sciences (India)

    where RM is the Rotation Measure and θ0 is the intrinsic position angle of polarization (IPA). The extraction of RM is ambiguous since the observed polarization is defined only up to additions of nπ, where n is an integer. In the present paper we propose an alternate method for the extraction of RM and IPA from data.

  9. Comparison of protein extraction methods suitable for proteomics ...

    African Journals Online (AJOL)

    An efficient protein extraction method is a prerequisite for successful implementation of proteomics. In this study, seedling roots of Jerusalem artichoke were treated with the concentration of 250 mM NaCl for 36 h. Subsequently, six different protocols of protein extraction were applied to seedling roots of Jerusalem artichoke ...

  10. An automatic method for the determination of saturation curve and metastable zone width of lysine monohydrochloride

    Science.gov (United States)

    Rabesiaka, Mihasina; Porte, Catherine; Bonnin-Paris, Johanne; Havet, Jean-Louis

    2011-10-01

    An essential tool in the study of crystallization is the saturation curve and metastable zone width, since the shape of the solubility curve defines the crystallization mode and the supersaturation conditions, which are the driving force of crystallization. The purpose of this work was to determine saturation and supersaturation curves of lysine monohydrochloride by an automatic method based on the turbidity of the crystallization medium. Because the lysine solution is colored, turbidimetry is particularly well suited to this system. An automated installation and the procedure to determine several points on the saturation curve and metastable zone width were set up in the laboratory. On-line follow-up of the solution turbidity and temperature enabled the dissolution and nucleation temperatures of the crystals to be determined by measuring attenuation of the light beam by suspended particles. The thermal regulation system was programmed so that the heating rate took into account the system inertia, i.e. the time related to the dissolution rate of the compound. Using this automatic method, the saturation curve and the metastable zone width of lysine monohydrochloride were plotted.
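
    The sketch below illustrates the turbidimetric principle only: during slow heating the dissolution temperature is taken as the point where turbidity falls to near its baseline, and during controlled cooling the nucleation temperature is taken as the point where turbidity rises sharply. The threshold fractions are assumptions, and the authors' programmed thermal regulation is not reproduced; the metastable zone width at a given composition is then the gap between the two temperatures.

```python
import numpy as np

def dissolution_and_nucleation_temps(t_heat, turb_heat, t_cool, turb_cool,
                                     clear_frac=0.05, cloud_frac=0.10):
    """Estimate dissolution/nucleation temperatures from turbidity logs.

    t_heat/turb_heat : temperature and turbidity recorded during slow heating
    t_cool/turb_cool : the same recorded during controlled cooling
    clear_frac, cloud_frac : assumed thresholds (fractions of the turbidity span)
    A schematic reading of the turbidimetric principle, not the authors' controller.
    """
    span_h = turb_heat.max() - turb_heat.min()
    clear = turb_heat <= turb_heat.min() + clear_frac * span_h
    t_dissolution = t_heat[np.argmax(clear)]        # first clear point on heating

    span_c = turb_cool.max() - turb_cool.min()
    cloudy = turb_cool >= turb_cool.min() + cloud_frac * span_c
    t_nucleation = t_cool[np.argmax(cloudy)]        # first cloudy point on cooling
    return t_dissolution, t_nucleation
```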

  11. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories

    Directory of Open Access Journals (Sweden)

    Wei Yang

    2018-04-01

    Full Text Available Crowdsourced trajectory data is an important approach for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively so that there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors from the area of each Voronoi cell and the length of each triangle edge. The road boundary detection model is then established by integrating the boundary descriptors and trajectory movement features (e.g., direction) through the DT. Third, the boundary detection model is used to detect the road boundary from the DT constructed by the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting the road boundary from low-frequency GPS traces, multi-type road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information was shown to be of higher quality.
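
    A minimal sketch of the geometric descriptors mentioned above (Voronoi cell area and Delaunay edge length per tracking point) using SciPy is given below; the full detection model, the direction features and the seed-polygon region growing of the paper are not reproduced.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi, ConvexHull

def boundary_descriptors(points):
    """Per-point descriptors from GPS tracking points (N, 2), e.g. in metres.

    Returns the area of each finite Voronoi cell and the longest Delaunay edge
    incident to each point; large values suggest the point lies near a road
    boundary. A SciPy sketch of the idea, not the paper's full detection model.
    """
    vor = Voronoi(points)
    cell_area = np.full(len(points), np.inf)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if region and -1 not in region:                          # finite cells only
            cell_area[i] = ConvexHull(vor.vertices[region]).volume  # 2-D hull "volume" is the area

    tri = Delaunay(points)
    longest_edge = np.zeros(len(points))
    for simplex in tri.simplices:
        for a, b in [(0, 1), (1, 2), (0, 2)]:
            d = np.linalg.norm(points[simplex[a]] - points[simplex[b]])
            longest_edge[simplex[a]] = max(longest_edge[simplex[a]], d)
            longest_edge[simplex[b]] = max(longest_edge[simplex[b]], d)
    return cell_area, longest_edge
```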

  12. On-line dynamic fractionation and automatic determination of inorganic phosphorous in environmental solid substrates exploiting sequential injection microcolumn extraction and flow injection analysis

    DEFF Research Database (Denmark)

    Buanuam, Janya; Miró, Manuel; Hansen, Elo Harald

    2006-01-01

    associations for phosphorus, that is, exchangeable, Al- and Fe-bound and Ca-bound fractions, were elucidated by accommodation in the flow manifold of the 3 steps of the Hietjles-Litjkema (HL) scheme involving the use of 1.0 M NH4Cl, 0.1 M NaOH and 0.5 M HCl, respectively, as sequential leaching reagents....... The precise timing and versatility of SI for tailoring various operational extraction modes were utilised for investigating the extractability and extent of phosphorous re-distribution for variable partitioning times. Automatic spectrophotometric determination of soluble reactive phosphorous in soil extracts...

  13. A comparison of DNA extraction methods using Petunia hybrida tissues.

    Science.gov (United States)

    Tamari, Farshad; Hinkley, Craig S; Ramprashad, Naderia

    2013-09-01

    Extraction of DNA from plant tissue is often problematic, as many plants contain high levels of secondary metabolites that can interfere with downstream applications, such as the PCR. Removal of these secondary metabolites usually requires further purification of the DNA using organic solvents or other toxic substances. In this study, we have compared two methods of DNA purification: the cetyltrimethylammonium bromide (CTAB) method that uses the ionic detergent hexadecyltrimethylammonium bromide and chloroform-isoamyl alcohol and the Edwards method that uses the anionic detergent SDS and isopropyl alcohol. Our results show that the Edwards method works better than the CTAB method for extracting DNA from tissues of Petunia hybrida. For six of the eight tissues, the Edwards method yielded more DNA than the CTAB method. In four of the tissues, this difference was statistically significant, and the Edwards method yielded 27-80% more DNA than the CTAB method. Among the different tissues tested, we found that buds, 4 days before anthesis, had the highest DNA concentrations and that buds and reproductive tissue, in general, yielded higher DNA concentrations than other tissues. In addition, DNA extracted using the Edwards method was more consistently PCR-amplified than that of CTAB-extracted DNA. Based on these results, we recommend using the Edwards method to extract DNA from plant tissues and to use buds and reproductive structures for highest DNA yields.

  14. A novel automatic method for monitoring Tourette motor tics through a wearable device.

    Science.gov (United States)

    Bernabei, Michel; Preatoni, Ezio; Mendez, Martin; Piccini, Luca; Porta, Mauro; Andreoni, Giuseppe

    2010-09-15

    The aim of this study was to propose a novel automatic method for quantifying motor-tics caused by the Tourette Syndrome (TS). In this preliminary report, the feasibility of the monitoring process was tested over a series of standard clinical trials in a population of 12 subjects affected by TS. A wearable instrument with an embedded three-axial accelerometer was used to detect and classify motor tics during standing and walking activities. An algorithm was devised to analyze acceleration data by: eliminating noise; detecting peaks connected to pathological events; and classifying intensity and frequency of motor tics into quantitative scores. These indexes were compared with the video-based ones provided by expert clinicians, which were taken as the gold-standard. Sensitivity, specificity, and accuracy of tic detection were estimated, and an agreement analysis was performed through the least square regression and the Bland-Altman test. The tic recognition algorithm showed sensitivity = 80.8% ± 8.5% (mean ± SD), specificity = 75.8% ± 17.3%, and accuracy = 80.5% ± 12.2%. The agreement study showed that automatic detection tended to overestimate the number of tics that occurred, although this appeared to be a systematic error due to the different recognition principles of the wearable and video-based systems. Furthermore, there was substantial concurrency with the gold-standard in estimating the severity indexes. The proposed methodology gave promising performances in terms of automatic motor-tics detection and classification in a standard clinical context. The system may provide physicians with a quantitative aid for TS assessment. Further developments will focus on the extension of its application to everyday long-term monitoring out of clinical environments. © 2010 Movement Disorder Society.
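
    A minimal sketch of the detection chain summarized above (denoise the acceleration magnitude, detect peaks above a threshold, report event times and intensities) is given below; the filter settings and the amplitude threshold are illustrative assumptions, not the study's algorithm parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_tic_events(acc, fs=100.0, cutoff=20.0, thresh_sd=3.0, min_gap_s=0.25):
    """Flag candidate motor-tic events in a tri-axial accelerometer record.

    acc : (N, 3) acceleration samples; fs : sampling rate in Hz.
    Steps mirror the abstract (denoise, detect peaks, score events), but the
    low-pass cutoff and the 3-SD amplitude threshold are illustrative choices.
    """
    mag = np.linalg.norm(acc, axis=1)                 # magnitude of acceleration
    b, a = butter(4, cutoff / (fs / 2), btype="low")  # simple low-pass denoising
    smooth = filtfilt(b, a, mag)
    baseline, spread = np.median(smooth), smooth.std()
    peaks, props = find_peaks(smooth,
                              height=baseline + thresh_sd * spread,
                              distance=int(min_gap_s * fs))
    return peaks / fs, props["peak_heights"]          # event times (s) and intensities
```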

  15. Imaging different components of a tectonic tremor sequence in southwestern Japan using an automatic statistical detection and location method

    Science.gov (United States)

    Poiata, Natalia; Vilotte, Jean-Pierre; Bernard, Pascal; Satriano, Claudio; Obara, Kazushige

    2018-02-01

    In this study, we demonstrate the capability of an automatic network-based detection and location method to extract and analyse different components of tectonic tremor activity by analysing a 9-day energetic tectonic tremor sequence occurring at the down-dip extension of the subducting slab in southwestern Japan. The applied method exploits the coherency of multi-scale, frequency-selective characteristics of non-stationary signals recorded across the seismic network. The use of different characteristic functions in the signal processing step of the method makes it possible to extract and locate the sources of short-duration impulsive signal transients associated with low-frequency earthquakes and of longer-duration energy transients during the tectonic tremor sequence. Frequency-dependent characteristic functions, based on higher-order statistics' properties of the seismic signals, are used for the detection and location of low-frequency earthquakes. This yields a more complete (~6.5 times more events) and time-resolved catalogue of low-frequency earthquakes than the routine catalogue provided by the Japan Meteorological Agency. As such, this catalogue allows resolving the space-time evolution of low-frequency earthquake activity in great detail, unravelling spatial and temporal clustering, modulation in response to tide, and different scales of space-time migration patterns. In the second part of the study, the detection and source location of longer-duration signal energy transients within the tectonic tremor sequence is performed using characteristic functions built from smoothed frequency-dependent energy envelopes. This leads to a catalogue of longer-duration energy sources during the tectonic tremor sequence, characterized by their durations and 3-D spatial likelihood maps of the energy-release source regions. The summary 3-D likelihood map for the 9-day tectonic tremor sequence, built from this catalogue, exhibits an along-strike spatial segmentation of
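
    The sketch below shows one common form of a higher-order-statistics characteristic function, a sliding-window kurtosis of a band-passed trace, which highlights impulsive arrivals such as low-frequency earthquakes; the frequency band and window length are assumptions, not the values used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.stats import kurtosis

def kurtosis_cf(trace, fs, band=(2.0, 8.0), win_s=3.0):
    """Higher-order-statistics characteristic function for one seismic trace.

    Band-pass the trace, then compute kurtosis in a sliding window; impulsive
    low-frequency-earthquake arrivals stand out as kurtosis peaks. Band and
    window length are illustrative, not the values used in the study.
    """
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, trace)
    win = int(win_s * fs)
    cf = np.zeros(len(x))
    for i in range(win, len(x)):
        cf[i] = kurtosis(x[i - win:i])                # excess kurtosis of the window
    return cf
```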

  16. A Learning-Based Wrapper Method to Correct Systematic Errors in Automatic Image Segmentation: Consistently Improved Performance in Hippocampus, Cortex and Brain Segmentation

    Science.gov (United States)

    Wang, Hongzhi; Das, Sandhitsu R.; Suh, Jung Wook; Altinay, Murat; Pluta, John; Craige, Caryne; Avants, Brian; Yushkevich, Paul A.

    2011-01-01

    We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter
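
    A compressed sketch of the wrapper idea, learning where a host segmentation disagrees with manual labels and flipping those voxels at test time, is given below; the feature set is greatly simplified, binary masks are assumed, and a random forest is used as a stand-in learner, so this is not the published implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def voxel_features(image, host_seg):
    """Simple per-voxel features: intensity, host label, and a local mean."""
    local_mean = uniform_filter(image.astype(float), size=5)
    return np.column_stack([image.ravel(), host_seg.ravel(), local_mean.ravel()])

def train_error_corrector(images, host_segs, manual_segs):
    """Learn where the host method is systematically wrong (binary voxel labels)."""
    X = np.vstack([voxel_features(im, hs) for im, hs in zip(images, host_segs)])
    y = np.concatenate([(hs != ms).ravel().astype(int)
                        for hs, ms in zip(host_segs, manual_segs)])
    return RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X, y)

def correct(clf, image, host_seg):
    """Flip host labels at voxels predicted to be erroneous (binary 0/1 masks only)."""
    err = clf.predict(voxel_features(image, host_seg)).reshape(host_seg.shape)
    return np.where(err == 1, 1 - host_seg, host_seg)
```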

  17. Optimization strategies of in-tube extraction (ITEX) methods.

    Science.gov (United States)

    Laaks, Jens; Jochmann, Maik A; Schilling, Beat; Schmidt, Torsten C

    2015-09-01

    Microextraction techniques, especially dynamic techniques like in-tube extraction (ITEX), can require an extensive method optimization procedure. This work summarizes the experiences from several methods and gives recommendations for the setting of proper extraction conditions to minimize experimental effort. Therefore, the governing parameters of the extraction and injection stages are discussed. This includes the relative extraction efficiencies of 11 kinds of sorbent tubes, either commercially available or custom made, regarding 53 analytes from different classes of compounds. They cover aromatics, heterocyclic aromatics, halogenated hydrocarbons, fuel oxygenates, alcohols, esters, and aldehydes. The number of extraction strokes and the corresponding extraction flow, also in dependence of the expected analyte concentrations, are discussed as well as the interactions between sample and extraction phase temperature. The injection parameters cover two different injection methods. The first is intended for the analysis of highly volatile analytes and the second either for the analysis of lower volatile analytes or when the analytes can be re-focused by a cold trap. The desorption volume, the desorption temperature, and the desorption flow are compared, together with the suitability of both methods for analytes of varying volatilities. The results are summarized in a flow chart, which can be used to select favorable starting conditions for further method optimization.

  18. Automatic off-body overset adaptive Cartesian mesh method based on an octree approach

    International Nuclear Information System (INIS)

    Péron, Stéphanie; Benoit, Christophe

    2013-01-01

    This paper describes a method for generating adaptive structured Cartesian grids within a near-body/off-body mesh partitioning framework for the flow simulation around complex geometries. The off-body Cartesian mesh generation derives from an octree structure, assuming each octree leaf node defines a structured Cartesian block. This enables one to take into account the large scale discrepancies in terms of resolution between the different bodies involved in the simulation, with minimum memory requirements. Two different conversions from the octree to Cartesian grids are proposed: the first one generates Adaptive Mesh Refinement (AMR) type grid systems, and the second one generates abutting or minimally overlapping Cartesian grid set. We also introduce an algorithm to control the number of points at each adaptation, that automatically determines relevant values of the refinement indicator driving the grid refinement and coarsening. An application to a wing tip vortex computation assesses the capability of the method to capture accurately the flow features.

  19. Seamless Ligation Cloning Extract (SLiCE) Cloning Method

    OpenAIRE

    Zhang, Yongwei; Werling, Uwe; Edelmann, Winfried

    2014-01-01

    SLiCE (Seamless Ligation Cloning Extract) is a novel cloning method that utilizes easy to generate bacterial cell extracts to assemble multiple DNA fragments into recombinant DNA molecules in a single in vitro recombination reaction. SLiCE overcomes the sequence limitations of traditional cloning methods, facilitates seamless cloning by recombining short end homologies (15–52 bp) with or without flanking heterologous sequences and provides an effective strategy for directional subcloning of D...

  20. Automatic Extraction of Appendix from Ultrasonography with Self-Organizing Map and Shape-Brightness Pattern Learning

    Directory of Open Access Journals (Sweden)

    Kwang Baek Kim

    2016-01-01

    Full Text Available Accurate diagnosis of acute appendicitis is a difficult problem in practice especially when the patient is too young or women in pregnancy. In this paper, we propose a fully automatic appendix extractor from ultrasonography by applying a series of image processing algorithms and an unsupervised neural learning algorithm, self-organizing map. From the suggestions of clinical practitioners, we define four shape patterns of appendix and self-organizing map learns those patterns in pixel clustering phase. In the experiment designed to test the performance for those four frequently found shape patterns, our method is successful in 3 types (1 failure out of 45 cases) but leaves a question for one shape pattern (80% correct).

  1. Automatic Extraction of Appendix from Ultrasonography with Self-Organizing Map and Shape-Brightness Pattern Learning.

    Science.gov (United States)

    Kim, Kwang Baek; Song, Doo Heon; Park, Hyun Jun

    2016-01-01

    Accurate diagnosis of acute appendicitis is a difficult problem in practice especially when the patient is too young or women in pregnancy. In this paper, we propose a fully automatic appendix extractor from ultrasonography by applying a series of image processing algorithms and an unsupervised neural learning algorithm, self-organizing map. From the suggestions of clinical practitioners, we define four shape patterns of appendix and self-organizing map learns those patterns in pixel clustering phase. In the experiment designed to test the performance for those four frequently found shape patterns, our method is successful in 3 types (1 failure out of 45 cases) but leaves a question for one shape pattern (80% correct).

  2. Automatic Method for Controlling the Iodine Adsorption Number in Carbon Black Oil Furnaces

    Directory of Open Access Journals (Sweden)

    Zečević, N.

    2008-12-01

    Full Text Available There are numerous inlet process factors in carbon black oil furnaces that must be continuously and automatically adjusted to ensure stable quality of the final product. The six most important inlet process factors in carbon black oil furnaces are: (1) volume flow of process air for combustion; (2) temperature of process air for combustion; (3) volume flow of natural gas to supply the heat necessary for the thermal conversion of the hydrocarbon oil feedstock into oil-furnace carbon black; (4) mass flow rate of the hydrocarbon oil feedstock; (5) type and quantity of additive for adjusting the structure of the oil-furnace carbon black; and (6) quantity and position of the quench water for cooling the oil-furnace carbon black reaction. The adsorption capacity of oil-furnace carbon black is controlled via the mass flow rate of the hydrocarbon feedstock, which is the most important inlet process factor. In the industrial process, the adsorption capacity of oil-furnace carbon black is determined by laboratory analysis of the iodine adsorption number. A continuous and automatic method for controlling the iodine adsorption number in carbon black oil furnaces is presented, aimed at the most efficient possible control of adsorption capacity. The proposed method reveals the correlation between the qualitative-quantitative composition of the process tail gases in the production of oil-furnace carbon black and the ratio between combustion air and hydrocarbon feedstock. It is shown that the ratio between combustion air and hydrocarbon oil feedstock depends on the adsorption capacity, summarized by the iodine adsorption number, with respect to the BMCI index of the hydrocarbon oil feedstock. This correlation is illustrated in Figures 1 to 4. Of the components of the process tail gases, the volume fraction of methane shows the best correlation for continuous and automatic control of the iodine adsorption number. The volume fraction of methane in the

  3. Contrast-based fully automatic segmentation of white matter hyperintensities: method and validation.

    Directory of Open Access Journals (Sweden)

    Thomas Samaille

    Full Text Available White matter hyperintensities (WMH) on T2 or FLAIR sequences have been commonly observed on MR images of elderly people. They have been associated with various disorders and have been shown to be a strong risk factor for stroke and dementia. WMH studies usually required visual evaluation of WMH load or time-consuming manual delineation. This paper introduced WHASA (White matter Hyperintensities Automated Segmentation Algorithm), a new method for automatically segmenting WMH from FLAIR and T1 images in multicentre studies. Contrary to previous approaches that were based on intensities, this method relied on contrast: nonlinear diffusion filtering alternated with watershed segmentation to obtain piecewise constant images with increased contrast between WMH and surrounding tissues. WMH were then selected based on a subject-dependent, automatically computed threshold and anatomical information. WHASA was evaluated on 67 patients from two studies, acquired on six different MRI scanners and displaying a wide range of lesion load. Accuracy of the segmentation was assessed through volume and spatial agreement measures with respect to manual segmentation; an intraclass correlation coefficient (ICC) of 0.96 and a mean similarity index (SI) of 0.72 were obtained. WHASA was compared to four other approaches: Freesurfer and a thresholding approach as unsupervised methods; k-nearest neighbours (kNN) and support vector machines (SVM) as supervised ones. For the latter, the influence of the training set was also investigated. WHASA clearly outperformed both unsupervised methods, while performing at least as well as the supervised approaches (ICC range: 0.87-0.91 for kNN; 0.89-0.94 for SVM; mean SI: 0.63-0.71 for kNN, 0.67-0.72 for SVM), and did not need any training set.

  4. A semi-automatic computer-aided method for surgical template design.

    Science.gov (United States)

    Chen, Xiaojun; Xu, Lu; Yang, Yue; Egger, Jan

    2016-02-04

    This paper presents a generalized integrated framework of semi-automatic surgical template design. Several algorithms were implemented including the mesh segmentation, offset surface generation, collision detection, ruled surface generation, etc., and a special software named TemDesigner was developed. With a simple user interface, a customized template can be semi-automatically designed according to the preoperative plan. Firstly, mesh segmentation with signed scalar of vertex is utilized to partition the inner surface from the input surface mesh based on the indicated point loop. Then, the offset surface of the inner surface is obtained through contouring the distance field of the inner surface, and segmented to generate the outer surface. Ruled surface is employed to connect inner and outer surfaces. Finally, drilling tubes are generated according to the preoperative plan through collision detection and merging. It has been applied to the template design for various kinds of surgeries, including oral implantology, cervical pedicle screw insertion, iliosacral screw insertion and osteotomy, demonstrating the efficiency, functionality and generality of our method.

  5. Forward gated-diode method for parameter extraction of MOSFETs

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Chenfei; He Jin; Wang Guozeng; Yang Zhang; Liu Zhiwei [Peking University Shenzhen SOC Key Laboratory, PKU HKUST Shenzhen Institute, Shenzhen 518057 (China); Ma Chenyue; Guo Xinjie; Zhang Xiufang, E-mail: frankhe@pku.edu.cn [TSRC, Institute of Microelectronics, School of Electronic Engineering and Computer Science, Peking University, Beijing 100871 (China)

    2011-02-15

    The forward gated-diode method is used to extract the dielectric oxide thickness and body doping concentration of MOSFETs, especially when both of the variables are unknown previously. First, the dielectric oxide thickness and the body doping concentration as a function of forward gated-diode peak recombination-generation (R-G) current are derived from the device physics. Then the peak R-G current characteristics of the MOSFETs with different dielectric oxide thicknesses and body doping concentrations are simulated with ISE-Dessis for parameter extraction. The results from the simulation data demonstrate excellent agreement with those extracted from the forward gated-diode method. (semiconductor devices)

  6. A Circular Statistical Method for Extracting Rotation Measures

    Indian Academy of Sciences (India)

    Abstract. We propose a new method for the extraction of Rotation Measures from spectral polarization data. The method is based on maximum likelihood analysis and takes into account the circular nature of the polarization data. The method is unbiased and statistically more efficient than the standard χ2 procedure.

  7. An interactive tool for semi-automatic feature extraction of hyperspectral data

    Directory of Open Access Journals (Sweden)

    Kovács Zoltán

    2016-09-01

    Full Text Available The spectral reflectance of the surface provides valuable information about the environment, which can be used to identify objects (e.g. land cover classification) or to estimate quantities of substances (e.g. biomass). We aimed to develop an MS Excel add-in – Hyperspectral Data Analyst (HypDA) – for a multipurpose quantitative analysis of spectral data in VBA programming language. HypDA was designed to calculate spectral indices from spectral data with user-defined formulas (in all possible combinations involving a maximum of 4 bands) and to find the best correlations between the quantitative attribute data of the same object. Different types of regression models reveal the relationships, and the best results are saved in a worksheet. Qualitative variables can also be involved in the analysis carried out with separability and hypothesis testing; i.e. to find the wavelengths responsible for separating data into predefined groups. HypDA can be used both with hyperspectral imagery and spectrometer measurements. This bivariate approach requires significantly fewer observations than popular multivariate methods; it can therefore be applied to a wide range of research areas.
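
    As a rough illustration of the index-search idea (in Python rather than the add-in's VBA), the sketch below scans all two-band normalized-difference indices and reports the one most correlated with a quantitative attribute; HypDA itself supports arbitrary user-defined formulas with up to four bands and several regression models.

```python
import numpy as np
from itertools import combinations

def best_normalized_difference(spectra, target):
    """Search two-band normalized-difference indices correlated with an attribute.

    spectra : (n_samples, n_bands) reflectance matrix
    target  : (n_samples,) quantitative attribute (e.g. biomass)
    Only one index form is tried here; the add-in accepts arbitrary formulas.
    """
    best = (None, 0.0)
    for i, j in combinations(range(spectra.shape[1]), 2):
        index = (spectra[:, i] - spectra[:, j]) / (spectra[:, i] + spectra[:, j] + 1e-12)
        r = np.corrcoef(index, target)[0, 1]          # Pearson correlation with the attribute
        if abs(r) > abs(best[1]):
            best = ((i, j), r)
    return best                                       # ((band_i, band_j), Pearson r)
```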

  8. Accurate and automatic extrinsic calibration method for blade measurement system integrated by different optical sensors

    Science.gov (United States)

    He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu

    2014-11-01

    Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and no single sensor can handle complex inspection tasks in an accurate and effective way. The prevailing solution is to integrate multiple sensors and take advantage of their strengths. To obtain a holistic 3D profile, the data from the different sensors must be registered into a coherent coordinate system. However, some complex-shaped objects, such as blades, have thin-wall features for which the ICP registration method becomes unstable. It is therefore very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system that integrates different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be moved optimally to any desired position at the object's surface. To simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation roughly aligns the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and then the Generalized Gauss-Markoff model is used to estimate the optimal transformation parameters. The experiments show the measurement result of a blade, where several sampled patches are merged into one point cloud, verifying the performance of the proposed method.
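
    A common building block for estimating the rigid part of such a transformation from corresponding points is the closed-form SVD (Kabsch) least-squares fit sketched below; it is only a stand-in for, not equivalent to, the Generalized Gauss-Markoff estimation used in the paper.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t such that dst ≈ R @ src + t.

    src, dst : (N, 3) corresponding points from the two sensors' scans of the
    calibration artifact. Classic SVD (Kabsch) solution; the paper's
    Generalized Gauss-Markoff estimation is more general than this sketch.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```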

  9. Using Nanoinformatics Methods for Automatically Identifying Relevant Nanotoxicology Entities from the Literature

    Science.gov (United States)

    García-Remesal, Miguel; García-Ruiz, Alejandro; Pérez-Rey, David; de la Iglesia, Diana; Maojo, Víctor

    2013-01-01

    Nanoinformatics is an emerging research field that uses informatics techniques to collect, process, store, and retrieve data, information, and knowledge on nanoparticles, nanomaterials, and nanodevices and their potential applications in health care. In this paper, we have focused on the solutions that nanoinformatics can provide to facilitate nanotoxicology research. For this, we have taken a computational approach to automatically recognize and extract nanotoxicology-related entities from the scientific literature. The desired entities belong to four different categories: nanoparticles, routes of exposure, toxic effects, and targets. The entity recognizer was trained using a corpus that we specifically created for this purpose and was validated by two nanomedicine/nanotoxicology experts. We evaluated the performance of our entity recognizer using 10-fold cross-validation. The precisions range from 87.6% (targets) to 93.0% (routes of exposure), while recall values range from 82.6% (routes of exposure) to 87.4% (toxic effects). These results prove the feasibility of using computational approaches to reliably perform different named entity recognition (NER)-dependent tasks, such as for instance augmented reading or semantic searches. This research is a “proof of concept” that can be expanded to stimulate further developments that could assist researchers in managing data, information, and knowledge at the nanolevel, thus accelerating research in nanomedicine. PMID:23509721

  10. Comparing extraction buffers to identify optimal method to extract somatic coliphages from sewage sludges.

    Science.gov (United States)

    Murthi, Poornima; Praveen, Chandni; Jesudhasan, Palmy R; Pillai, Suresh D

    2012-08-01

    Somatic coliphages are present in high numbers in sewage sludge. Since they are conservative indicators of viruses during wastewater treatment processes, they are being used to evaluate the effectiveness of sludge treatment processes. However, efficient methods to extract them from sludge are lacking. The objective was to compare different virus extraction procedures and develop a method to extract coliphages from sewage sludge. Twelve different extraction buffers and procedures varying in composition, pH, and sonication were compared in their ability to recover indigenous phages from sludges. The 3% buffered beef extract (BBE) (pH 9.0), the 10% BBE (pH 9.0), and the 10% BBE (pH 7.0) with sonication were short-listed and their recovery efficiency was determined using coliphage-spiked samples. The highest recovery was 16% for the extraction that involved 10% BBE at pH 9.0. There is a need to develop methods to extract somatic phages from sludges for monitoring sludge treatment processes. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Automatic Method for Building Indoor Boundary Models from Dense Point Clouds Collected by Laser Scanners

    Directory of Open Access Journals (Sweden)

    Enrique Valero

    2012-11-01

    Full Text Available In this paper we present a method that automatically yields Boundary Representation Models (B-rep for indoors after processing dense point clouds collected by laser scanners from key locations through an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoors in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners yielding promising results. We have deeply evaluated the results by analyzing how reliably these elements can be detected and how accurately they are modeled.

  12. Automatic Method for Building Indoor Boundary Models from Dense Point Clouds Collected by Laser Scanners

    Science.gov (United States)

    Valero, Enrique; Adán, Antonio; Cerrada, Carlos

    2012-01-01

    In this paper we present a method that automatically yields Boundary Representation Models (B-rep) for indoors after processing dense point clouds collected by laser scanners from key locations through an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoors in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners yielding promising results. We have deeply evaluated the results by analyzing how reliably these elements can be detected and how accurately they are modeled. PMID:23443369

  13. Research on automatic current sharing control methods for control power supply

    Directory of Open Access Journals (Sweden)

    Dai Xian Bin

    2016-01-01

    Full Text Available High-power switching devices in a control power supply have different saturated forward voltage drops and inconsistent turn-on/turn-off times, which lead to inconsistencies in the external characteristics of inverter modules operating in parallel. Modules with better external characteristics carry more current and become overloaded, while modules with worse external characteristics remain lightly loaded; this increases the thermal stress on the modules carrying more current and shortens the service life of the high-power switching devices. Based on small-signal simulation analysis of a module using the automatic current-sharing method for the control power supply, the characteristics of the current-sharing loop can be identified, namely the slow response speed of the current-sharing loop, which is beneficial for improving the stability of the entire control power supply system.

  14. Comparison of manual and semi-automatic DNA extraction protocols for the barcoding characterization of hematophagous louse flies (Diptera: Hippoboscidae).

    Science.gov (United States)

    Gutiérrez-López, Rafael; Martínez-de la Puente, Josué; Gangoso, Laura; Soriguer, Ramón C; Figuerola, Jordi

    2015-06-01

    The barcoding of life initiative provides a universal molecular tool to distinguish animal species based on the amplification and sequencing of a fragment of the subunit 1 of the cytochrome oxidase (COI) gene. Obtaining good quality DNA for barcoding purposes is a limiting factor, especially in studies conducted on small-sized samples or those requiring the maintenance of the organism as a voucher. In this study, we compared the number of positive amplifications and the quality of the sequences obtained using DNA extraction methods that also differ in their economic costs and time requirements and we applied them for the genetic characterization of louse flies. Four DNA extraction methods were studied: chloroform/isoamyl alcohol, HotShot procedure, Qiagen DNeasy(®) Tissue and Blood Kit and DNA Kit Maxwell(®) 16LEV. All the louse flies were morphologically identified as Ornithophila gestroi and a single COI-based haplotype was identified. The number of positive amplifications did not differ significantly among DNA extraction procedures. However, the quality of the sequences was significantly lower for the case of the chloroform/isoamyl alcohol procedure with respect to the rest of methods tested here. These results may be useful for the genetic characterization of louse flies, leaving most of the remaining insect as a voucher. © 2015 The Society for Vector Ecology.

  15. Comparison of reconstruction methods for computed tomography with industrial robots using automatic object position recognition

    International Nuclear Information System (INIS)

    Klein, Philipp; Herold, Frank

    2016-01-01

    Computed Tomography (CT) is one of the main imaging techniques in the field of non-destructive testing. Recently, industrial robots have been used to manipulate the object during the whole CT scan, instead of just placing the object on a standard turntable as was previously usual for industrial CT. Using industrial robots for object manipulation in CT systems provides an increase in spatial freedom and therefore more flexibility for various applications. For example, complete CT trajectories with respect to the Tuy-Smith theorem can be applied more easily than with conventional manipulators. These advantages are accompanied by a loss of positioning precision, caused by mechanical limitations of the robotic systems. In this article we will present a comparison of established reconstruction methods for CT with industrial robots using a so-called Automatic Object Position Recognition (AOPR). AOPR is a new automatic method which improves the position accuracy online by using a priori information about fixed markers in space. The markers are used to reconstruct the position of the object during each image acquisition. These more precise positions lead to a higher quality of the reconstructed volume after the image reconstruction. We will study the image quality of several different reconstruction techniques. For example, we will reconstruct real robot-CT datasets by filtered back-projection (FBP), the simultaneous algebraic reconstruction technique (SART) or Siemens's theoretically exact reconstruction (TXR). Each time, we will evaluate the datasets with and without AOPR and will present the resulting image quality. Moreover, we will measure the computation time of AOPR to prove that the real-time conditions are still fulfilled.

  16. A Survey of Automatic Protocol Reverse Engineering Approaches, Methods, and Tools on the Inputs and Outputs View

    Directory of Open Access Journals (Sweden)

    Baraka D. Sija

    2018-01-01

    Full Text Available A network protocol defines rules that control communications between two or more machines on the Internet, whereas Automatic Protocol Reverse Engineering (APRE) defines the way of extracting the structure of a network protocol without accessing its specifications. Sufficient knowledge of undocumented protocols is essential for security purposes, network policy implementation, and management of network resources. This paper reviews and analyzes a total of 39 approaches, methods, and tools for Protocol Reverse Engineering (PRE) and classifies them into four divisions: approaches that reverse engineer protocol finite state machines, approaches that reverse engineer protocol formats, approaches that reverse engineer both protocol finite state machines and protocol formats, and approaches that focus directly on neither. The efficiency of the outputs of all approaches, given their selected inputs, is analyzed in general, together with the appropriate input formats for reverse engineering. Additionally, we present a discussion and an extended classification in terms of automated versus manual approaches, known and novel categories of reverse-engineered protocols, and a literature review of reverse-engineered protocols in relation to the seven-layer OSI (Open Systems Interconnection) model.

  17. A fully automatic end-to-end method for content-based image retrieval of CT scans with similar liver lesion annotations.

    Science.gov (United States)

    Spanier, A B; Caplan, N; Sosna, J; Acar, B; Joskowicz, L

    2018-01-01

    The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since the patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic to handle large case databases. We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based features vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, that it uses simple and efficient techniques that are scalable to large datasets, and that it produces quality retrieval results using an unannotated CT scan. Our experimental results on 9 CT queries on a dataset of 41 volumetric CT scans from the 2014 Image CLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 and 0.84 without/with annotation, respectively. Fully automatic end-to-end retrieval of similar cases based on image information alone, rather that on disease diagnosis, may help radiologists to better diagnose liver lesions.
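
    The retrieval accuracy above is reported as a Normalized Discounted Cumulative Gain (NDCG) index; the sketch below shows the standard way such an index is computed for one ranked result list, assuming the usual log2 discount (the exact grading scheme of the CLEF task may differ).

```python
import numpy as np

def ndcg(relevances, k=None):
    """Normalized Discounted Cumulative Gain for one ranked result list.

    relevances : graded relevance of each retrieved scan, in ranked order.
    Standard log2 discount; returns a value in [0, 1], with 1 meaning the
    ranking is ideal (most relevant scans first).
    """
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = np.sum(rel * discounts)
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    idcg = np.sum(ideal * discounts[:ideal.size])
    return dcg / idcg if idcg > 0 else 0.0

# e.g. ndcg([3, 2, 3, 0, 1, 2]) is close to 1 because the highly relevant
# items appear near the top of the ranked list.
```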

  18. Two-dimensional Morlet wavelet transform and its application to wave recognition methodology of automatically extracting two-dimensional wave packets from lidar observations in Antarctica

    Science.gov (United States)

    Chen, Cao; Chu, Xinzhao

    2017-09-01

    Waves in the atmosphere and ocean are inherently intermittent, with amplitudes, frequencies, or wavelengths varying in time and space. Most waves exhibit wave packet-like properties, propagate at oblique angles, and are often observed in two-dimensional (2-D) datasets. These features make the wavelet transforms, especially the 2-D wavelet approach, more appealing than the traditional windowed Fourier analysis, because the former allows adaptive time-frequency window width (i.e., automatically narrowing window size at high frequencies and widening at low frequencies), while the latter uses a fixed envelope function. This study establishes the mathematical formalism of modified 1-D and 2-D Morlet wavelet transforms, ensuring that the power of the wavelet transform in the frequency/wavenumber domain is equivalent to the mean power of its counterpart in the time/space domain. Consequently, the modified wavelet transforms eliminate the bias against high-frequency/small-scale waves in the conventional wavelet methods and many existing codes. Based on the modified 2-D Morlet wavelet transform, we put forward a wave recognition methodology that automatically identifies and extracts 2-D quasi-monochromatic wave packets and then derives their wave properties including wave periods, wavelengths, phase speeds, and time/space spans. A step-by-step demonstration of this methodology is given on analyzing the lidar data taken during 28-30 June 2014 at McMurdo, Antarctica. The newly developed wave recognition methodology is then applied to two more lidar observations in May and July 2014, to analyze the recently discovered persistent gravity waves in Antarctica. The decomposed inertia-gravity wave characteristics are consistent with the conclusion in Chen et al. (2016a) that the 3-10 h waves are persistent and dominant, and exhibit lifetimes of multiple days. They have vertical wavelengths of 20-30 km, vertical phase speeds of 0.5-2 m/s, and horizontal wavelengths up to several
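
    A minimal sketch of sampling a 2-D Morlet wavelet at a given scale and orientation is shown below; it uses the plain Morlet form and one simple normalization choice, not the modified normalization introduced in the paper, so the amplitudes should be read as illustrative only.

```python
import numpy as np

def morlet2d(shape, scale, angle, k0=6.0):
    """Sample a 2-D Morlet wavelet on a grid.

    shape : (ny, nx) grid size; scale : dilation in grid units;
    angle : orientation of the carrier wave in radians; k0 : nondimensional
    central wavenumber. Plain Morlet form without the paper's normalization
    modification, so treat the amplitudes as illustrative.
    """
    ny, nx = shape
    y, x = np.mgrid[-(ny // 2):(ny - ny // 2), -(nx // 2):(nx - nx // 2)]
    xr = (x * np.cos(angle) + y * np.sin(angle)) / scale    # rotated, scaled coordinates
    yr = (-x * np.sin(angle) + y * np.cos(angle)) / scale
    envelope = np.exp(-0.5 * (xr ** 2 + yr ** 2))            # Gaussian envelope
    carrier = np.exp(1j * k0 * xr)                           # complex plane wave along `angle`
    return envelope * carrier / scale                        # one normalization choice of many

# A coefficient map at this scale/orientation can then be obtained by correlating
# the 2-D field with the kernel, e.g.
#   from scipy.signal import fftconvolve
#   coeffs = fftconvolve(field, np.conj(kernel[::-1, ::-1]), mode="same")
```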

  19. An automatic patient-specific seizure onset detection method in intracranial EEG based on incremental nonlinear dimensionality reduction.

    Science.gov (United States)

    Zhang, Yizhuo; Xu, Guanghua; Wang, Jing; Liang, Lin

    2010-01-01

    Epileptic seizure features always include the morphology and spatial distribution of nonlinear waveforms in the electroencephalographic (EEG) signals. In this study, we propose a novel incremental learning scheme based on nonlinear dimensionality reduction for automatic patient-specific seizure onset detection. The method allows for identification of seizure onset times in long-term EEG signals acquired from epileptic patients. Firstly, a nonlinear dimensionality reduction (NDR) method called local tangent space alignment (LTSA) is used to reduce the dimensionality of the available initial feature sets extracted with the continuous wavelet transform (CWT). A one-dimensional manifold which reflects the intrinsic dynamics of seizure onset is obtained. For each patient, an IEEG recording containing one seizure onset is sufficient to train the initial one-dimensional manifold. Secondly, an unsupervised incremental learning scheme is proposed to update the initial manifold as unlabelled EEG segments flow in sequentially. The incremental learning scheme can cluster the newly arriving samples into the trained patterns (containing or not containing seizure onsets). Intracranial EEG recordings from 21 patients, with a duration of 193.8 h and 82 seizures, are used for the evaluation of the method. An average sensitivity of 98.8%, an average uninteresting false positive rate of 0.24/h, an average interesting false positive rate of 0.25/h, and an average detection delay of 10.8 s are obtained. Our method offers simple, accurate training with less human intervention and can be well used in off-line seizure detection. The unsupervised incremental learning scheme has the potential to identify novel IEEG classes (different onset patterns) within the data. Copyright © 2010 Elsevier Ltd. All rights reserved.
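
    As a rough sketch of the dimensionality-reduction step only, the code below maps per-segment feature vectors onto a one-dimensional manifold with scikit-learn's LTSA implementation; the incremental (out-of-sample) update scheme that is central to the paper is not reproduced.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

def ltsa_1d_manifold(feature_matrix, n_neighbors=12):
    """Map CWT feature vectors of EEG segments onto a 1-D manifold with LTSA.

    feature_matrix : (n_segments, n_features) wavelet features per EEG segment.
    scikit-learn's LTSA variant is used as a stand-in; the study's incremental
    (out-of-sample) update step is not reproduced here.
    """
    ltsa = LocallyLinearEmbedding(n_neighbors=n_neighbors, n_components=1, method="ltsa")
    embedding = ltsa.fit_transform(feature_matrix)   # (n_segments, 1) manifold coordinate
    return embedding.ravel()
```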

  20. An efficient method for DNA extraction from Cladosporioid fungi

    OpenAIRE

    Moslem, M.A.; Bahkali, A.H.; Abd-Elsalam, K.A.; Wit, de, P.J.G.M.

    2010-01-01

    We developed an efficient method for DNA extraction from Cladosporioid fungi, which are important fungal plant pathogens. The cell wall of Cladosporioid fungi is often melanized, which makes it difficult to extract DNA from their cells. In order to overcome this we grew these fungi for three days on agar plates and extracted DNA from mycelium mats after manual or electric homogenization. High-quality DNA was isolated, with an A260/A280 ratio ranging between 1.6 and 2.0. Isolated genomic DNA w...

  1. An automatic segmentation method of a parameter-adaptive PCNN for medical images.

    Science.gov (United States)

    Lian, Jing; Shi, Bin; Li, Mingcong; Nan, Ziwei; Ma, Yide

    2017-09-01

    Since pre-processing and initial segmentation steps in medical images directly affect the final segmentation results of the regions of interest, an automatic segmentation method based on a parameter-adaptive pulse-coupled neural network is proposed to integrate the above-mentioned two segmentation steps into one. This method has a low computational complexity for different kinds of medical images and a high segmentation precision. The method comprises four steps. Firstly, an optimal histogram threshold is used to determine the parameter [Formula: see text] for different kinds of images. Secondly, we acquire the parameter [Formula: see text] according to a simplified pulse-coupled neural network (SPCNN). Thirdly, we redefine the parameter V of the SPCNN model by the sub-intensity distribution range of firing pixels. Fourthly, we add an offset [Formula: see text] to improve the initial segmentation precision. Compared with state-of-the-art algorithms, the new method achieves comparable performance in experiments on ultrasound images of the gallbladder and gallstones, magnetic resonance images of the left ventricle, and mammogram images of the left and the right breast, presenting overall metrics of UM = 0.9845, CM = 0.8142 and TM = 0.0726. The algorithm has great potential for performing the pre-processing and initial segmentation steps in various medical images. This is a prerequisite for assisting physicians in detecting and diagnosing clinical cases.
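
    The parameter symbols in this record are masked as "[Formula: see text]". Purely as an illustration of what a simplified PCNN iteration looks like, the sketch below runs a generic SPCNN over an image with fixed placeholder parameters; the paper's contribution is precisely to set those parameters adaptively per image, which is not reproduced here.

```python
import numpy as np
from scipy.ndimage import convolve

def spcnn_segment(image, beta=0.3, v_e=110.0, alpha_e=0.7, n_iter=30):
    """Generic simplified pulse-coupled neural network (SPCNN) firing over an image.

    image : 2-D array scaled to [0, 1]. beta, v_e and alpha_e are placeholder
    values; in the paper they are determined adaptively per image.
    """
    w = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    s = image.astype(float)
    y = np.zeros_like(s)                              # neuron outputs (fired / not fired)
    e = np.ones_like(s)                               # dynamic firing threshold
    fired_at = np.full(s.shape, -1, dtype=int)
    for n in range(n_iter):
        link = convolve(y, w, mode="constant")        # linking input from neighbours
        u = s * (1.0 + beta * link)                   # internal activity
        y = (u > e).astype(float)
        fired_at[(fired_at < 0) & (y > 0)] = n        # record first firing iteration
        e = np.exp(-alpha_e) * e + v_e * y            # decay threshold, boost where fired
    return fired_at                                   # grouping by firing time acts as a segmentation
```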

  2. Standard test methods for determining average grain size using semiautomatic and automatic image analysis

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2015-01-01

    1.1 These test methods are used to determine grain size from measurements of grain intercept lengths, intercept counts, intersection counts, grain boundary length, and grain areas. 1.2 These measurements are made with a semiautomatic digitizing tablet or by automatic image analysis using an image of the grain structure produced by a microscope. 1.3 These test methods are applicable to any type of grain structure or grain size distribution as long as the grain boundaries can be clearly delineated by etching and subsequent image processing, if necessary. 1.4 These test methods are applicable to measurement of other grain-like microstructures, such as cell structures. 1.5 This standard deals only with the recommended test methods and nothing in it should be construed as defining or establishing limits of acceptability or fitness for purpose of the materials tested. 1.6 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user ...

  3. Extraction methods of Amaranthus sp. grain oil isolation.

    Science.gov (United States)

    Krulj, Jelena; Brlek, Tea; Pezo, Lato; Brkljača, Jovana; Popović, Sanja; Zeković, Zoran; Bodroža Solarov, Marija

    2016-08-01

    Amaranthus sp. is a fast-growing crop with well-known beneficial nutritional values (rich in protein, fat, dietary fiber, ash, and minerals, especially calcium and sodium, and containing a higher amount of lysine than conventional cereals). Amaranthus sp. is an underexploited plant source of squalene, a compound of high importance in the food, cosmetic and pharmaceutical industries. This paper has examined the effects of the different extraction methods (Soxhlet, supercritical fluid and accelerated solvent extraction) on the oil and squalene yield of three genotypes of Amaranthus sp. grain. The highest yield of the extracted oil (78.1 g kg(-1) ) and squalene (4.7 g kg(-1) ) in grain was obtained by accelerated solvent extraction (ASE) in genotype 16. Post hoc Tukey's HSD test at 95% confidence limit showed significant differences between observed samples. Principal component analysis (PCA) and cluster analysis (CA) were used for assessing the effect of different genotypes and extraction methods on oil and squalene yield, and also the fatty acid composition profile. Using coupled PCA and CA of observed samples, possible directions for improving the quality of product can be realized. The results of this study indicate that it is very important to choose both the right genotype and the right method of extraction for optimal oil and squalene yield. © 2015 Society of Chemical Industry.

  4. Airway Segmentation and Centerline Extraction from Thoracic CT - Comparison of a New Method to State of the Art Commercialized Methods.

    Science.gov (United States)

    Reynisson, Pall Jens; Scali, Marta; Smistad, Erik; Hofstad, Erlend Fagertun; Leira, Håkon Olav; Lindseth, Frank; Nagelhus Hernes, Toril Anita; Amundsen, Tore; Sorger, Hanne; Langø, Thomas

    2015-01-01

    Reference segmentation comparison averages and standard deviations for MPM and TSF correspond to literature. The TSF is able to segment the airways and extract the centerlines in one single step. The number of branches found is lower for the TSF method than in Mimics. OsiriX demands the highest number of clicks to process the data, the segmentation is often sparse and extracting the centerline requires the use of another software system. Two of the software systems performed satisfactorily with respect to use in preprocessing CT images for navigated bronchoscopy, i.e. the TSF method and the MPM. According to reference segmentation both TSF and MPM are comparable with other segmentation methods. The level of automaticity and the resulting high number of branches, plus the fact that both the centerline and the surface of the airways were extracted, are requirements we considered particularly important. The in-house method has the advantage of being an integrated part of a navigation platform for bronchoscopy, whilst the other methods can be considered preprocessing tools for a navigation system.

  5. Airway Segmentation and Centerline Extraction from Thoracic CT - Comparison of a New Method to State of the Art Commercialized Methods.

    Directory of Open Access Journals (Sweden)

    Pall Jens Reynisson

    centerlines. Reference segmentation comparison averages and standard deviations for MPM and TSF correspond to the literature. The TSF is able to segment the airways and extract the centerlines in a single step. The number of branches found is lower for the TSF method than in Mimics. OsiriX demands the highest number of clicks to process the data, the segmentation is often sparse, and extracting the centerline requires the use of another software system. Two of the software systems, the TSF method and the MPM, performed satisfactorily with respect to use in preprocessing CT images for navigated bronchoscopy. According to the reference segmentation, both TSF and MPM are comparable with other segmentation methods. The level of automaticity and the resulting high number of branches, plus the fact that both the centerline and the surface of the airways were extracted, are requirements we considered particularly important. The in-house method has the advantage of being an integrated part of a navigation platform for bronchoscopy, whilst the other methods can be considered preprocessing tools for a navigation system.

  6. [Determination of characteristic compound in manuka honey by automatic on-line solid phase extraction-liquid chromatography-high resolution mass spectrometry].

    Science.gov (United States)

    Shen, Chongyu; Guo, Siyan; Ding, Tao; Liu, Yun; Chen, Lei; Fei, Xiaoqing; Zhang, Rui; Wu, Bin; Shen, Weijian; Chen, Lei; Zhang, Feng; Feng, Feng; Deng, Xiaojun; Yi, Xionghai; Yang, Gongjun; Chen, Guoqiang

    2017-10-08

    A method for the determination of the characteristic compound 3,5-dimethoxybenzoate-4-diglucoside (leptosperin) in manuka honey was developed by automatic on-line solid phase extraction-liquid chromatography-high resolution mass spectrometry (SPE-LC-HRMS). The samples were separated on a Dikma Diamonsil Plus C18 column (150 mm×4.6 mm, 5 μm) using mobile phases of 0.1% (v/v) formic acid aqueous solution and acetonitrile with gradient elution. The compound was detected with negative electrospray ionization (ESI-) in Target-MS2 mode. The results showed that the linear range was 0.5-100.0 mg/L with a correlation coefficient of 0.9993. The limit of detection (LOD, S/N ≥ 3) and limit of quantification (LOQ, S/N ≥ 10) of the method were 3 mg/kg and 10 mg/kg, respectively. The recoveries at spiked levels of 50.0, 100.0 and 200.0 mg/kg (10.0, 20.0 and 50.0 mg/kg in black locust samples) were in the range of 82.0%-95.2%, with relative standard deviations ranging from 2.7% to 9.7% (n=6). The proposed method was applied to 95 mature honey samples from hives in New Zealand, covering 12 different kinds, and to 50 commercial honey samples from four different countries. The method is fast, sensitive and accurate, and provides technical support for assessing manuka honey imported from New Zealand.

  7. Microscale extraction method for HPLC carotenoid analysis in vegetable matrices

    Directory of Open Access Journals (Sweden)

    Sidney Pacheco

    2014-10-01

    In order to generate simple, efficient analytical methods that are also fast, clean and economical, and capable of producing reliable results for a large number of samples, a microscale extraction method for the analysis of carotenoids in vegetable matrices was developed. The efficiency of this adapted method was checked by comparing the results obtained from vegetable matrices with respect to extraction equivalence, time required and reagents. Six matrices were used: tomato (Solanum lycopersicum L.), carrot (Daucus carota L.), sweet potato with orange pulp (Ipomoea batatas (L.) Lam.), pumpkin (Cucurbita moschata Duch.), watermelon (Citrullus lanatus (Thunb.) Matsum. & Nakai) and sweet potato (Ipomoea batatas (L.) Lam.) flour. Quantification of the total carotenoids was made by spectrophotometry. Quantification and determination of carotenoid profiles were performed by High Performance Liquid Chromatography with photodiode array detection. Microscale extraction was faster, cheaper and cleaner than the commonly used method, and advantageous for analytical laboratories.

  8. Phenolic content and antibacterial activity of extracts of Hamelia patens obtained by different extraction methods.

    Science.gov (United States)

    Paz, Jorge Enrique Wong; Contreras, Carolina Rubio; Munguía, Abigail Reyes; Aguilar, Cristóbal Noé; Inungaray, María Luisa Carrillo

    2017-12-06

    Hamelia patens is a plant traditionally used to treat a variety of conditions among the Huastec people of Mexico. The objective of this study is to characterize the phenolic content and critically examine the antimicrobial activity of H. patens leaf extracts obtained by maceration, Soxhlet extraction and percolation, using 70% ethanol as the solvent. Phenolic compounds were characterized by liquid chromatography coupled to high-resolution mass spectrometry, and the antimicrobial activity was studied from the inhibitory effect of each extract against Escherichia coli, Staphylococcus aureus, Salmonella typhi and S. paratyphi, and from the Minimum Bactericidal Concentration, the percentage of activity and the Index of Bacterial Susceptibility of each extract. The phenolic compound identified in different concentrations in the three extracts was epicatechin. The extracts obtained by the three methods had antimicrobial activity; however, there was no significant difference (p<0.05) between the Minimum Bactericidal Concentrations of the extracts obtained by maceration, percolation and Soxhlet extraction. The results of this study contribute to the body of knowledge on the use of extracts in controlling microorganisms with natural antimicrobials. Copyright © 2017 Sociedade Brasileira de Microbiologia. Published by Elsevier Editora Ltda. All rights reserved.

  9. Influence of Extraction Methods on the Yield of Steviol Glycosides and Antioxidants in Stevia rebaudiana Extracts.

    Science.gov (United States)

    Periche, Angela; Castelló, Maria Luisa; Heredia, Ana; Escriche, Isabel

    2015-06-01

    This study evaluated the application of ultrasound techniques and microwave energy, compared to conventional extraction methods (high temperatures at atmospheric pressure), for the solid-liquid extraction of steviol glycosides (sweeteners) and antioxidants (total phenols, flavonoids and antioxidant capacity) from dehydrated Stevia leaves. Different temperatures (from 50 to 100 °C), times (from 1 to 40 min) and microwave powers (1.98 and 3.30 W/g extract) were used. There was a great difference in the resulting yields according to the treatments applied. Steviol glycosides and antioxidants were negatively correlated; therefore, there is no single treatment suitable for obtaining the highest yield in both groups of compounds simultaneously. The greatest yield of steviol glycosides was obtained with microwave energy (3.30 W/g extract, 2 min), whereas, the conventional method (90 °C, 1 min) was the most suitable for antioxidant extraction. Consequently, the best process depends on the subsequent use (sweetener or antioxidant) of the aqueous extract of Stevia leaves.

  10. Comparison of Document Index Graph Using TextRank and HITS Weighting Method in Automatic Text Summarization

    Science.gov (United States)

    Hadyan, Fadhlil; Shaufiah; Arif Bijaksana, Moch.

    2017-01-01

    Automatic summarization systems help users grasp the core information of a long text instantly by summarizing it automatically. Many summarization systems have already been developed, but they still have shortcomings. This final project proposes a summarization method based on a document index graph. The method adapts the PageRank and HITS formulas, originally used to score web pages, to score the words in the sentences of a text document. The expected outcome is a system that can summarize a single document by using a document index graph with TextRank and HITS to improve the quality of the automatically produced summary.
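
    The record above only names the idea, so the following is a minimal, hypothetical illustration of graph-based extractive summarization in Python: sentences are ranked with PageRank over a TF-IDF cosine-similarity graph. The similarity measure, the library choices and the tiny example text are assumptions, not the paper's document-index-graph construction.

```python
# Minimal TextRank-style sentence ranking (illustrative only, not the paper's
# exact document index graph): sentences are nodes, TF-IDF cosine similarity
# gives edge weights, and PageRank scores select the summary sentences.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(sentences, n_top=2):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)            # sentence-to-sentence similarity
    graph = nx.from_numpy_array(sim)          # weighted undirected graph
    scores = nx.pagerank(graph)               # TextRank = PageRank on this graph
    top = sorted(scores, key=scores.get, reverse=True)[:n_top]
    return [sentences[i] for i in sorted(top)]  # keep original sentence order

print(summarize([
    "Automatic summarization extracts the core information of a long text.",
    "Graph-based methods rank sentences with PageRank or HITS.",
    "The highest-ranked sentences form the summary.",
]))
```

    A HITS-based variant could swap nx.pagerank for nx.hits, which is also available in networkx and returns separate hub and authority scores.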

  11. Analysis of medicinal plant extracts by neutron activation method

    International Nuclear Information System (INIS)

    Vaz, Sandra Muntz

    1995-01-01

    This dissertation presents the results of the analysis of medicinal plant extracts using the neutron activation method. Instrumental neutron activation analysis was applied to the determination of the elements Al, Br, Ca, Ce, Cl, Cr, Cs, Fe, K, La, Mg, Mn, Na, Rb, Sb, Sc and Zn in medicinal extracts obtained from Achyrolcline satureoides DC, Casearia sylvestris, Centella asiatica, Citrus aurantium L., Solano lycocarpum, Solidago microglossa, Stryphnondedron barbatiman and Zingiber officinale R. plants. The elements Hg and Se were determined using radiochemical separation, by means of retention of Se on an HMD inorganic exchanger and solvent extraction of Hg with bismuth diethyl-dithiocarbamate solution. The precision and accuracy of the results were evaluated by analysing reference materials. The therapeutic action of some of the elements found in the analysed plant extracts was briefly discussed.

  12. A Neutral-Network-Fusion Architecture for Automatic Extraction of Oceanographic Features from Satellite Remote Sensing Imagery

    National Research Council Canada - National Science Library

    Askari, Farid

    1999-01-01

    This report describes an approach for automatic feature detection from fusion of remote sensing imagery using a combination of neural network architecture and the Dempster-Shafer (DS) theory of evidence...

  13. Automatic segmentation of 4D cardiac MR images for extraction of ventricular chambers using a spatio-temporal approach

    Science.gov (United States)

    Atehortúa, Angélica; Zuluaga, Maria A.; Ourselin, Sébastien; Giraldo, Diana; Romero, Eduardo

    2016-03-01

    An accurate ventricular function quantification is important to support the evaluation, diagnosis and prognosis of several cardiac pathologies. However, expert heart delineation, specifically for the right ventricle, is a time-consuming task with high inter- and intra-observer variability. A fully automatic 3D+time heart segmentation framework is herein proposed for short-axis cardiac MRI sequences. This approach estimates the heart using exclusively information from the sequence itself, without tuning any parameters. The proposed framework uses a coarse-to-fine approach, which starts by localizing the heart via spatio-temporal analysis, followed by a segmentation of the basal heart that is then propagated to the apex by using a non-rigid registration strategy. The obtained volume is then refined by estimating the ventricular muscle by locally searching a prior endocardium-pericardium intensity pattern. The proposed framework was applied to 48 patient datasets supplied by the organizers of the MICCAI 2012 Right Ventricle segmentation challenge. Results show the robustness, efficiency and competitiveness of the proposed method both in terms of accuracy and computational load.

  14. Correction method for line extraction in vision measurement.

    Science.gov (United States)

    Shao, Mingwei; Wei, Zhenzhong; Hu, Mengjie; Zhang, Guangjun

    2015-01-01

    Over-exposure and perspective distortion are two of the main factors underlying inaccurate feature extraction. First, based on Steger's method, we propose a method for correcting curvilinear structures (lines) extracted from over-exposed images. A new line model based on the Gaussian line profile is developed, and its description in the scale space is provided. The line position is analytically determined by the zero crossing of its first-order derivative, and the bias due to convolution with the normal Gaussian kernel function is eliminated on the basis of the related description. The model considers over-exposure features and is capable of detecting the line position in an over-exposed image. Simulations and experiments show that the proposed method is not significantly affected by the exposure level and is suitable for correcting lines extracted from an over-exposed image. In our experiments, the corrected result is found to be more precise than the uncorrected result by around 45.5%. Second, we analyze perspective distortion, which is inevitable during line extraction owing to the projective camera model. The perspective distortion can be rectified on the basis of the bias introduced as a function of related parameters. The properties of the proposed model and its application to vision measurement are discussed. In practice, the proposed model can be adopted to correct line extraction according to specific requirements by employing suitable parameters.
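
    As a rough illustration of the Gaussian-scale-space line localization described above, the sketch below finds a line's subpixel center as the zero crossing of the first derivative of a Gaussian-smoothed intensity profile. The synthetic profile, the chosen sigma and the interpolation step are assumptions for illustration; the paper's over-exposure correction and bias-elimination terms are not modeled.

```python
# Subpixel line-center localization from the zero crossing of the first
# derivative of a Gaussian-smoothed profile (the core idea of Steger-style
# line extraction; over-exposure correction is not modeled here).
import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.arange(200, dtype=float)
profile = np.exp(-0.5 * ((x - 83.4) / 3.0) ** 2)      # synthetic line profile

d1 = gaussian_filter1d(profile, sigma=2.0, order=1)   # smoothed first derivative
i = np.where(np.diff(np.sign(d1)) < 0)[0][0]          # sign change from + to -
center = i - d1[i] / (d1[i + 1] - d1[i])              # linear subpixel interpolation

print("estimated line center:", center)               # close to 83.4
```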

  15. A semi-automatic calibration method for seismic arrays applied to an Alaskan array

    Science.gov (United States)

    Lindquist, K. G.; Tibuleac, I. M.; Hansen, R. A.

    2001-12-01

    Well-calibrated, small (less than 22 km) aperture seismic arrays are of great importance for event location and characterization. We have implemented the cross-correlation method of Tibuleac and Herrin (Seis. Res. Lett. 1997) as a semi-automatic procedure applicable to any seismic array. With this we are able to process thousands of phases in several days of computer time on a Sun Blade 1000 workstation. Complicated geology beneath the array elements and elevation differences among the array stations made station corrections necessary. 328 core phases (including PcP, PKiKP, PKP, PKKP) were used in order to determine the static corrections. To demonstrate this application and method, we have analyzed P and PcP arrivals at the ILAR array (Eielson, Alaska) between the years 1995-2000. The arrivals were picked by the PIDC for events (mb>4.0) well located by the USGS. We calculated backazimuth and horizontal velocity residuals for all events. We observed large backazimuth residuals for regional and near-regional phases. We discuss the possibility of a dipping Moho (strike E-W, dip N) beneath the array versus other local structure that would produce the residuals.

  16. A Review on Energy-Saving Optimization Methods for Robotic and Automatic Systems

    Directory of Open Access Journals (Sweden)

    Giovanni Carabin

    2017-12-01

    In recent decades, increasing energy prices and growing environmental awareness have driven engineers and scientists to find new solutions for reducing energy consumption in manufacturing. Although many highly energy-consuming processes (e.g., chemical, heating) are considered to have reached high levels of efficiency, this is not the case for many other industrial manufacturing activities. Indeed, this is the case for robotic and automatic systems, for which, in the past, the minimization of energy demand was not considered a design objective. The proper design and operation of industrial robots and automation systems represent a great opportunity for reducing energy consumption in industry, for example by substitution with more efficient systems and by energy optimization of operation. This review paper classifies and analyses several methodologies and technologies that have been developed with the aim of providing a reference of existing methods, techniques and technologies for enhancing the energy performance of industrial robotic and mechatronic systems. Hardware and software methods, including several subcategories, are considered and compared, and emerging ideas and possible future perspectives are discussed.

  17. Applications of automatic mesh generation and adaptive methods in computational medicine

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, J.A.; Macleod, R.S. [Univ. of Utah, Salt Lake City, UT (United States); Johnson, C.R.; Eason, J.C. [Duke Univ., Durham, NC (United States)

    1995-12-31

    Important problems in Computational Medicine exist that can benefit from the implementation of adaptive mesh refinement techniques. Biological systems are so inherently complex that only efficient models running on state of the art hardware can begin to simulate reality. To tackle the complex geometries associated with medical applications we present a general purpose mesh generation scheme based upon the Delaunay tessellation algorithm and an iterative point generator. In addition, automatic, two- and three-dimensional adaptive mesh refinement methods are presented that are derived from local and global estimates of the finite element error. Mesh generation and adaptive refinement techniques are utilized to obtain accurate approximations of bioelectric fields within anatomically correct models of the heart and human thorax. Specifically, we explore the simulation of cardiac defibrillation and the general forward and inverse problems in electrocardiography (ECG). Comparisons between uniform and adaptive refinement techniques are made to highlight the computational efficiency and accuracy of adaptive methods in the solution of field problems in computational medicine.
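
    A minimal sketch of the first ingredient mentioned above, Delaunay tessellation of a point set, is given below using SciPy; the random points stand in for the iterative point generator, and the adaptive, error-driven refinement of the paper is not reproduced.

```python
# Basic 2-D Delaunay tessellation of a point set with SciPy; the iterative
# point generation and adaptive refinement described above are not reproduced.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.random((30, 2))              # stand-in for generated mesh points
tri = Delaunay(points)                    # Delaunay triangulation

print("number of triangles:", len(tri.simplices))
print("first triangle (point indices):", tri.simplices[0])
```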

  18. Photoelectric scanning-based method for positioning omnidirectional automatic guided vehicle

    Science.gov (United States)

    Huang, Zhe; Yang, Linghui; Zhang, Yunzhi; Guo, Yin; Ren, Yongjie; Lin, Jiarui; Zhu, Jigui

    2016-03-01

    Automatic guided vehicles (AGVs), a kind of mobile robot, have been widely used in many applications. To better adapt to complex working environments, more and more AGVs are designed to be omnidirectional by being equipped with Mecanum wheels, increasing their flexibility and maneuverability. However, because an AGV with this kind of wheel suffers from position errors, mainly due to frequent wheel slip, measuring its position accurately in real time is an extremely important issue. Among the ways of achieving this, the photoelectric scanning methodology based on angle measurement is efficient. Hence, we propose a feasible method to improve the positioning process, which mainly integrates four photoelectric receivers and one laser transmitter. To verify its practicality and accuracy, actual experiments and computer simulations were conducted. In the simulation, the theoretical positioning error is less than 0.28 mm in a 10 m×10 m space. In the actual experiment, the stability, accuracy and dynamic capability of the method were examined. The results demonstrate that the system works well and that the position measurement performance is high enough to fulfill mainstream tasks.

  19. Evaluation of an automatic dry eye test using MCDM methods and rank correlation.

    Science.gov (United States)

    Peteiro-Barral, Diego; Remeseiro, Beatriz; Méndez, Rebeca; Penedo, Manuel G

    2017-04-01

    Dry eye is an increasingly common disease in modern society which affects a wide range of the population and has a negative impact on daily activities, such as working with computers or driving. It can be diagnosed through an automatic clinical test for tear film lipid layer classification based on color and texture analysis. Up to now, researchers have mainly focused on improving the image analysis step. However, there is still large room for improvement on the machine learning side. This paper presents a methodology to optimize this problem by means of class binarization, feature selection and classification. The methodology can be used as a baseline in other classification problems to provide several solutions and evaluate their performance using a set of representative metrics and decision-making methods. When several decision-making methods are used, they may produce disagreeing rankings, which are resolved by conflict handling in which the rankings are merged into a single one. The experimental results prove the effectiveness of the proposed methodology in this domain. Also, its general-purpose design allows it to be adapted to other classification problems in different fields such as medicine and biology.
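
    The conflict-handling step above, merging disagreeing rankings into a single one, can be illustrated with a simple Borda count; the sketch below is a generic stand-in, and the configuration names and the specific MCDM methods producing the rankings are invented for illustration.

```python
# Merging disagreeing rankings from several decision-making methods into a
# single ranking with a simple Borda count (a generic stand-in for the
# paper's conflict-handling step; configuration names are invented).
from collections import defaultdict

rankings = [                                # each list: best to worst configuration
    ["cfg_B", "cfg_A", "cfg_C"],            # ranking from one MCDM method
    ["cfg_A", "cfg_B", "cfg_C"],            # ranking from another method
    ["cfg_B", "cfg_C", "cfg_A"],
]

scores = defaultdict(int)
for ranking in rankings:
    for position, candidate in enumerate(ranking):
        scores[candidate] += len(ranking) - position   # higher score is better

merged = sorted(scores, key=scores.get, reverse=True)
print("merged ranking:", merged)            # ['cfg_B', 'cfg_A', 'cfg_C']
```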

  20. Comparison of RNA extraction methods in Thai aromatic coconut water

    Directory of Open Access Journals (Sweden)

    Nopporn Jaroonchon

    2015-10-01

    Many studies have reported that the nucleic acid in coconut water is in free form and at very low yields, which makes it difficult to use in molecular studies. Our research attempted to compare two extraction methods to obtain a higher yield of total RNA from aromatic coconut water and to monitor its change at various fruit stages. The first method used ethanol and sodium acetate as reagents; the second method used lithium chloride. We found that extraction using only lithium chloride gave a higher total RNA yield than the method using ethanol to precipitate the nucleic acid. In addition, the total RNA from both methods could be used in amplification of the betaine aldehyde dehydrogenase 2 (Badh2) gene, which is involved in coconut aroma biosynthesis, and could be used for further study as expected. From the molecular study, the nucleic acid found in coconut water increased with fruit age.

  1. A method of extracting stomach region for double contrast radiograph

    International Nuclear Information System (INIS)

    Nakamura, Shizuo; Itagaki, Hidenobu

    1981-01-01

    A method is described that extracts the stomach region from double contrast radiographs by image processing techniques. Double contrast radiographs, which are used for observing the mucosal folds or morbid state of the stomach, play an important role in X-ray stomach examination. However, the difference in density between the exterior and interior of the stomach is small, and the stomach region overlaps with a complicated background, so that difficulties arise in computer processing. In the present method, edges are first enhanced with an FIR filter and the lines are then thinned. Barium-filled regions were extracted by boundary searching and excluded from the thinned line image. Subsequently, the connection and branching of the remaining lines are checked, and circular and fan-shaped searches are made at line ends, so that the stomach boundaries are partly obtained. To extract the stomach region, these are joined together with the boundaries of the barium-filled regions already obtained. (J.P.N.)

  2. DNA extraction method for PCR in mycorrhizal fungi.

    Science.gov (United States)

    Manian, S; Sreenivasaprasad, S; Mills, P R

    2001-10-01

    To develop a simple and rapid DNA extraction protocol for PCR in mycorrhizal fungi. The protocol combines the application of rapid freezing and boiling cycles and passage of the extracts through DNA purification columns. PCR amplifiable DNA was obtained from a number of endo- and ecto-mycorrhizal fungi using minute quantities of spores and mycelium, respectively. DNA extracted following the method, was used to successfully amplify regions of interest from high as well as low copy number genes. The amplicons were suitable for further downstream applications such as sequencing and PCR-RFLPs. The protocol described is simple, short and facilitates rapid isolation of PCR amplifiable genomic DNA from a large number of fungal isolates in a single day. The method requires only minute quantities of starting material and is suitable for mycorrhizal fungi as well as a range of other fungi.

  3. An efficient method for DNA extraction from Cladosporioid fungi.

    Science.gov (United States)

    Moslem, M A; Bahkali, A H; Abd-Elsalam, K A; Wit, P J G M

    2010-11-23

    We developed an efficient method for DNA extraction from Cladosporioid fungi, which are important fungal plant pathogens. The cell wall of Cladosporioid fungi is often melanized, which makes it difficult to extract DNA from their cells. In order to overcome this we grew these fungi for three days on agar plates and extracted DNA from mycelium mats after manual or electric homogenization. High-quality DNA was isolated, with an A(260)/A(280) ratio ranging between 1.6 and 2.0. Isolated genomic DNA was efficiently digested with restriction enzymes and produced distinct banding patterns on agarose gels for the different Cladosporium species. Clear DNA fragments from the isolated DNA were amplified by PCR using small and large subunit rDNA primers, demonstrating that this method provides DNA of sufficiently high quality for molecular analyses.

  4. Seamless Ligation Cloning Extract (SLiCE) cloning method.

    Science.gov (United States)

    Zhang, Yongwei; Werling, Uwe; Edelmann, Winfried

    2014-01-01

    SLiCE (Seamless Ligation Cloning Extract) is a novel cloning method that utilizes easy to generate bacterial cell extracts to assemble multiple DNA fragments into recombinant DNA molecules in a single in vitro recombination reaction. SLiCE overcomes the sequence limitations of traditional cloning methods, facilitates seamless cloning by recombining short end homologies (15-52 bp) with or without flanking heterologous sequences and provides an effective strategy for directional subcloning of DNA fragments from bacterial artificial chromosomes or other sources. SLiCE is highly cost-effective and demonstrates the versatility as a number of standard laboratory bacterial strains can serve as sources for SLiCE extract. We established a DH10B-derived E. coli strain expressing an optimized λ prophage Red recombination system, termed PPY, which facilitates SLiCE with very high efficiencies.

  5. Deep Learning Based Regression and Multiclass Models for Acute Oral Toxicity Prediction with Automatic Chemical Feature Extraction.

    Science.gov (United States)

    Xu, Youjun; Pei, Jianfeng; Lai, Luhua

    2017-11-27

    The median lethal dose, LD50, is a general indicator of compound acute oral toxicity (AOT). Various in silico methods have been developed for AOT prediction to reduce costs and time. In this study, we developed an improved molecular graph encoding convolutional neural network (MGE-CNN) architecture to construct three types of high-quality AOT models: a regression model (deepAOT-R), a multiclassification model (deepAOT-C), and a multitask model (deepAOT-CR). These predictive models highly outperformed previously reported models. For the two external data sets containing 1673 (test set I) and 375 (test set II) compounds, the R2 and mean absolute error (MAE) of deepAOT-R on test set I were 0.864 and 0.195, and the prediction accuracies of deepAOT-C were 95.5% and 96.3% on test sets I and II, respectively. The two external prediction accuracies of deepAOT-CR are 95.0% and 94.1%, while the R2 and MAE are 0.861 and 0.204 for test set I, respectively. We then performed forward and backward exploration of the deepAOT models for deep fingerprints, which could support shallow machine learning methods more efficiently than traditional fingerprints or descriptors. We further performed automatic feature learning, a key essence of deep learning, to map the corresponding activation values into fragment space and derive AOT-related chemical substructures by reverse mining of the features. Our deep learning architecture for AOT is generally applicable in predicting and exploring other toxicity or property end points of chemical compounds. The two deepAOT models are freely available at http://repharma.pku.edu.cn/DLAOT/DLAOThome.php or http://www.pkumdl.cn/DLAOT/DLAOThome.php .

  6. A Noise-Assisted Data Analysis Method for Automatic EOG-Based Sleep Stage Classification Using Ensemble Learning.

    Science.gov (United States)

    Olesen, Alexander Neergaard; Christensen, Julie A E; Sorensen, Helge B D; Jennum, Poul J

    2016-08-01

    Reducing the number of recording modalities for sleep staging research can benefit both researchers and patients, under the condition that they provide as accurate results as conventional systems. This paper investigates the possibility of exploiting the multisource nature of the electrooculography (EOG) signals by presenting a method for automatic sleep staging using the complete ensemble empirical mode decomposition with adaptive noise algorithm, and a random forest classifier. It achieves a high overall accuracy of 82% and a Cohen's kappa of 0.74 indicating substantial agreement between automatic and manual scoring.
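
    Only the ensemble-learning half of the pipeline above is sketched below: crude per-epoch statistics stand in for the CEEMDAN-derived EOG features, and a scikit-learn random forest is trained on synthetic data. The feature choice, epoch length and label set are assumptions, not the paper's configuration.

```python
# Illustration of the ensemble-learning half only: simple per-epoch statistics
# stand in for the CEEMDAN-derived EOG features, and the data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def epoch_features(epochs):
    # epochs: (n_epochs, n_samples) EOG signal; crude stand-in features
    return np.column_stack([
        epochs.mean(axis=1),
        epochs.std(axis=1),
        np.abs(np.diff(epochs, axis=1)).mean(axis=1),
    ])

rng = np.random.default_rng(1)
X = epoch_features(rng.standard_normal((200, 3000)))   # fake 30-s epochs
y = rng.integers(0, 5, size=200)                        # fake sleep-stage labels
clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```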

  7. Automatic lung segmentation method for MRI-based lung perfusion studies of patients with chronic obstructive pulmonary disease.

    Science.gov (United States)

    Kohlmann, Peter; Strehlow, Jan; Jobst, Betram; Krass, Stefan; Kuhnigk, Jan-Martin; Anjorin, Angela; Sedlaczek, Oliver; Ley, Sebastian; Kauczor, Hans-Ulrich; Wielpütz, Mark Oliver

    2015-04-01

    A novel fully automatic lung segmentation method for magnetic resonance (MR) images of patients with chronic obstructive pulmonary disease (COPD) is presented. The main goal of this work was to ease the tedious and time-consuming task of manual lung segmentation, which is required for region-based volumetric analysis of four-dimensional MR perfusion studies which goes beyond the analysis of small regions of interest. The first step in the automatic algorithm is the segmentation of the lungs in morphological MR images with higher spatial resolution than corresponding perfusion MR images. Subsequently, the segmentation mask of the lungs is transferred to the perfusion images via nonlinear registration. Finally, the masks for left and right lungs are subdivided into a user-defined number of partitions. Fourteen patients with two time points resulting in 28 perfusion data sets were available for the preliminary evaluation of the developed methods. Resulting lung segmentation masks are compared with reference segmentations from experienced chest radiologists, as well as with total lung capacity (TLC) acquired by full-body plethysmography. TLC results were available for thirteen patients. The relevance of the presented method is indicated by an evaluation, which shows high correlation between automatically generated lung masks with corresponding ground-truth estimates. The evaluation of the developed methods indicates good accuracy and shows that automatically generated lung masks differ from expert segmentations about as much as segmentations from different experts.

  8. Genomic DNA extraction method from pearl millet ( Pennisetum ...

    African Journals Online (AJOL)

    DNA extraction is difficult in a variety of plants because of the presence of metabolites that interfere with DNA isolation procedures and downstream applications such as DNA restriction, amplification, and cloning. Here we describe a modified procedure based on the hexadecyltrimethylammonium bromide (CTAB) method to ...

  9. Effects of extraction method on the physicochemical and mycological

    African Journals Online (AJOL)

    DR. AMINU

    it eases separation of the oil from the carbohydrate, protein and water phases (Rahayu et al., 2008). Despite its long application, traditional oil extraction methods always lack aseptic procedures and could result in microbial contamination that may cause quality deterioration of vegetable oils (Ekwenye, 2006). Work by Soeka et ...

  10. Quantitative extraction of Meiofauna: A comparison of two methods ...

    African Journals Online (AJOL)

    Two methods for the quantitative extraction of meiofauna from natural sandy sediments were investigated and compared: Cobb's decanting and sieving technique and the Oostenbrink elutriator. Both techniques were more efficient with pre-fixed samples than with fresh samples. The results indicated that elutriation is the ...

  11. Sesame ( Sesamum indicum L.) Seed Oil Methods of Extraction and ...

    African Journals Online (AJOL)

    The relative abundance of sesame seed oil coupled with the little knowledge of its cosmetic usage prompted the need for this review. The aim is to discuss the various extraction methods of the sesame seed oil and its industrial applications particularly its application in cosmetic production. The review focused mainly on the ...

  12. An effective method for extraction and polymerase chain reaction ...

    African Journals Online (AJOL)

    Formalin-preserved biological samples obtained from endangered species are valuable in assessing genetic diversity. To make use of snow leopard samples preserved in formalin over a period of two to seven years, we optimized the method of extracting DNA from these samples. We used (a) phenol chloroform : isoamyl ...

  13. An Improved Method for Extraction and Separation of Photosynthetic Pigments

    Science.gov (United States)

    Katayama, Nobuyasu; Kanaizuka, Yasuhiro; Sudarmi, Rini; Yokohama, Yasutsugu

    2003-01-01

    The method for extracting and separating hydrophobic photosynthetic pigments proposed by Katayama "et al." ("Japanese Journal of Phycology," 42, 71-77, 1994) has been improved to introduce it to student laboratories at the senior high school level. Silica gel powder was used for removing water from fresh materials prior to…

  14. Using automatic calibration method for optimizing the performance of Pedotransfer functions of saturated hydraulic conductivity

    Directory of Open Access Journals (Sweden)

    Ahmed M. Abdelbaki

    2016-06-01

    Pedotransfer functions (PTFs) are an easy way to predict saturated hydraulic conductivity (Ksat) without measurements. This study aims to automatically calibrate 22 PTFs. The PTFs were divided into three groups according to their input requirements, and the shuffled complex evolution algorithm was used in the calibration. The results showed great improvement in the performance of the functions compared to the original published functions. For group 1 PTFs, the geometric mean error ratio (GMER) and the geometric standard deviation of the error ratio (GSDER) values were improved from the ranges (1.27–6.09) and (5.2–7.01) to (0.91–1.15) and (4.88–5.85), respectively. For group 2 PTFs, the GMER and GSDER values were improved from (0.3–1.55) and (5.9–12.38) to (1.00–1.03) and (5.5–5.9), respectively. For group 3 PTFs, the GMER and GSDER values were improved from (0.11–2.06) and (5.55–16.42) to (0.82–1.01) and (5.1–6.17), respectively. The results showed that automatic calibration is an efficient and accurate method to enhance the performance of PTFs.
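
    The two evaluation statistics quoted above can be computed directly; the sketch below uses the commonly cited definitions of GMER and GSDER (an assumption, since the abstract does not spell them out), and the shuffled-complex-evolution calibration itself is not shown.

```python
# GMER and GSDER as commonly defined for PTF evaluation (assumed definitions);
# the shuffled-complex-evolution calibration is not shown.
import numpy as np

def gmer_gsder(k_predicted, k_measured):
    log_ratio = np.log(np.asarray(k_predicted) / np.asarray(k_measured))
    gmer = np.exp(log_ratio.mean())         # 1.0 indicates no systematic bias
    gsder = np.exp(log_ratio.std(ddof=1))   # 1.0 indicates no scatter
    return gmer, gsder

print(gmer_gsder([12.0, 3.5, 40.0], [10.0, 4.0, 35.0]))   # hypothetical Ksat values
```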

  15. Microchannels Effective Method for the Extraction of Oleuropein Compared with Conventional Methods

    Directory of Open Access Journals (Sweden)

    Mahnaz Yasemi

    2017-01-01

    Different methods of oleuropein extraction from olive leaf were investigated, including maceration, Soxhlet, ultrasonic-assisted extraction, and microchannel extraction. In the current research, a response surface methodology (RSM) was used to predict the optimal values of the parameters affecting the extraction of oleuropein by two methods, ultrasound and microchannel. Frequency (F), temperature (T), and ultrasound power (P) were the parameters studied for the ultrasound method, while for the microchannel system the effects of pH, temperature (T), volumetric flow rate ratio of the two phases (VR), and contact time (CT) of the two phases were optimized. A UV detector at 254 nm was used to identify oleuropein by comparing the retention time of the extracts with that of the standard compound in the chromatogram. The analysis of the extracts was performed using HPLC. Optimum conditions for ultrasound were obtained as follows: F=80 kHz, T=25°C, and P=100 W. Under these optimum conditions, the extraction of oleuropein was 81.29%. The amount of oleuropein extracted by the microchannel method under optimum conditions was 96.29%, considerably higher than with the other applied methods. The microchannel system, as a continuous method, has many advantages, including low solvent consumption, environmental friendliness, short extraction time, and high efficiency.

  16. Retinal status analysis method based on feature extraction and quantitative grading in OCT images.

    Science.gov (United States)

    Fu, Dongmei; Tong, Hejun; Zheng, Shuang; Luo, Ling; Gao, Fulin; Minar, Jiri

    2016-07-22

    Optical coherence tomography (OCT) is widely used in ophthalmology for viewing the morphology of the retina, which is important for disease detection and assessing therapeutic effect. The diagnosis of retinal diseases is based primarily on the subjective analysis of OCT images by trained ophthalmologists. This paper describes an automatic OCT image analysis method for computer-aided disease diagnosis, a critical part of eye fundus diagnosis. This study analyzed 300 OCT images acquired by an Optovue Avanti RTVue XR (Optovue Corp., Fremont, CA). First, a normal retinal reference model based on retinal boundaries is presented. Subsequently, two kinds of quantitative methods, based on geometric features and morphological features, are proposed. A decision-making method for grading retinal abnormality is put forward and used in the actual analysis and evaluation of multiple OCT images. The detailed analysis process is shown for four retinal OCT images with different degrees of abnormality, and the final grading results verify that the analysis method can distinguish abnormal severity and lesion regions. In a simulation on 150 test images, the analysis of retinal status achieved a sensitivity of 0.94 and a specificity of 0.92. The proposed method can speed up the diagnostic process and objectively evaluate retinal status. This paper focuses on an automatic retinal status analysis method based on feature extraction and quantitative grading in OCT images. The proposed method can obtain parameters and features that are associated with retinal morphology; quantitative analysis and evaluation of these features are combined with the reference model, which enables judging abnormality in the target image and provides a reference for disease diagnosis.

  17. System and method for extracting a sample from a surface

    Science.gov (United States)

    Van Berkel, Gary; Covey, Thomas

    2015-06-23

    A system and method is disclosed for extracting a sample from a sample surface. A sample is provided and a sample surface receives the sample which is deposited on the sample surface. A hydrophobic material is applied to the sample surface, and one or more devices are configured to dispense a liquid on the sample, the liquid dissolving the sample to form a dissolved sample material, and the one or more devices are configured to extract the dissolved sample material from the sample surface.

  18. Development of Directly Suspended Droplet Micro Extraction Method for Extraction of Organochlorine Pesticides in Water Samples

    Directory of Open Access Journals (Sweden)

    Seyed Kamal Rajabi

    2015-04-01

    A simple and efficient directly suspended droplet microextraction method in conjunction with gas chromatography-electron capture detection (GC-ECD) has been developed for the extraction and determination of organochlorine pesticides (OCPs) from water samples. In this technique a microdrop of 1-dodecanol is delivered to the surface of an aqueous sample while the bulk of the solution is agitated by a stirring bar. Factors relevant to the extraction efficiency were studied and optimized. The optimized extraction conditions were: extraction solvent, 1-dodecanol; extraction temperature, 60°C; NaCl concentration, 0.5 M; extraction solvent volume, 10 µL; stirring rate, 800 rpm; and extraction time, 20 min. The detection limits of the method were in the range of 0.066–1.85 ng L−1, and the relative standard deviations (n=5) ranged from 0.102 to 0.964. Good linearity (r2 ≥ 0.995) and a relatively broad dynamic linear range (25–2600 ng L−1) were obtained, and the recoveries of the method were in the range of 90.729%-102.343%. Finally, the proposed method was successfully utilized for the preconcentration and determination of OCPs in different real samples. We successfully developed a method based on the DSDME technique combined with capillary GC-ECD for the analysis of OCPs in water samples and compared it with conventional sample preparation methods such as LPME.

  19. Effect of supercritical fluid extraction on the reduction of toxic elements in fish oil compared with other extraction methods.

    Science.gov (United States)

    Hajeb, Parvaneh; Selamat, Jinap; Afsah-Hejri, Leili; Mahyudin, Nor Ainy; Shakibazadeh, Shahram; Sarker, Mohd Zaidul Islam

    2015-01-01

    High-quality fish oil for human consumption requires low levels of toxic elements. The aim of this study was to compare different oil extraction methods to identify the most efficient method for extracting fish oil of high quality with the least contamination. The methods used in this study were Soxhlet extraction, enzymatic extraction, wet reduction, and supercritical fluid extraction. The results showed that toxic elements in fish oil could be reduced using supercritical CO2 at a modest temperature (60°C) and pressure (35 MPa) with little reduction in the oil yield. There were significant reductions in mercury (85 to 100%), cadmium (97 to 100%), and lead (100%) content of the fish oil extracted using the supercritical fluid extraction method. The fish oil extracted using conventional methods contained toxic elements at levels much higher than the accepted limits of 0.1 μg/g.

  20. Gaharu oil processing: gaharu oil from conventional extraction method

    International Nuclear Information System (INIS)

    Mohd Fajri Osman; Mat Rasol Awang; Ahsanulkhaliqin Abd Wahab; Chow Saw Peng; Shyful Azizi Abd Rahman; Khairuddin Abdul Rahim

    2006-01-01

    Gaharu oil is extracted through water or steam distillation of gaharu wood powder. Gaharu oil can fetch prices ranging from RM 25,000 to RM 50,000 per kg, depending on the quality or grade of the gaharu wood used to produce the oil. The oil is commonly exported to the Middle East and customarily used as a perfume base. This paper describes the traditional gaharu oil extraction technique commonly practiced by gaharu entrepreneurs in Malaysia. Gaharu wood is initially chopped, dried and ground into powder form. The gaharu wood powder is then soaked in water for a week. After the soaking process, the fermented powder is distilled with water using a special distiller for 4 to 10 days, depending on the quality of the gaharu wood used in the extraction process. (Author)

  1. Analytical methods and problems for the diamides type of extractants

    International Nuclear Information System (INIS)

    Cuillerdier, C.; Nigond, L.; Musikas, C.; Vitart, H.; Hoel, P.

    1989-01-01

    Diamides of carboxylic acids, and especially malonamides, are able to extract alpha emitters (including trivalent ions such as Am and Cm) contained in the waste solutions of the nuclear industry. As they are completely incinerable and easy to purify, they could be an alternative to the CMPO-TBP mixture used in the TRUEX process. A large oxyalkyl radical enhances the distribution coefficients of americium in nitric acid sufficiently to permit the decontamination of waste solutions in a classical mixer-settler battery. Research is now being pursued with the aim of optimizing the extractant formula; the influence of the structure of the extractant on its basicity and on its stability under radiolysis and hydrolysis is being investigated. Analytical methods (potentiometry and 13C NMR) have been developed for solvent titration, to evaluate the percentage of degradation and to identify some of the degradation products.

  2. Automatic Mapping Extraction from Multiecho T2-Star Weighted Magnetic Resonance Images for Improving Morphological Evaluations in Human Brain

    Directory of Open Access Journals (Sweden)

    Shaode Yu

    2013-01-01

    Mapping extraction is useful in medical image analysis. Similarity coefficient mapping (SCM) replaced the signal response to the time course in tissue similarity mapping with the signal response to TE changes in multiecho T2-star weighted magnetic resonance imaging without contrast agent. Since different tissues have different sensitivities to reference signals, a new algorithm is proposed by adding a sensitivity index to SCM. It generates two mappings: one measures relative signal strength (SSM) and the other depicts fluctuation magnitude (FMM). Meanwhile, the new method adaptively generates a proper reference signal by maximizing the sum of the contrast index (CI) from SSM and FMM, without manual delineation. Based on four groups of images from multiecho T2-star weighted magnetic resonance imaging, the capacity of SSM and FMM to enhance image contrast and morphological evaluation is validated. The average contrast improvement index (CII) of SSM is 1.57, 1.38, 1.34, and 1.41. The average CII of FMM is 2.42, 2.30, 2.24, and 2.35. Visual analysis of regions of interest demonstrates that SSM and FMM show better morphological structures than the original images, T2-star mapping and SCM. These extracted mappings can be further applied in information fusion, signal investigation, and tissue segmentation.
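
    A rough sketch of the similarity-coefficient idea underlying the mappings above is given below: each voxel's multi-echo signal is correlated with a reference decay curve. The correlation form, the random test data and the choice of reference voxel are assumptions for illustration; the paper's SSM and FMM definitions differ in detail and are not reproduced.

```python
# Correlating every voxel's multi-echo signal with a reference curve, the
# similarity-coefficient idea behind the mappings above; data are random.
import numpy as np

def similarity_map(echoes, reference):
    # echoes: (n_echoes, H, W); reference: (n_echoes,)
    flat = echoes.reshape(echoes.shape[0], -1)
    flat = flat - flat.mean(axis=0)
    ref = reference - reference.mean()
    corr = (flat * ref[:, None]).sum(axis=0) / (
        np.linalg.norm(flat, axis=0) * np.linalg.norm(ref) + 1e-12)
    return corr.reshape(echoes.shape[1:])

echoes = np.random.rand(8, 64, 64)        # fake multi-echo T2*-weighted data
reference = echoes[:, 32, 32]             # reference signal from one voxel
print(similarity_map(echoes, reference).shape)   # (64, 64) correlation map
```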

  3. Automatic urban debris zone extraction from post-hurricane very high-resolution satellite and aerial imagery

    Directory of Open Access Journals (Sweden)

    Shasha Jiang

    2016-05-01

    Automated remote sensing methods have not gained widespread usage for damage assessment after hurricane events, especially for low-rise buildings, such as individual houses and small businesses. Hurricane wind, storm surge with waves, and inland flooding have unique damage signatures, further complicating the development of robust automated assessment methodologies. As a step toward realizing automated damage assessment for multi-hazard hurricane events, this paper presents a mono-temporal image classification methodology that quickly and accurately differentiates urban debris from non-debris areas using post-event images. Three classification approaches are presented: spectral, textural, and combined spectral–textural. The methodology is demonstrated for Gulfport, Mississippi, using IKONOS panchromatic satellite and NOAA aerial colour imagery collected after 2005 Hurricane Katrina. The results show that multivariate texture information significantly improves debris class detection performance by decreasing the confusion between debris and other land cover types, and the extracted debris zone accurately captures debris distribution. Additionally, the extracted debris boundary is approximately equivalent regardless of imagery type, demonstrating the flexibility and robustness of the debris mapping methodology. While the test case presents results for hurricane hazards, the proposed methodology is generally developed and expected to be effective in delineating debris zones for other natural hazards, including tsunamis, tornadoes, and earthquakes.
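
    The multivariate texture information credited above with improving debris detection is commonly derived from grey-level co-occurrence matrices; the sketch below extracts a few GLCM statistics per image tile and feeds them to a generic classifier. Tile size, GLCM parameters, the random test data and the random-forest choice are assumptions, not the paper's configuration.

```python
# GLCM texture statistics per image tile feeding a generic classifier; tile
# size, GLCM parameters, classifier and the random data are illustrative only.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image
from sklearn.ensemble import RandomForestClassifier

def tile_texture_features(tile):
    glcm = graycomatrix(tile, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "homogeneity", "energy", "correlation")]

tiles = np.random.randint(0, 256, size=(50, 32, 32), dtype=np.uint8)  # fake tiles
labels = np.random.randint(0, 2, size=50)                             # debris / non-debris
X = np.array([tile_texture_features(t) for t in tiles])
clf = RandomForestClassifier(random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```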

  4. Evaluation of advanced automatic PET segmentation methods using nonspherical thin-wall inserts

    International Nuclear Information System (INIS)

    Berthon, B.; Marshall, C.; Evans, M.; Spezi, E.

    2014-01-01

    Purpose: The use of positron emission tomography (PET) within radiotherapy treatment planning requires the availability of reliable and accurate segmentation tools. PET automatic segmentation (PET-AS) methods have been recommended for the delineation of tumors, but there is still a lack of thorough validation and cross-comparison of such methods using clinically relevant data. In particular, studies validating PET segmentation tools mainly use phantoms with thick-walled plastic inserts of simple spherical geometry and have not specifically investigated the effect of the target object geometry on the delineation accuracy. Our work therefore aimed at generating clinically realistic data using nonspherical thin-wall plastic inserts, for the evaluation and comparison of a set of eight promising PET-AS approaches. Methods: Sixteen nonspherical inserts were manufactured with a plastic wall of 0.18 mm and scanned within a custom plastic phantom. These included ellipsoids and toroids derived with different volumes, as well as tubes, pear- and drop-shaped inserts with different aspect ratios. A set of six spheres of volumes ranging from 0.5 to 102 ml was used for a baseline study. A selection of eight PET-AS methods, written in-house, was applied to the images obtained. The methods represented promising segmentation approaches such as adaptive iterative thresholding, region-growing, clustering and gradient-based schemes. The delineation accuracy was measured in terms of overlap with the computed tomography reference contour, using the dice similarity coefficient (DSC), and error in dimensions. Results: The delineation accuracy was lower for nonspherical inserts than for spheres of the same volume in 88% of cases. Slice-by-slice gradient-based methods showed particularly low DSC for tori (DSC 0.76 except for tori), but showed the largest errors in the recovery of pear and drop dimensions (higher than 10% and 30% of the true length, respectively). Large errors were visible

  5. Automatic Evaluation Of Interferograms

    Science.gov (United States)

    Becker, Friedhelm; Meier, Gerd E. A.; Wegner, Horst

    1983-03-01

    A system for the automatic evaluation of interference patterns has been developed. After digitizing the interferograms from classical and holographic interferometers with a television digitizer and performing different picture enhancement operations, the fringe loci are extracted by use of a floating-threshold method. The fringes are numbered using a special scheme after the removal of any fringe disconnections which might appear if there was insufficient contrast in the interferograms. The reconstruction of the object function from the numbered fringe field is achieved by a local polynomial least-squares approximation. Applications are given, demonstrating the evaluation of interferograms of supersonic flow fields and the analysis of holographic interferograms of car tyres.
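
    The floating-threshold idea mentioned above can be illustrated on a synthetic one-dimensional fringe profile: a moving local mean serves as the threshold so that slow background variations do not disturb fringe detection. The signal, window size and thresholding rule below are assumptions for illustration; fringe numbering and object-function reconstruction are not shown.

```python
# Floating-threshold fringe detection on a synthetic interferogram row: the
# local mean acts as the threshold, so a slow background drift does not break
# fringe extraction (numbering and surface reconstruction are not shown).
import numpy as np

x = np.linspace(0, 10 * np.pi, 1000)
row = 0.5 + 0.4 * np.cos(x) + 0.2 * x / x.max()     # fringes plus slow background

window = 51
local_mean = np.convolve(row, np.ones(window) / window, mode="same")
dark_fringe = row < local_mean                       # boolean fringe mask

print("fraction of pixels flagged as dark fringe:", dark_fringe.mean())
```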

  6. Automatic extraction of corpus callosum from midsagittal head MR image and examination of Alzheimer-type dementia objective diagnostic system in feature analysis

    International Nuclear Information System (INIS)

    Kaneko, Tomoyuki; Kodama, Naoki; Kaeriyama, Tomoharu; Fukumoto, Ichiro

    2004-01-01

    We studied the objective diagnosis of Alzheimer-type dementia based on changes in the corpus callosum. We examined midsagittal head MR images of 40 Alzheimer-type dementia patients (15 men and 25 women; mean age, 75.4±5.5 years) and 31 healthy elderly persons (10 men and 21 women; mean age, 73.4±7.5 years), 71 subjects altogether. First, the corpus callosum was automatically extracted from the midsagittal head MR images. Next, the Alzheimer-type dementia patients were compared with the healthy elderly individuals using shape factor features and six co-occurrence matrix features of the corpus callosum. Automatic extraction of the corpus callosum succeeded in 64 of the 71 individuals, for an extraction rate of 90.1%. A statistically significant difference was found in 7 of the 9 features between the Alzheimer-type dementia patients and the healthy elderly adults. Discriminant analysis using these 7 features demonstrated a sensitivity of 82.4%, a specificity of 89.3%, and an overall accuracy of 85.5%. These results indicate the possibility of an objective diagnostic system for Alzheimer-type dementia using feature analysis based on changes in the corpus callosum. (author)

  7. An automatic gain matching method for {gamma}-ray spectra obtained with a multi-detector array

    Energy Technology Data Exchange (ETDEWEB)

    Pattabiraman, N.S.; Chintalapudi, S.N.; Ghugre, S.S. E-mail: ssg@alpha.iuc.res.in

    2004-07-01

    The increasing size of data sets from large multi-detector arrays makes the traditional approach to the pre-evaluation of the data difficult and time consuming. The pre-sorting involves detection and correction of the observed on-line drifts followed by calibration of the raw data. A new method for automatic detection and correction of these instrumental drifts is presented. An application of this method to the data acquired using a multi-Clover array is discussed.

  8. An automatic gain matching method for γ-ray spectra obtained with a multi-detector array

    International Nuclear Information System (INIS)

    Pattabiraman, N.S.; Chintalapudi, S.N.; Ghugre, S.S.

    2004-01-01

    The increasing size of data sets from large multi-detector arrays makes the traditional approach to the pre-evaluation of the data difficult and time consuming. The pre-sorting involves detection and correction of the observed on-line drifts followed by calibration of the raw data. A new method for automatic detection and correction of these instrumental drifts is presented. An application of this method to the data acquired using a multi-Clover array is discussed
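
    The abstract does not detail how the on-line drifts are detected, so the following is only a generic illustration of one common approach: estimating the channel shift between a reference and a drifted gamma-ray spectrum by cross-correlation. The synthetic photopeak and the pure-shift (no gain slope) assumption are illustrative; a full gain-matching procedure would also fit a gain factor from known peak positions.

```python
# Estimating the channel shift between a reference and a drifted gamma-ray
# spectrum by cross-correlation; a pure shift is assumed, whereas full gain
# matching would also fit a gain factor from known peak positions.
import numpy as np

channels = np.arange(4096)
reference = np.exp(-0.5 * ((channels - 1200) / 5.0) ** 2)   # synthetic photopeak
drifted = np.roll(reference, 7)                             # simulated drift

corr = np.correlate(drifted, reference, mode="full")
shift = int(corr.argmax()) - (len(reference) - 1)           # estimated drift (7)
corrected = np.roll(drifted, -shift)                        # realigned spectrum

print("estimated shift (channels):", shift)
```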

  9. Gray-Matter Volume Estimate Score: A Novel Semi-Automatic Method Measuring Early Ischemic Change on CT

    OpenAIRE

    Song, Dongbeom; Lee, Kijeong; Kim, Eun Hye; Kim, Young Dae; Lee, Hye Sun; Kim, Jinkwon; Song, Tae-Jin; Ahn, Sung Soo; Nam, Hyo Suk; Heo, Ji Hoe

    2015-01-01

    Background and Purpose We developed a novel method named Gray-matter Volume Estimate Score (GRAVES), measuring early ischemic changes on Computed Tomography (CT) semi-automatically by computer software. This study aimed to compare GRAVES and Alberta Stroke Program Early CT Score (ASPECTS) with regards to outcome prediction and inter-rater agreement. Methods This was a retrospective cohort study. Among consecutive patients with ischemic stroke in the anterior circulation who received intra-art...

  10. Comparative analysis of methods for extracting vessel network on breast MRI images

    Science.gov (United States)

    Gaizer, Bence T.; Vassiou, Katerina G.; Lavdas, Eleftherios; Arvanitis, Dimitrios L.; Fezoulidis, Ioannis V.; Glotsos, Dimitris T.

    2017-11-01

    Digital processing of MRI images aims to provide an automated diagnostic evaluation for regular health screenings. Cancerous lesions are proven to cause an alteration in the vessel structure of the diseased organ. Currently there are several methods used for extraction of the vessel network in order to quantify its properties. In this work, MRI images (Signa HDx 3.0T, GE Healthcare, courtesy of the University Hospital of Larissa) of 30 female breasts were subjected to three different vessel extraction algorithms to determine the location of their vascular networks. The first method is an experiment to build a graph over known points of the vessel network; the second algorithm aims to determine the direction and diameter of the vessels at these points; the third approach is a seed-growing algorithm, spreading the selection to neighbors of the known vessel pixels. The possibilities offered by the different methods were analyzed, and quantitative measurements were performed. The data provided by these measurements showed no clear correlation with the presence or malignancy of tumors, based on the radiological diagnosis of skilled physicians.

  11. Evaluation of in vitro antioxidant potential of different polarities stem crude extracts by different extraction methods of Adenium obesum

    Directory of Open Access Journals (Sweden)

    Mohammad Amzad Hossain

    2014-09-01

    Objective: To select the best extraction method for isolating antioxidant compounds from the stems of Adenium obesum. Methods: Two extraction methods were used, the Soxhlet and maceration methods, with methanol as the solvent for both. The methanol crude extract was defatted with water and extracted successively with hexane, chloroform, ethyl acetate and butanol. The antioxidant potential of all crude extracts was determined using the 1,1-diphenyl-2-picrylhydrazyl method. Results: The extraction yield by the Soxhlet method was higher than that by the maceration method. The antioxidant potential of methanol and its derived fractions obtained by the Soxhlet method was highest in the ethyl acetate and lowest in the hexane crude extracts, in the order ethyl acetate>butanol>water>chloroform>methanol>hexane. However, the antioxidant potential of methanol and its derived fractions obtained by the maceration method was highest in butanol and lowest in hexane, in the order butanol>methanol>chloroform>water>ethyl acetate>hexane. Conclusions: The results showed that the isolated antioxidant compounds were affected by the extraction method and the extraction conditions.

  12. Integrated Phoneme Subspace Method for Speech Feature Extraction

    Directory of Open Access Journals (Sweden)

    Park Hyunsin

    2009-01-01

    Full Text Available Speech feature extraction has been a key focus in robust speech recognition research. In this work, we discuss data-driven linear feature transformations applied to feature vectors in the logarithmic mel-frequency filter bank domain. Transformations are based on principal component analysis (PCA), independent component analysis (ICA), and linear discriminant analysis (LDA). Furthermore, this paper introduces a new feature extraction technique that collects the correlation information among phoneme subspaces and reconstructs feature space for representing phonemic information efficiently. The proposed speech feature vector is generated by projecting an observed vector onto an integrated phoneme subspace (IPS) based on PCA or ICA. The performance of the new feature was evaluated for isolated word speech recognition. The proposed method provided higher recognition accuracy than conventional methods in clean and reverberant environments.
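
    A minimal Python sketch of the PCA-based linear feature transformation mentioned in this record, assuming a matrix of log-mel filter-bank frames is already available; the phoneme-subspace integration (IPS) step itself is not reproduced, and the array shapes are illustrative.

```python
# Sketch: PCA-based linear feature transformation of log-mel filter-bank
# vectors, a simplified stand-in for the integrated phoneme subspace (IPS)
# projection described above. Training data shapes are assumptions.
import numpy as np

def pca_basis(frames, n_components=13):
    """frames: (n_frames, n_mel) matrix of log-mel features."""
    mean = frames.mean(axis=0)
    centered = frames - mean
    cov = np.cov(centered, rowvar=False)          # covariance of mel channels
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]
    return mean, eigvecs[:, order]

def project(frames, mean, basis):
    return (frames - mean) @ basis

# Hypothetical usage with random data standing in for real features
train = np.random.randn(1000, 24)
mean, basis = pca_basis(train, n_components=13)
features = project(np.random.randn(10, 24), mean, basis)
```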

  13. Exact extraction method for road rutting laser lines

    Science.gov (United States)

    Hong, Zhiming

    2018-02-01

    This paper analyzes the importance of asphalt pavement rutting detection for pavement maintenance and administration, reviews the shortcomings of existing rutting detection methods, and proposes a new rutting line-laser extraction method based on peak intensity characteristics and peak continuity. The peak intensity characteristic is enhanced by a designed transverse mean filter, and an intensity map of the peak characteristic, computed over the whole road image, is used to determine the seed point of the rutting laser line. Starting from the seed point, the light points of the rutting line-laser are extracted based on peak continuity, providing exact basic data for the subsequent calculation of pavement rutting depths.
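
    A rough Python sketch of transverse mean filtering and per-column peak picking in the spirit of the record above, assuming a grayscale road image; the seed-point selection and continuity-based tracking of the paper are reduced here to a simple column-wise maximum, and the parameter values are guesses.

```python
# Sketch: transverse mean filtering followed by per-column peak picking to
# locate a bright laser line in a road image. The continuity-based tracking
# from a seed point described above is reduced to a column-wise argmax.
import numpy as np
from scipy.ndimage import uniform_filter1d

def laser_line_points(gray, filter_width=9, min_intensity=50):
    """gray: 2-D image array; returns (col, row) points of the laser line."""
    # Mean filter along the transverse direction to strengthen the peaks
    smoothed = uniform_filter1d(gray.astype(float), size=filter_width, axis=1)
    rows = np.argmax(smoothed, axis=0)            # brightest row per column
    peaks = smoothed[rows, np.arange(gray.shape[1])]
    keep = peaks >= min_intensity                 # drop columns with no laser light
    cols = np.arange(gray.shape[1])[keep]
    return np.column_stack([cols, rows[keep]])
```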

  14. Extraction spectrophotometric method for determination of aluminium in silicates.

    Science.gov (United States)

    Banerjee, N L; Sinha, B C

    1990-10-01

    A simple, rapid and sensitive method has been worked out for spectrophotometric determination of macro and micro amounts of alumina in ceramic raw materials and finished products, including glasses. The method is based on the extraction of aluminum oxinate into chloroform after masking of titanium with chromotropic acid and of iron with ascorbic acid and 1,10-phenanthroline or ferrocyanide at pH 5.2. The absorbance is measured at 385 nm. Interference by Cu, Zn, Cd, Ni and Co, when present, is overcome by stripping them as cyanide complexes by shaking the chloroform extract with potassium cyanide solution. Zr is masked with quinalizarin sulphonic acid and fluoride with BeSO(4).

  15. EFFECTS OF EXTRACTION METHODS ON PHYSICO-CHEMICAL ...

    African Journals Online (AJOL)

    The relative density value ranged from 0.9 to 0.92 at 29°C (room temperature). Both oil samples were in the liquid state at room temperature, and boiling points varied from 94°C to 98°C for the solvent-extracted oil and the hydraulic press oil respectively. The results showed that the method of extraction imposed significant changes on ...

  16. Extractive method for obtaining gas inclusions from ice

    International Nuclear Information System (INIS)

    Strauch, G.; Kowski, P.

    1982-01-01

    Knowledge of the chemical composition of the gases included in ice is doubtless important for glaciological investigations of firn and ice. A method for the quantitative extraction of gases from about 30 kg of ice under vacuum is presented in this paper. The procedure was tested with ice cores from a thermoelectrical drill hole near the Soviet Antarctic station Novolazarevskaya. The chemical compositions of the inclusion gases and the specific gas contents from 6 horizons are presented in a table and several graphics. (author)

  17. A novel method and software for automatically classifying Alzheimer's disease patients by magnetic resonance imaging analysis.

    Science.gov (United States)

    Previtali, F; Bertolazzi, P; Felici, G; Weitschek, E

    2017-05-01

    The cause of Alzheimer's disease is poorly understood, and to date no treatment to stop or reverse its progression has been discovered. In developed countries, Alzheimer's disease is one of the most financially costly diseases due to the requirement of continuous treatments as well as the need for assistance or supervision with the most cognitively demanding activities as time goes by. The objective of this work is to present an automated approach for classifying Alzheimer's disease from magnetic resonance imaging (MRI) patient brain scans. The method is fast and reliable for suitable and straightforward deployment in clinical applications, helping to diagnose the disease and improve the efficacy of medical treatments by recognising the disease state of the patient. Many features can be extracted from magnetic resonance images, but most are not suitable for the classification task. Therefore, we propose a new feature extraction technique for patients' MRI brain scans that is based on a recent computer vision method called Oriented FAST and Rotated BRIEF. The extracted features are processed with the definition and combination of two new metrics, i.e., their spatial position and their distribution around the patient's brain, and given as input to a function-based classifier (i.e., Support Vector Machines). We report the comparison with recent state-of-the-art approaches on two established medical data sets (ADNI and OASIS). In the case of binary classification (case vs control), our proposed approach outperforms most state-of-the-art techniques, while having comparable results with the others. Specifically, we obtain 100% (97%) accuracy, 100% (97%) sensitivity and 99% (93%) specificity for the ADNI (OASIS) data set. When dealing with three or four classes (i.e., classification of all subjects) our method is the only one that reaches remarkable performance in terms of classification accuracy, sensitivity and specificity, outperforming the state
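
    The following Python sketch illustrates the general pipeline of ORB keypoint extraction, a spatial-distribution feature and an SVM classifier; the feature construction here is a simplified stand-in rather than the authors' exact spatial-position and distribution metrics, and the training data names are hypothetical.

```python
# Sketch: ORB keypoints from an MRI slice, a simple keypoint-position
# histogram feature, and an SVM classifier -- loosely following the pipeline
# described above, not the authors' exact metric definitions.
import cv2
import numpy as np
from sklearn.svm import SVC

def orb_spatial_feature(img, n_bins=8):
    orb = cv2.ORB_create(nfeatures=500)
    keypoints = orb.detect(img, None)
    if not keypoints:
        return np.zeros(2 * n_bins)
    pts = np.array([kp.pt for kp in keypoints])
    h, w = img.shape[:2]
    # Position histograms along each axis approximate the spatial
    # distribution of keypoints around the brain
    hx, _ = np.histogram(pts[:, 0], bins=n_bins, range=(0, w), density=True)
    hy, _ = np.histogram(pts[:, 1], bins=n_bins, range=(0, h), density=True)
    return np.concatenate([hx, hy])

# Hypothetical training, assuming grayscale slices and labels exist:
# X = np.array([orb_spatial_feature(s) for s in slices])
# clf = SVC(kernel="rbf").fit(X, labels)
```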

  18. A new robust markerless method for automatic image-to-patient registration in image-guided neurosurgery system.

    Science.gov (United States)

    Liu, Yinlong; Song, Zhijian; Wang, Manning

    2017-12-01

    Compared with traditional point-based registration in image-guided neurosurgery systems, surface-based registration is preferable because it does not use fiducial markers before image scanning and does not require image acquisition dedicated to navigation purposes. However, most existing surface-based registration methods must include a manual step for coarse registration, which increases the registration time and introduces some inconvenience and uncertainty. A new automatic surface-based registration method is proposed, which applies a 3D surface feature description and matching algorithm to obtain point correspondences for coarse registration and uses the iterative closest point (ICP) algorithm in the last step to obtain an image-to-patient registration. Both phantom and clinical data were used to execute automatic registrations, and the target registration error (TRE) was calculated to verify the practicality and robustness of the proposed method. In phantom experiments, the registration accuracy was stable across different downsampling resolutions (18-26 mm) and different support radii (2-6 mm). In clinical experiments, the mean TREs of two patients obtained by registering full head surfaces were 1.30 mm and 1.85 mm. This study introduced a new robust automatic surface-based registration method based on 3D feature matching. The method achieved sufficient registration accuracy with different real-world surface regions in phantom and clinical experiments.
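
    A bare-bones Python sketch of the final ICP refinement stage, assuming the 3D feature-based coarse registration has already roughly aligned the two point clouds; this is a generic textbook ICP, not the authors' implementation.

```python
# Sketch: iterative closest point (ICP) refinement between a patient surface
# point cloud and an image-derived surface, standing in for the final ICP
# stage described above. Coarse alignment is assumed to have been done.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    """source: (N,3), target: (M,3); returns rotation R and translation t."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(src)                  # closest target point per source point
        matched = target[idx]
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:             # avoid reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_m - R_step @ mu_s
        src = (R_step @ src.T).T + t_step
        R, t = R_step @ R, R_step @ t + t_step    # compose with previous transform
    return R, t
```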

  19. Automatic limit switch system for scintillation device and method of operation

    International Nuclear Information System (INIS)

    Brunnett, C.J.; Ioannou, B.N.

    1976-01-01

    A scintillation scanner is described having an automatic limit switch system for setting the limits of travel of the radiation detection device which is carried by a scanning boom. The automatic limit switch system incorporates position responsive circuitry for developing a signal representative of the position of the boom, reference signal circuitry for developing a signal representative of a selected limit of travel of the boom, and comparator circuitry for comparing these signals in order to control the operation of a boom drive and indexing mechanism. (author)

  20. Lung region extraction based on the model information and the inversed MIP method by using chest CT images

    International Nuclear Information System (INIS)

    Tomita, Toshihiro; Miguchi, Ryosuke; Okumura, Toshiaki; Yamamoto, Shinji; Matsumoto, Mitsuomi; Tateno, Yukio; Iinuma, Takeshi; Matsumoto, Toru.

    1997-01-01

    We developed a lung region extraction method based on model information and the inversed MIP method for the Lung Cancer Screening CT (LSCT). The original model is composed of typical 3-D lung contour lines, a body axis, an apical point, and a convex hull. First, the body axis, the apical point, and the convex hull are automatically extracted from the input image. Next, the model is properly transformed to fit those of the input image by an affine transformation. Using the same affine transformation coefficients, the typical lung contour lines are also transferred, corresponding to rough contour lines of the input image. Experimental results for 68 samples showed this method to be quite promising. (author)

  1. Methods for extracting social network data from chatroom logs

    Science.gov (United States)

    Osesina, O. Isaac; McIntire, John P.; Havig, Paul R.; Geiselman, Eric E.; Bartley, Cecilia; Tudoreanu, M. Eduard

    2012-06-01

    Identifying social network (SN) links within computer-mediated communication platforms without explicit relations among users poses challenges to researchers. Our research aims to extract SN links in internet chat where multiple users engage in synchronous, overlapping conversations all displayed in a single stream. We approached this problem using three methods which build on previous research: response-time analysis, which builds on the temporal proximity of chat messages; word context usage, which builds on keyword analysis; and direct addressing, which infers links by identifying the intended message recipient from the screen name (nickname) referenced in the message [1]. Our analysis of word usage within the chat stream also provides contexts for the extracted SN links. To test the capability of our methods, we used publicly available data from Internet Relay Chat (IRC), a real-time computer-mediated communication (CMC) tool used by millions of people around the world. The extraction performances of the individual methods and their hybrids were assessed relative to a ground truth (determined a priori via manual scoring).
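
    A toy Python sketch of two of the link-extraction heuristics named above (direct addressing and response-time proximity) applied to a hand-made chat log; the message format, the 10-second window and the link counting are illustrative assumptions.

```python
# Sketch: two simple chat link-extraction heuristics -- direct addressing
# (a message starting with another user's nickname) and response-time
# proximity -- accumulated into weighted directed links.
from collections import Counter

messages = [  # (timestamp_seconds, sender, text) -- illustrative data only
    (0, "alice", "anyone seen the build fail?"),
    (4, "bob", "alice: yes, looking at it now"),
    (9, "carol", "bob which job?"),
]

links = Counter()
senders = {sender for _, sender, _ in messages}
for i, (t, sender, text) in enumerate(messages):
    # Direct addressing: message begins with a known nickname
    for other in senders:
        if other != sender and text.lower().startswith(other):
            links[(sender, other)] += 1
    # Response-time proximity: reply within 10 s of the previous message
    if i > 0:
        t_prev, prev_sender, _ = messages[i - 1]
        if prev_sender != sender and t - t_prev <= 10:
            links[(sender, prev_sender)] += 1

print(links.most_common())
```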

  2. Vanadium extraction from slimes by the lime-bicarbonate method

    International Nuclear Information System (INIS)

    Lishchenko, T.V.; Vdovina, L.V.; Slobodchikova, R.I.

    1978-01-01

    Some main parameters of the lime-bicarbonate method of extracting vanadium from residues obtained from the washing waters of mazut boilers at thermal power stations have been determined. To study the process of vanadium extraction during caking of the residues with lime and subsequent leaching of water-soluble vanadium, a ''Minsk-22'' computer has been used for computation. Analysis of the equation derived has shown that changes in the temperature of vanadium leaching, the density of the pulp, and the kind of heating of the charge affect the process only slightly. It has also been shown that the calcination temperature should be kept above 850 deg C and that the consumption of lime must not exceed 20% of the residue weight. Bicarbonate consumption exerts a decisive influence on the completeness of vanadium extraction and must be increased to >35%; the duration of leaching should be raised to 30-45 minutes. With increasing calcination temperature the duration of leaching decreases. When the temperature and duration of calcination increase, the formation of water-soluble vanadium intensifies. With the aid of an optimization program seven variants have been chosen which ensure vanadium extraction into solution of 95-100%

  3. Road Extraction from High-Resolution SAR Images via Automatic Local Detecting and Human-Guided Global Tracking

    Directory of Open Access Journals (Sweden)

    Jianghua Cheng

    2012-01-01

    Full Text Available Because of various kinds of disturbances, layover effects, and shadowing, it is difficult to extract roads from high-resolution SAR images. A new road center-point searching method is proposed with two alternating steps: local detection and global tracking. In the local detection step, a double-window model is set up, consisting of an outer fixed square window and an inner rotary rectangular one. The outer window is used to obtain the local road direction by using an orientation histogram, based on the fact that surrounding objects are usually arranged along roads. The inner window rotates its orientation in accordance with the result of the local road direction calculation and searches for the center points of a road segment. In the global tracking step, a variable-step particle filter is used to deal with tracking that is frequently broken by shelters along the roadside and obstacles on the road. Finally, the center points are linked by quadratic curve fitting. In an experiment on 1 m high-resolution airborne SAR imagery, the results show that this method is effective.

  4. A hybrid method for pancreas extraction from CT image based on level set methods.

    Science.gov (United States)

    Jiang, Huiyan; Tan, Hanqing; Fujita, Hiroshi

    2013-01-01

    This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require the initial contour to be located near the final boundary of the object, suffer from leakage into tissues neighboring the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to overcome the sensitivity of the level set method to the initial contour location, and a modified distance regularized level set method, which extracts the accurate pancreas. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcoming of over-segmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five other state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluated results demonstrate that our method outperforms the other methods by achieving higher accuracy and making fewer false segmentations in pancreas extraction.

  5. Detecting and extracting clusters in atom probe data: A simple, automated method using Voronoi cells

    International Nuclear Information System (INIS)

    Felfer, P.; Ceguerra, A.V.; Ringer, S.P.; Cairney, J.M.

    2015-01-01

    The analysis of the formation of clusters in solid solutions is one of the most common uses of atom probe tomography. Here, we present a method where we use the Voronoi tessellation of the solute atoms and its geometric dual, the Delaunay triangulation to test for spatial/chemical randomness of the solid solution as well as extracting the clusters themselves. We show how the parameters necessary for cluster extraction can be determined automatically, i.e. without user interaction, making it an ideal tool for the screening of datasets and the pre-filtering of structures for other spatial analysis techniques. Since the Voronoi volumes are closely related to atomic concentrations, the parameters resulting from this analysis can also be used for other concentration based methods such as iso-surfaces. - Highlights: • Cluster analysis of atom probe data can be significantly simplified by using the Voronoi cell volumes of the atomic distribution. • Concentration fields are defined on a single atomic basis using Voronoi cells. • All parameters for the analysis are determined by optimizing the separation probability of bulk atoms vs clustered atoms

  6. Detecting and extracting clusters in atom probe data: A simple, automated method using Voronoi cells

    Energy Technology Data Exchange (ETDEWEB)

    Felfer, P., E-mail: peter.felfer@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia); Ceguerra, A.V., E-mail: anna.ceguerra@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia); Ringer, S.P., E-mail: simon.ringer@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia); Cairney, J.M., E-mail: julie.cairney@sydney.edu.au [Australian Centre for Microscopy and Microanalysis, The University of Sydney, NSW 2006 (Australia); School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW 2006 (Australia)

    2015-03-15

    The analysis of the formation of clusters in solid solutions is one of the most common uses of atom probe tomography. Here, we present a method where we use the Voronoi tessellation of the solute atoms and its geometric dual, the Delaunay triangulation to test for spatial/chemical randomness of the solid solution as well as extracting the clusters themselves. We show how the parameters necessary for cluster extraction can be determined automatically, i.e. without user interaction, making it an ideal tool for the screening of datasets and the pre-filtering of structures for other spatial analysis techniques. Since the Voronoi volumes are closely related to atomic concentrations, the parameters resulting from this analysis can also be used for other concentration based methods such as iso-surfaces. - Highlights: • Cluster analysis of atom probe data can be significantly simplified by using the Voronoi cell volumes of the atomic distribution. • Concentration fields are defined on a single atomic basis using Voronoi cells. • All parameters for the analysis are determined by optimizing the separation probability of bulk atoms vs clustered atoms.
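
    A short Python sketch of the quantity these two records build on, the per-solute Voronoi cell volume, using SciPy; cells touching the dataset boundary are skipped, and the randomness test and automatic parameter selection described in the abstract are not reproduced.

```python
# Sketch: per-atom Voronoi cell volumes for solute positions. Small cell
# volumes correspond to locally high solute concentration, the quantity the
# cluster-extraction method above builds on.
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_volumes(points):
    vor = Voronoi(points)
    volumes = np.full(len(points), np.nan)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if -1 in region or len(region) == 0:      # unbounded cell at the hull
            continue
        volumes[i] = ConvexHull(vor.vertices[region]).volume
    return volumes

pts = np.random.rand(500, 3)                      # stand-in for solute coordinates
vols = voronoi_volumes(pts)
print(np.nanmedian(vols))
```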

  7. Establishing a novel automated magnetic bead-based method for the extraction of DNA from a variety of forensic samples.

    Science.gov (United States)

    Witt, Sebastian; Neumann, Jan; Zierdt, Holger; Gébel, Gabriella; Röscheisen, Christiane

    2012-09-01

    Automated systems have been increasingly utilized for DNA extraction by many forensic laboratories to handle growing numbers of forensic casework samples while minimizing the risk of human errors and assuring high reproducibility. The step towards automation however is not easy: The automated extraction method has to be very versatile to reliably prepare high yields of pure genomic DNA from a broad variety of sample types on different carrier materials. To prevent possible cross-contamination of samples or the loss of DNA, the components of the kit have to be designed in a way that allows for the automated handling of the samples with no manual intervention necessary. DNA extraction using paramagnetic particles coated with a DNA-binding surface is predestined for an automated approach. For this study, we tested different DNA extraction kits using DNA-binding paramagnetic particles with regard to DNA yield and handling by a Freedom EVO(®)150 extraction robot (Tecan) equipped with a Te-MagS magnetic separator. Among others, the extraction kits tested were the ChargeSwitch(®)Forensic DNA Purification Kit (Invitrogen), the PrepFiler™Automated Forensic DNA Extraction Kit (Applied Biosystems) and NucleoMag™96 Trace (Macherey-Nagel). After an extensive test phase, we established a novel magnetic bead extraction method based upon the NucleoMag™ extraction kit (Macherey-Nagel). The new method is readily automatable and produces high yields of DNA from different sample types (blood, saliva, sperm, contact stains) on various substrates (filter paper, swabs, cigarette butts) with no evidence of a loss of magnetic beads or sample cross-contamination. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  8. Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.

    NARCIS (Netherlands)

    Weijers, G.; Starke, A.; Haudum, A.; Thijssen, J.M.; Rehage, J.; Korte, C.L. de

    2010-01-01

    The aim of this study was to test the hypothesis that automatic segmentation of vessels in ultrasound (US) images can produce similar or better results in grading fatty livers than interactive segmentation. A study was performed in postpartum dairy cows (N=151), as an animal model of human fatty

  9. Research of an Automatic Control Method of NO Removal System by Silent Discharge

    Science.gov (United States)

    Kimura, Kouhei; Hayashi, Kenji; Yoshioka, Yoshio

    An automatic NOx control device was developed for a NOx removal system by silent discharge, targeting a diesel engine generator. A new algorithm for controlling the exit NO concentration at specified values was developed. The control system was built in our laboratory, and it was confirmed that the exit NO concentration could be controlled at the specified value.

  10. [Automatic classification method of star spectrum data based on classification pattern tree].

    Science.gov (United States)

    Zhao, Xu-Jun; Cai, Jiang-Hui; Zhang, Ji-Fu; Yang, Hai-Feng; Ma, Yang

    2013-10-01

    Frequent patterns, which appear frequently in a data set, play an important role in data mining. For stellar spectrum classification tasks, a classification rule mining method based on a classification pattern tree is presented on the basis of frequent patterns. The procedure is as follows. Firstly, a new tree structure, the classification pattern tree, is introduced, based on the different frequencies of stellar spectral attributes in the database and their different importance for classification. The related concepts and the construction method of the classification pattern tree are also described in this paper. Then, the characteristics of the stellar spectrum are mapped to the classification pattern tree. Two modes, top-down and bottom-up, are used to traverse the classification pattern tree and extract the classification rules. Meanwhile, the concept of pattern capability is introduced to adjust the number of classification rules and improve the construction efficiency of the classification pattern tree. Finally, the SDSS (Sloan Digital Sky Survey) stellar spectral data provided by the National Astronomical Observatory are used to verify the accuracy of the method. The results show that a higher classification accuracy has been achieved.

  11. A robust automatic leukocyte recognition method based on island-clustering texture

    Directory of Open Access Journals (Sweden)

    Xiaoshun Li

    2016-01-01

    Full Text Available A leukocyte recognition method for human peripheral blood smears based on island-clustering texture (ICT) is proposed. By analyzing the features of the five typical classes of leukocyte images, a new ICT model is established. Firstly, feature points are extracted from a gray leukocyte image by mean-shift clustering to serve as the centers of islands. Secondly, region growing is employed to create the regions of the islands, in which the seeds are exactly these feature points. The distribution of these islands describes a new texture. Finally, a distinguishing parameter vector of these islands is created as the ICT features by combining the ICT features with the geometric features of the leukocyte. The five typical classes of leukocytes can then be recognized successfully at a correct recognition rate of more than 92.3% with a total sample of 1310 leukocytes. Experimental results show the feasibility of the proposed method. Further analysis reveals that the method is robust and the results can provide important information for disease diagnosis.

  12. Evaluation of an automatic MR-based gold fiducial marker localisation method for MR-only prostate radiotherapy

    Science.gov (United States)

    Maspero, Matteo; van den Berg, Cornelis A. T.; Zijlstra, Frank; Sikkes, Gonda G.; de Boer, Hans C. J.; Meijer, Gert J.; Kerkmeijer, Linda G. W.; Viergever, Max A.; Lagendijk, Jan J. W.; Seevinck, Peter R.

    2017-10-01

    An MR-only radiotherapy planning (RTP) workflow would reduce the cost, radiation exposure and uncertainties introduced by CT-MRI registrations. In the case of prostate treatment, one of the remaining challenges currently holding back the implementation of an RTP workflow is the MR-based localisation of intraprostatic gold fiducial markers (FMs), which is crucial for accurate patient positioning. Currently, MR-based FM localisation is clinically performed manually. This is sub-optimal, as manual interaction increases the workload. Attempts to perform automatic FM detection often rely on being able to detect signal voids induced by the FMs in magnitude images. However, signal voids may not always be sufficiently specific, hampering accurate and robust automatic FM localisation. Here, we present an approach that aims at automatic MR-based FM localisation. This method is based on template matching using a library of simulated complex-valued templates, and exploiting the behaviour of the complex MR signal in the vicinity of the FM. Clinical evaluation was performed on seventeen prostate cancer patients undergoing external beam radiotherapy treatment. Automatic MR-based FM localisation was compared to manual MR-based and semi-automatic CT-based localisation (the current gold standard) in terms of detection rate and the spatial accuracy and precision of localisation. The proposed method correctly detected all three FMs in 15/17 patients. The spatial accuracy (mean) and precision (STD) were 0.9 mm and 0.5 mm respectively, which is below the voxel size of 1.1 × 1.1 × 1.2 mm3 and comparable to MR-based manual localisation. FM localisation failed (3/51 FMs) in the presence of bleeding or calcifications in the direct vicinity of the FM. The method was found to be spatially accurate and precise, which is essential for clinical use. To overcome any missed detection, we envision the use of the proposed method along with verification by an observer. This will result in a

  13. A Supporting Platform for Semi-Automatic Hyoid Bone Tracking and Parameter Extraction from Videofluoroscopic Images for the Diagnosis of Dysphagia Patients.

    Science.gov (United States)

    Lee, Jun Chang; Nam, Kyoung Won; Jang, Dong Pyo; Paik, Nam Jong; Ryu, Ju Seok; Kim, In Young

    2017-04-01

    Conventional kinematic analysis of videofluoroscopic (VF) swallowing image, most popular for dysphagia diagnosis, requires time-consuming and repetitive manual extraction of diagnostic information from multiple images representing one swallowing period, which results in a heavy work load for clinicians and excessive hospital visits for patients to receive counseling and prescriptions. In this study, a software platform was developed that can assist in the VF diagnosis of dysphagia by automatically extracting a two-dimensional moving trajectory of the hyoid bone as well as 11 temporal and kinematic parameters. Fifty VF swallowing videos containing both non-mandible-overlapped and mandible-overlapped cases from eight patients with dysphagia of various etiologies and 19 videos from ten healthy controls were utilized for performance verification. Percent errors of hyoid bone tracking were 1.7 ± 2.1% for non-overlapped images and 4.2 ± 4.8% for overlapped images. Correlation coefficients between manually extracted and automatically extracted moving trajectories of the hyoid bone were 0.986 ± 0.017 (X-axis) and 0.992 ± 0.006 (Y-axis) for non-overlapped images, and 0.988 ± 0.009 (X-axis) and 0.991 ± 0.006 (Y-axis) for overlapped images. Based on the experimental results, we believe that the proposed platform has the potential to improve the satisfaction of both clinicians and patients with dysphagia.

  14. Rapid new methods for paint collection and lead extraction.

    Science.gov (United States)

    Gutknecht, William F; Harper, Sharon L; Winstead, Wayne; Sorrell, Kristen; Binstock, David A; Salmons, Cynthia A; Haas, Curtis; McCombs, Michelle; Studabaker, William; Wall, Constance V; Moore, Curtis

    2009-01-01

    Chronic exposure of children to lead can result in permanent physiological impairment. In adults, it can cause irritability, poor muscle coordination, and nerve damage to the sense organs and nerves controlling the body. Surfaces coated with lead-containing paints are potential sources of exposure to lead. In April 2008, the U.S. Environmental Protection Agency (EPA) finalized new requirements that would reduce exposure to lead hazards created by renovation, repair, and painting activities, which disturb lead-based paint. On-site, inexpensive identification of lead-based paint is required. Two steps have been taken to meet this challenge. First, this paper presents a new, highly efficient method for paint collection that is based on the use of a modified wood drill bit. Second, this paper presents a novel, one-step approach for quantitatively grinding and extracting lead from paint samples for subsequent lead determination. This latter method is based on the use of a high-revolutions-per-minute rotor with stator to break up the paint into approximately 50 micron-size particles. Nitric acid (25%, v/v) is used to extract >95% of the lead from real-world paints, National Institute of Standards and Technology standard reference materials, and audit samples from the American Industrial Hygiene Association's Environmental Lead Proficiency Analytical Testing Program. This quantitative extraction procedure, when paired with quantitative paint sample collection and lead determination, may enable the development of a lead paint test kit that will meet the specifications of the final EPA rule.

  15. Portable Rule Extraction Method for Neural Network Decisions Reasoning

    Directory of Open Access Journals (Sweden)

    Darius PLIKYNAS

    2005-08-01

    Full Text Available Neural network (NN) methods are sometimes useless in practical applications because they are not properly tailored to a particular market's needs. We focus hereinafter specifically on financial market applications, where NNs have not yet gained full acceptance. One of the main reasons is the "Black Box" problem (the lack of explanatory power of NN decisions). There are some NN decision rule extraction methods, such as decompositional, pedagogical or eclectic approaches, but they suffer from low portability of the rule extraction technique across various neural net architectures, a high level of granularity, algorithmic sophistication of the rule extraction technique, etc. The authors propose to eliminate some known drawbacks using an innovative extension of the pedagogical approach. The idea is demonstrated with a widespread MLP neural net (a common tool in the financial problem domain) and SOM (input data space clusterization). The performance feedback of both nets is related and targeted through an iteration cycle by achieving the best match between the decision space fragments and the input data space clusters. Three sets of rules are generated algorithmically or by fuzzy membership functions. Empirical validation on common financial benchmark problems is conducted with an appropriately prepared software solution.

  16. Biodiesel Production from Microalgae by Extraction – Transesterification Method

    Directory of Open Access Journals (Sweden)

    Nguyen Thi Phuong Thao

    2013-11-01

    Full Text Available The environmental impact of using petroleum fuels has led to a quest to find a suitable alternative fuel source. In this study, microalgae were explored as a highly potential feedstock for producing biodiesel fuel. Firstly, algal oil is extracted from algal biomass using an organic solvent (n-hexane). Lipids make up as much as 60% of the weight of microalgae. Then, biodiesel is created through a chemical reaction known as transesterification between the algal oil and an alcohol (methanol) with a strong acid (such as H2SO4) as the catalyst. The extraction-transesterification method resulted in a high biodiesel yield (10% of algal biomass) and high FAME content (5.2% of algal biomass). Biodiesel production from microalgae was studied through experimental investigation of transesterification conditions such as reaction time, methanol-to-oil ratio and catalyst dosage, which are deemed to have the main impact on reaction conversion efficiency. All the parameters characterized for the purified biodiesel, such as free glycerin, total glycerin, flash point and sulfur content, were analyzed according to ASTM standards. Doi: http://dx.doi.org/10.12777/wastech.1.1.6-9 Citation: Thao, N.T.P., Tin, N.T., and Thanh, B.X. 2013. Biodiesel Production from Microalgae by Extraction – Transesterification Method. Waste Technology 1(1):6-9. Doi: http://dx.doi.org/10.12777/wastech.1.1.6-9

  17. Analytical laboratories method No. 4001 - automatic determination of U-235 wt% in a uranium matrix by gamma spectrometry

    International Nuclear Information System (INIS)

    1987-01-01

    This method is designed to automatically measure the U-235 concentration of various uranium-containing matrices (e.g., UO 3 , UF 4 , U 3 O 8 , sump samples, UNH, residues, etc.). Analyses are performed using a computer controlled sample changer. The technique is applicable to samples ranging from 0.20 to 20.0 wt% U-235. A complete gamma spectrometric U-235 analysis can be performed in two hours, or less

  18. An automatic method to analyze the Capacity-Voltage and Current-Voltage curves of a sensor

    CERN Document Server

    AUTHOR|(CDS)2261553

    2017-01-01

    An automatic method to perform capacitance-versus-voltage analysis for all kinds of silicon sensors is provided. It successfully calculates the depletion voltage for unirradiated and irradiated sensors, and for measurements with outliers or reaching breakdown. It is built using C++ and ROOT trees, with a skeleton analogous to TRICS, in which the data as well as the results of the fits are saved for further analysis.
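
    A minimal Python sketch of one common way to estimate the depletion voltage from a capacitance-voltage curve, as the intersection of two straight-line fits to the 1/C^2 data; this is not the ROOT/C++ implementation referenced above, and the split index between the two regimes is assumed to be known.

```python
# Sketch: depletion voltage from a 1/C^2 versus bias-voltage curve as the
# intersection of a rising-region fit and a plateau fit.
import numpy as np

def depletion_voltage(v, c, split):
    """v: bias voltages, c: capacitances, split: index separating the regimes."""
    y = 1.0 / c**2
    a1, b1 = np.polyfit(v[:split], y[:split], 1)   # rising part before depletion
    a2, b2 = np.polyfit(v[split:], y[split:], 1)   # plateau after full depletion
    return (b2 - b1) / (a1 - a2)                   # intersection of the two lines

# Illustrative synthetic curve with a knee near 60 V
v = np.linspace(1, 120, 60)
c = np.where(v < 60, 1e-10 / np.sqrt(v), 1e-10 / np.sqrt(60))
print(depletion_voltage(v, c, split=np.searchsorted(v, 60)))
```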

  19. Comparison antioxidant activity of Tarom Mahali rice bran extracted from different extraction methods and its effect on canola oil stabilization

    OpenAIRE

    Farahmandfar, Reza; Asnaashari, Maryam; Sayyad, Ruhollah

    2015-01-01

    In this study, Tarom Mahali rice bran extracts by ultrasound assisted and traditional solvent (ethanol and ethanol: water (50:50)) extraction method were compared. The total phenolic and tocopherol content and antioxidant activity of the extracts was determined and compared with TBHQ by DPPH assay and β-carotene bleaching method. The results show that the extract from ethanol: water (50:50) ultrasonic treatment with high amount of phenols (919.66 mg gallic acid/g extract, tocopherols (438.4 μ...

  20. Methods for CT automatic exposure control protocol translation between scanner platforms.

    Science.gov (United States)

    McKenney, Sarah E; Seibert, J Anthony; Lamba, Ramit; Boone, John M

    2014-03-01

    An imaging facility with a diverse fleet of CT scanners faces considerable challenges when propagating CT protocols with consistent image quality and patient dose across scanner makes and models. Although some protocol parameters can comfortably remain constant among scanners (eg, tube voltage, gantry rotation time), the automatic exposure control (AEC) parameter, which selects the overall mA level during tube current modulation, is difficult to match among scanners, especially from different CT manufacturers. Objective methods for converting tube current modulation protocols among CT scanners were developed. Three CT scanners were investigated, a GE LightSpeed 16 scanner, a GE VCT scanner, and a Siemens Definition AS+ scanner. Translation of the AEC parameters such as noise index and quality reference mAs across CT scanners was specifically investigated. A variable-diameter poly(methyl methacrylate) phantom was imaged on the 3 scanners using a range of AEC parameters for each scanner. The phantom consisted of 5 cylindrical sections with diameters of 13, 16, 20, 25, and 32 cm. The protocol translation scheme was based on matching either the volumetric CT dose index or image noise (in Hounsfield units) between two different CT scanners. A series of analytic fit functions, corresponding to different patient sizes (phantom diameters), were developed from the measured CT data. These functions relate the AEC metric of the reference scanner, the GE LightSpeed 16 in this case, to the AEC metric of a secondary scanner. When translating protocols between different models of CT scanners (from the GE LightSpeed 16 reference scanner to the GE VCT system), the translation functions were linear. However, a power-law function was necessary to convert the AEC functions of the GE LightSpeed 16 reference scanner to the Siemens Definition AS+ secondary scanner, because of differences in the AEC functionality designed by these two companies. Protocol translation on the basis of
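
    A small Python sketch of fitting linear and power-law translation functions between the AEC metrics of a reference and a secondary scanner, following the functional forms described above; the measurement pairs are placeholders for phantom data at a single phantom diameter.

```python
# Sketch: fit a linear and a power-law relationship between the AEC metric of
# a reference scanner and that of a secondary scanner from phantom data, then
# keep whichever form fits the measured pairs better for this phantom size.
import numpy as np
from scipy.optimize import curve_fit

ref_metric = np.array([10.0, 15.0, 20.0, 30.0, 40.0])    # e.g. noise index (reference)
sec_metric = np.array([80.0, 120.0, 160.0, 245.0, 330.0])  # e.g. quality ref. mAs (secondary)

def linear(x, a, b):
    return a * x + b

def power_law(x, a, b):
    return a * np.power(x, b)

popt_lin, _ = curve_fit(linear, ref_metric, sec_metric)
popt_pow, _ = curve_fit(power_law, ref_metric, sec_metric, p0=(1.0, 1.0))
print("linear:", popt_lin, "power-law:", popt_pow)
```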

  1. Free radical scavenging and anti-acne activities of mangosteen fruit rind extracts prepared by different extraction methods.

    Science.gov (United States)

    Pothitirat, Werayut; Chomnawang, Mullika Traidej; Supabphol, Roongtawan; Gritsanapan, Wandee

    2010-02-01

    The ethanol extracts of mangosteen fruit rinds prepared by several extraction methods were examined for their contents of bioactive compounds, DPPH-scavenging activity, and anti-acne producing bacteria against Propionibacterium acnes and Staphylococcus epidermidis. The dried powder of the fruit rind was extracted with 95% ethanol by maceration, percolation, Soxhlet extraction, ultrasonic extraction, and extraction using a magnetic stirrer. Soxhlet extraction promoted the maximum contents of crude extract (26.60% dry weight) and alpha-mangostin (13.51%, w/w of crude extract), and also gave the highest anti-acne activity with MIC 7.81 and 15.63 microg/mL and MBC 15.53 and 31.25 microg/mL against P. acnes and S. epidermidis, respectively. Ethanol 70% and 50% (v/v) were also compared in Soxhlet extraction. Ethanol 50% promoted the extract with maximum amounts of total phenolic compounds (26.96 g gallic acid equivalents/100 g extract) and total tannins (46.83 g tannic acid equivalents/100 g extract), and also exhibited the most effective DPPH-scavenging activity (EC(50) 12.84 microg/mL). Considering various factors involved in the process, Soxhlet extraction carried a low cost in terms of reagents and extraction time. It appears to be the recommended extraction method for mangosteen fruit rind. Ethanol 50% should be the appropriate solvent for extracting free radical-scavenging components, phenolic compounds, and tannins, while 95% ethanol is recommended for extraction of alpha-mangostin, a major anti-acne component from this plant.

  2. Advanced Extraction Methods for Actinide/Lanthanide Separations

    Energy Technology Data Exchange (ETDEWEB)

    Scott, M.J.

    2005-12-01

    The separation of An(III) ions from chemically similar Ln(III) ions is perhaps one of the most difficult problems encountered during the processing of nuclear waste. In the 3+ oxidation states, the metal ions have an identical charge and roughly the same ionic radius. They differ strictly in the relative energies of their f- and d-orbitals, and to separate these metal ions, ligands will need to be developed that take advantage of this small but important distinction. The extraction of uranium and plutonium from nitric acid solution can be performed quantitatively by the extraction with the TBP (tributyl phosphate). Commercially, this process has found wide use in the PUREX (plutonium uranium extraction) reprocessing method. The TRUEX (transuranium extraction) process is further used to coextract the trivalent lanthanides and actinides ions from HLLW generated during PUREX extraction. This method uses CMPO [(N, N-diisobutylcarbamoylmethyl) octylphenylphosphineoxide] intermixed with TBP as a synergistic agent. However, the final separation of trivalent actinides from trivalent lanthanides still remains a challenging task. In TRUEX nitric acid solution, the Am(III) ion is coordinated by three CMPO molecules and three nitrate anions. Taking inspiration from this data and previous work with calix[4]arene systems, researchers on this project have developed a C3-symmetric tris-CMPO ligand system using a triphenoxymethane platform as a base. The triphenoxymethane ligand systems have many advantages for the preparation of complex ligand systems. The compounds are very easy to prepare. The steric and solubility properties can be tuned through an extreme range by the inclusion of different alkoxy and alkyl groups such as methyoxy, ethoxy, t-butoxy, methyl, octyl, t-pentyl, or even t-pentyl at the ortho- and para-positions of the aryl rings. The triphenoxymethane ligand system shows promise as an improved extractant for both tetravalent and trivalent actinide recoveries form

  3. Advanced Extraction Methods for Actinide/Lanthanide Separations

    International Nuclear Information System (INIS)

    Scott, M.J.

    2005-01-01

    The separation of An(III) ions from chemically similar Ln(III) ions is perhaps one of the most difficult problems encountered during the processing of nuclear waste. In the 3+ oxidation states, the metal ions have an identical charge and roughly the same ionic radius. They differ strictly in the relative energies of their f- and d-orbitals, and to separate these metal ions, ligands will need to be developed that take advantage of this small but important distinction. The extraction of uranium and plutonium from nitric acid solution can be performed quantitatively by the extraction with the TBP (tributyl phosphate). Commercially, this process has found wide use in the PUREX (plutonium uranium extraction) reprocessing method. The TRUEX (transuranium extraction) process is further used to coextract the trivalent lanthanides and actinides ions from HLLW generated during PUREX extraction. This method uses CMPO [(N, N-diisobutylcarbamoylmethyl) octylphenylphosphineoxide] intermixed with TBP as a synergistic agent. However, the final separation of trivalent actinides from trivalent lanthanides still remains a challenging task. In TRUEX nitric acid solution, the Am(III) ion is coordinated by three CMPO molecules and three nitrate anions. Taking inspiration from this data and previous work with calix[4]arene systems, researchers on this project have developed a C3-symmetric tris-CMPO ligand system using a triphenoxymethane platform as a base. The triphenoxymethane ligand systems have many advantages for the preparation of complex ligand systems. The compounds are very easy to prepare. The steric and solubility properties can be tuned through an extreme range by the inclusion of different alkoxy and alkyl groups such as methyoxy, ethoxy, t-butoxy, methyl, octyl, t-pentyl, or even t-pentyl at the ortho- and para-positions of the aryl rings. The triphenoxymethane ligand system shows promise as an improved extractant for both tetravalent and trivalent actinide recoveries form

  4. Two Methods of Automatic Evaluation of Speech Signal Enhancement Recorded in the Open-Air MRI Environment

    Science.gov (United States)

    Přibil, Jiří; Přibilová, Anna; Frollo, Ivan

    2017-12-01

    The paper focuses on two methods of evaluation of successfulness of speech signal enhancement recorded in the open-air magnetic resonance imager during phonation for the 3D human vocal tract modeling. The first approach enables to obtain a comparison based on statistical analysis by ANOVA and hypothesis tests. The second method is based on classification by Gaussian mixture models (GMM). The performed experiments have confirmed that the proposed ANOVA and GMM classifiers for automatic evaluation of the speech quality are functional and produce fully comparable results with the standard evaluation based on the listening test method.
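
    A simplified Python sketch of a two-class GMM classifier for enhanced versus noisy recordings, in the spirit of the GMM-based evaluation described above; the feature vectors are random placeholders for real per-recording descriptors.

```python
# Sketch: two Gaussian mixture models, one per class, with classification by
# the higher per-sample log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

def train_gmm_classifier(X_enhanced, X_noisy, n_components=4):
    gmm_e = GaussianMixture(n_components=n_components).fit(X_enhanced)
    gmm_n = GaussianMixture(n_components=n_components).fit(X_noisy)
    return gmm_e, gmm_n

def classify(gmm_e, gmm_n, X):
    # Higher log-likelihood under a class model wins
    return np.where(gmm_e.score_samples(X) > gmm_n.score_samples(X),
                    "enhanced", "noisy")

# Hypothetical usage with random features standing in for real descriptors
gmm_e, gmm_n = train_gmm_classifier(np.random.randn(200, 12),
                                    np.random.randn(200, 12) + 1.0)
print(classify(gmm_e, gmm_n, np.random.randn(5, 12)))
```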

  5. A Review of Automatic Methods Based on Image Processing Techniques for Tuberculosis Detection from Microscopic Sputum Smear Images.

    Science.gov (United States)

    Panicker, Rani Oomman; Soman, Biju; Saini, Gagan; Rajan, Jeny

    2016-01-01

    Tuberculosis (TB) is an infectious disease caused by the bacteria Mycobacterium tuberculosis. It primarily affects the lungs, but it can also affect other parts of the body. TB remains one of the leading causes of death in developing countries, and its recent resurgences in both developed and developing countries warrant global attention. The number of deaths due to TB is very high (as per the WHO report, 1.5 million died in 2013), although most are preventable if diagnosed early and treated. There are many tools for TB detection, but the most widely used one is sputum smear microscopy. It is done manually and is often time consuming; a laboratory technician is expected to spend at least 15 min per slide, limiting the number of slides that can be screened. Many countries, including India, have a dearth of properly trained technicians, and they often fail to detect TB cases due to the stress of a heavy workload. Automatic methods are generally considered as a solution to this problem. Attempts have been made to develop automatic approaches to identify TB bacteria from microscopic sputum smear images. In this paper, we provide a review of automatic methods based on image processing techniques published between 1998 and 2014. The review shows that the accuracy of algorithms for the automatic detection of TB increased significantly over the years and gladly acknowledges that commercial products based on published works also started appearing in the market. This review could be useful to researchers and practitioners working in the field of TB automation, providing a comprehensive and accessible overview of methods of this field of research.

  6. A fast, simple and green method for the extraction of carbamate pesticides from rice by microwave assisted steam extraction coupled with solid phase extraction.

    Science.gov (United States)

    Song, Weitao; Zhang, Yiqun; Li, Guijie; Chen, Haiyan; Wang, Hui; Zhao, Qi; He, Dong; Zhao, Chun; Ding, Lan

    2014-01-15

    This paper presents a fast, simple and green sample pretreatment method for the extraction of 8 carbamate pesticides from rice. The carbamate pesticides were extracted by a microwave assisted water steam extraction method, and the extract obtained was immediately applied to a C18 solid phase extraction cartridge for clean-up and concentration. The eluate containing the target compounds was finally analysed by high performance liquid chromatography with mass spectrometry. The parameters affecting extraction efficiency were investigated and optimised. Limits of detection ranging from 1.1 to 4.2 ng g(-1) were obtained. The recoveries of the 8 carbamate pesticides ranged from 66% to 117% at three spiked levels, and the inter- and intra-day relative standard deviation values were less than 9.1%. Compared with traditional methods, the proposed method required less extraction time and organic solvent. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. Interest Extraction Using Relevance Feedback with Kernel Method

    Science.gov (United States)

    Hidekazu, Yanagimoto; Sigeru, Omatu

    In this paper, we propose interest extraction using relevance feedback with the kernel method. In the field of machine learning, the kernel method has been widely used. Since a classifier using the kernel method creates a discriminant function in a feature space, the discriminant function is a nonlinear function in the input space. The kernel method is used for the Support Vector Machine (SVM), Kernel PCA, and so on. The SVM sets a discriminant hyperplane between positive data and negative data. Hence, the distance between the hyperplane and a training sample is not important in the SVM, which makes it difficult to use the SVM to score other samples. Our goal is to create a method which scores the other samples in the feature space. We propose a relevance feedback which is carried out in the feature space; hence, this relevance feedback can deal with the nonlinearity of the data. We compare the proposed method with the common relevance feedback using the test collection NTCIR2. Finally, we confirm that the proposed method is superior to the common method through simulations.
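
    A minimal Python sketch of scoring documents by RBF-kernel similarity to positive and negative feedback examples, which yields the kind of graded ranking discussed above instead of the SVM's hard hyperplane decision; the document vectors, gamma and the scoring rule are illustrative assumptions.

```python
# Sketch: kernel-based relevance scoring -- mean RBF similarity to relevant
# examples minus mean RBF similarity to non-relevant examples.
import numpy as np

def rbf_kernel(x, y, gamma=0.1):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def relevance_score(doc, positives, negatives, gamma=0.1):
    pos = np.mean([rbf_kernel(doc, p, gamma) for p in positives])
    neg = np.mean([rbf_kernel(doc, n, gamma) for n in negatives])
    return pos - neg                      # larger means closer to the user's interest

# Illustrative usage with random document vectors
docs = np.random.randn(10, 50)
positives, negatives = docs[:3], docs[3:5]
scores = [relevance_score(d, positives, negatives) for d in docs[5:]]
print(scores)
```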

  8. Control method and device for automatic drift stabilization in radiation detection

    International Nuclear Information System (INIS)

    Berthold, F.; Kubisiak, H.

    1979-01-01

    In the automatic control circuit individual electron peaks in the detectors, e.g. NaI crystals or proportional counters, are used. These peaks exhibit no drift dependence; they may be produced in the detectors in different ways. The control circuit may be applied in nuclear radiation measurement techniques, photometry, gamma cameras and for measuring the X-ray fine structure with proportional counters. (DG) [de

  9. Datasets of Odontocete Sounds Annotated for Developing Automatic Detection Methods, FY09-10

    Science.gov (United States)

    2012-09-01

    automatic call detection and classification; make them publicly available in an archive on the Internet; continue developing and publishing detection and ... out of 85 glider dives. Manual analysis revealed that 7 of these detections were actual beaked whale encounters. During the other 3 glider dives ... 28 Sept.-1 Oct. 2011. Spatially explicit capture-recapture minke whale density estimation. Proc. XIX Congresso Anual da Sociedade Portuguesa de

  10. Photogrammetric Model Based Method of Automatic Orientation of Space Cargo Ship Relative to the International Space Station

    Science.gov (United States)

    Blokhinov, Y. B.; Chernyavskiy, A. S.; Zheltov, S. Y.

    2012-07-01

    The technical problem of creating the new Russian version of an automatic Space Cargo Ship (SCS) for the International Space Station (ISS) is inseparably connected to the development of a digital video system for automatically measuring the SCS position relative to the ISS during spacecraft docking. This paper presents a method for estimating the orientation elements based on the use of a highly detailed digital model of the ISS. The input data are digital frames from a calibrated video system and the initial values of the orientation elements, which can be estimated from navigation devices or by a fast-and-rough viewpoint-dependent algorithm. The orientation elements are then defined precisely by means of algorithmic processing. The main idea is to solve the exterior orientation problem mainly on the basis of contour information from the frame image of the ISS instead of ground control points. A detailed digital model is used for generating raster templates of ISS nodes; the templates are used to detect and locate the nodes on the target image with the required accuracy. The process is performed for every frame, and the resulting parameters are considered to be the orientation elements. The Kalman filter is used for statistical support of the estimation process and real-time pose tracking. Finally, the modeling results presented show that the proposed method can be regarded as one means to ensure the algorithmic support of automatic space ship docking.

  11. PHOTOGRAMMETRIC MODEL BASED METHOD OF AUTOMATIC ORIENTATION OF SPACE CARGO SHIP RELATIVE TO THE INTERNATIONAL SPACE STATION

    Directory of Open Access Journals (Sweden)

    Y. B. Blokhinov

    2012-07-01

    Full Text Available The technical problem of creating the new Russian version of an automatic Space Cargo Ship (SCS) for the International Space Station (ISS) is inseparably connected to the development of a digital video system for automatically measuring the SCS position relative to the ISS during spacecraft docking. This paper presents a method for estimating the orientation elements based on the use of a highly detailed digital model of the ISS. The input data are digital frames from a calibrated video system and the initial values of the orientation elements, which can be estimated from navigation devices or by a fast-and-rough viewpoint-dependent algorithm. The orientation elements are then defined precisely by means of algorithmic processing. The main idea is to solve the exterior orientation problem mainly on the basis of contour information from the frame image of the ISS instead of ground control points. A detailed digital model is used for generating raster templates of ISS nodes; the templates are used to detect and locate the nodes on the target image with the required accuracy. The process is performed for every frame, and the resulting parameters are considered to be the orientation elements. The Kalman filter is used for statistical support of the estimation process and real time pose tracking. Finally, the modeling results presented show that the proposed method can be regarded as one means to ensure the algorithmic support of automatic space ship docking.

  12. Automatic mesh refinement and local multigrid methods for contact problems: application to the Pellet-Cladding mechanical Interaction

    International Nuclear Information System (INIS)

    Liu, Hao

    2016-01-01

    This Ph.D. work takes place within the framework of studies on Pellet-Cladding mechanical Interaction (PCI), which occurs in the fuel rods of pressurized water reactors. This manuscript focuses on automatic mesh refinement to simulate this phenomenon more accurately while maintaining acceptable computational time and memory space for industrial calculations. An automatic mesh refinement strategy based on the combination of the Local Defect Correction multigrid method (LDC) with the Zienkiewicz and Zhu a posteriori error estimator is proposed. The estimated error is used to detect the zones to be refined, where the local sub-grids of the LDC method are generated. Several stopping criteria are studied to end the refinement process when the solution is accurate enough or when the refinement no longer improves the global solution accuracy. Numerical results for elastic 2D test cases with pressure discontinuity show the efficiency of the proposed strategy. Automatic mesh refinement for unilateral contact problems is then considered. The strategy previously introduced can easily be adapted to multi-body refinement by estimating the solution error on each body separately. Post-processing is often necessary to ensure the conformity of the refined areas with regard to the contact boundaries. A variety of numerical experiments with elastic contact (with or without friction, with or without an initial gap) confirms the efficiency and adaptability of the proposed strategy. (author) [fr

  13. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    International Nuclear Information System (INIS)

    Gallivanone, F.; Interlenghi, M.; Castiglioni, I.; Canervari, C.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo cancer imaging. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, the Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define the MTV. The proposed PET segmentation strategies have mostly been validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of the MTV that is feasible in clinical practice, and 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of the MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters always available in clinical studies and was calibrated using the NEMA IQ phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain phantoms with synthetic realistic lesions (e.g. with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in
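
    A compact Python sketch combining a k-means background estimate with a relative threshold over a lesion sub-volume, in the spirit of the algorithm described above; the threshold fraction and the synthetic volume are illustrative, and the NEMA IQ phantom calibration is not reproduced.

```python
# Sketch: background-aware relative thresholding of a PET sub-volume --
# k-means separates background from uptake, then a fraction of the
# background-corrected peak defines the MTV mask.
import numpy as np
from sklearn.cluster import KMeans

def segment_lesion(volume, rel_threshold=0.42):
    """volume: 3-D array of activity values around a lesion."""
    vals = volume.reshape(-1, 1)
    km = KMeans(n_clusters=2, n_init=10).fit(vals)
    background = km.cluster_centers_.min()        # lower cluster centre = background
    peak = volume.max()
    thr = background + rel_threshold * (peak - background)
    return volume >= thr                          # boolean MTV mask

# Illustrative synthetic volume: low background with a few hot voxels
vol = np.random.rand(32, 32, 16) + 2.0 * (np.random.rand(32, 32, 16) > 0.95)
print("voxels in MTV:", int(segment_lesion(vol).sum()))
```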

  14. Morphotectonic mapping from the analysis of automatically extracted lineaments using Landsat 8 images and SRTM data in the Hindukush-Pamir

    Science.gov (United States)

    Rahnama, Mehdi; Gloaguen, Richard

    2014-05-01

    Modern deformation, fault movements, and induced earthquakes in the Hindukush-Pamir region are driven by the collision between the northward-moving Indian subcontinent and Eurasia. We investigated neotectonic activity and generated tectonic maps of this area. We developed a Matlab-based toolbox for the automatic extraction of image discontinuities. The approach consists of frequency domain filtering, edge detection in the spatial domain, Hough transformation, segment grouping, polynomial interpolation and geostatistical analysis of the lineament patterns. Statistical quantification of counts, lengths, azimuth frequency, density distribution, and orientations is used to understand the tectonic activity, to explain the prominent structural trends, and to demarcate the contribution of different faulting styles. Morphotectonic lineaments in the study area were automatically extracted from the panchromatic band of Landsat 8 with 15-m resolution and the SRTM digital elevation model (DEM) with 90-m resolution. These data were then analyzed to characterize the tectonic trends that dominated the geologic evolution of the area. We show that the SW-Pamir is mainly controlled by the Chaman-Herat-Central Badakhshan fault systems and, to a lesser extent, by the Darvaz fault zone. The extracted lineaments and the intensity of the characterized tectonic trends correspond well with reference data. In addition, the results are consistent with the styles of faulting determined from focal mechanisms of historical earthquake epicenters in the region. The presented results are applicable to different geological questions that rely on a good knowledge of the structural patterns and the spatial relationships between them, including geodynamics, seismic and risk assessment, mineral exploration and hydrogeological research.
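
    The edge-detection-plus-Hough-transform stage can be sketched as below. This is a hedged illustration using scikit-image rather than the authors' Matlab toolbox; all parameters (sigma, minimum line length, gap) are placeholders.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import probabilistic_hough_line

# Hedged sketch of the lineament-extraction idea (edge detection followed by a
# Hough transform) on a single-band image array; parameters are illustrative.
def extract_lineaments(band, sigma=2.0, min_length=25):
    edges = canny(band, sigma=sigma)
    segments = probabilistic_hough_line(edges, threshold=10,
                                        line_length=min_length, line_gap=3)
    lengths, azimuths = [], []
    for (x0, y0), (x1, y1) in segments:
        lengths.append(np.hypot(x1 - x0, y1 - y0))
        # approximate azimuth in image coordinates, folded to 0-180 degrees
        azimuths.append(np.degrees(np.arctan2(x1 - x0, y0 - y1)) % 180)
    return segments, np.array(lengths), np.array(azimuths)

band = np.random.rand(256, 256)          # stand-in for a Landsat 8 panchromatic tile
segs, lengths, az = extract_lineaments(band)
print(len(segs), "candidate lineaments; mean azimuth:",
      az.mean() if len(az) else "n/a")
```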

  15. Automatic extraction of plots from geo-registered UAS imagery of crop fields with complex planting schemes

    Science.gov (United States)

    Hearst, Anthony A.

    Complex planting schemes are common in experimental crop fields and can make it difficult to extract plots of interest from high-resolution imagery of the fields gathered by Unmanned Aircraft Systems (UAS). This prevents UAS imagery from being applied in High-Throughput Precision Phenotyping and other areas of agricultural research. If the imagery is accurately geo-registered, then it may be possible to extract plots from the imagery based on their map coordinates. To test this approach, a UAS was used to acquire visual imagery of 5 ha of soybean fields containing 6.0 m2 plots in a complex planting scheme. Sixteen artificial targets were set up in the fields before the flights, and different spatial configurations of 0 to 6 targets were used as Ground Control Points (GCPs) for geo-registration, resulting in a total of 175 geo-registered image mosaics with a broad range of geo-registration accuracies. Geo-registration accuracy was quantified as the horizontal Root Mean Squared Error (RMSE) of the targets used as checkpoints. Twenty test plots were extracted from the geo-registered imagery. Plot extraction accuracy was quantified as the percentage of the desired plot area that was extracted. It was found that using 4 GCPs along the perimeter of the field minimized the horizontal RMSE and enabled a plot extraction accuracy of at least 70%, with a mean plot extraction accuracy of 92%. Future work will focus on further enhancing the plot extraction accuracy through additional image processing techniques so that it becomes sufficiently accurate for all practical purposes in agricultural research and potentially other areas of research.
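
    A small sketch of the bookkeeping implied above, assuming a north-up mosaic with square pixels; `horizontal_rmse` and `plot_pixel_window` are illustrative helper names, not part of the study.

```python
import numpy as np

# Two illustrative steps: horizontal RMSE over checkpoint targets, and converting
# a plot's map-coordinate bounding box into pixel indices of a geo-registered mosaic.
def horizontal_rmse(measured_xy, true_xy):
    d = np.asarray(measured_xy) - np.asarray(true_xy)
    return np.sqrt(np.mean(np.sum(d**2, axis=1)))

def plot_pixel_window(bounds, origin, pixel_size):
    """bounds = (xmin, ymin, xmax, ymax) in map units; origin = upper-left (x, y)."""
    xmin, ymin, xmax, ymax = bounds
    col0 = int((xmin - origin[0]) / pixel_size)
    col1 = int(np.ceil((xmax - origin[0]) / pixel_size))
    row0 = int((origin[1] - ymax) / pixel_size)          # rows count downward
    row1 = int(np.ceil((origin[1] - ymin) / pixel_size))
    return slice(row0, row1), slice(col0, col1)

print(horizontal_rmse([[10.2, 5.1], [3.9, 7.0]], [[10.0, 5.0], [4.0, 7.0]]))
rows, cols = plot_pixel_window((500010, 4000000, 500013, 4000002),
                               origin=(500000, 4000100), pixel_size=0.02)
# mosaic[rows, cols] would then give the extracted plot
```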

  16. Method Specific Calibration Corrects for DNA Extraction Method Effects on Relative Telomere Length Measurements by Quantitative PCR.

    Science.gov (United States)

    Seeker, Luise A; Holland, Rebecca; Underwood, Sarah; Fairlie, Jennifer; Psifidi, Androniki; Ilska, Joanna J; Bagnall, Ainsley; Whitelaw, Bruce; Coffey, Mike; Banos, Georgios; Nussey, Daniel H

    2016-01-01

    Telomere length (TL) is increasingly being used as a biomarker in epidemiological, biomedical and ecological studies. A wide range of DNA extraction techniques have been used in telomere experiments, and recent quantitative PCR (qPCR) based studies suggest that the choice of DNA extraction method may influence average relative TL (RTL) measurements. Such extraction method effects may limit the use of historically collected DNA samples extracted with different methods. However, if extraction method effects are systematic, a method-specific (MS) calibrator might be able to correct for them, because systematic effects would influence the calibrator sample in the same way as all other samples. In the present study we tested whether leukocyte RTL in blood samples from Holstein Friesian cattle and Soay sheep measured by qPCR was influenced by the DNA extraction method and whether MS calibration could account for any observed differences. We compared two silica membrane-based DNA extraction kits and a salting-out method. All extraction methods were optimized to yield enough high-quality DNA for TL measurement. In both species we found that silica membrane-based DNA extraction methods produced shorter RTL measurements than the non-membrane-based method when calibrated against an identical calibrator. However, these differences were not statistically detectable when a MS calibrator was used to calculate RTL. This approach produced RTL measurements that were highly correlated across extraction methods (r > 0.76) and had coefficients of variation lower than 10% across plates of identical samples extracted by different methods. Our results are consistent with previous findings that popular membrane-based DNA extraction methods may lead to shorter RTL measurements than non-membrane-based methods. However, we also demonstrate that these differences can be accounted for by using an extraction method-specific calibrator, offering researchers a simple means of accounting for
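
    As an illustration of the calibration idea, a common 2^-ΔΔCt formulation of RTL is shown below with a calibrator extracted by the same method as the sample; the Ct values are invented and this is not the authors' exact quantification model.

```python
# Illustrative calculation of relative telomere length (RTL) by the common
# 2^-ΔΔCt formulation, using a calibrator extracted by the SAME method as the
# sample (the "method-specific calibrator" idea). All values are made up.
def rtl(ct_telomere, ct_single_copy_gene, cal_ct_telomere, cal_ct_scg):
    delta_sample = ct_telomere - ct_single_copy_gene
    delta_calibrator = cal_ct_telomere - cal_ct_scg
    return 2.0 ** -(delta_sample - delta_calibrator)

# Same sample quantified against a generic vs. a method-specific calibrator:
print(rtl(14.2, 21.0, 13.5, 21.1))   # generic calibrator
print(rtl(14.2, 21.0, 13.9, 21.1))   # calibrator extracted with the same kit
```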

  17. GPR Signal Denoising and Target Extraction With the CEEMD Method

    KAUST Repository

    Li, Jing

    2015-04-17

    In this letter, we apply a time-frequency analysis method based on the complete ensemble empirical mode decomposition (CEEMD) to ground-penetrating radar (GPR) signal processing. It decomposes the GPR signal into a sum of oscillatory components with guaranteed positive and smoothly varying instantaneous frequencies. The key idea of this method relies on averaging the modes obtained by empirical mode decomposition (EMD) applied to several realizations of Gaussian white noise added to the original signal. It solves the mode-mixing problem of the EMD method and improves the resolution of ensemble EMD (EEMD) when the signal has a low signal-to-noise ratio. First, we analyze the differences among the basic theories of EMD, EEMD, and CEEMD. Then, we compare their time-frequency analyses using the Hilbert-Huang transform to test the results of the different methods. The synthetic and real GPR data demonstrate that CEEMD promises higher spectral-spatial resolution than the other two EMD methods in GPR signal denoising and target extraction. Its decomposition is complete, with a numerically negligible error.
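
    A simplified, EEMD-style sketch of the noise-assisted averaging idea described above is given below. It assumes the PyEMD (EMD-signal) package provides the `EMD` decomposer, and it is only meant to illustrate the principle on a GPR-like 1-D trace, not the full CEEMD algorithm.

```python
import numpy as np
from PyEMD import EMD   # assumption: the PyEMD (EMD-signal) package is installed

# EEMD-style sketch of the noise-assisted averaging idea: decompose several
# noise-added copies of the trace and average the corresponding modes.
# (Full CEEMD adds the noise sequentially/in complementary pairs; this is a
# simplified illustration only.)
def ensemble_modes(signal, n_realizations=50, noise_std=0.2, max_imf=6):
    decomposer = EMD()
    stacked = []
    for _ in range(n_realizations):
        noisy = signal + noise_std * signal.std() * np.random.randn(signal.size)
        imfs = decomposer.emd(noisy, max_imf=max_imf)
        stacked.append(imfs)
    k = min(m.shape[0] for m in stacked)          # common number of modes
    return np.mean([m[:k] for m in stacked], axis=0)

t = np.linspace(0, 1, 512)
trace = np.sin(2 * np.pi * 40 * t) * np.exp(-3 * t) + 0.3 * np.random.randn(t.size)
modes = ensemble_modes(trace)
denoised = modes[1:].sum(axis=0)   # drop the highest-frequency (noisiest) mode
```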

  18. Coconut oil extraction by the Java method: An investigation of its potential application in aqueous Jatropha oil extraction

    NARCIS (Netherlands)

    Marasabessy, A.; Moeis, M.R.; Sanders, J.P.M.; Weusthuis, R.A.

    2010-01-01

    A traditional Java method of coconut oil extraction assisted by paddy crabs was investigated to find out if crabs or crab-derived components can be used to extract oil from Jatropha curcas seed kernels. Using the traditional Java method the addition of crab paste liberated 54% w w-1 oil from grated

  19. Coconut oil extraction by the traditional Java method : An investigation of its potential application in aqueous Jatropha oil extraction

    NARCIS (Netherlands)

    Marasabessy, Ahmad; Moeis, Maelita R.; Sanders, Johan P. M.; Weusthuis, Ruud A.

    A traditional Java method of coconut oil extraction assisted by paddy crabs was investigated to find out if crabs or crab-derived components can be used to extract oil from Jatropha curcas seed kernels. Using the traditional Java method the addition of crab paste liberated 54% w w(-1) oil from

  20. Highly efficient DNA extraction method from skeletal remains

    Directory of Open Access Journals (Sweden)

    Irena Zupanič Pajnič

    2011-03-01

    Full Text Available Background: This paper precisely describes the method of DNA extraction developed to acquire high-quality DNA from Second World War skeletal remains. The same method is also used for the molecular genetic identification of unknown decomposed bodies in routine forensic casework, where only bones and teeth are suitable for DNA typing. We analysed 109 bones and two teeth from WWII mass graves in Slovenia. Methods: We cleaned the bones and teeth, removed surface contaminants and ground the bones into powder using liquid nitrogen. Prior to isolating the DNA in parallel using the BioRobot EZ1 (Qiagen), the powder was decalcified for three days. The nuclear DNA of the samples was quantified by a real-time PCR method. We acquired autosomal genetic profiles and Y-chromosome haplotypes of the bones and teeth by PCR amplification of microsatellites, as well as mtDNA haplotypes. For the purpose of traceability in the event of contamination, we prepared elimination databases including the nuclear and mtDNA genetic profiles of all persons who had been in contact with the skeletal remains in any way. Results: We extracted up to 55 ng DNA/g from the teeth, up to 100 ng DNA/g from the femurs, up to 30 ng DNA/g from the tibias and up to 0.5 ng DNA/g from the humeri. The typing of autosomal and Y-STR loci was successful in all of the teeth, in 98 % of the femurs, and in 75 % to 81 % of the tibias and humeri. The typing of mtDNA was successful in all of the teeth and in 96 % to 98 % of the bones. Conclusions: We managed to obtain nuclear DNA suitable for successful STR typing from skeletal remains that were over 60 years old. The method of DNA extraction described here has proved to be highly efficient. We obtained 0.8 to 100 ng DNA/g of teeth or bones and complete autosomal DNA genetic profiles, Y-STR haplotypes, and mtDNA haplotypes from only 0.5 g bone and tooth samples.

  1. Alternative and Efficient Extraction Methods for Marine-Derived Compounds

    Directory of Open Access Journals (Sweden)

    Clara Grosso

    2015-05-01

    Full Text Available Marine ecosystems cover more than 70% of the globe’s surface. These habitats are occupied by a great diversity of marine organisms that produce highly structural diverse metabolites as a defense mechanism. In the last decades, these metabolites have been extracted and isolated in order to test them in different bioassays and assess their potential to fight human diseases. Since traditional extraction techniques are both solvent- and time-consuming, this review emphasizes alternative extraction techniques, such as supercritical fluid extraction, pressurized solvent extraction, microwave-assisted extraction, ultrasound-assisted extraction, pulsed electric field-assisted extraction, enzyme-assisted extraction, and extraction with switchable solvents and ionic liquids, applied in the search for marine compounds. Only studies published in the 21st century are considered.

  2. DNA extraction on bio-chip: history and preeminence over conventional and solid-phase extraction methods.

    Science.gov (United States)

    Ayoib, Adilah; Hashim, Uda; Gopinath, Subash C B; Md Arshad, M K

    2017-11-01

    This review covers the developmental progression from early to modern taxonomy at the cellular level, following the advent of electron microscopy, and the advancement of deoxyribonucleic acid (DNA) extraction for the elaboration of biological classification at the DNA level. Here, we discuss the fundamental values of conventional chemical methods of DNA extraction using liquid/liquid extraction (LLE), followed by the development of solid-phase extraction (SPE) methods, as well as recent advances in microfluidic device-based systems for DNA extraction on-chip. We also discuss the importance of DNA extraction, the advantages of on-chip extraction over conventional chemical methods, and how the Lab-on-a-Chip (LOC) system plays a crucial role in future achievements.

  3. Extracting historical time periods from the Web

    NARCIS (Netherlands)

    de Boer, V.; van Someren, M.; Wielinga, B.J.

    2010-01-01

    In this work we present an automatic method for the extraction of time periods related to ontological concepts from the Web. The method consists of two parts: an Information Extraction phase and a Semantic Representation phase. In the Information Extraction phase, temporal information about events

  4. DEVELOPMENT OF THE EFFECTIVE METHOD FOR THE EXTRACTION OF SUCROSE

    Directory of Open Access Journals (Sweden)

    N. G. Kulneva

    2014-01-01

    Full Text Available Summary. The use of slanted diffusers is accompanied by irregular heating of the juice-chip mixture along the unit length, which reduces the degree of sucrose extraction from the chips, promotes intensive growth of microorganisms inside the apparatuses, and increases both the sucrose losses during extraction and the duration of the whole process. A method for pre-processing beet chips with hot solutions of chemical agents prior to extraction was suggested. It was experimentally found that the best quality indicators belong to the juice obtained from chips treated with a 0.05 % solution of aluminum sulfate or with a 0.10 % bleach solution. Thermal processing of beet chips with solutions of Al2(SO4)3 at a concentration of 0.05 % and bleach at a concentration of 0.10 % results in gradual, uniform heating of the beet chips and denaturation of the proteins, which increases the mass transfer coefficient of the sugar beet tissue and its permeability. Washing the surface of the beet chips with aluminum sulfate solution reduces the solubility of the protein and pectin substances, increasing the strength and elasticity of the beet chips. The pH of the medium is stabilized, which reduces the transfer of non-sugars from the beet chips into the diffusion juice during sucrose extraction. The combination of thermal and chemical treatment makes it possible to stabilize the colloids of the sugar beet tissue, to heat the beet chips to the optimum diffusion temperature of 70-72 °C before they enter the diffusion apparatus, and to improve their structural and mechanical properties. The use of preliminary heat treatment of beet chips: improves the efficiency of the diffusion processes; blocks the transfer of substances of the protein-pectin complex of the beet chips into the raw juice, whereby their content in the diffusion juice is reduced; reduces the color of the purified juice by 15.1 % and the content of calcium salts by 31.3 % in comparison with the standard method; improves the purity of the purified

  5. Data base structure and Management for Automatic Calculation of 210Pb Dating Methods Applying Different Models

    International Nuclear Information System (INIS)

    Gasco, C.; Anton, M. P.; Ampudia, J.

    2003-01-01

    The introduction of macros in the calculation sheets allows the automatic application of various dating models using unsupported 210Pb data from a database. The calculation books that contain the models have been modified to permit the implementation of these macros. The Marine and Aquatic Radioecology Group of CIEMAT (MARG) will be involved in new European projects, thus new models have been developed. This report contains a detailed description of: a) the newly implemented macros, b) the design of a dating menu in the calculation sheet and c) the organization and structure of the database. (Author) 4 refs
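
    As an example of the kind of model such macros automate, a hedged sketch of the constant-rate-of-supply (CRS) 210Pb dating calculation is shown below; the input values are invented and the code is not the CIEMAT implementation.

```python
import numpy as np

# Sketch of the constant-rate-of-supply (CRS) 210Pb dating model: the age at a
# given depth is t = (1/lambda) * ln(A(0)/A(x)), where A(x) is the cumulative
# unsupported 210Pb inventory below that depth. Inputs below are made up.
LAMBDA_PB210 = np.log(2) / 22.3        # 210Pb decay constant, yr^-1 (half-life 22.3 yr)

def crs_ages(mass_depth_increments, unsupported_activity):
    inventory = mass_depth_increments * unsupported_activity     # layer inventories
    cumulative_below = np.cumsum(inventory[::-1])[::-1]          # A(x)
    total = cumulative_below[0]                                  # A(0)
    return (1.0 / LAMBDA_PB210) * np.log(total / cumulative_below)

dm = np.full(8, 0.5)                                  # equal mass-depth layers (g cm^-2)
act = np.array([120, 95, 70, 52, 36, 24, 15, 9.0])    # decreasing unsupported 210Pb activity
print(crs_ages(dm, act))                              # ages (years) at the top of each layer
```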

  6. Automatic Detection of Microaneurysms in Color Fundus Images using a Local Radon Transform Method

    OpenAIRE

    Hamid Reza Pourreza; Mohammad Hossein Bahreyni Toossi; Alireza Mehdizadeh; Reza Pourreza; Meysam Tavakoli

    2009-01-01

    Introduction: Diabetic retinopathy (DR) is one of the most serious and most frequent eye diseases in the world and the most common cause of blindness in adults between 20 and 60 years of age. Following 15 years of diabetes, about 2% of the diabetic patients are blind and 10% suffer from vision impairment due to DR complications. This paper addresses the automatic detection of microaneurysms (MA) in color fundus images, which plays a key role in computer-assisted early diagnosis of diabetic re...

  7. Study on the Automatic Detection Method and System of Multifunctional Hydrocephalus Shunt

    Science.gov (United States)

    Sun, Xuan; Wang, Guangzhen; Dong, Quancheng; Li, Yuzhong

    2017-07-01

    To address the difficulties of micro-pressure detection and micro-flow control in the testing of hydrocephalus shunts, the principles of shunt performance detection were analyzed. In this study, the authors analyzed the principles of several shunt performance test items and used an advanced micro-pressure sensor and a micro-flow peristaltic pump to overcome the challenges of micro-pressure detection and micro-flow control. Many common experimental procedures were also integrated, and an automatic detection system covering the shunt performance tests was successfully developed, achieving testing with high precision, high efficiency and automation.

  8. How far away is far enough for extracting numerical waveforms, and how much do they depend on the extraction method?

    International Nuclear Information System (INIS)

    Pazos, Enrique; Dorband, Ernst Nils; Nagar, Alessandro; Palenzuela, Carlos; Schnetter, Erik; Tiglio, Manuel

    2007-01-01

    We present a method for extracting gravitational waves from numerical spacetimes which generalizes and refines one of the standard methods based on the Regge-Wheeler-Zerilli perturbation formalism. At the analytical level, this generalization allows a much more general class of slicing conditions for the background geometry, and is thus not restricted to Schwarzschild-like coordinates. At the numerical level, our approach uses high-order multi-block methods, which improve the accuracy of both our simulations and our extraction procedure. In particular, the latter is simplified since there is no need for interpolation, and we can afford to extract accurate waves at large radii with little additional computational effort. We then present fully nonlinear three-dimensional numerical evolutions of a distorted Schwarzschild black hole in Kerr-Schild coordinates with an odd parity perturbation and analyse the improvement that we gain from our generalized wave extraction, comparing our new method to the standard one. In particular, we analyse in detail the quasinormal frequencies of the extracted waves, using both methods. We do so by comparing the extracted waves with one-dimensional high resolution solutions of the corresponding generalized Regge-Wheeler equation. We explicitly see that the errors in the waveforms extracted with the standard method at fixed, finite extraction radii do not converge to zero with increasing resolution. We find that even with observers as far out as R = 80M (which is larger than what is commonly used in state-of-the-art simulations), the assumption in the standard method that the background is close to having Schwarzschild-like coordinates increases the error in the extracted waves considerably. Furthermore, those errors are dominated by the extraction method itself and not by the accuracy of our simulations. For extraction radii between 20M and 80M and for the resolutions that we use in this paper, our new method decreases the errors
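
    One downstream step mentioned above, reading a quasinormal-mode frequency off an extracted waveform, can be illustrated by fitting a damped sinusoid. The sketch below uses a synthetic signal and is not the Regge-Wheeler-Zerilli extraction itself.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative quasinormal-mode read-off: fit a single damped sinusoid
# psi(t) = A exp(-t/tau) cos(omega t + phi) to a (here synthetic) waveform.
def damped_sinusoid(t, A, tau, omega, phi):
    return A * np.exp(-t / tau) * np.cos(omega * t + phi)

t = np.linspace(0, 60, 1200)
psi = damped_sinusoid(t, 1.0, 12.0, 0.37, 0.4) + 0.01 * np.random.randn(t.size)
p0 = (1.0, 10.0, 0.3, 0.0)                           # rough initial guess
popt, _ = curve_fit(damped_sinusoid, t, psi, p0=p0)
print("omega =", popt[2], " damping time tau =", popt[1])
```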

  9. How far away is far enough for extracting numerical waveforms, and how much do they depend on the extraction method?

    Energy Technology Data Exchange (ETDEWEB)

    Pazos, Enrique [Department of Physics and Astronomy, 202 Nicholson Hall, Louisiana State University, Baton Rouge, LA 70803 (United States); Dorband, Ernst Nils [Department of Physics and Astronomy, 202 Nicholson Hall, Louisiana State University, Baton Rouge, LA 70803 (United States); Nagar, Alessandro [Dipartimento di Fisica, Politecnico di Torino, Corso Duca Degli Abruzzi 24, 10129 Torino (Italy); Palenzuela, Carlos [Department of Physics and Astronomy, 202 Nicholson Hall, Louisiana State University, Baton Rouge, LA 70803 (United States); Schnetter, Erik [Center for Computation and Technology, 216 Johnston Hall, Louisiana State University, Baton Rouge, LA 70803 (United States); Tiglio, Manuel [Department of Physics and Astronomy, 202 Nicholson Hall, Louisiana State University, Baton Rouge, LA 70803 (United States)

    2007-06-21

    We present a method for extracting gravitational waves from numerical spacetimes which generalizes and refines one of the standard methods based on the Regge-Wheeler-Zerilli perturbation formalism. At the analytical level, this generalization allows a much more general class of slicing conditions for the background geometry, and is thus not restricted to Schwarzschild-like coordinates. At the numerical level, our approach uses high-order multi-block methods, which improve the accuracy of both our simulations and our extraction procedure. In particular, the latter is simplified since there is no need for interpolation, and we can afford to extract accurate waves at large radii with little additional computational effort. We then present fully nonlinear three-dimensional numerical evolutions of a distorted Schwarzschild black hole in Kerr-Schild coordinates with an odd parity perturbation and analyse the improvement that we gain from our generalized wave extraction, comparing our new method to the standard one. In particular, we analyse in detail the quasinormal frequencies of the extracted waves, using both methods. We do so by comparing the extracted waves with one-dimensional high resolution solutions of the corresponding generalized Regge-Wheeler equation. We explicitly see that the errors in the waveforms extracted with the standard method at fixed, finite extraction radii do not converge to zero with increasing resolution. We find that even with observers as far out as R = 80M (which is larger than what is commonly used in state-of-the-art simulations), the assumption in the standard method that the background is close to having Schwarzschild-like coordinates increases the error in the extracted waves considerably. Furthermore, those errors are dominated by the extraction method itself and not by the accuracy of our simulations. For extraction radii between 20M and 80M and for the resolutions that we use in this paper, our new method decreases the errors

  10. [Comparison study of different methods for extracting volatile oil from bergamot].

    Science.gov (United States)

    Chen, Fei; Li, Qun-li; Sheng, Liu-qing; Qiu, Jiao-ying

    2008-08-01

    To compare different methods for extracting volatile oil from bergamot, the determination of bergapten was carried out by RP-HPLC. Four extraction methods were compared: organic solvent extraction, steam-input distillation, distillation of the material mixed with water, and press extraction. Bergapten was not extracted by steam-input distillation or by distillation of the material mixed with water. Steam distillation can therefore be used to extract volatile oil from bergamot while protecting human skin.

  11. Polyphenols: Extraction Methods, Antioxidative Action, Bioavailability and Anticarcinogenic Effects

    Directory of Open Access Journals (Sweden)

    Eva Brglez Mojzer

    2016-07-01

    Full Text Available Being secondary plant metabolites, polyphenols represent a large and diverse group of substances abundantly present in a majority of fruits, herbs and vegetables. The current contribution is focused on their bioavailability, antioxidative and anticarcinogenic properties. An overview of extraction methods is also given, with supercritical fluid extraction highlighted as a promising eco-friendly alternative providing exceptional separation and protection from degradation of unstable polyphenols. The protective role of polyphenols against reactive oxygen and nitrogen species, UV light, plant pathogens, parasites and predators results in several beneficial biological activities giving rise to prophylaxis or possibly even to a cure for several prevailing human diseases, especially various cancer types. Omnipresence, specificity of the response and the absence of or low toxicity are crucial advantages of polyphenols as anticancer agents. The main problem represents their low bioavailability and rapid metabolism. One of the promising solutions lies in nanoformulation of polyphenols that prevents their degradation and thus enables significantly higher concentrations to reach the target cells. Another, more practiced, solution is the use of mixtures of various polyphenols that bring synergistic effects, resulting in lowering of the required therapeutic dose and in multitargeted action. The combination of polyphenols with existing drugs and therapies also shows promising results and significantly reduces their toxicity.

  12. A window-based time series feature extraction method.

    Science.gov (United States)

    Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife

    2017-10-01

    This study proposes a robust similarity score-based time series feature extraction method termed Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves significantly low computational complexity, thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and three precordial leads from a publicly available electrocardiogram (ECG) dataset. WTC is then compared, in terms of predictive accuracy and computational complexity, with the shapelet transform and the fast shapelet transform (an accelerated variant of the shapelet transform). The results indicate that WTC achieves slightly higher classification performance with significantly lower execution time than its shapelet-based alternatives. With respect to its interpretable features, WTC has the potential to enable medical experts to explore definitive common trends in novel datasets. Copyright © 2017 Elsevier Ltd. All rights reserved.
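
    A generic sketch of a window-based similarity-score feature, in the spirit of the description above but not the published WTC algorithm, might look as follows; the window width, step and reference windows are arbitrary choices.

```python
import numpy as np

# Generic window-based feature idea (NOT the published WTC algorithm): slide a
# fixed-length window over each series, compute a z-normalized distance to a set
# of reference windows, and use the best-match scores as features.
def znorm(x):
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def window_features(series, references, width=32, step=8):
    feats = []
    for ref in references:
        best = np.inf
        r = znorm(ref)
        for start in range(0, len(series) - width + 1, step):
            w = znorm(series[start:start + width])
            best = min(best, np.linalg.norm(w - r))
        feats.append(best)                      # one similarity score per reference window
    return np.array(feats)

rng = np.random.default_rng(1)
refs = [np.sin(np.linspace(0, 2 * np.pi, 32)), rng.standard_normal(32)]
x = np.concatenate([rng.standard_normal(100), np.sin(np.linspace(0, 2 * np.pi, 32))])
print(window_features(x, refs))                 # a low first score means a sine-like segment was found
```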

  13. Analytical Validation of a New Enzymatic and Automatable Method for d-Xylose Measurement in Human Urine Samples

    Directory of Open Access Journals (Sweden)

    Israel Sánchez-Moreno

    2017-01-01

    Full Text Available Hypolactasia, or intestinal lactase deficiency, affects more than half of the world population. Currently, xylose quantification in urine after gaxilose oral administration for the noninvasive diagnosis of hypolactasia is performed with the hand-operated nonautomatable phloroglucinol reaction. This work demonstrates that a new enzymatic xylose quantification method, based on the activity of xylose dehydrogenase from Caulobacter crescentus, represents an excellent alternative to the manual phloroglucinol reaction. The new method is automatable and facilitates the use of the gaxilose test for hypolactasia diagnosis in the clinical practice. The analytical validation of the new technique was performed in three different autoanalyzers, using buffer or urine samples spiked with different xylose concentrations. For the comparison between the phloroglucinol and the enzymatic assays, 224 urine samples of patients to whom the gaxilose test had been prescribed were assayed by both methods. A mean bias of −16.08 mg of xylose was observed when comparing the results obtained by both techniques. After adjusting the cut-off of the enzymatic method to 19.18 mg of xylose, the Kappa coefficient was found to be 0.9531, indicating an excellent level of agreement between both analytical procedures. This new assay represents the first automatable enzymatic technique validated for xylose quantification in urine.
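
    The agreement analysis described above can be sketched as follows; the data are simulated, the phloroglucinol cut-off is a placeholder, and only the 19.18 mg enzymatic cut-off comes from the text.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Sketch of the inter-method agreement step: apply each assay's positivity
# cut-off, then measure agreement with Cohen's kappa. Values are simulated.
rng = np.random.default_rng(3)
xylose_phloroglucinol = rng.gamma(4.0, 8.0, 224)                  # mg xylose, reference assay
xylose_enzymatic = xylose_phloroglucinol - 16.08 + rng.normal(0, 2.0, 224)  # mean bias from the text

positive_reference = xylose_phloroglucinol >= 35.0                # placeholder cut-off
positive_enzymatic = xylose_enzymatic >= 19.18                    # adjusted cut-off from the study
print("kappa =", cohen_kappa_score(positive_reference, positive_enzymatic))
```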

  14. A SIMPLE METHOD FOR THE EXTRACTION AND QUANTIFICATION OF PHOTOPIGMENTS FROM SYMBIODINIUM SPP.

    Science.gov (United States)

    John E. Rogers and Dragoslav Marcovich. Submitted. Simple Method for the Extraction and Quantification of Photopigments from Symbiodinium spp. Limnol. Oceanogr. Methods. 19 p. (ERL, GB 1192). We have developed a simple, mild extraction procedure using methanol which, when...

  15. Automatic Prosodic Segmentation by F0 Clustering Using Superpositional Modeling.

    OpenAIRE

    Nakai, Mitsuru; Singer, Harald; Sagisaka, Yoshinori; Shimodaira, Hiroshi

    1995-01-01

    In this paper, we propose an automatic method for detecting accent phrase boundaries in Japanese continuous speech by using F0 information. In the training phase, hand-labeled accent patterns are parameterized according to a superpositional model proposed by Fujisaki and assigned to clusters by a clustering method, in which accent templates are calculated as the centroid of each cluster. In the segmentation phase, automatic N-best extraction of boundaries is performed...

  16. Method for quantifying the uncertainty with the extraction of the raw data of a gamma ray spectrum by deconvolution software

    International Nuclear Information System (INIS)

    Vigineix, Thomas; Guillot, Nicolas; Saurel, Nicolas

    2013-06-01

    Gamma ray spectrometry is a passive non-destructive assay most commonly used to identify and quantify the radionuclides present in large, complex objects such as nuclear waste packages. The treatment of spectra from the measurement of nuclear waste is done in two steps: the first step is to extract the raw data from the spectra (energies and net areas of the photoelectric absorption peaks), and the second step is to determine the detection efficiency of the measuring scene. Commercial software packages use different methods to extract the raw data from a spectrum, but none are optimal for the treatment of spectra containing actinides. Spectra must be handled individually and require settings and substantial feedback from the operator, which prevents automatic processing of the spectra and increases the risk of human error. In this context, the Nuclear Measurement and Valuation Laboratory (LMNE) at the Atomic Energy Commission Valduc (CEA Valduc) has developed a new methodology for quantifying the uncertainty associated with the extraction of the raw data from a spectrum. This methodology was applied with raw data and commercial software that need configuration by the operator (GENIE2000, Interwinner...). This robust and fully automated uncertainty-calculation methodology covers the entire processing chain of the software. For all peaks processed by the deconvolution software, the methodology ensures extraction of peak energies to within 2 channels and extraction of net areas with an uncertainty of less than 5 percent. The methodology was tested experimentally with actinide spectra. (authors)
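
    For orientation, the textbook counting-statistics treatment of a net peak area and its uncertainty is sketched below; this is a generic illustration, not the laboratory's automated methodology.

```python
import numpy as np

# Generic counting-statistics sketch: net photopeak area from a gross region of
# interest minus a flat baseline estimated on side channels, with the usual
# Poisson uncertainty propagation.
def net_peak_area(counts, peak_slice, left_bg_slice, right_bg_slice):
    gross = counts[peak_slice].sum()
    n_peak = peak_slice.stop - peak_slice.start
    bg_counts = np.concatenate([counts[left_bg_slice], counts[right_bg_slice]])
    n_bg = bg_counts.size
    baseline = bg_counts.mean() * n_peak                    # flat-baseline estimate
    net = gross - baseline
    sigma = np.sqrt(gross + (n_peak / n_bg) ** 2 * bg_counts.sum())
    return net, sigma

spectrum = np.random.poisson(50, 400).astype(float)
spectrum[195:206] += np.random.poisson(300, 11)             # synthetic photopeak
area, unc = net_peak_area(spectrum, slice(195, 206), slice(180, 190), slice(212, 222))
print(f"net area = {area:.0f} ± {unc:.0f} counts ({100 * unc / area:.1f} %)")
```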

  17. An efficient and cost-effective method for DNA extraction from athalassohaline soil using a newly formulated cell extraction buffer.

    Science.gov (United States)

    Narayan, Avinash; Jain, Kunal; Shah, Amita R; Madamwar, Datta

    2016-06-01

    The present study describes a rapid and efficient indirect-lysis method for environmental DNA extraction from athalassohaline soil using a newly formulated cell extraction buffer. The available methods are mostly based on direct lysis, which leads to DNA shearing and co-extraction of extracellular DNA that influences community and functional analyses. Moreover, during extraction of DNA by direct lysis from athalassohaline soil, it was observed that, upon addition of polyethylene glycol (PEG), isopropanol or absolute ethanol for DNA precipitation, salts precipitate out as well, significantly affecting DNA yield. An indirect-lysis method was therefore optimized for extraction of environmental DNA from such soil, which contains high salts and low microbial biomass (4.3 × 10⁴ CFU per gram of soil), using a newly formulated cell extraction buffer in combination with low- and high-speed centrifugation. The composition and concentration of the cell extraction buffer were optimized, and PEG 8000 (1 %; w/v) with 1 M NaCl gave the maximum cell mass for DNA extraction. The cell extraction efficiency was assessed by acridine orange staining of soil samples before and after cell extraction. The efficiency, reproducibility and purity of the DNA extracted by the newly developed procedure were compared with previously recognized methods and kits having different protocols, including indirect lysis. The extracted environmental DNA showed better yield (5.6 ± 0.7 μg g⁻¹) along with high purity ratios. The purity of the DNA was validated by assessing its usability in various molecular techniques such as restriction enzyme digestion, PCR amplification of the 16S rRNA gene, and UV-Visible spectroscopic analysis.

  18. Precision of a plutonium analytical method using solvent extraction and spectrophotometry

    International Nuclear Information System (INIS)

    Mendoza, P.G.; Jackson, D.D.; Niemczyk, T.M.

    1991-01-01

    The plutonium assay method that uses plutonyl trinitrate tetrapropyl-ammonium ion-pair solvent extraction with spectrophotometry of the extract was investigated as a candidate method capable of providing robustness and precision. To identify and assess the factors affecting precision, we examined sampling techniques, silver oxide oxidation conditions, extraction time, extract stability, and the temperature dependence of the extract analytical peak height and position. A precision of 0.12% was obtained. (author) 18 refs.; 2 figs

  19. Proactive Response to Potential Material Shortages Arising from Environmental Restrictions Using Automatic Discovery and Extraction of Information from Technical Documents

    Science.gov (United States)

    2012-12-21

    ... documents is via web links on manufacturer and distributor product catalogs on the web. These links were discovered using XSB’s focused crawler technology ... regulatory information with specific items ... Focused Crawler Data: In addition to PDF documents, we made use of data collected from the web using XSB, Inc. ... listed in the catalog. As a rule, the focused crawler extracts attributes and values as they appear on the web page. To be useful for this project ...

  20. Curvelet based automatic segmentation of supraspinatus tendon from ultrasound image: a focused assistive diagnostic method.

    Science.gov (United States)

    Gupta, Rishu; Elamvazuthi, Irraivan; Dass, Sarat Chandra; Faye, Ibrahima; Vasant, Pandian; George, John; Izza, Faizatul

    2014-12-04

    Disorders of the rotator cuff tendons result in acute pain, limiting the normal range of motion of the shoulder. Of all the tendons in the rotator cuff, the supraspinatus (SSP) tendon is the first to be affected by pathological changes. Diagnosis of the SSP tendon using ultrasound is considered to be operator dependent, with its accuracy related to the operator's level of experience. Automatic segmentation of SSP tendon ultrasound images was performed to provide a focused and more accurate diagnosis. Image processing techniques were employed for automatic segmentation of the SSP tendon, combining the curvelet transform with logical and morphological operators and area filtering. The segmentation assessment was performed using the true positive rate, the false positive rate and the segmentation accuracy. The specificity and sensitivity of the algorithm were tested for the diagnosis of partial thickness tears (PTTs) and full thickness tears (FTTs). The ultrasound images of the SSP tendon were taken from a medical center with the help of experienced radiologists. The algorithm was tested on 116 images taken from 51 different patients. The accuracy of segmentation of the SSP tendon was calculated to be 95.61% in accordance with the segmentation performed by radiologists, with a true positive rate of 91.37% and a false positive rate of 8.62%. The specificity and sensitivity were found to be 93.6% and 94%, and 95% and 95.6%, for partial thickness and full thickness tears, respectively. The proposed methodology was successfully tested on a database of more than 116 US images, for which radiologist assessment and validation were performed. The segmentation of the SSP tendon from ultrasound images helps in focused, accurate and more reliable diagnosis, which has been verified with the help of two experienced radiologists. The specificity and sensitivity for accurate detection of partial and full thickness tears have been considerably increased after segmentation when
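
    The morphological and area-filtering stage named above can be sketched with scikit-image as follows; the curvelet enhancement step is replaced here by a plain Otsu threshold, so this is only an illustration of the post-processing, not the published pipeline.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, remove_small_objects, disk

# Hedged sketch of the post-processing stage (morphological operators plus area
# filtering); the thresholded map stands in for the curvelet-enhanced image.
def rough_tendon_mask(image, min_area=500):
    mask = image > threshold_otsu(image)                    # stand-in for curvelet-enhanced map
    mask = binary_opening(mask, disk(3))                    # remove thin speckle
    mask = remove_small_objects(mask, min_size=min_area)    # area filtering
    return mask

img = np.random.rand(256, 256)
img[100:140, 60:200] += 1.5                                 # synthetic bright band ~ tendon
print(rough_tendon_mask(img).sum(), "pixels retained")
```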