WorldWideScience

Sample records for automatable method extract

  1. An automatic rat brain extraction method based on a deformable surface model.

    Science.gov (United States)

    Li, Jiehua; Liu, Xiaofeng; Zhuo, Jiachen; Gullapalli, Rao P; Zara, Jason M

    2013-08-15

    The extraction of the brain from the skull in medical images is a necessary first step before image registration or segmentation. While pre-clinical MR imaging studies on small animals, such as rats, are increasing, fully automatic image processing techniques specific to small animal studies remain lacking. In this paper, we present an automatic rat brain extraction method, the Rat Brain Deformable model method (RBD), which adapts the popular human Brain Extraction Tool (BET) through the incorporation of information on the brain geometry and MR image characteristics of the rat brain. The robustness of the method was demonstrated on T2-weighted MR images of 64 rats and compared with other brain extraction methods (BET, PCNN, PCNN-3D). The results demonstrate that RBD reliably extracts the rat brain with high accuracy (>92% volume overlap) and is robust against signal inhomogeneity in the images. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. 2D automatic body-fitted structured mesh generation using advancing extraction method

    Science.gov (United States)

    Zhang, Yaoxin; Jia, Yafei

    2018-01-01

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries that have a hierarchical tree-like topology with extrusion-like structures (i.e., branches or tributaries) and intrusion-like structures (i.e., peninsulas or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the convex-polygon block boundary of each sub-domain at each level can be extracted in an advancing scheme. In this paper, several examples are used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation, as well as the implementation of the method.

  3. Automatic Extraction of Urban Built-Up Area Based on Object-Oriented Method and Remote Sensing Data

    Science.gov (United States)

    Li, L.; Zhou, H.; Wen, Q.; Chen, T.; Guan, F.; Ren, B.; Yu, H.; Wang, Z.

    2018-04-01

    The built-up area marks the use of urban construction land in different periods of development, and its accurate extraction is key to studying the changes of urban expansion. This paper studies the automatic extraction of urban built-up areas based on an object-oriented method and remote sensing data, and realizes automatic extraction of the main built-up area of a city, which greatly saves manpower. First, construction land is extracted with the object-oriented method; the main technical steps are: (1) multi-resolution segmentation; (2) feature construction and selection; (3) information extraction of construction land based on a rule set. The characteristic parameters used in the rule set mainly include the mean of the red band (Mean R), the Normalized Difference Vegetation Index (NDVI), the Ratio of Residential Index (RRI), and the mean of the blue band (Mean B); through the combination of these parameters, the construction land information can be extracted. Then, based on the adaptability, distance, and area of the object domain, the urban built-up area can be quickly and accurately delineated from the construction land information, without depending on other data or expert knowledge, to achieve automatic extraction of the urban built-up area. Beijing was used as the experimental area for the proposed method; the results show that the built-up area was extracted automatically with a boundary accuracy of 2359.65 m, which meets the requirements. The automatic extraction of urban built-up areas is highly practical and can be applied to monitoring changes of a city's main built-up area.
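
    As an illustration of the kind of rule set described above, the following Python sketch combines NDVI, Mean R, Mean B, and an RRI-style ratio into a built-up mask. The thresholds and the RRI stand-in formula are illustrative assumptions, not the values used by the authors.

```python
import numpy as np

def builtup_rule_set(red, nir, blue, ndvi_max=0.2):
    """Toy rule set in the spirit of the abstract: combine NDVI, Mean R,
    Mean B and an RRI-style ratio to flag construction land. Thresholds
    and the RRI stand-in formula are illustrative, not the authors' values."""
    ndvi = (nir - red) / (nir + red + 1e-9)   # vegetation high, built-up low
    rri = red / (nir + 1e-9)                  # stand-in for a ratio-of-residential index
    return (ndvi < ndvi_max) & (red > red.mean()) & (blue > blue.mean()) & (rri > 1.0)

# usage on synthetic bands scaled to [0, 1]
red, nir, blue = (np.random.rand(100, 100) for _ in range(3))
builtup_mask = builtup_rule_set(red, nir, blue)
```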

  4. Automatic Knowledge Extraction and Knowledge Structuring for a National Term Bank

    DEFF Research Database (Denmark)

    Lassen, Tine; Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2011-01-01

    This paper gives an introduction to the plans and ongoing work in a project, the aim of which is to develop methods for automatic knowledge extraction and automatic construction and updating of ontologies. The project also aims at developing methods for automatic merging of terminological data from various existing sources, as well as methods for target group oriented knowledge dissemination. In this paper, we mainly focus on the plans for automatic knowledge extraction and knowledge structuring that will result in ontologies for a national term bank.

  5. Development of automatic extraction method of left ventricular contours on long axis view MR cine images

    International Nuclear Information System (INIS)

    Utsunomiya, Shinichi; Iijima, Naoto; Yamasaki, Kazunari; Fujita, Akinori

    1995-01-01

    In MRI cardiac function analysis, left ventricular volume curves and diagnostic parameters are obtained by extracting the left ventricular cavities as regions of interest (ROI) from long axis view MR cine images. The ROI extraction had to be done manually, because automating it is difficult. A long axis view left ventricular contour consists of a cardiac wall part and an aortic valve part. The difficulty stems from the low contrast of the cardiac wall part and the disappearance of the edge at the aortic valve part. In this paper, we report a new automatic extraction method for long axis view MR cine images, which needs only 3 manually indicated points on the first image to extract all the contours from the total sequence of images. First, candidate points of a contour are detected by edge detection. Then, by selecting the best-matched combination of candidate points with Dynamic Programming, the cardiac wall part is automatically extracted. The aortic valve part is manually extracted for the first image by indicating both end points, and is automatically extracted for the rest of the images by utilizing the motion characteristics of the aortic valve throughout a cardiac cycle. (author)
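
    The cardiac wall step selects the best-matched combination of edge candidates by Dynamic Programming. The sketch below is a generic version of that idea, assuming per-column candidate rows and edge costs; it is not the authors' implementation.

```python
import numpy as np

def dp_select(positions, costs, smooth=1.0):
    """Pick one edge candidate per image column so that total edge cost plus
    a smoothness penalty on row jumps between neighboring columns is minimal.
    positions[c] and costs[c] are 1-D arrays for the candidates in column c."""
    acc = np.asarray(costs[0], dtype=float)
    back = []
    for c in range(1, len(costs)):
        jump = smooth * np.abs(np.subtract.outer(positions[c - 1], positions[c]))
        total = acc[:, None] + jump              # (prev candidates) x (current candidates)
        back.append(total.argmin(axis=0))        # best predecessor per current candidate
        acc = total.min(axis=0) + np.asarray(costs[c], dtype=float)
    path = [int(np.argmin(acc))]
    for b in reversed(back):
        path.append(int(b[path[-1]]))
    return path[::-1]                            # chosen candidate index per column

# three columns of candidate rows with edge costs; the outlier row 30 is avoided
pos = [np.array([10., 12.]), np.array([11., 30.]), np.array([12., 13.])]
cost = [np.array([0.5, 0.1]), np.array([0.2, 0.1]), np.array([0.3, 0.2])]
print(dp_select(pos, cost))   # -> [1, 0, 0]
```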

  6. Automatic fault extraction using a modified ant-colony algorithm

    International Nuclear Information System (INIS)

    Zhao, Junsheng; Sun, Sam Zandong

    2013-01-01

    The basis of automatic fault extraction is seismic attributes, such as the coherence cube, in which a fault is typically identified by minimum values. The biggest challenge in automatic fault extraction is noise, including noise in the seismic data itself. However, a fault has better spatial continuity in a certain direction, which makes it quite different from noise. Considering this characteristic, a modified ant-colony algorithm is introduced into automatic fault identification and tracking, where the gradient direction and direction consistency are used as constraints. Numerical model tests show that this method is feasible and effective in automatic fault extraction and noise suppression. Application to field data further illustrates its validity and superiority. (paper)

  7. Automatic Contour Extraction from 2D Image

    Directory of Open Access Journals (Sweden)

    Panagiotis GIOANNIS

    2011-03-01

    Aim: To develop a method for automatic contour extraction from a 2D image. Material and Method: The method is divided into two basic parts, where the user initially chooses the starting point and the threshold. Finally, the method is applied to computed tomography images of bone. Results: An interesting method is developed which can lead to successful boundary extraction from 2D images. Specifically, data extracted from computed tomography images can be used for 2D bone reconstruction. Conclusions: We believe that such an algorithm, or part of it, can be applied to several other applications for shape feature extraction in medical image analysis and in computer graphics generally.

  8. A Method for Automatic Extracting Intracranial Region in MR Brain Image

    Science.gov (United States)

    Kurokawa, Keiji; Miura, Shin; Nishida, Makoto; Kageyama, Yoichi; Namura, Ikuro

    It is well known that the temporal lobe in MR brain images is used for estimating the grade of Alzheimer-type dementia, but it is difficult to isolate the temporal lobe region for this purpose. From the standpoint of supporting medical specialists, this paper proposes a data processing approach for the automatic extraction of the intracranial region from an MR brain image. The method eliminates the cranium region with the Laplacian histogram method, and the brainstem with feature points related to observations given by a medical specialist. In order to examine the usefulness of the proposed approach, the percentage of the temporal lobe in the intracranial region was calculated. As a result, the percentage of the temporal lobe in the intracranial region over the course of disease progression was in agreement with the visual standards for temporal lobe atrophy given by the medical specialist. It became clear that the intracranial region extracted by the proposed method is suitable for estimating the grade of Alzheimer-type dementia.

  9. The Fractal Patterns of Words in a Text: A Method for Automatic Keyword Extraction.

    Science.gov (United States)

    Najafi, Elham; Darooneh, Amir H

    2015-01-01

    A text can be considered as a one-dimensional array of words. The locations of each word type in this array form a fractal pattern with a certain fractal dimension. We observe that important words responsible for conveying the meaning of a text have dimensions considerably different from one, while the fractal dimensions of unimportant words are close to one. We introduce an index quantifying the importance of the words in a given text using their fractal dimensions, and then rank them according to their importance. This index measures the difference between the fractal pattern of a word in the original text relative to a shuffled version. Because the shuffled text is meaningless (i.e., words have no importance), the difference between the original and shuffled text can be used to ascertain the degree of fractality. The degree of fractality may be used for automatic keyword detection. Words with a degree of fractality higher than a threshold value are taken as the retrieved keywords of the text. We measure the efficiency of our method for keyword extraction, comparing our proposed method with two other well-known methods of automatic keyword extraction.
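
    The abstract does not specify the dimension estimator, so the sketch below assumes a simple box-counting estimate over word positions and compares it against a shuffled baseline, mirroring the described degree of fractality.

```python
import random
import numpy as np

def fractal_dimension(positions, text_length):
    """Box-counting estimate: count occupied boxes at several box sizes and
    fit log(count) against log(1/size)."""
    sizes = [2 ** k for k in range(1, int(np.log2(text_length)))]
    counts = [len({p // s for p in positions}) for s in sizes]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

def degree_of_fractality(words, target):
    """Difference between a word's dimension in the original and in a shuffled
    text, as the abstract describes (larger = more keyword-like). The target
    word must actually occur in the text."""
    n = len(words)
    orig = [i for i, w in enumerate(words) if w == target]
    shuffled = words[:]
    random.shuffle(shuffled)
    shuf = [i for i, w in enumerate(shuffled) if w == target]
    return abs(fractal_dimension(orig, n) - fractal_dimension(shuf, n))

words = ("the cat sat on the mat while the dog slept " * 40).split()
print(degree_of_fractality(words, "cat"))
```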

  10. The Fractal Patterns of Words in a Text: A Method for Automatic Keyword Extraction

    Science.gov (United States)

    Najafi, Elham; Darooneh, Amir H.

    2015-01-01

    A text can be considered as a one-dimensional array of words. The locations of each word type in this array form a fractal pattern with a certain fractal dimension. We observe that important words responsible for conveying the meaning of a text have dimensions considerably different from one, while the fractal dimensions of unimportant words are close to one. We introduce an index quantifying the importance of the words in a given text using their fractal dimensions, and then rank them according to their importance. This index measures the difference between the fractal pattern of a word in the original text relative to a shuffled version. Because the shuffled text is meaningless (i.e., words have no importance), the difference between the original and shuffled text can be used to ascertain the degree of fractality. The degree of fractality may be used for automatic keyword detection. Words with a degree of fractality higher than a threshold value are taken as the retrieved keywords of the text. We measure the efficiency of our method for keyword extraction, comparing our proposed method with two other well-known methods of automatic keyword extraction. PMID:26091207

  11. Automatic extraction of left ventricle in SPECT myocardial perfusion imaging

    International Nuclear Information System (INIS)

    Liu Li; Zhao Shujun; Yao Zhiming; Wang Daoyu

    1999-01-01

    An automatic method for extracting the left ventricle from SPECT myocardial perfusion data is introduced. The method is based on least-squares analysis of the positions of all short-axis slice pixels using a half sphere-cylinder myocardial model, and uses an iterative reconstruction technique to automatically remove non-left-ventricular tissue from the perfusion images. This technique thereby provides the basis for further quantitative analysis.

  12. AN EFFICIENT METHOD FOR AUTOMATIC ROAD EXTRACTION BASED ON MULTIPLE FEATURES FROM LiDAR DATA

    Directory of Open Access Journals (Sweden)

    Y. Li

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least-squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the “Urban Classification and 3D Building Reconstruction” project, was selected. The experimental results show that our method achieves the same performance in less time in road extraction using LiDAR data.

  13. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    Science.gov (United States)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data has some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least-squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time in road extraction using LiDAR data.
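
    Step (2), local principal component analysis with least-squares fitting, can be sketched as follows: for each candidate road-center point, PCA over its neighborhood yields a linearity score and a dominant direction. The radius and linearity threshold are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def linear_primitives(points, radius=2.0, min_linearity=0.9):
    """For each candidate road-center point, run PCA on its neighborhood and
    keep points whose neighborhoods are strongly linear, returning the local
    dominant direction -- the raw material for centerline primitives."""
    prims = []
    for p in points:
        nbrs = points[np.linalg.norm(points - p, axis=1) < radius]
        if len(nbrs) < 3:
            continue
        w, v = np.linalg.eigh(np.cov((nbrs - nbrs.mean(axis=0)).T))
        linearity = (w[-1] - w[-2]) / (w[-1] + 1e-12)   # 1 = perfect line
        if linearity > min_linearity:
            prims.append((p, v[:, -1]))                 # point + direction
    return prims

pts = np.vstack([np.c_[np.linspace(0, 10, 50), np.zeros(50)],  # a straight "road"
                 np.random.rand(20, 2) * 10])                   # clutter
print(len(linear_primitives(pts)))
```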

  14. Automatic River Network Extraction from LIDAR Data

    Science.gov (United States)

    Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.

    2016-06-01

    The National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) hydrography theme. The goal is to obtain an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: generation of hydrological terrain models with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network); the production was then launched. The key points of this work have been managing a big data environment of more than 160,000 LiDAR data files, the infrastructure to store (up to 40 Tb between results and intermediate files) and process the data using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months; software stability (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and human resources management were also important. The result of this production is an accurate automatic river network extraction for the whole country with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production, and its advantages over traditional vector extraction systems.
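
    The hydrological criterion mentioned above (a flow accumulation river network) can be illustrated with a toy D8 flow accumulation over a small DEM; the 2 m hydrological terrain models and production tooling of the paper are, of course, not reproduced here.

```python
import numpy as np

def d8_flow_accumulation(dem):
    """Toy D8 flow accumulation: every cell passes its accumulated area to its
    steepest downslope neighbor; processing cells from high to low elevation
    guarantees upstream contributions arrive first."""
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=float)          # each cell contributes itself
    for idx in np.argsort(dem, axis=None)[::-1]:  # highest elevation first
        r, c = divmod(int(idx), cols)
        best, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if drop > best:
                        best, target = drop, (rr, cc)
        if target:
            acc[target] += acc[r, c]
    return acc

dem = np.random.rand(50, 50)
rivers = d8_flow_accumulation(dem) > 25           # illustrative channel threshold
```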

  15. AUTOMATIC RIVER NETWORK EXTRACTION FROM LIDAR DATA

    Directory of Open Access Journals (Sweden)

    E. N. Maderal

    2016-06-01

    The National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) hydrography theme. The goal is to obtain an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: generation of hydrological terrain models with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network); the production was then launched. The key points of this work have been managing a big data environment of more than 160,000 LiDAR data files, the infrastructure to store (up to 40 Tb between results and intermediate files) and process the data using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months; software stability (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and human resources management were also important. The result of this production is an accurate automatic river network extraction for the whole country with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production, and its advantages over traditional vector extraction systems.

  16. Automatic sentence extraction for the detection of scientific paper relations

    Science.gov (United States)

    Sibaroni, Y.; Prasetiyowati, S. S.; Miftachudin, M.

    2018-03-01

    The relations between scientific papers are very useful for researchers to quickly see the interconnections between papers. By observing inter-article relationships, researchers can identify, among other things, the weaknesses of existing research, the performance improvements achieved to date, and the tools or data typically used in research in specific fields. So far, the methods developed to detect paper relations include machine learning and rule-based methods. However, a problem remains in the process of sentence extraction from scientific paper documents, which is still done manually. This manual process makes the detection of paper relations slow and inefficient. To overcome this problem, this study performs automatic sentence extraction, while the paper relations are identified based on the citation sentences. The performance of the built system is then compared with that of the manual extraction system. The analysis results suggest that automatic sentence extraction achieves a very high level of performance in the detection of paper relations, close to that of manual sentence extraction.

  17. Automatic extraction of forward stroke volume using dynamic PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik

    Background: Dynamic PET can be used to extract forward stroke volume (FSV) by the indicator dilution principle. The technique employed can be automated, is in theory independent of the tracer used, and may therefore be added to any dynamic cardiac PET protocol. The aim of this study was to validate automated methods for extracting FSV directly from dynamic PET studies for two different tracers and to examine potential scanner hardware bias. Methods: 21 subjects underwent a dynamic 27 min 11C-acetate PET scan on a Siemens Biograph TruePoint 64 PET/CT scanner (scanner I). In addition, 8 subjects underwent a dynamic 6 min 15O-water PET scan followed by a 27 min 11C-acetate PET scan on a GE Discovery ST PET/CT scanner (scanner II). The LV-aortic time-activity curve (TAC) was extracted automatically from dynamic PET data using cluster analysis. The first-pass peak was isolated by automatic extrapolation of the downslope of the TAC.

  18. A method for automatically extracting infectious disease-related primers and probes from the literature

    Directory of Open Access Journals (Sweden)

    Pérez-Rey David

    2010-08-01

    Background: Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and prescription of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences; therefore, it is becoming increasingly important for researchers to navigate this information. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections, (2) detect the candidate sequences using a set of finite state machine-based recognizers, (3) refine problem sequences using a rule-based expert system, and (4) annotate the extracted sequences with their related organism/gene information. Results: We tested our approach using a test set composed of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The results of the evaluation show that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions: We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose and prescribe different infectious diseases. In addition, the proposed method can be expanded to detect and extract other biological sequences from the literature. The extracted information can also be used to readily update available primer/probe databases or to create new databases from scratch.
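
    Phase (2) uses finite state machine-based recognizers; since regular expressions are themselves finite state machines, a character-class run of IUPAC nucleotide codes gives a minimal stand-in for the candidate-sequence detection step. The primer in the example below is invented for illustration.

```python
import re

IUPAC = "ACGTURYSWKMBDHVN"                          # nucleotide codes
PRIMER_RE = re.compile(rf"\b[{IUPAC}]{{15,35}}\b")  # typical primer lengths

def candidate_primers(text):
    """Return runs of IUPAC symbols that look like primer sequences."""
    return PRIMER_RE.findall(text.upper())

print(candidate_primers(
    "The forward primer 5'-TGGTAGCTCTTGATCANGGTTTA-3' was used for PCR."))
```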

  19. Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud

    Science.gov (United States)

    Chen, Jianqin; Zhu, Hehua; Li, Xiaojun

    2016-10-01

    This paper presents a new method for extracting discontinuity orientation automatically from rock mass surface 3D point clouds. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using the Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of the discontinuity plane. The method is first validated on the point cloud of a small piece of a rock slope acquired by photogrammetry, with the extracted discontinuity orientations compared against field measurements. It is then applied to publicly available LiDAR data of a road-cut rock slope from the Rockbench repository, and the extracted discontinuity orientations are compared with those from the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable, highly accurate, and able to meet engineering needs.
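
    Steps (3) and (4), RANSAC plane fitting and conversion of the fitted plane to an orientation, can be sketched as follows. This is a generic RANSAC, not the authors' code, and it assumes coordinates with x = East, y = North, z = up.

```python
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.02, seed=0):
    """RANSAC plane fit: sample 3 points, form the plane normal, count inliers
    within tol of the plane, keep the best model."""
    rng = np.random.default_rng(seed)
    best_normal, best_inliers = None, 0
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue                                  # degenerate sample
        n = n / np.linalg.norm(n)
        inliers = np.sum(np.abs((points - p0) @ n) < tol)
        if inliers > best_inliers:
            best_normal, best_inliers = n, inliers
    return best_normal

def orientation(normal):
    """Unit plane normal (x=E, y=N, z=up) -> (dip direction, dip) in degrees."""
    nx, ny, nz = normal if normal[2] >= 0 else -normal
    return np.degrees(np.arctan2(nx, ny)) % 360.0, np.degrees(np.arccos(nz))

# synthetic patch dipping 30 degrees toward the east, plus noise
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, (200, 2))
z = -np.tan(np.radians(30)) * xy[:, 0] + 0.005 * rng.standard_normal(200)
print(orientation(ransac_plane(np.c_[xy, z])))        # ~ (90.0, 30.0)
```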

  20. Developing an Intelligent Automatic Appendix Extraction Method from Ultrasonography Based on Fuzzy ART and Image Processing

    Directory of Open Access Journals (Sweden)

    Kwang Baek Kim

    2015-01-01

    Ultrasound examination (US) plays a key role in the diagnosis and management of patients with clinically suspected appendicitis, the most common abdominal surgical emergency. Among the various sonographic findings of appendicitis, the outer diameter of the appendix is the most important; therefore, clear delineation of the appendix on US images is essential. In this paper, we propose a new intelligent method to extract the appendix automatically from abdominal sonographic images as a basic building block of an intelligent tool for medical practitioners. Knowing that the appendix is located in the lower organ area below the bottom fascia line, we conduct a series of image processing techniques to find the fascia line correctly. We then apply the fuzzy ART learning algorithm to the organ area in order to extract the appendix accurately. The experiment verifies that the proposed method is highly accurate (successful in 38 out of 40 cases) in extracting the appendix.

  1. Algorithm based on regional separation for automatic grain boundary extraction using improved mean shift method

    Science.gov (United States)

    Zhenying, Xu; Jiandong, Zhu; Qi, Zhang; Yamba, Philip

    2018-06-01

    Metallographic microscopy shows that the vast majority of metal materials are composed of many small grains; the grain size of a metal is important for determining the tensile strength, toughness, plasticity, and other mechanical properties. In order to quantitatively evaluate grain size in metals, grain boundaries must be identified in metallographic images. Based on the phenomenon of grain boundary blurring or disconnection in metallographic images, this study develops an algorithm based on regional separation for automatically extracting grain boundaries by an improved mean shift method. Experimental observation shows that the grain boundaries obtained by the proposed algorithm are highly complete and accurate. This research has practical value because the proposed algorithm is suitable for grain boundary extraction from most metallographic images.

  2. Automatic Road Centerline Extraction from Imagery Using Road GPS Data

    Directory of Open Access Journals (Sweden)

    Chuqing Cao

    2014-09-01

    Road centerline extraction from imagery constitutes a key element in numerous geospatial applications and has been addressed through a variety of approaches. However, most of the existing methods are not capable of dealing with challenges such as different road shapes, complex scenes, and variable resolutions. This paper presents a novel method for road centerline extraction from imagery in a fully automatic approach that addresses the aforementioned challenges by exploiting road GPS data. The proposed method combines road color features with road GPS data to detect road centerline seed points. After global alignment of the road GPS data, a novel road centerline extraction algorithm is developed to extract each individual road centerline in local regions. Through road connection, the road centerline network is generated as the final output. Extensive experiments demonstrate that our proposed method can rapidly and accurately extract road centerlines from remotely sensed imagery.

  3. AUTOMATIC EXTRACTION AND TOPOLOGY RECONSTRUCTION OF URBAN VIADUCTS FROM LIDAR DATA

    Directory of Open Access Journals (Sweden)

    Y. Wang

    2015-08-01

    Urban viaducts are important infrastructure for the transportation system of a city. In this paper, an original method is proposed to automatically extract urban viaducts and reconstruct the topology of the viaduct network using only airborne LiDAR point cloud data, which greatly simplifies the labor-intensive procedure of viaduct extraction and reconstruction. In our method, the point cloud is first filtered to divide all points into ground points and non-ground points. A region growing algorithm is adopted to find the viaduct points among the non-ground points, using features derived from general prescriptive design rules for viaducts. Then, the viaduct points are projected into 2D images to extract the centerline of every viaduct, and cubic functions are generated by least-squares fitting to represent the passages of viaducts, with which the topology of the viaduct network can be rebuilt by combining the height information. Finally, a topological graph of the viaduct network is produced. The fully automatic method can potentially benefit applications in urban navigation and city model reconstruction.

  4. A semi-automatic method for extracting thin line structures in images as rooted tree network

    Energy Technology Data Exchange (ETDEWEB)

    Brazzini, Jacopo [Los Alamos National Laboratory]; Dillard, Scott [Los Alamos National Laboratory]; Soille, Pierre [EC - JRC]

    2010-01-01

    This paper addresses the problem of semi-automatic extraction of line networks in digital images - e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method derived from morphological and hydrological concepts, consisting of minimum cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Then, geodesic propagation from a given seed with this metric is combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated on the extraction of blood vessels in a retina image and of a river network in a satellite image.

  5. Automatic extraction of forward stroke volume using dynamic PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik Stubkjær

    2015-01-01

    Background: The aim of this study was to develop and validate an automated method for extracting forward stroke volume (FSV) using indicator dilution theory directly from dynamic positron emission tomography (PET) studies for two different tracers and scanners. Methods: 35 subjects underwent a dynamic 11C-acetate PET scan on a Siemens Biograph TruePoint-64 PET/CT (scanner I). In addition, 10 subjects underwent both dynamic 15O-water PET and 11C-acetate PET scans on a GE Discovery-ST PET/CT (scanner II). The left ventricular (LV)-aortic time-activity curve (TAC) was extracted automatically from PET data using cluster analysis. The first-pass peak was isolated by automatic extrapolation of the downslope of the TAC. FSV was calculated as the injected dose divided by the product of heart rate and the area under the curve of the first-pass peak. Gold standard FSV was measured using phase-contrast MRI.
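
    The stated FSV formula (injected dose divided by heart rate times the area under the first-pass peak) is easy to express directly; the numbers below are synthetic and purely illustrative.

```python
import numpy as np

def forward_stroke_volume(t, first_pass_tac, injected_dose, heart_rate):
    """FSV = injected dose / (heart rate x AUC of the first-pass peak).
    t in s, TAC in kBq/mL, dose in kBq, heart rate in beats/s -> FSV in mL."""
    return injected_dose / (heart_rate * np.trapz(first_pass_tac, t))

# synthetic gamma-variate-like first-pass bolus; numbers are illustrative only
t = np.linspace(0, 30, 301)
peak = 6e3 * (t / 8.0) ** 3 * np.exp(-t / 3.0)        # kBq/mL
print(forward_stroke_volume(t, peak, injected_dose=4e5, heart_rate=1.0))  # ~71 mL
```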

  6. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.

    Science.gov (United States)

    Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how to use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and perform inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts for crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods at a reasonable crowdsourcing cost.

  7. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Chunhua Li

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how to use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and perform inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts for crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods at a reasonable crowdsourcing cost.

  8. Automatically extracting functionally equivalent proteins from SwissProt

    Directory of Open Access Journals (Sweden)

    Martin Andrew CR

    2008-10-01

    Background: There is a frequent need to obtain sets of functionally equivalent homologous proteins (FEPs) from different species. While it is usually the case that orthology implies functional equivalence, this is not always true; therefore datasets of orthologous proteins are not appropriate. The information relevant to extracting FEPs is contained in databanks such as UniProtKB/Swiss-Prot, and a manual analysis of these data allows FEPs to be extracted on a one-off basis. However, there has been no resource allowing the easy, automatic extraction of groups of FEPs – for example, all instances of protein C. We have developed FOSTA, an automatically generated database of FEPs annotated as having the same function in UniProtKB/Swiss-Prot, which can be used for large-scale analysis. The method builds a candidate list of homologues and filters out functionally diverged proteins on the basis of functional annotations using a simple text mining approach. Results: Large-scale evaluation of our FEP extraction method is difficult as there is no gold-standard dataset against which the method can be benchmarked. However, a manual analysis of five protein families confirmed a high level of performance, and a more extensive comparison with two manually verified functional equivalence datasets also demonstrated very good performance. Conclusion: In summary, FOSTA provides an automated analysis of annotations in UniProtKB/Swiss-Prot to enable groups of proteins already annotated as functionally equivalent to be extracted. Our results demonstrate that the vast majority of UniProtKB/Swiss-Prot functional annotations are of high quality, and that FOSTA can interpret annotations successfully. Where FOSTA is not successful, we are able to highlight inconsistencies in UniProtKB/Swiss-Prot annotation; most of these would have presented equal difficulties for manual interpretation. We discuss limitations and possible future extensions to FOSTA.

  9. Automatic extraction of forward stroke volume using dynamic 11C-acetate PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik

    Objectives: Dynamic PET with 11C-acetate can be used to quantify myocardial blood flow and oxidative metabolism, the latter of which is used to calculate myocardial external efficiency (MEE). Calculation of MEE requires forward stroke volume (FSV) data. FSV is affected by cardiac loading conditions, potentially introducing bias if measured with a separate modality. The aim of this study was to develop and validate methods for automatically extracting FSV directly from the dynamic PET used for measuring oxidative metabolism. Methods: 16 subjects underwent a dynamic 27 min PET scan on a Siemens Biograph TruePoint 64 PET/CT scanner after bolus injection of 399±27 MBq of 11C-acetate. The LV-aortic time-activity curve (TAC) was extracted automatically from dynamic PET data using cluster analysis. The first-pass peak was derived by automatic extrapolation of the down-slope of the TAC. FSV was calculated as the injected dose divided by the product of heart rate and the area under the curve of the first-pass peak.

  10. AUTOMATIC RAILWAY POWER LINE EXTRACTION USING MOBILE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    S. Zhang

    2016-06-01

    Research on power line extraction technology using mobile laser point clouds has important practical significance for railway power line patrol work. In this paper, we present a new method for automatically extracting railway power lines from MLS (Mobile Laser Scanning) data. Firstly, according to the spatial structure characteristics of the power lines and the trajectory, the relevant data is segmented piecewise. Then, a self-adaptive spatial region growing method is used to extract power lines parallel to the rails. Finally, PCA (Principal Component Analysis) combined with information entropy theory is used to judge whether a section of the power line is a junction and, if so, which type of junction it belongs to. The least-squares fitting algorithm is introduced to model the power line. An evaluation of the proposed method on a complicated railway point cloud acquired by a RIEGL VMX450 MLS system shows that the proposed method is promising.

  11. Rapid automatic keyword extraction for information retrieval and analysis

    Science.gov (United States)

    Rose, Stuart J [Richland, WA]; Cowley, Wendy E [Richland, WA]; Crow, Vernon L [Richland, WA]; Cramer, Nicholas O [Richland, WA]

    2012-03-06

    Methods and systems for rapid automatic keyword extraction for information retrieval and analysis. Embodiments can include parsing words in an individual document by delimiters, stop words, or both in order to identify candidate keywords. Word scores for each word within the candidate keywords are then calculated based on a function of co-occurrence degree, co-occurrence frequency, or both. Based on a function of the word scores for words within the candidate keyword, a keyword score is calculated for each of the candidate keywords. A portion of the candidate keywords are then extracted as keywords based, at least in part, on the candidate keywords having the highest keyword scores.
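
    The scoring the patent describes (candidate phrases split on stop words and delimiters, word scores from co-occurrence degree and frequency, phrase scores as sums of word scores) is the RAKE algorithm, sketched minimally below with a toy stop word list.

```python
import re
from collections import defaultdict

STOPWORDS = {"a", "an", "and", "are", "as", "at", "be", "by", "for", "from",
             "in", "is", "it", "of", "on", "or", "the", "to", "with"}

def rake(text, top_n=5):
    """Minimal RAKE: split on stop words/punctuation to form candidate phrases,
    score each word by co-occurrence degree / frequency, and score a phrase as
    the sum of its word scores."""
    words = re.findall(r"[a-z']+", text.lower())
    candidates, phrase = [], []
    for w in words:
        if w in STOPWORDS:
            if phrase:
                candidates.append(tuple(phrase))
            phrase = []
        else:
            phrase.append(w)
    if phrase:
        candidates.append(tuple(phrase))
    freq, degree = defaultdict(int), defaultdict(int)
    for cand in candidates:
        for w in cand:
            freq[w] += 1
            degree[w] += len(cand)        # degree: co-occurrence within the phrase
    score = {w: degree[w] / freq[w] for w in freq}
    ranked = sorted(set(candidates), key=lambda c: -sum(score[w] for w in c))
    return [" ".join(c) for c in ranked[:top_n]]

print(rake("Rapid automatic keyword extraction supports information "
           "retrieval and analysis of candidate keywords in a document"))
```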

  12. AUTOMATIC EXTRACTION OF ROCK JOINTS FROM LASER SCANNED DATA BY MOVING LEAST SQUARES METHOD AND FUZZY K-MEANS CLUSTERING

    Directory of Open Access Journals (Sweden)

    S. Oh

    2012-09-01

    Recent developments in laser scanning devices have increased the capability of representing rock outcrops at very high resolution. An accurate 3D point cloud model with rock joint information can help geologists estimate the stability of a rock slope on-site or off-site. An automatic plane extraction method was developed by computing normal directions and grouping similar directions. Point normals were calculated by the moving least squares (MLS) method, considering every point within a given distance, to minimize the error with respect to the fitting plane. Normal directions were classified into a number of dominant clusters by fuzzy K-means clustering. A region growing approach was exploited to discriminate joints in the point cloud. The overall procedure was applied to a point cloud with about 120,000 points and successfully extracted joints with joint information. The extraction procedure was implemented to minimize the number of input parameters and to build plane information into the existing point cloud, for less redundancy and higher usability of the point cloud itself.
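
    The grouping step can be approximated as clustering of unit normals. The paper uses fuzzy K-means; plain K-means from scikit-learn is substituted here as a readily available stand-in.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_joint_sets(normals, n_sets=3):
    """Cluster unit point normals into dominant discontinuity sets. Antipodal
    normals (n and -n) describe the same plane, so fold them into the upper
    hemisphere before clustering."""
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    n[n[:, 2] < 0] *= -1.0
    return KMeans(n_clusters=n_sets, n_init=10, random_state=0).fit_predict(n)
```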

  13. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    Directory of Open Access Journals (Sweden)

    Haijian Chen

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOC environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved through ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text mining algorithms do not perform well on online courses, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF algorithm output values; the terms with the highest scores are selected as knowledge points. Course documents for "C programming language" were selected for the experiment in this study. The results show that the proposed approach achieves satisfactory accuracy and recall rates.
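
    Only the TF-IDF ranking core of AECKP is sketched below; document classification, Chinese word segmentation, POS tagging, and the VSM-based weight optimization are omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def knowledge_point_candidates(docs, top_n=5):
    """Rank each document's terms by TF-IDF and keep the top scorers as
    knowledge-point candidates."""
    vec = TfidfVectorizer()
    tfidf = vec.fit_transform(docs)
    terms = vec.get_feature_names_out()
    result = []
    for row in tfidf:
        dense = row.toarray().ravel()
        top = dense.argsort()[::-1][:top_n]
        result.append([terms[i] for i in top if dense[i] > 0])
    return result

docs = ["pointers and arrays in the C programming language",
        "for loops, while loops and control flow in C"]
print(knowledge_point_candidates(docs))
```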

  14. Automatic extraction of drug indications from FDA drug labels.

    Science.gov (United States)

    Khare, Ritu; Wei, Chih-Hsuan; Lu, Zhiyong

    2014-01-01

    Extracting computable indications, i.e. drug-disease treatment relationships, from narrative drug resources is key to building a gold-standard drug indication repository. The two steps of the extraction problem are disease named-entity recognition (NER), to identify disease mentions in a free-text description, and disease classification, to distinguish indications from other disease mentions in the description. While there exist many tools for disease NER, disease classification is mostly achieved through human annotation. For example, we recently resorted to human annotation to prepare a corpus, LabeledIn, capturing structured indications from the drug labels submitted to the FDA by pharmaceutical companies. In this study, we present an automatic end-to-end framework to extract structured and normalized indications from FDA drug labels. In addition to automatic disease NER, a key component of our framework is a machine learning method, trained on the LabeledIn corpus, that classifies the NER-computed disease mentions as "indication vs. non-indication." Through experiments with 500 drug labels, our end-to-end system delivered an 86.3% F1-measure in drug indication extraction, a 17% improvement over the baseline. Further analysis shows that the indication classifier delivers performance comparable to human experts and that the remaining errors are mostly due to disease NER (more than 50%). Given its performance, we conclude that our end-to-end approach has the potential to significantly reduce human annotation costs.

  15. Automatic extraction of legal concepts and definitions

    NARCIS (Netherlands)

    Winkels, R.; Hoekstra, R.

    2012-01-01

    In this paper we present the results of an experiment in automatic concept and definition extraction from written sources of law using relatively simple natural language and standard semantic web technology. The software was tested on six laws from the tax domain.

  16. Automatic Recognition Method for Optical Measuring Instruments Based on Machine Vision

    Institute of Scientific and Technical Information of China (English)

    SONG Le; LIN Yuchi; HAO Liguo

    2008-01-01

    Based on a comprehensive study of various algorithms, automatic recognition of traditional ocular optical measuring instruments is realized. Taking a universal tool microscope (UTM) lens view image as an example, a 2-layer automatic recognition model for data reading is established after adopting a series of pre-processing algorithms. This model is an optimal combination of a correlation-based template matching method and a concurrent back-propagation (BP) neural network. Multiple complementary feature extraction is used in generating the eigenvectors of the concurrent network. In order to improve fault tolerance, rotation-invariant features based on Zernike moments are extracted from digit characters, and a 4-dimensional group of outline features is also obtained. Moreover, the operating time and reading accuracy can be adjusted dynamically by setting the threshold value. The experimental results indicate that the newly developed algorithm has high recognition precision and working speed; the average reading ratio reaches 97.23%. The recognition method can automatically obtain the results of optical measuring instruments rapidly and stably without modifying their original structure, which meets the application requirements.

  17. Automatic Centerline Extraction of Covered Roads by Surrounding Objects from High Resolution Satellite Images

    Science.gov (United States)

    Kamangir, H.; Momeni, M.; Satari, M.

    2017-09-01

    This paper presents an automatic method to extract road centerline networks from high and very high resolution satellite images. It addresses the automated extraction of roads covered by multiple natural and artificial objects such as trees, vehicles, and the shadows of buildings or trees. In order to achieve precise road extraction, the method implements three stages: classification of images based on the maximum likelihood algorithm to categorize images into the classes of interest; a modification process on the classified images using connected components and morphological operators to extract the pixels of desired objects by removing undesirable pixels of each class; and finally line extraction based on the RANSAC algorithm. In order to evaluate the performance of the proposed method, the generated results are compared with a ground truth road map as a reference. The evaluation of the proposed method on representative test images shows completeness values ranging between 77% and 93%.

  18. DEVELOPMENT OF AUTOMATIC EXTRACTION METHOD FOR ROAD UPDATE INFORMATION BASED ON PUBLIC WORK ORDER OUTLOOK

    Science.gov (United States)

    Sekimoto, Yoshihide; Nakajo, Satoru; Minami, Yoshitaka; Yamaguchi, Syohei; Yamada, Harutoshi; Fuse, Takashi

    Recently, the disclosure of statistical data representing the financial effects or burden of public works, through the web sites of national and local governments, has enabled discussion of macroscopic financial trends. However, it is still difficult to grasp nationwide how each location was changed by public works. The purpose of our research is to reasonably collect the road update information that various road managers provide, in order to realize efficient updating of various maps such as car navigation maps. In particular, we develop a system that extracts relevant public works and automatically registers summaries, including position information, in a database from the public work order outlooks released by each local government, combining several web mining technologies. Finally, we collect and register several tens of thousands of records from web sites all over Japan, and confirm the feasibility of our method.

  19. A Method of Generating Indoor Map Spatial Data Automatically from Architectural Plans

    Directory of Open Access Journals (Sweden)

    SUN Weixin

    2016-06-01

    Taking architectural plans as the data source, we propose a method which can automatically generate indoor map spatial data. Firstly, referring to the spatial data demands of indoor maps, we analyze the basic characteristics of architectural plans and introduce the concepts of wall segment, adjoining node, and adjoining wall segment, on which the basic flow of automatic indoor map spatial data generation is established. Then, according to the adjoining relation between wall lines at their intersections with columns, we construct a repair method for wall connectivity in relation to columns. Utilizing gradual expansion and graphic reasoning to judge the local feature type of the wall symbols on both sides of a door or window, and by updating the enclosing rectangle of the door or window, we develop a repair method for wall connectivity in relation to doors and windows, as well as a method for transforming doors and windows into indoor map point features. Finally, on the basis of the geometric relation between the median lines of adjoining wall segments, a wall centerline extraction algorithm is presented. Taking the architectural plan of an exhibition hall as an example, we performed experiments, and the results show that the proposed methods handle various complex situations well and realize automatic extraction of indoor map spatial data effectively.

  20. Automatic Glaucoma Detection Based on Optic Disc Segmentation and Texture Feature Extraction

    Directory of Open Access Journals (Sweden)

    Maíla de Lima Claro

    2016-08-01

    The use of digital image processing techniques is prominent in medical settings for the automatic diagnosis of diseases. Glaucoma is the second leading cause of blindness in the world and it has no cure. Currently, there are treatments to prevent vision loss, but the disease must be detected in its early stages. Thus, the objective of this work is to develop an automatic method for the detection of glaucoma in retinal images. The methodology used in the study comprised: acquisition of an image database, optic disc segmentation, texture feature extraction in different color models, and classification of images as glaucomatous or not. The method achieved an accuracy of 93%.

  1. A Risk Assessment System with Automatic Extraction of Event Types

    Science.gov (United States)

    Capet, Philippe; Delavallade, Thomas; Nakamura, Takuya; Sandor, Agnes; Tarsitano, Cedric; Voyatzi, Stavroula

    In this article we describe the joint effort of experts in linguistics, information extraction, and risk assessment to integrate EventSpotter, an automatic event extraction engine, into ADAC, an automated early warning system. By detecting weak signals of emerging risks as early as possible, ADAC provides a dynamic synthetic picture of situations involving risk. The ADAC system calculates risk on the basis of fuzzy logic rules operated on a template graph whose leaves are event types. EventSpotter is based on a general-purpose natural language dependency parser, XIP, enhanced with domain-specific lexical resources (Lexicon-Grammar). Its role is to automatically feed the leaves with input data.

  2. A fast and automatic mosaic method for high-resolution satellite images

    Science.gov (United States)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

    We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapping rectangle is computed according to the geographical locations of the reference and mosaic images, and feature points are extracted from both images by the scale-invariant feature transform (SIFT) algorithm, restricted to the overlapped region. Then, the RANSAC method is used to match the feature points of both images. Finally, the two images are fused into a seamless panoramic image by a simple linear weighted fusion method or another method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on Worldview-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
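
    A compact version of the pipeline (SIFT features, ratio-test matching, RANSAC-filtered homography, linear blending) might look like the following. Unlike the paper, it matches on the full grayscale frames rather than only the precomputed overlap rectangle, and OpenCV's Python bindings stand in for the authors' C++ implementation.

```python
import cv2
import numpy as np

def mosaic_pair(ref, mov):
    """Register 'mov' onto grayscale 'ref' with SIFT + RANSAC, then blend."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(ref, None)
    k2, d2 = sift.detectAndCompute(mov, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d2, d1, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]  # ratio test
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # RANSAC rejects outliers
    warped = cv2.warpPerspective(mov, H, (ref.shape[1], ref.shape[0]))
    out = ref.copy()
    overlap = warped > 0
    out[overlap] = (0.5 * ref[overlap] + 0.5 * warped[overlap]).astype(ref.dtype)
    return out
```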

  3. An automatic extraction algorithm of three dimensional shape of brain parenchyma from MR images

    International Nuclear Information System (INIS)

    Matozaki, Takeshi

    2000-01-01

    For the simulation of surgical operations, the extraction of a selected region from MR images is useful. However, this segmentation requires a high level of skill and experience from technicians. We have developed a unique automatic extraction algorithm for extracting three-dimensional brain parenchyma from MR head images, named the "three-dimensional gray scale clumsy painter method". In this method, a template having the shape of a pseudo-circle, the so-called clumsy painter (CP), moves along the contour of the selected region and extracts the region surrounded by the contour. This method has advantages over morphological filtering and the region growing method. Previously, this method was applied to binary images, but there was a problem in that the extraction results varied with the threshold level. We therefore introduced gray-level information to determine the threshold, relying on the change in image density between the brain parenchyma and CSF: the threshold level was decided by the vector of a map of templates, and the map was changed according to the change in image density. As a result, the over-extraction ratio was improved by 36%, and the under-extraction ratio by 20%. (author)

  4. Study on automatic control of high uranium concentration solvent extraction with pulse sieve-plate column

    International Nuclear Information System (INIS)

    You Wenzhi; Xing Guangxuan; Long Maoxiong; Zhang Jianmin; Zhou Qin; Chen Fuping; Ye Lingfeng

    1998-01-01

    The authors describe the operation of an automatic control system for high-uranium-concentration solvent extraction with a pulse sieve-plate column in a large-scale test. The automatic instruments and meters, the automatic control circuits, and the best feedback control point of the solvent extraction process with the pulse sieve-plate column are discussed in detail. The authors point out the success of this experiment in automation, and also raise some questions that should be considered regarding automatic control, instruments, and meters in future production.

  5. Automatic Detection of Acromegaly From Facial Photographs Using Machine Learning Methods.

    Science.gov (United States)

    Kong, Xiangyi; Gong, Shun; Su, Lijuan; Howard, Newton; Kong, Yanguo

    2018-01-01

    Automatic early detection of acromegaly from facial photographs is theoretically possible and could lessen the prevalence of the disease and increase the probability of cure. In this study, several popular machine learning algorithms were trained on a retrospective development dataset consisting of 527 acromegaly patients and 596 normal subjects. We first used OpenCV to detect the face bounding rectangle, then cropped and resized it to the same pixel dimensions. From the detected faces, the locations of facial landmarks, which are potential clinical indicators, were extracted. Frontalization was then adopted to synthesize frontal-facing views to improve performance. Several popular machine learning methods including LM, KNN, SVM, RT, CNN, and EM were used to automatically identify acromegaly from the detected facial photographs, extracted facial landmarks, and synthesized frontal faces. The trained models were evaluated using a separate dataset, half of which had been diagnosed as acromegaly by the growth hormone suppression test. The best of our proposed methods showed a PPV of 96%, an NPV of 95%, a sensitivity of 96%, and a specificity of 96%. Artificial intelligence can automatically detect acromegaly early with high sensitivity and specificity. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
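
    The first stage described (OpenCV face detection followed by cropping and resizing to fixed pixel dimensions) can be sketched as below; the Haar cascade choice and output size are assumptions, and the landmark, frontalization, and classification stages are not shown.

```python
import cv2

def face_crop(image_path, size=(128, 128)):
    """Detect the face bounding rectangle with an OpenCV Haar cascade and
    crop/resize it to fixed pixel dimensions; returns None if no face."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest detection
    return cv2.resize(img[y:y + h, x:x + w], size)
```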

  6. Independent component analysis for automatic note extraction from musical trills

    Science.gov (United States)

    Brown, Judith C.; Smaragdis, Paris

    2004-05-01

    The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.
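
    As a toy illustration of the separation idea, the following sketch recovers two synthetic tones from their mixtures with scikit-learn's FastICA; the mixing matrix and frequencies are invented, and the authors' actual pipeline works on recorded piano trills.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    t = np.arange(0, 1.0, 1.0 / 8000)             # 1 s at 8 kHz
    note_a = np.sin(2 * np.pi * 440 * t)          # A4
    note_b = np.sin(2 * np.pi * 494 * t)          # B4
    S = np.c_[note_a, note_b]                     # true sources
    A = np.array([[1.0, 0.6],                     # invented mixing matrix
                  [0.4, 1.0]])
    X = S @ A.T                                   # observed mixtures

    ica = FastICA(n_components=2, random_state=0)
    recovered = ica.fit_transform(X)              # estimated independent sources
    print(recovered.shape)                        # (8000, 2)
    ```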

  7. Feature extraction and descriptor calculation methods for automatic georeferencing of Philippines' first microsatellite imagery

    Science.gov (United States)

    Tupas, M. E. A.; Dasallas, J. A.; Jiao, B. J. D.; Magallon, B. J. P.; Sempio, J. N. H.; Ramos, M. K. F.; Aranas, R. K. D.; Tamondong, A. M.

    2017-10-01

    The FAST-SIFT corner detector and descriptor extractor combination was used to automatically georeference DIWATA-1 Spaceborne Multispectral Imager (SMI) images. The Features from Accelerated Segment Test (FAST) algorithm detects corners or keypoints in an image, and these robustly detected keypoints have well-defined positions. Descriptors were computed using the Scale-Invariant Feature Transform (SIFT) extractor. The FAST-SIFT method effectively matched SMI same-subscene images detected by the NIR sensor. The method was also tested in stitching NIR images with varying subscenes swept by the camera. The slave images were matched to the master image, and the keypoints served as the ground control points. Keypoints are matched based on their descriptor vectors: nearest-neighbor matching is employed based on a metric distance between the descriptors, such as the Euclidean or city-block distance. Rough matching outputs not only the correct matches but also faulty ones. A previous work on automatic georeferencing incorporates a geometric restriction; in this work, we applied a simplified version of that learning method. Random sample consensus (RANSAC) was used to eliminate fall-out matches and ensure the accuracy of the feature points from which the transformation parameters were derived: it identifies whether a point fits the transformation function and returns the inlier matches. The transformation matrix was solved by Affine, Projective, and Polynomial models. The accuracy of the automatic georeferencing method was determined by calculating the RMSE of interest points, selected randomly, between the master image and the transformed slave image.
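
    A hedged sketch of the matching chain named above (FAST corners, SIFT descriptors, brute-force matching, RANSAC filtering) using OpenCV; the image file names are placeholders, and a projective (homography) model stands in for the paper's Affine/Projective/Polynomial options.

    ```python
    import cv2
    import numpy as np

    master = cv2.imread("master_band.png", cv2.IMREAD_GRAYSCALE)  # placeholder
    slave = cv2.imread("slave_band.png", cv2.IMREAD_GRAYSCALE)    # placeholder

    fast = cv2.FastFeatureDetector_create()       # keypoint detector
    sift = cv2.SIFT_create()                      # descriptor extractor

    kp1, des1 = sift.compute(master, fast.detect(master, None))
    kp2, des2 = sift.compute(slave, fast.detect(slave, None))

    # nearest-neighbour matching on descriptor distance
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des1, des2)

    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects fall-out matches while estimating the slave-to-master map
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    ```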

  8. Automatic extraction of property norm-like data from large text corpora.

    Science.gov (United States)

    Kelly, Colin; Devereux, Barry; Korhonen, Anna

    2014-01-01

    Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car--petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, yielding concept-relation-feature triples (e.g., car be fast, car require petrol, car cause pollution), which approximate property-based conceptual representations. Our novel method extracts candidate triples from parsed corpora (Wikipedia and the British National Corpus) using syntactically and grammatically motivated rules, then reweights triples with a linear combination of their frequency and four statistical metrics. We assess our system output in three ways: lexical comparison with norms derived from human-generated property norm data, direct evaluation by four human judges, and a semantic distance comparison with both WordNet similarity data and human-judged concept similarity ratings. Our system offers a viable and performant method of plausible triple extraction: Our lexical comparison shows comparable performance to the current state-of-the-art, while subsequent evaluations exhibit the human-like character of our generated properties.

  9. An enhanced model for automatically extracting topic phrase from ...

    African Journals Online (AJOL)

    The key benefit foreseen from this automatic document classification is not only related to search engines, but also to many other fields like document organization, text filtering and semantic index managing. Key words: keyphrase extraction, machine learning, search engine snippet, document classification, topic tracking ...

  10. Automatic facial animation parameters extraction in MPEG-4 visual communication

    Science.gov (United States)

    Yang, Chenggen; Gong, Wanwei; Yu, Lu

    2002-01-01

    Facial Animation Parameters (FAPs) are defined in MPEG-4 to animate a facial object. The algorithm proposed in this paper to extract these FAPs is applied to very low bit-rate video communication, in which the scene is composed of a head-and-shoulder object with a complex background. This paper addresses an algorithm to automatically extract all FAPs needed to animate a generic facial model and to estimate the 3D motion of the head from corresponding points. The proposed algorithm extracts the human facial region by color segmentation and intra-frame and inter-frame edge detection. Facial structure and the edge distribution of facial features, such as vertical and horizontal gradient histograms, are used to locate the facial feature regions. Parabola and circle deformable templates are employed to fit facial features and extract one part of the FAPs. A special data structure is proposed to describe the deformable templates to reduce the time consumed in computing energy functions. The other part of the FAPs, the 3D rigid head motion vectors, is estimated by the corresponding-points method. A 3D head wire-frame model provides facial semantic information for the selection of proper corresponding points, which helps to increase the accuracy of 3D rigid object motion estimation.
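
    The colour-segmentation step could look roughly like the following sketch, which masks skin-like pixels in YCrCb space with textbook Cr/Cb bounds; the bounds and file name are assumptions, not values from the paper.

    ```python
    import cv2
    import numpy as np

    frame = cv2.imread("head_and_shoulder_frame.png")   # placeholder
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)

    # textbook Cr/Cb bounds for skin tones (assumed, not from the paper)
    mask = cv2.inRange(ycrcb, np.array([0, 133, 77]), np.array([255, 173, 127]))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    facial_region = cv2.bitwise_and(frame, frame, mask=mask)
    ```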

  11. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    OpenAIRE

    Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S.; Xian, Xuefeng; Wu, Jian; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality i...

  12. Comparison of automatic and visual methods used for image segmentation in Endodontics: a microCT study.

    Science.gov (United States)

    Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz

    2017-01-01

    To calculate root canal volume and surface area in microCT images, image segmentation by selecting threshold values is required, which can be done by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is done entirely by computer algorithms. The aim was to compare visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between visual and automatic segmentation methods regarding root canal volume measurements (p=0.93) and root canal surface (p=0.79). Although both visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
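
    A small sketch of the comparison logic, assuming an Otsu threshold as the automatic method and an invented operator-chosen value as the visual one; the volume, voxel size, and thresholds are placeholders.

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu

    # placeholder for the scanned microCT stack
    volume = np.random.randint(0, 255, (100, 100, 100)).astype(np.uint8)

    auto_t = threshold_otsu(volume)     # automatic, computer-determined threshold
    visual_t = 128                      # hypothetical operator-chosen value

    voxel_mm3 = 0.01 ** 3               # assumed isotropic 10-micron voxels
    vol_auto = (volume > auto_t).sum() * voxel_mm3
    vol_visual = (volume > visual_t).sum() * voxel_mm3
    print(f"canal volume: auto={vol_auto:.3f} mm3, visual={vol_visual:.3f} mm3")
    ```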

  13. [Automatic Extraction and Analysis of Dosimetry Data in Radiotherapy Plans].

    Science.gov (United States)

    Song, Wei; Zhao, Di; Lu, Hong; Zhang, Biyun; Ma, Jun; Yu, Dahai

    To improve the efficiency and accuracy of the extraction and analysis of dosimetry data in radiotherapy plans for batches of patients, a program was written, using the interface functions provided by the Matlab platform, to extract the dosimetry data exported from the treatment planning system in DICOM RT format and to export the dose-volume data to an Excel file in an SPSS-compatible format. This method was compared with manual operation for 14 gastric carcinoma patients to validate its efficiency and accuracy. The output Excel data were compatible with SPSS in format; the dosimetry data errors for the PTV dose interval of 90%-98%, the PTV dose interval of 99%-106% and all OARs were -3.48E-5 ± 3.01E-5, -1.11E-3 ± 7.68E-4 and -7.85E-5 ± 9.91E-5, respectively. Compared with manual operation, the time required was reduced from 5.3 h to 0.19 h and the input error was reduced from 0.002 to 0. Automatic extraction of dosimetry data in DICOM RT format for batches of patients, SPSS-compatible data export and quick analysis were thus achieved. The efficiency of clinical research based on the dosimetry data analysis of large numbers of patients will be improved with this method.
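
    A hedged Python counterpart of the described Matlab workflow, reading a DICOM RT Dose file with pydicom and exporting summary values to an SPSS-readable Excel file; the file names and the choice of summary statistics are assumptions.

    ```python
    import pydicom
    import pandas as pd

    ds = pydicom.dcmread("RTDOSE_patient01.dcm")        # placeholder file
    dose = ds.pixel_array * float(ds.DoseGridScaling)   # dose grid in Gy

    rows = [{"patient": "patient01",                    # assumed summary values
             "max_dose_Gy": float(dose.max()),
             "mean_dose_Gy": float(dose.mean())}]
    # one row per patient; SPSS reads plain tabular Excel files
    pd.DataFrame(rows).to_excel("dosimetry_batch.xlsx", index=False)
    ```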

  14. Automatic feature extraction in large fusion databases by using deep learning approach

    Energy Technology Data Exchange (ETDEWEB)

    Farias, Gonzalo, E-mail: gonzalo.farias@ucv.cl [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile); Dormido-Canto, Sebastián [Departamento de Informática y Automática, UNED, Madrid (Spain); Vega, Jesús; Rattá, Giuseppe [Asociación EURATOM/CIEMAT Para Fusión, CIEMAT, Madrid (Spain); Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile)

    2016-11-15

    Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The problem dimensionality can be reduced enormously when selecting suitable attributes. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in deep learning offer an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to get more robust and accurate classifiers in comparison to previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the following steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features. In this sense, the process of selecting appropriate techniques normally takes a long time. This work describes the use of recent advances in deep learning to find a good feature representation automatically. The implementation of a special neural network called a sparse autoencoder and its application to two classification problems of the TJ-II fusion database is shown in detail. Results have shown that it is possible to get robust classifiers with a high success rate, in spite of the fact that the feature space is reduced to less than 0.02% of the original one.
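
    A minimal sparse-autoencoder sketch in Keras illustrating the idea of learning a feature representation automatically; the layer sizes, L1 sparsity penalty, and random training data are illustrative, not the TJ-II settings.

    ```python
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers, regularizers

    input_dim, code_dim = 1024, 64                 # assumed sizes
    inputs = keras.Input(shape=(input_dim,))
    code = layers.Dense(code_dim, activation="relu",
                        activity_regularizer=regularizers.l1(1e-5))(inputs)
    outputs = layers.Dense(input_dim, activation="sigmoid")(code)

    autoencoder = keras.Model(inputs, outputs)
    encoder = keras.Model(inputs, code)            # yields the learned features
    autoencoder.compile(optimizer="adam", loss="mse")

    X = np.random.rand(500, input_dim)             # placeholder for fusion signals
    autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)
    features = encoder.predict(X)                  # input to a downstream classifier
    ```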

  15. Automatic feature extraction in large fusion databases by using deep learning approach

    International Nuclear Information System (INIS)

    Farias, Gonzalo; Dormido-Canto, Sebastián; Vega, Jesús; Rattá, Giuseppe; Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín

    2016-01-01

    Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The problem dimensionality can be reduced enormously when selecting suitable attributes. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in deep learning offer an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to get more robust and accurate classifiers in comparison to previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the following steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features. In this sense, the process of selecting appropriate techniques normally takes a long time. This work describes the use of recent advances in deep learning to find a good feature representation automatically. The implementation of a special neural network called a sparse autoencoder and its application to two classification problems of the TJ-II fusion database is shown in detail. Results have shown that it is possible to get robust classifiers with a high success rate, in spite of the fact that the feature space is reduced to less than 0.02% of the original one.

  16. The study of automatic brain extraction of basal ganglia based on atlas of Talairach in 18F-FDG PET images

    International Nuclear Information System (INIS)

    Zuo Chantao; Guan Yihui; Zhao Jun; Lin Xiangtong; Wang Jian; Zhang Jiange; Zhang Lu

    2005-01-01

    Objective: To establish a method which can automatically extract functional areas of the brain basal ganglia. Methods: 18F-fluorodeoxyglucose (FDG) PET images were spatially normalized to Talairach atlas space through two steps: image registration and image deformation. The functional areas were extracted from the three-dimensional PET images based on the coordinates obtained from the atlas; the caudate and putamen were extracted and rendered, and the grey value of each area was normalized by the whole brain. Results: The normalized ratios of the left caudate head, body and tail were 1.02 ± 0.04, 0.92 ± 0.07 and 0.71 ± 0.03; the right were 0.98 ± 0.03, 0.87 ± 0.04 and 0.71 ± 0.01, respectively. The normalized ratios of the left and right putamen were 1.20 ± 0.06 and 1.20 ± 0.04. The mean grey values of the left and right basal ganglia showed no significant difference (P>0.05). Conclusion: The automatic functional area extraction method based on the Talairach atlas is feasible. (authors)

  17. A method for real-time implementation of HOG feature extraction

    Science.gov (United States)

    Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai

    2011-08-01

    The histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are feature descriptors widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, the computation of HOG feature extraction is unsuitable for hardware implementation since it includes complicated operations. In this paper, an optimal design method and theoretical framework for real-time HOG feature extraction based on an FPGA are proposed. The main principle is as follows: firstly, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed. Secondly, the calculation of the arctangent and square root operations was simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that HOG extraction can be implemented within one pixel period by these computing units.
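
    For reference, the software equivalent of the pipeline the FPGA design implements (gradient computation, orientation binning, block normalisation) is available in scikit-image; the parameter values below are common defaults, not the paper's hardware configuration.

    ```python
    from skimage import data
    from skimage.feature import hog

    image = data.astronaut()[:, :, 0]            # any grayscale test image
    descriptor = hog(image,
                     orientations=9,             # histogram bins per cell
                     pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2),
                     block_norm="L2-Hys")        # block normalisation
    print(descriptor.shape)
    ```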

  18. Derivation of groundwater flow-paths based on semi-automatic extraction of lineaments from remote sensing data

    OpenAIRE

    U. Mallast; R. Gloaguen; S. Geyer; T. Rödiger; C. Siebert

    2011-01-01

    In this paper we present a semi-automatic method to infer groundwater flow-paths based on the extraction of lineaments from digital elevation models. This method is especially adequate in remote and inaccessible areas where in-situ data are scarce. The combined method of linear filtering and object-based classification provides a lineament map with a high degree of accuracy. Subsequently, lineaments are differentiated into geological and morphological lineaments using auxili...

  19. Automatic Extraction of High-Resolution Rainfall Series from Rainfall Strip Charts

    Science.gov (United States)

    Saa-Requejo, Antonio; Valencia, Jose Luis; Garrido, Alberto; Tarquis, Ana M.

    2015-04-01

    Soil erosion is a complex phenomenon involving the detachment and transport of soil particles, storage and runoff of rainwater, and infiltration. The relative magnitude and importance of these processes depend on a host of factors, including climate, soil, topography, and cropping and land management practices, among others. Most models for soil erosion or hydrological processes need an accurate storm characterization. However, these data are not always available, and in some cases indirect models are generated to fill this gap. In Spain, rain intensity data for time periods of less than 24 hours are known only back to 1924, and many studies are limited by this. In many cases these data are stored on rainfall strip charts at the meteorological stations but have not been transferred into numerical form. To overcome this deficiency in the raw data, a process of information extraction from large amounts of rainfall strip charts was implemented by means of computer software. A method has been developed that largely automates the labour-intensive extraction work, based on van Piggelen et al. (2011). The method consists of the following five basic steps: 1) scanning the charts to high-resolution digital images, 2) manually and visually registering relevant meta information from the charts and pre-processing, 3) applying automatic curve extraction software in a batch process to determine the coordinates of cumulative rainfall lines on the images (main step), 4) post-processing the curves that were not correctly determined in step 3, and 5) aggregating the cumulative rainfall in pixel coordinates to the desired time resolution. A colour detection procedure is introduced that automatically separates the background of the charts and rolls from the grid and subsequently the rainfall curve. The rainfall curve is detected by minimization of a cost function. Some utilities have been added to improve the previous work and automate some auxiliary processes: readjust the bands properly, merge bands when

  20. SU-F-R-05: Multidimensional Imaging Radiomics-Geodesics: A Novel Manifold Learning Based Automatic Feature Extraction Method for Diagnostic Prediction in Multiparametric Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, V [The Johns Hopkins University, Computer Science. Baltimore, MD (United States); Jacobs, MA [The Johns Hopkins University School of Medicine, Dept of Radiology and Oncology. Baltimore, MD (United States)

    2016-06-15

    Purpose: Multiparametric radiological imaging is used for diagnosis in patients. Extracting useful features specific to a patient's pathology would be a crucial step towards personalized medicine and the assessment of treatment options. In order to automatically extract features directly from multiparametric radiological imaging datasets, we developed an advanced unsupervised machine learning algorithm called multidimensional imaging radiomics-geodesics (MIRaGe). Methods: Seventy-six breast tumor patients who underwent 3T breast MRI were used for this study. We tested the MIRaGe algorithm's ability to extract features for the classification of breast tumors as benign or malignant. The MRI parameters used were T1-weighted, T2-weighted, dynamic contrast enhanced MR imaging (DCE-MRI) and diffusion weighted imaging (DWI). The MIRaGe algorithm extracted the radiomics-geodesics features (RGFs) from the multiparametric MRI datasets, enabling our method to learn the intrinsic manifold representations corresponding to the patients. To determine the informative RGFs, a modified Isomap algorithm (t-Isomap) was created for a radiomics-geodesics feature space (tRGFS) to avoid overfitting. Final classification was performed using an SVM. The predictive power of the RGFs was tested and validated using k-fold cross validation. Results: The RGFs extracted by the MIRaGe algorithm successfully classified malignant lesions from benign lesions with a sensitivity of 93% and a specificity of 91%. The top 50 RGFs identified as the most predictive by the t-Isomap procedure were consistent with the radiological parameters known to be associated with breast cancer diagnosis and were categorized as kinetic curve characterizing RGFs, wash-in rate characterizing RGFs, wash-out rate characterizing RGFs and morphology characterizing RGFs. Conclusion: In this paper, we developed a novel feature extraction algorithm for multiparametric radiological imaging. The results demonstrated the power of the MIRa
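
    A conceptual sketch of the manifold-learning-plus-SVM chain, assuming scikit-learn's standard Isomap in place of the authors' modified t-Isomap; the data are random placeholders.

    ```python
    import numpy as np
    from sklearn.manifold import Isomap
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    X = np.random.rand(76, 200)        # 76 patients x multiparametric features
    y = np.random.randint(0, 2, 76)    # 0 = benign, 1 = malignant (random here)

    # embed on a low-dimensional manifold, then classify
    model = make_pipeline(Isomap(n_neighbors=10, n_components=10),
                          SVC(kernel="rbf"))
    scores = cross_val_score(model, X, y, cv=5)   # k-fold cross validation
    print(scores.mean())
    ```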

  1. SU-F-R-05: Multidimensional Imaging Radiomics-Geodesics: A Novel Manifold Learning Based Automatic Feature Extraction Method for Diagnostic Prediction in Multiparametric Imaging

    International Nuclear Information System (INIS)

    Parekh, V; Jacobs, MA

    2016-01-01

    Purpose: Multiparametric radiological imaging is used for diagnosis in patients. Extracting useful features specific to a patient's pathology would be a crucial step towards personalized medicine and the assessment of treatment options. In order to automatically extract features directly from multiparametric radiological imaging datasets, we developed an advanced unsupervised machine learning algorithm called multidimensional imaging radiomics-geodesics (MIRaGe). Methods: Seventy-six breast tumor patients who underwent 3T breast MRI were used for this study. We tested the MIRaGe algorithm's ability to extract features for the classification of breast tumors as benign or malignant. The MRI parameters used were T1-weighted, T2-weighted, dynamic contrast enhanced MR imaging (DCE-MRI) and diffusion weighted imaging (DWI). The MIRaGe algorithm extracted the radiomics-geodesics features (RGFs) from the multiparametric MRI datasets, enabling our method to learn the intrinsic manifold representations corresponding to the patients. To determine the informative RGFs, a modified Isomap algorithm (t-Isomap) was created for a radiomics-geodesics feature space (tRGFS) to avoid overfitting. Final classification was performed using an SVM. The predictive power of the RGFs was tested and validated using k-fold cross validation. Results: The RGFs extracted by the MIRaGe algorithm successfully classified malignant lesions from benign lesions with a sensitivity of 93% and a specificity of 91%. The top 50 RGFs identified as the most predictive by the t-Isomap procedure were consistent with the radiological parameters known to be associated with breast cancer diagnosis and were categorized as kinetic curve characterizing RGFs, wash-in rate characterizing RGFs, wash-out rate characterizing RGFs and morphology characterizing RGFs. Conclusion: In this paper, we developed a novel feature extraction algorithm for multiparametric radiological imaging. The results demonstrated the power of the MIRa

  2. Comparison of methods of DNA extraction for real-time PCR in a model of pleural tuberculosis.

    Science.gov (United States)

    Santos, Ana; Cremades, Rosa; Rodríguez, Juan Carlos; García-Pachón, Eduardo; Ruiz, Montserrat; Royo, Gloria

    2010-01-01

    Molecular methods have been reported to have different sensitivities in the diagnosis of pleural tuberculosis and this may in part be caused by the use of different methods of DNA extraction. Our study compares nine DNA extraction systems in an experimental model of pleural tuberculosis. An inoculum of Mycobacterium tuberculosis was added to 23 pleural liquid samples with different characteristics. DNA was subsequently extracted using nine different methods (seven manual and two automatic) for analysis with real-time PCR. Only two methods were able to detect the presence of M. tuberculosis DNA in all the samples: extraction using columns (Qiagen) and automated extraction with the TNAI system (Roche). The automatic method is more expensive, but requires less time. Almost all the false negatives were because of the difficulty involved in extracting M. tuberculosis DNA, as in general, all the methods studied are capable of eliminating inhibitory substances that block the amplification reaction. The method of M. tuberculosis DNA extraction used affects the results of the diagnosis of pleural tuberculosis by molecular methods. DNA extraction systems that have been shown to be effective in pleural liquid should be used.

  3. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    OpenAIRE

    ÖZEL, Selma Ayşe; SARAÇ, Esra

    2016-01-01

    Cyberbullying is defined as an aggressive, intentional action against a defenseless person by using the Internet, or other electronic contents. Researchers have found that many of the bullying cases have tragically ended in suicides; hence automatic detection of cyberbullying has become important. In this study we show the effects of feature extraction, feature selection, and classification methods that are used, on the performance of automatic detection of cyberbullying. To perform the exper...
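
    As a minimal illustration of how a feature extraction method and a classifier combine in this task, the sketch below feeds TF-IDF bag-of-words features to a linear SVM; the toy messages and labels are invented.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline

    texts = ["you are awesome", "nobody likes you, loser",
             "great game yesterday", "shut up or else"]   # invented examples
    labels = [0, 1, 0, 1]                                 # 1 = bullying

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    clf.fit(texts, labels)
    print(clf.predict(["you played a great game"]))
    ```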

  4. A Method of Road Extraction from High-resolution Remote Sensing Images Based on Shape Features

    Directory of Open Access Journals (Sweden)

    LEI Xiaoqi

    2016-02-01

    Road extraction from high-resolution remote sensing images is an important and difficult task. Since remote sensing images contain complicated information, methods that extract roads by spectral, texture and linear features have certain limitations. Also, many methods need human intervention to get the road seeds (semi-automatic extraction), which makes them strongly human-dependent and inefficient. A road-extraction method which uses image segmentation based on the principle of local gray consistency and integrates shape features is proposed in this paper. Firstly, the image is segmented, and then linear and curved roads are obtained by using several object shape features, thereby rectifying methods that extract only linear roads. Secondly, the road extraction step is carried out based on region growing: the road seeds are automatically selected and the road network is extracted. Finally, the extracted roads are regularized by combining the edge information. In the experiments, images including roads with fairly uniform gray levels as well as poorly illuminated road surfaces were chosen, and the results prove that the method of this study is promising.

  5. Automatic Extraction of Road Markings from Mobile Laser Scanning Data

    Science.gov (United States)

    Ma, H.; Pei, Z.; Wei, Z.; Zhong, R.

    2017-09-01

    Road markings, critical features in the high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, have important functions in providing guidance and information to moving cars. A mobile laser scanning (MLS) system is an effective way to obtain the 3D information of the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method. The basic assumption of the method is that the road surface is smooth: points with small elevation differences within their neighborhood are considered to be ground points. The ground points are then partitioned into a set of profiles according to the trajectory data. The intensity histogram of the points in each profile is generated to find intensity jumps above a certain threshold, which is inversely related to laser distance. The separated points are used as seed points for region growing based on intensity so as to obtain complete road markings. We use a point cloud template-matching method to refine the road marking candidates by removing noise clusters with a low correlation coefficient. In an experiment with an MLS point set covering about 2 kilometres in a city center, our method provides a promising solution to road markings extraction from MLS data.
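
    A simplified numpy sketch of two of the steps above, assuming points are stored as an N×4 array of x, y, z, intensity; the grid size and thresholds are invented, and the per-profile histogram analysis is reduced to a single global intensity cut.

    ```python
    import numpy as np

    pts = np.load("mls_points.npy")   # N x 4 array: x, y, z, intensity (placeholder)

    # 1) ground filter: within each 0.5 m grid cell, keep points close to the
    #    cell's minimum elevation (smooth road-surface assumption)
    cell = 0.5
    keys = np.floor(pts[:, :2] / cell).astype(int)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    zmin = np.full(inv.max() + 1, np.inf)
    np.minimum.at(zmin, inv, pts[:, 2])
    ground = pts[pts[:, 2] - zmin[inv] < 0.05]

    # 2) marking seeds: returns with an intensity jump above the bulk of the
    #    ground points (global cut instead of the per-profile histograms)
    cut = np.median(ground[:, 3]) + 2 * ground[:, 3].std()
    marking_seeds = ground[ground[:, 3] > cut]
    ```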

  6. Automatic extraction of road features in urban environments using dense ALS data

    Science.gov (United States)

    Soilán, Mario; Truong-Hong, Linh; Riveiro, Belén; Laefer, Debra

    2018-02-01

    This paper describes a methodology that automatically extracts semantic information from urban ALS data for urban parameterization and road network definition. First, building façades are segmented from the ground surface by combining knowledge-based information with both voxel and raster data. Next, heuristic rules and unsupervised learning are applied to the ground surface data to distinguish sidewalk and pavement points as a means of curb detection. Then radiometric information is employed for road marking extraction. Using high-density ALS data from Dublin, Ireland, this fully automatic workflow was able to generate an F-score close to 95% for pavement and sidewalk identification with a resolution of 20 cm, and better than 80% for road marking detection.

  7. Automatic Definition Extraction and Crossword Generation From Spanish News Text

    Directory of Open Access Journals (Sweden)

    Jennifer Esteche

    2017-08-01

    This paper describes the design and implementation of a system that takes Spanish texts and generates crosswords (board and definitions) in a fully automatic way, using definitions extracted from those texts. Our solution divides the problem into two parts: a definition extraction module that applies pattern matching, implemented in Python, and a crossword generation module that uses a greedy strategy, implemented in Prolog. The system achieves 73% precision and builds crosswords similar to those built by humans.
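
    A toy version of the pattern-matching extraction module, with English patterns for readability (the authors' system targets Spanish text and uses richer patterns):

    ```python
    import re

    PATTERNS = [  # "X is a/the Y." and appositive "X, a Y," constructions
        re.compile(r"(?P<term>[A-Z][\w-]+) is (?:a|an|the) (?P<definition>[^.]+)\."),
        re.compile(r"(?P<term>[A-Z][\w-]+), (?:a|an|the) (?P<definition>[^,]+),"),
    ]

    def extract_definitions(text):
        clues = []
        for pattern in PATTERNS:
            for m in pattern.finditer(text):
                clues.append((m.group("term"), m.group("definition").strip()))
        return clues

    print(extract_definitions("Montevideo is the capital of Uruguay."))
    # -> [('Montevideo', 'capital of Uruguay')]
    ```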

  8. Achieving Accurate Automatic Sleep Staging on Manually Pre-processed EEG Data Through Synchronization Feature Extraction and Graph Metrics.

    Science.gov (United States)

    Chriskos, Panteleimon; Frantzidis, Christos A; Gkivogkli, Polyxeni T; Bamidis, Panagiotis D; Kourtidou-Papadeli, Chrysoula

    2018-01-01

    Sleep staging, the process of assigning labels to epochs of sleep depending on the stage to which they belong, is an arduous, time-consuming and error-prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper a pre-processing and subsequent sleep staging pipeline for the sleep analysis of electroencephalographic signals is described. Two novel methods of functional connectivity estimation (Synchronization Likelihood/SL and Relative Wavelet Entropy/RWE) are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders signals suitable for further analysis is initially described. Then, two methods that rely on extracting synchronization features from electroencephalographic recordings to achieve computerized sleep staging are proposed, based on bivariate features which provide a functional overview of the brain network, contrary to most proposed methods that rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved through the presented feature extraction methods by training classifiers, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments on a randomized, controlled bed-rest study, which was organized by the European Space Agency and conducted in the "ENVIHAB" facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains high accuracy rates, over 90%, based on ground truth that resulted from manual sleep staging by two experienced sleep experts. Therefore, it can be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging.
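
    One of the two synchronization features named above, Relative Wavelet Entropy, can be sketched with PyWavelets as a Kullback-Leibler-like divergence between the wavelet energy distributions of two channels; the wavelet choice, decomposition level, and random signals are assumptions.

    ```python
    import numpy as np
    import pywt

    def wavelet_energy_dist(signal, wavelet="db4", level=5):  # assumed settings
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        return energies / energies.sum()          # relative energy per band

    def relative_wavelet_entropy(x, y):
        p, q = wavelet_energy_dist(x), wavelet_energy_dist(y)
        return float(np.sum(p * np.log(p / q)))   # KL-divergence-like measure

    x, y = np.random.randn(3000), np.random.randn(3000)  # stand-ins for EEG epochs
    print(relative_wavelet_entropy(x, y))
    ```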

  9. Automatic extraction and identification of users' responses in Facebook medical quizzes.

    Science.gov (United States)

    Rodríguez-González, Alejandro; Menasalvas Ruiz, Ernestina; Mayer Pujadas, Miguel A

    2016-04-01

    In the last few years the use of social media in medicine has grown exponentially, providing a new area of research based on the analysis and use of Web 2.0 capabilities. In addition, the use of social media in medical education is a subject of particular interest which has been addressed in several studies. One example of this application is the medical quizzes of The New England Journal of Medicine (NEJM), which regularly publishes a set of questions through its Facebook timeline. We present an approach for the automatic extraction of medical quizzes and their associated answers on a Facebook platform by means of a set of computer-based methods and algorithms. We have developed a tool for the extraction and analysis of medical quizzes stored on the Facebook timeline of the NEJM Facebook page, based on a set of computer-based methods and algorithms implemented in Java. The system is divided into two main modules: Crawler and Data retrieval. The system was launched on December 31, 2014 and crawled through a total of 3004 valid posts and 200,081 valid comments. The first post was dated July 23, 2009 and the last one December 30, 2014. 285 quizzes were analyzed, with 32,780 different users providing answers to the aforementioned quizzes. Of the 285 quizzes, patterns were found in 261 (91.58%). From these 261 quizzes where trends were found, we saw that users follow trends of incorrect answers in 13 quizzes and trends of correct answers in 248. This tool is capable of automatically identifying the correct and wrong answers to a quiz provided in text format on Facebook posts, with a small rate of false negative cases, and this approach could be applicable to the extraction and analysis of other sources after some adaptation to the information on the Internet. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. AUTOMATIC EXTRACTION OF ROAD MARKINGS FROM MOBILE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    H. Ma

    2017-09-01

    Road markings, critical features in the high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, have important functions in providing guidance and information to moving cars. A mobile laser scanning (MLS) system is an effective way to obtain the 3D information of the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method. The basic assumption of the method is that the road surface is smooth: points with small elevation differences within their neighborhood are considered to be ground points. The ground points are then partitioned into a set of profiles according to the trajectory data. The intensity histogram of the points in each profile is generated to find intensity jumps above a certain threshold, which is inversely related to laser distance. The separated points are used as seed points for region growing based on intensity so as to obtain complete road markings. We use a point cloud template-matching method to refine the road marking candidates by removing noise clusters with a low correlation coefficient. In an experiment with an MLS point set covering about 2 kilometres in a city center, our method provides a promising solution to road markings extraction from MLS data.

  11. Sleep Spindles as an Electrographic Element: Description and Automatic Detection Methods

    Directory of Open Access Journals (Sweden)

    Dorothée Coppieters ’t Wallant

    2016-01-01

    A sleep spindle is a peculiar oscillatory brain pattern which has been associated with a number of sleep functions (isolation from exteroceptive stimuli, memory consolidation) and individual characteristics (intellectual quotient). Oddly enough, the definition of a spindle is both incomplete and restrictive. In consequence, there is no consensus about how to detect spindles. Visual scoring is cumbersome and user-dependent. To analyze spindle activity in a more robust way, automatic sleep spindle detection methods are essential. Various algorithms have been developed, depending on individual research interests, which hampers direct comparisons and meta-analyses. In this review, the sleep spindle is first defined physically and topographically. From this general description, we tentatively extract the main characteristics to be detected and analyzed. A non-exhaustive list of automatic spindle detection methods is provided along with a description of their main processing principles. Finally, we propose a technique to assess the detection methods in a robust and comparable way.

  12. ROADS CENTRE-AXIS EXTRACTION IN AIRBORNE SAR IMAGES: AN APPROACH BASED ON ACTIVE CONTOUR MODEL WITH THE USE OF SEMI-AUTOMATIC SEEDING

    Directory of Open Access Journals (Sweden)

    R. G. Lotte

    2013-05-01

    Research dealing with computational methods for road extraction has increased considerably in the last two decades. This procedure is usually performed on optical or microwave sensor (radar) imagery. Radar images offer advantages when compared to optical ones, for they allow the acquisition of scenes regardless of atmospheric and illumination conditions, besides the possibility of surveying regions where the terrain is hidden by the vegetation canopy, among others. The cartographic mapping based on these images is often accomplished manually, requiring considerable time and effort from the human interpreter. Maps for detecting new roads or updating the existing road network are among the most important cartographic products to date. There are currently many studies involving the extraction of roads by means of automatic or semi-automatic approaches. Each of them presents different solutions for different problems, leaving this task an open scientific issue. One of the preliminary steps for road extraction can be the seeding of points belonging to roads, which can be done using different methods with diverse levels of automation. The identified seed points are interpolated to form the initial road network, and are hence used as an input for an extraction method properly speaking. The present work introduces an innovative hybrid method for the extraction of road centre-axes in a synthetic aperture radar (SAR) airborne image. Initially, candidate points are seeded fully automatically using Self-Organizing Maps (SOM), followed by a pruning process based on specific metrics. The centre-axes are then detected by an open-curve active contour model (snakes). The obtained results were evaluated as to their quality with respect to completeness, correctness and redundancy.

  13. Automatic Fontanel Extraction from Newborns' CT Images Using Variational Level Set

    Science.gov (United States)

    Kazemi, Kamran; Ghadimi, Sona; Lyaghat, Alireza; Tarighati, Alla; Golshaeyan, Narjes; Abrishami-Moghaddam, Hamid; Grebe, Reinhard; Gondary-Jouet, Catherine; Wallois, Fabrice

    A realistic head model is needed for the source localization methods used for the study of epilepsy in neonates applying electroencephalographic (EEG) measurements from the scalp. The earliest models consider the head as a series of concentric spheres, each layer corresponding to a different tissue whose conductivity is assumed to be homogeneous. The results of the source reconstruction depend highly on the electric conductivities of the tissues forming the head. The most used model is constituted of three layers (scalp, skull, and intracranial). Most of the major bones of the neonate's skull are ossified at birth but can slightly move relative to each other. This is due to the sutures, fibrous membranes that at this stage of development connect the already ossified flat bones of the neurocranium. These weak parts of the neurocranium are called fontanels. It is therefore important to enter the exact geometry of the fontanels and flat bones into a source reconstruction, because they show pronounced differences in conductivity. Computed tomography (CT) imaging provides an excellent tool for non-invasive investigation of the skull, which appears in high contrast to all other tissues, while the fontanels can only be identified as an absence of bone, i.e., gaps in the skull formed by the flat bones. Therefore, the aim of this paper is to extract the fontanels from CT images by applying a variational level set method. We applied the proposed method to CT images of five different subjects. The automatically extracted fontanels show good agreement with the manually extracted ones.

  14. A Multi-stage Method to Extract Road from High Resolution Satellite Image

    International Nuclear Information System (INIS)

    Zhijian, Huang; Zhang, Jinfang; Xu, Fanjiang

    2014-01-01

    Extracting road information from high-resolution satellite images is complex and can hardly be achieved by exploiting only one or two modules. This paper presents a multi-stage method consisting of automatic information extraction and semi-automatic post-processing. A multi-scale enhancement algorithm enlarges the contrast between human-made structures and the background. Statistical region merging segments the image into regions, whose skeletons are extracted and pruned according to geometric shape information. Given start and end skeleton points, the shortest skeleton path is constructed as a road centre line. A bidirectional adaptive smoothing technique smoothens the road centre line and adjusts it to the right position. With the smoothed line and its average width, a buffer algorithm reconstructs the road region easily. As seen from the final results, the proposed method eliminates redundant non-road regions, repairs incomplete occlusions, jumps over complete occlusions, and preserves accurate road centre lines and neat road regions. During the whole process, only a few interactions are needed
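
    The skeleton-and-shortest-path step could be sketched as follows with scikit-image, where the segmented road mask, seed points, and off-skeleton penalty are placeholders:

    ```python
    import numpy as np
    from skimage.morphology import skeletonize
    from skimage.graph import route_through_array

    mask = np.load("road_region_mask.npy").astype(bool)  # segmentation (placeholder)
    skeleton = skeletonize(mask)

    cost = np.where(skeleton, 1.0, 1e6)      # make leaving the skeleton expensive
    start, end = (10, 12), (480, 505)        # hypothetical start/end points
    path, total_cost = route_through_array(cost, start, end,
                                           fully_connected=True)
    centre_line = np.array(path)             # road centre line in pixel coordinates
    ```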

  15. THE FACE EXTRACTION METHOD FOR MOBILE DEVICES

    Directory of Open Access Journals (Sweden)

    Viktor Borodin

    2013-10-01

    The problem of automatic face recognition in images is considered. A method for extracting the face ellipse from a photograph and methods for extracting facial feature points are proposed.

  16. Automatic Extraction and Size Distribution of Landslides in Kurdistan Region, NE Iraq

    Directory of Open Access Journals (Sweden)

    Arsalan A. Othman

    2013-05-01

    This study aims to assess the localization and size distribution of landslides using automatic remote sensing techniques in (semi-)arid, non-vegetated, mountainous environments. The study area is located in the Kurdistan region (NE Iraq), within the Zagros orogenic belt, which is characterized by the High Folded Zone (HFZ), the Imbricated Zone and the Zagros Suture Zone (ZSZ). The available reference inventory includes 3,190 landslides mapped from sixty QuickBird scenes using manual delineation. The landslide types involve rock falls, translational slides and slumps, which occurred in different lithological units. Two hundred and ninety of these landslides lie within the ZSZ, representing a cumulated surface of 32 km2. The HFZ contains 2,900 landslides with an overall coverage of about 26 km2. We first analyzed cumulative landslide number-size distributions using the inventory map. We then proposed a very simple and robust algorithm for automatic landslide extraction using specific band ratios selected upon the spectral signatures of bare surfaces, together with a posteriori slope and normalized difference vegetation index (NDVI) thresholds. The index is based on the contrast between landslides and their background, as the landslides have high reflectance in the green and red bands. We applied the slope threshold map to remove low-slope areas, which also have high reflectance in the red and green bands. The algorithm was able to detect ~96% of the recent landslides known from the reference inventory on a test site. The cumulative landslide number-size distribution of the automatically extracted landslides is very similar to the one based on visual mapping. The automatic extraction is therefore suitable for the quantitative analysis of landslides and can thus contribute to the assessment of hazards in similar regions.
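
    The thresholding logic might be sketched as below; all threshold values are invented for illustration, whereas the paper derives its band ratios from the spectral signatures of bare surfaces.

    ```python
    import numpy as np

    red, green, nir = (np.load(b + ".npy") for b in ("red", "green", "nir"))
    slope = np.load("slope_deg.npy")            # slope in degrees, from a DEM

    ndvi = (nir - red) / (nir + red + 1e-9)     # vegetation mask
    bare_ratio = (red + green) / (nir + 1e-9)   # landslides bright in red/green

    # keep bright bare pixels on sufficiently steep terrain (thresholds invented)
    landslide_mask = (bare_ratio > 1.2) & (ndvi < 0.2) & (slope > 15)
    print("candidate landslide pixels:", landslide_mask.sum())
    ```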

  17. Microbial diversity in fecal samples depends on DNA extraction method

    DEFF Research Database (Denmark)

    Mirsepasi, Hengameh; Persson, Søren; Struve, Carsten

    2014-01-01

    BACKGROUND: There are challenges when extracting bacterial DNA from specimens for molecular diagnostics, since fecal samples also contain DNA from human cells and many different substances derived from food, cell residues and medication that can inhibit downstream PCR. The purpose of the study was to evaluate two different DNA extraction methods in order to choose the most efficient method for studying intestinal bacterial diversity using Denaturing Gradient Gel Electrophoresis (DGGE). FINDINGS: In this study, a semi-automatic DNA extraction system (easyMag®, BioMérieux, Marcy l'Etoile, France) ... by easyMag® from the same fecal samples. Furthermore, DNA extracts obtained using easyMag® seemed to contain inhibitory compounds, since in order to perform a successful PCR analysis, the sample had to be diluted at least 10 times. DGGE performed on PCR from DNA extracted by the QIAamp DNA Stool Mini Kit ...

  18. A method for automatically constructing the initial contour of the common carotid artery

    Directory of Open Access Journals (Sweden)

    Yara Omran

    2013-10-01

    In this article we propose a novel method to automatically set the initial contour that is used by the active contours algorithm. The proposed method exploits accumulative intensity profiles to locate points on the arterial wall. The intensity profiles of sections that intersect the artery show distinguishable characteristics that make it possible to recognize them from the profiles of sections that do not intersect the artery walls. The proposed method is applied to ultrasound images of the transverse section of the common carotid artery, but it can be extended for use on images of the longitudinal section. The intensity profiles are classified using the support vector machine algorithm, and the results of different kernels are compared. The features used for the classification are basically statistical features of the intensity profiles. The low echogenicity of the arterial lumen gives the profiles that intersect the artery a special shape that helps in recognizing these profiles from other, general profiles. Outlining the arterial walls may seem a classic task in image processing; however, most of the methods used to outline the artery start from a manual, or semi-automatic, initial contour. The proposed method is therefore valuable in automating the entire process of automatic artery detection and segmentation.

  19. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    Directory of Open Access Journals (Sweden)

    Fasahat Ullah Siddiqui

    2016-07-01

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size or have transparent roofs. The application of a large area threshold prohibits the detection of small buildings, and the use of ground points in generating the building mask prevents the detection of transparent buildings. In addition, existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation; the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of the roof plane whereas trees have a random height change. With such an analysis, buildings over a greater range of sizes, with either a transparent or an opaque roof, can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting, and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state

  20. Automatic structural scene digitalization.

    Science.gov (United States)

    Tang, Rui; Wang, Yuhan; Cosker, Darren; Li, Wenbin

    2017-01-01

    In this paper, we present an automatic system for the analysis and labeling of structural scenes: floor plan drawings in Computer-Aided Design (CAD) format. The proposed system applies a fusion strategy to detect and recognize the various components of CAD floor plans, such as walls, doors, windows and other ambiguous assets. Technically, a general rule-based filter parsing method is first adopted to extract effective information from the original floor plan. Then, an image-processing-based recovery method is employed to correct the information extracted in the first step. Our proposed method is fully automatic and real-time. Such an analysis system provides high accuracy and has been evaluated on a public website that, on average, logs more than ten thousand effective uses per day and reaches a relatively high satisfaction rate.

  1. A New Automatic Method of Urban Areas Mapping in East Asia from LANDSAT Data

    Science.gov (United States)

    XU, R.; Jia, G.

    2012-12-01

    Cities, as places where human activities are concentrated, account for a small percentage of global land cover but are frequently cited as the chief causes of, and solutions to, climate, biogeochemistry, and hydrology processes at local, regional, and global scales. Accompanied by uncontrolled economic growth, urban sprawl has been attributed to the accelerating integration of East Asia into the world economy and has involved dramatic changes in its urban form and land use. To understand the impact of urban extent on biogeophysical processes, reliable mapping of built-up areas is particularly essential in Eastern cities, which are characterized by smaller patches, greater fragility, and a lower fraction of natural cover within the urban landscape than in the West. Segmentation of urban land from other land-cover types using remote sensing imagery can be done by standard classification processes as well as by a logic rule calculation based on spectral indices and their derivations. Efforts to establish such a logic rule with no threshold for automatic mapping are highly worthwhile. Existing automatic methods are reviewed, and then a proposed approach is introduced, including the calculation of a new index and an improved logic rule. Following this, the existing automatic methods as well as the proposed approach are compared in a common context. Afterwards, the proposed approach is tested separately in large-, medium-, and small-scale cities in East Asia selected from different LANDSAT images. The results are promising, as the approach can efficiently segment urban areas, even in the presence of the more complex Eastern cities. Key words: urban extraction; automatic method; logic rule; LANDSAT images; East Asia. (Figure: the proposed approach applied to the extraction of urban built-up areas in Guangzhou, China.)

  2. Template-based automatic extraction of the joint space of foot bones from CT scan

    Science.gov (United States)

    Park, Eunbi; Kim, Taeho; Park, Jinah

    2016-03-01

    Clean bone segmentation is critical in studying joint anatomy for measuring the spacing between bones. However, separation of the coupled bones in CT images is sometimes difficult due to ambiguous gray values coming from noise and the heterogeneity of bone materials, as well as narrowing of the joint space. For fine reconstruction of the individual local boundaries, manual operation is a common practice, where the segmentation remains a bottleneck. In this paper, we present an automatic method for extracting the joint space by applying a graph cut on a Markov random field model to a region of interest (ROI) identified by a template of 3D bone structures. The template includes an encoded articular surface which identifies the tight region of the high-intensity bone boundaries together with the fuzzy joint area of interest. The localized shape information from the template model within the ROI effectively separates the nearby bones. By narrowing the ROI down to a region including only two types of tissue, the object extraction problem is reduced to binary segmentation and solved via graph cut. Based on the shape of the joint space marked by the template, the hard constraint is set by initial seeds which are automatically generated from thresholding and morphological operations. The performance and robustness of the proposed method are evaluated on 12 volumes of ankle CT data, where each volume includes a set of 4 tarsal bones (calcaneus, talus, navicular and cuboid).
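
    A hedged graph-cut sketch using the PyMaxflow library on a 2D slice, with intensity-based terminal weights standing in for the template-derived seeds and a gradient-based smoothness term; the file name, weighting, and label convention are assumptions.

    ```python
    import numpy as np
    import maxflow  # PyMaxflow

    img = np.load("roi_slice.npy").astype(float)   # template-identified ROI slice

    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(img.shape)

    # smoothness term: cuts are cheap where the intensity gradient is high
    gy, gx = np.gradient(img)
    g.add_grid_edges(nodes, weights=50.0 / (1.0 + np.abs(gx) + np.abs(gy)))

    # data term: bright voxels lean towards "bone", dark ones towards joint space
    g.add_grid_tedges(nodes, img, img.max() - img)

    g.maxflow()
    segments = g.get_grid_segments(nodes)   # boolean label map; which terminal
    bone = ~segments                        # means "bone" depends on convention
    ```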

  3. Towards automatic music transcription: note extraction based on independent subspace analysis

    Science.gov (United States)

    Wellhausen, Jens; Hoynck, Michael

    2005-01-01

    Due to the increasing amount of music available electronically, the need for automatic search, retrieval and classification systems for music becomes more and more important. In this paper an algorithm for the automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications, music analysis and music classification. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined using Independent Subspace Analysis to extract the sounding notes. Finally, the results are used to build a MIDI file as a new representation of the piece of music being examined.

  4. An image-based automatic recognition method for the flowering stage of maize

    Science.gov (United States)

    Yu, Zhenghong; Zhou, Huabing; Li, Cuina

    2018-03-01

    In this paper, we propose an image-based approach for automatically recognizing the flowering stage of maize. A modified HOG/SVM detection framework is first adopted to detect the ears of maize. Then, we use low-rank matrix recovery technology to precisely extract the ears at the pixel level. Finally, a new feature called the color gradient histogram is proposed as an indicator to determine the flowering stage. A comparative experiment has been carried out to verify the validity of our method, and the results indicate that our method can meet the demands of practical observation.
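
    The detection stage can be illustrated with a minimal HOG/SVM sketch; the patch preparation, descriptor parameters, and regularization constant are assumptions rather than the paper's actual configuration.

```python
# Sketch of the HOG/SVM detection stage. `pos_patches`/`neg_patches` are
# assumed lists of equally sized grayscale windows prepared elsewhere.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(patches):
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches])

def train_ear_detector(pos_patches, neg_patches):
    X = hog_features(list(pos_patches) + list(neg_patches))
    y = np.r_[np.ones(len(pos_patches)), np.zeros(len(neg_patches))]
    return LinearSVC(C=1.0).fit(X, y)  # slide windows and score at test time
```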

  5. A quality score for coronary artery tree extraction results

    Science.gov (United States)

    Cao, Qing; Broersen, Alexander; Kitslaar, Pieter H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke

    2018-02-01

    Coronary artery trees (CATs) are often extracted to aid the fully automatic analysis of coronary artery disease on coronary computed tomography angiography (CCTA) images. Automatically extracted CATs often miss some arteries or include erroneous extractions, which require manual correction before successive steps can be performed. When analyzing a large number of datasets, a manual quality check of the extraction results is time-consuming. This paper presents a method to automatically calculate quality scores for extracted CATs in terms of the clinical significance of the extracted arteries and the completeness of the extracted CAT. Both right dominant (RD) and left dominant (LD) anatomical statistical models are generated and exploited in developing the quality score. To automatically determine which model should be used, a dominance type detection method is also designed. Experiments are performed on the automatically extracted and manually refined CATs from 42 datasets to evaluate the proposed quality score. In 39 (92.9%) cases, the proposed method measures the quality of the manually refined CATs with higher scores than the automatically extracted CATs. On a 100-point scale, the average scores for the automatically extracted and manually refined CATs are 82.0 (±15.8) and 88.9 (±5.4), respectively. The proposed quality score will assist the automatic processing of CAT extractions for large cohorts which contain both RD and LD cases. To the best of our knowledge, this is the first time that a general quality score for an extracted CAT has been presented.

  6. METHOD OF RARE TERM CONTRASTIVE EXTRACTION FROM NATURAL LANGUAGE TEXTS

    Directory of Open Access Journals (Sweden)

    I. A. Bessmertny

    2017-01-01

    Full Text Available The paper considers the problem of automatic domain term extraction from a document corpus by means of a contrast collection. Existing contrastive methods successfully extract frequently used terms but mishandle rare terms, which can impoverish the resulting thesaurus. Assessment of point-wise mutual information is one of the known statistical methods of term extraction, and it finds rare terms successfully; however, it also extracts many false terms. The proposed approach applies point-wise mutual information to extract rare terms and then filters the candidates by the criterion of joint occurrence with the other candidates. We build a “documents-by-terms” matrix that is subjected to singular value decomposition to eliminate noise and reveal strong interconnections. We then pass to the resulting “terms-by-terms” matrix, which reproduces the strength of interconnections between words. This approach was validated on a document collection from the “Geology” domain, with contrast documents from such topics as “Politics”, “Culture”, “Economics” and “Accidents” drawn from several Internet resources. The experimental results demonstrate the effectiveness of this method for rare term extraction.
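
    The statistical core of the approach can be sketched as follows; the contrast score is a PMI-style relative-frequency ratio, and the rank k is an arbitrary assumption.

```python
# Sketch: a PMI-style contrast score for candidate terms, then SVD denoising
# of the documents-by-terms matrix to obtain term-term association strengths.
# Count inputs are assumed precomputed.
import numpy as np

def contrast_score(n_target, total_target, n_contrast, total_contrast):
    # Log ratio of relative frequencies in target vs. contrast corpora.
    p_t = n_target / total_target
    p_c = max(n_contrast, 1) / total_contrast    # smooth zero counts
    return np.log(p_t / p_c)

def term_term_strength(doc_term, k=50):
    # Rank-k reconstruction suppresses noise; D_k^T D_k links related terms.
    U, s, Vt = np.linalg.svd(doc_term, full_matrices=False)
    D_k = (U[:, :k] * s[:k]) @ Vt[:k]
    return D_k.T @ D_k
```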

  7. AN AUTOMATIC OPTICAL AND SAR IMAGE REGISTRATION METHOD USING ITERATIVE MULTI-LEVEL AND REFINEMENT MODEL

    Directory of Open Access Journals (Sweden)

    C. Xu

    2016-06-01

    Full Text Available Automatic image registration is a vital yet challenging task, particularly for multi-sensor remote sensing images. Given the diversity of the data, it is unlikely that a single registration algorithm or a single image feature will work satisfactorily for all applications. Focusing on this issue, the main contribution of this paper is an automatic optical-to-SAR image registration method using an iterative multi-level and refinement model. Firstly, a multi-level coarse-to-fine registration strategy is presented: visual saliency features are used to acquire a coarse registration, specific area and line features are then used to refine the registration result, and after that, sub-pixel matching is applied using a KNN graph. Secondly, an iterative strategy that involves adaptive parameter adjustment for re-extracting and re-matching features is presented. Considering the fact that almost all feature-based registration methods rely on feature extraction results, the iterative strategy improves the robustness of feature matching, and all parameters can be automatically and adaptively adjusted in the iterative procedure. Thirdly, a uniform level set segmentation model for optical and SAR images is presented to segment conjugate features, and a Voronoi diagram is introduced into Spectral Point Matching (VSPM) to further enhance the matching accuracy between the two sets of matching points. Experimental results show that the proposed method can effectively and robustly generate sufficient, reliable point pairs and provide accurate registration.

  8. An unsupervised text mining method for relation extraction from biomedical literature.

    Directory of Open Access Journals (Sweden)

    Changqin Quan

    Full Text Available The wealth of interaction information provided in biomedical articles motivated the implementation of text mining approaches to automatically extract biomedical relations. This paper presents an unsupervised method based on pattern clustering and sentence parsing to deal with biomedical relation extraction. The pattern clustering algorithm is based on the polynomial kernel method, which identifies interaction words from unlabeled data; these interaction words are then used in relation extraction between entity pairs. Dependency parsing and phrase structure parsing are combined for relation extraction. Based on the semi-supervised KNN algorithm, we extend the proposed unsupervised approach to a semi-supervised approach by combining pattern clustering, dependency parsing and phrase structure parsing rules. We evaluated the approaches on two different tasks: (1) protein-protein interaction extraction, and (2) gene-suicide association extraction. The evaluation of task (1) on the benchmark dataset (AImed corpus) showed that our proposed unsupervised approach outperformed three supervised methods (rule-based, SVM-based, and kernel-based, respectively). The proposed semi-supervised approach is superior to the existing semi-supervised methods. The evaluation of gene-suicide association extraction on a smaller dataset from the Genetic Association Database and a larger dataset from publicly available PubMed showed that the proposed unsupervised and semi-supervised methods achieved much higher F-scores than a co-occurrence-based method.

  9. Research of x-ray automatic image mosaic method

    Science.gov (United States)

    Liu, Bin; Chen, Shunan; Guo, Lianpeng; Xu, Wanpeng

    2013-10-01

    Image mosaicking has wide application value in the field of medical image analysis; it is a technology that spatially matches a series of mutually overlapping images and finally builds a seamless, high-quality image with high resolution and a large field of view. In this paper, grayscale-slicing pseudo-color enhancement was first used to complete the mapping transformation from grayscale to pseudo-color and to extract SIFT features from the images. Then, using the NCC (normalized cross-correlation) similarity measure, the RANSAC (Random Sample Consensus) method was used to exclude false feature-point pairs in order to complete the exact matching of feature points. Finally, seamless mosaicking and color fusion were completed using wavelet multi-decomposition. The experiments show that the method can effectively improve the precision and automation of medical image mosaicking, and it provides an effective technical approach for automatic medical image mosaicking.
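
    The feature-matching stage can be sketched with OpenCV; this illustrative reconstruction collapses the NCC verification into a brute-force matcher for brevity, and all parameter values are assumptions.

```python
# Sketch of the matching stage: SIFT keypoints, brute-force matching, and
# RANSAC homography estimation with OpenCV. Images are grayscale uint8.
import cv2
import numpy as np

def register(img1, img2):
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # RANSAC rejects the false feature-point pairs, as described above.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(img1, H, (img2.shape[1], img2.shape[0]))
```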

  10. Automatic extraction of pavement markings on streets from point cloud data of mobile LiDAR

    International Nuclear Information System (INIS)

    Gao, Yang; Zhong, Ruofei; Liu, Xianlin; Tang, Tao; Wang, Liuzhao

    2017-01-01

    Pavement markings provide an important foundation as they help to keep road users safe. Accurate and comprehensive information about pavement markings assists road regulators and is useful in developing driverless technology. Mobile light detection and ranging (LiDAR) systems offer new opportunities to collect and process accurate pavement marking information. Mobile LiDAR systems can directly obtain the three-dimensional (3D) coordinates of an object, thus defining the spatial data and intensity of 3D objects in a fast and efficient way. The RGB attribute information of data points can be obtained from the panoramic camera in the system. In this paper, we present a novel method to automatically extract pavement markings using multiple attributes of the laser scanning point cloud from mobile LiDAR data. The method utilizes a differential grayscale of RGB color, laser pulse reflection intensity, and differential intensity to identify and extract pavement markings. We utilized point cloud density to remove noise and used morphological operations to eliminate errors. In application, we tested our method on different sections of roads in Beijing, China, and Buffalo, NY, USA. The results indicated that both correctness (p) and completeness (r) were higher than 90%. The method of this research can be applied to extract pavement markings from the huge point clouds produced by mobile LiDAR. (paper)

  11. Automatic extraction of pavement markings on streets from point cloud data of mobile LiDAR

    Science.gov (United States)

    Gao, Yang; Zhong, Ruofei; Tang, Tao; Wang, Liuzhao; Liu, Xianlin

    2017-08-01

    Pavement markings provide an important foundation as they help to keep road users safe. Accurate and comprehensive information about pavement markings assists road regulators and is useful in developing driverless technology. Mobile light detection and ranging (LiDAR) systems offer new opportunities to collect and process accurate pavement marking information. Mobile LiDAR systems can directly obtain the three-dimensional (3D) coordinates of an object, thus defining the spatial data and intensity of 3D objects in a fast and efficient way. The RGB attribute information of data points can be obtained from the panoramic camera in the system. In this paper, we present a novel method to automatically extract pavement markings using multiple attributes of the laser scanning point cloud from mobile LiDAR data. The method utilizes a differential grayscale of RGB color, laser pulse reflection intensity, and differential intensity to identify and extract pavement markings. We utilized point cloud density to remove noise and used morphological operations to eliminate errors. In application, we tested our method on different sections of roads in Beijing, China, and Buffalo, NY, USA. The results indicated that both correctness (p) and completeness (r) were higher than 90%. The method of this research can be applied to extract pavement markings from the huge point clouds produced by mobile LiDAR.
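
    As a rough illustration of the intensity-plus-morphology idea (not the authors' pipeline), one might rasterize high-intensity returns onto a ground grid and clean the result morphologically; the array layout, cell size, and threshold are assumptions.

```python
# Sketch: keep high-intensity returns (markings reflect strongly), rasterize
# them onto a 2D ground grid, and remove sparse noise by morphological opening.
import numpy as np
from scipy import ndimage

def extract_markings(xyz, intensity, cell=0.05, thresh=0.7):
    keep = intensity > thresh * intensity.max()
    pts = xyz[keep, :2]                              # ground-plane coordinates
    ij = np.floor((pts - pts.min(axis=0)) / cell).astype(int)
    grid = np.zeros(tuple(ij.max(axis=0) + 1), dtype=bool)
    grid[ij[:, 0], ij[:, 1]] = True
    # Opening removes isolated cells (noise with low point density).
    return ndimage.binary_opening(grid, structure=np.ones((3, 3), bool))
```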

  12. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    Directory of Open Access Journals (Sweden)

    Esra SARAÇ

    2016-12-01

    Full Text Available Cyberbullying is defined as an aggressive, intentional action against a defenseless person carried out using the Internet or other electronic content. Researchers have found that many bullying cases have tragically ended in suicide; hence automatic detection of cyberbullying has become important. In this study we show the effects of the feature extraction, feature selection, and classification methods used on the performance of automatic cyberbullying detection. The experiments are performed on the FormSpring.me dataset, and the effects of preprocessing methods; several classifiers such as C4.5, Naïve Bayes, kNN, and SVM; and information gain and chi-square feature selection methods are investigated. Experimental results indicate that the best classification results are obtained when alphabetic tokenization, no stemming, and no stopword removal are applied. Using feature selection also improves cyberbullying detection performance. When the classifiers are compared, C4.5 performs best on the dataset used.
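
    The evaluated setup maps naturally onto a scikit-learn pipeline; the sketch below uses Naïve Bayes as a stand-in for the compared classifiers, and the number of selected features is an arbitrary assumption.

```python
# Sketch of the pipeline: token counts, chi-square feature selection, and a
# classifier. Dataset loading is assumed to happen elsewhere.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

pipeline = Pipeline([
    ("vect", CountVectorizer(lowercase=True, analyzer="word")),  # tokenization
    ("select", SelectKBest(chi2, k=500)),  # keep the k most informative terms
    ("clf", MultinomialNB()),              # stand-in for C4.5/kNN/SVM
])
# pipeline.fit(train_posts, train_labels); pipeline.score(test_posts, test_labels)
```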

  13. Automatization of laboratory extraction installation intended for investigations in the field of reprocessing of spent fuels

    International Nuclear Information System (INIS)

    Vznuzdaev, E.A.; Galkin, B.Ya.; Gofman, F.Eh.

    1981-01-01

    The paper describes an automated stand for solving the problem of optimal control of the technological extraction process in spent fuel reprocessing, by means of an automated control system based on computer technology. Preliminary experiments conducted on the stand with spent fuel from a WWER-440 reactor have shown the high efficiency of automation and the possibility of conducting technological investigations in a short period of time, yielding much information that cannot be obtained with a conventional organization of the work [ru]

  14. Feature extraction and classification in automatic weld seam radioscopy

    International Nuclear Information System (INIS)

    Heindoerfer, F.; Pohle, R.

    1994-01-01

    The investigations conducted have shown that automatic feature extraction and classification procedures permit the identification of weld seam flaws. Within this context the favored learning fuzzy classifier represents a very good alternative to conventional classifiers. The results have also made clear that improvements, mainly in the field of image registration, are still possible by increasing the resolution of the radioscopy system: an almost error-free classification is conceivable only if the flaw is segmented correctly, i.e. in its full size, which requires improved detail recognizability and sufficient contrast difference. (orig./MM) [de]

  15. Automatic extraction of gene ontology annotation and its correlation with clusters in protein networks

    Directory of Open Access Journals (Sweden)

    Mazo Ilya

    2007-07-01

    Full Text Available Abstract Background Uncovering cellular roles of a protein is a task of tremendous importance and complexity that requires dedicated experimental work as well as often sophisticated data mining and processing tools. Protein functions, often referred to as its annotations, are believed to manifest themselves through topology of the networks of inter-proteins interactions. In particular, there is a growing body of evidence that proteins performing the same function are more likely to interact with each other than with proteins with other functions. However, since functional annotation and protein network topology are often studied separately, the direct relationship between them has not been comprehensively demonstrated. In addition to having the general biological significance, such demonstration would further validate the data extraction and processing methods used to compose protein annotation and protein-protein interactions datasets. Results We developed a method for automatic extraction of protein functional annotation from scientific text based on the Natural Language Processing (NLP technology. For the protein annotation extracted from the entire PubMed, we evaluated the precision and recall rates, and compared the performance of the automatic extraction technology to that of manual curation used in public Gene Ontology (GO annotation. In the second part of our presentation, we reported a large-scale investigation into the correspondence between communities in the literature-based protein networks and GO annotation groups of functionally related proteins. We found a comprehensive two-way match: proteins within biological annotation groups form significantly denser linked network clusters than expected by chance and, conversely, densely linked network communities exhibit a pronounced non-random overlap with GO groups. We also expanded the publicly available GO biological process annotation using the relations extracted by our NLP technology

  16. Easy methods for extracting individual regression slopes: Comparing SPSS, R, and Excel

    Directory of Open Access Journals (Sweden)

    Roland Pfister

    2013-10-01

    Full Text Available Three different methods for extracting coefficients of linear regression analyses are presented. The focus is on automatic and easy-to-use approaches for common statistical packages: SPSS, R, and MS Excel / LibreOffice Calc. Hands-on examples are included for each analysis, followed by a brief description of how a subsequent regression coefficient analysis is performed.
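
    For comparison, the same idea can be sketched in Python: fit one first-order regression per participant and collect the slopes. The column names are assumptions.

```python
# Per-participant slopes; [0] of the polyfit result picks the slope term.
import numpy as np
import pandas as pd

def individual_slopes(df, subject="subject", x="x", y="y"):
    return df.groupby(subject).apply(
        lambda g: np.polyfit(g[x], g[y], deg=1)[0])
```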

  17. Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition

    Science.gov (United States)

    Kim, Jonghwa; André, Elisabeth

    This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. For emotion recognition, little attention has been paid so far to physiological signals compared to audio-visual emotion channels such as facial expression or speech. All essential stages of an automatic recognition system using biosignals are discussed, from recording a physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best extracted features are specified in detail and their effectiveness is demonstrated by emotion recognition results.

  18. Towards Automatic Music Transcription: Extraction of MIDI-Data out of Polyphonic Piano Music

    Directory of Open Access Journals (Sweden)

    Jens Wellhausen

    2005-06-01

    Full Text Available Driven by the increasing amount of music available electronically, the need for automatic search and retrieval systems for music becomes more and more important. In this paper an algorithm for automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications and music analysis. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined to extract the notes played, and an algorithm for chord separation based on Independent Subspace Analysis is presented. Finally, the results are used to build a MIDI file.

  19. Analysis of Technique to Extract Data from the Web for Improved Performance

    Science.gov (United States)

    Gupta, Neena; Singh, Manish

    2010-11-01

    The World Wide Web is rapidly guiding the world into an amazing electronic era, where everyone can publish anything in electronic form and extract almost any information. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, extracts records from HTML files automatically. Ontologies can achieve a high degree of accuracy in data extraction. We analyze a method for data extraction, OBDE (Ontology-Based Data Extraction), which automatically extracts query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and query result pages from different web sites within the same domain. Then, the constructed domain ontology is used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.

  20. Historical Patterns Based on Automatically Extracted Data: the Case of Classical Composers

    DEFF Research Database (Denmark)

    Borowiecki, Karol; O'Hagan, John

    2012-01-01

    The purpose of this paper is to demonstrate the potential for generating interesting aggregate data on certain aspects of the lives of thousands of composers, and indeed other creative groups, from large on-line dictionaries, and to be able to do so relatively quickly. A purpose-built Java application that automatically extracts and processes information was developed to generate data on the birth location, occupations and importance (using word count methods) of over 12,000 composers over six centuries. Quantitative measures of the relative importance of different types of music and of the different music instruments over the centuries were also generated. Finally, quantitative indicators of the importance of different cities over the different centuries in the lives of these composers are constructed. A range of interesting findings emerge in relation to all of these aspects of the lives...

  1. AN ITERATIVE SEGMENTATION METHOD FOR REGION OF INTEREST EXTRACTION

    Directory of Open Access Journals (Sweden)

    Volkan CETIN

    2013-01-01

    Full Text Available In this paper, a method is presented for applications including mammographic image segmentation and region of interest extraction. Segmentation is a very critical and difficult stage to accomplish in computer-aided detection systems. Although the presented segmentation method was developed for mammographic images, it can be used for any medical image that shares the same statistical characteristics as mammograms. Fundamentally, the method consists of iterative automatic thresholding and masking operations which are applied to the original or enhanced mammograms. The effect of image enhancement on the segmentation process was also observed; a version of histogram equalization was applied to the images for enhancement. Finally, the results show that the enhanced version of the proposed segmentation method is preferable because of its better success rate.
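
    A hedged sketch of the iterative thresholding-and-masking idea: threshold within the current mask, shrink the mask, and repeat until it stabilizes. The enhancement step and the iteration cap are assumptions.

```python
# Sketch: iterative Otsu thresholding restricted to the current mask.
import numpy as np
from skimage.exposure import equalize_hist
from skimage.filters import threshold_otsu

def iterative_roi(img, max_iter=5, enhance=True):
    work = equalize_hist(img) if enhance else img.astype(float)
    mask = np.ones(work.shape, dtype=bool)
    for _ in range(max_iter):
        t = threshold_otsu(work[mask])
        new_mask = mask & (work > t)
        if new_mask.sum() in (0, mask.sum()):
            break                          # converged (or nothing left)
        mask = new_mask
    return mask
```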

  2. Automatic extraction of via in the CT image of PCB

    Science.gov (United States)

    Liu, Xifeng; Hu, Yuwei

    2018-04-01

    In modern industry, the nondestructive testing of printed circuit boards (PCBs) can effectively prevent system failure and is becoming more and more important. In order to detect vias in a PCB automatically, accurately, and reliably from CT images, a novel algorithm for via extraction is designed based on a weighted stack combined with the morphological characteristics of vias. Slice data along the vertical direction of the PCB are superimposed to enhance the via targets. The Otsu algorithm is used to segment the slice images; Otsu thresholding of gray-level images is efficient for separating an image into two classes where two fairly distinct classes exist in the image. The Randomized Hough Transform is used to locate the via regions in the segmented binary image. Then 3D reconstruction of the vias from the sequence of slice images is performed by volume rendering. The accuracy of via positioning and detection from CT images of a PCB was demonstrated by the proposed algorithm, and the method was found to be accurate and stable for detecting vias in three dimensions.

  3. The automatic extraction of pitch perturbation using microcomputers: some methodological considerations.

    Science.gov (United States)

    Deem, J F; Manning, W H; Knack, J V; Matesich, J S

    1989-09-01

    A program for the automatic extraction of jitter (PAEJ) was developed for the clinical measurement of pitch perturbations using a microcomputer. The program currently includes 12 implementations of an algorithm for marking the boundary criteria for a fundamental period of vocal fold vibration. The relative sensitivity of these extraction procedures for identifying the pitch period was compared using sine waves. Data obtained to date provide information for each procedure concerning the effects of waveform peakedness and slope, sample duration in cycles, noise level of the analysis system with both direct and tape recorded input, and the influence of interpolation. Zero crossing extraction procedures provided lower jitter values regardless of sine wave frequency or sample duration. The procedures making use of positive- or negative-going zero crossings with interpolation provided the lowest measures of jitter with the sine wave stimuli. Pilot data obtained with normal-speaking adults indicated that jitter measures varied as a function of the speaker, vowel, and sample duration.
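
    One common jitter definition, matching the positive-going zero-crossing variant described above, can be sketched as follows; the paper's 12 implementations differ in their period-boundary criteria, so this is only an illustration.

```python
# Sketch of relative jitter from positive-going zero crossings.
import numpy as np

def jitter(signal, fs):
    s = signal - signal.mean()
    crossings = np.where((s[:-1] < 0) & (s[1:] >= 0))[0]  # period boundaries
    periods = np.diff(crossings) / fs                     # seconds per cycle
    # Mean absolute period-to-period perturbation, relative to mean period.
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)
```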

  4. Deep Learning Methods for Underwater Target Feature Extraction and Recognition

    Directory of Open Access Journals (Sweden)

    Gang Hu

    2018-01-01

    Full Text Available The classification and recognition of underwater acoustic signals have always been an important research topic in the field of underwater acoustic signal processing. Currently, the wavelet transform, the Hilbert-Huang transform, and Mel-frequency cepstral coefficients are used as methods of underwater acoustic signal feature extraction. In this paper, a method for feature extraction and identification of underwater noise data based on a CNN and an ELM is proposed: an automatic feature extraction method for underwater acoustic signals using a deep convolutional network, with an underwater target recognition classifier based on an extreme learning machine. Although convolutional neural networks can perform both feature extraction and classification, their classification function mainly relies on a fully connected layer trained by gradient descent, whose generalization ability is limited and suboptimal, so an extreme learning machine (ELM) is used in the classification stage. Firstly, the CNN learns deep and robust features, and the fully connected layers are then removed. The ELM, fed with the CNN features, is then used as the classifier to perform the classification. Experiments on an actual dataset of civil ships achieved a 93.04% recognition rate; compared with traditional Mel-frequency cepstral coefficients and Hilbert-Huang features, the recognition rate is greatly improved.
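
    The ELM classification stage is simple enough to sketch directly: a random, untrained hidden layer followed by closed-form least-squares output weights. The inputs are generic feature vectors standing in for the CNN features, and the hidden-layer size is an assumption.

```python
# Minimal ELM classifier: random hidden layer, least-squares output weights.
import numpy as np

class ELM:
    def __init__(self, n_hidden=512, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y_onehot):                     # y_onehot: (n, n_classes)
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)            # random projection
        self.beta = np.linalg.pinv(H) @ y_onehot    # closed-form solution
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)
```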

  5. Extended morphological processing: a practical method for automatic spot detection of biological markers from microscopic images.

    Science.gov (United States)

    Kimori, Yoshitaka; Baba, Norio; Morone, Nobuhiro

    2010-07-08

    A reliable extraction technique for resolving multiple spots in light or electron microscopic images is essential in investigations of the spatial distribution and dynamics of specific proteins inside cells and tissues. Currently, automatic spot extraction and characterization in complex microscopic images poses many challenges to conventional image processing methods. A new method to extract closely located, small target spots from biological images is proposed. This method starts with a simple but practical operation based on the extended morphological top-hat transformation to subtract an uneven background. The core of our novel approach is the following: first, the original image is rotated in an arbitrary direction and each rotated image is opened with a single straight line-segment structuring element. Second, the opened images are unified and then subtracted from the original image. To evaluate these procedures, model images of simulated spots with closely located targets were created and the efficacy of our method was compared to that of conventional morphological filtering methods. The results showed the better performance of our method. Spots in real microscope images can be quantified, confirming that the method is applicable in practice. Our method achieved effective spot extraction under various image conditions, including aggregated target spots, poor signal-to-noise ratio, and large variations in the background intensity. Furthermore, it has no restrictions with respect to the shape of the extracted spots. These features allow the method's broad application in biological and biomedical image analysis.
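
    The rotated line-opening step can be sketched as follows; rotating the image rather than the structuring element is an implementation convenience assumed here, and the element length and number of angles are arbitrary.

```python
# Sketch: open the image with a line structuring element at several
# orientations (by rotating the image), unify, subtract from the original.
import numpy as np
from scipy import ndimage

def rotated_line_tophat(img, length=15, n_angles=8):
    img = img.astype(float)
    union = np.zeros_like(img)
    for ang in np.linspace(0, 180, n_angles, endpoint=False):
        rot = ndimage.rotate(img, ang, reshape=False, mode="nearest")
        opened = ndimage.grey_opening(rot, size=(1, length))  # line opening
        back = ndimage.rotate(opened, -ang, reshape=False, mode="nearest")
        union = np.maximum(union, back)  # union of all directional openings
    return img - np.minimum(img, union)  # residual: spots smaller than the line
```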

  6. Automatic color preference correction for color reproduction

    Science.gov (United States)

    Tsukada, Masato; Funayama, Chisato; Tajima, Johji

    2000-12-01

    The reproduction of natural objects in color images has attracted a great deal of attention. Reproducing more pleasing colors of natural objects is one method available to improve image quality. We developed an automatic color correction method to maintain preferred color reproduction for three significant categories: facial skin color, green grass and blue sky. In this method, a representative color in an object area to be corrected is automatically extracted from an input image, and a set of color correction parameters is selected depending on the representative color. The improvement in image quality for reproductions of natural images was more than 93 percent in subjective experiments. These results show the usefulness of our automatic color correction method for the reproduction of preferred colors.

  7. Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video

    Science.gov (United States)

    Yeo, Boon-Lock; Liu, Bede

    1996-03-01

    Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.

  8. Automatic Rail Extraction and Clearance Check with a Point Cloud Captured by MLS in a Railway

    Science.gov (United States)

    Niina, Y.; Honma, R.; Honma, Y.; Kondo, K.; Tsuji, K.; Hiramatsu, T.; Oketani, E.

    2018-05-01

    Recently, MLS (Mobile Laser Scanning) has been successfully used in road maintenance. In this paper, we present the application of MLS for the inspection of clearance along railway tracks of the West Japan Railway Company. Point clouds around the track are captured by MLS mounted on a bogie, and the rail position can be determined by matching the shape of an ideal rail head to the point cloud with the ICP algorithm. A clearance check is executed automatically with a virtual clearance model laid along the extracted rail. As a result of the evaluation, the error of the extracted rail positions is less than 3 mm. With respect to the automatic clearance check, objects inside the clearance and those related to the contact line were successfully detected, as verified by visual confirmation.

  9. Forensic Automatic Speaker Recognition Based on Likelihood Ratio Using Acoustic-phonetic Features Measured Automatically

    Directory of Open Access Journals (Sweden)

    Huapeng Wang

    2015-01-01

    Full Text Available Forensic speaker recognition is experiencing a remarkable paradigm shift in terms of the evaluation framework and presentation of voice evidence. This paper proposes a new method of forensic automatic speaker recognition using the likelihood ratio framework to quantify the strength of voice evidence. The proposed method uses a reference database to calculate the within- and between-speaker variability. Some acoustic-phonetic features are extracted automatically using the software VoiceSauce. The effectiveness of the approach was tested using two Mandarin databases: a mobile telephone database and a landline database. The experimental results indicate that these acoustic-phonetic features do have some discriminating potential and are worth using in discrimination. The automatic acoustic-phonetic features have acceptable discriminative performance and can provide more reliable results in evidence analysis when fused with other kinds of voice features.

  10. A new generic method for the semi-automatic extraction of river and road networks in low and mid-resolution satellite images

    Energy Technology Data Exchange (ETDEWEB)

    Grazzini, Jacopo [Los Alamos National Laboratory]; Dillard, Scott [PNNL]; Soille, Pierre [EC JRC]

    2010-10-21

    This paper addresses the problem of semi-automatic extraction of road or hydrographic networks in satellite images. For that purpose, we propose an approach combining concepts arising from mathematical morphology and hydrology. The method exploits both geometrical and topological characteristics of rivers/roads and their tributaries in order to reconstruct the complete networks. It assumes that the images satisfy the following two general assumptions, which are the minimum conditions for a road/river network to be identifiable and are usually verified in low- to mid-resolution satellite images: (i) visual constraint: most pixels composing the network have a similar spectral signature that is distinguishable from most of the surrounding areas; (ii) geometric constraint: a line is a region that is relatively long and narrow compared with other objects in the image. While this approach fully exploits the local (roads/rivers are modeled as elongated regions with a smooth spectral signature in the image and a maximum width) and global (they are structured like a tree) characteristics of the networks, further directional information about the image structures is incorporated. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Finally, geodesic propagation from a given network seed under this metric is combined with hydrological operators for overland flow simulation to extract the paths which contain most line evidence and identify them with the target network.
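
    The directional ingredient can be illustrated with scikit-image's structure tensor; the coherence measure below is one common way of scoring line-like pixels and is an assumption, not the authors' exact anisotropic metric.

```python
# Sketch: eigen-decomposition of the gradient structure tensor, yielding a
# coherence score that is high on elongated (line-like) structures.
import numpy as np
from skimage.feature import structure_tensor, structure_tensor_eigenvalues

def line_coherence(img, sigma=2.0):
    A = structure_tensor(img, sigma=sigma)       # tensor components
    l1, l2 = structure_tensor_eigenvalues(A)     # sorted, l1 >= l2
    return ((l1 - l2) / (l1 + l2 + 1e-12)) ** 2  # ~1 on elongated structures
```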

  11. An Automatic Detection Method of Nanocomposite Film Element Based on GLCM and Adaboost M1

    Directory of Open Access Journals (Sweden)

    Hai Guo

    2015-01-01

    Full Text Available An automatic detection model adopting pattern recognition technology is proposed in this paper; it can measure the elements of nanocomposite films. Features of the gray-level co-occurrence matrix (GLCM) are extracted from different types of surface morphology images of the film; after that, dimensionality reduction is handled by principal component analysis (PCA). It is then possible to identify the film element using the AdaBoost M1 algorithm as a strong classifier built from ten decision-tree classifiers. The experimental results show that this model is superior to SVM (support vector machine), NN, and BayesNet models. The proposed method can be widely applied to the automatic detection not only of nanocomposite film elements but also of other nanocomposite material elements.
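
    A compact sketch of the GLCM-feature / PCA / AdaBoost M1 chain using scikit-image and scikit-learn; the distances, angles, tree depth, and component count are illustrative assumptions.

```python
# Sketch of the GLCM / PCA / AdaBoost M1 chain. `patches` are uint8
# grayscale arrays; `labels` are the corresponding element classes.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def glcm_features(patch):
    g = graycomatrix(patch, distances=[1], angles=[0, np.pi / 4, np.pi / 2],
                     levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(g, p).ravel() for p in props])

def train(patches, labels):
    X = PCA(n_components=8).fit_transform(
        np.array([glcm_features(p) for p in patches]))
    clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
                             n_estimators=10)   # ten decision-tree learners
    return clf.fit(X, labels)
```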

  12. Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation.

    Science.gov (United States)

    Pereira, Sérgio; Meier, Raphael; McKinley, Richard; Wiest, Roland; Alves, Victor; Silva, Carlos A; Reyes, Mauricio

    2018-02-01

    Machine learning systems are achieving better performances at the cost of becoming increasingly complex. However, because of that, they become less interpretable, which may cause some distrust by the end-user of the system. This is especially important as these systems are pervasively being introduced to critical domains, such as the medical field. Representation Learning techniques are general methods for automatic feature computation. Nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning, and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding if the system learned the relevant relations in the data correctly, while the latter is focused on predictions performed at the voxel and patient level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology in brain tumor segmentation and penumbra estimation in ischemic stroke lesions. We show the ability of the proposed methodology to unveil information regarding relationships between imaging modalities and extracted features and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images. Copyright © 2017 Elsevier B.V. All rights reserved.
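
    The RBM-plus-Random-Forest pairing itself can be sketched from scikit-learn building blocks (the interpretability analysis, which is the paper's actual contribution, is not shown); the hyperparameters are placeholders.

```python
# Sketch of the RBM + Random Forest pairing with scikit-learn.
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

model = Pipeline([
    # BernoulliRBM expects inputs scaled to [0, 1] (e.g., intensity patches).
    ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20)),
    ("rf", RandomForestClassifier(n_estimators=200)),
])
# model.fit(X_train, y_train); model.predict(X_test)
```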

  13. Automatic Morphological Sieving: Comparison between Different Methods, Application to DNA Ploidy Measurements

    Directory of Open Access Journals (Sweden)

    Christophe Boudry

    1999-01-01

    Full Text Available The aim of the present study is to propose alternative automatic methods to the time-consuming interactive sorting of elements for DNA ploidy measurements. One archival brain tumour and two archival breast carcinomas were studied, corresponding to 7120 elements (3764 nuclei, 3356 debris and aggregates). Three automatic classification methods were tested to eliminate debris and aggregates from DNA ploidy measurements: mathematical morphology (MM), multiparametric analysis (MA) and neural network (NN). Performance was evaluated by reference to interactive sorting. The percentages of debris and aggregates automatically removed reach 63, 75 and 85% for the MM, MA and NN methods, respectively, with false positive rates of 6, 21 and 25%. Information about DNA ploidy abnormalities was globally preserved after automatic elimination of debris and aggregates by the MM and MA methods, as opposed to the NN method, showing that automatic classification methods can offer alternatives to tedious interactive elimination of debris and aggregates for DNA ploidy measurements of archival tumours.

  14. Automatic three-dimensional rib centerline extraction from CT scans for enhanced visualization and anatomical context

    Science.gov (United States)

    Ramakrishnan, Sowmya; Alvino, Christopher; Grady, Leo; Kiraly, Atilla

    2011-03-01

    We present a complete automatic system to extract 3D centerlines of ribs from thoracic CT scans. Our rib centerline system determines the positional information for the rib cage consisting of extracted rib centerlines, spinal canal centerline, pairing and labeling of ribs. We show an application of this output to produce an enhanced visualization of the rib cage by the method of Kiraly et al., in which the ribs are digitally unfolded along their centerlines. The centerline extraction consists of three stages: (a) pre-trace processing for rib localization, (b) rib centerline tracing, and (c) post-trace processing to merge the rib traces. Then we classify ribs from non-ribs and determine anatomical rib labeling. Our novel centerline tracing technique uses the Random Walker algorithm to segment the structural boundary of the rib in successive 2D cross sections orthogonal to the longitudinal direction of the ribs. Then the rib centerline is progressively traced along the rib using a 3D Kalman filter. The rib centerline extraction framework was evaluated on 149 CT datasets with varying slice spacing, dose, and under a variety of reconstruction kernels. The results of the evaluation are presented. The extraction takes approximately 20 seconds on a modern radiology workstation and performs robustly even in the presence of partial volume effects or rib pathologies such as bone metastases or fractures, making the system suitable for assisting clinicians in expediting routine rib reading for oncology and trauma applications.

  15. Statistical Analysis of Automatic Seed Word Acquisition to Improve Harmful Expression Extraction in Cyberbullying Detection

    Directory of Open Access Journals (Sweden)

    Suzuha Hatakeyama

    2016-04-01

    Full Text Available We study the social problem of cyberbullying, defined as a new form of bullying that takes place in the Internet space. This paper proposes a method for the automatic acquisition of seed words to improve the performance of the original method for cyberbullying detection by Nitta et al. [1]. We conducted an experiment under exactly the same settings and found that the method, based on a Web mining technique, has lost over 30 percentage points of its performance since being proposed in 2013. We therefore hypothesize about the reasons for the decrease in performance and propose a number of improvements, from which we experimentally choose the best one. Furthermore, we collect several seed word sets using different approaches and evaluate their precision. We found that the influential factor in the extraction of harmful expressions is not the number of seed words, but the way the seed words are collected and filtered.

  16. A Survey of Automatic Protocol Reverse Engineering Approaches, Methods, and Tools on the Inputs and Outputs View

    OpenAIRE

    Baraka D. Sija; Young-Hoon Goo; Kyu-Seok Shim; Huru Hasanova; Myung-Sup Kim

    2018-01-01

    A network protocol defines rules that control communications between two or more machines on the Internet, whereas Automatic Protocol Reverse Engineering (APRE) defines the way of extracting the structure of a network protocol without accessing its specifications. Enough knowledge on undocumented protocols is essential for security purposes, network policy implementation, and management of network resources. This paper reviews and analyzes a total of 39 approaches, methods, and tools towards ...

  17. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories.

    Science.gov (United States)

    Yang, Wei; Ai, Tinghua; Lu, Wei

    2018-04-19

    Crowdsourced trajectory data is an important approach for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively to ensure there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors using the area of the Voronoi cell and the length of the triangle edge; the road boundary detection model is then established by integrating the boundary descriptors and trajectory movement features (e.g., direction) via the DT. Third, the boundary detection model is used to detect the road boundary from the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting the road boundary from low-frequency GPS traces, multi-type road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information proved to be of higher quality.

  18. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories

    Directory of Open Access Journals (Sweden)

    Wei Yang

    2018-04-01

    Full Text Available Crowdsourced trajectory data is an important approach for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively to ensure there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors using the area of the Voronoi cell and the length of the triangle edge; the road boundary detection model is then established by integrating the boundary descriptors and trajectory movement features (e.g., direction) via the DT. Third, the boundary detection model is used to detect the road boundary from the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting the road boundary from low-frequency GPS traces, multi-type road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information proved to be of higher quality.
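
    The geometric descriptors can be sketched with SciPy: the longest Delaunay edge and the Voronoi cell area per tracking point. The handling of infinite boundary cells is simplified, and the function is an assumption, not the authors' code.

```python
# Sketch: per-point Delaunay/Voronoi descriptors over 2D tracking points.
import numpy as np
from scipy.spatial import ConvexHull, Delaunay, Voronoi

def boundary_descriptors(points):
    tri, vor = Delaunay(points), Voronoi(points)
    # Longest Delaunay edge incident to each point.
    longest = np.zeros(len(points))
    for s in tri.simplices:
        for a, b in ((s[0], s[1]), (s[1], s[2]), (s[0], s[2])):
            d = np.linalg.norm(points[a] - points[b])
            longest[a] = max(longest[a], d)
            longest[b] = max(longest[b], d)
    # Voronoi cell area per point (finite cells only; others stay NaN).
    area = np.full(len(points), np.nan)
    for i, region in enumerate(vor.point_region):
        verts = vor.regions[region]
        if len(verts) > 2 and -1 not in verts:
            area[i] = ConvexHull(vor.vertices[verts]).volume  # area in 2D
    return longest, area
```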

  19. Oocytes Polar Body Detection for Automatic Enucleation

    Directory of Open Access Journals (Sweden)

    Di Chen

    2016-02-01

    Full Text Available Enucleation is a crucial step in cloning. In order to achieve automatic blind enucleation, the polar body of the oocyte must be detected automatically. Conventional polar body detection approaches have a low success rate or low efficiency. We propose a polar body detection method based on machine learning in this paper. On one hand, an improved Histogram of Oriented Gradients (HOG) algorithm is employed to extract features of polar body images, which increases the success rate. On the other hand, a position prediction method is put forward to narrow the search range for the polar body, which improves efficiency. Experimental results show that the success rate is 96% for various types of polar bodies. Furthermore, the method was applied to an enucleation experiment and improved the degree of automation of enucleation.

  20. Analysis of Feature Extraction Methods for Speaker Dependent Speech Recognition

    Directory of Open Access Journals (Sweden)

    Gurpreet Kaur

    2017-02-01

    Full Text Available Speech recognition is about what is being said, irrespective of who is saying it. Speech recognition is a growing field, and major progress is taking place in the technology of automatic speech recognition (ASR). Still, there are many barriers in this field in terms of recognition rate, background noise, speaker variability, speaking rate, accent, etc. The speech recognition rate mainly depends on the selection of features and feature extraction methods. This paper outlines feature extraction techniques for speaker-dependent speech recognition of isolated words. A brief survey of different feature extraction techniques, such as Mel-Frequency Cepstral Coefficients (MFCC), Linear Predictive Coding Coefficients (LPCC), Perceptual Linear Prediction (PLP), and Relative Spectra Perceptual Linear Prediction (RASTA-PLP) analysis, is presented and evaluated. Speech recognition has various applications from daily use to commercial use. We have built a speaker-dependent system, and such a system can be useful in many areas, for example controlling a patient's vehicle using simple commands.

  1. Improvement in the performance of CAD for the Alzheimer-type dementia based on automatic extraction of temporal lobe from coronal MR images

    International Nuclear Information System (INIS)

    Kaeriyama, Tomoharu; Kodama, Naoki; Kaneko, Tomoyuki; Shimada, Tetsuo; Tanaka, Hiroyuki; Takeda, Ai; Fukumoto, Ichiro

    2004-01-01

    In this study, we extracted whole brain and temporal lobe images from MR images (26 healthy elderly controls and 34 Alzheimer-type dementia patients) by means of binarization, mask processing, template matching, Hough transformation, boundary tracing, etc. We assessed the extraction accuracy by comparing the extracted images to images extracted by a radiological technologist. The results of the assessment, in terms of agreement rate, were: whole brain images 91.3±4.3%, right temporal lobe 83.3±6.9%, and left temporal lobe 83.7±7.6%. Furthermore, discriminant analysis using 6 textural features demonstrated a sensitivity and specificity of 100% when the healthy elderly controls were compared to the Alzheimer-type dementia patients. Our research showed the possibility of automatic, objective diagnosis of temporal lobe abnormalities using automatically extracted images of the temporal lobes. (author)

  2. Determination of rifampicin in human plasma by high-performance liquid chromatography coupled with ultraviolet detection after automatized solid-liquid extraction.

    Science.gov (United States)

    Louveau, B; Fernandez, C; Zahr, N; Sauvageon-Martre, H; Maslanka, P; Faure, P; Mourah, S; Goldwirt, L

    2016-12-01

    A precise and accurate high-performance liquid chromatography (HPLC) method for the quantification of rifampicin in human plasma was developed and validated using ultraviolet detection after automated solid-phase extraction. The method was validated with respect to selectivity, extraction recovery, linearity, intra- and inter-day precision, accuracy, lower limit of quantification and stability. Chromatographic separation was performed on a Chromolith RP 8 column using a mixture of 0.05 M acetate buffer pH 5.7-acetonitrile (35:65, v/v) as mobile phase. The compounds were detected at a wavelength of 335 nm, with a lower limit of quantification of 0.05 mg/L in human plasma. Retention times for rifampicin and for 6,7-dimethyl-2,3-di(2-pyridyl)quinoxaline, used as the internal standard, were 3.77 and 4.81 min, respectively. This robust and accurate method was successfully applied routinely to therapeutic drug monitoring in patients treated with rifampicin. Copyright © 2016 John Wiley & Sons, Ltd.

  3. Automatic extraction of the cingulum bundle in diffusion tensor tract-specific analysis. Feasibility study in Parkinson's disease with and without dementia

    International Nuclear Information System (INIS)

    Ito, Kenji; Masutani, Yoshitaka; Suzuki, Yuichi; Ino, Kenji; Kunimatsu, Akira; Ohtomo, Kuni; Kamagata, Koji; Yasmin, Hasina; Aoki, Shigeki

    2013-01-01

    Tract-specific analysis (TSA) measures diffusion parameters along a specific fiber that has been extracted by fiber tracking using manual regions of interest (ROIs), but TSA is limited by its requirement for manual operation, poor reproducibility, and high time consumption. We aimed to develop a fully automated extraction method for the cingulum bundle (CB) and to apply the method to TSA in neurobehavioral disorders such as Parkinson's disease (PD). We introduce the voxel classification (VC) and auto diffusion tensor fiber-tracking (AFT) extraction methods. The VC method directly extracts the CB, skipping the fiber-tracking step, whereas the AFT method uses fiber tracking from automatically selected ROIs. We compared the results of VC and AFT to those obtained by manual diffusion tensor fiber tracking (MFT) performed by 3 operators. We quantified the Jaccard similarity index among the 3 methods in data from 20 subjects (10 normal controls [NC] and 10 patients with Parkinson's disease dementia [PDD]). We used all 3 extraction methods (VC, AFT, and MFT) to calculate the fractional anisotropy (FA) values of the anterior and posterior CB for 15 NC subjects, 15 with PD, and 15 with PDD. The Jaccard index between the results of AFT and MFT, 0.72, was similar to the inter-operator Jaccard index of MFT. However, the Jaccard indices between VC and MFT and between VC and AFT were lower. Consequently, the VC method discriminated among the 3 groups (NC, PD, and PDD), whereas the other methods discriminated only 2 of the groups. For TSA in Parkinson's disease, the VC method can be more useful than the AFT and MFT methods for extracting the CB. In addition, the results of the patient data analysis suggest that a reduction of FA in the posterior CB may represent a useful biological index for monitoring PD and PDD. (author)

  4. a Fuzzy Automatic CAR Detection Method Based on High Resolution Satellite Imagery and Geodesic Morphology

    Science.gov (United States)

    Zarrinpanjeh, N.; Dadrassjavan, F.

    2017-09-01

    Automatic car detection and recognition from aerial and satellite images is mostly practiced for the purpose of easy and fast traffic monitoring in cities and rural areas where direct approaches are proved to be costly and inefficient. Towards the goal of automatic car detection and in parallel with many other published solutions, in this paper, morphological operators and specifically Geodesic dilation are studied and applied on GeoEye-1 images to extract car items in accordance with available vector maps. The results of Geodesic dilation are then segmented and labeled to generate primitive car items to be introduced to a fuzzy decision making system, to be verified. The verification is performed inspecting major and minor axes of each region and the orientations of the cars with respect to the road direction. The proposed method is implemented and tested using GeoEye-1 pansharpen imagery. Generating the results it is observed that the proposed method is successful according to overall accuracy of 83%. It is also concluded that the results are sensitive to the quality of available vector map and to overcome the shortcomings of this method, it is recommended to consider spectral information in the process of hypothesis verification.

  5. Combining Recurrence Analysis and Automatic Movement Extraction from Video Recordings to Study Behavioral Coupling in Face-to-Face Parent-Child Interactions.

    Science.gov (United States)

    López Pérez, David; Leonardi, Giuseppe; Niedźwiecka, Alicja; Radkowska, Alicja; Rączaszek-Leonardi, Joanna; Tomalski, Przemysław

    2017-01-01

    The analysis of parent-child interactions is crucial for the understanding of early human development. Manual coding of interactions is a time-consuming task, which is a limitation in many projects. This becomes especially demanding if a frame-by-frame categorization of movement needs to be achieved. To overcome this, we present a computational approach for studying movement coupling in natural settings, which combines a state-of-the-art automatic tracker, Tracking-Learning-Detection (TLD), with a nonlinear time-series analysis, Cross-Recurrence Quantification Analysis (CRQA). We investigated the use of TLD to extract and automatically classify the movement of each partner from 21 video recordings of interactions in which 5.5-month-old infants and mothers engaged in free play in a laboratory setting. As a proof of concept, we focused on the face-to-face episodes in which the mother animated an object in front of the infant, in order to measure the coordination between the infant's head movement and the mother's hand movement. We also tested the feasibility of using such movement data to study behavioral coupling between partners with CRQA. We demonstrate that movement can be extracted automatically from standard-definition video recordings and used in subsequent CRQA to quantify the coupling between the movements of the parent and the infant. Finally, we assess the quality of this coupling using an extension of CRQA called anisotropic CRQA and show asymmetric dynamics between the movements of the parent and the infant. When combined, these methods allow automatic coding and classification of behaviors, which results in a more efficient way of analyzing movements than manual coding.

  6. Measuring the accuracy of automatic shoeprint recognition methods.

    Science.gov (United States)

    Luostarinen, Tapio; Lehmussola, Antti

    2014-11-01

    Shoeprints are an important source of information for criminal investigation, and an increasing number of automatic shoeprint recognition methods have therefore been proposed for detecting the corresponding shoe models. However, comprehensive comparisons among the methods have not previously been made. In this study, an extensive set of methods proposed in the literature was implemented, and their performance was studied under varying conditions. Three datasets of shoeprints of different quality were used, and the methods were also evaluated with partial and rotated prints. The results show clear differences between the algorithms: while the best performing method, based on local image descriptors and RANSAC, provides rather good results in most of the experiments, some methods are not at all robust to imperfections in the images. Finally, the results demonstrate that extensive research is still needed to improve the accuracy of automatic recognition of crime scene prints. © 2014 American Academy of Forensic Sciences.
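
    A hedged sketch of the best-performing family of methods named above, local image descriptors verified with RANSAC, using ORB as a stand-in descriptor; the paper's exact pipeline may differ, and the thresholds are assumptions.

      import cv2
      import numpy as np

      def same_shoe_model(query, reference, min_inliers=15):
          """query, reference: grayscale uint8 shoeprint images."""
          orb = cv2.ORB_create(nfeatures=2000)
          kq, dq = orb.detectAndCompute(query, None)
          kr, dr = orb.detectAndCompute(reference, None)
          matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
          matches = matcher.match(dq, dr)
          if len(matches) < 4:
              return False
          src = np.float32([kq[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([kr[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
          # a geometrically consistent match suggests the same shoe model
          return H is not None and int(inliers.sum()) >= min_inliers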

  7. Method for automatic control rod operation using rule-based control

    International Nuclear Information System (INIS)

    Kinoshita, Mitsuo; Yamada, Naoyuki; Kiguchi, Takashi

    1988-01-01

    An automatic control rod operation method using rule-based control is proposed. Its features are as follows: (1) a production system that recognizes plant events, determines control actions, and realizes fast inference (fast selection of a suitable production rule); and (2) use of the fuzzy control technique to determine quantitative control variables. The method's performance was evaluated by simulation tests of automatic control rod operation during a BWR plant start-up. The results were as follows: (1) performance, in terms of stabilization of the controlled variables and the time required for reactor start-up, was superior to that of other methods such as PID control and program control; (2) the processing time to select and interpret a suitable production rule, whether for event recognition or for determination of a control action, was short enough (below 1 s) for real-time control. The results show that the method is effective for automatic control rod operation. (author)
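
    An illustrative sketch (not the authors' system) of the two ideas described: a small production system that selects a rule from recognized plant events, plus a fuzzy rule that maps a setpoint error to a rod-speed command; all rules and numbers are invented.

      def select_rule(events, rules):
          """Return the action of the first rule whose condition set matches."""
          for condition, action in rules:
              if condition <= events:              # condition is a subset of events
                  return action
          return "hold_rods"

      def fuzzy_rod_speed(error, max_speed=1.0):
          """Triangular memberships: small error -> slow rod movement."""
          small = max(0.0, 1.0 - abs(error) / 0.5)   # degree 'error is small'
          large = 1.0 - small                        # degree 'error is large'
          return max_speed * (0.2 * small + 1.0 * large)  # weighted defuzzification

      rules = [({"power_rising", "period_short"}, "stop_withdrawal"),
               ({"power_low"}, "withdraw_rods")]
      print(select_rule({"power_low"}, rules), fuzzy_rod_speed(0.1))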

  8. Automatic segmentation of the bone and extraction of the bone-cartilage interface from magnetic resonance images of the knee

    International Nuclear Information System (INIS)

    Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K; Ourselin, Sebastien

    2007-01-01

    The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat suppressed spoiled gradient recall images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis.
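
    A minimal version of the overlap metric reported above, the Dice similarity coefficient for two binary segmentation masks (NumPy arrays of the same shape); names are illustrative.

      import numpy as np

      def dice(a, b):
          """Dice coefficient 2|A & B| / (|A| + |B|) for binary masks."""
          a, b = a.astype(bool), b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0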

  9. Automatic segmentation of the bone and extraction of the bone-cartilage interface from magnetic resonance images of the knee

    Energy Technology Data Exchange (ETDEWEB)

    Fripp, Jurgen [BioMedIA Lab, Autonomous Systems Laboratory, CSIRO ICT Centre, Level 20, 300 Adelaide Street, Brisbane, QLD 4001 (Australia); Crozier, Stuart [School of Information Technology and Electrical Engineering, University of Queensland, St Lucia, QLD 4072 (Australia); Warfield, Simon K [Computational Radiology Laboratory, Harvard Medical School, Children's Hospital Boston, 300 Longwood Avenue, Boston, MA 02115 (United States); Ourselin, Sebastien [BioMedIA Lab, Autonomous Systems Laboratory, CSIRO ICT Centre, Level 20, 300 Adelaide Street, Brisbane, QLD 4001 (Australia)

    2007-03-21

    The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat suppressed spoiled gradient recall images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis.

  10. Automatic extraction of Manhattan-World building masses from 3D laser range scans.

    Science.gov (United States)

    Vanegas, Carlos A; Aliaga, Daniel G; Benes, Bedrich

    2012-10-01

    We propose a novel approach for the reconstruction of urban structures from 3D point clouds under the assumption of Manhattan World (MW) building geometry, i.e., the predominance of three mutually orthogonal directions in the scene. Our approach works in two steps. First, the input points are classified according to the MW assumption into four local shape types: walls, edges, corners, and edge corners. The classified points are organized into a connected set of clusters from which a volume description is extracted. The MW assumption allows us to robustly identify the fundamental shape types, describe the volumes within the bounding box, and reconstruct visible and occluded parts of the sampled structure. We show results of our reconstruction applied to several synthetic and real-world 3D point datasets of various densities and from multiple viewpoints. Our method automatically reconstructs 3D building models from up to 10 million points in 10 to 60 seconds.

  11. Automatic segmentation of MRI head images by 3-D region growing method which utilizes edge information

    International Nuclear Information System (INIS)

    Jiang, Hao; Suzuki, Hidetomo; Toriwaki, Jun-ichiro

    1991-01-01

    This paper presents a 3-D segmentation method that automatically extracts soft tissue from multi-sliced MRI head images. MRI produces a sequence of two-dimensional (2-D) images which together contain three-dimensional (3-D) information about organs. To utilize such information we need effective algorithms to treat 3-D digital images and to extract the organs and tissues of interest. We developed a method to extract the brain from MRI images which uses a region growing procedure and integrates information on the uniformity of gray levels with information on the presence of edge segments in the local area around the pixel of interest. First, we generate a kernel region, a part of the brain tissue, by simple thresholding. Then we grow the region by means of a region growing algorithm controlled by the presence of 3-D edges to obtain the region of the brain. Our method is rather simple because it uses basic 3-D image processing techniques like spatial differences. It is robust to variation of gray levels inside a tissue because it also uses the edge information during region growing. Therefore, the method is flexible enough to be applicable to the segmentation of other images, including soft tissues with complicated shapes and fluctuating gray levels. (author)
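
    A sketch of edge-constrained region growing in the spirit described above, shown as a simplified 2-D version (the paper works on 3-D stacks); thresholds and names are illustrative assumptions.

      from collections import deque
      import numpy as np

      def grow_region(img, seed, tol=0.1, edge_thresh=0.3):
          """Grow from seed=(row, col) while staying off edges and near the seed value."""
          gy, gx = np.gradient(img.astype(float))
          edges = np.hypot(gx, gy) > edge_thresh          # crude edge map
          region = np.zeros(img.shape, dtype=bool)
          region[seed] = True
          queue = deque([seed])
          seed_val = float(img[seed])
          while queue:
              y, x = queue.popleft()
              for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                  if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                          and not region[ny, nx] and not edges[ny, nx]
                          and abs(float(img[ny, nx]) - seed_val) < tol):
                      region[ny, nx] = True
                      queue.append((ny, nx))
          return region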

  12. An Automatic Diagnosis Method of Facial Acne Vulgaris Based on Convolutional Neural Network.

    Science.gov (United States)

    Shen, Xiaolei; Zhang, Jiachi; Yan, Chenjun; Zhou, Hong

    2018-04-11

    In this paper, we present a new automatic diagnosis method for facial acne vulgaris based on convolutional neural networks (CNNs), designed to overcome a shortcoming of previous methods: their inability to classify enough types of acne vulgaris. The core of our method is to extract image features with CNNs and achieve classification with subsequent classifiers. A binary skin-and-non-skin classifier is used to detect the skin area, and a seven-class classifier is used to achieve the classification of facial acne vulgaris types and healthy skin. In the experiments, we compare the effectiveness of our own CNN with that of the VGG16 neural network pre-trained on the ImageNet dataset. We use an ROC curve to evaluate the performance of the binary classifier and a normalized confusion matrix to evaluate the performance of the seven-class classifier. The results of our experiments show that the pre-trained VGG16 neural network is effective in extracting features from facial acne vulgaris images, and that these features are very useful for the follow-up classifiers. Finally, we apply both classifiers, built on the pre-trained VGG16 neural network, to assist doctors in facial acne vulgaris diagnosis.
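
    A hedged sketch of the transfer-learning setup described: a VGG16 backbone pre-trained on ImageNet with a new seven-way head; the class count and the decision to freeze the convolutional features are assumptions based on the abstract.

      import torch.nn as nn
      from torchvision import models

      vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
      for p in vgg.features.parameters():
          p.requires_grad = False                  # freeze convolutional features
      vgg.classifier[6] = nn.Linear(4096, 7)       # assumed: acne types + healthy skin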

  13. Extraction Methods, Variability Encountered in

    NARCIS (Netherlands)

    Bodelier, P.L.E.; Nelson, K.E.

    2014-01-01

    Synonyms: bias in DNA extraction methods; variation in DNA extraction methods. Definition: the variability in extraction methods is defined as the differences in quality and quantity of DNA observed using various extraction protocols, leading to differences in the resulting microbial community composition.

  14. Doping control in Japan. An automated extraction procedure for the doping test.

    Science.gov (United States)

    Nakajima, T.; Matsumoto, T.

    1976-01-01

    Horse racing in Japan consists of two systems, the National (10 racecourses) and the Regional public racing (32 racecourses), with about 2,500 race meetings in total per year. Urine or saliva samples for dope testing are collected by officials from the first, second, and third finishers and transported to the laboratory in a frozen state. In 1975, 76,117 samples were analyzed by this laboratory. The laboratory provides the following four methods of analysis, which are variously combined on request: (1) detection of drugs extracted by chloroform from an alkalinized sample; (2) detection of camphor and its derivatives; (3) detection of barbiturates; (4) detection of ethanol. These methods consist of screening, mainly by thin layer chromatography, and confirmatory tests using ultraviolet spectrophotometry, gas chromatography, and mass spectrometry combined with gas chromatography. In the screening test for doping drugs, alkalinized samples are extracted with chloroform. To automate the extraction procedure, the authors devised a new automatic extractor, along with a means of pH adjustment of horse urine using a buffer solution and an efficient mechanism for evaporating the organic solvent. Research to automate the extraction procedure in method (1) above started in 1972, and the Automatic Extractor has been in routine use since last July; one hundred and twenty samples per hour are extracted automatically by three automatic extractors. The analytical data obtained with this apparatus are presented in this paper. PMID:1000163

  15. Comparison of mentha extracts obtained by different extraction methods

    Directory of Open Access Journals (Sweden)

    Milić Slavica

    2006-01-01

    Full Text Available The different methods of mentha extraction, such as steam distillation, extraction by methylene chloride (Soxhlet extraction), and supercritical fluid extraction (SFE) by carbon dioxide (CO2), were investigated. SFE by CO2 was performed at a pressure of 100 bar and a temperature of 40°C. The extraction yield, as well as the qualitative and quantitative composition of the obtained extracts, determined by the GC-MS method, were compared.

  16. An automatic device for sample insertion and extraction to/from reactor irradiation facilities

    International Nuclear Information System (INIS)

    Alloni, L.; Venturelli, A.; Meloni, S.

    1990-01-01

    At the previous European TRIGA Users Conference in Vienna, a paper was given describing a new handling tool for irradiated samples at the L.E.N.A. plant. This tool was the first part of an automatic device for the management of samples to be irradiated in the TRIGA Mark II reactor and subsequently extracted and stored. So far, sample insertion into and extraction from the irradiation facilities available on the reactor top (central thimble, rotary specimen rack, and channel F) has been carried out manually by reactor and health-physics operators using the ''traditional'' fishing pole provided by General Atomic, thus exposing reactor personnel to ''unjustified'' radiation doses. The present paper describes the design and operation of a new device, a ''robot''-type machine which, remotely operated, takes care of sample insertion into the different irradiation facilities, sample extraction after irradiation, and connection to the storage pits already described. The extraction of irradiated samples does not require the presence of reactor personnel on the reactor top, and radiation doses are therefore strongly reduced. All work, from design to construction, has been carried out by the personnel of the electronics group of the L.E.N.A. plant. (orig.)

  17. Automatic extraction analysis of the anatomical functional area for normal brain 18F-FDG PET imaging

    International Nuclear Information System (INIS)

    Guo Wanhua; Jiang Xufeng; Zhang Liying; Lu Zhongwei; Li Peiyong; Zhu Chengmo; Zhang Jiange; Pan Jiapu

    2003-01-01

    Using self-designed software for the automatic extraction of brain functional areas, we studied the grey-scale distribution of 18F-FDG imaging and the relationship between the 18F-FDG accumulation of each brain anatomic functional area and the injected dose, the blood glucose level, age, etc. According to the Talairach coordinate system, after rotation, translation, and plastic deformation, the 18F-FDG PET images were registered onto the Talairach coordinate atlas, and the ratio of the average grey value of each brain anatomic functional area to that of the whole brain was calculated. Furthermore, the relationship between the 18F-FDG accumulation of every brain anatomic functional area and the injected dose, the glucose level, and age was tested using a multiple stepwise regression model. After image registration, smoothing, and extraction, the main cerebral cortical areas of the 18F-FDG PET brain images were successfully localized and extracted, including the frontal lobe, parietal lobe, occipital lobe, temporal lobe, cerebellum, brain ventricles, thalamus, and hippocampus. The average ratio of every brain anatomic functional area to the internal reference was 1.01 ± 0.15. By multiple stepwise regression, with the exception of the thalamus and hippocampus, the grey scale of all brain functional areas was negatively correlated with age, but showed no correlation with blood glucose or dose in any area. For 18F-FDG PET imaging, the brain functional area extraction program could automatically delineate most of the cerebral cortical areas and successfully support brain blood-flow and metabolic studies, but extraction of more detailed areas needs further investigation.
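
    An illustrative sketch of the regional quantification step: the mean uptake of each atlas-defined functional area normalized by the whole-brain mean, given a label volume aligned to the PET image; the label IDs are assumptions.

      import numpy as np

      def regional_ratios(pet, atlas, labels):
          """pet: 3-D uptake volume; atlas: 3-D integer labels in the same space."""
          brain_mean = pet[atlas > 0].mean()
          return {name: pet[atlas == lab].mean() / brain_mean
                  for name, lab in labels.items()}

      labels = {"frontal": 1, "parietal": 2, "occipital": 3, "temporal": 4}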

  18. A FUZZY AUTOMATIC CAR DETECTION METHOD BASED ON HIGH RESOLUTION SATELLITE IMAGERY AND GEODESIC MORPHOLOGY

    Directory of Open Access Journals (Sweden)

    N. Zarrinpanjeh

    2017-09-01

    Full Text Available Automatic car detection and recognition from aerial and satellite images is mostly practiced for easy and fast traffic monitoring in cities and rural areas, where direct approaches have proved costly and inefficient. Toward the goal of automatic car detection, and in parallel with many other published solutions, in this paper morphological operators, specifically geodesic dilation, are studied and applied to GeoEye-1 images to extract car items in accordance with available vector maps. The results of geodesic dilation are then segmented and labeled to generate primitive car items, which are introduced to a fuzzy decision-making system for verification. The verification inspects the major and minor axes of each region and the orientation of the cars with respect to the road direction. The proposed method is implemented and tested using GeoEye-1 pansharpened imagery. The results show that the proposed method achieves an overall accuracy of 83%. It is also concluded that the results are sensitive to the quality of the available vector map; to overcome the shortcomings of the method, it is recommended to consider spectral information in the hypothesis verification process.

  19. Validation of the ICU-DaMa tool for automatically extracting variables for minimum dataset and quality indicators: The importance of data quality assessment.

    Science.gov (United States)

    Sirgo, Gonzalo; Esteban, Federico; Gómez, Josep; Moreno, Gerard; Rodríguez, Alejandro; Blanch, Lluis; Guardiola, Juan José; Gracia, Rafael; De Haro, Lluis; Bodí, María

    2018-04-01

    Big data analytics promises insights into healthcare processes and management, improving outcomes while reducing costs. However, data quality is a major challenge for reliable results. Business process discovery techniques and an associated data model were used to develop a data management tool, ICU-DaMa, for extracting variables essential for overseeing the quality of care in the intensive care unit (ICU). To determine the feasibility of using ICU-DaMa to automatically extract variables for the minimum dataset and ICU quality indicators from the clinical information system (CIS), the Wilcoxon signed-rank test and Fisher's exact test were used to compare the values extracted from the CIS with ICU-DaMa for 25 variables from all patients attended in a polyvalent ICU during a two-month period against the gold standard of values manually extracted by two trained physicians. Discrepancies with the gold standard were classified into plausibility, conformance, and completeness errors. Data from 149 patients were included. Although there were no significant differences between the automatic and manual methods, we detected differences in the values for five variables, including one plausibility error and two each of conformance and completeness errors. Plausibility: 1) Sex: ICU-DaMa incorrectly classified one male patient as female (an error generated by the hospital's Admissions Department). Conformance: 2) Reason for isolation: ICU-DaMa failed to detect a human error in which a professional misclassified a patient's isolation. 3) Brain death: ICU-DaMa failed to detect another human error in which a professional likely entered two mutually exclusive values related to the death of the patient (brain death and controlled donation after circulatory death). Completeness: 4) Destination at ICU discharge: ICU-DaMa incorrectly classified two patients because a professional failed to fill out the patient discharge form when the patients died. 5) Length of continuous renal replacement
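
    A sketch of the agreement tests named above using SciPy: a Wilcoxon signed-rank test for a paired continuous variable and Fisher's exact test for a 2x2 categorical agreement table; the data values are made up.

      from scipy.stats import wilcoxon, fisher_exact

      manual    = [3.1, 4.0, 2.5, 5.2, 3.8]   # physician-extracted values
      automatic = [3.0, 4.1, 2.5, 5.0, 3.9]   # tool-extracted values
      print(wilcoxon(manual, automatic))       # H0: no systematic paired difference

      table = [[70, 2],                        # agreement counts for a binary
               [3, 74]]                        # variable (e.g., sex)
      print(fisher_exact(table))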

  20. Automatic measurement for solid state track detectors

    International Nuclear Information System (INIS)

    Ogura, Koichi

    1982-01-01

    Since the tracks in solid state track detectors are measured with a microscope, observers are forced to do hard work that consumes time and labour, which results in poor statistical accuracy or personal error. Therefore, much research has been done aiming to simplify and automate track measurement. There are two categories of automated measurement: simple counting of the number of tracks, and measurements that also require geometrical elements such as the size of the tracks or their coordinates. The former is called automatic counting and the latter automatic analysis. The general approach in automatic counting is to estimate the total number of tracks over the whole detector area or in a microscope field of view; it is suitable when the track density is high. Methods that count tracks one by one include spark counting and the scanning microdensitometer. Automatic analysis includes video image analysis, in which high quality images obtained with a high resolution video camera are processed with a microcomputer, and the tracks are automatically recognized and measured by feature extraction. This method is described in detail. Among the many kinds of automatic measurements reported so far, the most frequently used are ''spark counting'' and ''video image analysis''. (Wakatsuki, Y.)

  1. A multiparametric automatic method to monitor long-term reproducibility in digital mammography: results from a regional screening programme.

    Science.gov (United States)

    Gennaro, G; Ballaminut, A; Contento, G

    2017-09-01

    This study illustrates a multiparametric automatic method for monitoring the long-term reproducibility of digital mammography systems, and its application on a large scale. Twenty-five digital mammography systems employed within a regional screening programme were controlled weekly using the same type of phantom, whose images were analysed by an automatic software tool. To assess system reproducibility levels, 15 image quality indices (IQIs) were extracted and compared with the corresponding indices previously determined by a baseline procedure. The coefficients of variation (COVs) of the IQIs were used to assess the overall variability. A total of 2553 phantom images were collected from the 25 digital mammography systems from March 2013 to December 2014. Most of the systems showed excellent image quality reproducibility over the surveillance interval, with mean variability below 5%. The variability of each IQI was below 5%, with the exception of one index associated with the smallest phantom objects (0.25 mm), which was below 10%. The method applied for the reproducibility tests (multi-detail phantoms, a cloud-based automatic software tool measuring multiple image quality indices, and statistical process control) proved effective, applicable on a large scale, and suitable for any type of digital mammography system. • Reproducibility of mammography image quality should be monitored by appropriate quality controls. • Use of automatic software tools allows image quality evaluation by multiple indices. • System reproducibility can be assessed by comparing current index values with baseline data. • Overall system reproducibility of modern digital mammography systems is excellent. • The method proposed and applied is cost-effective and easily scalable.
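
    An illustrative reproducibility check in the spirit of statistical process control: each weekly image-quality index (IQI) is compared against its baseline mean, flagging values outside a tolerance band; the tolerance and names are assumptions.

      import numpy as np

      def check_iqi(weekly, baseline_mean, baseline_std, k=3.0):
          """Return out-of-control week indices and the coefficient of variation (%)."""
          weekly = np.asarray(weekly, dtype=float)
          out = np.abs(weekly - baseline_mean) > k * baseline_std
          cov = weekly.std() / weekly.mean() * 100
          return np.flatnonzero(out), cov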

  2. Method of extracting significant trouble information of nuclear power plants using probabilistic analysis technique

    International Nuclear Information System (INIS)

    Shimada, Yoshio; Miyazaki, Takamasa

    2005-01-01

    In order to analyze and evaluate large amounts of trouble information from overseas nuclear power plants, it is necessary to select the information that is significant in terms of both safety and reliability. In this research, we developed a simple and efficient method for classifying the degrees of importance of components in terms of safety and reliability, focusing on the root-cause components appearing in the information. For safety, the reactor core damage frequency (CDF), used in the probabilistic analysis of a reactor, was adopted. For reliability, the automatic plant trip probability (APTP), used in the probabilistic analysis of automatic reactor trips, was adopted. These two aspects were reflected in criteria for classifying the degrees of importance of components. By applying these criteria, a simple method of extracting significant trouble information from overseas nuclear power plants was developed. (author)

  3. Text feature extraction based on deep learning: a review.

    Science.gov (United States)

    Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan

    2017-01-01

    Selection of text features is a basic and important task for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features, and hand-designing an effective feature is a lengthy process; deep learning, by contrast, enables new effective feature representations to be acquired from training data for new applications. As a new feature extraction method, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data instead of adopting handcrafted features, which mainly depend on the prior knowledge of designers and can hardly take advantage of big data. Deep learning can automatically learn feature representations from big data, with models involving millions of parameters. This review first outlines the common methods used in text feature extraction, then expands on frequently used deep learning methods for text feature extraction and their applications, and finally forecasts the application of deep learning in feature extraction.

  4. A Method for Extracting Suspected Parotid Lesions in CT Images using Feature-based Segmentation and Active Contours based on Stationary Wavelet Transform

    Science.gov (United States)

    Wu, T. Y.; Lin, S. F.

    2013-10-01

    Automatic extraction of suspected lesions is an important application in computer-aided diagnosis (CAD). In this paper, we propose a method to automatically extract suspected parotid regions for clinical evaluation in head and neck CT images. Suspected lesion tissues in low-contrast regions can be localized with feature-based segmentation (FBS) using local texture features, and can be delineated accurately by modified active contour models (ACM). First, the stationary wavelet transform (SWT) is applied; the derived wavelet coefficients are used to compute the local features for FBS and to generate enhanced energy maps for the ACM computation. Geometric shape features (GSFs) are proposed to analyze each soft tissue region segmented by FBS; the regions whose GSFs are most similar to those of lesions are extracted, and this information is also used as the initial condition for the fine delineation step. Consequently, suspected lesions can be automatically localized and accurately delineated to aid clinical diagnosis. The performance of the proposed method is evaluated by comparison with results outlined by clinical experts. Experiments on 20 pathological CT datasets show that the true-positive (TP) rate for recognizing parotid lesions is about 94%, and the dimensional accuracy of the delineation results exceeds 93%.
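
    A hedged sketch of the SWT step: one level of the stationary wavelet transform, with the detail-coefficient energies stacked as per-pixel texture features for segmentation; the wavelet choice is an assumption, and PyWavelets' swt2 requires even image dimensions.

      import numpy as np
      import pywt

      def swt_texture_features(image):
          """Return an (H, W, 3) array of horizontal/vertical/diagonal energies."""
          (cA, (cH, cV, cD)), = pywt.swt2(image.astype(float), 'haar', level=1)
          return np.stack([cH**2, cV**2, cD**2], axis=-1)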

  5. Automatic Annotation Method on Learners' Opinions in Case Method Discussion

    Science.gov (United States)

    Samejima, Masaki; Hisakane, Daichi; Komoda, Norihisa

    2015-01-01

    Purpose: The purpose of this paper is to automatically annotate learners' opinions with an attribute (a problem, a solution, or no annotation) in order to support the learners' discussion without a facilitator. The case method aims at discussing problems and solutions in a target case; however, the learners miss discussing some of the problems and solutions.…

  6. Automatic method for the detection of artifacts for the control of tomographic uniformity in SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Reynes Llompart, G.; Puchal, R.

    2013-07-01

    The objective of this work is to find an automatic method for the detection and classification of artifacts produced in tomographic uniformity studies, extracting the characteristics necessary to apply a classification algorithm using pattern recognition techniques. The method has been trained and validated with synthetic images and tested with real images. (Author)

  7. Reliable clarity automatic-evaluation method for optical remote sensing images

    Science.gov (United States)

    Qin, Bangyong; Shang, Ren; Li, Shengyang; Hei, Baoqin; Liu, Zhiwen

    2015-10-01

    Image clarity, which reflects the degree of sharpness at the edges of objects in an image, is an important quality index for optical remote sensing images, and much work has been done on its estimation. At present, common clarity-estimation methods for digital images include frequency-domain function methods, statistical parametric methods, gradient function methods, and edge acutance methods. Frequency-domain function methods are accurate but computationally complicated and cannot be carried out automatically. Statistical parametric methods and gradient function methods are both sensitive to image clarity, but their results are easily affected by the complexity of the image. The edge acutance method is an effective approach to clarity estimation, but it requires the edges to be picked out manually. Owing to these limits in accuracy, consistency, or automation, the existing methods are not well suited to quality evaluation of optical remote sensing images. In this article, a new clarity-evaluation method based on the principle of the edge acutance algorithm is proposed. In the new method, an edge detection algorithm and a gradient search algorithm are adopted to automatically find the object edges in the image, and the calculation of edge sharpness has been improved. The new method has been tested on several groups of optical remote sensing images. Compared with the existing automatic evaluation methods, the new method performs better in both accuracy and consistency. Thus, the new method is an effective clarity-evaluation method for optical remote sensing images.
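
    A simple gradient-based sharpness measure (Tenengrad), a common stand-in for the edge-acutance idea discussed above: sharper edges yield stronger gradients. This is an illustration, not the authors' exact algorithm.

      import numpy as np
      from scipy.ndimage import sobel

      def tenengrad_sharpness(image):
          img = image.astype(float)
          gx, gy = sobel(img, axis=1), sobel(img, axis=0)
          return float(np.mean(gx**2 + gy**2))   # higher means sharper edges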

  8. Comparative analysis of methods for extracting vessel network on breast MRI images

    Science.gov (United States)

    Gaizer, Bence T.; Vassiou, Katerina G.; Lavdas, Eleftherios; Arvanitis, Dimitrios L.; Fezoulidis, Ioannis V.; Glotsos, Dimitris T.

    2017-11-01

    Digital processing of MRI images aims to provide an automated diagnostic evaluation of regular health screenings. Cancerous lesions have been shown to cause an alteration in the vessel structure of the diseased organ, and several methods are currently used to extract the vessel network in order to quantify its properties. In this work, MRI images (Signa HDx 3.0T, GE Healthcare, courtesy of the University Hospital of Larissa) of 30 female breasts were subjected to three different vessel extraction algorithms to determine the location of their vascular networks. The first method attempts to build a graph over known points of the vessel network; the second determines the direction and diameter of the vessels at these points; the third is a seed-growing algorithm that spreads the selection to neighbors of known vessel pixels. The possibilities offered by the different methods were analyzed, and quantitative measurements were performed. The data provided by these measurements showed no clear correlation with the presence or malignancy of tumors, based on the radiological diagnoses of skilled physicians.

  9. Object-oriented Method of Hierarchical Urban Building Extraction from High-resolution Remote-Sensing Imagery

    Directory of Open Access Journals (Sweden)

    TAO Chao

    2016-02-01

    Full Text Available An automatic urban building extraction method for high-resolution remote-sensing imagery, which combines building segmentation based on neighbor total variations with object-oriented analysis, is presented in this paper. To address the differing extraction complexity of the various buildings in the segmented image, a hierarchical building extraction strategy with multi-feature fusion is adopted. First, we extract rectangular buildings that remain intact after segmentation through shape analysis. Second, to ensure that each candidate building target is independent, a multidirectional morphological road-filtering algorithm is designed to separate buildings from neighboring roads with a similar spectrum. Finally, we take the extracted buildings and the excluded non-buildings as samples to establish probability models, and a Bayesian discriminating classifier is used to judge the remaining candidate building objects, yielding the final extraction result. The experimental results show that the approach is able to detect buildings with different structural and spectral features in the same image. The performance evaluation also supports the robustness and precision of the approach.

  10. A robust approach to extract biomedical events from literature.

    Science.gov (United States)

    Bui, Quoc-Chinh; Sloot, Peter M A

    2012-10-15

    The abundance of biomedical literature has attracted significant interest in novel methods to automatically extract biomedical relations from the literature. Until recently, most research was focused on extracting binary relations such as protein-protein interactions and drug-disease relations. However, these binary relations cannot fully represent the original biomedical data. Therefore, there is a need for methods that can extract fine-grained and complex relations known as biomedical events. In this article we propose a novel method to extract biomedical events from text. Our method consists of two phases. In the first phase, training data are mapped into structured representations. Based on that, templates are used to extract rules automatically. In the second phase, extraction methods are developed to process the obtained rules. When evaluated against the Genia event extraction abstract and full-text test datasets (Task 1), we obtain results with F-scores of 52.34 and 53.34, respectively, which are comparable to the state-of-the-art systems. Furthermore, our system achieves superior performance in terms of computational efficiency. Our source code is available for academic use at http://dl.dropbox.com/u/10256952/BioEvent.zip.

  11. Lung region extraction based on the model information and the inversed MIP method by using chest CT images

    International Nuclear Information System (INIS)

    Tomita, Toshihiro; Miguchi, Ryosuke; Okumura, Toshiaki; Yamamoto, Shinji; Matsumoto, Mitsuomi; Tateno, Yukio; Iinuma, Takeshi; Matsumoto, Toru.

    1997-01-01

    We developed a lung region extraction method based on model information and the inversed MIP method for the Lung Cancer Screening CT (LSCT). The original model is composed of typical 3-D lung contour lines, a body axis, an apical point, and a convex hull. First, the body axis, the apical point, and the convex hull are automatically extracted from the input image. Next, the model is properly transformed to fit the input image by an affine transformation. Using the same affine transformation coefficients, the typical lung contour lines are also transferred, yielding rough contour lines for the input image. Experimental results for 68 samples showed this method to be quite promising. (author)

  12. Automatic segmentation of mandible in panoramic x-ray

    OpenAIRE

    Abdi, Amir Hossein; Kasaei, Shohreh; Mehdizadeh, Mojdeh

    2015-01-01

    As the panoramic x-ray is the most common extraoral radiography in dentistry, segmentation of its anatomical structures facilitates diagnosis and registration of dental records. This study presents a fast and accurate method for automatic segmentation of mandible in panoramic x-rays. In the proposed four-step algorithm, a superior border is extracted through horizontal integral projections. A modified Canny edge detector accompanied by morphological operators extracts the inferior border of t...

  13. Applying a weighted random forests method to extract karst sinkholes from LiDAR data

    Science.gov (United States)

    Zhu, Junfeng; Pierskalla, William P.

    2016-02-01

    Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve the locating and delineating of sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through visual inspection and field verification. Based on the geometry of the depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% on the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, resulted in moderate success, with an average accuracy of 73.96%. This study suggests that an automatic sinkhole extraction procedure like the random forest classifier can significantly reduce time and labor costs and makes it more tractable to map sinkholes from LiDAR data over large areas. However, the random forests method cannot totally replace manual procedures such as visual inspection and field verification.
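
    A hedged sketch of a class-weighted random forest for imbalanced sinkhole/non-sinkhole depression data, using scikit-learn; the 11 predictor variables are stand-ins for the geometric and contextual parameters described above.

      from sklearn.ensemble import RandomForestClassifier

      clf = RandomForestClassifier(
          n_estimators=500,
          class_weight='balanced',   # up-weight the rare sinkhole class
          random_state=0,
      )
      # X: (n_depressions, 11) feature matrix; y: 1 = sinkhole, 0 = other depression
      # clf.fit(X, y); predictions = clf.predict(X_new)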

  14. Automatic methods for processing track-detector data at the PAVICOM facility

    International Nuclear Information System (INIS)

    Aleksandrov, A.B.; Goncharova, L.A.; Polukhina, N.G.; Fejnberg, E.L.; Davydov, D.A.; Publichenko, P.A.; Roganova, T.M.

    2007-01-01

    New automatic methods essentially simplify and speed up the treatment of track-detector data. They allow big data files to be handled and appreciably improve the statistics; this opens the way to new experiments using large-volume targets and emulsion and solid-state tracking detectors of large area. The problem of training competent physicists able to work with modern automatic equipment is therefore very relevant. About ten Moscow students working at the PAVICOM facility in LPI master the new methods every year, whereas most students working in high-energy physics learn only archaic manual methods of handling track-detector data. In 2005, on the basis of the PAVICOM facility and the MSU physics curriculum, a new educational exercise was prepared for determining the energy of neutrons passing through nuclear emulsion; it lets students acquire basic skills in track-detector data handling using an automatic facility and can be included in the training of students of any physics faculty. Specialists who master automatic data-handling methods through the simple and clear example of tracking detectors will be able to use their knowledge in various areas of science and technology. The organization of upper-division courses is a new, additional aspect of using the PAVICOM facility described in an earlier paper [4].

  15. Extractive Summarisation of Medical Documents

    Directory of Open Access Journals (Sweden)

    Abeed Sarker

    2012-09-01

    Full Text Available Background: Evidence Based Medicine (EBM) practice requires practitioners to extract evidence from published medical research when answering clinical queries. Due to the time-consuming nature of this practice, there is a strong motivation for systems that can automatically summarise medical documents and help practitioners find relevant information. Aim: The aim of this work is to propose an automatic query-focused, extractive summarisation approach that selects informative sentences from medical documents. Method: We use a corpus that is specifically designed for summarisation in the EBM domain. We use approximately half the corpus to derive important statistics associated with the best possible extractive summaries, taking into account factors such as sentence position, length, sentence content, and the type of the query posed. Using the statistics from the first set, we evaluate our approach on a separate set. The quality of the generated summaries is evaluated automatically using ROUGE, a popular tool for evaluating automatic summaries. Results: Our summarisation approach outperforms all baselines (best baseline score: 0.1594; our score: 0.1653). Further improvements are achieved when query types are taken into account. Conclusion: The quality of extractive summarisation in the medical domain can be significantly improved by incorporating domain knowledge and statistics derived from a specialised corpus. Such techniques can therefore be applied for content selection in end-to-end summarisation systems.

  16. Chemical name extraction based on automatic training data generation and rich feature set.

    Science.gov (United States)

    Yan, Su; Spangler, W Scott; Chen, Ying

    2013-01-01

    The automation of extracting chemical names from text has significant value for biomedical and life science research. A major barrier to this task is the difficulty of getting sizable, good-quality data to train a reliable entity extraction model. Another difficulty is the selection of informative features of chemical names, since comprehensive domain knowledge of chemistry nomenclature is required. Leveraging random text generation techniques, we explore the idea of automatically creating training sets for the task of chemical name extraction. Assuming the availability of an incomplete list of chemical names, called a dictionary, we are able to generate well-controlled, random, yet realistic chemical-like training documents. We statistically analyze the construction of chemical names based on the incomplete dictionary and propose a series of new features, without relying on any domain knowledge. Compared to state-of-the-art models learned from manually labeled data and domain knowledge, our solution shows better or comparable results in annotating real-world data with less human effort. Moreover, we report an interesting observation about the language of chemical names: both their structural and semantic components follow a Zipfian distribution, resembling many natural languages.

  17. Chemometric strategy for automatic chromatographic peak detection and background drift correction in chromatographic data.

    Science.gov (United States)

    Yu, Yong-Jie; Xia, Qiao-Ling; Wang, Sheng; Wang, Bing; Xie, Fu-Wei; Zhang, Xiao-Bing; Ma, Yun-Ming; Wu, Hai-Long

    2014-09-12

    Peak detection and background drift correction (BDC) are the key stages in using chemometric methods to analyze chromatographic fingerprints of complex samples. This study developed a novel chemometric strategy for simultaneous automatic chromatographic peak detection and BDC. A robust statistical method was used for intelligent estimation of the instrumental noise level, coupled with the first-order derivative of the chromatographic signal, to automatically extract chromatographic peaks from the data. A local curve-fitting strategy was then employed for BDC. Simulated and real liquid chromatographic data were designed with various kinds of background drift and degrees of overlap between chromatographic peaks to verify the performance of the proposed strategy. The underlying chromatographic peaks can be automatically detected and reasonably integrated by this strategy, and chromatograms with BDC can be precisely obtained. The proposed method was used to analyze a complex gas chromatography dataset that monitored quality changes in plant extracts during storage. Copyright © 2014 Elsevier B.V. All rights reserved.
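
    An illustrative sketch of the two stages described: noise-aware peak detection on a 1-D chromatogram, then baseline (background drift) estimation from the peak-free regions by polynomial fitting; all parameters are assumptions.

      import numpy as np
      from scipy.signal import find_peaks

      def detect_and_correct(signal, poly_order=3):
          noise = 1.4826 * np.median(np.abs(np.diff(signal)))  # robust noise estimate
          peaks, props = find_peaks(signal, prominence=5 * noise, width=3)
          mask = np.ones(signal.size, dtype=bool)              # peak-free mask
          for left, right in zip(props["left_bases"], props["right_bases"]):
              mask[left:right + 1] = False
          x = np.arange(signal.size)
          baseline = np.polyval(np.polyfit(x[mask], signal[mask], poly_order), x)
          return peaks, signal - baseline                      # indices, corrected trace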

  18. Automatically identifying gene/protein terms in MEDLINE abstracts.

    Science.gov (United States)

    Yu, Hong; Hatzivassiloglou, Vasileios; Rzhetsky, Andrey; Wilbur, W John

    2002-01-01

    Natural language processing (NLP) techniques are used to extract information automatically from computer-readable literature. In biology, the identification of terms corresponding to biological substances (e.g., genes and proteins) is a necessary step that precedes the application of other NLP systems that extract biological information (e.g., protein-protein interactions, gene regulation events, and biochemical pathways). We have developed GPmarkup (for "gene/protein-full name mark up"), a software system that automatically identifies gene/protein terms (i.e., symbols or full names) in MEDLINE abstracts. As part of the markup process, we also automatically generated a knowledge source of paired gene/protein symbols and full names (e.g., LARD for lymphocyte associated receptor of death) from MEDLINE. We found that many of the pairs in our knowledge source do not appear in the current GenBank database; therefore our methods may also be used for automatic lexicon generation. GPmarkup has 73% recall and 93% precision in identifying and marking up gene/protein terms in MEDLINE abstracts. A random sample of gene/protein symbols and full names and a sample set of marked up abstracts can be viewed at http://www.cpmc.columbia.edu/homepages/yuh9001/GPmarkup/. Contact: hy52@columbia.edu. Voice: 212-939-7028; fax: 212-666-0140.

  19. Assessment of automatic segmentation of teeth using a watershed-based method.

    Science.gov (United States)

    Galibourg, Antoine; Dumoncel, Jean; Telmon, Norbert; Calvet, Adèle; Michetti, Jérôme; Maret, Delphine

    2018-01-01

    Automatic 3D segmentation (AS) of teeth is being actively developed in research and clinical fields. Here, we assess the effect of automatic segmentation using a watershed-based method on the accuracy and reproducibility of 3D reconstructions in volumetric measurements by comparing it with a semi-automatic segmentation (SAS) method that has already been validated. The study sample comprised 52 teeth, scanned with micro-CT (41 µm voxel size) and CBCT (76, 200, and 300 µm voxel sizes). Each tooth was segmented by AS based on a watershed method and by SAS. For all surface reconstructions, volumetric measurements were obtained and analysed statistically. Surfaces were then aligned using the SAS surfaces as the reference. The topography of the geometric discrepancies was displayed using a colour map, allowing the maximum differences to be located. AS reconstructions showed tooth volumes similar to SAS for the 41 µm voxel size. A difference in volumes was observed that increased with voxel size for the CBCT data. The maximum differences were found mainly at the cervical margins and incisal edges, but the general form was preserved. Micro-CT, a modality used in dental research, provides data that can be segmented automatically, which is time-saving. AS with CBCT data enables the general form of the region of interest to be displayed. However, our AS method can still be used for metrically reliable measurements in clinical dentistry if some manual refinements are applied.
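
    A generic marker-based watershed sketch using scikit-image, illustrating the family of methods assessed above; it is not the authors' exact pipeline, and the marker spacing is an assumption.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.feature import peak_local_max
      from skimage.segmentation import watershed

      def watershed_split(binary_mask):
          """Split touching objects in a binary mask via distance-transform markers."""
          distance = ndi.distance_transform_edt(binary_mask)
          coords = peak_local_max(distance, labels=binary_mask, min_distance=10)
          markers = np.zeros(distance.shape, dtype=int)
          markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
          return watershed(-distance, markers, mask=binary_mask)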

  20. The Efficiency of Random Forest Method for Shoreline Extraction from LANDSAT-8 and GOKTURK-2 Imageries

    Science.gov (United States)

    Bayram, B.; Erdem, F.; Akpinar, B.; Ince, A. K.; Bozkurt, S.; Catal Reis, H.; Seker, D. Z.

    2017-11-01

    Coastal monitoring plays a vital role in environmental planning and hazard management. Since shorelines are fundamental data for environmental management, disaster management, coastal erosion studies, modelling of sediment transport, and coastal morphodynamics, various techniques have been developed to extract them. Random Forest, the technique used in this study for shoreline extraction, is a machine learning method based on decision trees: decision trees analyse classes of training data and create rules for classification. In this study, the Terkos region was chosen for the proposed method within the scope of TUBITAK Project No. 115Y718, "Integration of Unmanned Aerial Vehicles for Sustainable Coastal Zone Monitoring Model - Three-Dimensional Automatic Coastline Extraction and Analysis: Istanbul-Terkos Example". The Random Forest algorithm was implemented to extract the shoreline of the Black Sea near Lake Terkos from LANDSAT-8 and GOKTURK-2 satellite imagery taken in 2015. The MATLAB environment was used for classification. To obtain land and water-body classes, the Random Forest method was applied to the NIR bands of the LANDSAT-8 (5th band) and GOKTURK-2 (4th band) imagery. Each image was digitized manually and shorelines obtained for accuracy assessment. According to the accuracy assessment results, the Random Forest method is efficient for shoreline extraction from both medium and high resolution images.

  1. THE EFFICIENCY OF RANDOM FOREST METHOD FOR SHORELINE EXTRACTION FROM LANDSAT-8 AND GOKTURK-2 IMAGERIES

    Directory of Open Access Journals (Sweden)

    B. Bayram

    2017-11-01

    Full Text Available Coastal monitoring plays a vital role in environmental planning and hazard management. Since shorelines are fundamental data for environmental management, disaster management, coastal erosion studies, modelling of sediment transport, and coastal morphodynamics, various techniques have been developed to extract them. Random Forest, the technique used in this study for shoreline extraction, is a machine learning method based on decision trees: decision trees analyse classes of training data and create rules for classification. In this study, the Terkos region was chosen for the proposed method within the scope of TUBITAK Project No. 115Y718, "Integration of Unmanned Aerial Vehicles for Sustainable Coastal Zone Monitoring Model - Three-Dimensional Automatic Coastline Extraction and Analysis: Istanbul-Terkos Example". The Random Forest algorithm was implemented to extract the shoreline of the Black Sea near Lake Terkos from LANDSAT-8 and GOKTURK-2 satellite imagery taken in 2015. The MATLAB environment was used for classification. To obtain land and water-body classes, the Random Forest method was applied to the NIR bands of the LANDSAT-8 (5th band) and GOKTURK-2 (4th band) imagery. Each image was digitized manually and shorelines obtained for accuracy assessment. According to the accuracy assessment results, the Random Forest method is efficient for shoreline extraction from both medium and high resolution images.

  2. Derivation of groundwater flow-paths based on semi-automatic extraction of lineaments from remote sensing data

    Directory of Open Access Journals (Sweden)

    U. Mallast

    2011-08-01

    Full Text Available In this paper we present a semi-automatic method to infer groundwater flow-paths based on the extraction of lineaments from digital elevation models. This method is especially suitable in remote and inaccessible areas where in-situ data are scarce. The combined method of linear filtering and object-based classification provides a lineament map with a high degree of accuracy. Subsequently, lineaments are differentiated into geological and morphological lineaments using auxiliary information and finally evaluated in terms of hydro-geological significance. Using the example of the western catchment of the Dead Sea (Israel/Palestine), the orientation and location of the differentiated lineaments are compared to the characteristics of known structural features. We demonstrate that a strong correlation between lineaments and structural features exists. Euclidean distances between lineaments and wells provide an assessment criterion to evaluate the hydraulic significance of the detected lineaments. Based on this analysis, we suggest that the statistical analysis of lineaments allows a delineation of flow-paths and thus provides significant information on groundwater movement. To validate the flow-paths, we compare them to existing results of groundwater models that are based on well data.

  3. Unsupervised method for automatic construction of a disease dictionary from a large free text collection.

    Science.gov (United States)

    Xu, Rong; Supekar, Kaustubh; Morgan, Alex; Das, Amar; Garber, Alan

    2008-11-06

    Concept-specific lexicons (e.g. diseases, drugs, anatomy) are a critical source of background knowledge for many medical language-processing systems. However, the rapid pace of biomedical research and the lack of constraints on usage ensure that such dictionaries are incomplete. Focusing on disease terminology, we have developed an automated, unsupervised, iterative pattern-learning approach for constructing a comprehensive medical dictionary of disease terms from randomized clinical trial (RCT) abstracts, and we compared different ranking methods for automatically extracting contextual patterns and concept terms. When used to identify disease concepts in 100 randomly chosen, manually annotated clinical abstracts, our disease dictionary shows a significant performance improvement (F1 increased by 35-88%) over available, manually created disease terminologies.

  4. Automatic crack detection method for loaded coal in vibration failure process.

    Directory of Open Access Journals (Sweden)

    Chengwu Li

    Full Text Available In the coal mining process, the destabilization of a loaded coal mass is a prerequisite for coal and rock dynamic disasters, and surface cracks of the coal and rock mass are important indicators reflecting the current state of the coal body. The detection of surface cracks in the coal body plays an important role in coal mine safety monitoring. In this paper, a method for detecting the surface cracks of loaded coal during a vibration-induced failure process is proposed, based on the characteristics of the surface cracks of coal and a support vector machine (SVM). A large number of crack images were obtained by establishing a vibration-induced failure test system with an industrial camera. Histogram equalization and a hysteresis threshold algorithm were used to reduce the noise and emphasize the cracks; then, 600 images and regions, including cracks and non-cracks, were manually labelled. In the crack feature extraction stage, eight features of the cracks are extracted to distinguish cracks from other objects. Finally, a crack identification model with an accuracy of over 95% was trained by inputting the labelled sample images into the SVM classifier. The experimental results show that the proposed algorithm has a higher accuracy than conventional algorithms and can effectively identify cracks on the surface of the coal and rock mass automatically.

  5. A Method for the Seamlines Network Automatic Selection Based on Building Vector

    Science.gov (United States)

    Li, P.; Dong, Y.; Hu, Y.; Li, X.; Tan, P.

    2018-04-01

    In order to improve the efficiency of large-scale orthophoto production for a city, this paper presents a method for the automatic selection of a seamlines network in large-scale orthophotos based on building vectors. Firstly, a simple model of each building is built by combining the building's vector, its height and a DEM, and the imaging area of the building on a single DOM is obtained. Then, the initial Voronoi network of the measurement area is automatically generated based on the bottom (nadir) positions of all images. Finally, the final seamlines network is obtained by automatically optimizing all nodes and seamlines in the network based on the imaging areas of the buildings. The experimental results show that the proposed method can not only quickly route seamlines around buildings, but also retain the Voronoi network's property of minimizing projection distortion, which effectively solves the problem of automatic seamline network selection in image mosaicking.
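
    A minimal SciPy sketch of the initial Voronoi step, with made-up image centre positions; the building-aware optimization of nodes and seamlines is not shown.

```python
import numpy as np
from scipy.spatial import Voronoi

# Illustrative image footprint centres (e.g. nadir points of the DOMs).
centres = np.array([[0, 0], [10, 1], [5, 9], [12, 10], [2, 12], [8, 5]],
                   dtype=float)
vor = Voronoi(centres)

# Each finite Voronoi ridge is an initial seamline candidate between the two
# images whose centres it separates; the full method then moves these
# segments off the imaging areas of buildings.
for (p, q), ridge in zip(vor.ridge_points, vor.ridge_vertices):
    if -1 not in ridge:  # skip infinite ridges at the block border
        a, b = vor.vertices[ridge]
        print("images %d-%d: seamline from %s to %s" % (p, q, a, b))
```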

  6. Automatic Detection of Microaneurysms in Color Fundus Images using a Local Radon Transform Method

    Directory of Open Access Journals (Sweden)

    Hamid Reza Pourreza

    2009-03-01

    Full Text Available Introduction: Diabetic retinopathy (DR) is one of the most serious and most frequent eye diseases in the world and the most common cause of blindness in adults between 20 and 60 years of age. Following 15 years of diabetes, about 2% of diabetic patients are blind and 10% suffer from vision impairment due to DR complications. This paper addresses the automatic detection of microaneurysms (MA) in color fundus images, which plays a key role in computer-assisted early diagnosis of diabetic retinopathy. Materials and Methods: The algorithm can be divided into three main steps. The purpose of the first step, or pre-processing, is background normalization and contrast enhancement of the images. The second step aims to detect candidates, i.e. all patterns possibly corresponding to MA, which is achieved using a local Radon transform. Then, features are extracted, which are used in the last step to automatically classify the candidates into real MA or other objects using the SVM method. A database of 100 annotated images was used to test the algorithm, and the algorithm was compared to manually obtained gradings of these images. Results: The sensitivity of diagnosis for DR was 100%, with a specificity of 90%, and the sensitivity of precise MA localization was 97%, at an average number of 5 false positives per image. Discussion and Conclusion: The sensitivity and specificity of this algorithm make it one of the best methods in this field. Using the local Radon transform in this algorithm reduces the sensitivity to noise in MA detection in retinal image analysis.
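
    The candidate-detection idea can be illustrated with scikit-image's Radon transform: a hypothetical max/mean score over projection angles separates line-like vessels from round MA candidates. The window size, angle step and the score itself are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np
from skimage.transform import radon

def radon_line_response(patch):
    """Max/mean ratio of the local Radon transform of a square patch.

    Elongated structures (vessels) give a sharp peak at one angle, whereas
    round candidates such as microaneurysms respond almost isotropically,
    so a low ratio hints at a true MA candidate.
    """
    theta = np.arange(0.0, 180.0, 5.0)
    sinogram = radon(patch.astype(float), theta=theta, circle=True)
    profile = sinogram.max(axis=0)     # peak response per projection angle
    return profile.max() / profile.mean()

blob = np.zeros((31, 31)); blob[13:18, 13:18] = 1.0    # round candidate
vessel = np.zeros((31, 31)); vessel[15, :] = 1.0       # linear structure
print("blob  :", round(radon_line_response(blob), 2))
print("vessel:", round(radon_line_response(vessel), 2))
```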

  7. Automatic Registration Method for Fusion of ZY-1-02C Satellite Images

    Directory of Open Access Journals (Sweden)

    Qi Chen

    2013-12-01

    Full Text Available Automatic image registration (AIR) has been widely studied in the fields of medical imaging, computer vision, and remote sensing. In various cases, such as image fusion, high registration accuracy should be achieved to meet application requirements. For satellite images, the large image size and unstable positioning accuracy resulting from the limited manufacturing technology of charge-coupled devices, focal plane distortion, and unrecorded spacecraft jitter lead to difficulty in obtaining agreeable corresponding points for registration using only area-based matching or feature-based matching. In this situation, a coarse-to-fine matching strategy integrating the two types of algorithms is proven feasible and effective. In this paper, an AIR method for application to the fusion of ZY-1-02C satellite imagery is proposed. First, the images are geometrically corrected. Coarse matching, based on the scale-invariant feature transform (SIFT), is performed for the subsampled corrected images, and a rough global estimation is made with the matching results. Harris feature points are then extracted, and the coordinates of the corresponding points are calculated according to the global estimation results. Precise matching is conducted, based on normalized cross-correlation and least squares matching. As complex image distortion cannot be precisely estimated, a local estimation using the structure of a triangulated irregular network (TIN) is applied to eliminate the false matches. Finally, image resampling is conducted, based on local affine transformations, to achieve high-precision registration. Experiments with ZY-1-02C datasets demonstrate that the accuracy of the proposed method meets the requirements of the fusion application, and its efficiency is also suitable for the commercial operation of the automatic satellite data processing system.
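
    A compressed OpenCV sketch of the coarse-to-fine strategy (SIFT plus RANSAC for the global estimate, normalized cross-correlation for local refinement); the least squares matching and TIN-based outlier removal are omitted, and all parameters are illustrative.

```python
import cv2
import numpy as np

def coarse_to_fine_register(ref, mov, win=21, search=10):
    """Coarse SIFT+RANSAC global estimate, then local NCC refinement.

    A generic sketch of the coarse-to-fine strategy described above, not the
    ZY-1-02C production code; `ref` and `mov` are 8-bit grayscale images.
    """
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(ref, None)
    k2, d2 = sift.detectAndCompute(mov, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # rough global model

    # Fine step: predict each Harris corner with H, then refine by normalized
    # cross-correlation inside a small search window.
    corners = cv2.goodFeaturesToTrack(ref, 200, 0.01, 10, useHarrisDetector=True)
    refined, h = [], win // 2
    for c in corners.reshape(-1, 2):
        x, y = cv2.perspectiveTransform(c.reshape(1, 1, 2), H).ravel()
        x, y = int(round(x)), int(round(y))
        tpl = ref[int(c[1]) - h:int(c[1]) + h + 1, int(c[0]) - h:int(c[0]) + h + 1]
        roi = mov[y - h - search:y + h + search + 1, x - h - search:x + h + search + 1]
        if tpl.shape == (win, win) and roi.shape[0] > win and roi.shape[1] > win:
            res = cv2.matchTemplate(roi, tpl, cv2.TM_CCOEFF_NORMED)
            dy, dx = np.unravel_index(res.argmax(), res.shape)
            refined.append(((c[0], c[1]), (x - search + dx, y - search + dy)))
    return H, refined
```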

  8. Challenges for automatically extracting molecular interactions from full-text articles.

    Science.gov (United States)

    McIntosh, Tara; Curran, James R

    2009-09-24

    The increasing availability of full-text biomedical articles will allow more biomedical knowledge to be extracted automatically with greater reliability. However, most Information Retrieval (IR) and Information Extraction (IE) tools currently process only abstracts. The lack of corpora has limited the development of tools that are capable of exploiting the knowledge in full-text articles. As a result, there has been little investigation into the advantages of full-text document structure, and the challenges developers will face in processing full-text articles. We manually annotated passages from full-text articles that describe interactions summarised in a Molecular Interaction Map (MIM). Our corpus tracks the process of identifying facts to form the MIM summaries and captures any factual dependencies that must be resolved to extract the fact completely. For example, a fact in the results section may require a synonym defined in the introduction. The passages are also annotated with negated and coreference expressions that must be resolved. We describe the guidelines for identifying relevant passages and possible dependencies. The corpus includes 2162 sentences from 78 full-text articles. Our corpus analysis demonstrates the necessity of full-text processing; identifies the article sections where interactions are most commonly stated; and quantifies the proportion of interaction statements requiring coherent dependencies. Further, it allows us to report on the relative importance of identifying synonyms and resolving negated expressions. We also experiment with an oracle sentence retrieval system using the corpus as a gold-standard evaluation set. We introduce the MIM corpus, a unique resource that maps interaction facts in a MIM to annotated passages within full-text articles. It is an invaluable case study providing guidance to developers of biomedical IR and IE systems, and it can be used as a gold-standard evaluation set for full-text IR tasks.

  9. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction

    OpenAIRE

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2011-01-01

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. This tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed.

  10. R and D on automatic modeling methods for Monte Carlo codes FLUKA

    International Nuclear Information System (INIS)

    Wang Dianxi; Hu Liqin; Wang Guozhong; Zhao Zijia; Nie Fanzhi; Wu Yican; Long Pengcheng

    2013-01-01

    FLUKA is a fully integrated particle physics Monte Carlo simulation package. It is necessary to create the geometry models before calculation. However, it is time-consuming and error-prone to describe the geometry models manually. This study developed an automatic modeling method which can automatically convert computer-aided design (CAD) geometry models into FLUKA models. The conversion program was integrated into the CAD/image-based automatic modeling program for nuclear and radiation transport simulation (MCAM). Its correctness has been demonstrated. (authors)

  11. Another Method of Building 2D Entropy to Realize Automatic Segmentation

    International Nuclear Information System (INIS)

    Zhang, Y F; Zhang, Y

    2006-01-01

    The 2D entropy formed during the process of building a 2D histogram can realize automatic segmentation. The traditional method uses the grey value of the central pixel together with the mean grey value of some or all pixels in its 4-neighbourhood to build the 2D histogram. In fact, the change of greyscale value between two "invariable position vectors" cannot represent the overall characteristics among neighbouring pixels very well. A new method is proposed which uses the minimum grey value in the 4-neighbourhood and the maximum grey value in the 3x3 neighbourhood excluding the pixels of the 4-neighbourhood. The new method and the traditional one are compared in automatic image segmentation experiments. The experimental results on classical test images show that the new method is effective.
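
    A simplified sketch of 2D maximum-entropy thresholding in this spirit, using the 4-neighbourhood minimum as the second histogram axis (the paper's variant also uses the 3x3-ring maximum); the bin count and the synthetic test image are illustrative.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def max_entropy_2d(img, bins=64):
    """2D maximum-entropy threshold on a (grey, 4-neighbour-minimum) histogram."""
    footprint = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool)
    g = (img.astype(float) / 256 * bins).astype(int).clip(0, bins - 1)
    n = (minimum_filter(img, footprint=footprint).astype(float)
         / 256 * bins).astype(int).clip(0, bins - 1)
    hist, _, _ = np.histogram2d(g.ravel(), n.ravel(), bins=bins,
                                range=[[0, bins], [0, bins]])
    p = hist / hist.sum()

    def entropy(block):
        w = block.sum()
        if w <= 0:
            return -np.inf
        q = block[block > 0] / w
        return -np.sum(q * np.log(q))

    best, best_st = -np.inf, (1, 1)
    for s in range(1, bins):          # threshold on grey value
        for t in range(1, bins):      # threshold on neighbourhood minimum
            h = entropy(p[:s, :t]) + entropy(p[s:, t:])
            if h > best:
                best, best_st = h, (s, t)
    s, t = best_st
    return (g >= s) & (n >= t)        # foreground mask

img = (np.random.default_rng(2).random((64, 64)) * 255).astype(np.uint8)
print(max_entropy_2d(img).mean())
```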

  12. An Efficient Platform for the Automatic Extraction of Patterns in Native Code

    Directory of Open Access Journals (Sweden)

    Javier Escalada

    2017-01-01

    Full Text Available Different software tools, such as decompilers, code quality analyzers, recognizers of packed executable files, authorship analyzers, and malware detectors, search for patterns in binary code. The use of machine learning algorithms, trained with programs taken from the huge number of applications in the existing open source code repositories, allows finding patterns not detected with the manual approach. To this end, we have created a versatile platform for the automatic extraction of patterns from native code, capable of processing big binary files. Its implementation has been parallelized, providing important runtime performance benefits for multicore architectures. Compared to single-processor execution, the best configuration achieves an average speedup factor of 3.5, against a maximum theoretical gain of 4.

  13. METHODS OF AUTOMATIC QUALITY CONTROL OF AGGLUTINANT SANDS IN FOUNDRY

    Directory of Open Access Journals (Sweden)

    D. M. Kukuj

    2004-01-01

    Full Text Available The article presents a comparative analysis of the well-known methods for automatic quality control of agglutinant sands during their preparation, and discusses the problems of automating the control of the mix preparation processes.

  14. Design and implementation of a control automatic module for the volume extraction of a 99mTc generator

    International Nuclear Information System (INIS)

    Lopez, Yon; Urquizo, Rafael; Gago, Javier; Mendoza, Pablo

    2014-01-01

    A module for the automatic extraction of volumes from 0.05 mL to 1 mL has been developed using a 3D printer, with acrylonitrile butadiene styrene (ABS) as the base material. The design allows automation of the 99mTc eluate input and ejection processes in the 99Mo/99mTc generator prototype; use in other systems is feasible due to its high degree of versatility, depending on the selection of the main components: a precision syringe and a multi-way solenoid valve. An accuracy equivalent to that of commercial equipment has been obtained, but at lower cost. This article describes the mechanical design, the design calculations of the movement mechanism, the electronics and the automatic syringe dispenser control. (authors).

  15. Automatic lip reading by using multimodal visual features

    Science.gov (United States)

    Takahashi, Shohei; Ohya, Jun

    2013-12-01

    Speech recognition has been researched for a long time, but it does not work well in noisy places such as cars or trains. In addition, people who are hearing-impaired or have difficulties in hearing cannot benefit from speech recognition. To recognize speech automatically, visual information is also important: people understand speech not only from audio information, but also from visual information such as temporal changes in the lip shape. A vision-based speech recognition method could work well in noisy places, and could also be useful for people with hearing disabilities. In this paper, we propose an automatic lip-reading method for recognizing speech by using multimodal visual information, without using any audio information. First, the ASM (Active Shape Model) is used to track and detect the face and lip in a video sequence. Second, the shape, optical flow and spatial frequencies of the lip features are extracted from the lip region detected by the ASM. Next, the extracted multimodal features are ordered chronologically and a Support Vector Machine is used to learn and classify the spoken words. Experiments on classifying several words show promising results for the proposed method.

  16. Automatic control logics to eliminate xenon oscillation based on Axial Offsets Trajectory Method

    International Nuclear Information System (INIS)

    Shimazu, Yoichiro

    1996-01-01

    We have proposed the Axial Offsets (AO) Trajectory Method for xenon oscillation control in pressurized water reactors. A feature of this method is that it clearly gives the control operations necessary to eliminate xenon oscillations, so automatic control logics based on it can be kept simple and realized easily. We investigated such automatic control logics. The AO Trajectory Method could be realized as a very simple logic for eliminating xenon oscillations alone; however, additional considerations were necessary to eliminate a xenon oscillation while attaining a given axial power distribution. Another control logic, based on modern control theory, was also studied for comparison of control performance with the new logic. The results show that the automatic control logics based on the AO Trajectory Method are very simple and effective. (author)

  17. Automatic digitization of SMA data

    Science.gov (United States)

    Väänänen, Mika; Tanskanen, Eija

    2017-04-01

    In the 1970s and 1980s the Scandinavian Magnetometer Array produced large amounts of excellent data from over 30 stations in Norway, Sweden and Finland. 620 film reels and 20 kilometres of film have been preserved, and the longest time series produced in the campaign spans almost five uninterrupted years, but the data has never seen widespread use due to the choice of medium. Film is a difficult medium to digitize efficiently. Previously, events of interest were searched for by hand, and digitization was done by projecting the film on paper and plotting it by hand. We propose a method of automatically digitizing geomagnetic data stored on film and extracting the numerical values from the digitized data. The automatic digitization process helps in preserving old, valuable data that might otherwise go unused.
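
    One plausible core of such a digitizer, assuming the recorded trace is darker than the film background: take the darkest pixel per scanned column and convert the deflection to physical units. The calibration constants here are placeholders.

```python
import numpy as np

def trace_from_film(scan, px_per_nt=2.5, baseline_row=None):
    """Extract a magnetogram trace from a scanned film strip.

    `scan` is a 2D grayscale array (rows = deflection, columns = time); the
    darkest pixel in each column is taken as the trace position. The scale
    factor and baseline would come from the film's calibration marks.
    """
    rows = scan.argmin(axis=0).astype(float)   # darkest pixel per column
    if baseline_row is None:
        baseline_row = np.median(rows)         # assume a quiet baseline
    return (baseline_row - rows) / px_per_nt   # deflection in nT

# Synthetic film strip: light background with a dark wiggly line.
t = np.arange(1000)
curve = (120 + 40 * np.sin(t / 60.0)).astype(int)
scan = np.full((240, 1000), 220, dtype=np.uint8)
scan[curve, t] = 15
print(trace_from_film(scan)[:5])
```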

  18. Physics of Automatic Target Recognition

    CERN Document Server

    Sadjadi, Firooz

    2007-01-01

    Physics of Automatic Target Recognition addresses the fundamental physical bases of sensing and information extraction in the state-of-the-art automatic target recognition field. It explores both passive and active multispectral sensing, polarimetric diversity, complex signature exploitation, sensor and processing adaptation, transformation of electromagnetic and acoustic waves in their interactions with targets, background clutter, transmission media, and sensing elements. The general inverse scattering and advanced signal processing techniques and scientific evaluation methodologies being used in this multidisciplinary field will be part of this exposition. The issues of modeling of target signatures in various spectral modalities, LADAR, IR, SAR, high resolution radar, acoustic, seismic, visible, hyperspectral, in diverse geometric aspects will be addressed. The methods for signal processing and classification will cover concepts such as sensor adaptive and artificial neural networks, time reversal filt...

  19. Automatic Detection of Storm Damages Using High-Altitude Photogrammetric Imaging

    Science.gov (United States)

    Litkey, P.; Nurminen, K.; Honkavaara, E.

    2013-05-01

    The risks of storms that cause damage in forests are increasing due to climate change. Quickly detecting fallen trees, assessing the amount of fallen trees and efficiently collecting them are of great importance for economic and environmental reasons. Visually detecting and delineating storm damage is a laborious and error-prone process; thus, it is important to develop cost-efficient and highly automated methods. The objective of our research project is to investigate and develop a reliable and efficient method for automatic storm damage detection, based on airborne imagery collected after a storm. The requirements for the method are before-storm and after-storm surface models. A difference surface is calculated using the two DSMs, and the locations where significant changes have appeared are automatically detected. In our previous research we used a four-year-old airborne laser scanning surface model as the before-storm surface. The after-storm DSM was produced from the photogrammetric images using the Next Generation Automatic Terrain Extraction (NGATE) algorithm of the Socet Set software. We obtained 100% accuracy in the detection of major storm damage. In this investigation we will further evaluate the sensitivity of the storm-damage detection process. We will investigate the potential of national airborne photography, which is collected in the no-leaf season, to automatically produce a before-storm DSM using image matching. We will also compare the impact of the terrain extraction algorithm on the results. Our results will also promote the potential of national open source data sets in the management of natural disasters.
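
    The DSM-differencing idea reduces to a few lines of NumPy/SciPy; the 10 m drop threshold and minimum region size below are illustrative values, not the project's tuned settings.

```python
import numpy as np
from scipy import ndimage

def detect_fallen_canopy(dsm_before, dsm_after, drop=10.0, min_area_px=50):
    """Flag regions where the surface dropped by more than `drop` metres."""
    diff = dsm_before - dsm_after              # positive where canopy is gone
    labels, n = ndimage.label(diff > drop)
    areas = ndimage.sum(np.ones_like(diff), labels, np.arange(1, n + 1))
    keep = np.isin(labels, np.flatnonzero(areas >= min_area_px) + 1)
    return keep                                # boolean storm-damage mask

rng = np.random.default_rng(3)
before = 20.0 + rng.normal(0, 0.3, (200, 200))  # forest canopy at ~20 m
after = before.copy()
after[60:90, 100:160] = 2.0                     # a patch of fallen trees
print("damaged pixels:", detect_fallen_canopy(before, after).sum())
```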

  20. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    Science.gov (United States)

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

    We developed a fully automatic segmentation method for volumetric CT (computed tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of the volumetric overlap metric, by comparing with the ground-truth segmentation performed by a radiologist.

  1. Semi-Automatic Construction of Skeleton Concept Maps from Case Judgments

    OpenAIRE

    Boer, A.; Sijtsma, B.; Winkels, R.; Lettieri, N.

    2014-01-01

    This paper proposes an approach to generating Skeleton Conceptual Maps (SCM) semi-automatically from legal case documents provided by the United Kingdom's Supreme Court. SCM are incomplete knowledge representations for the purpose of scaffolding learning. The proposed system intends to provide students with a tool to pre-process text and to extract knowledge from documents in a time-saving manner. A combination of natural language processing methods and proposition extraction algorithms are used.

  2. Evaluation of semi-automatic arterial stenosis quantification

    International Nuclear Information System (INIS)

    Hernandez Hoyos, M.; Universite Claude Bernard Lyon 1, 69 - Villeurbanne; Univ. de los Andes, Bogota; Serfaty, J.M.; Douek, P.C.; Universite Claude Bernard Lyon 1, 69 - Villeurbanne; Hopital Cardiovasculaire et Pneumologique L. Pradel, Bron; Maghiar, A.; Mansard, C.; Orkisz, M.; Magnin, I.; Universite Claude Bernard Lyon 1, 69 - Villeurbanne

    2006-01-01

    Objective: To assess the accuracy and reproducibility of semi-automatic vessel axis extraction and stenosis quantification in 3D contrast-enhanced magnetic resonance angiography (CE-MRA) of the carotid arteries (CA). Materials and methods: A total of 25 MRA datasets was used: 5 phantoms with known stenoses, and 20 patients (40 CAs) drawn from a multicenter trial database. The Maracas software extracted vessel centerlines and quantified the stenoses, based on boundary detection in planes perpendicular to the centerline. Centerline accuracy was visually scored. Semi-automatic measurements were compared with: (1) theoretical phantom morphometric values, and (2) stenosis degrees evaluated by two independent radiologists. Results: Exploitable centerlines were obtained in 97% of CAs and in all phantoms. In phantoms, the software achieved a better agreement with theoretical stenosis degrees (weighted kappa κW = 0.91) than the radiologists (κW = 0.69). In patients, agreement between software and radiologists varied from κW = 0.67 to 0.90. In both phantoms and patients, Maracas was substantially more reproducible than the readers. Mean operating time was within 1 min/CA. Conclusion: The Maracas software generates accurate 3D centerlines of vascular segments with minimum user intervention. Semi-automatic quantification of CA stenosis is also accurate, except in very severe stenoses that cannot be segmented. It substantially reduces the inter-observer variability. (orig.)
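
    The quantification step itself is a simple diameter ratio once the perpendicular-plane diameters are available; a sketch with made-up values (NASCET-style percent stenosis):

```python
import numpy as np

def stenosis_degree(diameters_mm, reference_mm=None):
    """Percent diameter stenosis from lumen diameters along the centerline.

    Diameters are measured in planes perpendicular to the extracted vessel
    axis; the reference defaults to a 'healthy' diameter estimate taken as
    a high percentile of the profile. All values are illustrative.
    """
    d = np.asarray(diameters_mm, dtype=float)
    if reference_mm is None:
        reference_mm = np.percentile(d, 90)   # assumed healthy calibre
    return 100.0 * (1.0 - d.min() / reference_mm)

# Carotid-like profile: 6 mm lumen narrowing to 1.8 mm at the stenosis.
profile = [6.1, 6.0, 5.8, 4.0, 1.8, 2.5, 5.7, 6.0]
print("stenosis: %.0f%%" % stenosis_degree(profile))
```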

  3. Automatic vertebral identification using surface-based registration

    Science.gov (United States)

    Herring, Jeannette L.; Dawant, Benoit M.

    2000-06-01

    This work introduces an enhancement to currently existing methods of intra-operative vertebral registration by allowing the portion of the spinal column surface that correctly matches a set of physical vertebral points to be automatically selected from several possible choices. Automatic selection is made possible by the shape variations that exist among lumbar vertebrae. In our experiments, we register vertebral points representing physical space to spinal column surfaces extracted from computed tomography images. The vertebral points are taken from the posterior elements of a single vertebra to represent the region of surgical interest. The surface is extracted using an improved version of the fully automatic marching cubes algorithm, which results in a triangulated surface that contains multiple vertebrae. We find the correct portion of the surface by registering the set of physical points to multiple surface areas, including all vertebral surfaces that potentially match the physical point set. We then compute the standard deviation of the surface error for the set of points registered to each vertebral surface that is a possible match, and the registration that corresponds to the lowest standard deviation designates the correct match. We have performed our current experiments on two plastic spine phantoms and one patient.

  4. An Automatic Building Extraction and Regularisation Technique Using LiDAR Point Cloud Data and Orthoimage

    Directory of Open Access Journals (Sweden)

    Syed Ali Naqi Gilani

    2016-03-01

    Full Text Available The development of robust and accurate methods for automatic building detection and regularisation using multisource data continues to be a challenge due to point cloud sparsity, high spectral variability, differences among urban objects, surrounding complexity, and data misalignment. To address these challenges, constraints on an object's size, height, area, and orientation are generally imposed, which adversely affects the detection performance. Buildings that are small, under shadow or partly occluded are often discarded during the elimination of superfluous objects. To overcome these limitations, a methodology is developed to extract and regularise buildings using features from point clouds and orthoimagery. The building delineation process is carried out by identifying the candidate building regions and segmenting them into grids. Vegetation elimination, building detection and extraction of their partially occluded parts are achieved by synthesising the point cloud and image data. Finally, the detected buildings are regularised by exploiting image lines in the building regularisation process. The detection and regularisation processes have been evaluated using the ISPRS benchmark and four Australian data sets which differ in point density (1 to 29 points/m2), building sizes, shadows, terrain, and vegetation. Results indicate a per-area completeness of 83% to 93% with correctness above 95%, demonstrating the robustness of the approach. The absence of over- and many-to-many segmentation errors in the ISPRS data set indicates that the technique has higher per-object accuracy. Compared with six existing similar methods, the proposed detection and regularisation approach performs significantly better on the more complex (Australian) data sets, and better than or equal to the counterparts on the ISPRS benchmark.

  5. Automatic digital surface model (DSM) generation from aerial imagery data

    Science.gov (United States)

    Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu

    2018-04-01

    Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide a lot of redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing is used to reduce the effects of the inherent radiometric problems and optimize the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and to identify inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales over different land-cover types. The accuracy evaluation is based on the comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and those derived from the POS.

  6. Automatic extraction of forward stroke volume using dynamic PET/CT: a dual-tracer and dual-scanner validation in patients with heart valve disease.

    Science.gov (United States)

    Harms, Hendrik Johannes; Tolbod, Lars Poulsen; Hansson, Nils Henrik Stubkjær; Kero, Tanja; Orndahl, Lovisa Holm; Kim, Won Yong; Bjerner, Tomas; Bouchelouche, Kirsten; Wiggers, Henrik; Frøkiær, Jørgen; Sörensen, Jens

    2015-12-01

    The aim of this study was to develop and validate an automated method for extracting forward stroke volume (FSV) using indicator dilution theory directly from dynamic positron emission tomography (PET) studies for two different tracers and scanners. 35 subjects underwent a dynamic 11C-acetate PET scan on a Siemens Biograph TruePoint-64 PET/CT (scanner I). In addition, 10 subjects underwent both dynamic 15O-water PET and 11C-acetate PET scans on a GE Discovery-ST PET/CT (scanner II). The left ventricular (LV)-aortic time-activity curve (TAC) was extracted automatically from PET data using cluster analysis. The first-pass peak was isolated by automatic extrapolation of the downslope of the TAC. FSV was calculated as the injected dose divided by the product of heart rate and the area under the curve of the first-pass peak. Gold standard FSV was measured using phase-contrast cardiovascular magnetic resonance (CMR). FSVPET correlated highly with FSVCMR (r = 0.87, slope = 0.90 for scanner I; r = 0.87, slope = 1.65, and r = 0.85, slope = 1.69 for scanner II for 15O-water and 11C-acetate, respectively), although a systematic bias was observed for both scanners. In conclusion, FSV can be extracted automatically using dynamic PET/CT and cluster analysis. Results are almost identical for 11C-acetate and 15O-water. A scanner-dependent bias was observed, and a scanner calibration factor is required for multi-scanner studies. Generalization of the method to other tracers and scanners requires further validation.
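
    A numerical sketch of the indicator-dilution computation on a synthetic first-pass curve; the mono-exponential extrapolation of the downslope and all numbers are illustrative, not the validated pipeline.

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal area under y(x)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def forward_stroke_volume(t, tac, dose_bq, hr_bpm, peak_end_idx, fit_pts=4):
    """FSV (mL/beat) from a first-pass LV/aortic time-activity curve.

    The downslope after the first-pass peak is extrapolated
    mono-exponentially to isolate the peak, then
    FSV = dose / (heart rate x area under the first-pass curve).
    `tac` is in Bq/mL, `t` in seconds; inputs are illustrative.
    """
    ts = t[peak_end_idx - fit_pts:peak_end_idx]
    cs = tac[peak_end_idx - fit_pts:peak_end_idx]
    slope, intercept = np.polyfit(ts, np.log(cs), 1)   # log-linear downslope
    t_ext = np.arange(t[peak_end_idx], t[peak_end_idx] + 60.0, 1.0)
    c_ext = np.exp(intercept + slope * t_ext)          # extrapolated tail

    auc = _trapz(tac[:peak_end_idx + 1], t[:peak_end_idx + 1]) + _trapz(c_ext, t_ext)
    return dose_bq / ((hr_bpm / 60.0) * auc)

# Synthetic first pass: gamma-variate-like bolus sampled at 1 s.
t = np.arange(0.0, 30.0, 1.0)
tac = 9e6 * (t / 8.0) ** 3 * np.exp(-(t / 8.0) * 3)
print("FSV ~ %.0f mL" % forward_stroke_volume(t, tac, dose_bq=4e8,
                                              hr_bpm=65, peak_end_idx=25))
```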

  7. A New Variational Method for Bias Correction and Its Applications to Rodent Brain Extraction.

    Science.gov (United States)

    Chang, Huibin; Huang, Weimin; Wu, Chunlin; Huang, Su; Guan, Cuntai; Sekar, Sakthivel; Bhakoo, Kishore Kumar; Duan, Yuping

    2017-03-01

    Brain extraction is an important preprocessing step for further analysis of brain MR images. Significant intensity inhomogeneity can be observed in rodent brain images due to the high-field MRI technique. Unlike most existing brain extraction methods that require bias-corrected MRI, we present a high-order and L0 regularized variational model for simultaneous bias correction and brain extraction. The model is composed of a data fitting term, a piecewise constant regularization and a smooth regularization, constructed on a 3-D formulation for medical images with anisotropic voxel sizes. We propose an efficient multi-resolution algorithm for fast computation. At each resolution layer, we solve an alternating direction scheme, all subproblems of which have closed-form solutions. The method was tested on three T2-weighted acquisition configurations comprising a total of 50 rodent brain volumes, acquired at field strengths of 4.7 Tesla, 9.4 Tesla and 17.6 Tesla, respectively. On one hand, we compare the results of bias correction with N3 and N4 in terms of the coefficient of variation on 20 different tissues of the rodent brain. On the other hand, the results of brain extraction are compared against manually segmented gold standards, BET, BSE and 3-D PCNN, based on a number of metrics. With its high accuracy and efficiency, our proposed method can facilitate the automatic processing of large-scale brain studies.

  8. CAD-based automatic modeling method for Geant4 geometry model through MCAM

    International Nuclear Information System (INIS)

    Wang, D.; Nie, F.; Wang, G.; Long, P.; LV, Z.

    2013-01-01

    The full text of publication follows. Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, the calculation model needs to be established, which can be described using the Geometry Description Markup Language (GDML) or C++. However, it is time-consuming and error-prone to describe the models manually in GDML. Automatic modeling methods have been developed recently, but problems exist in most present modeling programs; in particular, some of them are not accurate or are adapted only to specific CAD formats. To convert CAD geometry models into GDML models accurately, a Geant4 computer-aided design (CAD) based modeling method was developed for automatically converting complex CAD geometry models into GDML geometry models. The essence of this method is dealing with a CAD model represented as a boundary representation (B-REP) and a GDML model represented as constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells, and the corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is completed with a series of Boolean operations. This method was adopted in the CAD/image-based automatic modeling program for neutronics and radiation transport (MCAM), and tested with several models including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately, and can be used for Geant4 automatic modeling. (authors)

  9. Road Network Extraction from Dsm by Mathematical Morphology and Reasoning

    Science.gov (United States)

    Li, Yan; Wu, Jianliang; Zhu, Lin; Tachibana, Kikuo

    2016-06-01

    The objective of this research is the automatic extraction of the road network in an urban scene from a high resolution digital surface model (DSM). Automatic road extraction and modeling from remotely sensed data has been studied for more than a decade, with methods varying greatly due to differences in data types, regions, resolutions, etc. An advanced automatic road network extraction scheme is proposed to address the issue of tedious steps in segmentation, recognition and grouping. It is based on a geometric road model which describes a multiple-level structure. The 0-dimensional element is the intersection. The 1-dimensional elements are the central line and the sides. The 2-dimensional element is the plane, which is generated from the 1-dimensional elements. The key feature of the presented approach is the cross-validation of the three road elements, which goes through the entire procedure of their extraction. The advantage of our model and method is that the linear elements of the road can be derived directly, without any complex, non-robust connection hypothesis. An example of a Japanese scene is presented to illustrate the procedure and the performance of the approach.

  10. Automatic crack detection and classification method for subway tunnel safety monitoring.

    Science.gov (United States)

    Zhang, Wenyu; Zhang, Zhenjiang; Qi, Dapeng; Liu, Yun

    2014-10-16

    Cracks are an important indicator reflecting the safety status of infrastructures. This paper presents an automatic crack detection and classification methodology for subway tunnel safety monitoring. With the application of high-speed complementary metal-oxide-semiconductor (CMOS) industrial cameras, the tunnel surface can be captured and stored in digital images. In the next step, the local dark regions with potential crack defects are segmented from the original gray-scale images by utilizing morphological image processing techniques and thresholding operations. In the feature extraction process, we present a distance-histogram-based shape descriptor that effectively describes the spatial shape difference between cracks and other irrelevant objects. Along with other features, the classification results successfully remove over 90% of misidentified objects. Also, compared with the original gray-scale images, over 90% of the crack length is preserved in the final output binary images. The proposed approach was tested on the safety monitoring of Beijing Subway Line 1. The experimental results revealed the rules of parameter settings and also proved that the proposed approach is effective and efficient for automatic crack detection and classification.
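
    The distance-histogram descriptor can be sketched as follows: a histogram of boundary-pixel distances to the object centroid, which spreads out for elongated cracks and stays compact for blob-like objects. The bin count and test shapes are illustrative.

```python
import numpy as np
from scipy import ndimage

def distance_histogram(mask, nbins=16):
    """Shape descriptor: histogram of boundary-pixel distances to the centroid."""
    boundary = mask & ~ndimage.binary_erosion(mask)
    ys, xs = np.nonzero(boundary)
    cy, cx = ys.mean(), xs.mean()
    d = np.hypot(ys - cy, xs - cx)
    hist, _ = np.histogram(d, bins=nbins, range=(0, d.max() + 1e-9))
    return hist / hist.sum()

crack = np.zeros((64, 64), bool); crack[10:54, 31:33] = True  # thin long object
spot = np.zeros((64, 64), bool); spot[28:36, 28:36] = True    # compact object
print("crack:", distance_histogram(crack).round(2))
print("spot :", distance_histogram(spot).round(2))
```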

  11. Automatic Crack Detection and Classification Method for Subway Tunnel Safety Monitoring

    Directory of Open Access Journals (Sweden)

    Wenyu Zhang

    2014-10-01

    Full Text Available Cracks are an important indicator reflecting the safety status of infrastructures. This paper presents an automatic crack detection and classification methodology for subway tunnel safety monitoring. With the application of high-speed complementary metal-oxide-semiconductor (CMOS) industrial cameras, the tunnel surface can be captured and stored in digital images. In the next step, the local dark regions with potential crack defects are segmented from the original gray-scale images by utilizing morphological image processing techniques and thresholding operations. In the feature extraction process, we present a distance-histogram-based shape descriptor that effectively describes the spatial shape difference between cracks and other irrelevant objects. Along with other features, the classification results successfully remove over 90% of misidentified objects. Also, compared with the original gray-scale images, over 90% of the crack length is preserved in the final output binary images. The proposed approach was tested on the safety monitoring of Beijing Subway Line 1. The experimental results revealed the rules of parameter settings and also proved that the proposed approach is effective and efficient for automatic crack detection and classification.

  12. 3D automatic segmentation method for retinal optical coherence tomography volume data using boundary surface enhancement

    Directory of Open Access Journals (Sweden)

    Yankui Sun

    2016-03-01

    Full Text Available With the introduction of spectral-domain optical coherence tomography (SD-OCT), much larger image datasets are routinely acquired compared to what was possible using the previous generation of time-domain OCT. Thus, there is a critical need for the development of three-dimensional (3D) segmentation methods for processing these data. We present here a novel 3D automatic segmentation method for retinal OCT volume data. Briefly, to segment a boundary surface, two OCT volume datasets are obtained by using a 3D smoothing filter and a 3D differential filter. Their linear combination is then calculated to generate new volume data with an enhanced boundary surface, where pixel intensity, boundary position information, and intensity changes on both sides of the boundary surface are used simultaneously. Next, preliminary discrete boundary points are detected from the A-scans of the volume data. Finally, surface smoothness constraints and a dynamic threshold are applied to obtain a smoothed boundary surface by correcting a small number of error points. Our method can extract retinal layer boundary surfaces sequentially with a decreasing search region of the volume data. We performed automatic segmentation on eight human OCT volume datasets acquired from a commercial Spectralis OCT system, where each volume contains 97 OCT B-scans with a resolution of 496×512 (each B-scan comprising 512 A-scans of 496 pixels); experimental results show that this method can accurately segment seven layer boundary surfaces in normal as well as some abnormal eyes.
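
    A SciPy sketch of the boundary-enhancement and preliminary-boundary steps; the filter sigmas, weights and synthetic volume are illustrative, and the smoothness constraints and dynamic threshold are omitted.

```python
import numpy as np
from scipy import ndimage

def enhance_boundary(vol, alpha=0.5, beta=0.5):
    """Linear combination of a smoothed and an axial-derivative volume.

    `vol` is an OCT volume indexed (B-scan, depth, A-scan); the derivative is
    taken along the depth axis so that layer boundaries (intensity steps
    along each A-scan) are emphasised. Weights are illustrative.
    """
    smoothed = ndimage.gaussian_filter(vol.astype(float), sigma=1.5)
    axial_gradient = ndimage.sobel(vol.astype(float), axis=1)
    return alpha * smoothed + beta * axial_gradient

def preliminary_boundary(enhanced):
    """Preliminary discrete boundary: strongest response along each A-scan."""
    return enhanced.argmax(axis=1)     # depth index per (B-scan, A-scan)

vol = np.random.default_rng(4).random((5, 496, 512))
vol[:, 200:, :] += 1.0                 # synthetic layer interface at depth 200
surface = preliminary_boundary(enhance_boundary(vol))
print(surface.shape, int(np.median(surface)))
```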

  13. Information retrieval and terminology extraction in online resources for patients with diabetes.

    Science.gov (United States)

    Seljan, Sanja; Baretić, Maja; Kucis, Vlasta

    2014-06-01

    Terminology use, as a means of information retrieval or document indexing, plays an important role in health literacy. Specific types of users, i.e. patients with diabetes, need access to various online resources (in a foreign and/or the native language) when searching for information on self-education in basic diabetic knowledge, on self-care activities regarding the importance of dietetic food, medications and physical exercise, and on self-management of insulin pumps. Automatic extraction of corpus-based terminology from online texts, manuals or professional papers can help in building terminology lists or lists of "browsing phrases" useful in information retrieval or document indexing. Specific terminology lists represent an intermediate step between free-text search and a controlled vocabulary, between users' demands and existing online resources in the native and a foreign language. The research, aiming to detect the role of terminology in online resources, is conducted on English and Croatian manuals and Croatian online texts, and divided into three interrelated parts: (i) comparison of professional and popular terminology use; (ii) evaluation of automatic statistically-based terminology extraction on English and Croatian texts; and (iii) comparison and evaluation of extracted terminology performed on an English manual using statistical and hybrid approaches. Extracted terminology candidates are evaluated by comparison with three types of reference lists: a list created by a medical professional, a list of highly professional vocabulary contained in MeSH, and a list created by non-medical persons, made as the intersection of 15 lists. Results report on the use of popular and professional terminology in online diabetes resources, on the evaluation of automatically extracted terminology candidates in English and Croatian texts, and on the comparison of statistical and hybrid extraction methods on the English text. Evaluation of the automatic and semi-automatic terminology extraction methods is performed by recall

  14. [Modeling and implementation method for the automatic biochemistry analyzer control system].

    Science.gov (United States)

    Wang, Dong; Ge, Wan-cheng; Song, Chun-lin; Wang, Yun-guang

    2009-03-01

    The automatic biochemistry analyzer is a necessary instrument for clinical diagnostics. In this paper, the system structure is analyzed first of all. The description of the system problems and the fundamental principles for dispatching are brought forward. The paper then puts emphasis on the modeling of the automatic biochemistry analyzer control system: the object model and the communications model are put forward. Finally, the implementation method is designed. The results indicate that a system based on this model has good performance.

  15. Real-Time Automatic Fetal Brain Extraction in Fetal MRI by Deep Learning

    OpenAIRE

    Salehi, Seyed Sadegh Mohseni; Hashemi, Seyed Raein; Velasco-Annis, Clemente; Ouaalam, Abdelhakim; Estroff, Judy A.; Erdogmus, Deniz; Warfield, Simon K.; Gholipour, Ali

    2017-01-01

    Brain segmentation is a fundamental first step in neuroimage analysis. In the case of fetal MRI, it is particularly challenging and important due to the arbitrary orientation of the fetus, organs that surround the fetal head, and intermittent fetal motion. Several promising methods have been proposed but are limited in their performance in challenging cases and in real-time segmentation. We aimed to develop a fully automatic segmentation method that independently segments sections of the fetal brain...

  16. Comparing a novel automatic 3D method for LGE-CMR quantification of scar size with established methods.

    Science.gov (United States)

    Woie, Leik; Måløy, Frode; Eftestøl, Trygve; Engan, Kjersti; Edvardsen, Thor; Kvaløy, Jan Terje; Ørn, Stein

    2014-02-01

    Current methods for the estimation of infarct size by late-enhancement cardiac magnetic resonance (LGE-CMR) imaging are based upon 2D analysis that first determines the size of the infarction in each slice and thereafter adds the infarct sizes from each slice to generate a volume. We present a novel, automatic 3D method that estimates infarct size by a simultaneous analysis of all pixels from all slices. In a population of 54 patients with ischemic scars, the infarct size estimated by the automatic 3D method was compared with four established 2D methods. The new 3D method defined scar as the sum of all pixels with signal intensity (SI) ≥35% of the maximum SI from the complete myocardium, the border zone as SI 35-50% of the maximum SI, and the core as SI ≥50% of the maximum SI. The 3D method yielded a smaller infarct size (-2.8 ± 2.3%) and core size (-3.0 ± 1.7%) than the 2D method most similar to ours. There was no difference in the size of the border zone (0.2 ± 1.4%). The 3D method demonstrated stronger correlations between scar size and left ventricular (LV) remodelling parameters (e.g. LV ejection fraction: r = -0.71). The automatic 3D method requires no manual demarcation of the scar, is less time-consuming, and has a stronger correlation with remodelling parameters compared with existing methods.

  17. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction.

    Science.gov (United States)

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2010-11-13

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. This tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested the tool's impact on the task of protein-protein interaction (PPI) extraction: it improved the f-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.

  18. Temporally rendered automatic cloud extraction (TRACE) system

    Science.gov (United States)

    Bodrero, Dennis M.; Yale, James G.; Davis, Roger E.; Rollins, John M.

    1999-10-01

    Smoke/obscurant testing requires that 2D cloud extent be extracted from visible and thermal imagery. These data are used alone or in combination with 2D data from other aspects to make 3D calculations of cloud properties, including dimensions, volume, centroid, travel, and uniformity. Determining cloud extent from imagery has historically been a time-consuming manual process. To reduce the time and cost associated with smoke/obscurant data processing, automated methods to extract cloud extent from imagery were investigated. The TRACE system described in this paper was developed and implemented at U.S. Army Dugway Proving Ground, UT by the Science and Technology Corporation--Acuity Imaging Incorporated team with Small Business Innovation Research funding. TRACE uses dynamic background subtraction and 3D fast Fourier transform as primary methods to discriminate the smoke/obscurant cloud from the background. TRACE has been designed to run on a PC-based platform using Windows. The PC-Windows environment was chosen for portability, to give TRACE the maximum flexibility in terms of its interaction with peripheral hardware devices such as video capture boards, removable media drives, network cards, and digital video interfaces. Video for Windows provides all of the necessary tools for the development of the video capture utility in TRACE and allows for interchangeability of video capture boards without any software changes. TRACE is designed to take advantage of future upgrades in all aspects of its component hardware. A comparison of cloud extent determined by TRACE with the manual method is included in this paper.

  19. Application of a semi-automatic cartilage segmentation method for biomechanical modeling of the knee joint.

    Science.gov (United States)

    Liukkonen, Mimmi K; Mononen, Mika E; Tanska, Petri; Saarakkala, Simo; Nieminen, Miika T; Korhonen, Rami K

    2017-10-01

    Manual segmentation of articular cartilage from knee joint 3D magnetic resonance images (MRI) is a time-consuming and laborious task. Thus, automatic methods are needed for faster and reproducible segmentations. In the present study, we developed a semi-automatic segmentation method based on radial intensity profiles to generate 3D geometries of knee joint cartilage, which were then used in computational biomechanical models of the knee joint. Six healthy volunteers were imaged with a 3T MRI device and their knee cartilages were segmented both manually and semi-automatically. The values of cartilage thicknesses and volumes produced by these two methods were compared. Furthermore, the influence of possible geometrical differences on cartilage stresses and strains in the knee was evaluated with finite element modeling. The semi-automatic segmentation and 3D geometry construction of one knee joint (menisci, femoral and tibial cartilages) was approximately two times faster than manual segmentation. Differences in cartilage thicknesses, volumes, contact pressures, stresses, and strains between the segmentation methods in femoral and tibial cartilage were mostly insignificant (p > 0.05) and random, i.e. there were no systematic differences between the methods. In conclusion, the devised semi-automatic segmentation method is a quick and accurate way to determine cartilage geometries; it may become a valuable tool for biomechanical modeling applications with large patient groups.

  20. Automatic video shot boundary detection using k-means clustering and improved adaptive dual threshold comparison

    Science.gov (United States)

    Sa, Qila; Wang, Zhihui

    2018-03-01

    At present, content-based video retrieval (CBVR) is the mainstream video retrieval method, using the video's own features to perform automatic identification and retrieval. This method involves a key technology: shot segmentation. In this paper, a method for automatic video shot boundary detection using K-means clustering and improved adaptive dual-threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the K-means clustering algorithm, namely frames with significant change and frames with no significant change. Then, based on the classification results, the improved adaptive dual-threshold comparison method is used to determine both abrupt and gradual shot boundaries. Finally, an automatic video shot boundary detection system is achieved.
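
    A compact scikit-learn sketch of this pipeline, using per-frame histogram differences as the feature; the threshold ratios around the 'significant change' cluster centre are illustrative stand-ins for the adaptive dual thresholds.

```python
import numpy as np
from sklearn.cluster import KMeans

def shot_boundaries(frame_hists, low_ratio=0.3, high_ratio=0.8):
    """K-means split of frame-to-frame changes plus dual-threshold checking.

    `frame_hists` is an (n_frames, n_bins) array of per-frame colour
    histograms. K-means separates significant from insignificant changes;
    dual thresholds around the significant-cluster centre then distinguish
    abrupt cuts from gradual transitions. Ratios are illustrative.
    """
    d = np.abs(np.diff(frame_hists, axis=0)).sum(axis=1)   # change per frame
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(d.reshape(-1, 1))
    hi_cluster = km.cluster_centers_.argmax()
    centre = km.cluster_centers_.max()
    t_high, t_low = high_ratio * centre, low_ratio * centre

    cuts = np.flatnonzero((km.labels_ == hi_cluster) & (d >= t_high)) + 1
    graduals = np.flatnonzero((km.labels_ == hi_cluster)
                              & (d < t_high) & (d >= t_low)) + 1
    return cuts, graduals

hists = np.random.default_rng(5).random((100, 32)) * 0.05
hists[40:] += 0.8        # abrupt content change at frame 40
print(shot_boundaries(hists))
```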

  1. A fully automatic end-to-end method for content-based image retrieval of CT scans with similar liver lesion annotations.

    Science.gov (United States)

    Spanier, A B; Caplan, N; Sosna, J; Acar, B; Joskowicz, L

    2018-01-01

    The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic so that they can handle large case databases. We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based features vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, that it uses simple and efficient techniques that are scalable to large datasets, and that it produces quality retrieval results using an unannotated CT scan. Our experimental results on 9 CT queries on a dataset of 41 volumetric CT scans from the 2014 ImageCLEF Liver Annotation Task yield an average retrieval accuracy (Normalized Discounted Cumulative Gain index) of 0.77 and 0.84 without and with annotation, respectively. Fully automatic end-to-end retrieval of similar cases based on image information alone, rather than on disease diagnosis, may help radiologists to better diagnose liver lesions.

  2. Automatically Extracting Typical Syntactic Differences from Corpora

    NARCIS (Netherlands)

    Wiersma, Wybo; Nerbonne, John; Lauttamus, Timo

    We develop an aggregate measure of syntactic difference for automatically finding common syntactic differences between collections of text. With the use of this measure, it is possible to mine for differences between, for example, the English of learners and natives, or between related dialects. If

  3. Automatic measurement of cusps in 2.5D dental images

    Science.gov (United States)

    Wolf, Mattias; Paulus, Dietrich W.; Niemann, Heinrich

    1996-01-01

    Automatic reconstruction of the occlusal surfaces of teeth is an application which might become more and more urgent due to the toxicity of amalgam. Modern dental chairside equipment is currently restricted to the production of inlays; the automatic reconstruction of the occlusal surface is presently not possible. Manufacturing an occlusal surface requires extracting features from which it is possible to reconstruct destroyed teeth. In this paper, we demonstrate how intact upper molars can be automatically extracted in dental range and intensity images. After normalization of the 3D location, the sizes of the cusps are detected and the distances between them are calculated. In the presented approach, the detection of the upper molar is based on a knowledge-based segmentation which includes anatomical knowledge. After the segmentation of the tooth of interest, the central fossa is calculated. The normalization of the spatial location is achieved by aligning the detected fossa with a reference axis. After the cusp tips are located in the range image, the image is resized. The methods have been successfully tested on 60 images, and the results have been compared with a dentist's evaluation on a sample of 20 images. The results will be further used for the automatic production of tooth inlays.

  4. Automatic registration method for multisensor datasets adopted for dimensional measurements on cutting tools

    International Nuclear Information System (INIS)

    Shaw, L; Mehari, F; Weckenmann, A; Ettl, S; Häusler, G

    2013-01-01

    Multisensor systems with optical 3D sensors are frequently employed to capture complete surface information by measuring workpieces from different views. During coarse and fine registration, the resulting datasets are transformed into one common coordinate system. Automatic fine registration methods are well established in dimensional metrology, whereas there is a deficit in automatic coarse registration methods. The advantage of a fully automatic registration procedure is twofold: it enables a fast and contact-free alignment, and further a flexible application to datasets from any kind of optical 3D sensor. In this paper, an algorithm adapted for robust automatic coarse registration is presented. The method was originally developed for the field of object reconstruction or localization. It is based on a segmentation of planes in the datasets to calculate the transformation parameters. The rotation is defined by the normals of three corresponding segmented planes of two overlapping datasets, while the translation is calculated via the intersection point of the segmented planes. First results have shown that the translation is strongly shape dependent: 3D data of objects with non-orthogonal planar flanks cannot be registered with the current method. In the novel supplement to the algorithm, the translation is additionally calculated via the distance between centroids of corresponding segmented planes, which results in more than one option for the transformation. A newly introduced measure considering the distance between the datasets after coarse registration evaluates the best possible transformation. Results of the robust automatic registration method are presented on the example of datasets taken from a cutting tool with a fringe-projection system and a focus-variation system. The successful application in dimensional metrology is proven with evaluations of shape parameters based on the registered datasets of a calibrated workpiece. (paper)
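
    The rotation-from-three-normals step admits a closed-form least-squares (Kabsch/SVD) solution; a sketch with a synthetic self-check, assuming the plane correspondences are already known.

```python
import numpy as np

def rotation_from_plane_normals(normals_a, normals_b):
    """Least-squares rotation aligning three corresponding plane normals.

    Kabsch/SVD solution for a coarse-registration rotation; `normals_a` and
    `normals_b` are 3x3 arrays whose rows are unit normals of corresponding
    segmented planes from the two overlapping datasets.
    """
    h = normals_a.T @ normals_b
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))      # guard against reflections
    return vt.T @ np.diag([1.0, 1.0, d]) @ u.T  # maps dataset A into B

# Check: rotate three orthogonal normals by a known rotation and recover it.
a = np.eye(3)
angle = np.deg2rad(30)
r_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
b = (r_true @ a.T).T
print(np.allclose(rotation_from_plane_normals(a, b), r_true))
```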

  5. Automatic radar target recognition of objects falling on railway tracks

    International Nuclear Information System (INIS)

    Mroué, A; Heddebaut, M; Elbahhar, F; Rivenq, A; Rouvaen, J-M

    2012-01-01

    This paper presents an automatic radar target recognition procedure based on complex resonances, using the signals provided by ultra-wideband radar. The procedure is dedicated to the detection and identification of objects lying on railway tracks. For efficient complex resonance extraction, a comparison between several pole extraction methods is illustrated, and preprocessing methods are presented that aim to remove most of the erroneous poles interfering with the discrimination scheme. Once the physical poles are determined, a specific discrimination technique is introduced based on Euclidean distances. Both simulation and experimental results are presented, showing efficient discrimination of different targets, including guided-transport passengers

  6. A rule-based automatic sleep staging method.

    Science.gov (United States)

    Liang, Sheng-Fu; Kuo, Chin-En; Hu, Yu-Han; Cheng, Yu-Shian

    2012-03-30

    In this paper, a rule-based automatic sleep staging method is proposed. Twelve features, including temporal and spectral analyses of the EEG, EOG, and EMG signals, were utilized. Normalization was applied to each feature to eliminate individual differences. A hierarchical decision tree with fourteen rules was constructed for sleep stage classification. Finally, a smoothing process considering temporal contextual information was applied to enforce continuity. Compared with manual scoring by R&K rules on all-night polysomnography (PSG) recordings of seventeen healthy subjects, the overall agreement and kappa coefficient of the proposed method reached 86.68% and 0.79, respectively. This method could be integrated with a portable PSG system for at-home sleep evaluation in the near future. Copyright © 2012 Elsevier B.V. All rights reserved.
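
    The normalization and smoothing steps lend themselves to a compact sketch. The snippet below is a simplification with assumed names and window length; the paper's fourteen staging rules are not reproduced. It z-normalizes each feature column per subject and smooths a stage sequence with a temporal majority vote.

```python
import numpy as np

def zscore_per_subject(features):
    """Per-subject feature normalization: zero mean, unit variance per
    column, to suppress individual differences between subjects."""
    f = np.asarray(features, dtype=float)
    return (f - f.mean(axis=0)) / (f.std(axis=0) + 1e-12)

def smooth_stages(stages, win=5):
    """Contextual smoothing: each epoch's stage is replaced by the
    majority stage within a small temporal window."""
    s = np.asarray(stages)
    out = s.copy()
    half = win // 2
    for i in range(len(s)):
        window = s[max(0, i - half):i + half + 1]
        values, counts = np.unique(window, return_counts=True)
        out[i] = values[np.argmax(counts)]
    return out
```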

  7. 10 CFR Appendix J1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes...

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Uniform Test Method for Measuring the Energy Consumption... Energy Consumption of Automatic and Semi-Automatic Clothes Washers The provisions of this appendix J1... means for determining the energy consumption of a clothes washer with an adaptive control system...

  8. Automatic segmentation of liver structure in CT images

    International Nuclear Information System (INIS)

    Bae, K.T.; Giger, M.L.; Chen, C.; Kahn, C.E. Jr.

    1993-01-01

    The segmentation and three-dimensional representation of the liver from a computed tomography (CT) scan is an important step in many medical applications, such as surgical planning for a living-donor liver transplant and the automatic detection and documentation of pathological states. A method is being developed to automatically extract liver structure from abdominal CT scans using a priori information about liver morphology and digital image-processing techniques. Segmentation is performed sequentially image-by-image (slice-by-slice), starting with a reference image in which the liver occupies almost the entire right half of the abdominal cross section. Image processing techniques include gray-level thresholding, Gaussian smoothing, and eight-point connectivity tracking. For each case, the shape, size, and pixel density distribution of the liver are recorded for each CT image and used in the processing of the other CT images. Extracted boundaries of the liver are smoothed using mathematical morphology techniques and B-splines. Computer-determined boundaries were compared with those drawn by a radiologist; the boundary descriptions from the two methods were in agreement, and the calculated areas agreed to within 10%.
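
    A single slice of such a pipeline can be sketched as follows (threshold values and the slice-to-slice propagation of shape priors are omitted; names are illustrative): gray-level thresholding after Gaussian smoothing, followed by retention of the largest 8-connected region.

```python
import numpy as np
from scipy import ndimage

def liver_candidate_mask(ct_slice, lo, hi, sigma=1.0):
    """One-slice sketch: Gaussian smoothing, gray-level thresholding,
    then keep the largest 8-connected region as the liver candidate."""
    smoothed = ndimage.gaussian_filter(ct_slice.astype(float), sigma)
    mask = (smoothed >= lo) & (smoothed <= hi)
    labels, n = ndimage.label(mask, structure=np.ones((3, 3)))  # 8-connectivity
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)
```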

  9. Automatic X-ray inspection for the HTR-PM spherical fuel elements

    Energy Technology Data Exchange (ETDEWEB)

    Yi, DU, E-mail: duyi11@mails.tsinghua.edu.cn [Institute of Nuclear and New Energy Technology (INET), Tsinghua University, Energy Science Building A309, Haidian District, Beijing 100084 (China); Xiangang, WANG, E-mail: wangxiangang@tsinghua.edu.cn [Institute of Nuclear and New Energy Technology (INET), Tsinghua University, Energy Science Building A309, Haidian District, Beijing 100084 (China); Xincheng, XIANG, E-mail: inetxxc@tsinghua.edu.cn [Institute of Nuclear and New Energy Technology (INET), Tsinghua University, Energy Science Building, Haidian District, Beijing 100084 (China); Bing, LIU, E-mail: bingliu@tsinghua.edu.cn [Institute of Nuclear and New Energy Technology (INET), Tsinghua University, Energy Science Building, Haidian District, Beijing 100084 (China)

    2014-12-15

    Highlights: • An automatic X-ray inspection method is established to characterize HTR pebbles. • The method provides physical characterization and the inner structure of pebbles. • The method can be conducted non-destructively, quickly and automatically. • Sample pebbles were measured with this AXI method for validation. • The method shows the potential to be applied in situ. - Abstract: Inefficient quality assessment and control (QA and C) of spherical fuel elements for high-temperature reactor pebble-bed modules (HTR-PM) has been a long-term problem, since conventional methods are labor intensive and cannot reveal inside information nondestructively. Herein, we propose a nondestructive, automated X-ray inspection (AXI) method to characterize spherical fuel elements, including their inner structures, based on X-ray digital radiography (DR). Briefly, DR images at different angles are first obtained, and key parameters, such as sphere diameter and the geometric and mass centers, are then automatically extracted and calculated via image processing techniques. By evaluating sample spherical fuel elements, we show that this AXI method can be conducted non-destructively, quickly and automatically. The method not only provides accurate physical characterization of spherical fuel elements but also reveals their inner structure with good resolution, showing great potential to facilitate fast QA and C in HTR-PM spherical fuel element development and production.

  10. Automatic X-ray inspection for the HTR-PM spherical fuel elements

    International Nuclear Information System (INIS)

    Yi, DU; Xiangang, WANG; Xincheng, XIANG; Bing, LIU

    2014-01-01

    Highlights: • An automatic X-ray inspection method is established to characterize HTR pebbles. • The method provides physical characterization and the inner structure of pebbles. • The method can be conducted non-destructively, quickly and automatically. • Sample pebbles were measured with this AXI method for validation. • The method shows the potential to be applied in situ. - Abstract: Inefficient quality assessment and control (QA and C) of spherical fuel elements for high-temperature reactor pebble-bed modules (HTR-PM) has been a long-term problem, since conventional methods are labor intensive and cannot reveal inside information nondestructively. Herein, we propose a nondestructive, automated X-ray inspection (AXI) method to characterize spherical fuel elements, including their inner structures, based on X-ray digital radiography (DR). Briefly, DR images at different angles are first obtained, and key parameters, such as sphere diameter and the geometric and mass centers, are then automatically extracted and calculated via image processing techniques. By evaluating sample spherical fuel elements, we show that this AXI method can be conducted non-destructively, quickly and automatically. The method not only provides accurate physical characterization of spherical fuel elements but also reveals their inner structure with good resolution, showing great potential to facilitate fast QA and C in HTR-PM spherical fuel element development and production.

  11. Automatic Hypocenter Determination Method in JMA Catalog and its Application

    Science.gov (United States)

    Tamaribuchi, K.

    2017-12-01

    The number of detectable earthquakes around Japan has increased with the development of the high-sensitivity seismic observation network. After the 2011 Tohoku-oki earthquake, the number of detectable earthquakes increased dramatically due to its aftershocks and induced earthquakes, making manual determination of all hypocenters impossible. The Japan Meteorological Agency (JMA), which produces the earthquake catalog of Japan, has developed a new automatic hypocenter determination method and started its operation on April 1, 2016. This method (named the PF method, for Phase combination Forward search) can determine the hypocenters of earthquakes that occur simultaneously by searching for the optimal combination of P- and S-wave arrival times and maximum amplitudes using a Bayesian estimation technique. In the 2016 Kumamoto earthquake sequence, about 70,000 aftershocks were successfully detected automatically during the period from April 14 to the end of May, and the method contributed to real-time monitoring of the seismic activity. Furthermore, the method can also be applied to Earthquake Early Warning (EEW): its application to EEW, called the IPF method, has been used as the hypocenter determination method of the EEW system at JMA since December 2016. Further development of this method can contribute not only to speeding up catalog production but also to improving the reliability of early warnings.

  12. Development of automatic extraction of the corpus callosum from magnetic resonance imaging of the head and examination of the early dementia objective diagnostic technique in feature analysis

    International Nuclear Information System (INIS)

    Kodama, Naoki; Kaneko, Tomoyuki

    2005-01-01

    We examined the objective diagnosis of dementia based on changes in the corpus callosum. We examined midsagittal head MR images of 17 early dementia patients (2 men and 15 women; mean age, 77.2±3.3 years) and 18 healthy elderly controls (2 men and 16 women; mean age, 73.8±6.5 years), 35 subjects altogether. First, the corpus callosum was automatically extracted from the MR images. Next, early dementia was compared with the healthy elderly individuals using 5 features of the straight-line methods, 5 features of the Run-Length Matrix, and 6 features of the Co-occurrence Matrix from the corpus callosum. Automatic extraction of the corpus callosum showed an accuracy rate of 84.1±3.7%. A statistically significant difference was found in 6 of the 16 features between early dementia patients and healthy elderly controls. Discriminant analysis using the 6 features demonstrated a sensitivity of 88.2% and specificity of 77.8%, with an overall accuracy of 82.9%. These results indicate that feature analysis based on changes in the corpus callosum can be used as an objective diagnostic technique for early dementia. (author)
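
    Co-occurrence-matrix texture features of the kind used here can be computed with scikit-image; the sketch below is illustrative only (the paper's exact 16-feature set, including the straight-line and Run-Length Matrix features, is not reproduced).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def cooccurrence_features(region, levels=32):
    """A few gray-level co-occurrence matrix (GLCM) features of a
    segmented region such as the extracted corpus callosum."""
    q = (region.astype(float) / (region.max() + 1e-12) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```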

  13. ANA, automatic natural learning of a semantic network

    International Nuclear Information System (INIS)

    Enguehard, Chantal

    1992-01-01

    The objective of this research thesis is the automatic extraction of terminology and the study of its automatic structuring to produce a semantic network. The operation is applied to text corpora representing knowledge on a specific field in order to select the technical vocabulary relevant to that field. To this end, the author developed a method and software for the automatic acquisition of terminology items. The author first gives an overview of systems and methods for document indexing and thesaurus elaboration, and a brief presentation of the state of the art in machine learning. He then discusses some drawbacks of natural language processing systems that rely on large knowledge sources such as grammars and dictionaries. After a presentation of the adopted approach and of some hypotheses, the author defines the objects and operators necessary for easier data handling, presents the knowledge acquisition process, and describes the system's implementation in detail. Results are assessed and discussed, and limitations and perspectives are commented on. [fr]

  14. Automatic mining of symptom severity from psychiatric evaluation notes.

    Science.gov (United States)

    Karystianis, George; Nevado, Alejo J; Kim, Chi-Hun; Dehghan, Azad; Keane, John A; Nenadic, Goran

    2018-03-01

    As electronic mental health records become more widely available, several approaches have been suggested to automatically extract information from free-text narrative aiming to support epidemiological research and clinical decision-making. In this paper, we explore extraction of explicit mentions of symptom severity from initial psychiatric evaluation records. We use the data provided by the 2016 CEGS N-GRID NLP shared task Track 2, which contains 541 records manually annotated for symptom severity according to the Research Domain Criteria. We designed and implemented 3 automatic methods: a knowledge-driven approach relying on local lexicalized rules based on common syntactic patterns in text suggesting positive valence symptoms; a machine learning method using a neural network; and a hybrid approach combining the first 2 methods with a neural network. The results on an unseen evaluation set of 216 psychiatric evaluation records showed a performance of 80.1% for the rule-based method, 73.3% for the machine-learning approach, and 72.0% for the hybrid one. Although more work is needed to improve the accuracy, the results are encouraging and indicate that automated text mining methods can be used to classify mental health symptom severity from free text psychiatric notes to support epidemiological and clinical research. © 2017 The Authors International Journal of Methods in Psychiatric Research Published by John Wiley & Sons Ltd.

  15. Automatic segmentation of mandible in panoramic x-ray.

    Science.gov (United States)

    Abdi, Amir Hossein; Kasaei, Shohreh; Mehdizadeh, Mojdeh

    2015-10-01

    As the panoramic x-ray is the most common extraoral radiography in dentistry, segmentation of its anatomical structures facilitates diagnosis and registration of dental records. This study presents a fast and accurate method for automatic segmentation of mandible in panoramic x-rays. In the proposed four-step algorithm, a superior border is extracted through horizontal integral projections. A modified Canny edge detector accompanied by morphological operators extracts the inferior border of the mandible body. The exterior borders of ramuses are extracted through a contour tracing method based on the average model of mandible. The best-matched template is fetched from the atlas of mandibles to complete the contour of left and right processes. The algorithm was tested on a set of 95 panoramic x-rays. Evaluating the results against manual segmentations of three expert dentists showed that the method is robust. It achieved an average performance of [Formula: see text] in Dice similarity, specificity, and sensitivity.
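
    The first step, locating the superior border through horizontal integral projections, reduces to summing pixel intensities along rows and finding the strongest transition. A minimal sketch follows; the search band and the gradient criterion are assumptions, not the published algorithm.

```python
import numpy as np

def superior_border_row(xray, band=0.5):
    """Row-wise integral projection of a panoramic x-ray; the row with
    the strongest projection gradient in the upper band is returned as
    a superior-border estimate."""
    rows = np.asarray(xray, dtype=float).sum(axis=1)   # integral projection
    top = rows[: int(len(rows) * band)]                # restrict the search band
    return int(np.argmax(np.abs(np.gradient(top))))    # strongest transition
```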

  16. AUTOMATIC SHAPE-BASED TARGET EXTRACTION FOR CLOSE-RANGE PHOTOGRAMMETRY

    Directory of Open Access Journals (Sweden)

    X. Guo

    2016-06-01

    Full Text Available In order to perform precise identification and location of artificial coded targets in natural scenes, a novel design of circle-based coded target and a corresponding coarse-to-fine extraction algorithm are presented. The designed target separates the target box and coding box completely and offers rotation invariance. Based on the original target, templates are prepared by three geometric transformations and are used as the input of shape-based template matching. Finally, region growing and parity check methods are used to extract the coded targets as final results. No human involvement is required except for the preparation of templates and the adjustment of thresholds at the beginning, which is conducive to the automation of close-range photogrammetry. The experimental results show that the proposed recognition method for the designed coded target is robust and accurate.
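
    As a rough illustration of the matching stage, the snippet below scores pre-transformed templates against the scene with normalized cross-correlation (a correlation-based stand-in for the shape-based matcher; the threshold and names are assumptions) and returns candidate target locations.

```python
import cv2
import numpy as np

def match_coded_targets(image, templates, threshold=0.8):
    """Score each prepared template against the scene and collect
    candidate coded-target locations above a correlation threshold;
    region growing and parity checks would follow."""
    hits = []
    for tpl in templates:
        res = cv2.matchTemplate(image, tpl, cv2.TM_CCOEFF_NORMED)
        ys, xs = np.where(res >= threshold)
        hits.extend(zip(xs.tolist(), ys.tolist()))
    return hits
```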

  17. Automatic flow-through dynamic extraction: A fast tool to evaluate char-based remediation of multi-element contaminated mine soils.

    Science.gov (United States)

    Rosende, María; Beesley, Luke; Moreno-Jimenez, Eduardo; Miró, Manuel

    2016-02-01

    An automatic in-vitro bioaccessibility test based upon dynamic microcolumn extraction in a programmable flow setup is herein proposed as a screening tool to evaluate biochar-based remediation of mine soils contaminated with trace elements, as a compelling alternative to conventional phyto-availability tests. The feasibility of the proposed system was evaluated by extracting the readily bioaccessible pools of As, Pb and Zn in two contaminated mine soils before and after the addition of two biochars (9% (w:w)) of diverse source origin (pine and olive). Bioaccessible fractions under worst-case scenarios were measured using 0.001 mol L(-1) CaCl2 as extractant to mimic plant uptake, with analysis of the extracts by inductively coupled plasma optical emission spectrometry. A t-test comparing means revealed efficient metal (mostly Pb and Zn) immobilization by the olive pruning-based biochar relative to the bare (control) soil at the 0.05 significance level. In-vitro flow-through bioaccessibility tests are compared for the first time with in-vivo phyto-toxicity assays in a microcosm soil study. By assessing seed germination and shoot elongation of Lolium perenne in contaminated soils with and without biochar amendments, the dynamic flow-based bioaccessibility data proved to be in good agreement with the phyto-availability tests. Experimental results indicate that the dynamic extraction method is a viable and economical in-vitro tool for risk assessment explorations to evaluate the feasibility of a given biochar amendment for revegetation and remediation of metal-contaminated soils in a mere 10 min, against 4 days for phyto-toxicity assays. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    Science.gov (United States)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals has served for centuries as an important primary approach to diagnosing cardiovascular diseases (CVDs). To overcome the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction has developed rapidly. Yet most existing HS feature extraction methods adopt acoustic or time-frequency features that exhibit a poor relationship with diagnostic information, restricting the performance of further interpretation and analysis. To tackle this bottleneck, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain substantial pathological information and are regarded as the first indications of pathological changes of the heart valves. Using the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS from 5 types of abnormal HS signals with the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
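
    The Shannon envelope at the heart of the method can be sketched in a few lines; the DWT sub-band selection that precedes it is omitted, and the frame length is an assumption.

```python
import numpy as np

def shannon_envelope(hs, frame=20):
    """Shannon energy envelope of a heart-sound segment: emphasizes
    medium-intensity murmur components over low-level noise and
    high-amplitude spikes."""
    x = np.asarray(hs, dtype=float)
    x = x / (np.max(np.abs(x)) + 1e-12)          # amplitude normalization
    e = -x**2 * np.log(x**2 + 1e-12)             # Shannon energy
    return np.convolve(e, np.ones(frame) / frame, mode="same")  # smoothing
```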

  19. Advancing Affect Modeling via Preference Learning and Unsupervised Feature Extraction

    DEFF Research Database (Denmark)

    Martínez, Héctor Pérez

    …difficulties, ordinal reports such as rankings and ratings can yield more reliable affect annotations than alternative tools. This thesis explores preference learning methods to automatically learn computational models from ordinal annotations of affect. In particular, an extensive collection of training strategies (error functions and training algorithms) for artificial neural networks is examined across synthetic and psycho-physiological datasets, and compared against support vector machines and Cohen's method. Results reveal the best training strategies for neural networks and suggest their superiority over the other examined methods. The second challenge addressed in this thesis refers to the extraction of relevant information from physiological modalities. Deep learning is proposed as an automatic approach to extract input features for models of affect from physiological signals. Experiments…

  20. A Novel Characteristic Frequency Bands Extraction Method for Automatic Bearing Fault Diagnosis Based on Hilbert Huang Transform

    Directory of Open Access Journals (Sweden)

    Xiao Yu

    2015-11-01

    Full Text Available Because roller element bearing (REB) failures cause unexpected machinery breakdowns, their fault diagnosis has attracted considerable research attention. Established fault feature extraction methods focus on statistical characteristics of the vibration signal, an approach that loses sight of continuous waveform features. Considering this weakness, this article proposes a novel feature extraction method for frequency bands, named Window Marginal Spectrum Clustering (WMSC), which selects salient features from the marginal spectrum of vibration signals obtained by the Hilbert-Huang Transform (HHT). In WMSC, a sliding window divides the entire HHT marginal spectrum (HMS) into window spectrums, after which the Rand Index (RI) clustering criterion is used to evaluate each window. Windows returning higher RI values are selected to construct characteristic frequency bands (CFBs). Next, a hybrid REB fault diagnosis model is constructed, termed by its elements HHT-WMSC-SVM (support vector machine). The effectiveness of HHT-WMSC-SVM is validated by running a series of experiments on REB defect datasets from the Bearing Data Center of Case Western Reserve University (CWRU). The test results evidence three major advantages of the novel method. First, the fault classification accuracy of the HHT-WMSC-SVM model is higher than that of HHT-SVM and ST-SVM, a method that combines statistical characteristics with SVM. Second, with Gaussian white noise added to the original REB defect dataset, the HHT-WMSC-SVM model maintains high classification accuracy, while the classification accuracy of the ST-SVM and HHT-SVM models is significantly reduced. Third, the fault classification accuracy of HHT-WMSC-SVM exceeds 95% for a Pmin range of 500-800 and an m range of 50-300 on the REB defect dataset with Gaussian white noise added at a signal-to-noise ratio (SNR) of 5. Experimental results indicate that the proposed WMSC method yields high REB fault classification accuracy.

  1. Automatic extraction of corpus callosum from midsagittal head MR image and examination of Alzheimer-type dementia objective diagnostic system in feature analysis

    International Nuclear Information System (INIS)

    Kaneko, Tomoyuki; Kodama, Naoki; Kaeriyama, Tomoharu; Fukumoto, Ichiro

    2004-01-01

    We studied the objective diagnosis of Alzheimer-type dementia based on changes in the corpus callosum. We examined midsagittal head MR images of 40 Alzheimer-type dementia patients (15 men and 25 women; mean age, 75.4±5.5 years) and 31 healthy elderly persons (10 men and 21 women; mean age, 73.4±7.5 years), 71 subjects altogether. First, the corpus callosum was automatically extracted from midsagittal head MR images. Next, Alzheimer-type dementia was compared with the healthy elderly individuals using the features of shape factor and six features of Co-occurrence Matrix from the corpus callosum. Automatic extraction of the corpus callosum succeeded in 64 of 71 individuals, for an extraction rate of 90.1%. A statistically significant difference was found in 7 of the 9 features between Alzheimer-type dementia patients and the healthy elderly adults. Discriminant analysis using the 7 features demonstrated a sensitivity rate of 82.4%, specificity of 89.3%, and overall accuracy of 85.5%. These results indicated the possibility of an objective diagnostic system for Alzheimer-type dementia using feature analysis based on change in the corpus callosum. (author)

  2. A semi-automatic method for peak and valley detection in free-breathing respiratory waveforms

    International Nuclear Information System (INIS)

    Lu Wei; Nystrom, Michelle M.; Parikh, Parag J.; Fooshee, David R.; Hubenschmidt, James P.; Bradley, Jeffrey D.; Low, Daniel A.

    2006-01-01

    Existing commercial software often inadequately determines respiratory peaks for patients in respiration-correlated computed tomography. A semi-automatic method was therefore developed for peak and valley detection in free-breathing respiratory waveforms. First, the waveform is separated into breath cycles by identifying intercepts of a moving average curve with the inspiration and expiration branches of the waveform. Peaks and valleys are then defined, respectively, as the maximum and minimum between pairs of alternating inspiration and expiration intercepts. Finally, automatic corrections and manual user interventions are employed. On average, for each of the 20 patients, 99% of 307 peaks and valleys were automatically detected in 2.8 s. The method was robust for bellows waveforms with large variations.
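
    The first two steps translate almost directly into code. The sketch below (the window length is an assumption) finds crossings of the waveform with its moving average and takes the extremum between alternating crossings.

```python
import numpy as np

def peaks_and_valleys(resp, win=50):
    """Detect peaks/valleys as the max/min between alternating
    intercepts of the waveform with its moving average."""
    resp = np.asarray(resp, dtype=float)
    ma = np.convolve(resp, np.ones(win) / win, mode="same")
    above = resp > ma
    crossings = np.flatnonzero(np.diff(above.astype(int))) + 1
    peaks, valleys = [], []
    for a, b in zip(crossings[:-1], crossings[1:]):
        if above[a]:                                    # inspiration branch
            peaks.append(a + int(np.argmax(resp[a:b])))
        else:                                           # expiration branch
            valleys.append(a + int(np.argmin(resp[a:b])))
    return peaks, valleys
```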

  3. Inter Genre Similarity Modelling For Automatic Music Genre Classification

    OpenAIRE

    Bagci, Ulas; Erzin, Engin

    2009-01-01

    Music genre classification is an essential tool for music information retrieval systems, and it is finding critical applications in various media platforms. Two important problems in automatic music genre classification are feature extraction and classifier design. This paper investigates inter-genre similarity modelling (IGS) to improve the performance of automatic music genre classification. Inter-genre similarity information is extracted over the mis-classified feature population.

  4. An Automatic High Efficient Method for Dish Concentrator Alignment

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2014-01-01

    …for the alignment of a faceted solar dish concentrator. The isosceles-triangle configuration of each facet's footholds determines a fixed relation between light-spot displacements and foothold movements, which allows automatic determination of the required adjustments. Tests on a 25 kW Stirling Energy System dish concentrator verify the feasibility, accuracy, and efficiency of our method.

  5. Automatic segmentation of equine larynx for diagnosis of laryngeal hemiplegia

    Science.gov (United States)

    Salehin, Md. Musfequs; Zheng, Lihong; Gao, Junbin

    2013-10-01

    This paper presents an automatic segmentation method for delineating the clinically significant contours of the equine larynx from an endoscopic image. These contours are used to diagnose the most common disease of the equine larynx, laryngeal hemiplegia. In this study, a hierarchical structured contour map is obtained by the state-of-the-art segmentation algorithm gPb-OWT-UCM. The conic-shaped outer boundary of the equine larynx is extracted based on Pascal's theorem. Lastly, the Hough transform is applied to detect lines related to the edges of the vocal folds. The experimental results show that the proposed approach extracts the targeted contours of the equine larynx better than the gPb-OWT-UCM method alone.

  6. Impact of different extraction methods on the quality of Dipteryx alata extracts

    Directory of Open Access Journals (Sweden)

    Frederico S. Martins

    2013-05-01

    Full Text Available This study evaluated the impact of different extraction methods on the quality of extracts from fruits of Dipteryx alata Vogel, Fabaceae. The major compounds found were lipids, 38.9% (w/w), and proteins, 26.20% (w/w). The residual moisture was 7.20% (w/w), total fiber 14.50% (w/w), minerals 4.10% (w/w) and carbohydrates 9.10% (w/w). The species studied has great potential for oil production, but the content and type of fatty acids obtained depend on the extraction method. The Bligh & Dyer method was more selective for unsaturated fatty acids, and the Soxhlet method was more selective for saturated fatty acids. Tannin extraction by ultrasound (33.70% w/w) was 13.90% more efficient than extraction by decoction (29% w/w).

  7. Development of Filtered Bispectrum for EEG Signal Feature Extraction in Automatic Emotion Recognition Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Prima Dewi Purnamasari

    2017-05-01

    Full Text Available The development of automatic emotion detection systems has recently gained significant attention due to the growing possibility of their implementation in several applications, including affective computing and various fields within biomedical engineering. Use of the electroencephalograph (EEG) signal is preferred over facial expression, as people cannot control the EEG signal generated by their brain; the EEG thus ensures stronger reliability in the psychological signal. However, because of its uniqueness between individuals and its vulnerability to noise, use of EEG signals can be rather complicated. In this paper, we propose a methodology for EEG-based emotion recognition using a filtered bispectrum as the feature extraction subsystem and an artificial neural network (ANN) as the classifier. The bispectrum is theoretically superior to the power spectrum because it can identify phase coupling between the nonlinear process components of the EEG signal. In the feature extraction process, to extract the information contained in the bispectrum matrices, a 3D pyramid filter is used for sampling and quantifying the bispectrum value. Experiment results show that the mean percentage of the bispectrum value from 5 × 5 non-overlapped 3D pyramid filters produces the highest recognition rate. We found that reducing the number of EEG channels down to only eight in the frontal area of the brain does not significantly affect the recognition rate, and the number of data samples used in the training process was then increased to improve the recognition rate of the system. We also utilized a probabilistic neural network (PNN) as another classifier and compared its recognition rate with that of the back-propagation neural network (BPNN); the results show that the PNN produces a comparable recognition rate at lower computational cost. Our research shows that the extracted bispectrum values of an EEG signal, using 3D filtering as a feature extraction method, are effective for automatic emotion recognition.

  8. A Supporting Platform for Semi-Automatic Hyoid Bone Tracking and Parameter Extraction from Videofluoroscopic Images for the Diagnosis of Dysphagia Patients.

    Science.gov (United States)

    Lee, Jun Chang; Nam, Kyoung Won; Jang, Dong Pyo; Paik, Nam Jong; Ryu, Ju Seok; Kim, In Young

    2017-04-01

    Conventional kinematic analysis of videofluoroscopic (VF) swallowing image, most popular for dysphagia diagnosis, requires time-consuming and repetitive manual extraction of diagnostic information from multiple images representing one swallowing period, which results in a heavy work load for clinicians and excessive hospital visits for patients to receive counseling and prescriptions. In this study, a software platform was developed that can assist in the VF diagnosis of dysphagia by automatically extracting a two-dimensional moving trajectory of the hyoid bone as well as 11 temporal and kinematic parameters. Fifty VF swallowing videos containing both non-mandible-overlapped and mandible-overlapped cases from eight patients with dysphagia of various etiologies and 19 videos from ten healthy controls were utilized for performance verification. Percent errors of hyoid bone tracking were 1.7 ± 2.1% for non-overlapped images and 4.2 ± 4.8% for overlapped images. Correlation coefficients between manually extracted and automatically extracted moving trajectories of the hyoid bone were 0.986 ± 0.017 (X-axis) and 0.992 ± 0.006 (Y-axis) for non-overlapped images, and 0.988 ± 0.009 (X-axis) and 0.991 ± 0.006 (Y-axis) for overlapped images. Based on the experimental results, we believe that the proposed platform has the potential to improve the satisfaction of both clinicians and patients with dysphagia.

  9. Automatic computer aided analysis algorithms and system for adrenal tumors on CT images.

    Science.gov (United States)

    Chai, Hanchao; Guo, Yi; Wang, Yuanyuan; Zhou, Guohui

    2017-12-04

    Adrenal tumors disturb the secretory function of adrenocortical cells, leading to many diseases, and different kinds of adrenal tumors require different therapeutic schedules. In practical diagnosis, judging the tumor type by reading hundreds of CT images relies heavily on the doctor's experience. This paper proposes an automatic computer-aided analysis method for adrenal tumor detection and classification, consisting of automatic segmentation algorithms, feature extraction, and classification algorithms. These algorithms were integrated into a system with a graphical front end built using the MATLAB graphical user interface (GUI). The accuracy of the automatic computer-aided segmentation and classification reached 90% on 436 CT images. The experiments proved the stability and reliability of this automatic computer-aided analytic system.

  10. Use of stochastic methods for robust parameter extraction from impedance spectra

    International Nuclear Information System (INIS)

    Bueschel, Paul; Troeltzsch, Uwe; Kanoun, Olfa

    2011-01-01

    The fitting of impedance models to measured data is an essential step in impedance spectroscopy (IS). Due to often complicated nonlinear models, a large number of parameters, large search spaces and the presence of noise, automated determination of the unknown parameters is a challenging task. The stronger a model's nonlinear behavior, the weaker the convergence of the corresponding regression, and the higher the probability of becoming trapped in local minima during parameter extraction. For fast measurements or automatic measurement systems, these problems become the limiting factors of use. We compared the usability of stochastic algorithms (evolutionary optimization, simulated annealing and particle filtering) with the widely used tool LEVM for parameter extraction in IS. The comparison is based on a reference model by J.R. Macdonald and a battery model used with noisy measurement data. The results show different performances of the algorithms for these two problems, depending on the search space and the model used for optimization. The particle filter delivered the best results for both models, providing the most reliable result in both cases, even for the ill-posed battery model.
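
    To make the task concrete, the sketch below fits a deliberately simple Randles-type circuit (not the battery model of the paper) with SciPy's simulated-annealing-style global optimizer; the model, bounds and names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import dual_annealing

def randles_z(f, rs, rct, cdl):
    """Illustrative impedance model: series resistance Rs plus a
    parallel Rct-Cdl charge-transfer branch."""
    w = 2 * np.pi * np.asarray(f, dtype=float)
    return rs + rct / (1 + 1j * w * rct * cdl)

def fit_spectrum(f, z_meas, bounds):
    """Stochastic parameter extraction: globally minimize the complex
    residual, reducing the risk of trapping in local minima."""
    def cost(p):
        return float(np.sum(np.abs(randles_z(f, *p) - z_meas) ** 2))
    return dual_annealing(cost, bounds=bounds, seed=0)

# Example bounds for (Rs, Rct, Cdl):
# result = fit_spectrum(f, z_meas, [(1e-3, 10.0), (1e-3, 100.0), (1e-9, 1e-3)])
```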

  11. An automatic scaling method for obtaining the trace and parameters from oblique ionogram based on hybrid genetic algorithm

    Science.gov (United States)

    Song, Huan; Hu, Yaogai; Jiang, Chunhua; Zhou, Chen; Zhao, Zhengyu; Zou, Xianjian

    2016-12-01

    Scaling oblique ionograms plays an important role in obtaining the ionospheric structure at the midpoint of an oblique sounding path. This paper proposes an automatic scaling method to extract the trace and parameters of an oblique ionogram based on a hybrid genetic algorithm (HGA). The 10 extracted parameters come from the F2 and Es layers, and include the maximum observed frequency, critical frequency, and virtual height. The method adopts a quasi-parabolic (QP) model to describe the F2 layer's electron density profile, which is used to synthesize the trace. It utilizes the secant theorem, Martyn's equivalent path theorem, image processing techniques, and echo characteristics to determine best-fit values for seven parameters and initial values for the remaining three QP-model parameters, whose search spaces form the input of the HGA. The HGA then searches for the three parameters' best-fit values within these search spaces based on the fitness between the synthesized trace and the real trace. To verify the performance of the method, 240 oblique ionograms were scaled and the results compared with manual scaling results and with the inversion results of the corresponding vertical ionograms. The comparison shows that the scaling results are accurate, or at least adequate, 60-90% of the time.
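
    The QP layer itself is compact enough to sketch directly. The snippet below follows the usual quasi-parabolic parameterization (not necessarily the paper's notation): nm is the peak electron density, rm the radius of the peak, ym the layer semi-thickness, and rb = rm - ym the layer base.

```python
import numpy as np

def qp_electron_density(r, nm, rm, ym):
    """Quasi-parabolic layer Ne(r) = nm * (1 - ((r - rm) * rb / (ym * r))^2),
    clipped to zero outside the layer."""
    rb = rm - ym
    ne = nm * (1.0 - ((r - rm) * rb / (ym * r)) ** 2)
    return np.maximum(ne, 0.0)
```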

  12. Mitosis Counting in Breast Cancer: Object-Level Interobserver Agreement and Comparison to an Automatic Method.

    Science.gov (United States)

    Veta, Mitko; van Diest, Paul J; Jiwa, Mehdi; Al-Janabi, Shaimaa; Pluim, Josien P W

    2016-01-01

    Tumor proliferation speed, most commonly assessed by counting mitotic figures in histological slide preparations, is an important biomarker for breast cancer. Although mitosis counting is routinely performed by pathologists, it is a tedious and subjective task with poor reproducibility, particularly among non-experts. Inter- and intraobserver reproducibility of mitosis counting can be improved when a strict protocol is defined and followed. Previous studies have examined only the agreement in terms of the mitotic count or the mitotic activity score; studies of observer agreement at the level of individual objects, which can provide more insight into the procedure, have not been performed thus far. The development of automatic mitosis detection methods has received considerable interest in recent years, and automatic image analysis is viewed as a solution to the subjectivity of mitosis counting by pathologists. In this paper we describe the results of an interobserver agreement study between three human observers and an automatic method, and make two unique contributions. For the first time, we present an analysis of object-level interobserver agreement on mitosis counting. Furthermore, we train an automatic mitosis detection method that is robust with respect to staining appearance variability and compare it with the performance of expert observers on an "external" dataset, i.e. histopathology images that originate from pathology labs other than the lab that provided the training data for the automatic method. The object-level interobserver study revealed that pathologists often do not agree on individual objects, even if this is not reflected in the mitotic count. The disagreement is larger for smaller objects, which suggests that adding a size constraint to the mitosis counting protocol could improve reproducibility. The automatic mitosis detection method can perform mitosis counting in an unbiased way, with substantial …

  13. Feature ranking and rank aggregation for automatic sleep stage classification: a comparative study.

    Science.gov (United States)

    Najdi, Shirin; Gharbali, Ali Abdollahi; Fonseca, José Manuel

    2017-08-18

    Nowadays, sleep quality is one of the most important measures of a healthy life, especially considering the large number of sleep-related disorders. Identifying sleep stages using polysomnographic (PSG) signals is the traditional way of assessing sleep quality, but the manual process of sleep stage classification is time-consuming, subjective and costly. Therefore, to improve the accuracy and efficiency of sleep stage classification, researchers have been trying to develop automatic classification algorithms. Automatic sleep stage classification mainly consists of three steps: pre-processing, feature extraction and classification. Since classification accuracy is deeply affected by the extracted features, a poor feature vector will adversely affect the classifier and eventually lead to low classification accuracy; special attention should therefore be given to the feature extraction and selection process. In this paper, the performance of seven feature selection methods, as well as two feature rank aggregation methods, was compared. Pz-Oz EEG, horizontal EOG and submental chin EMG recordings of 22 healthy males and females were used, and a comprehensive set of 49 features was extracted from these recordings. The extracted features are among the most common and effective features used in sleep stage classification, drawn from temporal, spectral, entropy-based and nonlinear categories. The feature selection methods were evaluated and compared using three criteria: classification accuracy, stability, and similarity. Simulation results show that MRMR-MID achieves the highest classification performance, while the Fisher method provides the most stable ranking. In our simulations, the performance of the aggregation methods was average: although they are known to generate more stable results and better accuracy, the Borda and RRA rank aggregation methods could not significantly outperform the conventional feature ranking methods. Among …
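
    Of the two aggregation schemes compared, Borda is the simpler; a minimal sketch follows (index-based tie breaking is an arbitrary choice, not the paper's).

```python
import numpy as np

def borda_aggregate(rankings):
    """Borda-count rank aggregation: each input ranking (feature indices
    ordered best to worst) awards n-1, n-2, ..., 0 points; features are
    re-ranked by total points."""
    n = len(rankings[0])
    scores = np.zeros(n)
    for ranking in rankings:
        for position, feature in enumerate(ranking):
            scores[feature] += n - 1 - position
    return list(np.argsort(-scores))   # aggregated best-to-worst order

# borda_aggregate([[0, 2, 1], [2, 0, 1]]) -> [0, 2, 1]
```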

  14. A Framework for the Development of Automatic DFA Method to Minimize the Number of Components and Assembly Reorientations

    Science.gov (United States)

    Alfadhlani; Samadhi, T. M. A. Ari; Ma’ruf, Anas; Setiasyah Toha, Isa

    2018-03-01

    Assembly is a part of the manufacturing process that must be considered at the product design stage. Design for Assembly (DFA) is a method to evaluate a product design in order to make it simpler, easier and quicker to assemble, so that assembly cost is reduced. This article discusses a framework for developing a computer-based DFA method, expected to aid the product designer in extracting data, evaluating the assembly process, and providing recommendations for design improvement. Ideally, these three tasks are performed without an interactive process or user intervention, so that product design evaluation can be done automatically. The input to the proposed framework is a 3D solid engineering drawing. Product design evaluation is performed by: minimizing the number of components; generating assembly sequence alternatives; selecting the best assembly sequence based on the minimum number of assembly reorientations; and providing suggestions for design improvement.

  15. Steroid hormones in environmental matrices: extraction method comparison.

    Science.gov (United States)

    Andaluri, Gangadhar; Suri, Rominder P S; Graham, Kendon

    2017-11-09

    The U.S. Environmental Protection Agency (EPA) has developed methods for the analysis of steroid hormones in water, soil, sediment, and municipal biosolids by HRGC/HRMS (EPA Method 1698). Following the guidelines provided in US-EPA Method 1698, the extraction methods were validated with reagent water and applied to municipal wastewater, surface water, and municipal biosolids, using GC/MS/MS for the analysis of the nine most commonly detected steroid hormones. This is the first reported comparison of the separatory funnel extraction (SFE), continuous liquid-liquid extraction (CLLE), and Soxhlet extraction methods developed by the U.S. EPA. Furthermore, a solid phase extraction (SPE) method was developed in-house for the extraction of steroid hormones from aquatic environmental samples. This study provides valuable information regarding the robustness of the different extraction methods. Statistical analysis of the data showed that SPE-based methods provided better recovery efficiencies and lower variability for the steroid hormones, followed by SFE. The analytical methods developed in-house for the extraction of biosolids showed a wide recovery range but low variability (≤ 7% RSD). Soxhlet extraction and CLLE are lengthy procedures and have been shown to provide highly variable recovery efficiencies. The results of this study offer guidance toward better sample preparation strategies in analytical methods for steroid hormone analysis, and SPE broadens the choice of methods for environmental sample analysis.

  16. Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.

    Science.gov (United States)

    Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L

    2010-07-01

    …to predict the TAG level in the liver. Receiver-operating-characteristic (ROC) analysis was applied to assess the performance and area under the curve (AUC) of predicting TAG, and to compare the sensitivity and specificity of the methods. The best speckle-size estimates and overall performance (R2 = 0.71, AUC = 0.94) were achieved using an SNR-based adaptive automatic-segmentation method (TAG threshold used: 50 mg/g liver wet weight). Automatic segmentation is thus feasible and profitable.

  17. Research on automatic inspection technique of real-time radiography for turbine-blade

    International Nuclear Information System (INIS)

    Zhou, Z.G.; Zhao, S.; An, Z.G.

    2004-01-01

    To inspect turbine blades automatically with a real-time radiographic system based on an X-ray flat-panel detector, a computerized defect extraction technique is studied on the basis of the characteristics of turbine-blade digital radiographic images. First, in light of the variety of gray levels in a turbine-blade radiograph, the image is divided into six subareas. An adaptive median filter is used to smooth out defects in each subarea. The filtered image is then subtracted from the raw image, yielding a difference image with a flat background and prominent defects. Thresholding is applied to the difference image, making the defects in the turbine blade obvious, and a morphological opening is then used for noise reduction. To ensure the accuracy of the defects, a region growing method is adopted to reconstruct them. Finally, the feature data of the defects are extracted. The comparison between computerized feature extraction results and human interpretation results indicates that the method is effective and efficient, laying a good foundation for the automatic X-ray inspection of turbine blades. (author)
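
    The background-subtraction core of this scheme can be sketched as follows (the filter size and threshold rule are assumptions; the six-subarea split and the region-growing reconstruction are omitted).

```python
import numpy as np
from scipy import ndimage

def defect_map(subarea, filter_size=15, k=3.0):
    """Median filtering estimates a defect-free background; the
    difference image is thresholded and a morphological opening
    suppresses isolated noise points."""
    img = np.asarray(subarea, dtype=float)
    background = ndimage.median_filter(img, size=filter_size)
    diff = img - background                  # flat background, bright defects
    mask = diff > diff.mean() + k * diff.std()
    return ndimage.binary_opening(mask, structure=np.ones((3, 3)))
```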

  18. eCTG: an automatic procedure to extract digital cardiotocographic signals from digital images.

    Science.gov (United States)

    Sbrollini, Agnese; Agostinelli, Angela; Marcantoni, Ilaria; Morettini, Micaela; Burattini, Luca; Di Nardo, Francesco; Fioretti, Sandro; Burattini, Laura

    2018-03-01

    Cardiotocography (CTG), consisting in the simultaneous recording of fetal heart rate (FHR) and maternal uterine contractions (UC), is a popular clinical test for assessing fetal health status. Typically, CTG machines provide paper reports that are visually interpreted by clinicians; consequently, visual CTG interpretation depends on the clinician's experience and has poor reproducibility. The lack of databases containing digital CTG signals has limited the number and importance of retrospective studies aimed at setting up procedures for automatic CTG analysis that could counter the subjectivity of visual CTG interpretation. To help overcome this problem, this study proposes an electronic procedure, termed eCTG, to extract digital CTG signals from digital CTG images, such as those obtainable by scanning paper CTG reports. eCTG includes four main steps: pre-processing, Otsu's global thresholding, signal extraction and signal calibration. It was validated by means of the "CTU-UHB Intrapartum Cardiotocography Database" by Physionet, which contains the digital signals of 552 CTG recordings. Using MATLAB, each signal was plotted and saved as a digital image that was then submitted to eCTG. The digital CTG signals extracted by eCTG were eventually compared to the corresponding signals directly available in the database. Comparison occurred in terms of signal similarity (evaluated by the correlation coefficient ρ and the mean signal error MSE) and clinical features (including FHR baseline and variability; number, amplitude and duration of tachycardia, bradycardia, acceleration and deceleration episodes; number of early, variable, late and prolonged decelerations; and UC number, amplitude, duration and period). The value of ρ between eCTG and reference signals was 0.85, indicating that eCTG can reliably extract digital FHR and UC signals from digital CTG images. Copyright © 2018 Elsevier B.V. All rights reserved.
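
    The thresholding and extraction steps can be approximated in a few lines; the sketch below (dark-trace assumption, column-wise averaging, no calibration) is a simplification, not the published eCTG implementation.

```python
import numpy as np
from skimage.filters import threshold_otsu

def trace_from_image(ctg_image):
    """Otsu's global threshold separates the plotted trace from the
    background; each column is reduced to the mean row index of its
    foreground pixels (calibration to physical units comes later)."""
    img = np.asarray(ctg_image, dtype=float)
    mask = img < threshold_otsu(img)         # dark trace on light paper
    signal = np.full(img.shape[1], np.nan)
    for col in range(img.shape[1]):
        rows = np.flatnonzero(mask[:, col])
        if rows.size:
            signal[col] = rows.mean()        # vertical trace position
    return signal
```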

  19. Automatic intra-modality brain image registration method

    International Nuclear Information System (INIS)

    Whitaker, J.M.; Ardekani, B.A.; Braun, M.

    1996-01-01

    Full text: Registration of 3D brain images from the same or different subjects has potential importance in clinical diagnosis, treatment planning and neurological research. The broad aim of our work is to produce an automatic and robust intra-modality brain image registration algorithm for intra-subject and inter-subject studies. Our algorithm is composed of two stages. Initial alignment is achieved by finding the values of nine transformation parameters (representing translation, rotation and scale) that minimise the non-overlapping regions of the head. This is achieved by minimisation of the sum of the exclusive OR of two binary head images, produced using the head extraction procedure described by Ardekani et al. (J Comput Assist Tomogr, 19:613-623, 1995). The initial alignment successfully determines the scale parameters and the gross translation and rotation parameters. Fine alignment uses an objective function described for inter-modality registration in Ardekani et al. (ibid.). The algorithm segments one of the images to be aligned into a set of connected components using K-means clustering; registration is achieved by minimising the K-means variance of the segmentation induced in the other image. The similarity of images of the same modality makes the method attractive for intra-modality registration. A 3D MR image with voxel dimensions of 2x2x6 mm was deliberately misaligned; the registered image shows visually accurate registration, and the average displacement of a pixel from its correct location was measured to be 3.3 mm. The algorithm was tested on intra-subject MR images and was found to produce good qualitative results. Using the data available, the algorithm produced promising qualitative results in intra-subject registration. Further work is necessary for its application to inter-subject registration, due to the large variability in brain structure between subjects. Clinical evaluation of the algorithm for selected applications is required.
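
    The initial-alignment objective is simply the count of non-overlapping head voxels; a minimal sketch (the nine-parameter search loop itself is omitted):

```python
import numpy as np

def xor_mismatch(head_mask_a, head_mask_b):
    """Number of voxels where exactly one of two binary head masks is
    set; minimized over translation, rotation and scale during the
    initial alignment."""
    return int(np.count_nonzero(np.logical_xor(head_mask_a, head_mask_b)))
```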

  20. Detecting and extracting clusters in atom probe data: A simple, automated method using Voronoi cells

    International Nuclear Information System (INIS)

    Felfer, P.; Ceguerra, A.V.; Ringer, S.P.; Cairney, J.M.

    2015-01-01

    The analysis of the formation of clusters in solid solutions is one of the most common uses of atom probe tomography. Here, we present a method that uses the Voronoi tessellation of the solute atoms and its geometric dual, the Delaunay triangulation, to test for spatial/chemical randomness of the solid solution as well as to extract the clusters themselves. We show how the parameters necessary for cluster extraction can be determined automatically, i.e. without user interaction, making it an ideal tool for the screening of datasets and the pre-filtering of structures for other spatial analysis techniques. Since the Voronoi volumes are closely related to atomic concentrations, the parameters resulting from this analysis can also be used for other concentration-based methods such as iso-surfaces. - Highlights: • Cluster analysis of atom probe data can be significantly simplified by using the Voronoi cell volumes of the atomic distribution. • Concentration fields are defined on a single-atom basis using Voronoi cells. • All parameters for the analysis are determined by optimizing the separation probability of bulk atoms vs clustered atoms.
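
    With SciPy, the per-atom Voronoi cell volumes that this analysis builds on can be computed directly; the sketch below leaves out the Delaunay-based cluster extraction and the automatic parameter selection.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_cell_volumes(solute_positions):
    """Volume of each solute atom's Voronoi cell (a proxy for inverse
    local concentration); unbounded boundary cells are returned as NaN
    and should be excluded from the randomness test."""
    vor = Voronoi(solute_positions)
    volumes = np.full(len(solute_positions), np.nan)
    for i, region_index in enumerate(vor.point_region):
        region = vor.regions[region_index]
        if region and -1 not in region:      # bounded cells only
            volumes[i] = ConvexHull(vor.vertices[region]).volume
    return volumes
```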

  1. CURRENT STATE ANALYSIS OF AUTOMATIC BLOCK SYSTEM DEVICES, METHODS OF ITS SERVICE AND MONITORING

    Directory of Open Access Journals (Sweden)

    A. M. Beznarytnyy

    2014-01-01

    Full Text Available Purpose. To develop a formalized description of the numerical-code automatic block system based on an analysis of its characteristic failures and of its maintenance procedure. Methodology. Theoretical and analytical methods were used for this research. Findings. Typical failures of automatic block systems were analyzed and the basic causes of failure identified; the majority of failures occur due to defects in the maintenance system. The advantages and disadvantages of the current maintenance technology for the automatic block system were analyzed, and the tasks that can be automated by means of technical diagnostics were identified. A formal description of the numerical-code automatic block system was produced as a graph in the state space of the system. Originality. A state graph of the numerical-code automatic block system is proposed that takes into account the gradual transition from the serviceable condition to loss of efficiency. This allows diagnostic information to be selected by attribute and increases the effectiveness of recovery operations in case of a malfunction. Practical value. The obtained results of the analysis and the proposed state graph can be used as the basis for developing new diagnostic devices for the automatic block system, which in turn will improve the efficiency and maintenance of automatic block system devices in general.

  2. MiDas: automatic extraction of a common domain of discourse in sleep medicine for multi-center data integration.

    Science.gov (United States)

    Sahoo, Satya S; Ogbuji, Chimezie; Luo, Lingyun; Dong, Xiao; Cui, Licong; Redline, Susan S; Zhang, Guo-Qiang

    2011-01-01

    Clinical studies often use data dictionaries with controlled sets of terms to facilitate data collection, limited interoperability and sharing at a local site. Multi-center retrospective clinical studies require that these data dictionaries, originating from individual participating centers, be harmonized in preparation for the integration of the corresponding clinical research data. Domain ontologies are often used to facilitate multi-center data integration by modeling terms from data dictionaries in a logic-based language, but interoperability among domain ontologies (using automated techniques) is an unresolved issue. Although many upper-level reference ontologies have been proposed to address this challenge, our experience in integrating multi-center sleep medicine data highlights the need for an upper level ontology that models a common set of terms at multiple-levels of abstraction, which is not covered by the existing upper-level ontologies. We introduce a methodology underpinned by a Minimal Domain of Discourse (MiDas) algorithm to automatically extract a minimal common domain of discourse (upper-domain ontology) from an existing domain ontology. Using the Multi-Modality, Multi-Resource Environment for Physiological and Clinical Research (Physio-MIMI) multi-center project in sleep medicine as a use case, we demonstrate the use of MiDas in extracting a minimal domain of discourse for sleep medicine, from Physio-MIMI's Sleep Domain Ontology (SDO). We then extend the resulting domain of discourse with terms from the data dictionary of the Sleep Heart and Health Study (SHHS) to validate MiDas. To illustrate the wider applicability of MiDas, we automatically extract the respective domains of discourse from 6 sample domain ontologies from the National Center for Biomedical Ontologies (NCBO) and the OBO Foundry.

  3. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    Science.gov (United States)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed to assess the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using a leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
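
    The univariate box-whisker rule reduces to the usual IQR fences; a minimal sketch (the 1.5 fence factor is the conventional choice, not necessarily the paper's):

```python
import numpy as np

def box_whisker_outliers(feature_values, k=1.5):
    """Flag segmentation cases whose QA feature falls outside the
    box-whisker fences Q1 - k*IQR and Q3 + k*IQR."""
    x = np.asarray(feature_values, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)
```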

  4. Evaluation of urinary cortisol excretion by radioimmunoassay through two methods (extracted and non-extracted)

    International Nuclear Information System (INIS)

    Fonte Kohek, M.B. da; Mendonca, B.B. de; Nicolau, W.

    1993-01-01

    The objective of this paper is to compare the feasibility, sensitivity and specificity of both methods (extracted versus non-extracted) in the diagnosis of hypercortisolism. The Gamma Coat 125I cortisol kit (Clinical Assays, Incstar, USA) was used for both methods, with methylene chloride extraction for the measurement of extracted cortisol. Thirty-two assays were performed, with sensitivities ranging from 0.1 to 0.47 µg/dl. The intra-run precision was 8.29 ± 3.38% and 8.19 ± 4.72% for high and low levels, respectively, of non-extracted cortisol, and 9.72 ± 1.94% and 9.54 ± 44% for high and low levels, respectively, of extracted cortisol. The inter-run precision was 15.98% and 16.15% for the high level of non-extracted and extracted cortisol, respectively. For the low level, 17.25% and 18.59% were obtained for non-extracted and extracted cortisol, respectively. Basal 24-hour urine samples from 43 normal subjects, 53 obese patients (body mass index > 30) and 53 Cushing's syndrome patients were evaluated. The sensitivity of the methods was similar (100% and 98.1% for the non-extracted and extracted methods, respectively) and the specificity was the same for both methods (100%). A positive correlation between the two methods was observed in all groups studied (p < 0.05), and both methods proved suitable for the diagnosis of Cushing's syndrome. (author)

  5. Evaluation of urinary excretion of cortisol by radioimmunoassay through two methods (extracted and non-extracted)

    International Nuclear Information System (INIS)

    Fonte Kohek, M.B. da.

    1992-01-01

    The radioimmunoassay of urinary cortisol extracted with an organic solvent (free cortisol) has long been used in the diagnosis of hypercortisolism. With the development of more specific antisera, it became possible to measure urinary cortisol without extraction. The objective of this paper is to compare the feasibility, sensitivity and specificity of both methods (extracted versus non-extracted) in the diagnosis of hypercortisolism. The Gamma Coat 125I cortisol kit (Clinical Assay, Incstar, US) was used for both methods, with methylene chloride extraction for the measurement of extracted cortisol. The sensitivity of the methods was similar (100% and 98.1% for the non-extracted and extracted methods, respectively). A positive correlation between the two methods was noticed in all groups studied (p < 0.05). It was concluded that both methods are efficient for the investigation of hypercortisolism; however, non-extracted urinary cortisol measurement is suggested as the method of choice, since it is an easy-to-perform and affordable way to diagnose Cushing's syndrome. (author)

  6. Software design of automatic counting system for nuclear track based on mathematical morphology algorithm

    International Nuclear Information System (INIS)

    Pan Yi; Mao Wanchong

    2010-01-01

    The measurement of nuclear track parameters occupies an important position in the field of nuclear technology. However, the traditional manual counting method has many limitations. In recent years, DSP and digital image processing technology have been applied more and more in the nuclear field. To reduce the visual-measurement errors of the manual counting method, an automatic counting system for nuclear tracks based on the DM642 real-time image processing platform is introduced in this article. It can effectively remove interference from the background and noise points, and automatically extracts nuclear track points using a mathematical morphology algorithm. (authors)
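
    A minimal sketch of the morphological idea, assuming a grayscale frame in which etched tracks appear dark: binarize, suppress noise points with a binary opening, and count the remaining connected components. The threshold and structuring element are illustrative choices, not the system's DM642 implementation.

    ```python
    # Morphology-based track counting: binarize, remove small noise with a
    # binary opening, then count connected components.
    import numpy as np
    from scipy import ndimage

    def count_tracks(image, threshold=128):
        binary = image < threshold                    # etched tracks are dark
        opened = ndimage.binary_opening(binary, structure=np.ones((3, 3)))
        labeled, n_tracks = ndimage.label(opened)
        return n_tracks

    # Random frame as a placeholder for a microscope image
    frame = (np.random.default_rng(1).random((512, 512)) * 255).astype(np.uint8)
    print("tracks found:", count_tracks(frame))
    ```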

  7. CAD-based Monte Carlo automatic modeling method based on primitive solid

    International Nuclear Information System (INIS)

    Wang, Dong; Song, Jing; Yu, Shengpeng; Long, Pengcheng; Wang, Yongliang

    2016-01-01

    Highlights: • We develop a method that converts in both directions between CAD models and primitive solids. • The method was improved from a conversion method between CAD models and half-spaces. • The method was tested with the ITER model, validating its correctness and efficiency. • The method is integrated in SuperMC and can build models for SuperMC and Geant4. - Abstract: The Monte Carlo method has been widely used in nuclear design and analysis, where geometries are described with primitive solids. However, it is time-consuming and error-prone to describe a primitive solid geometry, especially for a complicated model. To reuse the abundance of existing CAD models and to model conveniently with CAD tools, an automatic method for accurate and prompt conversion between CAD models and primitive solids is needed. An automatic modeling method for Monte Carlo geometry described by primitive solids was developed that can convert in both directions between CAD models and Monte Carlo geometry represented by primitive solids. When converting from a CAD model to a primitive solid model, the CAD model is decomposed into several convex solid sets, and the corresponding primitive solids are generated and exported. When converting from a primitive solid model to a CAD model, the basic primitive solids are created and the related operations are performed. This method was integrated in SuperMC and was benchmarked with the ITER benchmark model. The correctness and efficiency of the method were demonstrated.

  8. A Simple and Automatic Method for Locating Surgical Guide Hole

    Science.gov (United States)

    Li, Xun; Chen, Ming; Tang, Kai

    2017-12-01

    Restoration-driven surgical guides are widely used in implant surgery. This study aims to provide a simple and valid method for automatically locating the surgical guide hole, which can reduce reliance on operator experience and improve the design efficiency and quality of surgical guides. Little literature exists on this topic, and this paper proposes a novel and simple method to solve the problem. A local coordinate system for each objective tooth is geometrically constructed in a CAD system. This coordinate system represents the dental anatomical features well, and the center axis of the objective tooth (which coincides with the corresponding guide-hole axis) can be quickly evaluated in this coordinate system, completing the location of the guide hole. The proposed method has been verified against two types of benchmarks: manual operation by one skilled doctor with over 15 years of experience (used in most hospitals) and the automatic approach of the popular commercial package Simplant (used in few hospitals). Both the benchmarks and the proposed method are analyzed for their stress distribution when chewing and biting. The stress distribution is visually shown and plotted as a graph. The results show that the proposed method yields a much better stress distribution than the manual operation and a slightly better one than Simplant, which will significantly reduce the risk of cervical margin collapse and extend the wear life of the restoration.

  9. Method for automatically evaluating a transition from a batch manufacturing technique to a lean manufacturing technique

    Science.gov (United States)

    Ivezic, Nenad; Potok, Thomas E.

    2003-09-30

    A method for automatically evaluating a manufacturing technique comprises the steps of: receiving from a user manufacturing process step parameters characterizing a manufacturing process; accepting from the user a selection for an analysis of a particular lean manufacturing technique; automatically compiling process step data for each process step in the manufacturing process; automatically calculating process metrics from a summation of the compiled process step data for each process step; and, presenting the automatically calculated process metrics to the user. A method for evaluating a transition from a batch manufacturing technique to a lean manufacturing technique can comprise the steps of: collecting manufacturing process step characterization parameters; selecting a lean manufacturing technique for analysis; communicating the selected lean manufacturing technique and the manufacturing process step characterization parameters to an automatic manufacturing technique evaluation engine having a mathematical model for generating manufacturing technique evaluation data; and, using the lean manufacturing technique evaluation data to determine whether to transition from an existing manufacturing technique to the selected lean manufacturing technique.

  10. An evaluation of automatic coronary artery calcium scoring methods with cardiac CT using the orCaScore framework.

    Science.gov (United States)

    Wolterink, Jelmer M; Leiner, Tim; de Vos, Bob D; Coatrieux, Jean-Louis; Kelm, B Michael; Kondo, Satoshi; Salgado, Rodrigo A; Shahzad, Rahil; Shu, Huazhong; Snoeren, Miranda; Takx, Richard A P; van Vliet, Lucas J; van Walsum, Theo; Willems, Tineke P; Yang, Guanyu; Zheng, Yefeng; Viergever, Max A; Išgum, Ivana

    2016-05-01

    The amount of coronary artery calcification (CAC) is a strong and independent predictor of cardiovascular disease (CVD) events. In clinical practice, CAC is manually identified and automatically quantified in cardiac CT using commercially available software. This is a tedious and time-consuming process in large-scale studies. Therefore, a number of automatic methods that require no interaction and semiautomatic methods that require very limited interaction for the identification of CAC in cardiac CT have been proposed. Thus far, a comparison of their performance has been lacking. The objective of this study was to perform an independent evaluation of (semi)automatic methods for CAC scoring in cardiac CT using a publicly available standardized framework. Cardiac CT exams of 72 patients distributed over four CVD risk categories were provided for (semi)automatic CAC scoring. Each exam consisted of a noncontrast-enhanced calcium scoring CT (CSCT) and a corresponding coronary CT angiography (CCTA) scan. The exams were acquired in four different hospitals using state-of-the-art equipment from four major CT scanner vendors. The data were divided into 32 training exams and 40 test exams. A reference standard for CAC in CSCT was defined by consensus of two experts following a clinical protocol. The framework organizers evaluated the performance of (semi)automatic methods on test CSCT scans, per lesion, artery, and patient. Five (semi)automatic methods were evaluated. Four methods used both CSCT and CCTA to identify CAC, and one method used only CSCT. The evaluated methods correctly detected between 52% and 94% of CAC lesions with positive predictive values between 65% and 96%. Lesions in distal coronary arteries were most commonly missed and aortic calcifications close to the coronary ostia were the most common false positive errors. The majority (between 88% and 98%) of correctly identified CAC lesions were assigned to the correct artery. Linearly weighted Cohen's kappa
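
    The abstract breaks off at the evaluation metric; for reference, linearly weighted Cohen's kappa on ordinal risk categories can be computed as below. The category vectors are invented for illustration.

    ```python
    # Linearly weighted Cohen's kappa compares ordinal CVD risk categories
    # assigned by a method against a reference standard.
    from sklearn.metrics import cohen_kappa_score

    reference = [0, 1, 2, 3, 1, 0, 2, 3, 3, 2]   # expert risk categories
    automatic = [0, 1, 2, 2, 1, 0, 3, 3, 3, 2]   # method's risk categories
    kappa = cohen_kappa_score(reference, automatic, weights="linear")
    print(f"linearly weighted kappa = {kappa:.3f}")
    ```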

  11. Manual versus automatic bladder wall thickness measurements: a method comparison study

    NARCIS (Netherlands)

    Oelke, M.; Mamoulakis, C.; Ubbink, D.T.; de la Rosette, J.J.; Wijkstra, H.

    2009-01-01

    Purpose To compare repeatability and agreement of conventional ultrasound bladder wall thickness (BWT) measurements with automatically obtained BWT measurements by the BVM 6500 device. Methods Adult patients with lower urinary tract symptoms, urinary incontinence, or postvoid residual urine were

  12. Cause Information Extraction from Financial Articles Concerning Business Performance

    Science.gov (United States)

    Sakai, Hiroyuki; Masuyama, Shigeru

    We propose a method of extracting cause information from Japanese financial articles concerning business performance. Our method acquires cause information, e.g. “自動車の売り上げが好調 (zidousya no uriage ga koutyou: Sales of cars were good)”. Cause information is useful for investors in selecting companies to invest in. Our method automatically extracts cause information in the form of causal expressions, using statistical information and initial clue expressions. It can extract causal expressions without predetermined patterns or complex hand-crafted rules, and is expected to be applicable to other tasks that acquire phrases with a particular meaning, not limited to cause information. We compared our method with our previous one, originally proposed for extracting phrases concerning traffic accident causes, and experimental results showed that the new method outperforms the previous one.

  13. The development of an automatic scanning method for CR-39 neutron dosimeter

    International Nuclear Information System (INIS)

    Tawara, Hiroko; Miyajima, Mitsuhiro; Sasaki, Shin-ichi; Hozumi, Ken-ichi

    1989-01-01

    A method of measuring low-level neutron dose has been developed with CR-39 track detectors using an automatic scanning system. It is composed of an optical microscope with a video camera, an image processor and a personal computer. The focus point of the microscope and the X-Y stage are controlled from the computer. From the results of automatic measurements, the minimum detectable neutron dose is estimated at 4.6 mrem in a uniform neutron field with an energy spectrum equivalent to that of an Am-Be source. (author)

  14. Extraction of latent images from printed media

    Science.gov (United States)

    Sergeyev, Vladislav; Fedoseev, Victor

    2015-12-01

    In this paper we propose an automatic technology for the extraction of latent images from printed media such as documents, banknotes, financial securities, etc. This technology includes image processing by an adaptively constructed Gabor filter bank for obtaining feature images, as well as subsequent stages of feature selection, grouping and multicomponent segmentation. The main advantage of the proposed technique is its versatility: it can extract latent images produced by different texture variations. Experimental results are given showing the performance of the method against another known system for latent image extraction.
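
    A hedged sketch of the Gabor filter bank stage using OpenCV; the orientations, wavelengths and kernel size are illustrative defaults rather than the adaptively constructed bank the paper describes, and "document.png" is a placeholder input.

    ```python
    # Build a small Gabor filter bank and produce one feature image per filter.
    import cv2
    import numpy as np

    def gabor_features(gray, n_thetas=4, wavelengths=(4, 8, 16)):
        feats = []
        for i in range(n_thetas):
            theta = i * np.pi / n_thetas
            for lam in wavelengths:
                kernel = cv2.getGaborKernel(ksize=(31, 31), sigma=4.0,
                                            theta=theta, lambd=lam,
                                            gamma=0.5, psi=0)
                feats.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
        return np.stack(feats, axis=-1)   # H x W x (n_thetas * len(wavelengths))

    gray = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
    if gray is not None:
        print("feature stack shape:", gabor_features(gray).shape)
    ```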

  15. Imaging different components of a tectonic tremor sequence in southwestern Japan using an automatic statistical detection and location method

    Science.gov (United States)

    Poiata, Natalia; Vilotte, Jean-Pierre; Bernard, Pascal; Satriano, Claudio; Obara, Kazushige

    2018-06-01

    In this study, we demonstrate the capability of an automatic network-based detection and location method to extract and analyse different components of tectonic tremor activity by analysing a 9-day energetic tectonic tremor sequence occurring at the downdip extension of the subducting slab in southwestern Japan. The applied method exploits the coherency of multiscale, frequency-selective characteristics of non-stationary signals recorded across the seismic network. Use of different characteristic functions, in the signal processing step of the method, makes it possible to extract and locate the sources of short-duration impulsive signal transients associated with low-frequency earthquakes and of longer-duration energy transients during the tectonic tremor sequence. Frequency-dependent characteristic functions, based on higher-order statistics' properties of the seismic signals, are used for the detection and location of low-frequency earthquakes. This allows the extraction of a more complete (˜6.5 times more events) and time-resolved catalogue of low-frequency earthquakes than the routine catalogue provided by the Japan Meteorological Agency. As such, this catalogue allows resolving the space-time evolution of the low-frequency earthquakes activity in great detail, unravelling spatial and temporal clustering, modulation in response to tide, and different scales of space-time migration patterns. In the second part of the study, the detection and source location of longer-duration signal energy transients within the tectonic tremor sequence is performed using characteristic functions built from smoothed frequency-dependent energy envelopes. This leads to a catalogue of longer-duration energy sources during the tectonic tremor sequence, characterized by their durations and 3-D spatial likelihood maps of the energy-release source regions. The summary 3-D likelihood map for the 9-day tectonic tremor sequence, built from this catalogue, exhibits an along-strike spatial segmentation of

  16. Imaging different components of a tectonic tremor sequence in southwestern Japan using an automatic statistical detection and location method

    Science.gov (United States)

    Poiata, Natalia; Vilotte, Jean-Pierre; Bernard, Pascal; Satriano, Claudio; Obara, Kazushige

    2018-02-01

    In this study, we demonstrate the capability of an automatic network-based detection and location method to extract and analyse different components of tectonic tremor activity by analysing a 9-day energetic tectonic tremor sequence occurring at the down-dip extension of the subducting slab in southwestern Japan. The applied method exploits the coherency of multi-scale, frequency-selective characteristics of non-stationary signals recorded across the seismic network. Use of different characteristic functions, in the signal processing step of the method, makes it possible to extract and locate the sources of short-duration impulsive signal transients associated with low-frequency earthquakes and of longer-duration energy transients during the tectonic tremor sequence. Frequency-dependent characteristic functions, based on higher-order statistics' properties of the seismic signals, are used for the detection and location of low-frequency earthquakes. This allows the extraction of a more complete (˜6.5 times more events) and time-resolved catalogue of low-frequency earthquakes than the routine catalogue provided by the Japan Meteorological Agency. As such, this catalogue allows resolving the space-time evolution of the low-frequency earthquakes activity in great detail, unravelling spatial and temporal clustering, modulation in response to tide, and different scales of space-time migration patterns. In the second part of the study, the detection and source location of longer-duration signal energy transients within the tectonic tremor sequence is performed using characteristic functions built from smoothed frequency-dependent energy envelopes. This leads to a catalogue of longer-duration energy sources during the tectonic tremor sequence, characterized by their durations and 3-D spatial likelihood maps of the energy-release source regions. The summary 3-D likelihood map for the 9-day tectonic tremor sequence, built from this catalogue, exhibits an along-strike spatial segmentation of

  17. A Method for Modeling the Virtual Instrument Automatic Test System Based on the Petri Net

    Institute of Scientific and Technical Information of China (English)

    MA Min; CHEN Guang-ju

    2005-01-01

    Virtual instruments play an important role in automatic test systems. This paper introduces the composition of a virtual instrument automatic test system, taking as an example a VXIbus-based test software platform developed by the CAT lab of UESTC. A method to model this system based on Petri nets is then proposed. With this method, test task scheduling can be analyzed to prevent deadlocks or resource conflicts. Finally, the feasibility of the method is analyzed.

  18. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable source of information about their operational status and health. However, their massive size, complexity, and lack of a standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
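
    A toy rendering of the syntactic-clustering idea: masking variable tokens (numbers, hex IDs) collapses messages onto their templates, which can then be grouped. The real method uses an online clustering algorithm rather than this exact-template matching.

    ```python
    # Mask variable tokens so log messages collapse onto syntactic templates,
    # then group identical templates together.
    import re
    from collections import defaultdict

    def template(message):
        # replace hex IDs and decimal numbers with a wildcard token
        return re.sub(r"0x[0-9a-fA-F]+|\d+", "<*>", message)

    logs = [
        "node 12 temperature 81C",
        "node 47 temperature 79C",
        "ECC error at 0xdeadbeef",
        "ECC error at 0x1f2e3d4c",
    ]
    groups = defaultdict(list)
    for line in logs:
        groups[template(line)].append(line)
    for tpl, members in groups.items():
        print(f"{tpl!r}: {len(members)} messages")
    ```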

  19. Automatic Vessel Segmentation on Retinal Images

    Institute of Scientific and Technical Information of China (English)

    Chun-Yuan Yu; Chia-Jen Chang; Yen-Ju Yao; Shyr-Shen Yu

    2014-01-01

    Several features of retinal vessels can be used to monitor the progression of diseases. Changes in vascular structures, for example vessel caliber, branching angle, and tortuosity, are portents of many diseases such as diabetic retinopathy and arterial hypertension. This paper proposes an automatic retinal vessel segmentation method based on morphological closing and multi-scale line detection. First, an illumination correction is performed on the green band of the retinal image. Next, morphological closing and subtraction are applied to obtain a crude retinal vessel image. Then, multi-scale line detection is used to refine the vessel image. Finally, the binary vasculature is extracted by the Otsu algorithm. To mitigate the drawbacks of multi-scale line detection, only line detectors at four scales are used. The experimental results show an accuracy of 0.939 on the DRIVE (digital retinal images for vessel extraction) database, which is much better than other methods.
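
    A condensed sketch of the first stages of this pipeline with OpenCV (closing on the green channel, subtraction, Otsu binarization); the multi-scale line-detection refinement is omitted, and "retina.png" is a placeholder input.

    ```python
    # Crude vessel extraction: morphological closing on the green channel,
    # subtraction to enhance vessels, then Otsu thresholding.
    import cv2

    img = cv2.imread("retina.png")                   # placeholder file
    if img is not None:
        green = img[:, :, 1]
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
        closed = cv2.morphologyEx(green, cv2.MORPH_CLOSE, kernel)
        vessels = cv2.subtract(closed, green)        # vessels appear bright
        _, binary = cv2.threshold(vessels, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        cv2.imwrite("vessels.png", binary)
    ```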

  20. Establishing a novel automated magnetic bead-based method for the extraction of DNA from a variety of forensic samples.

    Science.gov (United States)

    Witt, Sebastian; Neumann, Jan; Zierdt, Holger; Gébel, Gabriella; Röscheisen, Christiane

    2012-09-01

    Automated systems have been increasingly utilized for DNA extraction by many forensic laboratories to handle growing numbers of forensic casework samples while minimizing the risk of human error and assuring high reproducibility. The step towards automation, however, is not easy: the automated extraction method has to be very versatile to reliably prepare high yields of pure genomic DNA from a broad variety of sample types on different carrier materials. To prevent possible cross-contamination of samples or the loss of DNA, the components of the kit have to be designed in a way that allows for the automated handling of the samples with no manual intervention necessary. DNA extraction using paramagnetic particles coated with a DNA-binding surface is predestined for an automated approach. For this study, we tested different DNA extraction kits using DNA-binding paramagnetic particles with regard to DNA yield and handling by a Freedom EVO® 150 extraction robot (Tecan) equipped with a Te-MagS magnetic separator. Among others, the extraction kits tested were the ChargeSwitch® Forensic DNA Purification Kit (Invitrogen), the PrepFiler™ Automated Forensic DNA Extraction Kit (Applied Biosystems) and NucleoMag™ 96 Trace (Macherey-Nagel). After an extensive test phase, we established a novel magnetic bead extraction method based upon the NucleoMag™ extraction kit (Macherey-Nagel). The new method is readily automatable and produces high yields of DNA from different sample types (blood, saliva, sperm, contact stains) on various substrates (filter paper, swabs, cigarette butts) with no evidence of a loss of magnetic beads or sample cross-contamination. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  1. Statistical Methods in Assembly Quality Management of Multi-Element Products on Automatic Rotor Lines

    Science.gov (United States)

    Pries, V. V.; Proskuriakov, N. E.

    2018-04-01

    To control the assembly quality of multi-element mass-produced products on automatic rotor lines, control methods with operational feedback are required. However, due to possible failures in the devices and systems of an automatic rotor line, there is always a real probability that defective (incomplete) products enter the output process stream. Therefore, continuous sampling control of product completeness, based on statistical methods, remains an important element in managing the assembly quality of multi-element mass products on automatic rotor lines. A particular feature of continuous sampling control of product completeness during assembly is that the inspection is destructive, which excludes returning component parts to the process stream after sampling control and leads to a decrease in the actual productivity of the assembly equipment. Therefore, statistical procedures for continuous sampling control of product completeness on automatic rotor lines require sampling plans that ensure a minimum control sample size. Comparison of the limiting average output defect levels for the continuous sampling plan (CSP) and the automated continuous sampling plan (ACSP) shows that the ACSP-1 can provide lower limiting values for the average output defect level. The average sample size when using the ACSP-1 plan is also smaller than when using the CSP-1 plan. Thus, applying statistical methods in the assembly quality management of multi-element products on automatic rotor lines, using the proposed plans and methods for continuous sampling control, will make it possible to automate sampling control procedures and ensure the required quality level of assembled products while minimizing sample size.

  2. SurfCut: Free-Boundary Surface Extraction

    KAUST Repository

    Algarni, Marei Saeed Mohammed

    2016-09-15

    We present SurfCut, an algorithm for extracting a smooth simple surface with unknown boundary from a noisy 3D image and a seed point. In contrast to existing approaches that extract smooth simple surfaces with boundary, our method requires less user input, i.e., a seed point, rather than a 3D boundary curve. Our method is built on the novel observation that certain ridge curves of a front propagated using the Fast Marching algorithm are likely to lie on the surface. Using the framework of cubical complexes, we design a novel algorithm to robustly extract such ridge curves and form the surface of interest. Our algorithm automatically cuts these ridge curves to form the surface boundary, and then extracts the surface. Experiments show the robustness of our method to errors in the data, and that we achieve higher accuracy with lower computational cost than comparable methods. © Springer International Publishing AG 2016.

  3. Influence of Sample Size on Automatic Positional Accuracy Assessment Methods for Urban Areas

    Directory of Open Access Journals (Sweden)

    Francisco J. Ariza-López

    2018-05-01

    In recent years, new approaches aimed at increasing the automation level of positional accuracy assessment processes for spatial data have been developed. However, an aspect as significant as sample size has not yet been addressed in such cases. In this paper, we study the influence of sample size when estimating the planimetric positional accuracy of urban databases by means of an automatic assessment using a polygon-based methodology. Our study is based on a simulation process which extracts pairs of homologous polygons from the assessed and reference data sources and applies two buffer-based methods. The parameter used for determining the different sizes (which range from 5 km up to 100 km) is the length of the polygons' perimeter, and for each sample size 1000 simulations were run. After completing the simulation process, comparisons between the estimated distribution function for each sample and the population distribution function were carried out by means of the Kolmogorov–Smirnov test. Results show a significant reduction in the variability of the estimations when the sample size increases from 5 km to 100 km.
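
    The comparison step in miniature: a two-sample Kolmogorov–Smirnov test between one simulated sample of positional errors and the population of errors. The normal error model here is an assumption for illustration only.

    ```python
    # Two-sample KS test between a sample of positional errors and the
    # full population of errors from a simulation.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    population = rng.normal(loc=0.0, scale=1.0, size=100_000)   # all errors
    sample = rng.choice(population, size=200, replace=False)    # one simulation
    statistic, p_value = stats.ks_2samp(sample, population)
    print(f"KS statistic = {statistic:.3f}, p = {p_value:.3f}")
    ```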

  4. Extracting natural dyes from wool--an evaluation of extraction methods.

    Science.gov (United States)

    Manhita, Ana; Ferreira, Teresa; Candeias, António; Dias, Cristina Barrocas

    2011-05-01

    The efficiency of eight different procedures used for the extraction of natural dyes was evaluated using contemporary wool samples dyed with cochineal, madder, woad, weld, brazilwood and logwood. Comparison was made based on the LC-DAD peak areas of the natural dye's main components which had been extracted from the wool samples. Among the tested methods, an extraction procedure with Na2EDTA in water/DMF (1:1, v/v) proved to be the most suitable for the extraction of the studied dyes, which presented a wide range of chemical structures. The identification of the natural dyes used in the making of an eighteenth century Arraiolos carpet was possible using the Na2EDTA/DMF extraction of the wool embroidery samples and an LC-DAD-MS methodology. The effectiveness of the Na2EDTA/DMF extraction method was particularly observed in the extraction of weld dye components. Nine flavone derivatives previously identified in weld extracts could be identified in a single historical sample, confirming the use of this natural dye in the making of Arraiolos carpets. Indigo and brazilwood were also identified in the samples, and despite the fact that these natural dyes were referred to in the historical recipes of Arraiolos dyeing, it is the first time that the use of brazilwood is confirmed. Mordant analysis by ICP-MS identified the widespread use of alum in the dyeing process, but in some samples with darker hues, high amounts of iron were found instead.

  5. Improvement in PWR automatic optimization reloading methods using genetic algorithm

    International Nuclear Information System (INIS)

    Levine, S.H.; Ivanov, K.; Feltus, M.

    1996-01-01

    The objective of using automatic optimized reloading methods is to provide the nuclear engineer with an efficient method for reloading a nuclear reactor which results in superior core configurations that minimize fuel costs. Previous methods developed by Levine et al. required a large effort to develop the initial core loading using a priority loading scheme. Subsequent modifications to this core configuration were made using expert rules to produce the final core design. Improvements in this technique have been made by using a genetic algorithm to produce improved core reload designs for PWRs more efficiently. (authors)
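
    The abstract does not give the algorithm's details, so the sketch below shows only the generic genetic-algorithm loop such a reload optimizer builds on: permutations of assembly positions as chromosomes, order-preserving crossover, swap mutation, and a placeholder fitness standing in for a core-physics evaluation.

    ```python
    # Generic GA loop over reload patterns (permutations of assemblies).
    import random

    random.seed(0)
    N = 12                                   # assembly positions in a toy core

    def fitness(pattern):
        # placeholder for a core-physics evaluation of a loading pattern
        return -sum(abs(gene - pos) for pos, gene in enumerate(pattern))

    def crossover(a, b):
        # order crossover: the child remains a valid permutation
        cut = random.randrange(1, N)
        head = a[:cut]
        return head + [g for g in b if g not in head]

    population = [random.sample(range(N), N) for _ in range(40)]
    for _ in range(100):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]
        children = [crossover(random.choice(parents), random.choice(parents))
                    for _ in range(30)]
        for child in children:               # occasional swap mutation
            if random.random() < 0.2:
                i, j = random.sample(range(N), 2)
                child[i], child[j] = child[j], child[i]
        population = parents + children
    best = max(population, key=fitness)
    print("best pattern:", best, "fitness:", fitness(best))
    ```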

  6. Improvement in PWR automatic optimization reloading methods using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Levine, S H; Ivanov, K; Feltus, M [Pennsylvania State Univ., University Park, PA (United States)

    1996-12-01

    The objective of using automatic optimized reloading methods is to provide the nuclear engineer with an efficient method for reloading a nuclear reactor which results in superior core configurations that minimize fuel costs. Previous methods developed by Levine et al. required a large effort to develop the initial core loading using a priority loading scheme. Subsequent modifications to this core configuration were made using expert rules to produce the final core design. Improvements in this technique have been made by using a genetic algorithm to produce improved core reload designs for PWRs more efficiently. (authors)

  7. Semi Automatic Ontology Instantiation in the domain of Risk Management

    Science.gov (United States)

    Makki, Jawad; Alquier, Anne-Marie; Prince, Violaine

    One of the challenging tasks in the context of Ontological Engineering is to automatically or semi-automatically support the process of Ontology Learning and Ontology Population from semi-structured documents (texts). In this paper we describe a Semi-Automatic Ontology Instantiation method from natural language text, in the domain of Risk Management. The method is composed of three steps: 1) annotation with part-of-speech tags, 2) semantic relation instance extraction, and 3) ontology instantiation. It is based on combined NLP techniques, with human intervention between steps 2 and 3 for control and validation. Since it relies heavily on linguistic knowledge, it is not domain-dependent, which is a good feature for portability between the different fields of risk management application. The proposed methodology uses the ontology of the PRIMA project (supported by the European Community) as a generic domain ontology and populates it via an available corpus. A first validation of the approach is done through an experiment with Chemical Fact Sheets from the Environmental Protection Agency.

  8. A framework for automatic feature extraction from airborne light detection and ranging data

    Science.gov (United States)

    Yan, Jianhua

    Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurements of topography over large areas. Airborne LIDAR systems usually return a 3-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information on different kinds of geometric objects, such as high-resolution digital terrain models (DTMs), buildings and trees. In the past decade, LIDAR has attracted more and more interest from researchers in the fields of remote sensing and GIS. Compared to traditional data sources, such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, voluminous data pose a new challenge for automatically extracting geometric information from LIDAR measurements, because many raster image processing techniques cannot be directly applied to irregularly spaced LIDAR points. In this dissertation, a framework is proposed to extract information about different kinds of geometric objects, such as terrain and buildings, from LIDAR automatically. These are essential to numerous applications such as flood modeling, landslide prediction and hurricane animation. The framework consists of several intuitive algorithms. Firstly, a progressive morphological filter was developed to detect non-ground LIDAR measurements. By gradually increasing the window size and elevation difference threshold of the filter, the measurements of vehicles, vegetation, and buildings are removed, while ground data are preserved. Then, building measurements are identified from non-ground measurements using a region-growing algorithm based on a plane-fitting technique. Raw footprints for segmented building measurements are derived by connecting boundary points and are further simplified and adjusted by several proposed operations to remove noise, which is caused by irregularly
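
    A compact sketch of the progressive morphological filter idea on a gridded elevation model: grey-scale openings with growing windows, flagging cells whose residual exceeds a growing elevation-difference threshold as non-ground. The window and threshold sequences are illustrative, not the dissertation's parameters.

    ```python
    # Progressive morphological filtering of a gridded elevation model.
    import numpy as np
    from scipy import ndimage

    def progressive_filter(z, windows=(3, 5, 9, 17),
                           thresholds=(0.5, 1.0, 2.0, 4.0)):
        ground = np.ones(z.shape, dtype=bool)
        surface = z.copy()
        for w, t in zip(windows, thresholds):
            opened = ndimage.grey_opening(surface, size=(w, w))
            ground &= (surface - opened) <= t   # large residuals = objects
            surface = opened
        return ground

    z = np.random.default_rng(2).random((100, 100))  # stand-in elevation grid
    print("ground cells:", int(progressive_filter(z).sum()))
    ```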

  9. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications

    Directory of Open Access Journals (Sweden)

    Francesco Nex

    2009-05-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques traditionally used in photogrammetry are usually inefficient for these applications, as they are unable to provide reliable results under extreme geometric conditions (convergent taking geometry, strong affine transformations, etc.) and for badly textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performance of the SIFT operator was compared with that of the feature extraction and matching techniques used in photogrammetry. All these techniques were implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator was developed, in order to improve the performance of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A2 SIFT) was validated on several aerial images, with particular attention to large-scale aerial images acquired using mini-UAV systems.
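
    For concreteness, a minimal SIFT tie-point sketch with OpenCV (SIFT has been in the main package since version 4.4): detect and describe keypoints in two overlapping images and keep the matches that pass Lowe's ratio test. The file names are placeholders.

    ```python
    # SIFT keypoint detection and ratio-test matching between two images.
    import cv2

    img1 = cv2.imread("aerial_1.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("aerial_2.png", cv2.IMREAD_GRAYSCALE)
    if img1 is not None and img2 is not None:
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]
        print(f"{len(good)} candidate tie points")
    ```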

  10. OpenCV-Based Nanomanipulation Information Extraction and the Probe Operation in SEM

    Directory of Open Access Journals (Sweden)

    Dongjie Li

    2015-02-01

    For the established telenanomanipulation system, the extraction of location information and strategies for probe operation were studied in this paper. First, machine learning algorithms from OpenCV were used to extract location information from SEM images, so that nanowires and the probe in SEM images can be automatically tracked and the region of interest (ROI) can be marked quickly. The locations of the nanowire and probe can then be extracted from the ROI. To study the probe operation strategy, the Van der Waals force between the probe and a nanowire was computed, from which the relevant operating parameters were obtained. With these operating parameters, the manipulation of the nanowire can be rehearsed in a 3D virtual environment and an optimal path for the probe obtained. The actual probe then runs automatically under the telenanomanipulation system's control. Finally, experiments were carried out to verify the above methods, and the results show that the designed methods achieve the expected effect.

  11. Automatic optimized reload and depletion method for a pressurized water reactor

    International Nuclear Information System (INIS)

    Ahn, D.H.; Levene, S.H.

    1985-01-01

    A new method has been developed to automatically reload and deplete a pressurized water reactor (PWR) so that both the enriched inventory requirements during the reactor cycle and the cost of reloading the core are minimized. This is achieved through four stepwise optimization calculations: (a) determination of the minimum fuel requirement for an equivalent three-region core model, (b) optimal selection and allocation of fuel assemblies for each of the three regions to minimize the reload cost, (c) optimal placement of fuel assemblies to conserve region-wise optimal conditions, and (d) optimal control through poison management to deplete individual fuel assemblies to maximize end-of-cycle k-eff. The new method differs from previous methods in that the optimization process automatically performs all tasks required to reload and deplete a PWR. In addition, the previous work, which developed optimization methods principally for the initial reactor cycle, was modified to handle subsequent cycles with fuel assemblies having burnup at beginning of cycle. Application of the method to the fourth reactor cycle at Three Mile Island Unit 1 has shown that both the enrichment and the number of fresh reload fuel assemblies can be decreased, and fully amortized fuel assemblies can be reused, to minimize the fuel cost of the reactor.

  12. Clinically-inspired automatic classification of ovarian carcinoma subtypes

    Directory of Open Access Journals (Sweden)

    Aicha BenTaieb

    2016-01-01

    Context: It has been shown that ovarian carcinoma subtypes are distinct pathologic entities with differing prognostic and therapeutic implications. Histotyping by pathologists has good reproducibility, but occasional cases are challenging and require immunohistochemistry and subspecialty consultation. Motivated by the need for more accurate and reproducible diagnoses and to facilitate pathologists' workflow, we propose an automatic framework for ovarian carcinoma classification. Materials and Methods: Our method is inspired by pathologists' workflow. We analyse imaged tissues at two magnification levels and extract clinically inspired color, texture, and segmentation-based shape descriptors using image-processing methods. We propose a carefully designed machine learning technique composed of four modules: a dissimilarity matrix, dimensionality reduction, feature selection and a support vector machine classifier to separate the five ovarian carcinoma subtypes using the extracted features. Results: This paper presents the details of our implementation and its validation on a clinically derived dataset of eighty high-resolution histopathology images. The proposed system achieved a multiclass classification accuracy of 95.0% when classifying unseen tissues. Assessment of the classifier's confusion (confusion matrix) between the five different ovarian carcinoma subtypes agrees with clinicians' confusion and reflects the difficulty in diagnosing endometrioid and serous carcinomas. Conclusions: Our results from this first study highlight the difficulty of ovarian carcinoma diagnosis, which originates from the intrinsic class imbalance observed among subtypes, and suggest that the automatic analysis of ovarian carcinoma subtypes could be valuable to clinicians' diagnostic procedure by providing a second opinion.
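
    A schematic version of the four-module classification stage using scikit-learn; the pipeline below substitutes PCA for the paper's unspecified dimensionality-reduction module and uses a synthetic feature matrix in place of the image descriptors.

    ```python
    # Dimensionality reduction + feature selection + SVM, evaluated with CV.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 200))        # 80 images, 200 descriptors each
    y = rng.integers(0, 5, size=80)       # five carcinoma subtypes

    model = make_pipeline(
        StandardScaler(),
        PCA(n_components=30),
        SelectKBest(f_classif, k=15),
        SVC(kernel="rbf", C=1.0),
    )
    print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
    ```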

  13. Semi-Automatic Rating Method for Neutrophil Alkaline Phosphatase Activity.

    Science.gov (United States)

    Sugano, Kanae; Hashi, Kotomi; Goto, Misaki; Nishi, Kiyotaka; Maeda, Rie; Kono, Keigo; Yamamoto, Mai; Okada, Kazunori; Kaga, Sanae; Miwa, Keiko; Mikami, Taisei; Masauzi, Nobuo

    2017-01-01

    The neutrophil alkaline phosphatase (NAP) score is a valuable test for the diagnosis of myeloproliferative neoplasms, but it is still rated manually. We therefore developed a semi-automatic rating method using Photoshop® and Image-J, called NAP-PS-IJ. NAP staining was conducted with Tomonaga's method on films of peripheral blood taken from three healthy volunteers. At least 30 neutrophils with NAP scores from 0 to 5+ were observed and imaged, and the area outside each neutrophil was removed with Image-J. The images were binarized with two different procedures (P1 and P2) using Photoshop®. The NAP-positive area (NAP-PA) and NAP-positive granule count (NAP-PGC) were measured with Image-J. The NAP-PA in images binarized with P1 differed significantly (P < 0.05) between images with NAP scores from 0 to 3+ (group 1) and those from 4+ to 5+ (group 2). The original images in group 1 were binarized with P2; their NAP-PGC differed significantly (P < 0.05) among all four NAP score groups. The mean NAP-PGC obtained with NAP-PS-IJ correlated well (r = 0.92, P < 0.001) with the results of human examiners. The sensitivity and specificity of NAP-PS-IJ were 60% and 92%, so it may be considered a prototype for a fully automatic NAP score rating method. © 2016 Wiley Periodicals, Inc.

  14. Semi-automatic version of the potentiometric titration method for characterization of uranium compounds

    International Nuclear Information System (INIS)

    Cristiano, Bárbara F.G.; Delgado, José Ubiratan; Wanderley S da Silva, José; Barros, Pedro D. de; Araújo, Radier M.S. de; Dias, Fábio C.; Lopes, Ricardo T.

    2012-01-01

    The potentiometric titration method was used for the characterization of uranium compounds to be applied in intercomparison programs. The method is applied with assured traceability using a potassium dichromate primary standard. A semi-automatic version was developed to reduce the analysis time and operator variation. The standard uncertainty in determining the total concentration of uranium was around 0.01%, which is suitable for uranium characterization and compatible with that obtained by manual techniques. - Highlights: ► A semi-automatic potentiometric titration method was developed for U characterization. ► K2Cr2O7 was the only certified reference material used. ► Values obtained for U3O8 samples were consistent with certified values. ► An uncertainty of 0.01% was useful for characterization and intercomparison programs.

  15. A new method for automatic discontinuity traces sampling on rock mass 3D model

    Science.gov (United States)

    Umili, G.; Ferrero, A.; Einstein, H. H.

    2013-02-01

    A new automatic method for discontinuity trace mapping and sampling on a rock mass digital model is described in this work. The implemented procedure allows one to automatically identify discontinuity traces on a Digital Surface Model: traces are detected directly as surface breaklines, by means of the maximum and minimum principal curvature values of the vertices that constitute the model surface. Color influence and user errors, which usually characterize trace mapping on images, are eliminated. Trace sampling procedures based on circular windows and circular scanlines have also been implemented: they are used to infer trace data and to calculate values of mean trace length, expected discontinuity diameter and intensity of rock discontinuities. The method is tested on a case study: results obtained by applying the automatic procedure to the DSM of a rock face are compared to those obtained by performing manual sampling on the orthophotograph of the same rock face.
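
    A loose grid-based analog of the curvature criterion, assuming the surface can be treated as a height field: estimate the Hessian by finite differences and flag cells whose extreme eigenvalue (a proxy for principal curvature) is large. The paper works on triangulated mesh vertices, so this is only indicative.

    ```python
    # Flag breakline candidates via Hessian eigenvalues of a height field.
    import numpy as np

    def breakline_mask(z, threshold=0.5):
        zy, zx = np.gradient(z)            # first derivatives
        zyy, zyx = np.gradient(zy)         # second derivatives
        zxy, zxx = np.gradient(zx)
        # eigenvalues of the 2x2 Hessian [[zxx, zxy], [zxy, zyy]] per cell
        tr = zxx + zyy
        det = zxx * zyy - zxy * zxy
        disc = np.sqrt(np.maximum(tr * tr / 4 - det, 0.0))
        k_max, k_min = tr / 2 + disc, tr / 2 - disc
        return np.maximum(np.abs(k_max), np.abs(k_min)) > threshold

    z = np.random.default_rng(3).random((200, 200))  # stand-in surface grid
    print("breakline cells:", int(breakline_mask(z).sum()))
    ```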

  16. Development of 99mTc extraction-recovery by solvent extraction method

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, Akihiro; Nishikata, Kaori; Izumo, Hironobu; Tsuchiya, Kunihiko; Ishihara, Masahiro [Japan Atomic Energy Agency, Oarai Research and Development Center, Oarai, Ibaraki (Japan); Tanase, Masakazu; Fujisaki, Saburo; Shiina, Takayuki; Ohta, Akio; Takeuchi, Nobuhiro [Chiyoda Technol Corp., Tokyo (Japan)

    2012-03-15

    99mTc is used as a radiopharmaceutical for diagnosis in the medical field and is produced from its parent nuclide, 99Mo. In this study, solvent extraction with methyl ethyl ketone (MEK) was selected, and preliminary experiments were carried out using Re instead of 99mTc. Two tests were carried out: a Re extraction test with MEK from a Re-Mo solution, and a Re recovery test from the Re-MEK. In the Re extraction test, the Re extraction yield was more than 90%. Two kinds of Re recovery tests were carried out: an evaporation method using an evaporator, and an adsorption/elution method using an alumina column. With the evaporation method, the Re concentration in the collected solution increased more than 150 times. With the adsorption/elution method, the Re concentration in the eluted solution increased more than 20 times. (author)

  17. A Plant Control Technology Using Reinforcement Learning Method with Automatic Reward Adjustment

    Science.gov (United States)

    Eguchi, Toru; Sekiai, Takaaki; Yamada, Akihiro; Shimizu, Satoru; Fukai, Masayuki

    A control technology using Reinforcement Learning (RL) and a Radial Basis Function (RBF) Network has been developed to reduce the environmental load substances exhausted from power and industrial plants. This technology consists of a statistical model using an RBF Network, which estimates the characteristics of plants with respect to environmental load substances, and an RL agent, which learns the control logic for the plants using the statistical model. In this technology, it is necessary to design an appropriate reward function, given to the agent immediately according to operating conditions and control goals, so that plants can be controlled flexibly. Therefore, we propose an automatic reward adjusting method of RL for plant control. This method adjusts the reward function automatically, using information from the statistical model obtained in its learning process. In simulations, it was confirmed that the proposed method can adjust the reward function adaptively for several test functions, and executes robust control of the thermal power plant under changes in operating conditions and control goals.

  18. Automatic Emotional State Detection using Facial Expression Dynamic in Videos

    Directory of Open Access Journals (Sweden)

    Hongying Meng

    2014-11-01

    In this paper, an automatic emotion detection system is built for a computer or machine to detect the emotional state from facial expressions in human-computer communication. First, dynamic motion features are extracted from facial expression videos, and then advanced machine learning methods for classification and regression are used to predict the emotional states. The system is evaluated on two publicly available datasets, i.e. GEMEP_FERA and AVEC2013, and satisfactory performance is achieved in comparison with the provided baseline results. With this emotional state detection capability, a machine can read the facial expressions of its user automatically. This technique can be integrated into applications such as smart robots, interactive games and smart surveillance systems.

  19. Virgin almond oil: Extraction methods and composition

    Energy Technology Data Exchange (ETDEWEB)

    Roncero, J.M.; Alvarez-Orti, M.; Pardo-Gimenez, A.; Gomez, R.; Rabadan, A.; Pardo, J.E.

    2016-07-01

    In this paper, the extraction methods for virgin almond oil and its chemical composition are reviewed. The most common methods for obtaining the oil are solvent extraction, extraction with supercritical fluids (CO2) and pressure systems (hydraulic and screw presses). The best industrial performance, but also the worst oil quality, is achieved by using solvents. Oils obtained by this method cannot be considered virgin oils, as they are obtained through chemical treatments. Supercritical fluid extraction results in higher-quality oils, but at a very high price. Extraction by pressing is thus the best option for obtaining high-quality oils at an affordable price. With regard to chemical composition, almond oil is characterized by its low content of saturated fatty acids and the predominance of monounsaturated ones, especially oleic acid. Furthermore, almond oil contains antioxidants and fat-soluble bioactive compounds that give it interesting nutritional and cosmetic properties.

  20. Virgin almond oil: Extraction methods and composition

    International Nuclear Information System (INIS)

    Roncero, J.M.; Alvarez-Orti, M.; Pardo-Gimenez, A.; Gomez, R.; Rabadan, A.; Pardo, J.E.

    2016-01-01

    In this paper, the extraction methods for virgin almond oil and its chemical composition are reviewed. The most common methods for obtaining the oil are solvent extraction, extraction with supercritical fluids (CO2) and pressure systems (hydraulic and screw presses). The best industrial performance, but also the worst oil quality, is achieved by using solvents. Oils obtained by this method cannot be considered virgin oils, as they are obtained through chemical treatments. Supercritical fluid extraction results in higher-quality oils, but at a very high price. Extraction by pressing is thus the best option for obtaining high-quality oils at an affordable price. With regard to chemical composition, almond oil is characterized by its low content of saturated fatty acids and the predominance of monounsaturated ones, especially oleic acid. Furthermore, almond oil contains antioxidants and fat-soluble bioactive compounds that give it interesting nutritional and cosmetic properties.

  1. Influence of extraction methods on the hepatotoxicity of Azadirachta ...

    African Journals Online (AJOL)

    The influence of three extraction methods (cold aqueous, CA; hot aqueous, HA; and alcoholic extraction, AE) on the hepatotoxic effect of Azadirachta indica bark extract (ABC) was investigated using albino rats. A total of forty-eight rats were divided equally into three groups of sixteen rats, one for each extraction method.

  2. Automatic QRS complex detection using two-level convolutional neural network.

    Science.gov (United States)

    Xiang, Yande; Lin, Zhitao; Meng, Jianyi

    2018-01-29

    The QRS complex is the most noticeable feature in the electrocardiogram (ECG) signal; therefore, its detection is critical for ECG signal analysis. Existing detection methods largely depend on hand-crafted features and parameters, which may introduce significant computational complexity, especially in transform domains. In addition, fixed features and parameters are not suitable for detecting the various kinds of QRS complexes that arise under different circumstances. In this study, an accurate method for QRS complex detection based on a 1-D convolutional neural network (CNN) is proposed. The CNN consists of object-level and part-level CNNs for automatically extracting ECG morphological features at different granularities. All the extracted morphological features are used by a multi-layer perceptron (MLP) for QRS complex detection. Additionally, a simple ECG signal preprocessing technique containing only a difference operation in the temporal domain is adopted. On the MIT-BIH arrhythmia (MIT-BIH-AR) database, the proposed detection method achieves an overall sensitivity Sen = 99.77%, positive predictivity rate PPR = 99.91%, and detection error rate DER = 0.32%. In addition, performance is assessed for different signal-to-noise ratio (SNR) values. In summary, an automatic QRS detection method using a two-level 1-D CNN and a simple signal preprocessing technique is proposed. Compared with state-of-the-art QRS complex detection approaches, experimental results show that the proposed method achieves comparable accuracy.
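
    A hedged PyTorch sketch of the two ingredients described: the temporal-difference preprocessing and a small 1-D CNN feeding an MLP head. The layer sizes and the windowed QRS/non-QRS formulation are illustrative, not the paper's architecture.

    ```python
    # Temporal-difference preprocessing plus a small 1-D CNN with an MLP head.
    import torch
    import torch.nn as nn

    class QRSDetector(nn.Module):
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(8),
            )
            self.mlp = nn.Sequential(
                nn.Flatten(), nn.Linear(32 * 8, 64), nn.ReLU(),
                nn.Linear(64, 2),            # QRS / non-QRS for a window
            )

        def forward(self, x):
            return self.mlp(self.features(x))

    ecg = torch.randn(4, 1, 360)             # batch of 1-second ECG windows
    diff = ecg[:, :, 1:] - ecg[:, :, :-1]    # simple difference preprocessing
    print(QRSDetector()(diff).shape)         # torch.Size([4, 2])
    ```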

  3. An Overview of Biomolecular Event Extraction from Scientific Documents.

    Science.gov (United States)

    Vanegas, Jorge A; Matos, Sérgio; González, Fabio; Oliveira, José L

    2015-01-01

    This paper presents a review of state-of-the-art approaches to automatic extraction of biomolecular events from scientific texts. Events involving biomolecules such as genes, transcription factors, or enzymes, for example, have a central role in biological processes and functions and provide valuable information for describing physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, including support for information retrieval, knowledge summarization, and information extraction and discovery. However, automatic event extraction is a challenging task due to the ambiguity and diversity of natural language and higher-level linguistic phenomena, such as speculations and negations, which occur in biological texts and can lead to misunderstanding or incorrect interpretation. Many strategies have been proposed in the last decade, originating from different research areas such as natural language processing, machine learning, and statistics. This review summarizes the most representative approaches in biomolecular event extraction and presents an analysis of the current state of the art and of commonly used methods, features, and tools. Finally, current research trends and future perspectives are also discussed.

  4. An Overview of Biomolecular Event Extraction from Scientific Documents

    Directory of Open Access Journals (Sweden)

    Jorge A. Vanegas

    2015-01-01

    This paper presents a review of state-of-the-art approaches to automatic extraction of biomolecular events from scientific texts. Events involving biomolecules such as genes, transcription factors, or enzymes, for example, have a central role in biological processes and functions and provide valuable information for describing physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, including support for information retrieval, knowledge summarization, and information extraction and discovery. However, automatic event extraction is a challenging task due to the ambiguity and diversity of natural language and higher-level linguistic phenomena, such as speculations and negations, which occur in biological texts and can lead to misunderstanding or incorrect interpretation. Many strategies have been proposed in the last decade, originating from different research areas such as natural language processing, machine learning, and statistics. This review summarizes the most representative approaches in biomolecular event extraction and presents an analysis of the current state of the art and of commonly used methods, features, and tools. Finally, current research trends and future perspectives are also discussed.

  5. Automatic Parking Based on a Bird's Eye View Vision System

    Directory of Open Access Journals (Sweden)

    Chunxiang Wang

    2014-03-01

    Full Text Available This paper aims at realizing an automatic parking method through a bird's eye view vision system. With this method, vehicles can perform robust and real-time detection and recognition of parking spaces. During the parking process, omnidirectional information about the environment can be obtained from four on-board fisheye cameras around the vehicle, which are the main part of the bird's eye view vision system. To achieve this, a polynomial fisheye distortion model is first used for camera calibration. An image mosaicking method based on the Levenberg-Marquardt algorithm is used to combine four individual images from the fisheye cameras into one omnidirectional bird's eye view image. Secondly, features of the parking spaces are extracted with a Radon transform based method. Finally, double circular trajectory planning and a preview control strategy are utilized to realize autonomous parking. Experimental analysis shows that the proposed method achieves effective and robust real-time results in both parking space recognition and automatic parking.
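    The Radon transform step named above can be illustrated with a short, hedged sketch: finding the dominant line (e.g., a parking-space marking) in a grayscale bird's eye view image. This is a generic illustration, not the authors' implementation.

        import numpy as np
        from skimage.transform import radon

        def dominant_line(gray):
            # Rows of the sinogram index radial offset, columns index angle.
            theta = np.linspace(0.0, 180.0, 180, endpoint=False)
            sinogram = radon(gray, theta=theta)
            r, c = np.unravel_index(np.argmax(sinogram), sinogram.shape)
            # Return the strongest line's angle and offset from image centre.
            return theta[c], r - sinogram.shape[0] // 2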

  6. Research on automatic correction of electronic beam path for distributed control

    International Nuclear Information System (INIS)

    Guo Xin; Su Haijun; Li Deming; Wang Shengli; Guo Honglei

    2014-01-01

    Background: The dynamitron, a high-voltage electron irradiation accelerator, is used as a radiation source for industrial and agricultural production. The control system is an important component of the dynamitron. Purpose: The aim is to improve the control system to meet the dynamitron's performance requirements for stability. Methods: A distributed control system for the 1.5-MeV dynamitron is proposed to gain better performance. On this basis, an automatic electron beam trajectory correction method based on a Cerebellar Model Articulation Controller combined with a Proportional-Integral-Derivative (CMAC-PID) controller is designed to improve the electron beam extraction system. Results: The distributed control system can meet the control requirements of the accelerator. The stability of the CMAC-PID controller is better than that of a conventional PID controller for automatic electron beam trajectory correction, and hence the CMAC-PID controller can provide better protection of the dynamitron when electron beam deflection occurs. Conclusion: The distributed control system and the automatic electron beam trajectory correction method can effectively improve the performance and reduce the failure probability of the accelerator, thereby enhancing its efficiency. (authors)
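    For reference, the conventional PID baseline mentioned in the comparison can be sketched in a few lines; the CMAC part, a learned feedforward correction added on top of this loop, is omitted here, and the gains are placeholder assumptions.

        class PID:
            def __init__(self, kp, ki, kd, dt):
                self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
                self.integral = 0.0
                self.prev_err = 0.0

            def step(self, setpoint, measured):
                # Standard discrete PID update on the tracking error.
                err = setpoint - measured
                self.integral += err * self.dt
                deriv = (err - self.prev_err) / self.dt
                self.prev_err = err
                return self.kp * err + self.ki * self.integral + self.kd * deriv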

  7. Extraction method

    International Nuclear Information System (INIS)

    Stary, J.; Kyrs, M.; Navratil, J.; Havelka, S.; Hala, J.

    1975-01-01

    Definitions of the basic terms and relations are given, and current knowledge of the possibilities for the extraction of elements, oxides, covalently bound halogenides and heteropolyacids is described. Greatest attention is devoted to a detailed analysis of the extraction of chelates and ion associates using diverse agents. For both types of compounds, detailed separation conditions are given and the effects of the individual factors are listed. Attention is also devoted to extractions using mixtures of organic agents and their synergic effects, and to extractions in non-aqueous solvents. The effects of radiation on extraction and the main types of apparatus used for laboratory extractions are described. (L.K.)

  8. A Method Based on Artificial Intelligence To Fully Automatize The Evaluation of Bovine Blastocyst Images.

    Science.gov (United States)

    Rocha, José Celso; Passalia, Felipe José; Matos, Felipe Delestro; Takahashi, Maria Beatriz; Ciniciato, Diego de Souza; Maserati, Marc Peter; Alves, Mayra Fernanda; de Almeida, Tamie Guibu; Cardoso, Bruna Lopes; Basso, Andrea Cristina; Nogueira, Marcelo Fábio Gouveia

    2017-08-09

    Morphological analysis is the standard method of assessing embryo quality; however, its inherent subjectivity tends to generate discrepancies among evaluators. Using genetic algorithms and artificial neural networks (ANNs), we developed a new method for embryo analysis that is more robust and reliable than standard methods. Bovine blastocysts produced in vitro were classified as grade 1 (excellent or good), 2 (fair), or 3 (poor) by three experienced embryologists according to the International Embryo Technology Society (IETS) standard. The images (n = 482) were subjected to automatic feature extraction, and the results were used as input for a supervised learning process. One part of the dataset (15%) was used for a blind test after fitting, for which the system had an accuracy of 76.4%. Interestingly, when the same embryologists evaluated a sub-sample (10%) of the dataset, there was only 54.0% agreement with the standard (mode for grades). However, when using the ANN to assess this sub-sample, there was 87.5% agreement with the modal values obtained by the evaluators. The presented methodology is covered by National Institute of Industrial Property (INPI) and World Intellectual Property Organization (WIPO) patents and is currently undergoing a commercial evaluation of its feasibility.

  9. Analysis of separation test for automatic brake adjuster based on linear radon transformation

    Science.gov (United States)

    Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi

    2015-01-01

    The linear Radon transformation is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transformation has a strong ability of anti-noise and anti-interference by fitting the online test curve in several parts, which makes it easy to handle consecutive inflection points. We applied the linear Radon transformation to the separation test system to determine the separating clearance of the automatic brake adjuster. The experimental results show that the feature point extraction error of the gradient maximum optimal method is approximately ±0.100, while the feature point extraction error of the linear Radon transformation method can reach ±0.010, a lower error than the former. In addition, the linear Radon transformation is robust.

  10. Extractive Summarisation of Medical Documents

    OpenAIRE

    Abeed Sarker; Diego Molla; Cecile Paris

    2012-01-01

    Background Evidence Based Medicine (EBM) practice requires practitioners to extract evidence from published medical research when answering clinical queries. Due to the time-consuming nature of this practice, there is a strong motivation for systems that can automatically summarise medical documents and help practitioners find relevant information. Aim The aim of this work is to propose an automatic query-focused, extractive summarisation approach that selects informative sentences from medic...

  11. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist Hospital, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself among the upper and lower level codes of the selected one that were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. By a similar fashion of organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by the same program, and incorporation of this program into other data processing programs was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for other programs. Therefore, this program can be used for automation of routine work in the department of radiology.

  12. Automatic emotional expression analysis from eye area

    Science.gov (United States)

    Akkoç, Betül; Arslan, Ahmet

    2015-02-01

    Eyes play an important role in expressing emotions in nonverbal communication. In the present study, emotional expression classification was performed based on features that were automatically extracted from the eye area. First, the face area and the eye area were automatically extracted from the captured image. Afterwards, the parameters to be used for the analysis were obtained from the eye area through discrete wavelet transformation. Using these parameters, emotional expression analysis was performed through artificial intelligence techniques. As a result of the experimental studies, 6 universal emotions consisting of expressions of happiness, sadness, surprise, disgust, anger and fear were classified at a success rate of 84% using artificial neural networks.

  13. Shadow Analysis Technique for Extraction of Building Height using High Resolution Satellite Single Image and Accuracy Assessment

    Science.gov (United States)

    Raju, P. L. N.; Chaudhary, H.; Jha, A. K.

    2014-11-01

    High resolution satellite data with metadata information are used to extract the height of buildings using shadows. The proposed approach is divided into two phases: 1) rooftop and shadow extraction, and 2) height estimation. First, the rooftop and shadow regions were extracted by manual/automatic methods using Example-Based and Rule-Based approaches. After feature extraction, the next step is to estimate the height of the building from the rooftop in association with its shadow using the Ratio Method and the relation between sun and satellite geometry. The performance analysis shows a total mean height error of 0.67 m for the Ratio Method, 1.51 m for the Example-Based approach and 0.96 m for the Rule-Based approach. The analysis concluded that the Ratio Method, i.e. the manual method, is best for height estimation but is time consuming, so the automatic Rule-Based approach is preferable to the Example-Based approach, which requires more knowledge, needs more training samples, and slows the processing rate of the method.
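    In the simplest flat-terrain case, the Ratio Method reduces to trigonometry on the sun-satellite geometry: building height equals the measured shadow length times the tangent of the sun elevation angle. A hedged sketch, ignoring the sensor viewing-angle corrections a real scene would need:

        import math

        def building_height(shadow_len_m, sun_elevation_deg):
            # h = L * tan(elevation); e.g. a 20 m shadow at 40 degrees
            # of sun elevation gives a height of about 16.8 m.
            return shadow_len_m * math.tan(math.radians(sun_elevation_deg))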

  14. DNA extraction on bio-chip: history and preeminence over conventional and solid-phase extraction methods.

    Science.gov (United States)

    Ayoib, Adilah; Hashim, Uda; Gopinath, Subash C B; Md Arshad, M K

    2017-11-01

    This review covers the developmental progression from early to modern taxonomy at the cellular level following the advent of electron microscopy, and the advancement of deoxyribonucleic acid (DNA) extraction for the elaboration of biological classification at the DNA level. Here, we discuss the fundamental values of conventional chemical methods of DNA extraction using liquid/liquid extraction (LLE), followed by the development of solid-phase extraction (SPE) methods, as well as recent advances in microfluidic device-based systems for DNA extraction on-chip. We also discuss the importance of DNA extraction, the advantages over conventional chemical methods, and how the Lab-on-a-Chip (LOC) system plays a crucial role in future achievements.

  15. Fast Automatic Airport Detection in Remote Sensing Images Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Fen Chen

    2018-03-01

    Full Text Available Fast and automatic detection of airports from remote sensing images is useful for many military and civilian applications. In this paper, a fast automatic detection method is proposed to detect airports from remote sensing images based on convolutional neural networks using the Faster R-CNN algorithm. This method first applies a convolutional neural network to generate candidate airport regions. Based on the features extracted from these proposals, it then uses another convolutional neural network to perform airport detection. By taking the typical elongated linear geometric shape of airports into consideration, some specific improvements to the method are proposed. These approaches successfully improve the quality of positive samples and achieve a better accuracy in the final detection results. Experimental results on an airport dataset, Landsat 8 images, and a Gaofen-1 satellite scene demonstrate the effectiveness and efficiency of the proposed method.

  16. Natural colorants: Pigment stability and extraction yield enhancement via utilization of appropriate pretreatment and extraction methods.

    Science.gov (United States)

    Ngamwonglumlert, Luxsika; Devahastin, Sakamon; Chiewchan, Naphaporn

    2017-10-13

    Natural colorants from plant-based materials have gained increasing popularity due to the health consciousness of consumers. Among the many steps involved in the production of natural colorants, pigment extraction is one of the most important. Soxhlet extraction, maceration, and hydrodistillation are conventional methods that have been widely used in industry and in the laboratory for such a purpose. Recently, various non-conventional methods, such as supercritical fluid extraction, pressurized liquid extraction, microwave-assisted extraction, ultrasound-assisted extraction, pulsed-electric field extraction, and enzyme-assisted extraction, have emerged as alternatives to conventional methods due to their advantages in terms of smaller solvent consumption, shorter extraction time, and greater environmental friendliness. Prior to the extraction step, pretreatment of plant materials to enhance the stability of natural pigments is another important step that must be carefully managed. In this paper, a comprehensive review of appropriate pretreatment and extraction methods for chlorophylls, carotenoids, betalains, and anthocyanins, which are the major classes of plant pigments, is provided using pigment stability and extraction yield as assessment criteria.

  17. A Simple Method for Automated Solid Phase Extraction of Water Samples for Immunological Analysis of Small Pollutants.

    Science.gov (United States)

    Heub, Sarah; Tscharner, Noe; Kehl, Florian; Dittrich, Petra S; Follonier, Stéphane; Barbe, Laurent

    2016-01-01

    A new method for solid phase extraction (SPE) of environmental water samples is proposed. The developed prototype is cost-efficient and user friendly, and enables rapid, automated and simple SPE. The pre-concentrated solution is compatible with analysis by immunoassay, with a low organic solvent content. A method is described for the extraction and pre-concentration of the natural hormone 17β-estradiol in 100 ml water samples. Reverse phase SPE is performed with octadecyl-silica sorbent and elution is done with 200 µl of methanol 50% v/v. The eluent is diluted by adding de-ionized water to lower the amount of methanol. After manually preparing the SPE column, the overall procedure is performed automatically within 1 hr. At the end of the process, the estradiol concentration is measured using a commercial enzyme-linked immunosorbent assay (ELISA). 100-fold pre-concentration is achieved and the methanol content is only 10% v/v. Full recoveries of the molecule are achieved with 1 ng/L spiked de-ionized and synthetic sea water samples.

  18. A multi-atlas based method for automated anatomical rat brain MRI segmentation and extraction of PET activity.

    Science.gov (United States)

    Lancelot, Sophie; Roche, Roxane; Slimen, Afifa; Bouillot, Caroline; Levigoureux, Elise; Langlois, Jean-Baptiste; Zimmer, Luc; Costes, Nicolas

    2014-01-01

    Preclinical in vivo imaging requires precise and reproducible delineation of brain structures. Manual segmentation is time consuming and operator dependent. Automated segmentation, as usually performed via single atlas registration, fails to account for anatomo-physiological variability. We present, evaluate, and make available a multi-atlas approach for automatically segmenting rat brain MRI and extracting PET activities. High-resolution 7T 2DT2 MR images of 12 Sprague-Dawley rat brains were manually segmented into 27-VOI label volumes using detailed protocols. Automated methods were developed with 7/12 atlas datasets, i.e. the MRIs and their associated label volumes. MRIs were registered to a common space, where an MRI template and a maximum probability atlas were created. Three automated methods were tested: (1) registering individual MRIs to the template and using a single atlas (SA), (2) using the maximum probability atlas (MP), and (3) registering the MRIs from the multi-atlas dataset to an individual MRI, propagating the label volumes and fusing them in individual MRI space (propagation & fusion, PF). Evaluation was performed on the five remaining rats, which additionally underwent [18F]FDG PET. Automated and manual segmentations were compared for morphometric performance (assessed by comparing volume bias and Dice overlap index) and functional performance (evaluated by comparing extracted PET measures). Only the SA method showed volume bias. Dice indices were significantly different between methods (PF>MP>SA). PET regional measures were more accurate with the multi-atlas methods than with the SA method. Multi-atlas methods outperform SA for automated anatomical brain segmentation and PET measure extraction. They perform comparably to manual segmentation for FDG-PET quantification. Multi-atlas methods are suitable for rapid reproducible VOI analyses.
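    The propagation & fusion (PF) idea lends itself to a compact illustration: once each atlas's label volume has been warped into the target space, the labels can be fused by per-voxel majority voting. The sketch below assumes the registration has already been done and shows a generic fusion rule, not necessarily the exact one used in the study.

        import numpy as np

        def fuse_labels(propagated):
            # propagated: list of integer label volumes, all in target space.
            stacked = np.stack(propagated, axis=0)  # (n_atlases, x, y, z)
            flat = stacked.reshape(stacked.shape[0], -1)
            # Majority vote per voxel (slow but explicit).
            fused = np.array([np.bincount(col).argmax() for col in flat.T])
            return fused.reshape(stacked.shape[1:])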

  19. Extraction and identification of cyclobutanones from irradiated cheese employing a rapid direct solvent extraction method.

    Science.gov (United States)

    Tewfik, Ihab

    2008-01-01

    2-Alkylcyclobutanones (cyclobutanones) are accepted as chemical markers for irradiated foods containing lipid. However, current extraction procedures (Soxhlet-florisil chromatography) for the isolation of these markers involve a long and tedious clean-up regime prior to gas chromatography-mass spectrometry identification. This paper outlines an alternative isolation and clean-up method for the extraction of cyclobutanones in irradiated Camembert cheese. The newly developed direct solvent extraction method enables the efficient screening of large numbers of food samples and is not as resource intensive as the BS EN 1785:1997 method. Direct solvent extraction appears to be a simple, robust method and has the added advantage of a considerably shorter extraction time for the analysis of foods containing lipid.

  20. Automatic differentiation bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. [comp.

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.
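    The core idea, propagating exact derivative values through the chain rule rather than differentiating symbolically or numerically, fits in a few lines of Python using dual numbers. A minimal forward-mode sketch covering only addition and multiplication:

        class Dual:
            """A value paired with its derivative (forward-mode AD)."""
            def __init__(self, val, dot=0.0):
                self.val, self.dot = val, dot

            def __add__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.val + other.val, self.dot + other.dot)
            __radd__ = __add__

            def __mul__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                # Product rule: (uv)' = u'v + uv'
                return Dual(self.val * other.val,
                            self.dot * other.val + self.val * other.dot)
            __rmul__ = __mul__

        def f(x):
            return x * x + 3 * x + 1  # f'(x) = 2x + 3

        x = Dual(2.0, 1.0)            # seed dx/dx = 1
        print(f(x).val, f(x).dot)     # 11.0 7.0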

  1. Brain extraction in partial volumes T2*@7T by using a quasi-anatomic segmentation with bias field correction.

    Science.gov (United States)

    Valente, João; Vieira, Pedro M; Couto, Carlos; Lima, Carlos S

    2018-02-01

    Poor brain extraction in Magnetic Resonance Imaging (MRI) has negative consequences for several types of post-extraction processing, such as tissue segmentation and related statistical measures or pattern recognition algorithms. Current state-of-the-art algorithms for brain extraction work on T1- and T2-weighted images and are not adequate for non-whole-brain images such as T2*FLASH@7T partial volumes. This paper proposes two new methods that work directly on T2*FLASH@7T partial volumes. The first is an improvement of the semi-automatic threshold-with-morphology approach adapted to incomplete volumes. The second method uses an improved version of a current implementation of the fuzzy c-means algorithm with bias correction for brain segmentation. Under high inhomogeneity conditions the performance of the first method degrades, requiring user intervention, which is unacceptable. The second method performed well for all volumes, being entirely automatic. State-of-the-art algorithms for brain extraction are mainly semi-automatic, requiring correct initialization by the user and knowledge of the software. These methods cannot deal with partial volumes and/or need information from an atlas, which is not available for T2*FLASH@7T. Also, combined volumes suffer from manipulations such as re-sampling, which significantly deteriorates voxel intensity structures, making segmentation tasks difficult. The proposed method can overcome all these difficulties, reaching good results for brain extraction using only T2*FLASH@7T volumes. The development of this work will lead to an improvement of automatic brain lesion segmentation in T2*FLASH@7T volumes, which becomes more important when lesions such as those of cortical Multiple Sclerosis need to be detected. Copyright © 2017 Elsevier B.V. All rights reserved.
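    As a hedged sketch of the second method's core, standard fuzzy c-means on voxel intensities is shown below; the bias field estimation that the paper adds is omitted, and m is the usual fuzziness exponent.

        import numpy as np

        def fcm(intensities, n_clusters=3, m=2.0, n_iter=50, eps=1e-9):
            x = intensities.reshape(-1, 1).astype(float)   # (n, 1)
            rng = np.random.default_rng(0)
            u = rng.random((n_clusters, len(x)))           # memberships
            u /= u.sum(axis=0)
            for _ in range(n_iter):
                um = u ** m
                centers = (um @ x) / um.sum(axis=1, keepdims=True)
                d = np.abs(x.T - centers) + eps            # (c, n) distances
                u = d ** (-2.0 / (m - 1.0))                # update memberships
                u /= u.sum(axis=0)
            return centers.ravel(), u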

  2. A comparison of five extraction methods for extracellular polymeric ...

    African Journals Online (AJOL)

    Two physical methods (centrifugation and ultrasonication) and three chemical methods (extraction with EDTA, extraction with formaldehyde, and extraction with formaldehyde plus NaOH) for the extraction of EPS from an alga-bacteria biofilm were assessed. Pretreatment with ultrasound at low intensity doubled the EPS yield without ...

  3. Comparison of Document Index Graph Using TextRank and HITS Weighting Method in Automatic Text Summarization

    Science.gov (United States)

    Hadyan, Fadhlil; Shaufiah; Arif Bijaksana, Moch.

    2017-01-01

    Automatic summarization can help someone grasp the core information of a long text instantly by producing a summary automatically. Many summarization systems have already been developed, but many problems remain. This final project proposes a summarization method using a document index graph. The method adapts the PageRank and HITS formulas, originally used to assess web pages, to assess the words and sentences of a text document. The expected outcome of this final project is a system that can summarize a single document by utilizing a document index graph with TextRank and HITS to automatically improve the quality of the summary results.
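    The TextRank half of the idea can be sketched generically: build a sentence similarity graph and rank sentences with PageRank. The sketch below uses plain Jaccard word overlap as the edge weight; the document index graph and the HITS variant from the record are not reproduced.

        import itertools
        import networkx as nx

        def summarise(sentences, top_k=2):
            def sim(a, b):  # Jaccard overlap of word sets
                wa, wb = set(a.lower().split()), set(b.lower().split())
                return len(wa & wb) / (len(wa | wb) or 1)

            g = nx.Graph()
            g.add_nodes_from(range(len(sentences)))
            for i, j in itertools.combinations(range(len(sentences)), 2):
                w = sim(sentences[i], sentences[j])
                if w > 0:
                    g.add_edge(i, j, weight=w)
            scores = nx.pagerank(g, weight="weight")
            best = sorted(scores, key=scores.get, reverse=True)[:top_k]
            return [sentences[i] for i in sorted(best)]  # original order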

  4. Effects of Different Extraction Methods and Conditions on the Phenolic Composition of Mate Tea Extracts

    Directory of Open Access Journals (Sweden)

    Jelena Vladic

    2012-03-01

    Full Text Available A simple and rapid HPLC method for determination of chlorogenic acid (5-O-caffeoylquinic acid) in mate tea extracts was developed and validated. The chromatography used isocratic elution with a mobile phase of aqueous 1.5% acetic acid-methanol (85:15, v/v). The flow rate was 0.8 mL/min and detection was by UV at 325 nm. The method showed good selectivity, accuracy, repeatability and robustness, with a detection limit of 0.26 mg/L and recovery of 97.76%. The developed method was applied for the determination of chlorogenic acid in mate tea extracts obtained by ethanol extraction and liquid carbon dioxide extraction with ethanol as co-solvent. Different ethanol concentrations were used (40, 50 and 60%, v/v), and liquid CO2 extraction was performed at different pressures (50 and 100 bar) and constant temperature (27 ± 1 °C). Significant influence of extraction methods, conditions and solvent polarity on the chlorogenic acid content, antioxidant activity and total phenolic and flavonoid content of mate tea extracts was established. The most efficient extraction solvent was liquid CO2 with aqueous ethanol (40%) as co-solvent using an extraction pressure of 100 bar.

  5. A novel automatic quantification method for high-content screening analysis of DNA double strand-break response.

    Science.gov (United States)

    Feng, Jingwen; Lin, Jie; Zhang, Pengquan; Yang, Songnan; Sa, Yu; Feng, Yuanming

    2017-08-29

    High-content screening is commonly used in studies of the DNA damage response. The double-strand break (DSB) is one of the most harmful types of DNA damage lesions. The conventional method used to quantify DSBs is γH2AX foci counting, which requires manual adjustment and preset parameters and is usually regarded as imprecise, time-consuming, poorly reproducible, and inaccurate. Therefore, a robust automatic alternative method is highly desired. In this manuscript, we present a new method for quantifying DSBs which involves automatic image cropping, automatic foci-segmentation and fluorescent intensity measurement. Furthermore, an additional function was added for standardizing the measurement of DSB response inhibition based on co-localization analysis. We tested the method with a well-known inhibitor of DSB response. The new method requires only one preset parameter, which effectively minimizes operator-dependent variations. Compared with conventional methods, the new method detected a higher percentage difference of foci formation between different cells, which can improve measurement accuracy. The effects of the inhibitor on DSB response were successfully quantified with the new method (p = 0.000). The advantages of this method in terms of reliability, automation and simplicity show its potential in quantitative fluorescence imaging studies and high-content screening for compounds and factors involved in DSB response.

  6. Automatic MPST-cut for segmentation of carpal bones from MR volumes.

    Science.gov (United States)

    Gemme, Laura; Nardotto, Sonia; Dellepiane, Silvana G

    2017-08-01

    In the context of rheumatic diseases, several studies suggest that Magnetic Resonance Imaging (MRI) allows the detection of the three main signs of Rheumatoid Arthritis (RA) at higher sensitivities than available through conventional radiology. The rapid, accurate segmentation of bones is an essential preliminary step for quantitative diagnosis, erosion evaluation, and multi-temporal data fusion. In the present paper, a new, semi-automatic, 3D graph-based segmentation method to extract carpal bone data is proposed. The method is unsupervised, does not employ any a priori model or knowledge, and is adaptive to the individual variability of the acquired data. After selecting one source point inside the Region of Interest (ROI), a segmentation process is initiated, which consists of two automatic stages: a cost-labeling phase and a graph-cutting phase. The algorithm finds optimal paths based on a new cost function by creating a Minimum Path Spanning Tree (MPST). To extract the region, a cut of the obtained tree is necessary. A new criterion of the MPST-cut based on compactness shape factor was conceived and developed. The proposed approach is applied to a large database of 96 T1-weighted MR bone volumes. Performance quality is evaluated by comparing the results with gold-standard bone volumes manually defined by rheumatologists through the computation of metrics extracted from the confusion matrix. Furthermore, comparisons with the existing literature are carried out. The results show that this method is efficient and provides satisfactory performance for bone segmentation on low-field MR volumes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Automatic Atrial Fibrillation Detection: A Novel Approach Using Discrete Wavelet Transform and Heart Rate Variability

    DEFF Research Database (Denmark)

    Bruun, Iben H.; Hissabu, Semira M. S.; Poulsen, Erik S.

    2017-01-01

    be used as a screening tool for patients suspected to have AF. The method includes automatic peak detection prior to the feature extraction, as well as a noise cancellation technique followed by a bagged tree classification. Simulation studies on the MIT-BIH Atrial Fibrillation database were performed...

  8. Semi-Automatic Registration of Airborne and Terrestrial Laser Scanning Data Using Building Corner Matching with Boundaries as Reliability Check

    Directory of Open Access Journals (Sweden)

    Liang Cheng

    2013-11-01

    Full Text Available Data registration is a prerequisite for the integration of multi-platform laser scanning in various applications. A new approach is proposed for the semi-automatic registration of airborne and terrestrial laser scanning data with buildings without eaves. Firstly, an automatic calculation procedure for thresholds in the density of projected points (DoPP) method is introduced to extract boundary segments from terrestrial laser scanning data. A new algorithm, using a self-extending procedure, is developed to recover the extracted boundary segments, which then intersect to form the corners of buildings. The building corners extracted from airborne and terrestrial laser scanning are reliably matched through an automatic iterative process in which boundaries from the two datasets are compared as a reliability check. The experimental results illustrate that the proposed approach provides both high reliability and high geometric accuracy (average error of 0.44 m/0.15 m in the horizontal/vertical direction) for corresponding building corners for the final registration of airborne laser scanning (ALS) and tripod-mounted terrestrial laser scanning (TLS) data.

  9. Automatic Mosaicking of Satellite Imagery Considering the Clouds

    Science.gov (United States)

    Kang, Yifei; Pan, Li; Chen, Qi; Zhang, Tong; Zhang, Shasha; Liu, Zhang

    2016-06-01

    With the rapid development of high resolution remote sensing for earth observation technology, satellite imagery is widely used in the fields of resource investigation, environment protection, and agricultural research. Image mosaicking is an important part of satellite imagery production. However, the existence of clouds introduces significant problems for automatic image mosaicking, mainly in two respects: 1) image blurring may be caused during the process of image dodging, and 2) cloudy areas may be passed through by automatically generated seamlines. To address these problems, an automatic mosaicking method is proposed for cloudy satellite imagery in this paper. Firstly, modified Otsu thresholding and morphological processing are employed to extract cloudy areas and obtain the percentage of cloud cover. Then, the cloud detection results are used to optimize the dodging and mosaicking processes. Thus, the mosaic image can be composed of more clear-sky areas instead of cloudy areas, and the clear-sky areas remain sharp and distortion-free. The Chinese GF-1 wide-field-of-view orthoimages are employed as experimental data. The performance of the proposed approach is evaluated in four aspects: the effect of cloud detection, the sharpness of clear-sky areas, the rationality of seamlines, and efficiency. The evaluation results demonstrate that the mosaic image obtained by our method has fewer clouds, better internal color consistency and better visual clarity than that obtained by the traditional method. The time consumed by the proposed method for 17 scenes of GF-1 orthoimages is within 4 hours on a desktop computer. This efficiency can meet the general production requirements for massive satellite imagery.
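    The cloud-masking step can be illustrated with a short sketch: plain Otsu thresholding (standing in for the modified version used in the paper) followed by morphological clean-up, yielding a mask and the cloud-cover fraction.

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.morphology import binary_opening, disk, remove_small_objects

        def cloud_mask(band):
            mask = band > threshold_otsu(band)        # clouds are bright
            mask = binary_opening(mask, disk(3))      # drop speckle
            mask = remove_small_objects(mask, min_size=64)
            cover = mask.mean()                       # fraction of cloudy pixels
            return mask, cover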

  10. System and method for free-boundary surface extraction

    KAUST Repository

    Algarni, Marei

    2017-10-26

    A method of extracting surfaces in three-dimensional data includes receiving as inputs three-dimensional data and a seed point p located on a surface to be extracted. The method further includes propagating a front outwardly from the seed point p and extracting a plurality of ridge curves based on the propagated front. A surface boundary is detected based on a comparison of distances between adjacent ridge curves and the desired surface is extracted based on the detected surface boundary.

  11. Evaluation of an automatic MR-based gold fiducial marker localisation method for MR-only prostate radiotherapy

    Science.gov (United States)

    Maspero, Matteo; van den Berg, Cornelis A. T.; Zijlstra, Frank; Sikkes, Gonda G.; de Boer, Hans C. J.; Meijer, Gert J.; Kerkmeijer, Linda G. W.; Viergever, Max A.; Lagendijk, Jan J. W.; Seevinck, Peter R.

    2017-10-01

    An MR-only radiotherapy planning (RTP) workflow would reduce the cost, radiation exposure and uncertainties introduced by CT-MRI registrations. In the case of prostate treatment, one of the remaining challenges currently holding back the implementation of an RTP workflow is the MR-based localisation of intraprostatic gold fiducial markers (FMs), which is crucial for accurate patient positioning. Currently, MR-based FM localisation is clinically performed manually. This is sub-optimal, as manual interaction increases the workload. Attempts to perform automatic FM detection often rely on being able to detect signal voids induced by the FMs in magnitude images. However, signal voids may not always be sufficiently specific, hampering accurate and robust automatic FM localisation. Here, we present an approach that aims at automatic MR-based FM localisation. This method is based on template matching using a library of simulated complex-valued templates, and exploiting the behaviour of the complex MR signal in the vicinity of the FM. Clinical evaluation was performed on seventeen prostate cancer patients undergoing external beam radiotherapy treatment. Automatic MR-based FM localisation was compared to manual MR-based and semi-automatic CT-based localisation (the current gold standard) in terms of detection rate and the spatial accuracy and precision of localisation. The proposed method correctly detected all three FMs in 15/17 patients. The spatial accuracy (mean) and precision (STD) were 0.9 mm and 0.5 mm respectively, which is below the voxel size of 1.1 × 1.1 × 1.2 mm3 and comparable to MR-based manual localisation. FM localisation failed (3/51 FMs) in the presence of bleeding or calcifications in the direct vicinity of the FM. The method was found to be spatially accurate and precise, which is essential for clinical use. To overcome any missed detection, we envision the use of the proposed method along with verification by an observer. This will result in a

  12. Evaluation of an automatic MR-based gold fiducial marker localisation method for MR-only prostate radiotherapy.

    Science.gov (United States)

    Maspero, Matteo; van den Berg, Cornelis A T; Zijlstra, Frank; Sikkes, Gonda G; de Boer, Hans C J; Meijer, Gert J; Kerkmeijer, Linda G W; Viergever, Max A; Lagendijk, Jan J W; Seevinck, Peter R

    2017-10-03

    An MR-only radiotherapy planning (RTP) workflow would reduce the cost, radiation exposure and uncertainties introduced by CT-MRI registrations. In the case of prostate treatment, one of the remaining challenges currently holding back the implementation of an RTP workflow is the MR-based localisation of intraprostatic gold fiducial markers (FMs), which is crucial for accurate patient positioning. Currently, MR-based FM localisation is clinically performed manually. This is sub-optimal, as manual interaction increases the workload. Attempts to perform automatic FM detection often rely on being able to detect signal voids induced by the FMs in magnitude images. However, signal voids may not always be sufficiently specific, hampering accurate and robust automatic FM localisation. Here, we present an approach that aims at automatic MR-based FM localisation. This method is based on template matching using a library of simulated complex-valued templates, and exploiting the behaviour of the complex MR signal in the vicinity of the FM. Clinical evaluation was performed on seventeen prostate cancer patients undergoing external beam radiotherapy treatment. Automatic MR-based FM localisation was compared to manual MR-based and semi-automatic CT-based localisation (the current gold standard) in terms of detection rate and the spatial accuracy and precision of localisation. The proposed method correctly detected all three FMs in 15/17 patients. The spatial accuracy (mean) and precision (STD) were 0.9 mm and 0.5 mm respectively, which is below the voxel size of 1.1 × 1.1 × 1.2 mm3 and comparable to MR-based manual localisation. FM localisation failed (3/51 FMs) in the presence of bleeding or calcifications in the direct vicinity of the FM. The method was found to be spatially accurate and precise, which is essential for clinical use. To overcome any missed detection, we envision the use of the proposed method along with verification by an observer. This will result in a

  13. Automatic Bowel Motility Evaluation Technique for Noncontact Sound Recordings

    Directory of Open Access Journals (Sweden)

    Ryunosuke Sato

    2018-06-01

    Full Text Available Information on bowel motility can be obtained via magnetic resonance imaging (MRI) and X-ray imaging. However, these approaches require expensive medical instruments and are unsuitable for frequent monitoring. Bowel sounds (BS) can be conveniently obtained using electronic stethoscopes and have recently been employed for the evaluation of bowel motility. More recently, our group proposed a novel method to evaluate bowel motility on the basis of BS acquired using a noncontact microphone. However, the method required manually detecting BS in the sound recordings, and manual segmentation is inconvenient and time consuming. To address this issue, herein, we propose a new method to automatically evaluate bowel motility from noncontact sound recordings. Using simulations for the sound recordings obtained from 20 human participants, we showed that the proposed method achieves an accuracy of approximately 90% in automatic bowel sound detection when acoustic feature power-normalized cepstral coefficients are used as inputs to artificial neural networks. Furthermore, we showed that bowel motility can be evaluated based on three acoustic features in the time domain extracted by our method: BS per minute, signal-to-noise ratio, and sound-to-sound interval. The proposed method has the potential to contribute towards the development of noncontact evaluation methods for bowel motility.

  14. Automatic extraction of soft tissues from 3D MRI head images using model driven analysis

    International Nuclear Information System (INIS)

    Jiang, Hao; Yamamoto, Shinji; Imao, Masanao.

    1995-01-01

    This paper presents an automatic extraction system (called TOPS-3D: Top-Down Parallel Pattern Recognition System for 3D Images) for soft tissues from 3D MRI head images using a model-driven analysis algorithm. As in the construction of the system TOPS we developed previously, two concepts were considered in the design of the system TOPS-3D. One is a system with a hierarchical structure of reasoning using model information at a higher level, and the other is a parallel image processing structure used to extract multiple candidate regions for a destination entity. The new points of the system TOPS-3D are as follows. (1) TOPS-3D is a three-dimensional image analysis system including 3D model construction and 3D image processing techniques. (2) A technique is proposed to increase the connectivity between knowledge processing at the higher level and image processing at the lower level. The technique is realized by applying the opening operation of mathematical morphology, in which a structural model function defined at the higher level by knowledge representation is used directly as the filter function of the opening operation in the lower-level image processing. The system TOPS-3D applied to 3D MRI head images consists of three levels. The first and second levels are the reasoning part, and the third level is the image processing part. In experiments, we applied 5 samples of 3D MRI head images of size 128 x 128 x 128 pixels to the system TOPS-3D to extract the regions of soft tissues such as the cerebrum, cerebellum and brain stem. The experimental results show that the system is robust against variation of the input data thanks to the model information, and that the position and shape of soft tissues are extracted in correspondence with the anatomical structure. (author)

  15. A model based method for automatic facial expression recognition

    NARCIS (Netherlands)

    Kuilenburg, H. van; Wiering, M.A.; Uyl, M. den

    2006-01-01

    Automatic facial expression recognition is a research topic with interesting applications in the field of human-computer interaction, psychology and product marketing. The classification accuracy for an automatic system which uses static images as input is however largely limited by the image

  16. Channel selection for automatic seizure detection

    DEFF Research Database (Denmark)

    Duun-Henriksen, Jonas; Kjaer, Troels Wesenberg; Madsen, Rasmus Elsborg

    2012-01-01

    Objective: To investigate the performance of epileptic seizure detection using only a few of the recorded EEG channels and the ability of software to select these channels compared with a neurophysiologist. Methods: Fifty-nine seizures and 1419 h of interictal EEG are used for training and testing...... of an automatic channel selection method. The characteristics of the seizures are extracted by the use of a wavelet analysis and classified by a support vector machine. The best channel selection method is based upon maximum variance during the seizure. Results: Using only three channels, a seizure detection...... sensitivity of 96% and a false detection rate of 0.14/h were obtained. This corresponds to the performance obtained when channels are selected through visual inspection by a clinical neurophysiologist, and constitutes a 4% improvement in sensitivity compared to seizure detection using channels recorded...

  17. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System

    Directory of Open Access Journals (Sweden)

    Hongqiang Li

    2016-10-01

    Full Text Available Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG) recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses the discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.
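    The frequency-domain part of the multi-domain features can be sketched as a wavelet decomposition of each beat summarised by simple statistics; the kernel-independent component analysis and genetic-algorithm tuning are omitted, and the wavelet choice and statistics are illustrative assumptions.

        import numpy as np
        import pywt
        from sklearn.svm import SVC

        def dwt_features(beat, wavelet="db4", level=4):
            coeffs = pywt.wavedec(beat, wavelet, level=level)
            feats = []
            for c in coeffs:  # approximation band plus detail bands
                feats += [c.mean(), c.std(), np.abs(c).max()]
            return np.array(feats)

        # Downstream, an RBF-kernel SVM would be trained on these vectors:
        # clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)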

  18. A multi-label learning based kernel automatic recommendation method for support vector machine.

    Science.gov (United States)

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, multiple kernel functions may perform equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge database is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then a kernel recommendation model is constructed on the generated meta-knowledge database with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  19. Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data

    Science.gov (United States)

    Yu, Q.; Helmholz, P.; Belton, D.; West, G.

    2014-04-01

    The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammars and rules. Such segmented data can be extracted, e.g., from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, which is a CAD data file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows model reconstruction directly from 3D shapes and takes the whole building into account.

  20. Ontology Assisted Formal Specification Extraction from Text

    Directory of Open Access Journals (Sweden)

    Andreea Mihis

    2010-12-01

    Full Text Available In the field of knowledge processing, ontologies are the most important means. They make it possible for the computer to better understand natural language and to make judgments. In this paper, a method which uses ontologies in the semi-automatic extraction of formal specifications from a natural language text is proposed.

  1. Automatic Tortuosity-Based Retinopathy of Prematurity Screening System

    Science.gov (United States)

    Sukkaew, Lassada; Uyyanonvara, Bunyarit; Makhanov, Stanislav S.; Barman, Sarah; Pangputhipong, Pannet

    Retinopathy of Prematurity (ROP) is an infant disease characterized by increased dilation and tortuosity of the retinal blood vessels. Automatic tortuosity evaluation from retinal digital images is very useful to assist an ophthalmologist in ROP screening and to prevent childhood blindness. This paper proposes a method to automatically classify retinal images as tortuous or non-tortuous. The process imitates expert ophthalmologists' screening by searching for clearly tortuous vessel segments. First, a skeleton of the retinal blood vessels is extracted from the original infant retinal image using a series of morphological operators. Next, we propose to partition the blood vessels recursively using an adaptive linear interpolation scheme. Finally, the tortuosity is calculated based on the curvature of the resulting vessel segments. The retinal images are then classified into two classes using the segments characterized by the highest tortuosity. For an optimal set of training parameters, the prediction accuracy is as high as 100%.
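    A hedged sketch of a curvature-based tortuosity score for one vessel segment, given its centreline as ordered points; the recursive adaptive partitioning described above is not reproduced.

        import numpy as np

        def tortuosity(points):
            # points: (n, 2) array of ordered centreline coordinates.
            dx, dy = np.gradient(points[:, 0]), np.gradient(points[:, 1])
            ddx, ddy = np.gradient(dx), np.gradient(dy)
            # Curvature magnitude of a planar parametric curve.
            kappa = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2 + 1e-12) ** 1.5
            return kappa.mean()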

  2. Evaluation of in vitro antioxidant potential of different polarities stem crude extracts by different extraction methods of Adenium obesum

    Directory of Open Access Journals (Sweden)

    Mohammad Amzad Hossain

    2014-09-01

    Full Text Available Objective: To select the best extraction method for isolating antioxidant compounds from the stems of Adenium obesum. Methods: Two extraction methods were used: Soxhlet and maceration. Methanol was used as the solvent in both methods. The methanol crude extract was defatted with water and extracted successively with hexane, chloroform, ethyl acetate and butanol solvents. The antioxidant potential of all crude extracts was determined using the 1,1-diphenyl-2-picrylhydrazyl method. Results: The extraction yield of the Soxhlet method was higher than that of the maceration method. The antioxidant potential of methanol and its derived fractions obtained by the Soxhlet method was highest in the ethyl acetate and lowest in the hexane crude extracts, in the order ethyl acetate>butanol>water>chloroform>methanol>hexane. However, the antioxidant potential of methanol and its derived fractions obtained by maceration was highest in butanol and lowest in hexane, in the order butanol>methanol>chloroform>water>ethyl acetate>hexane. Conclusions: The results showed that the antioxidant activity of the isolated compounds depended on the extraction method and the extraction conditions.

  3. Localized Segment Based Processing for Automatic Building Extraction from LiDAR Data

    Science.gov (United States)

    Parida, G.; Rajan, K. S.

    2017-05-01

    The current methods of object segmentation, extraction and classification of aerial LiDAR data are manual and tedious tasks. This work proposes a technique for object segmentation from LiDAR data. A bottom-up geometric rule based approach was used initially to devise a way to segment buildings out of the LiDAR datasets. For curved wall surfaces, comparison of localized surface normals was done to segment buildings. The algorithm has been applied to both synthetic datasets and a real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of building objects from a given scene for the synthetic datasets and promising results for the real-world data. The advantage of the proposed work is its non-dependence on any form of data other than LiDAR. It is an unsupervised method of building segmentation and thus requires no model training, as required in supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than focusing on the roof. The focus on extracting walls to reconstruct buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to obtain 2D footprints of buildings, with further scope to generate 3D models. Thus, the proposed method can be used as a tool to obtain footprints of buildings in urban landscapes, helping in urban planning and the smart cities endeavour.
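    The localized surface normals compared in this work can be estimated generically: the normal at a point is the direction of least variance of its neighbourhood, i.e. the smallest-eigenvalue eigenvector of the local covariance. A hedged sketch, where k is an assumed neighbourhood size:

        import numpy as np
        from scipy.spatial import cKDTree

        def estimate_normals(points, k=10):
            # points: (n, 3) array of LiDAR coordinates.
            tree = cKDTree(points)
            _, idx = tree.query(points, k=k)
            normals = np.empty_like(points, dtype=float)
            for i, nbrs in enumerate(idx):
                nbhd = points[nbrs] - points[nbrs].mean(axis=0)
                _, _, vt = np.linalg.svd(nbhd, full_matrices=False)
                normals[i] = vt[-1]  # direction of least variance
            return normals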

  4. LOCALIZED SEGMENT BASED PROCESSING FOR AUTOMATIC BUILDING EXTRACTION FROM LiDAR DATA

    Directory of Open Access Journals (Sweden)

    G. Parida

    2017-05-01

    Full Text Available The current methods of object segmentation, extraction and classification of aerial LiDAR data are manual and tedious tasks. This work proposes a technique for object segmentation from LiDAR data. A bottom-up geometric rule based approach was used initially to devise a way to segment buildings out of the LiDAR datasets. For curved wall surfaces, comparison of localized surface normals was done to segment buildings. The algorithm has been applied to both synthetic datasets and a real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of building objects from a given scene for the synthetic datasets and promising results for the real-world data. The advantage of the proposed work is its non-dependence on any form of data other than LiDAR. It is an unsupervised method of building segmentation and thus requires no model training, as required in supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than focusing on the roof. The focus on extracting walls to reconstruct buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to obtain 2D footprints of buildings, with further scope to generate 3D models. Thus, the proposed method can be used as a tool to obtain footprints of buildings in urban landscapes, helping in urban planning and the smart cities endeavour.

  5. TU-G-204-02: Automatic Sclerotic Bone Metastases Detection in the Pelvic Region From Dual Energy CT

    Energy Technology Data Exchange (ETDEWEB)

    Fehr, D; Schmidtlein, C; Hwang, S; Deasy, J; Veeraraghavan, H [Memorial Sloan Kettering Cancer Center, New York, NY (United States)

    2015-06-15

    Purpose: To automatically detect sclerotic bone metastases in the pelvic region using dual energy computed tomography (DECT). Methods: We developed a two-stage algorithm to automatically detect sclerotic bone metastases in the pelvis from DECT for patients with multiple bone metastatic lesions and with hip implants. The first stage consists of extracting the bone and marrow regions using a support vector machine (SVM) classifier. We employed a novel representation of the DECT images using multi-material decomposition, which represents each voxel as a mixture of different physical materials (e.g. bone+water+fat). Following the extraction of bone and marrow, in the second stage, a bi-histogram equalization method was employed to enhance the contrast to reveal the bone metastases. Next, mean-shift segmentation was performed to separate the voxels by their intensity levels. Finally, shape-based filtering was performed to extract the possible locations of the metastatic lesions using multiple shape criteria. We used the following shape parameters: area, eccentricity, major and minor axis, perimeter and skeleton. Results: A radiologist with several years of experience with DECT manually labeled 64 regions consisting of metastatic lesions from 10 different patients. However, the patients had many more metastatic lesions throughout the pelvis. Our method correctly identified 46 of the marked 64 regions (72%). In addition, our method also identified several other lesions, which can then be validated by the radiologist. The missed lesions were typically very large elongated regions consisting of several islands of very small (<4mm) lesions. Conclusion: We developed an algorithm to automatically detect sclerotic lesions in the pelvic region from DECT. Preliminary assessment shows that our algorithm generates lesion candidates agreeing with the radiologist-generated candidate regions. Furthermore, our method reveals additional lesions that can be inspected by the radiologist, thereby

  6. Newer methods of extraction of teeth

    Directory of Open Access Journals (Sweden)

    MHendra Chandha

    2016-06-01

    Full Text Available Atraumatic extraction methods are deemed important to minimize alveolar bone loss after tooth extraction. With the advent of such techniques, exodontia is no longer a dreaded procedure for anxious patients. Newer systems and techniques for the extraction of teeth have evolved in recent decades. This article reviews and discusses new techniques that make simple and complex exodontia more efficient, with improved patient outcomes. These include physics forceps, powered periotomes, piezosurgery, the Benex extractor, sonic instruments for bone surgery, and lasers.

  7. Automatic coronary calcium scoring using noncontrast and contrast CT images

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Guanyu, E-mail: yang.list@seu.edu.cn; Chen, Yang; Shu, Huazhong [Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, No. 2, Si Pai Lou, Nanjing 210096 (China); Centre de Recherche en Information Biomédicale Sino-Français (LIA CRIBs), Nanjing 210096 (China); Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing 210096 (China); Ning, Xiufang; Sun, Qiaoyu [Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, No. 2, Si Pai Lou, Nanjing 210096 (China); Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing 210096 (China); Coatrieux, Jean-Louis [INSERM-U1099, Rennes F-35000 (France); Labotatoire Traitement du Signal et de l’Image (LTSI), Université de Rennes 1, Campus de Beaulieu, Bat. 22, Rennes 35042 Cedex (France); Centre de Recherche en Information Biomédicale Sino-Français (LIA CRIBs), Nanjing 210096 (China)

    2016-05-15

    Purpose: Calcium scoring is widely used to assess the risk of coronary heart disease (CHD). Accurate coronary artery calcification detection in noncontrast CT images is a prerequisite step for coronary calcium scoring. Currently, calcified lesions in the coronary arteries are manually identified by radiologists in clinical practice. Thus, in this paper, a fully automatic calcium scoring method was developed to alleviate the workload of radiologists and cardiologists. Methods: The challenge of automatic coronary calcification detection is to discriminate the calcification in the coronary arteries from the calcification in other tissues. Since the anatomy of the coronary arteries is difficult to observe in noncontrast CT images, the contrast CT image of the same patient is used to extract the regions of the aorta, heart, and coronary arteries. Then, a patient-specific region-of-interest (ROI) is generated in the noncontrast CT image according to the segmentation results in the contrast CT image. This patient-specific ROI focuses on the regions in the neighborhood of the coronary arteries for calcification detection, which can eliminate the calcifications in the surrounding tissues. A support vector machine classifier is finally applied to refine the results by removing possible image noise. Furthermore, the calcified lesions in the noncontrast images belonging to the different main coronary arteries are identified automatically using the labeling results of the extracted coronary arteries. Results: Forty datasets from four different CT machine vendors, provided by the MICCAI 2014 Coronary Calcium Scoring (orCaScore) Challenge, were used to evaluate the algorithm. The sensitivity and positive predictive value for the volume of detected calcifications are 0.989 and 0.948. Only one patient out of 40 had been assigned to the wrong risk category, defined according to Agatston scores (0, 1–100, 101–300, >300), when compared with the ground truth.
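
    The risk categories quoted above are Agatston bins, so a brief sketch of the standard Agatston computation may be helpful; the 130 HU threshold and the 1-4 density weights follow the published convention, while the lesion inputs and the 1 mm² area cut-off are simplifying assumptions.

        import numpy as np

        def agatston_weight(max_hu):
            """Standard density weighting: 130-199 HU -> 1, up to >= 400 HU -> 4."""
            if max_hu >= 400: return 4
            if max_hu >= 300: return 3
            if max_hu >= 200: return 2
            return 1

        def agatston_score(lesions, pixel_area_mm2):
            """lesions: per-slice 2D HU arrays, one per calcified lesion."""
            score = 0.0
            for hu in lesions:
                calcified = hu >= 130                  # standard 130 HU threshold
                area = calcified.sum() * pixel_area_mm2
                if area >= 1.0:                        # ignore sub-millimetre specks
                    score += area * agatston_weight(hu[calcified].max())
            return score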

  8. Automatic Reverse Engineering of Private Flight Control Protocols of UAVs

    Directory of Open Access Journals (Sweden)

    Ran Ji

    2017-01-01

    Full Text Available The increasing use of civil unmanned aerial vehicles (UAVs) has the potential to threaten public safety and privacy. Therefore, airspace administrators urgently need an effective method to regulate UAVs. Understanding the meaning and format of UAV flight control commands through automatic protocol reverse-engineering techniques is highly beneficial to UAV regulation. To this end, this paper proposes a method to automatically analyze the private flight control protocols of UAVs. First, we classify flight control commands collected from a binary network trace into clusters; then, we analyze the meaning of the flight control commands by the accumulated error of each cluster; next, we extract the binary format of the commands and infer field semantics in these commands; and finally, we infer the location of the check field in each command and the generator polynomial matrix. The proposed approach is validated via experiments on a widely used consumer UAV.

  9. Method and apparatus for mounting or dismounting a semi-automatic twist-lock

    NARCIS (Netherlands)

    Klein Breteler, A.J.; Tekeli, G.

    2001-01-01

    The invention relates to a method for mounting or dismounting a semi-automatic twistlock at a corner of a deck container, wherein the twistlock is mounted or dismounted on a quayside where a ship may be docked for loading or unloading, in a loading or unloading terminal installed on the quayside,

  10. Automatic target classification of man-made objects in synthetic aperture radar images using Gabor wavelet and neural network

    Science.gov (United States)

    Vasuki, Perumal; Roomi, S. Mohamed Mansoor

    2013-01-01

    Processing of synthetic aperture radar (SAR) images has led to the development of automatic target classification approaches. These approaches help to classify individual and grouped military ground vehicles. This work aims to develop an automatic target classification technique to classify military targets such as trucks, tanks, armored cars, cannons, and bulldozers. The proposed method consists of three stages: preprocessing, feature extraction, and a neural network (NN). The first stage removes speckle noise in a SAR image with a Frost filter and enhances the image by histogram equalization. The second stage uses a Gabor wavelet to extract the image features. The third stage classifies the target by an NN classifier using the image features. The proposed work performs better than counterparts such as K-nearest neighbor (KNN) on databases such as moving and stationary target acquisition and recognition (MSTAR).
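
    A Gabor feature stage of this kind is commonly realized as a filter bank whose response statistics form the feature vector; the frequencies and orientations below are assumptions for illustration, not the authors' settings.

        import numpy as np
        from skimage.filters import gabor

        def gabor_features(image, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
            """Mean and variance of Gabor magnitude responses over a filter bank."""
            feats = []
            for f in frequencies:
                for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
                    real, imag = gabor(image, frequency=f, theta=theta)
                    mag = np.hypot(real, imag)
                    feats.extend([mag.mean(), mag.var()])
            return np.asarray(feats)

        image = np.random.rand(64, 64)       # stand-in for a despeckled SAR chip
        x = gabor_features(image)            # 3 freqs x 4 angles x 2 stats = 24 values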

  11. Comparative exergy analyses of Jatropha curcas oil extraction methods: Solvent and mechanical extraction processes

    International Nuclear Information System (INIS)

    Ofori-Boateng, Cynthia; Keat Teong, Lee; JitKang, Lim

    2012-01-01

    Highlights: ► Exergy analysis detects locations of resource degradation within a process. ► Solvent extraction is six times more exergetically destructive than mechanical extraction. ► Mechanical extraction of jatropha oil is 95.93% exergetically efficient. ► Solvent extraction of jatropha oil is 79.35% exergetically efficient. ► Exergy analysis of oil extraction processes allows room for improvements. - Abstract: Vegetable oil extraction processes are found to be energy intensive. Thermodynamically, any energy-intensive process is considered to degrade the most useful part of the energy that is available to produce work. This study uses literature values to compare the efficiencies and the degradation of the useful energy within Jatropha curcas oil during oil extraction, taking into account solvent and mechanical extraction methods. According to this study, J. curcas seed, on processing into J. curcas oil, is upgraded by mechanical extraction but degraded by solvent extraction. For mechanical extraction, the total internal exergy destroyed is 3006 MJ, which is about six times less than that for solvent extraction (18,072 MJ) per ton of J. curcas oil produced. The pretreatment of the J. curcas seeds recorded a total internal exergy destruction of 5768 MJ, accounting for 24% of the total internal exergy destroyed in the solvent extraction process and 66% in mechanical extraction. The exergetic efficiencies recorded are 79.35% and 95.93% for the solvent and mechanical extraction of J. curcas oil, respectively. Hence, mechanical oil extraction is more exergetically efficient than solvent extraction. Possible improvement methods are also elaborated in this study.

  12. Improving extraction technology of level seams. Sovershenstvovanie tekhnologii razrabotki pologikh plastov

    Energy Technology Data Exchange (ETDEWEB)

    Shetser, M G; Spitsyn, Yu G

    1985-01-01

    This report deals with the conditions and prospects for intensifying the extraction of level and inclined seams and improving extraction technology. Reviews mechanization of the excavation of stables with automatic cutter-loaders (the KA80 in conjunction with the KD80); coal extraction using two cutter-loaders in seams 0.9-1.9 m thick and up to 20 degrees of inclination (pillar mining); the reciprocating method of coal cutting; and the one-sided method of coal extraction (KMK97 cutter-loaders). Discusses strengthening of the junctions of faces with gate roads (KSU and KSU3M props); improved types of props (hydraulic props SUG-30, SUG-V and GVD); and roof control methods (induced caving, advance torpedoing, or using KM87UMP and KMT power supports). Deals in detail with the introduction of new extraction technology and the strengthening of unstable rock by injecting polyurethane compounds, the extraction of seams with wide-web cutter-loaders (Kirovets, IK101), and plowing. (3 refs.)

  13. AN AUTOMATIC DETECTION METHOD FOR EXTREME-ULTRAVIOLET DIMMINGS ASSOCIATED WITH SMALL-SCALE ERUPTION

    Energy Technology Data Exchange (ETDEWEB)

    Alipour, N.; Safari, H. [Department of Physics, University of Zanjan, P.O. Box 45195-313, Zanjan (Iran, Islamic Republic of); Innes, D. E. [Max-Planck Institut fuer Sonnensystemforschung, 37191 Katlenburg-Lindau (Germany)

    2012-02-10

    Small-scale extreme-ultraviolet (EUV) dimming often surrounds sites of energy release in the quiet Sun. This paper describes a method for the automatic detection of these small-scale EUV dimmings using a feature-based classifier. The method is demonstrated using sequences of 171 Å images taken by the STEREO/Extreme UltraViolet Imager (EUVI) on 2007 June 13 and by the Solar Dynamics Observatory/Atmospheric Imaging Assembly (AIA) on 2010 August 27. The feature identification relies on recognizing structure in sequences of space-time 171 Å images using the Zernike moments of the images. The Zernike moments of space-time slices with events and non-events are distinctive enough to be separated using a support vector machine (SVM) classifier. The SVM is trained using 150 event and 700 non-event space-time slices. We find a total of 1217 events in the EUVI images and 2064 events in the AIA images on the days studied. Most of the events are found between latitudes -35° and +35°. The sizes and expansion speeds of central dimming regions are extracted using a region-grow algorithm. The histograms of the sizes in both EUVI and AIA follow a steep power law with a slope of about -5. The AIA slope extends to smaller sizes before turning over. The mean velocity of 1325 dimming regions seen by AIA is found to be about 14 km s⁻¹.
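
    A hedged sketch of the classification step, pairing Zernike moments (computed here with the mahotas library) with an SVM; the radius, polynomial degree, and toy data are assumptions, and the counts are reduced from the paper's 150/700 training slices.

        import numpy as np
        import mahotas
        from sklearn.svm import SVC

        def zernike_features(slices, radius=32, degree=8):
            """One Zernike-moment feature vector per 2D space-time slice."""
            return np.array([mahotas.features.zernike_moments(s, radius, degree=degree)
                             for s in slices])

        event = np.random.rand(20, 64, 64) > 0.7     # stand-ins for labelled slices
        quiet = np.random.rand(30, 64, 64) > 0.7
        X = np.vstack([zernike_features(event), zernike_features(quiet)])
        y = np.r_[np.ones(len(event)), np.zeros(len(quiet))]
        clf = SVC(kernel='rbf').fit(X, y)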

  14. Isolation and Identification of Volatile Components in Tempe by a Modified Simultaneous Distillation-Extraction Method

    Directory of Open Access Journals (Sweden)

    Syahrial Syahrial

    2010-06-01

    Full Text Available An isolation and identification of volatile components in tempe after 2, 5 and 8 days of fermentation by the simultaneous distillation-extraction method was carried out. The simultaneous distillation-extraction apparatus was modified by Muchalal from the basic Likens-Nickerson design. Steam distillation with benzene as the extraction solvent was used in this system. The isolation was carried out continuously for 3 hours, with the maximum water temperature in the Liebig condenser at 8 °C. The extract was concentrated by the freeze-concentration method, and the volatile components were analyzed and identified by combined gas chromatography-mass spectrometry (GC-MS). Muchalal's simultaneous distillation-extraction apparatus has some disadvantages in the cold-finger condenser, and its extractor did not have a condenser. At least 47, 13 and 5 volatile components were found after 2, 5 and 8 days of fermentation, respectively. The volatile components in the 2-day fermentation were nonalal, α-pinene, 2,4-decadienal, 5-phenyldecane, 5-phenylundecane, 4-phenylundecane, 5-phenyldodecane, 4-phenyldodecane, 3-phenyldodecane, 2-phenyldodecane, 5-phenyltridecane, and caryophyllene; in the 5-day fermentation, nonalal, caryophyllene, 4-phenylundecane, 5-phenyldodecane, 4-phenyldodecane, 3-phenyldodecane, and 2-phenyldodecane; and in the 8-day fermentation, ethenyl butanoate, 2-methyl-3-(methylethenyl)cyclohexyl ethanoate and 3,7-dimethyl-5-octenyl ethanoate.

  15. A novel scheme for automatic nonrigid image registration using deformation invariant feature and geometric constraint

    Science.gov (United States)

    Deng, Zhipeng; Lei, Lin; Zhou, Shilin

    2015-10-01

    Automatic image registration is a vital yet challenging task, particularly for non-rigid deformation images, which are more complicated and more common in remote sensing, such as distorted UAV (unmanned aerial vehicle) images or scanned images distorted by flutter. Traditional non-rigid image registration methods are based on correctly matched corresponding landmarks, which usually need artificial markers. It is a rather challenging task to locate the accurate positions of the points and obtain accurate homologous point sets. In this paper, we propose an automatic non-rigid image registration algorithm which mainly consists of three steps. To begin with, we introduce an automatic feature point extraction method based on non-linear scale space and a uniform distribution strategy to extract points that are uniformly distributed along the edges of the image. Next, we propose a hybrid point matching algorithm using the DaLI (Deformation and Light Invariant) descriptor and a local affine-invariant geometric constraint based on a triangulation constructed with the K-nearest-neighbor algorithm. Finally, based on the accurate homologous point sets, the two images are registered by a TPS (Thin Plate Spline) model. Our method is demonstrated by three deliberately designed experiments. The first two evaluate the distribution of the point sets and the correct matching rate on synthetic and real data, respectively. The last experiment is designed for non-rigid deformation remote sensing images, and the three experimental results demonstrate the accuracy, robustness, and efficiency of the proposed algorithm compared with other traditional methods.
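
    The final TPS step can be sketched with SciPy's radial basis interpolator, whose 'thin_plate_spline' kernel implements the same warping model; the matched point sets below are synthetic stand-ins for the homologous points.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        src = np.random.rand(30, 2)                    # matched homologous points
        dst = src + 0.05 * np.random.randn(30, 2)      # their positions in the other image

        # Thin-plate-spline mapping from source to destination coordinates.
        tps = RBFInterpolator(src, dst, kernel='thin_plate_spline')

        # Warp an arbitrary grid of pixel coordinates through the model.
        grid = np.stack(np.meshgrid(np.linspace(0, 1, 100),
                                    np.linspace(0, 1, 100)), axis=-1).reshape(-1, 2)
        warped = tps(grid)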

  16. Road Extraction and Car Detection from Aerial Image Using Intensity and Color

    Directory of Open Access Journals (Sweden)

    Vahid Ghods

    2011-07-01

    Full Text Available In this paper, a new automatic approach to road extraction from aerial images is proposed. The initialization strategies are based on intensity, color, and the Hough transform. After road elements are extracted, chain codes are calculated. In the last step, cars on the roads are detected using their shadows. We implemented our method on 25 images from the Google Earth database. The experiments show an increase in both the completeness and the quality indexes of the extracted roads.
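
    The chain codes mentioned here are typically Freeman 8-direction codes along an ordered boundary; a minimal sketch, assuming the boundary pixel ordering is already available:

        # Freeman 8-direction chain code: index i encodes the step DIRS[i].
        DIRS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

        def chain_code(boundary):
            """Encode an ordered list of 8-connected boundary pixels (row, col)."""
            return [DIRS.index((r1 - r0, c1 - c0))
                    for (r0, c0), (r1, c1) in zip(boundary, boundary[1:])]

        print(chain_code([(0, 0), (0, 1), (1, 2), (2, 2)]))   # -> [0, 7, 6]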

  17. ScholarLens: extracting competences from research publications for the automatic generation of semantic user profiles

    Directory of Open Access Journals (Sweden)

    Bahar Sateli

    2017-07-01

    Full Text Available Motivation: Scientists increasingly rely on intelligent information systems to help them in their daily tasks, in particular for managing research objects, like publications or datasets. The relatively young research field of Semantic Publishing has been addressing the question of how scientific applications can be improved through semantically rich representations of research objects, in order to facilitate their discovery and re-use. To complement the efforts in this area, we propose an automatic workflow to construct semantic user profiles of scholars, so that scholarly applications, like digital libraries or data repositories, can better understand their users' interests, tasks, and competences by incorporating these user profiles in their design. To make the user profiles sharable across applications, we propose to build them based on standard semantic web technologies, in particular the Resource Description Framework (RDF) for representing user profiles and Linked Open Data (LOD) sources for representing competence topics. To avoid the cold start problem, we suggest to automatically populate these profiles by analyzing the publications (co-)authored by users, which we hypothesize reflect their research competences. Results: We developed a novel approach, ScholarLens, which can automatically generate semantic user profiles for authors of scholarly literature. For modeling the competences of scholarly users and groups, we surveyed a number of existing linked open data vocabularies. In accordance with the LOD best practices, we propose an RDF Schema (RDFS) based model for competence records that reuses existing vocabularies where appropriate. To automate the creation of semantic user profiles, we developed a complete, automated workflow that can generate semantic user profiles by analyzing full-text research articles through various natural language processing (NLP) techniques. In our method, we start by processing a set of research articles...

  18. [Extraction and recognition of attractors in three-dimensional Lorenz plot].

    Science.gov (United States)

    Hu, Min; Jang, Chengfan; Wang, Suxia

    2018-02-01

    The Lorenz plot (LP) method, which gives a global view of long-term electrocardiogram signals, is an efficient and simple visualization tool for analyzing cardiac arrhythmias, and the morphologies and positions of the extracted attractors may reveal the underlying mechanisms of the onset and termination of arrhythmias. But automatic diagnosis has so far been impossible, for lack of a method of extracting the attractors. We present here a methodology for attractor extraction and recognition based upon the homogeneous statistical properties of the location parameters of scatter points in the three-dimensional LP (3DLP), which is constructed by taking three successive RR intervals as the X, Y and Z axes of a Cartesian coordinate system. Validation experiments were performed on a group of RR-interval time series and tag data with frequent unifocal premature complexes exported from a 24-hour Holter system. The results showed that this method is highly effective not only for the extraction of attractors, but also for the automatic recognition of attractors by location parameters such as the azimuth of the point of peak frequency (APF) of eccentric attractors after stereographic projection of the 3DLP along the space diagonal. Besides, APF is a powerful index for the differential diagnosis of atrial and ventricular extrasystoles. Additional experiments proved that this method is also applicable to several other arrhythmias. Moreover, there are strong relationships between the 3DLP and two-dimensional LPs, which indicates that conventional LP results can be transplanted into the 3DLP. Integrating this method into conventional long-term electrocardiogram monitoring and analysis systems would have broad application prospects.
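
    Constructing the 3DLP itself is a one-line embedding of the RR series; in the sketch below the projection along the space diagonal is reduced to a plain orthogonal projection for illustration (the paper's stereographic variant is not reproduced, and the toy RR values are assumptions).

        import numpy as np

        def lorenz_plot_3d(rr):
            """Embed an RR-interval series as points (RR_n, RR_n+1, RR_n+2)."""
            rr = np.asarray(rr, dtype=float)
            return np.column_stack([rr[:-2], rr[1:-1], rr[2:]])

        def diagonal_projection(points):
            """Project points onto the plane orthogonal to the space diagonal (1,1,1)."""
            d = np.ones(3) / np.sqrt(3.0)
            return points - np.outer(points @ d, d)

        pts = lorenz_plot_3d([800, 810, 790, 805, 420, 1190, 800])   # ms, toy series
        proj = diagonal_projection(pts)
        azimuth = np.arctan2(proj[:, 1], proj[:, 0])                 # APF-style angle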

  19. Semantic Information Extraction of Lanes Based on Onboard Camera Videos

    Science.gov (United States)

    Tang, L.; Deng, T.; Ren, C.

    2018-04-01

    In the field of autonomous driving, semantic information about lanes is very important. This paper proposes a method for automatic detection of lanes and extraction of semantic information from onboard camera videos. The proposed method first detects the edges of lanes from the grayscale gradient direction and fits them with an improved probabilistic Hough transform; then, it uses the vanishing-point principle to calculate the geometrical position of the lanes, and uses lane characteristics to extract lane semantic information through decision-tree classification. In the experiment, 216 road video images captured by a camera mounted on a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
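
    A probabilistic Hough fit of the kind described can be sketched with OpenCV; Canny stands in here for the paper's gradient-direction edge step, and the parameters and toy frame are assumptions.

        import cv2
        import numpy as np

        frame = np.zeros((240, 320, 3), np.uint8)
        cv2.line(frame, (40, 220), (160, 60), (255, 255, 255), 3)    # a toy lane mark

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)

        # Probabilistic Hough transform; parameter values are illustrative.
        lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                                minLineLength=40, maxLineGap=10)
        for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)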

  20. An Automatic Cloud Detection Method for ZY-3 Satellite

    Directory of Open Access Journals (Sweden)

    CHEN Zhenwei

    2015-03-01

    Full Text Available Automatic cloud detection for optical satellite remote sensing images is a significant step in the production system of satellite products. For the browse images cataloged by the ZY-3 satellite, a tree discriminant structure is adopted to carry out cloud detection. The image is divided into sub-images whose features are extracted to classify clouds against ground. However, due to the high complexity of clouds and surfaces and the low resolution of browse images, traditional classification algorithms based on image features have serious limitations. In view of this problem, this paper puts forward a prior enhancement of the original sub-images before classification to widen the texture difference between clouds and surfaces. Afterwards, with the second moment and first difference of the images, the feature vectors are extended in multi-scale space, and the cloud proportion in the image is estimated through comprehensive analysis. The presented cloud detection algorithm has already been applied to the ZY-3 application system project, and practical experimental results indicate that it significantly improves the accuracy of cloud detection.

  1. A METHOD OF AUTOMATIC DETERMINATION OF THE NUMBER OF ELECTRICAL MOTORS SIMULTANEOUSLY WORKING IN A GROUP

    Directory of Open Access Journals (Sweden)

    A. V. Voloshko

    2016-11-01

    Full Text Available Purpose. To propose a method for automatically determining the number of operating high-voltage electric motors in a group of the same type, based on the determination and analysis of power-consumption data obtained from electric power meters installed at the motor connections. Results. An algorithm was developed for a program that automatically determines the number of working electric motors in a group of the same type; it is based on determining the minimum power value at which a motor is considered to be on. Originality. For the first time, a method for automatically determining the number of working same-type high-voltage motors in a group is proposed. Practical value. The obtained results may be used to introduce automated accounting of the running time of each motor and to calculate the parameters of an equivalent induction or synchronous motor.
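
    The core counting rule, that a motor is considered on when its metered power exceeds a minimum value, reduces to a few lines; the threshold below is an assumed figure for illustration.

        import numpy as np

        def count_running(meter_kw, p_min_kw=5.0):
            """Count motors whose metered power exceeds the assumed 'on' threshold."""
            return int((np.asarray(meter_kw) >= p_min_kw).sum())

        print(count_running([0.2, 47.5, 51.3, 0.0, 49.8]))   # -> 3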

  2. Automatic differentiation of functions

    International Nuclear Information System (INIS)

    Douglas, S.R.

    1990-06-01

    Automatic differentiation is a method of computing derivatives of functions to any order in any number of variables. The functions must be expressible as combinations of elementary functions. When evaluated at specific numerical points, the derivatives have no truncation error and are automatically found. The method is illustrated by simple examples. Source code in FORTRAN is provided
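
    The record's FORTRAN source is not reproduced here, but the idea is easy to illustrate in Python with dual numbers, which propagate exact first derivatives alongside values (no truncation error); this generic sketch covers only addition and multiplication.

        class Dual:
            """Forward-mode automatic differentiation with dual numbers a + b*eps."""
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der

            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.der + o.der)
            __radd__ = __add__

            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                # Product rule: (uv)' = u'v + uv'
                return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
            __rmul__ = __mul__

        x = Dual(2.0, 1.0)            # seed dx/dx = 1
        f = x * x + 3 * x             # f(x) = x^2 + 3x
        print(f.val, f.der)           # 10.0 7.0, and f'(2) = 2*2 + 3 = 7 exactly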

  3. Methods for determination of extractable complex composition

    International Nuclear Information System (INIS)

    Sergievskij, V.V.

    1984-01-01

    Specific features and restrictions of the main methods for determining the composition of extractable complexes from distribution data (methods of equilibrium shift, saturation, and mathematical models) are considered. Special attention is given to the solution of inverse problems taking into account the effect of hydration on the activity of organic-phase components. Using the systems lithium halides-isoamyl alcohol, thorium nitrate-n-hexyl alcohol, mineral acids-tri-n-butyl phosphate (TBP), and metal nitrates (uranium, lanthanides)-TBP as examples, the results on determining the stoichiometry of extraction equilibria obtained by various methods are compared.
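
    For the equilibrium-shift method named above, the textbook procedure estimates the number of extractant molecules in the complex from the slope of log D versus log [L]; a sketch with assumed toy data:

        import numpy as np

        # Hypothetical distribution ratios D at several free-extractant
        # concentrations [L]; the slope of log10(D) vs log10([L]) estimates
        # the solvation number n of the extracted complex.
        L = np.array([0.05, 0.1, 0.2, 0.4, 0.8])       # mol/L, assumed data
        D = 12.0 * L**2                                # toy data generated with n = 2

        n_est, _ = np.polyfit(np.log10(L), np.log10(D), 1)
        print(round(n_est, 2))                         # -> 2.0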

  4. Automatic extraction of nuclei centroids of mouse embryonic cells from fluorescence microscopy images.

    Directory of Open Access Journals (Sweden)

    Md Khayrul Bashar

    Full Text Available Accurate identification of cell nuclei and their tracking using three-dimensional (3D) microscopic images is a demanding task in many biological studies. Manual identification of nuclei centroids from images is an error-prone task, sometimes impossible to accomplish due to low contrast and the presence of noise. Nonetheless, only a few methods are available for 3D bioimaging applications, which contrasts sharply with 2D analysis, where many methods already exist. In addition, most methods essentially adopt segmentation, for which a reliable solution is still unknown, especially for 3D bio-images having juxtaposed cells. In this work, we propose a new method that can directly extract nuclei centroids from fluorescence microscopy images. This method involves three steps: (i) pre-processing, (ii) local enhancement, and (iii) centroid extraction. The first step includes two variations: the first (Variant-1) uses the whole 3D pre-processed image, whereas the second (Variant-2) reduces the pre-processed image to candidate regions or a candidate hybrid image for further processing. In the second step, multiscale cube filtering is employed to locally enhance the pre-processed image. Centroid extraction in the third step consists of three stages. In Stage-1, we compute a local characteristic ratio at every voxel and extract local maxima regions as candidate centroids using a ratio threshold. Stage-2 removes spurious centroids from the Stage-1 results by analyzing the shapes of intensity profiles from the enhanced image. An iterative procedure based on the nearest-neighborhood principle is then proposed to combine fragmented nuclei, if any. Both qualitative and quantitative analyses on a set of 100 images of 3D mouse embryos are performed. Investigations reveal a promising achievement of the presented technique in terms of average sensitivity and precision (i.e., 88.04% and 91.30% for Variant-1; 86.19% and 95.00% for Variant-2).
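
    Stage-1-style direct centroid extraction can be approximated as thresholded local-maxima detection; this generic stand-in (filter size and threshold are assumptions) omits the paper's characteristic-ratio and profile-shape stages.

        import numpy as np
        from scipy import ndimage

        def centroids_from_maxima(volume, size=5, rel_thresh=0.5):
            """Extract candidate nuclei centroids as bright local maxima of a 3D image."""
            local_max = ndimage.maximum_filter(volume, size=size) == volume
            candidates = local_max & (volume > rel_thresh * volume.max())
            labels, n = ndimage.label(candidates)
            return ndimage.center_of_mass(volume, labels, range(1, n + 1))

        vol = np.random.rand(32, 32, 32)     # stand-in for an enhanced 3D image
        print(len(centroids_from_maxima(vol)))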

  5. Automatic variable selection method and a comparison for quantitative analysis in laser-induced breakdown spectroscopy

    Science.gov (United States)

    Duan, Fajie; Fu, Xiao; Jiang, Jiajia; Huang, Tingting; Ma, Ling; Zhang, Cong

    2018-05-01

    In this work, an automatic variable selection method for quantitative analysis of soil samples using laser-induced breakdown spectroscopy (LIBS) is proposed, based on full spectrum correction (FSC) and modified iterative predictor weighting-partial least squares (mIPW-PLS). The method features automatic selection without manual intervention. To illustrate the feasibility and effectiveness of the method, a comparison with the genetic algorithm (GA) and the successive projections algorithm (SPA) for detecting different elements (copper, barium and chromium) in soil was implemented. The experimental results showed that all three methods could accomplish variable selection effectively, among which FSC-mIPW-PLS required significantly shorter computation time (approximately 12 s for 40,000 initial variables) than the others. Moreover, improved quantification models were obtained with the variable selection approaches. The root mean square errors of prediction (RMSEP) of models using the new method were 27.47 (copper), 37.15 (barium) and 39.70 (chromium) mg/kg, showing prediction performance comparable to GA and SPA.

  6. Automatic removal of eye-movement and blink artifacts from EEG signals.

    Science.gov (United States)

    Gao, Jun Feng; Yang, Yong; Lin, Pan; Wang, Pei; Zheng, Chong Xun

    2010-03-01

    Frequent occurrence of electrooculography (EOG) artifacts leads to serious problems in interpreting and analyzing the electroencephalogram (EEG). In this paper, a robust method is presented to automatically eliminate eye-movement and eye-blink artifacts from EEG signals. Independent Component Analysis (ICA) is used to decompose the EEG signals into independent components. The topographies and power spectral densities of those components are then extracted as features to identify eye-movement artifact components, and a support vector machine (SVM) classifier is adopted because it outperforms several other classifiers. The classification results show that feature-extraction methods are unsuitable for identifying eye-blink artifact components, so a novel peak detection algorithm for independent components (PDAIC) is proposed to identify them. Finally, the artifact removal method proposed here is evaluated by comparing EEG data before and after artifact removal. The results indicate that the proposed method removes EOG artifacts effectively from EEG signals with little distortion of the underlying brain signals.
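
    The decompose / drop / reconstruct pipeline can be sketched with scikit-learn's FastICA; which components are artifactual would come from the paper's SVM or peak detector, and is supplied by hand here for illustration.

        import numpy as np
        from sklearn.decomposition import FastICA

        def remove_components(eeg, bad):
            """Zero out artifact components of EEG (n_samples x n_channels) and rebuild."""
            ica = FastICA(n_components=eeg.shape[1], random_state=0)
            sources = ica.fit_transform(eeg)     # independent components
            sources[:, bad] = 0.0                # drop EOG-like components
            return ica.inverse_transform(sources)

        eeg = np.random.randn(1000, 8)           # stand-in for 8-channel EEG
        clean = remove_components(eeg, bad=[0])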

  7. Analysis of steranes and triterpanes in geolipid extracts by automatic classification of mass spectra

    Science.gov (United States)

    Wardroper, A. M. K.; Brooks, P. W.; Humberston, M. J.; Maxwell, J. R.

    1977-01-01

    A computer method is described for the automatic classification of triterpanes and steranes into gross structural type from their mass spectral characteristics. The method has been applied to the spectra obtained by gas-chromatographic/mass-spectroscopic analysis of two mixtures of standards and of hydrocarbon fractions isolated from Green River and Messel oil shales. Almost all of the steranes and triterpanes identified previously in both shales were classified, in addition to a number of new components. The results indicate that classification of such alkanes is possible with a laboratory computer system. The method has application to diagenesis and maturation studies as well as to oil/oil and oil/source rock correlations in which rapid screening of large numbers of samples is required.

  8. Predictability of monthly temperature and precipitation using automatic time series forecasting methods

    Science.gov (United States)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2018-02-01

    We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.

  9. Extraction Methods for the Isolation of Isoflavonoids from Plant Material

    Directory of Open Access Journals (Sweden)

    Blicharski Tomasz

    2017-03-01

    Full Text Available The purpose of this review is to describe and compare selected traditional and modern extraction methods employed in the isolation of isoflavonoids from plants. Conventional methods such as maceration, percolation, or Soxhlet extraction are still frequently used in phytochemical analysis. Despite their flexibility, traditional extraction techniques have significant drawbacks, including the need for significant investments of time, energy, and starting material, and a requirement for large amounts of potentially toxic solvents. Moreover, these techniques are difficult to automate, produce considerable amounts of waste, and pose a risk of degrading thermolabile compounds. Modern extraction methods, such as ultrasound-assisted extraction, microwave-assisted extraction, accelerated solvent extraction, supercritical fluid extraction, and negative-pressure cavitation extraction, can be regarded as remedies for the aforementioned problems. This manuscript discusses the use of the most relevant extraction techniques in the isolation of isoflavonoids, secondary metabolites that have been found to have a plethora of biological and pharmacological activities.

  10. Pressurised liquid extraction of flavonoids in onions. Method development and validation

    DEFF Research Database (Denmark)

    Søltoft, Malene; Christensen, J.H.; Nielsen, J.

    2009-01-01

    A rapid and reliable analytical method for quantification of flavonoids in onions was developed and validated. Five extraction methods were tested on freeze-dried onions, and subsequently high performance liquid chromatography (HPLC) with UV detection was used for quantification of seven flavonoids...... extraction methods. However, PLE was the preferred extraction method because the method can be highly automated, uses only small amounts of solvents, provides the cleanest extracts, and allows the extraction of light- and oxygen-sensitive flavonoids to be carried out in an inert atmosphere protected from light......-step PLE method showed good selectivity, precision (RSDs = 3.1-11%) and recovery of the extractable flavonoids (98-99%). The method also appeared to be a multi-method, i.e. generally applicable to, e.g., phenolic acids in potatoes and carrots....

  11. Analysis of facial expressions in Parkinson's disease through video-based automatic methods.

    Science.gov (United States)

    Bandini, Andrea; Orlandi, Silvia; Escalante, Hugo Jair; Giovannelli, Fabio; Cincotta, Massimo; Reyes-Garcia, Carlos A; Vanni, Paola; Zaccara, Gaetano; Manfredi, Claudia

    2017-04-01

    The automatic analysis of facial expressions is an evolving field with several clinical applications. One of these applications is the study of facial bradykinesia in Parkinson's disease (PD), a major motor sign of this neurodegenerative illness. Facial bradykinesia consists in the reduction/loss of facial movements and emotional facial expressions, called hypomimia. In this work we propose an automatic method for studying facial expressions in PD patients relying on video-based methods. METHODS: 17 Parkinsonian patients and 17 healthy control subjects were asked to show basic facial expressions, upon request of the clinician and after imitation of a visual cue on a screen. Through an existing face tracker, the Euclidean distance of the facial model from a neutral baseline was computed in order to quantify the changes in facial expressivity during the tasks. Moreover, an automatic facial expression recognition algorithm was trained in order to study how PD expressions differed from the standard expressions. Results show that control subjects exhibited on average larger distances than PD patients across the tasks. This confirms that control subjects show larger movements during both posed and imitated facial expressions. Moreover, our results demonstrate that anger and disgust are the two most impaired expressions in PD patients. Contactless video-based systems can be important techniques for analyzing facial expressions also in rehabilitation, in particular speech therapy, where patients could get a definite advantage from real-time feedback about the proper facial expressions/movements to perform. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Extraction of uranium from simulated ore by the supercritical carbon dioxide fluid extraction method with nitric acid-TBP complex

    International Nuclear Information System (INIS)

    Dung, Le Thi Kim; Imai, Tomoki; Tomioka, Osamu; Nakashima, Mikio; Takahashi, Kuniaki; Meguro, Yoshihiro

    2006-01-01

    The supercritical fluid extraction (SFE) method using CO2 as a medium with an extractant of HNO3-tri-n-butyl phosphate (TBP) complex was applied to extract uranium from several uranyl phosphate compounds and simulated uranium ores. An extraction method consisting of a static extraction process and a dynamic one was established, and the effects of the experimental conditions, such as pressure, temperature, and extraction time, on the extraction of uranium were ascertained. It was found that uranium could be efficiently extracted from both the uranyl phosphates and simulated ores by the SFE method using CO2. It was thus demonstrated that the SFE method using CO2 is useful as a pretreatment method for the analysis of uranium in ores. (author)

  13. Automatic identification of corrosion damage using image processing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bento, Mariana P.; Ramalho, Geraldo L.B.; Medeiros, Fatima N.S. de; Ribeiro, Elvis S. [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil); Medeiros, Luiz C.L. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    This paper proposes a Nondestructive Evaluation (NDE) method for atmospheric corrosion detection on metallic surfaces using digital images. In this study, uniform corrosion is characterized by texture attributes extracted from the co-occurrence matrix and by the Self-Organizing Map (SOM) clustering algorithm. We present a technique for automatic inspection of the oil and gas storage tanks and pipelines of petrochemical industries without disturbing their properties and performance. Experimental results are promising and encourage the possibility of using this methodology to design trustworthy and robust early failure detection systems. (author)
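
    Co-occurrence texture attributes of this kind are available in scikit-image; a sketch follows (the distance and angle choices are assumptions, and releases before 0.19 spell the functions 'greycomatrix'/'greycoprops').

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def texture_features(patch):
            """Co-occurrence texture attributes for one grayscale image patch."""
            glcm = graycomatrix(patch, distances=[1],
                                angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                                levels=256, symmetric=True, normed=True)
            props = ('contrast', 'homogeneity', 'energy', 'correlation')
            return np.hstack([graycoprops(glcm, p).ravel() for p in props])

        patch = (np.random.rand(64, 64) * 255).astype(np.uint8)
        print(texture_features(patch).shape)     # -> (16,)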

  14. Adaptive and automatic red blood cell counting method based on microscopic hyperspectral imaging technology

    Science.gov (United States)

    Liu, Xi; Zhou, Mei; Qiu, Song; Sun, Li; Liu, Hongying; Li, Qingli; Wang, Yiting

    2017-12-01

    Red blood cell counting, as a routine examination, plays an important role in medical diagnosis. Although automated hematology analyzers are widely used, manual microscopic examination by a hematologist or pathologist is still unavoidable, which is time-consuming and error-prone. This paper proposes a fully automatic red blood cell counting method based on microscopic hyperspectral imaging of blood smears that combines spatial and spectral information to achieve high precision. The acquired hyperspectral image data of the blood smear in the visible and near-infrared spectral range are first preprocessed, and then a quadratic blind linear unmixing algorithm is used to obtain endmember abundance images. Based on mathematical morphological operations and an adaptive Otsu's method, binarization is performed on the abundance images. Finally, the connected component labeling algorithm with magnification-based parameter setting is applied to automatically select the binary images of red blood cell cytoplasm. Experimental results show that the proposed method performs well and has potential for clinical applications.

  15. Exploring DBpedia and Wikipedia for Portuguese Semantic Relationship Extraction

    Directory of Open Access Journals (Sweden)

    David Soares Batista

    2013-07-01

    Full Text Available The identification of semantic relationships, as expressed between named entities in text, is an important step for extracting knowledge from large document collections, such as the Web. Previous works have addressed this task for the English language through supervised learning techniques for automatic classification. The current state of the art involves the use of learning methods based on string kernels. However, such approaches require manually annotated training data for each type of semantic relationship, and have scalability problems when tens or hundreds of different types of relationships have to be extracted. This article discusses an approach for distantly supervised relation extraction over texts written in the Portuguese language, which uses an efficient technique for measuring the similarity between relation instances, based on min-wise hashing and on locality-sensitive hashing. In the proposed method, the training examples are automatically collected from Wikipedia, corresponding to sentences that express semantic relationships between pairs of entities extracted from DBpedia. These examples are represented as sets of character quadgrams and other representative elements. The sets are indexed in a data structure that implements the idea of locality-sensitive hashing. To check which semantic relationship is expressed between a given pair of entities referenced in a sentence, the most similar training examples are searched, based on an approximation of the Jaccard coefficient obtained through min-hashing. The relation class is assigned based on the weighted votes of the most similar examples. Tests with a dataset from Wikipedia validate the suitability of the proposed method, showing, for instance, that it is able to extract 10 different types of semantic relations, 8 of them corresponding to asymmetric relations, with an average score of 55.6%, measured in terms of F1.
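
    The min-hashing idea, in which matching positions of per-seed minima estimate the Jaccard coefficient of two quadgram sets, fits in a few lines; the hash construction, signature length, and toy sentences below are illustrative assumptions.

        import hashlib

        def quadgrams(s):
            return {s[i:i + 4] for i in range(len(s) - 3)}

        def minhash_signature(items, n_hashes=64):
            """One minimum per salted hash; matching positions estimate Jaccard."""
            return [min(int(hashlib.md5(f'{seed}:{x}'.encode()).hexdigest(), 16)
                        for x in items)
                    for seed in range(n_hashes)]

        a = minhash_signature(quadgrams('casou com a atriz'))
        b = minhash_signature(quadgrams('casada com o ator'))
        jaccard_est = sum(x == y for x, y in zip(a, b)) / len(a)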

  16. The extraction of essential oil from patchouli leaves (Pogostemon cablin Benth) using microwave hydrodistillation and solvent-free microwave extraction methods

    Science.gov (United States)

    Putri, D. K. Y.; Kusuma, H. S.; Syahputra, M. E.; Parasandi, D.; Mahfud, M.

    2017-12-01

    The patchouli plant (Pogostemon cablin Benth) is one of the important essential-oil-producing plants, contributing more than 50% of Indonesia's total essential oil exports. However, the extraction of patchouli oil in Indonesia generally still uses conventional methods that require an enormous amount of energy, high solvent usage, and long extraction times. Therefore, in this study, patchouli oil extraction was carried out using the microwave hydrodistillation and solvent-free microwave extraction methods. Based on this research, the extraction of patchouli oil using microwave hydrodistillation with a longer extraction time (240 min) produced a patchouli oil yield only 1.2 times greater than the solvent-free microwave extraction method, which requires a shorter extraction time (120 min). Moreover, analyses of electricity consumption and environmental impact showed smaller values for the solvent-free microwave extraction method than for microwave hydrodistillation. It is concluded that the solvent-free microwave extraction method is a suitable new green technique for patchouli oil extraction.

  17. Automatic liver volume segmentation and fibrosis classification

    Science.gov (United States)

    Bal, Evgeny; Klang, Eyal; Amitai, Michal; Greenspan, Hayit

    2018-02-01

    In this work, we present an automatic method for liver segmentation and fibrosis classification in liver computed tomography (CT) portal-phase scans. The input is a full abdomen CT scan with an unknown number of slices, and the output is a liver volume segmentation mask and a fibrosis grade. A multi-stage analysis scheme is applied to each scan, including volume segmentation, texture feature extraction, and SVM-based classification. The data contain portal-phase CT examinations from 80 patients, taken with different scanners. Each examination has a matching FibroScan grade. The dataset was subdivided into two groups: the first group contains healthy cases and mild fibrosis; the second group contains moderate fibrosis, severe fibrosis, and cirrhosis. Using our automated algorithm, we achieved an average Dice index of 0.93 ± 0.05 for segmentation, and a sensitivity of 0.92 and specificity of 0.81 for classification. To the best of our knowledge, this is the first end-to-end automatic framework for liver fibrosis classification; an approach that, once validated, can have great potential value in the clinic.

  18. Automatic speech recognition (zero crossing method). Automatic recognition of isolated vowels

    International Nuclear Information System (INIS)

    Dupeyrat, Benoit

    1975-01-01

    This note describes a method for recognizing isolated vowels, using preprocessing of the vocal signal. The processing extracts the extrema of the vocal signal and the time intervals separating them (zero-crossing distances of the first derivative of the signal). The recognition of vowels uses normalized histograms of the values of these intervals. The program determines a distance between the histogram of the sound to be recognized and model histograms built during a learning phase. The results, processed in real time by a minicomputer, are relatively independent of the speaker, provided the fundamental frequency does not vary too much (i.e. speakers of the same sex). (author) [fr]
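
    The preprocessing described, intervals between extrema (zero crossings of the first derivative) summarized as normalized histograms, can be sketched directly; the bin count, range, and toy signal are assumptions.

        import numpy as np

        def interval_histogram(signal, n_bins=32, max_gap=200):
            """Normalized histogram of gaps between extrema of the signal."""
            d = np.diff(signal)
            # A sign change in the first derivative marks an extremum.
            extrema = np.where(np.diff(np.sign(d)) != 0)[0]
            gaps = np.diff(extrema)
            hist, _ = np.histogram(gaps, bins=n_bins, range=(0, max_gap))
            return hist / max(hist.sum(), 1)

        t = np.linspace(0, 0.1, 1600)
        vowel = np.sin(2 * np.pi * 220 * t) + 0.4 * np.sin(2 * np.pi * 700 * t)
        h = interval_histogram(vowel)            # compare to stored model histograms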

  19. Automatic characterization of dynamics in Absence Epilepsy

    DEFF Research Database (Denmark)

    Petersen, Katrine N. H.; Nielsen, Trine N.; Kjær, Troels W.

    2013-01-01

    Dynamics of the spike-wave paroxysms in Childhood Absence Epilepsy (CAE) are automatically characterized using novel approaches. Features are extracted from scalograms formed by Continuous Wavelet Transform (CWT). Detection algorithms are designed to identify an estimate of the temporal development...

  20. A new robust markerless method for automatic image-to-patient registration in image-guided neurosurgery system.

    Science.gov (United States)

    Liu, Yinlong; Song, Zhijian; Wang, Manning

    2017-12-01

    Compared with traditional point-based registration in image-guided neurosurgery systems, surface-based registration is preferable because it does not use fiducial markers before image scanning and does not require image acquisition dedicated to navigation purposes. However, most existing surface-based registration methods must include a manual step for coarse registration, which increases the registration time and introduces some inconvenience and uncertainty. A new automatic surface-based registration method is proposed, which applies a 3D surface feature description and matching algorithm to obtain point correspondences for coarse registration, and uses the iterative closest point (ICP) algorithm in the last step to obtain the image-to-patient registration. Both phantom and clinical data were used to execute automatic registrations, and the target registration error (TRE) was calculated to verify the practicality and robustness of the proposed method. In phantom experiments, the registration accuracy was stable across different downsampling resolutions (18-26 mm) and different support radii (2-6 mm). In clinical experiments, the mean TREs for two patients obtained by registering full head surfaces were 1.30 mm and 1.85 mm. This study introduced a new robust automatic surface-based registration method based on 3D feature matching. The method achieved sufficient registration accuracy with different real-world surface regions in phantom and clinical experiments.
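
    The ICP refinement admits a compact sketch: alternate nearest-neighbour correspondence with a closed-form (Kabsch/SVD) rigid update. This is a generic point-to-point ICP, not the authors' implementation; the toy motion at the end is an assumption.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp(src, dst, n_iter=30):
            """Minimal point-to-point ICP returning rotation R and translation t."""
            tree = cKDTree(dst)
            R, t = np.eye(3), np.zeros(3)
            cur = src.copy()
            for _ in range(n_iter):
                matched = dst[tree.query(cur)[1]]          # nearest neighbours
                mu_s, mu_d = cur.mean(0), matched.mean(0)
                U, _, Vt = np.linalg.svd((cur - mu_s).T @ (matched - mu_d))
                if np.linalg.det(U @ Vt) < 0:              # avoid reflections
                    Vt[-1] *= -1
                R_step = (U @ Vt).T
                t_step = mu_d - R_step @ mu_s
                cur = cur @ R_step.T + t_step
                R, t = R_step @ R, R_step @ t + t_step
            return R, t

        src = np.random.rand(200, 3)
        th = 0.1                                           # toy rigid motion
        Rz = np.array([[np.cos(th), -np.sin(th), 0],
                       [np.sin(th),  np.cos(th), 0],
                       [0, 0, 1]])
        R, t = icp(src, src @ Rz.T + [0.1, -0.05, 0.2])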

  1. Improving KPCA Online Extraction by Orthonormalization in the Feature Space.

    Science.gov (United States)

    Souza Filho, Joao B O; Diniz, Paulo S R

    2018-04-01

    Recently, some online kernel principal component analysis (KPCA) techniques based on the generalized Hebbian algorithm (GHA) were proposed for use in large data sets, defining kernel components using concise dictionaries automatically extracted from data. This brief proposes two new online KPCA extraction algorithms exploiting orthogonalized versions of the GHA rule. In both cases, the orthogonalization of kernel components is achieved by including some low-complexity additional steps in the kernel Hebbian algorithm, thus not substantially affecting the computational cost of the algorithm. Results show improved convergence speed and accuracy of the components extracted by the proposed methods, as compared with state-of-the-art online KPCA extraction algorithms.

  2. Development of automatic control method for cryopump system for JT-60 neutral beam injector

    International Nuclear Information System (INIS)

    Shibanuma, Kiyoshi; Akino, Noboru; Dairaku, Masayuki; Ohuchi, Yutaka; Shibata, Takemasa

    1991-10-01

    The cryopump system for the JT-60 neutral beam injector (NBI) is composed of 14 cryopumps with the largest total pumping speed in the world, 20,000 m³/s; they are cooled with liquid helium through a long-distance liquid helium transfer line of about 500 m from a helium refrigerator with the largest capacity in Japan, 3000 W at 3.6 K. An automatic control method for the cryopump system has been developed and tested. The features of the automatic control method are as follows. 1) Suppression control of the thermal imbalance during cool-down of the 14 cryopumps. 2) Stable cooling control of the cryopumps through liquid helium supply to the six cryopanels by natural circulation in steady-state mode. 3) Stable liquid helium supply control for the cryopumps from the liquid helium dewar in all operation modes, considering the helium quantities held in the respective components of the closed helium loop. 4) Stable control of the helium refrigerator against fluctuations in the thermal load from the cryopumps and changes of their operation mode. In automatic operation of the cryopump system under the newly developed control method, the system including the refrigerator was operated stably in all operation modes: the cool-down of the 14 cryopumps was completed within 16 hours from the start of system cool-down, and the cryopumps were cooled stably by natural circulation in steady-state mode. (author)

  3. Development of an extraction method for perchlorate in soils.

    Science.gov (United States)

    Cañas, Jaclyn E; Patel, Rashila; Tian, Kang; Anderson, Todd A

    2006-03-01

    Perchlorate originates as a contaminant in the environment from its use in solid rocket fuels and munitions. The current US EPA methods for perchlorate determination via ion chromatography using conductivity detection do not include recommendations for the extraction of perchlorate from soil. This study evaluated and identified appropriate conditions for the extraction of perchlorate from clay loam, loamy sand, and sandy soils. Based on the results of this evaluation, soils should be extracted in a dry, ground (mortar and pestle) state with Milli-Q water at a 1:1 soil:water ratio and diluted no more than 5-fold before analysis. When sandy soils were extracted in this manner, the calculated method detection limit was 3.5 μg kg⁻¹. The findings of this study have aided in the establishment of a standardized extraction method for perchlorate in soil.

  4. Improving the local wavenumber method by automatic DEXP transformation

    Science.gov (United States)

    Abbas, Mahmoud Ahmed; Fedi, Maurizio; Florio, Giovanni

    2014-12-01

    In this paper we present a new method for source parameter estimation based on the local wavenumber function. We make use of the stable properties of the Depth from EXtreme Points (DEXP) method, in which the depth to the source is determined at the extreme points of the field scaled with a power law of the altitude. The method is thus particularly suited to dealing with local wavenumbers of high order, as it is able to overcome their known instability caused by the use of high-order derivatives. The DEXP transformation enjoys a relevant feature when applied to the local wavenumber function: the scaling law is independent of the structural index. So, differently from the DEXP transformation applied directly to potential fields, the local wavenumber DEXP transformation is fully automatic and may be implemented as a very fast imaging method, mapping every kind of source at the correct depth. The simultaneous presence of sources with different homogeneity degrees can also be treated easily and correctly. The method was applied to synthetic and real examples from Bulgaria and Italy, and the results agree well with known information about the causative sources.

  5. Automatic Sleep Staging using Multi-dimensional Feature Extraction and Multi-kernel Fuzzy Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Yanjun Zhang

    2014-01-01

    Full Text Available This paper employed clinical polysomnographic (PSG) data, mainly including all-night electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) signals of subjects, and adopted the American Academy of Sleep Medicine (AASM) clinical staging manual as the standard for automatic sleep staging. The authors extracted eighteen different features of the EEG, EOG and EMG in the time and frequency domains to construct feature vectors, according to the existing literature as well as clinical experience. By self-learning on the sleep samples, the linear combination weights and parameters of the multiple kernels of the fuzzy support vector machine (FSVM) were learned and the multi-kernel FSVM (MK-FSVM) was constructed. The overall agreement between the experts' scores and the presented results was 82.53%. Compared with previous results, the accuracy for N1 was improved to some extent while the accuracies for the other stages were comparable, which reflects the sleep structure well. The staging algorithm proposed in this paper is transparent and worth further investigation.

  6. Feasibility of Automatic Extraction of Electronic Health Data to Evaluate a Status Epilepticus Clinical Protocol.

    Science.gov (United States)

    Hafeez, Baria; Paolicchi, Juliann; Pon, Steven; Howell, Joy D; Grinspan, Zachary M

    2016-05-01

    Status epilepticus is a common neurologic emergency in children. Pediatric medical centers often develop protocols to standardize care. Widespread adoption of electronic health records by hospitals affords the opportunity for clinicians to rapidly, and electronically evaluate protocol adherence. We reviewed the clinical data of a small sample of 7 children with status epilepticus, in order to (1) qualitatively determine the feasibility of automated data extraction and (2) demonstrate a timeline-style visualization of each patient's first 24 hours of care. Qualitatively, our observations indicate that most clinical data are well labeled in structured fields within the electronic health record, though some important information, particularly electroencephalography (EEG) data, may require manual abstraction. We conclude that a visualization that clarifies a patient's clinical course can be automatically created using the patient's electronic clinical data, supplemented with some manually abstracted data. Future work could use this timeline to evaluate adherence to status epilepticus clinical protocols. © The Author(s) 2015.

  7. A comparison of coronal mass ejections identified by manual and automatic methods

    Directory of Open Access Journals (Sweden)

    S. Yashiro

    2008-10-01

    Full Text Available Coronal mass ejections (CMEs) are related to many phenomena (e.g. flares, solar energetic particles, geomagnetic storms), thus the compilation of event catalogs is important for a global understanding of these phenomena. CMEs have been identified manually for a long time, but in the SOHO era, automatic identification methods are being developed. In order to clarify the advantages and disadvantages of the manual and automatic CME catalogs, we examined the distributions of CME properties listed in the CDAW (manual) and CACTus (automatic) catalogs. Both catalogs agree well on the properties of wide CMEs (width > 120°), while there is a significant discrepancy for narrow CMEs (width ≤ 30°): CACTus has a larger number of narrow CMEs than CDAW. We carried out an event-by-event examination of a sample of events and found that the CDAW catalog has missed many narrow CMEs during the solar maximum. Another significant discrepancy was found for fast CMEs (speed > 1000 km/s): the majority of the fast CDAW CMEs are wide and originate from low latitudes, while the fast CACTus CMEs are narrow and originate from all latitudes. Event-by-event examination of a sample of events suggests that CACTus has a problem with the detection of fast CMEs.

  8. Detection of alcoholism based on EEG signals and functional brain network features extraction

    NARCIS (Netherlands)

    Ahmadi, N.; Pei, Y.; Pechenizkiy, M.

    2017-01-01

    Alcoholism is a common disorder that leads to brain defects and associated cognitive, emotional and behavioral impairments. Finding and extracting discriminative biological markers, which are correlated to healthy brain patterns and alcoholic brain patterns, helps us to utilize automatic methods for the detection of alcoholism.
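
    A common way to obtain such functional-network markers is to build a connectivity graph from channel correlations and read off graph metrics; the pipeline below is an assumed generic version for illustration, not the authors' exact method.

```python
import numpy as np
import networkx as nx

eeg = np.random.randn(64, 2048)          # 64 channels of toy EEG samples
corr = np.corrcoef(eeg)                  # channel-by-channel connectivity
adj = (np.abs(corr) > 0.5).astype(int)   # threshold into a binary graph
np.fill_diagonal(adj, 0)

G = nx.from_numpy_array(adj)
features = {
    "mean_degree": np.mean([d for _, d in G.degree()]),
    "clustering": nx.average_clustering(G),
    "density": nx.density(G),
}
print(features)                          # candidate network features
```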

  9. Automatic tissue image segmentation based on image processing and deep learning

    Science.gov (United States)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in multimodality imaging, especially in the fusion of structural images offered by CT and MRI with functional images collected by optical or other novel imaging technologies. In addition, image segmentation provides a detailed structural description for the quantitative visualization of treatment light distribution in the human body when incorporated with a 3D light transport simulation method. Here we used image enhancement, operators, and morphometry methods to extract accurate contours of different tissues, such as skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM), on 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic segmentation of images by deep learning. We also introduced parallel computing. Such approaches greatly reduce the processing time compared to manual and semi-automatic segmentation and are of great importance in improving speed and accuracy as more and more samples are learned. Our results can be used as criteria when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in gray matter or white matter. We demonstrated the great potential of combining such image processing and deep learning for automatic tissue image segmentation in personalized medicine, especially in monitoring and treatment.

  10. Automatic segmentation and 3-dimensional display based on the knowledge of head MRI images

    International Nuclear Information System (INIS)

    Suzuki, Hidetomo; Toriwaki, Jun-ichiro.

    1987-01-01

    In this paper we present a procedure which automatically extracts soft tissues, such as subcutaneous fat, brain, and cerebral ventricle, from multislice MRI images of the head region, and displays their 3-dimensional images. Segmentation of soft tissues is done by use of iterative thresholding. In order to select the optimum threshold value automatically, we introduce a measure to evaluate the goodness of segmentation into this procedure. When the measure satisfies given conditions, the iteration of thresholding terminates, and the final result of segmentation is extracted using the current threshold value. Since this procedure can execute segmentation and calculation of the goodness measure in each slice automatically, it remarkably reduces the effort required of users. Moreover, the 3-dimensional display of the segmented tissues shows that this procedure can extract the shape of each soft tissue with reasonable precision for clinical use. (author)
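
    The iterative-thresholding idea can be sketched generically (in the Ridler-Calvard style); the stopping rule below is a simple convergence tolerance, whereas the record's procedure uses its own goodness-of-segmentation measure, which is not reproduced here.

```python
import numpy as np

def iterative_threshold(img, tol=0.5):
    # Start from the global mean, then repeatedly set the threshold to the
    # midpoint of the two class means until it stops moving.
    t = img.mean()
    while True:
        lower, upper = img[img <= t], img[img > t]
        t_new = 0.5 * (lower.mean() + upper.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

slice_ = np.random.rand(256, 256) * 255       # stand-in for one MRI slice
mask = slice_ > iterative_threshold(slice_)   # crude tissue mask
print(mask.mean())
```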

  11. A method of automatic data processing in radiometric control

    International Nuclear Information System (INIS)

    Adonin, V.M.; Gulyukina, N.A.; Nemirov, Yu.V.; Mogil'nitskij, M.I.

    1980-01-01

    The algorithm for automatic data processing in gamma radiography of products is described. A specific feature of the processing is its rapidity, due to the application of recurrent evaluation. Experimental data from by-line control are presented. The results obtained have shown the applicability of automatic signal processing to testing under industrial conditions, which would permit increasing the testing efficiency, eliminating subjectivism in the assessment of testing results, and improving working conditions.

  12. Automatic validation of numerical solutions

    DEFF Research Database (Denmark)

    Stauning, Ole

    1997-01-01

    This thesis is concerned with ``Automatic Validation of Numerical Solutions''. The basic theory of interval analysis and self-validating methods is introduced. The mean value enclosure is applied to discrete mappings for obtaining narrow enclosures of the iterates when applying these mappings...... differential equations, but in this thesis, we describe how to use the methods for enclosing iterates of discrete mappings, and then later use them for discretizing solutions of ordinary differential equations. The theory of automatic differentiation is introduced, and three methods for obtaining derivatives...... are described: the forward, the backward, and the Taylor expansion methods. The three methods have been implemented in the C++ program packages FADBAD/TADIFF. Some examples showing how to use the three methods are presented. A feature of FADBAD/TADIFF not present in other automatic differentiation packages...
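
    Forward-mode automatic differentiation, one of the three methods named, can be demonstrated with a minimal dual-number class; this toy Python analogue only hints at what FADBAD/TADIFF implement via C++ operator overloading.

```python
class Dual:
    """A (value, derivative) pair propagated through arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1    # f(x) = 3x^2 + 2x + 1

x = Dual(2.0, 1.0)                  # seed dx/dx = 1
y = f(x)
print(y.val, y.dot)                 # 17.0 and f'(2) = 14.0
```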

  13. Comparative Analysis of Music Recordings from Western and Non-Western traditions by Automatic Tonal Feature Extraction

    Directory of Open Access Journals (Sweden)

    Emilia Gómez

    2008-09-01

    Full Text Available The automatic analysis of large musical corpora by means of computational models overcomes some limitations of manual analysis, and the unavailability of scores for most existing music makes it necessary to work with audio recordings. Until now, research in this area has focused on music from the Western tradition. Nevertheless, we might ask if the available methods are suitable when analyzing music from other cultures. We present an empirical approach to the comparative analysis of audio recordings, focusing on tonal features and data mining techniques. Tonal features are related to the pitch class distribution, pitch range and the employed scale, gamut and tuning system. We provide our initial but promising results obtained when trying to automatically distinguish music from Western and non-Western traditions; we analyze which descriptors are most relevant and study their distribution over 1500 pieces from different traditions and styles. As a result, some feature distributions differ for Western and non-Western music, and the obtained classification accuracy is higher than 80% for different classification algorithms and an independent test set. These results show that automatic description of audio signals together with data mining techniques provides the means to characterize huge music collections from different traditions and complements musicological manual analyses.
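
    The pitch-class-distribution descriptor mentioned can be approximated with any chroma implementation; a sketch assuming librosa is available and using a hypothetical input file:

```python
import numpy as np
import librosa  # assumed available; any chroma implementation would do

y, sr = librosa.load("recording.wav")          # hypothetical audio file
chroma = librosa.feature.chroma_stft(y=y, sr=sr)
pitch_class_dist = chroma.mean(axis=1)
pitch_class_dist /= pitch_class_dist.sum()     # normalise to a distribution
print(np.round(pitch_class_dist, 3))           # 12 values, C through B
```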

  14. Method of purifying phosphoric acid after solvent extraction

    International Nuclear Information System (INIS)

    Kouloheris, A.P.; Lefever, J.A.

    1979-01-01

    A method of purifying phosphoric acid after solvent extraction is described. The phosphoric acid is contacted with a sorbent which sorbs or takes up the residual amount of organic carrier, and the phosphoric acid is then separated from the organic-carrier-laden sorbent. The method is especially suitable for removing residual organic carrier from phosphoric acid after solvent-extraction uranium recovery. (author)

  15. A method for the automatic quantification of the completeness of pulmonary fissures: evaluation in a database of subjects with severe emphysema

    Energy Technology Data Exchange (ETDEWEB)

    Rikxoort, Eva M. van; Goldin, Jonathan G.; Galperin-Aizenberg, Maya; Abtin, Fereidoun; Kim, Hyun J.; Lu, Peiyun; Shaw, Greg; Brown, Matthew S. [University of California-Los Angeles, Center for Computer Vision and Imaging Biomarkers and Thoracic Imaging Research Group, Department of Radiological Sciences, David Geffen School of Medicine, Los Angeles, CA (United States); Ginneken, Bram van [Radboud University Nijmegen Medical Centre, Diagnostic Image Analysis Group, Department of Radiology, Nijmegen (Netherlands); University Medical Center Utrecht, Image Sciences Institute, Department of Radiology, Utrecht (Netherlands)

    2012-02-15

    To propose and evaluate a technique for automatic quantification of fissural completeness from chest computed tomography (CT) in a database of subjects with severe emphysema. Ninety-six CT studies of patients with severe emphysema were included. The lungs, fissures and lobes were automatically segmented. The completeness of the fissures was calculated as the percentage of the lobar border defined by a fissure. The completeness score of the automatic method was compared with a visual consensus read by three radiologists using boxplots, rank sum tests and ROC analysis. The consensus read found 49% (47/96), 15% (14/96) and 67% (64/96) of the right major, right minor and left major fissures to be complete. For all fissures visually assessed as being complete the automatic method resulted in significantly higher completeness scores (mean 92.78%) than for those assessed as being partial or absent (mean 77.16%; all p values <0.001). The areas under the curves for the automatic fissural completeness were 0.88, 0.91 and 0.83 for the right major, right minor and left major fissures respectively. An automatic method is able to quantify fissural completeness in a cohort of subjects with severe emphysema consistent with a visual consensus read of three radiologists. (orig.)
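
    The completeness score itself, the percentage of the lobar border defined by a fissure, reduces to a mask overlap ratio once the segmentations exist; a minimal sketch with toy 2D masks (the paper operates on 3D segmentations):

```python
import numpy as np

def fissure_completeness(lobar_border, fissure):
    # Percentage of lobar-border voxels that coincide with a detected fissure.
    border = lobar_border.astype(bool)
    return 100.0 * (border & fissure.astype(bool)).sum() / border.sum()

border = np.zeros((64, 64), bool); border[32, :] = True     # toy lobar border
fissure = np.zeros((64, 64), bool); fissure[32, :48] = True  # partial fissure
print(fissure_completeness(border, fissure))                 # -> 75.0
```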

  16. Using activity-related behavioural features towards more effective automatic stress detection.

    Directory of Open Access Journals (Sweden)

    Dimitris Giakoumis

    Full Text Available This paper introduces activity-related behavioural features that can be automatically extracted from a computer system, with the aim of increasing the effectiveness of automatic stress detection. The proposed features are based on the processing of appropriate video and accelerometer recordings taken from the monitored subjects. For the purposes of the present study, an experiment was conducted that utilized a stress-induction protocol based on the Stroop colour word test. Video, accelerometer and biosignal (Electrocardiogram and Galvanic Skin Response) recordings were collected from nineteen participants. Then, an explorative study was conducted by following a methodology mainly based on spatiotemporal descriptors (Motion History Images) that are extracted from video sequences. A large set of activity-related behavioural features, potentially useful for automatic stress detection, were proposed and examined. Experimental evaluation showed that several of these behavioural features significantly correlate with self-reported stress. Moreover, it was found that the use of the proposed features can significantly enhance the performance of typical automatic stress detection systems, commonly based on biosignal processing.
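
    Motion History Images can be computed without any special library: each update stamps moving pixels with the current time and forgets stale ones. The frame-differencing motion mask and all parameters below are assumptions for illustration, not the study's settings.

```python
import numpy as np

def update_mhi(mhi, motion_mask, timestamp, duration=1.0):
    # Set moving pixels to the current timestamp and zero out pixels that
    # last moved more than `duration` seconds ago.
    mhi[motion_mask] = timestamp
    mhi[mhi < timestamp - duration] = 0
    return mhi

mhi = np.zeros((120, 160), np.float32)
prev = np.random.rand(120, 160)
for t in np.arange(0.04, 1.0, 0.04):           # ~25 fps of toy frames
    frame = np.random.rand(120, 160)
    motion = np.abs(frame - prev) > 0.3        # crude frame differencing
    mhi = update_mhi(mhi, motion, t)
    prev = frame
print(mhi.max())
```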

  17. Extraction methods of Amaranthus sp. grain oil isolation.

    Science.gov (United States)

    Krulj, Jelena; Brlek, Tea; Pezo, Lato; Brkljača, Jovana; Popović, Sanja; Zeković, Zoran; Bodroža Solarov, Marija

    2016-08-01

    Amaranthus sp. is a fast-growing crop with well-known beneficial nutritional values (rich in protein, fat, dietary fiber, ash, and minerals, especially calcium and sodium, and containing a higher amount of lysine than conventional cereals). Amaranthus sp. is an underexploited plant source of squalene, a compound of high importance in the food, cosmetic and pharmaceutical industries. This paper has examined the effects of different extraction methods (Soxhlet, supercritical fluid and accelerated solvent extraction) on the oil and squalene yield of three genotypes of Amaranthus sp. grain. The highest yield of the extracted oil (78.1 g kg(-1)) and squalene (4.7 g kg(-1)) in grain was obtained by accelerated solvent extraction (ASE) in genotype 16. A post hoc Tukey's HSD test at the 95% confidence limit showed significant differences between the observed samples. Principal component analysis (PCA) and cluster analysis (CA) were used for assessing the effect of different genotypes and extraction methods on oil and squalene yield, and also on the fatty acid composition profile. Using coupled PCA and CA of the observed samples, possible directions for improving product quality can be identified. The results of this study indicate that it is very important to choose both the right genotype and the right method of extraction for optimal oil and squalene yield. © 2015 Society of Chemical Industry.

  18. Position automatic determination technology

    International Nuclear Information System (INIS)

    1985-10-01

    This book describes methods of position determination and their characteristics, control methods for position determination and design considerations, sensor choice for position detectors, position determination in digital control systems, application of clutch brakes in high-frequency position determination, automation techniques for position determination, position determination by electromagnetic clutches and brakes, air cylinders, cams and solenoids, stop position control of automatic guided vehicles, and stacker crane and automatic transfer control.

  19. A Karnaugh-Map based fingerprint minutiae extraction method

    Directory of Open Access Journals (Sweden)

    Sunil Kumar Singla

    2010-07-01

    Full Text Available Fingerprint recognition is one of the most promising of all the biometric techniques and has been used for personal authentication for a long time because of its wide acceptance and reliability. Features (minutiae) are extracted from the fingerprint in question and are compared with the features already stored in the database for authentication. Crossing number (CN) is the most commonly used minutiae extraction method for fingerprints. In this paper, a new Karnaugh-Map based fingerprint minutiae extraction method is proposed and discussed. In the proposed algorithm the 8 neighbors of a pixel in a 3×3 window are arranged as the 8 bits of a byte and the corresponding hexadecimal (hex) value is calculated. These hex values are simplified using the standard Karnaugh-Map (K-map) technique to obtain the minimized logical expression. Experiments conducted on the FVC2002/Db1_a database reveal that the developed method is better than the crossing number (CN) method.
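
    The neighbour-to-byte packing step described can be vectorized as below; the neighbour bit ordering is an assumption, and the K-map minimisation itself is omitted.

```python
import numpy as np

def neighbour_bytes(skel):
    # Pack the 8 neighbours of every pixel into one byte (bit order assumed:
    # clockwise from the top-left neighbour).
    s = skel.astype(np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    out = np.zeros(s.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = np.roll(np.roll(s, -dy, axis=0), -dx, axis=1)
        out += neighbour.astype(int) << bit
    return out  # hex(out[y, x]) is the value fed to the K-map simplification

skeleton = np.random.rand(32, 32) > 0.8     # stand-in for a thinned ridge map
codes = neighbour_bytes(skeleton)
print(hex(codes[10, 10]))
```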

  20. Inorganic arsenic in seafood: does the extraction method matter?

    Science.gov (United States)

    Pétursdóttir, Ásta H; Gunnlaugsdóttir, Helga; Krupp, Eva M; Feldmann, Jörg

    2014-05-01

    Nine different extraction methods were evaluated for three seafood samples to test whether the concentration of inorganic arsenic (iAs) determined in seafood depends on the extraction method. Certified reference materials (CRM) DOLT-4 (Dogfish Liver) and TORT-2 (Lobster Hepatopancreas), and a commercial herring fish meal were evaluated. All experimental work described here was carried out by the same operator using the same instrumentation, thus eliminating possible differences in results caused by laboratory-related factors. Low concentrations of iAs were found in CRM DOLT-4 (0.012±0.003 mg kg(-1)) and the herring fish meal sample (0.007±0.002 mg kg(-1)) for all extraction methods. When comparing the concentration of iAs in CRM TORT-2 found in this study and in the literature, dilute acids, HNO3 and HCl, showed the highest extracted iAs, whereas dilute NaOH (in 50% ethanol) showed significantly lower extracted iAs. However, most other extraction solvents were not statistically different from one another. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Methods for microbial DNA extraction from soil for PCR amplification

    Directory of Open Access Journals (Sweden)

    Yeates C

    1998-01-01

    Full Text Available Amplification of DNA from soil is often inhibited by co-purified contaminants. A rapid, inexpensive, large-scale DNA extraction method involving minimal purification has been developed that is applicable to various soil types (1). The DNA is also suitable for PCR amplification using various DNA targets. DNA was extracted from 100 g of soil using direct lysis with glass beads and SDS, followed by potassium acetate precipitation, polyethylene glycol precipitation, phenol extraction and isopropanol precipitation. This method was compared to other DNA extraction methods with regard to DNA purity and size.

  2. Automatic methods of the processing of data from track detectors on the basis of the PAVICOM facility

    Science.gov (United States)

    Aleksandrov, A. B.; Goncharova, L. A.; Davydov, D. A.; Publichenko, P. A.; Roganova, T. M.; Polukhina, N. G.; Feinberg, E. L.

    2007-02-01

    New automatic methods essentially simplify and increase the rate of the processing of data from track detectors. This provides a possibility of processing large data arrays and considerably improves their statistical significance. This fact predetermines the development of new experiments which plan to use large-volume targets, large-area emulsion, and solid-state track detectors [1]. In this regard, the problem of training qualified physicists who are capable of operating modern automatic equipment is very important. Annually, about ten Moscow students master the new methods, working at the Lebedev Physical Institute at the PAVICOM facility [2-4]. Most students specializing in high-energy physics are only given an idea of archaic manual methods of the processing of data from track detectors. In 2005, on the basis of the PAVICOM facility and the physics training course of Moscow State University, a new training exercise was prepared. This exercise is devoted to the determination of the energy of neutrons passing through a nuclear emulsion. It provides the possibility of acquiring basic practical skills in the processing of data from track detectors using automatic equipment and can be included in the educational process of students of any physics faculty. Those who have mastered the methods of automatic data processing in a simple and pictorial example of track detectors will be able to apply their knowledge in various fields of science and technology. The formulation of training exercises for undergraduate and graduate students is a new, additional aspect of the application of the PAVICOM facility described earlier in [4].

  3. AUTOMR: An automatic processing program system for the molecular replacement method

    International Nuclear Information System (INIS)

    Matsuura, Yoshiki

    1991-01-01

    An automatic processing program system for the molecular replacement method, AUTOMR, is presented. The program solves the initial model of the target crystal structure using a homologous molecule as the search model. It processes the structure-factor calculation of the model molecule, the rotation function, the translation function and the rigid-group refinement successively in one computer job. Test calculations were performed for six protein crystals and the structures were solved in all cases. (orig.)

  4. Technical characterization by image analysis: an automatic method of mineralogical studies

    International Nuclear Information System (INIS)

    Oliveira, J.F. de

    1988-01-01

    The application of a modern, fully automated method of image analysis for the study of grain size distribution, modal assays, degree of liberation and mineralogical associations is discussed. The image analyser is interfaced with a scanning electron microscope and an energy-dispersive X-ray analyser. The image generated by backscattered electrons is analysed automatically, and the system has been used in assessment studies of applied mineralogy as well as in process control in the mining industry. (author) [pt

  5. A fast, simple and green method for the extraction of carbamate pesticides from rice by microwave assisted steam extraction coupled with solid phase extraction.

    Science.gov (United States)

    Song, Weitao; Zhang, Yiqun; Li, Guijie; Chen, Haiyan; Wang, Hui; Zhao, Qi; He, Dong; Zhao, Chun; Ding, Lan

    2014-01-15

    This paper presents a fast, simple and green sample pretreatment method for the extraction of 8 carbamate pesticides from rice. The carbamate pesticides were extracted by a microwave-assisted water steam extraction method, and the extract obtained was immediately applied to a C18 solid phase extraction cartridge for clean-up and concentration. The eluate containing the target compounds was finally analysed by high performance liquid chromatography with mass spectrometry. The parameters affecting extraction efficiency were investigated and optimised. Limits of detection ranging from 1.1 to 4.2 ng g(-1) were obtained. The recoveries of the 8 carbamate pesticides ranged from 66% to 117% at three spiked levels, and the inter- and intra-day relative standard deviation values were less than 9.1%. Compared with traditional methods, the proposed method required less extraction time and organic solvent. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. MIAQuant, a novel system for automatic segmentation, measurement, and localization comparison of different biomarkers from serialized histological slices.

    Science.gov (United States)

    Casiraghi, Elena; Cossa, Mara; Huber, Veronica; Rivoltini, Licia; Tozzi, Matteo; Villa, Antonello; Vergani, Barbara

    2017-11-02

    In clinical practice, automatic image analysis methods that quickly quantify histological results by objective and replicable means are becoming more and more necessary and widespread. Although several commercial software products are available for this task, they offer very little flexibility and are provided as black boxes without modifiable source code. To overcome these problems, we employed the commonly used MATLAB platform to develop an automatic method, MIAQuant, for the analysis of histochemical and immunohistochemical images, stained with various methods and acquired by different tools. It automatically extracts and quantifies markers characterized by various colors and shapes; furthermore, it aligns contiguous tissue slices stained by different markers and overlaps them with differing colors for visual comparison of their localization. Application of MIAQuant in clinical research fields, such as oncology and cardiovascular disease studies, has proven its efficacy, robustness and flexibility with respect to various problems; we highlight that the flexibility of MIAQuant makes it an important tool for basic research, where needs are constantly changing. The MIAQuant software and its user manual are freely available for clinical studies, pathological research, and diagnosis.
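
    Colour-based marker extraction of this kind typically reduces to thresholding in a suitable colour space; the sketch below (in Python rather than the record's MATLAB) uses a made-up HSV range for a brown DAB-like stain, not MIAQuant's calibrated colour models, and a synthetic image in place of a real section.

```python
import cv2
import numpy as np

img = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)  # stand-in slice
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (5, 50, 50), (30, 255, 255))         # assumed range
marker_fraction = 100.0 * np.count_nonzero(mask) / mask.size
print(f"marker area: {marker_fraction:.2f}% of the section")
```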

  7. Automatic Quantification of Radiographic Wrist Joint Space Width of Patients With Rheumatoid Arthritis.

    Science.gov (United States)

    Huo, Yinghe; Vincken, Koen L; van der Heijde, Desiree; de Hair, Maria J H; Lafeber, Floris P; Viergever, Max A

    2017-11-01

    Objective: Wrist joint space narrowing is a main radiographic outcome of rheumatoid arthritis (RA). Yet, automatic radiographic wrist joint space width (JSW) quantification for RA patients has not been widely investigated. The aim of this paper is to present an automatic method to quantify the JSW of three wrist joints that are least affected by bone overlapping and are frequently involved in RA. These joints are located around the scaphoid bone, viz. the multangular-navicular, capitate-navicular-lunate, and radiocarpal joints. Methods: The joint space around the scaphoid bone is detected by using consecutive searches of separate path segments, where each segment location aids in constraining the subsequent one. For joint margin delineation, first the boundary not affected by X-ray projection is extracted, followed by a backtrace process to obtain the actual joint margin. The accuracy of the quantified JSW is evaluated by comparison with the manually obtained ground truth. Results: Two of the 50 radiographs used for evaluation of the method did not yield a correct path through all three wrist joints. The delineated joint margins of the remaining 48 radiographs were used for JSW quantification. It was found that 90% of the joints had a JSW deviating less than 20% from the mean JSW of manual indications, with the mean JSW error less than 10%. Conclusion: The proposed method is able to automatically quantify the JSW of radiographic wrist joints reliably. The proposed method may aid clinical researchers in studying the progression of wrist joint damage in RA studies.

  8. A hybrid model based on neural networks for biomedical relation extraction.

    Science.gov (United States)

    Zhang, Yijia; Lin, Hongfei; Yang, Zhihao; Wang, Jian; Zhang, Shaowu; Sun, Yuanyuan; Yang, Liang

    2018-05-01

    Biomedical relation extraction can automatically extract high-quality biomedical relations from biomedical texts, which is a vital step for the mining of biomedical knowledge hidden in the literature. Recurrent neural networks (RNNs) and convolutional neural networks (CNNs) are two major neural network models for biomedical relation extraction. Neural network-based methods for biomedical relation extraction typically focus on the sentence sequence and employ RNNs or CNNs to learn the latent features from sentence sequences separately. However, RNNs and CNNs have their own advantages for biomedical relation extraction. Combining RNNs and CNNs may improve biomedical relation extraction. In this paper, we present a hybrid model for the extraction of biomedical relations that combines RNNs and CNNs. First, the shortest dependency path (SDP) is generated based on the dependency graph of the candidate sentence. To make full use of the SDP, we divide the SDP into a dependency word sequence and a relation sequence. Then, RNNs and CNNs are employed to automatically learn the features from the sentence sequence and the dependency sequences, respectively. Finally, the output features of the RNNs and CNNs are combined to detect and extract biomedical relations. We evaluate our hybrid model using five public protein-protein interaction (PPI) corpora and a drug-drug interaction (DDI) corpus. The experimental results suggest that the advantages of RNNs and CNNs in biomedical relation extraction are complementary. Combining RNNs and CNNs can effectively boost biomedical relation extraction performance. Copyright © 2018 Elsevier Inc. All rights reserved.
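
    The two-branch architecture can be sketched in a few lines of Keras; layer sizes, the merge strategy, and the binary output below are assumptions for illustration, not the paper's exact network.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

vocab, dim, sent_len, sdp_len = 20000, 100, 60, 15   # assumed sizes

sent_in = layers.Input(shape=(sent_len,))            # full sentence tokens
sdp_in = layers.Input(shape=(sdp_len,))              # shortest dependency path

emb = layers.Embedding(vocab, dim)
rnn_feat = layers.Bidirectional(layers.LSTM(64))(emb(sent_in))   # RNN branch
cnn_feat = layers.GlobalMaxPooling1D()(
    layers.Conv1D(64, 3, activation="relu")(emb(sdp_in)))        # CNN branch

merged = layers.concatenate([rnn_feat, cnn_feat])    # combine both branches
out = layers.Dense(2, activation="softmax")(merged)  # relation / no relation

model = Model([sent_in, sdp_in], out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```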

  9. Interactive extraction of neural structures with user-guided morphological diffusion

    KAUST Repository

    Wan, Yong; Otsuna, H.; Chien, Chi-Bin; Hansen, C.

    2012-01-01

    Extracting neural structures with their fine details from confocal volumes is essential to quantitative analysis in neurobiology research. Despite the abundance of various segmentation methods and tools, for complex neural structures, both manual and semi-automatic methods are ineffective either in full 3D or when user interactions are restricted to 2D slices. Novel interaction techniques and fast algorithms are demanded by neurobiologists to interactively and intuitively extract neural structures from confocal data. In this paper, we present such an algorithm-technique combination, which lets users interactively select desired structures from visualization results instead of 2D slices. By integrating the segmentation functions with a confocal visualization tool, neurobiologists can easily extract complex neural structures within their typical visualization workflow.

  11. Effect of extraction method and orientin content on radio-protective effect of tulsi extracts

    Energy Technology Data Exchange (ETDEWEB)

    Tiwari, Mrinalini; Dwarakanath, B. S.; Agrawala, Paban K., E-mail: pkagrawal@gmail.com [Institute of Nuclear Medicine and Allied Sciences, Delhi (India); Murugan, R.; Parimelazhagan, T. [Department of Botany, Bharathiar University, Coimbatore (India); Uma Devi, P. [ARA-B-3SA, Plavilakonam,Trivandrum (India); Gota, V.; Sarin, R. K. [Advanced Centre for Treatment Research and Education in Cancer, Navi Mumbai (India)

    2014-07-01

    Extract of tulsi leaves (Ocimum sanctum) has been reported for its radioprotective efficacy. In our initial studies we observed significant variation in the survival of irradiated mice with different batches of tulsi extracts, and therefore we employed different extraction methods on leaves collected during various seasons from different localities to study any variation in the radioprotective efficacy. Orientin, a component of tulsi extract, was considered a marker. A mouse whole-body survival study (at 10 Gy lethal whole body irradiation) and a day 11 endo-CFU-s assay (at 5 Gy WBI) were performed employing 3 treatment schedules: 50 mg/kg or 25 mg/kg b.w (single injection, 30 min before irradiation), and 10 mg/kg b.w (one injection per day for 5 days, the last injection being 30 min before irradiation). A single dose of 25 mg/kg b.w (both aqueous and alcoholic) did not provide any significant survival benefit. The orientin concentrations in the extracts tested varied from 3.3 to 9.91 mg/g extract, as determined by the HPLC method. With a single administration (i.p.) of 50 mg/kg, the aqueous extract from leaves of the monsoon season had an orientin content of 9.91 mg/g extract and gave a survival of 60% with a CFU-s count of 37, while the extract of summer leaves had an orientin content of 4.15 mg/g extract and gave a survival of 50% with a CFU-s count of 11.6. At the same dose (50 mg/kg), the aqueous extract from the winter season had an orientin content of 3.30 mg/g extract and gave 25% survival with a CFU-s count of 19, while the ethanolic extract had an orientin content of 7.70 mg/g extract and gave a survival of 50% with a CFU-s count of 13. These observations suggest that climatic factors, orientin content and the doses of administration are important factors regulating the radioprotection afforded by different extracts of tulsi. (author)

  12. Automatic ultrasound image enhancement for 2D semi-automatic breast-lesion segmentation

    Science.gov (United States)

    Lu, Kongkuo; Hall, Christopher S.

    2014-03-01

    Breast cancer is the fastest growing cancer, accounting for 29% of new cases in 2012, and the second leading cause of cancer death among women in the United States and worldwide. Ultrasound (US) has been used as an indispensable tool for breast cancer detection/diagnosis and treatment. In computer-aided assistance, lesion segmentation is a preliminary but vital step, but the task is quite challenging in US images, due to imaging artifacts that complicate detection and measurement of the suspect lesions. The lesions usually present with poor boundary features and vary significantly in size, shape, and intensity distribution between cases. Automatic methods are highly application dependent, while manual tracing methods are extremely time consuming and have a great deal of intra- and inter-observer variability. Semi-automatic approaches are designed to balance the advantages and drawbacks of the automatic and manual methods. However, considerable user interaction might be necessary to ensure reasonable segmentation for a wide range of lesions. This work proposes an automatic enhancement approach that improves the boundary searching ability of the live wire method in order to reduce the necessary user interaction while keeping the segmentation performance. Based on the results of segmentation of 50 2D breast lesions in US images, less user interaction is required to achieve the desired accuracy, i.e. <80%, when auto-enhancement is applied for live-wire segmentation.

  13. Optimization-based Method for Automated Road Network Extraction

    International Nuclear Information System (INIS)

    Xiong, D

    2001-01-01

    Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper reviews the literature on the subject of road extraction and describes a study of an optimization-based method for automated road network extraction.

  14. Feature Extraction and Selection Strategies for Automated Target Recognition

    Science.gov (United States)

    Greene, W. Nicholas; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2010-01-01

    Several feature extraction and selection methods for an existing automatic target recognition (ATR) system using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height (OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a final classification stage. Feature extraction and selection concern transforming potential target data into more useful forms as well as selecting important subsets of that data which may aid in detection and classification. The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA) and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural net (NN) classifier.
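
    The PCA branch of such a pipeline is straightforward to prototype with scikit-learn; the chip size, component count, and labels below are illustrative choices, not the system's actual configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rois = np.random.rand(200, 32 * 32)        # 200 flattened 32x32 ROI chips
labels = np.random.randint(0, 2, 200)      # target / clutter (toy labels)

clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(rois, labels)
print(clf.predict(np.random.rand(5, 32 * 32)))
```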

  15. Effect of mucin extraction method on some properties of ...

    African Journals Online (AJOL)

    Effect of mucin extraction method on some properties of metronidazole mucoadhesive loaded patches. MI Arhewoh, SO Eraga, PF Builders, MA Ibobiri. Abstract. To evaluate the effects of mucin extraction method and plasticizer concentration on the bioadhesive strength and metronidazole release profile from mucin-based ...

  16. Simple Methods for Scanner Drift Normalization Validated for Automatic Segmentation of Knee Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Dam, Erik Bjørnager

    2018-01-01

    Scanner drift is a well-known magnetic resonance imaging (MRI) artifact characterized by gradual signal degradation and scan intensity changes over time. In addition, hardware and software updates may imply abrupt changes in signal. The combined effects are particularly challenging for automatic...... image analysis methods used in longitudinal studies. The implication is increased measurement variation and a risk of bias in the estimations (e.g. in the volume change for a structure). We proposed two quite different approaches for scanner drift normalization and demonstrated the performance...... for segmentation of knee MRI using the fully automatic KneeIQ framework. The validation included a total of 1975 scans from both high-field and low-field MRI. The results demonstrated that the pre-processing method denoted Atlas Affine Normalization significantly removed scanner drift effects and ensured...

  17. Method of semi-automatic high precision potentiometric titration for characterization of uranium compounds

    International Nuclear Information System (INIS)

    Cristiano, Barbara Fernandes G.; Dias, Fabio C.; Barros, Pedro D. de; Araujo, Radier Mario S. de; Delgado, Jose Ubiratan; Silva, Jose Wanderley S. da; Lopes, Ricardo T.

    2011-01-01

    The method of high precision potentiometric titration is widely used in the certification and characterization of uranium compounds. In order to reduce the analysis time and diminish the influence of the analyst, a semi-automatic version of the method was developed at the safeguards laboratory of CNEN-RJ, Brazil. The method was applied with traceability guaranteed by the use of a primary standard of potassium dichromate. The combined standard uncertainty in the determination of total uranium concentration was of the order of 0.01%, better than the 0.1% typical of the methods traditionally used by nuclear installations.

  18. Large-scale automatic extraction of side effects associated with targeted anticancer drugs from full-text oncological articles.

    Science.gov (United States)

    Xu, Rong; Wang, QuanQiu

    2015-06-01

    Targeted anticancer drugs such as imatinib, trastuzumab and erlotinib dramatically improved treatment outcomes in cancer patients; however, these innovative agents are often associated with unexpected side effects. The pathophysiological mechanisms underlying these side effects are not well understood. The availability of a comprehensive knowledge base of side effects associated with targeted anticancer drugs has the potential to illuminate complex pathways underlying toxicities induced by these innovative drugs. While side effect association knowledge for targeted drugs exists in multiple heterogeneous data sources, published full-text oncological articles represent an important source of pivotal, investigational, and even failed trials in a variety of patient populations. In this study, we present an automatic process to extract targeted anticancer drug-associated side effects (drug-SE pairs) from a large number of high-profile full-text oncological articles. We downloaded 13,855 full-text articles from the Journal of Clinical Oncology (JCO) published between 1983 and 2013. We developed text classification, relationship extraction, signal filtering, and signal prioritization algorithms to extract drug-SE pairs from the downloaded articles. We extracted a total of 26,264 drug-SE pairs with an average precision of 0.405, a recall of 0.899, and an F1 score of 0.465. We show that side effect knowledge from JCO articles is largely complementary to that from the US Food and Drug Administration (FDA) drug labels. Through integrative correlation analysis, we show that targeted drug-associated side effects positively correlate with their gene targets and disease indications. In conclusion, this unique database that we built from a large number of high-profile oncological articles could facilitate the development of computational models to understand toxic effects associated with targeted anticancer drugs. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Automatic method for selective enhancement of different tissue densities at digital chest radiography

    International Nuclear Information System (INIS)

    McNitt-Gray, M.F.; Taira, R.K.; Eldredge, S.L.; Razavi, M.

    1991-01-01

    This paper reports that digital chest radiographs often are too bright and/or lack contrast when viewed on a video display. The authors have developed a method that can automatically provide a series of look-up tables that selectively enhance the radiographically soft or dense tissues on a digital chest radiograph. This reduces viewer interaction and improves displayed image quality. On the basis of a histogram analysis, gray-level ranges are approximated for the patient background, radiographically soft tissues, and radiographically dense tissues. A series of look-up tables is automatically created by varying the contrast in each range to achieve a level of enhancement for a selected tissue range. This is repeated for differing amounts of enhancement and for each tissue range. This allows the viewer to interactively select a tissue density range and degree of enhancement at the time of display via precalculated look-up tables. Preclinical trials in pediatric radiology using computed radiography images show that this method reduces viewer interaction and improves or maintains the displayed image quality.
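
    A look-up table that selectively boosts contrast in one gray-level range while leaving the rest untouched can be built as below; the bit depth, range bounds, and gain are assumed values, and real tables would blend the range edges more smoothly.

```python
import numpy as np

def range_enhancing_lut(lo, hi, gain=2.0, levels=1024):
    # Identity LUT everywhere, with a contrast stretch about the midpoint
    # of [lo, hi); outputs are clipped to the valid gray-level range.
    lut = np.arange(levels, dtype=np.float32)
    mid = 0.5 * (lo + hi)
    lut[lo:hi] = np.clip(mid + gain * (lut[lo:hi] - mid), 0, levels - 1)
    return lut.astype(np.uint16)

img = np.random.randint(0, 1024, (512, 512))    # toy 10-bit radiograph
lut = range_enhancing_lut(lo=300, hi=700)       # assumed dense-tissue range
enhanced = lut[img]                             # apply the table
```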

  20. Tie Points Extraction for SAR Images Based on Differential Constraints

    Science.gov (United States)

    Xiong, X.; Jin, G.; Xu, Q.; Zhang, H.

    2018-04-01

    Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging because the efficiency and correct ratio of the image matching need to be improved. This paper proposes an automatic TPs extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in azimuth direction and large in range direction. Image pyramids are built firstly, and then corresponding layers of pyramids are matched from the top to the bottom. In the process, the similarity is measured by the normalized cross correlation (NCC) algorithm, which is calculated from a rectangular window with the long side parallel to the azimuth direction. False matches are removed by the differential constrained random sample consensus (DC-RANSAC) algorithm, which appends strong constraints in azimuth direction and weak constraints in range direction. Matching points in the lower pyramid images are predicted with the local bilinear transformation model in range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correct ratio and accuracy of the proposed method.
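
    The azimuth-elongated NCC window is the core of the matcher; a minimal sketch with toy data follows (the window sizes and the one-dimensional search range are assumptions, not the paper's settings).

```python
import numpy as np

def ncc(patch, template):
    # Normalized cross-correlation score of two equal-size windows.
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

# Rectangular window with the long side along azimuth (rows), per the paper.
win_az, win_rg = 64, 16                    # sizes are illustrative
ref = np.random.rand(512, 512)             # toy SAR master image
tpl = ref[100:100 + win_az, 200:200 + win_rg]
scores = [(r, ncc(ref[r:r + win_az, 200:200 + win_rg], tpl))
          for r in range(90, 110)]
print(max(scores, key=lambda s: s[1]))     # best azimuth offset
```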

  1. BMAA extraction of cyanobacteria samples: which method to choose?

    Science.gov (United States)

    Lage, Sandra; Burian, Alfred; Rasmussen, Ulla; Costa, Pedro Reis; Annadotter, Heléne; Godhe, Anna; Rydberg, Sara

    2016-01-01

    β-N-Methylamino-L-alanine (BMAA), a neurotoxin reportedly produced by cyanobacteria, diatoms and dinoflagellates, is proposed to be linked to the development of neurological diseases. BMAA has been found in aquatic and terrestrial ecosystems worldwide, both in its phytoplankton producers and in several invertebrate and vertebrate organisms that bioaccumulate it. LC-MS/MS is the most frequently used analytical technique in BMAA research due to its high selectivity, though consensus is lacking as to the best extraction method to apply. This study accordingly surveys the efficiency of three extraction methods regularly used in BMAA research to extract BMAA from cyanobacteria samples. The results obtained provide insights into possible reasons for the BMAA concentration discrepancies in previous publications. In addition and according to the method validation guidelines for analysing cyanotoxins, the TCA protein precipitation method, followed by AQC derivatization and LC-MS/MS analysis, is now validated for extracting protein-bound (after protein hydrolysis) and free BMAA from cyanobacteria matrix. BMAA biological variability was also tested through the extraction of diatom and cyanobacteria species, revealing a high variance in BMAA levels (0.0080-2.5797 μg g(-1) DW).

  2. Social network extraction based on Web: 1. Related superficial methods

    Science.gov (United States)

    Nasution, Mahyuddin Khairuddin Matyuso

    2018-01-01

    Often the nature of something affects the methods used to resolve issues related to it. The same holds for methods to extract social networks from the Web, which involve structured data of different types. This paper reviews several methods of social network extraction from the same source, the Web: the basic superficial method, the underlying superficial method, the description superficial method, and the related superficial methods. In terms of complexity, we derive inequalities between the methods and between their computations. In this case, we find that different results from the same tools separate the more complex methods from the simpler ones: extraction of a social network based on co-occurrence is more complex than extraction based on occurrences.

  3. Extraction kinetics of uranium (VI) with polyurethane foam

    International Nuclear Information System (INIS)

    Huang, Ting-Chia; Chen, Dong-Hwang; Huang, Shius-Dong; Huang, Ching-Tsven; Shieh, Mu-Chang.

    1993-01-01

    The extraction kinetics of uranium(VI) from aqueous nitrate solution with polyether-based polyurethane foam was investigated in a batch reactor with automatic squeezing. The extraction curves of uranium(VI) concentration in solution vs. extraction time exhibited a rather rapid exponential decay within the first few minutes, followed by a slower exponential decay during the remaining period. This phenomenon can be attributed to the presence of a two-phase structure, hard segment domains and a soft segment matrix, in the polyurethane foam. A two-stage rate model expressed as a superposition of two exponential curves was proposed, according to which the experimental data were fitted by an optimization method. The extraction rate of uranium(VI) was also found to increase with increasing temperature, nitrate concentration, and hydration of the cation of the nitrate salt. (author)
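
    The two-stage rate model is a superposition of two exponentials, C(t) = A1 e^(-k1 t) + A2 e^(-k2 t), and fitting it is a small nonlinear least-squares problem; synthetic data stands in for the measured concentrations below, and all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_exp(t, a1, k1, a2, k2):
    # Fast stage (hard segment domains) plus slow stage (soft segment matrix).
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

t = np.linspace(0, 120, 60)                          # minutes, toy grid
c = two_exp(t, 0.7, 0.5, 0.3, 0.02) + 0.005 * np.random.randn(t.size)

popt, _ = curve_fit(two_exp, t, c, p0=(0.5, 0.1, 0.5, 0.01))
print(popt)   # recovered (A1, k1, A2, k2)
```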

  4. Method of extraction under pressure of fossil material

    Energy Technology Data Exchange (ETDEWEB)

    Fredenmark, G L

    1942-02-24

    A method is described for the extraction under pressure of fossil material, such as coal, brown coal (lignite), peat, and oil shale. It is characterized by carrying out the extraction using fractions of shale oils and/or peat tar with a boiling point above 170 °C, and under such pressure that the extraction medium remains in a liquid state.

  5. Creation of voxel-based models for paediatric dosimetry from automatic segmentation methods

    International Nuclear Information System (INIS)

    Acosta, O.; Li, R.; Ourselin, S.; Caon, M.

    2006-01-01

    Full text: The first computational models representing human anatomy were mathematical phantoms, still far from accurate representations of the human body. These models have been used with radiation transport codes (Monte Carlo) to estimate organ doses from radiological procedures. Although new medical imaging techniques have recently allowed the construction of voxel-based models based on real anatomy, few child models from individual CT or MRI data have been reported [1,3]. For paediatric dosimetry purposes, a large range of voxel models by age is required, since scaling the anatomy from existing models is not sufficiently accurate. The small number of models available arises from the small number of CT or MRI data sets of children available and the long amount of time required to segment the data sets. The existing models have been constructed by manual segmentation slice by slice and using simple thresholding techniques. In medical image segmentation, considerable difficulties appear when applying classical techniques such as thresholding or simple edge detection. Until now, there is no evidence of more accurate or near-automatic methods being used in the construction of child voxel models. We aim to construct a range of paediatric voxel models, integrating automatic or semi-automatic 3D segmentation techniques. In this paper we present the first stage of this work using paediatric CT data.

  6. Automatic Feature Detection, Description and Matching from Mobile Laser Scanning Data and Aerial Imagery

    Science.gov (United States)

    Hussnain, Zille; Oude Elberink, Sander; Vosselman, George

    2016-06-01

    In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which involves utilizing corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description, and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step the MLSPC is patch-wise cropped and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondence achieved pixel-level accuracy, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to pixel level.
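
    The detection step can be prototyped with the plain Harris operator in OpenCV; the paper's adaptive variant is replaced here by a fixed response-threshold fraction, and the patch is synthetic rather than a real ortho image.

```python
import cv2
import numpy as np

patch = (np.random.rand(256, 256) * 255).astype(np.uint8)  # toy ortho patch
response = cv2.cornerHarris(np.float32(patch), blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(response > 0.01 * response.max())    # (row, col) points
print(len(corners), "candidate feature points")
```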

  7. Automatic evaluation of vessel diameter variation from 2D X-ray angiography.

    Science.gov (United States)

    M'hiri, Faten; Duong, Luc; Desrosiers, Christian; Dahdah, Nagib; Miró, Joaquim; Cheriet, Mohamed

    2017-11-01

    Early detection of blood vessel pathologies can be made through the evaluation of functional and structural abnormalities in the arteries, including the arterial distensibility measure. We propose a feasibility study on computing arterial distensibility automatically from monoplane 2D X-ray sequences for both small arteries (such as coronary arteries) and larger arteries (such as the aorta). To compute the distensibility measure, three steps were developed: First, the segment of an artery is extracted using our graph-based segmentation method. Then, the same segment is tracked in the moving sequence using our spatio-temporal segmentation method: the Temporal Vessel Walker. Finally, the diameter of the artery is measured automatically at each frame of the sequence based on the segmentation results. The method was evaluated using one simulated sequence and 4 patients' angiograms depicting the coronary arteries and three depicting the ascending aorta. Results of the simulated sequence achieved a Dice index of 98%, with a mean squared error in diameter measurement of [Formula: see text] mm. Results obtained from patients' X-ray sequences are consistent with manual assessment of the diameter by experts. The proposed method measures changes in diameter of a specific segment of a blood vessel during the cardiac sequence, automatically based on monoplane 2D X-ray sequence. Such information might become a key to help physicians in the detection of variations of arterial stiffness associated with early stages of various vasculopathies.

  8. A contextual image segmentation system using a priori information for automatic data classification in nuclear physics

    International Nuclear Information System (INIS)

    Benkirane, A.; Auger, G.; Chbihi, A.; Bloyet, D.; Plagnol, E.

    1994-01-01

    This paper presents an original approach to solve an automatic data classification problem by means of image processing techniques. The classification is achieved using image segmentation techniques for extracting the meaningful classes. Two types of information are merged for this purpose: the information contained in experimental images and a priori information derived from underlying physics (and adapted to image segmentation problem). This data fusion is widely used at different stages of the segmentation process. This approach yields interesting results in terms of segmentation performances, even in very noisy cases. Satisfactory classification results are obtained in cases where more ''classical'' automatic data classification methods fail. (authors). 25 refs., 14 figs., 1 append

  10. Robot-assisted automatic ultrasound calibration.

    Science.gov (United States)

    Aalamifar, Fereshteh; Cheng, Alexis; Kim, Younsu; Hu, Xiao; Zhang, Haichong K; Guo, Xiaoyu; Boctor, Emad M

    2016-10-01

    Ultrasound (US) calibration is the process of determining the unknown transformation from a coordinate frame such as the robot's tooltip to the US image frame, and is a necessary task for any robotic or tracked US system. US calibration requires submillimeter-range accuracy for most applications, but it is a time-consuming and repetitive task. We provide a new framework for automatic US calibration with robot assistance and without the need for temporal calibration. US calibration based on an active echo (AE) phantom was previously proposed, and its superiority over conventional cross-wire phantom-based calibration was shown. In this work, we use AE to guide the robotic arm motion through the process of data collection; we combine the capability of the AE point to localize itself in the frame of the US image with the automatic motion of the robotic arm to provide a framework for calibrating the arm to the US image automatically. We demonstrated the efficacy of the automated method compared to the manual method through experiments. To highlight the necessity of frequent ultrasound calibration, it is demonstrated that the calibration precision changed from 1.67 to 3.20 mm if the data collection is not repeated after dismounting/remounting of the probe holder. In a large data set experiment, similar reconstruction precision of automatic and manual data collection was observed, while the time was reduced by 58%. In addition, we compared ten automatic calibrations with ten manual ones, each performed in 15 min, and showed that all the automatic ones could converge when the initial matrix was set to identity, while this was not achieved by the manual data sets. Given the same initial matrix, the repeatability of the automatic method was [0.46, 0.34, 0.80, 0.47] versus [0.42, 0.51, 0.98, 1.15] mm in the manual case for the four corners of the US image. The submillimeter accuracy requirement of US calibration makes frequent data collections unavoidable. We proposed an automated

  11. A Survey of Automatic Protocol Reverse Engineering Approaches, Methods, and Tools on the Inputs and Outputs View

    Directory of Open Access Journals (Sweden)

    Baraka D. Sija

    2018-01-01

    Full Text Available A network protocol defines rules that control communications between two or more machines on the Internet, whereas Automatic Protocol Reverse Engineering (APRE) defines the way of extracting the structure of a network protocol without accessing its specifications. Sufficient knowledge of undocumented protocols is essential for security purposes, network policy implementation, and management of network resources. This paper reviews and analyzes a total of 39 approaches, methods, and tools towards Protocol Reverse Engineering (PRE) and classifies them into four divisions: approaches that reverse engineer protocol finite state machines, those that reverse engineer protocol formats, those that reverse engineer both protocol finite state machines and protocol formats, and approaches that focus directly on neither reverse engineering protocol formats nor protocol finite state machines. The efficiency of all approaches' outputs based on their selected inputs is analyzed in general, along with the appropriate reverse engineering input formats. Additionally, we present a discussion and extended classification in terms of automated versus manual approaches, known and novel categories of reverse engineered protocols, and a literature survey of reverse engineered protocols in relation to the seven-layer OSI (Open Systems Interconnection) model.

  12. Automatic Method for Controlling the Iodine Adsorption Number in Carbon Black Oil Furnaces

    Directory of Open Access Journals (Sweden)

    Zečević, N.

    2008-12-01

    Full Text Available There are numerous inlet process factors in carbon black oil furnaces which must be continuously and automatically adjusted to ensure stable quality of the final product. The six most important inlet process factors in carbon black oil furnaces are:
    1. volume flow of process air for combustion
    2. temperature of process air for combustion
    3. volume flow of natural gas supplying the heat necessary for the thermal conversion of the hydrocarbon oil feedstock into oil-furnace carbon black
    4. mass flow rate of the hydrocarbon oil feedstock
    5. type and quantity of additive for adjusting the structure of the oil-furnace carbon black
    6. quantity and position of the quench water for cooling the oil-furnace carbon black reaction.
    The adsorption capacity of oil-furnace carbon black is controlled via the mass flow rate of the hydrocarbon feedstock, the most important inlet process factor. In the industrial process, the adsorption capacity of oil-furnace carbon black is determined by laboratory analysis of the iodine adsorption number. A continuous and automatic method for controlling the iodine adsorption number in carbon black oil furnaces is presented, with the aim of controlling adsorption capacity as efficiently as possible. The proposed method exploits the correlation between the qualitative and quantitative composition of the process tail gases and the ratio of combustion air to hydrocarbon feedstock. It is shown that this air-to-feedstock ratio depends on the adsorption capacity, summarized by the iodine adsorption number, and on the BMCI index of the hydrocarbon oil feedstock. This correlation is illustrated in Figures 1 to 4. Of all the components of the process tail gases, the volume fraction of methane shows the best correlation for continuous and automatic control of the iodine adsorption number.
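
    The control concept lends itself to a compact ratio-control sketch. The following Python loop is illustrative only: all variable names, setpoints, gains, and limits are hypothetical and not taken from the paper; it merely shows how the measured methane volume fraction in the tail gas could trim the air-to-feedstock ratio.

    ```python
    # Hedged sketch: integral-style trimming of the air/oil ratio from the
    # tail-gas CH4 volume fraction (all numbers illustrative, not from the paper).

    def update_air_oil_ratio(ratio, ch4_measured, ch4_setpoint,
                             gain=0.5, lo=3.0, hi=8.0):
        """One control step: raise the ratio when tail-gas CH4 exceeds the
        setpoint associated with the target iodine adsorption number."""
        error = ch4_measured - ch4_setpoint
        ratio += gain * error              # simple integral action
        return min(max(ratio, lo), hi)     # clamp to a safe operating window

    ratio = 5.0                            # kg air per kg feedstock (illustrative)
    for ch4 in [0.60, 0.58, 0.55, 0.52]:   # measured CH4 volume fraction, %
        ratio = update_air_oil_ratio(ratio, ch4, ch4_setpoint=0.50)
        print(f"CH4={ch4:.2f}% -> air/oil ratio {ratio:.2f}")
    ```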

  13. Robust methods for automatic image-to-world registration in cone-beam CT interventional guidance

    International Nuclear Information System (INIS)

    Dang, H.; Otake, Y.; Schafer, S.; Stayman, J. W.; Kleinszig, G.; Siewerdsen, J. H.

    2012-01-01

    Purpose: Real-time surgical navigation relies on accurate image-to-world registration to align the coordinate systems of the image and patient. Conventional manual registration can present a workflow bottleneck and is prone to manual error and intraoperator variability. This work reports alternative means of automatic image-to-world registration, each method involving an automatic registration marker (ARM) used in conjunction with C-arm cone-beam CT (CBCT). The first involves a Known-Model registration method in which the ARM is a predefined tool, and the second is a Free-Form method in which the ARM is freely configurable. Methods: Studies were performed using a prototype C-arm for CBCT and a surgical tracking system. A simple ARM was designed with markers comprising a tungsten sphere within infrared reflectors to permit detection of markers both in x-ray projections and by an infrared tracker. The Known-Model method exercised a predefined specification of the ARM in combination with 3D-2D registration to estimate the transformation that yields the optimal match between the forward projection of the ARM and the measured projection images. The Free-Form method localizes markers individually in projection data by a robust Hough transform approach extended from previous work, backprojected to 3D image coordinates based on C-arm geometric calibration. Image-domain point sets were transformed to world coordinates by rigid-body point-based registration. The robustness and registration accuracy of each method were tested in comparison to manual registration across a range of body sites (head, thorax, and abdomen) of interest in CBCT-guided surgery, including cases with interventional tools in the radiographic scene. Results: The automatic methods exhibited similar target registration error (TRE) and were comparable or superior to manual registration for placement of the ARM within ∼200 mm of the C-arm isocenter. Marker localization in projection data was robust across all body sites tested.
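
    The rigid-body point-based registration step mentioned above is commonly solved in closed form by the SVD-based Kabsch/Horn algorithm. A minimal sketch with synthetic marker points (not the authors' implementation) follows.

    ```python
    import numpy as np

    def rigid_register(P, Q):
        """Least-squares rigid transform (R, t) mapping point set P onto Q."""
        cP, cQ = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cP).T @ (Q - cQ)                    # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))       # avoid reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cQ - R @ cP
        return R, t

    # Synthetic check: marker positions in image coordinates vs. world (tracker)
    # coordinates related by a known rotation and translation.
    rng = np.random.default_rng(1)
    P = rng.random((6, 3)) * 100                     # image-domain points (mm)
    a = 0.3
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0, 0.0, 1.0]])
    Q = P @ R_true.T + np.array([10.0, -5.0, 2.0])
    R, t = rigid_register(P, Q)
    print(np.linalg.norm(P @ R.T + t - Q, axis=1).mean())   # ~0 (noise-free)
    ```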

  14. Automatic supervision and fault detection of PV systems based on power losses analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chouder, A.; Silvestre, S. [Electronic Engineering Department, Universitat Politecnica de Catalunya, C/Jordi Girona 1-3, Campus Nord UPC, 08034 Barcelona (Spain)

    2010-10-15

    In this work, we present a new automatic supervision and fault detection procedure for PV systems based on power losses analysis. This automatic supervision system has been developed in the Matlab and Simulink environment. It includes parameter extraction techniques to calculate the main PV system parameters from monitoring data under real working conditions, taking into account the evolution of environmental irradiance and module temperature, allowing simulation of PV system behaviour in real time. The automatic supervision method analyses the output power losses present on the DC side of the PV generator, the capture losses. Two new power loss indicators are defined: thermal capture losses (L{sub ct}) and miscellaneous capture losses (L{sub cm}). The processing of these indicators allows the supervision system to generate a faulty signal as an indicator of fault detection in PV system operation. Two new indicators of the deviation of the DC variables with respect to the simulated ones have also been defined: the current and voltage ratios, R{sub C} and R{sub V}. By analysing both the faulty signal and the current/voltage ratios, the type of fault can be identified. The automatic supervision system has been successfully tested experimentally. (author)
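
    The loss indicators can be illustrated numerically. The sketch below uses the usual performance-analysis yield definitions (reference yield Yr, array yield Ya, capture losses Lc = Yr - Ya) and an assumed temperature-coefficient split into thermal and miscellaneous parts; the paper's exact definitions of L{sub ct} and L{sub cm} may differ, and all parameter values are illustrative.

    ```python
    def capture_losses(G, Tm, Pdc, P0=3000.0, gamma=-0.0045,
                       Gstc=1000.0, Tstc=25.0):
        """Split DC-side capture losses into thermal and miscellaneous parts.
        G: irradiance (W/m2), Tm: module temperature (C), Pdc: DC power (W),
        P0: nominal array power (W), gamma: power temperature coefficient (1/C).
        All parameter values are illustrative, not from the paper."""
        Yr = G / Gstc                        # reference yield
        Ya = Pdc / P0                        # array yield
        Lc = Yr - Ya                         # total capture losses
        Lct = Yr * (-gamma) * (Tm - Tstc)    # thermal capture losses
        Lcm = Lc - Lct                       # miscellaneous capture losses
        return Lc, Lct, Lcm

    Lc, Lct, Lcm = capture_losses(G=850.0, Tm=45.0, Pdc=2200.0)
    print(f"Lc={Lc:.3f}  Lct={Lct:.3f}  Lcm={Lcm:.3f}")
    # A fault flag could be raised when Lcm exceeds a threshold derived from
    # simulation of the healthy system; the ratios R_C = I_dc/I_sim and
    # R_V = V_dc/V_sim would then indicate the type of fault.
    ```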

  15. Automatic supervision and fault detection of PV systems based on power losses analysis

    International Nuclear Information System (INIS)

    Chouder, A.; Silvestre, S.

    2010-01-01

    In this work, we present a new automatic supervision and fault detection procedure for PV systems based on power losses analysis. This automatic supervision system has been developed in the Matlab and Simulink environment. It includes parameter extraction techniques to calculate the main PV system parameters from monitoring data under real working conditions, taking into account the evolution of environmental irradiance and module temperature, allowing simulation of PV system behaviour in real time. The automatic supervision method analyses the output power losses present on the DC side of the PV generator, the capture losses. Two new power loss indicators are defined: thermal capture losses (Lct) and miscellaneous capture losses (Lcm). The processing of these indicators allows the supervision system to generate a faulty signal as an indicator of fault detection in PV system operation. Two new indicators of the deviation of the DC variables with respect to the simulated ones have also been defined: the current and voltage ratios, RC and RV. By analysing both the faulty signal and the current/voltage ratios, the type of fault can be identified. The automatic supervision system has been successfully tested experimentally.

  16. Summary of water body extraction methods based on ZY-3 satellite

    Science.gov (United States)

    Zhu, Yu; Sun, Li Jian; Zhang, Chuan Yin

    2017-12-01

    Water extraction from remote sensing images is one of the main means of obtaining water body information. Owing to spectral characteristics, many methods cannot be applied to ZY-3 satellite imagery. To address this problem, we summarize the extraction methods applicable to ZY-3 and analyze the extraction results of existing methods. Based on the characteristics of those results, a method combining a water index (WI) with a single-band threshold and a method of texture filtering based on probability statistics are explored. In addition, the advantages and disadvantages of all methods are compared, which provides a reference for research on water extraction from images. The conclusions are as follows. 1) The NIR band has higher sensitivity to water; consequently, when the surface reflectance in the study area is dissimilar to that of water, a single-band threshold method or a multi-band operation can obtain the desired effect. 2) Compared with the water index and HIS optimal index methods, a rule-based object extraction method, which takes into account not only the spectral information of water but also spatial and texture feature constraints, can obtain better extraction results; however, the image segmentation process is time consuming and the definition of the rules requires a certain amount of knowledge. 3) Combining spectral relationships with a water index can eliminate the interference of shadows to a certain extent. When small water bodies are scarce or are not considered in further study, texture filtering based on probability statistics can effectively reduce noise in the result and, to a certain extent, avoid mixing shadows or paddy fields with water.
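
    As a concrete instance of the water-index-plus-threshold family discussed above, the sketch below combines McFeeters' NDWI, computable from the ZY-3 green and NIR bands, with a single-band NIR threshold; the band arrays and threshold values are illustrative stand-ins.

    ```python
    import numpy as np

    def ndwi_water_mask(green, nir, threshold=0.0):
        """McFeeters NDWI = (G - NIR) / (G + NIR); water tends to NDWI > 0.
        ZY-3 has no SWIR band, so the green/NIR index is the usual choice."""
        g = green.astype(np.float64)
        n = nir.astype(np.float64)
        ndwi = (g - n) / np.maximum(g + n, 1e-9)   # guard against divide-by-zero
        return ndwi > threshold

    # Illustrative data; real inputs would be co-registered ZY-3 bands.
    rng = np.random.default_rng(0)
    green = rng.integers(0, 255, (4, 4))
    nir = rng.integers(0, 255, (4, 4))

    # Combined rule from the text: water index AND a NIR single-band threshold.
    mask = ndwi_water_mask(green, nir) & (nir < 60)
    print(mask)
    ```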

  17. Semi-automatic version of the potentiometric titration method for characterization of uranium compounds.

    Science.gov (United States)

    Cristiano, Bárbara F G; Delgado, José Ubiratan; da Silva, José Wanderley S; de Barros, Pedro D; de Araújo, Radier M S; Dias, Fábio C; Lopes, Ricardo T

    2012-09-01

    The potentiometric titration method was used for the characterization of uranium compounds to be applied in intercomparison programs. The method is applied with traceability assured by a potassium dichromate primary standard. A semi-automatic version was developed to reduce analysis time and operator variation. The standard uncertainty in determining the total concentration of uranium was around 0.01%, which is suitable for uranium characterization and compatible with uncertainties obtained by manual techniques. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Influence of different extraction methods on the yield and linalool content of the extracts of Eugenia uniflora L.

    Science.gov (United States)

    Galhiane, Mário S; Rissato, Sandra R; Chierice, Gilberto O; Almeida, Marcos V; Silva, Letícia C

    2006-09-15

    This work was developed using a wild fruit tree native to the Brazilian forest, Eugenia uniflora L., of the Myrtaceae family. The main goal of the analytical study was the extraction methods themselves. Method development showed that Clevenger extraction gave the best yield relative to SFE and Soxhlet. The SFE method presented a good yield but showed a large number of components in the final extract, demonstrating low selectivity. The extracted essential oil was analyzed by GC/FID, revealing compounds spanning a wide range of polarities and boiling points, among which linalool, a widely used compound, was identified. Furthermore, an analytical solid phase extraction method was used for clean-up and to obtain separate classes of compounds, which were fractionated and studied by GC/FID and GC/MS.

  19. A Probabilistic Approach for Breast Boundary Extraction in Mammograms

    Directory of Open Access Journals (Sweden)

    Hamed Habibi Aghdam

    2013-01-01

    Full Text Available The extraction of the breast boundary is crucial for further analysis of mammograms. Methods to extract the breast boundary can be classified into two categories: methods based on image processing techniques and those based on models. The former use image transformation techniques such as thresholding, morphological operations, and region growing. In the second category, the boundary is extracted using more advanced techniques, such as the active contour model. The problem with thresholding methods is that it is hard to automatically find the optimal threshold value using histogram information. On the other hand, active contour models require a starting point close to the actual boundary to be able to extract the boundary successfully. In this paper, we propose a probabilistic approach to address the aforementioned problems. In our approach we use local binary patterns to describe the texture around each pixel. In addition, the smoothness of the boundary is handled by using a new probability model. Experimental results show that the proposed method achieves 38% and 50% improvement with respect to the results obtained by the active contour model and threshold-based methods, respectively, and it increases the stability of the boundary extraction process up to 86%.
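
    The texture-description step can be reproduced with an off-the-shelf LBP implementation. Below is a minimal scikit-image sketch on a random patch standing in for a mammogram; the paper's probability model for boundary smoothness is not reproduced.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern

    # Random 8-bit patch as a stand-in for a mammogram region.
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, (64, 64)).astype(np.uint8)

    # 8 neighbours on a circle of radius 1; "uniform" maps patterns to 10
    # rotation-invariant codes, a common per-pixel texture descriptor.
    lbp = local_binary_pattern(img, P=8, R=1.0, method="uniform")

    # Texture description around a pixel: histogram of LBP codes in a window,
    # which could feed a probabilistic breast/background classifier.
    window = lbp[:16, :16]
    hist, _ = np.histogram(window, bins=np.arange(12), density=True)
    print(hist)
    ```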

  20. An automatic tuning method of a fuzzy logic controller for nuclear reactors

    International Nuclear Information System (INIS)

    Ramaswamy, P.; Lee, K.Y.; Edwards, R.M.

    1993-01-01

    The design and evaluation by simulation of an automatically tuned fuzzy logic controller is presented. Typically, fuzzy logic controllers are designed based on an expert's knowledge of the process. However, this approach is limited by the fact that such a controller is hard to optimize or tune to obtain the desired control action. A method to automate the tuning process using a simplified Kalman filter approach is presented, in which the fuzzy logic controller tracks a suitable reference trajectory. Here, for purposes of illustration, an optimal controller's response is used as the reference trajectory for automatically determining the rules of the fuzzy logic controller. To demonstrate the robustness of this design approach, a nonlinear six-delayed-neutron-group plant is controlled using a fuzzy logic controller that utilizes estimated reactor temperatures from a one-delayed-neutron-group observer. The fuzzy logic controller displayed good stability and performance robustness characteristics for a wide range of operating conditions.

  1. An integrated automatic system to evaluate U and Th dynamic lixiviation from solid matrices, and to extract/pre-concentrate leached analytes prior to ICP-MS detection.

    Science.gov (United States)

    Ceballos, Melisa Rodas; García-Tenorio, Rafael; Estela, José Manuel; Cerdà, Víctor; Ferrer, Laura

    2017-12-01

    Leached fractions of U and Th from different environmental solid matrices were evaluated by an automatic system enabling on-line lixiviation and extraction/pre-concentration of these two elements prior to ICP-MS detection. UTEVA resin was used as the selective extraction material. Ten leached fractions, using artificial rainwater (pH 5.4) as the leaching agent, and a residual fraction were analyzed for each sample, allowing the behavior of U and Th under dynamic lixiviation conditions to be studied. Multivariate techniques were employed for efficient optimization of the independent variables that affect the lixiviation process. The system reached LODs of 0.1 and 0.7 ng kg⁻¹ for U and Th, respectively. The method was satisfactorily validated for three solid matrices by the analysis of a soil reference material (IAEA-375), a certified sediment reference material (BCR-320R) and a phosphogypsum reference material (MatControl CSN-CIEMAT 2008). In addition, environmental samples were analyzed and showed similar behavior, i.e. the content of radionuclides decreases with successive extractions. In all cases, the cumulative leached fractions of U and Th for the different solid matrices studied (soil, sediment and phosphogypsum) were extremely low: at most 0.05% for U and 0.005% for Th. However, great variability was observed in terms of the mass concentration released, e.g. between 44 and 13,967 ng U kg⁻¹. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. DeepSleepNet: A Model for Automatic Sleep Stage Scoring Based on Raw Single-Channel EEG.

    Science.gov (United States)

    Supratak, Akara; Dong, Hao; Wu, Chao; Guo, Yike

    2017-11-01

    This paper proposes a deep learning model, named DeepSleepNet, for automatic sleep stage scoring based on raw single-channel EEG. Most existing methods rely on hand-engineered features, which require prior knowledge of sleep analysis. Only a few of them encode temporal information, such as transition rules, which is important for identifying the next sleep stage, into the extracted features. In the proposed model, we utilize convolutional neural networks to extract time-invariant features and a bidirectional long short-term memory network to learn transition rules among sleep stages automatically from EEG epochs. We implement a two-step training algorithm to train our model efficiently. We evaluated our model using different single-channel EEGs (F4-EOG (left), Fpz-Cz, and Pz-Oz) from two public sleep data sets that have different properties (e.g., sampling rate) and scoring standards (AASM and R&K). The results showed that our model achieved overall accuracy and macro F1-scores (MASS: 86.2% and 81.7; Sleep-EDF: 82.0% and 76.9) similar to those of the state-of-the-art methods (MASS: 85.9% and 80.5; Sleep-EDF: 78.9% and 73.7) on both data sets. This demonstrates that, without changing the model architecture or the training algorithm, our model can automatically learn features for sleep stage scoring from different raw single-channel EEGs from different data sets without utilizing any hand-engineered features.
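
    The CNN-plus-bidirectional-LSTM structure can be sketched compactly in PyTorch. The layer sizes below are illustrative and much smaller than DeepSleepNet's published architecture; only the overall shape (per-epoch convolutional features followed by a sequence model across epochs) matches the description.

    ```python
    import torch
    import torch.nn as nn

    class TinySleepSketch(nn.Module):
        """Toy CNN + BiLSTM sleep stager (illustrative, not DeepSleepNet)."""
        def __init__(self, n_classes=5):
            super().__init__()
            # 1-D convolutions over one 30-s raw single-channel EEG epoch
            self.features = nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=50, stride=6), nn.ReLU(),
                nn.MaxPool1d(8),
                nn.Conv1d(32, 64, kernel_size=8), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),           # -> (batch*seq, 64, 1)
            )
            # Bidirectional LSTM learns stage-transition structure across epochs
            self.rnn = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
            self.head = nn.Linear(128, n_classes)

        def forward(self, x):                      # x: (batch, seq, n_samples)
            b, s, n = x.shape
            f = self.features(x.reshape(b * s, 1, n)).squeeze(-1)
            out, _ = self.rnn(f.reshape(b, s, 64))
            return self.head(out)                  # per-epoch stage logits

    epochs = torch.randn(2, 10, 3000)              # 10 epochs of 100 Hz x 30 s
    print(TinySleepSketch()(epochs).shape)         # torch.Size([2, 10, 5])
    ```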

  3. A rapid and low-cost DNA extraction method for isolating ...

    African Journals Online (AJOL)

    The price of commercial DNA extraction methods makes the routine use of polymerase chain reaction (PCR) based methods rather costly for scientists in developing countries. A guanidinium thiocyanate-based DNA extraction method was investigated in this study for the isolation of Escherichia coli (E. coli) DNA ...

  4. A hybrid particle swarm optimization-SVM classification for automatic cardiac auscultation

    Directory of Open Access Journals (Sweden)

    Prasertsak Charoen

    2017-04-01

    Full Text Available Cardiac auscultation is a method by which a doctor listens to heart sounds, using a stethoscope, to examine the condition of the heart. Automatic cardiac auscultation with machine learning is a promising technique for classifying heart conditions without the need for doctors or expertise. In this paper, we develop a classification model based on a support vector machine (SVM) and particle swarm optimization (PSO) for an automatic cardiac auscultation system. The model consists of two parts: a heart sound signal processing part and a proposed PSO-based weighted SVM (WSVM) classifier part. In this method, the PSO takes into account the degree of importance of each feature extracted from the wavelet packet (WP) decomposition. Then, by using principal component analysis (PCA), the features can be selected. The PSO technique is used to assign diverse weights to different features for the WSVM classifier. Experimental results show that both continuous and binary PSO-WSVM models achieve better classification accuracy on the heart sound samples, by reducing system false negatives (FNs), compared to a traditional SVM and a genetic algorithm (GA) based SVM.
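
    A minimal sketch of the PSO-weighted-SVM idea follows, with synthetic stand-in features (the paper's wavelet-packet features, PCA step, and exact PSO variant are not reproduced): each particle encodes per-feature weights, and fitness is the cross-validated accuracy of an SVM trained on the weighted features.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Toy stand-in for wavelet-packet heart-sound features.
    X, y = make_classification(n_samples=200, n_features=12, random_state=0)

    def fitness(w):
        """CV accuracy of an SVM on features scaled by particle weights w."""
        return cross_val_score(SVC(kernel="rbf"), X * w, y, cv=3).mean()

    rng = np.random.default_rng(0)
    n_p, dim, iters = 15, X.shape[1], 20
    pos = rng.random((n_p, dim)); vel = np.zeros((n_p, dim))
    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmax()].copy()

    for _ in range(iters):                         # standard global-best PSO
        r1, r2 = rng.random((2, n_p, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        f = np.array([fitness(p) for p in pos])
        better = f > pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmax()].copy()

    print("best weighted-SVM CV accuracy:", pbest_f.max())
    ```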

  5. Automatic Human Movement Assessment With Switching Linear Dynamic System: Motion Segmentation and Motor Performance.

    Science.gov (United States)

    de Souza Baptista, Roberto; Bo, Antonio P L; Hayashibe, Mitsuhiro

    2017-06-01

    Performance assessment of human movement is critical in diagnosis and motor-control rehabilitation. Recent developments in portable sensor technology enable clinicians to measure spatiotemporal aspects to aid in neurological assessment. However, the extraction of quantitative information from such measurements is usually done manually through visual inspection. This paper presents a novel framework for automatic human movement assessment that executes segmentation and motor performance parameter extraction in time series of measurements from a sequence of human movements. We use the elements of a Switching Linear Dynamic System model as building blocks to translate formal definitions and procedures from human movement analysis. Our approach provides a method for users with no expertise in signal processing to create models for movements using a labeled dataset and later use them for automatic assessment. We validated our framework in preliminary tests involving six healthy adult subjects, who executed common movements from functional tests and rehabilitation exercise sessions, such as sit-to-stand and lateral elevation of the arms, and five elderly subjects, two of whom had limited mobility, who executed the sit-to-stand movement. The proposed method worked on random motion sequences for the dual purpose of movement segmentation (accuracy of 72%-100%) and motor performance assessment (mean error of 0%-12%).

  6. A Noise-Assisted Data Analysis Method for Automatic EOG-Based Sleep Stage Classification Using Ensemble Learning.

    Science.gov (United States)

    Olesen, Alexander Neergaard; Christensen, Julie A E; Sorensen, Helge B D; Jennum, Poul J

    2016-08-01

    Reducing the number of recording modalities for sleep staging research can benefit both researchers and patients, under the condition that they provide results as accurate as those of conventional systems. This paper investigates the possibility of exploiting the multisource nature of electrooculography (EOG) signals by presenting a method for automatic sleep staging using the complete ensemble empirical mode decomposition with adaptive noise algorithm and a random forest classifier. It achieves a high overall accuracy of 82% and a Cohen's kappa of 0.74, indicating substantial agreement between automatic and manual scoring.

  7. Automatic measurement of axial length of human eye using three-dimensional magnetic resonance imaging

    International Nuclear Information System (INIS)

    Watanabe, Masaki; Kiryu, Tohru

    2011-01-01

    The measurement of axial length and the evaluation of the three-dimensional (3D) form of an eye are essential for evaluating the mechanism of myopia progression. We propose a method for the automatic measurement of axial length, including adjustment of the pulse sequence for a short-duration scan that suppresses the influence of eye blinking, using magnetic resonance imaging (MRI), which acquires 3D images noninvasively. Acquiring T2-weighted images with a 3.0-tesla MRI device and an eight-channel phased-array head coil, we extracted left and right eyeball images and reconstructed a 3D volume. Surface coordinates were calculated from the 3D volume, an ellipsoid model was fitted to them, and the axial length was measured automatically. In twenty-one subjects, we compared the automatically measured axial lengths with manually measured ones and confirmed significant elongation of the axial length in myopia compared with emmetropia. Furthermore, there were no significant differences (P<0.05) between the means of the automatic measurements and the manual ones. Accordingly, the automatic measurement process for axial length could be a tool for elucidating the mechanism of myopia progression, suitable for evaluating the axial length easily and noninvasively. (author)
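
    The ellipsoid-fitting step can be posed as a linear least-squares problem. The axis-aligned form below is a simplification (the authors' model may allow rotation), with synthetic surface points standing in for the segmented eyeball and the z axis assumed to be the anterior-posterior eye axis.

    ```python
    import numpy as np

    def fit_axis_aligned_ellipsoid(pts):
        """Fit A x^2 + B y^2 + C z^2 + D x + E y + F z = 1 by least squares;
        return the centre and semi-axes."""
        x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
        M = np.column_stack([x*x, y*y, z*z, x, y, z])
        A, B, C, D, E, F = np.linalg.lstsq(M, np.ones(len(pts)), rcond=None)[0]
        cx, cy, cz = -D / (2*A), -E / (2*B), -F / (2*C)
        k = 1 + A*cx**2 + B*cy**2 + C*cz**2          # completed-square constant
        return (cx, cy, cz), np.sqrt([k/A, k/B, k/C])

    # Synthetic "eye surface": ellipsoid with a 24 mm axial (z) diameter.
    rng = np.random.default_rng(0)
    u, v = rng.random(500) * 2*np.pi, rng.random(500) * np.pi
    pts = np.column_stack([11.0*np.cos(u)*np.sin(v),
                           11.5*np.sin(u)*np.sin(v),
                           12.0*np.cos(v)]) + np.array([1.0, -2.0, 0.5])
    centre, axes = fit_axis_aligned_ellipsoid(pts)
    print("axial length ~", 2 * axes[2], "mm")       # ~24 mm
    ```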

  8. Accelerated H-LBP-based edge extraction method for digital radiography

    Energy Technology Data Exchange (ETDEWEB)

    Qiao, Shuang; Zhao, Chen-yi; Huang, Ji-peng [School of Physics, Northeast Normal University, Changchun 130024 (China); Sun, Jia-ning, E-mail: sunjn118@nenu.edu.cn [School of Mathematics and Statistics, Northeast Normal University, Changchun 130024 (China)

    2015-01-11

    With the goal of achieving real-time and efficient edge extraction for digital radiography, an accelerated H-LBP-based edge extraction method (AH-LBP) is presented in this paper by improving the existing framework of the local binary pattern with the H function (H-LBP). Since the proposed method avoids computationally expensive operations with no loss of quality, it possesses much lower computational complexity than H-LBP. Experimental results on real radiographs show the desirable performance of our method. - Highlights: • An accelerated H-LBP method for edge extraction on digital radiography is proposed. • The novel AH-LBP relies on numerical analysis of the existing H-LBP method. • To achieve acceleration, H-LBP is reformulated as direct binary processing. • AH-LBP provides the same edge extraction result as H-LBP does. • AH-LBP has low computational complexity, satisfying real-time requirements.

  9. Semi-automatic building extraction in informal settlements from high-resolution satellite imagery

    Science.gov (United States)

    Mayunga, Selassie David

    The extraction of man-made features from digital remotely sensed images is considered an important step underpinning management of human settlements in any country. Man-made features, and buildings in particular, are required for a variety of applications such as urban planning, creation of geographical information system (GIS) databases, and urban city models. Traditional man-made feature extraction methods are very expensive in terms of equipment, labour intensive, require well-trained personnel, and cannot cope with changing environments, particularly in dense urban settlement areas. This research presents an approach for extracting buildings in dense informal settlement areas using high-resolution satellite imagery. The proposed system uses a novel strategy of extracting a building by measuring a single point at its approximate centre. The fine measurement of the building outline is then effected using a modified snake model. The original snake model on which this framework is based incorporates an external constraint energy term tailored to preserving the convergence properties of the snake model; applying it to unstructured objects would negatively affect their actual shapes. The external constraint energy term was therefore removed from the original snake model formulation, giving the model the ability to cope with the high variability of building shapes in informal settlement areas. The proposed building extraction system was tested on two areas with different situations. The first area was Tungi in Dar es Salaam, Tanzania, where three sites were tested. This area is characterized by informal settlements, which are formed illegally within the city boundaries. The second area was Oromocto in New Brunswick, Canada, where two sites were tested. The Oromocto area is mostly flat and the buildings are constructed using similar materials. Qualitative and quantitative measures were employed to evaluate the accuracy of the results as well as the performance of the proposed system.
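
    The outline-refinement step can be illustrated with the standard snake implementation in scikit-image; note that this is the classic model, not the authors' modified formulation with the external constraint energy term removed.

    ```python
    import numpy as np
    from skimage import draw
    from skimage.filters import gaussian
    from skimage.segmentation import active_contour

    # Synthetic stand-in for a building roof in a satellite image patch.
    img = np.zeros((100, 100))
    rr, cc = draw.rectangle(start=(30, 30), end=(70, 70))
    img[rr, cc] = 1.0
    img = gaussian(img, sigma=2, preserve_range=True)

    # Seed snake: a circle around the single measured point at the centre.
    s = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([50 + 35*np.sin(s), 50 + 35*np.cos(s)])  # (row, col)

    snake = active_contour(img, init, alpha=0.015, beta=10, gamma=0.001)
    print(snake.shape)          # (200, 2): refined outline coordinates
    ```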

  10. Semi-supervised learning based probabilistic latent semantic analysis for automatic image annotation

    Institute of Scientific and Technical Information of China (English)

    Tian Dongping

    2017-01-01

    In recent years, the multimedia annotation problem has attracted significant research attention in the multimedia and computer vision areas, especially automatic image annotation, whose purpose is to provide an efficient and effective searching environment for users to query their images more easily. In this paper, a semi-supervised learning based probabilistic latent semantic analysis (PLSA) model for automatic image annotation is presented. Since it is often hard to obtain or create labeled images in large quantities while unlabeled ones are easier to collect, a transductive support vector machine (TSVM) is exploited to enhance the quality of the training image data. Then, different image features with different magnitudes will result in different performance for automatic image annotation. To this end, a Gaussian normalization method is utilized to normalize the different features extracted from effective image regions segmented by the normalized cuts algorithm, so as to preserve the intrinsic content of the images as completely as possible. Finally, a PLSA model with asymmetric modalities is constructed based on the expectation maximization (EM) algorithm to predict a candidate set of annotations with confidence scores. Extensive experiments on the general-purpose Corel5k dataset demonstrate that the proposed model can significantly improve the performance of traditional PLSA for the task of automatic image annotation.

  11. Automatic methods for long-term tracking and the detection and decoding of communication dances in honeybees

    Directory of Open Access Journals (Sweden)

    Fernando Wario

    2015-09-01

    Full Text Available The honeybee waggle dance communication system is an intriguing example of abstract animal communication and has been investigated thoroughly throughout the last seven decades. Typically, observables such as durations or angles are extracted manually, directly from the observation hive or from video recordings, to quantify dance properties, particularly to determine where bees have foraged. In recent years, biology has profited from automation, improving measurement precision, removing human bias, and accelerating data collection. As a further step, we have developed technologies to track all individuals of a honeybee colony and to detect and decode communication dances automatically. In strong contrast to conventional approaches that focus on a small subset of hive life, whether in terms of time, space, or animal identity, our more inclusive system will help in understanding the dance comprehensively in its spatial, temporal, and social context. In this contribution, we present full specifications of the recording setup and of the software for automatic recognition and decoding of tags and dances, and we discuss potential research directions that may benefit from automation. Lastly, to exemplify the power of the methodology, we show experimental data and respective analyses for a continuous experimental recording nine weeks in duration.

  12. Comparison of the methods for tissue triiodothyronine T(3) extraction and subsequent radioimmunoassay

    International Nuclear Information System (INIS)

    Takaishi, M.; Miyachi, Y.; Aoki, M.; Shishiba, Y.; Asahi Life Foundation, Tokyo

    1978-01-01

    Although there have been various reports on tissue T3 concentration, an examination of the quality of the radioimmunoassays used has not been available. In the present study, we tried to determine whether the available methods for T3 extraction are adequate for the various methods of T3 radioimmunoassay used. T3 was extracted from liver by ethanol extraction or by acid butanol extraction (Flock's method) and the extract was applied to radioimmunoassay by the Seralute T3 column, ANS-double antibody, or ANS-charcoal method. The T3 values were compared with those obtained by the isotope-equilibration method. The dilution curve of the ethanol extract was not parallel with that of the standard in the ANS-charcoal or ANS-double antibody techniques. When the extract was tested by the Seralute method, the dilution curve was parallel to the standard, whereas the T3 value obtained with this method was two-fold higher than that obtained with the isotope-equilibration technique. Analysis of the ethanol extract suggested that the lipid extracted by ethanol interfered with the assay. The acid butanol extract, when tested by either the ANS-double antibody or Seralute method, showed parallelism to the standard curve and gave T3 values almost identical to those of the isotope-equilibration method. When tested by the ANS-charcoal method, the dilution curve of the acid butanol extract was not parallel to the standard. Thus, to obtain reliable results, tissue extraction by Flock's method and subsequent T3 radioimmunoassay by either the ANS-double antibody or the Seralute T3 method are recommended. (author)

  13. Bengali text summarization by sentence extraction

    OpenAIRE

    Sarkar, Kamal

    2012-01-01

    Text summarization is a process that produces an abstract or a summary by selecting the significant portions of the information in one or more texts. In an automatic text summarization process, a text is given to the computer and the computer returns a shorter, less redundant extract or abstract of the original text(s). Many techniques have been developed for summarizing English texts, but very few attempts have been made at Bengali text summarization. This paper presents a method for Bengali ...
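
    The core idea of extractive summarization can be sketched in a few lines. The frequency-based scorer below is language-agnostic and deliberately generic; a real Bengali system would add its own tokenizer, stemmer, and stop-word list.

    ```python
    import re
    from collections import Counter

    def extractive_summary(text, n_sentences=1):
        """Score sentences by average word frequency; keep the top ones
        in their original order (a minimal extractive baseline)."""
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        freq = Counter(re.findall(r"\w+", text.lower()))
        def score(s):
            toks = re.findall(r"\w+", s.lower())
            return sum(freq[t] for t in toks) / max(len(toks), 1)
        top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
        return " ".join(s for s in sentences if s in top)

    doc = ("Text summarization selects the key content of a document. "
           "Extractive methods pick the most significant sentences. "
           "The weather was pleasant that day.")
    print(extractive_summary(doc))
    ```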

  14. Assessment of proposed electromagnetic quantum vacuum energy extraction methods

    OpenAIRE

    Moddel, Garret

    2009-01-01

    In research articles and patents several methods have been proposed for the extraction of zero-point energy from the vacuum. None has been reliably demonstrated, but the proposals remain largely unchallenged. In this paper the feasibility of these methods is assessed in terms of the underlying thermodynamic principles of equilibrium, detailed balance, and conservation laws. The methods are separated into three classes: nonlinear processing of the zero-point field, mechanical extraction using Casimir cavities, ...

  15. A Novel Approach to Extract Water Body from ASAR Dual-Polarized Data

    International Nuclear Information System (INIS)

    Ma, Jianwei; Song, Xiaoning; Leng, Pei; Zhou, Fangcheng; Li, Shuang; Li, Xiaotao

    2014-01-01

    SAR (Synthetic Aperture Radar) has become a useful and efficient method for monitoring flood extent due to its capability for 24-hour, all-weather observation. In this paper, a novel approach is proposed to extract water bodies from ASAR dual-polarized images. First, a new SAR image was created from the ASAR dual-polarized data using a discrete wavelet transform (DWT) fusion method. Then, a modified Otsu threshold method was used to extract the water bodies of Poyang Lake from the fused image. Next, the result was compared with the water body extracted from ETM+ data. The comparison showed that using the fused image is feasible and more accurate, and that it reduces the influence of shadow and noise. Moreover, the approach can be conducted automatically, which is very important under the urgent conditions of flood monitoring.
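
    The fusion-plus-threshold pipeline can be sketched with PyWavelets and scikit-image. The averaging/max-detail fusion rule and the plain Otsu threshold below are simplifications of the paper's DWT fusion and modified Otsu method, and the arrays are synthetic stand-ins for co-registered ASAR channels.

    ```python
    import numpy as np
    import pywt
    from skimage.filters import threshold_otsu

    # Synthetic dual-polarization intensities with a dark "water" patch.
    rng = np.random.default_rng(0)
    hh = rng.gamma(2.0, 1.0, (128, 128))
    hv = rng.gamma(2.0, 1.0, (128, 128))
    hh[40:90, 40:90] *= 0.1
    hv[40:90, 40:90] *= 0.1

    # Simple DWT fusion: average approximations, keep max-magnitude details.
    cA1, d1 = pywt.dwt2(hh, "db2")
    cA2, d2 = pywt.dwt2(hv, "db2")
    fusedA = (cA1 + cA2) / 2
    fusedD = tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                   for a, b in zip(d1, d2))
    fused = pywt.idwt2((fusedA, fusedD), "db2")

    # Otsu threshold; water is the low-backscatter class.
    water = fused < threshold_otsu(fused)
    print("water fraction:", water.mean())
    ```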

  16. Automatic extraction and processing of small RNAs on a multi-well/multi-channel (M&M) chip.

    Science.gov (United States)

    Zhong, Runtao; Flack, Kenneth; Zhong, Wenwan

    2012-12-07

    The study of the regulatory roles of small RNAs can be accelerated by techniques that permit simple, low-cost, and rapid extraction of small RNAs from a small number of cells. In order to ensure highly specific and sensitive detection, the extracted RNAs should be free of background nucleic acids and present stably in a small volume. To meet these criteria, we designed a multi-well/multi-channel (M&M) chip to carry out automatic and selective isolation of small RNAs via solid-phase extraction (SPE), followed by reverse transcription (RT) to convert them to the more stable cDNAs in a final volume of 2 μL. Droplets containing buffers for RNA binding, washing, and elution were trapped in microwells, which were connected by one channel, and suspended in mineral oil. The silica magnetic particles (SMPs) for SPE were moved along the channel from well to well, i.e. in between droplets, by a fixed magnet and a translation stage, allowing the nucleic acid fragments to bind to the SMPs, be washed, and then be eluted for the RT reaction within 15 minutes. RNAs shorter than 63 nt were selectively enriched from cell lysates, with recovery comparable to that of a commercial kit. Physical separation of the droplets on our M&M chip allowed the use of multiple channels for parallel processing of multiple samples. It also permitted smooth integration with on-chip RT-PCR, which detected the target microRNA, mir-191, expressed in fewer than 10 cancer cells. Our results demonstrate that the M&M chip is a valuable and cost-saving platform for studying small RNA expression patterns in a limited number of cells with reasonable sample throughput.

  17. Automatic sleep stage classification of single-channel EEG by using complex-valued convolutional neural network.

    Science.gov (United States)

    Zhang, Junming; Wu, Yan

    2018-03-28

    Many systems have been developed for automatic sleep stage classification. However, nearly all models are based on handcrafted features. Because the feature space is large, feature selection must be used. Meanwhile, designing handcrafted features is a difficult and time-consuming task, because it requires the domain knowledge of experienced experts. Results vary when different sets of features are chosen to identify sleep stages. Additionally, many features that we may be unaware of exist, and these features may be important for sleep stage classification. Therefore, a new sleep stage classification system, based on a complex-valued convolutional neural network (CCNN), is proposed in this study. Unlike existing sleep stage methods, our method can automatically extract features from raw electroencephalography data and then classify the sleep stage based on the learned features. Additionally, we prove that the decision boundaries for the real and imaginary parts of a complex-valued convolutional neuron intersect orthogonally. The classification performance of handcrafted features is compared with that of the features learned via the CCNN. Experimental results show that the proposed method is comparable to existing methods, while the CCNN obtains better classification performance and a considerably faster convergence speed than a real-valued convolutional neural network. The experimental results also show that the proposed method is a useful decision-support tool for automatic sleep stage classification.

  18. Deep learning for automatic localization, identification, and segmentation of vertebral bodies in volumetric MR images

    Science.gov (United States)

    Suzani, Amin; Rasoulian, Abtin; Seitel, Alexander; Fels, Sidney; Rohling, Robert N.; Abolmaesumi, Purang

    2015-03-01

    This paper proposes an automatic method for vertebra localization, labeling, and segmentation in multi-slice magnetic resonance (MR) images. Prior work in this area on MR images mostly requires user interaction, while our method is fully automatic. Cubic intensity-based features are extracted from image voxels. A deep learning approach is used for simultaneous localization and identification of vertebrae. The localized points are refined by local thresholding in the region of the detected vertebral column. Thereafter, a statistical multi-vertebrae model is initialized on the localized vertebrae. An iterative expectation maximization technique is used to register the vertebral bodies of the model to the image edges and obtain a segmentation of the lumbar vertebral bodies. The method is evaluated by applying it to nine volumetric MR images of the spine. The results demonstrate 100% vertebra identification and a mean surface error below 2.8 mm for 3D segmentation. Computation time is less than three minutes per high-resolution volumetric image.

  19. Influence of Extraction Methods on the Yield of Steviol Glycosides and Antioxidants in Stevia rebaudiana Extracts.

    Science.gov (United States)

    Periche, Angela; Castelló, Maria Luisa; Heredia, Ana; Escriche, Isabel

    2015-06-01

    This study evaluated the application of ultrasound techniques and microwave energy, compared to conventional extraction methods (high temperatures at atmospheric pressure), for the solid-liquid extraction of steviol glycosides (sweeteners) and antioxidants (total phenols, flavonoids and antioxidant capacity) from dehydrated Stevia leaves. Different temperatures (from 50 to 100 °C), times (from 1 to 40 min) and microwave powers (1.98 and 3.30 W/g extract) were used. There was a great difference in the resulting yields according to the treatments applied. Steviol glycosides and antioxidants were negatively correlated; therefore, there is no single treatment suitable for obtaining the highest yield of both groups of compounds simultaneously. The greatest yield of steviol glycosides was obtained with microwave energy (3.30 W/g extract, 2 min), whereas the conventional method (90 °C, 1 min) was the most suitable for antioxidant extraction. Consequently, the best process depends on the subsequent use (sweetener or antioxidant) of the aqueous extract of Stevia leaves.

  20. Using Probe Vehicle Data for Automatic Extraction of Road Traffic Parameters

    Directory of Open Access Journals (Sweden)

    Roman Popescu Maria Alexandra

    2016-12-01

    Full Text Available Through this paper the author aims to study and find solutions for the automatic detection of traffic light positions and the automatic calculation of waiting time at traffic lights. The first objective mainly serves the road transportation field, because it removes the need for collaboration with local authorities to establish a national network of traffic lights. The second objective is important not only for companies providing navigation solutions, but especially for authorities, institutions, and companies operating road traffic management systems. Real-time, dynamic determination of traffic queue length and waiting time at traffic lights allows the creation of dynamic, intelligent, and flexible systems, adapted to actual traffic conditions rather than to generic theoretical models. Thus, cities can approach the Smart City concept by making road transport more efficient and environmentally friendly, as promoted in Europe through the Horizon 2020 Smart Cities and Urban Mobility initiative.

  1. Automatic detection of optic disc based on PCA and mathematical morphology.

    Science.gov (United States)

    Morales, Sandra; Naranjo, Valery; Angulo, Jesús; Alcaniz, Mariano

    2013-04-01

    The algorithm proposed in this paper automatically segments the optic disc from a fundus image. The goal is to facilitate the early detection of certain pathologies and to fully automate the process so as to avoid specialist intervention. The method proposed for the extraction of the optic disc contour is based mainly on mathematical morphology along with principal component analysis (PCA). It makes use of different operations such as the generalized distance function (GDF), a variant of the watershed transformation (the stochastic watershed), and geodesic transformations. The input of the segmentation method is obtained through PCA. The purpose of using PCA is to obtain the grey-scale image that best represents the original RGB image. The implemented algorithm has been validated on five public databases, obtaining promising results. The average values obtained (Jaccard and Dice coefficients of 0.8200 and 0.8932, respectively; an accuracy of 0.9947; and true positive and false positive fractions of 0.9275 and 0.0036) demonstrate that this method is a robust tool for the automatic segmentation of the optic disc. Moreover, it is fairly reliable, since it works properly on databases with a large degree of variability and improves on the results of other state-of-the-art methods.
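
    The PCA-based grey-scale conversion can be written directly; the morphological pipeline below is reduced to a single opening, whereas the paper also employs the GDF, the stochastic watershed, and geodesic transformations.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from skimage.morphology import opening, disk

    # Random patch as a stand-in for an RGB fundus image.
    rng = np.random.default_rng(0)
    rgb = rng.random((64, 64, 3))

    # First principal component of the RGB values as the grey-scale input.
    gray = PCA(n_components=1).fit_transform(rgb.reshape(-1, 3)).reshape(64, 64)

    # Grey-scale opening suppresses thin bright structures (e.g. vessels)
    # before any watershed / geodesic step.
    smoothed = opening(gray, disk(3))
    print(smoothed.shape)
    ```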

  2. How far away is far enough for extracting numerical waveforms, and how much do they depend on the extraction method?

    International Nuclear Information System (INIS)

    Pazos, Enrique; Dorband, Ernst Nils; Nagar, Alessandro; Palenzuela, Carlos; Schnetter, Erik; Tiglio, Manuel

    2007-01-01

    We present a method for extracting gravitational waves from numerical spacetimes which generalizes and refines one of the standard methods based on the Regge-Wheeler-Zerilli perturbation formalism. At the analytical level, this generalization allows a much more general class of slicing conditions for the background geometry, and is thus not restricted to Schwarzschild-like coordinates. At the numerical level, our approach uses high-order multi-block methods, which improve both the accuracy of our simulations and of our extraction procedure. In particular, the latter is simplified since there is no need for interpolation, and we can afford to extract accurate waves at large radii with only little additional computational effort. We then present fully nonlinear three-dimensional numerical evolutions of a distorted Schwarzschild black hole in Kerr-Schild coordinates with an odd parity perturbation and analyse the improvement that we gain from our generalized wave extraction, comparing our new method to the standard one. In particular, we analyse in detail the quasinormal frequencies of the extracted waves, using both methods. We do so by comparing the extracted waves with one-dimensional high-resolution solutions of the corresponding generalized Regge-Wheeler equation. We explicitly see that the errors in the waveforms extracted with the standard method at fixed, finite extraction radii do not converge to zero with increasing resolution. We find that even with observers as far out as R = 80M, which is larger than what is commonly used in state-of-the-art simulations, the assumption in the standard method that the background is close to having Schwarzschild-like coordinates increases the error in the extracted waves considerably. Furthermore, those errors are dominated by the extraction method itself and not by the accuracy of our simulations. For extraction radii between 20M and 80M, and for the resolutions that we use in this paper, our new method decreases the errors.

  3. How far away is far enough for extracting numerical waveforms, and how much do they depend on the extraction method?

    Energy Technology Data Exchange (ETDEWEB)

    Pazos, Enrique [Department of Physics and Astronomy, 202 Nicholson Hall, Louisiana State University, Baton Rouge, LA 70803 (United States); Dorband, Ernst Nils [Department of Physics and Astronomy, 202 Nicholson Hall, Louisiana State University, Baton Rouge, LA 70803 (United States); Nagar, Alessandro [Dipartimento di Fisica, Politecnico di Torino, Corso Duca Degli Abruzzi 24, 10129 Torino (Italy); Palenzuela, Carlos [Department of Physics and Astronomy, 202 Nicholson Hall, Louisiana State University, Baton Rouge, LA 70803 (United States); Schnetter, Erik [Center for Computation and Technology, 216 Johnston Hall, Louisiana State University, Baton Rouge, LA 70803 (United States); Tiglio, Manuel [Department of Physics and Astronomy, 202 Nicholson Hall, Louisiana State University, Baton Rouge, LA 70803 (United States)

    2007-06-21

    We present a method for extracting gravitational waves from numerical spacetimes which generalizes and refines one of the standard methods based on the Regge-Wheeler-Zerilli perturbation formalism. At the analytical level, this generalization allows a much more general class of slicing conditions for the background geometry, and is thus not restricted to Schwarzschild-like coordinates. At the numerical level, our approach uses high-order multi-block methods, which improve both the accuracy of our simulations and of our extraction procedure. In particular, the latter is simplified since there is no need for interpolation, and we can afford to extract accurate waves at large radii with only little additional computational effort. We then present fully nonlinear three-dimensional numerical evolutions of a distorted Schwarzschild black hole in Kerr-Schild coordinates with an odd parity perturbation and analyse the improvement that we gain from our generalized wave extraction, comparing our new method to the standard one. In particular, we analyse in detail the quasinormal frequencies of the extracted waves, using both methods. We do so by comparing the extracted waves with one-dimensional high-resolution solutions of the corresponding generalized Regge-Wheeler equation. We explicitly see that the errors in the waveforms extracted with the standard method at fixed, finite extraction radii do not converge to zero with increasing resolution. We find that even with observers as far out as R = 80M, which is larger than what is commonly used in state-of-the-art simulations, the assumption in the standard method that the background is close to having Schwarzschild-like coordinates increases the error in the extracted waves considerably. Furthermore, those errors are dominated by the extraction method itself and not by the accuracy of our simulations. For extraction radii between 20M and 80M, and for the resolutions that we use in this paper, our new method decreases the errors.

  4. Critical assessment of extracellular polymeric substances extraction methods from mixed culture biomass

    DEFF Research Database (Denmark)

    Pellicer i Nàcher, Carles; Domingo Felez, Carlos; Mutlu, Ayten Gizem

    2013-01-01

    This study presents a rigorous and critical assessment of existing physical and chemical EPS extraction methods applied to mixed-culture biomass samples (nitrifying, nitritation-anammox, and activated sludge biomass). A novel fluorescence-based method was developed and calibrated to quantify the lysis potential of different EPS extraction protocols. We concluded that commonly used methods to assess cell lysis (DNA concentrations or G6PDH activities in EPS extracts) do not correlate with cell viability. Furthermore, we discovered that the presence of certain chemicals in EPS extracts results in severe underestimation of protein and carbohydrate concentrations when using standard analytical methods. Keeping both maximum EPS extraction yield and minimal biomass lysis as criteria, a sonication-based extraction method was identified as the best for determining and comparing tightly-bound EPS fractions in different mixed-culture biomasses.

  5. Improved Cole parameter extraction based on the least absolute deviation method

    International Nuclear Information System (INIS)

    Yang, Yuxiang; Ni, Wenwen; Sun, Qiang; Wen, He; Teng, Zhaosheng

    2013-01-01

    The Cole function is widely used in bioimpedance spectroscopy (BIS) applications. Fitting the measured BIS data to the model and then extracting the Cole parameters (R0, R∞, α and τ) is common practice. Accurate extraction of the Cole parameters from measured BIS data has great significance for evaluating the physiological or pathological status of biological tissue. The traditional least-squares (LS) based curve fitting method for Cole parameter extraction is often sensitive to noise or outliers and becomes non-robust. This paper proposes an improved Cole parameter extraction based on the least absolute deviation (LAD) method. Comprehensive simulation experiments are carried out and the performance of the LAD method is compared with that of the LS method under conditions of outliers, random noise, and both disturbances combined. The proposed LAD method exhibits much better robustness under all circumstances, demonstrating that it deserves consideration as an improved alternative to the LS method for Cole parameter extraction owing to its robustness to outliers and noise. (paper)
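
    The Cole model and the LAD fit can be sketched with SciPy: the sum of absolute residuals replaces the sum of squares, which is what confers robustness to the injected outlier. All numerical values below are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def cole(f, R0, Rinf, tau, alpha):
        """Cole impedance: Z(f) = Rinf + (R0 - Rinf) / (1 + (j 2 pi f tau)^alpha)."""
        return Rinf + (R0 - Rinf) / (1 + (1j * 2 * np.pi * f * tau) ** alpha)

    f = np.logspace(2, 6, 40)                       # 100 Hz .. 1 MHz
    true = (500.0, 100.0, 1e-5, 0.8)
    rng = np.random.default_rng(0)
    z = cole(f, *true) + rng.normal(0, 2, f.size)   # noise on the real part
    z.real[5] += 200                                # one gross outlier

    def lad_cost(p):
        return np.sum(np.abs(z - cole(f, *p)))      # least absolute deviations

    res = minimize(lad_cost, x0=(400.0, 80.0, 1e-5, 0.7), method="Nelder-Mead",
                   options={"maxiter": 5000})
    print("R0, Rinf, tau, alpha =", res.x)          # near `true` despite outlier
    ```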

  6. MIAQuant, a novel system for automatic segmentation, measurement, and localization comparison of different biomarkers from serialized histological slices

    Directory of Open Access Journals (Sweden)

    Elena Casiraghi

    2017-11-01

    Full Text Available In clinical practice, automatic image analysis methods that rapidly quantify histological results in an objective and replicable way are becoming increasingly necessary and widespread. Although several commercial software products are available for this task, they offer little flexibility and are provided as black boxes without modifiable source code. To overcome these problems, we employed the commonly used MATLAB platform to develop an automatic method, MIAQuant, for the analysis of histochemical and immunohistochemical images stained with various methods and acquired by different tools. It automatically extracts and quantifies markers characterized by various colors and shapes; furthermore, it aligns contiguous tissue slices stained by different markers and overlaps them with differing colors for visual comparison of their localization. Application of MIAQuant in clinical research fields, such as oncology and cardiovascular disease studies, has proven its efficacy, robustness, and flexibility with respect to various problems; we highlight that its flexibility makes MIAQuant an important tool for basic research, where needs are constantly changing. The MIAQuant software and its user manual are freely available for clinical studies, pathological research, and diagnosis.

  7. The method for simultaneous extraction and back extraction in liquid three-phase system and equipment for simultaneous extraction and back extraction in liquid three-phase system

    International Nuclear Information System (INIS)

    Palyska, W.; Chmielewski, A.G.

    1992-01-01

    A method for simultaneous extraction and back extraction in a liquid three-phase system has been worked out; the equipment designed for that process is also the subject of the patent. The component of interest is first extracted into an intermediate phase consisting of a magnetic solvent that keeps the two extracting phases separate. The intermediate magnetic liquid is held in position by a permanent magnet mounted on the surface of the extraction vessel. The component then passes from the intermediate phase into the third phase as a result of back extraction. Mixing in the extraction and back extraction zones is provided by a rotating shaft running along the whole apparatus. The extraction and back extraction processes occur simultaneously as a result of the continuous flow of solvent in their respective zones. Single extraction/back extraction units can be joined into larger batteries. 3 figs

  8. AUTOMATIC SEGMENTATION OF BROADCAST AUDIO SIGNALS USING AUTO ASSOCIATIVE NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    P. Dhanalakshmi

    2010-12-01

    Full Text Available In this paper, we describe automatic segmentation methods for broadcast audio data. Today, digital audio applications are part of our everyday lives, and since there are more and more digital audio databases in place these days, the importance of effective management of audio databases has become prominent. Broadcast audio data are recorded from television and comprise various categories of audio signals. Efficient algorithms for segmenting the audio broadcast data into predefined categories are proposed. Audio features, namely linear prediction coefficients (LPC), linear prediction cepstral coefficients (LPCC), and Mel frequency cepstral coefficients (MFCC), are extracted to characterize the audio data. Auto-associative neural networks are used to segment the audio data into predefined categories using the extracted features. Experimental results indicate that the proposed algorithms can produce satisfactory results.
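
    The feature-extraction front end can be sketched with librosa; a synthetic tone stands in for a broadcast segment, the LPCC derivation is omitted, and the AANN classifier itself is not reproduced.

    ```python
    import numpy as np
    import librosa

    # One second of synthetic audio as a stand-in for a broadcast segment.
    sr = 22050
    y = 0.1 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr).astype(np.float32)

    # MFCCs: 13 coefficients per frame.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    # LPC: order-12 linear prediction coefficients over the segment
    # (cepstral LPCC features would be derived from these).
    lpc = librosa.lpc(y, order=12)

    print(mfcc.shape, lpc.shape)   # (13, n_frames) (13,)
    ```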

  9. ISS Contingency Attitude Control Recovery Method for Loss of Automatic Thruster Control

    Science.gov (United States)

    Bedrossian, Nazareth; Bhatt, Sagar; Alaniz, Abran; McCants, Edward; Nguyen, Louis; Chamitoff, Greg

    2008-01-01

    In this paper, the attitude control issues associated with the International Space Station (ISS) loss of automatic thruster control capability are discussed and methods for attitude control recovery are presented. This scenario was experienced recently during Shuttle mission STS-117 and ISS Stage 13A in June 2007, when the Russian GN&C computers, which command the ISS thrusters, failed. Without automatic propulsive attitude control, the ISS would not be able to regain attitude control after the Orbiter undocked. The core issues associated with recovering long-term attitude control using CMGs are described, as well as the systems engineering analysis used to identify recovery options. It is shown that the recovery method can be separated into a procedure for rate damping to a safe-harbor, gravity-gradient-stable orientation and a capability to maneuver the vehicle to the necessary initial conditions for long-term attitude hold. A manual control option using Soyuz and Progress vehicle thrusters is investigated for rate damping and maneuvers. The issues with implementing such an option are presented, and the key issue of closed-loop stability is addressed. A new non-propulsive alternative to thruster control, the Zero Propellant Maneuver (ZPM) attitude control method, is introduced and its rate damping and maneuver performance evaluated. It is shown that ZPM can meet the tight attitude and rate error tolerances needed for long-term attitude control. A combination of manual thruster rate damping to a safe-harbor attitude followed by a ZPM to the Stage long-term attitude control orientation was selected by the Anomaly Resolution Team as the alternate attitude control method for such a contingency.

  10. Comparison of protein extraction methods suitable for proteomics ...

    African Journals Online (AJOL)

    An efficient protein extraction method is a prerequisite for successful implementation of proteomics. In this study, seedling roots of Jerusalem artichoke were treated with 250 mM NaCl for 36 h. Subsequently, six different protocols of protein extraction were applied to seedling roots of Jerusalem artichoke ...

  11. Most probable dimension value and most flat interval methods for automatic estimation of dimension from time series

    International Nuclear Information System (INIS)

    Corana, A.; Bortolan, G.; Casaleggio, A.

    2004-01-01

    We present and compare two automatic methods for dimension estimation from time series. Both methods, based on conceptually different approaches, work on the derivative of the bi-logarithmic plot of the correlation integral versus the correlation length (log-log plot). The first method searches for the most probable dimension values (MPDV) and associates with each of them a possible scaling region. The second one searches for the most flat intervals (MFI) in the derivative of the log-log plot. The automatic procedures include the evaluation of the candidate scaling regions using two reliability indices. The data set used to test the methods consists of time series from known model attractors with and without the addition of noise, structured time series, and electrocardiographic signals from the MIT-BIH ECG database. Statistical analysis of results was carried out by means of a paired t-test, and no statistically significant differences were found in the large majority of the trials. Consistent results are also obtained dealing with 'difficult' time series. In general, for a more robust and reliable estimate, the use of both methods may represent a good solution when time series from complex systems are analyzed. Although we present results for the correlation dimension only, the procedures can also be used for the automatic estimation of generalized q-order dimensions and the pointwise dimension. We think that the proposed methods, eliminating the need for operator intervention, allow a faster and more objective analysis, thus improving the usefulness of dimension analysis for the characterization of time series obtained from complex dynamical systems.
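
    A minimal sketch of the "most flat interval" idea, assuming a point-cloud input and illustrative radius counts and window width; the paper's reliability indices are not reproduced:

    ```python
    # Sketch: estimate dimension from the flattest interval of the
    # derivative of the log-log correlation-integral plot.
    import numpy as np
    from scipy.spatial.distance import pdist

    def flattest_interval_dimension(points, n_radii=50, window=10):
        d = pdist(points)                        # all pairwise distances
        radii = np.logspace(np.log10(d.min() + 1e-12), np.log10(d.max()), n_radii)
        log_r = np.log(radii)
        log_c = np.log(np.array([np.mean(d < r) for r in radii]) + 1e-12)
        slope = np.gradient(log_c, log_r)        # derivative of the log-log plot
        # slide a window over the slope curve; the flattest window (least
        # variance) is taken as the scaling region
        best = min(range(len(slope) - window),
                   key=lambda i: np.var(slope[i:i + window]))
        return float(np.mean(slope[best:best + window]))

    # toy check: points on a noisy circle should give a dimension near 1
    t = np.random.uniform(0, 2 * np.pi, 2000)
    pts = np.c_[np.cos(t), np.sin(t)] + 0.001 * np.random.randn(2000, 2)
    print(flattest_interval_dimension(pts))
    ```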

  12. Automatic Cobb Angle Determination From Radiographic Images

    NARCIS (Netherlands)

    Sardjono, Tri Arief; Wilkinson, Michael H. F.; Veldhuizen, Albert G.; van Ooijen, Peter M. A.; Purnama, Ketut E.; Verkerke, Gijsbertus J.

    2013-01-01

    Study Design. Automatic measurement of Cobb angle in patients with scoliosis. Objective. To test the accuracy of an automatic Cobb angle determination method from frontal radiographical images. Summary of Background Data. Thirty-six frontal radiographical images of patients with scoliosis. Methods.

  13. Enhancing biomedical text summarization using semantic relation extraction.

    Directory of Open Access Journals (Sweden)

    Yue Shang

    Full Text Available Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) We extract semantic relations in each sentence using the semantic knowledge representation tool SemRep. 2) We develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation. 3) For relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate a text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance and our results are better than the MEAD system, a well-known tool for text summarization.

  14. Enhancing biomedical text summarization using semantic relation extraction.

    Science.gov (United States)

    Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao

    2011-01-01

    Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) We extract semantic relations in each sentence using the semantic knowledge representation tool SemRep. 2) We develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation. 3) For relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate a text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance and our results are better than the MEAD system, a well-known tool for text summarization.
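
    A minimal sketch of the relation-level retrieval step, with scikit-learn's vector-space scoring standing in for the paper's retrieval method; the relation strings and query are illustrative, and real input would come from SemRep:

    ```python
    # Sketch: rank extracted subject-predicate-object relations by their
    # similarity to a query concept (here via TF-IDF cosine similarity).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    relations = [
        "H1N1 virus CAUSES influenza",
        "oseltamivir TREATS H1N1 infection",
        "aspirin TREATS headache",
    ]
    query = "H1N1 disease"

    vec = TfidfVectorizer().fit(relations + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(relations))[0]
    for score, rel in sorted(zip(scores, relations), reverse=True):
        print(f"{score:.2f}  {rel}")  # most relevant relations first
    ```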

  15. Method of automatic image registration of three-dimensional range of archaeological restoration

    International Nuclear Information System (INIS)

    Garcia, O.; Perez, M.; Morales, N.

    2012-01-01

    We propose an automatic registration system for the reconstruction of a large object from scans taken at various positions, based on a static structured light pattern. The system combines stereo vision, a structured light pattern, the positioning system of the vision sensor, and an algorithm that simplifies the search for correspondences in the modeling of large objects. A new structured light pattern based on the Kautz sequence is proposed; using this static pattern, we implement the proposed new registration method. (Author)

  16. Optimizing pressurized liquid extraction of microbial lipids using the response surface method.

    Science.gov (United States)

    Cescut, J; Severac, E; Molina-Jouve, C; Uribelarrea, J-L

    2011-01-21

    Response surface methodology (RSM) was used to determine the optimum extraction parameters for maximum lipid extraction yield from yeast. Total lipids were extracted from oleaginous yeast (Rhodotorula glutinis) using pressurized liquid extraction (PLE). The effects of the extraction parameters on lipid extraction yield were studied by employing a second-order central composite design. The optimal condition was obtained as three cycles of 15 min at 100°C with a ratio of 144 g of hydromatrix per 100 g of dry cell weight. Different analysis methods were used to compare the optimized PLE method with two conventional methods (the Soxhlet method and a modification of the Bligh and Dyer method) under efficiency, selectivity and reproducibility criteria, using gravimetric analysis, GC with flame ionization detection, high performance liquid chromatography with evaporative light scattering detection (HPLC-ELSD) and thin-layer chromatographic analysis. For each sample, the lipid extraction yield with optimized PLE was higher than those obtained with the reference methods (recoveries of 78% and 85% for the Soxhlet and the Bligh and Dyer methods, respectively, compared to the PLE method). Moreover, the use of PLE led to major advantages such as a reduction of the analysis time by a factor of 10 and of the solvent quantity by 70%, compared with traditional extraction methods. Copyright © 2010 Elsevier B.V. All rights reserved.
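
    A minimal sketch of the response-surface step: fit a second-order model to central-composite-design runs and locate the predicted optimum. The factor levels and yields below are illustrative, not the paper's measurements:

    ```python
    # Sketch: second-order response-surface fit over two coded factors.
    import numpy as np

    def quad_design(x1, x2):
        # columns: 1, x1, x2, x1^2, x2^2, x1*x2
        return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])

    # illustrative coded settings (e.g., temperature, cycles) and yields
    x1 = np.array([-1., -1, 1, 1, 0, 0, 0, -1.41, 1.41, 0])
    x2 = np.array([-1., 1, -1, 1, 0, 0, 0, 0, 0, 1.41])
    y = np.array([61., 64, 70, 74, 78, 77, 79, 60, 72, 69])

    beta, *_ = np.linalg.lstsq(quad_design(x1, x2), y, rcond=None)

    # evaluate the fitted surface on a grid to find the predicted optimum
    g1, g2 = np.meshgrid(np.linspace(-1.5, 1.5, 301), np.linspace(-1.5, 1.5, 301))
    pred = quad_design(g1.ravel(), g2.ravel()) @ beta
    i = pred.argmax()
    print("optimum (coded levels):", g1.ravel()[i], g2.ravel()[i], pred[i])
    ```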

  17. Automatic building extraction from LiDAR data fusion of point and grid-based features

    Science.gov (United States)

    Du, Shouji; Zhang, Yunsheng; Zou, Zhengrong; Xu, Shenghua; He, Xue; Chen, Siyang

    2017-08-01

    This paper proposes a method for extracting buildings from LiDAR point cloud data by combining point-based and grid-based features. To accurately discriminate buildings from vegetation, a point feature based on the variance of normal vectors is proposed. For robust building extraction, a graph cuts algorithm is employed to combine the used features and consider the neighborhood context information. As grid-feature computation and the graph cuts algorithm are performed on a grid structure, a feature-retained DSM interpolation method is also proposed in this paper. The proposed method is validated on the benchmark ISPRS Test Project on Urban Classification and 3D Building Reconstruction and compared to state-of-the-art methods. The evaluation shows that the proposed method can obtain a promising result both at area level and at object level. The method is further applied to the entire ISPRS dataset and to a real dataset of Wuhan City. The results show a completeness of 94.9% and a correctness of 92.2% at the per-area level for the former dataset and a completeness of 94.4% and a correctness of 95.8% for the latter one. The proposed method has good potential for large-size LiDAR data.
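
    A minimal sketch of the point feature described above, the variance of locally estimated normal vectors, which tends to be low on planar roof patches and high in vegetation; the neighborhood size is an illustrative assumption:

    ```python
    # Sketch: per-point variance of PCA-estimated normals in a LiDAR cloud.
    import numpy as np
    from scipy.spatial import cKDTree

    def normal_variance_feature(points, k=15):
        """points: (N, 3) float array of LiDAR coordinates; one value per point."""
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)            # k nearest neighbors
        normals = np.empty_like(points)
        for i, nb in enumerate(idx):
            nbrs = points[nb] - points[nb].mean(axis=0)
            # normal = eigenvector of the smallest eigenvalue of the covariance
            _, vecs = np.linalg.eigh(nbrs.T @ nbrs)
            n = vecs[:, 0]
            normals[i] = n if n[2] >= 0 else -n     # orient consistently upward
        # variance of the neighbors' normals around each point
        return np.array([normals[nb].var(axis=0).sum() for nb in idx])
    ```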

  18. A simple and efficient total genomic DNA extraction method for individual zooplankton.

    Science.gov (United States)

    Fazhan, Hanafiah; Waiho, Khor; Shahreza, Md Sheriff

    2016-01-01

    Molecular approaches are widely applied in species identification and taxonomic studies of minute zooplankton. One of the most studied zooplankton groups nowadays is the subclass Copepoda. Accurate species identification of all life stages of the generally small-sized copepods through molecular analysis is important, especially in taxonomic and systematic assessment of harpacticoid copepod populations and to understand their dynamics within the marine community. However, total genomic DNA (TGDNA) extraction from individual harpacticoid copepods can be problematic due to their small size and epibenthic behavior. In this research, six TGDNA extraction methods applied to individual harpacticoid copepods were compared. A new simple, feasible, efficient and consistent TGDNA extraction method was designed and compared with a commercial kit and modified available TGDNA extraction methods. The newly described TGDNA extraction method, the "Incubation in PCR buffer" method, yielded good and consistent results based on its high PCR amplification success rate (82%) compared to the other methods. Given its consistency and low cost, the "Incubation in PCR buffer" method is highly recommended for the TGDNA extraction of other minute zooplankton species.

  19. Compact device for the extraction of sup(99m)TcO4Na

    International Nuclear Information System (INIS)

    Pliego, O.H.; Mitta, A.E.A.

    1982-01-01

    A non-automatic device for the extraction of sup(99m)TcO4Na by the liquid-liquid extraction method is described. It has been developed at the Laboratory of Labelled Molecules of the CNEA and was installed at the Nuclear Medicine Centre of the Hospital de Clinicas Jose de San Martin. The solutions of sup(99m)TcO4Na are used for the labelling of radiopharmaceuticals and for radiodiagnosis. (author) [es

  20. Comparison of extraction methods for quantifying vitamin E from animal tissues.

    Science.gov (United States)

    Xu, Zhimin

    2008-12-01

    Four extraction methods: (1) solvent (SOL), (2) ultrasound assisted solvent (UA), (3) saponification and solvent (SP), and (4) saponification and ultrasound assisted solvent (SP-UA), were used in sample preparation for quantifying vitamin E (tocopherols) in chicken liver and plasma samples. The extraction yields of the SOL, UA, SP, and SP-UA methods, obtained by adding delta-tocopherol as an internal reference, were 95%, 104%, 65%, and 62% for liver and 98%, 103%, 97%, and 94% for plasma, respectively. The methods with saponification significantly affected the stability of tocopherols in liver samples. The measured values of alpha- and gamma-tocopherols using the solvent only extraction (SOL) method were much lower than those obtained using any of the other extraction methods. This indicated that less of the tocopherols in those samples were in a form that could be extracted directly by solvent. The measured value of alpha-tocopherol in the liver sample using the ultrasound assisted solvent (UA) method was 1.5-2.5 times that obtained from the saponification and solvent (SP) method. The differences in measured values of tocopherols in the plasma samples between the two methods were not significant. However, the measured value of the saponification and ultrasound assisted solvent (SP-UA) method was lower than either the saponification and solvent (SP) or the ultrasound assisted solvent (UA) method. Also, the reproducibility of the ultrasound assisted solvent (UA) method was greater than that of any of the saponification methods. Compared with the traditional saponification method, the ultrasound assisted solvent method could effectively extract tocopherols from the sample matrix without any chemical degradation reactions, especially for complex animal tissues such as liver.

  1. Automatic Enhancement of the Reference Set for Multi-Criteria Sorting in The Frame of Theseus Method

    Directory of Open Access Journals (Sweden)

    Fernandez Eduardo

    2014-05-01

    Full Text Available Some recent works have established the importance of handling abundant reference information in multi-criteria sorting problems. More valid information allows a better characterization of the agent's assignment policy, which can lead to improved decision support. However, sometimes information for enhancing the reference set may not be available, or may be too expensive. This paper explores an automatic mode of enhancing the reference set in the framework of the THESEUS multi-criteria sorting method. Some performance measures are defined in order to test the results of the enhancement. Several theoretical arguments and practical experiments are provided here, supporting a basic advantage of the automatic enhancement: a reduction of the vagueness measure that improves the THESEUS accuracy, without additional effort from the decision agent. The experiments suggest that the errors coming from inadequate automatic assignments can be kept at a manageable level.

  2. Automatic identification of temporal sequences in chewing sounds

    NARCIS (Netherlands)

    Amft, O.D.; Kusserow, M.; Tröster, G.

    2007-01-01

    Chewing is an essential part of food intake. The analysis and detection of food patterns is an important component of an automatic dietary monitoring system. However, chewing is a time-variable process depending on food properties. We present an automated methodology to extract sub-sequences of

  3. Clinical study of Atopic Dermatitis patient treated with Poison Extraction Method

    Directory of Open Access Journals (Sweden)

    Park Chi-young

    2007-06-01

    Full Text Available Objectives: This study is designed to evaluate the poison extraction method for atopic dermatitis. Methods: The authors observed two cases of atopic dermatitis patients who had previously used steroid-based ointment and treated them with the poison extraction method. Conclusions: 1. In case 1, a patient with mild atopic dermatitis of the face was treated with the poison extraction method. Rash symptoms intensified for the first few days. As sweating appeared in the local area from the seventh day, all the symptoms disappeared rapidly. No recurrence was found. 2. In case 2, a patient with severe atopic dermatitis over the whole body was treated with the poison extraction method. The symptoms intensified for two months, and then all the symptoms of itchiness, rash and scaly skin disappeared in the third and fourth months. No recurrence was found. 3. In both the mild and severe cases of atopic dermatitis, all the symptoms disappeared and no recurrence was found with the poison extraction method.

  4. Automatic sleep staging using empirical mode decomposition, discrete wavelet transform, time-domain, and nonlinear dynamics features of heart rate variability signals.

    Science.gov (United States)

    Ebrahimi, Farideh; Setarehdan, Seyed-Kamaledin; Ayala-Moyeda, Jose; Nazeran, Homer

    2013-10-01

    The conventional method for sleep staging is to analyze polysomnograms (PSGs) recorded in a sleep lab. The electroencephalogram (EEG) is one of the most important signals in PSGs, but recording and analysis of this signal presents a number of technical challenges, especially at home. Instead, electrocardiograms (ECGs) are much easier to record and may offer an attractive alternative for home sleep monitoring. The heart rate variability (HRV) signal proves suitable for automatic sleep staging. Thirty PSGs from the Sleep Heart Health Study (SHHS) database were used. Three feature sets were extracted from 5- and 0.5-min HRV segments: time-domain features, nonlinear-dynamics features and time-frequency features. The latter were obtained by using empirical mode decomposition (EMD) and discrete wavelet transform (DWT) methods. Normalized energies in important frequency bands of HRV signals were computed using time-frequency methods. ANOVA and t-tests were used for statistical evaluations. Automatic sleep staging was based on HRV signal features. ANOVA followed by a post hoc Bonferroni test was used for individual feature assessment. Most features were beneficial for sleep staging. A t-test was used to compare the means of extracted features in 5- and 0.5-min HRV segments. The results showed that the extracted feature means were statistically similar for a small number of features. A separability measure showed that time-frequency features, especially EMD features, had larger separation than others. There was not a sizable difference in the separability of linear features between 5- and 0.5-min HRV segments, but the separability of nonlinear features, especially EMD features, decreased in 0.5-min HRV segments. HRV signal features were classified by linear discriminant (LD) and quadratic discriminant (QD) methods. Classification results based on features from 5-min segments surpassed those obtained from 0.5-min segments. The best result was obtained from features using 5-min HRV segments.
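
    A minimal sketch of the time-domain branch of the feature set; SDNN, RMSSD and pNN50 are standard time-domain HRV features and are assumed here, since the abstract does not list the exact ones:

    ```python
    # Sketch: time-domain HRV features for one 5-min (or 0.5-min) segment.
    import numpy as np

    def time_domain_hrv(rr_ms):
        """rr_ms: 1-D array of RR intervals in milliseconds."""
        rr = np.asarray(rr_ms, dtype=float)
        diff = np.diff(rr)
        return {
            "mean_rr": rr.mean(),                   # mean RR interval
            "sdnn": rr.std(ddof=1),                 # overall variability
            "rmssd": np.sqrt(np.mean(diff ** 2)),   # short-term variability
            "pnn50": np.mean(np.abs(diff) > 50.0),  # fraction of diffs > 50 ms
        }

    segment = 1000 + 50 * np.random.randn(300)      # ~5 min of synthetic beats
    print(time_domain_hrv(segment))
    ```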

  5. Joint Extraction of Entities and Relations Using Reinforcement Learning and Deep Learning

    Directory of Open Access Journals (Sweden)

    Yuntian Feng

    2017-01-01

    Full Text Available We use both reinforcement learning and deep learning to simultaneously extract entities and relations from unstructured texts. For reinforcement learning, we model the task as a two-step decision process. Deep learning is used to automatically capture the most important information from unstructured texts, which represents the state in the decision process. By designing the reward function per step, our proposed method can pass the information of entity extraction to relation extraction and obtain feedback in order to extract entities and relations simultaneously. Firstly, we use a bidirectional LSTM to model the context information, which realizes preliminary entity extraction. On the basis of the extraction results, an attention-based method can represent the sentences that include the target entity pair to generate the initial state in the decision process. Then we use a Tree-LSTM to represent relation mentions to generate the transition state in the decision process. Finally, we employ the Q-learning algorithm to get the control policy π in the two-step decision process. Experiments on ACE2005 demonstrate that our method attains better performance than the state-of-the-art method and achieves a 2.4% increase in recall score.

  6. Joint Extraction of Entities and Relations Using Reinforcement Learning and Deep Learning.

    Science.gov (United States)

    Feng, Yuntian; Zhang, Hongjun; Hao, Wenning; Chen, Gang

    2017-01-01

    We use both reinforcement learning and deep learning to simultaneously extract entities and relations from unstructured texts. For reinforcement learning, we model the task as a two-step decision process. Deep learning is used to automatically capture the most important information from unstructured texts, which represents the state in the decision process. By designing the reward function per step, our proposed method can pass the information of entity extraction to relation extraction and obtain feedback in order to extract entities and relations simultaneously. Firstly, we use a bidirectional LSTM to model the context information, which realizes preliminary entity extraction. On the basis of the extraction results, an attention-based method can represent the sentences that include the target entity pair to generate the initial state in the decision process. Then we use a Tree-LSTM to represent relation mentions to generate the transition state in the decision process. Finally, we employ the Q-learning algorithm to get the control policy π in the two-step decision process. Experiments on ACE2005 demonstrate that our method attains better performance than the state-of-the-art method and achieves a 2.4% increase in recall score.
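
    A minimal tabular Q-learning skeleton for a two-step decision process, in the spirit of the method above; the states, actions, and rewards here are placeholders, whereas the paper derives them from the entity- and relation-extraction networks:

    ```python
    # Sketch: epsilon-greedy tabular Q-learning over a two-step episode.
    import random
    from collections import defaultdict

    Q = defaultdict(float)                 # Q[(state, action)] -> value
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

    def choose(state, actions):
        if random.random() < EPS:          # explore occasionally
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state, next_actions):
        best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - Q[(state, action)])

    # one episode: step 1 (entity decision) then step 2 (relation decision)
    s1, acts1 = "initial", ["keep_entity", "drop_entity"]
    a1 = choose(s1, acts1)
    s2, acts2 = "transition", ["relation_A", "relation_B", "no_relation"]
    update(s1, a1, 0.0, s2, acts2)         # per-step reward from the design
    a2 = choose(s2, acts2)
    update(s2, a2, 1.0, "terminal", [])    # terminal reward ends the episode
    ```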

  7. Excavation-drier method of energy-peat extraction reduces long-term climatic impact

    Energy Technology Data Exchange (ETDEWEB)

    Silvan, N.; Silvan, K.; Laine, J. [Finnish Forest Research Inst., Parkano (Finland)], e-mail: niko.silvan@metla.fi; Vaisanen, S.; Soukka, R. [Lappeenranta Univ. of Technology (Finland)

    2012-11-01

    Climatic impacts of energy-peat extraction are of increasing concern due to EU emissions trading requirements. A new excavation-drier peat extraction method has been developed to reduce the climatic impact and increase the efficiency of peat extraction. To quantify and compare the soil GHG fluxes of the excavation-drier and the traditional milling methods, as well as of the areas from which the energy peat is planned to be extracted in the future (extraction reserve area types), soil CO2, CH4 and N2O fluxes were measured during 2006-2007 at three sites in Finland. Within each site, fluxes were measured from drained extraction reserve areas, extraction fields and stockpiles of both methods, and additionally from the biomass driers of the excavation-drier method. Life Cycle Assessment (LCA), described at a principal level in ISO Standards 14040:2006 and 14044:2006, was used to assess the long-term (100 years) climatic impact of peatland utilisation with respect to land use and energy production chains in which the utilisation of coal was replaced with peat. Coal was used as a reference since in many cases peat and coal can replace each other in the same power plants. According to this study, the peat extraction method used was of lesser significance than the extraction reserve area type with regard to the climatic impact. However, the excavation-drier method seems to cause a slightly reduced climatic impact compared with the prevailing milling method. (orig.)

  8. Automatic numerical integration methods for Feynman integrals through 3-loop

    International Nuclear Information System (INIS)

    De Doncker, E; Olagbemi, O; Yuasa, F; Ishikawa, T; Kato, K

    2015-01-01

    We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The DQAGS algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infra-red) or UV (ultra-violet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities. (paper)
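
    A minimal sketch of iterated adaptive integration with QUADPACK, which SciPy's quad routine wraps; the integrand is an illustrative Feynman-parameter-style integral, not one of the paper's diagrams:

    ```python
    # Sketch: 2-D iterated integration, the inner integral evaluated
    # adaptively for each abscissa of the outer one.
    from scipy.integrate import quad

    def inner(x, eps=1e-3):
        # the denominator mimics a regulated propagator factor
        f = lambda y: 1.0 / (x * y + (1 - x) * (1 - y) + eps)
        val, _ = quad(f, 0.0, 1.0)
        return val

    value, err = quad(inner, 0.0, 1.0)
    print(value, err)
    ```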

  9. Automatic differentiation algorithms in model analysis

    NARCIS (Netherlands)

    Huiskes, M.J.

    2002-01-01

    In this thesis automatic differentiation algorithms and derivative-based methods

  10. ExcelAutomat: a tool for systematic processing of files as applied to quantum chemical calculations

    Science.gov (United States)

    Laloo, Jalal Z. A.; Laloo, Nassirah; Rhyman, Lydia; Ramasami, Ponnadurai

    2017-07-01

    The processing of the input and output files of quantum chemical calculations often necessitates a spreadsheet as a key component of the workflow. Spreadsheet packages with a built-in programming language editor can automate the steps involved and thus provide a direct link between processing files and the spreadsheet. This helps to reduce user interventions as well as the need to switch between different programs to carry out each step. The ExcelAutomat tool is the implementation of this method in Microsoft Excel (MS Excel) using the default Visual Basic for Applications (VBA) programming language. The code in ExcelAutomat was adapted to work with the platform-independent open-source LibreOffice Calc, which also supports VBA. ExcelAutomat provides an interface through the spreadsheet to automate repetitive tasks such as merging input files, splitting, parsing and compiling data from output files, and generating unique filenames. Selected extracted parameters can be retrieved as variables which can be included in custom code for a tailored approach. ExcelAutomat works with Gaussian files and is adapted for use with other computational packages including the non-commercial GAMESS. ExcelAutomat is available as a downloadable MS Excel workbook or as a LibreOffice workbook.

  11. Effects of Extraction Method on the Physicochemical and ...

    African Journals Online (AJOL)

    The effects of an improved method of extraction on the physicochemical, mycological and stability properties of crude Canarium schweinfurthii fruit oil were studied. The extracted oils were then stored at 25±5°C for 24 months with samples analyzed at 6-month intervals for pH, saponification value, acid value, peroxide value and iodine ...

  12. TagDust2: a generic method to extract reads from sequencing data.

    Science.gov (United States)

    Lassmann, Timo

    2015-01-28

    Arguably the most basic step in the analysis of next generation sequencing (NGS) data involves the extraction of mappable reads from the raw reads produced by sequencing instruments. The presence of barcodes, adaptors and artifacts subject to sequencing errors makes this step non-trivial. Here I present TagDust2, a generic approach utilizing a library of hidden Markov models (HMM) to accurately extract reads from a wide array of possible read architectures. TagDust2 extracts more reads of higher quality compared to other approaches. Processing of multiplexed single-end and paired-end libraries, as well as libraries containing unique molecular identifiers, is fully supported. Two additional post-processing steps are included to exclude known contaminants and filter out low-complexity sequences. Finally, TagDust2 can automatically detect the library type of sequenced data from a predefined selection. Taken together, TagDust2 is a feature-rich, flexible and adaptive solution to go from raw to mappable NGS reads in a single step. The ability to recognize and record the contents of raw reads will help to automate and demystify the initial, and often poorly documented, steps in NGS data analysis pipelines. TagDust2 is freely available at http://tagdust.sourceforge.net.

  13. ARCOCT: Automatic detection of lumen border in intravascular OCT images.

    Science.gov (United States)

    Cheimariotis, Grigorios-Aris; Chatzizisis, Yiannis S; Koutkias, Vassilis G; Toutouzas, Konstantinos; Giannopoulos, Andreas; Riga, Maria; Chouvarda, Ioanna; Antoniadis, Antonios P; Doulaverakis, Charalambos; Tsamboulatidis, Ioannis; Kompatsiaris, Ioannis; Giannoglou, George D; Maglaveras, Nicos

    2017-11-01

    Intravascular optical coherence tomography (OCT) is an invaluable tool for the detection of pathological features on the arterial wall and the investigation of post-stenting complications. Computational lumen border detection in OCT images is highly advantageous, since it may support rapid morphometric analysis. However, automatic detection is very challenging, since OCT images typically include various artifacts that impact image clarity, including features such as side branches and intraluminal blood presence. This paper presents ARCOCT, a segmentation method for fully-automatic detection of the lumen border in OCT images. ARCOCT relies on multiple, consecutive processing steps, accounting for image preparation, contour extraction and refinement. In particular, for contour extraction ARCOCT employs a transformation of OCT images based on physical characteristics such as reflectivity and absorption of the tissue and, for contour refinement, local regression using weighted linear least squares and a 2nd-degree polynomial model is employed to achieve artifact and small-branch correction as well as smoothness of the artery mesh. Our major focus was to achieve accurate contour delineation in the various types of OCT images, i.e., even in challenging cases with branches and artifacts. ARCOCT has been assessed on a dataset of 1812 images (308 from stented and 1504 from native segments) obtained from 20 patients. ARCOCT was compared against ground-truth manual segmentation performed by experts on the basis of various geometric features (e.g., area, perimeter, radius, diameter, centroid) and closed-contour matching indicators (the Dice index, the Hausdorff distance and the undirected average distance), using standard statistical analysis methods. The proposed method was proven very efficient and close to the ground truth, exhibiting no statistically significant differences for most of the examined metrics. ARCOCT allows accurate and fully-automated lumen border detection.

  14. Sieve-based relation extraction of gene regulatory networks from biological literature.

    Science.gov (United States)

    Žitnik, Slavko; Žitnik, Marinka; Zupan, Blaž; Bajec, Marko

    2015-01-01

    Relation extraction is an essential procedure in literature mining. It focuses on extracting semantic relations between parts of text, called mentions. Biomedical literature includes an enormous amount of textual descriptions of biological entities, their interactions and results of related experiments. To make them available in an explicit, computer-readable format, these relations were at first manually curated into databases. Manual curation was later replaced with automatic or semi-automatic tools with natural language processing capabilities. The current challenge is the development of information extraction procedures that can directly infer more complex relational structures, such as gene regulatory networks. We develop a computational approach for the extraction of gene regulatory networks from textual data. Our method is designed as a sieve-based system and uses linear-chain conditional random fields and rules for relation extraction. With this method we successfully extracted the sporulation gene regulation network of the bacterium Bacillus subtilis for the information extraction challenge at the BioNLP 2013 conference. To enable extraction of distant relations using first-order models, we transform the data into skip-mention sequences. We infer multiple models, each of which is able to extract a different relationship type. Following the shared task, we conducted additional analysis using different system settings, which resulted in reducing the reconstruction error of the bacterial sporulation network from 0.73 to 0.68, measured as the slot error rate between the predicted and the reference network. We observe that all relation extraction sieves contribute to the predictive performance of the proposed approach. Also, features constructed by considering mention words and their prefixes and suffixes are the most important features for higher extraction accuracy. Analysis of distances between different mention types in the text shows that our choice of transforming the data into skip-mention sequences is appropriate.

  15. TIE POINTS EXTRACTION FOR SAR IMAGES BASED ON DIFFERENTIAL CONSTRAINTS

    Directory of Open Access Journals (Sweden)

    X. Xiong

    2018-04-01

    Full Text Available Automatically extracting tie points (TPs) on large-size synthetic aperture radar (SAR) images is still challenging because the efficiency and correct ratio of the image matching need to be improved. This paper proposes an automatic TP extraction method based on differential constraints for large-size SAR images obtained from approximately parallel tracks, between which the relative geometric distortions are small in the azimuth direction and large in the range direction. Image pyramids are built first, and then the corresponding layers of the pyramids are matched from top to bottom. In the process, similarity is measured by the normalized cross correlation (NCC) algorithm, computed over a rectangular window with the long side parallel to the azimuth direction. False matches are removed by the differential-constrained random sample consensus (DC-RANSAC) algorithm, which applies strong constraints in the azimuth direction and weak constraints in the range direction. Matching points in the lower pyramid images are predicted with a local bilinear transformation model in the range direction. Experiments performed on ENVISAT ASAR and Chinese airborne SAR images validated the efficiency, correct ratio and accuracy of the proposed method.
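
    A minimal sketch of the matching step, assuming image rows run along azimuth so that the NCC window's long side is azimuth-parallel; window and search sizes are illustrative, and border handling is omitted:

    ```python
    # Sketch: NCC matching with a rectangular window elongated in azimuth.
    import numpy as np

    def ncc(a, b):
        """Normalized cross correlation of two equally-sized patches."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def match_point(master, slave, row, col, win=(61, 21), search=15):
        """Best match in `slave` for (row, col) in `master`.
        win = (azimuth, range); assumes the point is far from image borders."""
        hr, hc = win[0] // 2, win[1] // 2
        tmpl = master[row - hr:row + hr + 1, col - hc:col + hc + 1]
        best, best_rc = -1.0, (row, col)
        for dr in range(-search, search + 1):
            for dc in range(-search, search + 1):
                r, c = row + dr, col + dc
                cand = slave[r - hr:r + hr + 1, c - hc:c + hc + 1]
                score = ncc(tmpl, cand)
                if score > best:
                    best, best_rc = score, (r, c)
        return best_rc, best
    ```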

  16. Automatic temperature control method of shipping can

    International Nuclear Information System (INIS)

    Nishikawa, Kaoru.

    1992-01-01

    The present invention provides a method of rapidly and accurately controlling the temperature of a shipping can, which is used during shipping inspection of a nuclear fuel assembly. That is, a measured temperature value of the shipping can is converted to a gas pressure setting value for the jacket of the shipping can by conducting a predetermined logic calculation using fuzzy logic. A gas pressure control section compares the pressure setting value from the fuzzy estimation section with the measured value of the gas pressure in the jacket of the shipping can, and supplies or exhausts the jacket gas so as to bring the measured value to the setting value. The fuzzy estimation section and the gas pressure control section thus control the gas pressure in the jacket of the shipping can, which in turn controls the water level in the jacket. As a result, the temperature of the shipping can is controlled. With this procedure, since the water level in the jacket can be controlled directly and finely, the temperature of the shipping can is automatically controlled more rapidly and accurately than in the conventional case. (I.S.)
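
    A minimal Mamdani-style sketch of the fuzzy step described above, mapping a temperature error to a jacket gas-pressure setpoint; all membership functions, rule outputs, and numeric ranges are illustrative assumptions:

    ```python
    # Sketch: fuzzy mapping from temperature error to pressure setpoint.
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def pressure_setpoint(temp_error_k):
        # rule firing strengths for "too cold", "on target", "too hot"
        cold = tri(temp_error_k, -10, -5, 0)
        ok = tri(temp_error_k, -2, 0, 2)
        hot = tri(temp_error_k, 0, 5, 10)
        # each rule proposes a setpoint (kPa); defuzzify by weighted mean
        proposals = np.array([120.0, 100.0, 80.0])
        weights = np.array([cold, ok, hot])
        if weights.sum() == 0.0:
            return 100.0                        # nominal fallback
        return float(weights @ proposals / weights.sum())

    print(pressure_setpoint(3.0))  # slightly hot -> setpoint below nominal
    ```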

  17. A method for the automatic separation of the images of galaxies and stars from measurements made with the COSMOS machine

    International Nuclear Information System (INIS)

    MacGillivray, H.T.; Martin, R.; Pratt, N.M.; Reddish, V.C.; Seddon, H.; Alexander, L.W.G.; Walker, G.S.; Williams, P.R.

    1976-01-01

    A method has been developed which allows the computer to distinguish automatically between the images of galaxies and those of stars from measurements made with the COSMOS automatic plate-measuring machine at the Royal Observatory, Edinburgh. Results have indicated that a 90 to 95 per cent separation between galaxies and stars is possible. (author)

  18. Comparative study of methods for extraction and purification of ...

    African Journals Online (AJOL)

    2010-08-02

    Aug 2, 2010 ... and/or enzymatic lysis for direct or indirect extraction of DNA followed by ... strength wastewater sludge in order to determine the best DNA extraction protocol ... Ammonium acetate purification method was used to remove the ...

  19. A method of applying two-pump system in automatic transmissions for energy conservation

    Directory of Open Access Journals (Sweden)

    Peng Dong

    2015-06-01

    Full Text Available In order to improve the hydraulic efficiency, modern automatic transmissions tend to apply an electric oil pump in their hydraulic system. The electric oil pump can support the mechanical oil pump for cooling, lubrication, and maintaining the line pressure at low engine speeds. In addition, the start-stop function can be realized by means of the electric oil pump; thus, fuel consumption can be further reduced. This article proposes a method of applying a two-pump system (one electric oil pump and one mechanical oil pump) in automatic transmissions based on forward driving simulation. A mathematical model for calculating the transmission power loss is developed. The power loss is converted to heat, which requires oil flow for cooling and lubrication. A leakage model is developed to calculate the leakage of the hydraulic system. In order to satisfy the flow requirement, a flow-based control strategy for the electric oil pump is developed. Simulation results for different driving cycles show that there is a best combination of the sizes of the electric oil pump and the mechanical oil pump with respect to optimal energy conservation. Besides, the two-pump system can also satisfy the requirement of the start-stop function. This research is extremely valuable for the forward design of a two-pump system in automatic transmissions with respect to energy conservation and the start-stop function.

  20. Systems and methods for automatically identifying and linking names in digital resources

    Science.gov (United States)

    Parker, Charles T.; Lyons, Catherine M.; Roston, Gerald P.; Garrity, George M.

    2017-06-06

    The present invention provides systems and methods for automatically identifying name-like strings in digital resources, matching these name-like strings against a set of names held in an expertly curated database, and, for those name-like strings found in said database, enhancing the content by associating additional matter with the name, wherein said matter includes information about the names that is held within said database and pointers to other digital resources which include the same name and its synonyms.

  1. Method for quantifying the uncertainty with the extraction of the raw data of a gamma ray spectrum by deconvolution software

    International Nuclear Information System (INIS)

    Vigineix, Thomas; Guillot, Nicolas; Saurel, Nicolas

    2013-06-01

    Gamma ray spectrometry is a passive non-destructive assay most commonly used to identify and quantify the radionuclides present in complex huge objects such as nuclear waste packages. The treatment of spectra from the measurement of nuclear waste is done in two steps: the first step is to extract the raw data from the spectra (the energies and net areas of the photoelectric absorption peaks) and the second step is to determine the detection efficiency of the measuring scene. Commercial software packages use different methods to extract the raw data from a spectrum, but none are optimal in the treatment of spectra containing actinides. Spectra have to be handled individually, which requires settings and substantial feedback from the operator; this prevents automatic processing of the spectra and increases the risk of human error. In this context the Nuclear Measurement and Valuation Laboratory (LMNE) at the Atomic Energy Commission Valduc (CEA Valduc) has developed a new methodology for quantifying the uncertainty associated with the extraction of the raw data from a spectrum. This methodology was applied with raw data and commercial software that needs configuration by the operator (GENIE2000, Interwinner...). This robust and fully automated methodology for calculating uncertainties is applied over the entire processing chain of the software. The methodology ensures, for all peaks processed by the deconvolution software, an extraction of peak energies to within 2 channels and an extraction of net areas with an uncertainty of less than 5 percent. The methodology was tested experimentally with actinide spectra. (authors)

  2. RESEARCH ON REMOTE SENSING GEOLOGICAL INFORMATION EXTRACTION BASED ON OBJECT ORIENTED CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    H. Gao

    2018-04-01

    Full Text Available Northern Tibet belongs to the sub-cold arid climate zone of the plateau. It is rarely visited by people, and the geological working conditions are very poor. However, the stratum exposures are good and human interference is very small. Therefore, research on the automatic classification and extraction of remote sensing geological information there has representative significance and good application prospects. Based on object-oriented classification in northern Tibet, using Worldview2 high-resolution remote sensing data combined with tectonic information and image enhancement, the lithological spectral features, shape features, spatial locations and topological relations of various kinds of geological information are mined. By setting thresholds within a hierarchical classification, eight kinds of geological information were classified and extracted. Compared with existing geological maps, the accuracy analysis shows that the overall accuracy reached 87.8561%, indicating that the object-oriented classification method is effective and feasible for this study area and provides a new idea for the automatic extraction of remote sensing geological information.
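
    A minimal sketch of one threshold rule in the hierarchical scheme above: an NDVI test separating vegetation from bright lithology. Band order and threshold values are illustrative assumptions:

    ```python
    # Sketch: rule-based pixel classification using NDVI and a red-band test.
    import numpy as np

    def ndvi(red, nir):
        # assumes float reflectance bands
        return (nir - red) / (nir + red + 1e-12)

    def classify(red, nir, ndvi_thr=0.25, red_thr=0.3):
        """Return a class map: 0 = other, 1 = vegetation, 2 = bright rock."""
        v = ndvi(red, nir)
        out = np.zeros(red.shape, dtype=np.uint8)
        out[v > ndvi_thr] = 1                          # vegetated surfaces
        out[(v <= ndvi_thr) & (red > red_thr)] = 2     # bright lithology
        return out
    ```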

  3. Development an Automatic Speech to Facial Animation Conversion for Improve Deaf Lives

    Directory of Open Access Journals (Sweden)

    S. Hamidreza Kasaei

    2011-05-01

    Full Text Available In this paper, we propose the design and initial implementation of a robust system which automatically translates voice into text and text into sign language animations. Sign language translation systems could significantly improve deaf lives, especially in communications, exchange of information, and employment, by using machines to translate conversations from one language to another. Therefore, considering these points, it seems necessary to study speech recognition. Usually, voice recognition algorithms address three major challenges. The first is extracting features from speech, the second is recognition when only a limited sound gallery is available, and the final challenge is to move from speaker-dependent to speaker-independent voice recognition. Extracting features from speech is an important stage in our method. Different procedures are available for extracting features from speech. One of the commonest, used in speech recognition systems, is Mel-Frequency Cepstral Coefficients (MFCCs). The algorithm starts with preprocessing and signal conditioning, after which features are extracted from the speech using cepstral coefficients. The result of this process is then sent to the segmentation part. Finally, the recognition part recognizes the words and converts the recognized words to facial animation. The project is still in progress and some new interesting methods are described in the current report.

  4. DNA extraction methods for detecting genetically modified foods: A comparative study.

    Science.gov (United States)

    Elsanhoty, Rafaat M; Ramadan, Mohamed Fawzy; Jany, Klaus Dieter

    2011-06-15

    The work presented in this manuscript was carried out to compare six different methods for extracting DNA from raw maize and its derived products. The methods that gave higher yield and quality of DNA were chosen to detect genetic modification in samples collected from the Egyptian market. The different methods were evaluated for extracting DNA from maize kernels (without treatment), maize flour (mechanical treatment), canned maize (sweet corn), frozen maize (sweet corn), maize starch, extruded maize, popcorn, corn flakes, maize snacks, and bread made from corn flour (mechanical and thermal treatments). The quality and quantity of the DNA extracted from the standards, containing known percentages of GMO material, and from the different food products were evaluated. For qualitative detection of GMO varieties in foods, the GMOScreen 35S/NOS test kit was used to screen for genetic modification in the samples. The samples positive for the 35S promoter and/or the NOS terminator were identified by the standard methods adopted by the EU. All of the methods used yielded DNA of good quality. However, we noted that the purest DNA extracts were obtained using the DNA extraction kit (Roche), and this was generally the best method for extracting DNA from most of the maize-derived foods. We noted that the yield of DNA extracted from maize-derived foods was generally lower in the processed products. The results indicated that 17 samples were positive for the presence of the 35S promoter, while 34% of the samples were positive for the genetically modified maize line Bt-176. Copyright © 2010 Elsevier Ltd. All rights reserved.

  5. DNA extraction method for PCR in mycorrhizal fungi.

    Science.gov (United States)

    Manian, S; Sreenivasaprasad, S; Mills, P R

    2001-10-01

    To develop a simple and rapid DNA extraction protocol for PCR in mycorrhizal fungi. The protocol combines the application of rapid freezing and boiling cycles and passage of the extracts through DNA purification columns. PCR amplifiable DNA was obtained from a number of endo- and ecto-mycorrhizal fungi using minute quantities of spores and mycelium, respectively. DNA extracted following the method, was used to successfully amplify regions of interest from high as well as low copy number genes. The amplicons were suitable for further downstream applications such as sequencing and PCR-RFLPs. The protocol described is simple, short and facilitates rapid isolation of PCR amplifiable genomic DNA from a large number of fungal isolates in a single day. The method requires only minute quantities of starting material and is suitable for mycorrhizal fungi as well as a range of other fungi.

  6. An analysis of line-drawings based upon automatically inferred grammar and its application to chest x-ray images

    International Nuclear Information System (INIS)

    Nakayama, Akira; Yoshida, Yuuji; Fukumura, Teruo

    1984-01-01

    There is a technique that uses grammar inference for image-structure analysis. This technique involves a few problems when applied to naturally obtained images, as no practical grammatical technique for two-dimensional images has been established. The authors developed a technique which solves the above problems, mainly for the automated structure analysis of naturally obtained images. The first half of this paper describes the automatic inference of line-drawing generation grammar and the line-drawing analysis based on that inference. The second half of the paper reports on the actual analysis. The proposed technique extracts object line drawings from line drawings containing noise. The technique was evaluated for its effectiveness with an example of extracting rib center lines from thin-line chest X-ray images of practical scale and complexity. In this example, the total number of characteristic points (ends, branch points and intersections) composing the line drawings was 377 per image, and the total number of line segments composing the line drawings was 566 on average per sheet. The extraction ratio was 86.6%, which seems proper considering the complexity of the input line drawings. Further, the result was compared with the rib center lines identified by the automatic screening system AISCR-V3 for comparison with a conventional processing technique, and it was satisfactory considering the versatility of this method. (Wakatsuki, Y.)

  7. [An automatic peak detection method for LIBS spectrum based on continuous wavelet transform].

    Science.gov (United States)

    Chen, Peng-Fei; Tian, Di; Qiao, Shu-Jun; Yang, Guang

    2014-07-01

    Spectrum peak detection in laser-induced breakdown spectroscopy (LIBS) is an essential step, but the presence of background and noise seriously disturbs the accuracy of the peak positions. The present paper proposes a method for automatic peak detection in LIBS spectra, intended to enhance the ability to find overlapping peaks and to improve adaptivity. We introduce the ridge peak detection method based on the continuous wavelet transform to LIBS, discuss the choice of the mother wavelet, and optimize the scale factor and the shift factor. We also improve the ridge peak detection method with a ridge-correction step. The experimental results show that, compared with other peak detection methods (the direct comparison method, the derivative method and the ridge peak search method), our method has a significant advantage in the ability to distinguish overlapping peaks and in the precision of peak detection, and can be applied to data processing in LIBS.
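
    A minimal sketch of ridge-based peak picking on a continuous wavelet transform, using SciPy's implementation of the same general idea; the synthetic spectrum and the scale range are illustrative assumptions:

    ```python
    # Sketch: CWT ridge-line peak detection on a synthetic LIBS-like spectrum.
    import numpy as np
    from scipy import signal

    x = np.linspace(0, 100, 2000)
    # two overlapping peaks on a sloped background plus noise
    spectrum = (np.exp(-((x - 40) / 1.5) ** 2)
                + 0.8 * np.exp(-((x - 44) / 1.5) ** 2)
                + 0.002 * x + 0.02 * np.random.randn(x.size))

    # widths = range of wavelet scale factors searched for ridge lines
    peak_idx = signal.find_peaks_cwt(spectrum, widths=np.arange(5, 40))
    print(x[peak_idx])  # estimated peak positions
    ```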

  8. Evaluation of sample extraction methods for proteomics analysis of green algae Chlorella vulgaris.

    Science.gov (United States)

    Gao, Yan; Lim, Teck Kwang; Lin, Qingsong; Li, Sam Fong Yau

    2016-05-01

    Many protein extraction methods have been developed for plant proteome analysis, but information on the optimal protein extraction method for algae species is limited. This study evaluated four protein extraction methods, i.e. the direct lysis buffer method, the TCA-acetone method, the phenol method, and the phenol/TCA-acetone method, using the green alga Chlorella vulgaris for proteome analysis. The data presented show that the phenol/TCA-acetone method was superior to the other three tested methods with regard to shotgun proteomics. Proteins identified using shotgun proteomics were validated using the sequential window acquisition of all theoretical fragment-ion spectra (SWATH) technique. Additionally, SWATH provides protein quantitation information across the different methods, and protein abundance using the different protein extraction methods was evaluated. These results highlight the importance of the green algae protein extraction method for subsequent MS analysis and identification. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Optic disc detection and boundary extraction in retinal images.

    Science.gov (United States)

    Basit, A; Fraz, Muhammad Moazam

    2015-04-10

    With the development of digital image processing, analysis and modeling techniques, automatic retinal image analysis is emerging as an important screening tool for early detection of ophthalmologic disorders such as diabetic retinopathy and glaucoma. In this paper, a robust method for optic disc detection and extraction of the optic disc boundary is proposed to help in the development of computer-assisted diagnosis and treatment of such ophthalmic diseases. The proposed method is based on morphological operations, smoothing filters, and the marker-controlled watershed transform. Internal and external markers are used to first modify the gradient magnitude image, and then the watershed transformation is applied on this modified gradient magnitude image for boundary extraction. This method has shown significant improvement over existing methods in terms of detection and boundary extraction of the optic disc. The proposed method has an optic disc detection success rate of 100%, 100%, 100% and 98.9% for the DRIVE, Shifa, CHASE_DB1, and DIARETDB1 databases, respectively. The optic disc boundary detection achieved an average spatial overlap of 61.88%, 70.96%, 45.61%, and 54.69% for these databases, respectively, which is higher than current methods.
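
    A minimal sketch of marker-controlled watershed boundary extraction in the spirit of the method above, using scikit-image; the quantile-based marker placement and all size parameters are illustrative assumptions:

    ```python
    # Sketch: watershed on a gradient image with internal/external markers.
    import numpy as np
    from skimage import filters, morphology, segmentation

    def optic_disc_boundary(gray):
        """gray: 2-D float image; the disc is assumed to be the brightest region."""
        smooth = filters.gaussian(gray, sigma=3)
        grad = filters.sobel(smooth)                       # gradient magnitude
        markers = np.zeros_like(gray, dtype=int)
        markers[smooth < np.quantile(smooth, 0.50)] = 1    # external (background)
        markers[smooth > np.quantile(smooth, 0.99)] = 2    # internal (disc)
        labels = segmentation.watershed(grad, markers)     # watershed transform
        disc = morphology.remove_small_objects(labels == 2, min_size=64)
        return segmentation.find_boundaries(disc, mode="outer")
    ```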

  10. Contrast-based fully automatic segmentation of white matter hyperintensities: method and validation.

    Directory of Open Access Journals (Sweden)

    Thomas Samaille

    Full Text Available White matter hyperintensities (WMH) on T2 or FLAIR sequences have been commonly observed on MR images of elderly people. They have been associated with various disorders and have been shown to be a strong risk factor for stroke and dementia. WMH studies usually required visual evaluation of WMH load or time-consuming manual delineation. This paper introduces WHASA (White matter Hyperintensities Automated Segmentation Algorithm), a new method for automatically segmenting WMH from FLAIR and T1 images in multicentre studies. Contrary to previous approaches that were based on intensities, this method relies on contrast: non-linear diffusion filtering alternated with watershed segmentation to obtain piecewise constant images with increased contrast between WMH and surrounding tissues. WMH are then selected based on a subject-dependent, automatically computed threshold and anatomical information. WHASA was evaluated on 67 patients from two studies, acquired on six different MRI scanners and displaying a wide range of lesion loads. Accuracy of the segmentation was assessed through volume and spatial agreement measures with respect to manual segmentation; an intraclass correlation coefficient (ICC) of 0.96 and a mean similarity index (SI) of 0.72 were obtained. WHASA was compared to four other approaches: Freesurfer and a thresholding approach as unsupervised methods; k-nearest neighbours (kNN) and support vector machines (SVM) as supervised ones. For the latter, the influence of the training set was also investigated. WHASA clearly outperformed both unsupervised methods, while performing at least as well as the supervised approaches (ICC range: 0.87-0.91 for kNN, 0.89-0.94 for SVM; mean SI: 0.63-0.71 for kNN, 0.67-0.72 for SVM), and did not need any training set.

  11. A New Approach to Urban Road Extraction Using High-Resolution Aerial Image

    Directory of Open Access Journals (Sweden)

    Jianhua Wang

    2016-07-01

    Full Text Available Road information is fundamental not only in the military field but also in common daily life. Automatic road extraction from remote sensing images can provide references for city planning as well as for transportation database and map updating. However, owing to the spectral similarity between roads and impervious structures, current methods using spectral characteristics alone are often ineffective. By contrast, the detailed information discernible from high-resolution aerial images enables road extraction with spatial texture features. In this study, a knowledge-based method is established and proposed; this method incorporates the spatial texture feature into urban road extraction. The spatial texture feature is initially extracted by the local Moran's I, and the derived texture is added to the spectral bands of the image for image segmentation. Subsequently, features such as brightness, standard deviation, rectangularity, aspect ratio, and area are selected to form the hypothesis and verification model based on road knowledge. Finally, roads are extracted by applying the hypothesis and verification model and are post-processed based on mathematical morphology. The newly proposed method is evaluated by conducting two experiments. Results show that the completeness, correctness, and quality of the results can reach approximately 94%, 90% and 86%, respectively, indicating that the proposed method is effective for urban road extraction.
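
    A minimal sketch of a local Moran's I texture band, as used above to augment the spectral bands before segmentation; the window size is an illustrative assumption, and the window mean (which here includes the centre pixel) stands in for a row-standardised spatial lag:

    ```python
    # Sketch: per-pixel local Moran's I computed with a moving-window lag.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def local_morans_i(band, size=3):
        z = band.astype(float)
        z -= z.mean()                            # deviations from the global mean
        lag = uniform_filter(z, size=size)       # approximate spatial lag
        return z * lag / (z.var() + 1e-12)

    # the derived texture band is then stacked with the spectral bands, e.g.:
    # stacked = np.dstack([image, local_morans_i(image[..., 0])])
    ```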

  12. [Comparison of two nucleic acid extraction methods for norovirus in oysters].

    Science.gov (United States)

    Yuan, Qiao; Li, Hui; Deng, Xiaoling; Mo, Yanling; Fang, Ling; Ke, Changwen

    2013-04-01

    To explore a convenient and effective method of norovirus nucleic acid extraction from oysters suitable for long-term viral surveillance. Two methods, namely method A (glycine washing and polyethylene glycol precipitation of the virus followed by a silica gel centrifugal column) and method B (proteinase K digestion followed by application of paramagnetic silica), were compared for their performance in norovirus nucleic acid extraction from oysters. Real-time RT-PCR was used to detect norovirus in naturally infected oysters and in oysters with induced infection. The two methods yielded comparable positive detection rates for the samples, but the recovery rate of the virus was higher with method B than with method A. Method B is a more convenient and rapid method for norovirus nucleic acid extraction from oysters and is suitable for long-term surveillance of norovirus.

  13. Automatic radiation dose monitoring for CT of trauma patients with different protocols: feasibility and accuracy

    International Nuclear Information System (INIS)

    Higashigaito, K.; Becker, A.S.; Sprengel, K.; Simmen, H.-P.; Wanner, G.; Alkadhi, H.

    2016-01-01

    Aim: To demonstrate the feasibility and accuracy of automatic radiation dose monitoring software for computed tomography (CT) of trauma patients in a clinical setting over time, and to evaluate the potential for radiation dose reduction using iterative reconstruction (IR). Materials and methods: Over a period of 18 months, data from 378 consecutive thoraco-abdominal CT examinations of trauma patients were extracted using automatic radiation dose monitoring software, and patients were split into three cohorts: cohort 1, 64-section CT with filtered back projection and a 200 mAs tube current-time product; cohort 2, 128-section CT with IR and an identical imaging protocol; cohort 3, 128-section CT with IR and a 150 mAs tube current-time product. Radiation dose parameters from the software were compared with the individual patient protocols. Image noise was measured and image quality was determined semi-quantitatively. Results: Automatic extraction of radiation dose metrics was feasible and accurate in all (100%) patients. All CT examinations were of diagnostic quality. There were no differences between cohorts 1 and 2 regarding the volume CT dose index (CTDIvol; p=0.62), dose-length product (DLP), and effective dose (ED; both p=0.95), while noise was significantly lower in cohort 2 (chest and abdomen, both -38%, p<0.017). Compared to cohort 1, CTDIvol, DLP, and ED in cohort 3 were significantly lower (all -25%, p<0.017), as was the noise in the chest (-32%) and abdomen (-27%, both p<0.017). Compared to cohort 2, CTDIvol (-28%), DLP, and ED (both -26%) in cohort 3 were significantly lower (all p<0.017), while noise in the chest (+9%) and abdomen (+18%) was significantly higher (both p<0.017). Conclusion: Automatic radiation dose monitoring software is feasible and accurate, and can be implemented in a clinical setting for evaluating the effects of lowering the radiation dose of CT protocols over time.
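
    For readers unfamiliar with the dose metrics compared here, effective dose is conventionally estimated from the dose-length product with a tabulated body-region coefficient. The minimal sketch below illustrates that relationship only; the coefficient value and the record layout are illustrative assumptions, not the monitoring software described in the study.

```python
from dataclasses import dataclass

K_TRUNK = 0.015  # mSv per mGy*cm; a commonly tabulated adult trunk value

@dataclass
class CTDoseRecord:
    patient_id: str
    ctdi_vol: float  # volume CT dose index, mGy
    dlp: float       # dose-length product, mGy*cm

    def effective_dose(self) -> float:
        """ED (mSv) estimated as DLP times a body-region coefficient."""
        return self.dlp * K_TRUNK

record = CTDoseRecord("anon-001", ctdi_vol=9.8, dlp=655.0)
print(f"ED ~ {record.effective_dose():.1f} mSv")  # ~9.8 mSv
```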

  14. Automatic segmentation of coronary angiograms based on fuzzy inferring and probabilistic tracking

    Directory of Open Access Journals (Sweden)

    Shoujun Zhou

    2010-08-01

    Full Text Available Abstract Background Segmentation of the coronary angiogram is important in computer-assisted artery motion analysis and in the reconstruction of 3D vascular structures from single-plane or biplane angiographic systems. Developing fully automated and accurate vessel segmentation algorithms is highly challenging, especially when extracting vascular structures with large variations in image intensity and noise, as well as with variable cross-sections or vascular lesions. Methods This paper presents a novel tracking method for automatic segmentation of the coronary artery tree in X-ray angiographic images, based on probabilistic vessel tracking and fuzzy structure pattern inferring. The method is composed of two main steps: preprocessing and tracking. In preprocessing, multiscale Gabor filtering and Hessian matrix analysis were used to enhance and extract vessel features from the original angiographic image, leading to a vessel feature map as well as a vessel direction map. In tracking, a seed point was first detected automatically by analyzing the vessel feature map. Subsequently, two operators, a probabilistic tracking operator (PTO) and a vessel structure pattern detector (SPD), worked together from the detected seed point to extract vessel segments or branches one at a time. The local structure pattern was inferred by a multi-feature-based fuzzy inferring function employed in the SPD. The identified structure pattern, such as a crossing or bifurcation, was used to control the tracking process, for example, to keep tracking the current segment or to start tracking a new one, depending on the detected pattern. Results By appropriate integration of these advanced preprocessing and tracking steps, our tracking algorithm is able to extract both vessel axis lines and edge points, as well as measure the arterial diameters in various complicated cases. For example, it can walk across gaps along the longitudinal vessel direction, manage varying vessel
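
    The Hessian-based vessel enhancement used in preprocessing is closely related to the widely used Frangi vesselness filter. The sketch below assumes scikit-image's frangi filter in place of the paper's combined Gabor/Hessian pipeline and takes the strongest response as an automatic seed point; the filename, scale range, and seeding rule are assumptions.

```python
import numpy as np
from skimage import io, util
from skimage.filters import frangi

# vessels appear as dark, elongated structures in X-ray angiograms
angio = util.img_as_float(io.imread("angiogram.png", as_gray=True))

# multiscale Hessian-based vesselness map (a vessel feature map analogue)
feature_map = frangi(angio, sigmas=range(1, 6), black_ridges=True)

# automatic seed: the pixel with the strongest vesselness response
seed = np.unravel_index(np.argmax(feature_map), feature_map.shape)
print("tracking seed at", seed)
```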

  15. Semi-automatic watershed medical image segmentation methods for customized cancer radiation treatment planning simulation

    International Nuclear Information System (INIS)

    Kum Oyeon; Kim Hye Kyung; Max, N.

    2007-01-01

    A cancer radiation treatment planning simulation requires image segmentation to define the gross tumor volume, clinical target volume, and planning target volume. Manual segmentation, which is usual in clinical settings, depends on the operator's experience and may, moreover, vary from trial to trial even for the same operator. To overcome this difficulty, we developed semi-automatic watershed medical image segmentation tools using both the top-down watershed algorithm in the Insight Segmentation and Registration Toolkit (ITK) and Vincent-Soille's bottom-up watershed algorithm with region merging. We applied our algorithms to segment two- and three-dimensional head phantom CT data and to find the pixel (or voxel) numbers for each segmented area, which are needed for radiation treatment optimization. A semi-automatic method is useful for avoiding errors from both human and machine sources, and it provides clear, visible information for pedagogical purposes. (orig.)
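
    A marker-based watershed of the kind described can be sketched briefly. The example below uses scikit-image rather than the ITK or Vincent-Soille implementations named in the record, and ends with the per-region pixel counts needed for treatment optimization; the smoothing scale and marker spacing are assumptions.

```python
import numpy as np
from skimage.feature import peak_local_max
from skimage.filters import gaussian, sobel
from skimage.segmentation import watershed

def segment_and_count(ct_slice):
    smoothed = gaussian(ct_slice, sigma=2)
    gradient = sobel(smoothed)                # watershed floods the gradient
    coords = peak_local_max(smoothed, min_distance=20)  # marker seeds
    markers = np.zeros(ct_slice.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(gradient, markers)
    counts = np.bincount(labels.ravel())[1:]  # pixels per segmented region
    return labels, counts
```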

  16. Analysis of lignans in Magnoliae Flos by turbulent flow chromatography with online solid-phase extraction and high-performance liquid chromatography with tandem mass spectrometry.

    Science.gov (United States)

    Zhou, Xuan; Chen, Cen; Ye, Xiaolan; Song, Fenyun; Fan, Guorong; Wu, Fuhai

    2016-04-01

    In this study, a method coupling turbulent flow chromatography with online solid-phase extraction and high-performance liquid chromatography with tandem mass spectrometry was developed for analyzing the lignans in Magnoliae Flos. Through the online pretreatment by turbulent flow chromatography solid-phase extraction, impurity removal and analyte concentration were carried out automatically, and the lignans were separated rapidly and well. Seven lignans of Magnoliae Flos, namely epieudesmin, magnolin, lirioresinol-B dimethyl ether, epi-magnolin, fargesin, aschantin, and demethoxyaschantin, were identified by comparing their retention behavior, UV spectra, and mass spectra with those of reference substances or literature data. The developed method was validated, and the results showed that it was not only automatic and rapid but also accurate and reliable. The method holds high potential to become an effective tool for the quality control of lignans in Magnoliae Flos and for the analysis of other complex mixtures. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Automatic fringe enhancement with novel bidimensional sinusoids-assisted empirical mode decomposition.

    Science.gov (United States)

    Wang, Chenxing; Kemao, Qian; Da, Feipeng

    2017-10-02

    Fringe-based optical measurement techniques require reliable fringe analysis methods, among which empirical mode decomposition (EMD) stands out for its ability to analyze complex signals and its data-driven nature. However, two challenging issues hinder the application of EMD in practical measurement. One is the tricky mode mixing problem (MMP), which leaves the decomposed intrinsic mode functions (IMFs) with equivocal physical meaning; the other is the automatic and accurate extraction of the sinusoidal fringe from the IMFs when unpredictable and unavoidable background and noise are present in real measurements. Accordingly, in this paper, a novel bidimensional sinusoids-assisted EMD (BSEMD) is proposed to decompose a fringe pattern into mono-component bidimensional IMFs (BIMFs), with the MMP solved; the properties of the resulting BIMFs are then analyzed to recognize and enhance the useful fringe component. The decomposition and the fringe recognition are integrated, with the latter providing feedback to the former; this helps to stop the decomposition automatically, making the algorithm simpler and more reliable. A series of experiments shows that the proposed method is accurate, efficient, and robust to various fringe patterns, even those of poor quality, rendering it a potential tool for practical use.
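
    The BSEMD algorithm itself is beyond a short example, but the sifting step at the heart of every EMD variant is simple to state: repeatedly subtract the mean of the upper and lower extremal envelopes until the residue behaves like an IMF. The 1-D sketch below, using SciPy cubic splines for the envelopes and a fixed sifting count, illustrates that core step only and is not the bidimensional method proposed here.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x):
    t = np.arange(len(x))
    up = argrelextrema(x, np.greater)[0]     # indices of local maxima
    lo = argrelextrema(x, np.less)[0]        # indices of local minima
    if len(up) < 2 or len(lo) < 2:
        return None                          # too few extrema: residue
    upper = CubicSpline(up, x[up])(t)        # upper envelope
    lower = CubicSpline(lo, x[lo])(t)        # lower envelope
    return x - (upper + lower) / 2.0         # subtract local mean envelope

def first_imf(x, n_sifts=8):
    h = np.asarray(x, dtype=float)
    for _ in range(n_sifts):                 # fixed sifting-count criterion
        nxt = sift_once(h)
        if nxt is None:
            break
        h = nxt
    return h                                 # approximate first IMF
```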

  18. Automatic Generation System of Multiple-Choice Cloze Questions and its Evaluation

    Directory of Open Access Journals (Sweden)

    Takuya Goto

    2010-09-01

    Full Text Available Since English expressions vary across genres, it is important for students to study questions generated from sentences of the target genre. Although a variety of questions have been prepared, they are still not enough to cover all the genres students want to learn. On the other hand, producing English questions requires sufficient grammatical knowledge and vocabulary, so it is difficult for non-experts to prepare English questions by themselves. In this paper, we propose an automatic generation system of multiple-choice cloze questions from English texts. Because empirical knowledge is necessary to produce appropriate questions, machine learning is introduced to acquire knowledge from existing questions. To generate the questions from texts automatically, the system (1) extracts appropriate sentences for questions from texts based on Preference Learning, (2) estimates the blank part based on Conditional Random Fields, and (3) generates distracters based on statistical patterns of existing questions. Experimental results show that our method is workable for selecting appropriate sentences and blank parts. Moreover, our method is able to generate usable distracters, especially for sentences that do not contain proper nouns.
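
    A toy version of step (3), distracter generation, may help make the pipeline concrete. The sketch below ranks distracter candidates by corpus frequency within a crude length-based stand-in for part-of-speech matching; both the heuristic and the example data are assumptions, far simpler than the statistical patterns learned by the proposed system.

```python
from collections import Counter

def make_cloze(sentence, blank, corpus_words, n_distracters=3):
    stem = sentence.replace(blank, "____", 1)
    # candidates: frequent corpus words of similar length, a crude
    # stand-in for matching part of speech (an assumption of this toy)
    freq = Counter(w for w in corpus_words
                   if w != blank and abs(len(w) - len(blank)) <= 2)
    distracters = [w for w, _ in freq.most_common(n_distracters)]
    return stem, [blank] + distracters

stem, options = make_cloze(
    "The committee reached a unanimous decision.", "unanimous",
    ["tentative", "practical", "efficient", "formal", "decision"])
print(stem)     # The committee reached a ____ decision.
print(options)  # correct answer first, followed by distracters
```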

  19. Automatic comic page image understanding based on edge segment analysis

    Science.gov (United States)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique for producing digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and to further identify the reading order of these storyboards. The proposed method is evaluated on a data set consisting of 2,000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms existing methods.
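
    The storyboard-detection idea can be approximated with standard OpenCV primitives (Canny edges, contours, rectangle fitting) in place of the paper's edge-segment chaining and border-line selection. In the sketch below the thresholds, the area cut-off, and the right-to-left reading-order rule are all assumptions.

```python
import cv2

page = cv2.imread("comic_page.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(page, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

panels = []
for c in contours:
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    x, y, w, h = cv2.boundingRect(approx)
    if len(approx) == 4 and w * h > 0.01 * page.size:  # roughly rectangular
        panels.append((x, y, w, h))

# naive top-to-bottom, right-to-left ordering (a manga-style convention)
panels.sort(key=lambda p: (p[1], -p[0]))
for i, (x, y, w, h) in enumerate(panels, 1):
    print(f"panel {i}: origin=({x},{y}) size={w}x{h}")
```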

  20. Automatic detection of adverse events to predict drug label changes using text and data mining techniques.

    Science.gov (United States)

    Gurulingappa, Harsha; Toldo, Luca; Rajput, Abdul Mateen; Kors, Jan A; Taweel, Adel; Tayrouz, Yorki

    2013-11-01

    The aim of this study was to assess the impact of automatically detected adverse event signals from text and open-source data on the prediction of drug label changes. Open-source adverse effect data were collected from the FAERS, Yellow Card, and SIDER databases. A shallow linguistic relation extraction system (JSRE) was applied for the extraction of adverse effects from MEDLINE case reports. A statistical approach was applied to the extracted datasets for signal detection and the subsequent prediction of label changes issued for 29 drugs by the UK Regulatory Authority in 2009. 76% of drug label changes were automatically predicted; of these, 6% were detected only by text mining. JSRE enabled the precise identification of four adverse drug events from MEDLINE that were undetectable otherwise. Changes in drug labels can be predicted automatically using data and text mining techniques. Text mining technology is mature and well placed to support pharmacovigilance tasks. Copyright © 2013 John Wiley & Sons, Ltd.
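
    The record does not specify the statistical approach used for signal detection; a widely used disproportionality measure in pharmacovigilance is the proportional reporting ratio (PRR), sketched below for a 2x2 contingency table of spontaneous reports. The screening thresholds shown are common conventions, not this study's criteria.

```python
def prr(a, b, c, d):
    """Proportional reporting ratio for a 2x2 table of reports:
    a: drug + event, b: drug + other events,
    c: other drugs + event, d: other drugs + other events."""
    return (a / (a + b)) / (c / (c + d))

# a common screening convention: flag when PRR >= 2 with >= 3 reports
a, b, c, d = 12, 488, 1500, 250000
signal = prr(a, b, c, d)
print(f"PRR = {signal:.1f}, flagged: {signal >= 2 and a >= 3}")
```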