WorldWideScience

Sample records for automatable method extract

  1. Morphological method for automatic extraction of the coronary arteries

    International Nuclear Information System (INIS)

    Coronary arteriography is a clinically important diagnostic tool for the evaluation of coronary artery disease, and can provide detailed information for the quantitative assessment of coronary arteriograms. Several studies concerning the extraction of vessel edges have been published, and automatic extraction of vessel edges has been used in clinical diagnostic systems. However, these methods are not satisfactory, because manual modification by the operator is unavoidable in some cases. To reduce manual operation, accurate and automatic extraction of the coronary arteries is necessary. In this paper, we propose a new technique for automatic extraction of the coronary arteries using morphological operators. This method includes the following steps: contrast enhancement using a morphological Top-Hat operator, enhancement of thin vessels and reduction of pulse noise using a morphological erosion operator, elimination of obvious background pixels by semi-binary thresholding, and extraction of the coronary arteries by labeling and counting the area. (author)
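
    A minimal sketch of a pipeline in this spirit, using standard scikit-image morphological operators on a 2D angiogram array; the kernel radii, the percentile threshold and the area cut-off are illustrative assumptions, not values from the paper:

```python
import numpy as np
from skimage.morphology import disk, white_tophat, erosion
from skimage.measure import label, regionprops

def extract_vessels(angiogram, tophat_radius=7, erosion_radius=1,
                    background_percentile=80, min_area=200):
    """Rough morphological vessel extraction (illustrative parameters)."""
    # 1. Contrast enhancement: top-hat keeps thin bright structures
    #    (use black_tophat instead if vessels are darker than the background).
    enhanced = white_tophat(angiogram, disk(tophat_radius))
    # 2. Erosion suppresses isolated pulse noise while keeping elongated vessels.
    eroded = erosion(enhanced, disk(erosion_radius))
    # 3. "Semi-binary" thresholding: discard obviously background pixels.
    thresh = np.percentile(eroded, background_percentile)
    mask = eroded > thresh
    # 4. Label connected components and keep only sufficiently large regions.
    labels = label(mask)
    vessel_mask = np.zeros_like(mask)
    for region in regionprops(labels):
        if region.area >= min_area:
            vessel_mask[labels == region.label] = True
    return vessel_mask
```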

  2. Automatic extraction of candidate nomenclature terms using the doublet method

    Directory of Open Access Journals (Sweden)

    Berman Jules J

    2005-10-01

    nomenclature. Results A 31+ Megabyte corpus of pathology journal abstracts was parsed using the doublet extraction method. This corpus consisted of 4,289 records, each containing an abstract title. The total number of words included in the abstract titles was 50,547. New candidate terms for the nomenclature were automatically extracted from the titles of abstracts in the corpus. Total execution time on a desktop computer with CPU speed of 2.79 GHz was 2 seconds. The resulting output consisted of 313 new candidate terms, each consisting of concatenated doublets found in the reference nomenclature. Human review of the 313 candidate terms yielded a list of 285 terms approved by a curator. A final automatic extraction of duplicate terms yielded a final list of 222 new terms (71% of the original 313 extracted candidate terms) that could be added to the reference nomenclature. Conclusion The doublet method for automatically extracting candidate nomenclature terms can be used to quickly find new terms from vast amounts of text. The method can be immediately adapted for virtually any text and any nomenclature. An implementation of the algorithm, in the Perl programming language, is provided with this article.
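
    The original implementation is in Perl; the following is a small Python sketch of the doublet idea as described above, where a candidate term is a maximal word run whose every consecutive word pair already occurs in the reference nomenclature (the tokenization and the example phrases are placeholder assumptions):

```python
import re

def doublets(text):
    """Consecutive word pairs (doublets) in a phrase."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return list(zip(words, words[1:]))

def candidate_terms(titles, nomenclature):
    """Candidate terms: maximal word runs whose every doublet occurs
    in the reference nomenclature (illustrative reimplementation)."""
    known = {d for term in nomenclature for d in doublets(term)}
    candidates = set()
    for title in titles:
        words = re.findall(r"[a-z0-9]+", title.lower())
        run = []
        for i in range(len(words) - 1):
            if (words[i], words[i + 1]) in known:
                if not run:
                    run = [words[i]]
                run.append(words[i + 1])
            else:
                if len(run) >= 2:
                    candidates.add(" ".join(run))
                run = []
        if len(run) >= 2:
            candidates.add(" ".join(run))
    return candidates

# Hypothetical example:
# candidate_terms(["Acute myeloid leukemia in adults"],
#                 ["acute myeloid infiltrate", "myeloid leukemia"])
# -> {"acute myeloid leukemia"}
```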

  3. An automatic abrupt information extraction method based on singular value decomposition and higher-order statistics

    International Nuclear Information System (INIS)

    One key aspect of local fault diagnosis is how to effectively extract abrupt features from vibration signals. This paper proposes a method to automatically extract abrupt information based on singular value decomposition and higher-order statistics. In order to observe the distribution law of singular values, a numerical analysis simulating noise, a periodic signal, an abrupt signal and the resulting singular value distribution is conducted. Based on higher-order statistics and spectrum analysis, a method to automatically choose the upper and lower borders of the singular value interval reflecting the abrupt information is built. The singular values selected by this method are then used to reconstruct abrupt signals. It is shown that the method obtains accurate results when processing the rub-impact fault signal measured in experiments. The analytical and experimental results indicate that the proposed method is feasible for automatically extracting abrupt information caused by faults such as rotor–stator rub-impact. (paper)
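
    A rough sketch of the general SVD-plus-higher-order-statistics idea, not the authors' exact border-selection rule: embed the signal in a Hankel matrix, keep the singular components whose reconstructions are impulsive (high kurtosis), and rebuild the abrupt part from those components. The embedding dimension and kurtosis threshold are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import hankel, svd
from scipy.stats import kurtosis

def extract_abrupt(signal, embed_dim=64, kurtosis_threshold=3.0):
    """Keep SVD components with impulsive (high-kurtosis) reconstructions."""
    n = len(signal)
    H = hankel(signal[:embed_dim], signal[embed_dim - 1:])   # embed_dim x (n-embed_dim+1)
    U, s, Vt = svd(H, full_matrices=False)
    keep = []
    for i in range(len(s)):
        comp = s[i] * np.outer(U[:, i], Vt[i])               # rank-1 component
        # Average anti-diagonals to map the component back to a 1-D signal.
        rec = np.array([np.mean(comp[::-1].diagonal(k))
                        for k in range(-comp.shape[0] + 1, comp.shape[1])])
        if kurtosis(rec) > kurtosis_threshold:               # impulsive -> abrupt information
            keep.append(i)
    abrupt = np.zeros(n)
    if keep:
        comp = (U[:, keep] * s[keep]) @ Vt[keep]
        abrupt = np.array([np.mean(comp[::-1].diagonal(k))
                           for k in range(-comp.shape[0] + 1, comp.shape[1])])
    return abrupt
```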

  4. Development of automatic extraction method of left ventricular contours on long axis view MR cine images

    International Nuclear Information System (INIS)

    In MRI cardiac function analysis, left ventricular volume curves and diagnostic parameters are obtained by extracting the left ventricular cavities as regions of interest (ROI) from long axis view MR cine images. The ROI extraction has had to be done manually, because automation of the extraction is difficult. A long axis view left ventricular contour consists of a cardiac wall part and an aortic valve part. The difficulty mentioned above is due to the low contrast of the cardiac wall part and the disappearance of edges at the aortic valve part. In this paper, we report a new automatic extraction method for long axis view MR cine images, which needs only 3 manually indicated points on the 1st image to extract all the contours from the total sequence of images. First, candidate points of a contour are detected by edge detection. Then, by selecting the best matched combination of candidate points with Dynamic Programming, the cardiac wall part is automatically extracted. The aortic valve part is manually extracted for the 1st image by indicating both of its end points, and is automatically extracted for the rest of the images by utilizing the characteristics of aortic valve motion throughout a cardiac cycle. (author)

  5. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    Science.gov (United States)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows, and the elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also has disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering to separate road points from ground points, (2) local principal component analysis with least squares fitting to extract the primitives of road centerlines, and (3) hierarchical grouping to connect primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and the Hough transform) proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time for road extraction from LiDAR data.
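
    A small sketch of step (2) only, under the assumption that candidate road center points are already available as 2D coordinates: local PCA estimates the dominant direction around each point, which can then seed a least-squares line primitive. The neighbour count is an illustrative assumption.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_directions(points, k=15):
    """Estimate a dominant direction per point via PCA of its k neighbours.

    points : (N, 2) array of candidate road-center coordinates (assumed input).
    Returns one unit direction vector per point.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nn.kneighbors(points)
    directions = np.zeros_like(points, dtype=float)
    for i, neighbours in enumerate(idx):
        local = points[neighbours] - points[neighbours].mean(axis=0)
        # Eigenvector of the local covariance with the largest eigenvalue
        # is the total-least-squares line direction through the neighbourhood.
        cov = local.T @ local
        eigvals, eigvecs = np.linalg.eigh(cov)
        directions[i] = eigvecs[:, np.argmax(eigvals)]
    return directions
```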

  6. A method for automatically extracting infectious disease-related primers and probes from the literature

    Directory of Open Access Journals (Sweden)

    Pérez-Rey David

    2010-08-01

    Background: Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and prescription of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences. Therefore, it is becoming increasingly important for researchers to navigate this information. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections, (2) detect the candidate sequences using a set of finite state machine-based recognizers, (3) refine problem sequences using a rule-based expert system, and (4) annotate the extracted sequences with their related organism/gene information. Results: We tested our approach using a test set composed of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The results of the evaluation show that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions: We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose and prescribe different infectious diseases. In addition, the proposed method can be expanded to detect and extract other biological sequences from the literature. The extracted information can also be used to readily update available primer/probe databases or to create new databases from scratch.
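
    A toy approximation of the candidate-detection step (2): the paper uses finite-state-machine recognizers, whereas a plain regular expression over IUPAC nucleotide codes is shown here only to illustrate the idea; the length bounds and the base-ratio filter are assumptions.

```python
import re

# Primer/probe sequences are short runs of nucleotide codes; IUPAC ambiguity
# codes are allowed.  Length bounds are illustrative assumptions.
SEQ_PATTERN = re.compile(r"\b[ACGTURYSWKMBDHVN]{15,35}\b", re.IGNORECASE)

def candidate_sequences(text):
    """Return candidate primer/probe strings found in a chunk of text."""
    candidates = []
    for match in SEQ_PATTERN.finditer(text):
        s = match.group(0).upper()
        # Discard matches that are ordinary words (too few A/C/G/T characters).
        if sum(s.count(b) for b in "ACGT") / len(s) >= 0.8:
            candidates.append(s)
    return candidates

# Hypothetical usage:
# candidate_sequences("The forward primer was 5'-ACCTGGTCATTACGGACTTGACT-3'.")
# -> ['ACCTGGTCATTACGGACTTGACT']
```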

  7. A FAST AND ACCURATE METHOD FOR AUTOMATIC CORONARY ARTERIAL TREE EXTRACTION IN ANGIOGRAMS

    Directory of Open Access Journals (Sweden)

    Rohollah Moosavi Tayebi

    2014-01-01

    Coronary arterial tree extraction in angiograms is an essential component of every cardiac image processing system. Once physicians decide to examine the coronary arteries in x-ray angiograms, extraction must be done precisely, quickly, automatically and for the whole arterial tree, to help diagnosis or treatment during the cardiac surgical operation. This application is very helpful for the surgeon in deciding the target vessels prior to coronary artery bypass graft surgery. Several techniques and algorithms have been proposed for extracting coronary arteries in angiograms. However, most of them suffer from disadvantages such as time complexity, low accuracy, extracting only parts of the main arteries instead of the full coronary arterial tree, the need for manual segmentation, the appearance of artifacts, and so forth. This study presents a new method for extracting the whole coronary arterial tree in angiography images using the Starlet wavelet transform. To this end, we first remove noise from the raw angiograms and then sharpen the coronary arteries. The coronary arterial tree is then extracted by applying a modified Starlet wavelet transform, after which the residual noise and artifacts are cleaned. For evaluation, we measured the proposed method's performance on our data set of 4932 Left Coronary Artery (LCA) and Right Coronary Artery (RCA) angiograms and compared it with some state-of-the-art approaches. The proposed method shows much higher accuracy (96% for LCA and 97% for RCA), higher sensitivity (86% for LCA and 89% for RCA), higher specificity (98% for LCA and 99% for RCA) and also higher precision (87% for LCA and 93% for RCA).

  8. DEVELOPMENT OF AUTOMATIC EXTRACTION METHOD FOR ROAD UPDATE INFORMATION BASED ON PUBLIC WORK ORDER OUTLOOK

    Science.gov (United States)

    Sekimoto, Yoshihide; Nakajo, Satoru; Minami, Yoshitaka; Yamaguchi, Syohei; Yamada, Harutoshi; Fuse, Takashi

    Recently, the disclosure of statistical data representing the financial effects or burden of public works, through the web sites of national and local governments, has enabled discussion of macroscopic financial trends. However, it is still difficult to grasp, nationwide, how each individual location has been changed by public works. The purpose of this research is to reasonably collect the road update information provided by various road managers, in order to realize efficient updating of various maps such as car navigation maps. In particular, we develop a system that automatically extracts the public works concerned and registers a summary, including position information, into a database from the public work order outlooks released by each local government, combining several web mining technologies. Finally, we collect and register several tens of thousands of records from web sites all over Japan, and confirm the feasibility of our method.

  9. A semi-automatic method for extracting thin line structures in images as rooted tree network

    Energy Technology Data Exchange (ETDEWEB)

    Brazzini, Jacopo [Los Alamos National Laboratory]; Dillard, Scott [Los Alamos National Laboratory]; Soille, Pierre [EC - JRC]

    2010-01-01

    This paper addresses the problem of semi-automatic extraction of line networks in digital images - e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method derived from morphological and hydrological concepts, consisting of minimum cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Then, geodesic propagation from a given seed with this metric is combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.
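
    A small sketch of the gradient structure tensor and its eigen-decomposition, which supplies the local orientation and anisotropy used to build such a metric; the smoothing scales are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_orientation(image, grad_sigma=1.0, tensor_sigma=3.0):
    """Per-pixel orientation and anisotropy from the gradient structure tensor."""
    smoothed = gaussian_filter(image.astype(float), grad_sigma)
    gx = sobel(smoothed, axis=1)
    gy = sobel(smoothed, axis=0)
    # Tensor components, averaged over a local neighbourhood.
    Jxx = gaussian_filter(gx * gx, tensor_sigma)
    Jxy = gaussian_filter(gx * gy, tensor_sigma)
    Jyy = gaussian_filter(gy * gy, tensor_sigma)
    # Closed-form eigenvalues of the symmetric 2x2 tensor.
    trace = Jxx + Jyy
    diff = np.sqrt((Jxx - Jyy) ** 2 + 4.0 * Jxy ** 2)
    lam1 = 0.5 * (trace + diff)      # largest eigenvalue (across-structure)
    lam2 = 0.5 * (trace - diff)      # smallest eigenvalue (along-structure)
    orientation = 0.5 * np.arctan2(2.0 * Jxy, Jxx - Jyy)   # dominant gradient angle
    anisotropy = (lam1 - lam2) / (lam1 + lam2 + 1e-12)     # 0 = isotropic, 1 = line-like
    return orientation, anisotropy
```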

  10. A Semi-Automatic Method for Extracting Vocal-Tract Movements from X-Ray Films

    OpenAIRE

    Fontecave Jallon, Julie; Berthommier, Frédéric

    2008-01-01

    Despite the development of new imaging techniques, existing X-ray data remain an appropriate tool for studying speech production phenomena. However, to exploit these images, the shapes of the vocal tract articulators must first be extracted. This task is usually performed manually and is long and laborious. This paper describes a semi-automatic technique for facilitating the extraction of vocal tract contours from complete sequences of large existing cineradiographic databases in the context of continu...

  11. Sequential Clustering based Facial Feature Extraction Method for Automatic Creation of Facial Models from Orthogonal Views

    CERN Document Server

    Ghahari, Alireza

    2009-01-01

    Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues in future video systems. We aim to make automatic feature extraction and natural 3D feature construction from 2D features detected on a pair of frontal and profile view face images more reliable and robust. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent imperfect orthogonality condition and non-coherent luminance. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the applicability of the resulting facial models to practical applications such as face recognition and facial animation.

  12. An automatic integrated image segmentation, registration and change detection method for water-body extraction using HSR images and GIS data

    OpenAIRE

    Sui, H. G.; Chen, G.; Hua, L.

    2013-01-01

    Automatic water-body extraction from remote sensing images is a challenging problem. Using GIS data to update and extract water bodies is an old but active topic. However, automatic registration and change detection between the two data sets often present difficulties. In this paper, a novel automatic water-body extraction method is proposed. The core idea is to integrate image segmentation, image registration and change detection with GIS data into a single processing chain. A new iterative segmentat...

  13. An automatic heart wall contour extraction method on MR images using the Active Contour Model

    International Nuclear Information System (INIS)

    In this paper, we propose a new method of extracting heart wall contours using the Active Contour Model (snakes). We use an adaptive contrast enhancing method, which makes it possible to extract both the inner and outer contours of the left ventricle of the heart. Experimental results showed the efficiency of this method. (author)
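
    A minimal active-contour (snake) sketch using scikit-image, assuming a single MR slice as a 2D array and a rough circular initialization around the ventricle; all parameter values are illustrative, not the authors' settings.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def extract_wall_contour(mr_slice, center, radius, n_points=200):
    """Fit a snake to a heart-wall edge starting from a circle around `center`."""
    # Light smoothing stabilises the image-energy term.
    smoothed = gaussian(mr_slice.astype(float), sigma=2, preserve_range=True)
    theta = np.linspace(0, 2 * np.pi, n_points)
    init = np.column_stack([center[0] + radius * np.sin(theta),   # rows
                            center[1] + radius * np.cos(theta)])  # cols
    snake = active_contour(smoothed, init,
                           alpha=0.015,  # elasticity (contour length penalty)
                           beta=10.0,    # rigidity (curvature penalty)
                           gamma=0.001)  # time step
    return snake  # (n_points, 2) array of row/col contour coordinates
```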

  14. Automatic Keyword Extraction from Individual Documents

    Energy Technology Data Exchange (ETDEWEB)

    Rose, Stuart J.; Engel, David W.; Cramer, Nicholas O.; Cowley, Wendy E.

    2010-05-03

    This paper introduces a novel and domain-independent method for automatically extracting keywords, as sequences of one or more words, from individual documents. We describe the method’s configuration parameters and algorithm, and present an evaluation on a benchmark corpus of technical abstracts. We also present a method for generating lists of stop words for specific corpora and domains, and evaluate its ability to improve keyword extraction on the benchmark corpus. Finally, we apply our method of automatic keyword extraction to a corpus of news articles and define metrics for characterizing the exclusivity, essentiality, and generality of extracted keywords within a corpus.
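
    A compact sketch in the spirit of the described approach: candidate keywords are word sequences between stop words and punctuation, scored by the sum of their member words' degree-to-frequency ratios. The stop-word list and scoring details below are simplified assumptions rather than the paper's exact configuration.

```python
import re
from collections import defaultdict

STOP_WORDS = {"a", "an", "and", "are", "as", "at", "be", "by", "for", "from",
              "in", "is", "it", "of", "on", "or", "that", "the", "to", "with"}

def extract_keywords(text, top_n=10):
    """Score candidate phrases by summed word degree/frequency."""
    # Candidate phrases are maximal word runs not containing stop words.
    tokens = re.split(r"[^a-zA-Z0-9]+", text.lower())
    phrases, current = [], []
    for tok in tokens:
        if not tok or tok in STOP_WORDS:
            if current:
                phrases.append(current)
                current = []
        else:
            current.append(tok)
    if current:
        phrases.append(current)
    # Word scores: degree (co-occurrence within phrases) over frequency.
    freq, degree = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for word in phrase:
            freq[word] += 1
            degree[word] += len(phrase)
    word_score = {w: degree[w] / freq[w] for w in freq}
    phrase_scores = {" ".join(p): sum(word_score[w] for w in p) for p in phrases}
    return sorted(phrase_scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
```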

  15. Automatic Vehicle Extraction from Airborne LiDAR Data Using an Object-Based Point Cloud Analysis Method

    Directory of Open Access Journals (Sweden)

    Jixian Zhang

    2014-09-01

    Automatic vehicle extraction from an airborne laser scanning (ALS) point cloud is very useful for many applications, such as digital elevation model generation and 3D building reconstruction. In this article, an object-based point cloud analysis (OBPCA) method is proposed for vehicle extraction from an ALS point cloud. First, segmentation-based progressive TIN (triangular irregular network) densification is employed to detect the ground points, and the potential vehicle points are detected based on the normalized heights of the non-ground points. Second, 3D connected component analysis is performed to group the potential vehicle points into segments. Finally, vehicle segments are detected based on three features: area, rectangularity and elongatedness. Experiments suggest that the proposed method is capable of achieving higher accuracy than the existing mean-shift-based method for vehicle extraction from an ALS point cloud. Moreover, the larger the point density, the higher the achieved accuracy.

  16. Method of automatic endocardium extraction from chest MRI images using three-dimensional digital image processing

    International Nuclear Information System (INIS)

    In this paper, we propose a method of endocardium extraction from chest MRI images. The proposed procedure, constructed with three-dimensional digital image processing techniques, is executed without manual intervention. A digital figure of the endocardium is obtained as two components: the left chambers and the right chambers. The shape of the extracted endocardium was verified by observing a voxel expression image displayed with depth-coded shading. Volume change curves of the left and right chambers were calculated to show the feasibility of using the results for measurement of cardiac functions. (author)

  17. AUTOMATIC EXTRACTION OF ROCK JOINTS FROM LASER SCANNED DATA BY MOVING LEAST SQUARES METHOD AND FUZZY K-MEANS CLUSTERING

    Directory of Open Access Journals (Sweden)

    S. Oh

    2012-09-01

    Recent developments in laser scanning devices have increased the capability of representing rock outcrops at very high resolution. An accurate 3D point cloud model with rock joint information can help geologists estimate the stability of a rock slope on-site or off-site. An automatic plane extraction method was developed by computing normal directions and grouping them by similar direction. Point normals were calculated by the moving least squares (MLS) method, considering every point within a given distance so as to minimize the error to the fitting plane. Normal directions were classified into a number of dominating clusters by fuzzy K-means clustering. A region growing approach was exploited to discriminate joints in the point cloud. The overall procedure was applied to a point cloud with about 120,000 points, and successfully extracted joints with joint information. The extraction procedure was implemented to minimize the number of input parameters and to embed plane information into the existing point cloud, for less redundancy and higher usability of the point cloud itself.
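
    A rough sketch of the normal-estimation and direction-clustering stages, using plain k-means as a stand-in for the fuzzy K-means of the paper; the neighbour count and cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import KMeans

def estimate_normals(points, k=20):
    """Per-point unit normals from a local plane fit (PCA of k neighbours)."""
    nn = NearestNeighbors(n_neighbors=k).fit(points)
    _, idx = nn.kneighbors(points)
    normals = np.zeros_like(points, dtype=float)
    for i, neighbours in enumerate(idx):
        local = points[neighbours] - points[neighbours].mean(axis=0)
        # The right-singular vector with the smallest singular value is the plane normal.
        _, _, vt = np.linalg.svd(local, full_matrices=False)
        n = vt[-1]
        normals[i] = n if n[2] >= 0 else -n   # flip to a consistent hemisphere
    return normals

def cluster_joint_sets(normals, n_sets=3):
    """Group normals into dominant joint-set orientations (k-means stand-in)."""
    labels = KMeans(n_clusters=n_sets, n_init=10).fit_predict(normals)
    return labels
```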

  18. Automatic Contour Extraction from 2D Image

    Directory of Open Access Journals (Sweden)

    Panagiotis GIOANNIS

    2011-03-01

    Aim: To develop a method for automatic contour extraction from a 2D image. Material and Method: The method is divided into two basic parts, where the user initially chooses the starting point and the threshold. Finally, the method is applied to computed tomography images of bone. Results: An interesting method is developed which can lead to successful boundary extraction from 2D images. Specifically, data extracted from computed tomography images can be used for 2D bone reconstruction. Conclusions: We believe that such an algorithm, or part of it, can be applied in several other applications for shape feature extraction in medical image analysis and in computer graphics generally.

  19. AN EFFICIENT METHOD FOR AUTOMATIC ROAD EXTRACTION BASED ON MULTIPLE FEATURES FROM LiDAR DATA

    OpenAIRE

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-01-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provides three-dimensional (3D) points with fewer occlusions and smaller shadows. The elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data also has disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. For these...

  20. Automatic Extraction of Planetary Image Features

    Science.gov (United States)

    Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.

    2009-01-01

    With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large amount of Lunar images will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data, which often present low contrast and uneven illumination characteristics. In this paper, we propose a new method for the extraction of Lunar features (that can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation and the generalized Hough Transform. This feature extraction has many applications, among which is image registration.

  1. Comparison of sample preparation methods for reliable plutonium and neptunium urinalysis using automatic extraction chromatography

    DEFF Research Database (Denmark)

    Qiao, Jixin; Xu, Yihong; Hou, Xiaolin;

    2014-01-01

    This paper describes improvement and comparison of analytical methods for simultaneous determination of trace-level plutonium and neptunium in urine samples by inductively coupled plasma mass spectrometry (ICP-MS). Four sample pre-concentration techniques, including calcium phosphate, iron......-concentration approaches afford dissimilar method performances and care should be taken for specific experimental parameters for improving chemical yields. The best analytical performances in terms of turnaround time (6 h) and chemical yields for plutonium (88.7 +/- 11.6%) and neptunium (94.2 +/- 2.0%) were achieved...... of plutonium and neptunium associated with organic compounds in real urine assays. In this work, different protocols for decomposing organic matter in urine were investigated, of which potassium persulfate (K2S2O8) treatment provided the highest chemical yield of neptunium in the iron hydroxide co...

  2. Automatic Feature Extraction from Planetary Images

    Science.gov (United States)

    Troglio, Giulia; Le Moigne, Jacqueline; Benediktsson, Jon A.; Moser, Gabriele; Serpico, Sebastiano B.

    2010-01-01

    With the launch of several planetary missions in the last decade, a large amount of planetary images has already been acquired and much more will be available for analysis in the coming years. The image data need to be analyzed, preferably by automatic processing techniques, because of the huge amount of data. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to planetary data, which often present low contrast and uneven illumination characteristics. Different methods have already been presented for crater extraction from planetary images, but the detection of other types of planetary features has not been addressed yet. Here, we propose a new unsupervised method for the extraction of different features from the surface of the analyzed planet, based on the combination of several image processing techniques, including a watershed segmentation and the generalized Hough Transform. The method has many applications, among which is image registration, and can be applied to arbitrary planetary images.

  3. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    OpenAIRE

    Haijian Chen; Dongmei Han; Yonghui Dai; Lina Zhao

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) are very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, which currently are often achieved by ontology techniques. In building ontology, automatic extraction technology is crucial. Because the general methods of text mining algorithm do not have obvious effect on online course, we designed automatic extracting course ...

  4. Automatic spectrophotometric method involving liquid-liquid extraction for the determination of europium in the presence of other lanthanides, yttrium and scandium

    International Nuclear Information System (INIS)

    A liquid-liquid extraction step has been incorporated into an automatic method for determination of europium in the presence of other lanthanides, yttrium and scandium. Europium(III) is selectively reduced on a Jones reductor and the europium(II) reacted with molybdophosphoric acid to produce a molybdenum blue which is extracted into isoamyl alcohol for spectrophotometric determination. Incorporation of the extraction step increases the sensitivity of the method by a factor of 5 enabling from 2 to 50 μg of europium per ml of aqueous sample solution to be determined but reduces the sampling rate from 20 to 10 samples per hour. The method has been applied to the determination of europium in lanthanide oxides and in the minerals bastnasite and monazite following a lanthanide group separation. (orig.)

  5. Automatic liquid-liquid extraction system

    International Nuclear Information System (INIS)

    This invention concerns an automatic liquid-liquid extraction system ensuring great reproducibility on a number of samples, stirring and decanting of the two liquid phases, then the quantitative removal of the entire liquid phase present in the extraction vessel at the end of the operation. This type of system has many applications, particularly in carrying out analytical processes comprising a stage for the extraction, by means of an appropriate solvent, of certain components of the sample under analysis

  6. An automatic system for multielement solvent extractions

    International Nuclear Information System (INIS)

    The automatic system described is suitable for multi-element separations by solvent extraction techniques with organic solvents heavier than water. The analysis is run automatically by a central control unit and includes steps such as pH regulation and reduction or oxidation. As an example, the separation of radioactive Hg2+, Cu2+, Mo6+, Cd2+, As5+, Sb5+, Fe3+, and Co3+ by means of diethyldithiocarbonate complexes is reported. (Auth.)

  7. Automatic extraction of left ventricular contours from MRI images

    International Nuclear Information System (INIS)

    In the MRI cardiac function analysis, left ventricular volume curves and diagnostic parameters are obtained by extracting the left ventricular cavities as regions of interest (ROI) from the MR cine images. The ROI extractions had to be done by manual operations, so the examination efficiency and data analysis reproducibility were poor in diagnoses on site. In this paper, we outline an automatic extraction method for the left ventricular contours from MR cine images to improve cardiac function diagnosis. With this method, the operator needs to manually indicate only 3 points on the 1st image, and can then get all the contours from the total sequence of images automatically. (author)

  8. Automatic target extraction in complicated background for camera calibration

    Science.gov (United States)

    Guo, Xichao; Wang, Cheng; Wen, Chenglu; Cheng, Ming

    2016-03-01

    In order to perform highly precise calibration of a camera against a complex background, a novel design of planar composite target and the corresponding automatic extraction algorithm are presented. Unlike other commonly used target designs, the proposed target contains the information of feature point coordinates and feature point serial numbers simultaneously. Then, based on the original target, templates are prepared by three geometric transformations and used as the input of template matching based on shape context. Finally, parity check and region growing methods are used to extract the target as the final result. The experimental results show that the proposed method for automatic extraction and recognition of the proposed target is effective, accurate and reliable.

  9. Automatic Keywords Extraction for Punjabi Language

    Directory of Open Access Journals (Sweden)

    Vishal Gupta

    2011-09-01

    Automatic keyword extraction is the task of identifying a small set of words, key phrases, keywords, or key segments from a document that can describe its meaning. Keywords are useful tools as they give the shortest summary of the document. This paper concentrates on automatic keyword extraction for Punjabi language text. It includes various phases such as removing stop words, identification of Punjabi nouns and noun stemming, calculation of Term Frequency and Inverse Sentence Frequency (TF-ISF), selection of Punjabi keywords as nouns with high TF-ISF scores, and a title/headline feature for Punjabi text. The extracted keywords are very helpful in automatic indexing, text summarization, information retrieval, classification, clustering, topic detection and tracking, and web searches.
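
    A small illustration of the TF-ISF scoring idea only; tokenization is simplified and the Punjabi-specific steps (stop-word removal, noun identification and stemming) described above are omitted, so this is an assumption-laden sketch rather than the paper's pipeline.

```python
import math
import re
from collections import Counter

def tf_isf_scores(sentences):
    """Term frequency x inverse sentence frequency for every word.

    sentences : list of strings (the document split into sentences).
    """
    tokenized = [re.findall(r"\w+", s.lower()) for s in sentences]
    tf = Counter(w for sent in tokenized for w in sent)
    n_sentences = len(tokenized)
    sent_freq = Counter(w for sent in tokenized for w in set(sent))
    scores = {}
    for word, f in tf.items():
        isf = math.log(n_sentences / sent_freq[word])
        scores[word] = f * isf
    return scores

def top_keywords(sentences, k=10):
    scores = tf_isf_scores(sentences)
    return sorted(scores, key=scores.get, reverse=True)[:k]
```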

  10. Automatic extraction of left ventricle in SPECT myocardial perfusion imaging

    International Nuclear Information System (INIS)

    An automatic method of extracting the left ventricle from SPECT myocardial perfusion data was introduced. This method was based on least squares analysis of the positions of all short-axis slice pixels from a half sphere-cylinder myocardial model, and used an iterative reconstruction technique to automatically cut off the non-left ventricular tissue from the perfusion images. Thereby, this technique provides the basis for further quantitative analysis

  11. Automatic fault extraction using a modified ant-colony algorithm

    International Nuclear Information System (INIS)

    The basis of automatic fault extraction is seismic attributes, such as the coherence cube, in which a fault is typically identified by minimum values. The biggest challenge in automatic fault extraction is noise, including that of the seismic data. However, a fault has better spatial continuity in a certain direction, which makes it quite different from noise. Considering this characteristic, a modified ant-colony algorithm is introduced into automatic fault identification and tracking, where the gradient direction and direction consistency are used as constraints. Numerical model test results show that this method is feasible and effective for automatic fault extraction and noise suppression. Application to field data further illustrates its validity and superiority. (paper)

  12. Automatic River Network Extraction from LIDAR Data

    Science.gov (United States)

    Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.

    2016-06-01

    National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) within the hydrography theme. The goal is to get an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage for the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: hydrological terrain model generation with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network); finally, production was launched. The key points of this work have been managing a big data environment (more than 160,000 LiDAR data files), the infrastructure to store (up to 40 Tb between results and intermediate files) and process the data, using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months; the software stability (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and, finally, human resources management have also been important. The result of this production has been an accurate automatic river network extraction for the whole country, with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.
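
    A toy sketch of the flow-accumulation idea that underlies the hydrological criterion, on a small DEM array; a production system would use proper hydrological conditioning and GIS tooling, so this only illustrates D8-style accumulation under simplified assumptions.

```python
import numpy as np

def d8_flow_accumulation(dem):
    """Very small D8 flow-accumulation sketch: each cell drains to its
    steepest lower neighbour; accumulation counts upstream contributions."""
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=float)          # every cell contributes itself
    # Visit cells from highest to lowest so upstream cells are processed first.
    order = np.dstack(np.unravel_index(np.argsort(dem, axis=None)[::-1], dem.shape))[0]
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                  (0, 1), (1, -1), (1, 0), (1, 1)]
    for r, c in order:
        best, target = 0.0, None
        for dr, dc in neighbours:
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                if drop > best:
                    best, target = drop, (rr, cc)
        if target is not None:
            acc[target] += acc[r, c]
    return acc   # cells above a threshold form the candidate river network
```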

  13. Automatic Road Centerline Extraction from Imagery Using Road GPS Data

    OpenAIRE

    Chuqing Cao; Ying Sun

    2014-01-01

    Road centerline extraction from imagery constitutes a key element in numerous geospatial applications, which has been addressed through a variety of approaches. However, most of the existing methods are not capable of dealing with challenges such as different road shapes, complex scenes, and variable resolutions. This paper presents a novel method for road centerline extraction from imagery in a fully automatic approach that addresses the aforementioned challenges by exploiting road GPS data....

  14. A Novel Characteristic Frequency Bands Extraction Method for Automatic Bearing Fault Diagnosis Based on Hilbert Huang Transform

    Directory of Open Access Journals (Sweden)

    Xiao Yu

    2015-11-01

    Because roller element bearings (REBs) failures cause unexpected machinery breakdowns, their fault diagnosis has attracted considerable research attention. Established fault feature extraction methods focus on statistical characteristics of the vibration signal, which is an approach that loses sight of the continuous waveform features. Considering this weakness, this article proposes a novel feature extraction method for frequency bands, named Window Marginal Spectrum Clustering (WMSC), to select salient features from the marginal spectrum of vibration signals by the Hilbert–Huang Transform (HHT). In WMSC, a sliding window is used to divide an entire HHT marginal spectrum (HMS) into window spectrums, following which the Rand Index (RI) criterion of the clustering method is used to evaluate each window. The windows returning higher RI values are selected to construct characteristic frequency bands (CFBs). Next, a hybrid REBs fault diagnosis is constructed, termed by its elements, HHT-WMSC-SVM (support vector machines). The effectiveness of HHT-WMSC-SVM is validated by running series of experiments on REBs defect datasets from the Bearing Data Center of Case Western Reserve University (CWRU). The said test results evidence three major advantages of the novel method. First, the fault classification accuracy of the HHT-WMSC-SVM model is higher than that of HHT-SVM and ST-SVM, which is a method that combines statistical characteristics with SVM. Second, with Gauss white noise added to the original REBs defect dataset, the HHT-WMSC-SVM model maintains high classification accuracy, while the classification accuracy of the ST-SVM and HHT-SVM models is significantly reduced. Third, fault classification accuracy by HHT-WMSC-SVM can exceed 95% under a Pmin range of 500-800 and an m range of 50-300 for the REBs defect dataset, adding Gauss white noise at a Signal Noise Ratio (SNR) of 5. Experimental results indicate that the proposed WMSC method yields a high REBs fault

  15. Automatic Extraction of JPF Options and Documentation

    Science.gov (United States)

    Luks, Wojciech; Tkachuk, Oksana; Buschnell, David

    2011-01-01

    Documenting existing Java PathFinder (JPF) projects or developing new extensions is a challenging task. JPF provides a platform for creating new extensions and relies on key-value properties for their configuration. Keeping track of all possible options and extension mechanisms in JPF can be difficult. This paper presents jpf-autodoc-options, a tool that automatically extracts JPF projects options and other documentation-related information, which can greatly help both JPF users and developers of JPF extensions.

  16. Automatic extraction of planetary image features

    Science.gov (United States)

    LeMoigne-Stewart, Jacqueline J. (Inventor); Troglio, Giulia (Inventor); Benediktsson, Jon A. (Inventor); Serpico, Sebastiano B. (Inventor); Moser, Gabriele (Inventor)

    2013-01-01

    A method for the extraction of Lunar data and/or planetary features is provided. The feature extraction method can include one or more image processing techniques, including, but not limited to, a watershed segmentation and/or the generalized Hough Transform. According to some embodiments, the feature extraction method can include extracting features, such as, small rocks. According to some embodiments, small rocks can be extracted by applying a watershed segmentation algorithm to the Canny gradient. According to some embodiments, applying a watershed segmentation algorithm to the Canny gradient can allow regions that appear as close contours in the gradient to be segmented.
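
    A small illustration of the watershed-on-gradient idea mentioned above, using scikit-image; the marker selection here is a simple distance-peak heuristic and the Sobel gradient stands in for the Canny gradient, so the whole block is an assumption-laden sketch rather than the patented procedure.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.filters import sobel, threshold_otsu
from skimage.segmentation import watershed

def segment_small_features(image, min_distance=5):
    """Segment small bright features by watershed on the gradient magnitude."""
    gradient = sobel(image.astype(float))
    # Foreground guess and distance transform to place one marker per blob.
    mask = image > threshold_otsu(image)
    distance = ndi.distance_transform_edt(mask)
    labeled_mask, _ = ndi.label(mask)
    peaks = peak_local_max(distance, min_distance=min_distance, labels=labeled_mask)
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Watershed floods the gradient from the markers; each basin is one feature.
    labels = watershed(gradient, markers, mask=mask)
    return labels
```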

  17. Automatic Road Centerline Extraction from Imagery Using Road GPS Data

    Directory of Open Access Journals (Sweden)

    Chuqing Cao

    2014-09-01

    Road centerline extraction from imagery constitutes a key element in numerous geospatial applications, which has been addressed through a variety of approaches. However, most of the existing methods are not capable of dealing with challenges such as different road shapes, complex scenes, and variable resolutions. This paper presents a novel method for road centerline extraction from imagery in a fully automatic approach that addresses the aforementioned challenges by exploiting road GPS data. The proposed method combines road color features with road GPS data to detect road centerline seed points. After global alignment of the road GPS data, a novel road centerline extraction algorithm is developed to extract each individual road centerline in local regions. Through road connection, the road centerline network is generated as the final output. Extensive experiments demonstrate that our proposed method can rapidly and accurately extract road centerlines from remotely sensed imagery.

  18. Rapid, potentially automatable, method to extract biomarkers for HPLC/ESI/MS/MS to detect and identify BW agents

    Energy Technology Data Exchange (ETDEWEB)

    White, D.C. [Univ. of Tennessee, Knoxville, TN (United States). Center for Environmental Biotechnology]|[Oak Ridge National Lab., TN (United States). Environmental Science Div.]; Burkhalter, R.S.; Smith, C. [Univ. of Tennessee, Knoxville, TN (United States). Center for Environmental Biotechnology]; Whitaker, K.W. [Microbial Insights, Inc., Rockford, TN (United States)]

    1997-12-31

    The program proposes to concentrate on the rapid recovery of signature biomarkers based on automated high-pressure, high-temperature solvent extraction (ASE) and/or supercritical fluid extraction (SFE) to produce lipids, nucleic acids and proteins sequentially concentrated and purified in minutes, with yields especially from microeukaryotes, Gram-positive bacteria and spores. Lipids are extracted in proportions greater than with classical one-phase, room-temperature solvent extraction, without major changes in lipid composition. High performance liquid chromatography (HPLC) with or without derivatization, electrospray ionization (ESI) and highly specific detection by mass spectrometry (MS), particularly with (MS)^n, provides detection and identification and, because the signature lipid biomarkers are both phenotypic and genotypic biomarkers, insights into the potential infectivity of BW agents. Feasibility has been demonstrated by the detection, identification, and determination of the infectious potential of Cryptosporidium parvum at the sensitivity of a single oocyst (which is unculturable in vitro), and by accurate identification and prediction of pathogenicity and drug resistance of Mycobacteria spp.

  19. Automatic Extraction of Protein Interaction in Literature

    OpenAIRE

    Liu, Peilei; Wang, Ting

    2014-01-01

    Protein-protein interaction extraction is a key precondition for the construction of protein knowledge networks, and it is very important for research in biomedicine. This paper extracts directional protein-protein interactions from biological text using an SVM-based method. Experiments were evaluated on the LLL05 corpus with good results. The results show that dependency features are important for protein-protein interaction extraction and features related to the interaction w...

  20. A new generic method for semi-automatic extraction of river and road networks in low- and mid-resolution satellite images

    Science.gov (United States)

    Grazzini, Jacopo; Dillard, Scott; Soille, Pierre

    2010-10-01

    This paper addresses the problem of semi-automatic extraction of road or hydrographic networks in satellite images. For that purpose, we propose an approach combining concepts arising from mathematical morphology and hydrology. The method exploits both geometrical and topological characteristics of rivers/roads and their tributaries in order to reconstruct the complete networks. It assumes that the images satisfy the following two general assumptions, which are the minimum conditions for a road/river network to be identifiable and are usually verified in low- to mid-resolution satellite images: (i) visual constraint: most pixels composing the network have similar spectral signature that is distinguishable from most of the surrounding areas; (ii) geometric constraint: a line is a region that is relatively long and narrow, compared with other objects in the image. While this approach fully exploits local (roads/rivers are modeled as elongated regions with a smooth spectral signature in the image and a maximum width) and global (they are structured like a tree) characteristics of the networks, further directional information about the image structures is incorporated. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Following, the geodesic propagation from a given network seed with this metric is combined with hydrological operators for overland flow simulation to extract the paths which contain most line evidence and identify them with the target network.

  1. A new generic method for the semi-automatic extraction of river and road networks in low and mid-resolution satellite images

    Energy Technology Data Exchange (ETDEWEB)

    Grazzini, Jacopo [Los Alamos National Laboratory]; Dillard, Scott [PNNL]; Soille, Pierre [EC JRC]

    2010-10-21

    This paper addresses the problem of semi-automatic extraction of road or hydrographic networks in satellite images. For that purpose, we propose an approach combining concepts arising from mathematical morphology and hydrology. The method exploits both geometrical and topological characteristics of rivers/roads and their tributaries in order to reconstruct the complete networks. It assumes that the images satisfy the following two general assumptions, which are the minimum conditions for a road/river network to be identifiable and are usually verified in low- to mid-resolution satellite images: (i) visual constraint: most pixels composing the network have similar spectral signature that is distinguishable from most of the surrounding areas; (ii) geometric constraint: a line is a region that is relatively long and narrow, compared with other objects in the image. While this approach fully exploits local (roads/rivers are modeled as elongated regions with a smooth spectral signature in the image and a maximum width) and global (they are structured like a tree) characteristics of the networks, further directional information about the image structures is incorporated. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Following, the geodesic propagation from a given network seed with this metric is combined with hydrological operators for overland flow simulation to extract the paths which contain most line evidence and identify them with the target network.

  2. Automatic Railway Power Line Extraction Using Mobile Laser Scanning Data

    Science.gov (United States)

    Zhang, Shanxin; Wang, Cheng; Yang, Zhuang; Chen, Yiping; Li, Jonathan

    2016-06-01

    Research on power line extraction technology using mobile laser point clouds has important practical significance for railway power line patrol work. In this paper, we present a new method for automatically extracting railway power lines from MLS (Mobile Laser Scanning) data. Firstly, according to the spatial structure characteristics of the power lines and the trajectory, the significant data are segmented piecewise. Then, a self-adaptive spatial region growing method is used to extract power lines parallel to the rails. Finally, PCA (Principal Components Analysis) combined with information entropy theory is used to judge whether a section of the power line is a junction or not and which type of junction it belongs to. The least squares fitting algorithm is introduced to model the power line. An evaluation of the proposed method on a complicated railway point cloud acquired by a RIEGL VMX450 MLS system shows that the proposed method is promising.

  3. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs.

    Science.gov (United States)

    Chen, Haijian; Han, Dongmei; Dai, Yonghui; Zhao, Lina

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) are very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building ontologies, automatic extraction technology is crucial. Because general text mining algorithms do not have an obvious effect on online courses, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity and to design the weights that optimize the TF-IDF algorithm output values, and the terms with higher scores are selected as knowledge points. Course documents of "C programming language" are selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate. PMID:26448738

  4. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    Directory of Open Access Journals (Sweden)

    Haijian Chen

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) are very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building ontologies, automatic extraction technology is crucial. Because general text mining algorithms do not have an obvious effect on online courses, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity and to design the weights that optimize the TF-IDF algorithm output values, and the terms with higher scores are selected as knowledge points. Course documents of “C programming language” are selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate.
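
    A minimal sketch of the TF-IDF scoring stage using scikit-learn; Chinese word segmentation and POS tagging are omitted, and the similarity-based re-weighting rule shown is an illustrative assumption, not the AECKP weighting.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def knowledge_point_candidates(documents, top_n=10):
    """Rank terms of the last document by TF-IDF, boosted by its similarity
    to the rest of the course corpus (illustrative weighting)."""
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform(documents)          # documents x terms
    terms = vectorizer.get_feature_names_out()
    target = tfidf[-1]                                    # document of interest
    # Mean cosine similarity of the target document to the other documents.
    weight = cosine_similarity(target, tfidf[:-1]).mean() if len(documents) > 1 else 1.0
    scores = target.toarray().ravel() * (1.0 + weight)
    ranked = scores.argsort()[::-1][:top_n]
    return [(terms[i], float(scores[i])) for i in ranked if scores[i] > 0]
```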

  5. Automatically extracting sheet-metal features from solid model

    Institute of Scientific and Technical Information of China (English)

    刘志坚; 李建军; 王义林; 李材元; 肖祥芷

    2004-01-01

    With the development of modern industry, sheet-metal parts in mass production have been widely applied in the mechanical, communication, electronics, and light industries in recent decades; but advances in sheet-metal part design and manufacturing remain slow compared with the increasing importance of sheet-metal parts in modern industry. This paper proposes a method for automatically extracting features from an arbitrary solid model of a sheet-metal part, whose characteristics are used for classification and graph-based representation of the sheet-metal features, in order to extract the features embodied in the part. The feature extraction process can be divided into validity checking of the model geometry, feature matching, and feature relationship analysis. Since the extracted features include abundant geometric and engineering information, they will be effective for downstream applications such as feature rebuilding and stamping process planning.

  6. Automatic Knowledge Extraction and Knowledge Structuring for a National Term Bank

    DEFF Research Database (Denmark)

    Lassen, Tine; Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2011-01-01

    This paper gives an introduction to the plans and ongoing work in a project, the aim of which is to develop methods for automatic knowledge extraction and automatic construction and updating of ontologies. The project also aims at developing methods for automatic merging of terminological data from various existing sources, as well as methods for target group oriented knowledge dissemination. In this paper, we mainly focus on the plans for automatic knowledge extraction and knowledge structuring that will result in ontologies for a national term bank.

  7. Automatic Eye Extraction in Human Face Images

    Institute of Scientific and Technical Information of China (English)

    LIU Rujie; YUAN Baozong

    2001-01-01

    This paper presents a fuzzy-based method to locate the position and the size of irises in a head-shoulder image with plain background. This method is composed of two stages: the face region estimation stage and the eye feature extraction stage. In the first stage, a region growing method is adopted to estimate the face region. In the second stage, the coarse eye area is firstly extracted based on the location of the nasion, and the deformable template algorithm is then applied to the eye area to determine the position and the size of the irises. Experimental results show the efficiency and robustness of this method.

  8. An Automatic Collocation Extraction from Arabic Corpus

    Directory of Open Access Journals (Sweden)

    Abdulgabbar M. Saif

    2011-01-01

    Problem statement: The identification of collocations is a very important part of natural language processing applications that require some degree of semantic interpretation, such as machine translation, information retrieval and text summarization. Because of the complexities of Arabic, collocations undergo variations such as morphological, graphical and syntactic variation, which constitute the difficulty of identifying the collocation. Approach: We used a hybrid method for extracting collocations from an Arabic corpus that is based on linguistic information and association measures. Results: This method extracted the bi-gram candidates of Arabic collocations from the corpus and evaluated the association measures by using the n-best evaluation method. We report the precision values for each association measure in each n-best list. Conclusion: The experimental results showed that the log-likelihood ratio is the best association measure, achieving the highest precision.

  9. Automatic object extraction over multiscale edge field for multimedia retrieval.

    Science.gov (United States)

    Kiranyaz, Serkan; Ferreira, Miguel; Gabbouj, Moncef

    2006-12-01

    In this work, we focus on automatic extraction of object boundaries from Canny edge field for the purpose of content-based indexing and retrieval over image and video databases. A multiscale approach is adopted where each successive scale provides further simplification of the image by removing more details, such as texture and noise, while keeping major edges. At each stage of the simplification, edges are extracted from the image and gathered in a scale-map, over which a perceptual subsegment analysis is performed in order to extract true object boundaries. The analysis is mainly motivated by Gestalt laws and our experimental results suggest a promising performance for main objects extraction, even for images with crowded textural edges and objects with color, texture, and illumination variations. Finally, integrating the whole process as feature extraction module into MUVIS framework allows us to test the mutual performance of the proposed object extraction method and subsequent shape description in the context of multimedia indexing and retrieval. A promising retrieval performance is achieved, and especially in some particular examples, the experimental results show that the proposed method presents such a retrieval performance that cannot be achieved by using other features such as color or texture. PMID:17153949

  10. Automatic Waterline Extraction from Smartphone Images

    Science.gov (United States)

    Kröhnert, M.

    2016-06-01

    Considering worldwide increasing and devastating flood events, the issue of flood defence and prediction becomes more and more important. Conventional methods for the observation of water levels, for instance gauging stations, provide reliable information. However, they are rather expensive to purchase, install and maintain, and hence are mostly limited to monitoring large streams. Thus, small rivers with noticeably increasing flood hazard risks are often neglected. State-of-the-art smartphones with powerful camera systems may act as affordable, mobile measuring instruments. Reliable and effective image processing methods may allow the use of smartphone-taken images for mobile shoreline detection and thus for water level monitoring. The paper focuses on automatic methods for the determination of waterlines by spatio-temporal texture measures. Besides the considerable challenge of dealing with a wide range of smartphone cameras providing different hardware components, resolution, image quality and programming interfaces, there are several limits in mobile device processing power. For test purposes, an urban river in Dresden, Saxony was observed. The results show the potential of deriving the waterline with subpixel accuracy by a column-by-column four-parameter logistic regression and polynomial spline modelling. After a transformation into object space via suitable landmarks (which is not addressed in this paper), this corresponds to an accuracy in the order of a few centimetres when processing mobile device images taken from small rivers at typical distances.
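
    A small sketch of the column-by-column four-parameter logistic (4PL) fit: for each image column, the vertical intensity (or texture-measure) profile is fitted with a sigmoid, and the inflection row is taken as the sub-pixel waterline estimate. This is purely illustrative; the real pipeline works on spatio-temporal texture measures and adds polynomial spline smoothing.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(y, bottom, top, y0, slope):
    """Four-parameter logistic curve over the row coordinate y."""
    return bottom + (top - bottom) / (1.0 + np.exp(-slope * (y - y0)))

def waterline_rows(profile_image):
    """Estimate a sub-pixel waterline row for every column of a 2-D array."""
    rows = np.arange(profile_image.shape[0], dtype=float)
    waterline = np.full(profile_image.shape[1], np.nan)
    for col in range(profile_image.shape[1]):
        column = profile_image[:, col].astype(float)
        p0 = [column.min(), column.max(), rows.mean(), 0.1]   # rough initial guess
        try:
            popt, _ = curve_fit(logistic4, rows, column, p0=p0, maxfev=2000)
            waterline[col] = popt[2]      # inflection point = transition row
        except RuntimeError:
            pass                          # column without a clear transition
    return waterline   # optionally smooth across columns afterwards
```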

  11. Automatic Extraction of Metadata from Scientific Publications for CRIS Systems

    Science.gov (United States)

    Kovacevic, Aleksandar; Ivanovic, Dragan; Milosavljevic, Branko; Konjovic, Zora; Surla, Dusan

    2011-01-01

    Purpose: The aim of this paper is to develop a system for automatic extraction of metadata from scientific papers in PDF format for the information system for monitoring the scientific research activity of the University of Novi Sad (CRIS UNS). Design/methodology/approach: The system is based on machine learning and performs automatic extraction…

  12. Automatic ion extraction from high-frequency ion source

    International Nuclear Information System (INIS)

    A description and test results of a device for automatic extraction of ions from a high-frequency ion source are presented. The automatic regime is realized by introducing feedback with respect to the current of the source cathode and requires low sinusoidal modulation of the extracting voltage. By varying the power of the discharge, the beam current was controlled in the 90-1470 μA range with automatic preservation of the optimal conditions in the extraction system. The device was used on a 210-kV neutron generator

  13. Morphological Feature Extraction for Automatic Registration of Multispectral Images

    Science.gov (United States)

    Plaza, Antonio; LeMoigne, Jacqueline; Netanyahu, Nathan S.

    2007-01-01

    The task of image registration can be divided into two major components, i.e., the extraction of control points or features from images, and the search among the extracted features for the matching pairs that represent the same feature in the images to be matched. Manual extraction of control features can be subjective and extremely time consuming, and often results in few usable points. On the other hand, automated feature extraction allows using invariant target features such as edges, corners, and line intersections as relevant landmarks for registration purposes. In this paper, we present an extension of a recently developed morphological approach for automatic extraction of landmark chips and corresponding windows in a fully unsupervised manner for the registration of multispectral images. Once a set of chip-window pairs is obtained, a (hierarchical) robust feature matching procedure, based on a multiresolution overcomplete wavelet decomposition scheme, is used for registration purposes. The proposed method is validated on a pair of remotely sensed scenes acquired by the Advanced Land Imager (ALI) multispectral instrument and the Hyperion hyperspectral instrument aboard NASA's Earth Observing-1 satellite.

  14. Condition Monitoring Method for Automatic Transmission Clutches

    Directory of Open Access Journals (Sweden)

    Agusmian Partogi Ompusunggu

    2012-01-01

    Full Text Available This paper presents the development of a condition monitoring method for wet friction clutches which might be useful for automatic transmission applications. The method is developed based on quantifying the change of the relative rotational velocity signal measured between the input and output shaft of a clutch. Prior to quantifying the change, the raw velocity signal is preprocessed to capture the relative velocity signal of interest. Three dimensionless parameters, namely the normalized engagement duration, the normalized Euclidean distance and the spectral angle mapper distance, that can be easily extracted from the signal of interest are proposed in this paper to quantify the change. In order to experimentally evaluate and verify the potential of the proposed method, clutches' life data obtained by conducting accelerated life tests on some commercial clutches with different lining friction materials using a fully instrumented SAE#2 test setup are utilized for this purpose. The aforementioned parameters extracted from the experimental data exhibit clearly progressive changes during the clutch service life and are well correlated with the evolution of the mean coefficient of friction (COF), which can be seen as a reference feature. Hence, the quantities proposed in this paper can be seen as principal features that may enable us to monitor and assess the condition of wet friction clutches.
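
    Two of the three dimensionless features can be sketched directly; the snippet below computes a normalized Euclidean distance and a spectral-angle-mapper distance between a reference relative-velocity signal and a later test signal. The signals are synthetic placeholders, and the paper's preprocessing of the raw velocity signal is not reproduced.

```python
import numpy as np

def normalized_euclidean(x, ref):
    """Euclidean distance between two equal-length signals, normalized by the
    reference signal's norm so the feature is dimensionless."""
    return np.linalg.norm(x - ref) / np.linalg.norm(ref)

def spectral_angle_mapper(x, ref):
    """Angle (radians) between the two signals treated as vectors; zero means
    identical shape regardless of amplitude."""
    cos_theta = np.dot(x, ref) / (np.linalg.norm(x) * np.linalg.norm(ref))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Placeholder signals: a "fresh clutch" reference and a slightly changed later test.
reference = np.exp(-np.linspace(0, 5, 500))
signal = np.exp(-np.linspace(0, 5, 500) * 0.8)
print(normalized_euclidean(signal, reference), spectral_angle_mapper(signal, reference))
```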

  15. Methods of automatic scanning of SSNTDs

    International Nuclear Information System (INIS)

    The methods of automatic scanning of solid state nuclear track detectors are reviewed. The paper deals with transmission of light, charged particles, chemicals and electrical current through conventionally etched detectors. Special attention is given to the jumping spark technique and breakdown counters. Eventually optical automatic devices are examined. (orig.)

  16. Automatically extracting functionally equivalent proteins from SwissProt

    Directory of Open Access Journals (Sweden)

    Martin Andrew CR

    2008-10-01

    Full Text Available Abstract Background There is a frequent need to obtain sets of functionally equivalent homologous proteins (FEPs) from different species. While it is usually the case that orthology implies functional equivalence, this is not always true; therefore datasets of orthologous proteins are not appropriate. The information relevant to extracting FEPs is contained in databanks such as UniProtKB/Swiss-Prot, and a manual analysis of these data allows FEPs to be extracted on a one-off basis. However there has been no resource allowing the easy, automatic extraction of groups of FEPs – for example, all instances of protein C. We have developed FOSTA, an automatically generated database of FEPs annotated as having the same function in UniProtKB/Swiss-Prot, which can be used for large-scale analysis. The method builds a candidate list of homologues and filters out functionally diverged proteins on the basis of functional annotations using a simple text mining approach. Results Large-scale evaluation of our FEP extraction method is difficult as there is no gold-standard dataset against which the method can be benchmarked. However, a manual analysis of five protein families confirmed a high level of performance. A more extensive comparison with two manually verified functional equivalence datasets also demonstrated very good performance. Conclusion In summary, FOSTA provides an automated analysis of annotations in UniProtKB/Swiss-Prot to enable groups of proteins already annotated as functionally equivalent to be extracted. Our results demonstrate that the vast majority of UniProtKB/Swiss-Prot functional annotations are of high quality, and that FOSTA can interpret annotations successfully. Where FOSTA is not successful, we are able to highlight inconsistencies in UniProtKB/Swiss-Prot annotation. Most of these would have presented equal difficulties for manual interpretation of annotations. We discuss limitations and possible future extensions to FOSTA, and

  17. Automatic Statistics Extraction for Amateur Soccer Videos

    NARCIS (Netherlands)

    Gemert, J.C. van; Schavemaker, J.G.M.; Bonenkamp, C.W.B.

    2014-01-01

    Amateur soccer statistics have interesting applications such as providing insights to improve team performance, individual coaching, monitoring team progress and personal or team entertainment. Professional soccer statistics are extracted with labor intensive expensive manual effort which is not rea

  18. Layer-Wise Floorplan Extraction for Automatic Urban Building Reconstruction.

    Science.gov (United States)

    Sui, Wei; Wang, Lingfeng; Fan, Bin; Xiao, Hongfei; Wu, Huaiyu; Pan, Chunhong

    2016-03-01

    Urban building reconstruction is an important step for urban digitization and realistic visualization. In this paper, we propose a novel automatic method to recover urban building geometry from 3D point clouds. The proposed method is suitable for buildings composed of planar polygons and aligned with the gravity direction, which are quite common in the city. Our key observation is that the building shapes are usually piecewise constant along the gravity direction and determined by several dominant shapes. Based on this observation, we formulate building reconstruction as an energy minimization problem under the Markov Random Field (MRF) framework. Specifically, point clouds are first cut into a sequence of slices along the gravity direction. Then, floorplans are reconstructed by extracting boundaries of these slices, among which dominant floorplans are extracted and propagated to other floors via MRF. To guarantee correct propagation, a new distance measurement for floorplans is designed, which first encodes floorplans into strings and then calculates distances between their corresponding strings. Additionally, an image based editing method is also proposed to recover detailed window structures. Experimental results on both synthetic and real data sets have validated the effectiveness of our method. PMID:26661472
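
    The floorplan comparison relies on encoding each boundary as a string and measuring a distance between strings. A minimal, hedged sketch, assuming a simple chain-code-style encoding and a plain Levenshtein distance (the paper's encoding and distance measure may differ):

```python
def edit_distance(s1, s2):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, 1):
        cur = [i]
        for j, c2 in enumerate(s2, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (c1 != c2)))   # substitution
        prev = cur
    return prev[-1]

# Hypothetical chain-code encodings of two floorplan boundaries
# ('F' = go straight, 'R' = turn right, 'L' = turn left).
plan_a = "FFRFFRFFRFFR"        # a simple rectangle
plan_b = "FFRFFRFFLFFRFFR"     # a rectangle with a notch
print(edit_distance(plan_a, plan_b))
```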

  19. A Risk Assessment System with Automatic Extraction of Event Types

    Science.gov (United States)

    Capet, Philippe; Delavallade, Thomas; Nakamura, Takuya; Sandor, Agnes; Tarsitano, Cedric; Voyatzi, Stavroula

    In this article we describe the joint effort of experts in linguistics, information extraction and risk assessment to integrate EventSpotter, an automatic event extraction engine, into ADAC, an automated early warning system. By detecting as early as possible weak signals of emerging risks ADAC provides a dynamic synthetic picture of situations involving risk. The ADAC system calculates risk on the basis of fuzzy logic rules operated on a template graph whose leaves are event types. EventSpotter is based on a general purpose natural language dependency parser, XIP, enhanced with domain-specific lexical resources (Lexicon-Grammar). Its role is to automatically feed the leaves with input data.

  20. Automatic selection of the left ventricular sampling region by nuclear angiocardiography and extraction of the ejection fraction as compared with the three-region-method by hand

    International Nuclear Information System (INIS)

    A program for automatic determination of the left ventricle contour and for automatic calculation of the ejection fraction is presented. The results are comparable to those of the tedious manual evaluation procedure. Preconditions are suitable counting-rate statistics and a correct projection of the left ventricle without superposition of the left atrium or the right ventricle. (WU)

  1. Automatic Extraction of Mangrove Vegetation from Optical Satellite Data

    Science.gov (United States)

    Agrawal, Mayank; Sushma Reddy, Devireddy; Prasad, Ram Chandra

    2016-06-01

    Mangroves, the intertidal halophytic vegetation, form one of the most significant and diverse ecosystems in the world. They protect the coast from sea erosion and other natural disasters like tsunamis and cyclones. In view of their increasing destruction and degradation in the current scenario, mapping of this vegetation is a priority. Globally, researchers have mapped mangrove vegetation using visual interpretation, digital classification approaches, or a combination of both (hybrid) approaches, using varied spatial and spectral data sets. In the recent past, techniques have been developed to extract this coastal vegetation automatically using varied algorithms. In the current study we delineated mangrove vegetation using LISS III and Landsat 8 data sets for selected locations of the Andaman and Nicobar islands. Towards this we used a segmentation method, which characterizes the mangrove vegetation based on tone and texture, and a pixel-based classification method, where the mangroves are identified based on their pixel values. The results obtained from both approaches were validated using maps available for the selected region, and good delineation accuracy was obtained. The main focus of this paper is the simplicity of the methods and the availability of the data on which these methods are applied, as these data (Landsat) are readily available for many regions. Our methods are very flexible and can be applied to any region.

  2. Super pixel density based clustering automatic image classification method

    Science.gov (United States)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and how to achieve rapid automated image classification has been a focus of research. In this paper, an automatic image classification and outlier identification method is proposed, based on clustering by the density of superpixel cluster centers. The image pixel location coordinates and gray values are used to compute density and distance, achieving automatic image classification and outlier extraction. Because a large number of pixels dramatically increases the computational complexity, the image is preprocessed into superpixels and divided into a small number of superpixel sub-blocks before the density and distance calculations; a normalized density-and-distance discrimination rule is then designed to select cluster centers automatically, whereby the image is classified automatically and outliers are identified. Extensive experiments show that our method requires no human intervention, categorizes images with higher computing speed than the density clustering algorithm, and performs effective automated classification and outlier extraction.
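
    The density-and-distance computation underlying this kind of clustering can be sketched as follows: for every superpixel feature vector, count the neighbours within a cutoff (the density) and find the distance to the nearest sample of higher density; samples scoring high on both are taken as cluster centres. The features, cutoff and data below are illustrative assumptions, not the paper's exact normalization or superpixel pipeline.

```python
import numpy as np

def density_and_distance(features, dc):
    """For each sample compute local density rho (number of neighbours within
    cutoff dc) and delta, the distance to the nearest sample of higher density;
    samples with large rho and large delta are candidate cluster centres."""
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    rho = (d < dc).sum(axis=1) - 1                     # exclude the sample itself
    delta = np.empty(len(features))
    for i in range(len(features)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = d[i].max() if higher.size == 0 else d[i, higher].min()
    return rho, delta

# Hypothetical per-superpixel features: [row, col, mean gray value], rescaled to [0, 1].
feats = np.random.rand(200, 3)
rho, delta = density_and_distance(feats, dc=0.15)
print(rho.max(), delta.max())
```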

  3. The Method of Coastline Automatic Extraction in Xiamen Island

    Institute of Scientific and Technical Information of China (English)

    齐宇; 任航科

    2012-01-01

    Remote sensing methods for monitoring coastline change, extracting coastlines and performing landscape analysis have the advantages of wide coverage, high precision and dynamic monitoring. Because coastal zone types differ and different extraction methods are chosen, coastline extraction can yield different results. Taking the coastline of Xiamen Island as an example, this paper uses TM remote sensing imagery and two coastline extraction methods to obtain two computer auto-extracted coastline positions, for two coast types: sandy beach and artificial coast. The coast types were confirmed by field survey, and accuracy was analyzed by overlaying high spatial resolution SPOT imagery. Finally, the paper discusses how to select a coastline auto-extraction method according to the coastal zone type.

  4. Automatic Segmentation of Raw LIDAR Data for Extraction of Building Roofs

    OpenAIRE

    Mohammad Awrangjeb; Fraser, Clive S.

    2014-01-01

    Automatic extraction of building roofs from remote sensing data is important for many applications, including 3D city modeling. This paper proposes a new method for automatic segmentation of raw LIDAR (light detection and ranging) data. Using the ground height from a DEM (digital elevation model), the raw LIDAR points are separated into two groups. The first group contains the ground points that form a “building mask”. The second group contains non-ground points that are clustered using the b...

  5. Fully automatic extraction of salient objects from videos in near real-time

    CERN Document Server

    Kazuma, Akamine; Kimura, Akisato; Takagi, Shigeru

    2010-01-01

    Automatic video segmentation plays an important role in a wide range of computer vision and image processing applications. Recently, various methods have been proposed for this purpose. The problem is that most of these methods are far from real-time processing even for low-resolution videos, due to their complex procedures. To this end, we propose a new and quite fast method for automatic video segmentation with the help of 1) efficient optimization of Markov random fields, in time polynomial in the number of pixels, by introducing graph cuts, 2) automatic, computationally efficient but stable derivation of segmentation priors using visual saliency and a sequential update mechanism, and 3) an implementation strategy based on the principle of stream processing with graphics processor units (GPUs). Test results indicate that our method extracts appropriate regions from videos as precisely as, and much faster than, previous semi-automatic methods, even though no supervision is incorporated.

  6. Feature extraction of musical content for automatic music transcription

    OpenAIRE

    Zhou, Ruohua; Mattavelli, Marco

    2007-01-01

    The purpose of this thesis is to develop new methods for automatic transcription of melody and harmonic parts of real-life music signal. Music transcription is here defined as an act of analyzing a piece of music signal and writing down the parameter representations, which indicate the pitch, onset time and duration of each pitch, loudness and instrument applied in the analyzed music signal. The proposed algorithms and methods aim at resolving two key sub-problems in automatic music transcrip...

  7. Feature extraction of musical content for automatic music transcription

    OpenAIRE

    Zhou, Ruohua

    2006-01-01

    The purpose of this thesis is to develop new methods for automatic transcription of melody and harmonic parts of real-life music signal. Music transcription is here defined as an act of analyzing a piece of music signal and writing down the parameter representations, which indicate the pitch, onset time and duration of each pitch, loudness and instrument applied in the analyzed music signal. The proposed algorithms and methods aim at resolving two key sub-problems in automatic music transcrip...

  8. Document Exploration and Automatic Knowledge Extraction for Unstructured Biomedical Text

    Science.gov (United States)

    Chu, S.; Totaro, G.; Doshi, N.; Thapar, S.; Mattmann, C. A.; Ramirez, P.

    2015-12-01

    We describe our work on building a web-browser based document reader with a built-in exploration tool and automatic concept extraction of medical entities for biomedical text. Vast amounts of biomedical information are offered in unstructured text form through scientific publications and R&D reports. Utilizing text mining can help us to mine information and extract relevant knowledge from a plethora of biomedical text. The ability to employ such technologies to aid researchers in coping with information overload is greatly desirable. In recent years, there has been an increased interest in automatic biomedical concept extraction [1, 2] and intelligent PDF reader tools with the ability to search on content and find related articles [3]. Such reader tools are typically desktop applications and are limited to specific platforms. Our goal is to provide researchers with a simple tool to aid them in finding, reading, and exploring documents. Thus, we propose a web-based document explorer, which we called Shangri-Docs, which combines a document reader with automatic concept extraction and highlighting of relevant terms. Shangri-Docs also provides the ability to evaluate a wide variety of document formats (e.g. PDF, Word, PPT, text, etc.) and to exploit the linked nature of the Web and personal content by performing searches on content from public sites (e.g. Wikipedia, PubMed) and private cataloged databases simultaneously. Shangri-Docs utilizes Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) [4] and Unified Medical Language System (UMLS) to automatically identify and highlight terms and concepts, such as specific symptoms, diseases, drugs, and anatomical sites, mentioned in the text. cTAKES was originally designed specifically to extract information from clinical medical records. Our investigation leads us to extend the automatic knowledge extraction process of cTAKES for the biomedical research domain by improving the ontology guided information extraction

  9. Automatic Building Extraction From LIDAR Data Covering Complex Urban Scenes

    Science.gov (United States)

    Awrangjeb, M.; Lu, G.; Fraser, C.

    2014-08-01

    This paper presents a new method for segmentation of LIDAR point cloud data for automatic building extraction. Using the ground height from a DEM (Digital Elevation Model), the non-ground points (mainly buildings and trees) are separated from the ground points. Points on walls are removed from the set of non-ground points by applying the following two approaches: If a plane fitted at a point and its neighbourhood is perpendicular to a fictitious horizontal plane, then this point is designated as a wall point. When LIDAR points are projected on a dense grid, points within a narrow area close to an imaginary vertical line on the wall should fall into the same grid cell. If three or more points fall into the same cell, then the intermediate points are removed as wall points. The remaining non-ground points are then divided into clusters based on height and local neighbourhood. One or more clusters are initialised based on the maximum height of the points and then each cluster is extended by applying height and neighbourhood constraints. Planar roof segments are extracted from each cluster of points following a region-growing technique. Planes are initialised using coplanar points as seed points and then grown using plane compatibility tests. If the estimated height of a point is similar to its LIDAR generated height, or if its normal distance to a plane is within a predefined limit, then the point is added to the plane. Once all the planar segments are extracted, the common points between the neighbouring planes are assigned to the appropriate planes based on the plane intersection line, locality and the angle between the normal at a common point and the corresponding plane. A rule-based procedure is applied to remove tree planes which are small in size and randomly oriented. The neighbouring planes are then merged to obtain individual building boundaries, which are regularised based on long line segments. Experimental results on ISPRS benchmark data sets show that the

  10. Painful Bile Extraction Methods

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    It was only in the past 20 years that countries in Asia began to search for an alternative to protect moon bears from being killed for their bile and other body parts. In the early 1980s, a new method of extracting bile from living bears was developed in North Korea. In 1983, Chinese scientists imported this technique from North Korea. According to the Animals Asia Foundation, the most original method of bile extraction is to embed a latex catheter, a narrow rubber

  11. Automatic extraction of soft tissues from 3D MRI images of the head

    International Nuclear Information System (INIS)

    This paper presents an automatic extraction method of soft tissues from 3D MRI images of the head. A 3D region growing algorithm is used to extract soft tissues such as the cerebrum, cerebellum and brain stem. Four information sources are used to control the 3D region growing. A model of each soft tissue has been constructed in advance and provides a 3D region growing space. The head skin area, which is automatically extracted from the input image, provides an unsearchable area. Zero-crossing points are detected by using a Laplacian operator and by examining sign changes between neighboring voxels. They are used as a control condition in the 3D region growing process. Gray levels of voxels are also used directly as a control condition to extract each tissue region. Experimental results applied to 19 samples show that the method is successful. (author)
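
    A minimal, hedged sketch of seeded 3D region growing with a gray-level window and a forbidden (skin) mask; the seed, window and mask are placeholders, and the model-based growing space and zero-crossing control conditions from the abstract are not included.

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, lo, hi, forbidden=None):
    """Grow a 6-connected region from `seed`, accepting voxels whose gray level
    lies in [lo, hi] and that are not flagged in the forbidden mask."""
    grown = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    grown[seed] = True
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if not (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]):
                continue
            if grown[nz, ny, nx] or not (lo <= volume[nz, ny, nx] <= hi):
                continue
            if forbidden is not None and forbidden[nz, ny, nx]:
                continue
            grown[nz, ny, nx] = True
            queue.append((nz, ny, nx))
    return grown

# vol = np.load("mri_head.npy")                               # hypothetical 3D volume
# mask = region_grow_3d(vol, seed=(60, 128, 128), lo=80, hi=160)
```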

  12. Condition Monitoring Method for Automatic Transmission Clutches

    OpenAIRE

    Agusmian Partogi Ompusunggu; Jean-Michel Papy; Steve Vandenplas; Paul Sas; Hendrik Van Brussel

    2012-01-01

    This paper presents the development of a condition monitoring method for wet friction clutches which might be useful for automatic transmission applications. The method is developed based on quantifying the change of the relative rotational velocity signal measured between the input and output shaft of a clutch. Prior to quantifying the change, the raw velocity signal is preprocessed to capture the relative velocity signal of interest. Three dimensionless parameters, namely the normalized eng...

  13. Automatic extraction of gene and protein synonyms from MEDLINE and journal articles.

    OpenAIRE

    Hong YU; Hatzivassiloglou, Vasileios; Friedman, Carol; Rzhetsky, Andrey; Wilbur, W. John

    2002-01-01

    Genes and proteins are often associated with multiple names, and more names are added as new functional or structural information is discovered. Because authors often alternate between these synonyms, information retrieval and extraction benefits from identifying these synonymous names. We have developed a method to extract automatically synonymous gene and protein names from MEDLINE and journal articles. We first identified patterns authors use to list synonymous gene and protein names. We d...

  14. Sensitive, automatic method for the determination of diazepam and its five metabolites in human oral fluid by online solid-phase extraction and liquid chromatography with tandem mass spectrometry.

    Science.gov (United States)

    Jiang, Fengli; Rao, Yulan; Wang, Rong; Johansen, Sys Stybe; Ni, Chunfang; Liang, Chen; Zheng, Shuiqing; Ye, Haiying; Zhang, Yurong

    2016-05-01

    A novel and simple online solid-phase extraction liquid chromatography-tandem mass spectrometry method was developed and validated for the simultaneous determination of diazepam and its five metabolites including nordazepam, oxazepam, temazepam, oxazepam glucuronide, and temazepam glucuronide in human oral fluid. Human oral fluid was obtained using the Salivette(®) collection device, and 100 μL of oral fluid samples were loaded onto HySphere Resin GP cartridge for extraction. Analytes were separated on a Waters Xterra C18 column and quantified by liquid chromatography with tandem mass spectrometry using the multiple reaction monitoring mode. The whole procedure was automatic, and the total run time was 21 min. The limit of detection was in the range of 0.05-0.1 ng/mL for all analytes. The linearity ranged from 0.25 to 250 ng/mL for oxazepam, and 0.1 to 100 ng/mL for the other five analytes. Intraday and interday precision for all analytes was 0.6-12.8 and 1.0-9.2%, respectively. Accuracy ranged from 95.6 to 114.7%. Method recoveries were in the range of 65.1-80.8%. This method was fully automated, simple, and sensitive. Authentic oral fluid samples collected from two volunteers after consuming a single oral dose of 10 mg diazepam were analyzed to demonstrate the applicability of this method. PMID:27005561

  15. Physiologically Motivated Feature Extraction for Robust Automatic Speech Recognition

    Directory of Open Access Journals (Sweden)

    Ibrahim Missaoui

    2016-04-01

    Full Text Available In this paper, a new method is presented to extract robust speech features in the presence of external noise. The proposed method, based on two-dimensional Gabor filters, takes into account the spectro-temporal modulation frequencies and also limits the redundancy at the feature level. The performance of the proposed feature extraction method was evaluated on isolated speech words which are extracted from the TIMIT corpus and corrupted by background noise. The evaluation results demonstrate that the proposed feature extraction method outperforms the classic methods such as Perceptual Linear Prediction, Linear Predictive Coding, Linear Prediction Cepstral coefficients and Mel Frequency Cepstral Coefficients.
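
    A hedged sketch of the core idea: filter a log-mel spectrogram with a small bank of two-dimensional Gabor kernels tuned to different spectro-temporal modulations. Kernel sizes and modulation frequencies are illustrative, and the redundancy-limiting feature selection from the paper is not reproduced.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel_2d(size, omega_t, omega_f, sigma):
    """Real part of a 2D Gabor kernel tuned to temporal modulation omega_t and
    spectral modulation omega_f (radians per bin)."""
    half = size // 2
    t, f = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
    envelope = np.exp(-(t**2 + f**2) / (2.0 * sigma**2))
    carrier = np.cos(omega_t * t + omega_f * f)
    return envelope * carrier

def gabor_features(log_mel_spectrogram, modulations=((0.2, 0.0), (0.0, 0.5), (0.3, 0.3))):
    """Filter the spectrogram with each Gabor kernel and stack the responses."""
    return np.stack([convolve2d(log_mel_spectrogram,
                                gabor_kernel_2d(15, wt, wf, sigma=4.0), mode="same")
                     for wt, wf in modulations])

# spec = np.random.rand(40, 300)          # placeholder: 40 mel bands x 300 frames
# print(gabor_features(spec).shape)       # -> (3, 40, 300)
```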

  16. ANALYSIS METHOD OF AUTOMATIC PLANETARY TRANSMISSION KINEMATICS

    OpenAIRE

    Józef DREWNIAK; Stanisław ZAWIŚLAK; Wieczorek, Andrzej

    2014-01-01

    In the present paper, a planetary automatic transmission is modeled by means of contour graphs. The goals of modeling can be versatile: ratio calculation via algorithmic equation generation, and analysis of velocities and accelerations. Exemplary gear runs are analyzed; several drives/gears are consecutively taken into account, discussing functional schemes, assigned contour graphs and the generated systems of equations and their solutions. The advantages of the method are: algorithmic approach, ...

  17. Automatic local Gabor Features extraction for face recognition

    CERN Document Server

    Jemaa, Yousra Ben

    2009-01-01

    We present in this paper a biometric system of face detection and recognition in color images. The face detection technique is based on skin color information and fuzzy classification. A new algorithm is proposed in order to automatically detect face features (eyes, mouth and nose) and extract their corresponding geometrical points. These fiducial points are described by sets of wavelet components which are used for recognition. To achieve the face recognition, we use neural networks and study their performance for different inputs. We compare the two types of features used for recognition: geometric distances and Gabor coefficients, which can be used either independently or jointly. This comparison shows that Gabor coefficients are more powerful than geometric distances. We show with experimental results how the high recognition rate makes our system an effective tool for automatic face detection and recognition.

  18. Physiologically Motivated Feature Extraction for Robust Automatic Speech Recognition

    OpenAIRE

    Ibrahim Missaoui; Zied Lachiri

    2016-01-01

    In this paper, a new method is presented to extract robust speech features in the presence of external noise. The proposed method, based on two-dimensional Gabor filters, takes into account the spectro-temporal modulation frequencies and also limits the redundancy at the feature level. The performance of the proposed feature extraction method was evaluated on isolated speech words which are extracted from the TIMIT corpus and corrupted by background noise. The evaluation results demonstrate that ...

  19. Feature extraction and classification in automatic weld seam radioscopy

    International Nuclear Information System (INIS)

    The investigations conducted have shown that automatic feature extraction and classification procedures permit the identification of weld seam flaws. Within this context the favored learning fuzzy classifier represents a very good alternative to conventional classifiers. The results have also made clear that improvements, mainly in the field of image registration, are still possible by increasing the resolution of the radioscopy system: only if the flaw is segmented correctly, i.e. in its full size, and only with improved detail recognizability and sufficient contrast difference, will an almost error-free classification be conceivable. (orig./MM)

  20. ANALYSIS METHOD OF AUTOMATIC PLANETARY TRANSMISSION KINEMATICS

    Directory of Open Access Journals (Sweden)

    Józef DREWNIAK

    2014-06-01

    Full Text Available In the present paper, a planetary automatic transmission is modeled by means of contour graphs. The goals of modeling can be versatile: ratio calculation via algorithmic equation generation, and analysis of velocities and accelerations. Exemplary gear runs are analyzed; several drives/gears are consecutively taken into account, discussing functional schemes, assigned contour graphs and the generated systems of equations and their solutions. The advantages of the method are its algorithmic approach and its generality, in which particular drives are cases of the generally created model. Moreover, the method allows for further analysis and synthesis tasks, e.g. checking the isomorphism of design solutions.

  1. Automatic landmark extraction from image data using modified growing neural gas network.

    Science.gov (United States)

    Fatemizadeh, Emad; Lucas, Caro; Soltanian-Zadeh, Hamid

    2003-06-01

    A new method for automatic landmark extraction from MR brain images is presented. In this method, landmark extraction is accomplished by modifying growing neural gas (GNG), which is a neural-network-based cluster-seeking algorithm. Using modified GNG (MGNG), corresponding dominant points of contours extracted from two corresponding images are found. These contours are borders of segmented anatomical regions from brain images. The presented method is compared to: 1) the node splitting-merging Kohonen model and 2) the Teh-Chin algorithm (a well-known approach for dominant point extraction from ordered curves). It is shown that the proposed algorithm has lower distortion error, is able to extract landmarks from two corresponding curves simultaneously, and generates the best match according to five medical experts. PMID:12834162

  2. Automatic archaeological feature extraction from satellite VHR images

    Science.gov (United States)

    Jahjah, Munzer; Ulivieri, Carlo

    2010-05-01

    Archaeological applications need a methodological approach on a variable scale able to satisfy the intra-site (excavation) and the inter-site (survey, environmental research). The increased availability of high resolution and micro-scale data has substantially favoured archaeological applications and the consequent use of GIS platforms for reconstruction of archaeological landscapes based on remotely sensed data. Feature extraction of multispectral remotely sensing image is an important task before any further processing. High resolution remote sensing data, especially panchromatic, is an important input for the analysis of various types of image characteristics; it plays an important role in the visual systems for recognition and interpretation of given data. The methods proposed rely on an object-oriented approach based on a theory for the analysis of spatial structures called mathematical morphology. The term "morphology" stems from the fact that it aims at analysing object shapes and forms. It is mathematical in the sense that the analysis is based on the set theory, integral geometry, and lattice algebra. Mathematical morphology has proven to be a powerful image analysis technique; two-dimensional grey tone images are seen as three-dimensional sets by associating each image pixel with an elevation proportional to its intensity level. An object of known shape and size, called the structuring element, is then used to investigate the morphology of the input set. This is achieved by positioning the origin of the structuring element to every possible position of the space and testing, for each position, whether the structuring element either is included or has a nonempty intersection with the studied set. The shape and size of the structuring element must be selected according to the morphology of the searched image structures. Other two feature extraction techniques were used, eCognition and ENVI module SW, in order to compare the results. These techniques were
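
    The structuring-element probing described here can be illustrated with a standard white top-hat: open the panchromatic band with a disc-shaped element sized to the searched structures and subtract, so that bright features narrower than the disc stand out. This is a hedged, generic sketch using scipy.ndimage; the radius, threshold and input names are assumptions, and the paper's full object-oriented workflow (and the eCognition/ENVI comparisons) is not reproduced.

```python
import numpy as np
from scipy import ndimage

def morphological_top_hat(image, radius):
    """White top-hat: the image minus its grey-scale opening with a disc-shaped
    structuring element; bright structures narrower than the disc remain."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disc = (x * x + y * y) <= radius * radius
    opened = ndimage.grey_opening(image, footprint=disc)
    return image - opened

# pan = np.random.rand(512, 512)                       # placeholder panchromatic scene
# candidates = morphological_top_hat(pan, radius=7) > 0.2
```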

  3. AUTOMATIC ROAD EXTRACTION BASED ON INTEGRATION OF HIGH RESOLUTION LIDAR AND AERIAL IMAGERY

    OpenAIRE

    Rahimi, S.; H. Arefi; Bahmanyar, R.

    2015-01-01

    In recent years, the rapid increase in the demand for road information together with the availability of large volumes of high resolution Earth Observation (EO) images, have drawn remarkable interest to the use of EO images for road extraction. Among the proposed methods, the unsupervised fully-automatic ones are more efficient since they do not require human effort. Considering the proposed methods, the focus is usually to improve the road network detection, while the roads’ precise...

  4. Method for automatic detection of wheezing in lung sounds

    Directory of Open Access Journals (Sweden)

    R.J. Riella

    2009-07-01

    Full Text Available The present report describes the development of a technique for automatic wheezing recognition in digitally recorded lung sounds. This method is based on the extraction and processing of spectral information from the respiratory cycle and the use of these data for user feedback and automatic recognition. The respiratory cycle is first pre-processed, in order to normalize its spectral information, and its spectrogram is then computed. After this procedure, the spectrogram image is processed by a two-dimensional convolution filter and a half-threshold in order to increase the contrast and isolate its highest amplitude components, respectively. Thus, in order to generate more compressed data for automatic recognition, the spectral projection from the processed spectrogram is computed and stored as an array. The highest-magnitude values of the array and their respective spectral values are then located and used as inputs to a multi-layer perceptron artificial neural network, which results in an automatic indication of the presence of wheezes. For validation of the methodology, lung sounds recorded from three different repositories were used. The results show that the proposed technique achieves 84.82% accuracy in the detection of wheezing for an isolated respiratory cycle and 92.86% accuracy for the detection of wheezes when detection is carried out using groups of respiratory cycles obtained from the same person. Also, the system presents the original recorded sound and the post-processed spectrogram image for the user to draw his own conclusions from the data.
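
    A hedged sketch of the signal-processing front end only (spectrogram, 2D smoothing convolution, half-threshold and projection onto the frequency axis); the window sizes and smoothing kernel are assumptions, and the multi-layer perceptron stage is omitted.

```python
import numpy as np
from scipy.signal import spectrogram, convolve2d

def spectral_projection(audio, fs):
    """Spectrogram -> 2D smoothing filter -> half-threshold -> projection onto
    the frequency axis; peaks in the projection hint at wheeze components."""
    f, t, sxx = spectrogram(audio, fs=fs, nperseg=1024, noverlap=512)
    sxx = np.log1p(sxx)
    kernel = np.ones((3, 3)) / 9.0                    # simple smoothing kernel
    smoothed = convolve2d(sxx, kernel, mode="same")
    half_thresholded = smoothed * (smoothed > 0.5 * smoothed.max())
    return f, half_thresholded.sum(axis=1)            # sum over time frames

# fs = 8000
# audio = np.random.randn(fs * 5)                     # placeholder respiratory cycle
# freqs, projection = spectral_projection(audio, fs)
# print(freqs[np.argmax(projection)])
```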

  5. Automatic extraction of tumor region on x-ray image of animals

    International Nuclear Information System (INIS)

    Diagnosis based on CT images is increasing, but X-ray images are still often used because of scanning time and cost. It is difficult to extract the tumor region from X-ray images, and pathologists have to diagnose many tumor images; therefore, demand for the development of CAD systems to support pathologists is increasing. The images we use are dog images; human images have been researched widely, but animal images have not been researched well. In this paper, automatic extraction of the tumor region is studied. We used a normalized correlation operation whose template resembles a mountain-shaped intensity profile. We also used a Quoit filter, which detects regions that possibly contain tumors. We calculated the tumor edges so that tumors can be seen easily. Our method detected some candidate tumor edges. As future work, the bone region should be extracted, and some fixed values, including the filter size, should be determined automatically. (author)
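
    A hedged sketch of the normalized-correlation step with a mountain-like (Gaussian bump) template, using OpenCV's template matching; the template size, score threshold and file name are assumptions, and the Quoit filter stage is not reproduced.

```python
import cv2
import numpy as np

def template_candidates(gray, template, score=0.6):
    """Normalized cross-correlation of a mound-shaped template over the image;
    locations scoring above `score` are kept as tumour-candidate centres."""
    response = cv2.matchTemplate(gray.astype(np.float32),
                                 template.astype(np.float32), cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(response >= score)
    return list(zip(xs.tolist(), ys.tolist()))

# A synthetic "mountain-like" template: a 2D Gaussian bump.
g = cv2.getGaussianKernel(31, 6)
template = g @ g.T
# gray = cv2.imread("dog_xray.png", cv2.IMREAD_GRAYSCALE)     # hypothetical input
# print(len(template_candidates(gray, template)))
```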

  6. Automatic centerline extraction of coronary arteries in coronary computed tomographic angiography.

    Science.gov (United States)

    Yang, Guanyu; Kitslaar, Pieter; Frenay, Michel; Broersen, Alexander; Boogers, Mark J; Bax, Jeroen J; Reiber, Johan H C; Dijkstra, Jouke

    2012-04-01

    Coronary computed tomographic angiography (CCTA) is a non-invasive imaging modality for the visualization of the heart and coronary arteries. To fully exploit the potential of the CCTA datasets and apply them in clinical practice, an automated coronary artery extraction approach is needed. The purpose of this paper is to present and validate a fully automatic centerline extraction algorithm for coronary arteries in CCTA images. The algorithm is based on an improved version of Frangi's vesselness filter, which removes unwanted step-edge responses at the boundaries of the cardiac chambers. Building upon this new vesselness filter, the coronary artery extraction pipeline extracts the centerlines of main branches as well as side-branches automatically. This algorithm was first evaluated with a standardized evaluation framework named Rotterdam Coronary Artery Algorithm Evaluation Framework used in the MICCAI Coronary Artery Tracking challenge 2008 (CAT08). It includes 128 reference centerlines which were manually delineated. The average overlap and accuracy measures of our method were 93.7% and 0.30 mm, respectively, which ranked 1st and 3rd among five other automatic methods presented in the CAT08. Secondly, in 50 clinical datasets, a total of 100 reference centerlines were generated from lumen contours in the transversal planes which were manually corrected by an expert from the cardiology department. In this evaluation, the average overlap and accuracy were 96.1% and 0.33 mm, respectively. The entire processing time for one dataset is less than 2 min on a standard desktop computer. In conclusion, our newly developed automatic approach can extract coronary arteries in CCTA images with excellent performance in extraction ability and accuracy. PMID:21637981
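
    scikit-image ships a Frangi vesselness filter, which can serve as a rough, hedged stand-in for the enhanced filter described above; the sketch below applies it to a single 2D slice, thresholds the response and skeletonizes the mask. The sigma range and threshold are illustrative, and the paper's step-edge suppression and centerline tracking are not reproduced.

```python
import numpy as np
from skimage.filters import frangi
from skimage.morphology import skeletonize

def rough_centerline(slice_2d, threshold=0.15):
    """Vesselness response on one slice, thresholded and skeletonized to a
    1-pixel-wide approximation of the vessel centerline."""
    vesselness = frangi(slice_2d, sigmas=range(1, 6), black_ridges=False)
    vessel_mask = vesselness > threshold * vesselness.max()
    return skeletonize(vessel_mask)

# img = np.random.rand(256, 256)            # placeholder axial slice
# centerline = rough_centerline(img)
```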

  7. Extraction Methods, Variability Encountered in

    NARCIS (Netherlands)

    Bodelier, P.L.E.; Nelson, K.E.

    2014-01-01

    Synonyms Bias in DNA extractions methods; Variation in DNA extraction methods Definition The variability in extraction methods is defined as differences in quality and quantity of DNA observed using various extraction protocols, leading to differences in outcome of microbial community composition as

  8. Automatic extraction of forward stroke volume using dynamic PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik Stubkjær; Kero, Tanja; Orndahl, Lovisa Holm; Kim, Won Yong; Bjerner, Tomas; Bouchelouche, Kirsten; Wiggers, Henrik; Frøkiær, Jørgen; Sørensen, Jens

    2015-01-01

    Background The aim of this study was to develop and validate an automated method for extracting forward stroke volume (FSV) using indicator dilution theory directly from dynamic positron emission tomography (PET) studies for two different tracers and scanners. Methods 35 subjects underwent a dynamic 11C-acetate PET scan on a Siemens Biograph TruePoint-64 PET/CT (scanner I). In addition, 10 subjects underwent both dynamic 15O-water PET and 11C-acetate PET scans on a GE Discovery-ST PET/CT (scanner II). The left ventricular (LV)-aortic time-activity curve (TAC) was extracted automatically...

  9. Automatic Metadata Extraction - The High Energy Physics Use Case

    CERN Document Server

    Boyd, Joseph; Rajman, Martin

    Automatic metadata extraction (AME) of scientific papers has been described as one of the hardest problems in document engineering. Heterogeneous content, varying style, and unpredictable placement of article components render the problem inherently indeterministic. Conditional random fields (CRF), a machine learning technique, can be used to classify document metadata amidst this uncertainty, annotating document contents with semantic labels. High energy physics (HEP) papers, such as those written at CERN, have unique content and structural characteristics, with scientific collaborations of thousands of authors altering article layouts dramatically. The distinctive qualities of these papers necessitate the creation of specialised datasets and model features. In this work we build an unprecedented training set of HEP papers and propose and evaluate a set of innovative features for CRF models. We build upon state-of-the-art AME software, GROBID, a tool coordinating a hierarchy of CRF models in a full document ...

  10. EXIF Custom: Automatic image metadata extraction for Scratchpads and Drupal

    Directory of Open Access Journals (Sweden)

    Ed Baker

    2013-09-01

    Full Text Available Many institutions and individuals use embedded metadata to aid in the management of their image collections. Many desktop image management solutions such as Adobe Bridge and online tools such as Flickr also make use of embedded metadata to describe, categorise and license images. Until now Scratchpads (a data management system and virtual research environment for biodiversity) have not made use of these metadata, and users have had to manually re-enter this information if they have wanted to display it on their Scratchpad site. The Drupal module described here allows users to map metadata embedded in their images to the associated fields in the Scratchpads image form using one or more customised mappings. The module works seamlessly with the bulk image uploader used on Scratchpads, and it is therefore possible to upload hundreds of images easily with automatic metadata (EXIF, XMP and IPTC) extraction and mapping.

  11. Automatic Road Extraction Based on Integration of High Resolution LIDAR and Aerial Imagery

    Science.gov (United States)

    Rahimi, S.; Arefi, H.; Bahmanyar, R.

    2015-12-01

    In recent years, the rapid increase in the demand for road information, together with the availability of large volumes of high resolution Earth Observation (EO) images, has drawn remarkable interest to the use of EO images for road extraction. Among the proposed methods, the unsupervised fully-automatic ones are more efficient since they do not require human effort. In the proposed methods, the focus is usually on improving road network detection, while the roads' precise delineation has been less attended to. In this paper, we propose a new unsupervised fully-automatic road extraction method, based on the integration of the high resolution LiDAR and aerial images of a scene using Principal Component Analysis (PCA). This method discriminates the existing roads in a scene and then precisely delineates them. The Hough transform is then applied to the integrated information to extract straight lines, which are further used to segment the scene and discriminate the existing roads. The roads' edges are then precisely localized using a projection-based technique, and the round corners are further refined. Experimental results demonstrate that our proposed method extracts and delineates the roads with a high accuracy.
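
    The straight-line step can be illustrated with a Canny edge map followed by a probabilistic Hough transform, for which OpenCV has ready-made routines. This is a hedged sketch on a single grayscale tile; the thresholds and segment-length parameters are assumptions, and the PCA-based LiDAR/image integration and edge refinement are not reproduced.

```python
import cv2
import numpy as np

def straight_line_segments(gray, canny_lo=60, canny_hi=180):
    """Edge detection followed by a probabilistic Hough transform to pick up
    long straight segments (road edge candidates)."""
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]

# gray = cv2.imread("ortho_tile.png", cv2.IMREAD_GRAYSCALE)   # hypothetical tile
# print(len(straight_line_segments(gray)))
```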

  12. An automatic extraction algorithm of three dimensional shape of brain parenchyma from MR images

    International Nuclear Information System (INIS)

    For the simulation of surgical operations, the extraction of the selected region using MR images is useful. However, this segmentation requires a high level of skill and experience from the technicians. We have developed a unique automatic extraction algorithm for extracting three-dimensional brain parenchyma from MR head images. It is named the ''three dimensional gray scale clumsy painter method''. In this method, a template having the shape of a pseudo-circle, a so-called clumsy painter (CP), moves along the contour of the selected region and extracts the region surrounded by the contour. This method has advantages compared with morphological filtering and the region growing method. Previously, this method was applied to binary images, but there was the problem that the extraction results varied with the threshold level. We introduced gray-level information of the images to decide the threshold, which depends upon the change of image density between the brain parenchyma and CSF. We decided the threshold level by the vector of a map of templates, and changed the map according to the change of image density. As a result, the over-extracted ratio was improved by 36%, and the under-extracted ratio was improved by 20%. (author)

  13. Microbial diversity in fecal samples depends on DNA extraction method

    DEFF Research Database (Denmark)

    Mirsepasi, Hengameh; Persson, Søren; Struve, Carsten;

    2014-01-01

    was to evaluate two different DNA extraction methods in order to choose the most efficient method for studying intestinal bacterial diversity using Denaturing Gradient Gel Electrophoresis (DGGE). FINDINGS: In this study, a semi-automatic DNA extraction system (easyMag®, BioMérieux, Marcy I'Etoile, France...

  14. An automatic building reconstruction method: a structural approach using high resolution satellite images

    OpenAIRE

    Lafarge, Florent; Descombes, Xavier; Zerubia, Josiane; Deseilligny, Marc-Pierrot

    2006-01-01

    We present an automatic method for generating 3D city models of dense urban areas from high resolution satellite data. The proposed method is developed using a structural approach: we construct complex buildings by merging simple parametric models with rectangular ground footprints. To do so, an automatic building extraction method based on marked point processes is used to provide rectangular building footprints. A collection of 3D parametric models is defined in order to be fixed onto these building footprints. A...

  15. An automatic and effective tooth isolation method for dental radiographs

    Science.gov (United States)

    Lin, P.-L.; Huang, P.-W.; Cho, Y. S.; Kuo, C.-H.

    2013-03-01

    Tooth isolation is a very important step for both computer-aided dental diagnosis and automatic dental identification systems, because it will directly affect the accuracy of feature extraction and, thereby, the final results of both types of systems. This paper presents an effective and fully automatic tooth isolation method for dental X-ray images, which contains upper-lower jaw separation, single tooth isolation, over-segmentation verification, and under-segmentation detection. The upper-lower jaw separation mechanism is based on a gray-scale integral projection to avoid possible information loss and incorporates angle adjustment to handle skewed images. In single tooth isolation, an adaptive windowing scheme for locating gap valleys is proposed to improve the accuracy. In over-segmentation, an isolation-curve verification scheme is proposed to remove excessive curves; and in under-segmentation, a missing-teeth detection scheme is proposed. The experimental results demonstrate that our method achieves accuracy rates of 95.63% and 98.71% for the upper and lower jaw images, respectively, on a test database of 60 bitewing dental radiographs, and performs better than Nomir and Abdel-Mottaleb's method for images with severe tooth occlusion, excessive dental work, and uneven illumination. The method without the upper-lower jaw separation step also works well for panoramic and periapical images.
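
    A hedged sketch of the gray-scale integral projection step only: sum each row of the radiograph and take the darkest valley near the image centre as the upper/lower jaw boundary. The search band is an assumption, and the angle adjustment for skewed images and the later isolation steps are not reproduced.

```python
import numpy as np

def jaw_gap_row(gray):
    """Horizontal gray-scale integral projection of a bitewing radiograph.
    The upper/lower jaw boundary is taken as the darkest projection valley
    near the image centre (the gap between the tooth rows)."""
    profile = gray.sum(axis=1).astype(float)          # one value per row
    h = len(profile)
    centre_band = slice(h // 4, 3 * h // 4)           # restrict search to the middle
    return int(np.argmin(profile[centre_band]) + h // 4)

# gray = np.random.randint(0, 255, (600, 800))        # placeholder radiograph
# print(jaw_gap_row(gray))
```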

  16. Computer Vision Based Automatic Extraction and Thickness Measurement of Deep Cervical Flexor from Ultrasonic Images.

    Science.gov (United States)

    Kim, Kwang Baek; Song, Doo Heon; Park, Hyun Jun

    2016-01-01

    Deep Cervical Flexor (DCF) muscles are important in monitoring and controlling neck pain. While ultrasonographic analysis is useful in this area, it has an intrinsic subjectivity problem. In this paper, we propose automatic DCF extractor/analyzer software based on computer vision. One of the major difficulties in developing such an automatic analyzer is to detect important organs and their boundaries in a very low brightness-contrast environment. Our fuzzy sigma binarization process is one of the answers to that problem. Another difficulty is to compensate for information loss that occurs during such image processing procedures. Many morphologically motivated image processing algorithms are applied for that purpose. The proposed method is verified as successful in extracting DCFs and measuring thicknesses in experiments using two hundred 800 × 600 DICOM ultrasonography images, with a 98.5% extraction rate. Also, the thickness of DCFs automatically measured by this software shows only a small difference (less than 0.3 cm) for 89.8% of extracted DCFs. PMID:26949411

  17. Automatic Key-Frame Extraction from Optical Motion Capture Data

    Institute of Scientific and Technical Information of China (English)

    ZHANG Qiang; YU Shao-pei; ZHOU Dong-sheng; WEI Xiao-peng

    2013-01-01

    Optical motion capture is an increasingly popular animation technique. In the last few years, plenty of methods have been proposed for key-frame extraction of motion capture data, and it is common to extract key-frames using quaternions. Here, one main difficulty is that previous algorithms often need various parameters to be set manually. In addition, it is problematic to predefine an appropriate threshold without knowing the data content. In this paper, we present a novel adaptive threshold-based extraction method. Key-frames can be found according to quaternion distance. We propose a simple and efficient algorithm to extract key-frames from a motion sequence based on an adaptive threshold. It is convenient, with no need to predefine parameters to meet a certain compression ratio. Experimental results on many motion capture sequences with different traits demonstrate the good performance of the proposed algorithm. Our experiments show that one can typically cut down the process of extraction from several minutes to a couple of seconds.
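
    A hedged sketch of key-frame selection by quaternion distance; for simplicity it uses a fixed threshold, whereas the paper's contribution is an adaptive one, and it treats each frame as a single unit quaternion rather than a full skeleton pose.

```python
import numpy as np

def quaternion_distance(q1, q2):
    """Geodesic angle between two unit quaternions (handles the q/-q ambiguity)."""
    dot = abs(np.dot(q1, q2))
    return 2.0 * np.arccos(np.clip(dot, -1.0, 1.0))

def extract_key_frames(frames, threshold):
    """Keep a frame whenever its pose differs from the last key-frame by more
    than `threshold`; `frames` is an array of unit quaternions, one per frame."""
    keys = [0]
    for i in range(1, len(frames)):
        if quaternion_distance(frames[keys[-1]], frames[i]) > threshold:
            keys.append(i)
    return keys

# Placeholder motion: a slow rotation about the z-axis, one quaternion per frame.
angles = np.linspace(0, np.pi, 200)
quats = np.stack([np.cos(angles / 2), np.zeros_like(angles),
                  np.zeros_like(angles), np.sin(angles / 2)], axis=1)
print(extract_key_frames(quats, threshold=0.3))
```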

  18. Automatic Aircraft Collision Avoidance System and Method

    Science.gov (United States)

    Skoog, Mark (Inventor); Hook, Loyd (Inventor); McWherter, Shaun (Inventor); Willhite, Jaimie (Inventor)

    2014-01-01

    The invention is a system and method of compressing a DTM to be used in an Auto-GCAS system using a semi-regular geometric compression algorithm. In general, the invention operates by first selecting the boundaries of the three dimensional map to be compressed and dividing the three dimensional map data into regular areas. Next, a type of free-edged, flat geometric surface is selected which will be used to approximate terrain data of the three dimensional map data. The flat geometric surface is used to approximate terrain data for each regular area. The approximations are checked to determine if they fall within selected tolerances. If the approximation for a specific regular area is within specified tolerance, the data is saved for that specific regular area. If the approximation for a specific area falls outside the specified tolerances, the regular area is divided and a flat geometric surface approximation is made for each of the divided areas. This process is recursively repeated until all of the regular areas are approximated by flat geometric surfaces. Finally, the compressed three dimensional map data is provided to the automatic ground collision system for an aircraft.
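
    A hedged sketch of the recursive idea described in the abstract: fit a flat (planar) patch to a square elevation tile and, if the worst residual exceeds the tolerance, split the tile into four and recurse. The function names, least-squares plane fit and stopping rule are illustrative assumptions, not the patented Auto-GCAS implementation.

```python
import numpy as np

def compress_tile(tile, tol, min_size=4):
    """Approximate a square elevation tile by planar patches. Each patch is a
    least-squares plane; if its maximum residual exceeds `tol`, the tile is
    split into four sub-tiles and the fit is retried recursively.
    Returns a list of (row, col, shape, plane_coeffs) patches."""
    def fit_plane(z):
        r, c = np.meshgrid(np.arange(z.shape[0]), np.arange(z.shape[1]), indexing="ij")
        a = np.column_stack([r.ravel(), c.ravel(), np.ones(z.size)])
        coeffs, *_ = np.linalg.lstsq(a, z.ravel(), rcond=None)
        residual = np.abs(a @ coeffs - z.ravel()).max()
        return coeffs, residual

    def recurse(z, r0, c0, out):
        coeffs, residual = fit_plane(z)
        if residual <= tol or min(z.shape) <= min_size:
            out.append((r0, c0, z.shape, coeffs))
            return
        hr, hc = z.shape[0] // 2, z.shape[1] // 2
        recurse(z[:hr, :hc], r0, c0, out)
        recurse(z[:hr, hc:], r0, c0 + hc, out)
        recurse(z[hr:, :hc], r0 + hr, c0, out)
        recurse(z[hr:, hc:], r0 + hr, c0 + hc, out)

    patches = []
    recurse(tile, 0, 0, patches)
    return patches

# dtm = np.random.rand(64, 64) * 5.0          # placeholder elevation tile (metres)
# print(len(compress_tile(dtm, tol=1.0)))
```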

  19. An Extended Keyword Extraction Method

    Science.gov (United States)

    Hong, Bao; Zhen, Deng

    Among numerous Chinese keyword extraction methods, Chinese linguistic characteristics have seldom been considered, which works against improving the precision of Chinese keyword extraction. An extended term-frequency-based method (Extended TF) is proposed in this paper, which combines Chinese linguistic characteristics with the basic TF method. Unary, binary and ternary grammars for candidate keyword extraction, as well as other linguistic features, were all taken into account. The method establishes a classification model using a support vector machine. Tests show that the proposed extraction method improves keyword precision and recall significantly. We applied the keywords extracted by the Extended TF method to text file classification; the results show that they contributed greatly to raising the precision of text file classification.

  20. Automatic Segmentation of Raw LIDAR Data for Extraction of Building Roofs

    Directory of Open Access Journals (Sweden)

    Mohammad Awrangjeb

    2014-04-01

    Full Text Available Automatic extraction of building roofs from remote sensing data is important for many applications, including 3D city modeling. This paper proposes a new method for automatic segmentation of raw LIDAR (light detection and ranging) data. Using the ground height from a DEM (digital elevation model), the raw LIDAR points are separated into two groups. The first group contains the ground points that form a “building mask”. The second group contains non-ground points that are clustered using the building mask. A cluster of points usually represents an individual building or tree. During segmentation, the planar roof segments are extracted from each cluster of points and refined using rules, such as the coplanarity of points and their locality. Planes on trees are removed using information, such as area and point height difference. Experimental results on nine areas of six different data sets show that the proposed method can successfully remove vegetation and so offers a high success rate for building detection (about 90% correctness and completeness) and roof plane extraction (about 80% correctness and completeness), when LIDAR point density is as low as four points/m2. Thus, the proposed method can be exploited in various applications.

  1. Automatic extraction of tumors from multiple MR images with self-organizing maps

    International Nuclear Information System (INIS)

    In MR images, the contrast between tumors and surrounding soft tissues is not always clear, and it may be difficult to determine the tumor region. In this report, we propose a method for the automatic and objective extraction of tumors based on the correlations among multiple MR images. First, a map reflecting the correlations of three types of MR images (Gd-enhanced, T1-weighted, and T2-weighted images) is created by training of Self-Organizing Maps (SOM). Second, the SOM are grouped into a number of clusters determined in advance, and the original MR images are divided into clusters according to the clustered SOM. Finally, the tumor region in the clustered MR images is refined by reclassification, improving the accuracy of extraction. This method was applied to 10 cases in a clinical study, and in 8 of these cases, the tumor could be distinguished from other regions as an independent cluster. The proposed method is expected to be useful for the automatic extraction of tumors in MR images. (author)
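
    A minimal, hedged sketch of the first step: cluster voxels described by their (Gd-enhanced, T1, T2) intensities with a small self-organizing map. It uses the third-party MiniSom package, and the grid size, normalization and iteration count are assumptions; the cluster-to-tumor assignment and the reclassification refinement from the abstract are not reproduced.

```python
import numpy as np
from minisom import MiniSom   # third-party SOM package (pip install minisom)

def som_cluster_voxels(gd, t1, t2, grid=(4, 4), iters=5000):
    """Train a small SOM on per-voxel (Gd, T1, T2) intensity triples and return,
    for each voxel, the index of its winning SOM node (its cluster)."""
    feats = np.column_stack([gd.ravel(), t1.ravel(), t2.ravel()]).astype(float)
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-9)
    som = MiniSom(grid[0], grid[1], 3, sigma=1.0, learning_rate=0.5)
    som.train_random(feats, iters)
    labels = np.array([som.winner(v)[0] * grid[1] + som.winner(v)[1] for v in feats])
    return labels.reshape(gd.shape)

# gd, t1, t2 = (np.random.rand(128, 128) for _ in range(3))   # placeholder slices
# clusters = som_cluster_voxels(gd, t1, t2)
```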

  2. Study on automatic control of high uranium concentration solvent extraction with pulse sieve-plate column

    International Nuclear Information System (INIS)

    The author mainly describes the working condition of the automatic control system for high-uranium-concentration solvent extraction with a pulse sieve-plate column in a large-scale test. The automatic instruments and meters, the automatic control circuit, and the best feedback control point of the solvent extraction process with the pulse sieve-plate column are discussed in detail. The authors point out the success of this automation experiment and also raise some questions concerning automatic control, instruments and meters that should be addressed in future production.

  3. Automatic extraction of abnormal signals from diffusion-weighted images using 3D-ACTIT

    International Nuclear Information System (INIS)

    Recent developments in medical imaging equipment have made it possible to acquire large amounts of image data and to perform detailed diagnosis. However, it is difficult for physicians to evaluate all of the image data obtained. To address this problem, computer-aided detection (CAD) and expert systems have been investigated. In these investigations, as the types of images used for diagnosis have expanded, the requirements for image processing have become more complex. We therefore propose a new method, which we call Automatic Construction of Tree-structural Image Transformation (3D-ACTIT), to perform various 3D image processing procedures automatically using instance-based learning. We have conducted research on diffusion-weighted image (DWI) data and its processing. In this report, we describe how 3D-ACTIT performs processing to extract only abnormal signal regions from 3D-DWI data. (author)

  4. 牛肉大理石花纹图像特征信息提取及自动分级方法%Method of information extraction of marbling image characteristic and automatic classification for beef

    Institute of Scientific and Technical Information of China (English)

    周彤; 彭彦昆

    2013-01-01

    Beef marbling is one of the most important indices for assessing beef quality. The grade of beef marbling is a measure of the fat distribution density in the rib-eye region. However, in most beef slaughterhouses and enterprises in China, beef quality grading depends on trained graders using their visual senses or comparing against standard sample cards. This manual grading method not only demands great labor but also lacks objectivity and accuracy. The objective of this research was to investigate an optimal method for grading beef marbling based on computer vision and image processing technologies to meet the requirements of the meat industry. A practical algorithm that can be used in a beef marbling grading system is proposed in this research. The beef sample images were collected by a machine vision image acquisition system. The system consisted of an image acquisition device, a computer, and an image processing algorithm embedded in the self-developed system software. The images of the beef samples on an aluminum plate were captured by a CCD camera. Light intensity was regulated through a light controller, and the distance between the camera lens and the beef samples was adjusted through translation stages in the image acquisition device. Collected images were automatically stored in the computer for further image processing. First, methods such as image denoising, background removal, and image enhancement were adopted to preprocess the image and obtain a region of interest (ROI). In this step, the image was cropped to separate the beef from the background. Then, an iteration method was used to segment the beef area and to obtain the marbling and fat areas. The redundant fat area was removed to extract an effective rib-eye region. Ten characteristic parameters of beef marbling, namely the ratio of marbling area in the rib-eye region, the number of large-grain fat, medium-grain fat, small-grain fat and total grain fat, and the density of large-grain fat, medium-grain fat

  5. Historical Patterns Based on Automatically Extracted Data: the Case of Classical Composers

    DEFF Research Database (Denmark)

    Borowiecki, Karol; O'Hagan, John

    2012-01-01

    The purpose of this paper is to demonstrate the potential for generating interesting aggregate data on certain aspects of the lives of thousands of composers, and indeed other creative groups, from large on-line dictionaries and to be able to do so relatively quickly. A purpose-built Java application that automatically extracts and processes information was developed to generate data on the birth location, occupations and importance (using word count methods) of over 12,000 composers over six centuries. Quantitative measures of the relative importance of different types of music and of the...

  6. An automatic and fast centerline extraction algorithm for virtual colonoscopy.

    Science.gov (United States)

    Jiang, Guangxiang; Gu, Lixu

    2005-01-01

    This paper introduces a new refined centerline extraction algorithm, which is based on and significantly improved from distance mapping algorithms. The new approach includes three major parts: employing a colon segmentation method; designing and realizing a fast Euclidean transform algorithm; and introducing a boundary voxel cutting (BVC) approach. The main contribution is the BVC processing, which greatly speeds up the Dijkstra algorithm and improves the overall performance of the new algorithm. Experimental results demonstrate that the new centerline algorithm is more efficient and accurate compared with existing algorithms. PMID:17281406

  7. A General Method for Module Automatic Testing in Avionics Systems

    Directory of Open Access Journals (Sweden)

    Li Ma

    2013-05-01

    Full Text Available The traditional Automatic Test Equipment (ATE) systems are insufficient to cope with the challenges of testing more and more complex avionics systems. In this study, we propose a general method for automatic module testing in an avionics test platform based on the PXI bus. We apply virtual instrument technology to realize automatic testing and fault reporting of signal performance. Taking the avionics bus ARINC429 as an example, we introduce the architecture of the automatic test system as well as the implementation of the algorithms in LabVIEW. Comprehensive experiments show that the proposed method can effectively accomplish automatic testing and fault reporting of signal performance, and it greatly improves the generality and reliability of ATE in avionics systems.

  8. Automatic cell object extraction of red tide algae in microscopic images

    Science.gov (United States)

    Yu, Kun; Ji, Guangrong; Zheng, Haiyong

    2016-05-01

    Extracting the cell objects of red tide algae is the most important step in the construction of an automatic microscopic image recognition system for harmful algal blooms. This paper describes a set of composite methods for the automatic segmentation of cells of red tide algae from microscopic images. Depending on the existence of setae, we classify the common marine red tide algae into non-setae algae species and Chaetoceros, and design segmentation strategies for these two categories according to their morphological characteristics. In view of the varied forms and fuzzy edges of non-setae algae, we propose a new multi-scale detection algorithm for algal cell regions based on border-correlation, and further combine this with morphological operations and an improved GrabCut algorithm to segment single-cell and multicell objects. In this process, similarity detection is introduced to eliminate the pseudo cellular regions. For Chaetoceros, owing to the weak grayscale information of their setae and the low contrast between the setae and background, we propose a cell extraction method based on a gray surface orientation angle model. This method constructs a gray surface vector model, and executes the gray mapping of the orientation angles. The obtained gray values are then reconstructed and linearly stretched. Finally, appropriate morphological processing is conducted to preserve the orientation information and tiny features of the setae. Experimental results demonstrate that the proposed methods can effectively remove noise and accurately extract both categories of algae cell objects possessing a complete shape, regular contour, and clear edge. Compared with other advanced segmentation techniques, our methods are more robust when considering images with different appearances and achieve more satisfactory segmentation effects.
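    The GrabCut refinement stage mentioned above can be illustrated with the standard OpenCV implementation; note that the record describes an improved GrabCut, so the sketch below shows only the stock algorithm, and the bounding rectangle is assumed to come from the coarse detection stage.

```python
import cv2
import numpy as np

def refine_cell_region(image_bgr, rect):
    """Refine a candidate algal-cell region with OpenCV's GrabCut, given a
    bounding rectangle (x, y, w, h) from the coarse detection stage; returns
    a binary mask of the segmented cell."""
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model, 5,
                cv2.GC_INIT_WITH_RECT)
    # Keep definite and probable foreground pixels as the cell region.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                    1, 0).astype(np.uint8)
```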

  9. Automatic extraction of property norm-like data from large text corpora.

    Science.gov (United States)

    Kelly, Colin; Devereux, Barry; Korhonen, Anna

    2014-01-01

    Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car--petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, yielding concept-relation-feature triples (e.g., car be fast, car require petrol, car cause pollution), which approximate property-based conceptual representations. Our novel method extracts candidate triples from parsed corpora (Wikipedia and the British National Corpus) using syntactically and grammatically motivated rules, then reweights triples with a linear combination of their frequency and four statistical metrics. We assess our system output in three ways: lexical comparison with norms derived from human-generated property norm data, direct evaluation by four human judges, and a semantic distance comparison with both WordNet similarity data and human-judged concept similarity ratings. Our system offers a viable and performant method of plausible triple extraction: Our lexical comparison shows comparable performance to the current state-of-the-art, while subsequent evaluations exhibit the human-like character of our generated properties. PMID:25019134

  10. Automatic Extraction of Tide-Coordinated Shoreline Using Open Source Software and Landsat Imagery

    Science.gov (United States)

    Goncalves, G.; Duro, N.; Sousa, E.; Figueiredo, I.

    2015-04-01

    Due to both natural and anthropogenic causes, coastlines keep changing their shape, position and extent dynamically and continuously over time. In this paper we propose an approach to derive a tide-coordinated shoreline from two extracted instantaneous shorelines corresponding to nearly low-tide and high-tide events. First, all the multispectral images are pansharpened to meet the 15-meter spatial resolution of the panchromatic images. Second, by using the Modified Normalized Difference Water Index (MNDWI) and the k-means clustering method we extract the raster shoreline for each image acquisition time. Third, each raster shoreline is smoothed and vectorized using a penalized least squares method. Fourth, a 2D constrained Delaunay triangulation is built from the two extracted instantaneous shorelines, with their respective heights interpolated from a tide gauge station. Finally, the desired tide-coordinated shoreline is interpolated from the resulting triangular intertidal surface. The results show that an automatic tide-coordinated extraction method can be efficiently implemented using freely available remote sensing imagery (Landsat 8), open source software (QGIS and the Orfeo Toolbox) and Python scripting for task automation and software integration.
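    A rough sketch of the water-index and clustering steps is given below; the band array names and the small epsilon guard are assumptions, and the smoothing, vectorization, and triangulation steps of the full workflow are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def water_mask_from_mndwi(green, swir):
    """Compute MNDWI = (Green - SWIR) / (Green + SWIR) and split the scene
    into water / land with a 2-class k-means; the shoreline is then the
    boundary of the returned water mask."""
    mndwi = (green - swir) / (green + swir + 1e-9)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(mndwi.reshape(-1, 1))

    # Ensure label 1 is the water class (the cluster with the higher mean MNDWI).
    if mndwi.ravel()[labels == 1].mean() < mndwi.ravel()[labels == 0].mean():
        labels = 1 - labels
    return labels.reshape(mndwi.shape)
```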

  11. A method of automatic control procedures cardiopulmonary resuscitation

    Science.gov (United States)

    Bureev, A. Sh.; Zhdanov, D. S.; Kiseleva, E. Yu.; Kutsov, M. S.; Trifonov, A. Yu.

    2015-11-01

    This study presents the results of work on the creation of methods for the automatic control of cardiopulmonary resuscitation (CPR) procedures. A method for the automatic control of the CPR procedure is presented, based on evaluating acoustic data on the dynamics of blood flow at the bifurcation of the carotid arteries and the dynamics of air flow in the trachea, in accordance with the current guidelines for CPR. Evaluation of the patient is carried out by analyzing the respiratory noise and blood flow in the intervals between chest compressions and artificial pulmonary ventilation. The operating algorithm of a device for the automatic control of CPR procedures and its block diagram have been developed.

  12. Research of x-ray automatic image mosaic method

    Science.gov (United States)

    Liu, Bin; Chen, Shunan; Guo, Lianpeng; Xu, Wanpeng

    2013-10-01

    Image mosaicking has wide application value in the field of medical image analysis; it is a technique that spatially matches a series of mutually overlapping images and finally builds a seamless, high-quality image with high resolution and a wide field of view. In this paper, grayscale-slicing pseudo-color enhancement was first used to complete the mapping from grayscale to pseudo-color and to extract SIFT features from the images. Then, using the normalized cross-correlation (NCC) similarity measure, the RANSAC (Random Sample Consensus) method was used to exclude false feature points in order to complete the exact matching of feature points. Finally, seamless mosaicking and color fusion were completed using wavelet multi-decomposition. Experiments show that the method can effectively improve the precision and degree of automation of medical image mosaicking, and it provides an effective technical approach for automatic medical image mosaicking.
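    The feature-matching stage can be sketched with OpenCV as follows; note that the Lowe ratio test here stands in for the NCC-based similarity measure described in the record, and the pseudo-color enhancement and wavelet fusion steps are omitted.

```python
import cv2
import numpy as np

def estimate_mosaic_homography(ref_gray, mov_gray):
    """Match SIFT keypoints between two frames and estimate a homography with
    RANSAC, mirroring the feature-matching stage of the mosaic workflow."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(ref_gray, None)
    kp2, des2 = sift.detectAndCompute(mov_gray, None)

    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```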

  13. Statistical Analysis of Automatic Seed Word Acquisition to Improve Harmful Expression Extraction in Cyberbullying Detection

    Directory of Open Access Journals (Sweden)

    Suzuha Hatakeyama

    2016-04-01

    Full Text Available We study the social problem of cyberbullying, defined as a new form of bullying that takes place in the Internet space. This paper proposes a method for the automatic acquisition of seed words to improve the performance of the original cyberbullying detection method by Nitta et al. [1]. We conducted an experiment in exactly the same settings and found that the method, which is based on a Web mining technique, has lost over 30 percentage points of its performance since being proposed in 2013. Thus, we hypothesize on the reasons for the decrease in performance and propose a number of improvements, from which we experimentally choose the best one. Furthermore, we collect several seed word sets using different approaches and evaluate their precision. We found that the influential factor in the extraction of harmful expressions is not the number of seed words, but the way the seed words were collected and filtered.

  14. Automatic extraction of highway light poles and towers from mobile LiDAR data

    Science.gov (United States)

    Yan, Wai Yeung; Morsy, Salem; Shaker, Ahmed; Tulloch, Mark

    2016-03-01

    Mobile LiDAR has recently been demonstrated as a viable technique for pole-like object detection and classification. Although a desirable accuracy (around 80%) has been reported in existing studies, the majority of them were conducted at street level with relatively flat ground, and very few addressed how to extract the entire pole structure from the ground or curb surface. Therefore, this paper attempts to fill the research gap by presenting a workflow for the automatic extraction of light poles and towers from a mobile LiDAR point cloud, with a particular focus on municipal highways. The data processing workflow includes (1) an automatic ground filtering mechanism to separate aboveground and ground features, (2) an unsupervised clustering algorithm to cluster the aboveground point cloud, (3) a set of decision rules to identify and classify potential light poles and towers, and (4) a least-squares circle fitting algorithm to fit the circular pole structure so as to remove the ground points. The workflow was tested with a set of mobile LiDAR data collected for a section of Highway 401 located in Toronto, Ontario, Canada. The results showed that the proposed method can achieve a detection rate of over 91% for five types of light poles and towers along the study area.
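    The least-squares circle fitting used to isolate the pole structure from ground points can be sketched with a simple algebraic (Kasa) fit; the function below operates on the horizontal coordinates of one candidate cluster and is an illustration, not the authors' implementation.

```python
import numpy as np

def fit_circle_xy(points_xy):
    """Algebraic (Kasa) least-squares circle fit to the horizontal footprint of
    a pole cluster; points far from the fitted circle can then be treated as
    ground or curb returns and removed."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    # (x - cx)^2 + (y - cy)^2 = r^2  rewritten as  2*cx*x + 2*cy*y + c = x^2 + y^2
    A = np.c_[2.0 * x, 2.0 * y, np.ones(len(x))]
    b = x ** 2 + y ** 2
    cx, cy, c = np.linalg.lstsq(A, b, rcond=None)[0]
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, radius
```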

  15. Automatic Extraction and Regularization of Building Outlines from Airborne LIDAR Point Clouds

    Science.gov (United States)

    Albers, Bastian; Kada, Martin; Wichmann, Andreas

    2016-06-01

    Building outlines are needed for various applications like urban planning, 3D city modelling and updating cadaster. Their automatic reconstruction, e.g. from airborne laser scanning data, as regularized shapes is therefore of high relevance. Today's airborne laser scanning technology can produce dense 3D point clouds with high accuracy, which makes it an eligible data source to reconstruct 2D building outlines or even 3D building models. In this paper, we propose an automatic building outline extraction and regularization method that implements a trade-off between enforcing strict shape restriction and allowing flexible angles using an energy minimization approach. The proposed procedure can be summarized for each building as follows: (1) an initial building outline is created from a given set of building points with the alpha shape algorithm; (2) a Hough transform is used to determine the main directions of the building and to extract line segments which are oriented accordingly; (3) the alpha shape boundary points are then repositioned to both follow these segments, but also to respect their original location, favoring long line segments and certain angles. The energy function that guides this trade-off is evaluated with the Viterbi algorithm.

  16. A semi-automatic method for ontology mapping

    OpenAIRE

    PEREZ, Laura Haide; Cechich, Alejandra; Buccella, Agustina

    2007-01-01

    Ontology mapping involves the task of finding similarities among overlapping sources by using ontologies. In a Federated System in which distributed, autonomous and heterogeneous information sources must be integrated, ontologies have emerged as tools to solve semantic heterogeneity problems. In this paper we propose a three-level approach that provides a semi-automatic method to ontology mapping. It performs some tasks automatically and guides the user in performing other tasks for which ...

  17. Automatic Extraction of High-Resolution Rainfall Series from Rainfall Strip Charts

    Science.gov (United States)

    Saa-Requejo, Antonio; Valencia, Jose Luis; Garrido, Alberto; Tarquis, Ana M.

    2015-04-01

    Soil erosion is a complex phenomenon involving the detachment and transport of soil particles, storage and runoff of rainwater, and infiltration. The relative magnitude and importance of these processes depend on a host of factors, including climate, soil, topography, cropping and land management practices, among others. Most models of soil erosion or hydrological processes need an accurate storm characterization. However, these data are not always available, and in some cases indirect models are generated to fill this gap. In Spain, rain intensity data for periods of less than 24 hours are known only back to 1924, and many studies are limited by this. In many cases these data are stored on rainfall strip charts at the meteorological stations but have not been transferred into numerical form. To overcome this deficiency in the raw data, a process of information extraction from large amounts of rainfall strip charts has been implemented by means of computer software. A method has been developed that largely automates the labour-intensive extraction work, based on van Piggelen et al. (2011). The method consists of the following five basic steps: 1) scanning the charts to high-resolution digital images, 2) manually and visually registering relevant meta-information from the charts and pre-processing, 3) applying automatic curve extraction software in a batch process to determine the coordinates of cumulative rainfall lines on the images (main step), 4) post-processing the curves that were not correctly determined in step 3, and 5) aggregating the cumulative rainfall in pixel coordinates to the desired time resolution. A colour detection procedure is introduced that automatically separates the background of the charts and rolls from the grid and subsequently the rainfall curve. The rainfall curve is detected by minimization of a cost function. Some utilities have been added to improve on the previous work and automate some auxiliary processes: readjusting the bands properly, merging bands when

  18. Automatic target recognition apparatus and method

    Energy Technology Data Exchange (ETDEWEB)

    Baumgart, Chris W. (Santa Fe, NM); Ciarcia, Christopher A. (Los Alamos, NM)

    2000-01-01

    An automatic target recognition apparatus (10) is provided, having a video camera/digitizer (12) for producing a digitized image signal (20) representing an image containing therein objects which objects are to be recognized if they meet predefined criteria. The digitized image signal (20) is processed within a video analysis subroutine (22) residing in a computer (14) in a plurality of parallel analysis chains such that the objects are presumed to be lighter in shading than the background in the image in three of the chains and further such that the objects are presumed to be darker than the background in the other three chains. In two of the chains the objects are defined by surface texture analysis using texture filter operations. In another two of the chains the objects are defined by background subtraction operations. In yet another two of the chains the objects are defined by edge enhancement processes. In each of the analysis chains a calculation operation independently determines an error factor relating to the probability that the objects are of the type which should be recognized, and a probability calculation operation combines the results of the analysis chains.

  19. FOLKSONOMIES VERSUS AUTOMATIC KEYWORD EXTRACTION: AN EMPIRICAL STUDY

    OpenAIRE

    Al-Khalifa, Hend S.; Davis, Hugh C.

    2006-01-01

    This paper reports on an evaluation of the keywords produced by the Yahoo API context-based term extractor compared to a folksonomy set for the same website. The evaluation is carried out in two ways: automatically, by measuring the percentage of overlap between the folksonomy set and the Yahoo keyword set; and subjectively, by asking a human indexer to rate the quality of the keywords generated by both systems. The result of the experiment will be considered as evidence for the rich semantics...

  20. Automatic extraction of gene ontology annotation and its correlation with clusters in protein networks

    Directory of Open Access Journals (Sweden)

    Mazo Ilya

    2007-07-01

    Full Text Available Abstract Background Uncovering cellular roles of a protein is a task of tremendous importance and complexity that requires dedicated experimental work as well as often sophisticated data mining and processing tools. Protein functions, often referred to as its annotations, are believed to manifest themselves through topology of the networks of inter-proteins interactions. In particular, there is a growing body of evidence that proteins performing the same function are more likely to interact with each other than with proteins with other functions. However, since functional annotation and protein network topology are often studied separately, the direct relationship between them has not been comprehensively demonstrated. In addition to having the general biological significance, such demonstration would further validate the data extraction and processing methods used to compose protein annotation and protein-protein interactions datasets. Results We developed a method for automatic extraction of protein functional annotation from scientific text based on the Natural Language Processing (NLP technology. For the protein annotation extracted from the entire PubMed, we evaluated the precision and recall rates, and compared the performance of the automatic extraction technology to that of manual curation used in public Gene Ontology (GO annotation. In the second part of our presentation, we reported a large-scale investigation into the correspondence between communities in the literature-based protein networks and GO annotation groups of functionally related proteins. We found a comprehensive two-way match: proteins within biological annotation groups form significantly denser linked network clusters than expected by chance and, conversely, densely linked network communities exhibit a pronounced non-random overlap with GO groups. We also expanded the publicly available GO biological process annotation using the relations extracted by our NLP technology

  1. Beam extraction system in AIC-144 automatic isochronous cyclotron

    International Nuclear Information System (INIS)

    Project of beam extraction system in Cracow AIC-144 cyclotron is described. The problems of increase of beam emittance, and change of the magnetic field in the cyclotron chamber are discussed. Expected extraction coefficient of the beam is about 0.7. (S.B.)

  2. Data mining of geospatial data: combining visual and automatic methods

    OpenAIRE

    Demšar, Urška

    2006-01-01

    Most of the largest databases currently available have a strong geospatial component and contain potentially useful information which might be of value. The discipline concerned with extracting this information and knowledge is data mining. Knowledge discovery is performed by applying automatic algorithms which recognise patterns in the data. Classical data mining algorithms assume that data are independently generated and identically distributed. Geospatial data are multidimensional, spatial...

  3. Automatic extraction of insulators from 3D LiDAR data of an electrical substation

    Science.gov (United States)

    Arastounia, M.; Lichti, D. D.

    2013-10-01

    A considerable percentage of power outages are caused by animals that come into contact with conductive elements of electrical substations. These can be prevented by insulating conductive electrical objects, for which a 3D as-built plan of the substation is crucial. This research aims to create such a 3D as-built plan using terrestrial LiDAR data, while in this paper the aim is to extract insulators, which are key objects in electrical substations. This paper proposes a segmentation method based on a new approach to finding the principal direction of the points' distribution. This is done by forming and analysing the distribution matrix, whose elements are the range of points in 9 different directions in 3D space. Comparison of the computational performance of our method with PCA (principal component analysis) shows that our approach is 25% faster, since it utilizes zero-order moments while PCA computes the first- and second-order moments, which is more time-consuming. A knowledge-based approach has been developed to automatically recognize points on insulators. The method utilizes known insulator properties such as diameter and the number and spacing of their rings. The results achieved indicate that 24 out of 27 insulators could be recognized, while the 3 unrecognized ones were highly occluded. Check point analysis was performed by manually cropping all points on insulators. The results of the check point analysis show that the accuracy, precision and recall of insulator recognition are 98%, 86% and 81%, respectively. It is concluded that automatic object extraction from electrical substations using only LiDAR data is not only possible but also promising. Moreover, our developed approach to determining the directional distribution of points is computationally more efficient for segmentation of objects in electrical substations compared to PCA. Finally, our knowledge-based method is promising for recognizing points on electrical objects, as it was successfully applied for
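    The zero-order-moment idea behind the segmentation step can be sketched as follows; the nine direction vectors are an assumed example, since the record does not list the actual direction set.

```python
import numpy as np

# Nine example directions in 3D space (assumed for illustration only).
NINE_DIRECTIONS = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                            [1, 1, 0], [1, 0, 1], [0, 1, 1],
                            [1, 1, 1], [1, -1, 0], [1, 0, -1]], dtype=float)

def directional_ranges(points, directions=NINE_DIRECTIONS):
    """Zero-order 'distribution matrix' idea: project the points onto each
    direction and take the range (max - min) of the projections. This needs
    no first- or second-order moments, which is why it is cheaper than PCA."""
    unit = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    proj = points @ unit.T                       # shape (n_points, n_directions)
    return proj.max(axis=0) - proj.min(axis=0)   # range along each direction
```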

  4. Comparison of mentha extracts obtained by different extraction methods

    OpenAIRE

    Milić Slavica; Lepojević Žika; Adamović Dušan; Mujić Ibrahim; Zeković Zoran

    2006-01-01

    Different methods of mentha extraction, such as steam distillation, extraction by methylene chloride (Soxhlet extraction) and supercritical fluid extraction (SFE) by carbon dioxide (CO2), were investigated. SFE by CO2 was performed at a pressure of 100 bar and a temperature of 40°C. The extraction yield, as well as the qualitative and quantitative composition of the obtained extracts, determined by the GC-MS method, were compared.

  5. Automatic Extraction of Optimal Endmembers from Airborne Hyperspectral Imagery Using Iterative Error Analysis (IEA) and Spectral Discrimination Measurements

    Directory of Open Access Journals (Sweden)

    Ahram Song

    2015-01-01

    Full Text Available Pure surface materials denoted by endmembers play an important role in hyperspectral processing in various fields. Many endmember extraction algorithms (EEAs) have been proposed to find appropriate endmember sets. Most studies involving the automatic extraction of appropriate endmembers without a priori information have focused on N-FINDR. Although there are many different versions of N-FINDR algorithms, computational complexity issues still remain and these algorithms cannot handle the case where spectrally mixed materials are extracted as final endmembers. A sequential endmember extraction-based algorithm may be more effective when the number of endmembers to be extracted is unknown. In this study, we propose a simple but accurate method to automatically determine the optimal endmembers using such a method. The proposed method consists of three steps for determining the proper number of endmembers and for removing endmembers that are repeated or contain mixed signatures, using the Root Mean Square Error (RMSE) images obtained from Iterative Error Analysis (IEA) and spectral discrimination measurements. A synthetic hyperspectral image and two different airborne images, namely Airborne Imaging Spectrometer for Application (AISA) and Compact Airborne Spectrographic Imager (CASI) data, were tested using the proposed method, and our experimental results indicate that the final endmember set contained all of the distinct signatures without redundant endmembers and errors from mixed materials.

  6. Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video

    Science.gov (United States)

    Yeo, Boon-Lock; Liu, Bede

    1996-03-01

    Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.

  7. Apparatus and methods for hydrocarbon extraction

    Energy Technology Data Exchange (ETDEWEB)

    Bohnert, George W.; Verhulst, Galen G.

    2016-04-26

    Systems and methods for hydrocarbon extraction from hydrocarbon-containing material. Such systems and methods relate to extracting hydrocarbon from hydrocarbon-containing material employing a non-aqueous extractant. Additionally, such systems and methods relate to recovering and reusing non-aqueous extractant employed for extracting hydrocarbon from hydrocarbon-containing material.

  8. Method to extract uranium compounds

    International Nuclear Information System (INIS)

    The uranium compounds present in the gangue of phosphate ores are also to be determined and extracted with the proposed method. The gangue-water mixture in phosphate extraction is to be displaced, according to the invention, by a component which selectively dissolves the uranium compounds out of the gangue. The enriched solution is separated off and processed. Weak acids (e.g. phosphoric acid, acetic acid, citric acid), lyes (e.g. ammonium carbonate, soda) or salts (e.g. sodium hydrogen phosphate, NaHCO3, tartrates) are named as solution components. (UWI)

  9. Automatic Data Extraction from Websites for Generating Aquatic Product Market Information

    Institute of Scientific and Technical Information of China (English)

    YUAN Hong-chun; CHEN Ying; SUN Yue-fu

    2006-01-01

    The massive web-based information resources have led to an increasing demand for effective automatic retrieval of target information for web applications. This paper introduces a web-based data extraction tool that deploys various algorithms to locate, extract and filter tabular data from HTML pages and to transform them into new web-based representations. The tool has been applied in an aquaculture web application platform for extracting and generating aquatic product market information. Results prove that this tool is very effective in extracting the required data from web pages.

  10. Automatic Extraction of Leaf Characters from Herbarium Specimens

    OpenAIRE

    Corney, DPA; Clark, JY; Tang, HL; Wilkin, P

    2012-01-01

    Herbarium specimens are a vital resource in botanical taxonomy. Many herbaria are undertaking large-scale digitization projects to improve access and to preserve delicate specimens, and in doing so are creating large sets of images. Leaf characters are important for describing taxa and distinguishing between them and they can be measured from herbarium specimens. Here, we demonstrate that herbarium images can be analysed using suitable software and that leaf characters can be extracted automa...

  11. Automatic Extraction of DTM from Low Resolution Dsm by Twosteps Semi-Global Filtering

    Science.gov (United States)

    Zhang, Yanfeng; Zhang, Yongjun; Zhang, Yi; Li, Xin

    2016-06-01

    Automatically extracting DTM from DSM or LiDAR data by distinguishing non-ground points from ground points is an important issue. Many algorithms have been developed for this task; however, most of them are targeted at processing dense LiDAR data and lack the ability to derive DTM from low-resolution DSM. This is caused by the decreased distinction in elevation variation between steep terrain and surface objects. In this paper, a method called two-steps semi-global filtering (TSGF) is proposed to extract DTM from low-resolution DSM. Firstly, the DSM slope map is calculated and smoothed by SGF (semi-global filtering), which is then binarized and used as the mask of flat terrain. Secondly, the DSM is segmented under the restriction of the flat-terrain mask. Lastly, each segment is filtered with the semi-global algorithm in order to remove non-ground points, which produces the final DTM. The first SGF is based on the global distribution characteristic of large slopes, which distinguishes steep terrain from flat terrain. The second SGF is used to filter non-ground points of the DSM within flat-terrain segments. Therefore, non-ground points are removed robustly by the two SGF steps, while the shape of steep terrain is kept. Experiments on DSM generated from ZY3 imagery with a resolution of 10-30 m demonstrate the effectiveness of the proposed method.

  12. A fast and automatic mosaic method for high-resolution satellite images

    Science.gov (United States)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

    We proposed a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed according to geographical locations of the reference and mosaic images and feature points on both the reference and mosaic images are extracted by a scale-invariant feature transform (SIFT) algorithm only from the overlapped region. Then, the RANSAC method is used to match feature points of both images. Finally, the two images are fused into a seamlessly panoramic image by the simple linear weighted fusion method or other method. The proposed method is implemented in C++ language based on OpenCV and GDAL, and tested by Worldview-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.

  13. An Automatic High Efficient Method for Dish Concentrator Alignment

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2014-01-01

    for the alignment of faceted solar dish concentrator. The isosceles triangle configuration of facet’s footholds determines a fixed relation between light spot displacements and foothold movements, which allows an automatic determination of the amount of adjustments. Tests on a 25 kW Stirling Energy System dish concentrator verify the feasibility, accuracy, and efficiency of our method.

  14. Automatic apparatus for dispersing radiodiagnostic agents and method therefor

    International Nuclear Information System (INIS)

    This patent describes an apparatus and method for automatically measuring the activity of a radioactive solution, such as technetium 99, and diluting it with a diluent solution, such as a saline solution, to yield a preselected volume of the resultant radioactive solution having a preselected dose concentration

  15. The Automatic Start Method of Application Program Using API

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper introduces a method for the automatic start of an application program. By defining the Registry through API functions, the automatic start of a specified application program is accomplished when Windows 98 starts. This provides facilities for many computer application tasks.

  16. Automatic Extraction of Document Keyphrases for Use in Digital Libraries: Evaluation and Applications.

    Science.gov (United States)

    Jones, Steve; Paynter, Gordon W.

    2002-01-01

    Discussion of finding relevant documents in electronic document collections focuses on an evaluation of the Kea automatic keyphrase extraction algorithm which was developed by members of the New Zealand Digital Library Project. Results are based on evaluations by human assessors of the quality and appropriateness of Kea keyphrases. (Author/LRW)

  17. Automatic Extraction of Pathological Area in 2D MR Brain Scan

    Czech Academy of Sciences Publication Activity Database

    Dvořák, P.; Bartušek, Karel; Gescheidtová, E.

    Cambridge: The Electromagnetics Academy, 2014, s. 1885-1889. ISBN 978-1-934142-28-8. [PIERS 2014. Progress In Electromagnetics Research Symposium /35./. Guangzhou (CN), 25.08.2014-28.08.2014] R&D Projects: GA ČR GAP102/12/1104 Institutional support: RVO:68081731 Keywords : brain tumor * MRI * automatic extraction Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering

  18. Automatic intra-modality brain image registration method

    International Nuclear Information System (INIS)

    Full text: Registration of 3D images of brain of the same or different subjects has potential importance in clinical diagnosis, treatment planning and neurological research. The broad aim of our work is to produce an automatic and robust intra-modality, brain image registration algorithm for intra-subject and inter-subject studies. Our algorithm is composed of two stages. Initial alignment is achieved by finding the values of nine transformation parameters (representing translation, rotation and scale) that minimise the nonoverlapping regions of the head. This is achieved by minimisation of the sum of the exclusive OR of two binary head images, produced using the head extraction procedure described by Ardekani et al. (J Comput Assist Tomogr, 19:613-623, 1995). The initial alignment successfully determines the scale parameters and gross translation and rotation parameters. Fine alignment uses an objective function described for inter-modality registration in Ardekani et al. (ibid.). The algorithm segments one of the images to be aligned into a set of connected components using K-means clustering. Registration is achieved by minimising the K-means variance of the segmentation induced in the other image. Similarity of images of the same modality makes the method attractive for intra-modality registration. A 3D MR image, with voxel dimensions, 2x2x6 mm, was misaligned. The registered image shows visually accurate registration. The average displacement of a pixel from its correct location was measured to be 3.3 mm. The algorithm was tested on intra-subject MR images and was found to produce good qualitative results. Using the data available, the algorithm produced promising qualitative results in intra-subject registration. Further work is necessary in its application to intersubject registration, due to large variability in brain structure between subjects. Clinical evaluation of the algorithm for selected applications is required
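    The initial-alignment objective described above (minimizing the sum of the exclusive OR of two binary head images) can be illustrated with a coarse grid search over translations; rotation and scale, which are part of the original nine-parameter search, are omitted here for brevity, and the search range and step are assumptions.

```python
import numpy as np
from scipy.ndimage import shift

def xor_cost(mask_fixed, mask_moving, offset):
    """Non-overlap cost: sum of the exclusive OR of the two binary head masks
    after translating the moving mask by the given (z, y, x) offset."""
    moved = shift(mask_moving.astype(np.uint8), offset, order=0).astype(bool)
    return np.logical_xor(mask_fixed, moved).sum()

def coarse_translation(mask_fixed, mask_moving, search=10, step=2):
    """Grid-search the integer translation that minimises the XOR cost."""
    best, best_cost = (0, 0, 0), np.inf
    for dz in range(-search, search + 1, step):
        for dy in range(-search, search + 1, step):
            for dx in range(-search, search + 1, step):
                cost = xor_cost(mask_fixed, mask_moving, (dz, dy, dx))
                if cost < best_cost:
                    best, best_cost = (dz, dy, dx), cost
    return best
```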

  19. Automatic urban building boundary extraction from high resolution aerial images using an innovative model of active contours

    Science.gov (United States)

    Ahmadi, Salman; Zoej, M. J. Valadan; Ebadi, Hamid; Moghaddam, Hamid Abrishami; Mohammadzadeh, Ali

    2010-06-01

    The main objective of this research is to present a new method for building boundary detection and extraction based on the active contour model. Classical models of this type have several shortcomings: they require extensive initialization, they are sensitive to noise, and adjustment issues often become problematic with complex images. In this research, a new active contour model has been proposed that is optimized for automatic building extraction. This new active contour model, in comparison to the classical ones, can detect and extract building boundaries more accurately, and is capable of avoiding the detection of boundaries of features in the neighborhood of buildings, such as streets and trees. Finally, the detected building boundaries are generalized to obtain a regular shape for building boundaries. Tests with our proposed model demonstrate excellent accuracy in terms of building boundary extraction. However, due to the radiometric similarity between building roofs and the image background, our system fails to recognize a few buildings.

  20. Automatic extraction of ontological relations from Arabic text

    Directory of Open Access Journals (Sweden)

    Mohammed G.H. Al Zamil

    2014-12-01

    The proposed methodology has been designed to analyze Arabic text using lexical semantic patterns of the Arabic language according to a set of features. Next, the features have been abstracted and enriched with formal descriptions for the purpose of generalizing the resulted rules. The rules, then, have formulated a classifier that accepts Arabic text, analyzes it, and then displays related concepts labeled with its designated relationship. Moreover, to resolve the ambiguity of homonyms, a set of machine translation, text mining, and part of speech tagging algorithms have been reused. We performed extensive experiments to measure the effectiveness of our proposed tools. The results indicate that our proposed methodology is promising for automating the process of extracting ontological relations.

  1. A Method of Automatic Keyword Extraction Based on Word Span%基于词跨度的中文文本关键词自动提取方法

    Institute of Scientific and Technical Information of China (English)

    谢晋

    2012-01-01

    Considering the noise interference problem commonly found in keyword extraction, this paper proposes a new keyword extraction method for Chinese text based on word span. The proposed scheme measures the relative importance of a word to a text by the distance between the positions at which the word first and last appears in the given text. This distance, called the word span, indicates the scope over which the word appears. Since there is a significant difference between the word spans of keywords and noise words, the word span can be used to recognize and filter out noise precisely. The word span is incorporated into the traditional keyword weighting computation, together with features including frequency, location and part of speech (POS), and the whole extraction process consists of word segmentation, stop-word filtering, feature statistics and weight calculation, from which several keywords expressing the main idea of the article are selected. Experiments on the Fudan University Corpus, covering different types of texts, show that this approach improves the quality of keyword extraction and has stable performance on various texts.
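    An illustrative sketch of the word-span weighting follows; the exact combination of frequency, location, and part-of-speech features is not given in the abstract, so the formula below is an assumption that simply scales term frequency by the normalized span.

```python
from collections import Counter

def word_span_weights(tokens, stopwords=frozenset()):
    """Weight candidate keywords by term frequency scaled with a word-span
    factor: span = (last occurrence - first occurrence + 1) / document length.
    Noise words that burst in one spot get a small span and are ranked down."""
    tokens = [t for t in tokens if t not in stopwords]
    n = max(len(tokens), 1)
    first, last = {}, {}
    for i, tok in enumerate(tokens):
        first.setdefault(tok, i)
        last[tok] = i
    tf = Counter(tokens)
    return {tok: (tf[tok] / n) * ((last[tok] - first[tok] + 1) / n) for tok in tf}
```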

  2. Uncertain Training Data Edition for Automatic Object-Based Change Map Extraction

    Science.gov (United States)

    Hajahmadi, S.; Mokhtarzadeh, M.; Mohammadzadeh, A.; Valadanzouj, M. J.

    2013-09-01

    Due to the rapid transformation of societies and the consequent growth of cities, it is necessary to study these changes in order to achieve better control and management of urban areas and to assist decision-makers. Change detection involves the ability to quantify temporal effects using multi-temporal data sets. The available maps of the study area are one of the most important sources for this purpose. Although old databases and maps are a great resource, it is more than likely that the training data extracted from them contain errors, which affects the classification procedure; as a result, editing of the training samples is essential. Due to the urban nature of the area studied and the problems caused by pixel-based methods, object-based classification is applied. To this end, the image is segmented into 4 scale levels using a multi-resolution segmentation procedure. After obtaining the segments at the required levels, training samples are extracted automatically using the existing old map. Due to the old nature of the map, these samples are uncertain and contain erroneous data. To handle this issue, an editing process is proposed based on the K-nearest neighbour and k-means algorithms. Next, the image is classified in a multi-resolution object-based manner and the effects of training sample refinement are evaluated. As a final step, this classified image is compared with the existing map and the changed areas are detected.
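    A minimal sketch of the K-nearest-neighbour editing step is shown below; the agreement threshold and the use of scikit-learn are assumptions, and the k-means variant mentioned in the record is not shown.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def edit_training_samples(features, labels, k=5, agreement=0.6):
    """Discard training samples whose label disagrees with most of their k
    nearest neighbours -- a simple stand-in for editing the uncertain samples
    extracted from the outdated map."""
    features, labels = np.asarray(features), np.asarray(labels)
    knn = KNeighborsClassifier(n_neighbors=k + 1).fit(features, labels)

    # The query set equals the training set, so each sample is its own first
    # neighbour; drop that column before voting.
    neighbour_idx = knn.kneighbors(features, return_distance=False)[:, 1:]
    votes = (labels[neighbour_idx] == labels[:, None]).mean(axis=1)
    keep = votes >= agreement
    return features[keep], labels[keep]
```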

  3. Multiple Adaptive Neuro-Fuzzy Inference System with Automatic Features Extraction Algorithm for Cervical Cancer Recognition

    Directory of Open Access Journals (Sweden)

    Mohammad Subhi Al-batah

    2014-01-01

    Full Text Available To date, cancer of uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods (i.e., Pap smear and liquid-based cytology (LBC)) to screen for cervical cancer are time-consuming and dependent on the skill of the cytopathologist and thus are rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for the recognition process. The MANFIS contains a set of ANFIS models which are arranged in parallel combination to produce a model with a multi-input-multi-output structure. The system is capable of classifying a cervical cell image into three groups, namely, normal, low-grade squamous intraepithelial lesion (LSIL) and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove the capability of the AFE algorithm to be as effective as manual extraction by human experts, while the proposed MANFIS produces a good classification performance with 94.2% accuracy.

  4. Multiple adaptive neuro-fuzzy inference system with automatic features extraction algorithm for cervical cancer recognition.

    Science.gov (United States)

    Al-batah, Mohammad Subhi; Isa, Nor Ashidi Mat; Klaib, Mohammad Fadel; Al-Betar, Mohammed Azmi

    2014-01-01

    To date, cancer of uterine cervix is still a leading cause of cancer-related deaths in women worldwide. The current methods (i.e., Pap smear and liquid-based cytology (LBC)) to screen for cervical cancer are time-consuming and dependent on the skill of the cytopathologist and thus are rather subjective. Therefore, this paper presents an intelligent computer vision system to assist pathologists in overcoming these problems and, consequently, produce more accurate results. The developed system consists of two stages. In the first stage, the automatic features extraction (AFE) algorithm is performed. In the second stage, a neuro-fuzzy model called multiple adaptive neuro-fuzzy inference system (MANFIS) is proposed for recognition process. The MANFIS contains a set of ANFIS models which are arranged in parallel combination to produce a model with multi-input-multioutput structure. The system is capable of classifying cervical cell image into three groups, namely, normal, low-grade squamous intraepithelial lesion (LSIL) and high-grade squamous intraepithelial lesion (HSIL). The experimental results prove the capability of the AFE algorithm to be as effective as the manual extraction by human experts, while the proposed MANFIS produces a good classification performance with 94.2% accuracy. PMID:24707316

  5. Automatic Method for Visual Grading of Seed Food Products

    OpenAIRE

    Dubosclard, Pierre; Larnier, Stanislas; Konik, Hubert; Herbulot, Ariane; Devy, Michel

    2014-01-01

    This paper presents an automatic method for visual grading, designed to solve the industrial problem of evaluating seed lots. The sample is thrown in bulk onto a tray placed in a chamber for acquiring a color image. An image processing method has been developed to separate and characterize each seed. The approach adopted for the segmentation step is based on the use of marked point processes and active contours, leading to tackling the problem with an energy minimization technique.

  6. An automatic segmentation method for fast imaging in PET

    International Nuclear Information System (INIS)

    A new segmentation method has been developed for fast PET imaging. The technique automatically segments the transmission images into different anatomical regions and efficiently reduces the PET transmission scan time. The results show that this method requires only a 3-minute scan, instead of the original 15-30 minute scan, which is sufficient for attenuation correction of the PET images. The approach has been successfully tested on both phantom and clinical data

  7. Automatic indicator dilution curve extraction in dynamic-contrast enhanced imaging using spectral clustering

    Science.gov (United States)

    Saporito, Salvatore; Herold, Ingeborg HF; Houthuizen, Patrick; van den Bosch, Harrie CM; Korsten, Hendrikus HM; van Assen, Hans C.; Mischi, Massimo

    2015-07-01

    Indicator dilution theory provides a framework for the measurement of several cardiovascular parameters. Recently, dynamic imaging and contrast agents have been proposed to apply the method in a minimally invasive way. However, the use of contrast-enhanced sequences requires the definition of regions of interest (ROIs) in the dynamic image series; a time-consuming and operator dependent task, commonly performed manually. In this work, we propose a method for the automatic extraction of indicator dilution curves, exploiting the time domain correlation between pixels belonging to the same region. Individual time intensity curves were projected into a low dimensional subspace using principal component analysis; subsequently, clustering was performed to identify the different ROIs. The method was assessed on clinically available DCE-MRI and DCE-US recordings, comparing the derived IDCs with those obtained manually. The robustness to noise of the proposed approach was shown on simulated data. The tracer kinetic parameters derived on real images were in agreement with those obtained from manual annotation. The presented method is a clinically useful preprocessing step prior to further ROI-based cardiac quantifications.
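    A compact sketch of the PCA-plus-clustering pipeline is given below; the number of principal components and regions are assumptions, and for large images one would subsample pixels before building the affinity matrix used by spectral clustering.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import SpectralClustering

def extract_indicator_dilution_curves(frames, n_components=3, n_regions=4):
    """Project each pixel's time-intensity curve into a low-dimensional PCA
    subspace, cluster the projections with spectral clustering, and return one
    mean indicator-dilution curve per region (frames: array (time, rows, cols))."""
    t, rows, cols = frames.shape
    curves = frames.reshape(t, -1).T                        # (pixels, time)
    embedded = PCA(n_components=n_components).fit_transform(curves)

    labels = SpectralClustering(n_clusters=n_regions,
                                affinity="rbf").fit_predict(embedded)
    idcs = np.array([curves[labels == k].mean(axis=0) for k in range(n_regions)])
    return idcs, labels.reshape(rows, cols)
```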

  8. Automatic indicator dilution curve extraction in dynamic-contrast enhanced imaging using spectral clustering

    International Nuclear Information System (INIS)

    Indicator dilution theory provides a framework for the measurement of several cardiovascular parameters. Recently, dynamic imaging and contrast agents have been proposed to apply the method in a minimally invasive way. However, the use of contrast-enhanced sequences requires the definition of regions of interest (ROIs) in the dynamic image series; a time-consuming and operator dependent task, commonly performed manually. In this work, we propose a method for the automatic extraction of indicator dilution curves, exploiting the time domain correlation between pixels belonging to the same region. Individual time intensity curves were projected into a low dimensional subspace using principal component analysis; subsequently, clustering was performed to identify the different ROIs. The method was assessed on clinically available DCE-MRI and DCE-US recordings, comparing the derived IDCs with those obtained manually. The robustness to noise of the proposed approach was shown on simulated data. The tracer kinetic parameters derived on real images were in agreement with those obtained from manual annotation. The presented method is a clinically useful preprocessing step prior to further ROI-based cardiac quantifications. (paper)

  9. Method of purifying neutral organophosphorus extractants

    Science.gov (United States)

    Horwitz, E. Philip; Gatrone, Ralph C.; Chiarizia, Renato

    1988-01-01

    A method for removing acidic contaminants from neutral mono and bifunctional organophosphorous extractants by contacting the extractant with a macroporous cation exchange resin in the H.sup.+ state followed by contact with a macroporous anion exchange resin in the OH.sup.- state, whereupon the resins take up the acidic contaminants from the extractant, purifying the extractant and improving its extraction capability.

  10. Machine learning based sample extraction for automatic speech recognition using dialectal Assamese speech.

    Science.gov (United States)

    Agarwalla, Swapna; Sarma, Kandarpa Kumar

    2016-06-01

    Automatic Speaker Recognition (ASR) and related issues are continuously evolving as inseparable elements of Human Computer Interaction (HCI). With the assimilation of emerging concepts like big data and the Internet of Things (IoT) as extended elements of HCI, ASR techniques are passing through a paradigm shift. Of late, learning-based techniques have started to receive greater attention from research communities related to ASR, owing to the fact that they possess a natural ability to mimic biological behavior, which aids ASR modeling and processing. Current learning-based ASR techniques are evolving further with the incorporation of big data and IoT-like concepts. In this paper, we report certain approaches based on machine learning (ML) used for the extraction of relevant samples from a big data space, and we apply them to ASR using soft computing techniques for Assamese speech with dialectal variations. A class of ML techniques comprising the basic Artificial Neural Network (ANN) in feedforward (FF) and Deep Neural Network (DNN) forms, using raw speech, extracted features and frequency-domain forms, is considered. The Multi Layer Perceptron (MLP) is configured with inputs in several forms to learn class information obtained using clustering and manual labeling. DNNs are also used to extract specific sentence types. Initially, relevant samples are selected and assimilated from a large storage. Next, a few conventional methods are used for feature extraction of a few selected types. The features comprise both spectral and prosodic types. These are applied to Recurrent Neural Network (RNN) and Fully Focused Time Delay Neural Network (FFTDNN) structures to evaluate their performance in recognizing mood, dialect, speaker and gender variations in dialectal Assamese speech. The system is tested under several background noise conditions by considering the recognition rates (obtained using confusion matrices and manually) and computation time

  11. [A new method of automatic analysis of tongue deviation using self-correction].

    Science.gov (United States)

    Zhu, Mingfeng; Du, Jianqiang; Meng, Fan; Zhang, Kang; Ding, Chenghua

    2012-02-01

    This article analyzes the existing method for tongue deviation analysis and introduces a new method with self-correction that avoids the shortcomings of the old one. Current central-axis extraction methods are compared and analyzed, and the results show that they are not suitable for extracting the central axis of tongue images. To overcome the problem that the old method relied on area symmetry to extract the central axis, which could fail to find it, we introduce a shape-symmetry analysis method for central-axis extraction. This method is able to correct the edge of the tongue root automatically, and it improves the accuracy of central-axis extraction. In addition, a mouth-corner analysis method based on the hue variation of tongue images is introduced. In a comparative experiment, the proposed method was more accurate and more efficient than the old one. PMID:22404028

  12. Towards Automatic Music Transcription: Extraction of MIDI-Data out of Polyphonic Piano Music

    Directory of Open Access Journals (Sweden)

    Jens Wellhausen

    2005-06-01

    Full Text Available Driven by the increasing amount of music available electronically, the need for automatic music search and retrieval systems is becoming more and more important. In this paper an algorithm for the automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications and music analysis. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined to extract the notes played. An algorithm for chord separation based on Independent Subspace Analysis is presented. Finally, the results are used to build a MIDI file.

  13. Automatic extraction of forward stroke volume using dynamic 11C-acetate PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik; Kim, Won Yong; Wiggers, Henrik; Frøkiær, Jørgen; Sørensen, Jens

    TruePoint 64 PET/CT scanner after bolus injection of 399±27 MBq of 11C-acetate. The LV-aortic time-activity curve (TAC) was extracted automatically from dynamic PET data using cluster analysis. The first-pass peak was derived by automatic extrapolation of the down-slope of the TAC. FSV was then...... calculated as the injected dose divided by the product of heart rate and the area under the curve of the first-pass peak. Gold standard FSV was measured in the left ventricular outflow tract by cardiovascular magnetic resonance using phase-contrast velocity mapping within two weeks of PET imaging. Results...

  14. A Method of Automatic Extraction of Image Control Points for UAV Image Based on POS Data%一种基于POS数据的无人机影像自动展绘控制点方法

    Institute of Scientific and Technical Information of China (English)

    鲁恒; 李永树; 江禹

    2011-01-01

    Unmanned Aerial Vehicle (UAV) images are characterized by a high degree of overlap and a heavy image-processing workload. In order to improve the efficiency of UAV photogrammetry and take advantage of the rapid mapping enabled by UAV technology, a method of extracting image control points from corrected POS data was put forward. Based on the principle of UAV POS data correction, a POS data correction model was established and the POS data error-correction parameters were acquired by laying out a small number of control points in the regional network; the corrected POS data were then used to plot control points automatically on the UAV images. The study results show that the method has good practical value for the rapid processing of UAV images.

  15. Automatic Extraction of Femur Contours from Calibrated X-Ray Images using Statistical Information

    Directory of Open Access Journals (Sweden)

    Xiao Dong

    2007-09-01

    Full Text Available Automatic identification and extraction of bone contours from x-ray images is an essential first step for further medical image analysis. In this paper we propose a 3D statistical model based framework for proximal femur contour extraction from calibrated x-ray images. The automatic initialization to align the 3D model with the x-ray images is solved by an Estimation of Bayesian Network Algorithm that fits a simplified multiple-component geometrical model of the proximal femur to the x-ray data. Landmarks can be extracted from the geometrical model for the initialization of the 3D statistical model. The contour extraction is then accomplished by a joint registration and segmentation procedure. We iteratively update the extracted bone contours and an instanced 3D model to fit the x-ray images. Taking the projected silhouettes of the instanced 3D model on the registered x-ray images as templates, bone contours can be extracted by graphical-model-based Bayesian inference. The 3D model can then be updated by a non-rigid 2D/3D registration between the 3D statistical model and the extracted bone contours. Preliminary experiments on clinical data sets verified its validity.

  16. Sleep Spindles as an Electrographic Element: Description and Automatic Detection Methods

    Science.gov (United States)

    Maquet, Pierre

    2016-01-01

    Sleep spindle is a peculiar oscillatory brain pattern which has been associated with a number of sleep (isolation from exteroceptive stimuli, memory consolidation) and individual characteristics (intellectual quotient). Oddly enough, the definition of a spindle is both incomplete and restrictive. In consequence, there is no consensus about how to detect spindles. Visual scoring is cumbersome and user dependent. To analyze spindle activity in a more robust way, automatic sleep spindle detection methods are essential. Various algorithms were developed, depending on individual research interest, which hampers direct comparisons and meta-analyses. In this review, sleep spindle is first defined physically and topographically. From this general description, we tentatively extract the main characteristics to be detected and analyzed. A nonexhaustive list of automatic spindle detection methods is provided along with a description of their main processing principles. Finally, we propose a technique to assess the detection methods in a robust and comparable way.

  17. Sleep Spindles as an Electrographic Element: Description and Automatic Detection Methods

    Directory of Open Access Journals (Sweden)

    Dorothée Coppieters ’t Wallant

    2016-01-01

    Full Text Available Sleep spindle is a peculiar oscillatory brain pattern which has been associated with a number of sleep (isolation from exteroceptive stimuli, memory consolidation) and individual characteristics (intellectual quotient). Oddly enough, the definition of a spindle is both incomplete and restrictive. In consequence, there is no consensus about how to detect spindles. Visual scoring is cumbersome and user dependent. To analyze spindle activity in a more robust way, automatic sleep spindle detection methods are essential. Various algorithms were developed, depending on individual research interest, which hampers direct comparisons and meta-analyses. In this review, sleep spindle is first defined physically and topographically. From this general description, we tentatively extract the main characteristics to be detected and analyzed. A nonexhaustive list of automatic spindle detection methods is provided along with a description of their main processing principles. Finally, we propose a technique to assess the detection methods in a robust and comparable way.

  18. An Automatic Building Extraction and Regularisation Technique Using LiDAR Point Cloud Data and Orthoimage

    Directory of Open Access Journals (Sweden)

    Syed Ali Naqi Gilani

    2016-03-01

    Full Text Available The development of robust and accurate methods for automatic building detection and regularisation using multisource data continues to be a challenge due to point cloud sparsity, high spectral variability, differences among urban objects, surrounding complexity, and data misalignment. To address these challenges, constraints on an object's size, height, area, and orientation are generally imposed, which adversely affects detection performance. Buildings that are small, under shadow or partly occluded are often discarded during the elimination of superfluous objects. To overcome these limitations, a methodology is developed to extract and regularise buildings using features from point cloud and orthoimagery. The building delineation process is carried out by identifying candidate building regions and segmenting them into grids. Vegetation elimination, building detection and extraction of their partially occluded parts are achieved by synthesising the point cloud and image data. Finally, the detected buildings are regularised by exploiting image lines in the building regularisation process. The detection and regularisation processes have been evaluated using the ISPRS benchmark and four Australian data sets which differ in point density (1 to 29 points/m2), building sizes, shadows, terrain, and vegetation. Results indicate 83% to 93% per-area completeness with correctness above 95%, demonstrating the robustness of the approach. The absence of over- and many-to-many segmentation errors in the ISPRS data set indicates that the technique has higher per-object accuracy. When compared with six existing similar methods, the proposed detection and regularisation approach performs significantly better on the more complex (Australian) data sets, and performs better than or equal to its counterparts on the ISPRS benchmark.

  19. Clinical application of automatic extraction of left ventricular contours. Evaluation of left ventricular volumes by contrast-enhanced breath-hold ultrafast cine MR imaging

    International Nuclear Information System (INIS)

    To assess the validity of automatic extraction of left ventricular inner contours based on contrast-enhanced ultrafast cine MR images, phantom (n=15) and clinical (n=60) studies were performed. In the phantom study, left ventricular volumes obtained by the biplane modified Simpson's method based on automatic extraction of the left ventricular inner contour were significantly correlated with the phantom volumes (r=0.991). In the clinical study, contrast-enhanced breath-hold ultrafast cine MR imaging was shown to provide accurate cardiac images with a high success rate (89% in the horizontal long-axis section and 88% in the vertical long-axis section). Conventional extraction of the left ventricular inner contour, however, depends on the operator's manual tracing, and the time required for data analysis is long. The automatic extraction time of the left ventricular inner contour was 4 seconds/frame, whereas conventional manual tracing took 60-90 seconds/frame. Comparison of left ventricular volumes showed a high correlation between contrast-enhanced ultrafast cine MR imaging (monoplane area-length and biplane modified Simpson's methods based on automatic extraction of the left ventricle) and digital subtraction left ventriculography (biplane area-length method). (author)

  20. Optimization of Doppler velocity echocardiographic measurements using an automatic contour detection method.

    Science.gov (United States)

    Gaillard, E; Kadem, L; Pibarot, P; Durand, L-G

    2009-01-01

    Intra- and inter-observer variability in Doppler velocity echocardiographic measurements (DVEM) is a significant issue. Indeed, imprecision in DVEM can lead to diagnostic errors, particularly in quantifying the severity of heart valve dysfunction. To reduce the variability of DVEM and the time it requires, we have developed an automatic method of Doppler velocity wave contour detection based on active contour models. To validate the new method, results obtained with it were compared to those obtained manually by an experienced echocardiographer on Doppler echocardiographic images of left ventricular outflow tract and transvalvular flow velocity signals recorded in 30 patients, 15 with aortic stenosis and 15 with mitral stenosis. We focused on three essential variables that are measured routinely by Doppler echocardiography in the clinical setting: the maximum velocity, the mean velocity and the velocity-time integral. Comparison between the two methods showed very good agreement (linear correlation coefficient R(2) = 0.99 between the automatically and the manually extracted variables). Moreover, the computation time was very short, about 5 s. Applied to DVEM, this new method could therefore provide a useful tool to eliminate the intra- and inter-observer variability associated with DVEM and thereby improve the diagnosis of cardiovascular disease. It could also allow the echocardiographer to complete these measurements within a much shorter period of time compared to the standard manual tracing method. From a practical point of view, the model developed can be easily implemented in a standard echocardiographic system. PMID:19965162

  1. Automatic Feature Extraction, Categorization and Detection of Malicious Code in Android Applications

    OpenAIRE

    Muhammad Zuhair Qadir; Atif Nisar Jilani; Hassam Ullah Sheikh

    2014-01-01

    Since Android has recently become a popular software platform for mobile devices, offering almost the same functionality as personal computers, malware has become a big concern. As the number of new Android applications is expected to increase rapidly in the near future, there is a need for quick and efficient automatic malware detection. In this paper, we define a simple static analysis approach that first extracts the features of an Android application based on intents and categori...

  2. Automatic facial feature extraction and expression recognition based on neural network

    OpenAIRE

    Khandait, S. P.; Dr. R.C.Thool; P.D.Khandait

    2012-01-01

    In this paper, an approach to the problem of automatic facial feature extraction from a still frontal posed image, and the classification and recognition of facial expression and hence the emotion and mood of a person, is presented. A feed-forward back-propagation neural network is used as a classifier for classifying the expressions of a supplied face into seven basic categories: surprise, neutral, sad, disgust, fear, happy and angry. For face portion segmentation and localization, morphological image...

  3. Automatic Recognition Method for Optical Measuring Instruments Based on Machine Vision

    Institute of Scientific and Technical Information of China (English)

    SONG Le; LIN Yuchi; HAO Liguo

    2008-01-01

    Based on a comprehensive study of various algorithms, the automatic recognition of traditional ocular optical measuring instruments is realized. Taking a universal tool microscope (UTM) lens view image as an example, a 2-layer automatic recognition model for data reading is established after adopting a series of pre-processing algorithms. This model is an optimal combination of the correlation-based template matching method and a concurrent back propagation (BP) neural network. Multiple complementary feature extraction is used in generating the eigenvectors of the concurrent network. In order to improve fault-tolerance capacity, rotation-invariant features based on Zernike moments are extracted from digit characters and a 4-dimensional group of outline features is also obtained. Moreover, the operating time and reading accuracy can be adjusted dynamically by setting the threshold value. The experimental results indicate that the newly developed algorithm has high recognition precision and working speed. The average reading accuracy reaches 97.23%. The recognition method can automatically obtain the results of optical measuring instruments rapidly and stably without modifying their original structure, which meets the application requirements.

  4. Semi-Automatically Extracting FAQs to Improve Accessibility of Software Development Knowledge

    CERN Document Server

    Henß, Stefan; Mezini, Mira

    2012-01-01

    Frequently asked questions (FAQs) are a popular way to document software development knowledge. As creating such documents is expensive, this paper presents an approach for automatically extracting FAQs from sources of software development discussion, such as mailing lists and Internet forums, by combining techniques of text mining and natural language processing. We apply the approach to popular mailing lists and carry out a survey among software developers to show that it is able to extract high-quality FAQs that may be further improved by experts.

  5. Automatic Identification and Data Extraction from 2-Dimensional Plots in Digital Documents

    CERN Document Server

    Brouwer, William; Das, Sujatha; Mitra, Prasenjit; Giles, C L

    2008-01-01

    Most search engines index the textual content of documents in digital libraries. However, scholarly articles frequently report important findings in figures for visual impact and the contents of these figures are not indexed. These contents are often invaluable to the researcher in various fields, for the purposes of direct comparison with their own work. Therefore, searching for figures and extracting figure data are important problems. To the best of our knowledge, there exists no tool to automatically extract data from figures in digital documents. If we can extract data from these images automatically and store them in a database, an end-user can query and combine data from multiple digital documents simultaneously and efficiently. We propose a framework based on image analysis and machine learning to extract information from 2-D plot images and store them in a database. The proposed algorithm identifies a 2-D plot and extracts the axis labels, legend and the data points from the 2-D plot. We also segrega...

  6. Automatic parameter extraction techniques in IC-CAP for a compact double gate MOSFET model

    International Nuclear Information System (INIS)

    In this paper, automatic parameter extraction techniques of Agilent's IC-CAP modeling package are presented to extract our explicit compact model parameters. This model is developed based on a surface potential model and coded in Verilog-A. The model has been adapted to Trigate MOSFETs, includes short channel effects (SCEs) and allows accurate simulations of the device characteristics. The parameter extraction routines provide an effective way to extract the model parameters. The techniques minimize the discrepancy and error between the simulation results and the available experimental data for more accurate parameter values and reliable circuit simulation. Behavior of the second derivative of the drain current is also verified and proves to be accurate and continuous through the different operating regimes. The results show good agreement with measured transistor characteristics under different conditions and through all operating regimes. (paper)

  7. A technique for automatically extracting useful field of view and central field of view images

    Science.gov (United States)

    Pandey, Anil Kumar; Sharma, Param Dev; Aheer, Deepak; Kumar, Jay Prakash; Sharma, Sanjay Kumar; Patel, Chetan; Kumar, Rakesh; Bal, Chandra Sekhar

    2016-01-01

    Introduction: It is essential to ensure the uniform response of a single photon emission computed tomography gamma camera system before using it for clinical studies, by exposing it to a uniform flood source. Vendor-specific acquisition and processing protocols provide for studying flood source images along with quantitative uniformity parameters such as integral and differential uniformity. However, a significant difficulty is that the time required to acquire a flood source image varies from 10 to 35 min, depending on both the activity of the Cobalt-57 flood source and the prespecified counts in the vendor's protocol (usually 4000K-10,000K counts). If the acquired total counts are less than the prespecified total counts, the vendor's uniformity processing protocol does not proceed with the computation of the quantitative uniformity parameters. In this study, we have developed and verified a technique for reading the flood source image, removing unwanted information, and automatically extracting and saving the useful field of view and central field of view images for the calculation of the uniformity parameters. Materials and Methods: This was implemented using MATLAB R2013b running on the Ubuntu operating system and was verified by applying it to simulated and real flood source images. Results: The accuracy of the technique was found to be encouraging, especially in view of the practical difficulties with vendor-specific protocols. Conclusion: It may be used as a preprocessing step while calculating uniformity parameters of the gamma camera in less time and with fewer constraints. PMID:27095858
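
    A minimal sketch of the cropping step described above is given below. The record's implementation is in MATLAB; this Python sketch is only illustrative, and it assumes the common convention that the useful field of view (UFOV) is the bounding box of pixels above a small fraction of the image maximum and that the central field of view (CFOV) is the central 75% of the UFOV, neither of which is stated in the record.

```python
# Illustrative sketch: locate the UFOV as the bounding box of "bright enough"
# pixels in the flood image, then take the central 75% of it as the CFOV.
import numpy as np

def extract_ufov_cfov(flood, frac=0.1):
    mask = flood > frac * flood.max()            # drop background outside the detector
    rows, cols = np.where(mask)
    r0, r1 = rows.min(), rows.max()
    c0, c1 = cols.min(), cols.max()
    ufov = flood[r0:r1 + 1, c0:c1 + 1]
    h, w = ufov.shape
    dh, dw = int(round(0.125 * h)), int(round(0.125 * w))
    cfov = ufov[dh:h - dh, dw:w - dw]            # central 75% in each dimension
    return ufov, cfov
```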

  8. A method for automatically constructing the initial contour of the common carotid artery

    Directory of Open Access Journals (Sweden)

    Yara Omran

    2013-10-01

    Full Text Available In this article we propose a novel method to automatically set the initial contour that is used by the active contours algorithm. The proposed method exploits accumulative intensity profiles to locate points on the arterial wall. The intensity profiles of sections that intersect the artery show distinguishable characteristics that make it possible to recognize them among the profiles of sections that do not intersect the artery walls. The proposed method is applied to ultrasound images of the transverse section of the common carotid artery, but it can be extended to images of the longitudinal section. The intensity profiles are classified using the support vector machine algorithm, and the results of different kernels are compared. The features used for the classification are basically statistical features of the intensity profiles. The echogenicity of the arterial lumen gives the profiles that intersect the artery a distinctive shape that helps distinguish them from other profiles. Outlining the arterial walls may seem a classic task in image processing; however, most of the methods used to outline the artery start from a manual, or semi-automatic, initial contour. The proposed method is therefore a valuable step toward automating the entire process of artery detection and segmentation.
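
    The classification step described above can be illustrated roughly as follows; the statistical features and the RBF kernel are assumptions made for the sake of the example, not necessarily the choices made in the article.

```python
# Sketch: classify accumulated intensity profiles as intersecting the artery or
# not, using simple statistical features and an SVM; positive profiles then
# supply candidate points for the initial contour.
import numpy as np
from sklearn.svm import SVC

def profile_features(profile):
    p = np.asarray(profile, dtype=float)
    return [p.mean(), p.std(), p.min(), p.max(), np.argmin(p) / len(p)]

# profiles: list of 1-D accumulated intensity profiles; labels: 1 = intersects artery
# X = np.array([profile_features(p) for p in profiles])
# clf = SVC(kernel="rbf").fit(X, labels)
# artery_profiles = [i for i, p in enumerate(profiles)
#                    if clf.predict([profile_features(p)])[0] == 1]
```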

  9. Automatic Extraction and Size Distribution of Landslides in Kurdistan Region, NE Iraq

    Directory of Open Access Journals (Sweden)

    Arsalan A. Othman

    2013-05-01

    Full Text Available This study aims to assess the localization and size distribution of landslides using automatic remote sensing techniques in (semi-)arid, non-vegetated, mountainous environments. The study area is located in the Kurdistan region (NE Iraq), within the Zagros orogenic belt, which is characterized by the High Folded Zone (HFZ), the Imbricated Zone and the Zagros Suture Zone (ZSZ). The available reference inventory includes 3,190 landslides mapped from sixty QuickBird scenes using manual delineation. The landslide types involve rock falls, translational slides and slumps, which occurred in different lithological units. Two hundred and ninety of these landslides lie within the ZSZ, representing a cumulated surface of 32 km2. The HFZ contains 2,900 landslides with an overall coverage of about 26 km2. We first analyzed cumulative landslide number-size distributions using the inventory map. We then proposed a very simple and robust algorithm for automatic landslide extraction using specific band ratios, selected from the spectral signatures of bare surfaces, together with a posteriori slope and normalized difference vegetation index (NDVI) thresholds. The index is based on the contrast between landslides and their background, the landslides having high reflectance in the green and red bands. We applied the slope threshold map to remove low-slope areas, which also have high reflectance in the red and green bands. The algorithm was able to detect ~96% of the recent landslides known from the reference inventory on a test site. The cumulative landslide number-size distribution of automatically extracted landslides is very similar to the one based on visual mapping. The automatic extraction is therefore suited to the quantitative analysis of landslides and can thus contribute to the assessment of hazards in similar regions.
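
    The thresholding logic described above (a band ratio highlighting bare, bright surfaces, combined with NDVI and slope thresholds) might be sketched roughly as follows; the particular ratio and the threshold values are illustrative assumptions, since the record does not spell them out.

```python
# Sketch: candidate landslide pixels = bright bare surfaces (high band ratio),
# low vegetation (low NDVI) and sufficiently steep slopes. Thresholds are illustrative.
import numpy as np

def landslide_mask(green, red, nir, slope_deg,
                   ratio_thresh=1.8, ndvi_thresh=0.2, slope_thresh=15.0):
    ndvi = (nir - red) / (nir + red + 1e-6)
    ratio = (green + red) / (nir + 1e-6)         # bare, bright surfaces score high here
    return (ratio > ratio_thresh) & (ndvi < ndvi_thresh) & (slope_deg > slope_thresh)
```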

  10. Progressive Concept Evaluation Method for Automatically Generated Concept Variants

    Directory of Open Access Journals (Sweden)

    Woldemichael Dereje Engida

    2014-07-01

    Full Text Available Conceptual design is one of the most critical and important phases of the design process, yet it has the least computer support. The conceptual design support tool (CDST) is a system developed to automatically generate concepts for each subfunction in a functional structure. The automated concept generation process results in a large number of concept variants, which require a thorough evaluation process to select the best design. To address this, a progressive concept evaluation technique consisting of absolute comparison, concept screening and a weighted decision matrix using the analytical hierarchy process (AHP) is proposed to eliminate infeasible concepts at each stage. The software implementation of the proposed method is demonstrated.

  11. An Automatic Detection Method of Nanocomposite Film Element Based on GLCM and Adaboost M1

    Directory of Open Access Journals (Sweden)

    Hai Guo

    2015-01-01

    Full Text Available An automatic detection model adopting pattern recognition technology is proposed in this paper; it can measure the elemental composition of nanocomposite films. Gray level co-occurrence matrix (GLCM) features are extracted from different types of surface morphology images of the film; dimension reduction is then handled by principal component analysis (PCA). It is then possible to identify the element of the film using an Adaboost M1 strong classifier built from ten decision tree classifiers. The experimental results show that this model is superior to SVM (support vector machine), NN and BayesNet models. The proposed method can be widely applied to the automatic detection not only of nanocomposite film elements but also of other nanocomposite material elements.
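
    A rough sketch of the GLCM-PCA-AdaBoost pipeline described above is shown below; the GLCM distances, angles and properties, the number of principal components and the tree depth are illustrative assumptions, not values reported in the record.

```python
# Sketch: GLCM texture features per film image, PCA for dimension reduction,
# and an AdaBoost ensemble of ten decision trees for element classification.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

def glcm_features(img_u8):
    """img_u8: 2-D uint8 grayscale image of the film surface morphology."""
    glcm = graycomatrix(img_u8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# X: stacked feature vectors, y: element labels (assumed available)
# clf = make_pipeline(PCA(n_components=5),
#                     AdaBoostClassifier(DecisionTreeClassifier(max_depth=3),
#                                        n_estimators=10))
# clf.fit(X, y)
```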

  12. Automatic fuzzy object-based analysis of VHSR images for urban objects extraction

    Science.gov (United States)

    Sebari, Imane; He, Dong-Chen

    2013-05-01

    We present an automatic approach for object extraction from very high spatial resolution (VHSR) satellite images based on Object-Based Image Analysis (OBIA). The proposed solution requires no input data other than the studied image, and no input parameters are required. First, an automatic non-parametric cooperative segmentation technique is applied to create object primitives. A fuzzy rule base is developed from the human knowledge used for image interpretation. The rules integrate spectral, textural, geometric and contextual object properties. The classes of interest are tree, lawn, bare soil and water for natural classes, and building, road and parking lot for man-made classes. Fuzzy logic is integrated in our approach in order to manage the complexity of the studied subject, to reason with imprecise knowledge and to give information on the precision and certainty of the extracted objects. The proposed approach was applied to extracts of Ikonos images of Sherbrooke city (Canada). An overall extraction accuracy of 80% was observed. The correctness rates obtained for the building, road and parking lot classes are 81%, 75% and 60%, respectively.

  13. The BUME method: a novel automated chloroform-free 96-well total lipid extraction method for blood plasma[S

    OpenAIRE

    Löfgren, Lars; Ståhlman, Marcus; Forsberg, Gun-Britt; Saarinen, Sinikka; Nilsson, Ralf; Göran I Hansson

    2012-01-01

    Lipid extraction from biological samples is a critical and often tedious preanalytical step in lipid research. Primarily on the basis of automation criteria, we have developed the BUME method, a novel chloroform-free total lipid extraction method for blood plasma compatible with standard 96-well robots. In only 60 min, 96 samples can be automatically extracted with lipid profiles of commonly analyzed lipid classes almost identically and with absolute recoveries similar or better to what is ob...

  14. Evaluation of the accuracy of a method for automatic portal image registration

    International Nuclear Information System (INIS)

    Purpose/Objective: Portal imaging is the most important quality assurance procedure for monitoring the precision of radiation therapy in standard clinical practice and is even more essential for monitoring the highly-customized and technically-demanding fields derived from 3D treatment planning. Unfortunately, traditional methods for acquiring and interpreting portal images suffer from a number of deficiencies which contribute to the well-documented observation that many setup errors go undetected and some persist for a clinically significant portion of the prescribed dose. We have developed a technique called core analysis for the automatic extraction of anatomical structures and have developed robust, automatic means for registering portal images via correspondence of computer-extracted, core-based fiducial curves. Core analysis is a fundamental computer vision method that automatically finds the geometric middles and associated widths of objects in digital gray-scale images. The cores of stable anatomic features (e.g., bones) can in turn serve as fiducial structures for on-line automatic registration of digital portal images with a gold standard reference image (e.g., DRR or digitized simulation film). The robustness of our technique derives from the invariance of core-based extraction of fiducials to translation, rotation and zoom and the insensitivity of cores to noise and blurring. Materials and Methods: A significant shortcoming of the analysis of portal registration techniques reported in the literature is lack of knowledge of truth for clinical images. While careful use of phantoms provides measures of accuracy, lack of realism leaves serious doubts as to the expected clinical reliability. As an extension of our 3D treatment planning system we have developed a method for producing realistic megavoltage portal radiographs with exactly-known setup errors. We have used this software to produce simulated portal radiographs for a number of treatment sites to

  15. Evaluation of left ventricular volume curve by Gd-DTPA enhanced ultrafast cine MR imaging. Clinical application of automatic extraction left ventricular contours on long axis views

    International Nuclear Information System (INIS)

    Contrast-enhanced breath-hold ultrafast cine MR imaging was shown to provide accurate cardiac images with a high success rate (89% in the horizontal long-axis view and 88% in the vertical long-axis view). However, the data analysis method still depends on the operator's manual tracing of left ventricular (LV) contours, which cannot exclude subjectivity, so both the operator workload and the reproducibility of the analysis results remain problems. We propose an automatic extraction method for LV contours on cine MR images which needs only 3 manually input points on the first cardiac frame and requires no manual operation for the other frames. The automatic LV edge extraction time was 4 seconds/frame with this method, whereas conventional manual tracing took 60-90 seconds/frame. Comparison of LV volumes showed a high correlation (r=0.953 for EDVI, r=0.962 for ESVI) between manual and automatic tracing of LV contours on the horizontal long-axis view. We have developed an automatic extraction method for LV contours on long-axis views in contrast-enhanced ultrafast cine MR images. This is an accurate, highly reproducible method of evaluating LV volumetry and the volume curve. (author)

  16. Automatic extraction of building boundaries using aerial LiDAR data

    Science.gov (United States)

    Wang, Ruisheng; Hu, Yong; Wu, Huayi; Wang, Jian

    2016-01-01

    Building extraction is one of the main research topics of the photogrammetry community. This paper presents automatic algorithms for building boundary extraction from aerial LiDAR data. First, by segmenting height information generated from the LiDAR data, the outer boundaries of aboveground objects are expressed as closed chains of oriented edge pixels. Then, building boundaries are distinguished from nonbuilding ones by evaluating their shapes. The candidate building boundaries are reconstructed as rectangles or regular polygons by applying new algorithms, following the hypothesis-verification paradigm. These algorithms include constrained searching in Hough space, enhanced Hough transformation, and the sequential linking technique. The experimental results show that the proposed algorithms successfully extract building boundaries at rates of 97%, 85%, and 92% for three LiDAR datasets with varying scene complexities.

  17. An automatic countercurrent liquid-liquid micro-extraction system coupled with atomic absorption spectrometry for metal determination.

    Science.gov (United States)

    Mitani, Constantina; Anthemidis, Aristidis N

    2015-02-01

    A novel and versatile automatic sequential injection countercurrent liquid-liquid microextraction (SI-CC-LLME) system coupled with flame atomic absorption spectrometry (FAAS) is presented for metal determination. The extraction procedure was based on the countercurrent flow of the aqueous and organic phases, which takes place in a newly designed, lab-made microextraction chamber. A noteworthy feature of the extraction chamber is that it can be used with organic solvents heavier or lighter than water. The proposed method was successfully demonstrated for on-line lead determination and applied to environmental water samples, using 120 μL of chloroform as extractant and ammonium diethyldithiophosphate as chelating reagent. The effects of the major experimental parameters, including the volume of extractant and the flow rates of the aqueous and organic phases, were studied and optimized. Under the optimum conditions, for 6 mL sample consumption, an enhancement factor of 130 was obtained. The detection limit was 1.5 μg L(-1) and the precision of the method, expressed as relative standard deviation (RSD), was 2.7% at the 40.0 μg L(-1) Pb(II) concentration level. The proposed method was evaluated by analyzing certified reference materials and spiked environmental water samples. PMID:25435230

  18. Automatic analysis of uranium-bearing extracts in amine solvent extraction plants processing sulfate leach liquors

    International Nuclear Information System (INIS)

    Instrumentation based on continuous segmented flow analysis is suggested for the control of uranium loading in the amine phase of solvent extraction processing sulfate leach liquors. It can be installed with relatively little capital outlay and operational costs are expected to be low. The uranium(VI) in up to 60 samples of extract (≈0.1 to 5 g l-1 U) per hour can be determined. Application of spectrophotometry to the analysis of various process streams is discussed and it is concluded that it compares favourably in several important respects with the use of alternative techniques. (orig.)

  19. Semi-Automatic Mapping Generation for the DBpedia Information Extraction Framework

    Directory of Open Access Journals (Sweden)

    Arup Sarkar, Ujjal Marjit, Utpal Biswas

    2013-03-01

    Full Text Available DBpedia is one of the very well known live projects from the Semantic Web. It is like a mirror version of the Wikipedia site in the Semantic Web. Initially it publishes the information collected from Wikipedia, but only the part which is relevant to the Semantic Web. Collecting information for the Semantic Web from Wikipedia is treated as the extraction of structured data. DBpedia normally does this by using a specially designed framework called the DBpedia Information Extraction Framework. This extraction framework does its work through the evaluation of similar properties from the DBpedia Ontology and the Wikipedia template, a step known as DBpedia mapping. At present most of the mapping jobs are done completely manually. In this paper a new framework is introduced that addresses the issues related to template-to-ontology mapping. A semi-automatic mapping tool for the DBpedia project is proposed, with the capability of automatic suggestion generation, so that end users can identify the similar ontology and template properties. The proposed framework is useful since, after the selection of similar properties, the code needed to maintain the mapping between ontology and template is generated automatically.

  20. Automatic numerical integration methods for Feynman integrals through 3-loop

    Science.gov (United States)

    de Doncker, E.; Yuasa, F.; Kato, K.; Ishikawa, T.; Olagbemi, O.

    2015-05-01

    We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the ParInt package. The Dqags algorithm from QUADPACK accommodates boundary singularities of fairly general types. ParInt is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infra-red) or UV (ultra-violet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities.
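
    As a small illustration of the iterated-integration idea, the sketch below evaluates a simple two-dimensional Feynman-parameter integral by nesting one-dimensional adaptive quadrature calls (scipy.integrate.quad wraps QUADPACK's QAGS routine); the integrand is an illustrative stand-in, not one of the diagrams treated in the record, and no extrapolation step is included.

```python
# Sketch: iterated 1-D adaptive integration over the unit simplex x + y <= 1.
from scipy import integrate

def inner(x, m2=1.0, s=0.5):
    # Integrate over y for fixed x; the squared denominator mimics a propagator.
    f = lambda y: 1.0 / (m2 - s * x * y) ** 2
    val, _ = integrate.quad(f, 0.0, 1.0 - x)
    return val

result, err = integrate.quad(inner, 0.0, 1.0)
print(result, err)
```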

  1. Automatic extraction of road seeds from high-resolution aerial images

    Directory of Open Access Journals (Sweden)

    Aluir P. Dal-Poz

    2005-09-01

    Full Text Available This article presents an automatic methodology for the extraction of road seeds from high-resolution aerial images. The method is based on a set of four road objects and a set of connection rules among road objects. Each road object is a local representation of an approximately straight road fragment, and its construction is based on a combination of polygons describing all relevant image edges, according to rules embodying road knowledge. Each road seed is composed of a sequence of connected road objects, and each such sequence can be geometrically structured as a chain of contiguous quadrilaterals. Experiments carried out with high-resolution aerial images showed that the proposed methodology is very promising for extracting road seeds. This article presents the fundamentals of the method as well as the experimental results.

  2. An Automatically Changing Feature Method based on Chaotic Encryption

    OpenAIRE

    Wang Li; Gang Luo; Lingyun Xiang

    2014-01-01

    In practical applications, in order to extract data from the stego object, some data-hiding encryption methods need to identify themselves: when performing data hiding, they embed a specific logo for self-identification. However, this unavoidably brings the risk of exposure. Suppose each hiding method has a corresponding logo S and the attacker has a logo set Φ which consists of the logos of some hiding methods. Once he finds the logo S which matches a l...

  3. Automatic speech recognition (zero crossing method). Automatic recognition of isolated vowels

    International Nuclear Information System (INIS)

    This note describes a method for recognizing isolated vowels using preprocessing of the vocal signal. The processing extracts the extrema of the vocal signal and the time intervals separating them (zero-crossing distances of the first derivative of the signal). The recognition of vowels uses normalized histograms of the values of these intervals. The program determines a distance between the histogram of the sound to be recognized and model histograms built during a learning phase. The results, processed in real time by a minicomputer, are relatively independent of the speaker, provided the fundamental frequency does not vary too much (i.e. speakers of the same sex). (author)
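
    The histogram-distance idea described above can be sketched roughly as follows; the bin count, interval range and L1 distance are illustrative assumptions rather than the parameters of the original system.

```python
# Sketch: histogram the intervals between waveform extrema (zero crossings of
# the first derivative) and label a vowel by the nearest stored model histogram.
import numpy as np

def interval_histogram(signal, fs, bins=32, max_interval_ms=20.0):
    d = np.diff(signal)
    extrema = np.where(np.diff(np.sign(d)) != 0)[0]     # derivative zero crossings
    intervals_ms = np.diff(extrema) / fs * 1000.0
    hist, _ = np.histogram(intervals_ms, bins=bins, range=(0.0, max_interval_ms))
    return hist / max(hist.sum(), 1)                    # normalized histogram

def recognize(signal, fs, models):
    """models: dict mapping vowel label -> reference histogram (learning phase)."""
    h = interval_histogram(signal, fs)
    return min(models, key=lambda v: np.abs(models[v] - h).sum())   # L1 distance
```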

  4. An automatic detection method to the field wheat based on image processing

    Science.gov (United States)

    Wang, Yu; Cao, Zhiguo; Bai, Xiaodong; Yu, Zhenghong; Li, Yanan

    2013-10-01

    The automatic observation of field crops has attracted more and more attention recently. Using image processing technology instead of the existing manual observation method allows timely observation and consistent management. Extracting the wheat from field wheat images is the basis of this task. In order to improve the accuracy of wheat segmentation, a novel two-stage wheat image segmentation method is proposed. The training stage adjusts several key thresholds, which will be used in the segmentation stage to achieve the best segmentation results, and records these thresholds. The segmentation stage compares the different values of a color index to determine the class of each pixel. To verify the superiority of the proposed algorithm, we compared our method with other crop segmentation methods. The experimental results show that the proposed method has the best performance.
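
    A very simple sketch of the two-stage idea is shown below, using the excess-green index as an illustrative color index (the paper's actual indices and thresholds are not specified here): a threshold is tuned on labelled training images, then applied per pixel in the segmentation stage.

```python
# Sketch: tune a greenness threshold on training images, then label each pixel
# as wheat/background by comparing its color-index value with that threshold.
import numpy as np

def excess_green(rgb):
    r, g, b = [rgb[..., i].astype(float) / 255.0 for i in range(3)]
    return 2 * g - r - b

def segment_wheat(rgb, threshold):
    return excess_green(rgb) > threshold          # boolean wheat mask

# Training stage (training_pairs: list of (image, ground-truth mask) tuples):
# best_t = max(np.linspace(-0.2, 0.6, 41),
#              key=lambda t: np.mean([(segment_wheat(img, t) == gt).mean()
#                                     for img, gt in training_pairs]))
```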

  5. Review of Automatic Feature Extraction from High-Resolution Optical Sensor Data for UAV-Based Cadastral Mapping

    Directory of Open Access Journals (Sweden)

    Sophie Crommelinck

    2016-08-01

    Full Text Available Unmanned Aerial Vehicles (UAVs) have emerged as a rapid, low-cost and flexible acquisition system that appears feasible for application in cadastral mapping: high-resolution imagery, acquired using UAVs, enables a new approach for defining property boundaries. However, UAV-derived data are arguably not exploited to their full potential: based on UAV data, cadastral boundaries are visually detected and manually digitized. A workflow that automatically extracts boundary features from UAV data could increase the pace of current mapping procedures. This review introduces a workflow considered applicable for automated boundary delineation from UAV data. This is done by reviewing approaches for feature extraction from various application fields and synthesizing these into a hypothetical generalized cadastral workflow. The workflow consists of preprocessing, image segmentation, line extraction, contour generation and postprocessing. The review lists example methods per workflow step, including a description, trialed implementation, and a list of case studies applying individual methods. Furthermore, accuracy assessment methods are outlined. Advantages and drawbacks of each approach are discussed in terms of their applicability to UAV data. This review can serve as a basis for future work on the implementation of the most suitable methods in a UAV-based cadastral mapping workflow.

  6. Automatic Inspection of Nuclear-Reactor Tubes During Production and Processing, Using Eddy-Current Methods

    International Nuclear Information System (INIS)

    The possibilities of automatic and semi-automatic inspection of tubes using eddy-current methods are described. The paper deals in particular with modern processes, compared to the use of other non-destructive methods. The essence of the paper is that the methods discussed are ideal for objective automatic inspection. Not only are the known methods described, but certain new methods and their application to the detection of flaws in reactor tubes are also discussed. (author)

  7. The Automatic Generation of Chinese Outline Font Based on Stroke Extraction

    Institute of Scientific and Technical Information of China (English)

    1995-01-01

    A new method to obtain a spline outline description of Chinese fonts based on stroke extraction is presented. It has two primary advantages: (1) the quality of the Chinese character output is greatly improved; (2) the memory requirement is reduced. The method for stroke extraction is discussed in detail and experimental results are presented.

  8. Gene Ontology density estimation and discourse analysis for automatic GeneRiF extraction

    Directory of Open Access Journals (Sweden)

    Mottaz Anaïs

    2008-04-01

    Full Text Available Abstract Background This paper describes and evaluates a sentence selection engine that extracts a GeneRiF (Gene Reference into Function), as defined in ENTREZ-Gene, based on a MEDLINE record. Inputs for this task include both a gene and a pointer to a MEDLINE reference. In the suggested approach we merge two independent sentence extraction strategies. The first proposed strategy (LASt) uses argumentative features, inspired by discourse-analysis models. The second extraction scheme (GOEx) uses an automatic text categorizer to estimate the density of Gene Ontology categories in every sentence, thus providing a full ranking of all possible candidate GeneRiFs. A combination of the two approaches is proposed, which also aims at reducing the size of the selected segment by filtering out non-content-bearing rhetorical phrases. Results Based on the TREC-2003 Genomics collection for GeneRiF identification, the LASt extraction strategy is already competitive (52.78%). When used in a combined approach, the extraction task clearly shows improvement, achieving a Dice score of over 57% (+10%). Conclusions Argumentative representation levels and conceptual density estimation using Gene Ontology contents appear complementary for functional annotation in proteomics.

  9. Evaluation of information retrieval and text mining tools on automatic named entity extraction. Intelligence and security informatics. Proceedings

    OpenAIRE

    Kumar, Nishant; De Beer, Jan; Vanthienen, Jan; Moens, Marie-Francine

    2006-01-01

    We report an evaluation of the automatic named entity extraction feature of information retrieval tools on Dutch, French, and English text. The aim is to analyze the competency of off-the-shelf information extraction tools in recognizing entity types including person, organization, location, vehicle, time, and currency from unstructured text. Within such an evaluation one can compare the effectiveness of different approaches for identifying named entities.

  10. Automatic detection of microaneurysms using microstructure and wavelet methods

    Indian Academy of Sciences (India)

    M Tamilarasi; K Duraiswamy

    2015-06-01

    Retinal microaneurysms are one of the earliest signs used in diabetic retinopathy diagnosis. This paper develops an approach to automate the detection of microaneurysms using a wavelet-based Gaussian mixture model and microstructure texture feature extraction. First, the green channel of the colour retinal fundus image is extracted and pre-processed using various enhancement techniques such as bottom-hat filtering and gamma correction. Second, microstructures are extracted as Gaussian profiles in the wavelet domain using a three-level generative model. Multiscale Gaussian kernels are obtained and histogram-based features are extracted from the best kernel. Using the Markov Chain Monte Carlo method, microaneurysms are classified using the optimal feature set. The proposed approach is evaluated on the DIARETDB0 and DIARETDB1 datasets using a classifier based on the multi-layer perceptron procedure. For the DIARETDB0 dataset, the proposed algorithm obtains a sensitivity of 98.32 and a specificity of 97.59. For the DIARETDB1 dataset, a sensitivity of 98.91 and a specificity of 97.65 are achieved. The accuracies achieved by the proposed algorithm are 97.86 and 98.33 on the DIARETDB0 and DIARETDB1 datasets, respectively. Based on ground truth validation, good segmentation results are achieved when compared to existing algorithms such as local relative entropy-based thresholding, inverse adaptive surface thresholding, inverse segmentation, and dark object segmentation.

  11. Combining contour detection algorithms for the automatic extraction of the preparation line from a dental 3D measurement

    Science.gov (United States)

    Ahlers, Volker; Weigl, Paul; Schachtzabel, Hartmut

    2005-04-01

    Due to the increasing demand for high-quality ceramic crowns and bridges, the CAD/CAM-based production of dental restorations has been a subject of intensive research during the last fifteen years. A prerequisite for the efficient processing of the 3D measurement of prepared teeth with a minimal amount of user interaction is the automatic determination of the preparation line, which defines the sealing margin between the restoration and the prepared tooth. Current dental CAD/CAM systems mostly require the interactive definition of the preparation line by the user, at least by means of giving a number of start points. Previous approaches to the automatic extraction of the preparation line rely on single contour detection algorithms. In contrast, we use a combination of different contour detection algorithms to find several independent potential preparation lines from a height profile of the measured data. The different algorithms (gradient-based, contour-based, and region-based) show their strengths and weaknesses in different clinical situations. A classifier consisting of three stages (range check, decision tree, support vector machine), which is trained by human experts with real-world data, finally decides which is the correct preparation line. In a test with 101 clinical preparations, a success rate of 92.0% has been achieved. Thus the combination of different contour detection algorithms yields a reliable method for the automatic extraction of the preparation line, which enables the setup of a turn-key dental CAD/CAM process chain with a minimal amount of interactive screen work.

  12. Automatic facial feature extraction and expression recognition based on neural network

    CERN Document Server

    Khandait, S P; Khandait, P D

    2012-01-01

    In this paper, an approach to the problem of automatic facial feature extraction from a still frontal posed image, and the classification and recognition of facial expression and hence the emotion and mood of a person, is presented. A feed-forward back-propagation neural network is used as a classifier for classifying the expressions of a supplied face into seven basic categories: surprise, neutral, sad, disgust, fear, happy and angry. For face portion segmentation and localization, morphological image processing operations are used. Permanent facial features like eyebrows, eyes, mouth and nose are extracted using the SUSAN edge detection operator, facial geometry, and edge projection analysis. Experiments are carried out on the JAFFE facial expression database and show good performance: 100% accuracy for the training set and 95.26% accuracy for the test set.

  13. An automatic and effective parameter optimization method for model tuning

    Science.gov (United States)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-11-01

    Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Different from traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial values for the sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
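
    The three-step logic can be sketched roughly as below, with an arbitrary objective function standing in for the comprehensive GCM evaluation metric; the sensitivity test, the grid size and the use of scipy's Nelder-Mead implementation of the downhill simplex method are illustrative assumptions.

```python
# Sketch: (1) keep only parameters the objective is sensitive to, (2) pick a
# starting value for each from a coarse grid, (3) refine with downhill simplex.
import numpy as np
from scipy.optimize import minimize

def tune(objective, defaults, bounds, sens_tol=1e-3, grid_pts=5):
    base = objective(defaults)
    # Step 1: sensitivity screening (perturb each parameter to its upper bound).
    sensitive = [n for n in defaults
                 if abs(objective({**defaults, n: bounds[n][1]}) - base) > sens_tol]
    if not sensitive:
        return dict(defaults)
    # Step 2: coarse one-at-a-time grid search for a good starting point.
    start = dict(defaults)
    for n in sensitive:
        grid = np.linspace(bounds[n][0], bounds[n][1], grid_pts)
        start[n] = min(grid, key=lambda v: objective({**start, n: v}))
    # Step 3: downhill simplex (Nelder-Mead) over the sensitive parameters only.
    x0 = np.array([start[n] for n in sensitive])
    res = minimize(lambda x: objective({**start, **dict(zip(sensitive, x))}),
                   x0, method="Nelder-Mead")
    return {**start, **dict(zip(sensitive, res.x))}
```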

  14. Boosting the Coverage of a Semantic Lexicon by Automatically Extracted Event Nominalizations

    OpenAIRE

    Gábor, Kata; Apidianaki, Marianna; Sagot, Benoît; Villemonte De La Clergerie, Éric

    2012-01-01

    An important trend in recent works on lexical semantics has been the development of learning methods capable of extracting semantic information from text corpora. The majority of these methods are based on the distributional hypothesis of meaning and acquire semantic information by identifying distributional patterns in texts. In this article, we present a distributional analysis method for extracting nominalization relations from monolingual corpora. The acquisition method makes use of distr...

  15. An automatic device for sample insertion and extraction to/from reactor irradiation facilities

    International Nuclear Information System (INIS)

    At the previous European TRIGA Users Conference in Vienna, a paper was given describing a new handling tool for irradiated samples at the L.E.N.A. plant. This tool was the first part of an automatic device for the management of samples to be irradiated in the TRIGA Mark II reactor and subsequently extracted and stored. So far, sample insertion into and extraction from the irradiation facilities available on the reactor top (central thimble, rotary specimen rack and channel f) has been carried out manually by reactor and health-physics operators using the "traditional" fishing pole provided by General Atomic, thus exposing reactor personnel to "unjustified" radiation doses. The present paper describes the design and operation of a new device, a "robot"-type machine which, remotely operated, takes care of sample insertion into the different irradiation facilities, sample extraction after irradiation, and connection to the storage pits already described. The extraction of irradiated samples does not require the presence of reactor personnel on the reactor top and, therefore, radiation doses are strongly reduced. All work, from design to construction, has been carried out by the personnel of the electronics group of the L.E.N.A. plant. (orig.)

  16. A Visually Inspired Variational Method for Automatic Image Registration

    Directory of Open Access Journals (Sweden)

    WANG Huixian

    2015-08-01

    Full Text Available A visually inspired variational method for automatic image registration is proposed to handle local deformation, which traditional global registration models cannot represent well. The variational model considers local transformation, global smoothness and visual constraints. To account for intensity variations, we incorporate changes of local contrast and brightness into our model. Firstly, the data term of the registration model is built according to the root-mean-square error of intensity; secondly, an adaptive constraint using the H1 semi-norm is used to ensure global smoothness in the model; finally, in order to make sure that the spatial attributes of the image satisfy the visual requirements and are not distorted, linear features are used as a priori constraints. During the solution of the model parameters, the whole image is used to estimate the transformation parameters globally, and then a local estimation of the parameters is performed in a small neighborhood. The entire procedure is built upon a multi-level differential framework, and the transformation parameters are calculated iteratively, which takes both global smoothness and local distortion into account. To assess the quality of the proposed method, ZY-3 satellite images were used. Visual and quantitative analysis proved that the proposed method can significantly improve the registration precision.

  17. Multi-Stage, Multi-Resolution Method for Automatic Characterization of Epileptic Spikes in EEG

    Directory of Open Access Journals (Sweden)

    Ganesan.M

    2010-06-01

    Full Text Available In this paper, a technique is proposed for the automatic detection of spikes in long-term 18-channel human electroencephalograms (EEG) using a small data set. The scheme for detecting epileptic and non-epileptic spikes in EEG is based on a multi-resolution, multi-level analysis and an Artificial Neural Network (ANN) approach. The Wavelet Transform (WT) is a powerful tool for signal compression, recognition, restoration and multi-resolution analysis of non-stationary signals. The signal on each EEG channel is decomposed into six sub-bands using a non-decimated WT. Each sub-band is analyzed with a non-linear energy operator in order to detect spikes. A parameter extraction stage extracts the parameters of the detected spikes, which are given as input to the ANN classifier. A robust system that combines multiple signal-processing methods in a multi-stage scheme, integrating the wavelet transform and an artificial neural network, is proposed here. This system is tested on a simulated EEG pattern waveform as well as on real patient data. The system is evaluated on testing data from 81 patients, totaling more than 800 hours of recordings. 90.0% of the epileptic events were correctly detected and the detection rate of non-epileptic events was 98.0%. We conclude that the proposed system performs well in detecting epileptiform activities; furthermore, the multi-stage, multi-resolution approach is an appropriate way to address automatic classification problems in EEG.
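
    The abstract does not spell out which non-linear energy operator is applied to the sub-bands; the sketch below assumes the commonly used Teager energy operator and a simple median-based threshold, with the sampling rate, window length and multiplier `k` chosen purely for illustration.

    ```python
    import numpy as np

    def teager_energy(x):
        """Non-linear (Teager) energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
        x = np.asarray(x, dtype=float)
        psi = np.zeros_like(x)
        psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
        return psi

    def detect_spikes(subband, fs, k=4.0, smooth_ms=10):
        """Flag samples whose smoothed non-linear energy exceeds k times its median."""
        psi = teager_energy(subband)
        win = max(1, int(fs * smooth_ms / 1000))
        smoothed = np.convolve(psi, np.ones(win) / win, mode="same")
        threshold = k * np.median(np.abs(smoothed))
        return np.where(smoothed > threshold)[0]

    # usage on a synthetic sub-band: a spike riding on background noise
    fs = 256
    t = np.arange(0, 2, 1 / fs)
    sig = 0.5 * np.random.randn(t.size)
    sig[300:305] += 8.0            # simulated spike
    print(detect_spikes(sig, fs)[:5])
    ```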

  18. Automatic 3D Extraction of Buildings, Vegetation and Roads from LIDAR Data

    Science.gov (United States)

    Bellakaout, A.; Cherkaoui, M.; Ettarid, M.; Touzani, A.

    2016-06-01

    Aerial topographic surveys using Light Detection and Ranging (LiDAR) technology collect dense and accurate information about the surface or terrain; LiDAR is becoming one of the important tools in the geosciences for studying objects and the earth's surface. Classification of LiDAR data to extract ground, vegetation and buildings is a very important step needed in numerous applications such as 3D city modelling, extraction of derived data for geographical information systems (GIS), mapping, navigation, etc. Regardless of what the scan data will be used for, an automatic process is greatly required to handle the large amount of data collected, because manual processing is time consuming and very expensive. This paper presents an approach for automatic classification of aerial LiDAR data into five groups of items: buildings, trees, roads, linear objects and soil, using single-return LiDAR and processing the point cloud without generating a DEM. Topological relationships and height variation analysis are adopted to preliminarily segment the entire point cloud into upper and lower contours, uniform surfaces, non-uniform surfaces, linear objects, and others. This primary classification is used, on the one hand, to identify the upper and lower parts of each building in an urban scene, needed to model building façades, and on the other hand to extract the point cloud of uniform surfaces, which contains roofs, roads and ground, used in the second phase of classification. A second algorithm is developed to segment the uniform surfaces into building roofs, roads and ground; this second phase of classification is also based on topological relationships and height variation analysis. The proposed approach has been tested using two areas: the first is a housing complex and the second is a primary school. The proposed approach led to successful classification of the building, vegetation and road classes.

  19. Framework for automatic information extraction from research papers on nanocrystal devices

    Directory of Open Access Journals (Sweden)

    Thaer M. Dieb

    2015-09-01

    Full Text Available To support nanocrystal device development, we have been working on a computational framework to utilize information in research papers on nanocrystal devices. We developed an annotated corpus called “NaDev” (Nanocrystal Device Development) for this purpose. We also proposed an automatic information extraction system called “NaDevEx” (Nanocrystal Device Automatic Information Extraction Framework). NaDevEx aims at extracting information from research papers on nanocrystal devices using the NaDev corpus and machine-learning techniques. However, the characteristics of NaDevEx had not been examined in detail. In this paper, we conduct system evaluation experiments for NaDevEx using the NaDev corpus. We discuss three main issues: system performance compared with human annotators; the effect of paper type (synthesis or characterization) on system performance; and the effects of domain-knowledge features (e.g., a chemical named entity recognition system and a list of names of physical quantities) on system performance. We found that overall system performance was 89% in precision and 69% in recall. If we count identification of terms that intersect with correct terms of the same information category as correct identification, i.e., loose agreement (in many cases, appropriate head nouns such as temperature or pressure loosely match between two terms), the overall performance is 95% in precision and 74% in recall. The system performance is almost comparable with that of human annotators for information categories with rich domain-knowledge information (source material). However, for other information categories, given the relatively large number of terms that occur in only one paper, recall of individual information categories is not high (39–73%), although precision is better (75–97%). The average performance for synthesis papers is better than that for characterization papers because of the lack of training examples for
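
    As a rough illustration of the "loose agreement" scoring described above, the snippet below counts a predicted term as correct whenever its character span overlaps a gold-standard term of the same category; the span representation and example values are assumptions, not taken from NaDevEx.

    ```python
    def loose_match(pred_span, gold_span):
        """Two extracted terms 'loosely agree' if their character spans intersect."""
        (ps, pe), (gs, ge) = pred_span, gold_span
        return ps < ge and gs < pe

    def loose_precision_recall(pred, gold):
        """Precision/recall where any overlap with a gold term of the same category counts."""
        tp_pred = sum(any(loose_match(p, g) for g in gold) for p in pred)
        tp_gold = sum(any(loose_match(g, p) for p in pred) for g in gold)
        precision = tp_pred / len(pred) if pred else 0.0
        recall = tp_gold / len(gold) if gold else 0.0
        return precision, recall

    pred = [(10, 25), (40, 48)]          # predicted character spans (illustrative)
    gold = [(12, 25), (60, 70)]          # gold-standard spans (illustrative)
    print(loose_precision_recall(pred, gold))   # -> (0.5, 0.5)
    ```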

  20. Detection of fiducial gold markers for automatic on-line megavoltage position verification using a marker extraction kernel (MEK)

    International Nuclear Information System (INIS)

    Purpose: In this study automatic detection of implanted gold markers in megavoltage portal images for on-line position verification was investigated. Methods and Materials: A detection method for fiducial gold markers, consisting of a marker extraction kernel (MEK), was developed. The detection success rate was determined for different markers using this MEK. The localization accuracy was investigated by measuring distances between markers, which were fixed on a perspex template. In order to generate images comparable to images of patients with implanted markers, this template was placed on the skin of patients before the start of the treatment. Portal images were taken of lateral prostate fields at 18 MV within 1-2 monitor units (MU). Results: The detection success rates for markers of 5 mm length and 1.2 and 1.4 mm diameter were 0.95 and 0.99 respectively when placed at the beam entry and 0.39 and 0.86 when placed at the beam exit. The localization accuracy appears to be better than 0.6 mm for all markers. Conclusion: Automatic marker detection with an acceptable accuracy at the start of a radiotherapy fraction is feasible. Further minimization of marker diameters may be achieved with the help of an a-Si flat panel imager and may increase the clinical acceptance of this technique
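
    The paper defines its own marker extraction kernel, which is not reproduced in the abstract; as a hedged stand-in, the sketch below uses a generic zero-mean disk-shaped matched filter over the portal image and keeps the strongest local maxima as marker candidates. The kernel shape, radius and suppression window are illustrative assumptions only.

    ```python
    import numpy as np
    from scipy import ndimage

    def disk_kernel(radius):
        """Zero-mean disk template standing in for the expected marker footprint."""
        y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
        disk = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
        return disk - disk.mean()

    def detect_markers(portal_image, radius=3, n_markers=3):
        """Matched-filter response; the strongest local maxima are marker candidates."""
        response = ndimage.convolve(portal_image.astype(float), disk_kernel(radius))
        # non-maximum suppression within roughly one marker diameter
        maxima = response == ndimage.maximum_filter(response, size=4 * radius + 1)
        ys, xs = np.nonzero(maxima)
        order = np.argsort(response[ys, xs])[::-1][:n_markers]
        return list(zip(ys[order], xs[order]))
    ```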

  1. Method for rare earth extraction from phosphogypsum

    International Nuclear Information System (INIS)

    A method for rare earth extraction from phosphogypsum, which increases the degree of extraction and simplifies the process, has been suggested. Phosphogypsum is treated with a solution of ammonium carbonate, the calcium carbonate precipitate formed is dissolved in nitric acid at 55-70% of stoichiometry, and the insoluble residue is dissolved in HNO3. The degree of rare earth extraction into solution reaches 94-98%

  2. A NOVEL METHOD FOR ARABIC MULTI-WORD TERM EXTRACTION

    Directory of Open Access Journals (Sweden)

    Hadni Meryem

    2014-10-01

    Full Text Available Arabic Multiword Terms (AMWTs) are relevant strings of words in text documents. Once they are automatically extracted, they can be used to improve the performance of Arabic text mining applications such as categorization, clustering, information retrieval, machine translation, and summarization. The proposed methods for AMWT extraction can mainly be categorized into three approaches: linguistic-based, statistic-based, and hybrid-based. These methods present some drawbacks that limit their use: they can only deal with bi-gram terms and they do not yield good accuracies. In this paper, to overcome these drawbacks, we propose a new and efficient method for AMWT extraction based on a hybrid approach. The latter is composed of two main filtering steps: a linguistic filter and a statistical one. The linguistic filter uses our proposed Part-Of-Speech (POS) tagger and a sequence identifier as patterns in order to extract candidate AMWTs, while the statistical filter incorporates contextual information and a newly proposed association measure based on termhood and unithood estimation named NTC-Value. To evaluate and illustrate the efficiency of the proposed method for AMWT extraction, a comparative study has been conducted on the Kalimat Corpus using nine experimental schemes: in the linguistic filter, we used three POS taggers, namely Taani's rule-based method, an HMM-based statistical method, and our recently proposed hybrid tagger; in the statistical filter, we used three statistical measures, namely C-Value, NC-Value, and our proposed NTC-Value. The obtained results demonstrate the efficiency of the proposed method for AMWT extraction: it outperforms the other ones and can deal correctly with tri-gram terms.
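
    The NTC-Value measure itself is not given in the abstract; for context, the standard C-Value that it is compared against can be sketched roughly as below. The frequencies, the nesting map and the toy candidate terms are invented for illustration.

    ```python
    import math

    def c_value(term, freq, nested_in):
        """Standard C-Value: freq maps candidate -> corpus frequency, nested_in maps
        candidate -> list of longer candidate terms that contain it."""
        words = term.split()
        containers = nested_in.get(term, [])
        if not containers:
            return math.log2(len(words)) * freq[term]
        penalty = sum(freq[b] for b in containers) / len(containers)
        return math.log2(len(words)) * (freq[term] - penalty)

    # toy example with three candidate multi-word terms
    freq = {"floating point": 10, "floating point arithmetic": 6, "point arithmetic": 6}
    nested_in = {"floating point": ["floating point arithmetic"]}
    for t in freq:
        print(t, round(c_value(t, freq, nested_in), 2))
    ```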

  3. A Novel and Efficient Method for Iris Automatic Location

    Institute of Scientific and Technical Information of China (English)

    XU Guang-zhu; ZHANG Zai-feng; MA Yi-de

    2007-01-01

    An efficient and robust iris location algorithm plays a very important role in a real iris recognition system. A novel and efficient automatic iris location method is presented in this study. It mainly includes two steps: pupil location and iris outer boundary location. For pupil location, the digital eye image is divided into many small rectangular blocks of fixed size, and the block with the smallest average intensity is selected as a reference area. Image binarization is then performed using the average intensity of the reference area as a threshold. Finally, the center coordinates and radius of the pupil are estimated by extending the reference area to the pupil's boundaries in the binary iris image. For iris outer boundary location, two local parts of the eye image are selected and transformed from Cartesian to polar coordinates. In order to detect the fainter outer boundary of the iris quickly, a novel edge detector is used to locate the boundaries of the two parts. The center coordinates and radius of the iris outer boundary are estimated by fusing the location results of the two local parts with the location information of the pupil. The algorithm was tested on the CASIA v1.0 and MMU v1.0 digital eye image databases, and experimental results show that the proposed method has satisfying performance and good robustness.
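
    A minimal sketch of the described pupil-location step, assuming a grey-scale eye image stored as a NumPy array; the block size and the area-based radius estimate are simplifying assumptions rather than the authors' exact procedure.

    ```python
    import numpy as np

    def locate_pupil(eye_image, block=16):
        """Pick the darkest fixed-size block as reference, then threshold the image
        with that block's mean intensity and estimate the pupil centre and radius."""
        img = eye_image.astype(float)
        h, w = img.shape
        # mean intensity of every non-overlapping block
        means = img[:h // block * block, :w // block * block] \
            .reshape(h // block, block, w // block, block).mean(axis=(1, 3))
        by, bx = np.unravel_index(np.argmin(means), means.shape)
        threshold = means[by, bx]
        binary = img <= threshold              # pupil pixels are darker than the reference
        ys, xs = np.nonzero(binary)
        cy, cx = ys.mean(), xs.mean()
        radius = np.sqrt(binary.sum() / np.pi)  # radius of a disk with the same area
        return cy, cx, radius
    ```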

  4. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications.

    Science.gov (United States)

    Lingua, Andrea; Marenchino, Davide; Nex, Francesco

    2009-01-01

    In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A(2) SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems. PMID:22412336

  5. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications

    Directory of Open Access Journals (Sweden)

    Francesco Nex

    2009-05-01

    Full Text Available In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for bad-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performances of the SIFT operator have been compared with those provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performances of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A2 SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.
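
    For readers who want to reproduce the baseline behaviour (plain SIFT, not the auto-adaptive A2 SIFT variant, which is not specified in the abstract), a tie-point candidate search with OpenCV might look roughly like this; the file paths and the ratio-test threshold are placeholders.

    ```python
    import cv2

    def sift_tie_points(img1_path, img2_path, ratio=0.75):
        """Extract SIFT keypoints in two overlapping frames and keep matches that
        pass Lowe's ratio test; the surviving pairs are candidate tie points."""
        img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < ratio * n.distance]
        return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
    ```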

  6. Feasibility of Automatic Extraction of Electronic Health Data to Evaluate a Status Epilepticus Clinical Protocol.

    Science.gov (United States)

    Hafeez, Baria; Paolicchi, Juliann; Pon, Steven; Howell, Joy D; Grinspan, Zachary M

    2016-05-01

    Status epilepticus is a common neurologic emergency in children. Pediatric medical centers often develop protocols to standardize care. Widespread adoption of electronic health records by hospitals affords the opportunity for clinicians to rapidly and electronically evaluate protocol adherence. We reviewed the clinical data of a small sample of 7 children with status epilepticus, in order to (1) qualitatively determine the feasibility of automated data extraction and (2) demonstrate a timeline-style visualization of each patient's first 24 hours of care. Qualitatively, our observations indicate that most clinical data are well labeled in structured fields within the electronic health record, though some important information, particularly electroencephalography (EEG) data, may require manual abstraction. We conclude that a visualization that clarifies a patient's clinical course can be automatically created using the patient's electronic clinical data, supplemented with some manually abstracted data. Future work could use this timeline to evaluate adherence to status epilepticus clinical protocols. PMID:26518205

  7. Automatic Extraction of Open Space Area from High Resolution Urban Satellite Imagery

    Directory of Open Access Journals (Sweden)

    Hiremath P S & Kodge B G

    2010-06-01

    Full Text Available In the 21st century, aerial and satellite images are information rich. They are also complex to analyze. For GIS systems, many applications require fast and reliable extraction of open space areas from high resolution satellite imagery. In this paper we study an efficient and reliable automatic extraction algorithm to find the open space area in high resolution urban satellite imagery. This automatic extraction algorithm applies filtering, segmentation and grouping to the satellite images. The resulting images may be used to calculate the total available open space area and the built-up area, and also to compare the difference between present and past open space areas using historical urban satellite images of the same projection.

  8. Automatic Feature Extraction, Categorization and Detection of Malicious Code in Android Applications

    Directory of Open Access Journals (Sweden)

    Muhammad Zuhair Qadir

    2014-02-01

    Full Text Available Since Android has recently become a popular software platform for mobile devices, these devices offer almost the same functionality as personal computers, and malware has become a big concern. As the number of new Android applications tends to increase rapidly in the near future, there is a need for automatic malware detection that is quick and efficient. In this paper, we define a simple static analysis approach that first extracts the features of an Android application based on its intents, categorizes the application into a known major category, and later maps it against the permissions requested by the application, also comparing it with the most common intents of that category. As a result, we learn which apps use features that they are not supposed to use or do not need.

  9. DEM automatic extraction on Rio de Janeiro from WV2 stereo pair images

    International Nuclear Information System (INIS)

    The use of three-dimensional data has become very important for many mapping applications. DEMs are applied for modelling purposes, e.g. 3D city model generation, but principally for imagery orthorectification. In aerial photogrammetry the suitability of stereo imagery for producing an accurate DEM is well known, but the limits of the process (cost, schedule of data collection, highly technical staff) and new advanced digital image processing algorithms have opened the work scenario to remote sensing data. This research investigates the possibility of obtaining accurate DEMs by means of automatic terrain extraction algorithms implemented in the Leica Photogrammetry Suite (LPS) from stereoscopic remote sensing images collected by DigitalGlobe's WorldView-2 (WV2) satellite. The DEM of Rio de Janeiro (Brazil) and the corresponding digital orthoimages are the results

  10. Relevant Words Extraction Method for Recommendation System

    Directory of Open Access Journals (Sweden)

    Naw Naw

    2013-09-01

    Full Text Available Nowadays, e-commerce is very popular because of the information explosion, and text mining is important for information extraction. Users prefer convenient systems that draw on many sources such as web pages, email, social networks and so on. This paper proposes a relevant words extraction method for a car recommendation system based on user email. For relevant words extraction, the system uses a rule-based approach from compiling techniques; a context-free grammar is well suited to relevant words extraction. A Recommendation System (RS) is a popular tool that provides users with recommendations according to their interests. This system implements an efficient recommendation system by using the proposed key extraction algorithm, a Content-Based Filtering (CBF) method and the Jaccard coefficient, which help users who want to buy a car by providing relevant car information.
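
    A minimal sketch of the Jaccard-based matching step, assuming the relevant words have already been extracted from the user's email and from each car description; the car names and word sets below are invented for illustration.

    ```python
    def jaccard(a, b):
        """Jaccard coefficient between two sets of relevant words."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    def recommend(user_words, car_profiles, top_n=3):
        """Rank cars by Jaccard similarity between the user's relevant words and
        each car's content profile (content-based filtering sketch)."""
        scored = [(name, jaccard(user_words, words)) for name, words in car_profiles.items()]
        return sorted(scored, key=lambda x: x[1], reverse=True)[:top_n]

    user_words = {"sedan", "automatic", "hybrid", "low", "price"}
    car_profiles = {
        "car_a": {"sedan", "hybrid", "automatic", "leather"},
        "car_b": {"suv", "diesel", "manual"},
        "car_c": {"sedan", "petrol", "low", "price"},
    }
    print(recommend(user_words, car_profiles))
    ```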

  11. Comparison of edge detection techniques for the automatic information extraction of Lidar data

    Science.gov (United States)

    Li, H.; di, L.; Huang, X.; Li, D.

    2008-05-01

    In recent years, there has been much interest in information extraction from Lidar point cloud data. Many automatic edge detection algorithms have been applied to extracting information from Lidar data. Generally they can be divided into three major categories: early-vision gradient operators, optimal detectors and operators using parametric fitting models. A Lidar point cloud includes intensity information and geographic information, so traditional edge detectors used in remotely sensed images can take advantage of the coordinate information provided by the point data. However, derivation of complex terrain features from Lidar data points depends on the intensity properties and topographic relief of each scene. Take roads for example: in some urban areas, roads have intensity similar to buildings, but their topographic relationship is distinct, so the edge detector for roads in an urban area differs from the detector for buildings. Therefore, in Lidar extraction, each kind of scene has its own suitable edge detector. This paper compares the application of different edge detectors to various terrain areas, in order to determine the proper algorithm for each terrain type. The Canny, EDISON and SUSAN algorithms were applied to data points with the intensity character and topographic relationships of Lidar data. The Lidar data used for testing cover different terrain areas, such as an urban area with many buildings, a rural area with vegetation, an area with slope, and an area with a bridge. Results using these edge detectors are compared to determine which algorithm is suitable for a specific terrain area. Key words: Edge detector, Extraction, Lidar, Point data
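
    As one concrete example of the detectors compared above, a Canny run on a rasterised Lidar intensity grid could look like the sketch below; the rasterisation step is assumed to have been done already, and the two thresholds are placeholders that would be tuned per terrain type.

    ```python
    import cv2
    import numpy as np

    def canny_edges_from_lidar(intensity_grid, low=50, high=150):
        """Run the Canny detector on a rasterised Lidar intensity grid; the thresholds
        would be tuned per terrain type (urban, rural, slope, bridge)."""
        img = intensity_grid.astype(float)
        span = img.max() - img.min() + 1e-9
        img8 = (255 * (img - img.min()) / span).astype(np.uint8)
        return cv2.Canny(img8, low, high)
    ```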

  12. Automatic extraction of soft tissues from 3D MRI head images using model driven analysis

    International Nuclear Information System (INIS)

    This paper presents an automatic extraction system (called TOPS-3D: Top Down Parallel Pattern Recognition System for 3D Images) for soft tissues in 3D MRI head images, using a model driven analysis algorithm. Following the construction of the system TOPS we developed earlier, two concepts have been considered in the design of system TOPS-3D. One is a system with a hierarchical structure of reasoning using model information at a higher level, and the other is a parallel image processing structure used to extract multiple candidate regions for a target entity. The new points of system TOPS-3D are as follows. (1) TOPS-3D is a three-dimensional image analysis system including 3D model construction and 3D image processing techniques. (2) A technique is proposed to increase the connectivity between knowledge processing at the higher level and image processing at the lower level. The technique is realized by applying the opening operation of mathematical morphology, in which a structural model function defined at the higher level by knowledge representation is used directly as the filter function of the opening operation in the lower-level image processing. The system TOPS-3D applied to 3D MRI head images consists of three levels. The first and second levels are the reasoning part, and the third level is the image processing part. In experiments, we applied 5 samples of 3D MRI head images of size 128 x 128 x 128 pixels to the system TOPS-3D to extract the regions of soft tissues such as the cerebrum, cerebellum and brain stem. The experimental results show that the system is robust to variation of the input data thanks to the model information, and that the position and shape of the soft tissues are extracted in correspondence with the anatomical structure. (author)
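
    A minimal sketch of the grey-scale opening step that links the model level to the image level, using SciPy; the structuring-element size standing in for the model-derived structural function is an assumption made purely for illustration.

    ```python
    import numpy as np
    from scipy import ndimage

    def model_driven_opening(volume, struct_size):
        """Grey-scale opening with a structuring element whose size would come from
        the higher-level model (e.g., the expected thickness of a tissue)."""
        footprint = np.ones((struct_size,) * 3)
        return ndimage.grey_opening(volume, footprint=footprint)

    # sketch: suppress structures thinner than ~5 voxels in a small MRI-like volume
    volume = np.random.rand(64, 64, 64)
    opened = model_driven_opening(volume, struct_size=5)
    ```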

  13. An Automatic Unpacking Method for Computer Virus Effective in the Virus Filter Based on Paul Graham's Bayesian Theorem

    Science.gov (United States)

    Zhang, Dengfeng; Nakaya, Naoshi; Koui, Yuuji; Yoshida, Hitoaki

    Recently, the appearance frequency of computer virus variants has increased. Updates to virus information using the normal pattern matching method are increasingly unable to keep up with the speed at which viruses appear, since it takes time to extract the characteristic patterns of each virus. Therefore, a rapid, automatic virus detection algorithm using static code analysis is necessary. However, recent computer viruses are almost always compressed and obfuscated, and it is difficult to determine the characteristics of the binary code from obfuscated computer viruses. Therefore, this paper proposes a method that unpacks compressed computer viruses automatically, independent of the compression format. The proposed method unpacks the common compression formats accurately 80% of the time, while unknown compression formats can also be unpacked. The proposed method is effective against unknown viruses when combined with an existing known-virus detection system such as Paul Graham's Bayesian virus filter.

  14. R and D on automatic modeling methods for Monte Carlo codes FLUKA

    International Nuclear Information System (INIS)

    FLUKA is a fully integrated particle physics Monte Carlo simulation package. It is necessary to create the geometry models before calculation. However, it is time-consuming and error-prone to describe the geometry models manually. This study developed an automatic modeling method which can convert computer-aided design (CAD) geometry models into FLUKA models. The conversion program was integrated into the CAD/image-based automatic modeling program for nuclear and radiation transport simulation (MCAM). Its correctness has been demonstrated. (authors)

  15. AUTOMATIC EXTRACTION OF ROAD SURFACE AND CURBSTONE EDGES FROM MOBILE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    A. Miraliakbari

    2015-05-01

    Full Text Available We present a procedure for automatic extraction of the road surface from geo-referenced mobile laser scanning data. The basic assumption of the procedure is that the road surface is smooth and bounded by curbstones. Two variants of jump detection are investigated for detecting curbstone edges, one based on height differences and the other based on histograms of the height data. Region growing algorithms are proposed which work on the irregular laser point cloud. Two- and four-neighbourhood growing strategies utilize the two height criteria when examining the neighbourhood. Both height criteria rely on an assumption about the minimum height of a low curbstone. Road boundaries with lower or no jumps will not stop the region growing process; in contrast, objects on the road can terminate the process. Therefore further processing, such as bridging gaps between detected road boundary points and removing wrongly detected curbstone edges, is necessary. Road boundaries are finally approximated by splines. Experiments are carried out with a ca. 2 km network of small streets located in the neighbourhood of the University of Applied Sciences in Stuttgart. For accuracy assessment of the extracted road surfaces, ground truth measurements are digitized manually from the laser scanner data. Completeness and correctness values of the region growing result lie between 92% and 95%.
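
    A rough sketch of the height-difference variant of jump detection, assuming the point cloud is available as an N x 3 array; the minimum curb height and neighbourhood radius are illustrative values, not those used in the paper.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    MIN_CURB_HEIGHT = 0.05   # assumed minimum height of a low curbstone, in metres

    def height_jump_points(points, neighbour_radius=0.3):
        """Flag laser points whose height differs from a horizontal neighbour by more
        than the minimum curb height (height-difference variant of jump detection)."""
        xyz = np.asarray(points, dtype=float)
        tree = cKDTree(xyz[:, :2])
        edge_idx = []
        for i, p in enumerate(xyz):
            neighbours = tree.query_ball_point(p[:2], neighbour_radius)
            dz = np.abs(xyz[neighbours, 2] - p[2])
            if dz.max() > MIN_CURB_HEIGHT:
                edge_idx.append(i)
        return np.asarray(edge_idx, dtype=int)
    ```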

  16. Automatic Extraction of Spatio-Temporal Information from Arabic Text Documents

    Directory of Open Access Journals (Sweden)

    Abdelkoui Feriel

    2015-10-01

    Full Text Available Unstructured Arabic text documents are an important source of geographical and temporal information. The possibility of automatically tracking spatio-temporal information, capturing changes relating to events from text documents, is a new challenge in the fields of geographic information retrieval (GIR), temporal information retrieval (TIR) and natural language processing (NLP). There has been a lot of work on the extraction of such information in other languages that use the Latin alphabet, such as English, French, or Spanish; in contrast, the Arabic language is still not well supported in GIR and TIR and needs more research. In this paper, we present an approach that supports automated exploration and extraction of spatio-temporal information from Arabic text documents, in order to capture and model such information before it can be utilized in search and exploration tasks. The system has been successfully tested on 50 documents that include a mixture of types of spatial/temporal information. The result achieved 91.01% recall and 80% precision. This illustrates that our approach is effective and its performance is satisfactory.

  17. AUTOMATIC ROAD EXTRACTION FROM SATELLITE IMAGES USING EXTENDED KALMAN FILTERING AND EFFICIENT PARTICLE FILTERING

    Directory of Open Access Journals (Sweden)

    Jenita Subash

    2011-12-01

    Full Text Available Users of geospatial data in government, military, industry, research, and other sectors need accurate display of roads and other terrain information in areas where there are ongoing operations or locations of interest. Hence, road extraction that is significantly more automated than the employment of costly and scarce human resources has become a challenging technical issue for the geospatial community. An automatic road extraction method based on Extended Kalman Filtering (EKF) and a variable-structured multiple model particle filter (VS-MMPF) applied to satellite images is addressed. EKF traces the median axis of a single road segment, while VS-MMPF traces all road branches initializing at an intersection. In the case of the Local Linearization Particle Filter (LLPF), a large number of particles is used and therefore high computational expense is usually required in order to attain a certain accuracy and robustness. The basic idea is to reduce the whole sampling space of the multiple model system to the mode subspace by marginalization over the target subspace and to choose a better importance function for mode-state sampling. The core of the system is based on profile matching. During the estimation, new reference profiles are generated and stored in the road template memory for future correlation analysis, thus covering the space of road profiles.

  18. Automatic Extraction of Small Spatial Plots from Geo-Registered UAS Imagery

    Science.gov (United States)

    Cherkauer, Keith; Hearst, Anthony

    2015-04-01

    Accurate extraction of spatial plots from high-resolution imagery acquired by Unmanned Aircraft Systems (UAS) is a prerequisite for accurate assessment of experimental plots in many geoscience fields. If the imagery is correctly geo-registered, then it may be possible to accurately extract plots from the imagery based on their map coordinates. To test this approach, a UAS was used to acquire visual imagery of 5 ha of soybean fields containing 6.0 m2 plots in a complex planting scheme. Sixteen artificial targets were set up in the fields before flights, and different spatial configurations of 0 to 6 targets were used as Ground Control Points (GCPs) for geo-registration, resulting in a total of 175 geo-registered image mosaics with a broad range of geo-registration accuracies. Geo-registration accuracy was quantified based on the horizontal Root Mean Squared Error (RMSE) of targets used as checkpoints. Twenty test plots were extracted from the geo-registered imagery. Plot extraction accuracy was quantified based on the percentage of the desired plot area that was extracted. It was found that using 4 GCPs along the perimeter of the field minimized the horizontal RMSE and enabled a plot extraction accuracy of at least 70%, with a mean plot extraction accuracy of 92%. The methods developed are suitable for work in many fields where replicates across time and space are necessary to quantify variability.
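
    The plot extraction accuracy metric described above can be sketched with Shapely as the percentage of the desired plot area covered by the extracted polygon; the plot geometries below are invented to illustrate the computation.

    ```python
    from shapely.geometry import Polygon

    def plot_extraction_accuracy(desired_plot, extracted_plot):
        """Percentage of the desired plot area covered by the extracted polygon."""
        desired = Polygon(desired_plot)
        extracted = Polygon(extracted_plot)
        return 100.0 * desired.intersection(extracted).area / desired.area

    # a 2 m x 3 m plot extracted with a 0.2 m horizontal shift
    desired = [(0, 0), (2, 0), (2, 3), (0, 3)]
    extracted = [(0.2, 0), (2.2, 0), (2.2, 3), (0.2, 3)]
    print(round(plot_extraction_accuracy(desired, extracted), 1))  # -> 90.0
    ```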

  19. METHOD FOR AUTOMATIC ANALYSIS OF WHEAT STRAW PULP CELL TYPES

    Directory of Open Access Journals (Sweden)

    Mikko Karjalainen,

    2012-01-01

    Full Text Available Agricultural residues are receiving increasing interest as renewable raw materials for industrial use. Residues, generally referred to as nonwood materials, are usually complex materials. Wheat straw is one of the most abundant agricultural residues around the world and is therefore available for extensive industrial use. However, more information about its cell types is needed to utilize wheat straw efficiently in pulp and papermaking. The pulp cell types and particle dimensions of wheat straw were studied using an optical microscope and an automatic optical fibre analyzer. The role of various cell types in wheat straw pulping and papermaking is discussed. Wheat straw pulp components were categorized according to particle morphology, and categorization with an automatic optical analyzer was used to determine wheat straw pulp cell types. The results from automatic optical analysis were compared to those from microscopic analysis and a good correlation was found. Automatic optical analysis was found to be a promising tool for the in-depth analysis of wheat straw pulp cell types.

  20. Automatic detecting method of LED signal lamps on fascia based on color image

    Science.gov (United States)

    Peng, Xiaoling; Hou, Wenguang; Ding, Mingyue

    2009-10-01

    Instrument display panels are among the most important parts of automobiles, and automatic detection of LED signal lamps is critical to ensure the reliability of automobile systems. In this paper, an automatic detection method was developed which inspects three aspects: the shape of the LED lamps, the color of the LED lamps, and defect spots inside the lamps. Several hundred fascias were inspected with the automatic detection algorithm. The algorithm is fast enough to satisfy the real-time requirements of the system, and the detection results were demonstrated to be stable and accurate.

  1. The study of automatic brain extraction of basal ganglia based on atlas of Talairach in 18F-FDG PET images

    International Nuclear Information System (INIS)

    Objective: To establish a method which can automatically extract functional areas of the brain basal ganglia. Methods: 18F-fluorodeoxyglucose (FDG) PET images were spatially normalized to the Talairach atlas space through two steps, image registration and image deformation. The functional areas were extracted from the three-dimensional PET images based on the coordinates obtained from the atlas; the caudate and putamen were extracted and rendered, and the grey value of each area was normalized by the whole brain. Results: The normalized ratios of the left caudate head, body and tail were 1.02 ± 0.04, 0.92 ± 0.07 and 0.71 ± 0.03; those of the right were 0.98 ± 0.03, 0.87 ± 0.04 and 0.71 ± 0.01, respectively. The normalized ratios of the left and right putamen were 1.20 ± 0.06 and 1.20 ± 0.04. The mean grey values of the left and right basal ganglia showed no significant difference (P>0.05). Conclusion: The automatic functional area extraction method based on the Talairach atlas is feasible. (authors)

  2. Easy methods for extracting individual regression slopes: Comparing SPSS, R, and Excel

    OpenAIRE

    Roland Pfister; Katharina Schwarz; Robyn Carson; Markus Jancyzk

    2013-01-01

    Three different methods for extracting coefficients of linear regression analyses are presented. The focus is on automatic and easy-to-use approaches for common statistical packages: SPSS, R, and MS Excel / LibreOffice Calc. Hands-on examples are included for each analysis, followed by a brief description of how a subsequent regression coefficient analysis is performed.

  3. Easy methods for extracting individual regression slopes: Comparing SPSS, R, and Excel

    Directory of Open Access Journals (Sweden)

    Roland Pfister

    2013-10-01

    Full Text Available Three different methods for extracting coefficients of linear regression analyses are presented. The focus is on automatic and easy-to-use approaches for common statistical packages: SPSS, R, and MS Excel / LibreOffice Calc. Hands-on examples are included for each analysis, followed by a brief description of how a subsequent regression coefficient analysis is performed.
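
    For comparison, the same per-subject slopes can be obtained with a few lines of NumPy; the subject identifiers and data below are invented, and `np.polyfit` simply stands in for whichever fitting routine one prefers.

    ```python
    import numpy as np

    def individual_slopes(data):
        """Fit y = b0 + b1*x separately for each subject and return the slopes.
        `data` maps subject id -> (x, y) arrays."""
        slopes = {}
        for subject, (x, y) in data.items():
            b1, b0 = np.polyfit(x, y, deg=1)   # coefficients, highest power first
            slopes[subject] = b1
        return slopes

    # two hypothetical subjects with different underlying slopes
    rng = np.random.default_rng(0)
    x = np.arange(10.0)
    data = {
        "s01": (x, 2.0 * x + rng.normal(0, 1, 10)),
        "s02": (x, 0.5 * x + rng.normal(0, 1, 10)),
    }
    print(individual_slopes(data))
    ```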

  4. Extraction of handwritten areas from colored image of bank checks by an hybrid method

    CERN Document Server

    Haboubi, Sofiene

    2011-01-01

    One of the first steps in the realization of an automatic check recognition system is the extraction of the handwritten areas. We propose in this paper a hybrid method to extract these areas. This method is based on digit recognition by Fourier descriptors and different steps of colored image processing. It requires recognition of the bank from its code, which is located in the check marking band, as well as recognition of the handwriting color by the method of histogram differences. The area extraction is then carried out by using some mathematical morphology tools.

  5. A Method of Road Extraction from High-resolution Remote Sensing Images Based on Shape Features

    Directory of Open Access Journals (Sweden)

    LEI Xiaoqi

    2016-02-01

    Full Text Available Road extraction from high-resolution remote sensing images is an important and difficult task. Since remote sensing images include complicated information, methods that extract roads using spectral, texture and linear features have certain limitations. Also, many methods need human intervention to obtain road seeds (semi-automatic extraction), which implies strong human dependence and low efficiency. A road-extraction method which uses image segmentation based on the principle of local gray consistency and integrates shape features is proposed in this paper. Firstly, the image is segmented, and then linear and curved roads are obtained by using several object shape features, so methods that extract only linear roads are rectified. Secondly, road extraction is carried out based on region growing: the road seeds are selected automatically and the road network is extracted. Finally, the extracted roads are regularized by combining the edge information. In the experiments, images containing roads with fairly uniform gray levels as well as poorly illuminated road surfaces were chosen, and the results prove that the method of this study is promising.

  6. Black extraction method using gamut boundary descriptors

    Science.gov (United States)

    Cho, Min-Ki; Kang, Byoung-Ho; Choh, Heui-Keun

    2006-01-01

    Color data conversion between the CMYK and CIEL*a*b* color spaces does not have a direct correspondence; that is, many CMYK combinations can reproduce the same CIEL*a*b* value. When building a LUT converting from CIEL*a*b* to CMYK for a CMYK color printer, a one-to-one correspondence between CMYK and CIEL*a*b* must be established. The proposed method in this paper follows these steps: (1) print and measure the CIEL*a*b* values of a CMYK reference chart, (2) set up parameters that assign the amount of black extraction, (3) generate gamut boundary descriptors for gamut mapping and for black extraction using the CMYK-CIEL*a*b* data under the predetermined black extraction parameters, (4) perform gamut mapping for a given CIEL*a*b* value using the gamut boundary descriptor for gamut mapping, (5) determine the K value of the gamut-mapped CIEL*a*b* value using the gamut boundary descriptors for black extraction. The suggested method determines the K value for a given CIEL*a*b* value using gamut boundary descriptors in CIEL*a*b* color space. As a result, a color printer using this method can compute an accurate black amount and reproduces more consistent CMYK images under different black extraction options.
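
    The paper derives K from gamut boundary descriptors; the snippet below is not that method but a much simpler textbook grey-component-replacement rule, included only to illustrate how a black-extraction parameter trades the common CMY component for K. The `strength` parameter is an assumption playing the role of a black-extraction setting.

    ```python
    def grey_component_replacement(c, m, y, strength=0.8):
        """Move a fraction of the common CMY component into K (simple GCR rule;
        values are in [0, 1])."""
        k = strength * min(c, m, y)
        return c - k, m - k, y - k, k

    print(grey_component_replacement(0.6, 0.5, 0.7))  # -> (0.2, 0.1, 0.3, 0.4)
    ```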

  7. Automatic dynamic mask extraction for PIV images containing an unsteady interface, bubbles, and a moving structure

    Science.gov (United States)

    Dussol, David; Druault, Philippe; Mallat, Bachar; Delacroix, Sylvain; Germain, Grégory

    2016-07-01

    When performing Particle Image Velocimetry (PIV) measurements in complex fluid flows with moving interfaces and a two-phase flow, it is necessary to develop a mask to remove non-physical measurements. This is the case when studying, for example, the complex bubble sweep-down phenomenon observed on oceanographic research vessels. Indeed, in such a configuration, the presence of an unsteady free surface, of a solid-liquid interface and of bubbles in the PIV frame generates numerous laser reflections and therefore spurious velocity vectors. In this note, an image masking process is developed to successively identify the boundaries of the ship and the free surface interface. As the presence of the solid hull surface induces laser reflections, the hull edge contours are simply detected in the first PIV frame and dynamically estimated for consecutive ones. As for the unsteady surface determination, a specific process is implemented as follows: i) edge detection of the gradient magnitude in the PIV frame, ii) extraction of the particles by filtering high-intensity large areas related to the bubbles and/or hull reflections, iii) extraction of the rough region containing these particles and their reflections, iv) removal of these reflections. The unsteady surface is finally obtained with a fifth-order polynomial interpolation. The resulting free surface is successfully validated from the Fourier analysis and by visualizing selected PIV images containing numerous spurious high intensity areas. This paper demonstrates how this data analysis process leads to a PIV image database without reflections and to automatic detection of both the free surface and the rigid body. An application of this new mask is finally detailed, allowing a preliminary analysis of the hydrodynamic flow.
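
    A minimal sketch of the final fifth-order polynomial interpolation of the detected surface points, assuming their pixel coordinates are already available; the sample coordinates are synthetic.

    ```python
    import numpy as np

    def fit_free_surface(x_pix, y_pix, degree=5):
        """Fifth-order polynomial interpolation of the detected surface points,
        returning a callable y(x) usable to mask everything above the interface."""
        coeffs = np.polyfit(x_pix, y_pix, degree)
        return np.poly1d(coeffs)

    # surface points detected along the image columns (illustrative values)
    x = np.linspace(0, 1023, 30)
    y = 200 + 15 * np.sin(x / 150.0)
    surface = fit_free_surface(x, y)
    print(surface(512.0))   # interface row estimated at column 512
    ```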

  8. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery.

    Science.gov (United States)

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-01-01

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of large area threshold prohibits detection of small buildings and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly area and high vegetation. However, the empirical tuning of large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post processing stages including variance, point density and shadow elimination are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using the object and pixel based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roof. When compared with current state-of-the-art building

  9. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    Directory of Open Access Journals (Sweden)

    Fasahat Ullah Siddiqui

    2016-07-01

    Full Text Available Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of large area threshold prohibits detection of small buildings and the use of ground points in generating the building mask prevents detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly area and high vegetation. However, the empirical tuning of large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR height information into intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes with a transparent or opaque roof can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting and all parameters are set automatically from the data. The other post processing stages including variance, point density and shadow elimination are also applied to verify the extracted buildings, where comparatively fewer empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets by using the object and pixel based metrics (completeness, correctness and quality. Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roof. When compared with current state
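
    A rough illustration of the gradient reasoning above: on a rasterised height grid (an assumption for simplicity; the paper works on intensity images built without interpolating point heights), roof planes show low local variance of the height gradient while trees show high variance. The window size and threshold are placeholders.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def gradient_variance(height_grid, win=3):
        """Local variance of the height-gradient magnitude: roof planes change height
        at a near-constant rate (low variance), trees change randomly (high variance)."""
        gy, gx = np.gradient(height_grid.astype(float))
        mag = np.hypot(gx, gy)
        mean = uniform_filter(mag, win)
        mean_sq = uniform_filter(mag ** 2, win)
        return mean_sq - mean ** 2

    def candidate_roof_mask(height_grid, max_variance=0.01):
        """Cells whose gradient varies little are candidate roof (or ground) cells."""
        return gradient_variance(height_grid) < max_variance
    ```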

  10. Automatic Signature Verification: Bridging the Gap between Existing Pattern Recognition Methods and Forensic Science

    OpenAIRE

    Malik, Muhammad Imran

    2015-01-01

    The main goal of this thesis is twofold. First, the thesis aims at bridging the gap between existing Pattern Recognition (PR) methods of automatic signature verification and the requirements for their application in forensic science. This gap, attributed by various factors ranging from system definition to evaluation, prevents automatic methods from being used by Forensic Handwriting Examiners (FHEs). Second, the thesis presents novel signature verification methods developed particularly cons...

  11. Exploring the Potential for Automatic Extraction of Vegetation Phenological Metrics from Traffic Webcams

    Directory of Open Access Journals (Sweden)

    Karon L. Smith

    2013-05-01

    Full Text Available Phenological metrics are of potential value as direct indicators of climate change. Usually they are obtained via either satellite imaging or ground based manual measurements; both are bespoke and therefore costly and have problems associated with scale and quality. An increase in the use of camera networks for monitoring infrastructure offers a means of obtaining images for use in phenological studies, where the only necessary outlay would be for data transfer, storage, processing and display. Here a pilot study is described that uses image data from a traffic monitoring network to demonstrate that it is possible to obtain usable information from the data captured. There are several challenges in using this network of cameras for automatic extraction of phenological metrics, not least, the low quality of the images and frequent camera motion. Although questions remain to be answered concerning the optimal employment of these cameras, this work illustrates that, in principle, image data from camera networks such as these could be used as a means of tracking environmental change in a low cost, highly automated and scalable manner that would require little human involvement.

  12. Calibration of three rainfall simulators with automatic measurement methods

    Science.gov (United States)

    Roldan, Margarita

    2010-05-01

    M. Roldán, I. Martín, F. Martín, S. de Alba, M. Alcázar, F.I. Cermeño (Universidad Politécnica de Madrid; Universidad Complutense de Madrid). Rainfall erosivity is the potential ability of rain to cause erosion. It is a function of the physical characteristics of rainfall (Hudson, 1971). Most expressions describing erosivity are related to kinetic energy or momentum, and so to drop mass or size and fall velocity. Therefore, research on the factors determining erosivity leads to the need to study the relation between fall height and fall velocity for different drop sizes generated in a rainfall simulator (Epema G.F. and Riezebos H.Th, 1983). Rainfall simulators are one of the most used tools in erosion studies and are used to determine fall velocity and drop size; they allow repeated and multiple measurements. The main reason for using rainfall simulation as a research tool is to reproduce in a controlled way the behaviour expected in the natural environment. But on many occasions when simulated rain is used in order to compare it with natural rain, there is a lack of correspondence between natural and simulated rain, and this can introduce some doubt about the validity of the data, because the characteristics of natural rain are not adequately represented in rainfall simulation research (Dunkerley D., 2008). Many rainfall simulations have high rain rates that do not resemble natural rain events, and such measurements are not comparable. And besides, the intensity is related to the kinetic energy which

  13. Hybrid Search Methods for Automatic Discovery of Computational Agent Schemes

    Czech Academy of Sciences Publication Activity Database

    Neruda, Roman

    Vol. 3. Los Alamitos: IEEE Computer Society, 2008 - (Li, Y.; Pasi, G.; Zhang, C.; Cercone, N.; Cao, L.), s. 579-582 ISBN 978-0-7695-3496-1. [WI-IAT 2008 Workshops. IEEE/WIC/ACM 2008 International Conference on Web Intelligence and Intelligent Agent Technology. Sydney (AU), 09.12.2008-12.12.2008] R&D Projects: GA AV ČR 1ET100300419 Institutional research plan: CEZ:AV0Z10300504 Keywords : multi-agent systems * intelligent agents * automatic configurations Subject RIV: IN - Informatics, Computer Science

  14. A new method for extracting domain terminology

    Institute of Scientific and Technical Information of China (English)

    PEI Bing-zhen; CHEN Xiao-rong; HU Yi; LU Ru-zhan

    2009-01-01

    This article proposes a new, general, highly efficient algorithm for extracting domain terminology. This domain-independent algorithm with multiple layers of filters is a hybrid of statistic-oriented and rule-oriented methods. Utilizing the features of domain terminology and characteristics unique to Chinese, the algorithm extracts domain terminology by first generating multi-word unit (MWU) candidates and then filtering the candidates through multiple strategies. Our test results show that this algorithm is feasible and effective.

  15. Semi-automatic building extraction in informal settlements from high-resolution satellite imagery

    Science.gov (United States)

    Mayunga, Selassie David

    The extraction of man-made features from digital remotely sensed images is considered an important step underpinning management of human settlements in any country. Man-made features, and buildings in particular, are required for a variety of applications such as urban planning, creation of geographical information system (GIS) databases and urban city models. Traditional man-made feature extraction methods are very expensive in terms of equipment, are labour intensive, need well-trained personnel and cannot cope with changing environments, particularly in dense urban settlement areas. This research presents an approach for extracting buildings in dense informal settlement areas using high-resolution satellite imagery. The proposed system uses a novel strategy of extracting a building by measuring a single point at the approximate centre of the building. The fine measurement of the building outline is then effected using a modified snake model. The original snake model, on which this framework is based, incorporates an external constraint energy term which is tailored to preserving the convergence properties of the snake model; its application to unstructured objects would negatively affect their actual shapes. The external constraint energy term was therefore removed from the original snake model formulation, giving the ability to cope with the high variability of building shapes in informal settlement areas. The proposed building extraction system was tested on two areas with different situations. The first area was Tungi in Dar Es Salaam, Tanzania, where three sites were tested. This area is characterized by informal settlements, which are established illegally within the city boundaries. The second area was Oromocto in New Brunswick, Canada, where two sites were tested. The Oromocto area is mostly flat and the buildings are constructed using similar materials. Qualitative and quantitative measures were employed to evaluate the accuracy of the results as well as the performance

  16. Automatic fault feature extraction of mechanical anomaly on induction motor bearing using ensemble super-wavelet transform

    Science.gov (United States)

    He, Wangpeng; Zi, Yanyang; Chen, Binqiang; Wu, Feng; He, Zhengjia

    2015-03-01

    Mechanical anomaly is a major failure type of induction motors, and it is of great value to detect the resulting fault features automatically. In this paper, an ensemble super-wavelet transform (ESW) is proposed for investigating vibration features of motor bearing faults. The ESW is put forward based on the combination of the tunable Q-factor wavelet transform (TQWT) and the Hilbert transform, such that fault feature adaptability is enabled. Within ESW, a parametric optimization is performed on the measured signal to obtain a quality TQWT basis that best demonstrates the hidden fault feature. TQWT is introduced as it provides a vast wavelet dictionary with time-frequency localization ability. The parametric optimization is guided by the maximization of the fault feature ratio, which is a new quantitative measure of periodic fault signatures. The fault feature ratio is derived from digital Hilbert demodulation analysis with an insightful quantitative interpretation. The output of ESW on the measured signal is a selected wavelet scale with indicated fault features. It is verified via numerical simulations that ESW can match the oscillatory behavior of signals without being artificially specified. The proposed method is applied to two engineering cases, signals of which were collected from a wind turbine and a steel temper mill, to verify its effectiveness. The processed results demonstrate that the proposed method is more effective in extracting weak fault features of induction motor bearings compared with the Fourier transform, direct Hilbert envelope spectrum, different wavelet transforms and spectral kurtosis.
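
    A minimal sketch of the Hilbert demodulation step underlying the envelope analysis mentioned above, assuming a single vibration channel; the synthetic carrier, modulation frequency and noise level are invented for illustration.

    ```python
    import numpy as np
    from scipy.signal import hilbert
    from scipy.fft import rfft, rfftfreq

    def envelope_spectrum(x, fs):
        """Hilbert demodulation: the amplitude envelope's spectrum concentrates the
        periodic impacts produced by a localized bearing fault."""
        envelope = np.abs(hilbert(x - np.mean(x)))
        envelope -= envelope.mean()
        spectrum = np.abs(rfft(envelope)) / len(envelope)
        freqs = rfftfreq(len(envelope), d=1.0 / fs)
        return freqs, spectrum

    # synthetic signal: 3 kHz resonance amplitude-modulated at a 97 Hz fault rate
    fs = 20_000
    t = np.arange(0, 1.0, 1 / fs)
    x = (1 + 0.8 * np.square(np.sin(np.pi * 97 * t))) * np.sin(2 * np.pi * 3000 * t)
    freqs, spec = envelope_spectrum(x + 0.2 * np.random.randn(t.size), fs)
    print(freqs[np.argmax(spec[1:]) + 1])   # expected near the 97 Hz fault frequency
    ```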

  17. Automatic Registration Method for Fusion of ZY-1-02C Satellite Images

    Directory of Open Access Journals (Sweden)

    Qi Chen

    2013-12-01

    Full Text Available Automatic image registration (AIR has been widely studied in the fields of medical imaging, computer vision, and remote sensing. In various cases, such as image fusion, high registration accuracy should be achieved to meet application requirements. For satellite images, the large image size and unstable positioning accuracy resulting from the limited manufacturing technology of charge-coupled device, focal plane distortion, and unrecorded spacecraft jitter lead to difficulty in obtaining agreeable corresponding points for registration using only area-based matching or feature-based matching. In this situation, a coarse-to-fine matching strategy integrating two types of algorithms is proven feasible and effective. In this paper, an AIR method for application to the fusion of ZY-1-02C satellite imagery is proposed. First, the images are geometrically corrected. Coarse matching, based on scale invariant feature transform, is performed for the subsampled corrected images, and a rough global estimation is made with the matching results. Harris feature points are then extracted, and the coordinates of the corresponding points are calculated according to the global estimation results. Precise matching is conducted, based on normalized cross correlation and least squares matching. As complex image distortion cannot be precisely estimated, a local estimation using the structure of triangulated irregular network is applied to eliminate the false matches. Finally, image resampling is conducted, based on local affine transformation, to achieve high-precision registration. Experiments with ZY-1-02C datasets demonstrate that the accuracy of the proposed method meets the requirements of fusion application, and its efficiency is also suitable for the commercial operation of the automatic satellite data process system.

  18. Automatic Shape-Based Target Extraction for Close-Range Photogrammetry

    Science.gov (United States)

    Guo, X.; Chen, Y.; Wang, C.; Cheng, M.; Wen, C.; Yu, J.

    2016-06-01

    In order to perform precise identification and location of artificial coded targets in natural scenes, a novel design of a circle-based coded target and the corresponding coarse-to-fine extraction algorithm are presented. The designed target separates the target box and coding box completely and has the advantage of rotation invariance. Based on the original target, templates are prepared by three geometric transformations and are used as the input of shape-based template matching. Finally, region growing and parity check methods are used to extract the coded targets as final results. No human involvement is required except for the preparation of templates and the adjustment of thresholds at the beginning, which is conducive to the automation of close-range photogrammetry. The experimental results show that the proposed recognition method for the designed coded target is robust and accurate.

  19. Automatic extraction of the cingulum bundle in diffusion tensor tract-specific analysis. Feasibility study in Parkinson's disease with and without dementia

    International Nuclear Information System (INIS)

    Tract-specific analysis (TSA) measures diffusion parameters along a specific fiber that has been extracted by fiber tracking using manual regions of interest (ROIs), but TSA is limited by its requirement for manual operation, poor reproducibility, and high time consumption. We aimed to develop a fully automated extraction method for the cingulum bundle (CB) and to apply the method to TSA in neurobehavioral disorders such as Parkinson's disease (PD). We introduce the voxel classification (VC) and auto diffusion tensor fiber-tracking (AFT) extraction methods. The VC method directly extracts the CB, skipping the fiber-tracking step, whereas the AFT method uses fiber tracking from automatically selected ROIs. We compared the results of VC and AFT to those obtained by manual diffusion tensor fiber tracking (MFT) performed by 3 operators. We quantified the Jaccard similarity index among the 3 methods in data from 20 subjects (10 normal controls [NC] and 10 patients with Parkinson's disease dementia [PDD]). We used all 3 extraction methods (VC, AFT, and MFT) to calculate the fractional anisotropy (FA) values of the anterior and posterior CB for 15 NC subjects, 15 with PD, and 15 with PDD. The Jaccard index between the results of AFT and MFT, 0.72, was similar to the inter-operator Jaccard index of MFT. However, the Jaccard indices between VC and MFT and between VC and AFT were lower. Consequently, the VC method discriminated among the 3 groups (NC, PD, and PDD), whereas the other methods discriminated only 2 groups (NC and PD or PDD). For TSA in Parkinson's disease, the VC method can be more useful than the AFT and MFT methods for extracting the CB. In addition, the results of the patient data analysis suggest that a reduction of FA in the posterior CB may represent a useful biological index for monitoring PD and PDD. (author)
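
    For reference, the Jaccard similarity index used above to compare the extracted bundles is the intersection-over-union of two voxel sets; a minimal sketch on toy binary masks (the mask shapes are invented) follows.

```python
# Jaccard similarity between two binary voxel masks: |A ∩ B| / |A ∪ B|.
import numpy as np

def jaccard(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 1.0

vc_mask = np.zeros((64, 64, 64), dtype=bool)
vc_mask[20:40, 30:34, 10:50] = True        # toy "VC" extraction
aft_mask = np.zeros_like(vc_mask)
aft_mask[22:40, 30:35, 12:50] = True       # toy "AFT" extraction
print("Jaccard(VC, AFT) = %.2f" % jaccard(vc_mask, aft_mask))
```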

  20. Automatic Sleep Staging using Multi-dimensional Feature Extraction and Multi-kernel Fuzzy Support Vector Machine

    OpenAIRE

    Yanjun Zhang; Xiangmin Zhang; Wenhui Liu; Yuxi Luo; Enjia Yu; Keju Zou; Xiaoliang Liu

    2014-01-01

    This paper employed clinical polysomnographic (PSG) data, mainly including all-night electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) signals of subjects, and adopted the American Academy of Sleep Medicine (AASM) clinical staging manual as the standard to realize automatic sleep staging. The authors extracted eighteen different features of EEG, EOG and EMG in the time and frequency domains to construct the vectors according to the existing literature as well as cl...

  1. Extraction: a system for automatic eddy current diagnosis of steam generator tubes in nuclear power plants

    International Nuclear Information System (INIS)

    Improving the speed and quality of eddy current non-destructive testing of steam generator tubes calls for automating all processes that contribute to diagnosis. This paper describes how we use signal processing, pattern recognition and artificial intelligence to build a software package that is able to automatically provide an efficient diagnosis. (authors). 2 figs., 5 refs

  2. Automatic optical inspection method for soft contact lenses

    Science.gov (United States)

    Chang, Chun-Li; Wu, Wen-Hong; Hwang, Chi-Chun

    2015-07-01

    The manufacture of contact lenses is conventionally labor intensive, requiring manual handling and inspection of the cast lens during production. This paper builds an AOI (automatic optical inspection) system, which includes a suitable light source, camera and image processing algorithms, for contact lens defect inspection. The main defect types are missing lenses and surface defects on the contact lenses. An illumination system with a fixed focal lens and a charge-coupled device (CCD) is used to capture the images of the contact lenses. After the images are captured, an algorithm is employed to check whether any flaws appear in the images. Five kinds of defects can be detected by the designed algorithm. A prototype of the AOI system for contact lens inspection was implemented. The experimental results show that the proposed system is robust for in-line inspection.

  3. A system automatic study for the spent fuel rod cutting and simulated fuel pellet extraction device

    International Nuclear Information System (INIS)

    A fuel pellet extraction device for spent fuel rods is described. The device consists of a cutting device for the spent fuel rods and a decladding device for the fuel pellets. The cutting device cuts a spent fuel rod to an optimal size for fast decladding operation. To design the device, the fuel rod properties were investigated, including the dimensions and materials of the fuel rod tubes and pellets, and various existing cutting methods were reviewed. The design concepts accommodate remote operability for Hot-Cell (radioactive) area operation, and modularization of the device structure is considered for easy maintenance. The decladding device extracts the fuel pellets from the rod cuts. To design this device, existing methods were investigated, including chemical and mechanical decladding. From the viewpoint of fuel recovery and feasibility of implementation, it is concluded that chemical decladding is not appropriate because it produces large amounts of radioactive liquid waste, in spite of its high fuel recovery. Hence, in this paper, the mechanical decladding method is adopted and the device is designed to be applicable to various lengths of rod cuts. As with the cutting device, remote operability and maintainability are considered. Both devices were fabricated and their performance was investigated through a series of experiments. From the experimental results, the optimal operating conditions of the devices were established.

  4. 10 CFR Appendix J1 to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Automatic and Semi-Automatic Clothes...

    Science.gov (United States)

    2010-01-01

    ... 10 Energy 3 2010-01-01 2010-01-01 false Uniform Test Method for Measuring the Energy Consumption... Energy Consumption of Automatic and Semi-Automatic Clothes Washers The provisions of this appendix J1... means for determining the energy consumption of a clothes washer with an adaptive control...

  5. Automatic segmentation of the bone and extraction of the bone-cartilage interface from magnetic resonance images of the knee

    International Nuclear Information System (INIS)

    The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variability. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach were experimentally validated using an MR database of fat-suppressed spoiled gradient recalled images. The (femur, tibia, patella) bone segmentations had median Dice similarity coefficients of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis
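
    The Dice similarity coefficient reported above is twice the overlap divided by the total size of the two segmentations; a minimal sketch on toy 2-D masks (the masks themselves are invented) follows.

```python
# Dice similarity coefficient between an automatic and a reference segmentation mask.
import numpy as np

def dice(seg: np.ndarray, ref: np.ndarray) -> float:
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return float(2.0 * np.logical_and(seg, ref).sum() / denom) if denom else 1.0

auto_seg = np.zeros((128, 128), dtype=bool)
auto_seg[30:90, 40:80] = True              # toy automatic bone segmentation
manual_seg = np.zeros_like(auto_seg)
manual_seg[32:92, 38:78] = True            # toy manual reference segmentation
print("Dice = %.3f" % dice(auto_seg, manual_seg))
```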

  6. Identifying Structures in Social Conversations in NSCLC Patients through the Semi-Automatic extraction of Topical Taxonomies

    Directory of Open Access Journals (Sweden)

    Giancarlo Crocetti

    2016-01-01

    The exploration of social conversations for addressing patients' needs is an important analytical task to which many scholarly publications are contributing, filling the knowledge gap in this area. The main difficulty remains the inability to turn such contributions into pragmatic processes that the pharmaceutical industry can leverage in order to generate insight from social media data, which can be considered one of the most challenging sources of information available today due to its sheer volume and noise. This study is based on the work by Scott Spangler and Jeffrey Kreulen and applies it to identify structure in social media through the extraction of a topical taxonomy able to capture the latent knowledge in social conversations on health-related sites. The mechanism for automatically identifying and generating a taxonomy from social conversations is developed and pressure-tested using public data from media sites focused on the needs of cancer patients and their families. Moreover, a novel method for generating the category labels and determining an optimal number of categories is presented, which extends Spangler and Kreulen's research in a meaningful way. We assume the reader is familiar with taxonomies, what they are and how they are used.

  7. Characterization of polycyclic aromatic hydrocarbons (PAHs) on lime spray dryer (LSD) ash using different extraction methods

    Energy Technology Data Exchange (ETDEWEB)

    Sun, P.; Weavers, L.K.; Taerakul, P.; Walker, H.W. [Ohio State University, Columbus, OH (United States). Dept. of Civil & Environmental Engineering

    2006-01-01

    In this study, traditional Soxhlet, automatic Soxhlet and ultrasonic extraction techniques were employed to determine the speciation and concentration of polycyclic aromatic hydrocarbons (PAHs) on lime spray dryer (LSD) ash samples collected from the baghouse of a spreader stoker boiler. To test the efficiencies of different extraction methods, LSD ash samples were doped with a mixture of 16 US EPA specified PAHs to measure the matrix spike recoveries. The results showed that the spike recoveries of PAHs were different using these three extraction methods with dichloromethane (DCM) as the solvent. Traditional Soxhlet extraction achieved slightly higher recoveries than automatic Soxhlet and ultrasonic extraction. Different solvents including toluene, DCM:acetone (1:1 V/V) and hexane:acetone (1:1 V/V) were further examined to optimize the recovery using ultrasonic extraction. Toluene achieved the highest spike recoveries of PAHs at a spike level of 10 µg kg⁻¹. When the spike level was increased to 50 µg kg⁻¹, the spike recoveries of PAHs also correspondingly increased. Although the type and concentration of PAHs detected on LSD ash samples by different extraction methods varied, the concentration of each detected PAH was consistently low, at µg kg⁻¹ levels.

  8. Characterization of polycyclic aromatic hydrocarbons (PAHs) on lime spray dryer (LSD) ash using different extraction methods.

    Science.gov (United States)

    Sun, Ping; Weavers, Linda K; Taerakul, Panuwat; Walker, Harold W

    2006-01-01

    In this study, traditional Soxhlet, automatic Soxhlet and ultrasonic extraction techniques were employed to determine the speciation and concentration of polycyclic aromatic hydrocarbons (PAHs) on lime spray dryer (LSD) ash samples collected from the baghouse of a spreader stoker boiler. To test the efficiencies of different extraction methods, LSD ash samples were doped with a mixture of 16 US EPA specified PAHs to measure the matrix spike recoveries. The results showed that the spike recoveries of PAHs were different using these three extraction methods with dichloromethane (DCM) as the solvent. Traditional Soxhlet extraction achieved slightly higher recoveries than automatic Soxhlet and ultrasonic extraction. Different solvents including toluene, DCM:acetone (1:1 V/V) and hexane:acetone (1:1 V/V) were further examined to optimize the recovery using ultrasonic extraction. Toluene achieved the highest spike recoveries of PAHs at a spike level of 10 microg kg(-1). When the spike level was increased to 50 microg kg(-1), the spike recoveries of PAHs also correspondingly increased. Although the type and concentration of PAHs detected on LSD ash samples by different extraction methods varied, the concentration of each detected PAH was consistently low, at microg kg(-1) levels. PMID:15990154

  9. Automatic extraction of protein point mutations using a graph bigram association.

    Directory of Open Access Journals (Sweden)

    Lawrence C Lee

    2007-02-01

    Protein point mutations are an essential component of the evolutionary and experimental analysis of protein structure and function. While many manually curated databases attempt to index point mutations, most experimentally generated point mutations and the biological impacts of the changes are described in the peer-reviewed published literature. We describe an application, Mutation GraB (Graph Bigram), that identifies, extracts, and verifies point mutations from the biomedical literature. The principal problem of point mutation extraction is to link the point mutation with its associated protein and organism of origin. Our algorithm uses a graph-based bigram traversal to identify these relevant associations and exploits the Swiss-Prot protein database to verify this information. The graph bigram method differs from other models for point mutation extraction in that it incorporates frequency and positional data of all terms in an article to drive the point mutation-protein association. Our method was tested on 589 articles describing point mutations from the G protein-coupled receptor (GPCR), tyrosine kinase, and ion channel protein families. We evaluated our graph bigram metric against a word-proximity metric for term association on datasets of full-text literature in these three different protein families. Our testing shows that the graph bigram metric achieves a higher F-measure for the GPCRs (0.79 versus 0.76), protein tyrosine kinases (0.72 versus 0.69), and ion channel transporters (0.76 versus 0.74). Importantly, in situations where more than one protein can be assigned to a point mutation and disambiguation is required, the graph bigram metric achieves a precision of 0.84 compared with the word distance metric precision of 0.73. We believe the graph bigram search metric to be a significant improvement over previous search metrics for point mutation extraction and to be applicable to text-mining applications requiring the association of words.
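
    The baseline that the graph bigram metric is compared against, word-proximity association, is easy to sketch; the example text, the protein dictionary and the scoring rule below are toy assumptions and do not reproduce Mutation GraB itself.

```python
# Toy word-proximity baseline: associate each detected point mutation with the
# nearest protein name in the text (higher score = closer in token distance).
import itertools
import re

text = ("The D85N mutation in bacteriorhodopsin reduces proton pumping, "
        "whereas rhodopsin carrying K296E is constitutively active.")
proteins = ["bacteriorhodopsin", "rhodopsin"]          # assumed protein dictionary
mutations = re.findall(r"\b[A-Z]\d+[A-Z]\b", text)     # simple point-mutation pattern

tokens = text.lower().replace(",", " ").replace(".", " ").split()
pos = {}
for i, w in enumerate(tokens):
    pos.setdefault(w, i)                               # keep the first position of each token

scores = {}
for mut, prot in itertools.product(mutations, proteins):
    if mut.lower() in pos and prot in pos:
        scores[(mut, prot)] = 1.0 / (1 + abs(pos[mut.lower()] - pos[prot]))

for mut in mutations:                                  # best protein for each mutation
    best = max(proteins, key=lambda p: scores[(mut, p)])
    print(f"{mut} -> {best} (score {scores[(mut, best)]:.2f})")
```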

  10. A Self-adaptive Threshold Method for Automatic Sleep Stage Classification Using EOG and EMG

    Directory of Open Access Journals (Sweden)

    Li Jie

    2015-01-01

    Sleep is generally divided into three stages: Wake, REM and NREM. The standard sleep monitoring technology is polysomnography (PSG). The inconvenience of PSG monitoring limits its use in some settings. In this study, we developed a new method to classify sleep stages automatically using the electrooculogram (EOG) and electromyogram (EMG). We extracted right and left EOG features and an EMG feature in the time domain, and classified them into strong, weak and none types by calculating self-adaptive thresholds. By combining the time-domain features of the EOG and EMG signals, we classified sleep into Wake, REM and NREM stages. The time-domain features utilized in the method were integral value, variance and energy. The experiment on 30 datasets showed a satisfactory result, with an accuracy of 82.93% for Wake, NREM and REM stage classification; the average accuracy was 83.29% for the Wake stage, 82.11% for the NREM stage and 76.73% for the REM stage.
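
    A toy sketch of the strong/weak/none idea is given below; it derives thresholds from percentiles of a synthetic per-epoch EMG-energy feature, which is an assumption for illustration rather than the exact self-adaptive rule of the paper.

```python
# Classify per-epoch EMG energy into strong / weak / none using thresholds
# derived from the distribution of the recording itself (all numbers invented).
import numpy as np

rng = np.random.default_rng(0)
emg_energy = np.concatenate([rng.normal(5.0, 1.0, 200),    # wake-like epochs
                             rng.normal(1.0, 0.3, 400),    # NREM-like epochs
                             rng.normal(0.2, 0.1, 150)])   # REM-like (atonia) epochs

hi = np.percentile(emg_energy, 75)           # recording-specific "strong" threshold
lo = np.percentile(emg_energy, 25)           # recording-specific "weak" threshold

labels = np.where(emg_energy >= hi, "strong",
          np.where(emg_energy >= lo, "weak", "none"))
print({k: int((labels == k).sum()) for k in ("strong", "weak", "none")})
```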

  11. Automatic and robust method for registration of optical imagery with point cloud data

    Science.gov (United States)

    Wu, Yingdan; Ming, Yang

    2015-12-01

    To address the difficulty of automatic and robust registration of optical imagery with point cloud data, this paper proposes a new method based on SIFT and mutual information (MI). The SIFT features are first extracted and matched, and the result is used to derive the coarse geometric relationship between the optical imagery and the point cloud data. Secondly, the MI-based similarity measure is used to derive the conjugate points, and the RANSAC algorithm is adopted to eliminate erroneous matching points. The procedure of MI matching and mismatch deletion is repeated down to the finest pyramid image level. Using the matching results, the transformation model is determined. The experiments demonstrate the potential of the MI-based measure for the registration of optical imagery with point cloud data, and highlight the feasibility and robustness of the proposed method for automated registration of multi-modal, multi-temporal remote sensing data in a wide range of applications.
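
    The mutual-information similarity measure at the core of the fine-matching step can be computed from a joint intensity histogram; the sketch below uses toy patches and an assumed bin count.

```python
# Mutual information between two image patches from their joint intensity histogram.
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
patch = rng.integers(0, 256, (64, 64)).astype(float)
shifted = np.roll(patch, 3, axis=1) + rng.normal(0, 5, patch.shape)   # related patch
print("MI(patch, shifted) = %.3f" % mutual_information(patch, shifted))
print("MI(patch, noise)   = %.3f" % mutual_information(patch, rng.normal(128, 40, patch.shape)))
```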

  12. Robust methods for automatic image-to-world registration in cone-beam CT interventional guidance

    OpenAIRE

    Dang, H; Otake, Y.; Schafer, S.; Stayman, J. W.; Kleinszig, G.; Siewerdsen, J. H.

    2012-01-01

    Purpose: Real-time surgical navigation relies on accurate image-to-world registration to align the coordinate systems of the image and patient. Conventional manual registration can present a workflow bottleneck and is prone to manual error and intraoperator variability. This work reports alternative means of automatic image-to-world registration, each method involving an automatic registration marker (ARM) used in conjunction with C-arm cone-beam CT (CBCT). The first involves a Known-Model re...

  13. Purging Musical Instrument Sample Databases Using Automatic Musical Instrument Recognition Methods

    OpenAIRE

    Livshin, Arie; Rodet, Xavier

    2009-01-01

    Compilation of musical instrument sample databases requires careful elimination of badly recorded samples and validation of sample classification into correct categories. This paper introduces algorithms for automatic removal of bad instrument samples using Automatic Musical Instrument Recognition and Outlier Detection techniques. Best evaluation results on a methodically contaminated sound database are achieved using the i...

  14. Explodet Project:. Methods of Automatic Data Processing and Analysis for the Detection of Hidden Explosive

    Science.gov (United States)

    Lecca, Paola

    2003-12-01

    The research of the INFN Gruppo Collegato di Trento within the EXPLODET project for humanitarian demining is devoted to the development of a software procedure for the automation of data analysis and decision making about the presence of hidden explosives. Innovative algorithms for estimating the likely background, a system based on neural networks for energy calibration, and simple statistical methods for the qualitative consistency check of the signals are the main parts of the software performing the automatic data processing.

  15. Method for Extracting and Sequestering Carbon Dioxide

    Energy Technology Data Exchange (ETDEWEB)

    Rau, Gregory H.; Caldeira, Kenneth G.

    2005-05-10

    A method and apparatus to extract and sequester carbon dioxide (CO2) from a stream or volume of gas wherein said method and apparatus hydrates CO2, and reacts the resulting carbonic acid with carbonate. Suitable carbonates include, but are not limited to, carbonates of alkali metals and alkaline earth metals, preferably carbonates of calcium and magnesium. Waste products are metal cations and bicarbonate in solution or dehydrated metal salts, which when disposed of in a large body of water provide an effective way of sequestering CO2 from a gaseous environment.

  16. Principles and methods for automatic and semi-automatic tissue segmentation in MRI data.

    Science.gov (United States)

    Wang, Lei; Chitiboi, Teodora; Meine, Hans; Günther, Matthias; Hahn, Horst K

    2016-04-01

    The development of magnetic resonance imaging (MRI) revolutionized both the medical and scientific worlds. A large variety of MRI options have generated a huge amount of image data to interpret. The investigation of a specific tissue in 3D or 4D MR images can be facilitated by image processing techniques, such as segmentation and registration. In this work, we provide a brief review of the principles and methods that are commonly applied to achieve superior tissue segmentation results in MRI. The impacts of MR image acquisition on segmentation outcome and the principles of selecting and exploiting segmentation techniques tailored for specific tissue identification tasks are discussed. In the end, two exemplary applications, breast and fibroglandular tissue segmentation in MRI and myocardium segmentation in short-axis cine and real-time MRI, are discussed to explain the typical challenges that can be posed in practical segmentation tasks in MRI data. The corresponding solutions that are adopted to deal with these challenges of the two practical segmentation tasks are thoroughly reviewed. PMID:26755062

  17. A cell extraction method for oily sediments

    Directory of Open Access Journals (Sweden)

    MichaelLappé

    2011-11-01

    Here we present a modified extraction method in which the hydrocarbons are removed prior to cell extraction. Due to the reduced background fluorescence, the microscopic image becomes clearer, making cell identification and enumeration much easier. Consequently, the resulting cell counts from samples treated according to our new protocol are significantly higher than those treated according to Kallmeyer et al. (2008). We tested different amounts of a variety of solvents for their ability to remove hydrocarbons and found that n-hexane and, in samples containing more biodegraded oils, methanol delivered the best results. However, as solvents also tend to lyse cells, it was important to find the optimum solvent-to-sample ratio, at which hydrocarbon extraction is maximized and cell lysis minimized. A ratio between slurry and solvent of 1:2 to 1:5 delivered the highest cell counts without lysing too many cells. The method provided reproducibly good results on samples from very different environments, both marine and terrestrial.

  18. Method of drill-worm coal extraction

    Energy Technology Data Exchange (ETDEWEB)

    Levkovich, P.Ye.; Bratishcheva, L.L.; Savich, N.S.

    1982-09-01

    The purpose of the invention is to increase extraction productivity. This goal is achieved because, in the proposed drill-worm coal extraction method, wells are drilled from one preparatory shaft to the second with a paired worm shaft on a guide, which is pulled into the drilled well during the reverse run of the shaft, and the drilled well is reinforced with wedge timbering that bulges out during drilling of the next well. According to the proposed method, coal is extracted by drilling wells from preparatory shaft 1 (a haulage gallery in the example). The wells are drilled with a sectional worm shaft equipped with a drilling crown and a guide device; a cantilever attaches the guide device to the main section of the worm shaft. The guide device also includes two horizontally installed, freely rotating cylinders located in front of the drilling crowns in the previously drilled well, and the guide ski. During drilling of the well toward the second preparatory shaft (a ventilation gallery in the example), sets of wedge timbering connected by flexible ties, for example chain segments, are installed on the guide platform. The wedge timbering (including the main set) consists of wedge elements made of inexpensive material, for example slag concrete.

  19. Recent developments in automatic solid-phase extraction with renewable surfaces exploiting flow-based approaches

    DEFF Research Database (Denmark)

    Miró, Manuel; Hartwell, Supaporn Kradtap; Jakmunee, Jaroon; Grudpan, Kate; Hansen, Elo Harald

    2008-01-01

    Solid-phase extraction (SPE) is the most versatile sample-processing method for removal of interfering species and/or analyte enrichment. Although significant advances have been made over the past two decades in automating the entire analytical protocol involving SPE via flow-injection approaches...... overcoming the above shortcomings, so-called bead-injection (BI) analysis, based on automated renewal of the sorbent material per assay exploiting the various generations of flow-injection analysis. It addresses novel instrumental developments for implementing BI and a number of alternatives for online...

  20. Computer Vision Based Automatic Extraction and Thickness Measurement of Deep Cervical Flexor from Ultrasonic Images

    OpenAIRE

    Kwang Baek Kim; Doo Heon Song; Hyun Jun Park

    2016-01-01

    Deep Cervical Flexor (DCF) muscles are important in monitoring and controlling neck pain. While ultrasonographic analysis is useful in this area, it has an intrinsic subjectivity problem. In this paper, we propose automatic DCF extractor/analyzer software based on computer vision. One of the major difficulties in developing such an automatic analyzer is to detect important organs and their boundaries in a very low brightness-contrast environment. Our fuzzy sigma binarization process is one of t...

  1. Method for automatic control rod operation using rule-based control

    International Nuclear Information System (INIS)

    An automatic control rod operation method using rule-based control is proposed. Its features are as follows: (1) a production system to recognize plant events, determine control actions and realize fast inference (fast selection of a suitable production rule); (2) use of the fuzzy control technique to determine quantitative control variables. The method's performance was evaluated by simulation tests of automatic control rod operation during a BWR plant start-up. The results were as follows: (1) the performance, in terms of the stabilization of controlled variables and the time required for reactor start-up, was superior to that of other methods such as PID control and program control; (2) the processing time to select and interpret a suitable production rule, which was the same as that required for event recognition or determination of a control action, was short enough (below 1 s) for real-time control. The results showed that the method is effective for automatic control rod operation. (author)
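
    The coupling of a production rule with a fuzzy-style quantitative setter can be illustrated by the toy fragment below; the rules, the period setpoint and the ramp parameters are invented for illustration and do not reflect the plant logic of the paper.

```python
# Toy production system: the first rule whose condition matches the recognized plant
# state fires, and a fuzzy-style ramp converts the error into a quantitative variable.
def rod_speed(period_error: float) -> float:
    """Fuzzy-style ramp: the larger the period error, the faster the withdrawal."""
    mu_large = min(max((abs(period_error) - 5.0) / 20.0, 0.0), 1.0)  # membership in 'large error'
    return 0.2 + 0.8 * mu_large                                      # speed in arbitrary units

# rules: (condition on recognized plant state, action name, quantitative setter)
RULES = [
    (lambda s: s["period"] > s["period_setpoint"], "withdraw rods",
     lambda s: rod_speed(s["period"] - s["period_setpoint"])),
    (lambda s: s["period"] <= s["period_setpoint"], "hold rods",
     lambda s: 0.0),
]

state = {"period": 120.0, "period_setpoint": 60.0}     # invented values, seconds
for condition, action, setter in RULES:
    if condition(state):                               # fast selection of the first matching rule
        print(f"action: {action}, speed: {setter(state):.2f}")
        break
```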

  2. Using Nanoinformatics Methods for Automatically Identifying Relevant Nanotoxicology Entities from the Literature

    OpenAIRE

    Miguel García-Remesal; Alejandro García-Ruiz; David Pérez-Rey; Diana de la Iglesia; Víctor Maojo

    2013-01-01

    Nanoinformatics is an emerging research field that uses informatics techniques to collect, process, store, and retrieve data, information, and knowledge on nanoparticles, nanomaterials, and nanodevices and their potential applications in health care. In this paper, we have focused on the solutions that nanoinformatics can provide to facilitate nanotoxicology research. For this, we have taken a computational approach to automatically recognize and extract nanotoxicology-related entities from t...

  3. Automatic control logics to eliminate xenon oscillation based on Axial Offsets Trajectory Method

    Energy Technology Data Exchange (ETDEWEB)

    Shimazu, Yoichiro [Mitsubishi Heavy Industries Ltd., Yokohama (Japan). Nuclear Energy Systems Engineering Center

    1996-06-01

    We have proposed the Axial Offset (AO) Trajectory Method for xenon oscillation control in pressurized water reactors. A key feature of this method is that it clearly gives the control operations necessary to eliminate xenon oscillations, so automatic control logics based on it can be expected to be simple and easily realized. We investigated such automatic control logics. The AO Trajectory Method yields a very simple logic when the only goal is to eliminate xenon oscillations; however, additional considerations were necessary to eliminate a xenon oscillation while reaching a given axial power distribution. Another control logic, based on modern control theory, was also studied for comparison of control performance with the new logic. The results show that the automatic control logics based on the AO Trajectory Method are very simple and effective. (author).

  4. Automatic control logics to eliminate xenon oscillation based on Axial Offsets Trajectory Method

    International Nuclear Information System (INIS)

    We have proposed the Axial Offset (AO) Trajectory Method for xenon oscillation control in pressurized water reactors. A key feature of this method is that it clearly gives the control operations necessary to eliminate xenon oscillations, so automatic control logics based on it can be expected to be simple and easily realized. We investigated such automatic control logics. The AO Trajectory Method yields a very simple logic when the only goal is to eliminate xenon oscillations; however, additional considerations were necessary to eliminate a xenon oscillation while reaching a given axial power distribution. Another control logic, based on modern control theory, was also studied for comparison of control performance with the new logic. The results show that the automatic control logics based on the AO Trajectory Method are very simple and effective. (author)

  5. A Method for Modeling the Virtual Instrument Automatic Test System Based on the Petri Net

    Institute of Scientific and Technical Information of China (English)

    MA Min; CHEN Guang-ju

    2005-01-01

    Virtual instruments play an important role in automatic test systems. This paper introduces the composition of a virtual instrument automatic test system, taking as an example a VXIbus-based test software platform developed by the CAT lab of UESTC. A method to model this system based on Petri nets is then proposed. Through this method, test task scheduling can be analyzed to prevent deadlock or resource conflicts. Finally, the feasibility of this method is analyzed.

  6. Method of energy efficiency of residential house by implementing of automatic controlled heat metering system

    Directory of Open Access Journals (Sweden)

    Taisiya Olegovna Zadvinskaya

    2014-08-01

    A method for increasing the efficiency of heat energy use is described in this article. The method is based on the installation of a heat metering system and an automatically controlled domestic heating plant in a residential building. A comparative example is given of the calculation of heat input and of the estimated heat energy cost in a typical residential building, according to the different methods used by the energy supplier to calculate charges, with and without the heat metering system and the automatically controlled domestic heating plant. The payback period of the proposed measures was calculated.

  7. A cell extraction method for oily sediments

    Science.gov (United States)

    Lappé, M.; Kallmeyer, J.

    2012-04-01

    Hydrocarbons can be found in many different habitats and represent an important carbon source for microbes. As fossil fuels, they are an important economic resource and, through natural seepage or accidental release, they can be major pollutants. Oil sands from Alberta, Canada, and samples from the seafloor of the Gulf of Mexico represent typical examples of either natural or anthropogenically affected oily sediments. DNA-specific stains and molecular probes bind to hydrocarbons, causing strong background fluorescence and thereby severely hampering cell enumeration. The cell extraction procedure of Kallmeyer et al. (2008) separates the cells from the sediment matrix, producing a sediment-free cell extract that can then be used for subsequent staining and cell enumeration under a fluorescence microscope. In principle, this technique can also be used to separate cells from oily sediments, but it was not originally optimized for this application and does not provide satisfactory results. Here we present a modified extraction method in which the hydrocarbons are removed prior to cell extraction by a solvent treatment. Due to the reduced background fluorescence the microscopic image becomes clearer, making cell identification and enumeration much easier. Consequently, the resulting cell counts from oily samples treated according to our new protocol were significantly higher than those treated according to Kallmeyer et al. (2008). We tested different amounts of a variety of solvents for their ability to remove hydrocarbons and found that n-hexane and, in samples containing more biodegraded oils, methanol delivered the best results. Because solvents also tend to lyse cells, it was important to find the optimum solvent-to-sample ratio, at which the positive effect of hydrocarbon extraction overcomes the negative effect of cell lysis. A volumetric ratio of 1:2 to 1:5 between a formalin-fixed sediment slurry and solvent delivered the highest cell counts. Extraction

  8. A method for improving the accuracy of automatic indexing of Chinese-English mixed documents

    Institute of Scientific and Technical Information of China (English)

    Yan; ZHAO; Hui; SHI

    2012-01-01

    Purpose: The thrust of this paper is to present a method for improving the accuracy of automatic indexing of Chinese-English mixed documents. Design/methodology/approach: Based on the inherent characteristics of Chinese-English mixed texts and cybernetics theory, we proposed an integrated control method for indexing documents. It consists of "feed-forward control", "in-progress control" and "feed-back control", aiming at improving the accuracy of automatic indexing of Chinese-English mixed documents. An experiment was conducted to investigate the effect of our proposed method. Findings: This method distinguishes Chinese and English documents in grammatical structures and word formation rules. Through the implementation of this method in the three phases of automatic indexing for Chinese-English mixed documents, the results were encouraging. The precision increased from 88.54% to 97.10% and recall improved from 97.37% to 99.47%. Research limitations: The indexing method is relatively complicated and the whole indexing process requires substantial human intervention. Due to pattern matching based on a brute-force (BF) approach, the indexing efficiency has been reduced to some extent. Practical implications: The research is of both theoretical significance and practical value in improving the accuracy of automatic indexing of multilingual documents (not confined to Chinese-English mixed documents). The proposed method will benefit not only the indexing of life science documents but also the indexing of documents in other subject areas. Originality/value: So far, few studies have been published about methods for increasing the accuracy of multilingual automatic indexing. This study will provide insights into the automatic indexing of multilingual documents, especially Chinese-English mixed documents.

  9. Automatic Registration Method for Optical Remote Sensing Images with Large Background Variations Using Line Segments

    Directory of Open Access Journals (Sweden)

    Xiaolong Shi

    2016-05-01

    Image registration is an essential step in the process of image fusion, environment surveillance and change detection. Finding correct feature matches during the registration process proves to be difficult, especially for remote sensing images with large background variations (e.g., images taken pre and post an earthquake or flood). Traditional registration methods based on local intensity probably cannot maintain steady performance, as differences are significant in the same area of the corresponding images, and ground control points are not always available in many disaster images. In this paper, an automatic image registration method based on the line segments on the main shape contours (e.g., coastal lines, long roads and mountain ridges) is proposed for remote sensing images with large background variations, because the main shape contours hold relatively more invariant information. First, a line segment detector called EDLines (Edge Drawing Lines), proposed by Akinlar et al. in 2011, is used to extract line segments from two corresponding images, and a line validation step is performed to remove meaningless and fragmented line segments. Then, a novel line segment descriptor with a new histogram binning strategy, which is robust to global geometrical distortions, is generated for each line segment based on the geometrical relationships, including both the locations and orientations of the remaining line segments relative to it. As a result of the invariance of the main shape contours, correct line segment matches will have similar descriptors and can be obtained by cross-matching among the descriptors. Finally, a spatial consistency measure is used to remove incorrect matches, and the transformation parameters between the reference and sensed images can be determined. Experiments with images from different types of satellite datasets, such as Landsat7, QuickBird, WorldView, and so on, demonstrate that the proposed algorithm is

  10. Unsupervised Threshold for Automatic Extraction of Dolphin Dorsal Fin Outlines from Digital Photographs in DARWIN (Digital Analysis and Recognition of Whale Images on a Network)

    CERN Document Server

    Hale, Scott A

    2012-01-01

    At least two software packages (DARWIN, Eckerd College, and FinScan, Texas A&M) exist to facilitate the identification of cetaceans (whales, dolphins, porpoises) based upon the naturally occurring features along the edges of their dorsal fins. Such identification is useful for biological studies of population, social interaction, migration, etc. The process whereby fin outlines are extracted in current fin-recognition software packages is manually intensive and represents a major user input bottleneck: it is both time consuming and visually fatiguing. This research aims to develop automated methods (employing unsupervised thresholding and morphological processing techniques) to extract cetacean dorsal fin outlines from digital photographs, thereby reducing manual user input. Ideally, automatic outline generation will improve the overall user experience and improve the ability of the software to correctly identify cetaceans. Various transformations from color to gray space were examined to determine whi...
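
    A minimal sketch of the unsupervised-thresholding idea, Otsu binarization followed by morphological cleanup and contour extraction with OpenCV, is given below; the file name and kernel sizes are assumptions, and this is not the DARWIN implementation.

```python
# Otsu threshold + morphological cleanup + largest-contour extraction on a fin photo.
import cv2

img = cv2.imread("fin_photo.jpg", cv2.IMREAD_GRAYSCALE)   # assumed input photograph
blur = cv2.GaussianBlur(img, (5, 5), 0)
# THRESH_OTSU picks the threshold automatically; INV assumes the fin is darker than water
_, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)   # remove speckle
clean = cv2.morphologyEx(clean, cv2.MORPH_CLOSE, kernel)   # bridge small gaps

contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
fin_outline = max(contours, key=cv2.contourArea)           # assume the fin is the largest blob
print("outline points:", len(fin_outline))
```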

  11. Disordered Speech Assessment Using Automatic Methods Based on Quantitative Measures

    Directory of Open Access Journals (Sweden)

    Shrivastav Rahul

    2005-01-01

    Speech quality assessment methods are necessary for evaluating and documenting treatment outcomes of patients suffering from degraded speech due to Parkinson's disease, stroke, or other disease processes. Subjective methods of speech quality assessment are more accurate and more robust than objective methods but are time-consuming and costly. We propose a novel objective measure of speech quality assessment that builds on traditional speech processing techniques such as dynamic time warping (DTW) and the Itakura-Saito (IS) distortion measure. Initial results show that our objective measure correlates well with the more expensive subjective methods.
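
    Dynamic time warping, one of the two building blocks named above, aligns two sequences of unequal length by a cumulative-cost recursion; a plain sketch on toy feature sequences follows.

```python
# Classic dynamic time warping distance between two 1-D feature sequences.
import numpy as np

def dtw_distance(x: np.ndarray, y: np.ndarray) -> float:
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])                       # local distance
            cost[i, j] = d + min(cost[i - 1, j],               # insertion
                                 cost[i, j - 1],               # deletion
                                 cost[i - 1, j - 1])           # match
    return float(cost[n, m])

reference = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
degraded = np.array([1.0, 1.0, 2.0, 3.0, 3.0, 2.0, 1.0])       # same shape, warped in time
print("DTW distance = %.2f" % dtw_distance(reference, degraded))
```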

  12. Automatic diagnostic methods of nuclear reactor collected signals

    International Nuclear Information System (INIS)

    This work is the first phase of an overall study of diagnosis, limited here to problems of monitoring the operating state; it shows what pattern recognition methods can bring at the processing level. The present problem is the detection of control operations. The analysis of the state of the reactor gives a decision which is compared with the history of the control operations; if there is no correspondence, the analyzed state is declared 'abnormal'. The system under analysis is described and the problem to be solved is defined. The Gaussian parametric approach and methods to evaluate the error probability are then treated, followed by non-parametric methods; an on-line detection scheme has been tested experimentally. Finally, a non-linear transformation has been studied to reduce the error probability obtained previously. All the methods presented have been tested and compared using a quality index: the error probability

  13. Automatable Evaluation Method Oriented toward Behaviour Believability for Video Games

    CERN Document Server

    Tencé, Fabien

    2010-01-01

    Classic evaluation methods for believable agents are time-consuming because they involve many humans judging agents. They are well suited to validating work on new believable-behaviour models. However, during implementation, numerous experiments can help to improve agents' believability. We propose a method which aims at assessing how much an agent's behaviour looks like humans' behaviours. By representing behaviours with vectors, we can store data computed for humans and then evaluate as many agents as needed without further need of human judges. We present a test experiment which shows that even a simple evaluation following our method can reveal differences between quite believable agents and humans. This method seems promising although, as shown in our experiment, the analysis of results can be difficult.

  14. Automatic extraction of pectoral muscle in the MLO view of mammograms

    International Nuclear Information System (INIS)

    A mammogram is the standard modality used for breast cancer screening. Computer-aided detection (CAD) approaches are helpful for improving breast cancer detection rates when applied to mammograms. However, automated analysis of a mammogram often leads to inaccurate results in the presence of the pectoral muscle. Therefore, it is necessary to handle pectoral muscle segmentation separately before any further analysis of a mammogram. One difficulty to overcome when segmenting the pectoral muscle is its strong overlap with dense glandular tissue, which hampers its extraction. This paper introduces an automated two-step approach for pectoral muscle extraction. The pectoral region is first estimated through segmentation by means of a modified Fuzzy C-Means clustering algorithm. After contour validation, the final boundary is delineated through iterative refinement of edge points using the average gradient. The proposed method is quite simple to implement and yields accurate results. It was tested on a set of images from the MIAS database and yielded better results than some state-of-the-art approaches. (paper)
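
    The standard fuzzy C-means update equations behind the initial pectoral estimate can be sketched compactly; the code below clusters one-dimensional pixel intensities only and ignores the modifications introduced in the paper.

```python
# Plain fuzzy C-means on pixel intensities: alternate weighted-center and membership updates.
import numpy as np

def fcm(values: np.ndarray, c: int = 3, m: float = 2.0, iters: int = 50) -> np.ndarray:
    """Return an (n_samples, c) fuzzy membership matrix for 1-D intensity values."""
    rng = np.random.default_rng(0)
    u = rng.random((len(values), c))
    u /= u.sum(axis=1, keepdims=True)                  # normalize initial memberships
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ values) / um.sum(axis=0)     # weighted cluster centers
        dist = np.abs(values[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / dist ** (2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)              # standard FCM membership update
    return u

rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(40, 5, 500),       # dark background
                         rng.normal(120, 10, 500),     # breast tissue
                         rng.normal(200, 8, 200)])     # bright pectoral-like region
labels = fcm(pixels).argmax(axis=1)
print("pixels assigned to the brightest cluster:", int((labels == labels[-1]).sum()))
```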

  15. Method to extract oil from oil shale

    International Nuclear Information System (INIS)

    Oil is extracted from ground, hot oil shale by treatment with an organic liquid, e.g. gas oil, at 350 to 410°C and elevated pressure in the presence of hydrogen. The admixed organic liquid is separated from the oil contained in the oil shale in an extraction vessel, with benzine as the extracting agent. The mixture of the extracted components of the oil shale and the extracting agent is dried in a drying vessel with low-pressure steam. (HGOE)

  16. Automatic ECG wave extraction in long-term recordings using Gaussian mesa function models and nonlinear probability estimators.

    Science.gov (United States)

    Dubois, Rémi; Maison-Blanche, Pierre; Quenet, Brigitte; Dreyfus, Gérard

    2007-12-01

    This paper describes the automatic extraction of the P, Q, R, S and T waves of electrocardiographic recordings (ECGs), through the combined use of a new machine-learning algorithm termed generalized orthogonal forward regression (GOFR) and of a specific parameterized function termed Gaussian mesa function (GMF). GOFR breaks up the heartbeat signal into Gaussian mesa functions, in such a way that each wave is modeled by a single GMF; the model thus generated is easily interpretable by the physician. GOFR is an essential ingredient in a global procedure that locates the R wave after some simple pre-processing, extracts the characteristic shape of each heart beat, assigns P, Q, R, S and T labels through automatic classification, discriminates normal beats (NB) from abnormal beats (AB), and extracts features for diagnosis. The efficiency of the detection of the QRS complex, and of the discrimination of NB from AB, is assessed on the MIT and AHA databases; the labeling of the P and T wave is validated on the QTDB database. PMID:17997186

  17. A method for the automatic reconstruction of fetal cardiac signals from magnetocardiographic recordings

    Energy Technology Data Exchange (ETDEWEB)

    Mantini, D [Department of Informatics and Automation Engineering, Marche Polytechnic University, Ancona (Italy); Alleva, G [Department of Clinical Sciences and Bio-imaging, Chieti University, Chieti (Italy); Comani, S [Department of Clinical Sciences and Bio-imaging, Chieti University, Chieti (Italy); ITAB-Institute of Advanced Biomedical Technologies, University Foundation ' G. D' Annunzio, Chieti University, Chieti (Italy)

    2005-10-21

    Fetal magnetocardiography (fMCG) allows monitoring the fetal heart function through algorithms able to retrieve the fetal cardiac signal, but no standardized automatic model has become available so far. In this paper, we describe an automatic method that restores the fetal cardiac trace from fMCG recordings by means of a weighted summation of fetal components separated with independent component analysis (ICA) and identified through dedicated algorithms that analyse the frequency content and temporal structure of each source signal. Multichannel fMCG datasets of 66 healthy and 4 arrhythmic fetuses were used to validate the automatic method with respect to a classical procedure requiring the manual classification of fetal components by an expert investigator. ICA was run with input clusters of different dimensions to simulate various MCG systems. Detection rates, true negative and false positive component categorization, QRS amplitude, standard deviation and signal-to-noise ratio of reconstructed fetal signals, and real and per cent QRS differences between paired fetal traces retrieved automatically and manually were calculated to quantify the performances of the automatic method. Its robustness and reliability, particularly evident with the use of large input clusters, might increase the diagnostic role of fMCG during the prenatal period.
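
    The component-separation step can be imitated in a few lines with scikit-learn's FastICA as a stand-in for the decomposition used in the paper; the mixing matrix, heart rates and the frequency-based component pick below are all invented for illustration.

```python
# ICA separation of a synthetic two-source mixture (slow maternal-like trace and a
# faster, weaker fetal-like trace), then a naive frequency-based component pick.
import numpy as np
from sklearn.decomposition import FastICA

fs = 1000                                              # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
maternal = np.sin(2 * np.pi * 1.2 * t) ** 9            # ~72 bpm, sharpened peaks
fetal = 0.3 * np.sin(2 * np.pi * 2.3 * t) ** 9         # ~138 bpm, weaker amplitude
mixing = np.array([[1.0, 0.4], [0.8, 0.7], [0.2, 1.0]])
channels = mixing @ np.vstack([maternal, fetal]) + 0.05 * np.random.randn(3, t.size)

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(channels.T).T              # separated component signals

# simple automatic pick: the component whose dominant frequency is higher is "fetal"
dominant_bin = [int(np.abs(np.fft.rfft(s)).argmax()) for s in sources]
print("dominant frequency bins:", dominant_bin,
      "-> fetal component index:", int(np.argmax(dominant_bin)))
```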

  18. Combining C-value and Keyword Extraction Methods for Biomedical Terms Extraction

    OpenAIRE

    Lossio-Ventura, Juan Antonio; Jonquet, Clement; Roche, Mathieu; Teisseire, Maguelonne

    2013-01-01

    The objective of this work is to extract and to rank biomedical terms from free text. We present new extraction methods that use linguistic patterns specialized for the biomedical field, and use term extraction measures, such as C-value, and keyword extraction measures, such as Okapi BM25, and TFIDF. We propose several combinations of these measures to improve the extraction and ranking process. Our experiments show that an appropriate harmonic mean of C-value used with keyword extraction mea...
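
    One plausible reading of the combination idea, pairing a C-value-style termhood score with a keyword score through a harmonic mean, is sketched below; the term statistics, the TF-IDF numbers and the unigram length adjustment are toy assumptions rather than the paper's exact formulas.

```python
# Combine a C-value-style termhood score with an assumed keyword (TF-IDF-like) score
# via a harmonic mean, so that frequent-but-generic candidates are demoted.
import math

# candidate term -> (frequency, summed frequency in longer containing terms, # such terms)
stats = {
    "cell cycle":            (42, 10, 2),
    "cell cycle regulation": (10, 0, 0),
    "regulation":            (55, 30, 5),
}
tfidf = {"cell cycle": 0.31, "cell cycle regulation": 0.48, "regulation": 0.12}  # assumed scores

def c_value(term: str) -> float:
    freq, nested_freq, n_longer = stats[term]
    base = math.log2(max(len(term.split()), 2))        # length factor; max() keeps unigrams nonzero
    if n_longer == 0:
        return base * freq
    return base * (freq - nested_freq / n_longer)

def combined(term: str) -> float:
    c, k = c_value(term), tfidf[term]
    return 2 * c * k / (c + k) if (c + k) else 0.0     # harmonic mean of the two measures

for term in sorted(stats, key=combined, reverse=True):
    print(f"{term:25s} C-value={c_value(term):6.2f}  combined={combined(term):6.2f}")
```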

  19. An unsupervised text mining method for relation extraction from biomedical literature.

    Directory of Open Access Journals (Sweden)

    Changqin Quan

    The wealth of interaction information provided in biomedical articles motivated the implementation of text mining approaches to automatically extract biomedical relations. This paper presents an unsupervised method based on pattern clustering and sentence parsing to deal with biomedical relation extraction. The pattern clustering algorithm is based on a polynomial kernel method, which identifies interaction words from unlabeled data; these interaction words are then used in relation extraction between entity pairs. Dependency parsing and phrase structure parsing are combined for relation extraction. Based on the semi-supervised KNN algorithm, we extend the proposed unsupervised approach to a semi-supervised approach by combining pattern clustering, dependency parsing and phrase structure parsing rules. We evaluated the approaches on two different tasks: (1) protein-protein interaction extraction, and (2) gene-suicide association extraction. The evaluation of task (1) on the benchmark dataset (AImed corpus) showed that our proposed unsupervised approach outperformed three supervised methods, which are rule-based, SVM-based, and kernel-based, respectively. The proposed semi-supervised approach is superior to the existing semi-supervised methods. The evaluation of gene-suicide association extraction on a smaller dataset from the Genetic Association Database and a larger dataset from publicly available PubMed showed that the proposed unsupervised and semi-supervised methods achieved much higher F-scores than a co-occurrence-based method.

  20. Automatic extraction analysis of the anatomical functional area for normal brain 18F-FDG PET imaging

    International Nuclear Information System (INIS)

    Using self-designed software for automatic extraction of brain functional areas, the grey-scale distribution of 18F-FDG imaging and the relationships between 18F-FDG accumulation in each brain anatomic functional area and the injected 18F-FDG dose, blood glucose level, age, etc., were studied. According to the Talairach coordinate system, after rotation, shift and plastic deformation, the 18F-FDG PET images were registered to the Talairach coordinate atlas, and the average grey-scale ratios between each brain anatomic functional area and the whole brain were calculated. Furthermore, the relationships between 18F-FDG accumulation in every brain anatomic functional area and the injected dose, glucose level and age were tested using a multiple stepwise regression model. After image registration, smoothing and extraction, the main cerebral cortical areas of the 18F-FDG PET brain images could be successfully localized and extracted, such as the frontal lobe, parietal lobe, occipital lobe, temporal lobe, cerebellum, brain ventricles, thalamus and hippocampus. The average ratios to the inner reference for every brain anatomic functional area were 1.01 ± 0.15. By multiple stepwise regression, the grey scale of all brain functional areas except the thalamus and hippocampus was negatively correlated with age, but showed no correlation with blood glucose or injected dose in any area. For 18F-FDG PET imaging, the brain functional area extraction program could automatically delineate most of the cerebral cortical areas and support brain blood flow and metabolic studies, but extraction of more detailed areas needs further investigation

  1. Statistical and neural net methods for automatic glaucoma diagnosis determination

    Czech Academy of Sciences Publication Activity Database

    Pluháček, F.; Pospíšil, Jaroslav

    2004-01-01

    Roč. 1, č. 2 (2004), s. 12-24. ISSN 1644-3608 Institutional research plan: CEZ:AV0Z1010921 Keywords : glaucoma * diagnostic methods * pallor * image analysis * statistical evaluation Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.375, year: 2004

  2. A semi-automatic method for peak and valley detection in free-breathing respiratory waveforms

    International Nuclear Information System (INIS)

    The existing commercial software often inadequately determines respiratory peaks for patients in respiration correlated computed tomography. A semi-automatic method was developed for peak and valley detection in free-breathing respiratory waveforms. First the waveform is separated into breath cycles by identifying intercepts of a moving average curve with the inspiration and expiration branches of the waveform. Peaks and valleys were then defined, respectively, as the maximum and minimum between pairs of alternating inspiration and expiration intercepts. Finally, automatic corrections and manual user interventions were employed. On average for each of the 20 patients, 99% of 307 peaks and valleys were automatically detected in 2.8 s. This method was robust for bellows waveforms with large variations
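
    The intercept-based segmentation described above can be prototyped in a few lines; the sampling rate, window length and synthetic waveform below are assumptions for illustration only.

```python
# Peak/valley detection sketch: a moving average crosses the respiratory waveform,
# and each peak (valley) is the extremum between consecutive crossings.
import numpy as np

fs = 25                                               # assumed samples per second
t = np.arange(0, 60, 1 / fs)
resp = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.random.randn(t.size)   # ~15 breaths/min

window = int(4 * fs)                                  # roughly one breath period
moving_avg = np.convolve(resp, np.ones(window) / window, mode="same")
above = resp > moving_avg
crossings = np.flatnonzero(np.diff(above.astype(int)) != 0)

peaks, valleys = [], []
for start, end in zip(crossings[:-1], crossings[1:]):
    segment = slice(start, end + 1)
    if above[start + 1]:                              # inspiration branch -> peak
        peaks.append(start + int(np.argmax(resp[segment])))
    else:                                             # expiration branch -> valley
        valleys.append(start + int(np.argmin(resp[segment])))
print(f"{len(peaks)} peaks, {len(valleys)} valleys detected")
```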

  3. Automatic teleaudiometry: a low cost method to auditory screening

    Directory of Open Access Journals (Sweden)

    Campelo, Victor Eulálio Sousa

    2010-03-01

    Introduction: The benefits of auditory screening have been demonstrated; however, such programs have been restricted to large centers. Objectives: (a) To develop a remote auditory screening method; (b) to test its accuracy and compare it with the screening audiometry test (AV). Method: The teleaudiometry (TA) consists of purpose-built software installed on a computer with TDH39 headphones. A serial study was carried out on 73 individuals between 17 and 50 years of age, 57% of them female, randomly selected among patients and companions of the Hospital das Clínicas. After answering a symptom questionnaire and undergoing otoscopy, the individuals performed the TA and AV tests, with a 20 dB sweep at the frequencies of 1, 2 and 4 kHz following the ASHA (1997) protocol, and the gold-standard pure-tone audiometry test in a soundproof booth, in random order. Results: The TA lasted on average 125±11 s and the AV 65±18 s. 69 individuals (94.5%) found the TA easy or very easy to perform and 61 (83.6%) considered the AV easy or very easy. The accuracy results of the TA and AV were, respectively: sensitivity (86.7% / 86.7%), specificity (75.9% / 72.4%), negative predictive value (95.7% / 95.5%) and positive predictive value (48.1% / 55.2%). Conclusion: Teleaudiometry has shown itself to be a good option for auditory screening, with accuracy close to that of screening audiometry. Compared with that method, teleaudiometry presented similar sensitivity, higher specificity and negative predictive value, a longer test time and a lower positive predictive value.

  4. Development of automatic extraction of the corpus callosum from magnetic resonance imaging of the head and examination of the early dementia objective diagnostic technique in feature analysis

    International Nuclear Information System (INIS)

    We examined the objective diagnosis of dementia based on changes in the corpus callosum. We examined midsagittal head MR images of 17 early dementia patients (2 men and 15 women; mean age, 77.2±3.3 years) and 18 healthy elderly controls (2 men and 16 women; mean age, 73.8±6.5 years), 35 subjects altogether. First, the corpus callosum was automatically extracted from the MR images. Next, the early dementia patients were compared with the healthy elderly controls using 5 features from straight-line methods, 5 features from the run-length matrix, and 6 features from the co-occurrence matrix of the corpus callosum. Automatic extraction of the corpus callosum showed an accuracy rate of 84.1±3.7%. A statistically significant difference was found in 6 of the 16 features between early dementia patients and healthy elderly controls. Discriminant analysis using the 6 features demonstrated a sensitivity of 88.2% and specificity of 77.8%, with an overall accuracy of 82.9%. These results indicate that feature analysis based on changes in the corpus callosum can be used as an objective diagnostic technique for early dementia. (author)
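
    Co-occurrence-matrix texture features of the kind listed above can be computed with scikit-image; the toy patch below merely stands in for an extracted corpus callosum region.

```python
# Grey-level co-occurrence matrix (GLCM) texture features on a toy image patch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
patch = rng.normal(128, 20, (40, 80)).clip(0, 255).astype(np.uint8)   # toy ROI

glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```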

  5. A new method for the automatic calculation of prosody

    International Nuclear Information System (INIS)

    An algorithm is presented for the calculation of the prosodic parameters for speech synthesis. It uses the melodic patterns, composed of rising and falling slopes, suggested by G. CAELEN, and rests on: (1) an analysis into units of meaning to determine a melodic pattern; (2) the calculation of the numeric values for the prosodic variations of each syllable; (3) the use of a table of vocalic values for the three parameters for each vowel according to the consonantal environment, and of a table of standard durations for consonants. This method was applied in the 'SARA' synthesis program with satisfactory results. (author)

  6. Comparison of Two Methods for Automatic Brain Morphometry Analysis

    Directory of Open Access Journals (Sweden)

    D. Schwarz

    2011-12-01

    Full Text Available The methods of computational neuroanatomy are widely used; the data on their individual strengths and limitations from direct comparisons are, however, scarce. The aim of the present study was a direct comparison of deformation-based morphometry (DBM) based on high-resolution spatial transforms with the widely used voxel-based morphometry (VBM) analysis based on segmented high-resolution images. We performed DBM and VBM analyses on simulated volume changes in a set of 20 3-D MR images, compared to 30 MR images where only random spatial transforms were introduced. The ability of the two methods to detect regions with the simulated volume changes was determined using an overlay index together with the ground truth regions of the simulations; the precision of the detection in space was determined using distance measures between the centers of detected and simulated regions. DBM was able to detect all the regions with simulated local volume changes with high spatial precision. On the other hand, VBM detected only changes in the vicinity of the largest simulated change, with a poor overlap between the detected changes and the ground truth. Taken together, we suggest that the analysis of high-resolution deformation fields is more convenient, sensitive, and precise than voxel-wise analysis of tissue-segmented images.
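
    The overlay index and centre-distance measures used in the comparison can be sketched as follows for binary 3-D masks. This is a generic Dice-style formulation assumed for illustration; it is not necessarily the exact index used by the authors.

```python
import numpy as np

def overlay_index(detected, truth):
    """Dice-style overlap between two binary 3-D masks."""
    detected = detected.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(detected, truth).sum()
    return 2.0 * inter / (detected.sum() + truth.sum())

def centre_distance(detected, truth):
    """Euclidean distance (in voxels) between the centroids of two masks."""
    c1 = np.array(np.nonzero(detected)).mean(axis=1)
    c2 = np.array(np.nonzero(truth)).mean(axis=1)
    return float(np.linalg.norm(c1 - c2))
```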

  7. An automatic form error evaluation method for characterizing micro-structured surfaces

    Science.gov (United States)

    Yu, D. P.; Zhong, X.; Wong, Y. S.; Hong, G. S.; Lu, W. F.; Cheng, H. L.

    2011-01-01

    Ultra-precision micro-structured surfaces are becoming increasingly important in a range of application areas, including engineering optics, biological products, metrology artifacts, data storage, etc. However, there is a lack of surface characterization methods for the micro-structured surfaces with sub-nanometer accuracy. Although some research studies have been conducted on 3D surface characterization, most of them are on freeform surfaces, which are difficult to be applied on the micro-structured surfaces because of their limited characterization accuracy and the repeated surface feature patterns in the micro-structured surfaces. In this paper, an automatic form error evaluation method (AFEEM) is presented to characterize the form accuracy of the micro-structured surfaces. The machined micro-structured surface can be measured by any 3D high resolution measurement instrument. The measurement data are converted and pre-processed for the AFEEM, which mainly consists of a coarse registration and a fine registration process. The coarse registration estimates an initial position of the measured surface for the fine registration by extracting the most perceptually salient points in the surfaces, computing the integral volume descriptor for each salient point, searching for the best triplet-point correspondence and calculating the coarse registration matrix. The fine registration aligns the measured surface to the designed surface by a proposed adaptive iterative closest point algorithm to guarantee sub-nanometer accuracy for surface characterization. A series of computer simulations and experimental studies were conducted to verify the AFEEM. Results demonstrate the accuracy and effectiveness of the AFEEM for characterizing the micro-structured surfaces.
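
    The fine-registration step described above is an adaptive iterative closest point algorithm. The sketch below shows a plain ICP loop (k-d-tree nearest neighbours plus a closed-form rigid fit), which illustrates the principle but omits the adaptive weighting and sub-nanometre refinements of the AFEEM; names and parameters are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Closed-form least-squares rigid transform mapping src onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(measured, designed, iterations=50):
    """Plain ICP: align measured surface points to the designed surface."""
    tree = cKDTree(designed)
    current = measured.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)            # closest designed point
        R, t = best_rigid_transform(current, designed[idx])
        current = current @ R.T + t
    return current
```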

  8. An automatic form error evaluation method for characterizing micro-structured surfaces

    International Nuclear Information System (INIS)

    Ultra-precision micro-structured surfaces are becoming increasingly important in a range of application areas, including engineering optics, biological products, metrology artifacts, data storage, etc. However, there is a lack of surface characterization methods for the micro-structured surfaces with sub-nanometer accuracy. Although some research studies have been conducted on 3D surface characterization, most of them are on freeform surfaces, which are difficult to be applied on the micro-structured surfaces because of their limited characterization accuracy and the repeated surface feature patterns in the micro-structured surfaces. In this paper, an automatic form error evaluation method (AFEEM) is presented to characterize the form accuracy of the micro-structured surfaces. The machined micro-structured surface can be measured by any 3D high resolution measurement instrument. The measurement data are converted and pre-processed for the AFEEM, which mainly consists of a coarse registration and a fine registration process. The coarse registration estimates an initial position of the measured surface for the fine registration by extracting the most perceptually salient points in the surfaces, computing the integral volume descriptor for each salient point, searching for the best triplet-point correspondence and calculating the coarse registration matrix. The fine registration aligns the measured surface to the designed surface by a proposed adaptive iterative closest point algorithm to guarantee sub-nanometer accuracy for surface characterization. A series of computer simulations and experimental studies were conducted to verify the AFEEM. Results demonstrate the accuracy and effectiveness of the AFEEM for characterizing the micro-structured surfaces

  9. A new uranium geochemical exploration method. Humic acid extraction method

    International Nuclear Information System (INIS)

    The humic acid extraction method is a new one to carry out ore prospecting by using the element content in existing form or the so-called phase content. This paper expounds the association between metallic elements such as uranium that is combined with humic acid and uranium mineralization at depth. The adsorption mechanism of heavy metallic elements by humic acid is further analysed. The technical method of testing is presented and detailed introduction to the case histories on concealed uranium deposit exploration is given

  10. EnvMine: A text-mining system for the automatic extraction of contextual information

    Directory of Open Access Journals (Sweden)

    de Lorenzo Victor

    2010-06-01

    Full Text Available Abstract Background For ecological studies, it is crucial to count on adequate descriptions of the environments and samples being studied. Such a description must be made in terms of their physicochemical characteristics, allowing a direct comparison between different environments that would be difficult to do otherwise. The characterization must also include the precise geographical location, to make possible the study of geographical distributions and biogeographical patterns. Currently, there is no schema for annotating these environmental features, and these data have to be extracted from textual sources (published articles). So far, this had to be performed by manual inspection of the corresponding documents. To facilitate this task, we have developed EnvMine, a set of text-mining tools devoted to retrieving contextual information (physicochemical variables and geographical locations) from textual sources of any kind. Results EnvMine is capable of retrieving the physicochemical variables cited in the text by means of the accurate identification of their associated units of measurement. In this task, the system achieves a recall (percentage of items retrieved) of 92% with less than 1% error. A Bayesian classifier was also tested for distinguishing parts of the text describing environmental characteristics from others dealing with, for instance, experimental settings. Regarding the identification of geographical locations, the system takes advantage of existing databases such as GeoNames to achieve 86% recall with 92% precision. The identification of a location also includes the determination of its exact coordinates (latitude and longitude), thus allowing the calculation of distances between the individual locations. Conclusion EnvMine is a very efficient method for extracting contextual information from different text sources, like published articles or web pages. This tool can help in determining the precise location and physicochemical characteristics of the environments and samples described in such sources.

  11. Green technology approach towards herbal extraction method

    Science.gov (United States)

    Mutalib, Tengku Nur Atiqah Tengku Ab; Hamzah, Zainab; Hashim, Othman; Mat, Hishamudin Che

    2015-05-01

    The aim of the present study was to compare the maceration method for selected herbs using green and non-green solvents. Water and d-limonene are green solvents, while chloroform and ethanol are non-green solvents. The selected herbs were Clinacanthus nutans leaf and stem, Orthosiphon stamineus leaf and stem, Sesbania grandiflora leaf, Pluchea indica leaf, Morinda citrifolia leaf and Citrus hystrix leaf. The extracts were compared by determination of total phenolic content. Total phenols were analyzed using a spectrophotometric technique based on the Folin-Ciocalteu reagent. Gallic acid was used as the standard compound and the total phenols were expressed as mg/g gallic acid equivalent (GAE). The most suitable and effective solvent was water, which produced the highest total phenolic content compared to the other solvents. Among the selected herbs, Orthosiphon stamineus leaves contained the highest total phenols, at 9.087 mg/g.

  12. On-line dynamic fractionation and automatic determination of inorganic phosphorus in environmental solid substrates exploiting sequential injection microcolumn extraction and flow injection analysis

    International Nuclear Information System (INIS)

    Sequential injection microcolumn extraction (SI-MCE) based on the implementation of a soil-containing microcartridge as external reactor in a sequential injection network is, for the first time, proposed for dynamic fractionation of macronutrients in environmental solids, as exemplified by the partitioning of inorganic phosphorus in agricultural soils. The on-line fractionation method capitalises on the accurate metering and sequential exposure of the various extractants to the solid sample by application of programmable flow as precisely coordinated by a syringe pump. Three different soil phase associations for phosphorus, that is, exchangeable, Al- and Fe-bound, and Ca-bound fractions, were elucidated by accommodation in the flow manifold of the three steps of the Hieltjes-Lijklema (HL) scheme involving the use of 1.0 M NH4Cl, 0.1 M NaOH and 0.5 M HCl, respectively, as sequential leaching reagents. The precise timing and versatility of SI for tailoring various operational extraction modes were utilized for investigating the extractability and the extent of phosphorus re-distribution for variable partitioning times. Automatic spectrophotometric determination of soluble reactive phosphorus in soil extracts was performed by a flow injection (FI) analyser based on the Molybdenum Blue (MB) chemistry. The 3σ detection limit was 0.02 mg P L-1 while the linear dynamic range extended up to 20 mg P L-1 regardless of the extracting media. Despite the variable chemical composition of the HL extracts, a single FI set-up was assembled with no need for either manifold re-configuration or modification of chemical composition of reagents. The mobilization of trace elements, such as Cd, often present in grazed pastures as a result of the application of phosphate fertilizers, was also explored in the HL fractions by electrothermal atomic absorption spectrometry

  13. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction

    CERN Document Server

    Jonnalagadda, Siddhartha

    2011-01-01

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. The tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested the impact of our tool on the task of PPI extraction: it improved the f-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.

  14. Method and apparatus for automatically tracking a workpiece surface. [Patents

    Science.gov (United States)

    Not Available

    1981-02-03

    Laser cutting concepts and apparatus have been developed for cutting the shroud of the core fuel subassemblies. However, much care must be taken with the accuracy of the cutting, since the fuel rods within the shroud often become warped and are forced into direct contact with the shroud in random regions. Thus, in order to cut the nuclear fuel rod shroud accurately so as not to puncture the cladding of the fuel rods, and to ensure optimal cutting efficiency and performance, the focal point of the beam needs to be maintained accurately at the workpiece surface. It therefore becomes necessary to detect deviations in the level of the workpiece surface accurately during the cutting process. A method and apparatus for tracking the surface of a workpiece being cut by a laser beam coming from a focus head assembly is disclosed, which includes two collimated laser beams directed onto the workpiece surface at spaced points by beam-directing optics in generally parallel planes of incidence. A shift in the spacing between the two points is detected by means of a video camera system and processed by a computer to yield a workpiece surface displacement signal, which is input to a motor that raises or lowers the beam focus head accordingly.

  15. Method and apparatus for automatic control of a humanoid robot

    Science.gov (United States)

    Abdallah, Muhammad E (Inventor); Platt, Robert (Inventor); Wampler, II, Charles W. (Inventor); Reiland, Matthew J (Inventor); Sanders, Adam M (Inventor)

    2013-01-01

    A robotic system includes a humanoid robot having a plurality of joints adapted for force control with respect to an object acted upon by the robot, a graphical user interface (GUI) for receiving an input signal from a user, and a controller. The GUI provides the user with intuitive programming access to the controller. The controller controls the joints using an impedance-based control framework, which provides object-level, end-effector-level, and/or joint-space-level control of the robot in response to the input signal. A method for controlling the robotic system includes receiving the input signal via the GUI, e.g., a desired force, and then processing the input signal using a host machine to control the joints via an impedance-based control framework. The framework provides object-level, end-effector-level, and/or joint-space-level control of the robot, and allows a function-based GUI to simplify implementation of a myriad of operating modes.

  16. System and method for automatically fabricating multiple memory holographic elements

    Science.gov (United States)

    Leib, Kenneth G.; Peck, Alexander N., II; Jue, Suey

    1989-05-01

    A method is described for recording multiple holograms on an individual recording medium. The recording signal beam is passed through a gate to expose a plurality of beams to the signal beam to spatially modulate the beam. A matrix of beams is generated from the spatially modulated signal beam, each of the matrix beams converging on a different area on the recording medium. The matrix beam is passed through a mask and onto the recording medium to record a plurality of diffraction patterns on different areas of the recording medium. This system may be used in a number of other ways. A plurality of views of a single object may be exposed one at a time to the signal beams, without moving the mask, to form a plurality of non-coherent holograms on a particular area of the recording medium. The procedure may be repeated a number of times moving the mask so that an array of such non-coherent holograms are formed on the recording medium. Also, a different hologram may be made on a different area of the recording medium for each object presented to the signal beam. This system can be used to construct an array of many holograms, which may represent many targets or many views of the same target. A matched filter memory having a library of holograms can form the basis of an optical memory system for robot vision systems.

  17. Automatic ultrasonic image analysis method for defect detection

    International Nuclear Information System (INIS)

    Ultrasonic examination of austenitic steel weld seams raises well-known problems of interpreting signals perturbed by this type of material. The JUKEBOX ultrasonic imaging system developed at the Cadarache Nuclear Research Center provides a major improvement in the general area of defect localization and characterization, based on processing overall images obtained by (X, Y) scanning. (X, time) images are formed by juxtaposing input signals. A series of parallel images shifted on the Y-axis is also available. The authors present a novel defect detection method based on analysing the timeline positions of the maxima and minima recorded on (X, time) images. This position is statistically stable when a defect is encountered, and is random enough under spurious noise conditions to constitute a discriminating parameter. The investigation involves calculating the trace variance; this parameter is then taken into account for detection purposes. Correlation with parallel images enhances detection reliability. A significant increase in the signal-to-noise ratio during tests on artificial defects is shown.
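
    A minimal sketch of the variance criterion follows: for each X position of an (X, time) image the time index of the extremum is taken, and a low variance of these positions flags a stable echo. The threshold value and function names are assumptions for illustration, not taken from the JUKEBOX system.

```python
import numpy as np

def defect_score(image, axis_time=1):
    """Variance of the time position of the per-line maximum of an (X, time) image.

    A low variance of these positions along X suggests a stable echo
    (candidate defect); a high variance suggests structural noise.
    """
    max_pos = np.argmax(image, axis=axis_time)   # time index of the maximum for each X line
    return float(np.var(max_pos))

def detect(image, threshold=25.0):
    """Flag a defect when the position variance falls below a threshold (assumed value)."""
    return defect_score(image) < threshold
```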

  18. Concept of automatic programming of NC machine for metal plate cutting by genetic algorithm method

    Directory of Open Access Journals (Sweden)

    B. Vaupotic

    2005-12-01

    Full Text Available Purpose: In this paper the concept of automatic programming of NC machines for metal plate cutting by the genetic algorithm method is presented. Design/methodology/approach: The paper is limited to the automatic creation of NC programs for two-dimensional cutting of material by means of adaptive heuristic search algorithms. Findings: Automatic creation of NC programs in laser cutting of materials combines CAD concepts, feature recognition, and the creation and optimization of NC programs. The proposed intelligent system is capable of automatically recognizing the nesting of products in the layout and of determining the incisions and sequences of cuts forming the laid-out products. The position of each incision is determined at the relevant place on the cut. The system is capable of finding the shortest path between individual cuts and of recording the NC program. Research limitations/implications: It would be appropriate to orient future research towards an improved system for three-dimensional cutting with optional determination of incision positions, with the capability to sense collisions and with optimization of speed and acceleration during cutting. Practical implications: The proposed system assures automatic preparation of the NC program without an NC programmer. Originality/value: The proposed concept shows a high degree of universality, efficiency and reliability, and it can be simply adapted to other NC machines.
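
    One sub-problem named above, finding a short path between the individual cuts, can be sketched as a small genetic algorithm over cut orderings. The representation, operators and parameters below are generic assumptions for illustration and do not reproduce the paper's system.

```python
import random, math

def tour_length(order, points):
    """Total travel distance for visiting the cut start points in the given order."""
    return sum(math.dist(points[a], points[b]) for a, b in zip(order, order[1:]))

def crossover(p1, p2):
    """Order crossover (OX): keep a slice of p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child[a:b]]
    child[:a] = rest[:a]
    child[b:] = rest[a:]
    return child

def ga_cut_order(points, pop_size=60, generations=300, mutation=0.2):
    """Evolve a short visiting order of the cut start points."""
    pop = [random.sample(range(len(points)), len(points)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: tour_length(o, points))
        parents = pop[: pop_size // 2]                       # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            c = crossover(*random.sample(parents, 2))
            if random.random() < mutation:                   # swap mutation
                i, j = random.sample(range(len(c)), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = parents + children
    return min(pop, key=lambda o: tour_length(o, points))
```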

  19. Carotid stenosis assessment with multi-detector CT angiography: comparison between manual and automatic segmentation methods.

    Science.gov (United States)

    Zhu, Chengcheng; Patterson, Andrew J; Thomas, Owen M; Sadat, Umar; Graves, Martin J; Gillard, Jonathan H

    2013-04-01

    Luminal stenosis is used for selecting the optimal management strategy for patients with carotid artery disease. The aim of this study is to evaluate the reproducibility of carotid stenosis quantification with manual and automated segmentation methods on submillimeter through-plane resolution multi-detector CT angiography (MDCTA). Thirty-five patients with carotid artery disease and >30% luminal stenosis, as identified by carotid duplex imaging, underwent contrast-enhanced MDCTA. Two experienced CT readers quantified carotid stenosis from axial source images, reconstructed maximum intensity projections (MIP) and 3D carotid geometry, which was automatically segmented by an open-source toolkit (Vascular Modelling Toolkit, VMTK), using NASCET criteria. Good agreement among the measurements using axial images, MIP and automatic segmentation was observed. Automatic segmentation methods showed better inter-observer agreement between the readers (intra-class correlation coefficient (ICC): 0.99 for diameter stenosis measurement) than manual measurement of axial (ICC = 0.82) and MIP (ICC = 0.86) images. Carotid stenosis quantification using an automatic segmentation method has higher reproducibility compared with manual methods. PMID:23135615
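
    Under the NASCET criteria mentioned above, the degree of stenosis is the narrowest residual lumen diameter expressed relative to the diameter of the normal distal internal carotid artery. A one-line sketch (the diameters in the example are arbitrary placeholders):

```python
def nascet_stenosis(min_lumen_diameter_mm, distal_normal_diameter_mm):
    """Percent diameter stenosis according to the NASCET criterion."""
    return 100.0 * (1.0 - min_lumen_diameter_mm / distal_normal_diameter_mm)

# Example: a 2.1 mm residual lumen against a 6.0 mm distal ICA gives 65% stenosis.
print(round(nascet_stenosis(2.1, 6.0)))   # -> 65
```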

  20. On the Methodology of Nematode Extraction from Field Samples: Comparison of Methods for Soil Extraction

    OpenAIRE

    Viglierchio, David R.; Schmitt, Richard V.

    1983-01-01

    The commonly used nematode extraction methods were compared using three soil types and four nematode species. The comparison was repeated in three trials by the same operator to estimate operator reproducibility. Extraction efficiency was dependent upon method, soil type, and nematode species, and reproducibility was not particularly satisfactory for routine analyses. Extraction by any method tested was less than 50% efficient. Quantitative nematode extraction methodology needs serious attention.

  1. Automatic Extraction of Building Roof Planes from Airborne LIDAR Data Applying AN Extended 3d Randomized Hough Transform

    Science.gov (United States)

    Maltezos, Evangelos; Ioannidis, Charalabos

    2016-06-01

    This study aims to extract automatically building roof planes from airborne LIDAR data applying an extended 3D Randomized Hough Transform (RHT). The proposed methodology consists of three main steps, namely detection of building points, plane detection and refinement. For the detection of the building points, the vegetative areas are first segmented from the scene content and the bare earth is extracted afterwards. The automatic plane detection of each building is performed applying extensions of the RHT associated with additional constraint criteria during the random selection of the 3 points aiming at the optimum adaptation to the building rooftops as well as using a simple design of the accumulator that efficiently detects the prominent planes. The refinement of the plane detection is conducted based on the relationship between neighbouring planes, the locality of the point and the use of additional information. An indicative experimental comparison to verify the advantages of the extended RHT compared to the 3D Standard Hough Transform (SHT) is implemented as well as the sensitivity of the proposed extensions and accumulator design is examined in the view of quality and computational time compared to the default RHT. Further, a comparison between the extended RHT and the RANSAC is carried out. The plane detection results illustrate the potential of the proposed extended RHT in terms of robustness and efficiency for several applications.
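
    The core RHT step repeatedly draws three points, derives the plane through them and votes for it in an accumulator. A minimal sketch with a simple quantised accumulator follows; the constraint criteria, accumulator design and refinement stage of the paper are omitted, and all parameters are assumptions.

```python
import numpy as np
from collections import Counter

def plane_from_points(p1, p2, p3):
    """Unit normal n and offset d of the plane n.x = d through three points."""
    n = np.cross(p2 - p1, p3 - p1)
    norm = np.linalg.norm(n)
    if norm < 1e-9:
        return None                     # collinear points, no plane
    n /= norm
    if n[2] < 0:                        # fix the sign so identical planes vote alike
        n = -n
    return n, float(n @ p1)

def randomized_hough_planes(points, draws=20000, step=0.2):
    """Vote for quantised (normal, offset) cells and return the most frequent ones."""
    acc = Counter()
    rng = np.random.default_rng(0)
    for _ in range(draws):
        i, j, k = rng.choice(len(points), size=3, replace=False)
        plane = plane_from_points(points[i], points[j], points[k])
        if plane is None:
            continue
        n, d = plane
        cell = (tuple(np.round(n, 1)), round(d / step))
        acc[cell] += 1
    return acc.most_common(5)           # candidate roof planes
```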

  2. Automatic electricity markets data extraction for realistic multi-agent simulations

    DEFF Research Database (Denmark)

    Pereira, Ivo F.; Sousa, Tiago M.; Praca, Isabel;

    2014-01-01

    ...different market sources, even including different market types; a machine learning approach is used for automatic definition of the download periodicity of new information available on-line. This is a crucial tool to go a step forward in electricity markets simulation, since the integration of this database with a scenarios generation tool, based on knowledge discovery techniques, provides a framework to study real market scenarios, allowing simulator improvement and validation.

  3. A method for automatically constructing the initial contour of the common carotid artery

    OpenAIRE

    Yara Omran; Kamil Riha

    2013-01-01

    In this article we propose a novel method to automatically set the initial contour that is used by the active contours algorithm. The proposed method exploits the accumulative intensity profiles to locate points on the arterial wall. The intensity profiles of sections that intersect the artery show distinguishable characteristics that make it possible to recognize them from the profiles of sections that do not intersect the artery walls. The proposed method is applied on ultrasound images of the common carotid artery.

  4. Effect of Temperature on the Color of Natural Dyes Extracted Using Pressurized Hot Water Extraction Method

    OpenAIRE

    Nursyamirah A. Razak; Siti M. Tumin; Ruziyati Tajuddin

    2011-01-01

    Problem statement: Traditionally, extraction of natural dyes by the boiling method produced only a single tone of colorant/dye and involved plenty of water over several hours of extraction time. A new, modern extraction technique should be introduced, especially to textile dyers, so that a variety of tones of colorants can be produced in a shorter time with less consumption of water. Approach: This study demonstrated Pressurized Hot Water Extraction (PHWE) as a new technique to extract colorants.

  5. Automatic registration method for multisensor datasets adopted for dimensional measurements on cutting tools

    International Nuclear Information System (INIS)

    Multisensor systems with optical 3D sensors are frequently employed to capture complete surface information by measuring workpieces from different views. During coarse and fine registration the resulting datasets are afterward transformed into one common coordinate system. Automatic fine registration methods are well established in dimensional metrology, whereas there is a deficit in automatic coarse registration methods. The advantage of a fully automatic registration procedure is twofold: it enables a fast and contact-free alignment and further a flexible application to datasets of any kind of optical 3D sensor. In this paper, an algorithm adapted for a robust automatic coarse registration is presented. The method was originally developed for the field of object reconstruction or localization. It is based on a segmentation of planes in the datasets to calculate the transformation parameters. The rotation is defined by the normals of three corresponding segmented planes of two overlapping datasets, while the translation is calculated via the intersection point of the segmented planes. First results have shown that the translation is strongly shape dependent: 3D data of objects with non-orthogonal planar flanks cannot be registered with the current method. In the novel supplement for the algorithm, the translation is additionally calculated via the distance between centroids of corresponding segmented planes, which results in more than one option for the transformation. A newly introduced measure considering the distance between the datasets after coarse registration evaluates the best possible transformation. Results of the robust automatic registration method are presented on the example of datasets taken from a cutting tool with a fringe-projection system and a focus-variation system. The successful application in dimensional metrology is proven with evaluations of shape parameters based on the registered datasets of a calibrated workpiece. (paper)

  6. Automatic extraction of the mid-sagittal plane using an ICP variant

    Science.gov (United States)

    Fieten, Lorenz; Eschweiler, Jörg; de la Fuente, Matías; Gravius, Sascha; Radermacher, Klaus

    2008-03-01

    Precise knowledge of the mid-sagittal plane is important for the assessment and correction of several deformities. Furthermore, the mid-sagittal plane can be used for the definition of standardized coordinate systems such as pelvis or skull coordinate systems. A popular approach for mid-sagittal plane computation is based on the selection of anatomical landmarks located either directly on the plane or symmetrically to it. However, the manual selection of landmarks is a tedious, time-consuming and error-prone task, which requires great care. In order to overcome this drawback, previously it was suggested to use the iterative closest point (ICP) algorithm: After an initial mirroring of the data points on a default mirror plane, the mirrored data points should be registered iteratively to the model points using rigid transforms. Finally, a reflection transform approximating the cumulative transform could be extracted. In this work, we present an ICP variant for the iterative optimization of the reflection parameters. It is based on a closed-form solution to the least-squares problem of matching data points to model points using a reflection. In experiments on CT pelvis and skull datasets our method showed a better ability to match homologous areas.
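
    The closed-form least-squares fit of a reflection between corresponding point sets can be sketched as an SVD-based Procrustes fit constrained to a determinant of -1. This is one standard formulation assumed for illustration and is not necessarily the authors' exact derivation.

```python
import numpy as np

def best_reflection(data, model):
    """Least-squares orthogonal transform with det = -1 mapping data onto model.

    Returns (R, t) such that x -> R @ x + t, where R is a reflection.
    """
    c_d, c_m = data.mean(axis=0), model.mean(axis=0)
    H = (data - c_d).T @ (model - c_m)
    U, _, Vt = np.linalg.svd(H)
    # Force det(R) = -1 by flipping the singular direction with the smallest gain.
    D = np.eye(3)
    if np.linalg.det(Vt.T @ U.T) > 0:
        D[2, 2] = -1
    R = Vt.T @ D @ U.T
    t = c_m - R @ c_d
    return R, t
```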

  7. Development of automatic blood extraction device with a micro-needle for blood-sugar level measurement

    Science.gov (United States)

    Kawanaka, Kaichiro; Uetsuji, Yasutomo; Tsuchiya, Kazuyoshi; Nakamachi, Eiji

    2008-12-01

    In this study, a portable HMS (Health Monitoring System) device is newly developed. It features 1) puncturing a blood vessel using a minimally invasive micro-needle, 2) extracting and transferring human blood, and 3) measuring blood glucose level. This miniature SMBG (Self-Monitoring of Blood Glucose) device employs a syringe reciprocal blood extraction system equipped with an electro-mechanical control unit for accurate and steady operation. The device consists of a) a disposable syringe unit, b) a non-disposable body unit, and c) a glucose enzyme sensor. The syringe unit consists of the syringe itself, its cover, a piston and a titanium alloy micro-needle whose inner diameter is about 100 µm. The body unit consists of a linear driven-type stepping motor, a piston jig, which connects directly to the shaft of the stepping motor, and a syringe jig, which is driven in combination with the piston jig and a slider that fixes the syringe jig. The required thrust to drive the slider is designed to be greater than the blood extraction force. Because of this driving mechanism, the automatic blood extraction and discharging processes are completed by only one linear driven-type stepping motor. The experimental results confirmed that our miniature SMBG device achieves more than 90% volumetric efficiency at a piston driving speed of 1.0 mm/s. Further, the blood sugar level was measured successfully using the glucose enzyme sensor.

  8. 7 CFR 51.1179 - Method of juice extraction.

    Science.gov (United States)

    2010-01-01

    ... of Common Sweet Oranges (Citrus sinensis (L.) Osbeck), § 51.1179 Method of juice extraction. The juice used in determining the solids, acids and juice content shall be extracted from representative...

  9. MiDas: automatic extraction of a common domain of discourse in sleep medicine for multi-center data integration.

    Science.gov (United States)

    Sahoo, Satya S; Ogbuji, Chimezie; Luo, Lingyun; Dong, Xiao; Cui, Licong; Redline, Susan S; Zhang, Guo-Qiang

    2011-01-01

    Clinical studies often use data dictionaries with controlled sets of terms to facilitate data collection, limited interoperability and sharing at a local site. Multi-center retrospective clinical studies require that these data dictionaries, originating from individual participating centers, be harmonized in preparation for the integration of the corresponding clinical research data. Domain ontologies are often used to facilitate multi-center data integration by modeling terms from data dictionaries in a logic-based language, but interoperability among domain ontologies (using automated techniques) is an unresolved issue. Although many upper-level reference ontologies have been proposed to address this challenge, our experience in integrating multi-center sleep medicine data highlights the need for an upper level ontology that models a common set of terms at multiple-levels of abstraction, which is not covered by the existing upper-level ontologies. We introduce a methodology underpinned by a Minimal Domain of Discourse (MiDas) algorithm to automatically extract a minimal common domain of discourse (upper-domain ontology) from an existing domain ontology. Using the Multi-Modality, Multi-Resource Environment for Physiological and Clinical Research (Physio-MIMI) multi-center project in sleep medicine as a use case, we demonstrate the use of MiDas in extracting a minimal domain of discourse for sleep medicine, from Physio-MIMI's Sleep Domain Ontology (SDO). We then extend the resulting domain of discourse with terms from the data dictionary of the Sleep Heart and Health Study (SHHS) to validate MiDas. To illustrate the wider applicability of MiDas, we automatically extract the respective domains of discourse from 6 sample domain ontologies from the National Center for Biomedical Ontologies (NCBO) and the OBO Foundry. PMID:22195180

  10. Combination of automatic HPLC-RIA method for determination of estrone and estradiol in serum.

    Science.gov (United States)

    Yasui, T; Yamada, M; Kinoshita, H; Uemura, H; Yoneda, N; Irahara, M; Aono, T; Sunahara, S; Mito, Y; Kurimoto, F; Hata, K

    1999-01-01

    We developed a highly sensitive assay for estrone and 17 beta-estradiol in serum. Estrone and 17 beta-estradiol, obtained by solid-phase extraction using a Sep-Pak tC18 cartridge, were purified by high-performance liquid chromatography (HPLC). Quantitation of estrone and 17 beta-estradiol was carried out by radioimmunoassay. Notably, this automatic extraction and HPLC system succeeded in analyzing 80 samples a week. Intra-assay coefficients of variation (CV) for estrone and 17 beta-estradiol ranged from 19.5 to 28.7% and from 8.5 to 13.7%, respectively. The minimum detectable doses for estrone and 17 beta-estradiol were 1.04 pg/ml and 0.64 pg/ml, respectively. The serum levels of 17 beta-estradiol obtained with our method correlated strongly with those obtained by gas chromatography-mass spectrometry (GC-MS). The serum levels of estrone and 17 beta-estradiol in 154 peri- and postmenopausal women were estimated to be between 15 and 27 pg/ml and between 3.5 and 24.0 pg/ml, respectively, while the serum level of 17 beta-estradiol in postmenopausal women in particular was estimated to be from 3.5 to 6.3 pg/ml. For postmenopausal women who suffered from vasomotor symptoms, the mean levels of estrone and 17 beta-estradiol at 12 to 18 hours after treatment with daily 0.625 mg conjugated equine estrogen (CEE) and 2.5 mg medroxyprogesterone acetate (MPA) were 135.0 and 21.3 pg/ml at 12 months, respectively. On the other hand, the levels of estrone and 17 beta-estradiol at 12 to 18 hours after treatment with CEE and MPA every other day were 73.4 and 15.3 pg/ml, respectively. These highly sensitive assays for estrone and 17 beta-estradiol are useful for measuring low levels of estrogen in postmenopausal women and for monitoring estrogen levels in women receiving CEE as hormone replacement therapy. PMID:10633293

  11. Antioxidant and Antibacterial Assays on Polygonum minus Extracts: Different Extraction Methods

    OpenAIRE

    Norsyamimi Hassim; Masturah Markom; Nurina Anuar; Kurnia Harlina Dewi; Syarul Nataqain Baharum; Normah Mohd Noor

    2015-01-01

    The effect of solvent type and extraction method on the antioxidant and antibacterial activity of Polygonum minus was investigated. Two extraction methods were used: solvent extraction using a Soxhlet apparatus and supercritical fluid extraction (SFE). The antioxidant capacity was evaluated using the ferric reducing/antioxidant power (FRAP) assay and the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical-scavenging assay. The highest polyphenol content was obtained from the m...

  12. First results of dose to patient in CT extracted with an automatic registration system

    International Nuclear Information System (INIS)

    Radiation protection of the patient in computed tomography (CT) is a priority for several reasons: the dose received during a scan is relatively high, CT is the diagnostic modality with the greatest contribution to the collective patient dose, and the frequency of CT examinations has been increasing rapidly over the past few years. On the other hand, systems that automatically register the dosimetric parameters of all scans performed on connected equipment are now becoming commercially available. In this communication, the first results obtained from two CT scanners connected to an automatic system of this kind, recently installed at our Center, are presented. (Author)

  13. Automatic extraction of myocardial mass and volumes using parametric images from dynamic non-gated PET

    DEFF Research Database (Denmark)

    Harms, Hans; Hansson, Nils Henrik Stubkjær; Tolbod, Lars Poulsen;

    2016-01-01

    Dynamic cardiac positron emission tomography (PET) is used to quantify molecular processes in vivo. However, measurements of left-ventricular (LV) mass and volumes require electrocardiogram (ECG)-gated PET data. The aim of this study was to explore the feasibility of measuring LV geometry using parametric images generated from non-gated dynamic data. Using software-based structure recognition, the LV wall was automatically segmented from K1 images to derive LV mass (mLV) and wall thickness (WT). End-systolic (ESV) and end-diastolic (EDV) volumes were calculated using blood pool images and used to obtain stroke volume (SV).

  14. Semi-automatic version of the potentiometric titration method for characterization of uranium compounds.

    Science.gov (United States)

    Cristiano, Bárbara F G; Delgado, José Ubiratan; da Silva, José Wanderley S; de Barros, Pedro D; de Araújo, Radier M S; Dias, Fábio C; Lopes, Ricardo T

    2012-09-01

    The potentiometric titration method was used for characterization of uranium compounds to be applied in intercomparison programs. The method is applied with traceability assured using a potassium dichromate primary standard. A semi-automatic version was developed to reduce the analysis time and the operator variation. The standard uncertainty in determining the total concentration of uranium was around 0.01%, which is suitable for uranium characterization and compatible with those obtained by manual techniques. PMID:22406220

  15. Novel methods for 3-D semi-automatic mapping of fracture geometry at exposed rock faces

    OpenAIRE

    Feng, Quanhong

    2001-01-01

    To analyse the influence of fractures on the hydraulic and mechanical behaviour of fractured rock masses, it is essential to characterise fracture geometry at exposed rock faces. This thesis describes three semi-automatic methods for measuring and quantifying geometrical parameters of fractures, and aims to offer a novel approach to the traditional mapping methods. Three techniques, i.e. geodetic total station, close-range photogrammetry and 3-D laser scanning, are used in this study for the measurement of fracture geometry at exposed rock faces.

  16. Review of Road Extraction Methods from SAR Image

    International Nuclear Information System (INIS)

    Road extraction methods from SAR images are important in the field of SAR image recognition and detection. In the past few decades, scholars at home and abroad have carried out many experiments and studies. Based on an analysis of the current situation, this paper first introduces the road characteristics in SAR images and the basic strategies of road extraction. Then, the existing road extraction methods for SAR images are summarized. Finally, prospects for road extraction research from SAR images are put forward.

  17. Free Model of Sentence Classifier for Automatic Extraction of Topic Sentences

    OpenAIRE

    M.L. Khodra; D.H. Widyantoro; E.A. Aziz; B.R. Trilaksono

    2011-01-01

    This research employs a free model that uses only sentential features, without paragraph context, to extract the topic sentence of a paragraph. To find the optimal combination of features, corpus-based classification is used to construct a sentence classifier as the model. The sentence classifier is trained using a Support Vector Machine (SVM). The experiment shows that position and meta-discourse features are more important than syntactic features for extracting topic sentences, and that the best performer (80.68%) is the SVM classifier with all features.

  18. Automatic extraction of semantic relations between medical entities: a rule based approach

    OpenAIRE

    Ben Abacha Asma; Zweigenbaum Pierre

    2011-01-01

    Abstract Background Information extraction is a complex task which is necessary to develop high-precision information retrieval tools. In this paper, we present the platform MeTAE (Medical Texts Annotation and Exploration). MeTAE allows (i) to extract and annotate medical entities and relationships from medical texts and (ii) to explore semantically the produced RDF annotations. Results Our annotation approach relies on linguistic patterns and domain knowledge and consists in two steps: (i) r...

  19. Semi-automatic version of the potentiometric titration method for characterization of uranium compounds

    International Nuclear Information System (INIS)

    The potentiometric titration method was used for characterization of uranium compounds to be applied in intercomparison programs. The method is applied with traceability assured using a potassium dichromate primary standard. A semi-automatic version was developed to reduce the analysis time and the operator variation. The standard uncertainty in determining the total concentration of uranium was around 0.01%, which is suitable for uranium characterization and compatible with those obtained by manual techniques. - Highlights: ► A semi-automatic potentiometric titration method was developed for U charaterization. ► K2Cr2O7 was the only certified reference material used. ► Values obtained for U3O8 samples were consistent with certified. ► Uncertainty of 0.01% was useful for characterization and intercomparison program.

  20. Improvement in the performance of CAD for the Alzheimer-type dementia based on automatic extraction of temporal lobe from coronal MR images

    International Nuclear Information System (INIS)

    In this study, we extracted whole-brain and temporal lobe images from MR images (26 healthy elderly controls and 34 Alzheimer-type dementia patients) by means of binarization, mask processing, template matching, Hough transformation, boundary tracing, etc. We assessed the extraction accuracy by comparing the extracted images to images extracted by a radiological technologist. The results of the assessment, expressed as concordance rates, were: brain images 91.3±4.3%, right temporal lobe 83.3±6.9%, left temporal lobe 83.7±7.6%. Furthermore, discriminant analysis using 6 textural features demonstrated a sensitivity and specificity of 100% when the healthy elderly controls were compared to the Alzheimer-type dementia patients. Our research showed the possibility of automatic objective diagnosis of temporal lobe abnormalities based on automatically extracted images of the temporal lobes. (author)

  1. An atlas-based fuzzy connectedness method for automatic tissue classification in brain MRI

    Institute of Scientific and Technical Information of China (English)

    ZHOU Yongxin; BAI Jing

    2006-01-01

    A framework incorporating a subject-registered atlas into the fuzzy connectedness (FC) method is proposed for the automatic tissue classification of 3D images of brain MRI. The pre-labeled atlas is first registered onto the subject to provide an initial approximate segmentation. The initial segmentation is used to estimate the intensity histograms of gray matter and white matter. Based on the estimated intensity histograms, multiple seed voxels are assigned to each tissue automatically. The normalized intensity histograms are utilized in the FC method as the intensity probability density function (PDF) directly. Relative fuzzy connectedness technique is adopted in the final classification of gray matter and white matter. Experimental results based on the 20 data sets from IBSR are included, as well as comparisons of the performance of our method with that of other published methods. This method is fully automatic and operator-independent. Therefore, it is expected to find wide applications, such as 3D visualization, radiation therapy planning, and medical database construction.

  2. Quality assurance using outlier detection on an automatic segmentation method for the cerebellar peduncles

    Science.gov (United States)

    Li, Ke; Ye, Chuyang; Yang, Zhen; Carass, Aaron; Ying, Sarah H.; Prince, Jerry L.

    2016-03-01

    Cerebellar peduncles (CPs) are white matter tracts connecting the cerebellum to other brain regions. Automatic segmentation methods of the CPs have been proposed for studying their structure and function. Usually the performance of these methods is evaluated by comparing segmentation results with manual delineations (ground truth). However, when a segmentation method is run on new data (for which no ground truth exists) it is highly desirable to efficiently detect and assess algorithm failures so that these cases can be excluded from scientific analysis. In this work, two outlier detection methods aimed to assess the performance of an automatic CP segmentation algorithm are presented. The first one is a univariate non-parametric method using a box-whisker plot. We first categorize automatic segmentation results of a dataset of diffusion tensor imaging (DTI) scans from 48 subjects as either a success or a failure. We then design three groups of features from the image data of nine categorized failures for failure detection. Results show that most of these features can efficiently detect the true failures. The second method—supervised classification—was employed on a larger DTI dataset of 249 manually categorized subjects. Four classifiers—linear discriminant analysis (LDA), logistic regression (LR), support vector machine (SVM), and random forest classification (RFC)—were trained using the designed features and evaluated using a leave-one-out cross validation. Results show that the LR performs worst among the four classifiers and the other three perform comparably, which demonstrates the feasibility of automatically detecting segmentation failures using classification methods.
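
    The univariate box-whisker rule of the first QA method can be sketched as the conventional Tukey criterion applied to a per-subject feature. The whisker factor of 1.5 and the example values below are assumptions for illustration, not the paper's data.

```python
import numpy as np

def box_whisker_outliers(values, k=1.5):
    """Indices of values outside [Q1 - k*IQR, Q3 + k*IQR] (Tukey's rule)."""
    values = np.asarray(values, dtype=float)
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return np.flatnonzero((values < low) | (values > high))

# Example: a per-subject feature where one segmentation failed badly.
feature = [0.81, 0.79, 0.84, 0.80, 0.78, 0.12, 0.83]
print(box_whisker_outliers(feature))   # -> [5]
```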

  3. NeurphologyJ: An automatic neuronal morphology quantification method and its application in pharmacological discovery

    Directory of Open Access Journals (Sweden)

    Huang Hui-Ling

    2011-06-01

    Full Text Available Abstract Background Automatic quantification of neuronal morphology from fluorescence microscopy images plays an increasingly important role in high-content screenings. However, very few freeware tools and methods exist which provide automatic neuronal morphology quantification for pharmacological discovery. Results This study proposes an effective quantification method, called NeurphologyJ, capable of automatically quantifying neuronal morphologies such as soma number and size, neurite length, and neurite branching complexity (which is highly related to the numbers of attachment points and ending points). NeurphologyJ is implemented as a plugin to ImageJ, an open-source Java-based image processing and analysis platform. The high performance of NeurphologyJ arises mainly from an elegant image enhancement method, which allows several morphology operations of image processing to be applied efficiently. We evaluated NeurphologyJ by comparing it with both the computer-aided manual tracing method NeuronJ and an existing ImageJ-based plugin method NeuriteTracer. Our results reveal that NeurphologyJ is comparable to NeuronJ, and that the correlation coefficient between the estimated neurite lengths is as high as 0.992. NeurphologyJ can accurately measure neurite length, soma number, neurite attachment points, and neurite ending points from a single image. Furthermore, the quantification result of nocodazole perturbation is consistent with its known inhibitory effect on neurite outgrowth. We were also able to calculate the IC50 of nocodazole using NeurphologyJ. This reveals that NeurphologyJ is effective enough to be utilized in applications of pharmacological discoveries. Conclusions This study proposes an automatic and fast neuronal quantification method, NeurphologyJ. The ImageJ plugin, with support for batch processing, is easily customized for dealing with high-content screening applications. The source codes of NeurphologyJ (interactive and high

  4. Evaluation of urinary cortisol excretion by radioimmunoassay through two methods (extracted and non-extracted)

    International Nuclear Information System (INIS)

    The objective of this paper is to compare the feasibility, sensitivity and specificity of the two methods (extracted versus non-extracted) in the diagnosis of hypercortisolism. The Gamma Coat 125I cortisol kit provided by Clinical Assays, Incstar, USA, was used for both methods, with methylene chloride extraction for the measurement of extracted cortisol. Thirty-two assays were performed, with sensitivity ranging from 0.1 to 0.47 µg/dl. The intra-run precision was 8.29 ± 3.38% and 8.19 ± 4.72% for high and low levels, respectively, for non-extracted cortisol, and 9.72 ± 1.94% and 9.54 ± 44% for high and low levels, respectively, for extracted cortisol. The inter-run precision was 15.98% and 16.15% for the high level of non-extracted and extracted cortisol, respectively. For the low level it was 17.25% and 18.59% for non-extracted and extracted cortisol, respectively. Basal 24-hour urine samples from 43 normal subjects, 53 obese subjects (body mass index > 30) and 53 Cushing's syndrome patients were evaluated. The sensitivity of the methods was similar (100% and 98.1% for the non-extracted and extracted methods, respectively) and the specificity was the same for both methods (100%). A positive correlation between the two methods was observed in all the groups studied, including the patients with Cushing's syndrome. (author)

  5. Free Model of Sentence Classifier for Automatic Extraction of Topic Sentences

    Directory of Open Access Journals (Sweden)

    M.L. Khodra

    2011-04-01

    Full Text Available This research employs a free model that uses only sentential features, without paragraph context, to extract the topic sentence of a paragraph. To find the optimal combination of features, corpus-based classification is used to construct a sentence classifier as the model. The sentence classifier is trained using a Support Vector Machine (SVM). The experiment shows that position and meta-discourse features are more important than syntactic features for extracting topic sentences, and that the best performer (80.68%) is the SVM classifier with all features.
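
    A minimal sketch of such a corpus-based sentence classifier follows: each sentence is mapped to a few sentential features (relative position, length, a meta-discourse cue) and an SVM is trained to flag topic sentences. The feature set, marker list and toy data are placeholder assumptions, not the paper's.

```python
import numpy as np
from sklearn.svm import LinearSVC

DISCOURSE_MARKERS = ("in this paper", "we propose", "this study", "in conclusion")

def sentence_features(sentence, index, n_sentences):
    """Simple sentential features: relative position, length, meta-discourse cue."""
    position = index / max(n_sentences - 1, 1)
    has_marker = any(m in sentence.lower() for m in DISCOURSE_MARKERS)
    return [position, len(sentence.split()), float(has_marker)]

# Tiny placeholder training set: (paragraph sentences, index of the topic sentence).
paragraphs = [
    (["In this paper we propose a new method.", "It is fast.", "It is simple."], 0),
    (["Many tools exist.", "This study presents a better one.", "Details follow."], 1),
]

X, y = [], []
for sentences, topic_idx in paragraphs:
    for i, s in enumerate(sentences):
        X.append(sentence_features(s, i, len(sentences)))
        y.append(int(i == topic_idx))

clf = LinearSVC().fit(np.array(X), np.array(y))
print(clf.predict(np.array([sentence_features("We propose a classifier.", 0, 3)])))
```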

  6. CAD-based automatic modeling method for Geant4 geometry model through MCAM

    International Nuclear Information System (INIS)

    The full text of publication follows. Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, the calculation model needs to be established, which can be described using the Geometry Description Markup Language (GDML) or C++ code. However, it is time-consuming and error-prone to describe the models manually in GDML. Automatic modeling methods have been developed recently, but problems exist in most present modeling programs; in particular, some of them are not accurate or are adapted only to specific CAD formats. To convert CAD models into GDML geometry models accurately, a Geant4 Computer Aided Design (CAD) based modeling method was developed for automatically converting complex CAD geometry models into GDML geometry models. The essence of this method is dealing with CAD models represented by boundary representation (B-REP) and GDML models represented by constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells. Corresponding GDML convex basic solids are then generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is assembled with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics and Radiation Transport (MCAM), and tested with several models including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately and can be used for Geant4 automatic modeling. (authors)

  7. Automatic Extraction of IndoorGML Core Model from OpenStreetMap

    Science.gov (United States)

    Mirvahabi, S. S.; Abbaspour, R. A.

    2015-12-01

    Navigation has become an essential part of human life and a necessary component in many fields. Because of the increasing size and complexity of buildings, a unified data model is needed for navigation analysis and for the exchange of information. IndoorGML describes an appropriate data model and XML schema of indoor spatial information that focuses on modelling indoor spaces. Collecting spatial data through professional and commercial providers often involves high cost and long time, which is the major reason that VGI emerged. One of the most popular VGI projects is OpenStreetMap (OSM). In this paper, a new approach is proposed for the automatic generation of the IndoorGML core data file from an OSM data file. The output of this approach is a core data model file that can be used alongside the navigation data model for indoor navigation applications.

  8. A semi-automatic multiple view texture mapping for the surface model extracted by laser scanning

    Science.gov (United States)

    Zhang, Zhichao; Huang, Xianfeng; Zhang, Fan; Chang, Yongmin; Li, Deren

    2008-12-01

    Laser scanning is an effective way to acquire geometry data of cultural heritage objects with complex architecture. After the 3D model of the object has been generated, it is difficult to map textures exactly onto the real object. We take efforts to create seamless texture maps for a virtual heritage object of arbitrary topology. Texture detail is acquired directly from the real object under lighting conditions as uniform as we can make them. After preprocessing, the images are registered on the 3D mesh in a semi-automatic way. We then divide the mesh into mesh patches that overlap with each other according to the valid texture area of each image. An optimal correspondence between mesh patches and sections of the acquired images is built. Then, a smoothing approach based on texture blending is proposed to erase the seams between different images that map onto adjacent mesh patches. The result obtained with a Buddha of the Dunhuang Mogao Grottoes is presented and discussed.

  9. Automatic segmentation method which divides a cerebral artery tree in time-of-flight MR-angiography into artery segments

    Science.gov (United States)

    Takemura, Akihiro; Suzuki, Masayuki; Harauchi, Hajime; Okumura, Yusuke; Umeda, Tokuo

    2006-03-01

    To achieve sufficient accuracy and robustness, 2D/3D registration methods between DSA and MRA of the cerebral artery require an automatic extraction method that can isolate wanted segments from the cerebral artery tree. Here, we describe an automatic segmentation method that divides the cerebral artery tree in time-of-flight magnetic resonance angiography (TOF-MRA) into individual arteries. This method requires a 3D dataset of the cerebral artery tree obtained by TOF-MRA. The processes of this method are: 1) every branch in the cerebral artery tree is labeled with a unique index number, 2) the 3D center of the Circle of Willis is determined using 2D and 3D templates, and 3) the labeled branches are classified with reference to the 3D territory map of cerebral arteries centered on the Circle of Willis. This method classifies all branches into internal carotid arteries (ICA), basilar artery (BA), middle cerebral artery (MCA), A1 segment of the anterior cerebral artery (ACA(A1)), other segments of the anterior cerebral artery (ACA), posterior communicating artery (PcomA), and posterior cerebral artery (PCA). In the eleven cases examined, the numbers of correctly segmented pixels in each branch were counted and the percentages based on the total number of pixels of the artery were calculated. Manually classified arteries of each case were used as references. Mean percentages were: ACA, 87.6%; R-ACA(A1), 44.9%; L-ACA(A1), 30.4%; R-MCA, 82.4%; L-MCA, 79.0%; R-PcomA, 0.5%; L-PcomA, 0.0%; R-PCA, 77.2%; L-PCA, 80.0%; R-ICA, 78.6%; L-ICA, 93.0%; BA, 77.1%; and total arteries, 78.9%.

  10. Method of semi-automatic high precision potentiometric titration for characterization of uranium compounds

    International Nuclear Information System (INIS)

    The method of high precision potentiometric titration is widely used in the certification and characterization of uranium compounds. In order to reduce the analysis time and diminish the influence of the analyst, a semi-automatic version of the method was developed at the safeguards laboratory of CNEN-RJ, Brazil. The method was applied with traceability guaranteed by the use of a potassium dichromate primary standard. The combined standard uncertainty in the determination of the total uranium concentration was of the order of 0.01%, which compares favourably with the methods traditionally used by nuclear installations, whose uncertainty is of the order of 0.1%.

  11. Optimization-based Method for Automated Road Network Extraction

    Energy Technology Data Exchange (ETDEWEB)

    Xiong, D

    2001-09-18

    Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper reviews the literature on the subject of road extraction and describes a study of an optimization-based method for automated road network extraction.

  12. Optimization-based Method for Automated Road Network Extraction

    International Nuclear Information System (INIS)

    Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper reviews the literature on the subject of road extraction and describes a study of an optimization-based method for automated road network extraction.

  13. COMPARISON OF DNA EXTRACTION METHODS ON DAIRY CONSTRUCTED WETLAND WASTEWATER

    Science.gov (United States)

    Direct DNA extraction from environmental samples is a useful and culture-independent method for the examination of microbial diversity. To date, there is little information on the effectiveness of commercial DNA extraction kits on wastewater. We compared two commercial DNA extraction kits for amount...

  14. A new quantitative automatic method for the measurement of non-rapid eye movement sleep electroencephalographic amplitude variability.

    Science.gov (United States)

    Ferri, Raffaele; Rundo, Francesco; Novelli, Luana; Terzano, Mario G; Parrino, Liborio; Bruni, Oliviero

    2012-04-01

    The aim of this study was to devise an automatic quantitative measure of electroencephalographic (EEG) signal amplitude variability during non-rapid eye movement (NREM) sleep, correlated with the visually extracted cyclic alternating pattern (CAP) parameters. Ninety-eight polysomnographic EEG recordings of normal controls were used. A new algorithm based on the analysis of EEG amplitude variability during NREM sleep was designed and applied to all recordings, which were also scored visually for CAP. All measurements obtained with the new algorithm correlated positively with the corresponding CAP parameters; in particular, total CAP time correlated with total NREM variability time (r = 0.596), light sleep CAP time with light sleep variability time (r = 0.597), and slow wave sleep CAP time with slow wave sleep variability time (r = 0.809), while CAP A phases showed a low correlation with the duration of variability events. Finally, the age-related modifications of CAP time and of NREM variability time were found to be very similar. The new method for the automatic analysis of NREM sleep amplitude variability presented here correlates significantly with visual CAP parameters; its application requires minimal work time compared to CAP analysis, and might be used in large studies involving numerous recordings in which NREM sleep EEG amplitude variability needs to be assessed. PMID:22084833
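    The published algorithm is not reproduced in the record; the following is only a minimal sketch, on synthetic data, of the general idea of quantifying EEG amplitude variability with a sliding window. The sampling rate, window length and threshold are illustrative assumptions.

      # Sketch: sliding-window amplitude variability of an EEG-like signal (synthetic data).
      # Illustrates the general idea only, not the published algorithm or its thresholds.
      import numpy as np

      fs = 100                          # sampling rate in Hz (assumed)
      t = np.arange(0, 60, 1 / fs)      # one minute of synthetic signal
      eeg = np.sin(2 * np.pi * 1.0 * t) * (1 + 0.5 * (t > 30)) + 0.1 * np.random.randn(t.size)

      win = 2 * fs                      # 2-second windows (assumed)
      n_win = eeg.size // win
      segments = eeg[:n_win * win].reshape(n_win, win)
      amplitude = segments.std(axis=1)                  # per-window amplitude estimate

      # Flag windows whose amplitude deviates strongly from the running median.
      variability = np.abs(amplitude - np.median(amplitude)) / np.median(amplitude)
      flagged = variability > 0.25                      # illustrative threshold
      print(f"windows flagged as high-variability: {flagged.sum()} of {n_win}")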

  15. Automatic extraction of forward stroke volume using dynamic PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik;

    Background: Dynamic PET can be used to extract forward stroke volume (FSV) by the indicator dilution principle. The technique employed can be automated and is in theory independent of the tracer used, and may therefore be added to any dynamic cardiac PET protocol. The aim of this study was to...

  16. An automatic segmentation method for building facades from vehicle-borne LiDAR point cloud data based on fundamental geographical data

    Science.gov (United States)

    Li, Yongqiang; Mao, Jie; Cai, Lailiang; Zhang, Xitong; Li, Lixue

    2016-03-01

    In this paper, the authors propose a segmentation method based on fundamental geographic data; the algorithm is described as follows: Firstly, convert the coordinate system of the fundamental geographic data to that of the vehicle-borne LiDAR point cloud through some data preprocessing work, thereby registering the two coordinate systems; Secondly, simplify the features of the fundamental geographic data, extract the effective contour information of the buildings, set a suitable buffer threshold value for each building contour, and segment out the point cloud data of the building facades automatically; Thirdly, apply a reasonable quality assessment mechanism to check and evaluate the segmentation results and control their quality. Experiments show that the proposed method is simple and effective. The method also has reference value for the automatic segmentation of other types of surface features from point clouds.
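    A minimal sketch of the buffering step is given below, assuming the point cloud and the building contour have already been brought into the same coordinate system; a simple point-to-edge distance test stands in for a full GIS buffer operation, and the buffer threshold is illustrative.

      # Sketch: keep LiDAR points that lie within a buffer distance of a building contour.
      # Assumes points and contour share one coordinate system; numpy only, 2D (x, y).
      import numpy as np

      def point_segment_distance(p, a, b):
          """Distance from points p (N, 2) to the segment a-b (each of shape (2,))."""
          ab = b - a
          t = np.clip(((p - a) @ ab) / (ab @ ab), 0.0, 1.0)
          closest = a + t[:, None] * ab
          return np.linalg.norm(p - closest, axis=1)

      def buffer_segment(points_xy, contour_xy, buffer_m=0.5):
          """Return a boolean mask of points within buffer_m of any contour edge."""
          d = np.full(points_xy.shape[0], np.inf)
          for i in range(len(contour_xy)):
              a, b = contour_xy[i], contour_xy[(i + 1) % len(contour_xy)]
              d = np.minimum(d, point_segment_distance(points_xy, a, b))
          return d <= buffer_m

      # Toy example: a square building contour and random points around it.
      contour = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)
      pts = np.random.uniform(-2, 12, size=(1000, 2))
      mask = buffer_segment(pts, contour, buffer_m=0.5)
      print(f"{mask.sum()} of {len(pts)} points assigned to the facade buffer")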

  17. Method of automatic image registration of three-dimensional range of archaeological restoration

    International Nuclear Information System (INIS)

    We propose an automatic registration system for the reconstruction of a large object scanned from various positions, based on a static structured light pattern. The system combines stereo vision technology, a structured light pattern, the positioning system of the vision sensor, and an algorithm that simplifies the process of finding correspondences for the modelling of large objects. A new structured light pattern based on a Kautz sequence is proposed, and using this pattern as a static reference we implement the proposed new registration method. (Author)

  18. A Method for Automatic Identification of Reliable Heart Rates Calculated from ECG and PPG Waveforms

    OpenAIRE

    Yu, Chenggang; Liu, Zhenqiu; McKenna, Thomas; Reisner, Andrew T.; Reifman, Jaques

    2006-01-01

    Objective: The development and application of data-driven decision-support systems for medical triage, diagnostics, and prognostics pose special requirements on physiologic data. In particular, the data must be reliable in order to produce meaningful results. The authors describe a method that automatically estimates the reliability of reference heart rates (HRr) derived from electrocardiogram (ECG) waveforms and photoplethysmogram (PPG) waveforms recorded by vital-signs monitors. The reliabilit...

  19. Automatic meshing method for optimisation of the fusion zone dimensions in Finite Element models of welds

    OpenAIRE

    DECROOS Koenraad; OHMS Carsten; Petrov, Roumen; Seefeldt, Marc; Verhaeghe, Frederik; Kestens, Leo

    2013-01-01

    A new method has been designed to automatically adapt the geometry of the fusion zone of a weld according to the temperature calculations when the thermal welding heat source parameters are known. In the material definition in a Finite Element code for welding stress calculations, the fusion zone material has different properties than the base material since, among other things, the temperature at which the material is stress-free is the melting temperature instead of room temperature. In this work...

  20. AUTOMR: An automatic processing program system for the molecular replacement method

    International Nuclear Information System (INIS)

    An automatic processing program system for the molecular replacement method, AUTOMR, is presented. The program solves the initial model of the target crystal structure using a homologous molecule as the search model. It processes the structure-factor calculation of the model molecule, the rotation function, the translation function and the rigid-group refinement successively in one computer job. Test calculations were performed for six protein crystals and the structures were solved in all of these cases. (orig.)

  1. Automatically classifying sentences in full-text biomedical articles into Introduction, Methods, Results and Discussion

    OpenAIRE

    Agarwal, Shashank; Yu, Hong

    2009-01-01

    Biomedical texts can be typically represented by four rhetorical categories: Introduction, Methods, Results and Discussion (IMRAD). Classifying sentences into these categories can benefit many other text-mining tasks. Although many studies have applied different approaches for automatically classifying sentences in MEDLINE abstracts into the IMRAD categories, few have explored the classification of sentences that appear in full-text biomedical articles. We first evaluated whether sentences in...

  2. Technical characterization by image analysis: an automatic method of mineralogical studies

    International Nuclear Information System (INIS)

    The application of a fully automated modern image analysis method for the study of grain size distribution, modal assays, degree of liberation and mineralogical associations is discussed. The image analyser is interfaced with a scanning electron microscope and an energy dispersive X-ray analyser. The image generated by backscattered electrons is analysed automatically, and the system has been used in assessment studies of applied mineralogy as well as in process control in the mining industry. (author)

  3. Manual versus automatic bladder wall thickness measurements: a method comparison study

    OpenAIRE

    Oelke, M.; Mamoulakis, C; Ubbink, D T; Rosette, de la, J.J.M.C.H.; Wijkstra, H.

    2009-01-01

    Purpose To compare repeatability and agreement of conventional ultrasound bladder wall thickness (BWT) measurements with automatically obtained BWT measurements by the BVM 6500 device. Methods Adult patients with lower urinary tract symptoms, urinary incontinence, or postvoid residual urine were urodynamically assessed. During two subsequent cystometry sessions the infusion pump was temporarily stopped at 150 and 250 ml bladder filling to measure BWT with conventional ultrasound and the BVM 6...

  4. Automatic Calibration Method for Driver’s Head Orientation in Natural Driving Environment

    OpenAIRE

    Fu, Xianping; Guan, Xiao; Peli, Eli; Liu, Hongbo; Luo, Gang

    2012-01-01

    Gaze tracking is crucial for studying driver’s attention, detecting fatigue, and improving driver assistance systems, but it is difficult in natural driving environments due to nonuniform and highly variable illumination and large head movements. Traditional calibrations that require subjects to follow calibrators are very cumbersome to implement in daily driving situations. A new automatic calibration method, based on a single camera for determining the head orientation and which utiliz...

  5. Adleman-Manders-Miller Root Extraction Method Revisited

    OpenAIRE

    Cao, Zhengjun; Sha, Qian; Fan, Xiao

    2011-01-01

    In 1977, Adleman, Manders and Miller briefly described how to extend their square root extraction method to general $r$th root extraction over finite fields, but did not show enough details. Actually, there is a dramatic difference between square root extraction and general $r$th root extraction, because one has to solve discrete logarithms for $r$th root extraction. In this paper, we clarify their method and analyze its complexity. Our heuristic presentation is helpful to grasp t...
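    As a reference point for the square-root case (r = 2), the sketch below implements the standard Tonelli-Shanks routine, which is closely related to, but not identical with, the Adleman-Manders-Miller procedure; the general rth-root case, which additionally requires discrete logarithms as the paper notes, is not shown.

      # Sketch: square root of a modulo an odd prime p (Tonelli-Shanks).
      # Shown only as a reference point for the r = 2 case; not the AMM algorithm itself.
      def legendre(a, p):
          return pow(a, (p - 1) // 2, p)

      def sqrt_mod(a, p):
          a %= p
          if a == 0:
              return 0
          if legendre(a, p) != 1:
              return None                      # a is not a quadratic residue mod p
          if p % 4 == 3:
              return pow(a, (p + 1) // 4, p)   # easy case
          # Write p - 1 = q * 2^s with q odd.
          q, s = p - 1, 0
          while q % 2 == 0:
              q //= 2
              s += 1
          # Find a quadratic non-residue z.
          z = 2
          while legendre(z, p) != p - 1:
              z += 1
          m, c, t, r = s, pow(z, q, p), pow(a, q, p), pow(a, (q + 1) // 2, p)
          while t != 1:
              # Find the least i with t^(2^i) == 1.
              i, t2i = 0, t
              while t2i != 1:
                  t2i = (t2i * t2i) % p
                  i += 1
              b = pow(c, 1 << (m - i - 1), p)
              m, c, t, r = i, (b * b) % p, (t * b * b) % p, (r * b) % p
          return r

      # Usage example with a small prime (p = 10009 is congruent to 1 mod 4).
      p, x = 10009, 1234
      a = (x * x) % p
      r = sqrt_mod(a, p)
      assert r is not None and (r * r) % p == a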

  6. A new method for the automatic interpretation of Schlumberger and Wenner sounding curves

    Science.gov (United States)

    Zohdy, A.A.R.

    1989-01-01

    A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples. -Author

  7. Comparative exergy analyses of Jatropha curcas oil extraction methods: Solvent and mechanical extraction processes

    International Nuclear Information System (INIS)

    Highlights: ► Exergy analysis detects locations of resource degradation within a process. ► Solvent extraction is six times more exergetically destructive than mechanical extraction. ► Mechanical extraction of jatropha oil is 95.93% exergetically efficient. ► Solvent extraction of jatropha oil is 79.35% exergetically efficient. ► Exergy analysis of oil extraction processes allows room for improvement. - Abstract: Vegetable oil extraction processes are found to be energy intensive. Thermodynamically, any energy intensive process is considered to degrade the most useful part of energy that is available to produce work. This study uses literature values to compare the efficiencies and the degradation of the useful energy within Jatropha curcas oil during oil extraction, taking into account solvent and mechanical extraction methods. According to this study, J. curcas seed is upgraded on processing into J. curcas oil with mechanical extraction but degraded with solvent extraction processes. For mechanical extraction, the total internal exergy destroyed is 3006 MJ, which is about six times less than that for solvent extraction (18,072 MJ) per ton of J. curcas oil produced. The pretreatment processes of the J. curcas seeds recorded a total internal exergy destruction of 5768 MJ, accounting for 24% of the total internal exergy destroyed for the solvent extraction processes and 66% for mechanical extraction. The exergetic efficiencies recorded are 79.35% and 95.93% for the solvent and mechanical extraction processes of J. curcas oil, respectively. Hence, mechanical oil extraction processes are more exergetically efficient than solvent extraction processes. Possible improvement methods are also elaborated in this study.

  8. GDRMS: a system for automatic extraction of the disease-centre relation

    Science.gov (United States)

    Yang, Ronggen; Zhang, Yue; Gong, Lejun

    2012-01-01

    With the rapid increase of biomedical literature, the deluge of new articles is leading to information overload. Extracting the available knowledge from this huge amount of literature has become a major challenge. GDRMS is developed as a tool that extracts relationships between diseases and genes, and between genes, from the biomedical literature using text mining technology. It is a rule-based system which also provides disease-centre network visualization, constructs the disease-gene database, and provides a gene engine for understanding the function of the gene. The main focus of GDRMS is to offer the research community a valuable opportunity to explore the relationship between disease and gene for the etiology of disease.

  9. Automatic fuzzy contouring and parameter extraction of the left ventricle from multi-slice MR images

    International Nuclear Information System (INIS)

    Cardiac MR imaging is a non-invasive technique that allows the acquisition of a series of short-axis slices of the heart. These images encompass the entire left ventricle in the different phases of the cardiac cycle. The principal physiological parameters extracted from this series are the ejection fraction and the wall thickness. To this end, the determination of both the endocardial and the epicardial contour is required. Following the extraction of three parameters for each pixel, the fuzzy set of the cardiac contour points is defined. The first parameter depends on the pixel grey level value, the second on the presence of an edge and the third on the information retrieved from the previous slice. The calculation of the membership degree to the fuzzy set of the cardiac contour points for each pixel results in a matrix of membership degrees. The cardiac contours are determined on this matrix with the aid of a dynamic programming technique, graph searching. (authors)

  10. A semi-automatic method of generating subject-specific pediatric head finite element models for impact dynamic responses to head injury.

    Science.gov (United States)

    Li, Zhigang; Han, Xiaoqiang; Ge, Hao; Ma, Chunsheng

    2016-07-01

    To account for the effects of realistic variation in head morphology on the dynamic responses relevant to head injury, it is necessary to develop multiple subject-specific pediatric head finite element (FE) models based on computed tomography (CT) or magnetic resonance imaging (MRI) scans. However, traditional manual model development is very time-consuming. In this study, a new automatic method was developed to extract anatomical points from pediatric head CT scans to represent pediatric head morphological features (head size/shape, skull thickness, and suture/fontanel width). Subsequently, a geometry-adaptive mesh morphing method based on radial basis functions was developed that can automatically morph a baseline pediatric head FE model into target FE models with geometries corresponding to the extracted head morphological features. In the end, five subject-specific head FE models of approximately 6-month-old (6MO) subjects were automatically generated using the developed method. These validated models were employed to investigate differences in head dynamic responses among subjects with different head morphologies. The results show that variations in head morphological features have a relatively large effect on pediatric head dynamic response. The results of this study indicate that pediatric head morphological variation should be taken into account when reconstructing pediatric head injuries due to traffic/fall accidents or child abuse using computational models, as well as when predicting head injury risk for children with obvious differences in head size and morphology. PMID:27058003
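    A minimal sketch of radial-basis-function mesh morphing is shown below, assuming a Gaussian kernel and known landmark displacements; the kernel, its width and the toy landmarks are illustrative, and the geometry-adaptive details of the published method are not reproduced.

      # Sketch: morph baseline mesh nodes with a radial basis function (RBF) interpolant
      # built from landmark displacements. Gaussian kernel and parameters are illustrative.
      import numpy as np

      def rbf_morph(nodes, landmarks_src, landmarks_dst, eps=0.1):
          """Displace mesh nodes so that source landmarks move onto target landmarks."""
          def kernel(a, b):
              d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
              return np.exp(-eps * d2)                      # Gaussian RBF
          K = kernel(landmarks_src, landmarks_src)
          disp = landmarks_dst - landmarks_src              # landmark displacements
          weights = np.linalg.solve(K + 1e-9 * np.eye(len(K)), disp)
          return nodes + kernel(nodes, landmarks_src) @ weights

      # Toy example: deform a small point set using four landmark pairs.
      nodes = np.random.rand(200, 3)
      src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
      dst = 1.2 * src                                       # hypothetical target geometry
      morphed = rbf_morph(nodes, src, dst)
      print(morphed.shape)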

  11. Automatic detection of hidden dimensions to obtain appropriate reaction coordinates in the Outlier FLOODing (OFLOOD) method

    Science.gov (United States)

    Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru

    2015-10-01

    As a strategy for reproducing rare, biologically important events, we previously developed the Outlier FLOODing (OFLOOD) method [J. Comput. Chem. 36 (2015) 97-102]. This method utilizes conformational resampling from rarely occurring states, detected as outliers, to promote conformational transitions relevant to the rare events. However, performing OFLOOD efficiently requires specifying a set of appropriate reaction coordinates (RCs), which is non-trivial. Therefore, in this paper, we propose a strategy to obtain a set of appropriate RCs, in which the best set of RCs is automatically searched from the initially given RCs via clustering of the states of the biomolecules.

  12. Automatic Target Recognition in Synthetic Aperture Sonar Images Based on Geometrical Feature Extraction

    OpenAIRE

    J. Del Rio Vera; Coiras, E.; Groen, J; Evans, B.

    2009-01-01

    This paper presents a new supervised classification approach for automated target recognition (ATR) in SAS images. The recognition procedure starts with a novel segmentation stage based on the Hilbert transform. A number of geometrical features are then extracted and used to classify observed objects against a previously compiled database of target and non-target features. The proposed approach has been tested on a set of 1528 simulated images created by the NURC SIGMAS sonar model, achieving...

  13. Automatic Target Recognition in Synthetic Aperture Sonar Images Based on Geometrical Feature Extraction

    Science.gov (United States)

    Del Rio Vera, J.; Coiras, E.; Groen, J.; Evans, B.

    2009-12-01

    This paper presents a new supervised classification approach for automated target recognition (ATR) in SAS images. The recognition procedure starts with a novel segmentation stage based on the Hilbert transform. A number of geometrical features are then extracted and used to classify observed objects against a previously compiled database of target and non-target features. The proposed approach has been tested on a set of 1528 simulated images created by the NURC SIGMAS sonar model, achieving up to 95% classification accuracy.

  14. Automatic extraction of PIOPED interpretations from ventilation/perfusion lung scan reports.

    OpenAIRE

    Fiszman, M.; Haug, P. J.; Frederick, P. R.

    1998-01-01

    Free-text documents are the main type of data produced by a radiology department in a hospital information system. While this type of data is readily accessible for clinical data review, it cannot be accessed by other applications to perform medical decision support, quality assurance, and outcome studies. In an attempt to solve this problem, natural language processing systems have been developed and tested against chest x-ray reports to extract relevant clinical information and make it acc...

  15. Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.

    Science.gov (United States)

    Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L

    2010-07-01

    to predict TAG level in the liver. Receiver-operating-characteristics (ROC) analysis was applied to assess the performance and area under the curve (AUC) of predicting TAG and to compare the sensitivity and specificity of the methods. Best speckle-size estimates and overall performance (R2 = 0.71, AUC = 0.94) were achieved by using an SNR-based adaptive automatic-segmentation method (used TAG threshold: 50 mg/g liver wet weight). Automatic segmentation is thus feasible and profitable. PMID:20718244

  16. A new automatic design method to develop multilayer thin film devices for high power laser applications

    International Nuclear Information System (INIS)

    Optical thin film devices play a major role in many areas of frontier technology, from the development of various laser systems to the design of complex and precision optical systems. The design and development of these devices are particularly challenging when they are meant for high power laser applications. In these cases, besides the desired optical characteristics, the devices are expected to satisfy a whole range of different needs such as high damage threshold, durability etc. In the present work a novel, completely automatic design method based on the Modified Complex Method has been developed for designing high power thin film devices. Unlike most other methods, it does not need a suitable starting design; a quarterwave design is sufficient to start with. If required, it is capable of generating its own starting design. The computer code of the method is very simple to implement. This report discusses this novel automatic design method and presents various practicable output designs generated by it. The relative efficiency of the method, compared with other powerful methods, is presented for the design of a broadband IR antireflection coating. The method is also incorporated with 2D and 3D electric field analysis programmes to produce high damage threshold designs. Some experimental devices developed using such designs are also presented in the report. (author). 36 refs., 41 figs

  17. Brazil nut sorting for aflatoxin prevention: a comparison between automatic and manual shelling methods

    Directory of Open Access Journals (Sweden)

    Ariane Mendonça Pacheco

    2013-06-01

    Full Text Available The impact of automatic and manual shelling methods during manual/visual sorting of different batches of Brazil nuts from the 2010 and 2011 harvests was evaluated in order to investigate aflatoxin prevention. The samples were tested as follows: in-shell, shell, shelled, and pieces, in order to evaluate the moisture content (mc), water activity (Aw), and total aflatoxin (LOD = 0.3 µg/kg and LOQ = 0.85 µg/kg) at the Brazil nut processing plant. The aflatoxin results obtained for the manually shelled nut samples ranged from 3.0 to 60.3 µg/g and from 2.0 to 31.0 µg/g for the automatically shelled samples. All samples showed levels of mc below the limit of 15%; on the other hand, shelled samples from both harvests showed levels of Aw above the limit. There were no significant differences between the manual and automatic shelling results during the sorting stages. On the other hand, visual sorting was effective in decreasing the aflatoxin contamination in both methods.

  18. An automatic method to generate domain-specific investigator networks using PubMed abstracts

    Directory of Open Access Journals (Sweden)

    Gwinn Marta

    2007-06-01

    Full Text Available Abstract Background Collaboration among investigators has become critical to scientific research. This includes ad hoc collaboration established through personal contacts as well as formal consortia established by funding agencies. Continued growth in online resources for scientific research and communication has promoted the development of highly networked research communities. Extending these networks globally requires identifying additional investigators in a given domain, profiling their research interests, and collecting current contact information. We present a novel strategy for building investigator networks dynamically and producing detailed investigator profiles using data available in PubMed abstracts. Results We developed a novel strategy to obtain detailed investigator information by automatically parsing the affiliation string in PubMed records. We illustrated the results by using a published literature database in human genome epidemiology (HuGE Pub Lit) as a test case. Our parsing strategy extracted country information from 92.1% of the affiliation strings in a random sample of PubMed records and in 97.0% of HuGE records, with accuracies of 94.0% and 91.0%, respectively. Institution information was parsed from 91.3% of the general PubMed records (accuracy 86.8%) and from 94.2% of HuGE PubMed records (accuracy 87.0%). We demonstrated the application of our approach to dynamic creation of investigator networks by creating a prototype information system containing a large database of PubMed abstracts relevant to human genome epidemiology (HuGE Pub Lit), indexed using PubMed medical subject headings converted to Unified Medical Language System concepts. Our method was able to identify 70–90% of the investigators/collaborators in three different human genetics fields; it also successfully identified 9 of 10 genetics investigators within the PREBIC network, an existing preterm birth research network. Conclusion We successfully created a
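    A minimal sketch of affiliation-string parsing with regular expressions is given below. The assumed layout (institution first, country last, comma-separated, optional trailing e-mail) matches common but not universal PubMed practice, the example string is hypothetical, and the published parsing rules are more elaborate.

      # Sketch: extract institution and country from a PubMed-style affiliation string.
      # Assumes a comma-separated layout with the institution first and the country last;
      # real affiliation strings are messier and the published rules are more elaborate.
      import re

      def parse_affiliation(affiliation):
          # Drop a trailing e-mail address, if present.
          affiliation = re.sub(r"\s*\S+@\S+\s*$", "", affiliation).rstrip(". ")
          parts = [p.strip() for p in affiliation.split(",") if p.strip()]
          if not parts:
              return {"institution": None, "country": None}
          country = re.sub(r"\b\d{4,}\b", "", parts[-1]).strip()   # strip postal codes
          return {"institution": parts[0], "country": country or None}

      example = ("Department of Epidemiology, Some University School of Medicine, "
                 "Atlanta, GA 30322, USA. someone@example.org")    # hypothetical record
      print(parse_affiliation(example))
      # -> {'institution': 'Department of Epidemiology', 'country': 'USA'}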

  19. Extraction of Ashwagandha by conventional extraction methods and evaluation of its anti-stress activity

    Directory of Open Access Journals (Sweden)

    Jain H

    2010-01-01

    Full Text Available The present study was conducted to compare the yield and the antistress activity of Ashwagandha (Withania somnifera) extracts, using two extraction methods: hot continuous percolation and maceration. Various parameters such as temperature, extraction time (10 hours), solvents (water, alcohol, hydroalcohol) and drug-solvent ratios (1:6, 1:8, 1:10) were fixed. The highest yield, 16.96% w/w, was obtained by the maceration process using water (1:8). The activity of the different extracts was evaluated by the Plus-Maze model using alprazolam as the standard drug. Significant results were found for the water extract and the hydroalcoholic (1:8) extract prepared by the maceration method, and also for the hydroalcoholic (1:8) extract prepared by the Soxhlet process.

  20. The effect of extraction method on antioxidant activity of Atractylis babelii Hochr. leaves and flowers extracts

    OpenAIRE

    Khadidja Boudebaz; Samira Nia, Malika; Trabelsi Ayadi; Jamila Kalthoum Cherif

    2015-01-01

    In this study, leaves and flowers of Atractylis babelii were chosen to investigate their antioxidant activities. Thus, a comparison between the antioxidant properties of ethanolic crude extracts obtained by two extraction methods, maceration and soxhlet extraction, was performed using two different tests; DPPH and ABTS radical assays. Besides, total polyphenol, flavonoid and condensed tannin contents were determined in leaves and flowers of Atractylis babelii by colorimetric methods. The resu...

  1. Effect of Temperature on the Color of Natural Dyes Extracted Using Pressurized Hot Water Extraction Method

    Directory of Open Access Journals (Sweden)

    Nursyamirah A. Razak

    2011-01-01

    Full Text Available Problem statement: Traditionally, extraction of natural dyes by the boiling method produces only a single tone of colorant/dye and requires plenty of water over several hours of extraction time. A new, modern extraction technique should be introduced, especially to textile dyers, so that a variety of colorant tones can be produced in a shorter time with less water consumption. Approach: This study demonstrated Pressurized Hot Water Extraction (PHWE) as a new technique to extract colorants from a selected plant, i.e., the Xylocarpus moluccensis species, which can be found abundantly in Peninsular Malaysia. Colorant from the heartwood of Xylocarpus moluccensis was extracted at different elevated temperatures, from 50°C up to 150°C, using the PHWE technique, and the extracts obtained were compared to those obtained via the boiling method at 100°C. The color strength of the dye extracts was then analyzed using a UV-Visible spectrophotometer and a Video Spectral Comparator (VSC 5000). The effect of the extraction temperature on the color of the extracts obtained by PHWE was also investigated. Results: The colorimetric data obtained from the VSC readings exhibited the exact tone of colors found in anthraquinone. The UV-Visible spectra also show higher absorbance for natural dyes extracted via PHWE compared to those obtained by the boiling method. Conclusion: By using PHWE at different elevated temperatures, different tones of colorants can be produced from a single source in a shorter time with less water consumption.

  2. Methods for microbial DNA extraction from soil for PCR amplification.

    Science.gov (United States)

    Yeates, C; Gillings, M R; Davison, A D; Altavilla, N; Veal, D A

    1998-05-14

    Amplification of DNA from soil is often inhibited by co-purified contaminants. A rapid, inexpensive, large-scale DNA extraction method involving minimal purification has been developed that is applicable to various soil types (1). DNA is also suitable for PCR amplification using various DNA targets. DNA was extracted from 100g of soil using direct lysis with glass beads and SDS followed by potassium acetate precipitation, polyethylene glycol precipitation, phenol extraction and isopropanol precipitation. This method was compared to other DNA extraction methods with regard to DNA purity and size. PMID:12734590

  3. Methods for microbial DNA extraction from soil for PCR amplification

    Directory of Open Access Journals (Sweden)

    Yeates C

    1998-01-01

    Full Text Available Amplification of DNA from soil is often inhibited by co-purified contaminants. A rapid, inexpensive, large-scale DNA extraction method involving minimal purification has been developed that is applicable to various soil types (1). DNA is also suitable for PCR amplification using various DNA targets. DNA was extracted from 100g of soil using direct lysis with glass beads and SDS followed by potassium acetate precipitation, polyethylene glycol precipitation, phenol extraction and isopropanol precipitation. This method was compared to other DNA extraction methods with regard to DNA purity and size.

  4. Can Automatic Abstracting Improve on Current Extracting Techniques in Aiding Users to Judge the Relevance of Pages in Search Engine Results?

    OpenAIRE

    Liang, SF

    2004-01-01

    Current search engines use sentence extraction techniques to produce snippet result summaries, which users may find less than ideal for determining the relevance of pages. Unlike extracting, abstracting programs analyse the context of documents and rewrite them into informative summaries. Our project aims to produce abstracting summaries which are coherent and easy to read thereby lessening users’ time in judging the relevance of pages. However, automatic abstracting technique has its domain ...

  5. DEVELOPMENT AND METHOD VALIDATION OF AESCULUS HIPPOCASTANUM EXTRACT

    OpenAIRE

    Biradar sanjivkumar; Dhumansure Rajkumar; Patil Mallikarjun; Biradar Karankumar; K Sreenivasa Rao

    2012-01-01

    Aesculus hippocastanum is highly regarded for its medicinal properties in the indigenous system of medicine. The objectives of the present study include the validation of Aesculus hippocastanum extract. An authenticated extract of the seeds of the plant was collected and a method was developed for its validation. The extract was checked for accuracy, precision, linearity and specificity. A UV spectrophotometer was used for the validation. The proposed UV validation method for ...

  6. A Robust Visual-Feature-Extraction Method in Public Environment

    OpenAIRE

    カ, ゴセー; Hua, Gangchen

    2015-01-01

    In this study we describe a new feature extracting method that can extract robust features from a sequence of images and also performs satisfactorily in a highly dynamic environment. This method is based on the geometric structure of matched local feature points. When compared with other previous methods, the proposed method is more accurate in appearance-only simultaneous localization and mapping (SLAM). When compared to position-invariant robust features, the proposed method is more suitabl...

  7. Automatic Target Recognition in Synthetic Aperture Sonar Images Based on Geometrical Feature Extraction

    Directory of Open Access Journals (Sweden)

    J. Del Rio Vera

    2009-01-01

    Full Text Available This paper presents a new supervised classification approach for automated target recognition (ATR) in SAS images. The recognition procedure starts with a novel segmentation stage based on the Hilbert transform. A number of geometrical features are then extracted and used to classify observed objects against a previously compiled database of target and non-target features. The proposed approach has been tested on a set of 1528 simulated images created by the NURC SIGMAS sonar model, achieving up to 95% classification accuracy.

  8. Terrain-driven unstructured mesh development through semi-automatic vertical feature extraction

    Science.gov (United States)

    Bilskie, Matthew V.; Coggin, David; Hagen, Scott C.; Medeiros, Stephen C.

    2015-12-01

    A semi-automated vertical feature terrain extraction algorithm is described and applied to a two-dimensional, depth-integrated, shallow water equation inundation model. The extracted features describe what are commonly sub-mesh scale elevation details (ridges and valleys), which may be ignored in standard practice because adequate mesh resolution cannot be afforded. The extraction algorithm is semi-automated, requires minimal human intervention, and is reproducible. A lidar-derived digital elevation model (DEM) of coastal Mississippi and Alabama serves as the source data for the vertical feature extraction. Unstructured mesh nodes and element edges are aligned to the vertical features and an interpolation algorithm aimed at minimizing topographic elevation error assigns elevations to mesh nodes via the DEM. The end result is a mesh that accurately represents the bare earth surface as derived from lidar with element resolution in the floodplain ranging from 15 m to 200 m. To examine the influence of the inclusion of vertical features on overland flooding, two additional meshes were developed, one without crest elevations of the features and another with vertical features withheld. All three meshes were incorporated into a SWAN+ADCIRC model simulation of Hurricane Katrina. Each of the three models resulted in similar validation statistics when compared to observed time-series water levels at gages and post-storm collected high water marks. Simulated water level peaks yielded an R2 of 0.97 and upper and lower 95% confidence interval of ∼ ± 0.60 m. From the validation at the gages and HWM locations, it was not clear which of the three model experiments performed best in terms of accuracy. Inundation extents from the three model results were compared to debris lines derived from NOAA post-event aerial imagery, and the mesh including vertical features showed higher accuracy. The comparison of model results to debris lines demonstrates that additional

  9. Automatic Estimation of Artemia Hatching Rate Using an Object Discrimination Method

    Directory of Open Access Journals (Sweden)

    Sung Kim

    2013-09-01

    Full Text Available Digital image processing makes it possible to analyze large volumes of information contained in digital images. In this study, the Artemia hatching rate was measured by automatically classifying and counting cysts and larvae based on color imaging data from cyst hatching experiments, using an image processing technique. The Artemia hatching rate estimation consists of a series of processes: a step to convert the scanned image data to binary image data, a process to detect objects and extract their shape information from the converted image data, an analysis step to choose an optimal discriminant function, and a step to recognize and classify the objects using that function. The function to classify Artemia cysts and larvae is optimally estimated based on the classification performance, using the areas and the plan-form factors of the detected objects. The hatching rate estimated from the image data obtained under the different experimental conditions was in the range of 34-48%. When the results of automatic counting (this study) and manual counting were compared, the maximum difference was about 19.7% and the average root-mean-squared difference was about 10.9%. This technique can be applied to biological specimen analysis using similar imaging information.
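    A minimal sketch of the counting step is shown below, assuming a pre-binarized image and an area-only discriminant; the published method additionally uses plan-form factors and estimates the discriminant function from the classification performance.

      # Sketch: count cysts vs. hatched larvae in a binarized image and estimate the hatching rate.
      # Uses a simple area threshold as the discriminant; the published method also uses
      # plan-form (shape) factors and chooses the discriminant function from data.
      import numpy as np
      from scipy import ndimage

      def hatching_rate(binary_image, area_threshold=120):
          labels, n = ndimage.label(binary_image)           # connected-component labelling
          if n == 0:
              return 0.0, 0, 0
          areas = np.bincount(labels.ravel())[1:]            # pixel area of each object
          larvae = int((areas >= area_threshold).sum())       # assumed: larvae are larger
          cysts = int((areas < area_threshold).sum())
          return larvae / (larvae + cysts), cysts, larvae

      # Toy image: two small blobs ("cysts") and one large blob ("larva").
      img = np.zeros((100, 100), dtype=bool)
      img[10:18, 10:18] = True
      img[40:48, 60:68] = True
      img[70:90, 20:50] = True
      print(hatching_rate(img, area_threshold=120))          # -> (0.333..., 2, 1)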

  10. Automatic Crack Detection and Classification Method for Subway Tunnel Safety Monitoring

    Directory of Open Access Journals (Sweden)

    Wenyu Zhang

    2014-10-01

    Full Text Available Cracks are an important indicator reflecting the safety status of infrastructure. This paper presents an automatic crack detection and classification methodology for subway tunnel safety monitoring. With the application of high-speed complementary metal-oxide-semiconductor (CMOS) industrial cameras, the tunnel surface can be captured and stored in digital images. In the next step, the local dark regions with potential crack defects are segmented from the original gray-scale images by utilizing morphological image processing techniques and thresholding operations. In the feature extraction process, we present a distance-histogram-based shape descriptor that effectively describes the spatial shape difference between cracks and other irrelevant objects. Along with other features, the classification results successfully remove over 90% of misidentified objects. Also, compared with the original gray-scale images, over 90% of the crack length is preserved in the final output binary images. The proposed approach was tested on the safety monitoring for Beijing Subway Line 1. The experimental results revealed the rules for parameter settings and also proved that the proposed approach is effective and efficient for automatic crack detection and classification.
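    A minimal sketch of a distance-histogram shape descriptor is given below: distances from object pixels to the object centroid, normalized by the equivalent-disk radius, are binned into a histogram, so compact blobs concentrate in the first bins while elongated cracks spread into higher bins. The exact descriptor used in the paper is not reproduced.

      # Sketch: distance-histogram shape descriptor for a binary object.
      # Distances are normalized by the equivalent-disk radius so that compact blobs
      # concentrate in the first bins while elongated cracks spread into higher bins.
      import numpy as np

      def distance_histogram(binary_object, n_bins=10, max_ratio=8.0):
          ys, xs = np.nonzero(binary_object)
          cy, cx = ys.mean(), xs.mean()
          d = np.hypot(ys - cy, xs - cx)
          r_eq = np.sqrt(len(ys) / np.pi)        # radius of a disk with the same area
          hist, _ = np.histogram(d / r_eq, bins=n_bins, range=(0, max_ratio))
          return hist / hist.sum()

      # Toy comparison: a thin "crack" vs. a compact blob of similar area.
      crack = np.zeros((60, 60), dtype=bool); crack[30, 5:55] = True
      blob = np.zeros((60, 60), dtype=bool);  blob[25:32, 25:32] = True
      print("crack:", np.round(distance_histogram(crack), 2))
      print("blob: ", np.round(distance_histogram(blob), 2))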

  11. Vernix caseosa lipid extraction: Comparison of methods

    Czech Academy of Sciences Publication Activity Database

    Míková, Radka; Vrkoslav, Vladimír; Horká, Petra; Zábranská, Marie; Doležal, A.; Plavka, R.; Cvačka, Josef

    Cracow : -, 2012. s. 352-352. [Euro Fed Lipid Congress. Fats , Oils and Lipids: from Science and Technology to Health /10./. 23.09.2012-26.09.2012, Cracow] R&D Projects: GA ČR GAP206/12/0750 Grant ostatní: GA UK(CZ) SVV 2012-265201 Institutional support: RVO:61388963 Keywords : vernix caseosa * lipids * extraction Subject RIV: CB - Analytical Chemistry, Separation

  12. A Cell Extraction Method for Oily Sediments

    OpenAIRE

    Michael Lappé

    2011-01-01

    Hydrocarbons can be found in many different habitats and represent an important carbon source for microbes. As fossil fuels, they are also an important economical resource and through natural seepage or accidental release they can be major pollutants. DNA-specific stains and molecular probes bind to hydrocarbons, causing massive background fluorescence, thereby hampering cell enumeration. The cell extraction procedure of Kallmeyer et al. (2008) separates the cells from the sediment matrix. In...

  13. Automatic Extraction and Recognition of Numbers in Topographic Maps

    Institute of Scientific and Technical Information of China (English)

    徐战武; 张涛; 刘肖琳

    2001-01-01

    Automatic vectorization of scanned topographic maps is an important problem in the GIS field that urgently needs to be solved. A topographic map contains a large number of digit annotations in a variety of fonts, which indicate attributes and other characteristics of terrain and man-made features, and extracting and recognizing these digits correctly is an important part of map processing. This paper analyzes the shortcomings of existing extraction methods and presents a new automatic extraction and recognition algorithm for digit annotations. The algorithm first determines candidate digits according to prior size information, then recognizes the true digits with a BP neural network of OCON structure, and finally extracts extended digits using neighborhood relations. Experiments show that the algorithm is fast, efficient and reliable.

  14. A Multi-stage Method to Extract Road from High Resolution Satellite Image

    International Nuclear Information System (INIS)

    Extracting road information from high-resolution satellite images is complex and can hardly be achieved by exploiting only one or two modules. This paper presents a multi-stage method consisting of automatic information extraction and semi-automatic post-processing. The Multi-scale Enhancement algorithm enhances the contrast of man-made structures against the background. Statistical Region Merging segments the images into regions, whose skeletons are extracted and pruned according to geometric shape information. Given start and end skeleton points, the shortest skeleton path is constructed as a road centre line. The Bidirectional Adaptive Smoothing technique smoothes the road centre line and adjusts it to the correct position. With the smoothed line and its average width, a Buffer algorithm reconstructs the road region easily. The final results show that the proposed method eliminates redundant non-road regions, repairs incomplete occlusions, bridges complete occlusions, and preserves accurate road centre lines and neat road regions. During the whole process, only a few interactions are needed.
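    A minimal sketch of the shortest-skeleton-path step is shown below, using breadth-first search between two seed pixels on a binary skeleton image; the enhancement, region-merging and smoothing stages of the method are not reproduced.

      # Sketch: shortest path between two seed pixels on a binary skeleton image (BFS).
      # The enhancement, region-merging and smoothing stages of the paper are not shown.
      import numpy as np
      from collections import deque

      def shortest_skeleton_path(skeleton, start, end):
          """skeleton: 2D bool array; start/end: (row, col) pixels lying on the skeleton."""
          prev = {start: None}
          queue = deque([start])
          while queue:
              r, c = queue.popleft()
              if (r, c) == end:
                  break
              for dr in (-1, 0, 1):
                  for dc in (-1, 0, 1):
                      nr, nc = r + dr, c + dc
                      if (0 <= nr < skeleton.shape[0] and 0 <= nc < skeleton.shape[1]
                              and skeleton[nr, nc] and (nr, nc) not in prev):
                          prev[(nr, nc)] = (r, c)
                          queue.append((nr, nc))
          if end not in prev:
              return None                       # the two seeds are not connected
          path, node = [], end
          while node is not None:
              path.append(node)
              node = prev[node]
          return path[::-1]

      # Toy skeleton: an L-shaped one-pixel-wide line.
      skel = np.zeros((20, 20), dtype=bool)
      skel[5, 2:15] = True
      skel[5:18, 14] = True
      print(len(shortest_skeleton_path(skel, (5, 2), (17, 14))))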

  15. Histogram of Intensity Feature Extraction for Automatic Plastic Bottle Recycling System Using Machine Vision

    Directory of Open Access Journals (Sweden)

    Suzaimah Ramli

    2008-01-01

    Full Text Available Currently, many recycling activities adopt manual sorting for plastic recycling that relies on plant personnel who visually identify and pick plastic bottles as they travel along the conveyor belt. These bottles are then sorted into the respective containers. Manual sorting may not be a suitable option for recycling facilities of high throughput. It has also been noted that the high turnover among sorting line workers has caused difficulties in achieving consistency in the plastic separation process. As a result, an intelligent system for automated sorting is greatly needed to replace the manual sorting system. The core components of machine vision for this intelligent sorting system are image recognition and classification. In this research, the overall plastic bottle sorting system is described. Additionally, the feature extraction algorithm used is discussed in detail, since it is the core component of the overall system that determines the success rate. The performance of the proposed feature extraction was evaluated in terms of classification accuracy, and the results obtained showed an accuracy of more than 80%.
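    A minimal sketch of the histogram-of-intensity idea is given below: a normalized gray-level histogram serves as the feature vector and a nearest-neighbour rule classifies a query image against labelled references. The images, labels and bin count are synthetic stand-ins, and the classifier actually used in the paper is not reproduced.

      # Sketch: normalized gray-level histogram as a feature vector plus a 1-nearest-neighbour
      # classifier. Illustrates the histogram-of-intensity idea only, with synthetic images.
      import numpy as np

      def intensity_histogram(gray_image, n_bins=32):
          hist, _ = np.histogram(gray_image, bins=n_bins, range=(0, 256))
          return hist / hist.sum()

      def classify(query_image, reference_images, reference_labels, n_bins=32):
          q = intensity_histogram(query_image, n_bins)
          feats = np.array([intensity_histogram(img, n_bins) for img in reference_images])
          distances = np.linalg.norm(feats - q, axis=1)
          return reference_labels[int(np.argmin(distances))]

      # Synthetic stand-ins for two plastic types with different brightness statistics.
      rng = np.random.default_rng(0)
      pet_like = rng.normal(180, 20, (64, 64)).clip(0, 255)
      hdpe_like = rng.normal(90, 20, (64, 64)).clip(0, 255)
      query = rng.normal(175, 20, (64, 64)).clip(0, 255)
      print(classify(query, [pet_like, hdpe_like], ["PET", "HDPE"]))   # -> PET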

  16. Design of automatic control system for the precipitation of bromelain from the extract of pineapple wastes

    Directory of Open Access Journals (Sweden)

    Flavio Vasconcelos da Silva

    2010-12-01

    Full Text Available In this work, bromelain was recovered from ground pineapple stem and rind by means of precipitation with alcohol at low temperature. Bromelain is the name of a group of powerful protein-digesting, or proteolytic, enzymes that are particularly useful for reducing muscle and tissue inflammation and as a digestive aid. Temperature control is crucial to avoid irreversible protein denaturation and consequently to improve the quality of the recovered enzyme. The process was carried out alternately in two fed-batch pilot tanks: a glass tank and a stainless steel tank. Aliquots containing 100 mL of pineapple aqueous extract were fed into the tank. Inside the jacketed tank, the protein was exposed to unsteady operating conditions during the addition of the precipitating agent (ethanol 99.5%) because the dilution ratio of aqueous extract to ethanol and the heat transfer area changed. The coolant flow rate was manipulated through a variable speed pump. Fine-tuned conventional and adaptive PID controllers were implemented on-line using a fieldbus digital control system. The processing performance efficiency was enhanced, and so was the quality (enzyme activity) of the product.
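    A minimal sketch of a discrete PID controller of the kind used to manipulate the coolant flow from the temperature error is shown below; the gains, limits, sample time and plant response are illustrative, and the adaptive variant from the study is not shown.

      # Sketch: discrete PID controller manipulating coolant flow from the temperature error.
      # Gains, limits and the sample time are illustrative; the adaptive variant is not shown.
      class PID:
          def __init__(self, kp, ki, kd, dt, out_min=0.0, out_max=100.0):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.out_min, self.out_max = out_min, out_max
              self.integral = 0.0
              self.prev_error = 0.0

          def update(self, setpoint, measurement):
              # Reverse-acting loop: more coolant flow is needed when the measurement is
              # above the setpoint, so the error is taken as measurement - setpoint.
              error = measurement - setpoint
              self.integral += error * self.dt
              derivative = (error - self.prev_error) / self.dt
              self.prev_error = error
              u = self.kp * error + self.ki * self.integral + self.kd * derivative
              return min(max(u, self.out_min), self.out_max)   # clamp to pump range (%)

      # Toy usage: drive the precipitation tank toward 5 degC by adjusting coolant flow.
      pid = PID(kp=8.0, ki=0.5, kd=1.0, dt=1.0)
      temperature = 12.0
      for _ in range(5):
          flow = pid.update(setpoint=5.0, measurement=temperature)
          temperature -= 0.02 * flow        # crude stand-in for the tank's thermal response
          print(round(flow, 1), round(temperature, 2))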

  17. Study of Automatic Extraction, Classification, and Ranking of Product Aspects Based on Sentiment Analysis of Reviews

    Directory of Open Access Journals (Sweden)

    Muhammad Rafi

    2015-10-01

    Full Text Available It is very common for a customer to read reviews about a product before making a final decision to buy it. Customers are always eager to get the best and the most objective information about the product they wish to purchase, and reviews are the major source of this information. Although reviews are easily accessible from the web, most of them carry ambiguous opinions and have different structures, so it is often very difficult for a customer to filter the information he actually needs. This paper suggests a framework which provides a single user-interface solution to this problem based on sentiment analysis of reviews. First, it extracts all the reviews from different websites with varying structure and gathers information about relevant aspects of the product. Next, it performs sentiment analysis around those aspects and gives them sentiment scores. Finally, it ranks all extracted aspects and clusters them into positive and negative classes. The final output is a graphical visualization of all positive and negative aspects, which provides the customer with easy, comparable, and visual information about the important aspects of the product. The experimental results on five different products carrying 5000 reviews show 78% accuracy. Moreover, the paper also explains the effect of negation, valence shifters, and diminishers with a sentiment lexicon on sentiment analysis, and concludes that they are all independent of the case problem and have no effect on the accuracy of sentiment analysis.
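    A minimal sketch of lexicon-based sentiment scoring with negation and diminisher handling is given below; the tiny lexicon, word lists and look-back window are illustrative only and are not those used in the paper.

      # Sketch: lexicon-based sentiment scoring of an aspect sentence with negation and
      # diminisher handling. The tiny lexicon and word lists are illustrative only.
      LEXICON = {"good": 1.0, "great": 2.0, "poor": -1.0, "terrible": -2.0, "sharp": 1.0}
      NEGATORS = {"not", "never", "no"}
      DIMINISHERS = {"slightly", "somewhat", "barely"}

      def sentence_score(sentence):
          words = sentence.lower().split()
          score = 0.0
          for i, word in enumerate(words):
              if word not in LEXICON:
                  continue
              value = LEXICON[word]
              window = words[max(0, i - 2):i]           # look back two words
              if any(w in NEGATORS for w in window):
                  value = -value                         # negation flips polarity
              if any(w in DIMINISHERS for w in window):
                  value *= 0.5                           # diminisher weakens polarity
              score += value
          return score

      print(sentence_score("the screen is sharp and great"))    # -> 3.0
      print(sentence_score("the battery is not good"))          # -> -1.0
      print(sentence_score("the camera is slightly poor"))      # -> -0.5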

  18. Evaluation of in vitro antioxidant potential of different polarities stem crude extracts by different extraction methods of Adenium obesum

    OpenAIRE

    Mohammad Amzad Hossain; Tahiya Hilal Ali Alabri; Amira Hamood Salim Al Musalami; Md. Sohail Akhtar; Sadri Said

    2014-01-01

    Objective: To select the best extraction method for isolating antioxidant compounds from the stems of Adenium obesum. Methods: Two extraction methods were used: the Soxhlet and maceration methods. Methanol was used as the solvent for both extraction methods. The methanol crude extract was defatted with water and extracted successively with hexane, chloroform, ethyl acetate and butanol solvents. The antioxidant potential of all crude extracts was determined by using 1, 1-diphenyl...

  19. A new automatic image analysis method for assessing estrogen receptors' status in breast tissue specimens.

    Science.gov (United States)

    Mouelhi, Aymen; Sayadi, Mounir; Fnaiech, Farhat; Mrad, Karima; Ben Romdhane, Khaled

    2013-12-01

    Manual assessment of estrogen receptors' (ER) status from breast tissue microscopy images is a subjective, time-consuming and error-prone process. Automatic image analysis methods offer the possibility of obtaining consistent, objective and rapid diagnoses of histopathology specimens. In breast cancer biopsies immunohistochemically (IHC) stained for ER, cancer cell nuclei present a large variety of characteristics that pose various difficulties for traditional image analysis methods. In this paper, we propose a new automatic method to perform both segmentation and classification of breast cell nuclei in order to provide quantitative assessment and uniform indicators of IHC staining that will help pathologists in their diagnosis. Firstly, a color geometric active contour model incorporating a spatial fuzzy clustering algorithm is proposed to detect the contours of all cell nuclei in the image. Secondly, overlapping and touching nuclei are separated using an improved watershed algorithm based on a concave vertex graph. Finally, to identify positive and negative stained nuclei, all the segmented nuclei are classified into five categories according to their staining intensity and morphological features using a trained multilayer neural network combined with Fisher's linear discriminant preprocessing. The proposed method is tested on a large dataset containing several breast tissue images with different levels of malignancy. The experimental results show high agreement between the results of the method and ground truth from the pathologist panel. Furthermore, a comparative study versus existing techniques is presented in order to demonstrate the efficiency and the superiority of the proposed method. PMID:24290943

  20. Comparison of methods for extracting thylakoid membranes of Arabidopsis plants.

    Science.gov (United States)

    Chen, Yang-Er; Yuan, Shu; Schröder, Wolfgang P

    2016-01-01

    Robust and reproducible methods for extracting thylakoid membranes are required for the analysis of photosynthetic processes in higher plants such as Arabidopsis. Here, we compare three methods for thylakoid extraction using two different buffers. Method I involves homogenizing the plant material with a metal/glass blender; method II involves manually grinding the plant material in ice-cold grinding buffer with a mortar and method III entails snap-freezing followed by manual grinding with a mortar, after which the frozen powder is thawed in isolation buffer. Thylakoid membrane samples extracted using each method were analyzed with respect to protein and chlorophyll content, yields relative to starting material, oxygen-evolving activity, protein complex content and phosphorylation. We also examined how the use of fresh and frozen thylakoid material affected the extracts' contents of protein complexes. The use of different extraction buffers did not significantly alter the protein content of the extracts in any case. Method I yielded thylakoid membranes with the highest purity and oxygen-evolving activity. Method III used low amounts of starting material and was capable of capturing rapid phosphorylation changes in the sample at the cost of higher levels of contamination. Method II yielded thylakoid membrane extracts with properties intermediate between those obtained with the other two methods. Finally, frozen and freshly isolated thylakoid membranes performed identically in blue native-polyacrylamide gel electrophoresis experiments conducted in order to separate multimeric protein supracomplexes. PMID:26337850

  1. Creation of voxel-based models for paediatric dosimetry from automatic segmentation methods

    International Nuclear Information System (INIS)

    Full text: The first computational models representing human anatomy were mathematical phantoms, but these were still far from accurate representations of the human body. These models have been used with radiation transport (Monte Carlo) codes to estimate organ doses from radiological procedures. Although new medical imaging techniques have recently allowed the construction of voxel-based models based on real anatomy, few child models built from individual CT or MRI data have been reported [1,3]. For pediatric dosimetry purposes, a large range of voxel models by age is required, since scaling the anatomy from existing models is not sufficiently accurate. The small number of models available arises from the small number of CT or MRI data sets of children available and the long time required to segment them. The existing models have been constructed by manual segmentation slice by slice and by simple thresholding techniques. In medical image segmentation, considerable difficulties appear when applying classical techniques such as thresholding or simple edge detection, and until now there is no evidence of more accurate or near-automatic methods being used in the construction of child voxel models. We aim to construct a range of pediatric voxel models, integrating automatic or semi-automatic 3D segmentation techniques. In this paper we present the first stage of this work using pediatric CT data.

  2. Research on large spatial coordinate automatic measuring system based on multilateral method

    Science.gov (United States)

    Miao, Dongjing; Li, Jianshuan; Li, Lianfu; Jiang, Yuanlin; Kang, Yao; He, Mingzhao; Deng, Xiangrui

    2015-10-01

    To measure spatial coordinates accurately and efficiently over a large range, an automatic manipulator measurement system based on the multilateral method has been developed. The system is divided into two parts: the coordinate measurement subsystem, which consists of four laser tracers, and the trajectory generation subsystem, which is composed of a manipulator and a rail. To ensure that no laser beam break occurs during the measurement process, an optimization function is constructed using the vectors between the measuring centres of the laser tracers and the measuring centre of the cat's eye reflector, and an automatic orientation-adjustment algorithm for the reflector is proposed; with this algorithm, the laser tracers are always able to track the reflector during the entire measurement process. Finally, the proposed algorithm is validated by taking the calibration of a laser tracker as an example: the experiment is conducted in a 5 m × 3 m × 3.2 m range, and the algorithm is used to automatically plan the orientations of the reflector for the 24 given points. After improving the orientations of a minority of points with adverse angles, the final results are used to control the manipulator's motion. During the actual movement, no beam breaks occur. The results show that the proposed algorithm helps the developed system to measure spatial coordinates over a large range efficiently.
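    A minimal sketch of the multilateral principle is shown below: with four stations at known positions and measured distances to the reflector, the reflector coordinates are obtained by linearizing the range equations and solving a least-squares problem. The station layout, distances and noise level are synthetic.

      # Sketch: solve the reflector position from measured distances to four laser tracer
      # stations (the multilateral principle), by linearizing the range equations.
      # Station layout, distances and noise are synthetic.
      import numpy as np

      def multilaterate(stations, distances):
          """stations: (n, 3) known positions; distances: (n,) measured ranges."""
          s0, d0 = stations[0], distances[0]
          # Subtracting the first range equation from the others removes the quadratic term.
          A = 2.0 * (stations[1:] - s0)
          b = ((stations[1:] ** 2).sum(axis=1) - (s0 ** 2).sum()
               - distances[1:] ** 2 + d0 ** 2)
          x, *_ = np.linalg.lstsq(A, b, rcond=None)
          return x

      stations = np.array([[0, 0, 0], [5, 0, 0], [0, 3, 0], [0, 0, 3.2]], dtype=float)
      true_point = np.array([2.0, 1.5, 1.0])
      ranges = np.linalg.norm(stations - true_point, axis=1) + np.random.normal(0, 1e-4, 4)
      print(multilaterate(stations, ranges))     # close to [2.0, 1.5, 1.0]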

  3. Development of Automatic 3D Blood Vessel Search and Automatic Blood Sampling System by Using Hybrid Stereo-Autofocus Method

    OpenAIRE

    Eiji Nakamachi; Yusuke Morita; Yoshifumi Mizuno

    2012-01-01

    We developed an accurate three-dimensional blood vessel search (3D BVS) system and an automatic blood sampling system. They were implemented into a point-of-care system designed for medical care, installed in a portable self-monitoring blood glucose (SMBG) device. The system solves problems of human error caused by complicated manual operations of conventional SMBG devices. We evaluated its accuracy of blood-vessel position detection. The 3D BVS system uses near-infrared (NIR) light imaging a...

  4. An automated and simple method for brain MR image extraction

    OpenAIRE

    Zhu Zixin; Liu Jiafeng; Zhang Haiyan; Li Haiyun

    2011-01-01

    Abstract Background The extraction of brain tissue from magnetic resonance head images, is an important image processing step for the analyses of neuroimage data. The authors have developed an automated and simple brain extraction method using an improved geometric active contour model. Methods The method uses an improved geometric active contour model which can not only solve the boundary leakage problem but also is less sensitive to intensity inhomogeneity. The method defines the initial fu...

  5. DEVELOPMENT AND METHOD VALIDATION OF AESCULUS HIPPOCASTANUM EXTRACT

    Directory of Open Access Journals (Sweden)

    Biradar sanjivkumar

    2012-07-01

    Full Text Available Aesculus hippocastanum is highly regarded for its medicinal properties in the indigenous system of medicine. The objective of the present study was the validation of an Aesculus hippocastanum extract. An authenticated extract of the seeds of the plant was collected and a method was developed for its validation. The extract was checked for accuracy, precision, linearity and specificity. A UV spectrophotometer was used for the validation. The proposed UV validation method for the extract is accurate, precise, linear, specific and within range. Further isolation and in-vitro studies are needed.

  6. Advanced method for automatic processing of seismic and infra-sound data

    International Nuclear Information System (INIS)

    Governmental organizations have manifested their need for rapid and precise information in the two main fields covered by operational seismology, i.e. major earthquake alerts and the detection of nuclear explosions. To satisfy both of these constraints, it is necessary to implement increasingly elaborate automation methods for processing the data. Automatic processing methods are mainly based on the following elementary steps: detection of a seismic signal on a recording; identification of the type of wave associated with the signal; linking of the different detected arrivals to the same seismic event; and localization of the source, which also determines the characteristics of the event. In addition, two main categories of processing may be distinguished: methods suitable for large-aperture networks, which are characterized by single-channel treatment for detection and identification, and antenna-type methods, which are based on searching for consistent signals on the scale of the network. Within the two main fields of research mentioned here, our effort has focused on regional-scale seismic waves in relation to large-aperture networks as well as on detection techniques using a mini-network (antenna). We have taken advantage of an extensive set of examples in order to implement an automatic procedure for identifying regional seismic waves on single-channel recordings. With the mini-networks, we have developed a novel, universally applicable method that has been successfully applied to various types of recording (e.g. seismic, micro-barometric, etc.) and to networks adapted to different wavelength bands. (authors)
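
    The detection step described above (declaring a signal on a single-channel recording) is commonly realized with a short-term/long-term average (STA/LTA) trigger. The record does not specify the detector actually used, so the following is only a minimal illustrative sketch of that generic approach; the function name, window lengths and threshold are assumptions, not values from the source.

```python
import numpy as np

def sta_lta_trigger(trace, fs, sta_win=1.0, lta_win=30.0, threshold=3.0):
    """Return sample indices where the STA/LTA ratio first exceeds the threshold.

    A generic single-channel detector: the short-term average (STA) of the
    squared trace is divided by a long-term average (LTA); a trigger is
    declared when the ratio crosses `threshold`.
    """
    nsta = max(1, int(sta_win * fs))
    nlta = max(nsta + 1, int(lta_win * fs))
    energy = trace.astype(float) ** 2

    # Cumulative sums make the running means O(n).
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    sta = (csum[nsta:] - csum[:-nsta]) / nsta   # STA ending at each sample
    lta = (csum[nlta:] - csum[:-nlta]) / nlta   # LTA ending at each sample

    # Align both series on the samples where both windows are full.
    sta = sta[nlta - nsta:]
    ratio = sta / np.maximum(lta, 1e-12)

    onsets = np.flatnonzero((ratio[1:] >= threshold) & (ratio[:-1] < threshold))
    return onsets + nlta  # shift back to indices of the original trace

if __name__ == "__main__":
    fs = 100.0
    t = np.arange(0, 60, 1 / fs)
    trace = np.random.default_rng(0).normal(0, 1, t.size)
    trace[3000:3200] += 8 * np.sin(2 * np.pi * 5 * t[3000:3200])  # synthetic arrival
    print(sta_lta_trigger(trace, fs))
```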

  7. Automatic optimized reload and depletion method for a pressurized water reactor

    International Nuclear Information System (INIS)

    A new method has been developed to automatically reload and deplete a pressurized water reactor (PWR) so that both the enriched inventory requirements during the reactor cycle and the cost of reloading the core are minimized. This is achieved through four stepwise optimization calculations: (a) determination of the minimum fuel requirement for an equivalent three-region core model, (b) optimal selection and allocation of fuel assemblies for each of the three regions to minimize the reload cost, (c) optimal placement of fuel assemblies to conserve regionwise optimal conditions, and (d) optimal control through poison management to deplete individual fuel assemblies to maximize end-of-cycle k_eff. The new method differs from previous methods in that the optimization process automatically performs all tasks required to reload and deplete a PWR. In addition, the previous work that developed optimization methods principally for the initial reactor cycle was modified to handle subsequent cycles with fuel assemblies having burnup at beginning of cycle. Application of the method to the fourth reactor cycle at Three Mile Island Unit 1 has shown that both the enrichment and the number of fresh reload fuel assemblies can be decreased and fully amortized fuel assemblies can be reused to minimize the fuel cost of the reactor.

  8. Effects of different extraction methods and conditions on the phenolic composition of mate tea extracts.

    Science.gov (United States)

    Grujic, Nevena; Lepojevic, Zika; Srdjenovic, Branislava; Vladic, Jelena; Sudji, Jan

    2012-01-01

    A simple and rapid HPLC method for determination of chlorogenic acid (5-O-caffeoylquinic acid) in mate tea extracts was developed and validated. The chromatography used isocratic elution with a mobile phase of aqueous 1.5% acetic acid-methanol (85:15, v/v). The flow rate was 0.8 mL/min and detection by UV at 325 nm. The method showed good selectivity, accuracy, repeatability and robustness, with detection limit of 0.26 mg/L and recovery of 97.76%. The developed method was applied for the determination of chlorogenic acid in mate tea extracts obtained by ethanol extraction and liquid carbon dioxide extraction with ethanol as co-solvent. Different ethanol concentrations were used (40, 50 and 60%, v/v) and liquid CO₂ extraction was performed at different pressures (50 and 100 bar) and constant temperature (27 ± 1 °C). Significant influence of extraction methods, conditions and solvent polarity on chlorogenic acid content, antioxidant activity and total phenolic and flavonoid content of mate tea extracts was established. The most efficient extraction solvent was liquid CO₂ with aqueous ethanol (40%) as co-solvent using an extraction pressure of 100 bar. PMID:22388965

  9. Effects of Different Extraction Methods and Conditions on the Phenolic Composition of Mate Tea Extracts

    Directory of Open Access Journals (Sweden)

    Jelena Vladic

    2012-03-01

    Full Text Available A simple and rapid HPLC method for determination of chlorogenic acid (5-O-caffeoylquinic acid) in mate tea extracts was developed and validated. The chromatography used isocratic elution with a mobile phase of aqueous 1.5% acetic acid-methanol (85:15, v/v). The flow rate was 0.8 mL/min and detection by UV at 325 nm. The method showed good selectivity, accuracy, repeatability and robustness, with detection limit of 0.26 mg/L and recovery of 97.76%. The developed method was applied for the determination of chlorogenic acid in mate tea extracts obtained by ethanol extraction and liquid carbon dioxide extraction with ethanol as co-solvent. Different ethanol concentrations were used (40, 50 and 60%, v/v) and liquid CO2 extraction was performed at different pressures (50 and 100 bar) and constant temperature (27 ± 1 °C). Significant influence of extraction methods, conditions and solvent polarity on chlorogenic acid content, antioxidant activity and total phenolic and flavonoid content of mate tea extracts was established. The most efficient extraction solvent was liquid CO2 with aqueous ethanol (40%) as co-solvent using an extraction pressure of 100 bar.

  10. Comparison of Methods for Protein Extraction from Pine Needles

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    Extraction of proteins from pine needles for proteomic analysis has long been a challenge for scientists. We compared three different protein extraction methods, using sucrose, Tris-HCl and trichloroacetic acid (TCA)/acetone (TCA method) buffers, to determine their efficiency in separating pine needle proteins by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and two-dimensional PAGE (2D-PAGE). Proteins were then separated by SDS-PAGE. Among the three methods, the one using sucrose extraction buffer showed the highest efficiency and highest quality in separating proteins. In addition, clearer and more stable bands were detected by SDS-PAGE using sucrose extraction buffer. When the proteins extracted using sucrose extraction buffer were separated by 2D-PAGE, more than 300 protein spots, with isoelectric points (pI) ranging from 4.0 to 7.0 and molecular weights (MW) from 6.5 to 97.4 kD, were observed. This confirmed that the method with sucrose extraction buffer was an efficient and reliable method for extracting proteins from pine needles.

  11. Method for Real Time Text Extraction of Digital Manga Comic

    Directory of Open Access Journals (Sweden)

    Kohei Arai, Herman Tolle

    2011-08-01

    Full Text Available Manga is a popular item in Japan and in the rest of the world. Hundreds of manga are printed every day in Japan, and some printed manga books have been digitized into web manga. People then translate the Japanese text in manga into other languages, in the conventional way, to share the pleasure of reading manga through the internet. In this paper, we propose an automatic method to detect and extract Japanese characters within a manga comic page for an online language translation process. The Japanese character text extraction method is based on our comic frame content extraction method using a blob extraction function. Experimental results from 15 comic pages show that our proposed method has 100% accuracy for flat comic frame extraction and comic balloon detection, and 93.75% accuracy for Japanese character text extraction.
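
    The blob extraction step mentioned above can be illustrated with generic connected-component labeling; the sketch below is not the authors' implementation, and the threshold and area limits are illustrative assumptions only.

```python
import numpy as np
from skimage import measure

def candidate_text_blobs(gray, dark_threshold=100, min_area=20, max_area=5000):
    """Label dark connected components (blobs) in a grayscale comic page and
    return bounding boxes of components whose area is plausible for characters.

    `gray` is a 2-D uint8 array; the thresholds are illustrative defaults only.
    """
    binary = gray < dark_threshold                 # dark ink on light background
    labels = measure.label(binary, connectivity=2)
    boxes = []
    for region in measure.regionprops(labels):
        if min_area <= region.area <= max_area:
            boxes.append(region.bbox)              # (min_row, min_col, max_row, max_col)
    return boxes

if __name__ == "__main__":
    page = np.full((200, 200), 255, dtype=np.uint8)
    page[50:60, 40:48] = 0                         # a fake glyph-sized blob
    print(candidate_text_blobs(page))
```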

  12. Scale parameter-estimating method for adaptive fingerprint pore extraction model

    Science.gov (United States)

    Yi, Yao; Cao, Liangcai; Guo, Wei; Luo, Yaping; He, Qingsheng; Jin, Guofan

    2011-11-01

    Sweat pores and other level 3 features have been proven to provide more discriminatory information about fingerprint characteristics, which is useful for personal identification, especially in law enforcement applications. With the advent of high resolution (>=1000 ppi) fingerprint scanning equipment, sweat pores are attracting increasing attention in automatic fingerprint identification systems (AFIS), where the extraction of pores is a critical step. This paper presents a scale parameter-estimating method for a filtering-based pore extraction procedure. Pores are manually extracted from a 1000 ppi grey-level fingerprint image. The size and orientation of each detected pore are extracted together with the local ridge width and orientation. The quantitative relation between the pore parameters (size and orientation) and the local image parameters (ridge width and orientation) is statistically obtained. The pores are then extracted by filtering the fingerprint image with the new pore model, whose parameters are determined by the local image parameters and the statistically established relation. Experiments conducted on high resolution fingerprints indicate that the new pore model gives good performance in pore extraction.

  13. An Automatic Method for Geometric Segmentation of Masonry Arch Bridges for Structural Engineering Purposes

    Science.gov (United States)

    Riveiro, B.; DeJong, M.; Conde, B.

    2016-06-01

    Despite the tremendous advantages of laser scanning technology for the geometric characterization of built constructions, there are important limitations preventing more widespread implementation in the structural engineering domain. Even though the technology provides extensive and accurate information to perform structural assessment and health monitoring, many people are resistant to the technology due to the processing times involved. Thus, new methods that can automatically process LiDAR data and subsequently provide an automatic and organized interpretation are required. This paper presents a new method for fully automated point cloud segmentation of masonry arch bridges. The method efficiently creates segmented, spatially related and organized point clouds, which each contain the relevant geometric data for a particular component (pier, arch, spandrel wall, etc.) of the structure. The segmentation procedure comprises a heuristic approach for the separation of the different vertical walls, after which image processing tools adapted to voxel structures allow the efficient segmentation of the main structural elements of the bridge. The proposed methodology provides the essential processed data required for structural assessment of masonry arch bridges based on geometric anomalies. The method is validated using a representative sample of masonry arch bridges in Spain.

  14. Comparative Research on EPS Extraction from Mechanical Dewatered Sludge with Different Methods

    OpenAIRE

    Weiyun Wang; Wanyu Liu; Lingyun Wang

    2015-01-01

    In order to find a suitable extracellular polymeric substance (EPS) extraction method for mechanically dewatered sludge, four different methods, namely EDTA, alkali, acid and ultrasonic extraction, were used to extract EPS from belt-filter dewatered sludge. The contents of polysaccharides and proteins extracted from the dewatered sludge by the different extraction methods were also analyzed. The results indicated that the EDTA method and the alkali extraction metho...

  15. Method and apparatus for continuous flow injection extraction analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hartenstein, Steven D. (Idaho Falls, ID); Siemer, Darryl D. (Idaho Falls, ID)

    1992-01-01

    A method and apparatus for a continuous flow injection batch extraction analysis system is disclosed, employing extraction of a component of a first liquid into a second liquid which is a solvent for a component of the first liquid and is immiscible with the first liquid, and separation of the first liquid from the second liquid subsequent to extraction of the component of the first liquid.

  16. Methods for microbial DNA extraction from soil for PCR amplification

    OpenAIRE

    Yeates C; Gillings, MR; Davison AD; Altavilla N; Veal DA

    1998-01-01

    Amplification of DNA from soil is often inhibited by co-purified contaminants. A rapid, inexpensive, large-scale DNA extraction method involving minimal purification has been developed that is applicable to various soil types (1). DNA is also suitable for PCR amplification using various DNA targets. DNA was extracted from 100g of soil using direct lysis with glass beads and SDS followed by potassium acetate precipitation, polyethylene glycol precipitation, phenol extraction and isopropanol pr...

  17. Improved method for the feature extraction of laser scanner using genetic clustering

    Institute of Scientific and Technical Information of China (English)

    Yu Jinxia; Cai Zixing; Duan Zhuohua

    2008-01-01

    Feature extraction from range images provided by a ranging sensor is a key issue in pattern recognition. To automatically extract the environmental features sensed by a 2D laser scanner, an improved method based on genetic clustering, VGA-clustering, is presented. By integrating the spatial neighbouring information of the range data into the fuzzy clustering algorithm, a weighted fuzzy clustering algorithm (WFCA) is introduced instead of the standard clustering algorithm to realize feature extraction for the laser scanner. Since the number of clusters is unknown in advance, several validation index functions are used to estimate the validity of different clustering algorithms, and one validation index is selected as the fitness function of the genetic algorithm so as to determine the accurate number of clusters automatically. At the same time, an improved genetic algorithm, IVGA, based on VGA is proposed to overcome the local optima of the clustering algorithm; it is implemented by increasing the population diversity and improving the genetic operators of the elitist rule to enhance the local search capacity and to quicken convergence. Comparison with other algorithms demonstrates the effectiveness of the introduced algorithm.
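
    For orientation, the following is a minimal sketch of standard (unweighted) fuzzy c-means clustering on 2-D range points; the weighted variant with spatial neighbourhood information and the genetic selection of the cluster number described in the record are not reproduced, and all function names and parameters here are assumptions.

```python
import numpy as np

def fuzzy_c_means(points, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means on an (N, 2) array of range points.

    Returns (centers, memberships); memberships[i, k] is the degree to which
    point i belongs to cluster k.
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)

    for _ in range(max_iter):
        um = u ** m
        centers = (um.T @ points) / um.sum(axis=0)[:, None]
        # Squared distances from every point to every center.
        d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        d2 = np.maximum(d2, 1e-12)
        # Classical FCM membership update: u_ik proportional to d_ik^(-2/(m-1)).
        inv = d2 ** (-1.0 / (m - 1.0))
        new_u = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(new_u - u).max() < tol:
            u = new_u
            break
        u = new_u
    return centers, u

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pts = np.vstack([rng.normal([0, 0], 0.2, (50, 2)),
                     rng.normal([3, 3], 0.2, (50, 2))])
    centers, u = fuzzy_c_means(pts, n_clusters=2)
    print(np.round(centers, 2))
```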

  18. Method of Measuring Fixture Automatic Design and Assembly for Auto-Body Part

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    A method for the automatic assembly of 3-D measuring fixtures for auto-body parts is presented. Locating-constraint mapping techniques and assembly rule-based reasoning are applied. Algorithms for calculating the position and pose of the part model, the fixture configuration and the fixture elements in the virtual auto-body assembly space are given. Transformation of fixture elements from their own coordinate system spaces to the assembly space is realized with homogeneous transformation matrices. Based on the secondary development technique of Unigraphics (UG), the automated assembly is implemented with application program interface (API) functions. Finally, the automated assembly of a measuring fixture for a rear longeron is implemented as a case study.
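
    The coordinate mapping mentioned above relies on 4x4 homogeneous transformation matrices. The sketch below shows that general mechanism only; it does not use the UG/API calls of the record, and the rotation and translation values are purely illustrative assumptions.

```python
import numpy as np

def homogeneous_transform(rotation, translation):
    """Build a 4x4 homogeneous transformation matrix from a 3x3 rotation
    matrix and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def transform_points(T, points):
    """Apply the transform to an (N, 3) array of points."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])
    return (T @ homog.T).T[:, :3]

if __name__ == "__main__":
    # Example: rotate a fixture locator 90 degrees about the z-axis, then shift
    # it to a nominal position in the assembly space (values are illustrative).
    angle = np.pi / 2
    Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
    T = homogeneous_transform(Rz, translation=[120.0, 45.0, 10.0])
    locator_pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
    print(np.round(transform_points(T, locator_pts), 3))
```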

  19. The method of measurement system software automatic validation using business rules management system

    Science.gov (United States)

    Zawistowski, Piotr

    2015-09-01

    The method of measurement system software automatic validation using a business rules management system (BRMS) is discussed in this paper. The article contains a description of the new approach to measurement system execution validation, a description of the implementation of the system that supports the mentioned validation, and examples documenting the correctness of the approach. In the new approach, a BRMS is used for measurement system execution validation. Such systems have not previously been used for software execution validation, nor for measurement systems. The benefits of using them for the listed purposes are discussed as well.

  20. Comparison of landmark-based and automatic methods for cortical surface registration.

    Science.gov (United States)

    Pantazis, Dimitrios; Joshi, Anand; Jiang, Jintao; Shattuck, David W; Bernstein, Lynne E; Damasio, Hanna; Leahy, Richard M

    2010-02-01

    Group analysis of structure or function in cerebral cortex typically involves, as a first step, the alignment of cortices. A surface-based approach to this problem treats the cortex as a convoluted surface and coregisters across subjects so that cortical landmarks or features are aligned. This registration can be performed using curves representing sulcal fundi and gyral crowns to constrain the mapping. Alternatively, registration can be based on the alignment of curvature metrics computed over the entire cortical surface. The former approach typically involves some degree of user interaction in defining the sulcal and gyral landmarks while the latter methods can be completely automated. Here we introduce a cortical delineation protocol consisting of 26 consistent landmarks spanning the entire cortical surface. We then compare the performance of a landmark-based registration method that uses this protocol with that of two automatic methods implemented in the software packages FreeSurfer and BrainVoyager. We compare performance in terms of discrepancy maps between the different methods, the accuracy with which regions of interest are aligned, and the ability of the automated methods to correctly align standard cortical landmarks. Our results show similar performance for ROIs in the perisylvian region for the landmark-based method and FreeSurfer. However, the discrepancy maps showed larger variability between methods in occipital and frontal cortex and automated methods often produce misalignment of standard cortical landmarks. Consequently, selection of the registration approach should consider the importance of accurate sulcal alignment for the specific task for which coregistration is being performed. When automatic methods are used, the users should ensure that sulci in regions of interest in their studies are adequately aligned before proceeding with subsequent analysis. PMID:19796696

  1. Microscale extraction method for HPLC carotenoid analysis in vegetable matrices

    OpenAIRE

    Sidney Pacheco; Fernanda Marques Peixoto; Renata Galhardo Borguini; Luzimar da Silva de Mattos do Nascimento; Claudio Roberto Ribeiro Bobeda; Manuela Cristina Pessanha de Araújo Santiago; Ronoel Luiz de Oliveira Godoy

    2014-01-01

    In order to generate simple, efficient analytical methods that are also fast, clean, and economical, and are capable of producing reliable results for a large number of samples, a micro scale extraction method for analysis of carotenoids in vegetable matrices was developed. The efficiency of this adapted method was checked by comparing the results obtained from vegetable matrices, based on extraction equivalence, time required and reagents. Six matrices were used: tomato (Solanum lycopersicum...

  2. A CAD based automatic modeling method for primitive solid based Monte Carlo calculation geometry

    International Nuclear Information System (INIS)

    The Multi-Physics Coupling Analysis Modeling Program (MCAM), developed by the FDS Team, China, is an advanced modeling tool aimed at solving the modeling challenges of multi-physics coupling simulation. An automatic modeling method for SuperMC, the Super Monte Carlo Calculation Program for Nuclear and Radiation Processes, was recently developed and integrated in MCAM5.2. This method can convert bidirectionally between a CAD model and a SuperMC input file. When converting from a CAD model to a SuperMC model, the CAD model is decomposed into a set of convex solids, and the corresponding SuperMC convex basic solids are then generated and output. When converting from a SuperMC model to a CAD model, the basic primitive solids are created and the related operations are performed according to the SuperMC model. The method was benchmarked with the ITER benchmark model. The results showed that the method is correct and effective. (author)

  3. A semi-automatic method for developing an anthropomorphic numerical model of dielectric anatomy by MRI

    International Nuclear Information System (INIS)

    Complex permittivity values have a dominant role in the overall consideration of interaction between radiofrequency electromagnetic fields and living matter, and in related applications such as electromagnetic dosimetry. There are still some concerns about the accuracy of published data and about their variability due to the heterogeneous nature of biological tissues. The aim of this study is to provide an alternative semi-automatic method by which numerical dielectric human models for dosimetric studies can be obtained. Magnetic resonance imaging (MRI) tomography was used to acquire the images. A new technique was employed to correct nonuniformities in the images, and frequency-dependent transfer functions correlating image intensity with complex permittivity were used. The proposed method provides frequency-dependent models in which permittivity and conductivity vary with continuity, even within the same tissue, reflecting the intrinsic realistic spatial dispersion of such parameters. The human model is tested with an FDTD (finite difference time domain) algorithm at different frequencies; the results of layer-averaged and whole-body-averaged SAR (specific absorption rate) are compared with published work, and reasonable agreement has been found. Due to the short time needed to obtain a whole body model, this semi-automatic method may be suitable for efficient study of various conditions that can determine large differences in the SAR distribution, such as body shape, posture, fat-to-muscle ratio, height and weight.

  4. Automatic Single-Flux-Quantum (SFQ) Logic Synthesis Method for Top-Down Circuit Design

    International Nuclear Information System (INIS)

    Single-flux-quantum (SFQ) logic circuits provide faster operation with lower power consumption, using Josephson junctions as the switching devices. In the top-down flow of SFQ circuit design, we have already developed a place-and-route tool that covers back-end circuit design. In this paper, we present an automatic SFQ logic synthesis method that covers front-end circuit design. Logic synthesis is the process of generating a gate-level logic circuit from a functional specification written in a hardware description language. In our SFQ synthesis method, after generating an intermediate circuit with the help of a synthesis tool for semiconductor circuits, we convert it into a gate-level pipelined SFQ circuit. To do this, an automatic synthesis tool was implemented. To evaluate the effectiveness of the method and the tool, we synthesized arithmetic and logic units (ALUs). It took only two and a half minutes to synthesize a 64-bit-width ALU that consisted of about 18,000 gates.

  5. Linking attentional processes and conceptual problem solving: Visual cues facilitate the automaticity of extracting relevant information from diagrams

    Directory of Open Access Journals (Sweden)

    Amy eRouinfar

    2014-09-01

    Full Text Available This study investigated links between lower-level visual attention processes and higher-level problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants’ attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants’ verbal responses were used to determine their accuracy. The study produced two major findings. First, short duration visual cues can improve problem solving performance on a variety of insight physics problems, including transfer problems not sharing the surface features of the training problems, but instead sharing the underlying solution path. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers’ attention necessarily embodying the solution to the problem. Instead, the cueing effects were caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, these short duration visual cues when administered repeatedly over multiple training problems resulted in participants becoming more efficient at extracting the relevant information on the transfer problem, showing that such cues can improve the automaticity with which solvers extract relevant information from a problem. Both of these results converge on the conclusion that lower-order visual processes driven by attentional cues can influence higher-order cognitive processes.

  6. Extracting natural dyes from wool—an evaluation of extraction methods

    OpenAIRE

    Manhita, Ana; Ferreira, Teresa; Candeias, António; Barrocas Dias, Cristina

    2011-01-01

    The efficiency of eight different procedures used for the extraction of natural dyes was evaluated using contemporary wool samples dyed with cochineal, madder, woad, weld, brazilwood and logwood. Comparison was made based on the LC-DAD peak areas of the natural dye’s main components which had been extracted from the wool samples. Among the tested methods, an extraction procedure with Na2EDTA in water/DMF (1:1, v/v) proved to be the most suitable for the extraction of the studied dyes, ...

  7. An efficient method for DNA extraction from Cladosporioid fungi

    NARCIS (Netherlands)

    Moslem, M.A.; Bahkali, A.H.; Abd-Elsalam, K.A.; Wit, de P.J.G.M.

    2010-01-01

    We developed an efficient method for DNA extraction from Cladosporioid fungi, which are important fungal plant pathogens. The cell wall of Cladosporioid fungi is often melanized, which makes it difficult to extract DNA from their cells. In order to overcome this we grew these fungi for three days on

  8. An Improved Method for Extraction and Separation of Photosynthetic Pigments

    Science.gov (United States)

    Katayama, Nobuyasu; Kanaizuka, Yasuhiro; Sudarmi, Rini; Yokohama, Yasutsugu

    2003-01-01

    The method for extracting and separating hydrophobic photosynthetic pigments proposed by Katayama "et al." ("Japanese Journal of Phycology," 42, 71-77, 1994) has been improved to introduce it to student laboratories at the senior high school level. Silica gel powder was used for removing water from fresh materials prior to extracting pigments by a…

  9. A RAPID PCR-QUALITY DNA EXTRACTION METHOD IN FISH

    Institute of Scientific and Technical Information of China (English)

    LI Zhong; LIANG Hong-Wei; ZOU Gui-Wei

    2012-01-01

    PCR has been the generally preferred method for biological research in fish, and previous research has enabled us to extract and purify PCR-quality DNA templates in the laboratory [1-4]. A common problem among these procedures is the long time spent waiting for tissue digestion. The excessive time spent on PCR-quality DNA extraction restricts the efficiency of PCR assays, especially in large-scale PCR amplification, such as SSR-based genetic map construction [5,6], identification of germplasm resources [7,8] and evolutionary research [9,10]. In this study, a stable and rapid PCR-quality DNA extraction method was explored, using a modified alkaline lysis protocol. Extracting DNA for PCR takes only approximately 25 minutes. This stable and rapid DNA extraction method could save much laboratory time.

  10. [An automatic non-invasive method for the measurement of systolic, diastolic and mean blood pressure].

    Science.gov (United States)

    Morel, D; Suter, P

    1981-01-01

    A new automatic apparatus for the measurement of arterial pressure by a non-invasive technique was compared with direct intra-arterial measurement in 20 adult patients in a surgical intensive care unit. The apparatus works on the principle of oscillometry. Blood pressure is determined with a microprocessor by analysis of the amplitude of the oscillations produced by a cuff which is inflated and then deflated automatically. Mean arterial pressure corresponds to the maximum amplitude; systolic and diastolic pressures are deduced by extrapolation to zero of the amplitudes on either side of the maximum reading. Mean arterial pressure (AP) proved to be very reliable within the limits studied, 8.0-14.7 kPa (60-110 mmHg), with a difference between mean direct AP and indirect AP of 0.09 +/- 0.9 kPa SD (0.71 +/- 7 mmHg) and a coefficient of linear correlation between the two methods of r = 0.82. The non-invasive technique determined systolic arterial pressure (sAP) less reliably than AP when compared with the invasive technique, with a tendency to flatten extreme values; the correlation coefficient here was 0.68. Finally, diastolic arterial pressure (dAP) showed a better degree of agreement, though with a difference between mean indirect AP and mean direct AP of 1.0 +/- 0.8 kPa (7.6 +/- 6.0 mmHg). These results indicate a good degree of agreement between the two methods for measurements of mean arterial pressure, clinically the most important. Measurements of diastolic and, above all, systolic pressure seemed to be less in agreement. This difference could be due to an error in the determination by the automatic apparatus tested, or to the peripheral site (radial artery) of the intra-arterial catheter used, itself distorting the humeral arterial pressure. PMID:6113805
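
    The oscillometric principle described above (MAP at the maximum oscillation amplitude, systolic and diastolic derived from the amplitudes on either side) is often approximated in practice by a fixed-ratio rule. The sketch below illustrates that generic rule only; the ratios, names and synthetic envelope are assumptions and do not describe the tested device's algorithm.

```python
import numpy as np

def oscillometric_estimate(cuff_pressure, osc_amplitude, sys_ratio=0.55, dia_ratio=0.85):
    """Estimate systolic, mean and diastolic pressure from an oscillometric
    envelope sampled during cuff deflation.

    `cuff_pressure` : cuff pressure (mmHg) at each envelope sample, decreasing
    `osc_amplitude` : oscillation amplitude at the same samples
    The fixed ratios are illustrative textbook-style values, not the device's.
    """
    i_max = int(np.argmax(osc_amplitude))
    mean_ap = cuff_pressure[i_max]             # MAP at maximum oscillation amplitude

    # Systolic: first sample above MAP (higher cuff pressure) where the
    # amplitude reaches sys_ratio of the maximum.
    target = sys_ratio * osc_amplitude[i_max]
    above = np.flatnonzero(osc_amplitude[:i_max + 1] >= target)
    systolic = cuff_pressure[above[0]]

    # Diastolic: last sample below MAP (lower cuff pressure) where the
    # amplitude is still at least dia_ratio of the maximum.
    target = dia_ratio * osc_amplitude[i_max]
    below = np.flatnonzero(osc_amplitude[i_max:] >= target) + i_max
    diastolic = cuff_pressure[below[-1]]

    return systolic, mean_ap, diastolic

if __name__ == "__main__":
    p = np.linspace(180, 40, 141)                        # deflation from 180 to 40 mmHg
    env = np.exp(-((p - 95.0) ** 2) / (2 * 25.0 ** 2))   # synthetic bell-shaped envelope
    print(oscillometric_estimate(p, env))
```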

  11. Automatic Sleep Staging using Multi-dimensional Feature Extraction and Multi-kernel Fuzzy Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Yanjun Zhang

    2014-01-01

    Full Text Available This paper employed clinical Polysomnographic (PSG) data, mainly including all-night Electroencephalogram (EEG), Electrooculogram (EOG) and Electromyogram (EMG) signals of subjects, and adopted the American Academy of Sleep Medicine (AASM) clinical staging manual as the standard to realize automatic sleep staging. The authors extracted eighteen different features of EEG, EOG and EMG in the time and frequency domains to construct the feature vectors, according to the existing literature as well as clinical experience. By adopting self-learning on sleep samples, the linear combination weights and the parameters of the multiple kernels of the fuzzy support vector machine (FSVM) were learned and the multi-kernel FSVM (MK-FSVM) was constructed. The overall agreement between the experts' scores and the results presented was 82.53%. Compared with previous results, the accuracy for N1 was improved to some extent while the accuracies of the other stages were comparable, which well reflected the sleep structure. The staging algorithm proposed in this paper is transparent, and worth further investigation.

  12. Robust methods for automatic image-to-world registration in cone-beam CT interventional guidance

    International Nuclear Information System (INIS)

    Purpose: Real-time surgical navigation relies on accurate image-to-world registration to align the coordinate systems of the image and patient. Conventional manual registration can present a workflow bottleneck and is prone to manual error and intraoperator variability. This work reports alternative means of automatic image-to-world registration, each method involving an automatic registration marker (ARM) used in conjunction with C-arm cone-beam CT (CBCT). The first involves a Known-Model registration method in which the ARM is a predefined tool, and the second is a Free-Form method in which the ARM is freely configurable. Methods: Studies were performed using a prototype C-arm for CBCT and a surgical tracking system. A simple ARM was designed with markers comprising a tungsten sphere within infrared reflectors to permit detection of markers in both x-ray projections and by an infrared tracker. The Known-Model method exercised a predefined specification of the ARM in combination with 3D-2D registration to estimate the transformation that yields the optimal match between forward projection of the ARM and the measured projection images. The Free-Form method localizes markers individually in projection data by a robust Hough transform approach extended from previous work, backprojected to 3D image coordinates based on C-arm geometric calibration. Image-domain point sets were transformed to world coordinates by rigid-body point-based registration. The robustness and registration accuracy of each method was tested in comparison to manual registration across a range of body sites (head, thorax, and abdomen) of interest in CBCT-guided surgery, including cases with interventional tools in the radiographic scene. Results: The automatic methods exhibited similar target registration error (TRE) and were comparable or superior to manual registration for placement of the ARM within ∼200 mm of C-arm isocenter. Marker localization in projection data was robust across all
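
    The Free-Form method above ends with a rigid-body point-based registration of image-domain marker points to world (tracker) coordinates. A standard closed-form least-squares solution for that step is the SVD-based fit sketched below; this is only an illustration of the generic technique, and the function name and example data are assumptions rather than the paper's implementation.

```python
import numpy as np

def rigid_point_registration(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping points
    `src` onto `dst`, both (N, 3), via the SVD-based closed form.
    Returns (R, t) such that dst ~= src @ R.T + t.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    markers_img = rng.uniform(-50, 50, (6, 3))           # marker centroids, image frame
    angle = np.deg2rad(30)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    markers_world = markers_img @ R_true.T + np.array([10.0, -5.0, 2.0])
    R, t = rigid_point_registration(markers_img, markers_world)
    print(np.allclose(R, R_true), np.round(t, 3))
```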

  13. A comparison of DNA extraction methods using Petunia hybrida tissues.

    Science.gov (United States)

    Tamari, Farshad; Hinkley, Craig S; Ramprashad, Naderia

    2013-09-01

    Extraction of DNA from plant tissue is often problematic, as many plants contain high levels of secondary metabolites that can interfere with downstream applications, such as the PCR. Removal of these secondary metabolites usually requires further purification of the DNA using organic solvents or other toxic substances. In this study, we have compared two methods of DNA purification: the cetyltrimethylammonium bromide (CTAB) method that uses the ionic detergent hexadecyltrimethylammonium bromide and chloroform-isoamyl alcohol and the Edwards method that uses the anionic detergent SDS and isopropyl alcohol. Our results show that the Edwards method works better than the CTAB method for extracting DNA from tissues of Petunia hybrida. For six of the eight tissues, the Edwards method yielded more DNA than the CTAB method. In four of the tissues, this difference was statistically significant, and the Edwards method yielded 27-80% more DNA than the CTAB method. Among the different tissues tested, we found that buds, 4 days before anthesis, had the highest DNA concentrations and that buds and reproductive tissue, in general, yielded higher DNA concentrations than other tissues. In addition, DNA extracted using the Edwards method was more consistently PCR-amplified than that of CTAB-extracted DNA. Based on these results, we recommend using the Edwards method to extract DNA from plant tissues and to use buds and reproductive structures for highest DNA yields. PMID:23997658

  14. A New Automatic Method to Identify Galaxy Mergers I. Description and Application to the STAGES Survey

    CERN Document Server

    Hoyos, Carlos; Gray, Meghan E; Maltby, David T; Bell, Eric F; Barazza, Fabio D; Boehm, Asmus; Haussler, Boris; Jahnke, Knud; Jogee, Sharda; Lane, Kyle P; McIntosh, Daniel H; Wolf, Christian

    2011-01-01

    We present an automatic method to identify galaxy mergers using the morphological information contained in the residual images of galaxies after the subtraction of a Sersic model. The removal of the bulk signal from the host galaxy light is done with the aim of detecting the fainter minor mergers. The specific morphological parameters that are used in the merger diagnostic suggested here are the Residual Flux Fraction and the asymmetry of the residuals. The new diagnostic has been calibrated and optimized so that the resulting merger sample is very complete. However, the contamination by non-mergers is also high. If the same optimization method is adopted for combinations of other structural parameters such as the CAS system, the merger indicator we introduce yields merger samples of equal or higher statistical quality than the samples obtained through the use of other structural parameters. We explore the ability of the method presented here to select minor mergers by identifying a sample of visually classif...

  15. Automatic inspection of electron beam weld for stainless steel using phased array method

    International Nuclear Information System (INIS)

    The CEA non-destructive testing laboratory at Valduc implements various inspection techniques (radiography, helium tracer-gas leak testing, ultrasonics...) to check the quality of welds and the soundness of materials. To fully control weld manufacture and to detect any anomaly during the manufacturing process (lack of penetration, joining defects, porosities...), it has developed, in partnership with the company METALSCAN, a phased array ultrasonic imaging technique designed for the complete and automatic inspection of homogeneous stainless steel welds produced by electron beam. To achieve this goal, an acoustic simulation study with the CIVA software was undertaken in order to determine the optimal characteristics of the phased array probes (their number and their positions). The developed method makes it possible, on the one hand, to locate lack-of-fusion defects equivalent to flat-bottomed holes 0.5 mm in diameter and, on the other hand, to detect lack of penetration of 0.1 mm. In order to ensure perfect reproducibility of the inspections, a mechanical system that rotates the part allows the whole weld to be inspected. The results are then analyzed automatically using application software ensuring the traceability of the inspections. The method was first validated on test parts, then brought into service after comparing the results obtained on real defects with other techniques (radiography and metallographic characterization). (authors)

  16. Automatic Extraction of Appendix from Ultrasonography with Self-Organizing Map and Shape-Brightness Pattern Learning

    Science.gov (United States)

    Kim, Kwang Baek; Song, Doo Heon; Park, Hyun Jun

    2016-01-01

    Accurate diagnosis of acute appendicitis is a difficult problem in practice especially when the patient is too young or women in pregnancy. In this paper, we propose a fully automatic appendix extractor from ultrasonography by applying a series of image processing algorithms and an unsupervised neural learning algorithm, self-organizing map. From the suggestions of clinical practitioners, we define four shape patterns of appendix and self-organizing map learns those patterns in pixel clustering phase. In the experiment designed to test the performance for those four frequently found shape patterns, our method is successful in 3 types (1 failure out of 45 cases) but leaves a question for one shape pattern (80% correct).

  17. Comparison of four methods of DNA extraction from rice

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Polyphenols, terpenes, and resins make it difficult to obtain high-quality genomic DNA from rice. Four extraction methods were compared in our study, and CTAB precipitation was the most practical one.

  18. Towards Automatic Extraction of Social Networks of Organizations in PubMed Abstracts

    CERN Document Server

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2010-01-01

    Social Network Analysis (SNA) of organizations can attract great interest from government agencies and scientists for its ability to boost translational research and accelerate the process of converting research to care. For SNA of a particular disease area, we need to identify the key research groups in that area by mining the affiliation information from PubMed. This not only involves recognizing the organization names in the affiliation string, but also resolving ambiguities to identify the article with a unique organization. We present here a process of normalization that involves clustering based on local sequence alignment metrics and local learning based on finding connected components. We demonstrate the application of the method by analyzing organizations involved in angiogenesis treatment, and demonstrating the utility of the results for researchers in the pharmaceutical and biotechnology industries or national funding agencies.
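
    The connected-components idea above can be sketched generically: link affiliation strings whose pairwise similarity exceeds a threshold and take each connected component as one organization. The sketch below uses difflib's SequenceMatcher as a simple stand-in for the local sequence alignment metric of the record; the threshold, function names and example strings are assumptions.

```python
from difflib import SequenceMatcher

def normalize_organizations(names, threshold=0.85):
    """Group affiliation strings into clusters: build a similarity graph and
    return its connected components (each component = one organization).
    """
    n = len(names)
    parent = list(range(n))

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    for i in range(n):
        for j in range(i + 1, n):
            sim = SequenceMatcher(None, names[i].lower(), names[j].lower()).ratio()
            if sim >= threshold:
                union(i, j)

    clusters = {}
    for i, name in enumerate(names):
        clusters.setdefault(find(i), []).append(name)
    return list(clusters.values())

if __name__ == "__main__":
    affiliations = [
        "Dept. of Biomedical Informatics, Arizona State University",
        "Department of Biomedical Informatics, Arizona State University",
        "National Cancer Institute, Bethesda",
    ]
    for group in normalize_organizations(affiliations):
        print(group)
```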

  19. Scandium separation by the method of solvent extraction and extraction chromatography

    International Nuclear Information System (INIS)

    The conditions for scandium extraction from ferruginous technological solutions by tributyl phosphate (TBP) have been studied. The degree of purification of scandium from iron during extraction from 8 M HCl with a 50% TBP solution in kerosene, followed by re-extraction with 4 M HCl, is approximately 80%. To attain deeper purification of scandium from iron, the method of extraction chromatography has been used, which enables scandium and iron to be separated at quantity ratios of up to 1:1000. This separation method has been employed for the analysis of technological solutions. The relative standard deviation of the analysis results did not exceed 0.08.

  20. A single scale retinex based method for palm vein extraction

    OpenAIRE

    Wang, Chongyang; Peng, Ming; Xu, Lingfeng; Chen, Tong

    2016-01-01

    Palm vein recognition is a novel biometric identification technology. However, how to obtain a good vein extraction result from a raw palm image is still a challenging problem, especially when the raw data suffer from asymmetric illumination. This paper proposes a method based on the single scale Retinex algorithm to extract the palm vein image when strong shadows are present due to asymmetric illumination and the uneven geometry of the palm. We test our method on a multispectral palm image. T...
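
    Single scale Retinex is commonly computed as the log of the image minus the log of a Gaussian-smoothed version of it, which suppresses the slowly varying illumination component. The sketch below shows that generic computation only; the sigma, the normalization and any vein-specific post-processing are assumptions, not the paper's exact pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image, sigma=30.0):
    """Single-scale Retinex: log(image) - log(Gaussian-smoothed image).

    `image` is a 2-D array (e.g. a near-infrared palm image); the result is
    rescaled to [0, 1] for display. Sigma is an illustrative value only.
    """
    img = image.astype(np.float64) + 1.0            # avoid log(0)
    illumination = gaussian_filter(img, sigma=sigma)
    retinex = np.log(img) - np.log(illumination)
    retinex -= retinex.min()
    if retinex.max() > 0:
        retinex /= retinex.max()
    return retinex

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic palm image: smooth illumination gradient plus a dark "vein" line.
    x = np.tile(np.arange(128, dtype=float), (128, 1))
    img = 180 + 60 * (x / 127.0)                    # asymmetric illumination
    img[60:64, :] -= 50                             # a horizontal vein
    img += rng.normal(0, 2, img.shape)
    out = single_scale_retinex(img)
    print(out.shape, round(float(out.mean()), 3))
```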

  1. Determination of vanadium in ferronickel by extraction-polarography method

    International Nuclear Information System (INIS)

    The polarographic behaviour of dibromoxyquinoline and its complex with V(V) on a graphite anode in toluene extracts has been studied. With a 0.05 M LiCl solution in a toluene-ethyl alcohol (1:1) mixture as the supporting electrolyte, the complex of V(V) with dibromoxyquinoline gives an anodic oxidation wave with a half-wave potential of 0.6 V. The limiting current is directly proportional to the vanadium concentration in the extract. A technique for determining vanadium in ferronickel by the extraction-polarography method has been developed. The determination limit for vanadium by the given method is 0.06 mg in 25 ml.

  2. Forward gated-diode method for parameter extraction of MOSFETs*

    Institute of Scientific and Technical Information of China (English)

    Zhang Chenfei; Ma Chenyue; Guo Xinjie; Zhang Xiufang; He Jin; Wang Guozeng; Yang Zhang; Liu Zhiwei

    2011-01-01

    The forward gated-diode method is used to extract the dielectric oxide thickness and body doping concentration of MOSFETs, especially when both variables are previously unknown. First, the dielectric oxide thickness and the body doping concentration as functions of the forward gated-diode peak recombination-generation (R-G) current are derived from the device physics. Then the peak R-G current characteristics of MOSFETs with different dielectric oxide thicknesses and body doping concentrations are simulated with ISE-Dessis for parameter extraction. The results from the simulation data demonstrate excellent agreement with those extracted using the forward gated-diode method.

  3. Extracting natural dyes from wool--an evaluation of extraction methods.

    Science.gov (United States)

    Manhita, Ana; Ferreira, Teresa; Candeias, António; Dias, Cristina Barrocas

    2011-05-01

    The efficiency of eight different procedures used for the extraction of natural dyes was evaluated using contemporary wool samples dyed with cochineal, madder, woad, weld, brazilwood and logwood. Comparison was made based on the LC-DAD peak areas of the natural dye's main components which had been extracted from the wool samples. Among the tested methods, an extraction procedure with Na(2)EDTA in water/DMF (1:1, v/v) proved to be the most suitable for the extraction of the studied dyes, which presented a wide range of chemical structures. The identification of the natural dyes used in the making of an eighteenth century Arraiolos carpet was possible using the Na(2)EDTA/DMF extraction of the wool embroidery samples and an LC-DAD-MS methodology. The effectiveness of the Na(2)EDTA/DMF extraction method was particularly observed in the extraction of weld dye components. Nine flavone derivatives previously identified in weld extracts could be identified in a single historical sample, confirming the use of this natural dye in the making of Arraiolos carpets. Indigo and brazilwood were also identified in the samples, and despite the fact that these natural dyes were referred in the historical recipes of Arraiolos dyeing, it is the first time that the use of brazilwood is confirmed. Mordant analysis by ICP-MS identified the widespread use of alum in the dyeing process, but in some samples with darker hues, high amounts of iron were found instead. PMID:21416400

  4. Arsenic extraction and speciation in plants: Method comparison and development.

    Science.gov (United States)

    Zhao, Di; Li, Hong-Bo; Xu, Jia-Yi; Luo, Jun; Ma, Lena Qiying

    2015-08-01

    We compared four methods to extract arsenic (As) for As speciation from three plants containing different As levels, with the goal of developing a more efficient method: the As-hyperaccumulator Pteris vittata at 459-7714 mg kg(-1), rice seedlings at 53.4-574 mg kg(-1), and tobacco leaves at 0.32-0.35 mg kg(-1). The four methods included heating with dilute HNO3, and sonication with phosphate buffered solution, methanol/water, and ethanol/water, with As being analyzed using high-performance liquid chromatography coupled with inductively-coupled plasma mass spectrometry (HPLC-ICP-MS). Among the four methods, the ethanol/water method produced the most satisfactory extraction efficiency (~80% for the roots and >85% for the fronds) without changing As species, based on P. vittata. The lower extraction efficiency from P. vittata roots was attributed to their dominance by arsenate (82%), while arsenite dominated in the fronds (89%). The ethanol/water method used a sample:solution ratio of 1:200 (0.05 g:10 mL) with 50% ethanol and 2 h sonication. Based on different extraction times (0.5-2 h), ethanol concentrations (25-100%) and sample:solution ratios (1:50-1:300), the optimized ethanol/water method used less ethanol (25%) and time (0.5 h for the fronds and 2 h for the roots). Satisfactory extraction was also obtained for tobacco leaf (78-92%) and rice seedlings (~70%) using the optimized method, which was better than the other three methods. Given the satisfactory extraction efficiency with little change in As species during extraction from three plants containing different As levels, the optimized method has the potential to be used for As speciation in other plants. PMID:25863504

  5. Noncontact optical imaging in mice with full angular coverage and automatic surface extraction

    Science.gov (United States)

    Meyer, Heiko; Garofalakis, Anikitos; Zacharakis, Giannis; Psycharakis, Stylianos; Mamalaki, Clio; Kioussis, Dimitris; Economou, Eleftherios N.; Ntziachristos, Vasilis; Ripoll, Jorge

    2007-06-01

    During the past decade, optical imaging combined with tomographic approaches has proved its potential in offering quantitative three-dimensional spatial maps of chromophore or fluorophore concentration in vivo. Due to its direct application in biology and biomedicine, diffuse optical tomography (DOT) and its fluorescence counterpart, fluorescence molecular tomography (FMT), have benefited from an increase in devoted research and new experimental and theoretical developments, giving rise to a new imaging modality. The most recent advances in FMT and DOT are based on the capability of collecting large data sets by using CCDs as detectors, and on the ability to include multiple projections through recently developed noncontact approaches. For these to be implemented, we have developed an imaging setup that enables three-dimensional imaging of arbitrary shapes in fluorescence or absorption mode that is appropriate for small animal imaging. This is achieved by implementing a noncontact approach both for sources and detectors and coregistering surface geometry measurements using the same CCD camera. A thresholded shadowgrammetry approach is applied to the geometry measurements to retrieve the surface mesh. We present the evaluation of the system and method in recovering three-dimensional surfaces from phantom data and live mice. The approach is used to map the measured in vivo fluorescence data onto the tissue surface by making use of the free-space propagation equations, as well as to reconstruct fluorescence concentrations inside highly scattering tissuelike phantom samples. Finally, the potential use of this setup for in vivo small animal imaging and its impact on biomedical research is discussed.

  6. An automatic registration method for frameless stereotaxy, image guided surgery, and enhanced reality visualization

    International Nuclear Information System (INIS)

    There is a need for frameless guidance systems to help surgeons plan the exact location for incisions, to define the margins of tumors, and to precisely identify locations of neighboring critical structures. The authors have developed an automatic technique for registering clinical data, such as segmented magnetic resonance imaging (MRI) or computed tomography (CT) reconstructions, with any view of the patient on the operating table. They demonstrate on the specific example of neurosurgery. The method enables a visual mix of live video of the patient and the segmented three-dimensional (3-D) MRI or CT model. This supports enhanced reality techniques for planning and guiding neurosurgical procedures and allows them to interactively view extracranial or intracranial structures nonintrusively. Extensions of the method include image guided biopsies, focused therapeutic procedures, and clinical studies involving change detection over time sequences of images

  7. Prenatal express-diagnosis by the method of QF-PCR and automatic microelectroforesis with microarrays

    Institute of Scientific and Technical Information of China (English)

    Zaporozhan VN; Bubnov VV; Marichereda VG; Verbitskaya TG; Belous OB

    2011-01-01

    Modern molecular-genetic methods are being actively implemented in medical practice. They improve diagnostic accuracy, help to prognosticate the course of oncological diseases, optimize the results of prenatal diagnosis, decrease mothers' anxiety and improve the clinical outcomes of pregnancy. Various traditional approaches are used, e.g. karyotyping and FISH, as well as more contemporary ones: real-time PCR, comparative genomic hybridization (CGH) or chromosomal microarray analysis (CMA), and quantitative fluorescent PCR (QF-PCR). For express diagnosis of trisomy of chromosomes 21 and 18, QF-PCR technology was used with subsequent quantitative analysis by automatic capillary microelectrophoresis on Experion DNA 1K microarrays. It was determined that the diagnostic accuracy of QF-PCR was comparable with that of existing routine methods, but it had some advantages, including rapidity, and could be recommended for implementation in practical medicine.

  8. An adaptive spatial clustering method for automatic brain MR image segmentation

    Institute of Scientific and Technical Information of China (English)

    Jingdan Zhang; Daoqing Dai

    2009-01-01

    In this paper, an adaptive spatial clustering method is presented for automatic brain MR image segmentation, based on a competitive learning algorithm, the self-organizing map (SOM). We use a pattern recognition approach in terms of feature generation and classifier design. Firstly, a multi-dimensional feature vector is constructed using local spatial information. Then, an adaptive spatial growing hierarchical SOM (ASGHSOM) is proposed as the classifier; it is an extension of the SOM, fusing multi-scale segmentation with the competitive learning clustering algorithm to overcome the problem of overlapping grey-scale intensities in boundary regions. Furthermore, an adaptive spatial distance is integrated with the ASGHSOM, in which local spatial information is considered in the clustering process to reduce the noise effect and the classification ambiguity. Our proposed method is validated by extensive experiments using both simulated and real MR data with varying noise levels, and is compared with state-of-the-art algorithms.

  9. Effect of extraction method and orientin content on radio-protective effect of tulsi extracts

    International Nuclear Information System (INIS)

    Extracts of tulsi leaves (Ocimum sanctum) have been reported for their radioprotective efficacy. In our initial studies we observed significant variation in the survival of irradiated mice with different batches of tulsi extract, and therefore we employed different extraction methods on leaves collected during various seasons from different localities to study any variation in the radioprotective efficacy. Orientin, a component of tulsi extract, was considered as a marker. A mouse whole-body survival study (at 10 Gy lethal whole-body irradiation) and a day-11 endo-CFU-s assay (at 5 Gy WBI) were performed employing 3 treatment schedules: 50 mg/kg or 25 mg/kg b.w. (single injection, 30 min before irradiation), and 10 mg/kg b.w. (one injection per day for 5 days, the last injection being 30 min before irradiation). A single dose of 25 mg/kg b.w. (both aqueous and alcoholic) did not provide any significant survival benefit. The orientin concentrations in the extracts tested varied from 3.3 to 9.91 mg/g extract, as determined by an HPLC method. With a single administration (i.p.) of 50 mg/kg, the aqueous extract from leaves of the monsoon season had an orientin content of 9.91 mg/g extract and gave a survival of 60% with a CFU-s count of 37, while the extract of summer leaves had an orientin content of 4.15 mg/g extract and gave a survival of 50% with a CFU-s count of 11.6. At the same dose (50 mg/kg), the aqueous extract from the winter season had an orientin content of 3.30 mg/g extract and gave 25% survival with a CFU-s count of 19, while the ethanolic extract had an orientin content of 7.70 mg/g extract and gave a survival of 50% with a CFU-s count of 13. These observations suggest that climatic factors, orientin content and the doses of administration are important factors regulating the radioprotection afforded by different extracts of tulsi. (author)

  10. A Circular Statistical Method for Extracting Rotation Measures

    Indian Academy of Sciences (India)

    S. Sarala; Pankaj Jain

    2002-03-01

    We propose a new method for the extraction of rotation measures from spectral polarization data. The method is based on maximum likelihood analysis and takes into account the circular nature of the polarization data. The method is unbiased and statistically more efficient than the standard χ² procedure.
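
    As a hedged illustration of the general approach (not the authors' exact estimator), the sketch below fits a rotation measure by maximising a von Mises-style likelihood in which position angles are compared modulo π, via a brute-force grid search; the grids, noise level and concentration parameter are arbitrary choices.

        import numpy as np

        def rm_log_likelihood(rm, chi0, lam2, chi_obs, kappa=10.0):
            # Von Mises-style log-likelihood for position angles defined modulo pi;
            # the factor of 2 maps the pi-ambiguity onto a full circle.
            model = chi0 + rm * lam2
            return np.sum(kappa * np.cos(2.0 * (chi_obs - model)))

        def fit_rm(lam2, chi_obs, rm_grid, chi0_grid):
            # Brute-force grid search over (RM, chi0); returns the maximising pair.
            best = (-np.inf, None, None)
            for rm in rm_grid:
                for chi0 in chi0_grid:
                    ll = rm_log_likelihood(rm, chi0, lam2, chi_obs)
                    if ll > best[0]:
                        best = (ll, rm, chi0)
            return best[1], best[2]

        # toy data: simulate angles with RM = 40 rad/m^2 and recover it
        rng = np.random.default_rng(1)
        lam2 = np.linspace(0.01, 0.05, 20)                 # wavelength squared [m^2]
        chi = (0.3 + 40.0 * lam2 + rng.normal(0, 0.05, lam2.size)) % np.pi
        rm_hat, chi0_hat = fit_rm(lam2, chi,
                                  np.linspace(-200, 200, 801),
                                  np.linspace(0, np.pi, 181))
        print("estimated RM:", rm_hat, "chi0:", chi0_hat)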

  11. Automatic Method for Controlling the Iodine Adsorption Number in Carbon Black Oil Furnaces

    Directory of Open Access Journals (Sweden)

    Zečević, N.

    2008-12-01

    Full Text Available There are numerous inlet process factors in carbon black oil furnaces that must be continuously and automatically adjusted to ensure stable quality of the final product. The six most important inlet process factors in carbon black oil furnaces are: (1) volume flow of process air for combustion; (2) temperature of process air for combustion; (3) volume flow of natural gas supplying the heat necessary for the thermal conversion of the hydrocarbon oil feedstock into oil-furnace carbon black; (4) mass flow rate of the hydrocarbon oil feedstock; (5) type and quantity of additive used to adjust the structure of the oil-furnace carbon black; (6) quantity and position of the quench water used to cool the oil-furnace carbon black reaction. The adsorption capacity of oil-furnace carbon black is controlled via the mass flow rate of the hydrocarbon feedstock, which is the most important inlet process factor. In industrial practice, the adsorption capacity of oil-furnace carbon black is determined by laboratory analysis of the iodine adsorption number. A continuous and automatic method for controlling the iodine adsorption number in carbon black oil furnaces is presented, with the aim of achieving the most efficient possible control of adsorption capacity. The proposed method reveals the correlation between the qualitative and quantitative composition of the process tail gases in oil-furnace carbon black production and the ratio of combustion air to hydrocarbon feedstock. It is shown that the ratio between combustion air and hydrocarbon oil feedstock depends on the adsorption capacity, summarized by the iodine adsorption number, with regard to the BMCI index of the hydrocarbon oil feedstock. The mentioned correlation can be seen in figures 1 to 4. Of the whole composition of the process tail gases, the best correlation for continuous and automatic control of the iodine adsorption number is shown by the volume fraction of methane. The volume fraction of methane in the

  12. An automatic seismic signal detection method based on fourth-order statistics and applications

    Institute of Scientific and Technical Information of China (English)

    Liu Xi-Qiang; Cai Yin; Zhao Rui; Zhao Yin-Gang; Qu Bao-An; Feng Zhi-Jun; Li Hong

    2014-01-01

    Real-time, automatic, and accurate determination of seismic signals is critical for rapid earthquake reporting and early warning. In this study, we present a correction trigger function (CTF) for automatically detecting regional seismic events and a fourth-order statistics algorithm with the Akaike information criterion (AIC) for determining the direct wave phase, based on the differences, or changes, in energy, frequency, and amplitude of the direct P- or S-wave signal and noise. Simulations suggest that the proposed fourth-order statistics result in high resolution even for weak signals and noise variations with different amplitude, frequency, and polarization characteristics. To improve the precision of establishing the S-wave onset, first a specific segment of the P-wave seismogram is selected and the polarization characteristics of the data are obtained. Second, the S-wave seismograms that contain the specific segment of P-wave seismograms are analyzed by S-wave polarization filtering. Finally, the S-wave phase onset times are estimated. The proposed algorithm was used to analyze regional earthquake data from the Shandong Seismic Network. The results suggest that, compared with conventional methods, the proposed algorithm greatly decreased false and missed earthquake triggers, and improved the detection precision of direct P- and S-wave phases.
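
    A minimal sketch of the two ingredients named in the abstract — a fourth-order statistic (running kurtosis) as a characteristic function and the classical AIC change-point picker — is given below. It is not the paper's correction trigger function or its S-wave polarization filtering; the synthetic trace and window sizes are illustrative.

        import numpy as np

        def running_kurtosis(x, win=200):
            # Fourth-order statistic in a sliding window; an impulsive onset
            # produces a sharp increase in the kurtosis trace.
            out = np.zeros(len(x))
            for i in range(win, len(x)):
                seg = x[i - win:i]
                s2 = seg.var() + 1e-12
                out[i] = np.mean((seg - seg.mean()) ** 4) / s2 ** 2
            return out

        def aic_pick(x):
            # Classical AIC picker: AIC(k) = k*log(var(x[:k])) + (N-k)*log(var(x[k:]));
            # the minimum marks the most likely change point (phase onset).
            n = len(x)
            aic = np.full(n, np.inf)
            for k in range(10, n - 10):
                aic[k] = (k * np.log(x[:k].var() + 1e-12)
                          + (n - k) * np.log(x[k:].var() + 1e-12))
            return int(np.argmin(aic))

        # toy trace: noise followed by a stronger, decaying arrival near sample 2000
        rng = np.random.default_rng(0)
        trace = np.concatenate([rng.normal(0, 1, 2000),
                                rng.normal(0, 5, 1000) * np.exp(-np.arange(1000) / 400)])
        rough = int(np.argmax(running_kurtosis(trace)))    # coarse trigger
        lo, hi = max(0, rough - 500), min(len(trace), rough + 500)
        onset = lo + aic_pick(trace[lo:hi])                # refined onset
        print("rough trigger:", rough, "picked onset:", onset)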

  13. Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry

    Science.gov (United States)

    Meier, Raphael; Knecht, Urspeter; Loosli, Tina; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2016-01-01

    Information about the size of a tumor and its temporal evolution is needed for diagnosis as well as treatment of brain tumor patients. The aim of the study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal Magnetic Resonance (MR) Imaging data of 14 patients with newly diagnosed glioblastoma encompassing 64 MR acquisitions, ranging from preoperative up to 12 month follow-up images, was analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83–0.96, p < 0.001) were observed between volumetric estimates of BraTumIA and of each of the human raters for the contrast-enhancing (CET) and non-enhancing T2-hyperintense tumor compartments (NCE-T2). A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the human raters. In summary, BraTumIA generated volumetric trend curves of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute manual volumetric follow-up of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments. PMID:27001047

  14. A semi-automatic computer-aided method for surgical template design

    Science.gov (United States)

    Chen, Xiaojun; Xu, Lu; Yang, Yue; Egger, Jan

    2016-02-01

    This paper presents a generalized integrated framework for semi-automatic surgical template design. Several algorithms were implemented, including mesh segmentation, offset surface generation, collision detection and ruled surface generation, and dedicated software named TemDesigner was developed. With a simple user interface, a customized template can be semi-automatically designed according to the preoperative plan. Firstly, mesh segmentation with a signed scalar of vertex is utilized to partition the inner surface from the input surface mesh based on the indicated point loop. Then, the offset surface of the inner surface is obtained through contouring the distance field of the inner surface, and segmented to generate the outer surface. A ruled surface is employed to connect the inner and outer surfaces. Finally, drilling tubes are generated according to the preoperative plan through collision detection and merging. It has been applied to template design for various kinds of surgeries, including oral implantology, cervical pedicle screw insertion, iliosacral screw insertion and osteotomy, demonstrating the efficiency, functionality and generality of our method.

  15. Clinical Evaluation of a Fully-automatic Segmentation Method for Longitudinal Brain Tumor Volumetry

    Science.gov (United States)

    Meier, Raphael; Knecht, Urspeter; Loosli, Tina; Bauer, Stefan; Slotboom, Johannes; Wiest, Roland; Reyes, Mauricio

    2016-03-01

    Information about the size of a tumor and its temporal evolution is needed for diagnosis as well as treatment of brain tumor patients. The aim of the study was to investigate the potential of a fully-automatic segmentation method, called BraTumIA, for longitudinal brain tumor volumetry by comparing the automatically estimated volumes with ground truth data acquired via manual segmentation. Longitudinal Magnetic Resonance (MR) Imaging data of 14 patients with newly diagnosed glioblastoma encompassing 64 MR acquisitions, ranging from preoperative up to 12 month follow-up images, was analysed. Manual segmentation was performed by two human raters. Strong correlations (R = 0.83–0.96, p < 0.001) were observed between volumetric estimates of BraTumIA and of each of the human raters for the contrast-enhancing (CET) and non-enhancing T2-hyperintense tumor compartments (NCE-T2). A quantitative analysis of the inter-rater disagreement showed that the disagreement between BraTumIA and each of the human raters was comparable to the disagreement between the human raters. In summary, BraTumIA generated volumetric trend curves of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments comparable to estimates of human raters. These findings suggest the potential of automated longitudinal tumor segmentation to substitute manual volumetric follow-up of contrast-enhancing and non-enhancing T2-hyperintense tumor compartments.

  16. Contrast-based fully automatic segmentation of white matter hyperintensities: method and validation.

    Directory of Open Access Journals (Sweden)

    Thomas Samaille

    Full Text Available White matter hyperintensities (WMH) on T2 or FLAIR sequences have been commonly observed on MR images of elderly people. They have been associated with various disorders and have been shown to be a strong risk factor for stroke and dementia. WMH studies usually required visual evaluation of WMH load or time-consuming manual delineation. This paper introduces WHASA (White matter Hyperintensities Automated Segmentation Algorithm), a new method for automatically segmenting WMH from FLAIR and T1 images in multicentre studies. Contrary to previous approaches that were based on intensities, this method relies on contrast: nonlinear diffusion filtering alternated with watershed segmentation to obtain piecewise constant images with increased contrast between WMH and surrounding tissues. WMH are then selected based on a subject-dependent, automatically computed threshold and anatomical information. WHASA was evaluated on 67 patients from two studies, acquired on six different MRI scanners and displaying a wide range of lesion load. Accuracy of the segmentation was assessed through volume and spatial agreement measures with respect to manual segmentation; an intraclass correlation coefficient (ICC) of 0.96 and a mean similarity index (SI) of 0.72 were obtained. WHASA was compared to four other approaches: Freesurfer and a thresholding approach as unsupervised methods; k-nearest neighbours (kNN) and support vector machines (SVM) as supervised ones. For the latter, the influence of the training set was also investigated. WHASA clearly outperformed both unsupervised methods, while performing at least as well as the supervised approaches (ICC range: 0.87–0.91 for kNN, 0.89–0.94 for SVM; mean SI: 0.63–0.71 for kNN, 0.67–0.72 for SVM), and did not need any training set.
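
    The following sketch illustrates the contrast-based idea in a heavily simplified form: Perona-Malik-style nonlinear diffusion to push the image towards piecewise-constant intensities, followed by a simple automatically computed threshold (mean + 2 standard deviations). The real WHASA pipeline alternates diffusion with watershed segmentation and uses anatomical information, none of which is reproduced here; the toy image and threshold rule are assumptions.

        import numpy as np

        def perona_malik(img, n_iter=20, kappa=0.1, gamma=0.2):
            # Edge-preserving nonlinear diffusion: smooths within tissue while keeping
            # the lesion/tissue contrast, pushing the image towards piecewise-constant.
            # (np.roll wraps at the borders, which is acceptable for this toy sketch.)
            u = img.astype(float).copy()
            for _ in range(n_iter):
                grads = [np.roll(u, shift, axis) - u
                         for axis in (0, 1) for shift in (-1, 1)]
                u += gamma * sum(np.exp(-(g / kappa) ** 2) * g for g in grads)
            return u

        def segment_hyperintensities(flair, n_std=2.0):
            # Keep voxels brighter than mean + n_std * std of the diffused image --
            # a simple stand-in for WHASA's subject-dependent automatic threshold.
            smoothed = perona_malik(flair)
            return smoothed > smoothed.mean() + n_std * smoothed.std()

        # toy FLAIR-like slice: dark background, mid-grey "tissue", one bright lesion
        img = np.full((128, 128), 0.3)
        img[30:100, 30:100] = 0.5
        img[60:70, 60:70] = 0.9
        img += np.random.default_rng(0).normal(0, 0.03, img.shape)
        mask = segment_hyperintensities(img)
        print("hyperintense voxels:", int(mask.sum()))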

  17. ISS Contingency Attitude Control Recovery Method for Loss of Automatic Thruster Control

    Science.gov (United States)

    Bedrossian, Nazareth; Bhatt, Sagar; Alaniz, Abran; McCants, Edward; Nguyen, Louis; Chamitoff, Greg

    2008-01-01

    In this paper, the attitude control issues associated with International Space Station (ISS) loss of automatic thruster control capability are discussed and methods for attitude control recovery are presented. This scenario was experienced recently during Shuttle mission STS-117 and ISS Stage 13A in June 2007 when the Russian GN&C computers, which command the ISS thrusters, failed. Without automatic propulsive attitude control, the ISS would not be able to regain attitude control after the Orbiter undocked. The core issues associated with recovering long-term attitude control using CMGs are described, as well as the systems engineering analysis to identify recovery options. It is shown that the recovery method can be separated into a procedure for rate damping to a safe harbor gravity gradient stable orientation and a capability to maneuver the vehicle to the necessary initial conditions for long term attitude hold. A manual control option using Soyuz and Progress vehicle thrusters is investigated for rate damping and maneuvers. The issues with implementing such an option are presented and the key issue of closed-loop stability is addressed. A new non-propulsive alternative to thruster control, the Zero Propellant Maneuver (ZPM) attitude control method, is introduced and its rate damping and maneuver performance evaluated. It is shown that ZPM can meet the tight attitude and rate error tolerances needed for long term attitude control. A combination of manual thruster rate damping to a safe harbor attitude followed by a ZPM to the Stage long-term attitude control orientation was selected by the Anomaly Resolution Team as the alternate attitude control method for such a contingency.

  18. Using Nanoinformatics Methods for Automatically Identifying Relevant Nanotoxicology Entities from the Literature

    Directory of Open Access Journals (Sweden)

    Miguel García-Remesal

    2013-01-01

    Full Text Available Nanoinformatics is an emerging research field that uses informatics techniques to collect, process, store, and retrieve data, information, and knowledge on nanoparticles, nanomaterials, and nanodevices and their potential applications in health care. In this paper, we have focused on the solutions that nanoinformatics can provide to facilitate nanotoxicology research. For this, we have taken a computational approach to automatically recognize and extract nanotoxicology-related entities from the scientific literature. The desired entities belong to four different categories: nanoparticles, routes of exposure, toxic effects, and targets. The entity recognizer was trained using a corpus that we specifically created for this purpose and was validated by two nanomedicine/nanotoxicology experts. We evaluated the performance of our entity recognizer using 10-fold cross-validation. The precisions range from 87.6% (targets) to 93.0% (routes of exposure), while recall values range from 82.6% (routes of exposure) to 87.4% (toxic effects). These results prove the feasibility of using computational approaches to reliably perform different named entity recognition (NER)-dependent tasks, such as for instance augmented reading or semantic searches. This research is a “proof of concept” that can be expanded to stimulate further developments that could assist researchers in managing data, information, and knowledge at the nanolevel, thus accelerating research in nanomedicine.

  19. Using nanoinformatics methods for automatically identifying relevant nanotoxicology entities from the literature.

    Science.gov (United States)

    García-Remesal, Miguel; García-Ruiz, Alejandro; Pérez-Rey, David; de la Iglesia, Diana; Maojo, Víctor

    2013-01-01

    Nanoinformatics is an emerging research field that uses informatics techniques to collect, process, store, and retrieve data, information, and knowledge on nanoparticles, nanomaterials, and nanodevices and their potential applications in health care. In this paper, we have focused on the solutions that nanoinformatics can provide to facilitate nanotoxicology research. For this, we have taken a computational approach to automatically recognize and extract nanotoxicology-related entities from the scientific literature. The desired entities belong to four different categories: nanoparticles, routes of exposure, toxic effects, and targets. The entity recognizer was trained using a corpus that we specifically created for this purpose and was validated by two nanomedicine/nanotoxicology experts. We evaluated the performance of our entity recognizer using 10-fold cross-validation. The precisions range from 87.6% (targets) to 93.0% (routes of exposure), while recall values range from 82.6% (routes of exposure) to 87.4% (toxic effects). These results prove the feasibility of using computational approaches to reliably perform different named entity recognition (NER)-dependent tasks, such as for instance augmented reading or semantic searches. This research is a "proof of concept" that can be expanded to stimulate further developments that could assist researchers in managing data, information, and knowledge at the nanolevel, thus accelerating research in nanomedicine. PMID:23509721

  20. Accurate and automatic extrinsic calibration method for blade measurement system integrated by different optical sensors

    Science.gov (United States)

    He, Wantao; Li, Zhongwei; Zhong, Kai; Shi, Yusheng; Zhao, Can; Cheng, Xu

    2014-11-01

    Fast and precise 3D inspection systems are in great demand in modern manufacturing processes. At present, the available sensors have their own pros and cons, and an omnipotent sensor able to handle complex inspection tasks accurately and effectively hardly exists. The prevailing solution is to integrate multiple sensors and take advantage of their strengths. For obtaining a holistic 3D profile, the data from different sensors should be registered into a coherent coordinate system. However, some complex-shaped objects, such as blades, have thin-wall features, for which the ICP registration method becomes unstable. Therefore, it is very important to calibrate the extrinsic parameters of each sensor in the integrated measurement system. This paper proposes an accurate and automatic extrinsic parameter calibration method for a blade measurement system integrating different optical sensors. In this system, a fringe projection sensor (FPS) and a conoscopic holography sensor (CHS) are integrated into a multi-axis motion platform, and the sensors can be optimally moved to any desired position at the object's surface. In order to simplify the calibration process, a special calibration artifact is designed according to the characteristics of the two sensors. An automatic registration procedure based on correlation and segmentation is used to roughly align the artifact datasets obtained by the FPS and CHS without any manual operation or data pre-processing, and then the generalized Gauss-Markov model is used to estimate the optimal transformation parameters. The experiments show the measurement result of a blade, where several sampled patches are merged into one point cloud, verifying the performance of the proposed method.
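
    The paper estimates the optimal transformation with a generalized Gauss-Markov model; as a simpler, widely used stand-in, the sketch below computes the least-squares rigid transform between corresponding points from two sensors with the closed-form SVD (Kabsch) solution. The point sets and the verification values are synthetic.

        import numpy as np

        def rigid_transform(src, dst):
            # Least-squares rigid transform (R, t) mapping src onto dst via the
            # closed-form SVD (Kabsch) solution; src and dst are (N, 3) corresponding
            # points, e.g. calibration-artifact features seen by the two sensors.
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:          # guard against a reflection
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            t = c_dst - R @ c_src
            return R, t

        # toy check: recover a known rotation about z and a translation
        rng = np.random.default_rng(3)
        pts = rng.random((50, 3))
        a = np.deg2rad(25)
        R_true = np.array([[np.cos(a), -np.sin(a), 0],
                           [np.sin(a),  np.cos(a), 0],
                           [0,          0,         1]])
        t_true = np.array([0.2, -0.1, 0.5])
        R_est, t_est = rigid_transform(pts, pts @ R_true.T + t_true)
        print(np.allclose(R_est, R_true, atol=1e-6), np.allclose(t_est, t_true, atol=1e-6))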

  1. Applying a weighted random forests method to extract karst sinkholes from LiDAR data

    Science.gov (United States)

    Zhu, Junfeng; Pierskalla, William P.

    2016-02-01

    Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve locating and delineating sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through a visual inspection and field verification process. Based on the geometry of depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% for the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, resulted in moderate success with an average accuracy rate of 73.96%. This study suggests that an automatic sinkhole extraction procedure like the random forest classifier can significantly reduce time and labor costs and makes it more tractable to map sinkholes using LiDAR data for large areas. However, the random forests method cannot totally replace manual procedures, such as visual inspection and field verification.
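
    A minimal scikit-learn sketch of the class-weighting idea is shown below: a random forest trained on an imbalanced table of depression attributes with the minority (sinkhole) class up-weighted. The feature table and labels are randomly generated placeholders, not the study's 11 predictive variables.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        # Hypothetical feature table: one row per LiDAR-derived depression, with
        # 11 columns standing in for depth, area, circularity, distance to roads, etc.
        rng = np.random.default_rng(0)
        X = rng.random((2000, 11))
        y = (rng.random(2000) < 0.15).astype(int)    # imbalanced: ~15% sinkholes

        # class_weight="balanced" up-weights the rare sinkhole class -- the same
        # idea as the weighted random forest used in the study.
        clf = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                                     oob_score=True, random_state=0)
        clf.fit(X, y)
        print("OOB accuracy:", round(clf.oob_score_, 3))
        print("5-fold CV accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 3))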

  2. Optimising extraction of extracellular polymeric substances (EPS) from benthic diatoms: comparison of the efficiency of six EPS extraction methods

    OpenAIRE

    Takahashi, Eri; Ledauphin, Jerome; Goux, Didier; Orvain, Francis

    2009-01-01

    There is no universal method that can be applied to extract bound extracellular polymeric substances (EPS) from benthic diatoms of intertidal sediments without causing cell lysis. Six extraction methods were tested on a diatom culture of Navicula jeffreyi to establish the best compromise between high yields of carbohydrate extraction and minimum cell lysis. Extraction with distilled water provoked cell lysis (as already known). The five other extraction methods (dowex resin, artificial seawat...

  3. Automatic detection method for mura defects on display film surface using modified Weber's law

    Science.gov (United States)

    Kim, Myung-Muk; Lee, Seung-Ho

    2014-07-01

    We propose a method that automatically detects mura defects on display film surfaces using a modified version of Weber's law. The proposed method detects mura defects regardless of their properties and shapes by identifying regions perceived by human vision as mura, using pixel brightness and the image distribution ratio of mura in an image histogram. The proposed detection method comprises five stages. In the first stage, the display film surface image is acquired and a gray-level shift is performed. In the second and third stages, the image histogram is acquired and analyzed, respectively. In the fourth stage, the mura range is acquired. This is followed by postprocessing in the fifth stage. Evaluations of the proposed method conducted using 200 display film mura image samples indicate a maximum detection rate of ~95.5%. Further, the results of applying the Semu index for luminance mura in flat panel display (FPD) image quality inspection indicate that the proposed method is more reliable than a popular conventional method.
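
    As a loose illustration of the Weber-contrast idea (not the five-stage method of the paper), the sketch below takes the histogram mode as the background brightness and flags pixels whose Weber contrast against it exceeds a small just-noticeable-difference threshold; the threshold value and toy image are assumptions.

        import numpy as np

        def detect_mura(gray, k=0.02):
            # Flag pixels whose Weber contrast |I - Ib| / Ib against the background
            # level Ib (taken here as the histogram mode) exceeds k, which plays the
            # role of a just-noticeable-difference threshold.
            hist, edges = np.histogram(gray, bins=256, range=(0, 255))
            background = edges[np.argmax(hist)]          # dominant film brightness
            weber = np.abs(gray.astype(float) - background) / (background + 1e-9)
            return weber > k

        # toy film image: flat background with a faint dark blotch (~3% darker)
        img = np.full((200, 200), 180.0)
        img[80:120, 90:140] -= 6.0
        mask = detect_mura(img, k=0.02)
        print("candidate mura pixels:", int(mask.sum()))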

  4. Evaluating current automatic de-identification methods with Veteran’s health administration clinical documents

    Directory of Open Access Journals (Sweden)

    Ferrández Oscar

    2012-07-01

    Full Text Available Abstract Background The increased use and adoption of Electronic Health Records (EHR) causes a tremendous growth in digital information useful for clinicians, researchers and many other operational purposes. However, this information is rich in Protected Health Information (PHI), which severely restricts its access and possible uses. A number of investigators have developed methods for automatically de-identifying EHR documents by removing PHI, as specified in the Health Insurance Portability and Accountability Act “Safe Harbor” method. This study focuses on the evaluation of existing automated text de-identification methods and tools, as applied to Veterans Health Administration (VHA) clinical documents, to assess which methods perform better with each category of PHI found in our clinical notes, and when new methods are needed to improve performance. Methods We installed and evaluated five text de-identification systems “out-of-the-box” using a corpus of VHA clinical documents. The systems based on machine learning methods were trained with the 2006 i2b2 de-identification corpora and evaluated with our VHA corpus, and also evaluated with a ten-fold cross-validation experiment using our VHA corpus. We counted exact, partial, and fully contained matches with reference annotations, considering each PHI type separately, or only one unique ‘PHI’ category. Performance of the systems was assessed using recall (equivalent to sensitivity) and precision (equivalent to positive predictive value) metrics, as well as the F2-measure. Results Overall, systems based on rules and pattern matching achieved better recall, and precision was always better with systems based on machine learning approaches. The highest “out-of-the-box” F2-measure was 67% for partial matches; the best precision and recall were 95% and 78%, respectively. Finally, the ten-fold cross validation experiment allowed for an increase of the F2-measure to 79% with partial matches.
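
    The evaluation arithmetic mentioned in the abstract is easy to make concrete; the snippet below computes precision, recall and the F2-measure (which weights recall four times as heavily as precision) from counts of matched annotations. The counts used are illustrative, not the study's results.

        def precision_recall_f2(tp, fp, fn):
            # F-beta with beta = 2 emphasises recall, which matters most when missing
            # PHI is costlier than over-redacting.
            precision = tp / (tp + fp) if tp + fp else 0.0
            recall = tp / (tp + fn) if tp + fn else 0.0
            beta2 = 2 ** 2
            f2 = ((1 + beta2) * precision * recall / (beta2 * precision + recall)
                  if precision + recall else 0.0)
            return precision, recall, f2

        # illustrative counts for one PHI category (not the study's actual numbers)
        p, r, f2 = precision_recall_f2(tp=780, fp=40, fn=220)
        print(round(p, 3), round(r, 3), round(f2, 3))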

  5. A novel method of genomic DNA extraction for Cactaceae 1

    OpenAIRE

    Fehlberg, Shannon D.; Allen, Jessica M.; Kathleen Church

    2013-01-01

    • Premise of the study: Genetic studies of Cactaceae can at times be impeded by difficult sampling logistics and/or high mucilage content in tissues. Simplifying sampling and DNA isolation through the use of cactus spines has not previously been investigated. • Methods and Results: Several protocols for extracting DNA from spines were tested and modified to maximize yield, amplification, and sequencing. Sampling of and extraction from spines resulted in a simplified protocol overall and compl...

  6. Method for improved extraction of DNA from Nocardia asteroides.

    OpenAIRE

    Loeffelholz, M. J.; Scholl, D R

    1989-01-01

    In a variation of standard DNA extraction methods, Nocardia asteroides was repeatedly exposed to sodium dodecyl sulfate at 60 degrees C for 30 min; each extraction was followed by centrifugation, removal of the nucleic acid-rich supernatant, and suspension of the cell pellet in fresh sodium dodecyl sulfate. The pooled supernatants contained a substantially higher amount of DNA than the first supernatant alone. The possible implications of this procedure on the development of DNA probes are di...

  7. Methods and automatic procedures for processing images based on blind evaluation of noise type and characteristics

    Science.gov (United States)

    Lukin, Vladimir V.; Abramov, Sergey K.; Ponomarenko, Nikolay N.; Uss, Mikhail L.; Zriakhov, Mikhail; Vozel, Benoit; Chehdi, Kacem; Astola, Jaakko T.

    2011-01-01

    In many modern applications, methods and algorithms used for image processing require a priori knowledge or estimates of the noise type and its characteristics. The noise type and basic parameters can sometimes be known in advance or determined in an interactive manner. However, it occurs more and more often that they should be estimated in a blind manner. The results of blind noise-type determination can be false, and the estimates of noise parameters are characterized by a certain accuracy. Such false decisions and estimation errors have an impact on the performance of image-processing techniques that are based on the obtained information. We address some issues of such negative influence. Possible structures of automatic procedures are presented and discussed for several typical applications of image processing, such as remote sensing data preprocessing and compression.

  8. Transducer-actuator systems and methods for performing on-machine measurements and automatic part alignment

    Energy Technology Data Exchange (ETDEWEB)

    Barkman, William E.; Dow, Thomas A.; Garrard, Kenneth P.; Marston, Zachary

    2016-07-12

    Systems and methods for performing on-machine measurements and automatic part alignment, including: a measurement component operable for determining the position of a part on a machine; and an actuation component operable for adjusting the position of the part by contacting the part with a predetermined force responsive to the determined position of the part. The measurement component consists of a transducer. The actuation component consists of a linear actuator. Optionally, the measurement component and the actuation component consist of a single linear actuator operable for contacting the part with a first lighter force for determining the position of the part and with a second harder force for adjusting the position of the part. The actuation component is utilized in a substantially horizontal configuration and the effects of gravitational drop of the part are accounted for in the force applied and the timing of the contact.

  9. A New Method to Extract Text from Natural Scenes

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    This paper presents a new method for text detection, location and binarization from natural scenes. Several morphological steps are used to detect the general position of the text, including English, Chinese and Japanese characters. Next, bounding boxes are processed by a new "Expand, Break and Merge" (EBM) method to get the precise text areas. Finally, text is binarized by a hybrid method based on Otsu and Niblack. This new approach can extract different kinds of text from complicated natural scenes. It is insensitive to noise, distortion, and text orientation. It also has good performance in extracting texts of various sizes.
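
    Both thresholding schemes named in the abstract are available in scikit-image; the sketch below computes a global Otsu threshold and a local Niblack threshold on a sample document image and combines them with a simple rule. The combination rule is an assumption for illustration — the paper's exact hybrid is not reproduced here.

        import numpy as np
        from skimage import data
        from skimage.filters import threshold_otsu, threshold_niblack

        gray = data.page().astype(float) / 255.0            # sample scanned-page image

        t_otsu = threshold_otsu(gray)                        # single global threshold
        t_local = threshold_niblack(gray, window_size=25, k=0.2)   # per-pixel threshold

        # Illustrative combination rule (the paper's exact hybrid is not given here):
        # trust the local Niblack decision, but reject pixels that are far brighter
        # than the global Otsu level, which suppresses spurious marks in empty areas.
        text_mask = (gray < t_local) & (gray < t_otsu + 0.1)
        print("text pixels:", int(text_mask.sum()), "of", text_mask.size)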

  10. A semi-automatic non-destructive method to quantify grapevine downy mildew sporulation.

    Science.gov (United States)

    Peressotti, Elisa; Duchêne, Eric; Merdinoglu, Didier; Mestre, Pere

    2011-02-01

    The availability of fast, reliable and non-destructive methods for the analysis of pathogen development contributes to a better understanding of plant-pathogen interactions. This is particularly true for the genetic analysis of quantitative resistance to plant pathogens, where the availability of a method allowing a precise quantification of pathogen development allows the reliable detection of different genomic regions involved in the resistance. Grapevine downy mildew, caused by the biotrophic Oomycete Plasmopara viticola, is one of the most important diseases affecting viticulture. Here we report the development of a simple image analysis-based semi-automatic method for the quantification of grapevine downy mildew sporulation, requiring just a compact digital camera and the open source software ImageJ. We confirm the suitability of the method for the analysis of the interaction between grapevine and downy mildew by performing QTL analysis of resistance to downy mildew as well as analysis of the kinetics of downy mildew infection. The non-destructive nature of the method will enable comparison between the phenotypic and molecular data obtained from the very same sample, resulting in a more accurate description of the interaction, while its simplicity makes it easily adaptable to other plant-pathogen interactions, in particular those involving downy mildews. PMID:21167874

  11. Comparison of extraction methods for analysis of flavonoids in onions

    OpenAIRE

    Soeltoft, Malene; Knuthsen, Pia; Nielsen, John

    2008-01-01

    Onions are known to contain high levels of flavonoids, and a comparison of the efficiency, reproducibility and detection limits of various extraction methods has been made in order to develop fast and reliable analytical methods for the analysis of flavonoids in onions. Conventional and classical methods are time- and solvent-consuming, and the presence of light and oxygen during sample preparation facilitates degradation reactions. Thus, classical methods were compared with microwave (irradiatio...

  12. An automatic method for producing robust regression models from hyperspectral data using multiple simple genetic algorithms

    Science.gov (United States)

    Sykas, Dimitris; Karathanassi, Vassilia

    2015-06-01

    This paper presents a new method for automatically determining the optimum regression model, which enables the estimation of a parameter. The concept lies in the combination of k spectral pre-processing algorithms (SPPAs) that enhance spectral features correlated to the desired parameter. Initially, a pre-processing algorithm uses a single spectral signature as input and transforms it according to the SPPA function. A k-step combination of SPPAs uses k pre-processing algorithms serially. The result of each SPPA is used as input to the next SPPA, and so on until the k desired pre-processed signatures are reached. These signatures are then used as input to three different regression methods: Normalized band Difference Regression (NDR), Multiple Linear Regression (MLR) and Partial Least Squares Regression (PLSR). Three Simple Genetic Algorithms (SGAs) are used, one for each regression method, for the selection of the optimum combination of k SPPAs. The performance of the SGAs is evaluated based on the RMS error of the regression models. The evaluation not only indicates the selection of the optimum SPPA combination but also the regression method that produces the optimum prediction model. The proposed method was applied to soil spectral measurements in order to predict Soil Organic Matter (SOM). In this study, the maximum value assigned to k was 3. PLSR yielded the highest accuracy, while NDR's accuracy was satisfactory compared to its complexity. The MLR method showed severe drawbacks due to the presence of noise, in terms of collinearity among the spectral bands. Most of the regression methods required a 3-step combination of SPPAs to achieve the highest performance. The selected pre-processing algorithms were different for each regression method, since each regression method handles the explanatory variables in a different way.
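
    A compact sketch of the search problem is given below: k-step chains of simple spectral pre-processing functions are scored by the cross-validated RMSE of a PLSR model. For brevity the chains are enumerated exhaustively, whereas the paper searches the space with simple genetic algorithms; the pre-processing functions, toy spectra and target are placeholders.

        import numpy as np
        from itertools import product
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        # Placeholder spectral pre-processing algorithms (SPPAs).
        def first_derivative(s): return np.gradient(s, axis=1)
        def snv(s): return (s - s.mean(1, keepdims=True)) / (s.std(1, keepdims=True) + 1e-9)
        def log_abs(s): return np.log1p(np.abs(s))

        sppas = {"d1": first_derivative, "snv": snv, "log": log_abs}

        def score_chain(chain, X, y):
            # Apply a k-step SPPA chain, then evaluate PLSR by cross-validated RMSE.
            Xt = X.copy()
            for name in chain:
                Xt = sppas[name](Xt)
            pred = cross_val_predict(PLSRegression(n_components=5), Xt, y, cv=5)
            return float(np.sqrt(np.mean((pred.ravel() - y) ** 2)))

        # toy spectra and target, standing in for soil spectra and SOM content
        rng = np.random.default_rng(0)
        X = rng.random((120, 200))
        y = 3 * X[:, 50] + rng.normal(0, 0.1, 120)

        # exhaustive search over 3-step chains; the paper replaces this loop with
        # a simple genetic algorithm per regression method.
        best = min(product(sppas, repeat=3), key=lambda c: score_chain(c, X, y))
        print("best chain:", best, "RMSE:", round(score_chain(best, X, y), 4))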

  13. An Automatic Statistical Method to detect the Breast Border in a Mammogram

    Directory of Open Access Journals (Sweden)

    Wai Tak (Arthur) Hung

    2007-03-01

    Full Text Available Segmentation is an image processing technique to divide an image into several meaningful objects. Edge enhancement and border detection are important components of image segmentation. A mammogram is a soft x-ray of a woman's breast, which is read by radiologists to detect breast cancer. Recently, digital mammography has also become available. In order to perform computer-aided detection on a mammogram, the image has to be either in digital form or digitized. A preprocessing step for a digital/digitized mammogram is to detect the breast border so as to minimize the area to search for breast lesions. An enclosed curve is used to define the breast area. In this paper we propose a modified measure of class separability and use it to select the best segmentation result objectively, which leads to an improved border detection method. This new method is then used to analyze a test set of 35 mammograms. The breast border of these 35 mammograms was also traced manually twice to test for repeatability using Hung's method [1]. The borders obtained from the proposed automatic border detection method are shown to be of better quality than the corresponding ones traced manually.

  14. Influence of Extraction Methods on the Yield of Steviol Glycosides and Antioxidants in Stevia rebaudiana Extracts.

    Science.gov (United States)

    Periche, Angela; Castelló, Maria Luisa; Heredia, Ana; Escriche, Isabel

    2015-06-01

    This study evaluated the application of ultrasound techniques and microwave energy, compared to conventional extraction methods (high temperatures at atmospheric pressure), for the solid-liquid extraction of steviol glycosides (sweeteners) and antioxidants (total phenols, flavonoids and antioxidant capacity) from dehydrated Stevia leaves. Different temperatures (from 50 to 100 °C), times (from 1 to 40 min) and microwave powers (1.98 and 3.30 W/g extract) were used. There was a great difference in the resulting yields according to the treatments applied. Steviol glycosides and antioxidants were negatively correlated; therefore, there is no single treatment suitable for obtaining the highest yield in both groups of compounds simultaneously. The greatest yield of steviol glycosides was obtained with microwave energy (3.30 W/g extract, 2 min), whereas the conventional method (90 °C, 1 min) was the most suitable for antioxidant extraction. Consequently, the best process depends on the subsequent use (sweetener or antioxidant) of the aqueous extract of Stevia leaves. PMID:25726419

  15. BMAA extraction of cyanobacteria samples: which method to choose?

    Science.gov (United States)

    Lage, Sandra; Burian, Alfred; Rasmussen, Ulla; Costa, Pedro Reis; Annadotter, Heléne; Godhe, Anna; Rydberg, Sara

    2016-01-01

    β-N-Methylamino-L-alanine (BMAA), a neurotoxin reportedly produced by cyanobacteria, diatoms and dinoflagellates, is proposed to be linked to the development of neurological diseases. BMAA has been found in aquatic and terrestrial ecosystems worldwide, both in its phytoplankton producers and in several invertebrate and vertebrate organisms that bioaccumulate it. LC-MS/MS is the most frequently used analytical technique in BMAA research due to its high selectivity, though consensus is lacking as to the best extraction method to apply. This study accordingly surveys the efficiency of three extraction methods regularly used in BMAA research to extract BMAA from cyanobacteria samples. The results obtained provide insights into possible reasons for the BMAA concentration discrepancies in previous publications. In addition and according to the method validation guidelines for analysing cyanotoxins, the TCA protein precipitation method, followed by AQC derivatization and LC-MS/MS analysis, is now validated for extracting protein-bound (after protein hydrolysis) and free BMAA from cyanobacteria matrix. BMAA biological variability was also tested through the extraction of diatom and cyanobacteria species, revealing a high variance in BMAA levels (0.0080-2.5797 μg g(-1) DW). PMID:26304815

  16. Microscale extraction method for HPLC carotenoid analysis in vegetable matrices

    Directory of Open Access Journals (Sweden)

    Sidney Pacheco

    2014-10-01

    Full Text Available In order to generate simple, efficient analytical methods that are also fast, clean, and economical, and are capable of producing reliable results for a large number of samples, a microscale extraction method for the analysis of carotenoids in vegetable matrices was developed. The efficiency of this adapted method was checked by comparing the results obtained from vegetable matrices, based on extraction equivalence, time required and reagents. Six matrices were used: tomato (Solanum lycopersicum L.), carrot (Daucus carota L.), sweet potato with orange pulp (Ipomoea batatas (L.) Lam.), pumpkin (Cucurbita moschata Duch.), watermelon (Citrullus lanatus (Thunb.) Matsum. & Nakai) and sweet potato (Ipomoea batatas (L.) Lam.) flour. Quantification of the total carotenoids was performed by spectrophotometry. Quantification and determination of the carotenoid profiles were carried out by High Performance Liquid Chromatography with photodiode array detection. Microscale extraction was faster, cheaper and cleaner than the commonly used method, and advantageous for analytical laboratories.

  17. THE METHODS OF EXTRACTING WATER INFORMATION FROM SPOT IMAGE

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Some techniques and methods for deriving water information from SPOT-4 (XI) imagery were investigated and discussed in this paper. An algorithm of decision-tree (DT) classification, which includes several classifiers based on the spectral response characteristics of water bodies and other objects, was developed and put forward to delineate water bodies. Another algorithm of decision-tree classification based on both spectral characteristics and auxiliary information of DEM and slope (DTDS) was also designed for water body extraction. In addition, the supervised classification method of maximum-likelihood classification (MLC) and the unsupervised method of the interactive self-organizing data analysis technique (ISODATA) were used to extract water bodies for comparison purposes. An index was designed and used to assess the accuracy of the different methods adopted in the research. Results have shown that water extraction accuracy was variable with respect to the various techniques applied. It was low using ISODATA, very high using the DT algorithm and much higher using both DTDS and MLC.

  18. Spectrophotometric validation of assay method for selected medicinal plant extracts

    Directory of Open Access Journals (Sweden)

    Matthew Arhewoh

    2014-09-01

    Full Text Available Objective: To develop UV spectrophotometric assay validation methods for some selected medicinal plant extracts. Methods: Dried, powdered leaves of Annona muricata (AM) and Andrographis paniculata (AP) as well as seeds of Garcinia kola (GK) and Hunteria umbellata (HU) were separately subjected to maceration using distilled water. Different concentrations of the extracts were scanned spectrophotometrically to obtain wavelengths of maximum absorbance. The different extracts were then subjected to validation studies following international guidelines at the respective wavelengths obtained. Results: The results showed linearity at peak wavelengths of maximum absorbance of 292, 280, 274 and 230 nm for GK, HU, AM and AP, respectively. The calibration curves for the different concentrations of the extract gave R2 values ranging from 0.9831 for AM to 0.9996 for AP. The inter-day and intra-day precision study showed that the relative standard deviation (%) was ≤ 10% for all the extracts. Conclusion: The aqueous extracts and isolates of these plants can be assayed and monitored using these wavelengths.

  19. Computerization of reporting and data storage using automatic coding method in the department of radiology

    International Nuclear Information System (INIS)

    The authors developed a computer program for printing reports as well as for data storage and retrieval in the radiology department. This program used an IBM PC AT and was written in the dBASE III plus language. The automatic coding method for the ACR code, developed by Kim et al., was applied in this program, and the framework of this program is the same as that developed for the surgical pathology department. The working sheet, which contained the name card for X-ray film identification and the results of previous radiologic studies, was printed during registration. The word processing function was applied for issuing the formal report of a radiologic study, and data storage was carried out during the typing of the report. Two kinds of data files were stored on the hard disk: the temporary file contained full information, and the permanent file contained the patient's identification data and ACR code. Searching for a specific case was performed by chart number, patient's name, date of study, or ACR code within a second. All the cases were arranged by the ACR codes of procedure code, anatomy code, and pathology code. Every new data set was copied to a diskette automatically after daily work, with which data could be restored in case of hard disk failure. The main advantage of this program in comparison with a larger computer system is its low price. Based on the experience in the Seoul District Armed Forces General Hospital, we assume that this program provides a solution to various problems in radiology departments where a large computer system with well-designed software is not available.

  20. A Novel Method of Genomic DNA Extraction for Cactaceae

    Directory of Open Access Journals (Sweden)

    Shannon D. Fehlberg

    2013-03-01

    Full Text Available Premise of the study: Genetic studies of Cactaceae can at times be impeded by difficult sampling logistics and/or high mucilage content in tissues. Simplifying sampling and DNA isolation through the use of cactus spines has not previously been investigated. Methods and Results: Several protocols for extracting DNA from spines were tested and modified to maximize yield, amplification, and sequencing. Sampling of and extraction from spines resulted in a simplified protocol overall and complete avoidance of mucilage as compared to typical tissue extractions. Sequences from one nuclear and three plastid regions were obtained across eight genera and 20 species of cacti using DNA extracted from spines. Conclusions: Genomic DNA useful for amplification and sequencing can be obtained from cactus spines. The protocols described here are valuable for any cactus species, but are particularly useful for investigators interested in sampling living collections, extensive field sampling, and/or conservation genetic studies.

  1. A novel method of genomic DNA extraction for Cactaceae1

    Science.gov (United States)

    Fehlberg, Shannon D.; Allen, Jessica M.; Church, Kathleen

    2013-01-01

    • Premise of the study: Genetic studies of Cactaceae can at times be impeded by difficult sampling logistics and/or high mucilage content in tissues. Simplifying sampling and DNA isolation through the use of cactus spines has not previously been investigated. • Methods and Results: Several protocols for extracting DNA from spines were tested and modified to maximize yield, amplification, and sequencing. Sampling of and extraction from spines resulted in a simplified protocol overall and complete avoidance of mucilage as compared to typical tissue extractions. Sequences from one nuclear and three plastid regions were obtained across eight genera and 20 species of cacti using DNA extracted from spines. • Conclusions: Genomic DNA useful for amplification and sequencing can be obtained from cactus spines. The protocols described here are valuable for any cactus species, but are particularly useful for investigators interested in sampling living collections, extensive field sampling, and/or conservation genetic studies. PMID:25202521

  2. Analysis of medicinal plant extracts by neutron activation method

    International Nuclear Information System (INIS)

    This dissertation presents the results of the analysis of medicinal plant extracts using the neutron activation method. Instrumental neutron activation analysis was applied to the determination of the elements Al, Br, Ca, Ce, Cl, Cr, Cs, Fe, K, La, Mg, Mn, Na, Rb, Sb, Sc and Zn in medicinal extracts obtained from Achyrocline satureioides DC, Casearia sylvestris, Centella asiatica, Citrus aurantium L., Solanum lycocarpum, Solidago microglossa, Stryphnodendron barbatiman and Zingiber officinale R. plants. The elements Hg and Se were determined using radiochemical separation by means of retention of Se on an HMD inorganic exchanger and solvent extraction of Hg with a bismuth diethyl-dithiocarbamate solution. Precision and accuracy of the results were evaluated by analysing reference materials. The therapeutic action of some elements found in the analysed plant extracts is briefly discussed.

  3. Effect of Extraction Method on the Phenolic and Cyanogenic Glucoside Profile of Flaxseed Extracts and their Antioxidant Capacity

    OpenAIRE

    Waszkowiak, Katarzyna; Gliszczyńska-Świgło, Anna; Barthet, Veronique; Skręty, Joanna

    2015-01-01

    The application of flaxseed extracts as food ingredients is a subject of interest to food technologists and nutritionists. Therefore, the influence of the extraction method on the content and composition of beneficial compounds as well as anti-nutrients is important. In the study, the effects of two solvent extraction methods, aqueous and 60 % ethanolic, on phenolic and cyanogenic glucoside profiles of flaxseed extract were determined and compared. The impact of extracted phenolic compounds o...

  4. Comparison of Automatic Classifiers’ Performances using Word-based Feature Extraction Techniques in an E-government setting

    OpenAIRE

    Marin Rodenas, Alfonso

    2011-01-01

    Nowadays email is commonly used by citizens to establish communication with their government. Among the received emails, governments deal with some common queries and subjects which handling officers have to answer manually. Automatic classification of the incoming emails makes it possible to increase communication efficiency by decreasing the delay between a query and its response. This thesis takes part within the IMAIL project, which aims to provide an automatic answering solution to th...

  5. Optimization of the Phenol -Chloroform Silica DNA Extraction Method in Ancient Bones DNA Extraction

    Directory of Open Access Journals (Sweden)

    Morteza Sadeghi

    2014-04-01

    Full Text Available Introduction: DNA extraction from ancient bone tissue is currently very difficult. The phenol-chloroform silica method is one of the methods currently used for this purpose. The purpose of this study was to optimize this method. Methods: DNA from 62 bone tissue samples (average 3-11 years) was first extracted with the phenol-chloroform silica method; then, after changing some parameters of the method, the extracted DNA was amplified at eight polymorphic regions, including FES, F13, D13S317, D16, D5S818, vWA and CD4. Results from samples obtained by the two methods were compared on acrylamide gels. Results: The average PCR yield for the new method and the common method at the eight polymorphic regions was 75%, 78%, 81%, 76%, 85%, 71%, 89%, 86% and 64%, 39%, 70%, 49%, 68%, 76%, 71% and 28%, respectively. The average DNA concentration in the optimized method (in 35l silica density) and the common method was 267.5 µg/ml with 1.12 purity and 192.76 µg/ml with 0.84 purity, respectively. Conclusions: According to the findings of this study, it is estimated that longer EDTA treatment is effective in removing calcium, and that an adequate density of silica particles can be effective in the removal of PCR inhibitors.

  6. Automatic calibration method of voxel size for cone-beam 3D-CT scanning system

    International Nuclear Information System (INIS)

    For a cone-beam three-dimensional computed tomography (3D-CT) scanning system, the voxel size is an important indicator to guarantee the accuracy of data analysis and feature measurement based on 3D-CT images. Meanwhile, the voxel size changes with the movement of the rotary stage along the X-ray direction. In order to realize automatic calibration of the voxel size, a new and easily implemented method is proposed. According to this method, several projections of a spherical phantom are captured at different imaging positions and the corresponding voxel size values are calculated by non-linear least-squares fitting. From these fitted values, a linear equation is obtained that reflects the relationship between the voxel size and the translation distance of the rotary stage from its nominal zero position. Finally, the linear equation is imported into the calibration module of the 3D-CT scanning system. When the rotary stage is moving along the X-ray direction, the accurate value of the voxel size is dynamically exported. The experimental results prove that this method meets the requirements of the actual CT scanning system, and has the virtues of easy implementation and high accuracy. (authors)
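
    The calibration step reduces to a one-dimensional linear fit; the sketch below fits voxel size against stage translation distance by least squares and exports the interpolated voxel size for an arbitrary stage position. The numerical values are illustrative, not measured data.

        import numpy as np

        # Voxel size values obtained at several stage positions (mm) from the
        # spherical-phantom projections (illustrative numbers, not measured data).
        stage_mm = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
        voxel_mm = np.array([0.120, 0.135, 0.150, 0.165, 0.180])

        # Linear model voxel = a * stage + b, fitted by least squares.
        a, b = np.polyfit(stage_mm, voxel_mm, deg=1)

        def voxel_size(stage_position_mm):
            # Dynamically export the calibrated voxel size for any stage position.
            return a * stage_position_mm + b

        print("voxel size at 50 mm:", round(voxel_size(50.0), 4))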

  7. Automatic generation of a view to geographical database

    OpenAIRE

    Dunkars, Mats

    2001-01-01

    This thesis concerns object oriented modelling and automatic generalisation of geographic information. The focus, however, is not on traditional paper maps, but on screen maps that are automatically generated from a geographical database. Object oriented modelling is used to design screen maps that are equipped with methods that automatically extract information from a geographical database, generalise the information and display it on a screen. The thesis consists of three parts: a theoreti...

  8. Comparison of RNA extraction methods in Thai aromatic coconut water

    Directory of Open Access Journals (Sweden)

    Nopporn Jaroonchon

    2015-10-01

    Full Text Available Many studies have reported that nucleic acid in coconut water is in free form and at very low yields, which makes it difficult to process in molecular studies. Our research attempted to compare two extraction methods to obtain a higher yield of total RNA from aromatic coconut water and to monitor its change at various fruit stages. The first method used ethanol and sodium acetate as reagents; the second method used lithium chloride. We found that extraction using only lithium chloride gave a higher total RNA yield than the method using ethanol to precipitate nucleic acid. In addition, the total RNA from both methods could be used in amplification of betaine aldehyde dehydrogenase 2 (Badh2) genes, which are involved in coconut aroma biosynthesis, and could be used to perform further studies as we expected. From the molecular study, the nucleic acid found in coconut water increased with fruit age.

  9. Comparative Research on EPS Extraction from Mechanical Dewatered Sludge with Different Methods

    Directory of Open Access Journals (Sweden)

    Weiyun Wang

    2015-09-01

    Full Text Available In order to find a suitable extracellular polymeric substance (EPS) extraction method for mechanically dewatered sludge, four different methods, including EDTA extraction, alkali extraction, acid extraction and ultrasonic extraction, were used to extract EPS from belt-filter dewatered sludge. The contents of polysaccharides and proteins extracted from the dewatered sludge by the different extraction methods were also analyzed. The results indicated that the EDTA method and the alkali extraction method are more suitable for dewatered sludge, yielding higher EPS content and less cell damage, while sulfuric acid extraction and ultrasonic extraction performed worse, with obvious cell lysis indicated by the higher DNA content in the extracted EPS. The contents of proteins and polysaccharides in EPS extracted from mechanically dewatered sludge lie between those in EPS extracted from activated sludge and from anaerobically digested sludge.

  10. Development of 99mTc extraction-recovery by solvent extraction method

    International Nuclear Information System (INIS)

    99mTc is used as a radiopharmaceutical in the medical field for diagnosis, and is produced from 99Mo, the parent nuclide. In this study, solvent extraction with MEK was selected, and preliminary experiments were carried out using Re instead of 99mTc. Two tests were carried out in the experiments: one was the Re extraction test with MEK from a Re-Mo solution, the other was the Re recovery test from the Re-MEK. As for the Re extraction test, it was clear that the Re extraction yield was more than 90%. Two kinds of Re recovery tests, an evaporation method using an evaporator and an adsorption/elution method using an alumina column, were carried out. As for the evaporation method, the Re concentration in the collected solution increased more than 150 times. As for the adsorption/elution method, the Re concentration in the eluted solution increased more than 20 times. (author)

  11. Calculation of radon concentration in water by toluene extraction method

    Energy Technology Data Exchange (ETDEWEB)

    Saito, Masaaki [Tokyo Metropolitan Isotope Research Center (Japan)

    1997-02-01

    The Noguchi method and the Horiuchi method have been used as calculation methods for the radon concentration in water. Both methods have two problems in their original form: the calculated concentration changes with the extraction temperature because of incorrect solubility data, and the calculated concentrations are smaller than the correct values because the radon calculation equation does not conform to gas-liquid equilibrium theory. However, the two problems are solved by improving the radon equation. I present the Noguchi-Saito equation and the constant B of the Horiuchi-Saito equation. The results calculated by the improved method showed about 10% error. (S.Y.)

  12. Development of a rapid method for the automatic classification of biological agents' fluorescence spectral signatures

    Science.gov (United States)

    Carestia, Mariachiara; Pizzoferrato, Roberto; Gelfusa, Michela; Cenciarelli, Orlando; Ludovici, Gian Marco; Gabriele, Jessica; Malizia, Andrea; Murari, Andrea; Vega, Jesus; Gaudio, Pasquale

    2015-11-01

    Biosecurity and biosafety are key concerns of modern society. Although nanomaterials are improving the capacities of point detectors, standoff detection still appears to be an open issue. Laser-induced fluorescence of biological agents (BAs) has proved to be one of the most promising optical techniques to achieve early standoff detection, but its strengths and weaknesses are still to be fully investigated. In particular, different BAs tend to have similar fluorescence spectra due to the ubiquity of biological endogenous fluorophores producing a signal in the UV range, making data analysis extremely challenging. The Universal Multi Event Locator (UMEL), a general method based on support vector regression, is commonly used to identify characteristic structures in arrays of data. In the first part of this work, we investigate fluorescence emission spectra of different simulants of BAs and apply UMEL for their automatic classification. In the second part of this work, we elaborate a strategy for the application of UMEL to the discrimination of the spectra of different BA simulants. Through this strategy, it has been possible to discriminate between these BA simulants despite the high similarity of their fluorescence spectra. These preliminary results support the use of SVR methods to classify BAs' spectral signatures.

  13. An object-based classification method for automatic detection of lunar impact craters from topographic data

    Science.gov (United States)

    Vamshi, Gasiganti T.; Martha, Tapas R.; Vinod Kumar, K.

    2016-05-01

    Identification of impact craters is a primary requirement for studying past geological processes such as impact history. Craters are also used as proxies for measuring the relative ages of planetary and satellite bodies and help in understanding the evolution of planetary surfaces. In this paper, we present a new method using the object-based image analysis (OBIA) technique to detect impact craters over a wide range of sizes from topographic data. Multiresolution image segmentation of digital terrain models (DTMs) available from NASA's LRO mission was carried out to create objects. Subsequently, objects were classified into impact craters using shape and morphometric criteria, resulting in 95% detection accuracy. The methodology, developed in a training area in parts of Mare Imbrium as a knowledge-based ruleset, detected impact craters with 90% accuracy when applied in another area. The minimum and maximum sizes (diameters) of impact craters detected in parts of Mare Imbrium by our method are 29 m and 1.5 km, respectively. Diameters of automatically detected impact craters show good correlation (R2 > 0.85) with the diameters of manually detected impact craters.
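
    The published knowledge-based ruleset is not given in the abstract; the sketch below only illustrates the kind of shape and morphometric criteria (circularity and a depth-to-diameter ratio) that an OBIA workflow might apply to a segmented DTM object, with guessed thresholds.

```python
# Hedged sketch of the kind of shape/morphometric ruleset an OBIA workflow can
# apply to segmented DTM objects; the thresholds are illustrative guesses, not
# the published knowledge base.
import numpy as np

def crater_like(region_mask, dtm, pixel_size_m,
                min_circularity=0.7, min_depth_diameter_ratio=0.05):
    """Return True if a segmented object (boolean mask) looks like an impact crater."""
    area = region_mask.sum() * pixel_size_m ** 2

    # Perimeter estimate from 4-connectivity boundary pixels.
    padded = np.pad(region_mask, 1)
    inner = (padded[1:-1, 2:] & padded[1:-1, :-2] &
             padded[2:, 1:-1] & padded[:-2, 1:-1])
    perimeter = (region_mask & ~inner).sum() * pixel_size_m
    circularity = 4 * np.pi * area / perimeter ** 2 if perimeter else 0.0

    # Morphometric criterion: depth relative to the equivalent diameter.
    diameter = 2.0 * np.sqrt(area / np.pi)
    elevation = dtm[region_mask]
    depth = elevation.max() - elevation.min()
    return (circularity >= min_circularity and
            depth / diameter >= min_depth_diameter_ratio)
```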

  14. Automatic Method for Identifying Photospheric Bright Points and Granules Observed by Sunrise

    CERN Document Server

    Javaherian, Mohsen; Amiri, Ali; Ziaei, Shervin

    2014-01-01

    In this study, we propose methods for the automatic detection of photospheric features (bright points and granules) from ultraviolet (UV) radiation, using a feature-based classifier. The methods use quiet-Sun images at 214 nm and 525 nm taken by Sunrise on 9 June 2009. Region growing and mean-shift procedures are applied to segment the bright points (BPs) and granules, respectively. Zernike moments of each region are computed. The Zernike moments of BPs, granules, and other features are distinctive enough to be separated using a support vector machine (SVM) classifier. The size distribution of BPs can be fitted with a power law of slope -1.5. The peak value of granule sizes is found to be about 0.5 arcsec^2. The mean value of the filling factor of BPs is 0.01, and for granules it is 0.51. There is a critical scale for granules such that small granules with sizes smaller than 2.5 arcsec^2 cover a wide range of brightness, while the brightness of large granules approaches unity. The mean...
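
    A minimal sketch of the Zernike-moments-plus-SVM classification step, assuming the mahotas and scikit-learn packages; the region masks and labels are taken to come from the segmentation and manual annotation described above.

```python
# Minimal sketch assuming the `mahotas` and `scikit-learn` packages: Zernike
# moments of segmented regions fed to an SVM, mirroring the BP/granule
# classification idea. The region masks and labels would come from the region
# growing / mean-shift segmentation and manual annotation described above.
import numpy as np
import mahotas
from sklearn.svm import SVC

def zernike_features(region_mask, radius=15, degree=8):
    """Zernike moment vector of one binary region, cropped to its bounding box."""
    rows, cols = np.nonzero(region_mask)
    crop = region_mask[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    return mahotas.features.zernike_moments(crop.astype(np.uint8), radius, degree=degree)

def train_bp_granule_classifier(region_masks, labels):
    """labels: 0 = bright point, 1 = granule."""
    X = np.array([zernike_features(m) for m in region_masks])
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(X, np.asarray(labels))
    return clf
```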

  15. Applications of automatic mesh generation and adaptive methods in computational medicine

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, J.A.; Macleod, R.S. [Univ. of Utah, Salt Lake City, UT (United States); Johnson, C.R.; Eason, J.C. [Duke Univ., Durham, NC (United States)

    1995-12-31

    Important problems in Computational Medicine exist that can benefit from the implementation of adaptive mesh refinement techniques. Biological systems are so inherently complex that only efficient models running on state of the art hardware can begin to simulate reality. To tackle the complex geometries associated with medical applications we present a general purpose mesh generation scheme based upon the Delaunay tessellation algorithm and an iterative point generator. In addition, automatic, two- and three-dimensional adaptive mesh refinement methods are presented that are derived from local and global estimates of the finite element error. Mesh generation and adaptive refinement techniques are utilized to obtain accurate approximations of bioelectric fields within anatomically correct models of the heart and human thorax. Specifically, we explore the simulation of cardiac defibrillation and the general forward and inverse problems in electrocardiography (ECG). Comparisons between uniform and adaptive refinement techniques are made to highlight the computational efficiency and accuracy of adaptive methods in the solution of field problems in computational medicine.
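
    As a hedged illustration of Delaunay tessellation with iterative point insertion (refining on triangle area rather than on a finite element error estimate, and not the authors' generator), the sketch below uses SciPy's Delaunay routine on a toy two-dimensional domain.

```python
# Hedged illustration of the Delaunay-plus-iterative-point-insertion idea only;
# it refines on triangle area rather than on a finite element error estimate,
# and is not the authors' mesh generator.
import numpy as np
from scipy.spatial import Delaunay

def triangle_areas(points, simplices):
    p = points[simplices]                                   # (ntri, 3, 2)
    v1, v2 = p[:, 1] - p[:, 0], p[:, 2] - p[:, 0]
    return 0.5 * np.abs(v1[:, 0] * v2[:, 1] - v1[:, 1] * v2[:, 0])

def refine(points, max_area=0.01, iterations=5):
    """Insert centroids of over-sized triangles and re-tessellate."""
    for _ in range(iterations):
        tri = Delaunay(points)
        big = tri.simplices[triangle_areas(points, tri.simplices) > max_area]
        if len(big) == 0:
            break
        points = np.vstack([points, points[big].mean(axis=1)])
    return points, Delaunay(points)

# Toy domain: unit-square corners plus a few interior seed points.
seed = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.5, 0.5], [0.2, 0.8]], float)
pts, mesh = refine(seed)
print(len(pts), "nodes,", len(mesh.simplices), "triangles")
```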

  16. Photoelectric scanning-based method for positioning omnidirectional automatic guided vehicle

    Science.gov (United States)

    Huang, Zhe; Yang, Linghui; Zhang, Yunzhi; Guo, Yin; Ren, Yongjie; Lin, Jiarui; Zhu, Jigui

    2016-03-01

    The automatic guided vehicle (AGV), a kind of mobile robot, has been widely used in many applications. To better adapt to complex working environments, more and more AGVs are designed to be omnidirectional by being equipped with Mecanum wheels, which increase their flexibility and maneuverability. However, because an AGV with this kind of wheel suffers from position errors caused mainly by frequent slipping, measuring its position accurately in real time is an extremely important issue. Among the ways of achieving this, photoelectric scanning based on angle measurement is efficient. Hence, we propose a feasible method to improve the positioning process, which mainly integrates four photoelectric receivers and one laser transmitter. To verify the practicality and accuracy, actual experiments and computer simulations were conducted. In the simulation, the theoretical positioning error is less than 0.28 mm in a 10 m×10 m space. In the actual experiment, the stability, accuracy, and dynamic capability of the method were examined. The results demonstrate that the system works well and that the position measurement performance is high enough to fulfill mainstream tasks.
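
    A hedged sketch of the angle-measurement idea, not the authors' exact algorithm: assuming a scanning transmitter at the origin that reports the bearing to each of four receivers at known body-frame offsets, the planar AGV pose can be recovered by nonlinear least squares.

```python
# Hedged sketch of angle-measurement-based pose estimation, not the authors'
# exact algorithm: a scanning transmitter at the origin reports the bearing to
# each of four receivers mounted at known body-frame offsets on the AGV, and
# the planar pose (x, y, heading) is recovered by nonlinear least squares.
# Receiver offsets and test numbers are invented.
import numpy as np
from scipy.optimize import least_squares

RECEIVERS_BODY = np.array([[0.3, 0.2], [-0.3, 0.2], [-0.3, -0.2], [0.3, -0.2]])

def receiver_world_positions(pose):
    x, y, heading = pose
    c, s = np.cos(heading), np.sin(heading)
    return RECEIVERS_BODY @ np.array([[c, s], [-s, c]]) + np.array([x, y])

def residuals(pose, measured_bearings):
    world = receiver_world_positions(pose)
    predicted = np.arctan2(world[:, 1], world[:, 0])
    # Wrap angular differences to (-pi, pi].
    return np.arctan2(np.sin(predicted - measured_bearings),
                      np.cos(predicted - measured_bearings))

def estimate_pose(measured_bearings, initial_guess=(5.0, 5.0, 0.0)):
    return least_squares(residuals, initial_guess, args=(measured_bearings,)).x

if __name__ == "__main__":
    true_pose = np.array([4.0, 3.0, 0.3])
    world = receiver_world_positions(true_pose)
    bearings = np.arctan2(world[:, 1], world[:, 0])     # noise-free synthetic data
    print(estimate_pose(bearings))                       # ~ [4.0, 3.0, 0.3]
```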

  17. Study on the improvement of signal transmission method in automatic gamma scanner

    International Nuclear Information System (INIS)

    The industrial column is one of the most important units in the petrochemical industry, and its on-line diagnosis offers valuable information for effective maintenance and optimal operation. The vertical density profile, which can be obtained from measurement of the transmitted gamma radiation, can reveal critical clues for on-line diagnosis. In the conventional method, the radiation measurement result is transmitted as an analog signal through a 100 m long coaxial cable to the data processing unit. In this method the measurement is readily affected by electrical noise because of the long coaxial cable and the interface between the radiation circuit and the controller for mechanical operation. The radiation detection system introduced here was designed to generate a digitally modulated signal using an internal power supply system and signal processing circuits. The signal is sent by an FSK modem installed inside the radiation detection system and transmitted to the data acquisition system through a loop coil, which requires no physical contact between the rotating part and the stationary part of the column scanner. This self-powered detection system provides a good solution for automatic gamma scanning by isolating the control circuit of the mechanical system from the radiation detection circuit, which is extremely sensitive to surrounding electrical noise.

  18. Automatic Calibration Method of Voxel Size for Cone-beam 3D-CT Scanning System

    CERN Document Server

    Yang, Min; Liu, Yipeng; Men, Fanyong; Li, Xingdong; Liu, Wenli; Wei, Dongbo

    2013-01-01

    In a cone-beam three-dimensional computed tomography (3D-CT) scanning system, voxel size is an important indicator for guaranteeing the accuracy of data analysis and feature measurement based on 3D-CT images. However, the voxel size changes with the movement of the rotary table along the X-ray direction. In order to realize automatic calibration of the voxel size, a new easily implemented method is proposed. In this method, several projections of a spherical phantom are captured at different imaging positions and the corresponding voxel size values are calculated by non-linear least squares fitting. From these fitted values, a linear equation is obtained which reflects the relationship between the rotary table displacement from its nominal zero position and the voxel size. Finally, the linear equation is imported into the calibration module of the 3D-CT scanning system, and when the rotary table is moving along the X-ray direction, the accurate value of the voxel size is dynamically expo...
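
    A minimal sketch of the calibration step with invented numbers: voxel sizes obtained at a few rotary-table positions are regressed linearly against the displacement, and the resulting line is evaluated for any position.

```python
# Hedged sketch of the calibration idea: voxel sizes determined at a few
# rotary-table positions are regressed linearly against the table displacement,
# and the line is then evaluated for any position. All numbers are invented.
import numpy as np

# Rotary-table displacement from its nominal zero position (mm) and the voxel
# size obtained at each position from the sphere-phantom projections.
displacement_mm = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
voxel_size_mm = np.array([0.100, 0.108, 0.116, 0.124, 0.132])

slope, intercept = np.polyfit(displacement_mm, voxel_size_mm, deg=1)

def voxel_size_at(displacement):
    """Voxel size predicted by the calibration line for a given displacement (mm)."""
    return slope * displacement + intercept

print(voxel_size_at(35.0))   # ~0.114 mm for this made-up calibration
```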

  19. [Searching for WDMS Candidates In SDSS-DR10 With Automatic Method].

    Science.gov (United States)

    Jiang, Bin; Wang, Cheng-you; Wang, Wen-yu; Wang, Wei

    2015-05-01

    The Sloan Digital Sky Survey (SDSS) has released its latest data (DR10), which covers the first APOGEE spectra. The massive number of spectra can be used for large-sample research including the structure and evolution of the Galaxy and multi-waveband identification. In addition, the spectra are also ideal for searching for rare and special objects such as white dwarf-main-sequence (WDMS) binaries. A WDMS binary consists of a white dwarf primary and a low-mass main-sequence (MS) companion, and such systems are significant for the study of the evolution and parameters of close binaries. WDMS binaries are generally discovered by repeated imaging of the same area of sky, by measuring light curves for objects, or through photometric selection with follow-up observations. These methods require significant manual processing time, have low accuracy, and cannot satisfy real-time processing requirements. In this paper, an automatic and efficient method for searching for WDMS candidates is presented. A genetic algorithm (GA) is applied to the newly released SDSS-DR10 spectra. A total of 4 140 WDMS candidates are selected by the method, and 24 of them are new discoveries, which shows that our approach of finding special celestial bodies in massive spectral data is feasible. In addition, this method is also applicable to mining other special celestial objects in sky survey telescope data. We report the identification of 24 new WDMS binaries with spectra. A compendium of the positions, mjd, plate, and fiberid of these new discoveries is presented, which enriches the spectral library and will be useful for research on binary evolution models. PMID:26415473

  20. Single corn kernel aflatoxin B1 extraction and analysis method

    Science.gov (United States)

    Aflatoxins are highly carcinogenic compounds produced by the fungus Aspergillus flavus. Aspergillus flavus is a phytopathogenic fungus that commonly infects crops such as cotton, peanuts, and maize. The goal was to design an effective sample preparation method and analysis for the extraction of afla...

  1. Facial Feature Extraction Method Based on Coefficients of Variances

    Institute of Scientific and Technical Information of China (English)

    Feng-Xi Song; David Zhang; Cai-Kou Chen; Jing-Yu Yang

    2007-01-01

    Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) are two popular feature extraction techniques in the statistical pattern recognition field. Due to the small sample size problem, LDA cannot be directly applied to appearance-based face recognition tasks. As a consequence, many LDA-based facial feature extraction techniques have been proposed to deal with this problem. The Nullspace Method is one of the most effective among them. It tries to find a set of discriminant vectors that maximize the between-class scatter in the null space of the within-class scatter matrix. Calculating its discriminant vectors involves performing singular value decomposition on a high-dimensional matrix, which is generally memory- and time-consuming. Borrowing the key idea of the Nullspace Method and the concept of coefficient of variance from statistical analysis, we present a novel facial feature extraction method, Discriminant based on Coefficient of Variance (DCV). Experimental results on the FERET and AR face image databases demonstrate that DCV is a promising technique in comparison with Eigenfaces, the Nullspace Method, and other state-of-the-art facial feature extraction methods.
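
    The DCV algorithm itself is not specified in this abstract; the sketch below only illustrates the coefficient-of-variance idea, scoring features by their between-class variation relative to their within-class variation on toy data.

```python
# This is not the published DCV algorithm, whose details are not given in the
# abstract; it only illustrates the coefficient-of-variance idea: features
# whose between-class variation is large relative to their within-class
# variation are preferred as discriminative facial features.
import numpy as np

def coefficient_of_variation(x, eps=1e-12):
    return np.std(x, axis=0) / (np.abs(np.mean(x, axis=0)) + eps)

def cv_feature_scores(X, y):
    """Score per feature: between-class CV divided by mean within-class CV."""
    classes = np.unique(y)
    within = np.mean([coefficient_of_variation(X[y == c]) for c in classes], axis=0)
    class_means = np.array([X[y == c].mean(axis=0) for c in classes])
    between = coefficient_of_variation(class_means)
    return between / (within + 1e-12)

# Toy data: 3 classes, 20 samples each, 50 features; only the first 10 features
# actually separate the classes.
rng = np.random.default_rng(1)
X = rng.normal(5.0, 1.0, size=(60, 50))
y = np.repeat(np.arange(3), 20)
X[:, :10] += y[:, None] * 2.0
top = np.argsort(cv_feature_scores(X, y))[::-1][:5]
print("most discriminative feature indices:", top)   # expected to lie in 0..9
```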

  2. Extraction of uranium from simulated ore by the supercritical carbon dioxide fluid extraction method with nitric acid-TBP complex

    International Nuclear Information System (INIS)

    The supercritical fluid extraction (SFE) method using CO2 as a medium with an extractant of HNO3-tri-n-butyl phosphate (TBP) complex was applied to extract uranium from several uranyl phosphate compounds and simulated uranium ores. An extraction method consisting of a static extraction process and a dynamic one was established, and the effects of the experimental conditions, such as pressure, temperature, and extraction time, on the extraction of uranium were ascertained. It was found that uranium could be efficiently extracted from both the uranyl phosphates and simulated ores by the SFE method using CO2. It was thus demonstrated that the SFE method using CO2 is useful as a pretreatment method for the analysis of uranium in ores. (author)

  3. Automatic Method for Synchronizing Workpiece Frames in Twin-robot Nondestructive Testing System

    Institute of Scientific and Technical Information of China (English)

    LU Zongxing; XU Chunguang; PAN Qinxue; MENG Fanwu; LI Xinliang

    2015-01-01

    The workpiece frames relative to each robot base frame should be known in advance for the proper operation of a twin-robot nondestructive testing system. However, when the two robots are separated from the workpieces, they cannot reach the same point to complete the process of workpiece frame positioning. Thus, a new method is proposed to solve the problem of making the workpiece frames coincide. The transformation between the two robot base frames is obtained by measuring the coordinate values of three non-collinear calibration points. The relationship between the workpiece frame and the slave robot base frame is then determined from the known transformation between the two robot base frames and the relationship between the workpiece frame and the master robot base frame. Only one robot is required to actually measure the coordinate values of the calibration points on the workpiece, which is beneficial when one of the robots cannot reach and measure the calibration points. The coordinate values of the calibration points are obtained by driving the robot hand to the points and recording the tool center point (TCP) coordinates. The translation and rotation matrices relating either the two robot base frames or the workpiece and master robot frames are solved from the measured calibration point values according to the Cartesian transformation principle. An optimization method based on the exponential mapping of a Lie algebra is developed to ensure that the rotation matrix is orthogonal. Experimental results show that this method involves fewer steps and offers significant advantages in terms of operation and time saving. A method to automatically synchronize workpiece frames in a twin-robot system is thus presented.
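
    A hedged sketch of the base-frame registration step from three non-collinear calibration points; the paper enforces orthogonality of the rotation through a Lie-algebra exponential map, whereas this sketch meets the same constraint with the standard SVD (Kabsch) solution and invented point values.

```python
# Hedged sketch of the base-frame registration step: the rigid transform
# between the two robot base frames is recovered from three non-collinear
# calibration points measured in both frames. The paper enforces an orthogonal
# rotation via a Lie-algebra exponential map; this sketch meets the same
# constraint with the standard SVD (Kabsch) solution instead. Point values
# are invented.
import numpy as np

def rigid_transform(points_master, points_slave):
    """Return R, t with points_slave ~ R @ points_master + t (N x 3 arrays)."""
    cm, cs = points_master.mean(axis=0), points_slave.mean(axis=0)
    H = (points_master - cm).T @ (points_slave - cs)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cs - R @ cm
    return R, t

# Three non-collinear calibration points expressed in each base frame.
P_master = np.array([[0.5, 0.0, 0.2], [0.8, 0.3, 0.2], [0.5, 0.6, 0.5]])
true_R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
P_slave = (true_R @ P_master.T).T + np.array([1.0, 2.0, 0.1])
R, t = rigid_transform(P_master, P_slave)
print(np.round(R, 3), np.round(t, 3))
```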

  4. A HYBRID METHOD FOR AUTOMATIC SPEECH RECOGNITION PERFORMANCE IMPROVEMENT IN REAL WORLD NOISY ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Urmila Shrawankar

    2013-01-01

    Full Text Available It is a well-known fact that speech recognition systems perform well when they are used in conditions similar to those used to train the acoustic models; mismatches degrade performance. In adverse environments with real-world environmental noise it is very difficult to predict the category of noise in advance, and hence difficult to achieve environmental robustness. A rigorous experimental study shows that no single method is available that will both clean speech corrupted by real, natural, environmental (mixed) noise and preserve its quality. It is also observed that back-end techniques alone are not sufficient to improve the performance of a speech recognition system; performance improvement techniques must be implemented at every step of the back-end as well as the front-end of the Automatic Speech Recognition (ASR) model. Current recognition systems address this problem using a technique called adaptation. This paper presents an experimental study with two aims. The first is to implement a hybrid method that cleans the speech signal as much as possible with combinations of filters and enhancement techniques. The second is to develop a method for training on all categories of noise that can adapt the acoustic models to a new environment, which helps to improve the performance of the speech recognizer under real-world mismatched environmental conditions. The experiments confirm that hybrid adaptation methods improve ASR performance on both levels: signal-to-noise ratio (SNR) improvement as well as word recognition accuracy in a real-world noisy environment.

  5. A Comparison of DNA Extraction Methods using Petunia hybrida Tissues

    OpenAIRE

    Tamari, Farshad; Hinkley, Craig S.; Ramprashad, Naderia

    2013-01-01

    Extraction of DNA from plant tissue is often problematic, as many plants contain high levels of secondary metabolites that can interfere with downstream applications, such as the PCR. Removal of these secondary metabolites usually requires further purification of the DNA using organic solvents or other toxic substances. In this study, we have compared two methods of DNA purification: the cetyltrimethylammonium bromide (CTAB) method that uses the ionic detergent hexadecyltrimethylammonium brom...

  6. Rapid method to extract DNA from Cryptococcus neoformans.

    OpenAIRE

    Varma, A.; Kwon-Chung, K. J.

    1991-01-01

    A rapid and easy method for the extraction of total cellular DNA from Cryptococcus neoformans is described. This procedure modifies and considerably simplifies previously reported methods. Numerous steps were either eliminated or replaced, including preincubations with cell wall permeability agents such as beta-mercaptoethanol and dithiothreitol. The commercially available enzyme preparation Novozyme 234 was found to contain a potent concentration of DNases which actively degrade DNA. Degrada...

  7. Automatic Enhancement of the Reference Set for Multi-Criteria Sorting in The Frame of Theseus Method

    Directory of Open Access Journals (Sweden)

    Fernandez Eduardo

    2014-05-01

    Full Text Available Some recent works have established the importance of handling abundant reference information in multi-criteria sorting problems. More valid information allows a better characterization of the agent's assignment policy, which can lead to improved decision support. However, sometimes information for enhancing the reference set may not be available, or may be too expensive. This paper explores an automatic mode of enhancing the reference set in the framework of the THESEUS multi-criteria sorting method. Some performance measures are defined in order to test the results of the enhancement. Several theoretical arguments and practical experiments are provided here, supporting a basic advantage of the automatic enhancement: a reduction of the vagueness measure that improves the THESEUS accuracy, without additional effort from the decision agent. The experiments suggest that the errors coming from inadequate automatic assignments can be kept at a manageable level.

  8. Automatic Mapping Extraction from Multiecho T2-Star Weighted Magnetic Resonance Images for Improving Morphological Evaluations in Human Brain

    Directory of Open Access Journals (Sweden)

    Shaode Yu

    2013-01-01

    Full Text Available Mapping extraction is useful in medical image analysis. Similarity coefficient mapping (SCM) replaced the signal response to the time course used in tissue similarity mapping with the signal response to TE changes in multiecho T2-star weighted magnetic resonance imaging without contrast agent. Since different tissues have different sensitivities to reference signals, a new algorithm is proposed that adds a sensitivity index to SCM. It generates two mappings: one measures relative signal strength (SSM) and the other depicts fluctuation magnitude (FMM). Meanwhile, the new method adaptively generates a proper reference signal by maximizing the sum of the contrast index (CI) from SSM and FMM without manual delineation. Based on four groups of images from multiecho T2-star weighted magnetic resonance imaging, the capacity of SSM and FMM to enhance image contrast and morphological evaluation is validated. The average contrast improvement index (CII) of SSM is 1.57, 1.38, 1.34, and 1.41; the average CII of FMM is 2.42, 2.30, 2.24, and 2.35. Visual analysis of regions of interest demonstrates that SSM and FMM show better morphological structures than the original images, T2-star mapping, and SCM. These extracted mappings can be further applied in information fusion, signal investigation, and tissue segmentation.
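
    The SSM and FMM formulas are not given in the abstract; the sketch below only shows the underlying SCM-style operation of correlating each voxel's multiecho signal over TE with a reference signal, on an invented decay model.

```python
# The SSM/FMM formulas are not given in this abstract, so this sketch only
# shows the underlying SCM-style operation: correlating each voxel's multiecho
# T2-star weighted signal (over TE) with a chosen reference signal. The toy
# decay model below is invented.
import numpy as np

def similarity_coefficient_map(echoes, reference):
    """echoes: (n_te, H, W) stack over echo times; reference: (n_te,) signal."""
    v = echoes - echoes.mean(axis=0)                 # de-mean over TE per voxel
    r = reference - reference.mean()
    num = np.tensordot(r, v, axes=(0, 0))
    den = np.sqrt((v ** 2).sum(axis=0)) * np.linalg.norm(r) + 1e-12
    return num / den                                 # Pearson r per voxel

# Toy stack: 8 echoes of a 64 x 64 image with spatially varying T2-star decay.
te = np.linspace(5, 40, 8)[:, None, None]                          # ms
t2star_map = np.random.default_rng(2).uniform(10, 60, (64, 64))    # ms
echoes = np.exp(-te / t2star_map)
reference = np.exp(-te[:, 0, 0] / 30.0)              # reference decay curve
scm = similarity_coefficient_map(echoes, reference)
print(scm.shape, float(scm.max()))
```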

  9. Automatic extraction of corpus callosum from midsagittal head MR image and examination of Alzheimer-type dementia objective diagnostic system in feature analysis

    International Nuclear Information System (INIS)

    We studied the objective diagnosis of Alzheimer-type dementia based on changes in the corpus callosum. We examined midsagittal head MR images of 40 Alzheimer-type dementia patients (15 men and 25 women; mean age, 75.4±5.5 years) and 31 healthy elderly persons (10 men and 21 women; mean age, 73.4±7.5 years), 71 subjects altogether. First, the corpus callosum was automatically extracted from the midsagittal head MR images. Next, the Alzheimer-type dementia patients were compared with the healthy elderly individuals using shape-factor features and six co-occurrence matrix features of the corpus callosum. Automatic extraction of the corpus callosum succeeded in 64 of the 71 individuals, an extraction rate of 90.1%. A statistically significant difference was found between the Alzheimer-type dementia patients and the healthy elderly adults in 7 of the 9 features. Discriminant analysis using these 7 features demonstrated a sensitivity of 82.4%, a specificity of 89.3%, and an overall accuracy of 85.5%. These results indicate the possibility of an objective diagnostic system for Alzheimer-type dementia using feature analysis based on changes in the corpus callosum. (author)
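
    A hedged sketch of the feature computation, assuming scikit-image (version 0.19 or later, which spells the functions graycomatrix/graycoprops): six co-occurrence-matrix texture features plus a simple circularity-style shape factor for an extracted corpus callosum mask.

```python
# Hedged sketch assuming scikit-image >= 0.19 (which spells these functions
# graycomatrix/graycoprops): six co-occurrence-matrix texture features plus a
# simple circularity-style shape factor computed for an extracted corpus
# callosum region, along the lines of the feature set described above. The GLCM
# is computed on the region's bounding-box patch, a simplification.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_and_shape_features(image_u8, mask):
    """image_u8: 2D uint8 midsagittal MR slice; mask: boolean corpus callosum mask."""
    rows, cols = np.nonzero(mask)
    patch = image_u8[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy",
             "correlation", "ASM"]
    texture = [float(graycoprops(glcm, p)[0, 0]) for p in props]

    # Shape factor: 4*pi*area / perimeter^2 (1.0 for a perfect circle).
    area = mask.sum()
    interior = (np.roll(mask, 1, 0) & np.roll(mask, -1, 0) &
                np.roll(mask, 1, 1) & np.roll(mask, -1, 1))
    perimeter = (mask & ~interior).sum()
    shape_factor = 4 * np.pi * area / (perimeter ** 2 + 1e-12)
    return np.array(texture + [shape_factor])
```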

  10. Fast Marching and Runge-Kutta Based Method for Centreline Extraction of Right Coronary Artery in Human Patients.

    Science.gov (United States)

    Cui, Hengfei; Wang, Desheng; Wan, Min; Zhang, Jun-Mei; Zhao, Xiaodan; Tan, Ru San; Huang, Weimin; Xiong, Wei; Duan, Yuping; Zhou, Jiayin; Luo, Tong; Kassab, Ghassan S; Zhong, Liang

    2016-06-01

    CT angiography (CTA) is a clinically indicated test for the assessment of coronary luminal stenosis that requires centerline extraction. There is currently no centerline extraction algorithm that is automatic, real-time, and very accurate. Therefore, we sought to (i) develop a hybrid approach incorporating fast marching and Runge-Kutta based methods for the extraction of coronary artery centerlines from CTA; (ii) evaluate the accuracy of the present method compared to Van's method, using the ground truth centerline as a reference; (iii) evaluate the coronary lumen area obtained with our centerline method in comparison with intravascular ultrasound (IVUS) as the standard of reference. The proposed method was found to be more computationally efficient and performed better than Van's method in terms of overlap measures (i.e., OV: [Formula: see text] vs. [Formula: see text]; OF: [Formula: see text] vs. [Formula: see text]; and OT: [Formula: see text] vs. [Formula: see text], all [Formula: see text]). In comparison with the IVUS-derived coronary lumen area, the proposed approach was more accurate than Van's method. This hybrid approach incorporating fast marching and Runge-Kutta based methods offers fast and accurate extraction of the centerline as well as the lumen area, and may garner wider clinical potential as a real-time coronary stenosis assessment tool. PMID:27140197

  11. Research of Anti-Noise Image Salient Region Extraction Method

    Directory of Open Access Journals (Sweden)

    Bing XU

    2014-01-01

    Full Text Available Existing image salient region extraction techniques are mostly suited to processing noise-free images, and studies on the impact of noise are lacking. In this study an adaptive kernel function was employed for image salient region detection. The saliency of a region was determined by the dissimilarities between the pixels of the image region and its surroundings, and the dissimilarity was measured as a decreasing function associated with adaptive kernel regression. The proposed algorithm used a multi-scale fusion method to obtain the salient regions of the whole image. As the adaptive kernel function has strong anti-noise characteristics, the proposed algorithm exhibits the same robustness. A numerical simulation experiment was conducted on salient region extraction from images with and without noise. A comparison between this study's results and two existing salient region extraction methods revealed that the proposed method was superior in the extraction accuracy of image salient regions and could reduce the interference of image noise.

  12. New Multipole Method for 3-D Capacitance Extraction

    Institute of Scientific and Technical Information of China (English)

    Zhao-Zhi Yang; Ze-Yi Wang

    2004-01-01

    This paper describes an efficient improvement of the multipole-accelerated boundary element method for 3-D capacitance extraction. The overall relations between the positions of 2-D boundary elements are considered instead of only the relations between the center points of the elements, and a new method of cube partitioning is introduced. Numerical results are presented to demonstrate that the method is accurate and has nearly linear computational growth, O(n), where n is the number of panels/boundary elements. The proposed method is more accurate and much faster than Fastcap.

  13. Photoplethysmography-Based Method for Automatic Detection of Premature Ventricular Contractions.

    Science.gov (United States)

    Solosenko, Andrius; Petrenas, Andrius; Marozas, Vaidotas

    2015-10-01

    This work introduces a method for the detection of premature ventricular contractions (PVCs) in the photoplethysmogram (PPG). The method relies on 6 features characterising PPG pulse power and peak-to-peak intervals. A sliding window approach is applied to extract the features, which are then normalized with respect to an estimated heart rate. An artificial neural network with either linear or non-linear outputs was investigated as a feature classifier. PhysioNet databases, namely the MIMIC II and the MIMIC, were used for training and testing, respectively. After annotating the PPGs with respect to a synchronously recorded electrocardiogram, two main types of PVC were distinguished: with and without an observable PPG pulse. The obtained sensitivity and specificity values for the two PVC types were 92.4/99.9% and 93.2/99.9%, respectively. The high classification results achieved form a basis for reliable PVC detection using a less obtrusive approach than electrocardiography-based detection methods. PMID:26513800
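
    The paper's six features are not reproduced here; the sketch below only mirrors the pipeline shape, extracting crude interval and amplitude statistics from a PPG window, normalising them by an estimated heart rate, and training a small scikit-learn neural network.

```python
# Hedged sketch of the pipeline shape only; the paper's exact six features are
# not reproduced here. Crude stand-in features from a sliding PPG window are
# normalised by an estimated heart rate and classified with a small neural
# network (scikit-learn in place of the paper's network).
import numpy as np
from scipy.signal import find_peaks
from sklearn.neural_network import MLPClassifier

def window_features(ppg, fs):
    """Six stand-in features from one PPG window: interval and amplitude statistics."""
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs), prominence=0.1)
    if len(peaks) < 3:
        return None
    intervals = np.diff(peaks) / fs                      # seconds
    heart_rate = 60.0 / np.median(intervals)
    amplitudes = ppg[peaks]
    feats = np.array([intervals.min(), intervals.max(), intervals.std(),
                      amplitudes.min(), amplitudes.std(), np.mean(ppg ** 2)])
    return feats / heart_rate                            # heart-rate normalisation

def train_pvc_detector(windows, labels, fs):
    """windows: iterable of 1-D PPG segments; labels: 1 = contains a PVC, 0 = normal."""
    X, y = [], []
    for w, lab in zip(windows, labels):
        f = window_features(w, fs)
        if f is not None:
            X.append(f)
            y.append(lab)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(np.array(X), np.array(y))
    return clf
```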

  14. Automatic leveling procedure by use of the spring method in measurement of three-dimensional surface roughness

    Science.gov (United States)

    Kurokawa, Syuhei; Ariura, Yasutsune; Yamamoto, Tatsuyuki

    2008-12-01

    Leveling of specimen surfaces is very important in the measurement of surface roughness. If the surface is not leveled, the measured roughness is strongly distorted and the vertical measurement range is reduced. It is convenient to use an automatic leveling procedure instead of manual leveling, which needs a longer adjustment time. For automatic leveling, a new algorithm named the spring method is proposed, which is superior to the least squares method. The spring method has the advantage that only a subset of tentative data points is used to calculate the surface inclination, so the results are less influenced by, for example, local pits. As examples, the spring method was applied to actual engineered surfaces (milled, shot-peened, and ground surfaces) and to an artificial ditched surface. The surface inclinations were calculated well, and consequently the specimen surfaces were leveled with less distortion and a large vertical measurement range could be achieved. It is also found that the least squares method is a special case of the spring method in which all sampling data points are used; that is, the spring method is a comprehensive procedure that includes the least squares method. This should be a very strong and robust automatic leveling algorithm.
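
    The sketch below shows only the least squares special case (a best-fit plane removed from the height map), not the spring method proper; the toy surface includes a local pit, the kind of artefact that biases plain least squares and motivates the spring method.

```python
# Sketch of the least squares special case only; the spring method proper,
# which uses a subset of tentative data points, is not reproduced here. A
# best-fit plane is removed from the measured height map to level the surface.
# The toy surface includes a local pit, the kind of artefact that biases plain
# least squares and motivates the spring method.
import numpy as np

def level_surface(z):
    """Subtract the least-squares plane from a 2-D height map z."""
    ny, nx = z.shape
    xx, yy = np.meshgrid(np.arange(nx), np.arange(ny))
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(z.size)])
    coeffs, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    return z - (A @ coeffs).reshape(z.shape)

rng = np.random.default_rng(3)
z = 0.02 * np.arange(256)[None, :] + 0.01 * np.arange(256)[:, None]   # tilt
z = z + rng.normal(0, 0.05, (256, 256))                                # roughness
z[100:110, 100:110] -= 2.0                                             # local pit
print(level_surface(z).std())
```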

  15. Spoken Language Identification Using Hybrid Feature Extraction Methods

    CERN Document Server

    Kumar, Pawan; Mishra, A N; Chandra, Mahesh

    2010-01-01

    This paper introduces and motivates the use of hybrid robust feature extraction techniques for a spoken language identification (LID) system. Speech recognizers use a parametric form of the signal to obtain the most important distinguishing features of the speech signal for the recognition task. In this paper Mel-frequency cepstral coefficients (MFCC) and perceptual linear prediction coefficients (PLP), along with two hybrid features, are used for language identification. The two hybrid features, Bark frequency cepstral coefficients (BFCC) and revised perceptual linear prediction coefficients (RPLP), were obtained from combinations of MFCC and PLP. Two different classifiers, vector quantization (VQ) with dynamic time warping (DTW) and a Gaussian mixture model (GMM), were used for classification. The experiments show a better identification rate using hybrid feature extraction techniques compared to conventional feature extraction methods. BFCC has shown better performance than MFCC with both classifiers. RPLP along with GMM has shown be...

  16. Gaharu oil processing: gaharu oil from conventional extraction method

    International Nuclear Information System (INIS)

    Gaharu oil is extracted through water or steam distillation of gaharu wood powder. Gaharu oil can fetch prices ranging from RM 25,000 to RM 50,000 per kg, depending on the quality or grade of gaharu wood used to produce the oil. The oil is commonly exported to the Middle East and customarily used as a perfume base. This paper describes the traditional gaharu oil extraction technique commonly practiced by gaharu entrepreneurs in Malaysia. Gaharu wood is initially chopped, dried, and ground into powder form. The gaharu wood powder is then soaked in water for a week. After the soaking process, the fermented powder is distilled with water using a special distiller for 4 to 10 days, depending on the quality of the gaharu wood used in the extraction process. (Author)

  17. Analytical methods and problems for the diamides type of extractants

    International Nuclear Information System (INIS)

    Diamides of carboxylic acids, and especially malonamides, are able to extract alpha emitters (including trivalent ions such as Am and Cm) contained in the waste solutions of the nuclear industry. As they are completely incinerable and easy to purify, they could be an alternative to the CMPO-TBP mixture used in the TRUEX process. A large oxyalkyl radical enhances the distribution coefficients of americium in nitric acid sufficiently to permit the decontamination of waste solutions in a classical mixer-settler battery. Research is now being pursued with the aim of optimizing the extractant formula; the influence of the extractant structure on its basicity and on its stability under radiolysis and hydrolysis is being investigated. Analytical methods (potentiometry and 13C NMR) have been developed for solvent titration, for evaluating the degree of degradation, and for identifying some of the degradation products.

  18. Automatic disease screening method using image processing for dried blood microfluidic drop stain pattern recognition.

    Science.gov (United States)

    Sikarwar, Basant S; Roy, Mukesh; Ranjan, Priya; Goyal, Ayush

    2016-07-01

    This paper examines automatic recognition of infection from samples of dried stains of micro-scale drops of patient blood. This technique has the advantage of being low-cost and less intrusive, and of not requiring the patient to be punctured with a needle to draw blood, which is especially important for infants and the aged. It also does not require expensive pathological blood test laboratory equipment. The method is shown in this work to be successful for disease identification in patients suffering from tuberculosis and anaemia. Illness affects the physical properties of blood, which in turn influence the patterns of dried micro-scale blood drop stains. For instance, if a patient has a severe drop in platelet count, which is often the case for dengue or malaria patients, the viscosity of the blood drops substantially, i.e. the blood is thinner. Thus, micro-scale blood drop stain samples can be used for diagnosing maladies. This paper presents automatic examination of dried micro-scale blood drop stain patterns using an algorithm based on pattern recognition. The stain samples of ordinary non-infected people are clearly distinguishable from those of sick people owing to key distinguishing features. As a case study, the micro-scale blood drop stains of patients infected with tuberculosis were contrasted with those of normal healthy people. The paper also examines the fundamental flow mechanics behind how the dried micro-scale blood drop stains are shaped. A thick ring-like feature is found in the dried micro-scale blood drop stains of healthy people, and thin line-like features in those of patients with anaemia or tuberculosis. The ring-like feature at the periphery is caused by an outward stream conveying suspended particles to the edge.

  19. Self-organizing criticality and the method of automatic search of critical points

    International Nuclear Information System (INIS)

    We discuss the method of automatic search of critical points (MASCP) in the context of self-organizing criticality (SOC). The system analyzed is a contact process that presents a non-equilibrium phase transition between two states: an active state and an inactive state (the so-called absorbing state). The lattice sites represent infected and healthy individuals. We apply the MASCP technique to the propagation of an epidemic in a one-dimensional lattice at criticality (space domain), and use it to study SOC behavior. The time series of the density of infected individuals is analyzed using two complementary tools: Fourier analysis and detrended fluctuation analysis. We find numerical evidence that the time evolution that drives the system to the critical point in MASCP is not a SOC problem but Gaussian noise. A SOC problem is characterized by an interaction-dominated system that goes spontaneously to the critical point. In fact MASCP goes by itself to a stationary point, but it is not an interaction-dominated process; it is a mean-field interaction process.
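
    As a hedged illustration of one of the two tools named above, the sketch below computes a detrended fluctuation analysis (DFA) exponent for a surrogate series; an exponent near 0.5 indicates uncorrelated, Gaussian-noise-like fluctuations, consistent with the finding reported here.

```python
# Hedged sketch of detrended fluctuation analysis (DFA), one of the two tools
# named above, applied to a surrogate series; a scaling exponent alpha near 0.5
# indicates uncorrelated, Gaussian-noise-like fluctuations, as reported for
# MASCP. The scale choices are arbitrary.
import numpy as np

def dfa_exponent(x, scales=(16, 32, 64, 128, 256)):
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    fluctuations = []
    for s in scales:
        n_seg = len(y) // s
        segments = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        f2 = []
        for seg in segments:
            a, b = np.polyfit(t, seg, 1)           # local linear detrending
            f2.append(np.mean((seg - (a * t + b)) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    alpha, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return alpha

rng = np.random.default_rng(4)
print(dfa_exponent(rng.normal(size=4096)))         # white noise -> alpha ~ 0.5
```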

  20. A method for automatic matching of multi-timepoint findings for enhanced clinical workflow

    Science.gov (United States)

    Raghupathi, Laks; Dinesh, MS; Devarakota, Pandu R.; Valadez, Gerardo Hermosillo; Wolf, Matthias

    2013-03-01

    Non-interventional diagnostics (CT or MR) enables early identification of diseases like cancer. Lesion growth assessment during follow-up is often used to distinguish between benign and malignant lesions, so correspondences need to be found for the lesions localized at each time point. Manually matching the radiological findings can be time consuming and tedious owing to possible differences in orientation and position between scans. Moreover, the complicated nature of the disease makes physicians rely on multiple modalities (PET-CT, PET-MR), where matching is even more challenging. Here, we propose an automatic feature-based matching that is robust to changes in organ volume and to subpar or absent registration, and that requires very little computation. Traditional matching methods rely mostly on accurate image registration followed by applying the resulting deformation map to the finding coordinates. This is disadvantageous when accurate registration is time-consuming or not possible because of large differences in organ volume between scans. Our matching instead uses supervised learning, taking advantage of the underlying CAD features that are already present and treating the matching as a classification problem. In addition, the matching can be done extremely fast and at reasonable accuracy even when the image registration fails for some reason. Experimental results on real-world multi-time-point thoracic CT data showed an accuracy of above 90% with negligible false positives on a variety of registration scenarios.