WorldWideScience

Sample records for automatically extracting molecular

  1. Challenges for automatically extracting molecular interactions from full-text articles.

    Science.gov (United States)

    McIntosh, Tara; Curran, James R

    2009-09-24

    The increasing availability of full-text biomedical articles will allow more biomedical knowledge to be extracted automatically with greater reliability. However, most Information Retrieval (IR) and Information Extraction (IE) tools currently process only abstracts. The lack of corpora has limited the development of tools that are capable of exploiting the knowledge in full-text articles. As a result, there has been little investigation into the advantages of full-text document structure, and the challenges developers will face in processing full-text articles. We manually annotated passages from full-text articles that describe interactions summarised in a Molecular Interaction Map (MIM). Our corpus tracks the process of identifying facts to form the MIM summaries and captures any factual dependencies that must be resolved to extract the fact completely. For example, a fact in the results section may require a synonym defined in the introduction. The passages are also annotated with negated and coreference expressions that must be resolved. We describe the guidelines for identifying relevant passages and possible dependencies. The corpus includes 2162 sentences from 78 full-text articles. Our corpus analysis demonstrates the necessity of full-text processing; identifies the article sections where interactions are most commonly stated; and quantifies the proportion of interaction statements requiring coherent dependencies. Further, it allows us to report on the relative importance of identifying synonyms and resolving negated expressions. We also experiment with an oracle sentence retrieval system using the corpus as a gold-standard evaluation set. We introduce the MIM corpus, a unique resource that maps interaction facts in a MIM to annotated passages within full-text articles. It is an invaluable case study providing guidance to developers of biomedical IR and IE systems, and can be used as a gold-standard evaluation set for full-text IR tasks.

  2. Automatic River Network Extraction from LIDAR Data

    Science.gov (United States)

    Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.

    2016-06-01

    The National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) hydrography theme. The goal is to obtain an accurate and up-to-date river network, extracted as automatically as possible. To this end, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: generation of hydrological terrain models with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network); production was then launched. The key aspects of this work have been managing a big data environment of more than 160,000 LiDAR data files; the infrastructure to store (up to 40 TB between results and intermediate files) and process the data, using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months; software stability (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri); and human resources management. The result of this production has been an accurate automatic river network extraction for the whole country, with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.
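
    As a minimal illustration of the hydrological criterion mentioned above (a flow accumulation network derived from a terrain model), the sketch below computes D8 flow accumulation over a small elevation grid with NumPy and thresholds it into a channel mask. It is a generic textbook algorithm, not IGN-ES's production code, and the threshold value is illustrative.

        import numpy as np

        def d8_flow_accumulation(dem):
            """Each cell drains to its steepest downslope neighbour (D8)."""
            rows, cols = dem.shape
            acc = np.ones(dem.shape)                      # each cell contributes itself
            neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                          (0, 1), (1, -1), (1, 0), (1, 1)]
            for idx in np.argsort(dem, axis=None)[::-1]:  # process highest cells first
                r, c = divmod(idx, cols)
                best_drop, target = 0.0, None
                for dr, dc in neighbours:
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                        if drop > best_drop:
                            best_drop, target = drop, (rr, cc)
                if target is not None:                    # pass accumulated flow downslope
                    acc[target] += acc[r, c]
            return acc

        dem = np.random.rand(50, 50) + np.linspace(2, 0, 50)[:, None]  # tilted noisy surface
        channels = d8_flow_accumulation(dem) > 30         # illustrative channel threshold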

  3. AUTOMATIC RIVER NETWORK EXTRACTION FROM LIDAR DATA

    Directory of Open Access Journals (Sweden)

    E. N. Maderal

    2016-06-01

    The National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) hydrography theme. The goal is to obtain an accurate and up-to-date river network, extracted as automatically as possible. To this end, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: generation of hydrological terrain models with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network); production was then launched. The key aspects of this work have been managing a big data environment of more than 160,000 LiDAR data files; the infrastructure to store (up to 40 TB between results and intermediate files) and process the data, using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months; software stability (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri); and human resources management. The result of this production has been an accurate automatic river network extraction for the whole country, with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.

  4. Automatic fault extraction using a modified ant-colony algorithm

    International Nuclear Information System (INIS)

    Zhao, Junsheng; Sun, Sam Zandong

    2013-01-01

    Automatic fault extraction is based on seismic attributes, such as the coherence cube, in which a fault is typically identified by minimum values. The biggest challenge in automatic fault extraction is noise in the seismic data. However, a fault has better spatial continuity in a certain direction, which makes it quite different from noise. Considering this characteristic, a modified ant-colony algorithm is introduced into automatic fault identification and tracking, where the gradient direction and direction consistency are used as constraints. Numerical model tests show that this method is feasible and effective for automatic fault extraction and noise suppression. Application to field data further illustrates its validity and superiority. (paper)

  5. Automatic Knowledge Extraction and Knowledge Structuring for a National Term Bank

    DEFF Research Database (Denmark)

    Lassen, Tine; Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2011-01-01

    This paper gives an introduction to the plans and ongoing work in a project, the aim of which is to develop methods for automatic knowledge extraction and automatic construction and updating of ontologies. The project also aims at developing methods for automatic merging of terminological data from various existing sources, as well as methods for target group oriented knowledge dissemination. In this paper, we mainly focus on the plans for automatic knowledge extraction and knowledge structuring that will result in ontologies for a national term bank.

  6. Automatic extraction of legal concepts and definitions

    NARCIS (Netherlands)

    Winkels, R.; Hoekstra, R.

    2012-01-01

    In this paper we present the results of an experiment in automatic concept and definition extraction from written sources of law using relatively simple natural language and standard semantic web technology. The software was tested on six laws from the tax domain.

  7. Automatic Contour Extraction from 2D Image

    Directory of Open Access Journals (Sweden)

    Panagiotis GIOANNIS

    2011-03-01

    Aim: To develop a method for automatic contour extraction from a 2D image. Material and Method: The method is divided into two basic parts, where the user initially chooses the starting point and the threshold. Finally, the method is applied to computed tomography images of bone. Results: An interesting method is developed which can lead to successful boundary extraction from 2D images. Specifically, data extracted from computed tomography images can be used for 2D bone reconstruction. Conclusions: We believe that such an algorithm, or part of it, can be applied to several other applications for shape feature extraction in medical image analysis and in computer graphics generally.
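
    The two user inputs named in the abstract (a starting point and a threshold) map naturally onto iso-contour extraction. The sketch below is a hedged reimplementation of that idea with scikit-image: it traces all contours at the chosen threshold and keeps the one nearest the seed. The paper's exact algorithm is not specified here, so this is only an analogous approach.

        import numpy as np
        from skimage import measure

        def extract_contour(image, seed_rc, threshold):
            # all iso-contours at the user-chosen grey-level threshold
            contours = measure.find_contours(image, level=threshold)
            seed = np.asarray(seed_rc, dtype=float)
            # keep the contour whose centroid lies closest to the starting point
            return min(contours, key=lambda c: np.linalg.norm(c.mean(axis=0) - seed))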

  8. Automatic sentence extraction for the detection of scientific paper relations

    Science.gov (United States)

    Sibaroni, Y.; Prasetiyowati, S. S.; Miftachudin, M.

    2018-03-01

    The relations between scientific papers are very useful for researchers who want to see the interconnections between papers quickly. By observing inter-article relationships, researchers can identify, among other things, the weaknesses of existing research, the performance improvements achieved to date, and the tools or data typically used in research in specific fields. So far, the methods developed to detect paper relations include machine learning and rule-based methods. However, a problem remains in the process of sentence extraction from scientific paper documents, which is still done manually. This manual process makes the detection of paper relations slow and inefficient. To overcome this problem, this study performs automatic sentence extraction, while the paper relations are identified based on the citation sentences. The performance of the built system is then compared with that of manual extraction. The analysis results suggest that automatic sentence extraction achieves a very high level of performance in the detection of paper relations, close to that of manual sentence extraction.
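
    Since the relations are identified from citation sentences, the core of the automatic step is recognising which sentences carry citations. A minimal hedged sketch in Python (the citation markers and the sentence splitter are simplifications, not the authors' implementation):

        import re

        CITATION = re.compile(r"\[\d+\]|\([A-Z][a-z]+(?: et al\.)?,? \d{4}\)")

        def citation_sentences(text):
            # naive sentence split; assumes well-formed prose without abbreviations
            sentences = re.split(r"(?<=[.!?])\s+", text)
            return [s for s in sentences if CITATION.search(s)]

        print(citation_sentences("RAKE works well [3]. We build on it. See (Smith et al., 2015)."))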

  9. A Risk Assessment System with Automatic Extraction of Event Types

    Science.gov (United States)

    Capet, Philippe; Delavallade, Thomas; Nakamura, Takuya; Sandor, Agnes; Tarsitano, Cedric; Voyatzi, Stavroula

    In this article we describe the joint effort of experts in linguistics, information extraction and risk assessment to integrate EventSpotter, an automatic event extraction engine, into ADAC, an automated early warning system. By detecting weak signals of emerging risks as early as possible, ADAC provides a dynamic synthetic picture of situations involving risk. The ADAC system calculates risk on the basis of fuzzy logic rules operated on a template graph whose leaves are event types. EventSpotter is based on a general-purpose natural language dependency parser, XIP, enhanced with domain-specific lexical resources (Lexicon-Grammar). Its role is to automatically feed the leaves with input data.

  10. Automatic extraction of left ventricle in SPECT myocardial perfusion imaging

    International Nuclear Information System (INIS)

    Liu Li; Zhao Shujun; Yao Zhiming; Wang Daoyu

    1999-01-01

    An automatic method for extracting the left ventricle from SPECT myocardial perfusion data is introduced. The method is based on least-squares analysis of the positions of all short-axis slice pixels against a half sphere-cylinder myocardial model, and uses an iterative reconstruction technique to automatically remove non-left-ventricular tissue from the perfusion images. This technique thereby provides the basis for further quantitative analysis.

  11. Study on automatic control of high uranium concentration solvent extraction with pulse sieve-plate column

    International Nuclear Information System (INIS)

    You Wenzhi; Xing Guangxuan; Long Maoxiong; Zhang Jianmin; Zhou Qin; Chen Fuping; Ye Lingfeng

    1998-01-01

    The authors describe the operation of the automatic control system for high uranium concentration solvent extraction with a pulse sieve-plate column in a large-scale test. The automatic instrumentation and meters, the automatic control circuit, and the best feedback control point of the solvent extraction process with a pulse sieve-plate column are discussed in detail. The authors point out the success of this experiment in automation, and also present some questions concerning automatic control, instruments and meters that should be addressed in future production.

  12. Automatic extraction of forward stroke volume using dynamic PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik

    Background: Dynamic PET can be used to extract forward stroke volume (FSV) by the indicator dilution principle. The technique employed can be automated, is in theory independent of the tracer used, and may therefore be added to any dynamic cardiac PET protocol. The aim of this study was to validate automated methods for extracting FSV directly from dynamic PET studies for two different tracers and to examine potential scanner hardware bias. Methods: 21 subjects underwent a dynamic 27 min 11C-acetate PET scan on a Siemens Biograph TruePoint 64 PET/CT scanner (scanner I). In addition, 8 subjects underwent a dynamic 6 min 15O-water PET scan followed by a 27 min 11C-acetate PET scan on a GE Discovery ST PET/CT scanner (scanner II). The LV-aortic time-activity curve (TAC) was extracted automatically from dynamic PET data using cluster analysis. The first-pass peak was isolated by automatic extrapolation of the downslope of the TAC.

  13. An enhanced model for automatically extracting topic phrase from ...

    African Journals Online (AJOL)

    The key benefit foreseen from this automatic document classification is not only related to search engines, but also to many other fields, such as document organization, text filtering and semantic index managing. Key words: Keyphrase extraction, machine learning, search engine snippet, document classification, topic tracking ...

  14. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    OpenAIRE

    Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S.; Xian, Xuefeng; Wu, Jian; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base.

  15. An automatic rat brain extraction method based on a deformable surface model.

    Science.gov (United States)

    Li, Jiehua; Liu, Xiaofeng; Zhuo, Jiachen; Gullapalli, Rao P; Zara, Jason M

    2013-08-15

    The extraction of the brain from the skull in medical images is a necessary first step before image registration or segmentation. While pre-clinical MR imaging studies on small animals, such as rats, are increasing, fully automatic image processing techniques specific to small animal studies remain lacking. In this paper, we present an automatic rat brain extraction method, the Rat Brain Deformable model method (RBD), which adapts the popular human brain extraction tool (BET) through the incorporation of information on the brain geometry and MR image characteristics of the rat brain. The robustness of the method was demonstrated on T2-weighted MR images of 64 rats and compared with other brain extraction methods (BET, PCNN, PCNN-3D). The results demonstrate that RBD reliably extracts the rat brain with high accuracy (>92% volume overlap) and is robust against signal inhomogeneity in the images. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Development of automatic extraction method of left ventricular contours on long axis view MR cine images

    International Nuclear Information System (INIS)

    Utsunomiya, Shinichi; Iijima, Naoto; Yamasaki, Kazunari; Fujita, Akinori

    1995-01-01

    In MRI cardiac function analysis, left ventricular volume curves and diagnostic parameters are obtained by extracting the left ventricular cavities as regions of interest (ROI) from long axis view MR cine images. The ROI extraction has had to be done manually, because automating the extraction is difficult. A long axis view left ventricular contour consists of a cardiac wall part and an aortic valve part. The difficulty mentioned above is due to the low contrast of the cardiac wall part and the disappearance of edges at the aortic valve part. In this paper, we report a new automatic extraction method for long axis view MR cine images, which needs only 3 manually indicated points on the first image to extract all the contours from the total sequence of images. First, candidate points of a contour are detected by edge detection. Then, by selecting the best-matched combination of candidate points using Dynamic Programming, the cardiac wall part is automatically extracted. The aortic valve part is manually extracted for the first image by indicating both end points, and is automatically extracted for the rest of the images by utilizing the characteristics of aortic valve motion throughout a cardiac cycle. (author)

  17. A method for automatically extracting infectious disease-related primers and probes from the literature

    Directory of Open Access Journals (Sweden)

    Pérez-Rey David

    2010-08-01

    Background: Primer and probe sequences are the main components of nucleic acid-based detection systems. Biologists use primers and probes for different tasks, some related to the diagnosis and prescription of infectious diseases. The biological literature is the main information source for empirically validated primer and probe sequences. Therefore, it is becoming increasingly important for researchers to navigate this important information. In this paper, we present a four-phase method for extracting and annotating primer/probe sequences from the literature. These phases are: (1) convert each document into a tree of paper sections, (2) detect the candidate sequences using a set of finite state machine-based recognizers, (3) refine problem sequences using a rule-based expert system, and (4) annotate the extracted sequences with their related organism/gene information. Results: We tested our approach using a test set composed of 297 manuscripts. The extracted sequences and their organism/gene annotations were manually evaluated by a panel of molecular biologists. The results of the evaluation show that our approach is suitable for automatically extracting DNA sequences, achieving precision/recall rates of 97.98% and 95.77%, respectively. In addition, 76.66% of the detected sequences were correctly annotated with their organism name. The system also provided correct gene-related information for 46.18% of the sequences assigned a correct organism name. Conclusions: We believe that the proposed method can facilitate routine tasks for biomedical researchers using molecular methods to diagnose and prescribe different infectious diseases. In addition, the proposed method can be expanded to detect and extract other biological sequences from the literature. The extracted information can also be used to readily update available primer/probe databases or to create new databases from scratch.
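
    Phase (2) above uses finite state machine-based recognizers to detect candidate sequences. A regular expression is an equivalent formalism for a simple case; the sketch below flags runs of IUPAC nucleotide codes of primer-like length. The length bounds are illustrative assumptions, and false positives would be left to a later refinement stage such as the paper's rule-based expert system.

        import re

        # 15-35 characters drawn from the IUPAC nucleotide alphabet
        IUPAC_DNA = re.compile(r"\b[ACGTURYSWKMBDHVN]{15,35}\b")

        def candidate_sequences(text):
            return IUPAC_DNA.findall(text)

        print(candidate_sequences("The forward primer was ACCTGGTCAAGGCTAGCTTAG (Tm 60 C)."))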

  18. Automatic Extraction of Urban Built-Up Area Based on Object-Oriented Method and Remote Sensing Data

    Science.gov (United States)

    Li, L.; Zhou, H.; Wen, Q.; Chen, T.; Guan, F.; Ren, B.; Yu, H.; Wang, Z.

    2018-04-01

    The built-up area marks the use of urban construction land in different periods of development, and its accurate extraction is key to studying changes in urban expansion. This paper studies the technology of automatic extraction of urban built-up areas based on an object-oriented method and remote sensing data, and realizes the automatic extraction of the main built-up area of a city, which greatly reduces manual effort. First, construction land is extracted using the object-oriented method; the main technical steps are: (1) multi-resolution segmentation; (2) feature construction and selection; (3) information extraction of construction land based on a rule set. The characteristic parameters used in the rule set include the mean of the red band (Mean R), the Normalized Difference Vegetation Index (NDVI), the Ratio of Residential Index (RRI) and the mean of the blue band (Mean B); through the combination of these parameters, construction land information can be extracted. Then, based on the adaptability, distance and area of the object domain, the urban built-up area can be quickly and accurately delineated from the construction land information, without depending on other data or expert knowledge, achieving automatic extraction of the urban built-up area. In this paper, Beijing is used as an experimental area for the method. The results show that the built-up area is extracted automatically with a boundary accuracy of 2359.65 m, meeting the requirements. The automatic extraction of urban built-up areas is highly practical and can be applied to monitoring changes in a city's main built-up area.
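
    As a rough illustration of the rule-set step, the sketch below combines two of the named parameters (Mean R and NDVI) into a raster mask with NumPy. The thresholds are invented for illustration; the paper's actual rules and the RRI/Mean B terms are not reproduced.

        import numpy as np

        def construction_land_mask(red, nir, t_ndvi=0.2, t_red=0.25):
            # NDVI separates vegetation from bare and built surfaces
            ndvi = (nir - red) / (nir + red + 1e-9)
            # low vegetation signal plus a bright red band suggests construction land
            return (ndvi < t_ndvi) & (red > t_red)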

  19. Automatic extraction of drug indications from FDA drug labels.

    Science.gov (United States)

    Khare, Ritu; Wei, Chih-Hsuan; Lu, Zhiyong

    2014-01-01

    Extracting computable indications, i.e. drug-disease treatment relationships, from narrative drug resources is the key for building a gold standard drug indication repository. The two steps to the extraction problem are disease named-entity recognition (NER) to identify disease mentions from a free-text description and disease classification to distinguish indications from other disease mentions in the description. While there exist many tools for disease NER, disease classification is mostly achieved through human annotations. For example, we recently resorted to human annotations to prepare a corpus, LabeledIn, capturing structured indications from the drug labels submitted to FDA by pharmaceutical companies. In this study, we present an automatic end-to-end framework to extract structured and normalized indications from FDA drug labels. In addition to automatic disease NER, a key component of our framework is a machine learning method that is trained on the LabeledIn corpus to classify the NER-computed disease mentions as "indication vs. non-indication." Through experiments with 500 drug labels, our end-to-end system delivered 86.3% F1-measure in drug indication extraction, with 17% improvement over baseline. Further analysis shows that the indication classifier delivers a performance comparable to human experts and that the remaining errors are mostly due to disease NER (more than 50%). Given its performance, we conclude that our end-to-end approach has the potential to significantly reduce human annotation costs.

  20. Automatic extraction of forward stroke volume using dynamic PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik Stubkjær

    2015-01-01

    Background: The aim of this study was to develop and validate an automated method for extracting forward stroke volume (FSV) using indicator dilution theory directly from dynamic positron emission tomography (PET) studies for two different tracers and scanners. Methods: 35 subjects underwent a dynamic 11C-acetate PET scan on a Siemens Biograph TruePoint-64 PET/CT (scanner I). In addition, 10 subjects underwent both dynamic 15O-water PET and 11C-acetate PET scans on a GE Discovery-ST PET/CT (scanner II). The left ventricular (LV)-aortic time-activity curve (TAC) was extracted automatically from PET data using cluster analysis. The first-pass peak was isolated by automatic extrapolation of the downslope of the TAC. FSV was calculated as the injected dose divided by the product of heart rate and the area under the curve of the first-pass peak. Gold standard FSV was measured using phase...
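
    The FSV formula stated in the abstract is the Stewart-Hamilton indicator dilution relation, and it transcribes directly into code. A sketch with assumed units (dose in Bq, TAC samples in Bq/mL at times in seconds):

        import numpy as np

        def forward_stroke_volume(dose_bq, heart_rate_bpm, t_s, firstpass_bq_per_ml):
            auc = np.trapz(firstpass_bq_per_ml, t_s)          # Bq*s/mL under the first-pass peak
            cardiac_output = dose_bq / auc                    # mL/s
            return cardiac_output / (heart_rate_bpm / 60.0)   # mL per beat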

  1. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.

    Science.gov (United States)

    Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.

  2. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Chunhua Li

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.

  3. Automatically extracting functionally equivalent proteins from SwissProt

    Directory of Open Access Journals (Sweden)

    Martin Andrew CR

    2008-10-01

    Background: There is a frequent need to obtain sets of functionally equivalent homologous proteins (FEPs) from different species. While it is usually the case that orthology implies functional equivalence, this is not always true; therefore datasets of orthologous proteins are not appropriate. The information relevant to extracting FEPs is contained in databanks such as UniProtKB/Swiss-Prot, and a manual analysis of these data allows FEPs to be extracted on a one-off basis. However, there has been no resource allowing the easy, automatic extraction of groups of FEPs – for example, all instances of protein C. We have developed FOSTA, an automatically generated database of FEPs annotated as having the same function in UniProtKB/Swiss-Prot, which can be used for large-scale analysis. The method builds a candidate list of homologues and filters out functionally diverged proteins on the basis of functional annotations using a simple text mining approach. Results: Large-scale evaluation of our FEP extraction method is difficult as there is no gold-standard dataset against which the method can be benchmarked. However, a manual analysis of five protein families confirmed a high level of performance. A more extensive comparison with two manually verified functional equivalence datasets also demonstrated very good performance. Conclusion: In summary, FOSTA provides an automated analysis of annotations in UniProtKB/Swiss-Prot to enable groups of proteins already annotated as functionally equivalent to be extracted. Our results demonstrate that the vast majority of UniProtKB/Swiss-Prot functional annotations are of high quality, and that FOSTA can interpret annotations successfully. Where FOSTA is not successful, we are able to highlight inconsistencies in UniProtKB/Swiss-Prot annotation. Most of these would have presented equal difficulties for manual interpretation of annotations. We discuss limitations and possible future extensions to FOSTA.

  4. Automatic extraction of road features in urban environments using dense ALS data

    Science.gov (United States)

    Soilán, Mario; Truong-Hong, Linh; Riveiro, Belén; Laefer, Debra

    2018-02-01

    This paper describes a methodology that automatically extracts semantic information from urban ALS data for urban parameterization and road network definition. First, building façades are segmented from the ground surface by combining knowledge-based information with both voxel and raster data. Next, heuristic rules and unsupervised learning are applied to the ground surface data to distinguish sidewalk and pavement points as a means for curb detection. Then, radiometric information is employed for road marking extraction. Using high-density ALS data from Dublin, Ireland, this fully automatic workflow was able to achieve an F-score close to 95% for pavement and sidewalk identification at a resolution of 20 cm, and better than 80% for road marking detection.

  5. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    Directory of Open Access Journals (Sweden)

    Haijian Chen

    2015-01-01

    In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOC environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building ontologies, automatic extraction technology is crucial. Because general text mining algorithms do not perform well on online courses, we designed the automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF algorithm output values; the highest-scoring terms are selected as knowledge points. Course documents for “C programming language” were selected for the experiment in this study. The results show that the proposed approach can achieve a satisfactory accuracy rate and recall rate.
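
    The scoring step described above (TF-IDF weights in a vector space model, with the top-weighted terms kept as knowledge points) can be sketched with scikit-learn. The toy documents and the cut-off of three terms per document are assumptions for illustration:

        from sklearn.feature_extraction.text import TfidfVectorizer

        docs = [
            "A pointer stores the memory address of another variable.",
            "An array is a contiguous block of elements of a single type.",
        ]
        vec = TfidfVectorizer(stop_words="english")
        weights = vec.fit_transform(docs).toarray()
        terms = vec.get_feature_names_out()
        for i, row in enumerate(weights):
            top = row.argsort()[::-1][:3]        # highest TF-IDF terms per document
            print(i, [terms[j] for j in top])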

  6. Automatic Definition Extraction and Crossword Generation From Spanish News Text

    Directory of Open Access Journals (Sweden)

    Jennifer Esteche

    2017-08-01

    This paper describes the design and implementation of a system that takes Spanish texts and generates crosswords (board and definitions) in a fully automatic way, using definitions extracted from those texts. Our solution divides the problem into two parts: a definition extraction module that applies pattern matching, implemented in Python, and a crossword generation module that uses a greedy strategy, implemented in Prolog. The system achieves 73% precision and builds crosswords similar to those built by humans.
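
    A hedged sketch of the pattern-matching module, in the paper's implementation language (Python): one copular "X es un/una/el/la Y" pattern stands in for the richer hand-crafted pattern set the system would need.

        import re

        DEFINITION = re.compile(
            r"(?P<term>[A-ZÁÉÍÓÚÑ][\wáéíóúñ-]*) es (?:un|una|el|la) (?P<gloss>[^.]+)\."
        )

        def extract_definitions(text):
            return [(m.group("term"), m.group("gloss")) for m in DEFINITION.finditer(text)]

        print(extract_definitions("Montevideo es la capital de Uruguay."))
        # -> [('Montevideo', 'capital de Uruguay')]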

  7. AUTOMATIC EXTRACTION AND TOPOLOGY RECONSTRUCTION OF URBAN VIADUCTS FROM LIDAR DATA

    Directory of Open Access Journals (Sweden)

    Y. Wang

    2015-08-01

    Urban viaducts are important infrastructure for the transportation system of a city. In this paper, an original method is proposed to automatically extract urban viaducts and reconstruct the topology of the viaduct network using only airborne LiDAR point cloud data, which greatly simplifies the labour-intensive procedure of viaduct extraction and reconstruction. In our method, the point cloud is first filtered to divide all points into ground points and non-ground points. A region-growing algorithm is adopted to find the viaduct points among the non-ground points, using features derived from general prescriptive design rules. Then, the viaduct points are projected into 2D images to extract the centerline of every viaduct, and cubic functions representing viaduct passages are generated by least-squares fitting, with which the topology of the viaduct network can be rebuilt by combining height information. Finally, a topological graph of the viaduct network is produced. This fully automatic method can potentially benefit urban navigation applications and city model reconstruction.

  8. Automatic extraction of forward stroke volume using dynamic 11C-acetate PET/CT

    DEFF Research Database (Denmark)

    Harms, Hans; Tolbod, Lars Poulsen; Hansson, Nils Henrik

    Objectives: Dynamic PET with 11C-acetate can be used to quantify myocardial blood flow and oxidative metabolism, the latter of which is used to calculate myocardial external efficiency (MEE). Calculation of MEE requires forward stroke volume (FSV) data. FSV is affected by cardiac loading conditions, potentially introducing bias if measured with a separate modality. The aim of this study was to develop and validate methods for automatically extracting FSV directly from the dynamic PET used for measuring oxidative metabolism. Methods: 16 subjects underwent a dynamic 27 min PET scan on a Siemens Biograph TruePoint 64 PET/CT scanner after bolus injection of 399±27 MBq of 11C-acetate. The LV-aortic time-activity curve (TAC) was extracted automatically from dynamic PET data using cluster analysis. The first-pass peak was derived by automatic extrapolation of the down-slope of the TAC. FSV was calculated as the injected dose divided by the product of heart rate and the area under the curve of the first-pass peak.

  9. 2D automatic body-fitted structured mesh generation using advancing extraction method

    Science.gov (United States)

    Zhang, Yaoxin; Jia, Yafei

    2018-01-01

    This paper presents an automatic mesh generation algorithm for body-fitted structured meshes in Computational Fluid Dynamics (CFD) analysis using the Advancing Extraction Method (AEM). The method is applicable to two-dimensional domains with complex geometries that have a hierarchical tree-like topology with extrusion-like structures (i.e., branches or tributaries) and intrusion-like structures (i.e., peninsulas or dikes). With the AEM, the hierarchical levels of sub-domains can be identified, and the block boundary of each sub-domain, a convex polygon, can be extracted at each level in an advancing scheme. Several examples are used to illustrate the effectiveness and applicability of the proposed algorithm for automatic structured mesh generation, and the implementation of the method is described.

  10. Rapid automatic keyword extraction for information retrieval and analysis

    Science.gov (United States)

    Rose, Stuart J [Richland, WA]; Cowley, Wendy E [Richland, WA]; Crow, Vernon L [Richland, WA]; Cramer, Nicholas O [Richland, WA]

    2012-03-06

    Methods and systems for rapid automatic keyword extraction for information retrieval and analysis. Embodiments can include parsing words in an individual document by delimiters, stop words, or both in order to identify candidate keywords. Word scores for each word within the candidate keywords are then calculated based on a function of co-occurrence degree, co-occurrence frequency, or both. Based on a function of the word scores for words within the candidate keyword, a keyword score is calculated for each of the candidate keywords. A portion of the candidate keywords are then extracted as keywords based, at least in part, on the candidate keywords having the highest keyword scores.
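
    The abstract above describes the RAKE scoring scheme closely enough to sketch: candidate phrases are the runs of words between stop words, each word is scored by its co-occurrence degree divided by its frequency, and a phrase scores the sum of its word scores. This compact Python version ignores punctuation boundaries when forming phrases, which the full method would respect.

        import re
        from collections import defaultdict

        def rake(text, stopwords):
            words = re.findall(r"[a-z']+", text.lower())
            phrases, current = [], []
            for w in words:                       # candidate phrases = runs between stop words
                if w in stopwords:
                    if current:
                        phrases.append(current)
                        current = []
                else:
                    current.append(w)
            if current:
                phrases.append(current)
            freq, degree = defaultdict(int), defaultdict(int)
            for ph in phrases:
                for w in ph:
                    freq[w] += 1
                    degree[w] += len(ph)          # co-occurrence degree within the phrase
            word_score = {w: degree[w] / freq[w] for w in freq}
            scored = {" ".join(ph): sum(word_score[w] for w in ph) for ph in phrases}
            return sorted(scored.items(), key=lambda kv: -kv[1])

        print(rake("Rapid automatic keyword extraction for information retrieval and analysis",
                   {"for", "and", "the", "of"}))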

  11. Automatic Road Centerline Extraction from Imagery Using Road GPS Data

    Directory of Open Access Journals (Sweden)

    Chuqing Cao

    2014-09-01

    Road centerline extraction from imagery constitutes a key element in numerous geospatial applications, which has been addressed through a variety of approaches. However, most of the existing methods are not capable of dealing with challenges such as different road shapes, complex scenes, and variable resolutions. This paper presents a novel method for road centerline extraction from imagery in a fully automatic approach that addresses the aforementioned challenges by exploiting road GPS data. The proposed method combines road color feature with road GPS data to detect road centerline seed points. After global alignment of road GPS data, a novel road centerline extraction algorithm is developed to extract each individual road centerline in local regions. Through road connection, road centerline network is generated as the final output. Extensive experiments demonstrate that our proposed method can rapidly and accurately extract road centerline from remotely sensed imagery.

  12. Automatic extraction of discontinuity orientation from rock mass surface 3D point cloud

    Science.gov (United States)

    Chen, Jianqin; Zhu, Hehua; Li, Xiaojun

    2016-10-01

    This paper presents a new method for extracting discontinuity orientation automatically from rock mass surface 3D point cloud. The proposed method consists of four steps: (1) automatic grouping of discontinuity sets using an improved K-means clustering method, (2) discontinuity segmentation and optimization, (3) discontinuity plane fitting using Random Sample Consensus (RANSAC) method, and (4) coordinate transformation of discontinuity plane. The method is first validated by the point cloud of a small piece of a rock slope acquired by photogrammetry. The extracted discontinuity orientations are compared with measured ones in the field. Then it is applied to a publicly available LiDAR data of a road cut rock slope at Rockbench repository. The extracted discontinuity orientations are compared with the method proposed by Riquelme et al. (2014). The results show that the presented method is reliable and of high accuracy, and can meet the engineering needs.
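
    Step (1) above groups discontinuity sets by clustering the unit normal vectors of the point cloud. A plain K-means stand-in (the paper uses an improved K-means variant, and the number of sets is assumed known here):

        import numpy as np
        from sklearn.cluster import KMeans

        def group_discontinuity_sets(normals, n_sets=3):
            n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
            n[n[:, 2] < 0] *= -1      # fold hemispheres: n and -n describe the same plane
            return KMeans(n_clusters=n_sets, n_init=10).fit_predict(n)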

  13. Automatic Extraction and Size Distribution of Landslides in Kurdistan Region, NE Iraq

    Directory of Open Access Journals (Sweden)

    Arsalan A. Othman

    2013-05-01

    This study aims to assess the localization and size distribution of landslides using automatic remote sensing techniques in (semi-)arid, non-vegetated, mountainous environments. The study area is located in the Kurdistan region (NE Iraq), within the Zagros orogenic belt, which is characterized by the High Folded Zone (HFZ), the Imbricated Zone and the Zagros Suture Zone (ZSZ). The available reference inventory includes 3,190 landslides mapped from sixty QuickBird scenes using manual delineation. The landslide types involve rock falls, translational slides and slumps, which occurred in different lithological units. Two hundred and ninety of these landslides lie within the ZSZ, representing a cumulated surface of 32 km2. The HFZ contains 2,900 landslides with an overall coverage of about 26 km2. We first analyzed cumulative landslide number-size distributions using the inventory map. We then propose a very simple and robust algorithm for automatic landslide extraction using specific band ratios selected from the spectral signatures of bare surfaces, together with a posteriori slope and normalized difference vegetation index (NDVI) thresholds. The index is based on the contrast between landslides and their background, as landslides reflect strongly in the green and red bands. We applied the slope threshold map to remove low-slope areas, which also have high reflectance in the red and green bands. The algorithm was able to detect ~96% of the recent landslides known from the reference inventory on a test site. The cumulative landslide number-size distribution of automatically extracted landslides is very similar to the one based on visual mapping. The automatic extraction is therefore suitable for the quantitative analysis of landslides and can thus contribute to the assessment of hazards in similar regions.
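
    The classification rule sketched by the abstract (bright in green and red, low NDVI, steep slope) translates directly into raster logic. All thresholds below are placeholders, not the study's calibrated values.

        import numpy as np

        def landslide_candidates(green, red, nir, slope_deg,
                                 t_bright=0.25, t_veg=0.15, t_slope=10.0):
            ndvi = (nir - red) / (nir + red + 1e-9)
            bright = (green > t_bright) & (red > t_bright)   # bare landslide scars
            return bright & (ndvi < t_veg) & (slope_deg > t_slope)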

  14. Towards automatic music transcription: note extraction based on independent subspace analysis

    Science.gov (United States)

    Wellhausen, Jens; Hoynck, Michael

    2005-01-01

    Due to the increasing amount of music available electronically, the need for automatic search, retrieval and classification systems for music becomes more and more important. In this paper an algorithm for automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications, music analysis and music classification. The first part of the algorithm performs a note-accurate temporal audio segmentation. In the second part, the resulting segments are examined using Independent Subspace Analysis to extract the sounding notes. Finally, the results are used to build a MIDI file as a new representation of the piece of music examined.

  15. Molecular and supramolecular speciation of monoamide extractant systems

    International Nuclear Information System (INIS)

    Ferru, G.

    2012-01-01

    DEHiBA (N,N-di-(ethyl-2-hexyl)isobutyramide), a monoamide, was chosen as the selective extractant for the recovery of uranium in the first cycle of the GANEX process, which aims to realize the grouped extraction of actinides in the second step of the process. The aim of this work is an improved description of monoamide organic solutions in alkane diluent after extraction of solutes: water, nitric acid and uranyl nitrate. A parametric study was undertaken to characterize species at the molecular scale (by IR spectroscopy, UV-visible spectroscopy, time-resolved laser-induced fluorescence spectroscopy, and electrospray ionisation mass spectrometry) and at the supramolecular scale (by vapor pressure osmometry and small-angle X-ray scattering coupled to molecular dynamics simulations). Extraction isotherms were modelled taking into account the molecular and supramolecular speciation. This work showed that the organization of the organic solution depends on the amide concentration and on the nature and concentration of the extracted solute. Three regimes can be distinguished. 1) For extractant concentrations less than 0.5 mol/L, monomers are the predominant species. 2) For extractant concentrations between 0.5 and 1 mol/L, small aggregates are formed containing 2 to 4 molecules of monoamide. 3) For more concentrated solutions (greater than 1 mol/L), slightly larger species can be formed after water or nitric acid extraction. Concerning uranyl nitrate extraction, an important and strong organization of the organic phase is observed, which no longer allows the formation of well-defined spherical aggregates. At the molecular scale, complexes are not sensitive to the organization of the solution: the same species are observed regardless of the solute and extractant concentrations in the organic phase. (author) [fr]

  16. AUTOMATIC RAILWAY POWER LINE EXTRACTION USING MOBILE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    S. Zhang

    2016-06-01

    Research on power line extraction using mobile laser point clouds has important practical significance for railway power line patrol work. In this paper, we present a new method for automatically extracting railway power lines from MLS (Mobile Laser Scanning) data. First, according to the spatial structure characteristics of the power lines and the trajectory, the significant data is segmented piecewise. Then, a self-adaptive spatial region-growing method is used to extract power lines parallel to the rails. Finally, PCA (Principal Component Analysis) combined with information entropy theory is used to judge whether a section of the power line is a junction and, if so, which type of junction it belongs to. A least-squares fitting algorithm is introduced to model the power line. An evaluation of the proposed method on a complicated railway point cloud acquired by a RIEGL VMX450 MLS system shows that the proposed method is promising.
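
    The junction test in the last step rests on how "line-like" a span of points is under PCA. A common eigenvalue-based linearity measure (an assumption standing in for the paper's PCA-plus-entropy criterion):

        import numpy as np

        def linearity(points):
            # points: (N, 3) window along the extracted power line
            w = np.sort(np.linalg.eigvalsh(np.cov(points.T)))[::-1]
            return (w[0] - w[1]) / w[0]   # ~1 for a straight wire, lower near junctions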

  17. Automatic Glaucoma Detection Based on Optic Disc Segmentation and Texture Feature Extraction

    Directory of Open Access Journals (Sweden)

    Maíla de Lima Claro

    2016-08-01

    The use of digital image processing techniques is prominent in medical settings for the automatic diagnosis of diseases. Glaucoma is the second leading cause of blindness in the world and it has no cure. Currently, there are treatments to prevent vision loss, but the disease must be detected in its early stages. Thus, the objective of this work is to develop an automatic method for the detection of glaucoma in retinal images. The methodology used in the study comprised: acquisition of an image database, optic disc segmentation, texture feature extraction in different color models, and classification of images as glaucomatous or not. We obtained an accuracy of 93%.

  18. Automatic feature extraction in large fusion databases by using deep learning approach

    Energy Technology Data Exchange (ETDEWEB)

    Farias, Gonzalo, E-mail: gonzalo.farias@ucv.cl [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile); Dormido-Canto, Sebastián [Departamento de Informática y Automática, UNED, Madrid (Spain); Vega, Jesús; Rattá, Giuseppe [Asociación EURATOM/CIEMAT Para Fusión, CIEMAT, Madrid (Spain); Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín [Pontificia Universidad Católica de Valparaíso, Valparaíso (Chile)

    2016-11-15

    Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The problem dimensionality can be reduced enormously when suitable attributes are selected. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in deep learning have provided an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to get more robust and accurate classifiers in comparison to previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the following steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features. In this sense, the process of selecting appropriate techniques normally takes a long time. This work describes the use of recent advances in deep learning in order to find a good feature representation automatically. The implementation of a special neural network called a sparse autoencoder and its application to two classification problems of the TJ-II fusion database is shown in detail. Results have shown that it is possible to get robust classifiers with a high success rate, in spite of the fact that the feature space is reduced to less than 0.02% of the original one.
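
    A minimal sparse autoencoder of the kind the abstract describes, sketched in PyTorch with an L1 penalty on the hidden code (the classic formulation typically uses a KL-divergence sparsity term instead; layer sizes and coefficients here are illustrative):

        import torch
        import torch.nn as nn

        class SparseAutoencoder(nn.Module):
            def __init__(self, n_in, n_hidden):
                super().__init__()
                self.enc = nn.Linear(n_in, n_hidden)
                self.dec = nn.Linear(n_hidden, n_in)

            def forward(self, x):
                h = torch.sigmoid(self.enc(x))    # learned feature representation
                return self.dec(h), h

        model = SparseAutoencoder(n_in=256, n_hidden=32)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        def train_step(x, beta=1e-3):
            x_hat, h = model(x)
            # reconstruction error plus sparsity penalty on the hidden activations
            loss = nn.functional.mse_loss(x_hat, x) + beta * h.abs().mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
            return h.detach()                     # features for a downstream classifier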

  19. Automatic feature extraction in large fusion databases by using deep learning approach

    International Nuclear Information System (INIS)

    Farias, Gonzalo; Dormido-Canto, Sebastián; Vega, Jesús; Rattá, Giuseppe; Vargas, Héctor; Hermosilla, Gabriel; Alfaro, Luis; Valencia, Agustín

    2016-01-01

    Highlights: • Feature extraction is a very critical stage in any machine learning algorithm. • The problem dimensionality can be reduced enormously when suitable attributes are selected. • Despite the importance of feature extraction, the process is commonly done manually by trial and error. • Fortunately, recent advances in deep learning have provided an encouraging way to find a good feature representation automatically. • In this article, deep learning is applied to the TJ-II fusion database to get more robust and accurate classifiers in comparison to previous work. - Abstract: Feature extraction is one of the most important machine learning issues. Finding suitable attributes of datasets can enormously reduce the dimensionality of the input space, and from a computational point of view can help all of the following steps of pattern recognition problems, such as classification or information retrieval. However, the feature extraction step is usually performed manually. Moreover, depending on the type of data, we can face a wide range of methods to extract features. In this sense, the process of selecting appropriate techniques normally takes a long time. This work describes the use of recent advances in deep learning in order to find a good feature representation automatically. The implementation of a special neural network called a sparse autoencoder and its application to two classification problems of the TJ-II fusion database is shown in detail. Results have shown that it is possible to get robust classifiers with a high success rate, in spite of the fact that the feature space is reduced to less than 0.02% of the original one.

  20. Automatization of a laboratory extraction installation intended for investigations in the field of reprocessing of spent fuels

    International Nuclear Information System (INIS)

    Vznuzdaev, E.A.; Galkin, B.Ya.; Gofman, F.Eh.

    1981-01-01

    The paper describes an automated test stand for optimal control of the extraction process in spent fuel reprocessing by means of a computer-based automated control system. Preliminary experiments conducted on the stand with spent fuel from a WWER-440 reactor have shown the high efficiency of automation and the possibility of conducting technological investigations in a short period of time while obtaining information that cannot be acquired through a conventional organization of the work [ru]

  1. Feature extraction and classification in automatic weld seam radioscopy

    International Nuclear Information System (INIS)

    Heindoerfer, F.; Pohle, R.

    1994-01-01

    The investigations conducted have shown that automatic feature extraction and classification procedures permit the identification of weld seam flaws. In this context the favored learning fuzzy classifier represents a very good alternative to conventional classifiers. The results have also made clear that improvements, mainly in the field of image registration, are still possible by increasing the resolution of the radioscopy system, since an almost error-free classification is conceivable only if the flaw is segmented correctly, i.e. in its full size, which requires improved detail recognizability and sufficient contrast difference. (orig./MM) [de]

  2. Molecularly imprinted solid-phase extraction in the analysis of agrochemicals.

    Science.gov (United States)

    Yi, Ling-Xiao; Fang, Rou; Chen, Guan-Hua

    2013-08-01

    The molecular imprinting technique is a highly predeterminative recognition technology. Molecularly imprinted polymers (MIPs) can be applied to the cleanup and preconcentration of analytes as the selective adsorbent of solid-phase extraction (SPE). In recent years, a new type of SPE, molecularly imprinted polymer solid-phase extraction (MISPE), has emerged and has been widely applied to the extraction of agrochemicals. In this review, the mechanism of the molecular imprinting technique and the methodology of MIP preparation are explained. The extraction modes of MISPE, both offline and online, are discussed, and the applications of MISPE in the analysis of agrochemicals such as herbicides, fungicides and insecticides are summarized. It is concluded that MISPE is a powerful tool to selectively isolate agrochemicals from real samples with higher extraction and cleanup efficiency than commercial SPE and that it has great potential for broad applications.

  3. Automatic facial animation parameters extraction in MPEG-4 visual communication

    Science.gov (United States)

    Yang, Chenggen; Gong, Wanwei; Yu, Lu

    2002-01-01

    Facial Animation Parameters (FAPs) are defined in MPEG-4 to animate a facial object. The algorithm proposed in this paper to extract these FAPs is applied to very low bit-rate video communication, in which the scene is composed of a head-and-shoulders object with a complex background. This paper addresses an algorithm to automatically extract all FAPs needed to animate a generic facial model and to estimate the 3D motion of the head from feature points. The proposed algorithm extracts the human facial region by color segmentation and intra-frame and inter-frame edge detection. Facial structure and the edge distribution of facial features, such as vertical and horizontal gradient histograms, are used to locate the facial feature regions. Parabola and circle deformable templates are employed to fit facial features and extract one part of the FAPs. A special data structure is proposed to describe the deformable templates in order to reduce the time consumed computing energy functions. The other part of the FAPs, the 3D rigid head motion vectors, is estimated by a corresponding-points method. A 3D head wire-frame model provides facial semantic information for the selection of proper corresponding points, which helps to increase the accuracy of 3D rigid object motion estimation.

  4. Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition

    Science.gov (United States)

    Kim, Jonghwa; André, Elisabeth

    This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. Compared to audio-visual emotion channels such as facial expression or speech, little attention has so far been paid to physiological signals for emotion recognition. All essential stages of an automatic recognition system using biosignals are discussed, from recording a physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by emotion recognition results.

  5. Towards Automatic Music Transcription: Extraction of MIDI-Data out of Polyphonic Piano Music

    Directory of Open Access Journals (Sweden)

    Jens Wellhausen

    2005-06-01

    Driven by the increasing amount of music available electronically, the need for automatic search and retrieval systems for music becomes more and more important. In this paper an algorithm for automatic transcription of polyphonic piano music into MIDI data is presented, which is a very interesting basis for database applications and music analysis. The first part of the algorithm performs a note-accurate temporal audio segmentation. The resulting segments are examined in the second part to extract the notes played. An algorithm for chord separation based on Independent Subspace Analysis is presented. Finally, the results are used to build a MIDI file.

  6. The automatic extraction of pitch perturbation using microcomputers: some methodological considerations.

    Science.gov (United States)

    Deem, J F; Manning, W H; Knack, J V; Matesich, J S

    1989-09-01

    A program for the automatic extraction of jitter (PAEJ) was developed for the clinical measurement of pitch perturbations using a microcomputer. The program currently includes 12 implementations of an algorithm for marking the boundary criteria for a fundamental period of vocal fold vibration. The relative sensitivity of these extraction procedures for identifying the pitch period was compared using sine waves. Data obtained to date provide information for each procedure concerning the effects of waveform peakedness and slope, sample duration in cycles, noise level of the analysis system with both direct and tape-recorded input, and the influence of interpolation. Zero-crossing extraction procedures provided lower jitter values regardless of sine wave frequency or sample duration. The procedures making use of positive- or negative-going zero crossings with interpolation provided the lowest measures of jitter with the sine wave stimuli. Pilot data obtained with normal-speaking adults indicated that jitter measures varied as a function of the speaker, vowel, and sample duration.
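    The zero-crossing procedure with interpolation, which the study found to give the lowest jitter on sine stimuli, can be sketched as follows. This is a generic jitter estimator under assumed conventions (mean absolute period-to-period difference as a percentage of the mean period), not the PAEJ code itself.

```python
import numpy as np

def jitter_percent(signal, fs):
    """Estimate jitter from positive-going zero crossings with linear
    interpolation, one of the boundary criteria compared in the paper."""
    s = signal - np.mean(signal)
    idx = np.where((s[:-1] < 0) & (s[1:] >= 0))[0]
    # Linear interpolation of the exact crossing instant between samples.
    frac = -s[idx] / (s[idx + 1] - s[idx])
    times = (idx + frac) / fs
    periods = np.diff(times)
    # Mean absolute period-to-period difference, as a percent of mean period.
    return 100.0 * np.mean(np.abs(np.diff(periods))) / np.mean(periods)

fs = 40000.0
t = np.arange(0, 0.5, 1 / fs)
tone = np.sin(2 * np.pi * 120.0 * t)   # a 120 Hz sine should give ~0% jitter
print(f"jitter = {jitter_percent(tone, fs):.4f} %")
```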

  7. Automatic Extraction of High-Resolution Rainfall Series from Rainfall Strip Charts

    Science.gov (United States)

    Saa-Requejo, Antonio; Valencia, Jose Luis; Garrido, Alberto; Tarquis, Ana M.

    2015-04-01

    Soil erosion is a complex phenomenon involving the detachment and transport of soil particles, the storage and runoff of rainwater, and infiltration. The relative magnitude and importance of these processes depend on a host of factors, including climate, soil, topography, and cropping and land management practices, among others. Most models of soil erosion or hydrological processes need an accurate storm characterization. However, these data are not always available, and in some cases indirect models are generated to fill this gap. In Spain, rain intensity data for time periods of less than 24 hours are known back to 1924, and many studies are limited by this. In many cases these data are stored on rainfall strip charts at the meteorological stations but have not been transferred into numerical form. To overcome this deficiency in the raw data, a process of information extraction from large numbers of rainfall strip charts was implemented by means of computer software. A method has been developed that largely automates the labour-intensive extraction work, based on van Piggelen et al. (2011). The method consists of the following five basic steps: 1) scanning the charts to high-resolution digital images, 2) manually and visually registering relevant meta-information from the charts and pre-processing, 3) applying automatic curve-extraction software in a batch process to determine the coordinates of cumulative rainfall lines on the images (the main step), 4) post-processing the curves that were not correctly determined in step 3, and 5) aggregating the cumulative rainfall in pixel coordinates to the desired time resolution. A colour detection procedure is introduced that automatically separates the background of the charts and rolls from the grid, and subsequently the rainfall curve. The rainfall curve is detected by minimization of a cost function. Some utilities have been added that improve on the previous work and automate some auxiliary processes: readjusting the bands properly, merging bands when

  8. Automatic Rail Extraction and Clearance Check with a Point Cloud Captured by MLS in a Railway

    Science.gov (United States)

    Niina, Y.; Honma, R.; Honma, Y.; Kondo, K.; Tsuji, K.; Hiramatsu, T.; Oketani, E.

    2018-05-01

    Recently, MLS (Mobile Laser Scanning) has been successfully used in road maintenance. In this paper, we present the application of MLS to the inspection of clearance along railway tracks of the West Japan Railway Company. Point clouds around the track are captured by an MLS mounted on a bogie, and the rail position can be determined by matching the shape of the ideal rail head to the point cloud with the ICP algorithm. A clearance check is executed automatically with a virtual clearance model laid along the extracted rail. As a result of the evaluation, the accuracy of the extracted rail positions is better than 3 mm. With respect to the automatic clearance check, objects inside the clearance and objects related to the contact line were successfully detected, as verified by visual confirmation.
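    The rail-position step relies on ICP matching of an ideal rail-head shape to the point cloud. Below is a generic, self-contained ICP iteration in 2D (nearest neighbours via a k-d tree, rigid transform via SVD) on invented data; the paper's 3D implementation and rail template are not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(template, cloud):
    """One ICP iteration: match template points (ideal rail-head profile)
    to their nearest cloud points, then solve the best rigid transform."""
    tree = cKDTree(cloud)
    _, nn = tree.query(template)
    matched = cloud[nn]
    mu_t, mu_m = template.mean(0), matched.mean(0)
    H = (template - mu_t).T @ (matched - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_t
    return template @ R.T + t

# Hypothetical 2D rail-head cross-section and a shifted, noisy scan of it.
profile = np.array([[x, -0.05 * x**2] for x in np.linspace(-1, 1, 50)])
scan = profile + np.array([0.3, 0.1]) + 0.005 * np.random.randn(50, 2)
est = profile.copy()
for _ in range(20):
    est = icp_step(est, scan)
print("residual:", np.linalg.norm(est - scan, axis=1).mean())
```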

  9. Automatic Centerline Extraction of Covered Roads by Surrounding Objects from High Resolution Satellite Images

    Science.gov (United States)

    Kamangir, H.; Momeni, M.; Satari, M.

    2017-09-01

    This paper presents an automatic method to extract road centerline networks from high and very high resolution satellite images. It addresses the automated extraction of roads covered by multiple natural and artificial objects such as trees, vehicles, and the shadows of buildings or trees. In order to achieve precise road extraction, the method implements three stages: classification of the images based on the maximum likelihood algorithm to categorize them into the classes of interest; modification of the classified images by connected-component and morphological operators to extract the pixels of desired objects while removing undesirable pixels of each class; and finally line extraction based on the RANSAC algorithm. In order to evaluate the performance of the proposed method, the generated results are compared with a ground truth road map as reference. The evaluation on representative test images shows completeness values ranging between 77% and 93%.
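    The final stage fits lines to road-class pixels with RANSAC. A minimal generic RANSAC line fit on synthetic pixels is sketched below; thresholds and iteration counts are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=2.0, rng=None):
    """Fit a 2D line to road-class pixels with RANSAC (simplified sketch)."""
    rng = rng or np.random.default_rng(0)
    best_inliers = None
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        d = p2 - p1
        norm = np.linalg.norm(d)
        if norm == 0:
            continue
        # Perpendicular distance of every point to the candidate line.
        dist = np.abs(d[0] * (points[:, 1] - p1[1])
                      - d[1] * (points[:, 0] - p1[0])) / norm
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[best_inliers]

# Hypothetical road pixels: a noisy line plus scattered clutter.
rng = np.random.default_rng(1)
x = rng.uniform(0, 100, 80)
line_pts = np.column_stack([x, 0.5 * x + 10 + rng.normal(0, 1, 80)])
clutter = rng.uniform(0, 100, (40, 2))
inliers = ransac_line(np.vstack([line_pts, clutter]))
print(f"{len(inliers)} inlier pixels kept")
```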

  10. AUTOMR: An automatic processing program system for the molecular replacement method

    International Nuclear Information System (INIS)

    Matsuura, Yoshiki

    1991-01-01

    An automatic processing program system for the molecular replacement method, AUTOMR, is presented. The program solves the initial model of the target crystal structure using a homologous molecule as the search model. It processes the structure-factor calculation of the model molecule, the rotation function, the translation function and the rigid-group refinement successively in one computer job. Test calculations were performed for six protein crystals and the structures were solved in all cases. (orig.)

  11. [Automatic Extraction and Analysis of Dosimetry Data in Radiotherapy Plans].

    Science.gov (United States)

    Song, Wei; Zhao, Di; Lu, Hong; Zhang, Biyun; Ma, Jun; Yu, Dahai

    To improve the efficiency and accuracy of the extraction and analysis of dosimetry data from the radiotherapy plans of a batch of patients, a program was written using the interface functions provided by the Matlab platform. It extracts the dosimetry data exported from the treatment planning system in DICOM RT format and exports the dose-volume data to an Excel file in an SPSS-compatible format. This method was compared with manual operation for 14 gastric carcinoma patients to validate its efficiency and accuracy. The output Excel data were format-compatible with SPSS; the dosimetry data errors for the PTV dose interval of 90%-98%, the PTV dose interval of 99%-106% and all OARs were -3.48E-5 ± 3.01E-5, -1.11E-3 ± 7.68E-4 and -7.85E-5 ± 9.91E-5, respectively. Compared with manual operation, the time required was reduced from 5.3 h to 0.19 h and the input error was reduced from 0.002 to 0. Automatic extraction of dosimetry data in DICOM RT format for batches of patients, SPSS-compatible data export and quick analysis were achieved. This method will improve the efficiency of clinical research based on the dosimetry data analysis of large numbers of patients.
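    The same batch idea can be sketched in Python with the pydicom library rather than Matlab: read a DVH Sequence that a treatment planning system exported into an RT Dose file and dump it to a flat file for statistics software. This assumes the TPS wrote the optional DVHSequence element; real workflows may instead recompute DVHs from the dose grid and structure set. File names are hypothetical.

```python
import csv
import pydicom  # assumed available; reads DICOM RT files

def export_dvh(rtdose_path, out_csv):
    """Read the DVH Sequence a TPS exported into an RT Dose file and dump
    it to a CSV that a statistics package (e.g., SPSS) can ingest."""
    ds = pydicom.dcmread(rtdose_path)
    with open(out_csv, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["roi_number", "bin_index", "dose", "volume"])
        for dvh in getattr(ds, "DVHSequence", []):   # may be absent
            roi = dvh.DVHReferencedROISequence[0].ReferencedROINumber
            data = dvh.DVHData          # alternating (dose-bin width, volume)
            for i in range(0, len(data), 2):
                w.writerow([roi, i // 2, data[i], data[i + 1]])

# export_dvh("plan_rtdose.dcm", "dvh.csv")   # hypothetical file names
```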

  12. Improvement in the performance of CAD for the Alzheimer-type dementia based on automatic extraction of temporal lobe from coronal MR images

    International Nuclear Information System (INIS)

    Kaeriyama, Tomoharu; Kodama, Naoki; Kaneko, Tomoyuki; Shimada, Tetsuo; Tanaka, Hiroyuki; Takeda, Ai; Fukumoto, Ichiro

    2004-01-01

    In this study, we extracted whole-brain and temporal lobe images from MR images (26 healthy elderly controls and 34 Alzheimer-type dementia patients) by means of binarization, mask processing, template matching, Hough transformation, boundary tracing, etc. We assessed the extraction accuracy by comparing the extracted images to images extracted by a radiological technologist. The consistency rates of the assessment were: brain images 91.3±4.3%, right temporal lobe 83.3±6.9%, left temporal lobe 83.7±7.6%. Furthermore, discriminant analysis using six textural features demonstrated a sensitivity and specificity of 100% when the healthy elderly controls were compared to the Alzheimer-type dementia patients. Our research showed the possibility of automatic, objective diagnosis of temporal lobe abnormalities from automatically extracted images of the temporal lobes. (author)

  13. MOLECULARLY IMPRINTED SOLID PHASE EXTRACTION FOR TRACE ANALYSIS OF DIAZINON IN DRINKING WATER

    Directory of Open Access Journals (Sweden)

    M. Rahiminejad ، S. J. Shahtaheri ، M. R. Ganjali ، A. Rahimi Forushani ، F. Golbabaei

    2009-04-01

    Full Text Available Amongst organophosphate pesticides, the most widely used and most common environmental contaminant is diazinon; thus methods for its trace analysis in environmental samples must be developed. The use of diazinon-imprinted polymers as sorbents in solid phase extraction is a prominent and novel application area of molecularly imprinted polymers. In this study, high performance liquid chromatography analysis was demonstrated for diazinon extraction. A Plackett-Burman design was conducted to optimize the molecularly imprinted solid phase extraction procedure for efficient solid phase extraction of diazinon. Eight experimental factors with a critical influence on molecularly imprinted solid phase extraction performance were selected, and 12 different experimental runs based on the Plackett-Burman design were carried out. The applicability of diazinon-imprinted polymers as the sorbent in solid phase extraction was demonstrated by the good recoveries of diazinon obtained from LC-grade water. An increase in pH caused an increase in the recovery on molecularly imprinted solid phase extraction. From these results, the optimal molecularly imprinted solid phase extraction procedure was as follows: solid phase extraction packing with 100 mg of diazinon-imprinted polymers; conditioning with 5 mL of methanol and 6 mL of LC-grade water; loading of the sample containing diazinon (pH = 10); washing with 1 mL of LC-grade water, 1 mL of LC-grade water containing 30% acetonitrile and 0.5 mL of acetonitrile, respectively; and eluting with 1 mL of methanol containing 2% acetic acid. The recoveries obtained by the optimized molecularly imprinted solid phase extraction were more than 90% with drinking water spiked at different trace levels of diazinon. Generally speaking, the molecularly imprinted solid phase extraction procedure and subsequent high performance liquid chromatography analysis can be a relatively fast and proper approach for the qualitative and quantitative analysis of diazinon in drinking water.

  14. Independent component analysis for automatic note extraction from musical trills

    Science.gov (United States)

    Brown, Judith C.; Smaragdis, Paris

    2004-05-01

    The method of principal component analysis, which is based on second-order statistics (or linear independence), has long been used for redundancy reduction of audio data. The more recent technique of independent component analysis, enforcing much stricter statistical criteria based on higher-order statistical independence, is introduced and shown to be far superior in separating independent musical sources. This theory has been applied to piano trills and a database of trill rates was assembled from experiments with a computer-driven piano, recordings of a professional pianist, and commercially available compact disks. The method of independent component analysis has thus been shown to be an outstanding, effective means of automatically extracting interesting musical information from a sea of redundant data.
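    A small demonstration of the paper's central point, using scikit-learn's FastICA on two synthetic "trill notes" mixed into two channels; the mixing matrix and signals are invented, and the original work applied its own ICA implementation to audio data.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two overlapping "trill notes" as independent sources, mixed into two
# observation channels; ICA can recover them where PCA typically cannot.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4000)
s1 = np.sin(2 * np.pi * 440 * t)                    # note A
s2 = np.sign(np.sin(2 * np.pi * 494 * t))           # note B, non-Gaussian
S = np.column_stack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])              # invented mixing matrix
X = S @ A.T

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)                    # estimated sources
corr = np.abs(np.corrcoef(recovered.T, S.T))[:2, 2:]
print("source/estimate correlations:\n", corr.round(2))
```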

  15. The Fractal Patterns of Words in a Text: A Method for Automatic Keyword Extraction.

    Science.gov (United States)

    Najafi, Elham; Darooneh, Amir H

    2015-01-01

    A text can be considered as a one dimensional array of words. The locations of each word type in this array form a fractal pattern with certain fractal dimension. We observe that important words responsible for conveying the meaning of a text have dimensions considerably different from one, while the fractal dimensions of unimportant words are close to one. We introduce an index quantifying the importance of the words in a given text using their fractal dimensions and then ranking them according to their importance. This index measures the difference between the fractal pattern of a word in the original text relative to a shuffled version. Because the shuffled text is meaningless (i.e., words have no importance), the difference between the original and shuffled text can be used to ascertain degree of fractality. The degree of fractality may be used for automatic keyword detection. Words with the degree of fractality higher than a threshold value are assumed to be the retrieved keywords of the text. We measure the efficiency of our method for keywords extraction, making a comparison between our proposed method and two other well-known methods of automatic keyword extraction.
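    A simplified reading of the proposed index can be sketched as follows: estimate a 1D box-counting dimension for each word type's positions, and score words by how much that dimension changes after shuffling. The box sizes and minimum frequency below are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from collections import defaultdict

def box_counting_dimension(positions, length):
    """1D box-counting dimension of the set of positions of one word type."""
    sizes = [2**k for k in range(1, int(np.log2(length)))]
    counts = [len({p // s for p in positions}) for s in sizes]
    # Slope of log(count) vs log(1/size) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

def degree_of_fractality(words, rng=np.random.default_rng(0)):
    """Score each word type by |dim(original) - dim(shuffled)|, a simplified
    reading of the index described above."""
    shuffled = list(words)
    rng.shuffle(shuffled)
    def dims(seq):
        pos = defaultdict(list)
        for i, w in enumerate(seq):
            pos[w].append(i)
        return {w: box_counting_dimension(p, len(seq))
                for w, p in pos.items() if len(p) > 3}
    d_orig, d_shuf = dims(words), dims(shuffled)
    return {w: abs(d_orig[w] - d_shuf[w]) for w in d_orig if w in d_shuf}

text = ("the cat sat on the mat the cat ran the dog slept " * 40).split()
scores = degree_of_fractality(text)
print(sorted(scores, key=scores.get, reverse=True)[:3])
```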

  17. Automatic extraction and identification of users' responses in Facebook medical quizzes.

    Science.gov (United States)

    Rodríguez-González, Alejandro; Menasalvas Ruiz, Ernestina; Mayer Pujadas, Miguel A

    2016-04-01

    In the last few years the use of social media in medicine has grown exponentially, providing a new area of research based on the analysis and use of Web 2.0 capabilities. In addition, the use of social media in medical education is a subject of particular interest which has been addressed in several studies. One example of this application is the medical quizzes of The New England Journal of Medicine (NEJM), which regularly publishes a set of questions through its Facebook timeline. We present an approach for the automatic extraction of medical quizzes and their associated answers on a Facebook platform by means of a set of computer-based methods and algorithms. We have developed a tool in Java for the extraction and analysis of the medical quizzes stored on the timeline of the NEJM Facebook page. The system is divided into two main modules: Crawler and Data retrieval. The system was launched on December 31, 2014 and crawled a total of 3004 valid posts and 200,081 valid comments. The first post was dated July 23, 2009 and the last one December 30, 2014. 285 quizzes were analyzed, with 32,780 different users providing answers to them. Patterns were found in 261 of the 285 quizzes (91.58%). In these 261 quizzes where trends were found, users followed trends of incorrect answers in 13 quizzes and trends of correct answers in 248. The tool is capable of automatically identifying the correct and wrong answers to quizzes posted on Facebook in text form, with a small rate of false negatives, and the approach could be applied to the extraction and analysis of other Internet sources after some adaptation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Automatic extraction of property norm-like data from large text corpora.

    Science.gov (United States)

    Kelly, Colin; Devereux, Barry; Korhonen, Anna

    2014-01-01

    Traditional methods for deriving property-based representations of concepts from text have focused on either extracting only a subset of possible relation types, such as hyponymy/hypernymy (e.g., car is-a vehicle) or meronymy/metonymy (e.g., car has wheels), or unspecified relations (e.g., car--petrol). We propose a system for the challenging task of automatic, large-scale acquisition of unconstrained, human-like property norms from large text corpora, and discuss the theoretical implications of such a system. We employ syntactic, semantic, and encyclopedic information to guide our extraction, yielding concept-relation-feature triples (e.g., car be fast, car require petrol, car cause pollution), which approximate property-based conceptual representations. Our novel method extracts candidate triples from parsed corpora (Wikipedia and the British National Corpus) using syntactically and grammatically motivated rules, then reweights triples with a linear combination of their frequency and four statistical metrics. We assess our system output in three ways: lexical comparison with norms derived from human-generated property norm data, direct evaluation by four human judges, and a semantic distance comparison with both WordNet similarity data and human-judged concept similarity ratings. Our system offers a viable and performant method of plausible triple extraction: Our lexical comparison shows comparable performance to the current state-of-the-art, while subsequent evaluations exhibit the human-like character of our generated properties.

  19. Automatic segmentation of the bone and extraction of the bone-cartilage interface from magnetic resonance images of the knee

    International Nuclear Information System (INIS)

    Fripp, Jurgen; Crozier, Stuart; Warfield, Simon K; Ourselin, Sebastien

    2007-01-01

    The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat suppressed spoiled gradient recall images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis

  20. Automatic segmentation of the bone and extraction of the bone-cartilage interface from magnetic resonance images of the knee

    Energy Technology Data Exchange (ETDEWEB)

    Fripp, Jurgen [BioMedIA Lab, Autonomous Systems Laboratory, CSIRO ICT Centre, Level 20, 300 Adelaide Street, Brisbane, QLD 4001 (Australia); Crozier, Stuart [School of Information Technology and Electrical Engineering, University of Queensland, St Lucia, QLD 4072 (Australia); Warfield, Simon K [Computational Radiology Laboratory, Harvard Medical School, Children's Hospital Boston, 300 Longwood Avenue, Boston, MA 02115 (United States); Ourselin, Sebastien [BioMedIA Lab, Autonomous Systems Laboratory, CSIRO ICT Centre, Level 20, 300 Adelaide Street, Brisbane, QLD 4001 (Australia)

    2007-03-21

    The accurate segmentation of the articular cartilages from magnetic resonance (MR) images of the knee is important for clinical studies and drug trials into conditions like osteoarthritis. Currently, segmentations are obtained using time-consuming manual or semi-automatic algorithms which have high inter- and intra-observer variabilities. This paper presents an important step towards obtaining automatic and accurate segmentations of the cartilages, namely an approach to automatically segment the bones and extract the bone-cartilage interfaces (BCI) in the knee. The segmentation is performed using three-dimensional active shape models, which are initialized using an affine registration to an atlas. The BCI are then extracted using image information and prior knowledge about the likelihood of each point belonging to the interface. The accuracy and robustness of the approach was experimentally validated using an MR database of fat suppressed spoiled gradient recall images. The (femur, tibia, patella) bone segmentation had a median Dice similarity coefficient of (0.96, 0.96, 0.89) and an average point-to-surface error of 0.16 mm on the BCI. The extracted BCI had a median surface overlap of 0.94 with the real interface, demonstrating its usefulness for subsequent cartilage segmentation or quantitative analysis.

  1. Quantify Water Extraction by TBP/Dodecane via Molecular Dynamics Simulations

    International Nuclear Information System (INIS)

    Khomami, Bamin; Cui, Shengting; De Almeida, Valmor F.

    2013-01-01

    The purpose of this project is to quantify the interfacial transport of water into the most prevalent nuclear reprocessing solvent extractant mixture, namely tri-butyl-phosphate (TBP) and dodecane, via massively parallel molecular dynamics simulations on the most powerful machines available for open research. Specifically, we accomplish this objective by evolving the water/TBP/dodecane system up to 1 ms of elapsed time and validating the simulation results by direct comparison with experimentally measured water solubility in the organic phase. The significance of this effort is to demonstrate for the first time that the combination of emerging simulation tools and state-of-the-art supercomputers can provide quantitative information on par with experimental measurements for solvent extraction systems of relevance to the nuclear fuel cycle. Results: Initially, the isolated single-component and single-phase systems were studied, followed by the two-phase, multicomponent counterpart. Specifically, the systems we studied were: pure TBP; pure n-dodecane; the TBP/n-dodecane mixture; and the complete extraction system, the water-TBP/n-dodecane two-phase system, to gain deep insight into the water extraction process. We fully achieved our goal of simulating the molecular extraction of water molecules into the TBP/n-dodecane mixture up to the saturation point, and obtained favorable comparison with experimental data. Many insights into fundamental molecular-level processes and physics were obtained in the process. Most importantly, we found that the dipole moment of the extracting agent is crucially important in affecting the interface roughness and the extraction rate of water molecules into the organic phase. In addition, we identified shortcomings in the existing OPLS-AA force field potential for long-chain alkanes. The significance of this force field is that it is supposed to be optimized for molecular liquid simulations. We found that it failed for dodecane and

  2. An automatic extraction algorithm of three dimensional shape of brain parenchyma from MR images

    International Nuclear Information System (INIS)

    Matozaki, Takeshi

    2000-01-01

    For the simulation of surgical operations, the extraction of selected regions from MR images is useful. However, this segmentation requires a high level of skill and experience from the technicians. We have developed a unique automatic extraction algorithm for extracting three-dimensional brain parenchyma from MR head images, named the ''three dimensional gray scale clumsy painter method''. In this method, a template having the shape of a pseudo-circle, the so-called clumsy painter (CP), moves along the contour of the selected region and extracts the region surrounded by the contour. This method has advantages compared with morphological filtering and the region growing method. Previously, this method was applied to binary images, but there was a problem in that the results of the extractions varied with the value of the threshold level. We introduced gray-level information of the images to decide the threshold, depending upon the change of image density between the brain parenchyma and CSF. We decided the threshold level by the vector of a map of templates, and changed the map according to the change of image density. As a result, the over-extracted ratio was improved by 36%, and the under-extracted ratio was improved by 20%. (author)

  3. Comparison of methods of DNA extraction for real-time PCR in a model of pleural tuberculosis.

    Science.gov (United States)

    Santos, Ana; Cremades, Rosa; Rodríguez, Juan Carlos; García-Pachón, Eduardo; Ruiz, Montserrat; Royo, Gloria

    2010-01-01

    Molecular methods have been reported to have different sensitivities in the diagnosis of pleural tuberculosis and this may in part be caused by the use of different methods of DNA extraction. Our study compares nine DNA extraction systems in an experimental model of pleural tuberculosis. An inoculum of Mycobacterium tuberculosis was added to 23 pleural liquid samples with different characteristics. DNA was subsequently extracted using nine different methods (seven manual and two automatic) for analysis with real-time PCR. Only two methods were able to detect the presence of M. tuberculosis DNA in all the samples: extraction using columns (Qiagen) and automated extraction with the TNAI system (Roche). The automatic method is more expensive, but requires less time. Almost all the false negatives were because of the difficulty involved in extracting M. tuberculosis DNA, as in general, all the methods studied are capable of eliminating inhibitory substances that block the amplification reaction. The method of M. tuberculosis DNA extraction used affects the results of the diagnosis of pleural tuberculosis by molecular methods. DNA extraction systems that have been shown to be effective in pleural liquid should be used.

  4. Molecular design of highly efficient extractants for separation of lanthanides and actinides by computational chemistry

    International Nuclear Information System (INIS)

    Uezu, Kazuya; Yamagawa, Jun-ichiro; Goto, Masahiro

    2006-01-01

    Novel organophosphorus extractants, which have two functional moieties in their molecular structure, were developed for a transuranium element recycling system using liquid-liquid extraction. The synthesized extractants showed extremely high extractability for lanthanide elements compared to that of commercially available extractants. The extraction equilibrium results suggested that the structural effect of the extractants is one of the key factors in enhancing selectivity and extractability in lanthanide extraction. Furthermore, molecular modeling was carried out to evaluate the extraction properties of the synthesized extractants for lanthanides. Molecular modeling was shown to be very useful for designing new extractants. The new concept of connecting functional moieties with a spacer is very useful and is a promising method for developing novel extractants for the treatment of nuclear fuel. (author)

  5. An automatic device for sample insertion and extraction to/from reactor irradiation facilities

    International Nuclear Information System (INIS)

    Alloni, L.; Venturelli, A.; Meloni, S.

    1990-01-01

    At the previous European TRIGA Users Conference in Vienna, a paper was given describing a new handling tool for irradiated samples at the L.E.N.A. plant. This tool was the first part of an automatic device for the management of samples to be irradiated in the TRIGA Mark II reactor and successively extracted and stored. So far, sample insertion and extraction to/from the irradiation facilities available on the reactor top (central thimble, rotary specimen rack and channel F) has been carried out manually by reactor and health-physics operators using the ''traditional'' fishing pole provided by General Atomic, thus exposing reactor personnel to ''unjustified'' radiation doses. The present paper describes the design and operation of a new device, a ''robot''-type machine which, remotely operated, takes care of sample insertion into the different irradiation facilities, sample extraction after irradiation, and connection to the storage pits already described. The extraction of irradiated samples does not require the presence of reactor personnel on the reactor top and, therefore, radiation doses are strongly reduced. All work from design to construction has been carried out by the personnel of the electronics group of the L.E.N.A. plant. (orig.)

  6. Automatic extraction analysis of the anatomical functional area for normal brain 18F-FDG PET imaging

    International Nuclear Information System (INIS)

    Guo Wanhua; Jiang Xufeng; Zhang Liying; Lu Zhongwei; Li Peiyong; Zhu Chengmo; Zhang Jiange; Pan Jiapu

    2003-01-01

    Using self-designed software for the automatic extraction of brain functional areas, we studied the grey scale distribution of 18F-FDG imaging and the relationships between the 18F-FDG accumulation of the brain anatomic functional areas and the injected 18F-FDG dose, the glucose level, the age, etc. According to the Talairach coordinate system, after rotation, drift and plastic deformation, the 18F-FDG PET images were registered to the Talairach coordinate atlas, and then the ratios of the average grey value of each brain anatomic functional area to that of the whole brain were calculated. Furthermore, the relationships between the 18F-FDG accumulation of every brain anatomic functional area and the injected 18F-FDG dose, the glucose level and the age were tested using a multiple stepwise regression model. After image registration, smoothing and extraction, the main cerebral cortex areas of the 18F-FDG PET brain images could be successfully localized and extracted, such as the frontal lobe, parietal lobe, occipital lobe, temporal lobe, cerebellum, brain ventricles, thalamus and hippocampus. The average ratios to the inner reference of every brain anatomic functional area were 1.01 ± 0.15. By multiple stepwise regression, with the exception of the thalamus and hippocampus, the grey scale of all the brain functional areas was negatively correlated with age, but showed no correlation with blood sugar or dose in any area. For 18F-FDG PET imaging, the brain functional area extraction program could automatically delineate most of the cerebral cortical areas and also successfully support brain blood-flow and metabolic studies, but the extraction of more detailed areas needs further investigation.
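    Once the scan and atlas are co-registered, the reported per-region ratios reduce to a masked mean divided by the whole-brain mean. The sketch below shows that computation on toy volumes; the label codes and region names are hypothetical.

```python
import numpy as np

def regional_uptake_ratios(pet, atlas, labels):
    """Mean grey value of each atlas-defined functional area, normalized by
    the whole-brain mean - the per-region ratio the study reports.

    `pet` and `atlas` are co-registered 3D arrays; `labels` maps region
    names to the integer codes used in the atlas volume (hypothetical here).
    """
    brain_mean = pet[atlas > 0].mean()
    return {name: pet[atlas == code].mean() / brain_mean
            for name, code in labels.items()}

# Toy volumes standing in for a registered 18F-FDG scan and Talairach labels.
rng = np.random.default_rng(0)
atlas = rng.integers(0, 4, size=(16, 16, 16))     # 0 = outside brain
pet = rng.normal(100, 10, size=atlas.shape) + 5 * (atlas == 2)
print(regional_uptake_ratios(pet, atlas,
                             {"frontal": 1, "occipital": 2, "cerebellum": 3}))
```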

  7. The study of automatic brain extraction of basal ganglia based on atlas of Talairach in 18F-FDG PET images

    International Nuclear Information System (INIS)

    Zuo Chantao; Guan Yihui; Zhao Jun; Lin Xiangtong; Wang Jian; Zhang Jiange; Zhang Lu

    2005-01-01

    Objective: To establish a method which can automatically extract the functional areas of the brain basal ganglia. Methods: 18F-fluorodeoxyglucose (FDG) PET images were spatially normalized to Talairach atlas space through two steps, image registration and image deformation. The functional areas were extracted from the three-dimensional PET images based on the coordinates obtained from the atlas; the caudate and putamen were extracted and rendered, and the grey value of each area was normalized to the whole brain. Results: The normalized ratios of the left caudate head, body and tail were 1.02 ± 0.04, 0.92 ± 0.07 and 0.71 ± 0.03, and of the right 0.98 ± 0.03, 0.87 ± 0.04 and 0.71 ± 0.01, respectively. The normalized ratios of the left and right putamen were 1.20 ± 0.06 and 1.20 ± 0.04. The mean grey values of the left and right basal ganglia showed no significant difference (P>0.05). Conclusion: The automatic functional area extraction method based on the Talairach atlas is feasible. (authors)

  8. A Method for Automatic Extracting Intracranial Region in MR Brain Image

    Science.gov (United States)

    Kurokawa, Keiji; Miura, Shin; Nishida, Makoto; Kageyama, Yoichi; Namura, Ikuro

    It is well known that the temporal lobe in MR brain images is used for estimating the grade of Alzheimer-type dementia, but it is difficult to use the region of the temporal lobe alone for this purpose. From the standpoint of supporting medical specialists, this paper proposes a data processing approach for the automatic extraction of the intracranial region from MR brain images. The method is able to eliminate the cranium region with the Laplacian histogram method, and the brainstem with feature points that are related to observations given by a medical specialist. In order to examine the usefulness of the proposed approach, the percentage of the temporal lobe in the intracranial region was calculated. As a result, the percentage of the temporal lobe in the intracranial region over the course of the grades was in agreement with the visual standards for temporal lobe atrophy given by the medical specialist. It became clear that the intracranial region extracted by the proposed method is suitable for estimating the grade of Alzheimer-type dementia.

  9. Predicting the performance of molecularly imprinted polymers: Selective extraction of caffeine by molecularly imprinted solid phase extraction

    Energy Technology Data Exchange (ETDEWEB)

    Farrington, Keith [School of Chemical Sciences, Dublin City University, Glasnevin, Dublin 9 (Ireland); Magner, Edmond [Materials and Surface Science Institute, Chemical and Environmental Sciences Department, University of Limerick, Limerick (Ireland); Regan, Fiona [School of Chemical Sciences, Dublin City University, Glasnevin, Dublin 9 (Ireland)]. E-mail: fiona.regan@dcu.ie

    2006-04-27

    A rational design approach was taken to the planning and synthesis of a molecularly imprinted polymer capable of extracting caffeine (the template molecule) from a standard solution of caffeine, and further from a food sample containing caffeine. Data from NMR titration experiments, in conjunction with a molecular modelling approach, were used to predict the relative ratios of template to functional monomer, and furthermore determined both the choice of solvent (porogen) and the amount used for the study. In addition, the molecular modelling program yielded information regarding the thermodynamic stability of the pre-polymerisation complex. Post-polymerisation analysis of the polymer itself, through analysis of the pore size distribution by BET, yielded significant information regarding the size and distribution of the pores within the polymer matrix. A stepwise procedure is proposed for the development and testing of a molecularly imprinted polymer, using a well-studied compound, caffeine, as a model system. It is shown that both the physical characteristics of a molecularly imprinted polymer (MIP) and the analysis of the pre-polymerisation complex can yield vital information that can predict how well a given MIP will perform.

  10. Predicting the performance of molecularly imprinted polymers: Selective extraction of caffeine by molecularly imprinted solid phase extraction

    International Nuclear Information System (INIS)

    Farrington, Keith; Magner, Edmond; Regan, Fiona

    2006-01-01

    A rational design approach was taken to the planning and synthesis of a molecularly imprinted polymer capable of extracting caffeine (the template molecule) from a standard solution of caffeine, and further from a food sample containing caffeine. Data from NMR titration experiments, in conjunction with a molecular modelling approach, were used to predict the relative ratios of template to functional monomer, and furthermore determined both the choice of solvent (porogen) and the amount used for the study. In addition, the molecular modelling program yielded information regarding the thermodynamic stability of the pre-polymerisation complex. Post-polymerisation analysis of the polymer itself, through analysis of the pore size distribution by BET, yielded significant information regarding the size and distribution of the pores within the polymer matrix. A stepwise procedure is proposed for the development and testing of a molecularly imprinted polymer, using a well-studied compound, caffeine, as a model system. It is shown that both the physical characteristics of a molecularly imprinted polymer (MIP) and the analysis of the pre-polymerisation complex can yield vital information that can predict how well a given MIP will perform.

  11. Role of Artificial Intelligence Techniques (Automatic Classifiers) in Molecular Imaging Modalities in Neurodegenerative Diseases.

    Science.gov (United States)

    Cascianelli, Silvia; Scialpi, Michele; Amici, Serena; Forini, Nevio; Minestrini, Matteo; Fravolini, Mario Luca; Sinzinger, Helmut; Schillaci, Orazio; Palumbo, Barbara

    2017-01-01

    Artificial Intelligence (AI) is a very active Computer Science research field aiming to develop systems that mimic human intelligence, and it is helpful in many human activities, including Medicine. In this review we present some examples of the exploitation of AI techniques, in particular automatic classifiers such as Artificial Neural Networks (ANN), Support Vector Machines (SVM), Classification Trees (ClT) and ensemble methods like Random Forests (RF), able to analyze findings obtained by positron emission tomography (PET) or single-photon emission computed tomography (SPECT) scans of patients with neurodegenerative diseases, in particular Alzheimer's disease. We also focused our attention on techniques applied to preprocess data and reduce their dimensionality via feature selection or projection into a more representative domain (Principal Component Analysis - PCA - and Partial Least Squares - PLS - are examples of such methods); this is a crucial step when dealing with medical data, since it is necessary to compress patient information and retain only the most useful information in order to discriminate subjects into normal and pathological classes. The main literature papers on the application of these techniques to the classification of patients with neurodegenerative disease, extracting data from molecular imaging modalities, are reported, showing that the increasing development of computer-aided diagnosis systems is very promising and can contribute to the diagnostic process.

  12. How does the preparation of rye porridge affect molecular weight distribution of extractable dietary fibers?

    Science.gov (United States)

    Rakha, Allah; Aman, Per; Andersson, Roger

    2011-01-01

    Extractable dietary fiber (DF) plays an important role in nutrition. This study on porridge making with whole grain rye investigated the effect of rest time of flour slurries at room temperature before cooking and amount of flour and salt in the recipe on the content of DF components and molecular weight distribution of extractable fructan, mixed linkage (1→3)(1→4)-β-d-glucan (β-glucan) and arabinoxylan (AX) in the porridge. The content of total DF was increased (from about 20% to 23% of dry matter) during porridge making due to formation of insoluble resistant starch. A small but significant increase in the extractability of β-glucan (P = 0.016) and AX (P = 0.002) due to rest time was also noted. The molecular weight of extractable fructan and AX remained stable during porridge making. However, incubation of the rye flour slurries at increased temperature resulted in a significant decrease in extractable AX molecular weight. The molecular weight of extractable β-glucan decreased greatly during a rest time before cooking, most likely by the action of endogenous enzymes. The amount of salt and flour used in the recipe had small but significant effects on the molecular weight of β-glucan. These results show that whole grain rye porridge made without a rest time before cooking contains extractable DF components maintaining high molecular weights. High molecular weight is most likely of nutritional importance.

  13. How Does the Preparation of Rye Porridge Affect Molecular Weight Distribution of Extractable Dietary Fibers?

    Directory of Open Access Journals (Sweden)

    Roger Andersson

    2011-05-01

    Full Text Available Extractable dietary fiber (DF) plays an important role in nutrition. This study on porridge making with whole grain rye investigated the effect of rest time of flour slurries at room temperature before cooking and amount of flour and salt in the recipe on the content of DF components and molecular weight distribution of extractable fructan, mixed linkage (1→3)(1→4)-β-D-glucan (β-glucan) and arabinoxylan (AX) in the porridge. The content of total DF was increased (from about 20% to 23% of dry matter) during porridge making due to formation of insoluble resistant starch. A small but significant increase in the extractability of β-glucan (P = 0.016) and AX (P = 0.002) due to rest time was also noted. The molecular weight of extractable fructan and AX remained stable during porridge making. However, incubation of the rye flour slurries at increased temperature resulted in a significant decrease in extractable AX molecular weight. The molecular weight of extractable β-glucan decreased greatly during a rest time before cooking, most likely by the action of endogenous enzymes. The amount of salt and flour used in the recipe had small but significant effects on the molecular weight of β-glucan. These results show that whole grain rye porridge made without a rest time before cooking contains extractable DF components maintaining high molecular weights. High molecular weight is most likely of nutritional importance.

  14. Automatic Fontanel Extraction from Newborns' CT Images Using Variational Level Set

    Science.gov (United States)

    Kazemi, Kamran; Ghadimi, Sona; Lyaghat, Alireza; Tarighati, Alla; Golshaeyan, Narjes; Abrishami-Moghaddam, Hamid; Grebe, Reinhard; Gondary-Jouet, Catherine; Wallois, Fabrice

    A realistic head model is needed for the source localization methods used in the study of epilepsy in neonates from electroencephalographic (EEG) scalp measurements. The earliest models consider the head as a series of concentric spheres, each layer corresponding to a different tissue whose conductivity is assumed to be homogeneous. The results of the source reconstruction depend highly on the electric conductivities of the tissues forming the head. The most used model is constituted of three layers (scalp, skull, and intracranial). Most of the major bones of a neonate's skull are ossified at birth but can move slightly relative to each other. This is due to the sutures, fibrous membranes that at this stage of development connect the already ossified flat bones of the neurocranium. These weak parts of the neurocranium are called fontanels. It is thus important to enter the exact geometry of the fontanels and flat bones into a source reconstruction, because they show pronounced differences in conductivity. Computed tomography (CT) imaging provides an excellent tool for non-invasive investigation of the skull, which expresses itself in high contrast to all other tissues, while the fontanels can only be identified as an absence of bone, i.e., gaps in the skull formed by flat bone. Therefore, the aim of this paper is to extract the fontanels from CT images by applying a variational level set method. We applied the proposed method to CT images of five different subjects. The automatically extracted fontanels show good agreement with the manually extracted ones.

  15. Chemical name extraction based on automatic training data generation and rich feature set.

    Science.gov (United States)

    Yan, Su; Spangler, W Scott; Chen, Ying

    2013-01-01

    The automation of extracting chemical names from text has significant value to biomedical and life science research. A major barrier in this task is the difficulty of getting a sizable, good-quality dataset to train a reliable entity extraction model. Another difficulty is the selection of informative features of chemical names, since comprehensive domain knowledge of chemistry nomenclature is required. Leveraging random text generation techniques, we explore the idea of automatically creating training sets for the task of chemical name extraction. Assuming the availability of an incomplete list of chemical names, called a dictionary, we are able to generate well-controlled, random, yet realistic chemical-like training documents. We statistically analyze the construction of chemical names based on the incomplete dictionary, and propose a series of new features, without relying on any domain knowledge. Compared to state-of-the-art models learned from manually labeled data and domain knowledge, our solution shows better or comparable results in annotating real-world data with less human effort. Moreover, we report an interesting observation about the language of chemical names: both the structural and semantic components of chemical names follow a Zipfian distribution, which resembles many natural languages.
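    One way to realize the random-generation idea, given the paper's Zipf observation, is to sample name fragments from a Zipfian distribution and concatenate them into chemical-like tokens. The fragment list and exponent below are illustrative assumptions, not the authors' generator.

```python
import numpy as np

def synthetic_chemical_names(fragments, n_names, rng=None):
    """Generate chemical-like tokens by sampling name fragments from a
    Zipfian distribution, echoing the observation that chemical name
    components are Zipf-distributed. The fragment list is illustrative."""
    rng = rng or np.random.default_rng(0)
    ranks = np.arange(1, len(fragments) + 1)
    probs = 1.0 / ranks                 # Zipf with exponent 1 (assumed)
    probs /= probs.sum()
    names = []
    for _ in range(n_names):
        k = rng.integers(2, 5)          # 2-4 fragments per name
        parts = rng.choice(fragments, size=k, p=probs)
        names.append("".join(parts))
    return names

fragments = ["meth", "eth", "prop", "but", "yl", "ol", "ane", "ene",
             "oxy", "chloro", "fluoro", "benz", "amide", "ate"]
print(synthetic_chemical_names(fragments, 5))
```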

  16. Template-based automatic extraction of the joint space of foot bones from CT scan

    Science.gov (United States)

    Park, Eunbi; Kim, Taeho; Park, Jinah

    2016-03-01

    Clean bone segmentation is critical in studying the joint anatomy for measuring the spacing between the bones. However, separation of the coupled bones in CT images is sometimes difficult due to ambiguous gray values coming from the noise and the heterogeneity of bone materials, as well as narrowing of the joint space. For fine reconstruction of the individual local boundaries, manual operation is a common practice, and segmentation remains a bottleneck. In this paper, we present an automatic method for extracting the joint space by applying graph cut on a Markov random field model to the region of interest (ROI), which is identified by a template of 3D bone structures. The template includes an encoded articular surface which identifies the tight region of the high-intensity bone boundaries together with the fuzzy joint area of interest. The localized shape information from the template model within the ROI effectively separates the nearby bones. By narrowing the ROI down to a region including only two types of tissue, the object extraction problem is reduced to binary segmentation and solved via graph cut. Based on the shape of the joint space marked by the template, the hard constraint is set by initial seeds which are automatically generated from thresholding and morphological operations. The performance and robustness of the proposed method are evaluated on 12 volumes of ankle CT data, where each volume includes a set of 4 tarsal bones (calcaneus, talus, navicular and cuboid).

  17. Algorithm based on regional separation for automatic grain boundary extraction using improved mean shift method

    Science.gov (United States)

    Zhenying, Xu; Jiandong, Zhu; Qi, Zhang; Yamba, Philip

    2018-06-01

    Metallographic microscopy shows that the vast majority of metal materials are composed of many small grains; the grain size of a metal is important for determining the tensile strength, toughness, plasticity, and other mechanical properties. In order to quantitatively evaluate grain size in metals, grain boundaries must be identified in metallographic images. Based on the phenomenon of grain boundary blurring or disconnection in metallographic images, this study develops an algorithm based on regional separation for automatically extracting grain boundaries by an improved mean shift method. Experimental observation shows that the grain boundaries obtained by the proposed algorithm are highly complete and accurate. This research has practical value because the proposed algorithm is suitable for grain boundary extraction from most metallographic images.

  18. Automatic Extraction of Road Markings from Mobile Laser Scanning Data

    Science.gov (United States)

    Ma, H.; Pei, Z.; Wei, Z.; Zhong, R.

    2017-09-01

    Road markings, as critical features in the high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, have important functions in providing guidance and information to moving cars. A mobile laser scanning (MLS) system is an effective way to obtain 3D information about the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method. The basic assumption of the method is that the road surface is smooth, so points with small elevation differences within their neighborhood are considered to be ground points. The ground points are then partitioned into a set of profiles according to the trajectory data. The intensity histogram of the points in each profile is generated to find intensity jumps above a threshold that varies inversely with laser distance. The separated points are used as seed points for intensity-based region growing, so as to obtain complete road markings. We use a point cloud template-matching method to refine the road marking candidates by removing noise clusters with low correlation coefficients. In an experiment with an MLS point set covering about 2 kilometres of a city center, our method provides a promising solution to road markings extraction from MLS data.
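    The range-dependent intensity threshold for seed selection might look like the sketch below, where the threshold decays with laser range because distant returns are weaker; the decay constants and data are invented, not taken from the paper.

```python
import numpy as np

def marking_seeds(intensity, laser_range, base_thresh=40.0):
    """Seed-point selection for one scan profile: keep returns whose
    intensity jumps above a threshold that decays with laser range
    (illustrative constants, not the paper's values)."""
    # Farther targets return weaker signals, so relax the threshold with range.
    thresh = base_thresh / (1.0 + 0.05 * laser_range)
    background = np.median(intensity)
    return np.where(intensity - background > thresh)[0]

rng = np.random.default_rng(0)
ranges_m = rng.uniform(2, 30, 500)                  # ranges in metres
inten = rng.normal(20, 3, 500)
inten[100:120] += 50                                # a painted lane marking
print("seed indices:", marking_seeds(inten, ranges_m)[:10])
```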

  19. Derivation of groundwater flow-paths based on semi-automatic extraction of lineaments from remote sensing data

    OpenAIRE

    U. Mallast; R. Gloaguen; S. Geyer; T. Rödiger; C. Siebert

    2011-01-01

    In this paper we present a semi-automatic method to infer groundwater flow-paths based on the extraction of lineaments from digital elevation models. This method is especially suitable in remote and inaccessible areas where in-situ data are scarce. The combined method of linear filtering and object-based classification provides a lineament map with a high degree of accuracy. Subsequently, lineaments are differentiated into geological and morphological lineaments using auxili...

  20. A semi-automatic method for extracting thin line structures in images as rooted tree network

    Energy Technology Data Exchange (ETDEWEB)

    Brazzini, Jacopo [Los Alamos National Laboratory; Dillard, Scott [Los Alamos National Laboratory; Soille, Pierre [EC - JRC

    2010-01-01

    This paper addresses the problem of the semi-automatic extraction of line networks in digital images - e.g., road or hydrographic networks in satellite images, or blood vessels in medical images. For that purpose, we improve a generic method derived from morphological and hydrological concepts, consisting of minimum cost path estimation and flow simulation. While this approach fully exploits the local contrast and shape of the network, as well as its arborescent nature, we further incorporate local directional information about the structures in the image. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Then, geodesic propagation from a given seed with this metric is combined with hydrological operators for overland flow simulation to extract the line network. The algorithm is demonstrated for the extraction of blood vessels in a retina image and of a river network in a satellite image.
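    The directional ingredient of the method, the eigen-decomposition of the gradient structure tensor, can be computed per pixel as sketched below (SciPy assumed); the anisotropic metric itself and the geodesic/flow steps are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_orientation(img, sigma=2.0):
    """Per-pixel dominant orientation and anisotropy from the gradient
    structure tensor - the ingredients of the anisotropic metric above."""
    gx, gy = sobel(img, axis=1), sobel(img, axis=0)
    Jxx = gaussian_filter(gx * gx, sigma)
    Jxy = gaussian_filter(gx * gy, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    # Eigenvalues of the 2x2 tensor, in closed form.
    tr, det = Jxx + Jyy, Jxx * Jyy - Jxy**2
    disc = np.sqrt(np.maximum(tr**2 / 4 - det, 0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    orientation = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)
    anisotropy = (l1 - l2) / (l1 + l2 + 1e-12)       # ~1 = strongly line-like
    return orientation, anisotropy

img = np.zeros((64, 64)); img[30:34, :] = 1.0        # a horizontal "vessel"
ori, ani = structure_tensor_orientation(img)
print(f"anisotropy on the line: {ani[31, 32]:.2f}")
```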

  1. ROADS CENTRE-AXIS EXTRACTION IN AIRBORNE SAR IMAGES: AN APPROACH BASED ON ACTIVE CONTOUR MODEL WITH THE USE OF SEMI-AUTOMATIC SEEDING

    Directory of Open Access Journals (Sweden)

    R. G. Lotte

    2013-05-01

    Full Text Available Research dealing with computational methods for road extraction has increased considerably in the last two decades. This procedure is usually performed on optical or microwave sensor (radar) imagery. Radar images offer advantages when compared to optical ones, for they allow the acquisition of scenes regardless of atmospheric and illumination conditions, besides the possibility of surveying regions where the terrain is hidden by the vegetation canopy, among others. Cartographic mapping based on these images is often accomplished manually, requiring considerable time and effort from the human interpreter. Maps for detecting new roads or updating the existing road network are among the most important cartographic products to date. There are currently many studies involving the extraction of roads by means of automatic or semi-automatic approaches. Each of them presents different solutions for different problems, making this task a scientific issue that is still open. One of the preliminary steps in road extraction can be the seeding of points belonging to roads, which can be done using different methods with diverse levels of automation. The identified seed points are interpolated to form the initial road network and are then used as input for an extraction method properly speaking. The present work introduces an innovative hybrid method for the extraction of road centre-axes in a synthetic aperture radar (SAR) airborne image. Initially, candidate points are fully automatically seeded using Self-Organizing Maps (SOM), followed by a pruning process based on specific metrics. The centre-axes are then detected by an open-curve active contour model (snakes). The obtained results were evaluated for quality with respect to completeness, correctness and redundancy.

  2. Achieving Accurate Automatic Sleep Staging on Manually Pre-processed EEG Data Through Synchronization Feature Extraction and Graph Metrics.

    Science.gov (United States)

    Chriskos, Panteleimon; Frantzidis, Christos A; Gkivogkli, Polyxeni T; Bamidis, Panagiotis D; Kourtidou-Papadeli, Chrysoula

    2018-01-01

    Sleep staging, the process of assigning labels to epochs of sleep depending on the stage of sleep to which they belong, is an arduous, time-consuming and error-prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper a pre-processing and subsequent sleep staging pipeline for the sleep analysis of electroencephalographic signals is described. Two novel methods of functional connectivity estimation (Synchronization Likelihood/SL and Relative Wavelet Entropy/RWE) are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders signals suitable for further analysis is initially described. Then, two methods that rely on extracting synchronization features from electroencephalographic recordings to achieve computerized sleep staging are proposed, based on bivariate features which provide a functional overview of the brain network, contrary to most proposed methods that rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved through the presented feature extraction methods by training classifiers, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments in a randomized, controlled bed-rest study, which was organized by the European Space Agency and conducted in the "ENVIHAB" facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains high accuracy rates of over 90%, based on ground truth resulting from manual sleep staging by two experienced sleep experts. It can therefore be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging.
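    Of the two synchronization features, Relative Wavelet Entropy is straightforward to sketch: compare the normalized wavelet subband energy distributions of two channels. The snippet below assumes the PyWavelets package and illustrative wavelet/level choices; it is not the authors' pipeline.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def subband_energy_dist(x, wavelet="db4", level=5):
    """Normalized energy per wavelet subband for one EEG channel."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    e = np.array([np.sum(c**2) for c in coeffs])
    return e / e.sum()

def relative_wavelet_entropy(x, y):
    """RWE between two channels' subband energy distributions - a sketch of
    one of the two synchronization features compared in the paper."""
    p, q = subband_energy_dist(x), subband_energy_dist(y)
    return float(np.sum(p * np.log((p + 1e-12) / (q + 1e-12))))

fs = 128
t = np.arange(0, 4, 1 / fs)
ch1 = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(t.size)
ch2 = np.sin(2 * np.pi * 10 * t + 0.5) + 0.2 * np.random.randn(t.size)
print("RWE:", relative_wavelet_entropy(ch1, ch2))
```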

  3. Automatic extraction of gene ontology annotation and its correlation with clusters in protein networks

    Directory of Open Access Journals (Sweden)

    Mazo Ilya

    2007-07-01

    Full Text Available Abstract Background Uncovering the cellular roles of a protein is a task of tremendous importance and complexity that requires dedicated experimental work as well as often sophisticated data mining and processing tools. Protein functions, often referred to as its annotations, are believed to manifest themselves through the topology of the networks of inter-protein interactions. In particular, there is a growing body of evidence that proteins performing the same function are more likely to interact with each other than with proteins with other functions. However, since functional annotation and protein network topology are often studied separately, the direct relationship between them has not been comprehensively demonstrated. In addition to having general biological significance, such a demonstration would further validate the data extraction and processing methods used to compose protein annotation and protein-protein interaction datasets. Results We developed a method for automatic extraction of protein functional annotation from scientific text based on Natural Language Processing (NLP) technology. For the protein annotation extracted from the entire PubMed, we evaluated the precision and recall rates, and compared the performance of the automatic extraction technology to that of manual curation used in public Gene Ontology (GO) annotation. In the second part of our presentation, we report a large-scale investigation into the correspondence between communities in the literature-based protein networks and GO annotation groups of functionally related proteins. We found a comprehensive two-way match: proteins within biological annotation groups form significantly more densely linked network clusters than expected by chance and, conversely, densely linked network communities exhibit a pronounced non-random overlap with GO groups. We also expanded the publicly available GO biological process annotation using the relations extracted by our NLP technology.
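
    The cluster-versus-GO-group comparison reported here boils down to testing whether an observed overlap is larger than chance. A standard way to score this, sketched below with SciPy's hypergeometric tail probability, is an assumption about the statistic, as the record does not name the exact test used.

        from scipy.stats import hypergeom

        def overlap_pvalue(cluster, go_group, n_universe):
            """P(overlap >= observed) when len(cluster) proteins are drawn at
            random from a universe containing len(go_group) annotated proteins."""
            observed = len(set(cluster) & set(go_group))
            return hypergeom.sf(observed - 1, n_universe, len(go_group), len(cluster))

        # Example with hypothetical sizes: a 40-protein cluster sharing 12 members
        # with a 100-protein GO group in a 5,000-protein network.
        print(overlap_pvalue(range(40), list(range(28, 128)), 5000))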

  4. AN EFFICIENT METHOD FOR AUTOMATIC ROAD EXTRACTION BASED ON MULTIPLE FEATURES FROM LiDAR DATA

    Directory of Open Access Journals (Sweden)

    Y. Li

    2016-06-01

    Full Text Available Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provide three-dimensional (3D) points with fewer occlusions and smaller shadows. Elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data have some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least-squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and the Hough transform), proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the “Urban Classification and 3D Building Reconstruction” project, was selected. The experimental results show that our method achieves the same performance in less time in road extraction using LiDAR data.

  5. An Efficient Method for Automatic Road Extraction Based on Multiple Features from LiDAR Data

    Science.gov (United States)

    Li, Y.; Hu, X.; Guan, H.; Liu, P.

    2016-06-01

    Road extraction in urban areas is a difficult task due to complicated patterns and many contextual objects. LiDAR data directly provide three-dimensional (3D) points with fewer occlusions and smaller shadows. Elevation information and surface roughness are distinguishing features for separating roads. However, LiDAR data have some disadvantages that are not beneficial to object extraction, such as the irregular distribution of point clouds and the lack of clear road edges. To address these problems, this paper proposes an automatic road centerline extraction method with three major steps: (1) road center point detection based on multiple-feature spatial clustering for separating road points from ground points, (2) local principal component analysis with least-squares fitting for extracting the primitives of road centerlines, and (3) hierarchical grouping for connecting primitives into a complete road network. Compared with MTH (consisting of the Mean shift algorithm, Tensor voting, and the Hough transform), proposed in our previous article, this method greatly reduces the computational cost. To evaluate the proposed method, the Vaihingen data set, a benchmark provided by ISPRS for the "Urban Classification and 3D Building Reconstruction" project, was selected. The experimental results show that our method achieves the same performance in less time in road extraction using LiDAR data.
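
    Step (2) of this pipeline relies on local principal component analysis to decide whether a neighbourhood of road points is line-like. A minimal version of that test is sketched below in NumPy; the linearity measure and its use as a primitive filter are illustrative assumptions, not the paper's exact criterion.

        import numpy as np

        def principal_direction(neighborhood):
            """PCA of a local neighbourhood (n, 3): returns an eigenvalue-based
            linearity score in [0, 1] and the dominant direction."""
            centered = neighborhood - neighborhood.mean(axis=0)
            cov = centered.T @ centered / len(neighborhood)
            evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
            linearity = (evals[2] - evals[1]) / evals[2]
            return linearity, evecs[:, 2]               # largest-eigenvalue axis

        # Neighbourhoods with linearity close to 1 are treated as centerline
        # primitives and later fitted by least squares and grouped hierarchically.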

  6. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction

    OpenAIRE

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2011-01-01

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. This tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed.

  7. Molecular Dynamics Simulation of Mahkota Dewa (Phaleria Macrocarpa) Extract in Subcritical Water Extraction Process

    Science.gov (United States)

    Hashim, N. A.; Mudalip, S. K. Abdul; Harun, N.; Che Man, R.; Sulaiman, S. Z.; Arshad, Z. I. M.; Shaarani, S. M.

    2018-05-01

    Mahkota Dewa (Phaleria Macrocarpa), a good source of saponins, flavonoids, polyphenols, alkaloids, and mangiferin, has an extensive range of medicinal effects. Intermolecular interactions between solute and solvents, such as hydrogen bonding, are considered an important factor affecting the extraction of bioactive compounds. In this work, molecular dynamics simulation was performed to elucidate the hydrogen bonding that exists between Mahkota Dewa extracts and water during the subcritical extraction process. A bioactive compound in the Mahkota Dewa extract, namely mangiferin, was selected as a model compound. The simulation was performed at 373 K and 4.0 MPa using the COMPASS force field and the Ewald summation method available in the Material Studio 7.0 simulation package. The radial distribution functions (RDF) between mangiferin and water signify the presence of hydrogen bonding in the extraction process. The simulation of the binary mangiferin:water mixture shows that strong hydrogen bonding was formed. It is suggested that the intermolecular interaction OH2O••HMR4(OH1) is responsible for the mangiferin extraction process.
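
    The radial distribution functions mentioned in the record can be computed from simulation frames as below. This NumPy sketch assumes a single frame, a cubic periodic box, and one site per species, which simplifies the actual mangiferin-water analysis.

        import numpy as np

        def rdf(positions, box, r_max, n_bins=100):
            """g(r) for one frame: positions (n, 3) in a cubic box with side box."""
            n = len(positions)
            diff = positions[:, None, :] - positions[None, :, :]
            diff -= box * np.round(diff / box)                  # minimum image
            dist = np.linalg.norm(diff, axis=2)[np.triu_indices(n, k=1)]
            hist, edges = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
            r = 0.5 * (edges[1:] + edges[:-1])
            shell_vol = 4.0 * np.pi * r**2 * (edges[1] - edges[0])
            ideal = shell_vol * (n / box**3) * n / 2.0          # uncorrelated pairs
            return r, hist / ideal

        # A sharp first g(r) peak near typical O...H hydrogen-bond distances is
        # the usual signature of hydrogen bonding between the two species.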

  8. An Efficient Platform for the Automatic Extraction of Patterns in Native Code

    Directory of Open Access Journals (Sweden)

    Javier Escalada

    2017-01-01

    Full Text Available Different software tools, such as decompilers, code quality analyzers, recognizers of packed executable files, authorship analyzers, and malware detectors, search for patterns in binary code. The use of machine learning algorithms, trained with programs taken from the huge number of applications in the existing open source code repositories, allows finding patterns not detected with the manual approach. To this end, we have created a versatile platform for the automatic extraction of patterns from native code, capable of processing big binary files. Its implementation has been parallelized, providing important runtime performance benefits for multicore architectures. Compared to single-processor execution, the average performance improvement obtained with the best configuration is a factor of 3.5, against a maximum theoretical gain of a factor of 4.

  9. Design and implementation of an automatic control module for the volume extraction of a 99mTc generator

    International Nuclear Information System (INIS)

    Lopez, Yon; Urquizo, Rafael; Gago, Javier; Mendoza, Pablo

    2014-01-01

    A module for the automatic extraction of volumes from 0.05 mL to 1 mL has been developed using a 3D printer, with acrylonitrile butadiene styrene (ABS) as the base material. The design allows automation of the eluate input and ejection processes of 99mTc in the 99Mo/99mTc generator prototype; its use in other systems is feasible due to its high degree of versatility, depending on the selection of the main components: a precision syringe and a multi-way solenoid valve. An accuracy equivalent to that of commercial equipment has been obtained, but at lower cost. This article describes the mechanical design, the design calculations of the movement mechanism, the electronics and the automatic syringe dispenser control. (authors)

  10. AUTOMATIC EXTRACTION OF ROAD MARKINGS FROM MOBILE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    H. Ma

    2017-09-01

    Full Text Available Road markings, critical features in the high-definition maps required by Advanced Driver Assistance Systems (ADAS) and self-driving technology, have important functions in providing guidance and information to moving cars. Mobile laser scanning (MLS) systems are an effective way to obtain 3D information of the road surface, including road markings, at highway speeds and at less than traditional survey costs. This paper presents a novel method to automatically extract road markings from MLS point clouds. Ground points are first filtered from the raw input point clouds using a neighborhood elevation consistency method. The basic assumption of the method is that the road surface is smooth: points with a small elevation difference between neighbors are considered to be ground points. The ground points are then partitioned into a set of profiles according to trajectory data. The intensity histogram of the points in each profile is generated to find intensity jumps above a certain threshold, which varies inversely with laser distance. The separated points are used as seeds for intensity-based region growing so as to obtain complete road markings. We use a point cloud template-matching method to refine the road marking candidates by removing noise clusters with low correlation coefficients. In experiments with an MLS point set covering about 2 kilometres of a city center, our method provides a promising solution to road markings extraction from MLS data.
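
    The seed-then-grow step can be pictured with the short sketch below: starting from high-intensity seed points, neighbouring points are absorbed while their intensity stays close. The neighbour structure (for example, from scipy.spatial.cKDTree) and the tolerance value are assumptions, not the paper's settings.

        from collections import deque
        import numpy as np

        def region_grow(intensity, neighbors, seeds, tol=5.0):
            """Grow marking regions from seed indices; neighbors[i] lists the
            point indices adjacent to point i (e.g., k nearest in a k-d tree)."""
            marked = np.zeros(len(intensity), dtype=bool)
            marked[list(seeds)] = True
            queue = deque(seeds)
            while queue:
                i = queue.popleft()
                for j in neighbors[i]:
                    if not marked[j] and abs(intensity[j] - intensity[i]) <= tol:
                        marked[j] = True
                        queue.append(j)
            return np.flatnonzero(marked)   # indices of candidate marking points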

  11. Solvent extraction of cerium (III) with high molecular weight amines

    International Nuclear Information System (INIS)

    Chatterjee, A.; Basu, S.

    1992-01-01

    The use of high molecular weight amines in the extraction of cerium (III) as an EDTA complex from neutral aqueous medium is reported. The extraction conditions were optimised by studying the effects of several variables, such as the concentration of amine and EDTA, pH, nature of diluents, etc. The method has been applied to the determination of cerium in a few mineral samples. (author). 7 refs., 5 tabs

  12. Rapid methods for the extraction and archiving of molecular grade fungal genomic DNA.

    Science.gov (United States)

    Borman, Andrew M; Palmer, Michael; Johnson, Elizabeth M

    2013-01-01

    The rapid and inexpensive extraction of fungal genomic DNA that is of sufficient quality for molecular approaches is central to the molecular identification, epidemiological analysis, taxonomy, and strain typing of pathogenic fungi. Although many commercially available and in-house extraction procedures do eliminate the majority of contaminants that commonly inhibit molecular approaches, the inherent difficulties in breaking fungal cell walls lead to protocols that are labor intensive and that routinely take several hours to complete. Here we describe several methods that we have developed in our laboratory that allow the extremely rapid and inexpensive preparation of fungal genomic DNA.

  13. DNA extraction from sea anemone (Cnidaria: Actiniaria) tissues for molecular analyses

    Directory of Open Access Journals (Sweden)

    Pinto S.M.

    2000-01-01

    Full Text Available A specific DNA extraction method for sea anemones is described, in which extraction of total DNA from eight species of sea anemones and one species of corallimorpharian was achieved by changing the standard extraction protocols. DNA extraction from sea anemone tissue is made difficult both by the tissue consistency and by the presence of symbiotic zooxanthellae. The technique described here is an efficient way to avoid problems of DNA contamination and to obtain large amounts of purified, intact DNA which can be used in different kinds of molecular analyses.

  14. Developing an Intelligent Automatic Appendix Extraction Method from Ultrasonography Based on Fuzzy ART and Image Processing

    Directory of Open Access Journals (Sweden)

    Kwang Baek Kim

    2015-01-01

    Full Text Available Ultrasound examination (US) plays a key role in the diagnosis and management of patients with clinically suspected appendicitis, which is the most common abdominal surgical emergency. Among the various sonographic findings of appendicitis, the outer diameter of the appendix is the most important. Therefore, clear delineation of the appendix on US images is essential. In this paper, we propose a new intelligent method to extract the appendix automatically from abdominal sonographic images, as a basic building block of an intelligent tool for medical practitioners. Knowing that the appendix is located in the lower organ area below the bottom fascia line, we conduct a series of image processing techniques to find the fascia line correctly. Then we apply the fuzzy ART learning algorithm to the organ area in order to extract the appendix accurately. The experiment verifies that the proposed method is highly accurate (successful in 38 out of 40 cases) in extracting the appendix.

  15. Enhancing interpretability of automatically extracted machine learning features: application to a RBM-Random Forest system on brain lesion segmentation.

    Science.gov (United States)

    Pereira, Sérgio; Meier, Raphael; McKinley, Richard; Wiest, Roland; Alves, Victor; Silva, Carlos A; Reyes, Mauricio

    2018-02-01

    Machine learning systems are achieving better performances at the cost of becoming increasingly complex. However, because of that, they become less interpretable, which may cause some distrust by the end-user of the system. This is especially important as these systems are pervasively being introduced to critical domains, such as the medical field. Representation Learning techniques are general methods for automatic feature computation. Nevertheless, these techniques are regarded as uninterpretable "black boxes". In this paper, we propose a methodology to enhance the interpretability of automatically extracted machine learning features. The proposed system is composed of a Restricted Boltzmann Machine for unsupervised feature learning, and a Random Forest classifier, which are combined to jointly consider existing correlations between imaging data, features, and target variables. We define two levels of interpretation: global and local. The former is devoted to understanding if the system learned the relevant relations in the data correctly, while the latter is focused on predictions performed on a voxel- and patient-level. In addition, we propose a novel feature importance strategy that considers both imaging data and target variables, and we demonstrate the ability of the approach to leverage the interpretability of the obtained representation for the task at hand. We evaluated the proposed methodology in brain tumor segmentation and penumbra estimation in ischemic stroke lesions. We show the ability of the proposed methodology to unveil information regarding relationships between imaging modalities and extracted features and their usefulness for the task at hand. In both clinical scenarios, we demonstrate that the proposed methodology enhances the interpretability of automatically learned features, highlighting specific learning patterns that resemble how an expert extracts relevant data from medical images.

  16. A quality score for coronary artery tree extraction results

    Science.gov (United States)

    Cao, Qing; Broersen, Alexander; Kitslaar, Pieter H.; Lelieveldt, Boudewijn P. F.; Dijkstra, Jouke

    2018-02-01

    Coronary artery trees (CATs) are often extracted to aid the fully automatic analysis of coronary artery disease on coronary computed tomography angiography (CCTA) images. Automatically extracted CATs often miss some arteries or include wrong extractions which require manual corrections before performing successive steps. For analyzing a large number of datasets, a manual quality check of the extraction results is time-consuming. This paper presents a method to automatically calculate quality scores for extracted CATs in terms of clinical significance of the extracted arteries and the completeness of the extracted CAT. Both right dominant (RD) and left dominant (LD) anatomical statistical models are generated and exploited in developing the quality score. To automatically determine which model should be used, a dominance type detection method is also designed. Experiments are performed on the automatically extracted and manually refined CATs from 42 datasets to evaluate the proposed quality score. In 39 (92.9%) cases, the proposed method is able to measure the quality of the manually refined CATs with higher scores than the automatically extracted CATs. In a 100-point scale system, the average scores for automatically and manually refined CATs are 82.0 (+/-15.8) and 88.9 (+/-5.4) respectively. The proposed quality score will assist the automatic processing of the CAT extractions for large cohorts which contain both RD and LD cases. To the best of our knowledge, this is the first time that a general quality score for an extracted CAT is presented.

  17. Automatic extraction of via in the CT image of PCB

    Science.gov (United States)

    Liu, Xifeng; Hu, Yuwei

    2018-04-01

    In modern industry, the nondestructive testing of printed circuit boards (PCBs) can effectively prevent system failures and is becoming more and more important. In order to detect vias in a PCB automatically, accurately and reliably based on CT images, a novel algorithm for via extraction based on weighted stacking, combined with the morphologic character of vias, is designed. The slice data in the vertical direction of the PCB are superimposed to enhance the via targets. The OTSU algorithm is used to segment the slice image; OTSU thresholding of gray-level images is efficient for separating an image into two classes where two fairly distinct classes exist in the image. The Randomized Hough Transform was used to locate the via regions in the segmented binary image. Then 3D reconstruction of the vias based on sequential slice images was done by volume rendering. The accuracy of via positioning and detection from CT images of a PCB was demonstrated with the proposed algorithm. It was found that the method has good veracity and stability for detecting vias in three dimensions.
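
    Since the record leans on Otsu's global thresholding, a compact NumPy version is sketched below for 8-bit slices. It returns the gray level that maximizes between-class variance, which is the standard formulation rather than this paper's specific code.

        import numpy as np

        def otsu_threshold(image):
            """Gray level maximizing between-class variance for an 8-bit image."""
            hist, _ = np.histogram(image.ravel(), bins=256, range=(0, 256))
            p = hist / hist.sum()
            omega = np.cumsum(p)                        # class-0 probability
            mu = np.cumsum(p * np.arange(256))          # cumulative mean
            mu_total = mu[-1]
            with np.errstate(divide="ignore", invalid="ignore"):
                sigma_b2 = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
            return int(np.nanargmax(sigma_b2))

        # binary = slice_image > otsu_threshold(slice_image) would then feed the
        # Randomized Hough Transform that locates circular via cross-sections.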

  18. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction.

    Science.gov (United States)

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2010-11-13

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. This tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested our tool on its impact to the task of PPI extraction and it improved the f-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.

  19. Subcritical Butane Extraction of Wheat Germ Oil and Its Deacidification by Molecular Distillation

    Directory of Open Access Journals (Sweden)

    Jinwei Li

    2016-12-01

    Full Text Available Extraction and deacidification are important stages in wheat germ oil (WGO) production. Crude WGO was extracted using subcritical butane extraction (SBE) and compared with traditional solvent extraction (SE) and supercritical carbon dioxide extraction (SCE) based on yield, chemical indices and fatty acid profile. Furthermore, the effects of the molecular distillation temperature on the quality of WGO were also investigated in this study. Results indicated that WGO extracted by SBE has a higher yield of 9.10% and better quality; at the same time, its fatty acid composition shows no significant difference compared with that of SE and SCE. The molecular distillation experiment showed that the acid value, peroxide value and p-anisidine value of WGO were reduced as the evaporation temperature increased, while the contents of the active constituents tocopherol, polyphenols and phytosterols simultaneously decreased. Generally, a distillation temperature of 150 °C is an appropriate condition for WGO deacidification, with a high deacidification efficiency of 77.78% and a high retention rate of active constituents.

  20. Automatically Extracting Typical Syntactic Differences from Corpora

    NARCIS (Netherlands)

    Wiersma, Wybo; Nerbonne, John; Lauttamus, Timo

    We develop an aggregate measure of syntactic difference for automatically finding common syntactic differences between collections of text. With the use of this measure, it is possible to mine for differences between, for example, the English of learners and natives, or between related dialects. If

  1. Historical Patterns Based on Automatically Extracted Data: the Case of Classical Composers

    DEFF Research Database (Denmark)

    Borowiecki, Karol; O'Hagan, John

    2012-01-01

    The purpose of this paper is to demonstrate the potential for generating interesting aggregate data on certain aspects of the lives of thousands of composers, and indeed other creative groups, from large on-line dictionaries, and to be able to do so relatively quickly. A purpose-built java application that automatically extracts and processes information was developed to generate data on the birth location, occupations and importance (using word count methods) of over 12,000 composers over six centuries. Quantitative measures of the relative importance of different types of music and of the different music instruments over the centuries were also generated. Finally, quantitative indicators of the importance of different cities over the different centuries in the lives of these composers are constructed. A range of interesting findings emerge in relation to all of these aspects of the lives of the composers.

  2. Automatic three-dimensional rib centerline extraction from CT scans for enhanced visualization and anatomical context

    Science.gov (United States)

    Ramakrishnan, Sowmya; Alvino, Christopher; Grady, Leo; Kiraly, Atilla

    2011-03-01

    We present a complete automatic system to extract 3D centerlines of ribs from thoracic CT scans. Our rib centerline system determines the positional information for the rib cage consisting of extracted rib centerlines, spinal canal centerline, pairing and labeling of ribs. We show an application of this output to produce an enhanced visualization of the rib cage by the method of Kiraly et al., in which the ribs are digitally unfolded along their centerlines. The centerline extraction consists of three stages: (a) pre-trace processing for rib localization, (b) rib centerline tracing, and (c) post-trace processing to merge the rib traces. Then we classify ribs from non-ribs and determine anatomical rib labeling. Our novel centerline tracing technique uses the Random Walker algorithm to segment the structural boundary of the rib in successive 2D cross sections orthogonal to the longitudinal direction of the ribs. Then the rib centerline is progressively traced along the rib using a 3D Kalman filter. The rib centerline extraction framework was evaluated on 149 CT datasets with varying slice spacing, dose, and under a variety of reconstruction kernels. The results of the evaluation are presented. The extraction takes approximately 20 seconds on a modern radiology workstation and performs robustly even in the presence of partial volume effects or rib pathologies such as bone metastases or fractures, making the system suitable for assisting clinicians in expediting routine rib reading for oncology and trauma applications.
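
    The progressive tracing described here alternates a 2D cross-section segmentation with a Kalman prediction of the next centerline point. One predict/update cycle of a constant-velocity filter is sketched below; the state model and noise magnitudes are assumptions, since the record does not give them.

        import numpy as np

        def kalman_step(x, P, z, dt=1.0, q=1e-3, r=1e-1):
            """One cycle of a constant-velocity Kalman filter in 3D.
            State x = [px, py, pz, vx, vy, vz]; z = measured rib-center point."""
            F = np.eye(6); F[:3, 3:] = dt * np.eye(3)    # position += velocity*dt
            H = np.zeros((3, 6)); H[:, :3] = np.eye(3)   # only position observed
            Q, R = q * np.eye(6), r * np.eye(3)
            x, P = F @ x, F @ P @ F.T + Q                # predict next center
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R) # Kalman gain
            x = x + K @ (z - H @ x)                      # correct with measurement
            P = (np.eye(6) - K @ H) @ P
            return x, P   # the predicted position seeds the next cross-section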

  3. Development of andrographolide molecularly imprinted polymer for solid-phase extraction

    Science.gov (United States)

    Yin, Xiaoying; Liu, Qingshan; Jiang, Yifan; Luo, Yongming

    2011-06-01

    A method employing a molecularly imprinted polymer (MIP) as a selective sorbent for solid-phase extraction (SPE) to pretreat samples was developed. The polymers were prepared by precipitation polymerization with andrographolide as the template molecule. The structure of the MIP was characterized and its static adsorption capacity was measured using the Scatchard equation. In comparison with a C18-SPE and a non-imprinted polymer (NIP) SPE column, the MIP-SPE column displays high selectivity and good affinity for andrographolide and dehydroandrographolide in the extract of the herb Andrographis paniculata (Burm.f.) Nees (APN). The MIP-SPE column capacity was 11.9 ± 0.6 μmol/g and 12.1 ± 0.5 μmol/g for andrographolide and dehydroandrographolide, respectively, 2-3 times higher than that of the other two columns. The precision and accuracy of the developed method were satisfactory, with recoveries between 96.4% and 103.8% (RSD 3.1-4.3%, n = 5) and 96.0% and 104.2% (RSD 2.9-3.7%, n = 5) for andrographolide and dehydroandrographolide, respectively. Various real samples were employed to confirm the feasibility of the method. The developed method demonstrates the potential of molecularly imprinted solid-phase extraction for rapid, selective, and effective sample pretreatment.
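
    The Scatchard analysis used to characterize the MIP's static adsorption reduces to a linear fit: plotting B/F against B gives slope -1/Kd and intercept Bmax/Kd. The sketch below uses hypothetical isotherm numbers purely for illustration.

        import numpy as np

        # Hypothetical binding data: bound template B (umol/g) vs. free F (mmol/L)
        B = np.array([2.1, 4.0, 6.2, 8.1, 9.8, 11.0])
        F = np.array([0.05, 0.12, 0.25, 0.45, 0.80, 1.30])

        slope, intercept = np.polyfit(B, B / F, 1)   # Scatchard plot: B/F versus B
        Kd = -1.0 / slope                            # dissociation constant
        Bmax = intercept * Kd                        # apparent binding capacity
        print(f"Kd = {Kd:.3g} mmol/L, Bmax = {Bmax:.3g} umol/g")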

  4. Extraction Kinetics and Molecular Size Fractionation of Humic Substances From Two Brazilian Soils

    Directory of Open Access Journals (Sweden)

    Dick Deborah Pinheiro

    1999-01-01

    Full Text Available In the present study, the extraction behaviour of humic substances (HS) from an Oxisol and a Mollisol from South Brazil, using 0.1 and 0.5 mol L-1 NaOH and 0.15 mol L-1 neutral pyrophosphate solutions, respectively, was systematically studied. The kinetics and efficiency of HS extraction were evaluated by means of UV/Vis spectroscopy. The isolated humic acids (HA) and fulvic acids (FA) were size-classified by multistage ultrafiltration (six fractions in the molecular weight range of 1 to 100 kDa). The obtained data show that the HS extraction yield depended not only on the extractant, but also on the soil type. Within 3 h approximately 90% of the soluble HS could be extracted, following complex extraction kinetics for both methods, and little or no structural modification was verified, as observed from their stable extinction ratio E350/E550. In the Mollisol the pyrophosphate extraction was more effective, suggesting that a great part of the HS occurred as macromolecules bonded to clay minerals and aggregated among themselves through cationic bridges. In the Oxisol a higher HS yield was obtained with the alkaline method, presumably due to HS fixation onto the oxide surface by H-bonds and/or surface complexation reactions. In general, HS extracted by the pyrophosphate procedure showed higher molecular weights than those extracted by NaOH.

  5. Automatic structural scene digitalization.

    Science.gov (United States)

    Tang, Rui; Wang, Yuhan; Cosker, Darren; Li, Wenbin

    2017-01-01

    In this paper, we present an automatic system for the analysis and labeling of structural scenes, i.e., floor plan drawings in Computer-aided Design (CAD) format. The proposed system applies a fusion strategy to detect and recognize various components of CAD floor plans, such as walls, doors, windows and other ambiguous assets. Technically, a general rule-based filter parsing method is first adopted to extract effective information from the original floor plan. Then, an image-processing based recovery method is employed to correct the information extracted in the first step. Our proposed method is fully automatic and real-time. Such an analysis system provides high accuracy and has also been evaluated on a public website that, on average, logs more than ten thousand effective uses per day and reaches a relatively high satisfaction rate.

  6. Automatic extraction of corpus callosum from midsagittal head MR image and examination of Alzheimer-type dementia objective diagnostic system in feature analysis

    International Nuclear Information System (INIS)

    Kaneko, Tomoyuki; Kodama, Naoki; Kaeriyama, Tomoharu; Fukumoto, Ichiro

    2004-01-01

    We studied the objective diagnosis of Alzheimer-type dementia based on changes in the corpus callosum. We examined midsagittal head MR images of 40 Alzheimer-type dementia patients (15 men and 25 women; mean age, 75.4±5.5 years) and 31 healthy elderly persons (10 men and 21 women; mean age, 73.4±7.5 years), 71 subjects altogether. First, the corpus callosum was automatically extracted from the midsagittal head MR images. Next, the Alzheimer-type dementia patients were compared with the healthy elderly individuals using shape-factor features and six co-occurrence matrix features of the corpus callosum. Automatic extraction of the corpus callosum succeeded in 64 of the 71 individuals, for an extraction rate of 90.1%. A statistically significant difference was found in 7 of the 9 features between the Alzheimer-type dementia patients and the healthy elderly adults. Discriminant analysis using these 7 features demonstrated a sensitivity of 82.4%, specificity of 89.3%, and overall accuracy of 85.5%. These results indicate the possibility of an objective diagnostic system for Alzheimer-type dementia using feature analysis based on changes in the corpus callosum. (author)
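
    For the discriminant-analysis step, sensitivity and specificity follow directly from cross-validated predictions. The sketch below shows the computation with scikit-learn on a hypothetical feature matrix X (subjects by 7 features) and label vector y (1 = Alzheimer-type dementia, 0 = healthy control); the cross-validation scheme is an assumption.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_predict

        def sens_spec(X, y, cv=5):
            """Cross-validated sensitivity/specificity of a linear discriminant."""
            pred = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=cv)
            sens = np.sum((pred == 1) & (y == 1)) / np.sum(y == 1)
            spec = np.sum((pred == 0) & (y == 0)) / np.sum(y == 0)
            return sens, spec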

  7. Inter Genre Similarity Modelling For Automatic Music Genre Classification

    OpenAIRE

    Bagci, Ulas; Erzin, Engin

    2009-01-01

    Music genre classification is an essential tool for music information retrieval systems and it has been finding critical applications in various media platforms. Two important problems of the automatic music genre classification are feature extraction and classifier design. This paper investigates inter-genre similarity modelling (IGS) to improve the performance of automatic music genre classification. Inter-genre similarity information is extracted over the mis-classified feature population....

  8. Combining Recurrence Analysis and Automatic Movement Extraction from Video Recordings to Study Behavioral Coupling in Face-to-Face Parent-Child Interactions.

    Science.gov (United States)

    López Pérez, David; Leonardi, Giuseppe; Niedźwiecka, Alicja; Radkowska, Alicja; Rączaszek-Leonardi, Joanna; Tomalski, Przemysław

    2017-01-01

    The analysis of parent-child interactions is crucial for the understanding of early human development. Manual coding of interactions is a time-consuming task, which is a limitation in many projects. This becomes especially demanding if a frame-by-frame categorization of movement needs to be achieved. To overcome this, we present a computational approach for studying movement coupling in natural settings, which is a combination of a state-of-the-art automatic tracker, Tracking-Learning-Detection (TLD), and nonlinear time-series analysis, Cross-Recurrence Quantification Analysis (CRQA). We investigated the use of TLD to extract and automatically classify movement of each partner from 21 video recordings of interactions, where 5.5-month-old infants and mothers engaged in free play in laboratory settings. As a proof of concept, we focused on those face-to-face episodes, where the mother animated an object in front of the infant, in order to measure the coordination between the infants' head movement and the mothers' hand movement. We also tested the feasibility of using such movement data to study behavioral coupling between partners with CRQA. We demonstrate that movement can be extracted automatically from standard definition video recordings and used in subsequent CRQA to quantify the coupling between movement of the parent and the infant. Finally, we assess the quality of this coupling using an extension of CRQA called anisotropic CRQA and show asymmetric dynamics between the movement of the parent and the infant. When combined these methods allow automatic coding and classification of behaviors, which results in a more efficient manner of analyzing movements than manual coding.
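
    At its core, CRQA thresholds a cross-distance matrix between two movement series. A minimal version and the simplest recurrence measure are sketched below, with embedding and radius choice left as assumptions; the study's anisotropic extension is not shown.

        import numpy as np

        def cross_recurrence(x, y, radius):
            """Boolean cross-recurrence matrix of two (already embedded) series;
            x: (n, d) and y: (m, d) are, e.g., head and hand movement vectors."""
            dists = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2)
            return dists <= radius

        def recurrence_rate(cr_matrix):
            """Fraction of recurrent points: the basic coupling measure %REC."""
            return float(cr_matrix.mean())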

  9. Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video

    Science.gov (United States)

    Yeo, Boon-Lock; Liu, Bede

    1996-03-01

    Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.

  10. Automatic Extraction of Myocardial Mass and Volume Using Parametric Images from Dynamic Nongated PET.

    Science.gov (United States)

    Harms, Hendrik Johannes; Stubkjær Hansson, Nils Henrik; Tolbod, Lars Poulsen; Kim, Won Yong; Jakobsen, Steen; Bouchelouche, Kirsten; Wiggers, Henrik; Frøkiaer, Jørgen; Sörensen, Jens

    2016-09-01

    Dynamic cardiac PET is used to quantify molecular processes in vivo. However, measurements of left ventricular (LV) mass and volume require electrocardiogram-gated PET data. The aim of this study was to explore the feasibility of measuring LV geometry using nongated dynamic cardiac PET. Thirty-five patients with aortic-valve stenosis and 10 healthy controls underwent a 27-min (11)C-acetate PET/CT scan and cardiac MRI (CMR). The controls were scanned twice to assess repeatability. Parametric images of uptake rate K1 and the blood pool were generated from nongated dynamic data. Using software-based structure recognition, the LV wall was automatically segmented from K1 images to derive functional assessments of LV mass (mLV) and wall thickness. End-systolic and end-diastolic volumes were calculated using blood pool images and applied to obtain stroke volume and LV ejection fraction (LVEF). PET measurements were compared with CMR. High, linear correlations were found for LV mass (r = 0.95), end-systolic volume (r = 0.93), and end-diastolic volume (r = 0.90), and slightly lower correlations were found for stroke volume (r = 0.74), LVEF (r = 0.81), and thickness (r = 0.78). Bland-Altman analyses showed significant differences for mLV and thickness only and an overestimation for LVEF at lower values. Intra- and interobserver correlations were greater than 0.95 for all PET measurements. PET repeatability accuracy in the controls was comparable to CMR. LV mass and volume are accurately and automatically generated from dynamic (11)C-acetate PET without electrocardiogram gating. This method can be incorporated in a standard routine without any additional workload and can, in theory, be extended to other PET tracers. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  11. A Supporting Platform for Semi-Automatic Hyoid Bone Tracking and Parameter Extraction from Videofluoroscopic Images for the Diagnosis of Dysphagia Patients.

    Science.gov (United States)

    Lee, Jun Chang; Nam, Kyoung Won; Jang, Dong Pyo; Paik, Nam Jong; Ryu, Ju Seok; Kim, In Young

    2017-04-01

    Conventional kinematic analysis of videofluoroscopic (VF) swallowing images, the most popular approach for dysphagia diagnosis, requires time-consuming and repetitive manual extraction of diagnostic information from multiple images representing one swallowing period, which results in a heavy workload for clinicians and excessive hospital visits for patients to receive counseling and prescriptions. In this study, a software platform was developed that can assist in the VF diagnosis of dysphagia by automatically extracting a two-dimensional moving trajectory of the hyoid bone as well as 11 temporal and kinematic parameters. Fifty VF swallowing videos containing both non-mandible-overlapped and mandible-overlapped cases from eight patients with dysphagia of various etiologies and 19 videos from ten healthy controls were utilized for performance verification. Percent errors of hyoid bone tracking were 1.7 ± 2.1% for non-overlapped images and 4.2 ± 4.8% for overlapped images. Correlation coefficients between manually and automatically extracted moving trajectories of the hyoid bone were 0.986 ± 0.017 (X-axis) and 0.992 ± 0.006 (Y-axis) for non-overlapped images, and 0.988 ± 0.009 (X-axis) and 0.991 ± 0.006 (Y-axis) for overlapped images. Based on the experimental results, we believe that the proposed platform has the potential to improve the satisfaction of both clinicians and patients with dysphagia.

  12. eCTG: an automatic procedure to extract digital cardiotocographic signals from digital images.

    Science.gov (United States)

    Sbrollini, Agnese; Agostinelli, Angela; Marcantoni, Ilaria; Morettini, Micaela; Burattini, Luca; Di Nardo, Francesco; Fioretti, Sandro; Burattini, Laura

    2018-03-01

    Cardiotocography (CTG), consisting in the simultaneous recording of fetal heart rate (FHR) and maternal uterine contractions (UC), is a popular clinical test to assess fetal health status. Typically, CTG machines provide paper reports that are visually interpreted by clinicians. Consequently, visual CTG interpretation depends on the clinician's experience and has poor reproducibility. The lack of databases containing digital CTG signals has limited the number and importance of retrospective studies aimed at setting up procedures for automatic CTG analysis that could counter the subjectivity of visual CTG interpretation. In order to help overcome this problem, this study proposes an electronic procedure, termed eCTG, to extract digital CTG signals from digital CTG images, possibly obtainable by scanning paper CTG reports. eCTG includes four main steps: pre-processing, Otsu's global thresholding, signal extraction and signal calibration. Its validation was performed by means of the "CTU-UHB Intrapartum Cardiotocography Database" by Physionet, which contains digital signals of 552 CTG recordings. Using MATLAB, each signal was plotted and saved as a digital image that was then submitted to eCTG. Digital CTG signals extracted by eCTG were eventually compared to the corresponding signals directly available in the database. Comparison occurred in terms of signal similarity (evaluated by the correlation coefficient ρ and the mean signal error MSE) and clinical features (including FHR baseline and variability; number, amplitude and duration of tachycardia, bradycardia, acceleration and deceleration episodes; number of early, variable, late and prolonged decelerations; and UC number, amplitude, duration and period). The value of ρ between eCTG and reference signals was 0.85, indicating that eCTG accurately extracts digital FHR and UC signals from digital CTG images.

  13. Automatic Hidden-Web Table Interpretation by Sibling Page Comparison

    Science.gov (United States)

    Tao, Cui; Embley, David W.

    The longstanding problem of automatic table interpretation still eludes us. Its solution would not only aid table processing applications such as large-volume table conversion, but would also aid in solving related problems such as information extraction and semi-structured data management. In this paper, we offer a conceptual modeling solution for the common special case in which so-called sibling pages are available. The sibling pages we consider are pages on the hidden web, commonly generated from underlying databases. We compare them to identify and connect nonvarying components (category labels) and varying components (data values). We tested our solution using more than 2,000 tables in source pages from three different domains: car advertisements, molecular biology, and geopolitical information. Experimental results show that the system can successfully identify sibling tables, generate structure patterns, interpret tables using the generated patterns, and automatically adjust the structure patterns, if necessary, as it processes a sequence of hidden-web pages. For these activities, the system was able to achieve an overall F-measure of 94.5%.

  14. MiDas: automatic extraction of a common domain of discourse in sleep medicine for multi-center data integration.

    Science.gov (United States)

    Sahoo, Satya S; Ogbuji, Chimezie; Luo, Lingyun; Dong, Xiao; Cui, Licong; Redline, Susan S; Zhang, Guo-Qiang

    2011-01-01

    Clinical studies often use data dictionaries with controlled sets of terms to facilitate data collection, with limited interoperability and sharing at a local site. Multi-center retrospective clinical studies require that these data dictionaries, originating from individual participating centers, be harmonized in preparation for the integration of the corresponding clinical research data. Domain ontologies are often used to facilitate multi-center data integration by modeling terms from data dictionaries in a logic-based language, but interoperability among domain ontologies (using automated techniques) is an unresolved issue. Although many upper-level reference ontologies have been proposed to address this challenge, our experience in integrating multi-center sleep medicine data highlights the need for an upper-level ontology that models a common set of terms at multiple levels of abstraction, which is not covered by the existing upper-level ontologies. We introduce a methodology underpinned by a Minimal Domain of Discourse (MiDas) algorithm to automatically extract a minimal common domain of discourse (upper-domain ontology) from an existing domain ontology. Using the Multi-Modality, Multi-Resource Environment for Physiological and Clinical Research (Physio-MIMI) multi-center project in sleep medicine as a use case, we demonstrate the use of MiDas in extracting a minimal domain of discourse for sleep medicine from Physio-MIMI's Sleep Domain Ontology (SDO). We then extend the resulting domain of discourse with terms from the data dictionary of the Sleep Heart Health Study (SHHS) to validate MiDas. To illustrate the wider applicability of MiDas, we automatically extract the respective domains of discourse from 6 sample domain ontologies from the National Center for Biomedical Ontologies (NCBO) and the OBO Foundry.

  15. Selective extraction of bisphenol A from water by one-monomer molecularly imprinted magnetic nanoparticles.

    Science.gov (United States)

    Lin, Zhenkun; Zhang, Yanfang; Su, Yu; Qi, Jinxia; Jia, Yinhang; Huang, Changjiang; Dong, Qiaoxiang

    2018-01-15

    One-monomer molecularly imprinted magnetic nanoparticles were prepared as adsorbents for selective extraction of bisphenol A from water in this study. A single bi-functional monomer was adopted for preparation of the molecularly imprinted polymer, avoiding the tedious trial-and-error optimizations of the traditional strategy. Moreover, bisphenol F was used as the dummy template for bisphenol A to avoid interference from residual template molecules. These nanoparticles showed not only large adsorption capacity and good selectivity to bisphenol A but also outstanding magnetic response performance. Furthermore, they were successfully used as magnetic solid-phase extraction adsorbents of bisphenol A from various water samples, including tap water, river water, and seawater. The developed method was found to be much more efficient, convenient, and economical for selective extraction of bisphenol A compared with traditional solid-phase extraction. Separation of these nanoparticles can be easily achieved with an external magnetic field, and the optimized adsorption time was only 15 min. The recoveries of bisphenol A in different water samples ranged from 85.38 to 93.75%, with relative standard deviation lower than 7.47%. These results showed that one-monomer molecularly imprinted magnetic nanoparticles had the potential to be popular adsorbents for selective extraction of pollutants from water.

  16. Microbial diversity in fecal samples depends on DNA extraction method

    DEFF Research Database (Denmark)

    Mirsepasi, Hengameh; Persson, Søren; Struve, Carsten

    2014-01-01

    BACKGROUND: There are challenges when extracting bacterial DNA from specimens for molecular diagnostics, since fecal samples also contain DNA from human cells and many different substances derived from food, cell residues and medication that can inhibit downstream PCR. The purpose of the study was to evaluate two different DNA extraction methods in order to choose the most efficient method for studying intestinal bacterial diversity using Denaturing Gradient Gel Electrophoresis (DGGE). FINDINGS: In this study, a semi-automatic DNA extraction system (easyMag®, BioMérieux, Marcy l'Etoile, France) ... by easyMag® from the same fecal samples. Furthermore, DNA extracts obtained using easyMag® seemed to contain inhibitory compounds, since in order to perform a successful PCR analysis, the sample had to be diluted at least 10 times. DGGE performed on PCR from DNA extracted by QIAamp DNA Stool Mini Kit DNA ...

  17. Development of Molecularly Imprinted Polymers to Target Polyphenols Present in Plant Extracts

    Directory of Open Access Journals (Sweden)

    Catarina Gomes

    2017-11-01

    Full Text Available The development of molecularly imprinted polymers (MIPs) to target polyphenols present in vegetable extracts is addressed here. Polydatin was selected as the template polyphenol due to its relatively large size and amphiphilic character. Different MIPs were synthesized to explore preferential interactions between the functional monomers and the template molecule. The effect of solvent polarity on the molecular imprinting efficiency, namely owing to hydrophobic interactions, was also assessed. Precipitation and suspension polymerization were examined as possible ways to change the MIPs' morphology and performance. Solid phase extraction and batch/continuous sorption processes were used to evaluate the polyphenol uptake/release in individual/competitive assays. Among the prepared MIPs, a material synthesized by suspension polymerization, with 4-vinylpyridine as the functional monomer and water/methanol as solvent, showed superior performance. The underlying cause of this outcome is likely a surface imprinting process favoured by the amphiphilic properties of polydatin. The uptake and subsequent selective release of polyphenols present in natural extracts was successfully demonstrated, considering a red wine solution as a case study. However, hydrophilic/hydrophobic interactions are inevitable (especially with complex natural extracts) and tuning the polarity of the solvents is an important issue for the isolation of the different polyphenols.

  18. Automatic extraction of pavement markings on streets from point cloud data of mobile LiDAR

    International Nuclear Information System (INIS)

    Gao, Yang; Zhong, Ruofei; Liu, Xianlin; Tang, Tao; Wang, Liuzhao

    2017-01-01

    Pavement markings provide an important foundation as they help to keep road users safe. Accurate and comprehensive information about pavement markings assists road regulators and is useful in developing driverless technology. Mobile light detection and ranging (LiDAR) systems offer new opportunities to collect and process accurate pavement marking information. Mobile LiDAR systems can directly obtain the three-dimensional (3D) coordinates of an object, thus defining spatial data and the intensity of 3D objects in a fast and efficient way. The RGB attribute information of data points can be obtained from the panoramic camera in the system. In this paper, we present a novel processing method to automatically extract pavement markings using multiple attributes of the laser scanning point cloud from mobile LiDAR data. The method utilizes a differential grayscale of the RGB color, the laser pulse reflection intensity, and the differential intensity to identify and extract pavement markings. We utilized point cloud density to remove noise and used morphological operations to eliminate errors. In application, we tested our method on different sections of roads in Beijing, China, and Buffalo, NY, USA. The results indicated that both correctness ( p ) and completeness ( r ) were higher than 90%. The method of this research can be applied to extract pavement markings from the huge point cloud data produced by mobile LiDAR. (paper)

  19. Performance Analysis of the SIFT Operator for Automatic Feature Extraction and Matching in Photogrammetric Applications

    Directory of Open Access Journals (Sweden)

    Francesco Nex

    2009-05-01

    Full Text Available In the photogrammetry field, interest in region detectors, which are widely used in Computer Vision, is quickly increasing due to the availability of new techniques. Images acquired by Mobile Mapping Technology, Oblique Photogrammetric Cameras or Unmanned Aerial Vehicles do not observe normal acquisition conditions. Feature extraction and matching techniques, which are traditionally used in photogrammetry, are usually inefficient for these applications as they are unable to provide reliable results under extreme geometrical conditions (convergent taking geometry, strong affine transformations, etc.) and for badly-textured images. A performance analysis of the SIFT technique in aerial and close-range photogrammetric applications is presented in this paper. The goal is to establish the suitability of the SIFT technique for automatic tie point extraction and approximate DSM (Digital Surface Model) generation. First, the performance of the SIFT operator has been compared with that provided by feature extraction and matching techniques used in photogrammetry. All these techniques have been implemented by the authors and validated on aerial and terrestrial images. Moreover, an auto-adaptive version of the SIFT operator has been developed, in order to improve the performance of the SIFT detector in relation to the texture of the images. The Auto-Adaptive SIFT operator (A² SIFT) has been validated on several aerial images, with particular attention to large scale aerial images acquired using mini-UAV systems.
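
    A baseline SIFT tie-point pipeline of the kind benchmarked here can be written in a few lines with OpenCV (version 4.4 or later). The file names and the ratio-test constant below are placeholders; the authors' own implementation and the A² SIFT variant are not reproduced.

        import cv2

        img1 = cv2.imread("left.tif", cv2.IMREAD_GRAYSCALE)    # hypothetical images
        img2 = cv2.imread("right.tif", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)

        matcher = cv2.BFMatcher(cv2.NORM_L2)
        good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
                if m.distance < 0.8 * n.distance]              # Lowe's ratio test
        tie_points = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]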

  20. Automatic extraction of pavement markings on streets from point cloud data of mobile LiDAR

    Science.gov (United States)

    Gao, Yang; Zhong, Ruofei; Tang, Tao; Wang, Liuzhao; Liu, Xianlin

    2017-08-01

    Pavement markings provide an important foundation as they help to keep road users safe. Accurate and comprehensive information about pavement markings assists road regulators and is useful in developing driverless technology. Mobile light detection and ranging (LiDAR) systems offer new opportunities to collect and process accurate pavement marking information. Mobile LiDAR systems can directly obtain the three-dimensional (3D) coordinates of an object, thus defining spatial data and the intensity of 3D objects in a fast and efficient way. The RGB attribute information of data points can be obtained from the panoramic camera in the system. In this paper, we present a novel processing method to automatically extract pavement markings using multiple attributes of the laser scanning point cloud from mobile LiDAR data. The method utilizes a differential grayscale of the RGB color, the laser pulse reflection intensity, and the differential intensity to identify and extract pavement markings. We utilized point cloud density to remove noise and used morphological operations to eliminate errors. In application, we tested our method on different sections of roads in Beijing, China, and Buffalo, NY, USA. The results indicated that both correctness (p) and completeness (r) were higher than 90%. The method of this research can be applied to extract pavement markings from the huge point cloud data produced by mobile LiDAR.

  1. Experimental and computational studies on molecularly imprinted solid-phase extraction for gonyautoxins 2,3 from dinoflagellate Alexandrium minutum.

    Science.gov (United States)

    Lian, Ziru; Li, Hai-Bei; Wang, Jiangtao

    2016-08-01

    An innovative and effective extraction procedure based on molecularly imprinted solid-phase extraction (MISPE) was developed for the isolation of gonyautoxins 2,3 (GTX2,3) from an Alexandrium minutum sample. Molecularly imprinted polymer microspheres were prepared by suspension polymerization and were employed as sorbents for the solid-phase extraction of GTX2,3. An off-line MISPE protocol was optimized. Subsequently, the extract samples from A. minutum were analyzed. The results showed that the interference matrices in the extract were clearly removed by the MISPE procedure. This enabled the direct extraction of GTX2,3 from A. minutum samples with an extraction efficiency as high as 83% and, rather significantly, without any need for a cleanup step prior to the extraction. Furthermore, a computational approach also provided direct evidence of the highly selective isolation of GTX2,3 from the microalgal extracts.

  2. AUTOMATIC EXTRACTION OF ROCK JOINTS FROM LASER SCANNED DATA BY MOVING LEAST SQUARES METHOD AND FUZZY K-MEANS CLUSTERING

    Directory of Open Access Journals (Sweden)

    S. Oh

    2012-09-01

    Full Text Available Recent development of laser scanning devices has increased the capability of representing rock outcrops at very high resolution. An accurate 3D point cloud model with rock joint information can help geologists to estimate the stability of rock slopes on-site or off-site. An automatic plane extraction method was developed by computing normal directions and grouping points with similar directions. Point normals were calculated by the moving least squares (MLS) method, considering every point within a given distance so as to minimize the error to the fitting plane. Normal directions were classified into a number of dominant clusters by fuzzy K-means clustering. A region growing approach was exploited to discriminate joints in the point cloud. The overall procedure was applied to a point cloud with about 120,000 points, and successfully extracted joints with joint information. The extraction procedure was implemented so as to minimize the number of input parameters and to build plane information into the existing point cloud for less redundancy and high usability of the point cloud itself.
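
    The normal-clustering step can be illustrated with a plain fuzzy K-means (fuzzy c-means) loop over unit normals. The fuzzifier m, the iteration count and the initialization below are conventional defaults, not the paper's parameters.

        import numpy as np

        def fuzzy_kmeans(X, k, m=2.0, iters=100, seed=0):
            """Fuzzy c-means on row vectors X (n, d), e.g., point normals (d = 3).
            Returns memberships u (n, k) and cluster centers (k, d)."""
            rng = np.random.default_rng(seed)
            centers = X[rng.choice(len(X), size=k, replace=False)]
            for _ in range(iters):
                d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
                # u[i, j] = 1 / sum_c (d_ij / d_ic)^(2 / (m - 1))
                u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
                um = u ** m
                centers = (um.T @ X) / um.sum(axis=0)[:, None]  # weighted means
            return u, centers

        # Each point joins the dominant-normal cluster with the highest membership;
        # region growing then splits each cluster into individual joint planes.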

  3. Influence of Extractant and Soil Type on Molecular Characteristics of Humic Substances From Two Brazilian Soils

    Directory of Open Access Journals (Sweden)

    Dick Deborah Pinheiro

    1999-01-01

    Full Text Available In a previous study it was observed that humic substances (HS) extracted with NaOH solution and with Na4P2O7 solution presented different molecular weights, and also that the HS yield of each method varied between an Oxisol and a Mollisol from South Brazil. In the present study, we further investigated the organic matter in these soils by characterizing HS extracted with 0.5 mol L-1 NaOH and with neutral 0.15 mol L-1 Na4P2O7 solutions from the above-mentioned samples, using elemental analysis and nuclear magnetic resonance spectroscopy (liquid-state ¹H and ¹³C NMR), and by relating the molecular differences to the extraction method and soil type. HS extracted with pyrophosphate were more humified, showing higher aromaticity and higher carboxylic content. The NaOH-extracted HS were more aliphatic and contained a higher O-alkyl proportion, indicative of a less humified nature than the pyrophosphate-extracted HS.

  4. Automatic extraction of Manhattan-World building masses from 3D laser range scans.

    Science.gov (United States)

    Vanegas, Carlos A; Aliaga, Daniel G; Benes, Bedrich

    2012-10-01

    We propose a novel approach for the reconstruction of urban structures from 3D point clouds with an assumption of Manhattan World (MW) building geometry; i.e., the predominance of three mutually orthogonal directions in the scene. Our approach works in two steps. First, the input points are classified according to the MW assumption into four local shape types: walls, edges, corners, and edge corners. The classified points are organized into a connected set of clusters from which a volume description is extracted. The MW assumption allows us to robustly identify the fundamental shape types, describe the volumes within the bounding box, and reconstruct visible and occluded parts of the sampled structure. We show results of our reconstruction applied to several synthetic and real-world 3D point data sets of various densities and from multiple viewpoints. Our method automatically reconstructs 3D building models from up to 10 million points in 10 to 60 seconds.
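
    The wall/edge/corner classification can be pictured with a small sketch. Assuming the dominant plane normals in each point's neighbourhood have already been estimated, a point under the MW assumption can be labelled by how many of the three orthogonal directions those normals expose; the tolerance and the helper below are illustrative, not taken from the paper:

```python
import numpy as np

def mw_shape_type(neighbourhood_normals, axes=np.eye(3), cos_tol=0.9):
    """Label a local shape type under the Manhattan-World assumption.
    `neighbourhood_normals` holds the dominant surface normals found around
    one point (e.g., from local plane fits); a wall exposes one of the three
    orthogonal directions, an edge two, and a corner three."""
    hits = set()
    for n in neighbourhood_normals:
        align = np.abs(axes @ n)            # |cos| against each MW axis
        if align.max() > cos_tol:
            hits.add(int(align.argmax()))
    return {1: "wall", 2: "edge", 3: "corner"}.get(len(hits), "unclassified")
```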

  5. Molecular imprinting solid phase extraction for selective detection of methidathion in olive oil.

    Science.gov (United States)

    Bakas, Idriss; Oujji, Najwa Ben; Moczko, Ewa; Istamboulie, Georges; Piletsky, Sergey; Piletska, Elena; Ait-Ichou, Ihya; Ait-Addi, Elhabib; Noguer, Thierry; Rouillon, Régis

    2012-07-13

    A specific adsorbent for the extraction of methidathion from olive oil was developed. The design of the molecularly imprinted polymer (MIP) was based on the results of computational screening of a library of polymerisable functional monomers. The MIP was prepared by thermal polymerisation using N,N'-methylene bisacrylamide (MBAA) as the functional monomer and ethylene glycol dimethacrylate (EGDMA) as the cross-linker. Polymers based on the functional monomers itaconic acid (IA), methacrylic acid (MAA) and 2-(trifluoromethyl)acrylic acid (TFMAA), and a control polymer made without functional monomer using the cross-linker EGDMA, were also synthesised and tested. The performance of each polymer was compared using the corresponding imprinting factor. As predicted by molecular modelling, the best results were obtained for the MIP prepared with MBAA. The obtained MIP was optimised in solid-phase extraction coupled with high performance liquid chromatography (MISPE-HPLC-UV) and tested for the rapid screening of methidathion in olive oil. The proposed method allowed the efficient extraction of methidathion at concentrations ranging from 0.1 to 9 mg L(-1) (r(2)=0.996). The limits of detection (LOD) and quantification (LOQ) in olive oil were 0.02 mg L(-1) and 0.1 mg L(-1), respectively. MISPE was much more effective than traditional C18 reversed-phase solid phase extraction. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Extractive Summarisation of Medical Documents

    OpenAIRE

    Abeed Sarker; Diego Molla; Cecile Paris

    2012-01-01

    Background Evidence Based Medicine (EBM) practice requires practitioners to extract evidence from published medical research when answering clinical queries. Due to the time-consuming nature of this practice, there is a strong motivation for systems that can automatically summarise medical documents and help practitioners find relevant information. Aim The aim of this work is to propose an automatic query-focused, extractive summarisation approach that selects informative sentences from medic...

  7. Automatic emotional expression analysis from eye area

    Science.gov (United States)

    Akkoç, Betül; Arslan, Ahmet

    2015-02-01

    Eyes play an important role in expressing emotions in nonverbal communication. In the present study, emotional expression classification was performed based on features that were automatically extracted from the eye area. First, the face area and then the eye area were automatically extracted from the captured image. Afterwards, the parameters to be used for the analysis were obtained from the eye area through discrete wavelet transformation. Using these parameters, emotional expression analysis was performed with artificial intelligence techniques. As a result of the experimental studies, the six universal emotions (happiness, sadness, surprise, disgust, anger and fear) were classified with a success rate of 84% using artificial neural networks.
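
    One plausible realisation of the feature step, not the authors' exact pipeline: energies of the 2-D discrete wavelet subbands of the cropped eye region, fed to a small neural network. The wavelet choice, decomposition level and network size are all assumptions:

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def eye_features(eye_img, wavelet="db2", level=2):
    """Energy of each 2-D DWT subband of a grayscale eye-region image."""
    coeffs = pywt.wavedec2(eye_img, wavelet, level=level)
    feats = [np.mean(coeffs[0] ** 2)]              # approximation energy
    for detail in coeffs[1:]:                      # (cH, cV, cD) per level
        feats.extend(np.mean(c ** 2) for c in detail)
    return np.array(feats)

# X: one feature vector per eye image, y: one of the six universal emotions.
# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
```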

  8. Analysis of small molecular phase in coal involved in pyrolysis and solvent extraction by PGC

    Energy Technology Data Exchange (ETDEWEB)

    Jie Feng; Wen-Ying Li; Ke-Chang Xie [Taiyuan University of Technology, Taiyuan (China). Key Laboratory of Coal Science and Technology

    2004-06-01

    The small molecular phase, which strongly affects coal's reactivity, is a main part of the structural unit of coal. At present, its composition and structural features are not clearly understood. In this paper, a flash pyrolysis technique with on-line GC (PGC) was used to investigate the properties of the small molecular phase of six Chinese coals of different rank. The experiments were divided into two parts: PGC of the parent coals, and PGC of coal extracts obtained by NMP + CS₂ (75:1) solvent extraction at 373 K. The results show that the small molecular phase mainly consists of C12-C16 compounds that could be released integrally when the heating rate was greater than 10 K/ms and the final pyrolysis temperature was 1373 K; other compounds may be products of decomposition and polymerization of this small molecular phase during pyrolysis. 13 refs., 7 figs., 1 tab.

  9. Comparison of RNA Extraction Methods for Molecular Analysis of Oral Cytology

    Directory of Open Access Journals (Sweden)

    Mônica Ghislaine Oliveira Alves

    2016-01-01

    Full Text Available Objective of work: The aim of this study was to compare three methods of RNA extraction for molecular analysis of oral cytology, to establish the best technique in terms of concentration and purity for molecular tests of oral lesions, such as real-time reverse transcriptase reactions. Material and methods: The sample comprised exfoliative cytology from the oral cavity mucosa of patients with no visible clinical changes, collected with the Orcellex Rovers Brush®. Total RNA was extracted using three techniques: 30 samples by the Trizol® technique, 30 by the Direct-zol™ RNA Miniprep system and 30 by the RNeasy Mini Kit. Absorbance was measured with a spectrophotometer to estimate purity, and the RNA concentration (ng/mL) was estimated by multiplying the A260 value by 40. Statistical analysis of the obtained data was performed using GraphPad Prism 5.03 software with Student's t-test, analysis of variance and Bonferroni tests, considering p ≤ 0.05. Results: The Trizol® group showed the highest average concentration, followed by the Direct-zol™ and RNeasy groups. The Direct-zol™ group had the highest RNA purity, followed by the RNeasy and Trizol® groups, for both absorbance ratios. Conclusion: Considering all aspects (concentration, purity and time spent in the procedures), the Direct-zol™ group showed the best results.

  10. Molecularly imprinted polymers based stir bar sorptive extraction for determination of cefaclor and cefalexin in environmental water.

    Science.gov (United States)

    Peng, Jun; Liu, Donghao; Shi, Tian; Tian, Huairu; Hui, Xuanhong; He, Hua

    2017-07-01

    Although stir bar sorptive extraction is thought to be a highly efficient and simple pretreatment approach, its wide application has been limited by low selectivity, short service life, and relatively high cost. In order to improve the performance of the stir bar, molecularly imprinted polymers and magnetic carbon nanotubes were combined in the present study. In addition, two monomers were utilized to intensify the selectivity of the molecularly imprinted polymers. Fourier transform infrared spectroscopy, scanning electron microscopy, and selectivity experiments showed that the molecularly imprinted polymeric stir bar was successfully prepared. Micro-extraction based on the obtained stir bar was then coupled with HPLC for the determination of trace cefaclor and cefalexin in environmental water. This approach combines the advantages of stir bar sorptive extraction, the high selectivity of molecularly imprinted polymers, and the high sorption efficiency of carbon nanotubes. To utilize this pretreatment approach, pH, extraction time, stirring speed, elution solvent, and elution time were optimized. The LOD and LOQ of cefaclor were found to be 3.5 ng·mL⁻¹ and 12.0 ng·mL⁻¹, respectively; the LOD and LOQ of cefalexin were found to be 3.0 ng·mL⁻¹ and 10.0 ng·mL⁻¹, respectively. The recoveries of cefaclor and cefalexin were 86.5-98.6%. The within-run and between-run precisions were acceptable in terms of relative standard deviation, and the extraction performance of the stir bar did not decrease dramatically after repeated use. This demonstrated that the molecularly imprinted polymeric stir bar based micro-extraction is a convenient, efficient, low-cost, and specific method for the enrichment of cefaclor and cefalexin in environmental samples.

  11. Contribution of molecular modeling and of structure-activity relations to the liquid-liquid extraction. Application to the case of U(VI) extraction by monoamides

    International Nuclear Information System (INIS)

    Rabbe, C.

    1996-01-01

    In France, spent fuels are in most cases reprocessed. The aim of reprocessing is to separate the recyclable fissile materials (for instance, uranium and plutonium) from the radioactive wastes. The industrial process used until now is the PUREX (Plutonium Uranium Refining by EXtraction) process. In 1991, the CEA undertook research in the fields of separation and transmutation of long-lived radionuclides such as the minor actinides. Molecules with an amide function were first considered, especially for uranium extraction. In order to rationalize the search for new extracting molecules, molecular modeling methods (quantum chemistry calculations, molecular mechanics) have been used. Three parameters determine whether a molecule is a good extractant: it must possess 1) one or several sites with an electron density sufficient for the metallic cation to be complexed, 2) substituents small enough to avoid interfering with complexation, and 3) a sufficient lipophilic effect. (O.M.). 139 refs., 43 figs., 36 tabs

  12. Analysis of Technique to Extract Data from the Web for Improved Performance

    Science.gov (United States)

    Gupta, Neena; Singh, Manish

    2010-11-01

    The World Wide Web is rapidly guiding the world into an amazing new electronic era, where anyone can publish anything in electronic form and extract almost any information. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, automatically pulls the records out of HTML files. Ontologies can achieve a high degree of accuracy in data extraction. We analyze OBDE (Ontology-Based Data Extraction), a method which automatically extracts query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and the query result pages of different web sites within the same domain. The constructed domain ontology is then used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.

  13. Methods to extract information on the atomic and molecular states from scientific abstracts

    International Nuclear Information System (INIS)

    Sasaki, Akira; Ueshima, Yutaka; Yamagiwa, Mitsuru; Murata, Masaki; Kanamaru, Toshiyuki; Shirado, Tamotsu; Isahara, Hitoshi

    2005-01-01

    We propose a new application of information technology to recognize and extract expressions of atomic and molecular states from electronic versions of scientific abstracts. The present results will help scientists to understand atomic states as well as the physics discussed in the articles. Combined with internet search engines, it will make it possible to collect not only atomic and molecular data but also broader scientific information over a wide range of research fields. (author)

  14. Forensic Automatic Speaker Recognition Based on Likelihood Ratio Using Acoustic-phonetic Features Measured Automatically

    Directory of Open Access Journals (Sweden)

    Huapeng Wang

    2015-01-01

    Full Text Available Forensic speaker recognition is experiencing a remarkable paradigm shift in terms of the evaluation framework and the presentation of voice evidence. This paper proposes a new method of forensic automatic speaker recognition that uses the likelihood ratio framework to quantify the strength of voice evidence. The proposed method uses a reference database to calculate the within- and between-speaker variability, and some acoustic-phonetic features are extracted automatically using the software VoiceSauce. The effectiveness of the approach was tested using two Mandarin databases: a mobile telephone database and a landline database. The experimental results indicate that these acoustic-phonetic features do have some discriminating potential and are worth trying in discrimination. The automatic acoustic-phonetic features have acceptable discriminative performance and can provide more reliable results in evidence analysis when fused with other kinds of voice features.
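
    The likelihood ratio framework itself is compact enough to sketch. In the toy Python below, both the same-speaker and different-speaker hypotheses are modelled as one-dimensional Gaussians over a single acoustic-phonetic feature; real systems model many correlated features with more elaborate densities, so all numbers and distributions here are assumptions:

```python
import numpy as np
from scipy.stats import norm

def likelihood_ratio(evidence, suspect_mean, suspect_sd, pop_mean, pop_sd):
    """LR = p(E | same speaker) / p(E | different speaker) for one feature,
    with Gaussian models for the within- and between-speaker variability
    estimated from a reference database."""
    p_same = norm.pdf(evidence, suspect_mean, suspect_sd)   # within-speaker
    p_diff = norm.pdf(evidence, pop_mean, pop_sd)           # between-speaker
    return p_same / p_diff

# LR > 1 supports the same-speaker hypothesis; log10(LR) is usually reported.
print(np.log10(likelihood_ratio(118.0, 120.0, 4.0, 140.0, 20.0)))
```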

  15. Selective isolation of gonyautoxins 1,4 from the dinoflagellate Alexandrium minutum based on molecularly imprinted solid-phase extraction.

    Science.gov (United States)

    Lian, Ziru; Wang, Jiangtao

    2017-09-15

    Gonyautoxins 1,4 (GTX1,4) from Alexandrium minutum samples were selectively isolated and specifically recognized by an innovative and effective extraction procedure based on molecular imprinting technology. Novel molecularly imprinted polymer microspheres (MIPMs) were prepared by a double-template imprinting strategy using caffeine and pentoxifylline as dummy templates. The synthesized polymers displayed good affinity for GTX1,4 and were applied as sorbents. Further, an off-line molecularly imprinted solid-phase extraction (MISPE) protocol was optimized, and an effective approach based on MISPE coupled with HPLC-FLD was developed for the selective isolation of GTX1,4 from cultured A. minutum samples. The separation method showed good extraction efficiency (73.2-81.5%) for GTX1,4, and the matrix interferences were also efficiently removed from the microalgal samples during the MISPE process. The outcome demonstrates the superiority and great potential of the MISPE procedure for the direct separation of GTX1,4 from marine microalgal extracts. Copyright © 2017. Published by Elsevier Ltd.

  16. Validation of the ICU-DaMa tool for automatically extracting variables for minimum dataset and quality indicators: The importance of data quality assessment.

    Science.gov (United States)

    Sirgo, Gonzalo; Esteban, Federico; Gómez, Josep; Moreno, Gerard; Rodríguez, Alejandro; Blanch, Lluis; Guardiola, Juan José; Gracia, Rafael; De Haro, Lluis; Bodí, María

    2018-04-01

    Big data analytics promise insights into healthcare processes and management, improving outcomes while reducing costs. However, data quality is a major challenge for reliable results. Business process discovery techniques and an associated data model were used to develop a data management tool, ICU-DaMa, for extracting variables essential for overseeing the quality of care in the intensive care unit (ICU). The objective was to determine the feasibility of using ICU-DaMa to automatically extract variables for the minimum dataset and ICU quality indicators from the clinical information system (CIS). The Wilcoxon signed-rank test and Fisher's exact test were used to compare the values extracted from the CIS with ICU-DaMa for 25 variables from all patients attended in a polyvalent ICU during a two-month period against the gold standard of values manually extracted by two trained physicians. Discrepancies with the gold standard were classified into plausibility, conformance, and completeness errors. Data from 149 patients were included. Although there were no significant differences between the automatic method and the manual method, we detected differences in the values of five variables: one plausibility error, two conformance errors, and two completeness errors. Plausibility: 1) Sex, ICU-DaMa incorrectly classified one male patient as female (an error generated by the Hospital's Admissions Department). Conformance: 2) Reason for isolation, ICU-DaMa failed to detect a human error in which a professional misclassified a patient's isolation. 3) Brain death, ICU-DaMa failed to detect another human error in which a professional likely entered two mutually exclusive values related to the death of the patient (brain death and controlled donation after circulatory death). Completeness: 4) Destination at ICU discharge, ICU-DaMa incorrectly classified two patients because a professional failed to fill out the patient discharge form when the patients died. 5) Length of continuous renal replacement
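
    The statistical comparison described above is straightforward to reproduce with SciPy. The sketch below uses hypothetical paired automatic/manual values for one continuous variable and a hypothetical 2x2 agreement table for one categorical variable; none of the numbers come from the study:

```python
import numpy as np
from scipy import stats

# Hypothetical paired values for one continuous variable (e.g., length of stay).
manual = np.array([3.0, 5.5, 2.0, 7.0, 4.5, 6.0, 3.5])
auto = np.array([3.2, 5.0, 2.1, 7.5, 4.4, 6.0, 3.6])

# Wilcoxon signed-rank test on the paired differences.
w, p_continuous = stats.wilcoxon(manual, auto)

# Fisher's exact test for a categorical variable (hypothetical 2x2 agreement
# table, e.g. automatic vs. manual coding of patient sex).
table = np.array([[80, 1],
                  [0, 68]])
odds, p_categorical = stats.fisher_exact(table)
print(p_continuous, p_categorical)
```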

  17. Automatic extraction of soft tissues from 3D MRI head images using model driven analysis

    International Nuclear Information System (INIS)

    Jiang, Hao; Yamamoto, Shinji; Imao, Masanao.

    1995-01-01

    This paper presents an automatic extraction system (called TOPS-3D: Top-down Parallel Pattern Recognition System for 3D Images) for soft tissues from 3D MRI head images, using a model-driven analysis algorithm. Following the construction of our earlier TOPS system, two concepts were considered in the design of TOPS-3D. One is a hierarchical reasoning structure that uses model information at the higher level, and the other is a parallel image processing structure used to extract plural candidate regions for a target entity. The new points of TOPS-3D are as follows. (1) TOPS-3D is a three-dimensional image analysis system including 3D model construction and 3D image processing techniques. (2) A technique is proposed to increase the connectivity between knowledge processing at the higher level and image processing at the lower level. The technique is realized by applying the opening operation of mathematical morphology, in which a structural model function defined at the higher level by knowledge representation is used directly as the filter function of the opening operation at the lower level. The TOPS-3D system applied to 3D MRI head images consists of three levels: the first and second levels are the reasoning part, and the third level is the image processing part. In experiments, we applied the system to 5 samples of 3D MRI head images of size 128 x 128 x 128 pixels to extract the regions of soft tissues such as the cerebrum, cerebellum and brain stem. The experimental results show that the system is robust to variation of the input data thanks to the model information, and that the position and shape of the soft tissues are extracted in correspondence with the anatomical structure. (author)
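
    The coupling the authors describe, where a structure size supplied by the higher-level model parameterizes a lower-level opening operation, can be sketched as follows; the ball-shaped structuring element and the radius parameter are assumptions for illustration:

```python
import numpy as np
from scipy import ndimage

def model_driven_opening(mask, model_radius):
    """Morphological opening of a 3D binary mask with a ball-shaped
    structuring element whose radius comes from the higher-level model
    (e.g., the expected thickness of the tissue being extracted)."""
    r = int(model_radius)
    z, y, x = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    ball = (x * x + y * y + z * z) <= r * r    # 3D structuring element
    return ndimage.binary_opening(mask, structure=ball)
```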

  18. Extraction of tributyltin by magnetic molecularly imprinted polymers

    International Nuclear Information System (INIS)

    Zhu, Shanshan; Pan, Daodong; Hu, Futao; Gan, Ning; Li, Yi; Cao, Yuting; Wu, Dazhen; Yang, Ting

    2013-01-01

    We have prepared core-shell magnetic molecularly imprinted polymer nanoparticles for the recognition and extraction of tributyltin (TBT). The use of the particles strongly improves the imprinting effect and leads to fast adsorption kinetics and high adsorption capacities. The functional monomer acrylamide was grafted to the surface of Fe₃O₄ nanospheres in two steps, and a MIP layer consisting of poly(ethylene glycol dimethacrylate) with a TBT template was then formed on the surface. The particles were characterized in terms of their morphological, magnetic, adsorption, and recognition properties. We then developed a method for the extraction of TBT from spiked mussel (Mytilidae) and its determination by liquid chromatography-tandem mass spectrometry. The method has a limit of detection of 1.0 ng g⁻¹ (n = 5) of TBT, with a linear response between 5.0 and 1,000 ng g⁻¹. The proposed method was successfully applied to the determination of trace TBT in marine food samples, with recoveries in the range of 78.3-95.6%. (author)

  19. Automatic extraction of myocardial mass and volumes using parametric images from dynamic nongated PET

    DEFF Research Database (Denmark)

    Harms, Hendrik Johannes; Hansson, Nils Henrik Stubkjær; Tolbod, Lars Poulsen

    2016-01-01

    Dynamic cardiac positron emission tomography (PET) is used to quantify molecular processes in vivo. However, measurements of left-ventricular (LV) mass and volumes require electrocardiogram (ECG)-gated PET data. The aim of this study was to explore the feasibility of measuring LV geometry using non-gated dynamic cardiac PET. METHODS: Thirty-five patients with aortic-valve stenosis and 10 healthy controls (HC) underwent a 27-min 11C-acetate PET/CT scan and cardiac magnetic resonance imaging (CMR). HC were scanned twice to assess repeatability. Parametric images of uptake rate K1 and the blood pool were […] LV and WT only, and an overestimation for LVEF at lower values. Intra- and inter-observer correlations were >0.95 for all PET measurements. PET repeatability accuracy in HC was comparable to CMR. CONCLUSION: LV mass and volumes are accurately and automatically generated from dynamic 11C-acetate PET without ECG gating.

  20. Extractive Summarisation of Medical Documents

    Directory of Open Access Journals (Sweden)

    Abeed Sarker

    2012-09-01

    Full Text Available Background Evidence Based Medicine (EBM) practice requires practitioners to extract evidence from published medical research when answering clinical queries. Due to the time-consuming nature of this practice, there is a strong motivation for systems that can automatically summarise medical documents and help practitioners find relevant information. Aim The aim of this work is to propose an automatic query-focused, extractive summarisation approach that selects informative sentences from medical documents. Method We use a corpus that is specifically designed for summarisation in the EBM domain. We use approximately half the corpus for deriving important statistics associated with the best possible extractive summaries, taking into account factors such as sentence position, length, sentence content, and the type of the query posed. Using the statistics from the first set, we evaluate our approach on a separate set. Evaluation of the quality of the generated summaries is performed automatically using ROUGE, a popular tool for evaluating automatic summaries. Results Our summarisation approach outperforms all baselines (best baseline score: 0.1594; our score: 0.1653). Further improvements are achieved when query types are taken into account. Conclusion The quality of extractive summarisation in the medical domain can be significantly improved by incorporating domain knowledge and statistics derived from a specialised corpus. Such techniques can therefore be applied for content selection in end-to-end summarisation systems.
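
    A minimal sketch of query-focused sentence scoring in this spirit: each sentence gets a query-overlap score plus a position prior, and the top-k sentences are returned in document order. The weights and scoring terms are invented for illustration and are not the statistics the paper derives from its corpus:

```python
from collections import Counter

def score_sentences(sentences, query, pos_weight=0.3):
    """Score each sentence by query-term overlap plus a position prior."""
    q_terms = set(query.lower().split())
    scores = []
    for i, sent in enumerate(sentences):
        terms = Counter(sent.lower().split())
        overlap = sum(terms[t] for t in q_terms) / (len(terms) + 1)
        position = 1.0 - i / len(sentences)     # earlier sentences favoured
        scores.append(overlap + pos_weight * position)
    return scores

def summarise(sentences, query, k=3):
    """Return the k best sentences, preserving their document order."""
    scores = score_sentences(sentences, query)
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]
```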

  1. DEVELOPMENT OF AUTOMATIC EXTRACTION METHOD FOR ROAD UPDATE INFORMATION BASED ON PUBLIC WORK ORDER OUTLOOK

    Science.gov (United States)

    Sekimoto, Yoshihide; Nakajo, Satoru; Minami, Yoshitaka; Yamaguchi, Syohei; Yamada, Harutoshi; Fuse, Takashi

    Recently, the disclosure of statistical data representing the financial effects or burden of public works, through the web sites of national and local governments, has enabled discussion of macroscopic financial trends. However, it is still difficult to grasp, nationwide, how each spot has been changed by public works. The purpose of this research is to efficiently collect the road update information provided by various road administrators, in order to realize efficient updating of maps such as car navigation maps. In particular, we develop a system that automatically extracts the relevant public works from the public work order outlooks released by each local government and registers summaries, including position information, in a database, by combining several web mining technologies. Finally, we collect and register several tens of thousands of records from web sites all over Japan, and confirm the feasibility of our method.

  2. Potentialities and limits of QSPR and molecular modeling in the design of the extraction solvents used in hydrometallurgy

    International Nuclear Information System (INIS)

    Cote, G.; Chagnes, A.

    2010-01-01

    Due to new challenges, new extraction solvents based on innovative extractants are needed in hydrometallurgy for specific tasks. The aim of the present paper is thus to discuss the potentialities and limits of QSPR and molecular modeling for identifying new extractants. QSPR methods may have useful applications in a problem as complex as the design of ligands for metal separation. Nevertheless, the degree of reliability of the predictions is still limited and, in the present state of the art, these techniques are more useful for optimization within a given family of extractants than for building new reagents in silico. Molecular modeling techniques provide binding energies between target metals and given ligands, as well as optimized chemical structures of the formed complexes. Thus, in principle, the information that can be deduced from molecular modeling computations is richer than that provided by QSPR methods. Nevertheless, an effort should be made to establish more tangible links between the calculated binding energies and the physical parameters used by hydrometallurgists, such as the complexation constants in aqueous phase (β_MAn) or, better, the extraction constants (K_ex). (author)

  3. A simple and cost-effective method of DNA extraction from small formalin-fixed paraffin-embedded tissue for molecular oncologic testing.

    Science.gov (United States)

    Snow, Anthony N; Stence, Aaron A; Pruessner, Jonathan A; Bossler, Aaron D; Ma, Deqin

    2014-01-01

    Extraction of DNA from formalin-fixed, paraffin-embedded (FFPE) tissue is a critical step in molecular oncologic testing. As molecular oncology testing becomes more important for prognostic and therapeutic decision making, and tissue specimens become smaller due to earlier detection of suspicious lesions and the use of fine needle aspiration methods for tissue collection, it becomes more challenging for the typical molecular pathology laboratory to obtain reliable test results. We developed a DNA extraction method to obtain a sufficient quantity of high-quality genomic DNA from limited FFPE tissue for molecular oncology testing, using a combination of H&E stained slides, a matrix capture method and the Qiagen DNA column. Three DNA extraction methods were compared: our standard procedure of manually scraping tissue from unstained slides followed by DNA extraction using the QIAamp FFPE column (Qiagen, Valencia, CA), and a glue capture method (Pinpoint Solution, Zymo Research Corp, Inc) on H&E stained slides followed by DNA extraction using either the QIAamp column or the column included with the Pinpoint kit (Zymo Research). The DNA extraction protocol was optimized. Statistical analysis was performed using the paired two-sample Student's t-test. The combination of the matrix capture method with the QIAamp column gave an amount of DNA equivalent to our standard extraction method using unstained slides, and a 4.6-fold higher DNA yield than the Zymo column included in the Pinpoint Slide Solution kit. Several molecular tests were performed, and DNA purified using the new method gave the same results as the previous methods. Using H&E stained slides allows visual confirmation of tumor cells during microdissection. The Pinpoint solution made removal of specific tissue from the slides easier and reduced the risk of contamination and tissue loss. This DNA extraction method is simple and cost-effective, and blends with our current workflow while requiring no additional equipment.

  4. Knowledge environments representing molecular entities for the virtual physiological human.

    Science.gov (United States)

    Hofmann-Apitius, Martin; Fluck, Juliane; Furlong, Laura; Fornes, Oriol; Kolárik, Corinna; Hanser, Susanne; Boeker, Martin; Schulz, Stefan; Sanz, Ferran; Klinger, Roman; Mevissen, Theo; Gattermayer, Tobias; Oliva, Baldo; Friedrich, Christoph M

    2008-09-13

    In essence, the virtual physiological human (VPH) is a multiscale representation of human physiology spanning from the molecular level via cellular processes and multicellular organization of tissues to complex organ function. The different scales of the VPH deal with different entities, relationships and processes, and in consequence the models used to describe and simulate biological functions vary significantly. Here, we describe methods and strategies to generate knowledge environments representing molecular entities that can be used for modelling the molecular scale of the VPH. Our strategy to generate knowledge environments representing molecular entities is based on the combination of information extraction from scientific text and the integration of information from biomolecular databases. We introduce @neuLink, a first prototype of an automatically generated, disease-specific knowledge environment combining biomolecular, chemical, genetic and medical information. Finally, we provide a perspective for the future implementation and use of knowledge environments representing molecular entities for the VPH.

  5. A molecular imprint-coated stirrer bar for selective extraction of caffeine, theobromine and theophylline

    International Nuclear Information System (INIS)

    Zhu, Quanfei; Ma, Chao; Chen, Huaixia; Wu, Yaqi; Huang, Jianlin

    2014-01-01

    We have prepared a novel caffeine-imprinted polymer on a stir bar that can be used for the selective extraction of caffeine, theobromine and theophylline from beverages. The polymerization time and the quantities of reagents (template, cross-linker, porogenic solvent) were optimized. The morphology of the molecularly imprinted polymer coating was studied by scanning electron microscopy and Fourier transform IR spectroscopy. A rapid and sensitive method was worked out for the extraction of caffeine, theobromine and theophylline from beverages using the molecularly imprinted stir bar followed by HPLC analysis. The effects of extraction solvent, stirring speed, desorption solvent, and adsorption and desorption times were optimized. The method displays a linear response in the 5-150 μg L⁻¹ caffeine concentration range, with a correlation coefficient of >0.9904. The recoveries for the three analytes in tea, carbonated and functional beverages were 91-108%, 90-110% and 93-109%, with relative standard deviations ranging from 3.6-5.7%, 3.5-7.9% and 3.2-7.9%, respectively. (author)

  6. Automatic Compound Annotation from Mass Spectrometry Data Using MAGMa.

    NARCIS (Netherlands)

    Ridder, L.O.; Hooft, van der J.J.J.; Verhoeven, S.

    2014-01-01

    The MAGMa software for automatic annotation of mass spectrometry based fragmentation data was applied to 16 MS/MS datasets of the CASMI 2013 contest. Eight solutions were submitted in category 1 (molecular formula assignments) and twelve in category 2 (molecular structure assignment). The MS/MS

  7. Automatically identifying gene/protein terms in MEDLINE abstracts.

    Science.gov (United States)

    Yu, Hong; Hatzivassiloglou, Vasileios; Rzhetsky, Andrey; Wilbur, W John

    2002-01-01

    Natural language processing (NLP) techniques are used to extract information automatically from computer-readable literature. In biology, the identification of terms corresponding to biological substances (e.g., genes and proteins) is a necessary step that precedes the application of other NLP systems that extract biological information (e.g., protein-protein interactions, gene regulation events, and biochemical pathways). We have developed GPmarkup (for "gene/protein-full name mark up"), a software system that automatically identifies gene/protein terms (i.e., symbols or full names) in MEDLINE abstracts. As part of the mark-up process, we also automatically generated a knowledge source of paired gene/protein symbols and full names (e.g., LARD for lymphocyte associated receptor of death) from MEDLINE. We found that many of the pairs in our knowledge source do not appear in the current GenBank database; our methods may therefore also be used for automatic lexicon generation. GPmarkup has 73% recall and 93% precision in identifying and marking up gene/protein terms in MEDLINE abstracts. A random sample of gene/protein symbols and full names and a sample set of marked up abstracts can be viewed at http://www.cpmc.columbia.edu/homepages/yuh9001/GPmarkup/. Contact: hy52@columbia.edu. Voice: 212-939-7028; fax: 212-666-0140.
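
    A toy version of one sub-task, harvesting paired symbols and full names from definitional patterns such as "lymphocyte associated receptor of death (LARD)", might look like the following; the regular expression and the plausibility filter are assumptions, far simpler than GPmarkup itself:

```python
import re

# Pattern for "full name (SYMBOL)" definitions in abstract text.
DEF_PATTERN = re.compile(r"([A-Za-z][A-Za-z\- ]{3,80}?)\s*\(([A-Z][A-Z0-9\-]{1,9})\)")

def harvest_pairs(abstract):
    """Collect candidate (full name, symbol) pairs from one abstract; a real
    system adds much stronger checks that the symbol matches the name."""
    pairs = []
    for name, symbol in DEF_PATTERN.findall(abstract):
        initials = "".join(w[0] for w in name.lower().split())
        if symbol[0].lower() in initials:      # crude plausibility filter
            pairs.append((name.strip(), symbol))
    return pairs

print(harvest_pairs("... lymphocyte associated receptor of death (LARD) is ..."))
```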

  8. [The molecular mechanisms of curcuma wenyujin extract-mediated inhibitory effects on human esophageal carcinoma cells in vitro].

    Science.gov (United States)

    Jing, Zhao; Zou, Hai-Zhou; Xu, Fang

    2012-09-01

    To study the molecular mechanisms of the inhibitory effects of Curcuma Wenyujin extract on human esophageal carcinoma cells. The Curcuma Wenyujin extract was obtained by supercritical carbon dioxide extraction. TE-1 cells were divided into 4 groups after adherence: 100 microL RPMI-1640 culture medium containing 0.1% DMSO was added to group 1 as the control group, and 100 microL of complete culture medium containing 25, 50, or 100 mg/L Curcuma Wenyujin extract was added to the remaining 3 groups as the low-, middle- and high-dose groups. The effects of the different doses of Curcuma Wenyujin extract (25, 50, and 100 mg/L) on the proliferation of the human esophageal carcinoma cell line TE-1 in vitro were analyzed by MTT assay. The gene expression profile of esophageal carcinoma TE-1 cells exposed to Curcuma Wenyujin extract for 48 h was identified by cDNA microarrays, and the differentially expressed genes were further analyzed by Gene Ontology function analysis. Compared with the control group, the MTT results showed that Curcuma Wenyujin extract significantly inhibited the proliferation of TE-1 cells in a dose-dependent manner (P < 0.05). Curcuma Wenyujin extract could thus inhibit the growth of the human esophageal carcinoma cell line TE-1 in vitro; the molecular mechanisms might be associated with the regulation of gene expression at multiple levels.

  9. Nanostructured conducting molecularly imprinted polymer for selective extraction of salicylate from urine and serum samples by electrochemically controlled solid-phase micro-extraction

    Energy Technology Data Exchange (ETDEWEB)

    Ameli, Akram [Department of Chemistry, Faculty of Science, Tarbiat Modares University, P.O. Box 14115-175, Tehran (Iran, Islamic Republic of); Alizadeh, Naader, E-mail: alizaden@modares.ac.ir [Department of Chemistry, Faculty of Science, Tarbiat Modares University, P.O. Box 14115-175, Tehran (Iran, Islamic Republic of)

    2011-11-30

    Highlights: ► Overoxidized polypyrrole templated with salicylate has been utilized as a conducting molecularly imprinted polymer for EC-SPME. ► This is the first study reporting a conducting molecularly imprinted polymer used for EC-SPME of salicylate. ► The proposed method is particularly effective for sample clean-up and selective monitoring of salicylate in physiological samples. - Abstract: Overoxidized polypyrrole (OPPy) films templated with salicylate (SA) have been utilized as conducting molecularly imprinted polymers (CMIPs) for potential-induced selective solid-phase micro-extraction processes. Various important fabrication factors for controlling the performance of the OPPy films have been investigated using fluorescence spectrometry. Several key parameters, such as the applied potentials for uptake and release and the pH of the uptake and release solutions, were varied to achieve the optimum micro-extraction procedure. The film templated with SA exhibited excellent selectivity over some interferences. The calibration graphs were linear in the ranges of 5 × 10⁻⁸ to 5 × 10⁻⁴ and 1.2 × 10⁻⁶ to 5 × 10⁻⁴ mol mL⁻¹ and the detection limit was 4 × 10⁻⁸ mol L⁻¹. The OPPy film as the solid-phase micro-extraction absorbent has been applied to the selective clean-up and quantification of trace amounts of SA from physiological samples. Scanning electron microscopy (SEM) results confirmed the nano-structured morphologies of the films.

  10. Nanostructured conducting molecularly imprinted polymer for selective extraction of salicylate from urine and serum samples by electrochemically controlled solid-phase micro-extraction

    International Nuclear Information System (INIS)

    Ameli, Akram; Alizadeh, Naader

    2011-01-01

    Highlights: ► Overoxidized polypyrrole templated with salicylate has been utilized as a conducting molecularly imprinted polymer for EC-SPME. ► This is the first study reporting a conducting molecularly imprinted polymer used for EC-SPME of salicylate. ► The proposed method is particularly effective for sample clean-up and selective monitoring of salicylate in physiological samples. - Abstract: Overoxidized polypyrrole (OPPy) films templated with salicylate (SA) have been utilized as conducting molecularly imprinted polymers (CMIPs) for potential-induced selective solid-phase micro-extraction processes. Various important fabrication factors for controlling the performance of the OPPy films have been investigated using fluorescence spectrometry. Several key parameters, such as the applied potentials for uptake and release and the pH of the uptake and release solutions, were varied to achieve the optimum micro-extraction procedure. The film templated with SA exhibited excellent selectivity over some interferences. The calibration graphs were linear in the ranges of 5 × 10⁻⁸ to 5 × 10⁻⁴ and 1.2 × 10⁻⁶ to 5 × 10⁻⁴ mol mL⁻¹ and the detection limit was 4 × 10⁻⁸ mol L⁻¹. The OPPy film as the solid-phase micro-extraction absorbent has been applied to the selective clean-up and quantification of trace amounts of SA from physiological samples. Scanning electron microscopy (SEM) results confirmed the nano-structured morphologies of the films.

  11. AsteriX: a Web server to automatically extract ligand coordinates from figures in PDF articles.

    Science.gov (United States)

    Lounnas, V; Vriend, G

    2012-02-27

    Coordinates describing the chemical structures of small molecules that are potential ligands for pharmaceutical targets are used at many stages of the drug design process. The coordinates of the vast majority of ligands can be obtained from either publicly accessible or commercial databases. However, interesting ligands are sometimes only available in the scientific literature, in which case their coordinates need to be reconstructed manually, a process that consists of a series of time-consuming steps. We present a Web server that helps reconstruct the three-dimensional (3D) coordinates of ligands for which a two-dimensional (2D) picture is available in a PDF file. The software, called AsteriX, analyses every picture contained in the PDF file and attempts to determine automatically whether or not it contains ligands. Areas of pictures that may contain molecular structures are processed to extract connectivity and atom type information, from which coordinates can subsequently be reconstructed. The AsteriX Web server was tested on a series of articles containing a large diversity of graphical representations. In total, 88% of the 3249 ligand structures present in the test set were identified as chemical diagrams. Of these, about half were interpreted correctly as 3D structures, and a further one third required only minor manual corrections. It is impossible in principle to always reconstruct 3D coordinates correctly from pictures, because there are many different protocols for drawing a 2D image of a ligand and, more importantly, a wide variety of semantic annotations are possible. The AsteriX Web server therefore includes facilities that allow users to augment partial or partially correct 3D reconstructions. All 3D reconstructions are submitted, checked, and corrected by the users at the server and are freely available for everybody. The coordinates of the reconstructed ligands are made available in a series of formats commonly used in drug design research. The

  12. A nanoscale study of charge extraction in organic solar cells: the impact of interfacial molecular configurations.

    Science.gov (United States)

    Tang, Fu-Ching; Wu, Fu-Chiao; Yen, Chia-Te; Chang, Jay; Chou, Wei-Yang; Gilbert Chang, Shih-Hui; Cheng, Horng-Long

    2015-01-07

    In the optimization of organic solar cells (OSCs), a key problem lies in maximizing the extraction of charge carriers from the active layer to the electrodes. Hence, this study focused on the role of interfacial molecular configurations in efficient OSC charge extraction, using theoretical investigations and experiments on both small-molecule-based bilayer-heterojunction (sm-BLHJ) and polymer-based bulk-heterojunction (p-BHJ) OSCs. We first examined a well-defined sm-BLHJ model OSC composed of p-type pentacene, an n-type perylene derivative, and a nanogroove-structured poly(3,4-ethylenedioxythiophene) (NS-PEDOT) hole extraction layer. The OSC with NS-PEDOT shows a 230% increase in short-circuit current density compared with the conventional planar PEDOT layer. Our theoretical calculations indicated that small variations in the microscopic intermolecular interactions among these interfacial configurations can induce significant differences in charge extraction efficiency. Experimentally, different interfacial configurations were generated between the photoactive layer and the nanostructured charge extraction layer with periodic nanogroove structures. In addition to pentacene, poly(3-hexylthiophene), the most commonly used electron-donor material in p-BHJ OSCs, was also explored as a possible photoactive layer. Local conductive atomic force microscopy was used to measure the nanoscale charge extraction efficiency at different locations within the nanogroove, highlighting the importance of interfacial molecular configurations for efficient charge extraction. This study enriches our understanding of how the photovoltaic properties of several types of OSCs can be optimized by appropriate interfacial engineering based on organic/polymer molecular orientations. An ultimate power conversion efficiency beyond 15% can be expected when the best state-of-the-art p-BHJ OSCs are combined with the present findings.

  13. Automatic characterization of dynamics in Absence Epilepsy

    DEFF Research Database (Denmark)

    Petersen, Katrine N. H.; Nielsen, Trine N.; Kjær, Troels W.

    2013-01-01

    Dynamics of the spike-wave paroxysms in Childhood Absence Epilepsy (CAE) are automatically characterized using novel approaches. Features are extracted from scalograms formed by Continuous Wavelet Transform (CWT). Detection algorithms are designed to identify an estimate of the temporal development...
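
    A minimal sketch of CWT-based feature extraction of this kind, using PyWavelets with a Morlet wavelet; the wavelet, scale range and the two summary features are assumptions, not the paper's descriptors:

```python
import numpy as np
import pywt

def scalogram_features(eeg, fs=256.0, scales=np.arange(1, 64)):
    """Features from the CWT scalogram of one EEG channel: the per-scale
    energy profile and the strongest scale at each sample (a simple ridge),
    as stand-ins for descriptors of spike-wave paroxysm dynamics."""
    coef, freqs = pywt.cwt(eeg, scales, "morl", sampling_period=1.0 / fs)
    power = np.abs(coef) ** 2                  # scales x samples scalogram
    band_energy = power.mean(axis=1)           # energy profile across scales
    dominant_scale = power.argmax(axis=0)      # strongest scale per sample
    return band_energy, dominant_scale, freqs
```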

  14. Automatic color preference correction for color reproduction

    Science.gov (United States)

    Tsukada, Masato; Funayama, Chisato; Tajima, Johji

    2000-12-01

    The reproduction of natural objects in color images has attracted a great deal of attention. Reproducing more pleasing colors for natural objects is one way to improve image quality. We developed an automatic color correction method that maintains preferred color reproduction for three significant categories: facial skin color, green grass and blue sky. In this method, a representative color of the object area to be corrected is automatically extracted from the input image, and a set of color correction parameters is selected depending on that representative color. The improvement in image quality for reproductions of natural images was more than 93 percent in subjective experiments. These results show the usefulness of our automatic color correction method for the reproduction of preferred colors.
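
    The core idea, extracting a representative color from the detected object area and shifting it toward a category-specific preferred color, can be sketched in a few lines. The preferred skin color and blending strength below are invented placeholders, not the paper's parameters:

```python
import numpy as np

# Hypothetical memory color for facial skin in RGB; the method selects the
# correction parameters per category (skin, grass, sky).
PREFERRED_SKIN = np.array([224.0, 172.0, 138.0])

def correct_region(img, mask, preferred=PREFERRED_SKIN, strength=0.5):
    """Shift the masked object area toward its preferred color by blending
    the representative (mean) color of the area with the memory color."""
    out = img.astype(float).copy()
    representative = out[mask].mean(axis=0)    # representative color
    shift = strength * (preferred - representative)
    out[mask] = np.clip(out[mask] + shift, 0, 255)
    return out.astype(np.uint8)
```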

  15. SU-F-R-05: Multidimensional Imaging Radiomics-Geodesics: A Novel Manifold Learning Based Automatic Feature Extraction Method for Diagnostic Prediction in Multiparametric Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, V [The Johns Hopkins University, Computer Science. Baltimore, MD (United States); Jacobs, MA [The Johns Hopkins University School of Medicine, Dept of Radiology and Oncology. Baltimore, MD (United States)

    2016-06-15

    Purpose: Multiparametric radiological imaging is used for diagnosis in patients. Extracting useful features specific to a patient's pathology would be a crucial step towards personalized medicine and the assessment of treatment options. In order to automatically extract features directly from multiparametric radiological imaging datasets, we developed an advanced unsupervised machine learning algorithm called multidimensional imaging radiomics-geodesics (MIRaGe). Methods: Seventy-six breast tumor patients who underwent 3T MRI breast imaging were used for this study. We tested the ability of the MIRaGe algorithm to extract features for the classification of breast tumors as benign or malignant. The MRI parameters used were T1-weighted, T2-weighted, dynamic contrast enhanced MR imaging (DCE-MRI) and diffusion weighted imaging (DWI). The MIRaGe algorithm extracted radiomics-geodesics features (RGFs) from the multiparametric MRI datasets, enabling our method to learn the intrinsic manifold representations corresponding to the patients. To determine the informative RGFs, a modified Isomap algorithm (t-Isomap) was created for a radiomics-geodesics feature space (tRGFS) to avoid overfitting. Final classification was performed using an SVM. The predictive power of the RGFs was tested and validated using k-fold cross validation. Results: The RGFs extracted by the MIRaGe algorithm successfully classified malignant lesions from benign lesions with a sensitivity of 93% and a specificity of 91%. The top 50 RGFs identified as the most predictive by the t-Isomap procedure were consistent with the radiological parameters known to be associated with breast cancer diagnosis and were categorized as kinetic curve characterizing RGFs, wash-in rate characterizing RGFs, wash-out rate characterizing RGFs and morphology characterizing RGFs. Conclusion: In this paper, we developed a novel feature extraction algorithm for multiparametric radiological imaging. The results demonstrated the power of the MIRa

  16. SU-F-R-05: Multidimensional Imaging Radiomics-Geodesics: A Novel Manifold Learning Based Automatic Feature Extraction Method for Diagnostic Prediction in Multiparametric Imaging

    International Nuclear Information System (INIS)

    Parekh, V; Jacobs, MA

    2016-01-01

    Purpose: Multiparametric radiological imaging is used for diagnosis in patients. Extracting useful features specific to a patient's pathology would be a crucial step towards personalized medicine and the assessment of treatment options. In order to automatically extract features directly from multiparametric radiological imaging datasets, we developed an advanced unsupervised machine learning algorithm called multidimensional imaging radiomics-geodesics (MIRaGe). Methods: Seventy-six breast tumor patients who underwent 3T MRI breast imaging were used for this study. We tested the ability of the MIRaGe algorithm to extract features for the classification of breast tumors as benign or malignant. The MRI parameters used were T1-weighted, T2-weighted, dynamic contrast enhanced MR imaging (DCE-MRI) and diffusion weighted imaging (DWI). The MIRaGe algorithm extracted radiomics-geodesics features (RGFs) from the multiparametric MRI datasets, enabling our method to learn the intrinsic manifold representations corresponding to the patients. To determine the informative RGFs, a modified Isomap algorithm (t-Isomap) was created for a radiomics-geodesics feature space (tRGFS) to avoid overfitting. Final classification was performed using an SVM. The predictive power of the RGFs was tested and validated using k-fold cross validation. Results: The RGFs extracted by the MIRaGe algorithm successfully classified malignant lesions from benign lesions with a sensitivity of 93% and a specificity of 91%. The top 50 RGFs identified as the most predictive by the t-Isomap procedure were consistent with the radiological parameters known to be associated with breast cancer diagnosis and were categorized as kinetic curve characterizing RGFs, wash-in rate characterizing RGFs, wash-out rate characterizing RGFs and morphology characterizing RGFs. Conclusion: In this paper, we developed a novel feature extraction algorithm for multiparametric radiological imaging. The results demonstrated the power of the MIRa
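
    The manifold-learning-plus-SVM pattern in these two records maps naturally onto scikit-learn. The sketch below substitutes standard Isomap for the authors' modified t-Isomap and uses random stand-in data, so it only illustrates the pipeline shape, not the reported accuracy:

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row of stacked multiparametric voxel/feature values per patient,
# y: 0 = benign, 1 = malignant (both hypothetical stand-ins here).
rng = np.random.default_rng(0)
X = rng.normal(size=(76, 500))
y = rng.integers(0, 2, size=76)

# Isomap learns the manifold embedding (the "geodesics" part); the SVM makes
# the final benign/malignant call, scored with k-fold cross-validation.
model = make_pipeline(StandardScaler(),
                      Isomap(n_neighbors=10, n_components=10),
                      SVC())
print(cross_val_score(model, X, y, cv=5).mean())
```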

  17. Automatic Sleep Staging using Multi-dimensional Feature Extraction and Multi-kernel Fuzzy Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Yanjun Zhang

    2014-01-01

    Full Text Available This paper employed clinical polysomnographic (PSG) data, mainly all-night electroencephalogram (EEG), electrooculogram (EOG) and electromyogram (EMG) signals of subjects, and adopted the American Academy of Sleep Medicine (AASM) clinical staging manual as the standard to realize automatic sleep staging. The authors extracted eighteen different features of the EEG, EOG and EMG in the time and frequency domains to construct feature vectors, according to the existing literature as well as clinical experience. By self-learning from the sleep samples, the linear combination weights and the parameters of the multiple kernels of the fuzzy support vector machine (FSVM) were learned, and the multi-kernel FSVM (MK-FSVM) was constructed. The overall agreement between the experts' scores and the presented results was 82.53%. Compared with previous results, the accuracy of N1 was improved to some extent while the accuracies of the other stages were comparable, which well reflected the sleep structure. The staging algorithm proposed in this paper is transparent and worth further investigation.
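
    The multi-kernel mechanics can be shown with scikit-learn's precomputed-kernel interface. The paper learns the kernel weights and uses a fuzzy SVM; the sketch below fixes the weights by hand and uses a plain SVC, so it is only a structural illustration:

```python
import numpy as np
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.svm import SVC

def multi_kernel_svm(X_train, y_train, X_test, weights=(0.5, 0.5)):
    """Classify with a fixed linear combination of two Gram matrices fed to
    an SVM through the 'precomputed' kernel interface."""
    w1, w2 = weights
    K_train = w1 * linear_kernel(X_train) + w2 * rbf_kernel(X_train)
    K_test = w1 * linear_kernel(X_test, X_train) + w2 * rbf_kernel(X_test, X_train)
    clf = SVC(kernel="precomputed").fit(K_train, y_train)
    return clf.predict(K_test)
```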

  18. Statistical Analysis of Automatic Seed Word Acquisition to Improve Harmful Expression Extraction in Cyberbullying Detection

    Directory of Open Access Journals (Sweden)

    Suzuha Hatakeyama

    2016-04-01

    Full Text Available We study the social problem of cyberbullying, defined as a new form of bullying that takes place in the Internet space. This paper proposes a method for the automatic acquisition of seed words to improve the performance of the original cyberbullying detection method of Nitta et al. [1]. We conducted an experiment in exactly the same settings and found that the method, based on a Web mining technique, had lost over 30 percentage points of its performance since being proposed in 2013. We therefore hypothesize on the reasons for the decrease in performance and propose a number of improvements, from which we experimentally choose the best one. Furthermore, we collect several seed word sets using different approaches and evaluate their precision. We found that the influential factor in the extraction of harmful expressions is not the number of seed words, but the way the seed words are collected and filtered.

  19. Feasibility of Automatic Extraction of Electronic Health Data to Evaluate a Status Epilepticus Clinical Protocol.

    Science.gov (United States)

    Hafeez, Baria; Paolicchi, Juliann; Pon, Steven; Howell, Joy D; Grinspan, Zachary M

    2016-05-01

    Status epilepticus is a common neurologic emergency in children, and pediatric medical centers often develop protocols to standardize care. Widespread adoption of electronic health records by hospitals affords clinicians the opportunity to rapidly and electronically evaluate protocol adherence. We reviewed the clinical data of a small sample of 7 children with status epilepticus in order to (1) qualitatively determine the feasibility of automated data extraction and (2) demonstrate a timeline-style visualization of each patient's first 24 hours of care. Qualitatively, our observations indicate that most clinical data are well labeled in structured fields within the electronic health record, though some important information, particularly electroencephalography (EEG) data, may require manual abstraction. We conclude that a visualization that clarifies a patient's clinical course can be created automatically from the patient's electronic clinical data, supplemented with some manually abstracted data. Future work could use this timeline to evaluate adherence to status epilepticus clinical protocols. © The Author(s) 2015.

  20. Automatic segmentation and 3-dimensional display based on the knowledge of head MRI images

    International Nuclear Information System (INIS)

    Suzuki, Hidetomo; Toriwaki, Jun-ichiro.

    1987-01-01

    In this paper we present a procedure that automatically extracts soft tissues, such as subcutaneous fat, brain, and cerebral ventricle, from multislice MRI images of the head region, and displays their 3-dimensional images. Segmentation of the soft tissues is done by iterative thresholding. In order to select the optimum threshold value automatically, we introduce into the procedure a measure that evaluates the goodness of segmentation. When the measure satisfies given conditions, the iteration of thresholding terminates, and the final segmentation result is extracted using the current threshold value. Since this procedure executes segmentation and calculation of the goodness measure in each slice automatically, it greatly reduces the effort required of users. Moreover, the 3-dimensional display of the segmented tissues shows that this procedure can extract the shape of each soft tissue with reasonable precision for clinical use. (author)
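
    A minimal sketch of iterative threshold selection driven by a goodness-of-segmentation measure; the stopping rule and the callable interface are assumptions, and the paper's actual measure is not reproduced here:

```python
import numpy as np

def iterative_threshold(slice_img, thresholds, goodness, target=0.95):
    """Sweep candidate thresholds, score each binary segmentation with a
    user-supplied goodness measure (a callable returning a value in [0, 1]),
    and stop as soon as the measure satisfies the given condition."""
    best_t, best_score = thresholds[0], -np.inf
    for t in thresholds:
        mask = slice_img > t
        score = goodness(mask)
        if score > best_score:
            best_t, best_score = t, score
        if score >= target:                # conditions met: stop iterating
            break
    return slice_img > best_t, best_t
```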

  1. Molecular dynamics simulation of cyclodextrin aggregation and extraction of Anthracene from non-aqueous liquid phase

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Xinzhe [Shenzhen Key Laboratory for Coastal Ocean Dynamic and Environment, Division of Ocean Science and Technology, Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055 (China); School of Environment, Tsinghua University, Beijing 100084 (China); Wu, Guozhong [Shenzhen Key Laboratory for Coastal Ocean Dynamic and Environment, Division of Ocean Science and Technology, Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055 (China); Chen, Daoyi, E-mail: chen.daoyi@sz.tsinghua.edu.cn [Shenzhen Key Laboratory for Coastal Ocean Dynamic and Environment, Division of Ocean Science and Technology, Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055 (China)

    2016-12-15

    Cyclodextrin (CD) extraction is widely used for the remediation of polycyclic aromatic hydrocarbon (PAH) pollution, but the influence of CD aggregation on PAH transport from the non-aqueous liquid phase to water remains unclear. The atomistic adsorption and complexation of PAHs (32 anthracenes) by CD aggregates (48 β-cyclodextrins) were studied by molecular dynamics simulations on time scales of hundreds of nanoseconds. Results indicated that high temperature promoted βCD aggregation in bulk oil, which was not observed in bulk water. Nevertheless, the fractions of anthracenes entrapped inside the βCD cavities in both scenarios increased significantly when the temperature was raised from 298 to 328 K. Free energy calculations for the sub-steps of CD extraction demonstrated that the anthracenes could be extracted either when the βCDs arrived at the water-oil interface or after the βCDs entered the bulk oil. The former was a kinetically controlled process, while the latter was thermodynamically limited. Results also highlighted the formation of porous structures by CD aggregates in water, which were able to sequester PAH clusters considerably larger than the cavity diameter of an individual CD. This provides an opportunity for the extraction by cyclodextrins of recalcitrant PAHs with molecular sizes larger than anthracene.

  2. Using activity-related behavioural features towards more effective automatic stress detection.

    Directory of Open Access Journals (Sweden)

    Dimitris Giakoumis

    This paper introduces activity-related behavioural features that can be automatically extracted from a computer system, with the aim of increasing the effectiveness of automatic stress detection. The proposed features are based on processing of appropriate video and accelerometer recordings taken from the monitored subjects. For the purposes of the present study, an experiment was conducted that utilized a stress-induction protocol based on the Stroop colour word test. Video, accelerometer and biosignal (electrocardiogram and galvanic skin response) recordings were collected from nineteen participants. Then, an explorative study was conducted by following a methodology mainly based on spatiotemporal descriptors (Motion History Images) extracted from video sequences. A large set of activity-related behavioural features, potentially useful for automatic stress detection, was proposed and examined. Experimental evaluation showed that several of these behavioural features significantly correlate with self-reported stress. Moreover, it was found that the use of the proposed features can significantly enhance the performance of typical automatic stress detection systems, commonly based on biosignal processing.
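
    The Motion History Image behind those spatiotemporal descriptors has a simple, standard update rule; below is a minimal numpy sketch (`tau` and the difference threshold are illustrative choices, not values from the paper).

    ```python
    import numpy as np

    def update_mhi(mhi, prev_frame, frame, tau=30, diff_thresh=25):
        """One Motion History Image step: pixels that moved are set to tau,
        all others decay by 1 (clamped at 0)."""
        moved = np.abs(frame.astype(int) - prev_frame.astype(int)) > diff_thresh
        return np.where(moved, tau, np.maximum(mhi - 1, 0))

    # Usage over a grayscale video: fold update_mhi across consecutive frames,
    # then derive behavioural features (e.g., mean motion energy) from the MHI.
    ```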

  3. Preparation and evaluation of molecularly imprinted solid-phase micro-extraction fibers for selective extraction of phthalates in an aqueous sample

    International Nuclear Information System (INIS)

    He Juan; Lv Ruihe; Zhan Haijun; Wang Huizhi; Cheng Jie; Lu Kui; Wang Fengcheng

    2010-01-01

    A novel molecularly imprinted polymer (MIP) was prepared using dibutyl phthalate (DBP) as the template molecule and applied to a solid-phase micro-extraction (SPME) device that could be coupled directly to a gas chromatograph–mass spectrometer (GC/MS). The characteristics and application of this fiber were investigated. Electron microscope images indicated that the MIP-coated solid-phase micro-extraction (MI-SPME) fibers were homogeneous and porous. The extraction yield of DBP with the MI-SPME fibers was higher than that of the non-imprinted polymer (NIP)-coated SPME (NI-SPME) fibers. The MI-SPME fibers also showed high selectivity towards other phthalates with structures similar to DBP. A method was developed for the determination of phthalates using MI-SPME fibers coupled with GC/MS, and the extraction conditions were optimized. Detection limits for the phthalate samples were within the range of 2.17–20.84 ng L−1. The method was applied to five kinds of phthalates dissolved in spiked aqueous samples and gave recoveries of 94.54–105.34%. Thus, the MI-SPME fibers are suitable for the extraction of trace phthalates in complicated samples.

  4. Biochemical and molecular evidences for the antitumor potential of Ginkgo biloba leaves extract in rodents.

    Science.gov (United States)

    Ahmed, Hanaa H; Shousha, Wafaa Gh; El-Mezayen, Hatem A; El-Toumy, Sayed A; Sayed, Alaa H; Ramadan, Aesha R

    2017-01-01

    Hepatocellular carcinoma (HCC) is one of the deadliest primary cancers, with a 5-year survival rate of 10% or less. This study was undertaken to elucidate the biochemical and molecular mechanisms underlying N-nitrosodiethylamine-induced hepatocellular carcinoma. Furthermore, the aim of this work was extended to explore the efficacy of Ginkgo biloba leaves extract against HCC in rats. In the current study, the HCC group experienced significant downregulation of ING-3 gene expression and upregulation of Foxp-1 gene expression in liver. Treatment of the HCC groups with Ginkgo biloba leaves extract resulted in upregulation of ING-3 and downregulation of Foxp-1 gene expression in liver. In addition, there was a significant increase in serum alpha-fetoprotein (AFP), carcinoembryonic antigen (CEA) and glypican-3 (GPC-3) levels in the HCC group versus the negative control group. In contrast, the HCC groups subjected to either a high or a low dose of Ginkgo biloba leaves extract showed a significant reduction (P < 0.05) in these tumor markers, along with reduced anaplasia. Interestingly, treatment with Ginkgo biloba leaves extract elicited marked improvement in the histological features of liver tissue in the HCC groups. In conclusion, this research indicated that the carcinogenic potency of N-nitrosodiethylamine targeted multiple systems at the cellular and molecular levels. In addition, the results of the current study shed light on the promising anticancer activity of Ginkgo biloba leaves extract in the treatment of chemically induced hepatocellular carcinoma in an experimental model, through its apoptotic and antiproliferative properties.

  5. Evaluation of semi-automatic arterial stenosis quantification

    International Nuclear Information System (INIS)

    Hernandez Hoyos, M.; Universite Claude Bernard Lyon 1, 69 - Villeurbanne; Univ. de los Andes, Bogota; Serfaty, J.M.; Douek, P.C.; Universite Claude Bernard Lyon 1, 69 - Villeurbanne; Hopital Cardiovasculaire et Pneumologique L. Pradel, Bron; Maghiar, A.; Mansard, C.; Orkisz, M.; Magnin, I.; Universite Claude Bernard Lyon 1, 69 - Villeurbanne

    2006-01-01

    Object: To assess the accuracy and reproducibility of semi-automatic vessel axis extraction and stenosis quantification in 3D contrast-enhanced Magnetic Resonance Angiography (CE-MRA) of the carotid arteries (CA). Materials and methods: A total of 25 MRA datasets were used: 5 phantoms with known stenoses, and 20 patients (40 CAs) drawn from a multicenter trial database. The Maracas software extracted vessel centerlines and quantified the stenoses, based on boundary detection in planes perpendicular to the centerline. Centerline accuracy was visually scored. Semi-automatic measurements were compared with: (1) theoretical phantom morphometric values, and (2) stenosis degrees evaluated by two independent radiologists. Results: Exploitable centerlines were obtained in 97% of CAs and in all phantoms. In phantoms, the software achieved better agreement with theoretical stenosis degrees (weighted kappa κW = 0.91) than the radiologists did (κW = 0.69). In patients, agreement between software and radiologists varied from κW = 0.67 to 0.90. In both, Maracas was substantially more reproducible than the readers. Mean operating time was within 1 min/CA. Conclusion: Maracas software generates accurate 3D centerlines of vascular segments with minimum user intervention. Semi-automatic quantification of CA stenosis is also accurate, except in very severe stenoses that cannot be segmented. It substantially reduces the inter-observer variability. (orig.)
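
    The weighted kappa used above measures agreement between graded stenosis readings; scikit-learn's `cohen_kappa_score` computes it directly (the toy grades below are invented for illustration).

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Stenosis grades (e.g., 0-3 severity categories) assigned by the software
    # and by a radiologist for the same set of carotid arteries (toy data).
    software = [0, 1, 2, 3, 2, 1, 0, 3]
    reader   = [0, 1, 2, 2, 2, 1, 1, 3]

    kappa_w = cohen_kappa_score(software, reader, weights="linear")
    print(f"weighted kappa = {kappa_w:.2f}")
    ```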

  6. Automatic vertebral identification using surface-based registration

    Science.gov (United States)

    Herring, Jeannette L.; Dawant, Benoit M.

    2000-06-01

    This work introduces an enhancement to currently existing methods of intra-operative vertebral registration by allowing the portion of the spinal column surface that correctly matches a set of physical vertebral points to be automatically selected from several possible choices. Automatic selection is made possible by the shape variations that exist among lumbar vertebrae. In our experiments, we register vertebral points representing physical space to spinal column surfaces extracted from computed tomography images. The vertebral points are taken from the posterior elements of a single vertebra to represent the region of surgical interest. The surface is extracted using an improved version of the fully automatic marching cubes algorithm, which results in a triangulated surface that contains multiple vertebrae. We find the correct portion of the surface by registering the set of physical points to multiple surface areas, including all vertebral surfaces that potentially match the physical point set. We then compute the standard deviation of the surface error for the set of points registered to each vertebral surface that is a possible match, and the registration that corresponds to the lowest standard deviation designates the correct match. We have performed our current experiments on two plastic spine phantoms and one patient.
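
    A sketch of the selection step described above: register the physical point set against each candidate vertebral surface, then keep the candidate with the lowest standard deviation of the residual distances. Registration itself is assumed already done; nearest-neighbour distances via a k-d tree stand in for point-to-surface error, and all names are illustrative.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def best_matching_vertebra(physical_pts, candidate_surfaces):
        """Pick the candidate surface (each given as an (N, 3) point array)
        whose residuals to the registered physical points have the lowest
        standard deviation."""
        best_idx, best_std = None, np.inf
        for i, surf_pts in enumerate(candidate_surfaces):
            dists, _ = cKDTree(surf_pts).query(physical_pts)
            if dists.std() < best_std:
                best_idx, best_std = i, dists.std()
        return best_idx, best_std
    ```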

  7. Fast automatic segmentation of anatomical structures in x-ray computed tomography images to improve fluorescence molecular tomography reconstruction.

    Science.gov (United States)

    Freyer, Marcus; Ale, Angelique; Schulz, Ralf B; Zientkowska, Marta; Ntziachristos, Vasilis; Englmeier, Karl-Hans

    2010-01-01

    The recent development of hybrid imaging scanners that integrate fluorescence molecular tomography (FMT) and x-ray computed tomography (XCT) allows the utilization of x-ray information as image priors for improving optical tomography reconstruction. To fully capitalize on this capacity, we consider a framework for the automatic and fast detection of different anatomic structures in murine XCT images. To accurately differentiate between structures such as bone, lung, and heart, a combination of image processing steps including thresholding, seed growing, and signal detection is found to offer optimal segmentation performance. The algorithm and its utilization in an inverse FMT scheme that uses priors is demonstrated on mouse images.

  8. Determination of rifampicin in human plasma by high-performance liquid chromatography coupled with ultraviolet detection after automatized solid-liquid extraction.

    Science.gov (United States)

    Louveau, B; Fernandez, C; Zahr, N; Sauvageon-Martre, H; Maslanka, P; Faure, P; Mourah, S; Goldwirt, L

    2016-12-01

    A precise and accurate high-performance liquid chromatography (HPLC) method for the quantification of rifampicin in human plasma was developed and validated using ultraviolet detection after an automated solid-phase extraction. The method was validated with respect to selectivity, extraction recovery, linearity, intra- and inter-day precision, accuracy, lower limit of quantification and stability. Chromatographic separation was performed on a Chromolith RP-8 column using a mixture of 0.05 M acetate buffer pH 5.7-acetonitrile (35:65, v/v) as mobile phase. The compounds were detected at a wavelength of 335 nm with a lower limit of quantification of 0.05 mg/L in human plasma. Retention times for rifampicin and for 6,7-dimethyl-2,3-di(2-pyridyl)quinoxaline, used as internal standard, were 3.77 and 4.81 min, respectively. This robust and accurate method was successfully applied routinely for therapeutic drug monitoring in patients treated with rifampicin. Copyright © 2016 John Wiley & Sons, Ltd.

  9. Molecular Affinity of Mabolo Extracts to an Octopamine Receptor of a Fruit Fly

    Directory of Open Access Journals (Sweden)

    Francoise Neil D. Dacanay

    2017-10-01

    Essential oils extracted from plants are composed of volatile organic compounds that can affect insect behavior. Identifying the active components of the essential oils and their biochemical targets is necessary to design novel biopesticides. In this study, essential oils extracted from Diospyros discolor (Willd.) were analyzed using gas chromatography–mass spectrometry (GC-MS) to create an untargeted metabolite profile. Subsequently, a conformational ensemble of the Drosophila melanogaster octopamine receptor in mushroom bodies (OAMB) was created from a molecular dynamics simulation to provide a flexible receptor for docking studies. GC-MS analysis revealed the presence of several metabolites, mostly aromatic esters. Interestingly, these aromatic esters were found to exhibit relatively higher binding affinities to OAMB than the receptor's natural agonist, octopamine. The molecular origin of this enhanced affinity is the π-stacking interaction between the aromatic moieties of the residues and ligands. This strategy, computational inspection in tandem with untargeted metabolomics, may provide insights for screening essential oils as potential OAMB inhibitors.

  10. Oocytes Polar Body Detection for Automatic Enucleation

    Directory of Open Access Journals (Sweden)

    Di Chen

    2016-02-01

    Enucleation is a crucial step in cloning. In order to achieve automatic blind enucleation, the polar body of the oocyte must be detected automatically. Conventional polar body detection approaches have low success rates or low efficiency. We propose a polar body detection method based on machine learning in this paper. On one hand, an improved Histogram of Oriented Gradients (HOG) algorithm is employed to extract features of polar body images, which increases the success rate. On the other hand, a position prediction method is put forward to narrow the search range for the polar body, which improves efficiency. Experimental results show that the success rate is 96% for various types of polar bodies. Furthermore, the method was applied to an enucleation experiment and improved the degree of automation of enucleation.
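
    A minimal sketch of the HOG-plus-classifier part (without the paper's HOG modifications or position prediction): scikit-image computes the descriptors and a linear SVM separates polar-body patches from background. The patch size, HOG parameters, and the random placeholder data are illustrative.

    ```python
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import SVC

    def hog_features(patch):
        # patch: grayscale crop around a candidate polar-body location
        return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))

    # Toy training setup: labelled crops (1 = polar body, 0 = background).
    patches = [np.random.rand(64, 64) for _ in range(20)]  # placeholder data
    labels = [i % 2 for i in range(20)]
    clf = SVC(kernel="linear").fit([hog_features(p) for p in patches], labels)
    ```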

  11. Large-scale automatic extraction of side effects associated with targeted anticancer drugs from full-text oncological articles.

    Science.gov (United States)

    Xu, Rong; Wang, QuanQiu

    2015-06-01

    Targeted anticancer drugs such as imatinib, trastuzumab and erlotinib have dramatically improved treatment outcomes in cancer patients; however, these innovative agents are often associated with unexpected side effects. The pathophysiological mechanisms underlying these side effects are not well understood. The availability of a comprehensive knowledge base of side effects associated with targeted anticancer drugs has the potential to illuminate complex pathways underlying toxicities induced by these innovative drugs. While side effect association knowledge for targeted drugs exists in multiple heterogeneous data sources, published full-text oncological articles represent an important source of pivotal, investigational, and even failed trials in a variety of patient populations. In this study, we present an automatic process to extract targeted anticancer drug-associated side effects (drug-SE pairs) from a large number of high-profile full-text oncological articles. We downloaded 13,855 full-text articles from the Journal of Clinical Oncology (JCO) published between 1983 and 2013. We developed text classification, relationship extraction, signal filtering, and signal prioritization algorithms to extract drug-SE pairs from the downloaded articles. We extracted a total of 26,264 drug-SE pairs with an average precision of 0.405, a recall of 0.899, and an F1 score of 0.465. We show that side effect knowledge from JCO articles is largely complementary to that from the US Food and Drug Administration (FDA) drug labels. Through integrative correlation analysis, we show that targeted drug-associated side effects positively correlate with their gene targets and disease indications. In conclusion, this unique database that we built from a large number of high-profile oncological articles could facilitate the development of computational models to understand toxic effects associated with targeted anticancer drugs. Copyright © 2015 Elsevier Inc. All rights reserved.
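
    Pipelines like this typically start from sentence-level co-occurrence candidates before classification and filtering; the sketch below shows that first step with tiny hypothetical lexicons (the paper's actual lexicons, classifiers, and filters are far richer).

    ```python
    import itertools
    import re

    DRUGS = {"imatinib", "trastuzumab", "erlotinib"}          # toy drug lexicon
    SIDE_EFFECTS = {"rash", "diarrhea", "fatigue", "nausea"}  # toy SE lexicon

    def candidate_pairs(text):
        """Emit every (drug, side-effect) pair mentioned in the same sentence;
        downstream filtering/prioritisation would prune spurious pairs."""
        pairs = set()
        for sent in re.split(r"(?<=[.!?])\s+", text.lower()):
            words = set(re.findall(r"[a-z]+", sent))
            pairs |= set(itertools.product(DRUGS & words, SIDE_EFFECTS & words))
        return pairs

    print(candidate_pairs("Erlotinib was associated with rash. Fatigue was rare."))
    ```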

  12. Development of Filtered Bispectrum for EEG Signal Feature Extraction in Automatic Emotion Recognition Using Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Prima Dewi Purnamasari

    2017-05-01

    The development of automatic emotion detection systems has recently gained significant attention due to the growing possibility of their implementation in several applications, including affective computing and various fields within biomedical engineering. Use of the electroencephalograph (EEG) signal is preferred over facial expression, as people cannot control the EEG signal generated by their brain; the EEG thus ensures stronger reliability of the psychological signal. However, because of its uniqueness between individuals and its vulnerability to noise, use of EEG signals can be rather complicated. In this paper, we propose a methodology for EEG-based emotion recognition using a filtered bispectrum as the feature extraction subsystem and an artificial neural network (ANN) as the classifier. The bispectrum is theoretically superior to the power spectrum because it can identify phase coupling between the nonlinear process components of the EEG signal. In the feature extraction process, to extract the information contained in the bispectrum matrices, a 3D pyramid filter is used for sampling and quantifying the bispectrum values. Experimental results show that the mean percentage of the bispectrum value from 5 × 5 non-overlapped 3D pyramid filters produces the highest recognition rate. We found that reducing the number of EEG channels down to only eight in the frontal area of the brain does not significantly affect the recognition rate, and the number of data samples used in the training process was then increased to improve the recognition rate of the system. We also utilized a probabilistic neural network (PNN) as another classifier and compared its recognition rate with that of the back-propagation neural network (BPNN); the results show that the PNN produces a comparable recognition rate at lower computational cost. Our research shows that the extracted bispectrum values of an EEG signal, with 3D filtering as the feature extraction step, provide an effective feature set for automatic emotion recognition.
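
    For reference, a direct (segment-averaged) bispectrum estimate B(f1, f2) = ⟨X(f1)X(f2)X*(f1+f2)⟩ takes only a few lines of numpy; the segment length, FFT size, and windowing below are illustrative, and the paper's 3D pyramid filtering is not included.

    ```python
    import numpy as np

    def bispectrum(x, nfft=64, seg_len=64):
        """Direct bispectrum estimate: average X(f1)X(f2)X*(f1+f2) over
        non-overlapping, Hann-windowed segments of the signal x."""
        segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, seg_len)]
        B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
        for s in segs:
            X = np.fft.fft(s * np.hanning(seg_len), nfft)
            for f1 in range(nfft // 2):
                for f2 in range(nfft // 2):
                    B[f1, f2] += X[f1] * X[f2] * np.conj(X[(f1 + f2) % nfft])
        return np.abs(B) / max(len(segs), 1)

    B = bispectrum(np.random.randn(1024))  # toy one-channel "EEG" signal
    ```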

  13. Automatic flow-through dynamic extraction: A fast tool to evaluate char-based remediation of multi-element contaminated mine soils.

    Science.gov (United States)

    Rosende, María; Beesley, Luke; Moreno-Jimenez, Eduardo; Miró, Manuel

    2016-02-01

    An automatic in-vitro bioaccessibility test based upon dynamic microcolumn extraction in a programmable flow setup is herein proposed as a screening tool to evaluate biochar-based remediation of mine soils contaminated with trace elements, as a compelling alternative to conventional phyto-availability tests. The feasibility of the proposed system was evaluated by extracting the readily bioaccessible pools of As, Pb and Zn in two contaminated mine soils before and after the addition of two biochars (9% (w:w)) of diverse source origin (pine and olive). Bioaccessible fractions under worst-case scenarios were measured using 0.001 mol L−1 CaCl2 as extractant to mimic plant uptake, followed by analysis of the extracts by inductively coupled plasma optical emission spectrometry. A t-test comparing means revealed efficient metal (mostly Pb and Zn) immobilization by the olive pruning-based biochar relative to the bare (control) soil at the 0.05 significance level. In-vitro flow-through bioaccessibility tests are compared for the first time with in-vivo phyto-toxicity assays in a microcosm soil study. By assessing seed germination and shoot elongation of Lolium perenne in contaminated soils with and without biochar amendments, the dynamic flow-based bioaccessibility data proved to be in good agreement with the phyto-availability tests. Experimental results indicate that the dynamic extraction method is a viable and economical in-vitro tool for risk assessment studies evaluating the suitability of a given biochar amendment for revegetation and remediation of metal-contaminated soils, taking a mere 10 min against 4 days for the phyto-toxicity assays. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Molecularly Imprinted Polymers as Extracting Media for the Chromatographic Determination of Antibiotics in Milk

    Directory of Open Access Journals (Sweden)

    Dimitrios Bitas

    2018-02-01

    Milk-producing animals are typically kept stationary in overcrowded large-scale farms, in most cases under unsanitary conditions, which promotes the development of infections. In order to maintain sufficient health status among the herd or to promote growth and increase production, farmers administer preventative antibiotic doses to the animals through their feed. However, many antibiotics used in cattle farms are intended for the treatment of bacterial infections in humans. This results in the development of antibiotic-resistant bacteria, which pose a great risk to public health. Additionally, antibiotic residues are found in milk and dairy products, with potential toxic effects for consumers. Hence the need for monitoring antibiotic residues in milk arises. Analytical methods have been developed for the determination of antibiotics in milk, with key priority given to the analyte extraction and preconcentration step. Extraction can benefit from molecularly imprinted polymers (MIPs), which can be applied as sorbents for the extraction of specific antibiotics. This review focuses on the principles of molecular imprinting technology and synthesis methods of MIPs, as well as the application of MIPs and MIP composites to the chromatographic determination of various antibiotic categories in milk, as found in the recent literature.

  15. Critical comparison of the on-line and off-line molecularly imprinted solid-phase extraction of patulin coupled with liquid chromatography.

    Science.gov (United States)

    Lhotská, Ivona; Holznerová, Anežka; Solich, Petr; Šatínský, Dalibor

    2017-12-01

    Reaching trace amounts of mycotoxin contamination requires sensitive and selective analytical tools. Improving the selectivity of sample pretreatment steps by adopting new and modern extraction techniques is one way to achieve this, and molecularly imprinted polymers as selective extraction sorbents undoubtedly meet these criteria. The presented work focuses on the hyphenation of on-line molecularly imprinted solid-phase extraction with a chromatography system using a column-switching approach. A critical comparison with a simultaneously developed off-line extraction procedure, an evaluation of the pros and cons of each method, and an assessment of the reliability of both methods on real sample analysis were carried out. Both high-performance liquid chromatography methods, one using off-line extraction on a molecularly imprinted polymer and one using the on-line column-switching approach, were validated, and the validation results were compared against each other. Although automation brought significant time savings, fewer human errors, and no handling of toxic solvents, it yielded worse detection limits (15 versus 6 μg/L), worse recovery values (68.3-123.5 versus 81.2-109.9%), and worse efficiency throughout the entire clean-up process in comparison with the off-line extraction method. The difficulties encountered, the compromises made during the optimization of the on-line coupling, and their critical evaluation are presented in detail. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. IADE: a system for intelligent automatic design of bioisosteric analogs

    Science.gov (United States)

    Ertl, Peter; Lewis, Richard

    2012-11-01

    IADE, a software system supporting molecular modellers through the automatic design of non-classical bioisosteric analogs, scaffold hopping and fragment growing, is presented. The program combines sophisticated cheminformatics functionalities for constructing novel analogs and filtering them based on their drug-likeness and synthetic accessibility using automatic structure-based design capabilities: the best candidates are selected according to their similarity to the template ligand and to their interactions with the protein binding site. IADE works in an iterative manner, improving the fitness of designed molecules in every generation until structures with optimal properties are identified. The program frees molecular modellers from routine, repetitive tasks, allowing them to focus on analysis and evaluation of the automatically designed analogs, considerably enhancing their work efficiency as well as the area of chemical space that can be covered. The performance of IADE is illustrated through a case study of the design of a nonclassical bioisosteric analog of a farnesyltransferase inhibitor—an analog that has won a recent "Design a Molecule" competition.
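
    One concrete way to realize "similarity to the template ligand" is a fingerprint Tanimoto ranking; the sketch below uses RDKit as an assumed, generic dependency (IADE's own cheminformatics stack and scoring are more sophisticated) with made-up SMILES strings.

    ```python
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem

    def similarity_to_template(template_smiles, candidate_smiles):
        """Rank candidate analogs by Tanimoto similarity of Morgan
        fingerprints to the template ligand."""
        def fp(smi):
            return AllChem.GetMorganFingerprintAsBitVect(
                Chem.MolFromSmiles(smi), 2, nBits=2048)
        t = fp(template_smiles)
        return sorted(((DataStructs.TanimotoSimilarity(t, fp(s)), s)
                       for s in candidate_smiles), reverse=True)

    # Toy call: rank two candidates against a phenol template.
    print(similarity_to_template("c1ccccc1O", ["c1ccccc1N", "CCO"]))
    ```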

  17. Application of molecular sieves in the fractionation of lemongrass oil from high-pressure carbon dioxide extraction

    Directory of Open Access Journals (Sweden)

    L. Paviani

    2006-06-01

    The aim of this work was to study the feasibility of a simultaneous process of high-pressure extraction and fractionation of lemongrass essential oil using molecular sieves. For this purpose, a laboratory-scale high-pressure extraction unit was employed, coupled with a fractionation column packed with one of four stationary phases: ZSM5 zeolite, MCM-41 mesoporous material, alumina, or silica. Additionally, the effect of the carbon dioxide extraction variables on the global yield and chemical composition of the essential oil was studied over a temperature range of 293 to 313 K and a pressure range of 100 to 200 bar. The volatile organic compounds of the extracts were identified by a gas chromatograph coupled with a mass spectrometric detector (GC/MS). The results indicated that both the extraction process variables and the stationary phase affected the extraction yield and the chemical composition of the extracts.

  18. Automatic segmentation of mandible in panoramic x-ray

    OpenAIRE

    Abdi, Amir Hossein; Kasaei, Shohreh; Mehdizadeh, Mojdeh

    2015-01-01

    As the panoramic x-ray is the most common extraoral radiograph in dentistry, segmentation of its anatomical structures facilitates diagnosis and registration of dental records. This study presents a fast and accurate method for automatic segmentation of the mandible in panoramic x-rays. In the proposed four-step algorithm, the superior border is extracted through horizontal integral projections. A modified Canny edge detector accompanied by morphological operators extracts the inferior border of the mandible.
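
    A compact sketch of the two border-finding ideas named above, using OpenCV (the parameter values and the simple valley heuristic are illustrative; the paper's modified Canny and full four-step pipeline are more involved). The input is assumed to be an 8-bit grayscale panoramic image.

    ```python
    import cv2
    import numpy as np

    def mandible_borders(pano):
        """Locate the dark valley between the jaws via a horizontal integral
        projection, and trace candidate inferior-border edges with Canny
        cleaned by a morphological closing."""
        rows = pano.sum(axis=1)              # horizontal integral projection
        valley_row = int(np.argmin(rows))    # darkest row ~ inter-jaw gap
        edges = cv2.Canny(pano, 50, 150)
        edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE,
                                 np.ones((5, 5), np.uint8))
        return valley_row, edges
    ```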

  19. Determination of organophosphorus pesticides using molecularly imprinted polymer solid phase extraction

    International Nuclear Information System (INIS)

    Mohd Marsin Sanagi; Syairah Salleh; Wan Aini Wan Ibrahim

    2011-01-01

    A molecularly imprinted polymer solid-phase extraction (MIP-SPE) method has been developed for the determination of organophosphorus pesticides (OPPs) in water samples. The MIP was prepared by a thermo-polymerization method using methacrylic acid (MAA) as functional monomer, ethylene glycol dimethacrylate (EGDMA) as crosslinker, acetonitrile as porogenic solvent and quinalphos as the template molecule. Three OPPs (diazinon, quinalphos and chlorpyrifos) were selected as target analytes as they are widely used in the agriculture sector. Various parameters affecting the extraction efficiency of the imprinted polymers were evaluated to optimize the selective preconcentration of OPPs from aqueous samples. The characteristics of the MIP-SPE method were validated by high performance liquid chromatography (HPLC). The accuracy and selectivity of the developed MIP-SPE process were verified against non-imprinted polymer solid-phase extraction (NIP-SPE), and a commercial C18-SPE was used for comparison. The recoveries of the target analytes obtained using the MIPs as the solid-phase sorbent ranged from 83% to 98% (RSDs 1.05-1.98%; n = 3) for water samples. The results demonstrate that the developed MIP-SPE method can be applied to the determination of OPPs in water samples. (author)

  20. Automatic measurement of cusps in 2.5D dental images

    Science.gov (United States)

    Wolf, Mattias; Paulus, Dietrich W.; Niemann, Heinrich

    1996-01-01

    Automatic reconstruction of occlusal surfaces of teeth is an application which might become more and more urgent due to the toxicity of amalgam. Modern dental chairside equipment is currently restricted to the production of inlays; automatic reconstruction of the occlusal surface is presently not possible. Manufacturing an occlusal surface requires extracting features from which destroyed teeth can be reconstructed. In this paper, we demonstrate how intact upper molars can be automatically extracted in dental range and intensity images. After normalization of the 3D location, the sizes of the cusps are detected and the distances between them are calculated. In the presented approach, the detection of the upper molar is based on a knowledge-based segmentation which incorporates anatomic knowledge. After segmentation of the tooth of interest, the central fossa is calculated. The normalization of the spatial location is achieved by aligning the detected fossa with a reference axis. After the cusp tips are located in the range image, the image is resized. The methods have been successfully tested on 60 images, and the results have been compared with a dentist's evaluation on a sample of 20 images. The results will be further used for the automatic production of tooth inlays.

  1. Automatic measurement for solid state track detectors

    International Nuclear Information System (INIS)

    Ogura, Koichi

    1982-01-01

    Since tracks in solid state track detectors are measured with a microscope, observers are forced to do hard work that consumes time and labour. This leads to poor statistical accuracy or to personal error. Therefore, much research has aimed at simplifying and automating track measurement. There are two categories of automated measurement: simple counting of the number of tracks, and measurements that also require geometrical elements such as the size of the tracks or their coordinates. The former is called automatic counting and the latter automatic analysis. The usual way to evaluate the number of tracks in automatic counting is to estimate the total number of tracks in the total detector area or in a field of view of a microscope; this is suitable when the track density is high. Methods that count tracks one by one include spark counting and the scanning microdensitometer. Automatic analysis includes video image analysis, in which high-quality images obtained with a high-resolution video camera are processed with a micro-computer, and the tracks are automatically recognized and measured by feature extraction. This method is described in detail. Among the many kinds of automatic measurement reported so far, the most frequently used are ''spark counting'' and ''video image analysis''. (Wakatsuki, Y.)

  2. Preparation of molecularly imprinted polymers for strychnine by precipitation polymerization and multistep swelling and polymerization and their application for the selective extraction of strychnine from nux-vomica extract powder.

    Science.gov (United States)

    Nakamura, Yukari; Matsunaga, Hisami; Haginaka, Jun

    2016-04-01

    Monodisperse molecularly imprinted polymers for strychnine were prepared by precipitation polymerization and by multistep swelling and polymerization. In precipitation polymerization, methacrylic acid and divinylbenzene were used as functional monomer and crosslinker, respectively, while in multistep swelling and polymerization, methacrylic acid and ethylene glycol dimethacrylate were used. The retention and molecular recognition properties of the molecularly imprinted polymers prepared by both methods were evaluated by liquid chromatography using a mixture of sodium phosphate buffer and acetonitrile as the mobile phase. In addition to shape recognition, ionic and hydrophobic interactions could affect the retention of strychnine at low acetonitrile content. Furthermore, molecularly imprinted polymers prepared by both methods could selectively recognize strychnine among the solutes tested. The retention factor and imprinting factor of strychnine on the molecularly imprinted polymer prepared by precipitation polymerization were 220 and 58, respectively, using 20 mM sodium phosphate buffer (pH 6.0)/acetonitrile (50:50, v/v) as the mobile phase, while those on the polymer prepared by multistep swelling and polymerization were 73 and 4.5. These results indicate that precipitation polymerization is suitable for the preparation of a molecularly imprinted polymer for strychnine. Furthermore, the molecularly imprinted polymer was successfully applied to the selective extraction of strychnine from nux-vomica extract powder. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
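
    The retention factor and imprinting factor quoted above follow from two one-line formulas, k = (tR - t0)/t0 and IF = k(MIP)/k(NIP); the toy retention times below are invented, chosen only so the arithmetic reproduces k = 220 and IF of roughly 58.

    ```python
    def retention_factor(t_r, t_0):
        """k = (tR - t0) / t0, from analyte and void retention times."""
        return (t_r - t_0) / t_0

    def imprinting_factor(k_mip, k_nip):
        """IF = k(MIP) / k(NIP): how much more strongly the imprinted polymer
        retains the template than the non-imprinted control."""
        return k_mip / k_nip

    # Invented times (min): analyte on MIP, analyte on NIP, void marker.
    k_mip = retention_factor(22.1, 0.1)   # -> 220
    k_nip = retention_factor(0.48, 0.1)   # -> 3.8
    print(imprinting_factor(k_mip, k_nip))  # -> ~58
    ```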

  3. Automatic digitization of SMA data

    Science.gov (United States)

    Väänänen, Mika; Tanskanen, Eija

    2017-04-01

    In the 1970s and 1980s the Scandinavian Magnetometer Array produced large amounts of excellent data from over 30 stations in Norway, Sweden and Finland. 620 film reels and 20 kilometers of film have been preserved, and the longest time series produced in the campaign spans almost five years without interruption, but the data have never seen widespread use due to the choice of medium. Film is a difficult medium to digitize efficiently. Previously, events of interest were searched for by hand, and digitization was done by projecting the film on paper and plotting it by hand. We propose a method for automatically digitizing geomagnetic data stored on film and extracting the numerical values from the digitized data. The automatic digitization process helps preserve old, valuable data that might otherwise go unused.
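
    The core of such a digitizer can be very small: scan the film, recover the trace as the darkest pixel in each column, and convert row offsets to field units. A sketch under those assumptions; `baseline_row` and `gain` would come from the instrument's calibration, which is not specified here.

    ```python
    import numpy as np

    def digitize_trace(film_img, baseline_row, gain):
        """Recover a 1-D signal from a scanned magnetogram film image:
        in each pixel column, take the darkest pixel as the beam position
        and map row offsets from a known baseline to physical units."""
        trace_rows = film_img.argmin(axis=0)        # darkest pixel per column
        return (baseline_row - trace_rows) * gain   # offset -> field units
    ```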

  4. Physics of Automatic Target Recognition

    CERN Document Server

    Sadjadi, Firooz

    2007-01-01

    Physics of Automatic Target Recognition addresses the fundamental physical bases of sensing and information extraction in the state-of-the-art automatic target recognition field. It explores both passive and active multispectral sensing, polarimetric diversity, complex signature exploitation, sensor and processing adaptation, transformation of electromagnetic and acoustic waves in their interactions with targets, background clutter, transmission media, and sensing elements. The general inverse scattering and advanced signal processing techniques and scientific evaluation methodologies used in this multidisciplinary field are part of this exposition. The issues of modeling target signatures in various spectral modalities (LADAR, IR, SAR, high-resolution radar, acoustic, seismic, visible, hyperspectral) and in diverse geometric aspects are addressed. The methods for signal processing and classification cover concepts such as sensor-adaptive and artificial neural networks and time-reversal filtering.

  5. Automatic extraction and processing of small RNAs on a multi-well/multi-channel (M&M) chip.

    Science.gov (United States)

    Zhong, Runtao; Flack, Kenneth; Zhong, Wenwan

    2012-12-07

    The study of the regulatory roles of small RNAs can be accelerated by techniques that permit simple, low-cost, and rapid extraction of small RNAs from a small number of cells. In order to ensure highly specific and sensitive detection, the extracted RNAs should be free of background nucleic acids and present stably in a small volume. To meet these criteria, we designed a multi-well/multi-channel (M&M) chip to carry out automatic and selective isolation of small RNAs via solid-phase extraction (SPE), followed by reverse transcription (RT) to convert them to the more stable cDNAs in a final volume of 2 μL. Droplets containing buffers for RNA binding, washing, and elution were trapped in microwells, which were connected by one channel and suspended in mineral oil. The silica magnetic particles (SMPs) for SPE were moved along the channel from well to well, i.e. in between droplets, by a fixed magnet and a translation stage, allowing the nucleic acid fragments to bind to the SMPs, be washed, and then be eluted for the RT reaction within 15 minutes. RNAs shorter than 63 nt were selectively enriched from cell lysates, with recovery comparable to that of a commercial kit. Physical separation of the droplets on our M&M chip allowed the use of multiple channels for parallel processing of multiple samples. It also permitted smooth integration with on-chip RT-PCR, which simultaneously detected the target microRNA, mir-191, expressed in fewer than 10 cancer cells. Our results demonstrate that the M&M chip is a valuable and cost-saving platform for studying small RNA expression patterns in a limited number of cells with reasonable sample throughput.

  6. Molecular Allergen-Specific IgE Assays as a Complement to Allergen Extract-Based Sensitization Assessment

    NARCIS (Netherlands)

    Aalberse, Rob C.; Aalberse, Joost A.

    2015-01-01

    Molecular allergen-based component-resolved diagnostic IgE antibody tests have emerged in the form of singleplex assays and multiplex arrays. They use both native and recombinant allergen molecules, sometimes in combination with each other, to supplement allergen extract-based IgE antibody analyses.

  7. Using Probe Vehicle Data for Automatic Extraction of Road Traffic Parameters

    Directory of Open Access Journals (Sweden)

    Roman Popescu Maria Alexandra

    2016-12-01

    Through this paper the author aims to study and find solutions for automatic detection of traffic light position and for automatic calculation of the waiting time at traffic lights. The first objective serves mainly the road transportation field, because it removes the need for collaboration with local authorities to establish a national network of traffic lights. The second objective is important not only for companies providing navigation solutions, but especially for authorities, institutions, and companies operating road traffic management systems. Real-time dynamic determination of traffic queue length and of waiting time at traffic lights allows the creation of dynamic, intelligent and flexible systems, adapted to actual traffic conditions rather than to generic, theoretical models. Thus, cities can approach the Smart City concept by making road transport more efficient and environmentally friendly, as promoted in Europe through the Horizon 2020 Smart Cities and Urban Mobility initiative.
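
    A plausible first cut at the second objective, estimating waiting time from probe traces, is to detect stop episodes near an intersection and average their durations; the pandas sketch below does exactly that under simplifying assumptions (the speed threshold, fixed radius around the stop line, and all column names are invented).

    ```python
    import pandas as pd

    def mean_wait_at_light(probes, light_xy, radius=50.0, v_stop=1.0):
        """probes: DataFrame with columns [vehicle_id, t (s), x, y, speed (m/s)].
        Returns the mean duration of stop episodes within `radius` metres of
        the light, a crude proxy for waiting time at the light."""
        d2 = (probes.x - light_xy[0]) ** 2 + (probes.y - light_xy[1]) ** 2
        near = probes[(d2 <= radius ** 2) & (probes.speed < v_stop)]
        waits = []
        for _, veh in near.groupby("vehicle_id"):
            t = veh.t.sort_values().to_numpy()
            # split a vehicle's slow samples into episodes (> 5 s gap = new stop)
            start = 0
            for i, is_break in enumerate((t[1:] - t[:-1]) > 5.0, 1):
                if is_break:
                    waits.append(t[i - 1] - t[start])
                    start = i
            waits.append(t[-1] - t[start])
        return sum(waits) / len(waits) if waits else 0.0
    ```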

  8. A framework for automatic feature extraction from airborne light detection and ranging data

    Science.gov (United States)

    Yan, Jianhua

    Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurements of topography over large areas. Airborne LIDAR systems usually return a 3-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information on different kinds of geometrical objects, such as high-resolution digital terrain models (DTMs), buildings, and trees. In the past decade, LIDAR has attracted more and more interest from researchers in the fields of remote sensing and GIS. Compared to traditional data sources, such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, the voluminous data pose a new challenge for automated extraction of geometrical information, because many raster image processing techniques cannot be directly applied to irregularly spaced LIDAR points. In this dissertation, a framework is proposed to automatically extract information about different kinds of geometrical objects, such as terrain and buildings, from LIDAR data. These are essential to numerous applications such as flood modeling, landslide prediction and hurricane animation. The framework consists of several intuitive algorithms. First, a progressive morphological filter was developed to detect non-ground LIDAR measurements. By gradually increasing the window size and elevation difference threshold of the filter, the measurements of vehicles, vegetation, and buildings are removed, while ground data are preserved. Then, building measurements are identified from the non-ground measurements using a region-growing algorithm based on a plane-fitting technique. Raw footprints for segmented building measurements are derived by connecting boundary points, and are further simplified and adjusted by several proposed operations to remove noise caused by the irregular spacing of the LIDAR points.
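
    A compact sketch of the progressive morphological filter idea on a rasterised surface (the window sequence, slope parameter, and thresholds are illustrative defaults, not the dissertation's values):

    ```python
    import numpy as np
    from scipy.ndimage import grey_opening

    def progressive_morphological_filter(dsm, cell=1.0, slope=0.3,
                                         windows=(3, 5, 9, 17),
                                         dh0=0.5, dh_max=3.0):
        """Classify ground cells in a rasterised LIDAR surface: apply grey
        openings with growing windows; cells whose elevation drops more than
        a window-dependent threshold are flagged as non-ground objects."""
        ground = np.ones(dsm.shape, dtype=bool)
        surface = dsm.copy()
        prev_w = 1
        for w in windows:
            opened = grey_opening(surface, size=(w, w))
            dh = min(dh0 + slope * (w - prev_w) * cell, dh_max)
            ground &= (surface - opened) <= dh   # big drop -> object, not ground
            surface, prev_w = opened, w
        return ground
    ```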

  9. Development of automatic extraction of the corpus callosum from magnetic resonance imaging of the head and examination of the early dementia objective diagnostic technique in feature analysis

    International Nuclear Information System (INIS)

    Kodama, Naoki; Kaneko, Tomoyuki

    2005-01-01

    We examined the objective diagnosis of dementia based on changes in the corpus callosum. We examined midsagittal head MR images of 17 early dementia patients (2 men and 15 women; mean age, 77.2±3.3 years) and 18 healthy elderly controls (2 men and 16 women; mean age, 73.8±6.5 years), 35 subjects altogether. First, the corpus callosum was automatically extracted from the MR images. Next, the early dementia patients were compared with the healthy elderly controls using 5 features from straight-line methods, 5 features from the run-length matrix, and 6 features from the co-occurrence matrix of the corpus callosum. Automatic extraction of the corpus callosum achieved an accuracy rate of 84.1±3.7%. A statistically significant difference was found in 6 of the 16 features between early dementia patients and healthy elderly controls. Discriminant analysis using these 6 features demonstrated a sensitivity of 88.2% and specificity of 77.8%, with an overall accuracy of 82.9%. These results indicate that feature analysis based on changes in the corpus callosum can be used as an objective diagnostic technique for early dementia. (author)
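
    Co-occurrence-matrix texture features of the kind listed above can be computed with scikit-image (version 0.19 or later, where the functions are spelled `graycomatrix`/`graycoprops`); the quantisation level and the particular four properties below are illustrative, not the paper's exact 6-feature set.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def cooccurrence_features(region, levels=32):
        """A few grey-level co-occurrence texture features of a segmented
        region (a non-negative array with a positive maximum)."""
        q = (region / region.max() * (levels - 1)).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        return {p: graycoprops(glcm, p).mean()
                for p in ("contrast", "homogeneity", "energy", "correlation")}
    ```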

  10. Extracting the diffusion tensor from molecular dynamics simulation with Milestoning

    International Nuclear Information System (INIS)

    Mugnai, Mauro L.; Elber, Ron

    2015-01-01

    We propose an algorithm to extract the diffusion tensor from Molecular Dynamics simulations with Milestoning. A Kramers-Moyal expansion of a discrete master equation, which is the Markovian limit of the Milestoning theory, determines the diffusion tensor. To test the algorithm, we analyze overdamped Langevin trajectories and recover a multidimensional Fokker-Planck equation. The recovery process determines the flux through a mesh and estimates local kinetic parameters. Rate coefficients are converted to the derivatives of the potential of mean force and to a coordinate-dependent diffusion tensor. We illustrate the computation on simple models and on an atomically detailed system: diffusion along the backbone torsions of a solvated alanine dipeptide.
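
    For orientation, the first two Kramers-Moyal coefficients, the drift vector and the diffusion tensor, have the standard textbook form below (generic notation; the paper's estimators are built on the discrete Milestoning master equation rather than directly on these limits):

    ```latex
    % First two Kramers-Moyal coefficients (generic notation):
    D^{(1)}_{i}(\mathbf{x}) = \lim_{\tau\to 0}\frac{1}{\tau}
      \big\langle x_i(t+\tau)-x_i(t) \big\rangle\big|_{\mathbf{x}(t)=\mathbf{x}},
    \qquad
    D^{(2)}_{ij}(\mathbf{x}) = \lim_{\tau\to 0}\frac{1}{2\tau}
      \big\langle [x_i(t+\tau)-x_i(t)]\,[x_j(t+\tau)-x_j(t)]
      \big\rangle\big|_{\mathbf{x}(t)=\mathbf{x}}
    ```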

  11. A semi-supervised learning framework for biomedical event extraction based on hidden topics.

    Science.gov (United States)

    Zhou, Deyu; Zhong, Dayou

    2015-05-01

    Scientists have devoted decades of effort to understanding the interactions between proteins and in RNA production. This information might enrich current knowledge of drug reactions or of the development of certain diseases. Nevertheless, due to its lack of explicit structure, the literature in the life sciences, one of the most important sources of this information, prevents computer-based systems from accessing it. Therefore, biomedical event extraction, the automatic acquisition of knowledge of molecular events from research articles, has recently attracted community-wide efforts. Most approaches are based on statistical models and require large-scale annotated corpora to estimate the models' parameters precisely, but such corpora are usually difficult to obtain in practice. Employing un-annotated data through semi-supervised learning is therefore a feasible solution for biomedical event extraction and is attracting growing interest. In this paper, a semi-supervised learning framework based on hidden topics for biomedical event extraction is presented. In this framework, sentences in the un-annotated corpus are automatically assigned event annotations based on their distances to sentences in the annotated corpus. More specifically, not only the structures of the sentences, but also the hidden topics embedded in the sentences, are used to describe the distance. The sentences and newly assigned event annotations, together with the annotated corpus, are employed for training. Experiments were conducted on the multi-level event extraction corpus, a gold standard corpus. Experimental results show that the proposed framework achieves more than a 2.2% improvement in F-score on biomedical event extraction when compared to the state-of-the-art approach. The results suggest that by incorporating un-annotated data, the proposed framework indeed improves the performance of the state-of-the-art event extraction system, and that the similarity between sentences can be described more precisely by combining sentence structure with hidden topics.
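
    A toy illustration of the topic-space half of that distance: fit an LDA model, map sentences to topic distributions, and measure distances from un-annotated to annotated sentences (the sentences, topic count, and cosine metric are all invented for the sketch; the paper combines this with structural similarity).

    ```python
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.metrics.pairwise import cosine_distances

    annotated = ["STAT3 phosphorylation regulates gene expression",
                 "IL-6 binding induces receptor dimerization"]
    unannotated = ["receptor dimerization follows ligand binding"]

    X = CountVectorizer().fit_transform(annotated + unannotated)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    topics = lda.transform(X)

    # Distance of each un-annotated sentence to each annotated one in topic
    # space; the nearest annotated sentence would donate its event annotation.
    print(cosine_distances(topics[len(annotated):], topics[:len(annotated)]))
    ```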

  12. Automatic DNA Diagnosis for 1D Gel Electrophoresis Images using Bio-image Processing Technique.

    Science.gov (United States)

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Shaw, Philip J; Ukosakit, Kittipat; Tragoonrung, Somvong; Tongsima, Sissades

    2015-01-01

    DNA gel electrophoresis is a molecular biology technique for separating DNA fragments of different sizes. Applications of DNA gel electrophoresis include DNA fingerprinting (genetic diagnosis), size estimation of DNA, and DNA separation for Southern blotting. Accurate interpretation of DNA banding patterns from electrophoretic images can be laborious and error prone when a large number of bands are interrogated manually. Although many bio-imaging techniques have been proposed, none of them can fully automate the typing of DNA owing to the complexity of the migration patterns typically obtained. We developed an image-processing tool that automatically calls genotypes from DNA gel electrophoresis images. The image processing workflow comprises three main steps: 1) lane segmentation, 2) extraction of DNA bands and 3) band genotyping classification. The tool was originally intended to facilitate large-scale genotyping analysis of sugarcane cultivars. We tested the proposed tool on 10 gel images (433 cultivars) obtained from polyacrylamide gel electrophoresis (PAGE) of PCR amplicons for detecting intron length polymorphisms (ILP) at one locus of the sugarcanes. These gel images presented many challenges for automated lane/band segmentation, including lane distortion, band deformity, a high degree of background noise, and bands that are very close together (doublets). Using the proposed bio-imaging workflow, lanes and the DNA bands contained within them are properly segmented, even for adjacent bands with aberrant migration that cannot be separated by conventional techniques. The software, called GELect, automatically performs genotype calling on each lane by comparing with an all-banding reference, which was created by clustering the existing bands into a non-redundant set of reference bands. The automated genotype calling results were verified by independent manual typing by molecular biologists. This work presents an automated genotyping tool for DNA gel electrophoresis images.
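
    The first workflow step, lane segmentation, can be illustrated with a vertical intensity projection: lanes appear as dark columns, so peaks of the inverted column mean mark lane centres. A first-step sketch only; GELect's full pipeline also corrects lane distortion and classifies bands, and the parameters here are illustrative.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def segment_lanes(gel_img, min_lane_width=20):
        """Return the column index of each lane centre in an 8-bit gel image."""
        profile = 255.0 - gel_img.mean(axis=0)   # invert: dark lanes -> peaks
        peaks, _ = find_peaks(profile, distance=min_lane_width)
        return peaks
    ```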

  14. Automatic identification of temporal sequences in chewing sounds

    NARCIS (Netherlands)

    Amft, O.D.; Kusserow, M.; Tröster, G.

    2007-01-01

    Chewing is an essential part of food intake. The analysis and detection of food patterns is an important component of an automatic dietary monitoring system. However, chewing is a time-variable process that depends on food properties. We present an automated methodology to extract sub-sequences of chewing sounds.

  15. Study of molecularly imprinted solid-phase extraction of gonyautoxins 2,3 in the cultured dinoflagellate Alexandrium tamarense by high-performance liquid chromatography with fluorescence detection

    International Nuclear Information System (INIS)

    Lian, Zi-Ru; Wang, Jiang-Tao

    2013-01-01

    A highly selective sample cleanup procedure based on molecularly imprinted solid-phase extraction (MISPE) was developed for the isolation of gonyautoxins 2,3 (GTX2,3) from an Alexandrium tamarense sample. The molecularly imprinted polymer microspheres (MIPMs) were prepared by suspension polymerization using caffeine as the dummy template molecule, methacrylic acid as the functional monomer, ethylene glycol dimethacrylate as the cross-linker and polyvinyl alcohol as the dispersive reagent. The polymer microspheres were used as a selective sorbent for the solid-phase extraction of gonyautoxins 2,3. An off-line MISPE method followed by high-performance liquid chromatography (HPLC) with fluorescence detection for the analysis of gonyautoxins 2,3 was established. Finally, extract samples from Alexandrium tamarense were analyzed. The results showed that the imprinted polymer microspheres exhibited high affinity and selectivity for gonyautoxins 2,3. The interfering matrix components in the extract were effectively removed by MISPE, and the extraction efficiency of gonyautoxins 2,3 in the sample ranged from 81.74% to 85.86%.
    Graphical abstract: SEM photograph of the molecularly imprinted polymer microspheres (MIPMs).
    Highlights:
    • The molecularly imprinted polymer microspheres (MIPMs) for GTX2,3 were prepared.
    • The characteristics and regeneration properties of the MIPMs were studied.
    • An off-line method using MIPMs as solid-phase extraction (SPE) sorbents was developed.
    • GTX2,3 was successfully isolated from Alexandrium tamarense extract by MIPM-SPE.

  16. Separation and determination of citrinin in corn using HPLC fluorescence detection assisted by molecularly imprinted solid phase extraction clean-up

    Science.gov (United States)

    A liquid chromatography based method to detect citrinin in corn was developed using molecularly imprinted solid phase extraction (MISPE) sample clean-up. Molecularly imprinted polymers were synthesized using 1,4-dihydroxy-2-naphthoic acid as the template and an amine functional monomer. Density func...

  17. An Automatic Building Extraction and Regularisation Technique Using LiDAR Point Cloud Data and Orthoimage

    Directory of Open Access Journals (Sweden)

    Syed Ali Naqi Gilani

    2016-03-01

    The development of robust and accurate methods for automatic building detection and regularisation using multisource data continues to be a challenge due to point cloud sparsity, high spectral variability, differences among urban objects, surrounding complexity, and data misalignment. To address these challenges, constraints on an object's size, height, area, and orientation are generally exploited, which adversely affects detection performance. Buildings that are small, under shadow, or partly occluded are often discarded during the elimination of superfluous objects. To overcome these limitations, a methodology is developed to extract and regularise buildings using features from point clouds and orthoimagery. The building delineation process is carried out by identifying candidate building regions and segmenting them into grids. Vegetation elimination, building detection and extraction of their partially occluded parts are achieved by synthesising the point cloud and image data. Finally, the detected buildings are regularised by exploiting image lines in the building regularisation process. The detection and regularisation processes have been evaluated using the ISPRS benchmark and four Australian data sets which differ in point density (1 to 29 points/m²), building sizes, shadows, terrain, and vegetation. Results indicate 83% to 93% per-area completeness with correctness above 95%, demonstrating the robustness of the approach. The absence of over- and many-to-many segmentation errors in the ISPRS data set indicates that the technique has high per-object accuracy. Compared with six existing similar methods, the proposed detection and regularisation approach performs significantly better on the more complex (Australian) data sets, and performs as well as or better than its counterparts on the ISPRS benchmark.

  18. Automatic extraction of the cingulum bundle in diffusion tensor tract-specific analysis. Feasibility study in Parkinson's disease with and without dementia

    International Nuclear Information System (INIS)

    Ito, Kenji; Masutani, Yoshitaka; Suzuki, Yuichi; Ino, Kenji; Kunimatsu, Akira; Ohtomo, Kuni; Kamagata, Koji; Yasmin, Hasina; Aoki, Shigeki

    2013-01-01

    Tract-specific analysis (TSA) measures diffusion parameters along a specific fiber tract that has been extracted by fiber tracking using manual regions of interest (ROIs), but TSA is limited by its requirement for manual operation, poor reproducibility, and high time consumption. We aimed to develop a fully automated extraction method for the cingulum bundle (CB) and to apply the method to TSA in neurobehavioral disorders such as Parkinson's disease (PD). We introduce two extraction methods: voxel classification (VC) and automatic diffusion tensor fiber tracking (AFT). The VC method extracts the CB directly, skipping the fiber-tracking step, whereas the AFT method uses fiber tracking from automatically selected ROIs. We compared the results of VC and AFT to those obtained by manual diffusion tensor fiber tracking (MFT) performed by 3 operators. We quantified the Jaccard similarity index among the 3 methods in data from 20 subjects (10 normal controls [NC] and 10 patients with Parkinson's disease dementia [PDD]). We used all 3 extraction methods (VC, AFT, and MFT) to calculate the fractional anisotropy (FA) values of the anterior and posterior CB for 15 NC subjects, 15 with PD, and 15 with PDD. The Jaccard index between the results of AFT and MFT, 0.72, was similar to the inter-operator Jaccard index of MFT. However, the Jaccard indices between VC and MFT and between VC and AFT were lower. Consequently, the VC method discriminated among the 3 groups (NC, PD, and PDD), whereas the other methods distinguished only 2 groups (NC versus PD or PDD). For TSA in Parkinson's disease, the VC method can be more useful than the AFT and MFT methods for extracting the CB. In addition, the results of the patient data analysis suggest that a reduction of FA in the posterior CB may be a useful biological index for monitoring PD and PDD. (author)
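
    The Jaccard index used above compares two binary tract masks; a short numpy version (mask names are illustrative):

    ```python
    import numpy as np

    def jaccard(mask_a, mask_b):
        """Jaccard similarity |A n B| / |A u B| of two binary tract masks."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 1.0
    ```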

  19. Automatic Recognition Method for Optical Measuring Instruments Based on Machine Vision

    Institute of Scientific and Technical Information of China (English)

    SONG Le; LIN Yuchi; HAO Liguo

    2008-01-01

    Based on a comprehensive study of various algorithms, the automatic recognition of traditional ocular optical measuring instruments is realized. Taking a universal tool microscope (UTM) lens view image as an example, a 2-layer automatic recognition model for data reading is established after adopting a series of pre-processing algorithms. This model is an optimal combination of the correlation-based template matching method and a concurrent back propagation (BP) neural network. Multiple complementary feature extraction is used in generating the eigenvectors of the concurrent network. In order to improve fault-tolerance capacity, rotation-invariant features based on Zernike moments are extracted from digit characters, and a 4-dimensional group of outline features is also obtained. Moreover, the operating time and reading accuracy can be adjusted dynamically by setting the threshold value. The experimental results indicate that the newly developed algorithm achieves high recognition precision and working speed; the average reading accuracy reaches 97.23%. The recognition method can automatically obtain the results of optical measuring instruments rapidly and stably without modifying their original structure, which meets the application requirements.
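
    One half of the recognition model above is correlation-based template matching; a minimal sketch of zero-mean normalized cross-correlation scoring of a digit patch against a template set (the template dictionary and names are illustrative):

```python
import numpy as np

def ncc(patch: np.ndarray, template: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation of two equally sized arrays."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom else 0.0

def best_digit(patch: np.ndarray, templates: dict):
    """Return the digit whose template correlates best with the patch."""
    scores = {d: ncc(patch, t) for d, t in templates.items()}
    return max(scores, key=scores.get), scores

# templates: dict mapping digit -> grayscale template array of the same size
```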

  20. Extraction of inhibitor-free metagenomic DNA from polluted sediments, compatible with molecular diversity analysis using adsorption and ion-exchange treatments.

    Science.gov (United States)

    Desai, Chirayu; Madamwar, Datta

    2007-03-01

    PCR inhibitor-free metagenomic DNA of high quality and high yield was extracted from highly polluted sediments using a simple remediation strategy of adsorption and ion-exchange chromatography. The extraction procedure was optimized as a series of steps involving gentle mechanical lysis, treatment with powdered activated charcoal (PAC) and ion-exchange chromatography with Amberlite resin. The quality of the extracted DNA for molecular diversity analysis was tested by amplifying bacterial 16S rDNA (16S rRNA gene) with eubacteria-specific universal primers (8f and 1492r), cloning of the amplified 16S rDNA and ARDRA (amplified rDNA restriction analysis) of the 16S rDNA clones. The presence of discrete differences in ARDRA banding profiles provided evidence for the suitability of the DNA extraction protocol in molecular diversity studies. A comparison of the optimized protocol with a commercial UltraClean Soil DNA isolation kit suggested that the method described in this report is more efficient in removing metallic and organic inhibitors from polluted sediment samples.

  1. Computational and experimental investigation of molecular imprinted polymers for selective extraction of dimethoate and its metabolite omethoate from olive oil.

    Science.gov (United States)

    Bakas, Idriss; Oujji, Najwa Ben; Moczko, Ewa; Istamboulie, Georges; Piletsky, Sergey; Piletska, Elena; Ait-Addi, Elhabib; Ait-Ichou, Ihya; Noguer, Thierry; Rouillon, Régis

    2013-01-25

    This work presents the development of molecularly imprinted polymers (MIPs) for the selective extraction of dimethoate from olive oil. Computational simulations allowed selection of itaconic acid (IA) as the monomer showing the highest affinity towards dimethoate. Experimental validation confirmed the modelling predictions and showed that the polymer based on IA as functional monomer and omethoate as template molecule displays the highest selectivity for the structurally similar pesticides dimethoate, omethoate and monocrotophos. A molecularly imprinted solid-phase extraction (MISPE) method was developed and applied to the clean-up of olive oil extracts. It was found that the most suitable solvents for the loading, washing and elution steps were, respectively, hexane, hexane-dichloromethane (85:15) and methanol. The developed MISPE was successfully applied to the extraction of dimethoate from olive oil, with recovery rates up to 94%. The limits of detection and quantification of the described method were respectively 0.012 and 0.05 μg g(-1). Copyright © 2012 Elsevier B.V. All rights reserved.

  2. Information retrieval and terminology extraction in online resources for patients with diabetes.

    Science.gov (United States)

    Seljan, Sanja; Baretić, Maja; Kucis, Vlasta

    2014-06-01

    Terminology use, as a means of information retrieval or document indexing, plays an important role in health literacy. Specific types of users, i.e., patients with diabetes, need access to various online resources (in foreign and/or native languages) when searching for information on self-education in basic diabetic knowledge, on self-care activities regarding the importance of dietetic food, medications and physical exercise, and on self-management of insulin pumps. Automatic extraction of corpus-based terminology from online texts, manuals or professional papers can help in building terminology lists or lists of "browsing phrases" useful in information retrieval or document indexing. Specific terminology lists represent an intermediate step between free-text search and controlled vocabulary, between users' demands and existing online resources in native and foreign languages. The research, aiming to detect the role of terminology in online resources, is conducted on English and Croatian manuals and Croatian online texts, and is divided into three interrelated parts: i) comparison of professional and popular terminology use; ii) evaluation of automatic statistically-based terminology extraction on English and Croatian texts; iii) comparison and evaluation of extracted terminology performed on an English manual using statistical and hybrid approaches. Extracted terminology candidates are evaluated by comparison with three types of reference lists: a list created by a medical professional, a list of highly professional vocabulary contained in MeSH, and a list created by non-medical persons, made as the intersection of 15 lists. Results report on the use of popular and professional terminology in online diabetes resources, on the evaluation of automatically extracted terminology candidates in English and Croatian texts, and on the comparison of statistical and hybrid extraction methods on the English text. Evaluation of the automatic and semi-automatic terminology extraction methods is performed using recall and precision measures.
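
    As one concrete illustration of statistically-based terminology extraction of the kind evaluated above (the abstract does not name the exact statistic, so the measure below is an assumption), candidate terms can be ranked by their relative frequency in a domain corpus against a general reference corpus:

```python
from collections import Counter
import re

def weirdness_scores(domain_text: str, general_text: str):
    """Rank single-word term candidates by the ratio of their relative
    frequency in the domain corpus to that in a general corpus."""
    tokenize = lambda s: re.findall(r"[a-z]+", s.lower())
    dom, gen = Counter(tokenize(domain_text)), Counter(tokenize(general_text))
    n_dom, n_gen = sum(dom.values()), sum(gen.values())
    scores = {}
    for word, freq in dom.items():
        rel_dom = freq / n_dom
        rel_gen = (gen[word] + 1) / (n_gen + 1)  # add-one smoothing
        scores[word] = rel_dom / rel_gen
    return sorted(scores.items(), key=lambda kv: -kv[1])

manual = "insulin pump basal bolus insulin glucose insulin"
news = "the market rose today as investors bought stocks"
print(weirdness_scores(manual, news)[:3])  # top candidate terms
```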

  3. Automatic detection of adverse events to predict drug label changes using text and data mining techniques.

    Science.gov (United States)

    Gurulingappa, Harsha; Toldo, Luca; Rajput, Abdul Mateen; Kors, Jan A; Taweel, Adel; Tayrouz, Yorki

    2013-11-01

    The aim of this study was to assess the impact of automatically detected adverse event signals from text and open-source data on the prediction of drug label changes. Open-source adverse effect data were collected from the FAERS, Yellow Card and SIDER databases. A shallow linguistic relation extraction system (JSRE) was applied for the extraction of adverse effects from MEDLINE case reports. A statistical approach was applied to the extracted datasets for signal detection and the subsequent prediction of label changes issued for 29 drugs by the UK Regulatory Authority in 2009. 76% of drug label changes were automatically predicted. Of these, 6% of drug label changes were detected only by text mining. JSRE enabled precise identification of four adverse drug events from MEDLINE that were undetectable otherwise. Changes in drug labels can be predicted automatically using data and text mining techniques. Text mining technology is mature and well-placed to support pharmacovigilance tasks. Copyright © 2013 John Wiley & Sons, Ltd.
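
    The abstract reports a statistical signal-detection step without naming the measure; a commonly used disproportionality statistic in pharmacovigilance is the proportional reporting ratio (PRR), sketched here as an assumed stand-in (the counts are invented for illustration):

```python
def prr(a: int, b: int, c: int, d: int) -> float:
    """Proportional reporting ratio from a 2x2 contingency table:
    a = reports of the drug with the event, b = of the drug without it,
    c = other drugs with the event, d = other drugs without it."""
    return (a / (a + b)) / (c / (c + d))

# A common screening rule flags a signal when PRR >= 2 with a >= 3 reports.
a, b, c, d = 12, 488, 40, 9460
value = prr(a, b, c, d)
print(round(value, 2), value >= 2 and a >= 3)  # 5.7 True
```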

  4. Automatic extraction of forward stroke volume using dynamic PET/CT: a dual-tracer and dual-scanner validation in patients with heart valve disease.

    Science.gov (United States)

    Harms, Hendrik Johannes; Tolbod, Lars Poulsen; Hansson, Nils Henrik Stubkjær; Kero, Tanja; Orndahl, Lovisa Holm; Kim, Won Yong; Bjerner, Tomas; Bouchelouche, Kirsten; Wiggers, Henrik; Frøkiær, Jørgen; Sörensen, Jens

    2015-12-01

    The aim of this study was to develop and validate an automated method for extracting forward stroke volume (FSV) using indicator dilution theory directly from dynamic positron emission tomography (PET) studies for two different tracers and scanners. 35 subjects underwent a dynamic (11)C-acetate PET scan on a Siemens Biograph TruePoint-64 PET/CT (scanner I). In addition, 10 subjects underwent both dynamic (15)O-water PET and (11)C-acetate PET scans on a GE Discovery-ST PET/CT (scanner II). The left ventricular (LV)-aortic time-activity curve (TAC) was extracted automatically from PET data using cluster analysis. The first-pass peak was isolated by automatic extrapolation of the downslope of the TAC. FSV was calculated as the injected dose divided by the product of heart rate and the area under the curve of the first-pass peak. Gold-standard FSV was measured using phase-contrast cardiovascular magnetic resonance (CMR). FSV(PET) correlated highly with FSV(CMR) (r = 0.87, slope = 0.90 for scanner I; r = 0.87, slope = 1.65, and r = 0.85, slope = 1.69 for scanner II for (15)O-water and (11)C-acetate, respectively), although a systematic bias was observed for both scanners. In conclusion, FSV can be extracted automatically using dynamic PET/CT and cluster analysis. Results are almost identical for (11)C-acetate and (15)O-water. A scanner-dependent bias was observed, and a scanner calibration factor is required for multi-scanner studies. Generalization of the method to other tracers and scanners requires further validation.
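
    The FSV relation quoted above (injected dose divided by the product of heart rate and the area under the first-pass peak) is direct to compute once the peak has been isolated; a minimal sketch with synthetic numbers (function name and units are illustrative, and must be made consistent in practice):

```python
import numpy as np

def forward_stroke_volume(time_s, activity, injected_dose, heart_rate_bpm):
    """FSV = injected dose / (heart rate * AUC of the first-pass peak).
    time_s: sample times of the first-pass peak of the LV-aortic TAC (s);
    activity: tracer concentration at those times (e.g. Bq/mL)."""
    # trapezoidal area under the first-pass peak
    auc = np.sum((activity[1:] + activity[:-1]) / 2 * np.diff(time_s))
    hr_per_s = heart_rate_bpm / 60.0          # beats per second
    return injected_dose / (hr_per_s * auc)   # volume per beat

# toy first-pass peak, illustrative numbers only
t = np.linspace(0, 30, 61)
tac = 5000 * np.exp(-0.5 * (t - 12) ** 2)
print(forward_stroke_volume(t, tac, injected_dose=4e8, heart_rate_bpm=65))
```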

  5. Flavonoids-Rich Orthosiphon stamineus Extract as New Candidate for Angiotensin I-Converting Enzyme Inhibition: A Molecular Docking Study

    Directory of Open Access Journals (Sweden)

    Armaghan Shafaei

    2016-11-01

    Full Text Available This study aims to evaluate the in vitro angiotensin-converting enzyme (ACE) inhibition activity of different extracts of Orthosiphon stamineus (OS) leaves and their main flavonoids, namely rosmarinic acid (RA), sinensetin (SIN), eupatorin (EUP) and 3′-hydroxy-5,6,7,4′-tetramethoxyflavone (TMF), and furthermore to identify possible mechanisms of action based on structure–activity relationships and molecular docking. The in vitro ACE inhibition assay relied on determining hippuric acid (HA) formation from the ACE-specific substrate hippuryl-histidyl-leucine (HHL) by the action of the ACE enzyme. A high-performance liquid chromatography method combined with UV detection was developed and validated for measuring the concentration of the HA produced. The chelation ability of the OS extract and its reference compounds was evaluated with tetramethylmurexide reagent. Furthermore, a molecular docking study was performed with LeadIT-FlexX: BioSolveIT's LeadIT program. The OS ethanolic extract (OS-E) exhibited the highest inhibition and the lowest IC50 value (45.77 ± 1.17 µg/mL) against ACE compared to the other extracts. Among the tested reference compounds, EUP, with an IC50 of 15.35 ± 4.49 µg/mL, had the highest inhibition against ACE and binding ability with Zn(II) (56.03% ± 1.26%) compared to RA, TMF and SIN. Molecular docking studies also confirmed that flavonoids inhibit ACE via interaction with the zinc ion and that this interaction is stabilized by other interactions with amino acids in the active site. In this study, we have demonstrated that changes in the flavonoid active core affect the capacity to inhibit ACE. Moreover, we showed that the ACE inhibition activity of flavonoid compounds is directly related to their ability to bind the zinc ion in the active site of the ACE enzyme. It was also revealed that the OS extract contained a high amount of flavonoids other than RA, TMF, SIN and EUP. As such, the OS extract may be useful as an ACE inhibitor.

  6. Flavonoids-Rich Orthosiphon stamineus Extract as New Candidate for Angiotensin I-Converting Enzyme Inhibition: A Molecular Docking Study.

    Science.gov (United States)

    Shafaei, Armaghan; Sultan Khan, Md Shamsuddin; F A Aisha, Abdalrahim; Abdul Majid, Amin Malik Shah; Hamdan, Mohammad Razak; Mordi, Mohd Nizam; Ismail, Zhari

    2016-11-09

    This study aims to evaluate the in vitro angiotensin-converting enzyme (ACE) inhibition activity of different extracts of Orthosiphon stamineus (OS) leaves and their main flavonoids, namely rosmarinic acid (RA), sinensetin (SIN), eupatorin (EUP) and 3'-hydroxy-5,6,7,4'-tetramethoxyflavone (TMF), and furthermore to identify possible mechanisms of action based on structure-activity relationships and molecular docking. The in vitro ACE inhibition assay relied on determining hippuric acid (HA) formation from the ACE-specific substrate hippuryl-histidyl-leucine (HHL) by the action of the ACE enzyme. A high-performance liquid chromatography method combined with UV detection was developed and validated for measuring the concentration of the HA produced. The chelation ability of the OS extract and its reference compounds was evaluated with tetramethylmurexide reagent. Furthermore, a molecular docking study was performed with LeadIT-FlexX: BioSolveIT's LeadIT program. The OS ethanolic extract (OS-E) exhibited the highest inhibition and the lowest IC50 value (45.77 ± 1.17 µg/mL) against ACE compared to the other extracts. Among the tested reference compounds, EUP, with an IC50 of 15.35 ± 4.49 µg/mL, had the highest inhibition against ACE and binding ability with Zn(II) (56.03% ± 1.26%) compared to RA, TMF and SIN. Molecular docking studies also confirmed that flavonoids inhibit ACE via interaction with the zinc ion and that this interaction is stabilized by other interactions with amino acids in the active site. In this study, we have demonstrated that changes in the flavonoid active core affect the capacity to inhibit ACE. Moreover, we showed that the ACE inhibition activity of flavonoid compounds is directly related to their ability to bind the zinc ion in the active site of the ACE enzyme. It was also revealed that the OS extract contained a high amount of flavonoids other than RA, TMF, SIN and EUP. As such, the OS extract may be useful as an ACE inhibitor.
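
    Numerically, the assay above reduces to comparing HA formation with and without the inhibitor and estimating the IC50 from a dilution series; a minimal sketch with invented numbers (simple log-interpolation stands in for full dose-response fitting):

```python
import numpy as np

def percent_inhibition(ha_control: float, ha_sample: float) -> float:
    """ACE inhibition from the HPLC-measured hippuric acid (HA) yield:
    less HA formed from HHL means stronger inhibition."""
    return 100.0 * (ha_control - ha_sample) / ha_control

def ic50(concs, inhibitions):
    """IC50 by linear interpolation on a log-concentration axis."""
    logc = np.log10(np.asarray(concs, dtype=float))
    return 10 ** np.interp(50.0, np.asarray(inhibitions, dtype=float), logc)

concs = [5, 15, 45, 135]            # µg/mL, illustrative dilution series
inhib = [20.0, 42.0, 61.0, 80.0]    # measured % inhibition at each level
print(round(ic50(concs, inhib), 1))  # concentration giving 50% inhibition
```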

  7. Contribution of molecular modeling and of structure-activity relations to the liquid-liquid extraction. Application to the case of U(VI) extraction by monoamides; Apport de la modelisation moleculaire et des relations structure-activite a l'extraction liquide-liquide. Application au cas de l'extraction d'U(VI) par les monoamides

    Energy Technology Data Exchange (ETDEWEB)

    Rabbe, C.

    1996-06-07

    In France, spent fuel is in most cases reprocessed. The aim of reprocessing is to separate the recyclable fissile materials (for instance, uranium and plutonium) from the radioactive wastes. The industrial process used until now is the Purex (Plutonium Uranium Refining by EXtraction) process. Recently (1991), the CEA undertook research in the fields of separation and transmutation of long-lived radionuclides such as the minor actinides. Molecules with an amide function were first considered, especially for uranium extraction. In order to rationalize the search for new extracting molecules, molecular modeling methods (quantum chemistry calculations, molecular mechanics) have been used. Three parameters determine whether a molecule is a good extractant: it must possess (1) one or several sites of sufficient electron density to complex the metallic cation, (2) substituents small enough not to interfere with complexation, and (3) a sufficient lipophilic character. (O.M.). 139 refs., 43 figs., 36 tabs.

  8. Selective extraction and determination of chlorogenic acids as combined quality markers in herbal medicines using molecularly imprinted polymers based on a mimic template.

    Science.gov (United States)

    Ji, Wenhua; Zhang, Mingming; Yan, Huijiao; Zhao, Hengqiang; Mu, Yan; Guo, Lanping; Wang, Xiao

    2017-12-01

    We describe a solid-phase extraction adsorbent based on molecularly imprinted polymers (MIPs), prepared with the use of a mimic template. The MIPs were used for the selective extraction and determination of three chlorogenic acids as combined quality markers for Lonicera japonica and Lianhua qingwen granules. The morphologies and surface groups of the MIPs were assessed by scanning electron microscopy, Brunauer-Emmett-Teller surface area analysis, and Fourier transform infrared spectroscopy. The adsorption isotherms, kinetics, and selectivity of the MIPs were systematically compared with those of non-molecularly imprinted polymers. The MIPs showed high selectivity toward three structurally similar chlorogenic acids (chlorogenic acid, cryptochlorogenic acid, and neochlorogenic acid). A procedure using molecularly imprinted solid-phase extraction coupled with high-performance liquid chromatography was established for the determination of the three chlorogenic acids from Lonicera japonica and Lianhua qingwen granules. The recoveries of the chlorogenic acids ranged from 93.1% to 101.4%. The limits of detection and limits of quantification for the three chlorogenic acids were 0.003 mg g(-1) and 0.01 mg g(-1), respectively. The newly developed method is thus a promising technique for the enrichment and determination of chlorogenic acids from herbal medicines. Graphical Abstract: Mimic molecularly imprinted polymers for the selective extraction of chlorogenic acids.

  9. Molecularly imprinted polymer for selective extraction of malachite green from seawater and seafood coupled with high-performance liquid chromatographic determination

    International Nuclear Information System (INIS)

    Lian Ziru; Wang Jiangtao

    2012-01-01

    Highlights: ► The malachite green molecularly imprinted polymer (MG-MIP) was prepared. ► The characteristics and regeneration properties of the MIP were studied. ► An off-line method for MG was developed using the MIP for solid-phase extraction. ► The MG concentrations in seawater and seafood samples were determined. - Abstract: In this paper, a highly selective sample clean-up procedure combining the molecular imprinting technique (MIT) and solid-phase extraction (SPE) was developed for the isolation of malachite green in seawater and seafood samples. The molecularly imprinted polymer (MIP) was prepared using malachite green as the template molecule, methacrylic acid as the functional monomer and ethylene glycol dimethacrylate as the cross-linking monomer. The imprinted polymer and a non-imprinted polymer were characterized by scanning electron microscopy and static adsorption experiments. The MIP showed a high adsorption capacity and was used as a selective sorbent for the SPE of malachite green. An off-line molecularly imprinted solid-phase extraction (MISPE) method followed by high-performance liquid chromatography with diode-array detection for the analysis of malachite green in seawater and seafood samples was also established. Finally, five samples were analyzed. The results showed that the malachite green concentration in one seawater sample was 1.30 μg L(-1) and the RSD (n = 3) was 4.15%.

  10. Doping control in Japan. An automated extraction procedure for the doping test.

    Science.gov (United States)

    Nakajima, T.; Matsumoto, T.

    1976-01-01

    Horse racing in Japan consists of two systems, the National (10 racecourses) and the Regional public racing (32 racecourses), having about 2,500 racing meetings in total per year. Urine or saliva samples for dope testing are collected by the officials from the winner, second and third, and transported to the laboratory in a frozen state. In 1975, 76,117 samples were analyzed by this laboratory. The laboratory provides the following four methods of analysis, which are variously combined by request: (1) a method for the detection of drugs extracted by chloroform from an alkalinized sample; (2) methods for the detection of camphor and its derivatives; (3) a method for the detection of barbiturates; (4) a method for the detection of ethanol. These methods consist of screening, mainly by thin-layer chromatography, and confirmatory tests using ultraviolet spectrophotometry, gas chromatography and mass spectrometry combined with gas chromatography. In the screening test for doping drugs, alkalinized samples are extracted with chloroform. In order to automate the extraction procedure, the authors contrived a new automatic extractor. They also devised a means of pH adjustment of horse urine using a buffer solution and an efficient mechanism for the evaporation of the organic solvent. In 1972, we started research work to automate the extraction procedure in method (1) above, and the Automatic Extractor has been in use in routine work since last July. One hundred and twenty samples per hour are extracted automatically by three automatic extractors. Analytical data obtained using this apparatus are presented in this paper. PMID:1000163

  11. Semi-Automatic Construction of Skeleton Concept Maps from Case Judgments

    OpenAIRE

    Boer, A.; Sijtsma, B.; Winkels, R.; Lettieri, N.

    2014-01-01

    This paper proposes an approach to generating Skeleton Conceptual Maps (SCM) semi-automatically from legal case documents provided by the United Kingdom's Supreme Court. SCM are incomplete knowledge representations for the purpose of scaffolding learning. The proposed system intends to provide students with a tool to pre-process text and to extract knowledge from documents in a time-saving manner. A combination of natural language processing methods and proposition extraction algorithms are used...

  12. Selective Dispersive Solid Phase Extraction of Ser-traline Using Surface Molecularly Imprinted Polymer Grafted on SiO2/Graphene Oxide

    Directory of Open Access Journals (Sweden)

    Faezeh Khalilian

    2017-01-01

    Full Text Available A surface molecularly imprinted dispersive solid-phase extraction coupled with liquid chromatography–ultraviolet detection is proposed as a selective and fast clean-up technique for the determination of sertraline in biological samples. A surface sertraline molecularly imprinted polymer was grafted and synthesized on the SiO2/graphene oxide surface. First, SiO2 was coated on a synthesized graphene oxide sheet using the sol-gel technique. Prior to polymerization, a vinyl group was incorporated onto the surface of the SiO2/graphene oxide to direct selective polymerization onto the surface. Methacrylic acid, ethylene glycol dimethacrylate and ethanol were used as the monomer, cross-linker and porogen, respectively. A non-imprinted polymer was also prepared for comparison purposes. The properties of the molecularly imprinted polymer were characterized using field emission scanning electron microscopy and Fourier transform infrared spectroscopy. The surface molecularly imprinted polymer was utilized as an adsorbent for dispersive solid-phase extraction for the separation and preconcentration of sertraline. The effects of the different parameters influencing the extraction efficiency, such as sample pH, were investigated and optimized. The specificity of the molecularly imprinted polymer over the non-imprinted polymer was examined in the absence and presence of competing drugs. The sertraline calibration curve showed linearity in the range 1–500 µg L(-1). The limits of detection and quantification under optimized conditions were 0.2 and 0.5 µg L(-1), respectively. The within-day and between-day relative standard deviations (n = 3) were 4.3% and 7.1%, respectively. Furthermore, the relative recoveries for spiked biological samples were above 92%.

  13. LEARNING VECTOR QUANTIZATION FOR ADAPTED GAUSSIAN MIXTURE MODELS IN AUTOMATIC SPEAKER IDENTIFICATION

    Directory of Open Access Journals (Sweden)

    IMEN TRABELSI

    2017-05-01

    Full Text Available Speaker Identification (SI) aims at automatically identifying an individual by extracting and processing information from his/her voice. The speaker's voice is a robust biometric modality that has a strong impact in several application areas. In this study, a new combination learning scheme is proposed based on the Gaussian mixture model-universal background model (GMM-UBM) and Learning Vector Quantization (LVQ) for automatic text-independent speaker identification. Feature vectors, consisting of the Mel-Frequency Cepstral Coefficients (MFCC) extracted from the speech signal, are used to train and test the system on the New England subset of the TIMIT database. The best results obtained were 90% for gender-independent speaker identification, and 97% for male speakers and 93% for female speakers on test data using 36 MFCC features.
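
    A minimal sketch of this front end and a simplified back end, using librosa for MFCC extraction and one GMM per speaker in place of the paper's full GMM-UBM adaptation and LVQ stage (file paths and model sizes are illustrative):

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(wav_path: str, n_mfcc: int = 36) -> np.ndarray:
    """Frame-level MFCC vectors (the abstract uses 36 coefficients)."""
    y, sr = librosa.load(wav_path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # (frames, n_mfcc)

def train_speaker_model(feats: np.ndarray, n_components: int = 32):
    """Per-speaker GMM; a GMM-UBM system would instead MAP-adapt a
    universal background model, and the paper adds an LVQ stage."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    return gmm.fit(feats)

def identify(test_feats: np.ndarray, speaker_models: dict) -> str:
    """Pick the speaker whose GMM gives the highest mean log-likelihood."""
    return max(speaker_models, key=lambda s: speaker_models[s].score(test_feats))
```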

  14. Genetic diversity assessment of sesame core collection in China by phenotype and molecular markers and extraction of a mini-core collection

    Directory of Open Access Journals (Sweden)

    Zhang Yanxin

    2012-11-01

    Full Text Available Background: Sesame (Sesamum indicum L.) is one of the four major oil crops in China. A sesame core collection (CC) was established in China in 2000, but no complete study of its genetic diversity has been carried out at either the phenotypic or molecular level. To provide technical guidance, a theoretical basis for further collection, effective protection, reasonable application, and a complete analysis of sesame genetic resources, a genetic diversity assessment of the sesame CC in China was conducted using phenotypic and molecular data and by extracting a sesame mini-core collection (MC). Results: Results from the genetic diversity assessment of the sesame CC in China were significantly inconsistent at the phenotypic and molecular levels. A Mantel test revealed an insignificant correlation between phenotype and molecular marker information (r = 0.0043, t = 0.1320, P = 0.5525). The Shannon-Weaver diversity index (I) and Nei genetic diversity index (h) were higher (I = 0.9537, h = 0.5490) when calculated using phenotypic data from the CC than when using molecular data (I = 0.3467, h = 0.2218). A mini-core collection (MC) containing 184 accessions was extracted based on both phenotypic and molecular data, with a low mean difference percentage (MD, 1.64%), low variance difference percentage (VD, 22.58%), large variable rate of coefficient of variance (VR, 114.86%), and large coincidence rate of range (CR, 95.76%). For molecular data, the diversity indices and the polymorphism information content (PIC) for the MC were significantly higher than for the CC. Compared to an alternative random sampling strategy, the advantages of capturing genetic diversity and validation by extracting a MC using an advanced maximization strategy were proven. Conclusions: This study provides a comprehensive characterization of the phenotypic and molecular genetic diversities of the sesame CC in China. A MC was extracted using both phenotypic and molecular data; the low MD% and VD% and the large VR% and CR% support its representativeness.

  15. Genetic diversity assessment of sesame core collection in China by phenotype and molecular markers and extraction of a mini-core collection

    Science.gov (United States)

    2012-01-01

    Background: Sesame (Sesamum indicum L.) is one of the four major oil crops in China. A sesame core collection (CC) was established in China in 2000, but no complete study of its genetic diversity has been carried out at either the phenotypic or molecular level. To provide technical guidance, a theoretical basis for further collection, effective protection, reasonable application, and a complete analysis of sesame genetic resources, a genetic diversity assessment of the sesame CC in China was conducted using phenotypic and molecular data and by extracting a sesame mini-core collection (MC). Results: Results from the genetic diversity assessment of the sesame CC in China were significantly inconsistent at the phenotypic and molecular levels. A Mantel test revealed an insignificant correlation between phenotype and molecular marker information (r = 0.0043, t = 0.1320, P = 0.5525). The Shannon-Weaver diversity index (I) and Nei genetic diversity index (h) were higher (I = 0.9537, h = 0.5490) when calculated using phenotypic data from the CC than when using molecular data (I = 0.3467, h = 0.2218). A mini-core collection (MC) containing 184 accessions was extracted based on both phenotypic and molecular data, with a low mean difference percentage (MD, 1.64%), low variance difference percentage (VD, 22.58%), large variable rate of coefficient of variance (VR, 114.86%), and large coincidence rate of range (CR, 95.76%). For molecular data, the diversity indices and the polymorphism information content (PIC) for the MC were significantly higher than for the CC. Compared to an alternative random sampling strategy, the advantages of capturing genetic diversity and validation by extracting a MC using an advanced maximization strategy were proven. Conclusions: This study provides a comprehensive characterization of the phenotypic and molecular genetic diversities of the sesame CC in China. A MC was extracted using both phenotypic and molecular data; the low MD% and VD% and the large VR% and CR% support its representativeness.
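
    The two diversity indices reported above have simple closed forms, I = -Σ p_i ln p_i (Shannon-Weaver) and h = 1 - Σ p_i² (Nei); a minimal sketch over class or allele frequencies (the counts are illustrative):

```python
import numpy as np

def shannon_weaver(freqs) -> float:
    """I = -sum(p_i * ln p_i) over allele/phenotype class frequencies."""
    p = np.asarray(freqs, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

def nei_diversity(freqs) -> float:
    """h = 1 - sum(p_i^2), Nei's gene diversity for one locus."""
    p = np.asarray(freqs, dtype=float)
    p = p / p.sum()
    return float(1.0 - (p ** 2).sum())

counts = [60, 25, 15]  # illustrative allele counts at one marker locus
print(round(shannon_weaver(counts), 4), round(nei_diversity(counts), 4))
```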

  16. Molecular characterization and enzymatic hydrolysis of naringin extracted from kinnow peel waste.

    Science.gov (United States)

    Puri, Munish; Kaur, Aneet; Schwarz, Wolfgang H; Singh, Satbir; Kennedy, J F

    2011-01-01

    Kinnow peel, a waste rich in glycosylated phenolic substances, is the principal by-product of the citrus fruit processing industry, and its disposal is becoming a major problem. This peel is rich in naringin and may be used for rhamnose production by utilizing α-L-rhamnosidase (EC 3.2.1.40), an enzyme that catalyzes the cleavage of terminal rhamnosyl groups from naringin to yield prunin and rhamnose. In this work, infrared (IR) spectroscopy confirmed the molecular characteristics of naringin extracted from kinnow peel waste. Further, recombinant α-L-rhamnosidase purified from Escherichia coli cells using immobilized metal-chelate affinity chromatography (IMAC) was used for naringin hydrolysis. The purified enzyme was inhibited by Hg(2+) (1 mM), 4-hydroxymercuribenzoate (0.1 mM) and cyanamide (0.1 mM). The purified enzyme hydrolyzed naringin extracted from kinnow peel, endorsing its industrial applicability for producing rhamnose. Copyright © 2010 Elsevier B.V. All rights reserved.

  17. Molecularly imprinted macroporous monoliths for solid-phase extraction: Effect of pore size and column length on recognition properties.

    Science.gov (United States)

    Vlakh, E G; Stepanova, M A; Korneeva, Yu M; Tennikova, T B

    2016-09-01

    A series of macroporous molecularly imprinted monoliths differing in pore size, column length (volume) and amount of template used for imprinting was synthesized using methacrylic acid and glycerol dimethacrylate as co-monomers and the antibiotic ciprofloxacin as the template. The prepared monoliths were characterized with regard to their permeability, pore size, porosity, and resistance to the flow of a mobile phase. The surface morphology was also analyzed. A slight dependence of the imprinting factor on flow rate, as well as its independence of the pore size of the macroporous molecularly imprinted monolithic media, was observed. The columns obtained under different conditions exhibited different affinities of ciprofloxacin to the imprinted sites, characterized by Kdiss values in the range of 10(-5)-10(-4) M. The solid-phase extraction of ciprofloxacin from biological liquids such as human blood serum, human urine and cow milk serum was performed using the developed monolithic columns. In all cases, the extraction recovery was found to be 95.0-98.6%. Additionally, a comparison of the extraction of three fluoroquinolone analogues, i.e. ciprofloxacin, levofloxacin and moxifloxacin, from human blood plasma was carried out. In contrast to ciprofloxacin, which was extracted at more than 95%, this parameter did not exceed 40% for its analogues. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Solid phase extraction of penicillins from milk by using sacrificial silica beads as a support for a molecular imprint

    International Nuclear Information System (INIS)

    Giovannoli, Cristina; Anfossi, Laura; Biagioli, Flavia; Passini, Cinzia; Baggiani, Claudio

    2013-01-01

    We have prepared molecularly imprinted beads with molecular recognition capability for target molecules containing the penicillanic acid substructure. They were prepared by (a) grafting mesoporous silica beads with 6-aminopenicillanic acid as the mimic template, (b) filling the pores with a polymerized mixture of methacrylic acid and trimethylolpropane trimethacrylate, and (c) removing the silica support with ammonium fluoride. The resulting imprinted beads showed good molecular recognition capability for various penicillanic species, while antibiotics such as cephalosporins or chloramphenicol were poorly recognized. The imprinted beads were used to extract penicillin V, nafcillin, oxacillin, cloxacillin and dicloxacillin from skimmed and deproteinized milk in the concentration range of 5–100 μg·L(-1). The extracts were then analyzed by micellar electrokinetic chromatography, applying reverse-polarity stacking as an in-capillary preconcentration step; this resulted in a fast and affordable method working within the MRL levels, characterized by minimal pretreatment steps and recoveries of 64–90%. (author)

  19. Using the Echo Nest's automatically extracted music features for a musicological purpose

    DEFF Research Database (Denmark)

    Andersen, Jesper Steen

    2014-01-01

    This paper sums up the preliminary observations and challenges encountered during my first engagement with the music intelligence company Echo Nest's automatically derived data on more than 35 million songs. The overall purpose is to investigate whether musicologists can draw benefit from the Echo Nest's automatically extracted music features.

  20. Flight Extraction and Phase Identification for Large Automatic Dependent Surveillance–Broadcast Datasets

    NARCIS (Netherlands)

    Sun, J.; Ellerbroek, J.; Hoekstra, J.M.

    2017-01-01

    Automatic dependent surveillance–broadcast (ADS-B) [1,2] is widely implemented in modern commercial aircraft and will become mandatory equipment in 2020. Flight state information such as position, velocity, and vertical rate is broadcast constantly by tens of thousands of aircraft around the world.

  1. Application of a molecularly imprinted polymer for the extraction of kukoamine A from potato peels.

    Science.gov (United States)

    Piletska, Elena V; Burns, Rosemary; Terry, Leon A; Piletsky, Sergey A

    2012-01-11

    A molecularly imprinted polymer (MIP) for the purification of N(1),N(12)-bis(dihydrocaffeoyl)spermine (kukoamine A) was computationally designed and tested, and the properties of the polymer were characterized. The protocol for the solid-phase extraction (SPE) of kukoamine A from potato peels was optimized. An HPLC-MS method for the quantification of kukoamine A was developed and used for all optimization studies. The capacity of the MIP with respect to kukoamine A from the potato peel extract was estimated at 54 mg/g of polymer. The kukoamine A purified from potato extract using the MIP was exceptionally pure (≈ 90%). Although the corresponding blank polymer was less selective than the MIP for the extraction of kukoamine A from the potato extract, it was shown that the blank polymer could be effectively used for the purification of crude synthetic kukoamine (polymer capacity = 80 mg of kukoamine A/g of adsorbent, kukoamine A purity ≈ 86%). Therefore, selective adsorbents could be computationally designed for other plant products, allowing their purification in quantities sufficient for more detailed studies and potential practical applications.

  2. Elucidation of the structure of organic solutions in solvent extraction by combining molecular dynamics and X-ray scattering

    International Nuclear Information System (INIS)

    Ferru, G.; Gomes Rodrigues, D.; Berthon, L.; Guilbaud, P.; Diat, O.; Bauduin, P.

    2014-01-01

    Knowledge of the supramolecular structure of the organic phase containing amphiphilic ligand molecules is mandatory for a full comprehension of ionic separation during solvent extraction. Existing structural models are based on simple geometric aggregates, but no consensus exists on the interaction potentials. Herein, we show that molecular dynamics combined with scattering techniques offers key insight into these complex fluids, which involve weak interactions without any long-range ordering. Two systems containing mono- or diamide extractants in heptane and contacted with an aqueous phase were selected as examples to demonstrate the advantages of coupling the two approaches for furthering fundamental studies on solvent extraction. (authors)

  3. Automatic Planning of External Search Engine Optimization

    Directory of Open Access Journals (Sweden)

    Vita Jasevičiūtė

    2015-07-01

    Full Text Available This paper describes an investigation of an external search engine optimization (SEO) action planning tool, dedicated to automatically extracting a small set of the most important keywords for each month over a whole-year period. The keywords in the set are extracted according to externally measured parameters, such as the average number of searches during the year and for every month individually. Additionally, the position of the optimized web site for each keyword is taken into account. The generated optimization plan is similar to the optimization plans prepared manually by SEO professionals and can be successfully used as a support tool for web site search engine optimization.

  4. Extraction of High Quality RNA from Cannabis sativa Bast Fibres: A Vademecum for Molecular Biologists

    Directory of Open Access Journals (Sweden)

    Gea Guerriero

    2016-07-01

    Full Text Available In plants there is no universal protocol for RNA extraction, since optimizations are required depending on the species, tissue and developmental stage. Some plants/tissues are rich in secondary metabolites or synthesize thick cell walls, which hinder efficient RNA extraction. One such example is bast fibres, long extraxylary cells characterized by a thick cellulosic cell wall. Given the economic importance of bast fibres, which are used in the textile sector, as well as in biocomposites as green substitutes for glass fibres, it is desirable to better understand their development from a molecular point of view. This knowledge favours the development of biotechnological strategies aimed at improving specific properties of bast fibres. To be able to perform high-throughput analyses, such as transcriptomics of bast fibres, RNA extraction is a crucial and limiting step. We here detail a protocol enabling the rapid extraction of high-quality RNA from the bast fibres of textile hemp, Cannabis sativa L., a multi-purpose fibre crop standing in the spotlight of research.

  5. Text feature extraction based on deep learning: a review.

    Science.gov (United States)

    Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan

    2017-01-01

    Selection of text feature items is a basic and important task for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features, and hand-designing an effective feature is a lengthy process; for new applications, deep learning makes it possible to acquire effective feature representations from training data instead. As a new feature extraction method, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data rather than adopting handcrafted features, which mainly depend on the prior knowledge of designers and can hardly take advantage of big data. Deep learning can automatically learn feature representations from big data, using models with millions of parameters. This review first outlines the common methods used in text feature extraction, then expands on frequently used deep learning methods in text feature extraction and its applications, and finally forecasts the application of deep learning in feature extraction.
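
    The contrast drawn above can be made concrete: an engineered weighting scheme such as TF-IDF is fixed in advance, while an embedding model learns its representation from the corpus; a toy sketch using scikit-learn and gensim (corpus and parameters are illustrative):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from gensim.models import Word2Vec

docs = ["deep learning learns features from data",
        "handcrafted features need expert knowledge"]

# Traditional route: a fixed, engineered weighting of word counts.
tfidf = TfidfVectorizer().fit_transform(docs)

# Learned route: word embeddings trained on the data; a document can then
# be represented, e.g., by the mean of its word vectors.
model = Word2Vec([d.split() for d in docs], vector_size=16, min_count=1, epochs=50)
doc_vec = np.mean([model.wv[w] for w in docs[0].split()], axis=0)
print(tfidf.shape, doc_vec.shape)  # sparse TF-IDF matrix vs dense vector
```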

  6. ScholarLens: extracting competences from research publications for the automatic generation of semantic user profiles

    Directory of Open Access Journals (Sweden)

    Bahar Sateli

    2017-07-01

    Full Text Available Motivation: Scientists increasingly rely on intelligent information systems to help them in their daily tasks, in particular for managing research objects, like publications or datasets. The relatively young research field of Semantic Publishing has been addressing the question of how scientific applications can be improved through semantically rich representations of research objects, in order to facilitate their discovery and re-use. To complement the efforts in this area, we propose an automatic workflow to construct semantic user profiles of scholars, so that scholarly applications, like digital libraries or data repositories, can better understand their users' interests, tasks, and competences by incorporating these user profiles in their design. To make the user profiles sharable across applications, we propose to build them based on standard semantic web technologies, in particular the Resource Description Framework (RDF) for representing user profiles and Linked Open Data (LOD) sources for representing competence topics. To avoid the cold start problem, we suggest to automatically populate these profiles by analyzing the publications (co-)authored by users, which we hypothesize reflect their research competences. Results: We developed a novel approach, ScholarLens, which can automatically generate semantic user profiles for authors of scholarly literature. For modeling the competences of scholarly users and groups, we surveyed a number of existing linked open data vocabularies. In accordance with the LOD best practices, we propose an RDF Schema (RDFS)-based model for competence records that reuses existing vocabularies where appropriate. To automate the creation of semantic user profiles, we developed a complete, automated workflow that can generate semantic user profiles by analyzing full-text research articles through various natural language processing (NLP) techniques. In our method, we start by processing a set of research articles for a given user...

  7. Automatic computer aided analysis algorithms and system for adrenal tumors on CT images.

    Science.gov (United States)

    Chai, Hanchao; Guo, Yi; Wang, Yuanyuan; Zhou, Guohui

    2017-12-04

    An adrenal tumor disturbs the secreting function of adrenocortical cells, leading to many diseases, and different kinds of adrenal tumors require different therapeutic schedules. In practical diagnosis, judging the tumor type relies heavily on the doctor's experience in reading hundreds of CT images. This paper proposes an automatic computer-aided analysis method for adrenal tumor detection and classification. It consists of automatic segmentation algorithms, feature extraction, and classification algorithms. These algorithms were integrated into a system operated through a graphical interface built with the MATLAB graphical user interface (GUI) tools. The accuracy of the automatic computer-aided segmentation and classification reached 90% on 436 CT images. The experiments demonstrated the stability and reliability of this automatic computer-aided analysis system.

  8. Extraction: a system for automatic eddy current diagnosis of steam generator tubes in nuclear power plants

    International Nuclear Information System (INIS)

    Georgel, B.; Zorgati, R.

    1994-01-01

    Improving the speed and quality of eddy current non-destructive testing of steam generator tubes requires automating all processes that contribute to the diagnosis. This paper describes how we use signal processing, pattern recognition and artificial intelligence to build a software package that is able to automatically provide an efficient diagnosis. (authors). 2 figs., 5 refs

  9. Simultaneous extraction and determination of phthalate esters in aqueous solution by yolk-shell magnetic mesoporous carbon-molecularly imprinted composites based on solid-phase extraction coupled with gas chromatography-mass spectrometry.

    Science.gov (United States)

    Yang, Rui; Liu, Yuxin; Yan, Xiangyang; Liu, Shaomin

    2016-12-01

    A rapid, sensitive and accurate method for the simultaneous extraction and determination of five types of trace phthalate esters (PAEs) in environmental water and beverage samples using magnetic molecularly imprinted solid-phase extraction (MMIP-SPE) coupled with gas chromatography-mass spectrometry (GC-MS) was developed. A novel type of molecularly imprinted polymer on the surface of yolk-shell magnetic mesoporous carbon (Fe3O4@void@C-MIPs) was used as an efficient adsorbent for the selective adsorption of phthalate esters based on magnetic solid-phase extraction (MSPE). The real samples were first preconcentrated by the Fe3O4@void@C-MIPs, subsequently extracted with eluent and finally determined by GC-MS after magnetic separation. Several variables affecting the extraction efficiency of the analytes, including the type and volume of the elution solvent, amount of adsorbent, extraction time, desorption time and pH of the sample solution, were investigated and optimized. Validation experiments indicated that the developed method presented good linearity (R(2) > 0.9961), satisfactory precision (RSD < 6.7%), and high recovery (86.1-103.1%). The limits of detection ranged from 1.6 ng/L to 5.2 ng/L and the enrichment factor was in the range of 822-1423. The results indicated that the novel method has the advantages of convenience, good sensitivity, and high efficiency, and that it can be successfully applied to the analysis of PAEs in real samples. Copyright © 2016. Published by Elsevier B.V.

  10. Molecularly Imprinted Polymers (MIP for Selective Solid Phase Extraction of Celecoxib in Urine Samples Followed by High Performance Liquid Chromatography

    Directory of Open Access Journals (Sweden)

    Saeedeh Ansari

    2017-09-01

    Full Text Available In this study, a novel method is described for the determination of celecoxib, a nonsteroidal anti-inflammatory drug (NSAID), in human urine samples using molecularly imprinted solid-phase extraction (MISPE) coupled with high-performance liquid chromatography (HPLC). The synthesis of the MIP was performed by precipitation polymerization with methacrylic acid (MAA), ethylene glycol dimethacrylate (EGDMA), chloroform, 2,2′-azobisisobutyronitrile (AIBN) and celecoxib as the functional monomer, cross-linker, solvent, initiator and target drug, respectively. The celecoxib-imprinted polymer was utilized as a specific sorbent for the solid-phase extraction (SPE) of celecoxib from samples. The molecularly imprinted polymer (MIP) performance was compared with that of a synthesized non-imprinted polymer (NIP). Scanning electron microscopy (SEM), FT-IR spectroscopy, UV-VIS spectrophotometry and thermogravimetric analysis (TGA/DTG) were used for characterizing the synthesized polymers. Moreover, the MISPE procedure parameters that might influence the extraction process, such as pH, eluent flow rate, eluent volume and sorbent mass, were optimized to achieve the highest celecoxib extraction efficiency. The relative standard deviation (RSD %), recovery, limit of detection (LOD) and limit of quantification (LOQ) of the proposed method were 1.12%, 96%, 8 µg L(-1) and 26.7 µg L(-1), respectively. The proposed MISPE-HPLC-UV method can be used for the separation and enrichment of trace amounts of celecoxib in human urine and biological samples.

  11. A Simple Thermoplastic Substrate Containing Hierarchical Silica Lamellae for High-Molecular-Weight DNA Extraction.

    Science.gov (United States)

    Zhang, Ye; Zhang, Yi; Burke, Jeffrey M; Gleitsman, Kristin; Friedrich, Sarah M; Liu, Kelvin J; Wang, Tza-Huei

    2016-12-01

    An inexpensive, magnetic thermoplastic nanomaterial is developed utilizing a hierarchical layering of micro- and nanoscale silica lamellae to create a high-surface-area and low-shear substrate capable of capturing vast amounts of ultrahigh-molecular-weight DNA. Extraction is performed via a simple 45 min process and is capable of achieving binding capacities up to 1 000 000 times greater than silica microparticles. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Road Network Extraction from Dsm by Mathematical Morphology and Reasoning

    Science.gov (United States)

    Li, Yan; Wu, Jianliang; Zhu, Lin; Tachibana, Kikuo

    2016-06-01

    The objective of this research is the automatic extraction of the road network in an urban scene from a high-resolution digital surface model (DSM). Automatic road extraction and modeling from remotely sensed data has been studied for more than a decade, and the methods vary greatly due to differences in data types, regions, resolutions, etc. An advanced automatic road network extraction scheme is proposed to address the tedious steps of segmentation, recognition and grouping. It is based on a geometric road model which describes a multi-level structure: the 0-dimensional element is the intersection; the 1-dimensional elements are the central line and the sides; and the 2-dimensional element is the plane, which is generated from the 1-dimensional elements. The key feature of the presented approach is the cross-validation of the three road elements, which runs through the entire extraction procedure. The advantage of our model and method is that the linear elements of the road can be derived directly, without any complex, non-robust connection hypothesis. An example of a Japanese scene is presented to illustrate the procedure and the performance of the approach.
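
    Mathematical morphology of the kind named in the title is typically applied to a normalized DSM to isolate near-ground candidate regions before any geometric reasoning; a crude sketch using SciPy (window sizes and thresholds are assumptions, not the paper's values):

```python
import numpy as np
from scipy import ndimage

def road_candidate_mask(dsm: np.ndarray, ndsm_max: float = 0.5,
                        opening_size: int = 3) -> np.ndarray:
    """Crude road-candidate extraction from a DSM: keep near-ground cells
    (roads are low in the normalized DSM), then clean the mask with
    morphological opening/closing. The paper adds intersection,
    centre-line and side reasoning on top of such a mask."""
    ground = ndimage.grey_opening(dsm, size=(25, 25))  # coarse ground estimate
    ndsm = dsm - ground                                # height above ground
    mask = ndsm < ndsm_max                             # near-ground cells
    structure = np.ones((opening_size, opening_size), bool)
    mask = ndimage.binary_opening(mask, structure=structure)  # remove specks
    mask = ndimage.binary_closing(mask, structure=structure)  # fill small gaps
    return mask
```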

  13. Derivation of groundwater flow-paths based on semi-automatic extraction of lineaments from remote sensing data

    Directory of Open Access Journals (Sweden)

    U. Mallast

    2011-08-01

    Full Text Available In this paper we present a semi-automatic method to infer groundwater flow-paths based on the extraction of lineaments from digital elevation models. This method is especially adequate in remote and inaccessible areas where in-situ data are scarce. The combined method of linear filtering and object-based classification provides a lineament map with a high degree of accuracy. Subsequently, lineaments are differentiated into geological and morphological lineaments using auxiliary information and finally evaluated in terms of hydro-geological significance. Using the example of the western catchment of the Dead Sea (Israel/Palestine), the orientation and location of the differentiated lineaments are compared to characteristics of known structural features. We demonstrate that a strong correlation between lineaments and structural features exists. Using Euclidean distances between lineaments and wells provides an assessment criterion to evaluate the hydraulic significance of detected lineaments. Based on this analysis, we suggest that the statistical analysis of lineaments allows a delineation of flow-paths and thus significant information on groundwater movements. To validate the flow-paths we compare them to existing results of groundwater models that are based on well data.

  14. Chemometric strategy for automatic chromatographic peak detection and background drift correction in chromatographic data.

    Science.gov (United States)

    Yu, Yong-Jie; Xia, Qiao-Ling; Wang, Sheng; Wang, Bing; Xie, Fu-Wei; Zhang, Xiao-Bing; Ma, Yun-Ming; Wu, Hai-Long

    2014-09-12

    Peak detection and background drift correction (BDC) are key stages in using chemometric methods to analyze chromatographic fingerprints of complex samples. This study developed a novel chemometric strategy for simultaneous automatic chromatographic peak detection and BDC. A robust statistical method was used for intelligent estimation of the instrumental noise level, coupled with the first-order derivative of the chromatographic signal, to automatically extract chromatographic peaks from the data. A local curve-fitting strategy was then employed for BDC. Simulated and real liquid chromatographic data sets were designed with various kinds of background drift and degrees of chromatographic peak overlap to verify the performance of the proposed strategy. The underlying chromatographic peaks can be automatically detected and reasonably integrated by this strategy, and chromatograms with BDC can be precisely obtained. The proposed method was used to analyze a complex gas chromatography dataset that monitored quality changes in plant extracts during storage. Copyright © 2014 Elsevier B.V. All rights reserved.
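
    A minimal sketch of the two ingredients named above, a robust noise estimate plus first-derivative peak picking (the median-absolute-deviation estimator here is a common robust choice and stands in for the paper's unspecified statistic):

```python
import numpy as np

def detect_peaks(signal: np.ndarray, k: float = 3.0):
    """Derivative-based peak detection with a robust noise estimate."""
    d = np.diff(signal)
    # robust sigma of the derivative via the median absolute deviation
    noise = 1.4826 * np.median(np.abs(d - np.median(d)))
    peaks = []
    for i in range(1, len(d)):
        # a +/- zero-crossing of the derivative marks a local maximum
        if d[i - 1] > 0 >= d[i] and max(d[i - 1], -d[i]) > k * noise:
            peaks.append(i)
    return peaks

x = np.linspace(0, 10, 500)
chrom = np.exp(-((x - 3) ** 2) / 0.02) + 0.6 * np.exp(-((x - 7) ** 2) / 0.05)
chrom += 0.01 * x  # mild background drift
print(detect_peaks(chrom))  # indices near the two synthetic peaks
```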

  15. Automatic Detection of Storm Damages Using High-Altitude Photogrammetric Imaging

    Science.gov (United States)

    Litkey, P.; Nurminen, K.; Honkavaara, E.

    2013-05-01

    The risk of storms that cause damage in forests is increasing due to climate change. Quickly detecting fallen trees, assessing their amount and efficiently collecting them are of great importance for economic and environmental reasons. Visually detecting and delineating storm damage is a laborious and error-prone process; thus, it is important to develop cost-efficient and highly automated methods. The objective of our research project is to investigate and develop a reliable and efficient method for automatic storm damage detection based on airborne imagery collected after a storm. The requirements for the method are before-storm and after-storm surface models. A difference surface is calculated from the two DSMs, and the locations where significant changes have appeared are automatically detected. In our previous research we used a four-year-old airborne laser scanning surface model as the before-storm surface. The after-storm DSM was produced from the photogrammetric images using the Next Generation Automatic Terrain Extraction (NGATE) algorithm of the Socet Set software. We obtained 100% accuracy in the detection of major storm damage. In this investigation we will further evaluate the sensitivity of the storm-damage detection process. We will investigate the potential of national airborne photography, which is collected in the leafless season, to automatically produce a before-storm DSM using image matching. We will also compare the impact of the terrain extraction algorithm on the results. Our results will also promote the potential of national open data sets in the management of natural disasters.
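
    The DSM-differencing step described above is straightforward to prototype: subtract the surfaces, threshold the height drop, and keep connected regions large enough to be fallen-tree patches; a sketch with illustrative thresholds (not the project's actual values):

```python
import numpy as np
from scipy import ndimage

def storm_damage_mask(dsm_before, dsm_after, drop_m=10.0, min_area_px=50):
    """Flag areas where the canopy surface dropped significantly between
    the before-storm and after-storm DSMs, then keep only connected
    regions large enough to be fallen-tree patches."""
    diff = dsm_before - dsm_after           # positive where the surface dropped
    candidates = diff > drop_m              # significant height loss
    labels, n = ndimage.label(candidates)   # connected components
    sizes = ndimage.sum(candidates, labels, index=range(1, n + 1))
    big = [i + 1 for i, s in enumerate(sizes) if s >= min_area_px]
    return np.isin(labels, big)
```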

  16. Automatized Parameterization of DFTB Using Particle Swarm Optimization.

    Science.gov (United States)

    Chou, Chien-Pin; Nishimura, Yoshifumi; Fan, Chin-Chai; Mazur, Grzegorz; Irle, Stephan; Witek, Henryk A

    2016-01-12

    We present a novel density-functional tight-binding (DFTB) parametrization toolkit developed to optimize the parameters of various DFTB models in a fully automatized fashion. The main features of the algorithm, based on the particle swarm optimization technique, are discussed, and a number of initial pilot applications of the developed methodology to molecular and solid systems are presented.
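
    For readers unfamiliar with the optimizer, a minimal particle swarm loop is sketched below; in the paper's setting the loss would score a DFTB parameter set against reference data, while here a toy quadratic stands in (all constants are illustrative):

```python
import numpy as np

def pso(loss, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: each particle is pulled toward
    its personal best and the swarm-wide best position."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # velocities
    pbest, pbest_val = x.copy(), np.array([loss(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()         # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([loss(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

best, val = pso(lambda p: ((p - 0.3) ** 2).sum(), dim=4)
print(best, val)  # converges near [0.3, 0.3, 0.3, 0.3]
```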

  17. Molecularly imprinted solid-phase extraction for selective extraction of bisphenol analogues in beverages and canned food.

    Science.gov (United States)

    Yang, Yunjia; Yu, Jianlong; Yin, Jie; Shao, Bing; Zhang, Jing

    2014-11-19

    This study aimed to develop a selective analytical method for the simultaneous determination of seven bisphenol analogues in beverage and canned food samples by using a new molecularly imprinted polymer (MIP) as a sorbent for solid-phase extraction (SPE). Liquid chromatography coupled to triple-quadrupole tandem mass spectrometry (LC-MS/MS) was used to identify and quantify the target analytes. The MIP-SPE method exhibited a higher level of selectivity and purification than the traditional SPE method. The developed procedures were further validated in terms of accuracy, precision, and sensitivity. The obtained recoveries varied from 50% to 103% at three fortification levels and yielded a relative standard deviation (RSD, %) of less than 15% for all of the analytes. The limits of quantification (LOQ) for the seven analytes varied from 0.002 to 0.15 ng/mL for beverage samples and from 0.03 to 1.5 ng/g for canned food samples. This method was used to analyze real samples collected from a supermarket in Beijing. Overall, the results revealed that bisphenol A and bisphenol F were the most frequently detected bisphenols in the beverage and canned food samples and that their concentrations were closely associated with the type of packaging material. This study provides an alternative to traditional SPE extraction for screening bisphenol analogues in food matrices.

  18. Automatic modulation recognition of communication signals

    CERN Document Server

    Azzouz, Elsayed Elsayed

    1996-01-01

    Automatic modulation recognition is a rapidly evolving area of signal analysis. In recent years, interest from academic and military research institutes has focused on the research and development of modulation recognition algorithms. Any communication intelligence (COMINT) system comprises three main blocks: the receiver front-end, the modulation recogniser and the output stage. Considerable work has been done in the area of receiver front-ends. The work at the output stage is concerned with information extraction, recording and exploitation, and begins with signal demodulation, which requires accurate knowledge of the signal modulation type. There are, however, two main reasons for knowing the current modulation type of a signal: to preserve the signal information content and to decide upon a suitable counter-action, such as jamming. Automatic Modulation Recognition of Communications Signals describes this modulation recognition process in depth. Drawing on several years of research, the authors provide a cr...

  19. Automatic Transformation of MPI Programs to Asynchronous, Graph-Driven Form

    Energy Technology Data Exchange (ETDEWEB)

    Baden, Scott B [University of California, San Diego; Weare, John H [University of California, San Diego; Bylaska, Eric J [Pacific Northwest National Laboratory

    2013-04-30

    The goals of this project are to develop new, scalable, high-fidelity algorithms for atomic-level simulations and program transformations that automatically restructure existing applications, enabling them to scale forward to Petascale systems and beyond. The techniques enable legacy MPI application code to exploit greater parallelism through increased latency hiding and improved workload assignment. The techniques were successfully demonstrated on high-end scalable systems located at DOE laboratories. Besides the automatic MPI program transformation efforts, the project also developed several new scalable algorithms for ab-initio molecular dynamics, including new massively parallel algorithms for hybrid DFT and new parallel-in-time algorithms for molecular dynamics and ab-initio molecular dynamics. These algorithms were shown to scale to very large numbers of cores, and they were designed to work in the latency hiding framework developed in this project. The effectiveness of the developments was enhanced by direct application to real grand-challenge simulation problems covering a wide range of technologically important applications, time scales and accuracies. These included the simulation of the electronic structure of mineral/fluid interfaces, the very accurate simulation of chemical reactions in microsolvated environments, and the simulation of chemical behavior in very large enzyme reactions.

  20. A robust approach to extract biomedical events from literature.

    Science.gov (United States)

    Bui, Quoc-Chinh; Sloot, Peter M A

    2012-10-15

    The abundance of biomedical literature has attracted significant interest in novel methods to automatically extract biomedical relations from the literature. Until recently, most research was focused on extracting binary relations such as protein-protein interactions and drug-disease relations. However, these binary relations cannot fully represent the original biomedical data. Therefore, there is a need for methods that can extract fine-grained and complex relations known as biomedical events. In this article we propose a novel method to extract biomedical events from text. Our method consists of two phases. In the first phase, training data are mapped into structured representations. Based on that, templates are used to extract rules automatically. In the second phase, extraction methods are developed to process the obtained rules. When evaluated against the Genia event extraction abstract and full-text test datasets (Task 1), we obtain results with F-scores of 52.34 and 53.34, respectively, which are comparable to the state-of-the-art systems. Furthermore, our system achieves superior performance in terms of computational efficiency. Our source code is available for academic use at http://dl.dropbox.com/u/10256952/BioEvent.zip.

  1. Synthesis of a nanoporous molecularly imprinted polymers for dibutyl Phthalate extracted from Trichoderma Harzianum

    Directory of Open Access Journals (Sweden)

    Maede Shahiri Tabarestani

    2016-07-01

    Full Text Available In this study, molecularly imprinted polymers (MIPs) were synthesized for dibutyl phthalate, a bioactive chemical compound with antifungal activity produced by Trichoderma harzianum (JX1738521). The MIPs were synthesized via the precipitation polymerization method from methacrylic acid, dibutyl phthalate and trimethylolpropane trimethacrylate as the functional monomer, template and cross-linker, respectively. After removal of the template by the eluent, the leached MIP nanoparticles had a good binding capacity of 830 mg/g. The polymer particles were evaluated by field emission scanning electron microscopy and Brunauer-Emmett-Teller techniques. The excellent specific surface area of the molecularly imprinted polymers (690.301 m2/g) compared with the non-imprinted polymers (ca. 89.894 m2/g) confirms that the nanoporous MIPs were synthesized successfully. The results indicate that the nanoporous MIPs can be used in solid phase extraction. This is a novel method for the separation of bioactive compounds from fungal secondary metabolites in biological control.

  2. Combining Pickering Emulsion Polymerization with Molecular Imprinting to Prepare Polymer Microspheres for Selective Solid-Phase Extraction of Malachite Green

    Directory of Open Access Journals (Sweden)

    Weixin Liang

    2017-08-01

    Full Text Available Malachite green (MG) currently poses a carcinogenic threat to human safety; therefore, it is highly desirable to develop an effective method for fast trace detection of MG. Herein, for the first time, this paper presents a systematic study on polymer microspheres, prepared by combining Pickering emulsion polymerization and molecular imprinting, to detect and purify MG. The microspheres, molecularly imprinted with MG, show enhanced adsorption selectivity to MG, despite a somewhat lowered adsorption capacity, as compared to the counterpart without molecular imprinting. Structural features and adsorption performance of these microspheres are elucidated by different characterizations and kinetic and thermodynamic analyses. The surface of the molecularly imprinted polymer microspheres (M-PMs) exhibits regular pores of uniform pore size distribution, endowing M-PMs with impressive adsorption selectivity to MG. In contrast, the microspheres without molecular imprinting show a larger average particle diameter and an uneven porous surface (with roughness and a large pore size), causing a lower adsorption selectivity to MG despite a higher adsorption capacity. Various adsorption conditions are investigated, such as pH and initial concentration of the MG solution, for optimizing the adsorption performance of M-PMs in selectively tackling MG. The adsorption kinetics and thermodynamics are discussed and analyzed in depth, so as to provide a full picture of the adsorption behaviors of the polymer microspheres with and without molecular imprinting. Significantly, M-PMs show promising solid-phase extraction column applications for recovering MG in a continuous extraction manner.

  3. Automatic segmentation of liver structure in CT images

    International Nuclear Information System (INIS)

    Bae, K.T.; Giger, M.L.; Chen, C.; Kahn, C.E. Jr.

    1993-01-01

    The segmentation and three-dimensional representation of the liver from a computed tomography (CT) scan is an important step in many medical applications, such as surgical planning for a living-donor liver transplant and the automatic detection and documentation of pathological states. A method is being developed to automatically extract liver structure from abdominal CT scans using a priori information about liver morphology and digital image-processing techniques. Segmentation is performed sequentially image-by-image (slice-by-slice), starting with a reference image in which the liver occupies almost the entire right half of the abdomen cross section. Image-processing techniques include gray-level thresholding, Gaussian smoothing, and eight-point connectivity tracking. For each case, the shape, size, and pixel density distribution of the liver are recorded for each CT image and used in the processing of other CT images. Extracted boundaries of the liver are smoothed using mathematical morphology techniques and B-splines. Computer-determined boundaries were compared with those drawn by a radiologist. The boundary descriptions from the two methods were in agreement, and the calculated areas were within 10% of each other.
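
    A greatly simplified sketch of the thresholding-and-connectivity stage (illustrative only; the HU band and smoothing width are assumed, and the a priori morphology model of the paper is omitted):

        import numpy as np
        from scipy import ndimage

        def liver_candidate(ct_slice, hu_lo=40, hu_hi=140, sigma=2.0):
            """Smooth, threshold to a soft-tissue band, keep largest 8-connected blob."""
            smoothed = ndimage.gaussian_filter(ct_slice.astype(float), sigma)
            mask = (smoothed >= hu_lo) & (smoothed <= hu_hi)
            labels, n = ndimage.label(mask, structure=np.ones((3, 3)))  # 8-connectivity
            if n == 0:
                return mask
            sizes = ndimage.sum(mask, labels, range(1, n + 1))
            return labels == (int(np.argmax(sizes)) + 1)   # largest component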

  4. Automatic lip reading by using multimodal visual features

    Science.gov (United States)

    Takahashi, Shohei; Ohya, Jun

    2013-12-01

    Speech recognition has been researched for a long time, but it does not work well in noisy places such as cars or trains. In addition, people who are hearing-impaired cannot benefit from speech recognition. To recognize speech automatically, visual information is also important: people understand speech not only from audio information, but also from visual information such as temporal changes in lip shape. A vision-based speech recognition method could work well in noisy places and could also be useful for people with hearing disabilities. In this paper, we propose an automatic lip-reading method for recognizing speech using multimodal visual information, without using any audio information. First, an ASM (Active Shape Model) is used to track and detect the face and lips in a video sequence. Second, the shape, optical flow and spatial frequencies of the lip features are extracted from the lips detected by the ASM. Next, the extracted multimodal features are ordered chronologically and a Support Vector Machine is used to learn and classify the spoken words. Experiments on classifying several words show promising results for the proposed method.

  5. Automatic generation of data merging program codes.

    OpenAIRE

    Hyensook, Kim; Oussena, Samia; Zhang, Ying; Clark, Tony

    2010-01-01

    Data merging is an essential part of ETL (Extract-Transform-Load) processes used to build a data warehouse system. To avoid reinventing merging techniques, we propose a Data Merging Meta-model (DMM) and its transformation into executable program codes in the manner of model-driven engineering. DMM allows defining the relationships of different model entities and their merging types at the conceptual level. Our formalized transformation, described using ATL (ATLAS Transformation Language), enables automatic g...

  6. Highly sensitive determination of polycyclic aromatic hydrocarbons in ambient air dust by gas chromatography-mass spectrometry after molecularly imprinted polymer extraction

    Energy Technology Data Exchange (ETDEWEB)

    Krupadam, Reddithota J.; Bhagat, Bhagyashree; Khan, Muntazir S. [National Environmental Engineering Research Institute, Nagpur (India)

    2010-08-15

    A method based on solid-phase extraction with a molecularly imprinted polymer (MIP) has been developed to determine five probable human carcinogenic polycyclic aromatic hydrocarbons (PAHs) in ambient air dust by gas chromatography-mass spectrometry (GC-MS). Molecularly imprinted poly(vinylpyridine-co-ethylene glycol dimethacrylate) was chosen as solid-phase extraction (SPE) material for PAHs. The conditions affecting extraction efficiency, for example surface properties, concentration of PAHs, and equilibration times, were evaluated and optimized. Under optimum conditions, pre-concentration factors for MIP-SPE ranged between 80 and 93 for 10 mL ambient air dust leachate. PAH recoveries from MIP-SPE after extraction from air dust were between 85% and 97%, and calibration graphs of the PAHs showed good linearity between 10 and 1000 ng L⁻¹ (r = 0.99). The extraction efficiency of MIP for PAHs was compared with that of commercially available SPE materials, powdered activated carbon (PAC) and polystyrene-divinylbenzene resin (XAD), and it was shown that the extraction capacity of the MIP was better than that of the other two SPE materials. Organic matter in air dust had no effect on MIP extraction, which produced a clean extract for GC-MS analysis. The detection limit of the method proposed in this article is 0.15 ng L⁻¹ for benzo[a]pyrene, which is a marker molecule of air pollution. The method has been applied to the determination of probable carcinogenic PAHs in air dust of industrial zones and satisfactory results were obtained. (orig.)

  7. Highly sensitive determination of polycyclic aromatic hydrocarbons in ambient air dust by gas chromatography-mass spectrometry after molecularly imprinted polymer extraction.

    Science.gov (United States)

    Krupadam, Reddithota J; Bhagat, Bhagyashree; Khan, Muntazir S

    2010-08-01

    A method based on solid-phase extraction with a molecularly imprinted polymer (MIP) has been developed to determine five probable human carcinogenic polycyclic aromatic hydrocarbons (PAHs) in ambient air dust by gas chromatography-mass spectrometry (GC-MS). Molecularly imprinted poly(vinylpyridine-co-ethylene glycol dimethacrylate) was chosen as solid-phase extraction (SPE) material for PAHs. The conditions affecting extraction efficiency, for example surface properties, concentration of PAHs, and equilibration times, were evaluated and optimized. Under optimum conditions, pre-concentration factors for MIP-SPE ranged between 80 and 93 for 10 mL ambient air dust leachate. PAH recoveries from MIP-SPE after extraction from air dust were between 85% and 97%, and calibration graphs of the PAHs showed good linearity between 10 and 1000 ng L⁻¹ (r = 0.99). The extraction efficiency of MIP for PAHs was compared with that of commercially available SPE materials, powdered activated carbon (PAC) and polystyrene-divinylbenzene resin (XAD), and it was shown that the extraction capacity of the MIP was better than that of the other two SPE materials. Organic matter in air dust had no effect on MIP extraction, which produced a clean extract for GC-MS analysis. The detection limit of the method proposed in this article is 0.15 ng L⁻¹ for benzo[a]pyrene, which is a marker molecule of air pollution. The method has been applied to the determination of probable carcinogenic PAHs in air dust of industrial zones and satisfactory results were obtained.

  8. Automatic and creative skills in reading

    Directory of Open Access Journals (Sweden)

    Leonor Scliar Cabral

    2008-04-01

    Full Text Available In this article I will discuss the automatic and creative skills in reading, focusing on the differences between (1) processes involved while learning how to read and processes employed by the proficient reader and (2) knowledge for using language and metalinguistic awareness. The arguments derive mainly from the definition of reading as a process where the receivers combine the information extracted from the written material with their specialized knowledge activated during this process (i.e., linguistic systems and corresponding rules, and encyclopedic knowledge) in order to comprehend, interpret and internalize structured new information and/or to experience aesthetic pleasure. Evidence to illustrate the arguments comes from experiments (1) with pre-school children and beginning readers on narrativity and on the dichotic paradigm, and (2) with illiterate and literate adults with different levels of reading proficiency in a task of erasing an initial syllable and an initial consonant.

  9. Automatic Detection of Acromegaly From Facial Photographs Using Machine Learning Methods.

    Science.gov (United States)

    Kong, Xiangyi; Gong, Shun; Su, Lijuan; Howard, Newton; Kong, Yanguo

    2018-01-01

    Automatic early detection of acromegaly is theoretically possible from facial photographs, which can lessen the prevalence and increase the cure probability. In this study, several popular machine learning algorithms were trained on a retrospective development dataset consisting of 527 acromegaly patients and 596 normal subjects. We first used OpenCV to detect the face bounding rectangle and then cropped and resized it to the same pixel dimensions. From the detected faces, locations of facial landmarks, which were the potential clinical indicators, were extracted. Frontalization was then adopted to synthesize frontal facing views to improve the performance. Several popular machine learning methods, including LM, KNN, SVM, RT, CNN, and EM, were used to automatically identify acromegaly from the detected facial photographs, extracted facial landmarks, and synthesized frontal faces. The trained models were evaluated using a separate dataset, of which half were diagnosed as acromegaly by the growth hormone suppression test. The best of our proposed methods showed a PPV of 96%, an NPV of 95%, a sensitivity of 96% and a specificity of 96%. Artificial intelligence can automatically detect acromegaly early with high sensitivity and specificity. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
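
    The face detection and normalization step can be pictured with a short OpenCV sketch (a plausible reconstruction, not the authors' code; the Haar cascade and the 128x128 output size are assumed choices):

        import cv2

        def detect_and_normalize_face(image_path, size=(128, 128)):
            """Detect the largest face, crop it, and resize to fixed pixel dimensions."""
            img = cv2.imread(image_path)
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            cascade = cv2.CascadeClassifier(
                cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) == 0:
                return None
            x, y, w, h = max(faces, key=lambda r: r[2] * r[3])  # largest box
            return cv2.resize(gray[y:y + h, x:x + w], size)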

  10. The fate of injectant coal in blast furnaces: The origin of extractable materials of high molecular mass in blast furnace carryover dusts

    Energy Technology Data Exchange (ETDEWEB)

    Dong, S.N.; Wu, L.; Paterson, N.; Herod, A.A.; Dugwell, D.R.; Kandiyoti, R. [University of London Imperial College of Science & Technology, London (United Kingdom). Dept. of Chemical Engineering

    2005-07-01

    The aim of the work was to investigate the fate of injectant coal in blast furnaces and the origin of extractable materials in blast furnace carryover dusts. Two sets of samples, including injectant coal and the corresponding carryover dusts from a full-sized blast furnace and a pilot-scale rig, have been examined. The samples were extracted using 1-methyl-2-pyrrolidinone (NMP) solvent and the extracts studied by size exclusion chromatography (SEC). The blast furnace carryover dust extracts contained high molecular weight carbonaceous material, of apparent mass corresponding to 10⁷-10⁸ u by polystyrene calibration. In contrast, the feed coke and char prepared in a wire mesh reactor under high temperature conditions did not give any extractable material. Meanwhile, controlled combustion experiments in a high-pressure wire mesh reactor suggest that the extent of combustion of injectant coal in the blast furnace tuyeres and raceways is limited by the time of exposure and the very low oxygen concentration. It is thus likely that the extractable, soot-like material in the blast furnace dust originated from tars released by the injectant coal. Our results suggest that the unburned tars were thermally altered during the upward path within the furnace, giving rise to the formation of heavy molecular weight (soot-like) materials.

  11. Temporally rendered automatic cloud extraction (TRACE) system

    Science.gov (United States)

    Bodrero, Dennis M.; Yale, James G.; Davis, Roger E.; Rollins, John M.

    1999-10-01

    Smoke/obscurant testing requires that 2D cloud extent be extracted from visible and thermal imagery. These data are used alone or in combination with 2D data from other aspects to make 3D calculations of cloud properties, including dimensions, volume, centroid, travel, and uniformity. Determining cloud extent from imagery has historically been a time-consuming manual process. To reduce the time and cost associated with smoke/obscurant data processing, automated methods to extract cloud extent from imagery were investigated. The TRACE system described in this paper was developed and implemented at U.S. Army Dugway Proving Ground, UT by the Science and Technology Corporation--Acuity Imaging Incorporated team with Small Business Innovation Research funding. TRACE uses dynamic background subtraction and 3D fast Fourier transform as primary methods to discriminate the smoke/obscurant cloud from the background. TRACE has been designed to run on a PC-based platform using Windows. The PC-Windows environment was chosen for portability, to give TRACE the maximum flexibility in terms of its interaction with peripheral hardware devices such as video capture boards, removable media drives, network cards, and digital video interfaces. Video for Windows provides all of the necessary tools for the development of the video capture utility in TRACE and allows for interchangeability of video capture boards without any software changes. TRACE is designed to take advantage of future upgrades in all aspects of its component hardware. A comparison of cloud extent determined by TRACE with the manual method is included in this paper.
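
    A toy version of the dynamic background subtraction idea (TRACE's discrimination also uses a 3D FFT, omitted here; the learning rate and deviation factor are assumed values):

        import numpy as np

        def cloud_masks(frames, alpha=0.05, k=3.0):
            """Yield a per-frame cloud mask from a running background model."""
            it = iter(frames)
            bg = next(it).astype(float)               # initial background estimate
            var = np.full_like(bg, 25.0)              # initial variance guess
            for f in it:
                f = f.astype(float)
                d = f - bg
                mask = d * d > (k * k) * var          # pixels deviating strongly
                bg += alpha * d * ~mask               # update background outside cloud
                var += alpha * (d * d - var) * ~mask
                yield mask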

  12. An Overview of Biomolecular Event Extraction from Scientific Documents.

    Science.gov (United States)

    Vanegas, Jorge A; Matos, Sérgio; González, Fabio; Oliveira, José L

    2015-01-01

    This paper presents a review of state-of-the-art approaches to automatic extraction of biomolecular events from scientific texts. Events involving biomolecules such as genes, transcription factors, or enzymes, for example, have a central role in biological processes and functions and provide valuable information for describing physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, including support for information retrieval, knowledge summarization, and information extraction and discovery. However, automatic event extraction is a challenging task due to the ambiguity and diversity of natural language and higher-level linguistic phenomena, such as speculations and negations, which occur in biological texts and can lead to misunderstanding or incorrect interpretation. Many strategies have been proposed in the last decade, originating from different research areas such as natural language processing, machine learning, and statistics. This review summarizes the most representative approaches in biomolecular event extraction and presents an analysis of the current state of the art and of commonly used methods, features, and tools. Finally, current research trends and future perspectives are also discussed.

  13. An Overview of Biomolecular Event Extraction from Scientific Documents

    Directory of Open Access Journals (Sweden)

    Jorge A. Vanegas

    2015-01-01

    Full Text Available This paper presents a review of state-of-the-art approaches to automatic extraction of biomolecular events from scientific texts. Events involving biomolecules such as genes, transcription factors, or enzymes, for example, have a central role in biological processes and functions and provide valuable information for describing physiological and pathogenesis mechanisms. Event extraction from biomedical literature has a broad range of applications, including support for information retrieval, knowledge summarization, and information extraction and discovery. However, automatic event extraction is a challenging task due to the ambiguity and diversity of natural language and higher-level linguistic phenomena, such as speculations and negations, which occur in biological texts and can lead to misunderstanding or incorrect interpretation. Many strategies have been proposed in the last decade, originating from different research areas such as natural language processing, machine learning, and statistics. This review summarizes the most representative approaches in biomolecular event extraction and presents an analysis of the current state of the art and of commonly used methods, features, and tools. Finally, current research trends and future perspectives are also discussed.

  14. Subcritical water extraction combined with molecular imprinting technology for sample preparation in the detection of triazine herbicides.

    Science.gov (United States)

    Zhao, Fengnian; Wang, Shanshan; She, Yongxin; Zhang, Chao; Zheng, Lufei; Jin, Maojun; Shao, Hua; Jin, Fen; Du, Xinwei; Wang, Jing

    2017-09-15

    A selective, environmentally friendly, and cost-effective sample extraction method based on a combination of subcritical water extraction (SWE) and molecularly imprinted solid-phase extraction (MISPE) was developed for the determination of eight triazine herbicides in soil samples by liquid chromatography-tandem mass spectrometry (LC-MS/MS). In SWE, the highest extraction yields of triazine herbicides were obtained at 150 °C for 15 min using 20% ethanol as the organic modifier. Addition of MIP during SWE increased the extraction efficiency, and using MIP as a selective SPE sorbent improved the enrichment capability. Soil samples were treated with the optimized MIP/SWE-MISPE extraction method and analyzed by LC-MS/MS. The novel technique was then applied to soil samples for the determination of triazine herbicides, and better recoveries (78.9%-101%) were obtained compared with SWE-MISPE (30%-67%). Moreover, this newly developed method displayed good linearity (R² > 0.99) and precision (RSD 2.7-9.8%), and sufficiently low detection limits (0.4-3.3 μg kg⁻¹). This combination of SWE and MIP technology is a simple, effective and promising method to selectively extract class-specific compounds from complex samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. ANA, automatic natural learning of a semantic network

    International Nuclear Information System (INIS)

    Enguehard, Chantal

    1992-01-01

    The objective of this research thesis is the automatic extraction of terminology and the study of its automatic structuring in order to produce a semantic network. Such an operation is applied to a text corpus representing knowledge of a specific field in order to select the relevant technical vocabulary for this field. Thus, the author developed a method and software for the automatic acquisition of terminology items. The author first gives an overview of systems and methods for document indexing and thesaurus elaboration, and a brief presentation of the state of the art in learning. Then, he discusses some drawbacks of computer systems for natural language processing that use large knowledge sources such as grammars and dictionaries. After a presentation of the adopted approach and of some hypotheses, the author defines the objects and operators necessary for easier data handling, presents the knowledge acquisition process, and finally describes the system's computerization in detail. Some results are assessed and discussed, along with limitations and perspectives. [fr]

  16. A semi-automatic annotation tool for cooking video

    Science.gov (United States)

    Bianco, Simone; Ciocca, Gianluigi; Napoletano, Paolo; Schettini, Raimondo; Margherita, Roberto; Marini, Gianluca; Gianforme, Giorgio; Pantaleo, Giuseppe

    2013-03-01

    In order to create a cooking assistant application to guide users in the preparation of dishes relevant to their profile diets and food preferences, it is necessary to accurately annotate the video recipes, identifying and tracking the foods handled by the cook. These videos present particular annotation challenges such as frequent occlusions, food appearance changes, etc. Manually annotating the videos is a time-consuming, tedious and error-prone task. Fully automatic tools that integrate computer vision algorithms to extract and identify the elements of interest are not error-free, and false positive and false negative detections need to be corrected in a post-processing stage. We present an interactive, semi-automatic tool for the annotation of cooking videos that integrates computer vision techniques under the supervision of the user. The annotation accuracy is increased with respect to completely automatic tools and the human effort is reduced with respect to completely manual ones. The performance and usability of the proposed tool are evaluated on the basis of the time and effort required to annotate the same video sequences.

  17. Automatic digital surface model (DSM) generation from aerial imagery data

    Science.gov (United States)

    Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu

    2018-04-01

    Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have a large overlapped region, which provides a great deal of redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing is used to reduce the effects of inherent radiometric problems and optimize the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method is used to achieve potentially sub-pixel accuracy matches and to identify inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images of different scales with different land-cover types. The accuracy evaluation is based on the comparison between the automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and the POS.
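
    The cross-correlation core of such matching can be illustrated with a brute-force sketch (illustrative only; the actual MIG3C/MIGCLSM steps add multi-image geometric constraints and least-squares refinement):

        import numpy as np

        def ncc(a, b):
            """Normalized cross-correlation of two equally sized patches."""
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum())
            return float((a * b).sum() / denom) if denom > 0 else 0.0

        def best_match(template, search_img, stride=1):
            """Slide the template over a search image; return best position and score."""
            th, tw = template.shape
            best, best_xy = -1.0, (0, 0)
            for y in range(0, search_img.shape[0] - th + 1, stride):
                for x in range(0, search_img.shape[1] - tw + 1, stride):
                    s = ncc(template, search_img[y:y + th, x:x + tw])
                    if s > best:
                        best, best_xy = s, (x, y)
            return best_xy, best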

  18. Integrating Information Extraction Agents into a Tourism Recommender System

    Science.gov (United States)

    Esparcia, Sergio; Sánchez-Anguix, Víctor; Argente, Estefanía; García-Fornes, Ana; Julián, Vicente

    Recommender systems face some problems. On the one hand, information needs to be kept updated, which can be a costly task if it is not performed automatically. On the other hand, it may be interesting to include third-party services in the recommendation, since they improve its quality. In this paper, we present an add-on for the Social-Net Tourism Recommender System that uses information extraction and natural language processing techniques to automatically extract and classify information from the Web. Its goal is to keep the system updated and obtain information about third-party services that are not offered by service providers inside the system.

  19. PASBio: predicate-argument structures for event extraction in molecular biology

    Science.gov (United States)

    Wattarujeekrit, Tuangthong; Shah, Parantu K; Collier, Nigel

    2004-01-01

    Background The exploitation of information extraction (IE), a technology aiming to provide instances of structured representations from free-form text, has been rapidly growing within the molecular biology (MB) research community to keep track of the latest results reported in literature. IE systems have traditionally used shallow syntactic patterns for matching facts in sentences but such approaches appear inadequate to achieve high accuracy in MB event extraction due to complex sentence structure. A consensus in the IE community is emerging on the necessity for exploiting deeper knowledge structures such as through the relations between a verb and its arguments shown by predicate-argument structure (PAS). PAS is of interest as structures typically correspond to events of interest and their participating entities. For this to be realized within IE a key knowledge component is the definition of PAS frames. PAS frames for non-technical domains such as newswire are already being constructed in several projects such as PropBank, VerbNet, and FrameNet. Knowledge from PAS should enable more accurate applications in several areas where sentence understanding is required like machine translation and text summarization. In this article, we explore the need to adapt PAS for the MB domain and specify PAS frames to support IE, as well as outlining the major issues that require consideration in their construction. Results We introduce PASBio by extending a model based on PropBank to the MB domain. The hypothesis we explore is that PAS holds the key for understanding relationships describing the roles of genes and gene products in mediating their biological functions. We chose predicates describing gene expression, molecular interactions and signal transduction events with the aim of covering a number of research areas in MB. Analysis was performed on sentences containing a set of verbal predicates from MEDLINE and full text journals. Results confirm the necessity to analyze

  20. A method for real-time implementation of HOG feature extraction

    Science.gov (United States)

    Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai

    2011-08-01

    Histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, the computation of HOG feature extraction is unsuitable for hardware implementation, since it includes complicated operations. In this paper, an optimized design method and theoretical framework for real-time HOG feature extraction based on an FPGA are proposed. The main principle is as follows: first, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed. Second, the calculation of the arctangent and square root operations was simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that the HOG extraction can be implemented within one pixel period by these computing units.
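
    In software form, the cell-histogram core of HOG (without block normalization or the hardware simplifications discussed above) reduces to the following sketch; the 8-pixel cells and 9 unsigned orientation bins are the common defaults:

        import numpy as np

        def hog_cells(gray, cell=8, bins=9):
            """Accumulate gradient magnitudes into per-cell orientation histograms."""
            g = gray.astype(float)
            gx = np.zeros_like(g)
            gy = np.zeros_like(g)
            gx[:, 1:-1] = g[:, 2:] - g[:, :-2]             # central differences
            gy[1:-1, :] = g[2:, :] - g[:-2, :]
            mag = np.hypot(gx, gy)
            ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
            b = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)
            ch, cw = g.shape[0] // cell, g.shape[1] // cell
            hist = np.zeros((ch, cw, bins))
            for i in range(ch):
                for j in range(cw):
                    ys = slice(i * cell, (i + 1) * cell)
                    xs = slice(j * cell, (j + 1) * cell)
                    hist[i, j] = np.bincount(b[ys, xs].ravel(),
                                             mag[ys, xs].ravel(), bins)
            return hist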

  1. Automatic isotope gas analysis of tritium labelled organic materials Pt. 1

    International Nuclear Information System (INIS)

    Gacs, I.; Mlinko, S.

    1978-01-01

    A new automatic procedure developed to convert tritium in HTO into hydrogen for subsequent on-line gas counting is described. The water containing tritium is introduced into a column prepared from molecular sieve-5A and heated to 550 deg C. The tritium is transferred by isotopic exchange into hydrogen flowing through the column. The radioactive gas is led into an internal detector for radioactivity measurement. The procedure is free of memory effects and provides quantitative recovery with an analytical reproducibility better than 0.5% rel. at a preset number of counts. The experimental and analytical results indicate that isotopic exchange between HTO and hydrogen over a column prepared from alumina or molecular sieve-5A can be successfully applied for the quantitative transfer of tritium from HTO into hydrogen for on-line gas counting. This provides an analytical procedure for the automatic determination of tritium in water with an analytical reproducibility better than 0.5% rel. The exchange process will also be suitable for rapid tritium transfer from water formed during the decomposition of tritium-labelled organic compounds or biological materials. The application of the procedure in automatic isotope gas analysis of organic materials labelled with tritium will be described in subsequent papers (Parts II and III). (T.G.)

  2. Automatic Anthropometric System Development Using Machine Learning

    Directory of Open Access Journals (Sweden)

    Long The Nguyen

    2016-08-01

    Full Text Available A contactless automatic anthropometric system is proposed for the reconstruction of a 3D model of the human body using a conventional smartphone. Our approach involves three main steps. The first step is the extraction of 12 anthropological features. Then we determine the most important features. Finally, we employ these features to build the 3D model of the human body and classify subjects according to gender and the commonly used sizes.

  3. Molecularly imprinted polymer for selective extraction of malachite green from seawater and seafood coupled with high-performance liquid chromatographic determination.

    Science.gov (United States)

    Lian, Ziru; Wang, Jiangtao

    2012-12-01

    In this paper, a highly selective sample cleanup procedure combining the molecular imprinting technique (MIT) and solid-phase extraction (SPE) was developed for the isolation of malachite green in seawater and seafood samples. The molecularly imprinted polymer (MIP) was prepared using malachite green as the template molecule, methacrylic acid as the functional monomer and ethylene glycol dimethacrylate as the cross-linking monomer. The imprinted polymer and a non-imprinted polymer were characterized by scanning electron microscopy and static adsorption experiments. The MIP showed a high adsorption capacity and was used as a selective sorbent for the SPE of malachite green. An off-line molecularly imprinted solid-phase extraction (MISPE) method followed by high-performance liquid chromatography with diode-array detection for the analysis of malachite green in seawater and seafood samples was also established. Finally, five samples were analyzed. The results showed that the malachite green concentration in one seawater sample was 1.30 μg L⁻¹ with an RSD (n=3) of 4.15%. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  4. Automatic Imitation

    Science.gov (United States)

    Heyes, Cecilia

    2011-01-01

    "Automatic imitation" is a type of stimulus-response compatibility effect in which the topographical features of task-irrelevant action stimuli facilitate similar, and interfere with dissimilar, responses. This article reviews behavioral, neurophysiological, and neuroimaging research on automatic imitation, asking in what sense it is "automatic"…

  5. Alternating sample changer and an automatic sample changer for liquid scintillation counting of alpha-emitting materials

    International Nuclear Information System (INIS)

    Thorngate, J.H.

    1977-08-01

    Two sample changers are described that were designed for liquid scintillation counting of alpha-emitting samples prepared using solvent-extraction chemistry. One operates manually but changes samples without exposing the photomultiplier tube to light, allowing the high voltage to remain on for improved stability. The other is capable of automatically counting up to 39 samples. An electronic control for the automatic sample changer is also described

  6. A Full-Body Layered Deformable Model for Automatic Model-Based Gait Recognition

    Science.gov (United States)

    Lu, Haiping; Plataniotis, Konstantinos N.; Venetsanopoulos, Anastasios N.

    2007-12-01

    This paper proposes a full-body layered deformable model (LDM) inspired by manually labeled silhouettes for automatic model-based gait recognition from part-level gait dynamics in monocular video sequences. The LDM is defined for the fronto-parallel gait with 22 parameters describing the human body part shapes (widths and lengths) and dynamics (positions and orientations). There are four layers in the LDM and the limbs are deformable. Algorithms for LDM-based human body pose recovery are then developed to estimate the LDM parameters from both manually labeled and automatically extracted silhouettes, where the automatic silhouette extraction is through a coarse-to-fine localization and extraction procedure. The estimated LDM parameters are used for model-based gait recognition by employing the dynamic time warping for matching and adopting the combination scheme in AdaBoost.M2. While the existing model-based gait recognition approaches focus primarily on the lower limbs, the estimated LDM parameters enable us to study full-body model-based gait recognition by utilizing the dynamics of the upper limbs, the shoulders and the head as well. In the experiments, the LDM-based gait recognition is tested on gait sequences with differences in shoe-type, surface, carrying condition and time. The results demonstrate that the recognition performance benefits from not only the lower limb dynamics, but also the dynamics of the upper limbs, the shoulders and the head. In addition, the LDM can serve as an analysis tool for studying factors affecting the gait under various conditions.
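
    The dynamic time warping used for matching gait parameter trajectories is a standard dynamic program; a minimal sketch (not the authors' implementation) for two sequences of feature vectors:

        import numpy as np

        def dtw_distance(seq_a, seq_b):
            """DTW distance between two sequences (rows = time steps)."""
            n, m = len(seq_a), len(seq_b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(np.asarray(seq_a[i - 1]) -
                                          np.asarray(seq_b[j - 1]))
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]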

  7. Effect of gamma-irradiation on rice seed DNA. Pt. 1. Yield and molecular size of DNA extracted from irradiated rice seeds

    International Nuclear Information System (INIS)

    Kawamura, Yoko; Konishi, Akihiro; Yamada, Takashi; Saito, Yukio

    1995-01-01

    The effect of gamma-irradiation on the DNA of hulled rice seeds was investigated. The cetyltrimethylammonium bromide (CTAB) method was preferred for the extraction of DNA from rice seeds because of its high quality and good yield. The yield of DNA, determined by gel electrophoresis, decreased as the irradiation dose increased from 1 kGy. DNA extracted from rice seeds irradiated with a 30 kGy dose showed a molecular size of less than 20 kb, while that from unirradiated rice showed more than 100 kb in electrophoretic profiles. It can be assumed that the decrease in yield was mainly induced by crosslinking between protein and DNA, and the reduction in molecular size was induced by double-strand breaks. (J.P.N.)

  8. AUTOMATIC SHAPE-BASED TARGET EXTRACTION FOR CLOSE-RANGE PHOTOGRAMMETRY

    Directory of Open Access Journals (Sweden)

    X. Guo

    2016-06-01

    Full Text Available In order to perform precise identification and location of artificial coded targets in natural scenes, a novel design of circle-based coded targets and the corresponding coarse-to-fine extraction algorithm are presented. The designed target separates the target box and coding box completely and has the advantage of rotation invariance. Based on the original target, templates are prepared by three geometric transformations and are used as the input for shape-based template matching. Finally, region growing and parity check methods are used to extract the coded targets as final results. No human involvement is required except for the preparation of templates and the adjustment of thresholds at the beginning, which is conducive to the automation of close-range photogrammetry. The experimental results show that the proposed recognition method for the designed coded target is robust and accurate.

  9. Extraction of latent images from printed media

    Science.gov (United States)

    Sergeyev, Vladislav; Fedoseev, Victor

    2015-12-01

    In this paper we propose an automatic technology for the extraction of latent images from printed media such as documents, banknotes, financial securities, etc. This technology includes image processing by an adaptively constructed Gabor filter bank for obtaining feature images, as well as subsequent stages of feature selection, grouping and multicomponent segmentation. The main advantage of the proposed technique is its versatility: it allows latent images made by different texture variations to be extracted. Experimental results showing the performance of the method compared with another known system for latent image extraction are given.
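
    A small Gabor filter bank for producing texture feature images, as in the first stage described above, can be sketched with OpenCV (the kernel size, wavelengths and orientation count are assumed; the adaptive construction is omitted):

        import cv2
        import numpy as np

        def gabor_feature_images(gray, n_orient=6, wavelengths=(4, 8, 16)):
            """Stack response magnitudes of a Gabor bank as feature images."""
            feats = []
            for lam in wavelengths:
                for k in range(n_orient):
                    theta = np.pi * k / n_orient
                    kern = cv2.getGaborKernel((31, 31), sigma=lam / 2.0,
                                              theta=theta, lambd=lam,
                                              gamma=0.5, psi=0)
                    resp = cv2.filter2D(gray.astype(np.float32), -1, kern)
                    feats.append(np.abs(resp))
            return np.stack(feats, axis=-1)   # H x W x (len(wavelengths)*n_orient)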

  10. A Method of Generating Indoor Map Spatial Data Automatically from Architectural Plans

    Directory of Open Access Journals (Sweden)

    SUN Weixin

    2016-06-01

    Full Text Available Taking architectural plans as the data source, we propose a method that automatically generates indoor map spatial data. First, referring to the spatial data demands of indoor maps, we analyzed the basic characteristics of architectural plans and introduced the concepts of wall segment, adjoining node and adjoining wall segment, based on which the basic workflow of automatic indoor map spatial data generation was established. Then, according to the adjoining relations between wall lines at intersections with columns, we constructed a repair method for wall connectivity in relation to columns. Using gradual expansion and graphic reasoning to judge the local feature type of the wall symbols on both sides of a door or window, and by updating the enclosing rectangle of the door or window, we developed a repair method for wall connectivity in relation to doors and windows, as well as a method for transforming doors and windows into indoor map point features. Finally, on the basis of the geometric relations between the median lines of adjoining wall segments, a wall center-line extraction algorithm is presented. Taking an exhibition hall's architectural plan as an example, we performed experiments; the results show that the proposed methods handle various complex situations well and realize automatic extraction of indoor map spatial data effectively.

  11. Research on automatic inspection technique of real-time radiography for turbine-blade

    International Nuclear Information System (INIS)

    Zhou, Z.G.; Zhao, S.; An, Z.G.

    2004-01-01

    To inspect turbine blades automatically with a real-time radiographic system based on an X-ray flat panel detector, a computerized defect extraction technique is studied on the basis of the characteristics of turbine-blade digital radiographic images. First, in view of the wide range of gray levels in a turbine blade's digital radiographic image, the image is divided into six subareas. An adaptive median filter is used to smooth defects in each subarea. Then, the filtered image is subtracted from the raw image, yielding a difference image with a flat background and prominent defects. After that, thresholding is applied to the difference image, making defects in the turbine blade obvious. A morphological opening is then used for noise reduction. To ensure the accuracy of the defects, a region growing method is adopted to reconstruct them. Finally, the feature data of the defects are extracted. The comparison between the computerized feature extraction results and human interpretation indicates that the method described above is effective and efficient, which lays a good foundation for the automatic X-ray inspection of turbine blades. (author)
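
    A simplified, single-region version of that extraction chain (median filtering, subtraction, thresholding, morphological opening, size filtering) might look as follows; the filter size and thresholds are assumed, and the six-subarea adaptivity is omitted:

        import numpy as np
        from scipy import ndimage

        def extract_defects(radiograph, median_size=15, thresh=20.0, min_area=5):
            """Return a binary mask of candidate defects in a radiographic image."""
            img = radiograph.astype(float)
            background = ndimage.median_filter(img, size=median_size)
            diff = img - background                   # flat-background difference image
            mask = np.abs(diff) > thresh              # thresholding
            mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
            labels, n = ndimage.label(mask)           # size-filter remaining regions
            sizes = ndimage.sum(mask, labels, range(1, n + 1))
            return np.isin(labels, np.nonzero(sizes >= min_area)[0] + 1)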

  12. Semi-Automatic Registration of Airborne and Terrestrial Laser Scanning Data Using Building Corner Matching with Boundaries as Reliability Check

    Directory of Open Access Journals (Sweden)

    Liang Cheng

    2013-11-01

    Full Text Available Data registration is a prerequisite for the integration of multi-platform laser scanning in various applications. A new approach is proposed for the semi-automatic registration of airborne and terrestrial laser scanning data for buildings without eaves. Firstly, an automatic calculation procedure for thresholds in the density of projected points (DoPP) method is introduced to extract boundary segments from terrestrial laser scanning data. A new algorithm, using a self-extending procedure, is developed to recover the extracted boundary segments, which then intersect to form the corners of buildings. The building corners extracted from airborne and terrestrial laser scanning are reliably matched through an automatic iterative process in which boundaries from the two datasets are compared as a reliability check. The experimental results illustrate that the proposed approach provides both high reliability and high geometric accuracy (average error of 0.44 m/0.15 m in the horizontal/vertical direction for corresponding building corners) for the final registration of airborne laser scanning (ALS) and tripod-mounted terrestrial laser scanning (TLS) data.

  13. Resource Lean and Portable Automatic Text Summarization

    OpenAIRE

    Hassel, Martin

    2007-01-01

    Today, with digitally stored information available in abundance, even for many minor languages, this information must by some means be filtered and extracted in order to avoid drowning in it. Automatic summarization is one such technique, where a computer summarizes a longer text into a shorter, non-redundant form. Apart from the major languages of the world there are a lot of languages for which large bodies of data aimed at language technology research are to a high degree lacking. There migh...

  14. Systems Biology-Driven Hypotheses Tested In Vivo: The Need to Advance Molecular Imaging Tools.

    Science.gov (United States)

    Verma, Garima; Palombo, Alessandro; Grigioni, Mauro; La Monaca, Morena; D'Avenio, Giuseppe

    2018-01-01

    Processing and interpretation of biological images may provide invaluable insights into complex, living systems, because images capture the overall dynamics as a "whole." Therefore, "extraction" of key, quantitative morphological parameters could be, at least in principle, helpful in building a reliable systems biology approach to understanding living objects. Molecular imaging tools for systems biology models have attained widespread usage in modern experimental laboratories. Here, we provide an overview of advances in computational technology and different instrumentation focused on molecular image processing and analysis. Quantitative data analysis through various open source software and algorithmic protocols will provide a novel approach for modeling the experimental research program. Besides this, we also highlight predictable future trends regarding methods for automatically analyzing biological data. Such tools will be very useful for understanding the detailed biological and mathematical expressions underlying in-silico systems biology processes with modeling properties.

  15. Formal Specification Based Automatic Test Generation for Embedded Network Systems

    Directory of Open Access Journals (Sweden)

    Eun Hye Choi

    2014-01-01

    Full Text Available Embedded systems have become increasingly connected and communicate with each other, forming large-scale and complicated network systems. To make their design and testing more reliable and robust, this paper proposes a formal specification language called SENS and a SENS-based automatic test generation tool called TGSENS. Our approach is summarized as follows: (1) A user describes the requirements of target embedded network systems by logical property-based constraints using SENS. (2) Given SENS specifications, test cases are automatically generated using a SAT-based solver. Filtering mechanisms to select efficient test cases are also available in our tool. (3) In addition, given a testing goal by the user, test sequences are automatically extracted from exhaustive test cases. We implemented our approach and conducted several experiments on practical case studies. Through the experiments, we confirmed the efficiency of our approach in the design and test generation of real embedded air-conditioning network systems.

  16. Automatic Emotional State Detection using Facial Expression Dynamic in Videos

    Directory of Open Access Journals (Sweden)

    Hongying Meng

    2014-11-01

    Full Text Available In this paper, an automatic emotion detection system is built for a computer or machine to detect the emotional state from facial expressions in human-computer communication. First, dynamic motion features are extracted from facial expression videos, and then advanced machine learning methods for classification and regression are used to predict the emotional states. The system is evaluated on two publicly available datasets, i.e. GEMEP_FERA and AVEC2013, and satisfactory performance is achieved in comparison with the provided baseline results. With this emotional state detection capability, a machine can read the facial expressions of its user automatically. This technique can be integrated into applications such as smart robots, interactive games and smart surveillance systems.

  17. FacetGist: Collective Extraction of Document Facets in Large Technical Corpora.

    Science.gov (United States)

    Siddiqui, Tarique; Ren, Xiang; Parameswaran, Aditya; Han, Jiawei

    2016-10-01

    Given the large volume of technical documents available, it is crucial to automatically organize and categorize these documents to be able to understand and extract value from them. Towards this end, we introduce a new research problem called Facet Extraction. Given a collection of technical documents, the goal of Facet Extraction is to automatically label each document with a set of concepts for the key facets (e.g., application, technique, evaluation metrics, and dataset) that people may be interested in. Facet Extraction has numerous applications, including document summarization, literature search, patent search and business intelligence. The major challenge in performing Facet Extraction arises from multiple sources: concept extraction, concept-to-facet matching, and facet disambiguation. To tackle these challenges, we develop FacetGist, a framework for facet extraction. Facet Extraction involves constructing a graph-based heterogeneous network to capture information available across multiple local sentence-level features, as well as global context features. We then formulate a joint optimization problem, and propose an efficient algorithm for graph-based label propagation to estimate the facet of each concept mention. Experimental results on technical corpora from two domains demonstrate that Facet Extraction can lead to an improvement of over 25% in both precision and recall over competing schemes.
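
    The propagation step can be pictured with a generic diffusion sketch (this is the standard normalized label propagation update, not necessarily FacetGist's exact algorithm; alpha and the iteration count are assumed):

        import numpy as np

        def propagate_labels(W, Y, n_iter=50, alpha=0.9):
            """W: (n x n) affinity matrix; Y: (n x k) seed-label indicators."""
            d = W.sum(axis=1)
            d[d == 0] = 1.0
            S = W / np.sqrt(np.outer(d, d))             # symmetric normalization
            F = Y.astype(float).copy()
            for _ in range(n_iter):
                F = alpha * (S @ F) + (1 - alpha) * Y   # diffuse, stay anchored to seeds
            return F.argmax(axis=1)                     # facet label per node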

  18. Selective extraction of dimethoate from cucumber samples by use of molecularly imprinted microspheres

    Directory of Open Access Journals (Sweden)

    Jiao-Jiao Du

    2015-06-01

    Full Text Available Molecularly imprinted polymers for dimethoate recognition were synthesized by the precipitation polymerization technique using methyl methacrylate (MMA) as the functional monomer and ethylene glycol dimethacrylate (EGDMA) as the cross-linker. The morphology, adsorption and recognition properties were investigated by scanning electron microscopy (SEM), a static adsorption test, and a competitive adsorption test. To obtain the best selectivity and binding performance, the synthesis and adsorption conditions of the MIPs were optimized through single-factor experiments. Under the optimized conditions, the resultant polymers exhibited uniform size, satisfactory binding capacity and significant selectivity. Furthermore, the imprinted polymers were successfully applied as specific solid-phase extractants, combined with high performance liquid chromatography (HPLC), for the determination of dimethoate residues in cucumber samples. The average recoveries for three spiked samples ranged from 78.5% to 87.9% with relative standard deviations (RSDs) of less than 4.4%, and the limit of detection (LOD) obtained for dimethoate was as low as 2.3 μg/mL. Keywords: Molecularly imprinted polymer, Precipitation polymerization, Dimethoate, Cucumber, HPLC

  19. A technique for automatically extracting useful field of view and central field of view images.

    Science.gov (United States)

    Pandey, Anil Kumar; Sharma, Param Dev; Aheer, Deepak; Kumar, Jay Prakash; Sharma, Sanjay Kumar; Patel, Chetan; Kumar, Rakesh; Bal, Chandra Sekhar

    2016-01-01

    It is essential to ensure the uniform response of the single photon emission computed tomography gamma camera system before using it for clinical studies by exposing it to a uniform flood source. Vendor-specific acquisition and processing protocols provide for studying flood source images along with quantitative uniformity parameters such as integral and differential uniformity. However, a significant difficulty is that the time required to acquire a flood source image varies from 10 to 35 min, depending both on the activity of the Cobalt-57 flood source and on the counts prespecified in the vendor's protocol (usually 4000K-10,000K counts). If the acquired total counts are less than the prespecified total counts, then the vendor's uniformity processing protocol does not proceed with the computation of the quantitative uniformity parameters. In this study, we have developed and verified a technique for reading the flood source image, removing unwanted information, and automatically extracting and saving the useful field of view and central field of view images for the calculation of the uniformity parameters. This was implemented using MATLAB R2013b running on the Ubuntu operating system and was verified by subjecting it to simulated and real flood source images. The accuracy of the technique was found to be encouraging, especially in view of practical difficulties with vendor-specific protocols. It may be used as a preprocessing step while calculating uniformity parameters of the gamma camera in lesser time with fewer constraints.
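
    In NEMA terminology, the useful field of view (UFOV) is conventionally the central 95% of the detector's active area and the central field of view (CFOV) is 75% of the UFOV. A minimal Python sketch of cropping both from a flood image follows, assuming those conventional fractions and a simple intensity threshold to locate the active area (the original work used MATLAB):

```python
import numpy as np

def crop_fields_of_view(flood, ufov_frac=0.95, cfov_frac=0.75):
    """Locate the detector's active area in a flood image and crop UFOV/CFOV."""
    mask = flood > 0.1 * flood.max()          # crude active-area threshold (assumption)
    rows, cols = np.where(mask)
    r0, r1 = rows.min(), rows.max()
    c0, c1 = cols.min(), cols.max()

    def shrink(lo, hi, frac):
        half = (hi - lo) * frac / 2.0
        mid = (lo + hi) / 2.0
        return int(mid - half), int(mid + half)

    ur0, ur1 = shrink(r0, r1, ufov_frac)      # UFOV bounding box
    uc0, uc1 = shrink(c0, c1, ufov_frac)
    cr0, cr1 = shrink(ur0, ur1, cfov_frac)    # CFOV inside the UFOV
    cc0, cc1 = shrink(uc0, uc1, cfov_frac)
    return flood[ur0:ur1, uc0:uc1], flood[cr0:cr1, cc0:cc1]
```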

  20. A technique for automatically extracting useful field of view and central field of view images

    International Nuclear Information System (INIS)

    Pandey, Anil Kumar; Sharma, Param Dev; Aheer, Deepak; Kumar, Jay Prakash; Sharma, Sanjay Kumar; Patel, Chetan; Kumar, Rakesh; Bal, Chandra Sekhar

    2016-01-01

    It is essential to ensure the uniform response of the single photon emission computed tomography gamma camera system before using it for clinical studies by exposing it to a uniform flood source. Vendor-specific acquisition and processing protocols provide for studying flood source images along with quantitative uniformity parameters such as integral and differential uniformity. However, a significant difficulty is that the time required to acquire a flood source image varies from 10 to 35 min, depending both on the activity of the Cobalt-57 flood source and on the counts prespecified in the vendor's protocol (usually 4000K-10,000K counts). If the acquired total counts are less than the prespecified total counts, then the vendor's uniformity processing protocol does not proceed with the computation of the quantitative uniformity parameters. In this study, we have developed and verified a technique for reading the flood source image, removing unwanted information, and automatically extracting and saving the useful field of view and central field of view images for the calculation of the uniformity parameters. This was implemented using MATLAB R2013b running on the Ubuntu operating system and was verified by subjecting it to simulated and real flood source images. The accuracy of the technique was found to be encouraging, especially in view of practical difficulties with vendor-specific protocols. It may be used as a preprocessing step while calculating uniformity parameters of the gamma camera in lesser time with fewer constraints.

  1. Automatic radar target recognition of objects falling on railway tracks

    International Nuclear Information System (INIS)

    Mroué, A; Heddebaut, M; Elbahhar, F; Rivenq, A; Rouvaen, J-M

    2012-01-01

    This paper presents an automatic radar target recognition procedure based on complex resonances, using the signals provided by ultra-wideband radar. The procedure is dedicated to the detection and identification of objects lying on railway tracks. For efficient complex resonance extraction, a comparison between several pole extraction methods is illustrated. In addition, preprocessing methods are presented that aim to remove most of the erroneous poles interfering with the discrimination scheme. Once the physical poles are determined, a specific discrimination technique based on Euclidean distances is introduced. Both simulation and experimental results are presented, showing efficient discrimination of different targets, including guided transport passengers

  2. Enhancement of human adaptive immune responses by administration of a high-molecular-weight polysaccharide extract from the cyanobacterium Arthrospira platensis

    DEFF Research Database (Denmark)

    Pedersen, Morten Løbner; Walsted, Anette; Larsen, Rune

    2008-01-01

    The effect of consumption of Immulina, a high-molecular-weight polysaccharide extract from the cyanobacterium Arthrospira platensis, on adaptive immune responses was investigated by evaluation of changes in leukocyte responsiveness to two foreign recall antigens, Candida albicans (CA) and tetanus...

  3. A new uranium automatic analyzer

    International Nuclear Information System (INIS)

    Xia Buyun; Zhu Yaokun; Wang Bin; Cong Peiyuan; Zhang Lan

    1993-01-01

    A new uranium automatic analyzer based on the flow injection analysis (FIA) principle has been developed. It consists of a multichannel peristaltic pump, an injection valve, a photometric detector, a single-chip microprocessor system and electronic circuitry. The newly designed multifunctional auto-injection valve can automatically change the injection volume of the sample and the channels, so that the determination ranges and items can easily be changed. It can also vary the FIA operation modes, giving the analyzer the functionality of a universal instrument. A chromatographic column with extractant-containing resin was installed in the manifold of the analyzer for the concentration and separation of trace uranium. 2-(5-bromo-2-pyridylazo)-5-diethylaminophenol (Br-PADAP) was used as the colour reagent. Uranium was determined in aqueous solution with the addition of cetylpyridinium bromide (CPB). Uranium in solution in the range 0.02-500 mg·L⁻¹ can be directly determined without any pretreatment. A sample throughput of 30-90 h⁻¹ and a reproducibility of 1-2% were obtained. The analyzer has been satisfactorily applied in the laboratory and the plant

  4. Real-Time Automatic Fetal Brain Extraction in Fetal MRI by Deep Learning

    OpenAIRE

    Salehi, Seyed Sadegh Mohseni; Hashemi, Seyed Raein; Velasco-Annis, Clemente; Ouaalam, Abdelhakim; Estroff, Judy A.; Erdogmus, Deniz; Warfield, Simon K.; Gholipour, Ali

    2017-01-01

    Brain segmentation is a fundamental first step in neuroimage analysis. In the case of fetal MRI, it is particularly challenging and important due to the arbitrary orientation of the fetus, organs that surround the fetal head, and intermittent fetal motion. Several promising methods have been proposed but are limited in their performance in challenging cases and in real-time segmentation. We aimed to develop a fully automatic segmentation method that independently segments sections of the feta...

  5. Combining automatic table classification and relationship extraction in extracting anticancer drug-side effect pairs from full-text articles.

    Science.gov (United States)

    Xu, Rong; Wang, QuanQiu

    2015-02-01

    Anticancer drug-associated side effect knowledge often exists in multiple heterogeneous and complementary data sources. A comprehensive anticancer drug-side effect (drug-SE) relationship knowledge base is important for computation-based drug target discovery, drug toxicity prediction and drug repositioning. In this study, we present a two-step approach that combines table classification and relationship extraction to extract drug-SE pairs from a large number of high-profile oncological full-text articles. The data consist of 31,255 tables downloaded from the Journal of Oncology (JCO). We first trained a statistical classifier to classify tables into SE-related and SE-unrelated categories. We then extracted drug-SE pairs from the SE-related tables. We compared drug side effect knowledge extracted from JCO tables to that derived from FDA drug labels. Finally, we systematically analyzed relationships between anticancer drug-associated side effects and drug-associated gene targets, metabolism genes, and disease indications. The statistical table classifier is effective in classifying tables into SE-related and -unrelated categories (precision: 0.711; recall: 0.941; F1: 0.810). We extracted a total of 26,918 drug-SE pairs from SE-related tables with a precision of 0.605, a recall of 0.460, and an F1 of 0.520. Drug-SE pairs extracted from JCO tables are largely complementary to those derived from FDA drug labels; as many as 84.7% of the pairs extracted from JCO tables had not been included in a side effect database constructed from FDA drug labels. Side effects associated with anticancer drugs positively correlate with drug target genes, drug metabolism genes, and disease indications. Copyright © 2014 Elsevier Inc. All rights reserved.
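
    A minimal sketch of the first step — a statistical classifier separating SE-related from SE-unrelated tables — might use bag-of-words features over flattened table text. The feature choice and model (TF-IDF plus logistic regression) and the tiny training examples are assumptions for illustration, not the authors' exact setup.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# tables: table contents flattened to strings (caption + cell text); hypothetical data
tables = ["grade 3-4 neutropenia 12% fatigue 8% nausea 15%",
          "patient baseline characteristics age sex performance status"]
labels = [1, 0]  # 1 = side-effect-related table, 0 = unrelated

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1),
                    LogisticRegression(max_iter=1000))
clf.fit(tables, labels)

# Step 2 would mine drug-SE pairs only from tables predicted SE-related.
print(clf.predict(["grade 2 vomiting 9% diarrhea 11%"]))
```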

  6. Automatic topics segmentation for TV news video

    Science.gov (United States)

    Hmayda, Mounira; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    Automatic identification of television programs in the TV stream is an important task for operating archives. This article proposes a new spatio-temporal approach to identify the programs in a TV stream in two main steps. First, a reference catalogue of visual jingles is built from video features: the features that characterize instances of the same program type are used to identify the different types of programs in the television stream, and their role is to represent the visual invariants of each jingle using appropriate automatic descriptors for each television program. Second, programs in television streams are identified by examining the similarity of the video signal to the visual jingles in the catalogue; the main idea of the identification process is to compare the visual similarity of the video signal features in the television stream to the catalogue. After presenting the proposed approach, the paper reports encouraging experimental results on several streams extracted from different channels and composed of several programs.

  7. Automatic Visualization of Software Requirements: Reactive Systems

    International Nuclear Information System (INIS)

    Castello, R.; Mili, R.; Tollis, I.G.; Winter, V.

    1999-01-01

    In this paper we present an approach that facilitates the validation of high-consequence system requirements. This approach consists of automatically generating a graphical representation from an informal document. Our choice of graphical notation is statecharts. We proceed in two steps: we first extract a hierarchical decomposition tree from a textual description, then we draw a graph that models the statechart in a hierarchical fashion. The resulting drawing is an effective requirements assessment tool that allows the end user to easily pinpoint inconsistencies and incompleteness

  8. Comparative characterization of humic substances extracted from freshwater and peat of different apparent molecular sizes

    Directory of Open Access Journals (Sweden)

    Eliane Sloboda Rigobello

    2017-09-01

    Full Text Available This paper compares the structural characteristics of aquatic humic substances (AHS) with humic substances from peat (HSP) through different analytical techniques, including elemental analysis, solid-state 13C cross-polarization/magic-angle-spinning nuclear magnetic resonance spectroscopy (13C CP-MAS NMR), ultraviolet/visible (UV/Vis) spectroscopy, Fourier transform infrared (FTIR) spectroscopy and total organic carbon (TOC) analysis. The AHS were extracted from water collected in a tributary of the Itapanhaú River (Bertioga/SP) using XAD-8 resin, and the HSP were extracted from peat collected on the Mogi Guaçu River bank (Luis Antonio/SP) with a KOH solution. After dialysis, both AHS and HSP extracts were filtered through a membrane of 0.45 µm pore size (fraction F1: < 0.45 µm) and fractionated by ultrafiltration into different apparent molecular sizes (AMS) (F2: 100 kDa-0.45 μm; F3: 30 kDa-100 kDa; and F4: < 30 kDa). The extracts with the lowest AMS (F3 and F4) showed a higher number of aliphatic carbons than aromatic carbons, a higher concentration of oxygen-containing groups and a higher percentage of fulvic acids (FA) than humic acids (HA) for both AHS and HSP. However, the AHS presented a higher FA content relative to HA than the HSP did, as well as distinct structural properties.

  9. TagDust2: a generic method to extract reads from sequencing data.

    Science.gov (United States)

    Lassmann, Timo

    2015-01-28

    Arguably the most basic step in the analysis of next generation sequencing (NGS) data involves the extraction of mappable reads from the raw reads produced by sequencing instruments. The presence of barcodes, adaptors and artifacts subject to sequencing errors makes this step non-trivial. Here I present TagDust2, a generic approach utilizing a library of hidden Markov models (HMMs) to accurately extract reads from a wide array of possible read architectures. TagDust2 extracts more reads of higher quality compared to other approaches. Processing of multiplexed single-end and paired-end libraries, as well as libraries containing unique molecular identifiers, is fully supported. Two additional post-processing steps are included to exclude known contaminants and filter out low-complexity sequences. Finally, TagDust2 can automatically detect the library type of sequenced data from a predefined selection. Taken together, TagDust2 is a feature-rich, flexible and adaptive solution to go from raw to mappable NGS reads in a single step. The ability to recognize and record the contents of raw reads will help to automate and demystify the initial, and often poorly documented, steps in NGS data analysis pipelines. TagDust2 is freely available at: http://tagdust.sourceforge.net .
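
    To make the idea of a "read architecture" concrete, the toy sketch below splits a raw read into barcode, UMI and insert segments at fixed offsets and strips a known 3' adapter. This deterministic stand-in only illustrates the problem TagDust2 solves; its actual HMM approach additionally tolerates sequencing errors and variable segment boundaries. The architecture and adapter sequence are illustrative assumptions.

```python
# Toy read architecture: [6 bp barcode][8 bp UMI][insert][optional 3' adapter]
ADAPTER = "AGATCGGAAGAGC"  # a common Illumina adapter prefix, used here as an example

def parse_read(seq, barcode_len=6, umi_len=8):
    """Split a raw read into (barcode, umi, insert) and trim the adapter."""
    barcode = seq[:barcode_len]
    umi = seq[barcode_len:barcode_len + umi_len]
    insert = seq[barcode_len + umi_len:]
    cut = insert.find(ADAPTER)     # exact match only; an HMM handles mismatches
    if cut != -1:
        insert = insert[:cut]
    return barcode, umi, insert

print(parse_read("ACGTAC" + "TTGGCCAA" + "GATTACAGATTACA" + ADAPTER))
```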

  10. A needle extraction utilizing a molecularly imprinted-sol-gel xerogel for on-line microextraction of the lung cancer biomarker bilirubin from plasma and urine samples.

    Science.gov (United States)

    Moein, Mohammad Mahdi; Jabbar, Dunia; Colmsjö, Anders; Abdel-Rehim, Mohamed

    2014-10-31

    In the present work, a needle trap utilizing a molecularly imprinted sol-gel xerogel was prepared for the on-line microextraction of bilirubin from plasma and urine samples. Each prepared needle could be used for approximately one hundred extractions before it was discarded. Imprinted and non-imprinted sol-gel xerogels were applied for the extraction of bilirubin from plasma and urine samples. The produced molecularly imprinted sol-gel xerogel polymer showed high binding capacity and fast adsorption/desorption kinetics for bilirubin in plasma and urine samples; its adsorption capacity was approximately 60% higher than that of the non-imprinted polymer. The effects of the conditioning, washing and elution solvents, pH, extraction time, adsorption capacity and imprinting factor were investigated. The limit of detection and the lower limit of quantification were 1.6 and 5 nmol L⁻¹, respectively, using plasma or urine samples. Standard calibration curves were obtained within the concentration range of 5-1000 nmol L⁻¹ in both plasma and urine samples. The coefficients of determination (R²) were ≥0.998 for all runs. The extraction recovery was approximately 80% for bilirubin in the human plasma and urine samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Strategies for the extraction and analysis of non-extractable polyphenols from plants.

    Science.gov (United States)

    Domínguez-Rodríguez, Gloria; Marina, María Luisa; Plaza, Merichel

    2017-09-08

    The majority of studies on phenolic compounds from plants focus on the extractable fraction derived from an aqueous or aqueous-organic extraction. However, an important fraction of polyphenols is ignored because it remains retained in the extraction residue. These are the so-called non-extractable polyphenols (NEPs): high molecular weight polymeric polyphenols, or individual low molecular weight phenolics associated with macromolecules. The scarce information available about NEPs shows that these compounds possess interesting biological activities, which is why interest in their study has been increasing in recent years. Furthermore, the extraction and characterization of NEPs are considered a challenge because the analytical methodologies developed so far present some limitations. Thus, the present literature review summarizes current knowledge of NEPs and the different methodologies for the extraction of these compounds, with a particular focus on hydrolysis treatments. Besides, this review provides information on the most recent developments in the purification, separation, identification and quantification of NEPs from plants. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Feature extraction and descriptor calculation methods for automatic georeferencing of Philippines' first microsatellite imagery

    Science.gov (United States)

    Tupas, M. E. A.; Dasallas, J. A.; Jiao, B. J. D.; Magallon, B. J. P.; Sempio, J. N. H.; Ramos, M. K. F.; Aranas, R. K. D.; Tamondong, A. M.

    2017-10-01

    The FAST-SIFT corner detector and descriptor extractor combination was used to automatically georeference DIWATA-1 Spaceborne Multispectral Imager (SMI) images. The Features from Accelerated Segment Test (FAST) algorithm detects corners or keypoints in an image, and these robustly detected keypoints have well-defined positions. Descriptors were then computed using the Scale-Invariant Feature Transform (SIFT) extractor. The FAST-SIFT method effectively matched SMI same-subscene images detected by the NIR sensor, and the method was also tested in stitching NIR images with varying subscenes swept by the camera. The slave images were matched to the master image, and the keypoints served as the ground control points. Keypoints are matched based on their descriptor vectors; nearest-neighbor matching is employed based on a metric distance between the descriptors, such as the Euclidean or city-block distance. Rough matching outputs not only correct matches but also faulty ones, so random sample consensus (RANSAC) was used to eliminate fall-out matches and ensure the accuracy of the feature points from which the transformation parameters were derived: this method identifies whether a point fits the transformation function and returns the inlier matches. A previous work on automatic georeferencing incorporated a geometric restriction; in this work, we applied a simplified version of that learning method. The transformation matrix was solved with affine, projective, and polynomial models. The accuracy of the automatic georeferencing method was determined by calculating the RMSE of interest points, selected randomly, between the master image and the transformed slave image.
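
    The FAST detection, SIFT description, ratio-test matching and RANSAC-filtered affine estimation described above map directly onto standard OpenCV calls. The sketch below is a generic reconstruction of that pipeline, not the authors' code; the FAST threshold, ratio-test constant and reprojection threshold are assumptions.

```python
import cv2
import numpy as np

def estimate_affine(master, slave):
    """Match FAST keypoints with SIFT descriptors and fit an affine model via RANSAC."""
    fast = cv2.FastFeatureDetector_create(threshold=25)
    sift = cv2.SIFT_create()
    kp1 = fast.detect(master, None)
    kp2 = fast.detect(slave, None)
    kp1, des1 = sift.compute(master, kp1)
    kp2, des2 = sift.compute(slave, kp2)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test

    src = np.float32([kp2[m.trainIdx].pt for m in good])
    dst = np.float32([kp1[m.queryIdx].pt for m in good])
    A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                      ransacReprojThreshold=3.0)
    return A  # 2x3 matrix warping the slave image onto the master
```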

  13. SEVA Linkers: A Versatile and Automatable DNA Backbone Exchange Standard for Synthetic Biology

    DEFF Research Database (Denmark)

    Kim, Se Hyeuk; Cavaleiro, Mafalda; Rennig, Maja

    2016-01-01

    flexibility, and different researchers prefer and master different molecular technologies. Here, we describe a new, highly versatile and automatable standard “SEVA linkers” for vector exchange. SEVA linkers enable backbone swapping with 20 combinations of classical enzymatic restriction/ligation, Gibson...

  14. COSMO-RS-based extractant screening for phenol extraction as model system

    NARCIS (Netherlands)

    Burghoff, B.; Goetheer, E.L.V.; Haan, A.B. de

    2008-01-01

    The focus of this investigation is the development of a fast and reliable extractant screening approach. Phenol extraction is selected as the model process. A quantum chemical conductor-like screening model for real solvents (COSMO-RS) is combined with molecular design considerations. For this...

  15. Comparison of automatic and visual methods used for image segmentation in Endodontics: a microCT study.

    Science.gov (United States)

    Queiroz, Polyane Mazucatto; Rovaris, Karla; Santaella, Gustavo Machado; Haiter-Neto, Francisco; Freitas, Deborah Queiroz

    2017-01-01

    To calculate root canal volume and surface area in microCT images, image segmentation by selecting threshold values is required; these values can be determined by visual or automatic methods. Visual determination is influenced by the operator's visual acuity, while the automatic method is done entirely by computer algorithms. The aims were to compare visual and automatic segmentation, and to determine the influence of the operator's visual acuity on the reproducibility of root canal volume and area measurements. Images from 31 extracted human anterior teeth were scanned with a μCT scanner. Three experienced examiners performed visual image segmentation, and the threshold values were recorded. Automatic segmentation was done using the "Automatic Threshold Tool" available in the dedicated software provided by the scanner's manufacturer. Volume and area measurements were performed using the threshold values determined both visually and automatically. The paired Student's t-test showed no significant difference between the visual and automatic segmentation methods regarding root canal volume (p=0.93) and root canal surface area (p=0.79) measurements. Although both visual and automatic segmentation methods can be used to determine the threshold and calculate root canal volume and surface area, the automatic method may be the most suitable for ensuring the reproducibility of threshold determination.
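
    An operator-independent threshold of the kind used here can be illustrated with Otsu's method. The sketch below uses scikit-image as a stand-in for the vendor's "Automatic Threshold Tool"; the voxel size and the assumption that the canal appears darker than dentin are illustrative, not taken from the study.

```python
import numpy as np
from skimage.filters import threshold_otsu

def canal_volume(stack, voxel_mm=0.02):
    """Segment a microCT volume with Otsu's threshold; return canal volume in mm^3.

    stack: 3D numpy array of grayscale slices; voxel_mm: isotropic voxel size (assumed).
    """
    t = threshold_otsu(stack)   # automatic, operator-independent threshold
    canal = stack < t           # root canal is darker than dentin (assumption)
    return canal.sum() * voxel_mm ** 3

print(canal_volume(np.random.rand(100, 128, 128)))  # placeholder data
```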

  16. Semi-automatic building extraction in informal settlements from high-resolution satellite imagery

    Science.gov (United States)

    Mayunga, Selassie David

    The extraction of man-made features from digital remotely sensed images is considered an important step underpinning the management of human settlements in any country. Man-made features, and buildings in particular, are required for a variety of applications such as urban planning, the creation of geographical information system (GIS) databases, and urban city models. Traditional man-made feature extraction methods are very expensive in terms of equipment, labour intensive, require well-trained personnel, and cannot cope with changing environments, particularly in dense urban settlement areas. This research presents an approach for extracting buildings in dense informal settlement areas using high-resolution satellite imagery. The proposed system uses a novel strategy of extracting a building by measuring a single point at the approximate centre of the building. The fine measurement of the building outline is then effected using a modified snake model. The original snake model on which this framework is based incorporates an external constraint energy term which is tailored to preserving the convergence properties of the snake model; its application to unstructured objects would negatively affect their actual shapes. The external constraint energy term was therefore removed from the original snake model formulation, giving the model the ability to cope with the high variability of building shapes in informal settlement areas. The proposed building extraction system was tested on two areas with different conditions. The first area was Tungi in Dar es Salaam, Tanzania, where three sites were tested; this area is characterized by informal settlements established illegally within the city boundaries. The second area was Oromocto in New Brunswick, Canada, where two sites were tested; the Oromocto area is mostly flat and the buildings are constructed using similar materials. Qualitative and quantitative measures were employed to evaluate the accuracy of the results as well as the performance
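
    A snake (active contour) refinement of the kind described — initialized from a single seed point at the building centre — can be sketched with scikit-image's active_contour. The initialization radius and energy weights below are illustrative assumptions, and this generic model omits the dissertation's modifications to the energy terms.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def refine_building_outline(image, cx, cy, r0=15, n=200):
    """Grow a circular snake from a seed point (cx, cy) to a building outline."""
    theta = np.linspace(0, 2 * np.pi, n)
    init = np.column_stack([cy + r0 * np.sin(theta),   # rows
                            cx + r0 * np.cos(theta)])  # cols
    snake = active_contour(gaussian(image, 2),         # smooth for stable gradients
                           init, alpha=0.015, beta=10.0, gamma=0.001)
    return snake  # (n, 2) array of refined outline coordinates
```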

  17. Human visual system automatically encodes sequential regularities of discrete events.

    Science.gov (United States)

    Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki

    2010-06-01

    For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential regularities embedded in discrete sensory events.

  18. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable source of information about their operational status and health. However, their massive size, complexity, and lack of standard format make it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
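
    The syntactic-structure idea can be illustrated by masking variable fields (numbers, hex addresses) so that messages sharing a template collapse into one group. This is a simplified sketch of the general technique, not the paper's clustering algorithm; the example log lines are hypothetical.

```python
import re
from collections import defaultdict

def template_of(msg):
    """Reduce a log message to its syntactic template by masking variable fields."""
    msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", msg)
    msg = re.sub(r"\d+", "<NUM>", msg)
    return msg

logs = [
    "node 17 temperature 81C exceeds threshold",
    "node 42 temperature 79C exceeds threshold",
    "ECC error at address 0x7f3a12",
]

groups = defaultdict(list)
for line in logs:
    groups[template_of(line)].append(line)

for template, members in groups.items():
    print(len(members), template)   # the first two lines share one template
```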

  19. Automatic segmentation of mandible in panoramic x-ray.

    Science.gov (United States)

    Abdi, Amir Hossein; Kasaei, Shohreh; Mehdizadeh, Mojdeh

    2015-10-01

    As the panoramic x-ray is the most common extraoral radiograph in dentistry, segmentation of its anatomical structures facilitates diagnosis and the registration of dental records. This study presents a fast and accurate method for the automatic segmentation of the mandible in panoramic x-rays. In the proposed four-step algorithm, the superior border is extracted through horizontal integral projections. A modified Canny edge detector accompanied by morphological operators extracts the inferior border of the mandible body. The exterior borders of the ramuses are extracted through a contour tracing method based on an average model of the mandible. The best-matched template is fetched from an atlas of mandibles to complete the contours of the left and right processes. The algorithm was tested on a set of 95 panoramic x-rays. Evaluating the results against the manual segmentations of three expert dentists showed that the method is robust. It achieved an average performance of [Formula: see text] in Dice similarity, specificity, and sensitivity.

  20. Automatic Tortuosity-Based Retinopathy of Prematurity Screening System

    Science.gov (United States)

    Sukkaew, Lassada; Uyyanonvara, Bunyarit; Makhanov, Stanislav S.; Barman, Sarah; Pangputhipong, Pannet

    Retinopathy of Prematurity (ROP) is an infant disease characterized by increased dilation and tortuosity of the retinal blood vessels. Automatic tortuosity evaluation from retinal digital images is very useful for facilitating an ophthalmologist's ROP screening and preventing childhood blindness. This paper proposes a method to automatically classify images as tortuous or non-tortuous. The process imitates expert ophthalmologists' screening by searching for clearly tortuous vessel segments. First, a skeleton of the retinal blood vessels is extracted from the original infant retinal image using a series of morphological operators. Next, we propose to partition the blood vessels recursively using an adaptive linear interpolation scheme. Finally, the tortuosity is calculated based on the curvature of the resulting vessel segments. The retinal images are then classified into two classes using the segments characterized by the highest tortuosity. For an optimal set of training parameters, the prediction accuracy is as high as 100%.
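
    Curvature-based tortuosity of a skeletonized vessel segment can be computed with the standard formula κ = |x'y'' − y'x''| / (x'² + y'²)^(3/2), averaged along the segment. The sketch below is a generic illustration of that formulation, not the paper's exact measure.

```python
import numpy as np

def mean_curvature_tortuosity(xs, ys):
    """Average unsigned curvature along a sampled vessel centerline."""
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    kappa = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return kappa.mean()

# A straight segment scores near zero; a wiggly one scores higher.
t = np.linspace(0, 1, 200)
print(mean_curvature_tortuosity(t, np.zeros_like(t)))       # ~0
print(mean_curvature_tortuosity(t, 0.02 * np.sin(20 * t)))  # clearly larger
```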

  1. Molecular Docking Studies and Anti-Tyrosinase Activity of Thai Mango Seed Kernel Extract

    Directory of Open Access Journals (Sweden)

    Patchreenart Saparpakorn

    2009-01-01

    Full Text Available The alcoholic extract from seed kernels of the Thai mango (Mangifera indica L. cv. 'Fahlun') (Anacardiaceae) and its major phenolic principle (pentagalloylglucopyranose) exhibited potent, dose-dependent inhibitory effects on tyrosinase with respect to L-DOPA. Molecular docking studies revealed that the binding orientations of the phenolic principles were in the tyrosinase binding pocket, located in the hydrophobic region surrounding the binuclear copper active site. The results indicated a possible mechanism for their anti-tyrosinase activity, which may involve an ability to chelate the copper atoms required for the catalytic activity of tyrosinase.

  2. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    Science.gov (United States)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals serves as an important primary approach to diagnose cardiovascular diseases (CVDs) for centuries. Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction technique has witnessed explosive development. Yet, most existing HS feature extraction methods adopt acoustic or time-frequency features which exhibit poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling such a bottleneck problem, this paper innovatively proposes a novel murmur-based HS feature extraction method since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences of heart valves. Adapting discrete wavelet transform (DWT) and Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS and 5 various abnormal HS signals with extracted features, the proposed method provides an attractive candidate in automatic HS auscultation.
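
    The two signal-processing building blocks named here — DWT decomposition to isolate murmur-related sub-bands and the Shannon energy envelope — can be sketched as follows. The wavelet family, decomposition level, and the choice of which sub-bands to keep are illustrative assumptions, not the paper's settings.

```python
import numpy as np
import pywt

def shannon_envelope(x, eps=1e-12):
    """Shannon energy envelope: emphasizes medium-intensity murmur components."""
    xn = x / (np.abs(x).max() + eps)
    return -(xn ** 2) * np.log(xn ** 2 + eps)

def murmur_band(hs, wavelet="db6", level=5, keep=(3, 4)):
    """Reconstruct the heart-sound signal from selected detail sub-bands only."""
    coeffs = pywt.wavedec(hs, wavelet, level=level)
    for i in range(len(coeffs)):
        if i not in keep:                      # zero out non-murmur sub-bands
            coeffs[i] = np.zeros_like(coeffs[i])
    return pywt.waverec(coeffs, wavelet)

hs = np.random.randn(4000)                    # placeholder for a real HS recording
env = shannon_envelope(murmur_band(hs))       # envelope from which features derive
```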

  3. Molecularly imprinted solid-phase extraction of glutathione from urine samples

    International Nuclear Information System (INIS)

    Song, Renyuan; Hu, Xiaoling; Guan, Ping; Li, Ji; Zhao, Na; Wang, Qiaoli

    2014-01-01

    Molecularly imprinted polymer (MIP) particles for glutathione were synthesized through iniferter-controlled living radical precipitation polymerization (IRPP) under ultraviolet radiation at ambient temperature. Static adsorption, solid-phase extraction, and high-performance liquid chromatography were carried out to evaluate the adsorption properties and selective recognition characteristics of the polymers for glutathione and its structural analogs. The obtained IRPP-MIP particles exhibited a regularly spherical shape, rapid binding kinetics, a high imprinting factor, and high selectivity compared with MIP particles prepared using traditional free-radical precipitation polymerization. The selective separation and enrichment of glutathione from a mixture of glycyl-glycine and glutathione disulfide could be achieved on the IRPP-MIP cartridge. The recoveries of glutathione, glycyl-glycine, and glutathione disulfide were 95.6% ± 3.65%, 29.5% ± 1.26%, and 49.9% ± 1.71%, respectively. The detection limit (S/N = 3) of glutathione was 0.5 mg·L⁻¹. The relative standard deviations (RSDs) for 10 replicate detections of 50 mg·L⁻¹ glutathione were 5.76%, and the linear range of the calibration curve was 0.5 mg·L⁻¹ to 200 mg·L⁻¹ under optimized conditions. The proposed approach was successfully applied to determine glutathione in spiked human urine samples with recoveries of 90.24% to 96.20% and RSDs of 0.48% to 5.67%. - Highlights: • Imprinted polymer particles were prepared by IRPP at ambient temperature. • High imprinting factor, high selectivity, and rapid binding kinetics were achieved. • Selective solid-phase extraction of glutathione from human urine samples

  4. Analysis of iminosugars and other low molecular weight carbohydrates in Aglaonema sp. extracts by hydrophilic interaction liquid chromatography coupled to mass spectrometry.

    Science.gov (United States)

    Rodríguez-Sánchez, S; García-Sarrió, M J; Quintanilla-López, J E; Soria, A C; Sanz, M L

    2015-12-04

    A method based on hydrophilic interaction liquid chromatography coupled to tandem mass spectrometry (HILIC-MS²) has been successfully developed for the simultaneous analysis of bioactive iminosugars and other low molecular weight carbohydrates in Aglaonema leaf extracts. Among other experimental chromatographic conditions, the mobile phase eluents, additives and column temperature were evaluated in terms of the retention time, resolution, peak width and symmetry provided for the target carbohydrates. In general, narrow peaks (wh: 0.2-0.6 min) with good symmetry (As: 0.9-1.3) and excellent resolution (Rs > 1.8) were obtained for iminosugars using an acetonitrile:water gradient with 5 mM ammonium acetate in both eluents at 55 °C. Tandem mass spectra were used to confirm the presence of previously detected iminosugars in Aglaonema extracts and to tentatively identify others for the first time, such as a miglitol isomer, glycosyl-miglitol isomers and glycosyl-DMDP isomers. The concentration of total iminosugars varied from 1.35 to 2.84 mg g⁻¹ in the extracts of the different Aglaonema samples analyzed. To the best of our knowledge, this is the first time that a HILIC-MS² method has been proposed for the simultaneous analysis of iminosugars and other low molecular weight carbohydrates in Aglaonema sp. extracts. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Automatic Vessel Segmentation on Retinal Images

    Institute of Scientific and Technical Information of China (English)

    Chun-Yuan Yu; Chia-Jen Chang; Yen-Ju Yao; Shyr-Shen Yu

    2014-01-01

    Several features of retinal vessels can be used to monitor the progression of diseases. Changes in vascular structures, for example vessel caliber, branching angle, and tortuosity, are portents of many diseases such as diabetic retinopathy and arterial hypertension. This paper proposes an automatic retinal vessel segmentation method based on morphological closing and multi-scale line detection. First, an illumination correction is performed on the green-band retinal image. Next, morphological closing and subtraction processing are applied to obtain a crude retinal vessel image. Then, multi-scale line detection is used to refine the vessel image. Finally, the binary vasculature is extracted by the Otsu algorithm. In this paper, to mitigate the drawbacks of multi-scale line detection, only line detectors at 4 scales are used. The experimental results show that the accuracy is 0.939 for the DRIVE (Digital Retinal Images for Vessel Extraction) database, which is much better than other methods.
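
    The closing-and-subtraction step amounts to a morphological bottom-hat that highlights dark, thin vessels against the brighter retinal background. A minimal OpenCV sketch of that stage follows; the structuring-element size is an assumption, and the multi-scale line-detection refinement is omitted.

```python
import cv2
import numpy as np

def crude_vessel_map(bgr):
    """Closing minus original (bottom-hat) on the green band highlights dark vessels."""
    green = bgr[:, :, 1]                                       # green band: best vessel contrast
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    closed = cv2.morphologyEx(green, cv2.MORPH_CLOSE, kernel)  # fills in dark vessels
    crude = cv2.subtract(closed, green)                        # vessels become bright
    _, binary = cv2.threshold(crude, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return crude, binary
```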

  6. Relations between Automatically Extracted Motion Features and the Quality of Mother-Infant Interactions at 4 and 13 Months.

    Science.gov (United States)

    Egmose, Ida; Varni, Giovanna; Cordes, Katharina; Smith-Nielsen, Johanne; Væver, Mette S; Køppe, Simo; Cohen, David; Chetouani, Mohamed

    2017-01-01

    Bodily movements are an essential component of social interactions. However, the role of movement in early mother-infant interaction has received little attention in the research literature. The aim of the present study was to investigate the relationship between automatically extracted motion features and interaction quality in mother-infant interactions at 4 and 13 months. The sample consisted of 19 mother-infant dyads at 4 months and 33 mother-infant dyads at 13 months. The Coding Interactive Behavior (CIB) system was used for rating the quality of the interactions. The kinetic energy of upper-body, arm and head motion was calculated and used as a segmentation criterion for extracting coarse- and fine-grained motion features. Spearman correlations were computed between the composites derived from the CIB and the coarse- and fine-grained motion features. At both 4 and 13 months, longer durations of maternal arm motion and infant upper-body motion were associated with more aversive interactions, i.e., more parent-led interactions and more infant negativity. Further, at 4 months, the amount of motion silence was related to more adaptive interactions, i.e., more sensitive and child-led interactions. Analyses of the fine-grained motion features showed that if the mother coordinates her head movements with her infant's head movements, the interaction is rated as more adaptive in terms of less infant negativity and fewer dyadic negative states. We found more and stronger correlations between the motion features and the interaction qualities at 4 months than at 13 months. These results highlight that motion features are related to the quality of mother-infant interactions. Factors such as infant age and interaction set-up are likely to modify the meaning and importance of different motion features.
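
    A kinetic-energy proxy of the kind used for segmentation here is proportional to the squared frame-to-frame displacement of tracked points. A minimal sketch follows; the keypoint source, frame rate, unit mass and threshold are all assumptions for illustration.

```python
import numpy as np

def kinetic_energy(traj, fps=25.0, mass=1.0):
    """Per-frame kinetic-energy proxy from a (T, K, 2) keypoint trajectory."""
    vel = np.diff(traj, axis=0) * fps             # displacement per second, K keypoints
    return 0.5 * mass * (vel ** 2).sum(axis=(1, 2))

def motion_segments(energy, thresh):
    """Boolean mask of 'moving' frames used to segment coarse motion features."""
    return energy > thresh

traj = np.cumsum(np.random.randn(100, 5, 2), axis=0)  # placeholder trajectory
print(motion_segments(kinetic_energy(traj), thresh=50.0).mean())
```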

  7. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    Science.gov (United States)

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas in order to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues which draw attention to solution-relevant information, and aid in organizing and integrating it, facilitate both immediate problem solving and the generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming an impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that help a solver overcome an impasse and organize and integrate problem information also greatly facilitate arriving at correct solutions.

  8. Semi Automatic Ontology Instantiation in the domain of Risk Management

    Science.gov (United States)

    Makki, Jawad; Alquier, Anne-Marie; Prince, Violaine

    One of the challenging tasks in the context of Ontological Engineering is to automatically or semi-automatically support the process of Ontology Learning and Ontology Population from semi-structured documents (texts). In this paper we describe a Semi-Automatic Ontology Instantiation method from natural language text, in the domain of Risk Management. This method is composed of three steps: 1) annotation with part-of-speech tags, 2) semantic relation instance extraction, 3) ontology instantiation. It is based on combined NLP techniques, with human intervention between steps 2 and 3 for control and validation. Since it heavily relies on linguistic knowledge, it is not domain dependent, which is a good feature for portability between the different fields of risk management application. The proposed methodology uses the ontology of the PRIMA project (supported by the European Community) as a generic domain ontology and populates it via an available corpus. A first validation of the approach is done through an experiment with Chemical Fact Sheets from the Environmental Protection Agency.

  9. An automatic tooth preparation technique: A preliminary study

    Science.gov (United States)

    Yuan, Fusong; Wang, Yong; Zhang, Yaopeng; Sun, Yuchun; Wang, Dangxiao; Lyu, Peijun

    2016-04-01

    The aim of this study is to validate the feasibility and accuracy of a new automatic tooth preparation technique in dental healthcare. An automatic tooth preparation robotic device with three-dimensional motion planning software was developed, which controlled an ultra-short pulse laser (USPL) beam (wavelength 1,064 nm, pulse width 15 ps, output power 30 W, and repetition rate 100 kHz) to complete the tooth preparation process. A total of 15 freshly extracted intact human first molars were collected and fixed into a phantom head, and the target preparation shapes of these molars were designed using customised computer-aided design (CAD) software. The accuracy of tooth preparation was evaluated using the Geomagic Studio and Imageware software, and the preparation time for each tooth was recorded. Compared with the target preparation shape, the average shape error of the 15 prepared molars was 0.05-0.17 mm, the preparation depth error of the occlusal surface was approximately 0.097 mm, and the error of the convergence angle was approximately 1.0°. The average preparation time was 17 minutes. These results validated the accuracy and feasibility of the automatic tooth preparation technique.

  10. Automatic 3d Building Model Generations with Airborne LiDAR Data

    Science.gov (United States)

    Yastikli, N.; Cetin, Z.

    2017-11-01

    LiDAR systems have become more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth's surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, 3-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications; thus, a simple and quick approach for automatic 3D building model generation is needed for the many studies which include building modelling. This study aims at automatic 3D building model generation from airborne LiDAR data. An approach is proposed that includes an automatic point-based classification of the raw LiDAR point cloud, using hierarchical rules, for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules were performed to improve the classification results, using different test areas identified in the study area. The proposed approach was tested in the study area, which has partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building models were generated automatically using the results of the automatic point-based classification. The results obtained in the study area verified that automatic 3D building models can be generated successfully from airborne LiDAR data.

  11. AUTOMATIC 3D BUILDING MODEL GENERATIONS WITH AIRBORNE LiDAR DATA

    Directory of Open Access Journals (Sweden)

    N. Yastikli

    2017-11-01

    Full Text Available LiDAR systems have become more and more popular because of their potential for obtaining point clouds of vegetation and man-made objects on the earth's surface in an accurate and quick way. Nowadays, these airborne systems are frequently used in a wide range of applications such as DEM/DSM generation, topographic mapping, object extraction, vegetation mapping, 3-dimensional (3D) modelling and simulation, change detection, engineering works, revision of maps, coastal management and bathymetry. 3D building model generation is one of the most prominent applications of LiDAR systems and is of major importance for urban planning, illegal construction monitoring, 3D city modelling, environmental simulation, tourism, security, telecommunication, mobile navigation, etc. Manual or semi-automatic 3D building model generation is a costly and very time-consuming process for these applications; thus, a simple and quick approach for automatic 3D building model generation is needed for the many studies which include building modelling. This study aims at automatic 3D building model generation from airborne LiDAR data. An approach is proposed that includes an automatic point-based classification of the raw LiDAR point cloud, using hierarchical rules, for the automatic production of 3D building models. Detailed analyses of the parameters used in the hierarchical rules were performed to improve the classification results, using different test areas identified in the study area. The proposed approach was tested in the study area, which has partly open areas, forest areas and many types of buildings, in Zekeriyakoy, Istanbul, using the TerraScan module of TerraSolid. The 3D building models were generated automatically using the results of the automatic point-based classification. The results obtained in the study area verified that automatic 3D building models can be generated successfully from airborne LiDAR data.
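
    A hierarchical, rule-based point classification of the kind described in the two records above can be sketched as successive filters on height above ground and local planarity. The thresholds and the coarse grid-cell planarity proxy below are illustrative assumptions, not the rules tuned in the study.

```python
import numpy as np

def classify_points(xyz, ground_z, height_thresh=2.5, planarity_thresh=0.02):
    """Toy hierarchical rules: ground -> elevated planar roofs -> vegetation.

    xyz: (n, 3) LiDAR points; ground_z: per-point interpolated ground elevation.
    Planarity is approximated from the z-variance inside a coarse 1 m grid cell.
    """
    height = xyz[:, 2] - ground_z
    labels = np.full(len(xyz), "ground", dtype=object)
    above = height > height_thresh                      # rule 1: elevated points
    cell = np.floor(xyz[:, :2]).astype(int)             # rule 2: planarity proxy
    _, inv = np.unique(cell, axis=0, return_inverse=True)
    zvar = np.array([xyz[inv == i, 2].var()
                     for i in range(inv.max() + 1)])[inv]
    labels[above & (zvar < planarity_thresh)] = "building"
    labels[above & (zvar >= planarity_thresh)] = "vegetation"
    return labels
```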

  12. A hybrid model based on neural networks for biomedical relation extraction.

    Science.gov (United States)

    Zhang, Yijia; Lin, Hongfei; Yang, Zhihao; Wang, Jian; Zhang, Shaowu; Sun, Yuanyuan; Yang, Liang

    2018-05-01

    Biomedical relation extraction aims to automatically extract high-quality biomedical relations from biomedical texts, which is a vital step in mining the biomedical knowledge hidden in the literature. Recurrent neural networks (RNNs) and convolutional neural networks (CNNs) are the two major neural network models for biomedical relation extraction. Neural network-based methods typically focus on the sentence sequence and employ RNNs or CNNs separately to learn the latent features from sentence sequences. However, RNNs and CNNs each have their own advantages for biomedical relation extraction, and combining them may improve performance. In this paper, we present a hybrid model for the extraction of biomedical relations that combines RNNs and CNNs. First, the shortest dependency path (SDP) is generated based on the dependency graph of the candidate sentence. To make full use of the SDP, we divide it into a dependency word sequence and a relation sequence. Then, RNNs and CNNs are employed to automatically learn the features from the sentence sequence and the dependency sequences, respectively. Finally, the output features of the RNNs and CNNs are combined to detect and extract biomedical relations. We evaluate our hybrid model using five public protein-protein interaction (PPI) corpora and a drug-drug interaction (DDI) corpus. The experimental results suggest that the advantages of RNNs and CNNs in biomedical relation extraction are complementary, and that combining them can effectively boost performance. Copyright © 2018 Elsevier Inc. All rights reserved.
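
    The architecture outlined in the abstract — an RNN over one token sequence and a CNN over another, with the two feature vectors concatenated for classification — can be sketched in PyTorch as below. All dimensions and the single-layer choices are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class HybridRelationModel(nn.Module):
    """BiLSTM over the sentence sequence + CNN over the SDP sequence, concatenated."""
    def __init__(self, vocab=10000, emb=100, hidden=128, filters=128, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.cnn = nn.Conv1d(emb, filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(2 * hidden + filters, classes)

    def forward(self, sentence_ids, sdp_ids):
        # RNN branch: final hidden states summarize the full sentence sequence.
        _, (h, _) = self.rnn(self.embed(sentence_ids))
        rnn_feat = torch.cat([h[0], h[1]], dim=1)            # (B, 2*hidden)
        # CNN branch: max-over-time pooling on the dependency-path sequence.
        conv = torch.relu(self.cnn(self.embed(sdp_ids).transpose(1, 2)))
        cnn_feat = conv.max(dim=2).values                    # (B, filters)
        return self.fc(torch.cat([rnn_feat, cnn_feat], dim=1))

model = HybridRelationModel()
logits = model(torch.randint(0, 10000, (4, 40)),   # batch of 4 sentences
               torch.randint(0, 10000, (4, 12)))   # matching SDP sequences
```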

  13. Exploration of Web Users' Search Interests through Automatic Subject Categorization of Query Terms.

    Science.gov (United States)

    Pu, Hsiao-tieh; Yang, Chyan; Chuang, Shui-Lung

    2001-01-01

    Proposes a mechanism that carefully integrates human and machine efforts to explore Web users' search interests. The approach consists of a four-step process: extraction of core terms; construction of subject taxonomy; automatic subject categorization of query terms; and observation of users' search interests. Research findings are proved valuable…

  14. Automatic coronary calcium scoring using noncontrast and contrast CT images

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Guanyu; Chen, Yang; Shu, Huazhong; Ning, Xiufang; Sun, Qiaoyu [Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing (China)]; Coatrieux, Jean-Louis [INSERM-U1099 and LTSI, Université de Rennes 1, Rennes (France)]

    2016-05-15

    Purpose: Calcium scoring is widely used to assess the risk of coronary heart disease (CHD). Accurate detection of coronary artery calcification in noncontrast CT images is a prerequisite step for coronary calcium scoring. Currently, calcified lesions in the coronary arteries are manually identified by radiologists in clinical practice. Thus, in this paper, a fully automatic calcium scoring method was developed to alleviate the workload of radiologists and cardiologists. Methods: The challenge of automatic coronary calcification detection is to discriminate calcification in the coronary arteries from calcification in other tissues. Since the anatomy of the coronary arteries is difficult to observe in noncontrast CT images, the contrast CT image of the same patient is used to extract the regions of the aorta, heart, and coronary arteries. Then, a patient-specific region-of-interest (ROI) is generated in the noncontrast CT image according to the segmentation results in the contrast CT image. This patient-specific ROI focuses on the regions in the neighborhood of the coronary arteries for calcification detection, which eliminates calcifications in the surrounding tissues. A support vector machine classifier is finally applied to refine the results by removing possible image noise. Furthermore, the calcified lesions in the noncontrast images belonging to the different main coronary arteries are identified automatically using the labeling results of the extracted coronary arteries. Results: Forty datasets from four different CT machine vendors, provided by the MICCAI 2014 Coronary Calcium Scoring (orCaScore) Challenge, were used to evaluate the algorithm. The sensitivity and positive predictive value for the volume of detected calcifications are 0.989 and 0.948. Only one patient out of 40 was assigned to the wrong risk category, defined according to Agatston scores (0, 1–100, 101–300, >300), when comparing with the ground truth.

  15. Molecular alterations of tropoelastin and proteoglycans induced by tobacco smoke extracts and ultraviolet A in cultured skin fibroblasts

    International Nuclear Information System (INIS)

    Yin, Lei; Morita, Akimichi; Tsuji, Takuo

    2002-01-01

    Functional integrity of normal skin depends on the balance between the biosynthesis and degradation of the extracellular matrix, which is primarily composed of collagen, elastin and proteoglycans. In our previous studies, we found that tobacco smoke extracts decreased the expression of type I and III procollagen and induced matrix metalloproteinase-1 (MMP-1) and MMP-3 in cultured skin fibroblasts. Here we further investigated the effects of tobacco smoke extracts or ultraviolet A (UVA) treatment on the expression of tropoelastin (the soluble elastin precursor), and of versican and decorin (proteoglycans), in cultured skin fibroblasts. Tropoelastin mRNA was increased by tobacco smoke extracts or UVA irradiation. Western blotting showed that versican decreased markedly after these treatments, and the mRNA of versican V0 also decreased significantly. UVA treatment did not produce a remarkable change in decorin protein, but resulted in a marked decrease in decorin D1 mRNA. In contrast to UVA irradiation, treatment with tobacco smoke extracts resulted in a significant increase in decorin, while decorin D1 mRNA decreased compared to the control. MMP-7 increased after treatment with tobacco smoke extracts or UVA. These results indicate that common molecular features may underlie the premature skin aging induced by tobacco smoke extracts and UVA, including abnormal regulation of extracellular matrix deposition through elevated MMPs, reduced collagen production, abnormal tropoelastin accumulation, and altered proteoglycans. (author)

  16. Molecular alterations of tropoelastin and proteoglycans induced by tobacco smoke extracts and ultraviolet A in cultured skin fibroblasts

    Energy Technology Data Exchange (ETDEWEB)

    Yin, Lei; Morita, Akimichi; Tsuji, Takuo [Nagoya City Univ. (Japan). Medical School

    2002-02-01

    Functional integrity of normal skin depends on the balance between the biosynthesis and degradation of the extracellular matrix, which is primarily composed of collagen, elastin and proteoglycans. In our previous studies, we found that tobacco smoke extracts decreased the expression of type I and III procollagen and induced matrix metalloproteinase-1 (MMP-1) and MMP-3 in cultured skin fibroblasts. Here we further investigated the effects of tobacco smoke extracts or ultraviolet A (UVA) treatment on the expression of tropoelastin (the soluble elastin precursor), and of versican and decorin (proteoglycans), in cultured skin fibroblasts. Tropoelastin mRNA was increased by tobacco smoke extracts or UVA irradiation. Western blotting showed that versican decreased markedly after these treatments, and the mRNA of versican V0 also decreased significantly. UVA treatment did not produce a remarkable change in decorin protein, but resulted in a marked decrease in decorin D1 mRNA. In contrast to UVA irradiation, treatment with tobacco smoke extracts resulted in a significant increase in decorin, while decorin D1 mRNA decreased compared to the control. MMP-7 increased after treatment with tobacco smoke extracts or UVA. These results indicate that common molecular features may underlie the premature skin aging induced by tobacco smoke extracts and UVA, including abnormal regulation of extracellular matrix deposition through elevated MMPs, reduced collagen production, abnormal tropoelastin accumulation, and altered proteoglycans. (author)

  17. Automatic differentiation bibliography

    Energy Technology Data Exchange (ETDEWEB)

    Corliss, G.F. [comp.]

    1992-07-01

    This is a bibliography of work related to automatic differentiation. Automatic differentiation is a technique for the fast, accurate propagation of derivative values using the chain rule. It is neither symbolic nor numeric. Automatic differentiation is a fundamental tool for scientific computation, with applications in optimization, nonlinear equations, nonlinear least squares approximation, stiff ordinary differential equations, partial differential equations, continuation methods, and sensitivity analysis. This report is an updated version of the bibliography which originally appeared in Automatic Differentiation of Algorithms: Theory, Implementation, and Application.
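
    As a concrete illustration of derivative propagation by the chain rule, the following minimal forward-mode sketch uses dual numbers; it illustrates the general technique covered by the bibliography, not code from any catalogued work.

        # Forward-mode automatic differentiation: each Dual carries a value
        # and a derivative, and every operation applies the chain rule.
        import math

        class Dual:
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der

            def __add__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.val + other.val, self.der + other.der)

            __radd__ = __add__

            def __mul__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                # Product rule: (uv)' = u'v + uv'
                return Dual(self.val * other.val,
                            self.der * other.val + self.val * other.der)

            __rmul__ = __mul__

        def sin(x):
            # Chain rule: (sin u)' = cos(u) * u'
            return Dual(math.sin(x.val), math.cos(x.val) * x.der)

        # f(x) = x*sin(x) + 3x at x = 2: value and exact derivative in one pass.
        x = Dual(2.0, 1.0)   # seed dx/dx = 1
        f = x * sin(x) + 3 * x
        print(f.val, f.der)  # f(2) and f'(2) = sin(2) + 2*cos(2) + 3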

  18. Advancing Affect Modeling via Preference Learning and Unsupervised Feature Extraction

    DEFF Research Database (Denmark)

    Martínez, Héctor Pérez

    …difficulties, ordinal reports such as rankings and ratings can yield more reliable affect annotations than alternative tools. This thesis explores preference learning methods to automatically learn computational models from ordinal annotations of affect. In particular, an extensive collection of training strategies (error functions and training algorithms) for artificial neural networks is examined across synthetic and psycho-physiological datasets, and compared against support vector machines and Cohen's method. Results reveal the best training strategies for neural networks and suggest their superiority over the other examined methods. The second challenge addressed in this thesis refers to the extraction of relevant information from physiological modalities. Deep learning is proposed as an automatic approach to extract input features for models of affect from physiological signals. Experiments…

  19. Effects of Feature Extraction and Classification Methods on Cyberbully Detection

    OpenAIRE

    ÖZEL, Selma Ayşe; SARAÇ, Esra

    2016-01-01

    Cyberbullying is defined as an aggressive, intentional action against a defenseless person by using the Internet or other electronic content. Researchers have found that many bullying cases have tragically ended in suicide; hence automatic detection of cyberbullying has become important. In this study we show the effects of the feature extraction, feature selection, and classification methods used on the performance of automatic detection of cyberbullying. To perform the exper...

  20. A new generic method for the semi-automatic extraction of river and road networks in low and mid-resolution satellite images

    Energy Technology Data Exchange (ETDEWEB)

    Grazzini, Jacopo [Los Alamos National Laboratory]; Dillard, Scott [PNNL]; Soille, Pierre [EC JRC]

    2010-10-21

    This paper addresses the problem of semi-automatic extraction of road or hydrographic networks in satellite images. For that purpose, we propose an approach combining concepts from mathematical morphology and hydrology. The method exploits both geometrical and topological characteristics of rivers/roads and their tributaries in order to reconstruct the complete networks. It assumes that the images satisfy the following two general conditions, which are the minimum requirements for a road/river network to be identifiable and are usually verified in low- to mid-resolution satellite images: (i) visual constraint: most pixels composing the network have a similar spectral signature that is distinguishable from most of the surrounding areas; (ii) geometric constraint: a line is a region that is relatively long and narrow compared with other objects in the image. While this approach fully exploits local (roads/rivers are modeled as elongated regions with a smooth spectral signature in the image and a maximum width) and global (they are structured like a tree) characteristics of the networks, further directional information about the image structures is incorporated. Namely, an appropriate anisotropic metric is designed by using both the characteristic features of the target network and the eigen-decomposition of the gradient structure tensor of the image. Subsequently, geodesic propagation from a given network seed with this metric is combined with hydrological operators for overland flow simulation to extract the paths which contain most line evidence and identify them with the target network.
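
    The directional information mentioned above, the eigen-decomposition of the gradient structure tensor, can be computed per pixel as in the NumPy/SciPy sketch below; the smoothing scale and derivative operators are assumptions for illustration, not the authors' implementation.

        # Gradient structure tensor and its per-pixel eigen-decomposition,
        # yielding the dominant orientation and an anisotropy measure that
        # is high along elongated structures such as roads and rivers.
        import numpy as np
        from scipy.ndimage import gaussian_filter, sobel

        def structure_tensor_orientation(img, sigma=2.0):
            gx = sobel(img.astype(float), axis=1)
            gy = sobel(img.astype(float), axis=0)
            # Smooth the tensor components over a local neighbourhood.
            jxx = gaussian_filter(gx * gx, sigma)
            jxy = gaussian_filter(gx * gy, sigma)
            jyy = gaussian_filter(gy * gy, sigma)
            # Closed-form eigenvalues of the 2x2 tensor [[jxx, jxy], [jxy, jyy]].
            tr, det = jxx + jyy, jxx * jyy - jxy ** 2
            disc = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
            lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc
            theta = 0.5 * np.arctan2(2.0 * jxy, jxx - jyy)  # dominant orientation
            anisotropy = (lam1 - lam2) / (lam1 + lam2 + 1e-12)
            return theta, anisotropy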

  1. Comparative Analysis of Music Recordings from Western and Non-Western traditions by Automatic Tonal Feature Extraction

    Directory of Open Access Journals (Sweden)

    Emilia Gómez

    2008-09-01

    The automatic analysis of large musical corpora by means of computational models overcomes some limitations of manual analysis, and the unavailability of scores for most existing music makes it necessary to work with audio recordings. Until now, research in this area has focused on music from the Western tradition. Nevertheless, we might ask whether the available methods are suitable for analyzing music from other cultures. We present an empirical approach to the comparative analysis of audio recordings, focusing on tonal features and data mining techniques. Tonal features are related to the pitch class distribution, pitch range and employed scale, gamut and tuning system. We provide our initial but promising results obtained when trying to automatically distinguish music from Western and non-Western traditions; we analyze which descriptors are most relevant and study their distribution over 1500 pieces from different traditions and styles. As a result, some feature distributions differ for Western and non-Western music, and the obtained classification accuracy is higher than 80% for different classification algorithms and an independent test set. These results show that automatic description of audio signals together with data mining techniques provides means to characterize huge music collections from different traditions and complements musicological manual analyses.

  2. Acquisition of data for plasma simulation by automated extraction of terminology from article abstracts

    International Nuclear Information System (INIS)

    Pichl, L.; Suzuki, Manabu; Murata, Masaki; Sasaki, Akira; Kato, Daiji; Murakami, Izumi; Rhee, Yongjoo

    2007-01-01

    Computer simulation of burning plasmas, as well as computational plasma modeling in image processing, requires a large amount of accurate data in addition to a relevant model framework. To this aim, it is very important to recognize, obtain and evaluate data relevant to such a simulation from the literature. This work focuses on the simultaneous search for relevant data across various online databases, the extraction of cataloguing and numerical information, and the automatic recognition of specific terminology in the retrieved text. The concept is illustrated on the particular terminology of atomic and molecular data relevant to edge plasma simulation. The IAEA search engine GENIE and the NIFS search engine Joint Search 2 are compared and discussed. Accurate modeling of the imaged object is considered the ultimate challenge in improving the resolution limits of plasma imaging. (author)

  3. Automatic video shot boundary detection using k-means clustering and improved adaptive dual threshold comparison

    Science.gov (United States)

    Sa, Qila; Wang, Zhihui

    2018-03-01

    At present, content-based video retrieval (CBVR) is the mainstream video retrieval method, using features of the video itself to perform automatic identification and retrieval. This method involves a key technology: shot segmentation. In this paper, a method for automatic video shot boundary detection using k-means clustering and improved adaptive dual-threshold comparison is proposed. First, the visual features of every frame are extracted and divided into two categories using the k-means clustering algorithm, namely frames with significant change and frames with no significant change. Then, the improved adaptive dual-threshold comparison method is applied to the classification results to determine abrupt as well as gradual shot boundaries. Finally, an automatic video shot boundary detection system is obtained.
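
    As a rough illustration of the two-stage idea, the sketch below clusters per-frame feature differences with k-means and then applies a dual threshold; the features, threshold values and library calls are illustrative assumptions, not the paper's implementation.

        # Stage 1: k-means separates frames into "significant change" vs not.
        # Stage 2: a dual threshold splits abrupt cuts from gradual transitions.
        import numpy as np
        from sklearn.cluster import KMeans

        def shot_boundaries(features, t_high=0.6, t_low=0.3):
            # features: (n_frames, d) array, e.g. per-frame colour histograms.
            diffs = np.linalg.norm(np.diff(features, axis=0), axis=1)
            labels = KMeans(n_clusters=2, n_init=10).fit_predict(diffs.reshape(-1, 1))
            # The cluster with the larger mean difference holds candidate boundaries.
            big = labels == np.argmax([diffs[labels == k].mean() for k in (0, 1)])
            cuts, graduals = [], []
            for i in np.nonzero(big)[0]:
                if diffs[i] > t_high:      # abrupt boundary
                    cuts.append(i + 1)
                elif diffs[i] > t_low:     # candidate gradual transition
                    graduals.append(i + 1)
            return cuts, graduals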

  4. Apparatus for extraction and separation of a preferentially photo-dissociated molecular isotope into positive and negative ions by means of an electric field

    International Nuclear Information System (INIS)

    Fletcher, J.C.

    1978-01-01

    Apparatus for the separation and extraction of molecular isotopes is claimed. Molecules of one and the same isotope are preferentially photo-dissociated by a laser and an ultraviolet source, or by multi-photon absorption of laser radiation. The resultant ions are confined by a magnetic field, moved in opposite directions by an electric field, extracted from the photo-dissociation region by means of screening and accelerating grids, and collected in ducts.

  5. Automatic liver volume segmentation and fibrosis classification

    Science.gov (United States)

    Bal, Evgeny; Klang, Eyal; Amitai, Michal; Greenspan, Hayit

    2018-02-01

    In this work, we present an automatic method for liver segmentation and fibrosis classification in liver computed tomography (CT) portal-phase scans. The input is a full abdomen CT scan with an unknown number of slices, and the output is a liver volume segmentation mask and a fibrosis grade. A multi-stage analysis scheme is applied to each scan, including volume segmentation, texture feature extraction and SVM-based classification. The data contain portal-phase CT examinations from 80 patients, taken with different scanners. Each examination has a matching Fibroscan grade. The dataset was subdivided into two groups: the first group contains healthy cases and mild fibrosis; the second group contains moderate fibrosis, severe fibrosis and cirrhosis. Using our automated algorithm, we achieved an average Dice index of 0.93 ± 0.05 for segmentation, and a sensitivity of 0.92 and specificity of 0.81 for classification. To the best of our knowledge, this is the first end-to-end automatic framework for liver fibrosis classification; an approach that, once validated, can have great potential value in the clinic.

  6. Automatic Evaluations and Exercising: Systematic Review and Implications for Future Research.

    Science.gov (United States)

    Schinkoeth, Michaela; Antoniewicz, Franziska

    2017-01-01

    The general purpose of this systematic review was to summarize, structure and evaluate the findings on automatic evaluations of exercising. Studies were eligible for inclusion if they reported measuring automatic evaluations of exercising with an implicit measure and assessed some kind of exercise variable. Fourteen nonexperimental and six experimental studies (out of a total N = 1,928) were identified and rated by two independent reviewers. The main study characteristics were extracted and the grade of evidence for each study evaluated. First, the results revealed large heterogeneity in the measures applied to assess automatic evaluations of exercising and in the exercise variables. Generally, small to large-sized significant relations between automatic evaluations of exercising and exercise variables were identified in the vast majority of studies. The review offers a systematization of the various examined exercise variables and prompts researchers to differentiate more carefully between actually observed exercise behavior (proximal exercise indicator) and associated physiological or psychological variables (distal exercise indicator). Second, a lack of transparently reported reflections on the differing theoretical bases leading to the use of specific implicit measures was observed. Implicit measures should be applied purposefully, taking into consideration the individual advantages and disadvantages of the measures. Third, 12 studies were rated as providing first-grade evidence (the lowest grade), five as second-grade and three as third-grade evidence. There is a dramatic lack of experimental studies, which are essential for illustrating the cause-effect relation between automatic evaluations of exercising and exercise, and for investigating under which conditions automatic evaluations of exercising influence behavior. Conclusions about the necessity of exercise interventions targeted at the alteration of automatic evaluations of exercising should therefore

  7. Matrix molecularly imprinted mesoporous sol–gel sorbent for efficient solid-phase extraction of chloramphenicol from milk

    International Nuclear Information System (INIS)

    Samanidou, Victoria; Kehagia, Maria; Kabir, Abuzar; Furton, Kenneth G.

    2016-01-01

    Highly selective and efficient chloramphenicol-imprinted sol–gel silica-based inorganic polymeric sorbent (sol–gel MIP) was synthesized via a matrix imprinting approach for the extraction of chloramphenicol from milk. Chloramphenicol was used as the template molecule, 3-aminopropyltriethoxysilane (3-APTES) and triethoxyphenylsilane (TEPS) as the functional precursors, tetramethyl orthosilicate (TMOS) as the cross-linker, isopropanol as the solvent/porogen, and HCl as the sol–gel catalyst. A non-imprinted sol–gel polymer (sol–gel NIP) was synthesized under identical conditions in the absence of template molecules for comparison purposes. Both synthesized materials were characterized by Scanning Electron Microscopy (SEM), Fourier Transform Infrared Spectroscopy (FT-IR) and nitrogen adsorption porosimetry, which unambiguously confirmed their significant structural and morphological differences. The synthesized MIP and NIP materials were evaluated as sorbents for molecularly imprinted solid-phase extraction (MISPE) of chloramphenicol from milk. The effect of critical extraction parameters (flow rate, elution solvent, sample and eluent volume, selectivity coefficient, retention capacity) was studied in terms of retention and desorption of chloramphenicol. Competition and cross-reactivity tests proved that the sol–gel MIP sorbent possesses significantly higher specific retention and enrichment capacity for chloramphenicol than its non-imprinted analogue. The maximum imprinting factor (IF) was found to be 9.7, and the highest adsorption capacity of chloramphenicol by the sol–gel MIP was 23 mg/g. The sol–gel MIP was found to be sufficiently selective towards chloramphenicol to meet the minimum required performance limit (MRPL) of 0.3 μg/kg set forth by the European Commission after analysis by LC-MS, even without requiring the time-consuming solvent evaporation and sample reconstitution steps often considered an integral part of solid-phase extraction workflows.

  8. Matrix molecularly imprinted mesoporous sol–gel sorbent for efficient solid-phase extraction of chloramphenicol from milk

    Energy Technology Data Exchange (ETDEWEB)

    Samanidou, Victoria, E-mail: samanidu@chem.auth.gr [Laboratory of Analytical Chemistry, Department of Chemistry, Aristotle University of Thessaloniki (Greece); Kehagia, Maria [Laboratory of Analytical Chemistry, Department of Chemistry, Aristotle University of Thessaloniki (Greece); Kabir, Abuzar, E-mail: akabir@fiu.edu [International Forensic Research Institute, Department of Chemistry and Biochemistry, Florida International University, Miami, FL (United States); Furton, Kenneth G. [International Forensic Research Institute, Department of Chemistry and Biochemistry, Florida International University, Miami, FL (United States)

    2016-03-31

    Highly selective and efficient chloramphenicol-imprinted sol–gel silica-based inorganic polymeric sorbent (sol–gel MIP) was synthesized via a matrix imprinting approach for the extraction of chloramphenicol from milk. Chloramphenicol was used as the template molecule, 3-aminopropyltriethoxysilane (3-APTES) and triethoxyphenylsilane (TEPS) as the functional precursors, tetramethyl orthosilicate (TMOS) as the cross-linker, isopropanol as the solvent/porogen, and HCl as the sol–gel catalyst. A non-imprinted sol–gel polymer (sol–gel NIP) was synthesized under identical conditions in the absence of template molecules for comparison purposes. Both synthesized materials were characterized by Scanning Electron Microscopy (SEM), Fourier Transform Infrared Spectroscopy (FT-IR) and nitrogen adsorption porosimetry, which unambiguously confirmed their significant structural and morphological differences. The synthesized MIP and NIP materials were evaluated as sorbents for molecularly imprinted solid-phase extraction (MISPE) of chloramphenicol from milk. The effect of critical extraction parameters (flow rate, elution solvent, sample and eluent volume, selectivity coefficient, retention capacity) was studied in terms of retention and desorption of chloramphenicol. Competition and cross-reactivity tests proved that the sol–gel MIP sorbent possesses significantly higher specific retention and enrichment capacity for chloramphenicol than its non-imprinted analogue. The maximum imprinting factor (IF) was found to be 9.7, and the highest adsorption capacity of chloramphenicol by the sol–gel MIP was 23 mg/g. The sol–gel MIP was found to be sufficiently selective towards chloramphenicol to meet the minimum required performance limit (MRPL) of 0.3 μg/kg set forth by the European Commission after analysis by LC-MS, even without requiring the time-consuming solvent evaporation and sample reconstitution steps often considered an integral part of solid-phase extraction workflows.

  9. EXTRACT

    DEFF Research Database (Denmark)

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra

    2016-01-01

    The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, manual annotation of samples is a highly labor-intensive process and requires familiarity with the terminologies used. We have the…, organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15-25% and helps curators to detect terms that would otherwise have been missed. Database URL: https://extract.hcmr.gr/

  10. An automatic analyzer for sports video databases using visual cues and real-world modeling

    NARCIS (Netherlands)

    Han, Jungong; Farin, D.S.; With, de P.H.N.; Lao, Weilun

    2006-01-01

    With the advent of hard-disk video recording, video databases are gradually emerging for consumer applications. The large capacity of disks creates the need for fast storage and retrieval functions. We propose a semantic analyzer for sports video, which is able to automatically extract and analyze key

  11. FERMENTATION BY Lactobacillus paracasei OF GALACTOOLIGOSACCHARIDES AND LOW-MOLECULAR-WEIGHT CARBOHYDRATES EXTRACTED FROM SQUASH (Curcubita maxima) AND LUPIN (Lupinus albus) SEEDS

    Directory of Open Access Journals (Sweden)

    María I. Palacio

    2014-02-01

    The in vitro prebiotic activity of galactooligosaccharides (GOS) and low-molecular-weight carbohydrates (LMWC) extracted from lupin and squash seeds on the growth of Lactobacillus paracasei BGP1 was studied. To this end, the change in cell density after 24 h of L. paracasei growth on 1% (w/v) glucose, 1% (w/v) raffinose, 1% (w/v) commercial inulin GR, 1% (w/v) lupin extract, and 1% (w/v) squash extract, relative to the change in cell density of a mixture of enteric strains under the same culture conditions, was evaluated. Additionally, the principal components of GOS and LMWC in the extracts were identified using thin-layer chromatography. The highest prebiotic activity score was for L. paracasei grown on squash extract (0.55±0.03), followed by lupin extract (0.49±0.02), inulin (0.38±0.05) and raffinose (0.37±0.05). These results will contribute to selecting plant species as potential sources of prebiotic ingredients for the development of functional foods.

  12. Easy, fast and environmental friendly method for the simultaneous extraction of the 16 EPA PAHs using magnetic molecular imprinted polymers (mag-MIPs).

    Science.gov (United States)

    Villar-Navarro, Mercedes; Martín-Valero, María Jesús; Fernández-Torres, Rut Maria; Callejón-Mochón, Manuel; Bello-López, Miguel Ángel

    2017-02-15

    An easy and environmentally friendly method, based on the use of magnetic molecularly imprinted polymers (mag-MIPs), is proposed for the simultaneous extraction of the 16 U.S. EPA polycyclic aromatic hydrocarbon (PAH) priority pollutants. The mag-MIP-based extraction protocol is simple, more sensitive and consumes less organic solvent than official methods, and it is also adequate for those PAHs more strongly retained in particulate matter. The new extraction method, followed by HPLC determination, has been validated and applied to different types of water samples: tap water, river water, lake water and mineral water. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. A Neutral-Network-Fusion Architecture for Automatic Extraction of Oceanographic Features from Satellite Remote Sensing Imagery

    National Research Council Canada - National Science Library

    Askari, Farid

    1999-01-01

    This report describes an approach for automatic feature detection from fusion of remote sensing imagery using a combination of neural network architecture and the Dempster-Shafer (DS) theory of evidence...

  14. Automatic multimodal detection for long-term seizure documentation in epilepsy.

    Science.gov (United States)

    Fürbass, F; Kampusch, S; Kaniusas, E; Koren, J; Pirker, S; Hopfengärtner, R; Stefan, H; Kluge, T; Baumgartner, C

    2017-08-01

    This study investigated the sensitivity and false detection rate of a multimodal automatic seizure detection algorithm and its applicability to reduced electrode montages for long-term seizure documentation in epilepsy patients. An automatic seizure detection algorithm based on EEG, EMG, and ECG signals was developed. EEG/ECG recordings of 92 patients from two epilepsy monitoring units, including 494 seizures, were used to assess detection performance. EMG data were extracted by bandpass filtering of the EEG signals. Sensitivity and false detection rate were evaluated for each signal modality and for reduced electrode montages. All focal seizures evolving to bilateral tonic-clonic seizures (BTCS, n=50) and 89% of focal seizures (FS, n=139) were detected. Average sensitivity was 94% in temporal lobe epilepsy (TLE) patients and 74% in extratemporal lobe epilepsy (XTLE) patients. Overall detection sensitivity was 86%. The average false detection rate was 12.8 false detections in 24 h (FD/24 h) for TLE and 22 FD/24 h for XTLE patients. Utilization of 8 frontal and temporal electrodes reduced average sensitivity from 86% to 81%. Our automatic multimodal seizure detection algorithm shows high sensitivity with full and reduced electrode montages. Evaluation of different signal modalities and electrode montages paves the way for semi-automatic seizure documentation systems. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  15. Automatic registration of imaging mass spectrometry data to the Allen Brain Atlas transcriptome

    Science.gov (United States)

    Abdelmoula, Walid M.; Carreira, Ricardo J.; Shyti, Reinald; Balluff, Benjamin; Tolner, Else; van den Maagdenberg, Arn M. J. M.; Lelieveldt, B. P. F.; McDonnell, Liam; Dijkstra, Jouke

    2014-03-01

    Imaging Mass Spectrometry (IMS) is an emerging molecular imaging technology that provides spatially resolved information on biomolecular structures; each image pixel effectively represents a molecular mass spectrum. By combining histological images and IMS images, neuroanatomical structures can be distinguished based on their biomolecular features as opposed to morphological features. The combination of IMS data with spatially resolved gene expression maps of the mouse brain, as provided by the Allen Mouse Brain Atlas, would enable comparative studies of spatial metabolic and gene expression patterns in life-sciences research and biomarker discovery. As such, it would be highly desirable to spatially register IMS slices to the Allen Brain Atlas (ABA). In this paper, we propose a multi-step automatic registration pipeline to register ABA histology to IMS images. A key novelty of the method is the selection of the best reference section from the ABA, based on pre-processed histology sections. First, we extracted a hippocampus-specific geometrical feature from the given experimental histological section to initially localize it among the ABA sections. Then, feature-based linear registration is applied to the initially localized section and its two neighbors in the ABA to select the most similar reference section. A non-rigid registration yields a one-to-one mapping of the experimental IMS slice to the ABA. The pipeline was applied to 6 coronal sections from two mouse brains, showing high anatomical correspondence and demonstrating the feasibility of complementing biomolecule distributions from individual mice with the genome-wide ABA transcriptome.
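
    The reference-section selection step can be illustrated with a simple similarity score: below, each candidate atlas section (assumed already resampled to the experimental section's shape) is scored by mutual information and the best match is kept. This is a hedged stand-in for the feature-based selection described above, not the authors' code.

        # Pick the atlas section most similar to the experimental section.
        import numpy as np

        def mutual_information(a, b, bins=64):
            h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            p = h / h.sum()
            px, py = p.sum(axis=1), p.sum(axis=0)
            nz = p > 0
            return np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz]))

        def best_reference_section(experimental, candidates):
            scores = [mutual_information(experimental, c) for c in candidates]
            return int(np.argmax(scores))  # index of the most similar section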

  16. Software design of automatic counting system for nuclear track based on mathematical morphology algorithm

    International Nuclear Information System (INIS)

    Pan Yi; Mao Wanchong

    2010-01-01

    The measurement of nuclear track parameters occupies an important position in the field of nuclear technology. However, the traditional manual counting method has many limitations. In recent years, DSP and digital image processing technology have been applied in the nuclear field more and more. To reduce the visual-measurement errors of manual counting, an automatic counting system for nuclear tracks based on the DM642 real-time image processing platform is introduced in this article, which is able to effectively remove interference from the background and noise points, as well as automatically extract nuclear track points, by using a mathematical morphology algorithm. (authors)
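
    A minimal sketch of the morphology-based counting idea, assuming a binarize-open-label sequence with SciPy; the threshold and structuring-element size are illustrative, and this is not the DM642 implementation.

        # Remove noise points with morphological opening, then count tracks
        # as connected components of the cleaned binary image.
        import numpy as np
        from scipy import ndimage

        def count_tracks(gray, threshold=128, open_size=3):
            binary = gray < threshold  # assume tracks are darker than background
            opened = ndimage.binary_opening(
                binary, structure=np.ones((open_size, open_size)))
            labels, n_tracks = ndimage.label(opened)
            return n_tracks, labels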

  17. Testing a low molecular mass fraction of a mushroom (Lentinus edodes) extract formulated as an oral rinse in a cohort of volunteers

    NARCIS (Netherlands)

    Signoretto, C.; Burlacchini, G.; Marchi, A.; Grillenzoni, M.; Cavalleri, G.; Ciric, L.; Lingström, P.; Pezzati, E.; Daglia, M.; Zaura, E.; Pratten, J.; Spratt, D.A.; Wilson, M.; Canepari, P.

    2011-01-01

    Although foods are considered enhancing factors for dental caries and periodontitis, laboratory research indicates that several foods and beverages contain components endowed with antimicrobial and antiplaque activities. A low molecular mass (LMM) fraction of an aqueous mushroom extract has been

  18. Clinically-inspired automatic classification of ovarian carcinoma subtypes

    Directory of Open Access Journals (Sweden)

    Aicha BenTaieb

    2016-01-01

    Context: It has been shown that ovarian carcinoma subtypes are distinct pathologic entities with differing prognostic and therapeutic implications. Histotyping by pathologists has good reproducibility, but occasional cases are challenging and require immunohistochemistry and subspecialty consultation. Motivated by the need for more accurate and reproducible diagnoses and to facilitate pathologists' workflow, we propose an automatic framework for ovarian carcinoma classification. Materials and Methods: Our method is inspired by pathologists' workflow. We analyse imaged tissues at two magnification levels and extract clinically inspired color, texture, and segmentation-based shape descriptors using image-processing methods. We propose a carefully designed machine learning technique composed of four modules: a dissimilarity matrix, dimensionality reduction, feature selection and a support vector machine classifier to separate the five ovarian carcinoma subtypes using the extracted features. Results: This paper presents the details of our implementation and its validation on a clinically derived dataset of eighty high-resolution histopathology images. The proposed system achieved a multiclass classification accuracy of 95.0% when classifying unseen tissues. Assessment of the classifier's confusion matrix between the five different ovarian carcinoma subtypes agrees with clinicians' confusion and reflects the difficulty in diagnosing endometrioid and serous carcinomas. Conclusions: Our results from this first study highlight the difficulty of ovarian carcinoma diagnosis, which originates from the intrinsic class imbalance observed among subtypes, and suggest that automatic analysis of ovarian carcinoma subtypes could be valuable to clinicians' diagnostic procedures by providing a second opinion.
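
    The four-module design lends itself to a compact scikit-learn sketch, shown below with assumed components and parameters (Euclidean dissimilarities, PCA, univariate feature selection, an RBF SVM); it illustrates the architecture rather than reproducing the authors' code.

        # Module 1 builds a dissimilarity representation; modules 2-4 are
        # chained as reduction -> selection -> SVM in one pipeline.
        from sklearn.metrics import pairwise_distances
        from sklearn.decomposition import PCA
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        def build_classifier(X_train, y_train):
            d_train = pairwise_distances(X_train, X_train, metric="euclidean")
            clf = make_pipeline(PCA(n_components=20),
                                SelectKBest(f_classif, k=10),
                                SVC(kernel="rbf", C=10))
            clf.fit(d_train, y_train)
            return clf

        # To predict for a new descriptor vector x:
        # d = pairwise_distances(x.reshape(1, -1), X_train); clf.predict(d)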

  19. Automatic segmentation of equine larynx for diagnosis of laryngeal hemiplegia

    Science.gov (United States)

    Salehin, Md. Musfequs; Zheng, Lihong; Gao, Junbin

    2013-10-01

    This paper presents an automatic segmentation method for delineating the clinically significant contours of the equine larynx from an endoscopic image. These contours are used to diagnose the most common disease of the horse larynx, laryngeal hemiplegia. In this study, a hierarchically structured contour map is obtained by the state-of-the-art segmentation algorithm gPb-OWT-UCM. The conic-shaped outer boundary of the equine larynx is extracted based on Pascal's theorem. Lastly, the Hough transform is applied to detect lines related to the edges of the vocal folds. The experimental results show that the proposed approach performs better in extracting the targeted contours of the equine larynx than the gPb-OWT-UCM method alone.
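
    The final line-detection step might look like the OpenCV sketch below; the edge-detector and Hough parameters are assumptions for illustration.

        # Probabilistic Hough transform on an edge map to find line segments
        # along the vocal-fold edges.
        import cv2
        import numpy as np

        def vocal_fold_lines(gray):
            edges = cv2.Canny(gray, 50, 150)
            lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                                    threshold=60, minLineLength=40, maxLineGap=5)
            return [] if lines is None else [l[0] for l in lines]  # (x1, y1, x2, y2)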

  20. Automatic extraction of ontological relations from Arabic text

    Directory of Open Access Journals (Sweden)

    Mohammed G.H. Al Zamil

    2014-12-01

    The proposed methodology has been designed to analyze Arabic text using lexical semantic patterns of the Arabic language according to a set of features. Next, the features are abstracted and enriched with formal descriptions for the purpose of generalizing the resulting rules. These rules then form a classifier that accepts Arabic text, analyzes it, and displays the related concepts labeled with their designated relationships. Moreover, to resolve the ambiguity of homonyms, a set of machine translation, text mining, and part-of-speech tagging algorithms has been reused. We performed extensive experiments to measure the effectiveness of our proposed tools. The results indicate that our proposed methodology is promising for automating the process of extracting ontological relations.
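
    In the spirit of the lexical semantic patterns described above, the sketch below shows rule-based relation extraction over surface patterns; the rules are toy English examples for readability, whereas the actual system targets Arabic morphology.

        # Each rule pairs a surface pattern with the ontological relation it signals.
        import re

        RULES = [
            (re.compile(r"(\w+) is a (?:kind|type) of (?:a |an |the )?(\w+)"), "is-a"),
            (re.compile(r"(\w+) is part of (?:a |an |the )?(\w+)"), "part-of"),
        ]

        def extract_relations(text):
            triples = []
            for pattern, relation in RULES:
                for m in pattern.finditer(text):
                    triples.append((m.group(1), relation, m.group(2)))
            return triples

        print(extract_relations("A wheel is part of a car."))
        # [('wheel', 'part-of', 'car')]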

  1. Wild Roman chamomile extracts and phenolic compounds: enzymatic assays and molecular modelling studies with VEGFR-2 tyrosine kinase.

    Science.gov (United States)

    Guimarães, Rafaela; Calhelha, Ricardo C; Froufe, Hugo J C; Abreu, Rui M V; Carvalho, Ana Maria; Queiroz, Maria João R P; Ferreira, Isabel C F R

    2016-01-01

    Angiogenesis is a process by which new blood vessels are formed from the pre-existing vasculature, and it is a key process leading to tumour development. Some studies have recognized phenolic compounds as chemopreventive agents; flavonoids, in particular, seem to suppress the growth of tumour cells by modifying the cell cycle. Herein, the antiangiogenic activity of Roman chamomile (Chamaemelum nobile L.) extracts (methanolic extract and infusion) and of the main phenolic compounds present (apigenin, apigenin-7-O-glucoside, caffeic acid, chlorogenic acid, luteolin, and luteolin-7-O-glucoside) was evaluated through enzymatic assays using the tyrosine kinase intracellular domain of the Vascular Endothelial Growth Factor Receptor-2 (VEGFR-2), a transmembrane receptor expressed predominantly in endothelial cells and involved in angiogenesis, and through molecular modelling studies. The methanolic extract showed a lower IC50 value (the concentration providing 50% VEGFR-2 inhibition) than the infusion, 269 and 301 μg mL(-1), respectively. Regarding the phenolic compounds, luteolin and apigenin showed the highest capacity to inhibit the phosphorylation of VEGFR-2, leading us to believe that these compounds are involved in the activity revealed by the methanolic extract.

  2. Automatic mining of symptom severity from psychiatric evaluation notes.

    Science.gov (United States)

    Karystianis, George; Nevado, Alejo J; Kim, Chi-Hun; Dehghan, Azad; Keane, John A; Nenadic, Goran

    2018-03-01

    As electronic mental health records become more widely available, several approaches have been suggested to automatically extract information from free-text narrative aiming to support epidemiological research and clinical decision-making. In this paper, we explore extraction of explicit mentions of symptom severity from initial psychiatric evaluation records. We use the data provided by the 2016 CEGS N-GRID NLP shared task Track 2, which contains 541 records manually annotated for symptom severity according to the Research Domain Criteria. We designed and implemented 3 automatic methods: a knowledge-driven approach relying on local lexicalized rules based on common syntactic patterns in text suggesting positive valence symptoms; a machine learning method using a neural network; and a hybrid approach combining the first 2 methods with a neural network. The results on an unseen evaluation set of 216 psychiatric evaluation records showed a performance of 80.1% for the rule-based method, 73.3% for the machine-learning approach, and 72.0% for the hybrid one. Although more work is needed to improve the accuracy, the results are encouraging and indicate that automated text mining methods can be used to classify mental health symptom severity from free text psychiatric notes to support epidemiological and clinical research. © 2017 The Authors International Journal of Methods in Psychiatric Research Published by John Wiley & Sons Ltd.
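
    A minimal sketch of the knowledge-driven component, assuming toy symptom and severity lexicons and a small left-context window; the rules in the paper are richer lexicalized syntactic patterns.

        # Map severity cue words found near a symptom mention to a severity class.
        import re

        SYMPTOMS = {"anhedonia", "irritability", "agitation", "hopelessness"}
        SEVERITY_CUES = {"mild": 1, "moderate": 2, "severe": 3, "extreme": 3}

        def severity_mentions(note, window=3):
            tokens = re.findall(r"[a-z]+", note.lower())
            hits = []
            for i, tok in enumerate(tokens):
                if tok in SYMPTOMS:
                    # Look for a severity modifier a few tokens to the left.
                    for cue in tokens[max(0, i - window):i]:
                        if cue in SEVERITY_CUES:
                            hits.append((tok, cue, SEVERITY_CUES[cue]))
            return hits

        print(severity_mentions("Patient reports severe agitation and mild anhedonia."))
        # [('agitation', 'severe', 3), ('anhedonia', 'mild', 1)]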

  3. Keyword Extraction from Arabic Legal Texts

    Science.gov (United States)

    Rammal, Mahmoud; Bahsoun, Zeinab; Al Achkar Jabbour, Mona

    2015-01-01

    Purpose: The purpose of this paper is to apply local grammar (LG) to develop an indexing system which automatically extracts keywords from titles of Lebanese official journals. Design/methodology/approach: To build LG for our system, the first word that plays the determinant role in understanding the meaning of a title is analyzed and grouped as…

  4. Evaluation of degree of readsorption of radionuclides during sequential extraction in soil: comparison between batch and dynamic extraction systems

    DEFF Research Database (Denmark)

    Petersen, Roongrat; Hansen, Elo Harald; Hou, Xiaolin

    Sequential extraction techniques have been widely used to fractionate metals in solid samples (soils, sediments, solid wastes, etc.) according to their leachability. The results are useful for obtaining information about bioavailability, potential mobility and transport of elements in natural environments. However, the techniques have an important problem with redistribution as a result of readsorption of dissolved analytes onto the remaining solid phases during extraction. Many authors have demonstrated the readsorption problem and the inaccuracy arising from it. In our previous work, a dynamic extraction system developed in our laboratory for heavy metal fractionation has shown a reduction of the readsorption problem in comparison with the batch techniques. Moreover, the system shows many advantages over the batch system, such as speed of extraction, a simple procedure, full automation, and less risk of contamination…

  5. An automatic system for Turkish word recognition using Discrete Wavelet Neural Network based on adaptive entropy

    International Nuclear Information System (INIS)

    Avci, E.

    2007-01-01

    In this paper, an automatic system is presented for word recognition using real Turkish word signals. The paper deals especially with the combination of feature extraction and classification from real Turkish word signals. A Discrete Wavelet Neural Network (DWNN) model is used, which consists of two layers: a discrete wavelet layer and a multi-layer perceptron. The discrete wavelet layer is used for adaptive feature extraction in the time-frequency domain and is composed of the Discrete Wavelet Transform (DWT) and wavelet entropy. The multi-layer perceptron used for classification is a feed-forward neural network. The performance of the system is evaluated using noisy Turkish word signals. Test results showing the effectiveness of the proposed automatic system are presented in this paper. The rate of correct recognition is about 92.5% for the sample speech signals. (author)
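
    The discrete wavelet layer can be sketched as a wavelet decomposition followed by per-sub-band Shannon entropy, as below (PyWavelets; the wavelet family and decomposition level are assumptions).

        # DWT decomposition, then one entropy value per sub-band as the
        # feature vector fed to the multi-layer perceptron.
        import numpy as np
        import pywt

        def wavelet_entropy_features(signal, wavelet="db4", level=4):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            features = []
            for band in coeffs:  # approximation + detail bands
                energy = band ** 2
                p = energy / (energy.sum() + 1e-12)  # normalized energy distribution
                features.append(-np.sum(p * np.log2(p + 1e-12)))
            return np.array(features)  # length = level + 1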

  6. Automatic identification of mobile and rigid substructures in molecular dynamics simulations and fractional structural fluctuation analysis.

    Directory of Open Access Journals (Sweden)

    Leandro Martínez

    Full Text Available The analysis of structural mobility in molecular dynamics plays a key role in data interpretation, particularly in the simulation of biomolecules. The most common mobility measures computed from simulations are the Root Mean Square Deviation (RMSD and Root Mean Square Fluctuations (RMSF of the structures. These are computed after the alignment of atomic coordinates in each trajectory step to a reference structure. This rigid-body alignment is not robust, in the sense that if a small portion of the structure is highly mobile, the RMSD and RMSF increase for all atoms, resulting possibly in poor quantification of the structural fluctuations and, often, to overlooking important fluctuations associated to biological function. The motivation of this work is to provide a robust measure of structural mobility that is practical, and easy to interpret. We propose a Low-Order-Value-Optimization (LOVO strategy for the robust alignment of the least mobile substructures in a simulation. These substructures are automatically identified by the method. The algorithm consists of the iterative superposition of the fraction of structure displaying the smallest displacements. Therefore, the least mobile substructures are identified, providing a clearer picture of the overall structural fluctuations. Examples are given to illustrate the interpretative advantages of this strategy. The software for performing the alignments was named MDLovoFit and it is available as free-software at: http://leandro.iqm.unicamp.br/mdlovofit.

  7. An integrated automatic system to evaluate U and Th dynamic lixiviation from solid matrices, and to extract/pre-concentrate leached analytes previous ICP-MS detection.

    Science.gov (United States)

    Ceballos, Melisa Rodas; García-Tenorio, Rafael; Estela, José Manuel; Cerdà, Víctor; Ferrer, Laura

    2017-12-01

    Leached fractions of U and Th from different environmental solid matrices were evaluated by an automatic system enabling on-line lixiviation and extraction/pre-concentration of these two elements prior to ICP-MS detection. UTEVA resin was used as the selective extraction material. Ten leached fractions, using artificial rainwater (pH 5.4) as the leaching agent, and a residual fraction were analyzed for each sample, allowing the study of the behavior of U and Th under dynamic lixiviation conditions. Multivariate techniques were employed for the efficient optimization of the independent variables that affect the lixiviation process. The system reached LODs of 0.1 and 0.7 ng kg-1 of U and Th, respectively. The method was satisfactorily validated for three solid matrices by the analysis of a soil reference material (IAEA-375), a certified sediment reference material (BCR-320R) and a phosphogypsum reference material (MatControl CSN-CIEMAT 2008). Environmental samples were also analyzed, showing a similar behavior, i.e. the content of radionuclides decreases with successive extractions. In all cases, the cumulative leached fractions of U and Th for the different solid matrices studied (soil, sediment and phosphogypsum) were extremely low, up to 0.05% and 0.005% for U and Th, respectively. However, great variability was observed in terms of the mass concentration released, e.g. between 44 and 13,967 ng U kg-1. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Determination of ciprofloxacin in Jiaozhou Bay using molecularly imprinted solid-phase extraction followed by high-performance liquid chromatography with fluorescence detection

    International Nuclear Information System (INIS)

    Lian, Ziru; Wang, Jiangtao

    2016-01-01

    A highly selective pre-treatment method for the cleanup and preconcentration of ciprofloxacin in natural seawater samples was developed based on molecularly imprinted solid-phase extraction (MISPE). The ciprofloxacin-imprinted polymers were synthesized and the characteristics of the obtained polymers were evaluated by scanning electron microscopy, Fourier transform infrared spectroscopy and binding experiments. The imprinted materials showed high adsorption ability for ciprofloxacin and were applied as special solid-phase extraction sorbents for the selective separation of ciprofloxacin. An off-line MISPE procedure was optimized, and the developed MISPE method allowed direct purification and enrichment of ciprofloxacin from the aqueous samples prior to high-performance liquid chromatography analysis. The recoveries of spiked seawater on the MISPE cartridges ranged from 75.2 to 112.4% and the relative standard deviations were less than 4.46%. Five seawater samples from Jiaozhou Bay were analyzed and ciprofloxacin was detected in two samples at concentrations of 0.24 and 0.38 μg L−1, respectively. - Highlights: • Ciprofloxacin molecularly imprinted polymers (Cip-MIPs) were prepared. • The characteristics and recognition efficiency of MIPs were studied. • An off-line method for Cip was developed using MIPs for solid-phase extraction. • Cip in five seawater samples from Jiaozhou Bay of China was determined.

  9. Semantics-based information extraction for detecting economic events

    NARCIS (Netherlands)

    A.C. Hogenboom (Alexander); F. Frasincar (Flavius); K. Schouten (Kim); O. van der Meer

    2013-01-01

    As today's financial markets are sensitive to breaking news on economic events, accurate and timely automatic identification of events in news items is crucial. Unstructured news items originating from many heterogeneous sources have to be mined in order to extract knowledge useful for

  10. Fully automatic oil spill detection from COSMO-SkyMed imagery using a neural network approach

    Science.gov (United States)

    Avezzano, Ruggero G.; Del Frate, Fabio; Latini, Daniele

    2012-09-01

    The increasing amount of available Synthetic Aperture Radar (SAR) images acquired over the ocean represents an extraordinary potential for improving oil spill detection activities. On the other hand, it involves a growing workload for the operators at analysis centers. In addition, even if operators go through extensive training to learn manual oil spill detection, they can provide different and subjective responses. Hence, upgrades and improvements of algorithms for automatic detection that can help in screening the images and prioritizing the alarms are of great benefit. In the framework of an ASI Announcement of Opportunity for the exploitation of COSMO-SkyMed data, a research activity (ASI contract L/020/09/0) aimed at studying the possibility of using neural network architectures to set up fully automatic processing chains with COSMO-SkyMed imagery has been carried out, and the results are presented in this paper. The automatic identification of an oil spill is seen as a three-step process based on segmentation, feature extraction and classification. We observed that a PCNN (Pulse Coupled Neural Network) was capable of providing satisfactory performance in extracting the different dark spots, close to what would be produced by manual editing. For the classification task a Multi-Layer Perceptron (MLP) neural network was employed.

  11. Automatic radiation dose monitoring for CT of trauma patients with different protocols: feasibility and accuracy

    International Nuclear Information System (INIS)

    Higashigaito, K.; Becker, A.S.; Sprengel, K.; Simmen, H.-P.; Wanner, G.; Alkadhi, H.

    2016-01-01

    Aim: To demonstrate the feasibility and accuracy of automatic radiation dose monitoring software for computed tomography (CT) of trauma patients in a clinical setting over time, and to evaluate the potential for radiation dose reduction using iterative reconstruction (IR). Materials and methods: Over a period of 18 months, data from 378 consecutive thoraco-abdominal CT examinations of trauma patients were extracted using automatic radiation dose monitoring software, and patients were split into three cohorts: cohort 1, 64-section CT with filtered back projection, 200 mAs tube current–time product; cohort 2, 128-section CT with IR and an identical imaging protocol; cohort 3, 128-section CT with IR, 150 mAs tube current–time product. Radiation dose parameters from the software were compared with the individual patient protocols. Image noise was measured and image quality was determined semi-quantitatively. Results: Automatic extraction of radiation dose metrics was feasible and accurate in all (100%) patients. All CT examinations were of diagnostic quality. There were no differences between cohorts 1 and 2 regarding volume CT dose index (CTDIvol; p=0.62), dose–length product (DLP), and effective dose (ED, both p=0.95), while noise was significantly lower (chest and abdomen, both −38%, p<0.017). Compared to cohort 1, CTDIvol, DLP, and ED in cohort 3 were significantly lower (all −25%, p<0.017), as was noise in the chest (−32%) and abdomen (−27%, both p<0.017). Compared to cohort 2, CTDIvol (−28%), DLP, and ED (both −26%) in cohort 3 were significantly lower (all p<0.017), while noise in the chest (+9%) and abdomen (+18%) was significantly higher (all p<0.017). Conclusion: Automatic radiation dose monitoring software is feasible and accurate, and can be implemented in a clinical setting for evaluating the effects of lowering radiation doses of CT protocols over time. - Highlights: • Automatic dose monitoring software can be

  12. A Method of Road Extraction from High-resolution Remote Sensing Images Based on Shape Features

    Directory of Open Access Journals (Sweden)

    LEI Xiaoqi

    2016-02-01

    Road extraction from high-resolution remote sensing images is an important and difficult task. Since remote sensing images contain complicated information, methods that extract roads using spectral, texture and linear features have certain limitations. Also, many methods need human intervention to obtain road seeds (semi-automatic extraction), which implies strong human dependence and low efficiency. A road-extraction method is proposed in this paper that uses image segmentation based on the principle of local gray consistency and integrates shape features. First, the image is segmented, and linear and curved roads are obtained by using several object shape features, thereby rectifying methods that extract only linear roads. Second, road extraction is carried out based on region growing: the road seeds are selected automatically and the road network is extracted. Finally, the extracted roads are regularized by combining edge information. In the experiments, images including roads with relatively uniform gray levels as well as poorly illuminated road surfaces were chosen, and the results prove that the method of this study is promising.

  13. A molecular dynamics study of components of the ginger (Zingiber officinale) extract inside human acetylcholinesterase: implications for Alzheimer disease.

    Science.gov (United States)

    Cuya, Teobaldo; Baptista, Leonardo; Celmar Costa França, Tanos

    2017-11-23

    Components of ginger (Zingiber officinale) extracts have been described as potential new drug candidates against Alzheimer's disease (AD), able to interact with several molecular targets related to AD treatment. However, there are very few theoretical studies in the literature on the possible mechanisms of action by which these compounds could work as anti-AD drugs. For this reason, we performed docking, molecular dynamics simulations and MM-PBSA calculations on four components of ginger extracts formerly reported as active inhibitors of human acetylcholinesterase (HssAChE), and compared our results to the known HssAChE inhibitor and commercial drug in use against AD, donepezil (DNP). Our findings point to two of the compounds studied, (E)-1,7-bis(4-hydroxy-3-methoxyphenyl)hept-4-en-3-one and 1-(3,4-dihydroxy-5-methoxyphenyl)-7-(4-hydroxy-3-methoxyphenyl)heptane-3,5-diyl diacetate, as promising new HssAChE inhibitors that could be as effective as DNP. We also mapped the binding of the studied compounds in the different binding pockets inside HssAChE and established the preferred interactions to be favored in the design of new and more efficient inhibitors.

  14. Automatic Fiscal Stabilizers

    Directory of Open Access Journals (Sweden)

    Narcis Eduard Mitu

    2013-11-01

    Policies or institutions (built into an economic system) that automatically tend to dampen economic cycle fluctuations in income, employment, etc., without direct government intervention. For example, in boom times, progressive income tax automatically reduces the money supply as incomes and spending rise. Similarly, in recessionary times, the payment of unemployment benefits injects more money into the system and stimulates demand. Also called automatic stabilizers or built-in stabilizers.

  15. Characterization of citrus pectin samples extracted under different conditions: influence of acid type and pH of extraction

    DEFF Research Database (Denmark)

    Kaya, Merve; Sousa, Antonio G.; Crepeau, Marie-Jeanne

    2014-01-01

    …on the chemical and macromolecular characteristics of pectin samples. Methods: Citrus peel (orange, lemon, lime and grapefruit) from a commercial supplier was used as raw material. Pectin samples were obtained on a bulk plant scale (kilograms; harsh nitric acid, mild nitric acid and harsh oxalic acid extraction) and on a laboratory scale (grams; mild oxalic acid extraction). Pectin composition (acidic and neutral sugars) and physicochemical properties (molar mass and intrinsic viscosity) were determined. Key Results: Oxalic acid extraction allowed the recovery of pectin samples of high molecular weight. Mild oxalic acid-extracted pectins were rich in long homogalacturonan stretches and contained rhamnogalacturonan I stretches with conserved side chains. Nitric acid-extracted pectins exhibited lower molecular weights and contained rhamnogalacturonan I stretches encompassing few and/or short side chains. Grapefruit pectin was found…

  16. A fast and automatic mosaic method for high-resolution satellite images

    Science.gov (United States)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

    We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed according to the geographical locations of the reference and mosaic images, and feature points are extracted from both images by the scale-invariant feature transform (SIFT) algorithm, only within the overlapped region. Then, the RANSAC method is used to match the feature points of both images. Finally, the two images are fused into a seamless panoramic image by simple linear weighted fusion or another method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested with WorldView-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
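
    The matching core of such a pipeline can be sketched with OpenCV as below (SIFT keypoints in the overlap crops, Lowe's ratio test, RANSAC homography); reading the geographic overlap from image metadata is assumed to happen upstream.

        import cv2
        import numpy as np

        def overlap_homography(ref_roi, mos_roi):
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(ref_roi, None)
            k2, d2 = sift.detectAndCompute(mos_roi, None)
            matcher = cv2.BFMatcher(cv2.NORM_L2)
            good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
                    if m.distance < 0.75 * n.distance]  # Lowe ratio test
            src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            h, _ = cv2.findHomography(dst, src, cv2.RANSAC, 3.0)
            return h  # warps the mosaic image into the reference frame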

  17. Solid phase extraction using molecular imprinting polymers (MISPE) for the determination of estrogens in surface water by HPLC

    Directory of Open Access Journals (Sweden)

    Viviane do Nascimento Bianchi

    2017-05-01

    Estrogens are emerging pollutants, and traditional sewage treatments are unable to remove them. They are harmful to human health and to the environment. It is therefore important to evaluate the presence and concentration of estrogens in water bodies and environmental matrices. This work presents the development and application of a methodology for the determination of E1, E3, EE2 and E2 in surface waters using solid-phase extraction with molecularly imprinted polymers (MISPE) followed by identification and quantification by HPLC-DAD. The mobile phase was acetonitrile and deionized water acidified with phosphoric acid to pH 3 (1:1, v/v), at a flow rate of 1.0 mL min-1, 40°C, and with an injection volume of 5 µL. The method was validated according to the ICH Q2R protocol. Reproducibility and repeatability tests resulted in coefficients of variation smaller than 10%; the calibration curves covered the concentration range from 1 to 20 mg L-1, with linearity values greater than 0.99. The limits of detection and quantification were less than 1 mg L-1, and the method was satisfactory in specificity and selectivity tests using caffeine, which is often found in water bodies receiving effluent, and DES, an estrogen used in the treatment of prostate cancer. Selected samples underwent clean-up and pre-concentration treatments using solid-phase extraction with a commercial phase (C18) and molecularly imprinted polymers (MISPE). The analysis of the MISPE extracts indicates that it is possible to obtain results with greater sensitivity and precision, demonstrating that the developed method can be applied to complex environmental matrices.

  18. Determination of malachite green in fish based on magnetic molecularly imprinted polymer extraction followed by electrochemiluminescence.

    Science.gov (United States)

    Huang, Baomei; Zhou, Xibin; Chen, Jing; Wu, Guofan; Lu, Xiaoquan

    2015-09-01

    A novel procedure for the selective extraction of malachite green (MG) from fish samples was set up using magnetic molecularly imprinted polymers (MMIP) as the solid-phase extraction material, followed by electrochemiluminescence (ECL) determination. The MMIP was prepared using Fe3O4 magnetite as the magnetic component, MG as the template molecule, methacrylic acid (MAA) as the functional monomer and ethylene glycol dimethacrylate (EGDMA) as the crosslinking agent. The MMIP was characterized by SEM, TEM, FT-IR, VSM and XRD. Leucomalachite green (LMG) was oxidized in situ to MG by 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ). The MMIP was then successfully used to selectively enrich MG from fish samples. Adsorbed MG was desorbed and determined by ECL. Under the optimal conditions, the calibration curve was linear in the range of 0.29-290 μg/kg and the limit of detection (LOD) was 7.3 ng/kg (S/N=3). The recoveries of the MMIP extraction were 77.1-101.2%. In addition, the MMIP could be regenerated. To the best of our knowledge, MMIP coupled with ECL quenching of Ru(bpy)3(2+)/TPA for the determination of MG has not been developed before. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Automatic supervision and fault detection of PV systems based on power losses analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chouder, A.; Silvestre, S. [Electronic Engineering Department, Universitat Politecnica de Catalunya, C/Jordi Girona 1-3, Campus Nord UPC, 08034 Barcelona (Spain)

    2010-10-15

    In this work, we present a new automatic supervision and fault detection procedure for PV systems based on power loss analysis. The automatic supervision system has been developed in the Matlab and Simulink environment. It includes parameter extraction techniques to calculate the main PV system parameters from monitoring data under real working conditions, taking into account the evolution of environmental irradiance and module temperature, allowing simulation of the PV system behaviour in real time. The automatic supervision method analyses the output power losses present on the DC side of the PV generator, the capture losses. Two new power loss indicators are defined: thermal capture losses (L_ct) and miscellaneous capture losses (L_cm). Processing these indicators allows the supervision system to generate a faulty signal as an indicator of fault detection in the PV system operation. Two new indicators of the deviation of the DC variables with respect to the simulated ones have also been defined: the current and voltage ratios, R_C and R_V. By analysing both the faulty signal and the current/voltage ratios, the type of fault can be identified. The automatic supervision system has been successfully tested experimentally. (author)

  20. Automatic supervision and fault detection of PV systems based on power losses analysis

    International Nuclear Information System (INIS)

    Chouder, A.; Silvestre, S.

    2010-01-01

    In this work, we present a new automatic supervision and fault detection procedure for PV systems based on power losses analysis. The automatic supervision system has been developed in the Matlab and Simulink environment. It includes parameter extraction techniques to calculate the main PV system parameters from monitoring data under real working conditions, taking into account the evolution of environmental irradiance and module temperature, allowing simulation of the PV system behaviour in real time. The automatic supervision method analyses the output power losses present on the DC side of the PV generator, the capture losses. Two new power losses indicators are defined: thermal capture losses (Lct) and miscellaneous capture losses (Lcm). The processing of these indicators allows the supervision system to generate a faulty signal as an indicator of fault detection in the PV system operation. Two new indicators of the deviation of the DC variables with respect to the simulated ones have also been defined: the current and voltage ratios, RC and RV. By analysing both the faulty signal and the current/voltage ratios, the type of fault can be identified. The automatic supervision system has been successfully tested experimentally.
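
    The loss indicators described in both records lend themselves to a compact sketch. The following is a minimal, hedged illustration of the split of capture losses into a thermal and a miscellaneous part, the current/voltage ratios, and a fault flag; the formula for the thermal part and the fault threshold are assumptions made for illustration, not taken from the paper.

```python
# Hedged sketch of the power-loss indicators (formulas and threshold assumed
# from the abstract, not taken from the original code). Capture losses are
# split into a thermal part (temperature-dependent) and a miscellaneous part.

def capture_losses(y_ref, y_arr):
    """Total capture losses: reference yield minus array yield."""
    return y_ref - y_arr

def thermal_capture_losses(y_ref, t_mod, gamma, t_stc=25.0):
    """Thermal part, assuming a linear power-temperature coefficient gamma (1/K)."""
    return y_ref * gamma * (t_mod - t_stc)

def indicators(y_ref, y_arr, t_mod, gamma, i_meas, i_sim, v_meas, v_sim):
    l_c = capture_losses(y_ref, y_arr)
    l_ct = thermal_capture_losses(y_ref, t_mod, gamma)
    l_cm = l_c - l_ct                          # miscellaneous capture losses
    r_c, r_v = i_meas / i_sim, v_meas / v_sim  # current and voltage ratios
    faulty = l_cm > 0.1 * y_ref                # hypothetical fault threshold
    return l_ct, l_cm, r_c, r_v, faulty

print(indicators(y_ref=5.2, y_arr=4.5, t_mod=48.0, gamma=0.004,
                 i_meas=7.6, i_sim=8.1, v_meas=310.0, v_sim=312.0))
```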

  1. Automatic and Systematic Atomistic Simulations in the MedeA® Software Environment: Application to EU-REACH

    Directory of Open Access Journals (Sweden)

    Rozanska Xavier

    2015-03-01

    Full Text Available This work demonstrates the systematic prediction of thermodynamic properties for batches of thousands of molecules using automated procedures. This is accomplished with newly developed tools and functions within the Material Exploration and Design Analysis (MedeA®) software environment, which handles the automatic execution of sequences of tasks for large numbers of molecules, including the creation of 3D molecular models from 1D representations, systematic exploration of possible conformers for each molecule, the creation and submission of computational tasks for property calculations on parallel computers, and the post-processing for comparison with available experimental properties. After describing the different MedeA® functionalities and methods that make it easy to perform such large numbers of computations, we illustrate the strength and power of the approach with selected examples from molecular mechanics and quantum chemical simulations. Specifically, comparisons of thermochemical data with quantum-based heat capacities and standard energies of formation have been obtained for more than 2,000 compounds, yielding average deviations from experiment of less than 4% with respect to the Design Institute for Physical PRoperties (DIPPR) database. The automatic calculation of the density of molecular fluids is demonstrated for 192 systems. The relaxation to minimum-energy structures and the calculation of vibrational frequencies of 5,869 molecules are evaluated automatically using a semi-empirical quantum mechanical approach with a success rate of 99.9%. The present approach is scalable to large numbers of molecules, thus opening exciting possibilities with the advent of exascale computing.

  2. Transfer Learning for Adaptive Relation Extraction

    Science.gov (United States)

    2011-09-13

    [Fragmentary excerpt; the evaluation uses Automatic Content Extraction (ACE) data: http://www.itl.nist.gov/iad/mig/tests/ace/] In the underlying maximum-entropy model, the weight vector is chosen to be $\vec{\lambda}^{*} = \arg\max_{\vec{\lambda}} \left[ \log \prod_{i=1}^{n} P_{\vec{\lambda}}(y_i \mid x_i) - \sum_{i=1}^{m} \frac{\lambda_i^{2}}{2\sigma^{2}} \right]$, where $\sigma$ controls the degree of regularization.
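
    A minimal sketch of this regularized objective, assuming a binary logistic model for concreteness (the report's MaxEnt relation classifier is multiclass and feature-rich): the conditional log-likelihood is maximized minus the Gaussian-prior penalty controlled by sigma.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the regularized objective above: maximize the conditional
# log-likelihood minus a Gaussian (L2) penalty controlled by sigma.
# A binary logistic model is used for concreteness.

def neg_objective(lam, X, y, sigma):
    z = X @ lam
    log_lik = np.sum(y * z - np.log1p(np.exp(z)))    # sum_i log P_lam(y_i|x_i)
    penalty = np.sum(lam ** 2) / (2.0 * sigma ** 2)  # Gaussian prior term
    return -(log_lik - penalty)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=50) > 0).astype(float)

lam_star = minimize(neg_objective, np.zeros(3), args=(X, y, 1.0)).x
print(lam_star)  # the regularized weight vector
```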

  3. Liquid chromatography-mass spectrometry platform for both small neurotransmitters and neuropeptides in blood, with automatic and robust solid phase extraction

    Science.gov (United States)

    Johnsen, Elin; Leknes, Siri; Wilson, Steven Ray; Lundanes, Elsa

    2015-03-01

    Neurons communicate via chemical signals called neurotransmitters (NTs). The numerous identified NTs can have very different physicochemical properties (solubility, charge, size etc.), so quantification of the various NT classes traditionally requires several analytical platforms/methodologies. We here report that a diverse range of NTs, e.g. the peptides oxytocin and vasopressin, the monoamines adrenaline and serotonin, and the amino acid GABA, can be simultaneously identified/measured in small samples, using an analytical platform based on liquid chromatography and high-resolution mass spectrometry (LC-MS). The automated platform is cost-efficient, as manual sample preparation steps and single-use equipment are kept to a minimum. Zwitterionic HILIC stationary phases were used for both on-line solid-phase extraction (SPE) and liquid chromatography (capillary format, cLC). This approach enabled compounds from all NT classes to elute in small volumes, producing sharp and symmetric signals and allowing precise quantification in small samples, demonstrated with whole blood (100 microliters per sample). An additional robustness-enhancing feature is automatic filtration/filter back-flushing (AFFL), which allows hundreds of samples to be analyzed without any parts needing replacement. The platform can be installed by simple modification of a conventional LC-MS system.

  4. Automatic identification of bullet signatures based on consecutive matching striae (CMS) criteria.

    Science.gov (United States)

    Chu, Wei; Thompson, Robert M; Song, John; Vorburger, Theodore V

    2013-09-10

    The consecutive matching striae (CMS) numeric criteria for firearm and toolmark identifications have been widely accepted by forensic examiners, although questions remain concerning observer subjectivity and limited statistical support. In this paper, based on signal processing and feature extraction, a model for the automatic and objective counting of CMS is proposed. The position and shape information of the striae on the bullet land is represented by a feature profile, which is used for determining the CMS number automatically. Rapid counting of the CMS number provides a basis for ballistics correlations with large databases and for further statistical and probability analysis. Experimental results in this report, using bullets fired from ten consecutively manufactured barrels, support the developed model. Published by Elsevier Ireland Ltd.
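
    A hedged sketch of the counting idea (not the paper's algorithm): detect striae positions as peaks in two aligned land-impression profiles, mark positions that match within a tolerance, and report the longest run of consecutive matches. The profiles, tolerance and peak prominence are illustrative.

```python
import numpy as np
from scipy.signal import find_peaks

# Toy CMS counter: striae are peaks of a feature profile; CMS is the longest
# run of consecutive striae that match between two aligned profiles.

def striae_positions(profile, prominence=0.5):
    peaks, _ = find_peaks(profile, prominence=prominence)
    return peaks

def max_consecutive_matching_striae(pos_a, pos_b, tol=3):
    matched = [any(abs(p - q) <= tol for q in pos_b) for p in pos_a]
    best = run = 0
    for m in matched:
        run = run + 1 if m else 0
        best = max(best, run)
    return best

x = np.linspace(0, 10 * np.pi, 2000)
prof_a = np.sin(3 * x) + 0.05 * np.random.default_rng(1).normal(size=x.size)
prof_b = np.sin(3 * x)  # same barrel -> similar striae pattern
cms = max_consecutive_matching_striae(striae_positions(prof_a),
                                      striae_positions(prof_b))
print("CMS count:", cms)
```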

  5. Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm

    Science.gov (United States)

    Foroutan, M.; Zimbelman, J. R.

    2017-09-01

    The increased availability of high-resolution spatial data, such as high-resolution satellite and Unmanned Aerial Vehicle (UAV) images of Earth as well as High Resolution Imaging Science Experiment (HiRISE) images of Mars, increases the need for automated techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation with repeat imagery in environmental management studies, such as studies of climate-related change, together with increasing access to high-resolution satellite images, underlines the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as Self-Organizing Maps (SOM), to achieve the semi-automatic extraction of linear features with small footprints on satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high-resolution satellite images of Earth and Mars (Quickbird, Worldview and HiRISE) in order to facilitate and speed up image analysis and to improve the accuracy of the results. An overall accuracy of about 98% and a quantization error of 0.001 in the recognition of small linear-trending bedforms demonstrate a promising framework.
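
    For readers unfamiliar with SOM, the sketch below is a minimal competitive-learning training loop on per-pixel feature vectors; the grid size, learning-rate and neighborhood schedules are illustrative and not the study's settings.

```python
import numpy as np

# Minimal self-organizing map (SOM): each sample pulls its best-matching unit
# (BMU) and the BMU's grid neighbors toward itself, with shrinking learning
# rate and neighborhood. Hyperparameters here are illustrative only.

def train_som(data, grid=(8, 8), iters=5000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"),
                      axis=-1)
    for t in range(iters):
        frac = t / iters
        lr, sigma = lr0 * (1 - frac), sigma0 * (1 - frac) + 0.5
        x = data[rng.integers(len(data))]
        dist = np.linalg.norm(weights - x, axis=-1)
        bmu = np.unravel_index(np.argmin(dist), (h, w))    # best matching unit
        d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
        theta = np.exp(-d2 / (2 * sigma ** 2))[..., None]  # neighborhood weight
        weights += lr * theta * (x - weights)              # competitive update
    return weights

features = np.random.default_rng(1).random((1000, 3))  # e.g. per-pixel features
som = train_som(features)
print(som.shape)
```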

  6. Automatic measurement of axial length of human eye using three-dimensional magnetic resonance imaging

    International Nuclear Information System (INIS)

    Watanabe, Masaki; Kiryu, Tohru

    2011-01-01

    The measurement of axial length and the evaluation of the three-dimensional (3D) form of an eye are essential for evaluating the mechanism of myopia progression. We propose a method for the automatic measurement of axial length, including adjustment of the pulse sequence for a short-duration scan that suppresses the influence of eye blinks, using magnetic resonance imaging (MRI), which acquires 3D images noninvasively. Acquiring T2-weighted images with a 3.0 tesla MRI device and an eight-channel phased-array head coil, we extracted left and right eyeball images and then reconstructed a 3D volume. The surface coordinates were calculated from the 3D volume, an ellipsoid model was fitted to these surface coordinates, and the axial length was measured automatically. Measuring twenty-one subjects, we compared the automatically measured values of axial length with the manually measured ones, and confirmed significant elongation of the axial length in myopia compared with emmetropia. Furthermore, there were no significant differences (at the P<0.05 level) between the means of the automatic measurements and the manual ones. Accordingly, the automatic measurement process for axial length could be a tool for elucidating the mechanism of myopia progression, suitable for evaluating the axial length easily and noninvasively. (author)

  7. Semi-supervised learning based probabilistic latent semantic analysis for automatic image annotation

    Institute of Scientific and Technical Information of China (English)

    Tian Dongping

    2017-01-01

    In recent years, the multimedia annotation problem has been attracting significant research attention in the multimedia and computer vision areas, especially automatic image annotation, whose purpose is to provide an efficient and effective searching environment for users to query their images more easily. In this paper, a semi-supervised learning based probabilistic latent semantic analysis (PLSA) model for automatic image annotation is presented. Since it is often hard to obtain or create labeled images in large quantities while unlabeled ones are easier to collect, a transductive support vector machine (TSVM) is exploited to enhance the quality of the training image data. Furthermore, because image features with different magnitudes result in different annotation performance, a Gaussian normalization method is utilized to normalize the different features extracted from effective image regions segmented by the normalized-cuts algorithm, so as to preserve the intrinsic content of the images as completely as possible. Finally, a PLSA model with asymmetric modalities is constructed based on the expectation-maximization (EM) algorithm to predict a candidate set of annotations with confidence scores. Extensive experiments on the general-purpose Corel5k dataset demonstrate that the proposed model can significantly improve the performance of traditional PLSA for the task of automatic image annotation.

  8. Automatic X-ray inspection for the HTR-PM spherical fuel elements

    Energy Technology Data Exchange (ETDEWEB)

    Yi, DU, E-mail: duyi11@mails.tsinghua.edu.cn [Institute of Nuclear and New Energy Technology (INET), Tsinghua University, Energy Science Building A309, Haidian District, Beijing 100084 (China); Xiangang, WANG, E-mail: wangxiangang@tsinghua.edu.cn [Institute of Nuclear and New Energy Technology (INET), Tsinghua University, Energy Science Building A309, Haidian District, Beijing 100084 (China); Xincheng, XIANG, E-mail: inetxxc@tsinghua.edu.cn [Institute of Nuclear and New Energy Technology (INET), Tsinghua University, Energy Science Building, Haidian District, Beijing 100084 (China); Bing, LIU, E-mail: bingliu@tsinghua.edu.cn [Institute of Nuclear and New Energy Technology (INET), Tsinghua University, Energy Science Building, Haidian District, Beijing 100084 (China)

    2014-12-15

    Highlights: • An automatic X-ray inspection method is established to characterize HTR pebbles. • The method provides physical characterization and the inner structure of pebbles. • The method can be conducted non-destructively, quickly and automatically. • Sample pebbles were measured with this AXI method for validation. • The method shows the potential to be applied in situ. - Abstract: Inefficient quality assessment and control (QA and C) of spherical fuel elements for high-temperature reactor pebble-bed modules (HTR-PM) has been a long-term problem, since conventional methods are labor intensive and cannot reveal internal information nondestructively. Here, we propose a nondestructive, automated X-ray inspection (AXI) method to characterize spherical fuel elements, including their inner structures, based on X-ray digital radiography (DR). Briefly, DR images at different angles are first obtained, and then important parameters such as the spherical diameter and the geometric and mass centers are automatically extracted and calculated via image-processing techniques. By evaluating sample spherical fuel elements, we show that this AXI method can be conducted non-destructively, quickly and automatically. The method not only provides accurate physical characterization of spherical fuel elements but also reveals their inner structure with good resolution, showing great potential to facilitate fast QA and C in HTR-PM spherical fuel element development and production.

  9. Automatic X-ray inspection for the HTR-PM spherical fuel elements

    International Nuclear Information System (INIS)

    Yi, DU; Xiangang, WANG; Xincheng, XIANG; Bing, LIU

    2014-01-01

    Highlights: • An automatic X-ray inspection method is established to characterize HTR pebbles. • The method provides physical characterization and the inner structure of pebbles. • The method can be conducted non-destructively, quickly and automatically. • Sample pebbles were measured with this AXI method for validation. • The method shows the potential to be applied in situ. - Abstract: Inefficient quality assessment and control (QA and C) of spherical fuel elements for high-temperature reactor pebble-bed modules (HTR-PM) has been a long-term problem, since conventional methods are labor intensive and cannot reveal internal information nondestructively. Here, we propose a nondestructive, automated X-ray inspection (AXI) method to characterize spherical fuel elements, including their inner structures, based on X-ray digital radiography (DR). Briefly, DR images at different angles are first obtained, and then important parameters such as the spherical diameter and the geometric and mass centers are automatically extracted and calculated via image-processing techniques. By evaluating sample spherical fuel elements, we show that this AXI method can be conducted non-destructively, quickly and automatically. The method not only provides accurate physical characterization of spherical fuel elements but also reveals their inner structure with good resolution, showing great potential to facilitate fast QA and C in HTR-PM spherical fuel element development and production.
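
    One of the image-processing steps named in both records, computing a geometric center and an equivalent diameter from a binarized DR projection, can be sketched with OpenCV moments; the synthetic circle below stands in for a pebble projection, and the mass center would additionally weight pixels by attenuation values.

```python
import cv2
import numpy as np

# Hedged sketch of one image-processing step from the abstract: estimating a
# pebble's geometric center and equivalent diameter from a (here synthetic)
# binarized DR projection via image moments.

img = np.zeros((400, 400), np.uint8)
cv2.circle(img, (210, 190), 120, 255, -1)          # stand-in for a pebble

_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
m = cv2.moments(binary, binaryImage=True)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # geometric center
area = m["m00"]                                    # pixel count of the region
diameter = 2.0 * np.sqrt(area / np.pi)             # equivalent-circle diameter
print(f"center=({cx:.1f}, {cy:.1f}), diameter={diameter:.1f} px")
```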

  10. Synthesis of molecular imprinting polymers for extraction of gallic acid from urine.

    Science.gov (United States)

    Bhawani, Showkat Ahmad; Sen, Tham Soon; Ibrahim, Mohammad Nasir Mohammad

    2018-02-21

    Molecularly imprinted polymers for gallic acid were synthesized by precipitation polymerization. During synthesis, a non-covalent approach was used for the template-monomer interaction. In the polymerization process, gallic acid was used as the template, acrylic acid as the functional monomer, ethylene glycol dimethacrylate as the cross-linker, 2,2'-azobisisobutyronitrile as the initiator and acetonitrile as the solvent. The synthesized imprinted and non-imprinted polymer particles were characterized using Fourier-transform infrared spectroscopy and scanning electron microscopy. The rebinding efficiency of the synthesized polymer particles was evaluated by a batch binding assay. The most selective imprinted polymer for gallic acid was MIPI1, with a molar-ratio composition of 1:4:20 (template:monomer:cross-linker). MIPI1 showed the highest binding efficiency (79.50%) compared with the other imprinted and non-imprinted polymers. The highly selective imprinted polymers successfully extracted about 80% of the gallic acid from a spiked urine sample.

  11. Automatic Atrial Fibrillation Detection: A Novel Approach Using Discrete Wavelet Transform and Heart Rate Variability

    DEFF Research Database (Denmark)

    Bruun, Iben H.; Hissabu, Semira M. S.; Poulsen, Erik S.

    2017-01-01

    The method can be used as a screening tool for patients suspected to have AF. It includes automatic peak detection prior to feature extraction, as well as a noise-cancellation technique followed by bagged-tree classification. Simulation studies on the MIT-BIH Atrial Fibrillation database were performed...
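
    A hedged sketch of the first stages of such a pipeline: R-peak detection on a (synthetic) ECG trace followed by a simple RR-interval heart-rate-variability feature. The detector thresholds are illustrative; the paper's peak detector and DWT-based features differ.

```python
import numpy as np
from scipy.signal import find_peaks

# Toy pipeline front end: detect R-peaks, then compute RR intervals and a
# standard HRV feature (RMSSD). Sampling rate and thresholds are assumptions.

fs = 250                                   # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)
beat_times = np.cumsum(0.8 + 0.05 * np.random.default_rng(2).normal(size=12))
for bt in beat_times:                      # synthetic narrow R-peaks
    ecg += np.exp(-((t - bt) ** 2) / (2 * 0.005 ** 2))

peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.3 * fs))
rr = np.diff(peaks) / fs                   # RR intervals, seconds
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2)) # a standard HRV feature
print(f"beats={len(peaks)}, mean RR={rr.mean():.3f}s, RMSSD={rmssd:.4f}s")
```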

  12. Analysis of separation test for automatic brake adjuster based on linear radon transformation

    Science.gov (United States)

    Luo, Zai; Jiang, Wensong; Guo, Bin; Fan, Weijun; Lu, Yi

    2015-01-01

    The linear Radon transformation is applied to extract inflection points for an online test system under noisy conditions. The linear Radon transformation has strong anti-noise and anti-interference capability because it fits the online test curve in several parts, which makes it easy to handle consecutive inflection points. We applied the linear Radon transformation to the separation test system to determine the separating clearance of an automatic brake adjuster. The experimental results show that the feature-point extraction error of the gradient-maximum optimal method is approximately ±0.100, while that of the linear Radon transformation method reaches ±0.010, a lower error than the former. In addition, the linear Radon transformation is robust.

  13. Development of SM-2 emulsion detector and its application to automatic control of deemulsifying agent addition

    International Nuclear Information System (INIS)

    Wu Hongpei.

    1985-01-01

    Emulsion phenomena have occurred in trifattyamine solvent extraction in some uranium mills owing to the presence of colloidal polysilicic acid in feed solutions at concentrations as high as ≥0.46 g/L (based on SiO2). Polyether has been used as the deemulsifying agent to remove colloidal polysilicic acid from the feed solution in question. In order to reduce polyether consumption, the SM-2 emulsion detector was developed and used for automatic control of polyether addition to the feed solution. The working principle and basic structure of the SM-2 detector are described. When polyether solution is added to the feed solution, a certain turbidity appears owing to the flocculated particles of polysilicic acid. A linear relationship was found between the turbidity and the photoelectric voltage difference in millivolts detected by the SM-2 detector. It is therefore feasible to find, through experiments, the minimum concentration of polysilicic acid above which emulsion may occur. Taking advantage of this linear relationship, the addition of polyether solution can be controlled automatically in an appropriate amount without the occurrence of emulsion during solvent extraction.

  14. Automatic MPST-cut for segmentation of carpal bones from MR volumes.

    Science.gov (United States)

    Gemme, Laura; Nardotto, Sonia; Dellepiane, Silvana G

    2017-08-01

    In the context of rheumatic diseases, several studies suggest that Magnetic Resonance Imaging (MRI) allows the detection of the three main signs of Rheumatoid Arthritis (RA) at higher sensitivity than conventional radiology. Rapid, accurate segmentation of bones is an essential preliminary step for quantitative diagnosis, erosion evaluation, and multi-temporal data fusion. In the present paper, a new semi-automatic 3D graph-based segmentation method to extract carpal bone data is proposed. The method is unsupervised, does not employ any a priori model or knowledge, and adapts to the individual variability of the acquired data. After one source point inside the Region of Interest (ROI) is selected, a segmentation process is initiated that consists of two automatic stages: a cost-labeling phase and a graph-cutting phase. The algorithm finds optimal paths based on a new cost function by creating a Minimum Path Spanning Tree (MPST). To extract the region, a cut of the obtained tree is necessary; a new MPST-cut criterion based on a compactness shape factor was conceived and developed. The proposed approach is applied to a large database of 96 T1-weighted MR bone volumes. Performance is evaluated by comparing the results with gold-standard bone volumes manually defined by rheumatologists, through metrics extracted from the confusion matrix. Furthermore, comparisons with the existing literature are carried out. The results show that the method is efficient and provides satisfactory performance for bone segmentation on low-field MR volumes. Copyright © 2017 Elsevier Ltd. All rights reserved.
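
    The cost-labeling phase can be pictured as a Dijkstra-style minimum-path spanning tree grown from the user-selected source point; the 2D sketch below (3D is analogous) uses a generic per-voxel cost and omits the paper's specific cost function and compactness-based cut.

```python
import heapq
import numpy as np

# Sketch of a minimum-path spanning tree (MPST) grown from a seed over a 2D
# grid: Dijkstra on 4-connected pixels with per-pixel costs. The tree edges
# live in `parent`; segmenting would then cut this tree.

def mpst(cost, source):
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    parent = {}
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if d > dist[r, c]:
            continue                       # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]      # path cost accumulates pixel costs
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    parent[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    return dist, parent

cost = np.random.default_rng(0).random((50, 50)) + 0.1
dist, parent = mpst(cost, (25, 25))
print(dist.max(), len(parent))
```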

  15. OpenCV-Based Nanomanipulation Information Extraction and the Probe Operation in SEM

    Directory of Open Access Journals (Sweden)

    Dongjie Li

    2015-02-01

    Full Text Available For an established telenanomanipulation system, the extraction of location information and strategies for probe operation were studied in this paper. First, OpenCV machine-learning algorithms were used to extract location information from SEM images, so that nanowires and the probe in SEM images can be tracked automatically and the region of interest (ROI) marked quickly; the locations of the nanowire and the probe are then extracted from the ROI. To study the probe operation strategy, the Van der Waals force between the probe and a nanowire was computed to obtain the relevant operating parameters. With these parameters, the nanowire can be pre-operated in a 3D virtual environment and an optimal path for the probe obtained. The actual probe then runs automatically under the telenanomanipulation system's control. Finally, experiments were carried out to verify the above methods, and the results show that the designed methods achieve the expected effect.

  16. A Multi-stage Method to Extract Road from High Resolution Satellite Image

    International Nuclear Information System (INIS)

    Zhijian, Huang; Zhang, Jinfang; Xu, Fanjiang

    2014-01-01

    Extracting road information from high-resolution satellite images is complex and can hardly be achieved by exploiting only one or two modules. This paper presents a multi-stage method consisting of automatic information extraction and semi-automatic post-processing. The Multi-scale Enhancement algorithm enlarges the contrast between human-made structures and the background. Statistical Region Merging segments the images into regions, whose skeletons are extracted and pruned according to geometric shape information. Given start and end skeleton points, the shortest skeleton path is constructed as a road centre line. The Bidirectional Adaptive Smoothing technique smooths the road centre line and adjusts it to the right position. With the smoothed line and its average width, a Buffer algorithm easily reconstructs the road region. As the final results show, the proposed method eliminates redundant non-road regions, repairs incomplete occlusions, jumps over complete occlusions, and preserves accurate road centre lines and neat road regions. During the whole process, only a few interactions are needed.

  17. Molecularly imprinted microspheres synthesized by a simple, fast, and universal suspension polymerization for selective extraction of the topical anesthetic benzocaine in human serum and fish tissues.

    Science.gov (United States)

    Sun, Hui; Lai, Jia-Ping; Chen, Fang; Zhu, De-Rong

    2015-02-01

    A simple, fast, and universal suspension polymerization method was used to synthesize molecularly imprinted microspheres (MIMs) for the topical anesthetic benzocaine (BZC). The desired diameter (10-20 μm) and uniform morphology of the MIMs were obtained easily by changing one or more of the synthesis conditions, including the type and amount of surfactant, the stirring rate, and the ratio of the organic to the water phase. The MIMs obtained were used as a molecularly imprinted solid-phase extraction (MISPE) material for the extraction of BZC from human serum and fish tissues. The MISPE results revealed that BZC in these biosamples could be enriched effectively after the MISPE operation. The recoveries of BZC on MIM cartridges were higher than 90% (n = 3). Finally, an MISPE-HPLC method with UV detection was developed for the highly selective extraction and fast detection of trace BZC in human serum and fish tissues. The developed method could also be used for the enrichment and detection of BZC in other complex biosamples.

  18. Ontology Assisted Formal Specification Extraction from Text

    Directory of Open Access Journals (Sweden)

    Andreea Mihis

    2010-12-01

    Full Text Available In the field of knowledge processing, ontologies are the most important means. They make it possible for the computer to understand natural language better and to make judgments. In this paper, a method which uses ontologies in the semi-automatic extraction of formal specifications from a natural language text is proposed.

  19. Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data

    Science.gov (United States)

    Yu, Q.; Helmholz, P.; Belton, D.; West, G.

    2014-04-01

    The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on a pre-defined formal grammar and rules. Such segmented data can be extracted, e.g., from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, a CAD file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows model reconstruction directly from 3D shapes and takes the whole building into account.

  20. Semi-automatic Term Extraction for an isiZulu Linguistic Terms ...

    African Journals Online (AJOL)

    This paper advances the use of frequency analysis and keyword analysis as strategies to extract terms for the compilation of a dictionary of isiZulu linguistic terms. The study uses the isiZulu National Corpus (INC) of about 1.2 million tokens as a reference corpus, as well as an LSP corpus of about 100,000 tokens as a ...

  1. Acute and Sub-acute Toxicity Profile of Aqueous Leaf Extract of ...

    African Journals Online (AJOL)

    information on the safety/toxicity of the aqueous extract of Nymphaea .... automatic chemistry analyzer (Abaxis Inc., Union City, CA .... play a central role in gaseous exchange and inter-compartmental .... OECD guidelines for testing of chemicals ...

  2. Feature ranking and rank aggregation for automatic sleep stage classification: a comparative study.

    Science.gov (United States)

    Najdi, Shirin; Gharbali, Ali Abdollahi; Fonseca, José Manuel

    2017-08-18

    Nowadays, sleep quality is one of the most important measures of a healthy life, especially considering the huge number of sleep-related disorders. Identifying sleep stages using polysomnographic (PSG) signals is the traditional way of assessing sleep quality. However, the manual process of sleep stage classification is time-consuming, subjective and costly. Therefore, in order to improve the accuracy and efficiency of sleep stage classification, researchers have been trying to develop automatic classification algorithms. Automatic sleep stage classification mainly consists of three steps: pre-processing, feature extraction and classification. Since classification accuracy is deeply affected by the extracted features, a poor feature vector will adversely affect the classifier and eventually lead to low classification accuracy. Therefore, special attention should be given to the feature extraction and selection process. In this paper, the performance of seven feature selection methods, as well as two feature rank aggregation methods, was compared. Pz-Oz EEG, horizontal EOG and submental chin EMG recordings of 22 healthy males and females were used. A comprehensive feature set including 49 features was extracted from these recordings. The extracted features are among the most common and effective features used in sleep stage classification, drawn from the temporal, spectral, entropy-based and nonlinear categories. The feature selection methods were evaluated and compared using three criteria: classification accuracy, stability, and similarity. Simulation results show that MRMR-MID achieves the highest classification performance, while the Fisher method provides the most stable ranking. In our simulations, the performance of the aggregation methods was average, although they are known to generate more stable results and better accuracy: the Borda and RRA rank aggregation methods could not significantly outperform the conventional feature ranking methods.
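
    Of the two aggregation methods compared, Borda counting is the simpler; the sketch below aggregates several feature rankings by summed positions (the feature names are hypothetical).

```python
# Minimal Borda-count aggregation of several feature rankings, as a sketch of
# the rank-aggregation step compared in the paper (feature names hypothetical).

def borda(rankings):
    """Each ranking lists features best-first; lower total score = better."""
    scores = {}
    for ranking in rankings:
        for position, feat in enumerate(ranking):
            scores[feat] = scores.get(feat, 0) + position
    return sorted(scores, key=scores.get)

r1 = ["spectral_entropy", "delta_power", "emg_rms", "eog_var"]
r2 = ["delta_power", "spectral_entropy", "eog_var", "emg_rms"]
r3 = ["spectral_entropy", "eog_var", "delta_power", "emg_rms"]
print(borda([r1, r2, r3]))  # aggregated ranking, best feature first
```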

  3. Road Extraction and Car Detection from Aerial Image Using Intensity and Color

    Directory of Open Access Journals (Sweden)

    Vahid Ghods

    2011-07-01

    Full Text Available In this paper, a new automatic approach to road extraction from aerial images is proposed. The initialization strategies are based on intensity, color, and the Hough transform. After road-element extraction, chain codes are calculated. In the last step, cars on the roads are detected using shadow. We implemented our method on 25 images from the "Google Earth" database. The experiments show an increase in both the completeness and the quality indexes for the extracted roads.

  4. Improving KPCA Online Extraction by Orthonormalization in the Feature Space.

    Science.gov (United States)

    Souza Filho, Joao B O; Diniz, Paulo S R

    2018-04-01

    Recently, some online kernel principal component analysis (KPCA) techniques based on the generalized Hebbian algorithm (GHA) were proposed for use on large data sets, defining kernel components using concise dictionaries automatically extracted from data. This brief proposes two new online KPCA extraction algorithms that exploit orthogonalized versions of the GHA rule. In both cases, the orthogonalization of kernel components is achieved by adding some low-complexity steps to the kernel Hebbian algorithm, thus not substantially affecting its computational cost. Results show improved convergence speed and accuracy of the components extracted by the proposed methods, compared with state-of-the-art online KPCA extraction algorithms.
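
    The underlying GHA update is easy to state in the linear case; the sketch below implements Sanger's rule, while the paper's contribution, orthonormalized variants operating in the kernel feature space with a dictionary, is omitted.

```python
import numpy as np

# Sketch of the generalized Hebbian algorithm (Sanger's rule) in the linear
# case. The kernelized, dictionary-based variants of the paper are omitted.

def gha_update(W, x, lr):
    y = W @ x                               # component outputs
    # Sanger's rule: W += lr * (y x^T - lower_tri(y y^T) W)
    return W + lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

rng = np.random.default_rng(0)
data = rng.normal(size=(5000, 4)) @ np.diag([3.0, 2.0, 0.5, 0.1])
W = rng.normal(scale=0.1, size=(2, 4))      # extract two principal directions
for x in data:
    W = gha_update(W, x, lr=1e-3)
print(W @ W.T)                              # should approach the identity
```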

  5. Automatic Traffic-Based Internet Control Message Protocol (ICMP) Model Generation for ns-3

    Science.gov (United States)

    2015-12-01

    [Fragmentary excerpt] Future work includes supporting more protocols (especially at different layers of the OSI model) and implementing an inference engine to extract inter- and intra-packet dependencies. (Report front matter: ARL-TR-7543, December 2015, US Army Research Laboratory, "Automatic Traffic-Based Internet Control Message Protocol (ICMP) Model Generation for ns-3", by Jaime C Acosta and Felipe Jovel, Survivability/Lethality Analysis Directorate, ARL; Felipe Sotelo and Caesar ...)

  6. Automatic analysis of trabecular bone structure from knee MRI

    DEFF Research Database (Denmark)

    Marques, Joselene; Granlund, Rabia; Lillholm, Martin

    2012-01-01

    We investigated the feasibility of quantifying osteoarthritis (OA) by analysis of the trabecular bone structure in low-field knee MRI. Generic texture features were extracted from the images and subsequently selected by sequential floating forward selection (SFFS), following a fully automatic, uncommitted machine-learning based framework. Six different classifiers were evaluated in cross-validation schemes, and the results showed that the presence of OA can be quantified by a bone structure marker. The performance of the developed marker reached a generalization area-under-the-ROC (AUC) of 0...

  7. Automatic identification of corrosion damage using image processing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bento, Mariana P.; Ramalho, Geraldo L.B.; Medeiros, Fatima N.S. de; Ribeiro, Elvis S. [Universidade Federal do Ceara (UFC), Fortaleza, CE (Brazil); Medeiros, Luiz C.L. [Petroleo Brasileiro S.A. (PETROBRAS), Rio de Janeiro, RJ (Brazil)

    2009-07-01

    This paper proposes a Nondestructive Evaluation (NDE) method for atmospheric corrosion detection on metallic surfaces using digital images. In this study, uniform corrosion is characterized by texture attributes extracted from the co-occurrence matrix and by the Self-Organizing Map (SOM) clustering algorithm. We present a technique for the automatic inspection of oil and gas storage tanks and pipelines of petrochemical industries without disturbing their properties and performance. Experimental results are promising and support the possibility of using this methodology to design trustworthy and robust early failure detection systems. (author)
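
    The texture-attribute step can be sketched with scikit-image's GLCM utilities: compute a co-occurrence matrix per patch and summarize it into a handful of descriptors that a clusterer such as SOM can then group (function names follow scikit-image >= 0.19, which spells them "gray" rather than "grey").

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Sketch of the texture-attribute step: gray-level co-occurrence matrix (GLCM)
# features on an image patch, ready for clustering. The random patch stands in
# for a surface image tile.

patch = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)  # one texture descriptor vector per patch
```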

  8. An analysis of line-drawings based upon automatically inferred grammar and its application to chest x-ray images

    International Nuclear Information System (INIS)

    Nakayama, Akira; Yoshida, Yuuji; Fukumura, Teruo

    1984-01-01

    Grammar inference can be used as a technique for analyzing image structure. Applied to naturally acquired images, this technique raises several problems, as no practical grammatical technique for two-dimensional images has been established. The authors developed a technique that solves these problems, mainly for the automated structure analysis of naturally acquired images. The first half of this paper describes the automatic inference of a line-drawing generation grammar and line-drawing analysis based on that automatic inference. The second half reports an actual analysis. The proposed technique extracts object line drawings from line drawings containing noise. The technique was evaluated for effectiveness on the example of extracting rib center lines from thin-line chest X-ray images of practical scale and complexity. In this example, the total number of characteristic points (ends, branch points and intersections) composing the line drawings was 377 per image, and the total number of line segments composing the line drawings was 566 on average per sheet. The extraction ratio was 86.6%, which seems reasonable considering the complexity of the input line drawings. Further, the result was compared with the rib center lines identified by the automatic screening system AISCR-V3 as a comparison with a conventional processing technique, and it was satisfactory considering the versatility of this method. (Wakatsuki, Y.)

  9. A Wide Lock-Range Referenceless CDR with Automatic Frequency Acquisition

    OpenAIRE

    Seon-Kyoo Lee; Young-Sang Kim; Hong-June Park; Jae-Yoon Sim

    2011-01-01

    A wide lock-range referenceless CDR circuit is proposed with automatic tracking of the data rate. For efficient frequency acquisition, a DLL-based loop is used with a simple phase/frequency detector to extract the 1-bit period of the input data stream. The CDR, implemented in a 65 nm CMOS process, shows a lock range of 650 Mb/s to 8 Gb/s and a BER of less than 10^-12 at 8 Gb/s with low power consumption.

  10. A Wide Lock-Range Referenceless CDR with Automatic Frequency Acquisition

    Directory of Open Access Journals (Sweden)

    Seon-Kyoo Lee

    2011-01-01

    Full Text Available A wide lock-range referenceless CDR circuit is proposed with automatic tracking of the data rate. For efficient frequency acquisition, a DLL-based loop is used with a simple phase/frequency detector to extract the 1-bit period of the input data stream. The CDR, implemented in a 65 nm CMOS process, shows a lock range of 650 Mb/s to 8 Gb/s and a BER of less than 10^-12 at 8 Gb/s with low power consumption.

  11. A novel automatic molecular test for detection of multidrug resistance tuberculosis in sputum specimen: A case control study.

    Science.gov (United States)

    Li, Qiang; Ou, Xi C; Pang, Yu; Xia, Hui; Huang, Hai R; Zhao, Bing; Wang, Sheng F; Zhao, Yan L

    2017-07-01

    The MiniLab tuberculosis (ML TB) assay is a new automatic diagnostic tool for the diagnosis of multidrug-resistant tuberculosis (MDR-TB). This study was conducted to assess the performance of this assay. Sputum samples were collected from 224 TB suspects seeking medical care at Beijing Chest Hospital. The sputum samples were used directly for smear and the ML TB test. The remaining sputum was used for Xpert MTB/RIF, Bactec MGIT culture and drug susceptibility testing (DST). All discrepancies between the results of the DST, molecular and phenotypic methods were confirmed by DNA sequencing. The sensitivity and specificity of the ML TB test for detecting MTBC among TB suspects were 95.1% and 88.9%, respectively. The sensitivity for smear-negative TB suspects was 64.3%. For detection of RIF resistance, the sensitivity and specificity of the ML TB test were 89.2% and 95.7%, respectively; for detection of INH resistance, they were 78.3% and 98.1%, respectively. The ML TB test showed performance similar to Xpert MTB/RIF for detection of MTBC and RIF resistance. In addition, ML TB also performed well for INH resistance detection. Copyright © 2017. Published by Elsevier Ltd.

  12. Association of radionuclides with different molecular size fractions in soil solution: implications for plant uptake

    International Nuclear Information System (INIS)

    Nisbet, A.F.; Shaw, S.; Salbu, B.

    1993-01-01

    The feasibility of using hollow-fibre ultrafiltration to determine the molecular size distribution of radionuclides in soil solution was investigated. The physical and chemical composition of soil plays a vital role in determining radionuclide uptake by plant roots. Soil solution samples were extracted from loam, peat and sand soils that had been artificially contaminated with 137Cs, 90Sr, 239Pu and 241Am six years previously as part of a five-year lysimeter study on radionuclide uptake by crops. Ultrafiltration of the soil solution was performed using hollow-fibre cartridges with nominal molecular weight cut-offs of 3 and 10 kD. The association of 137Cs, 90Sr, 239Pu and 241Am with different molecular size fractions of the soil solution is discussed in terms of radionuclide bioavailability to cabbage grown in the same three soils. 137Cs and 90Sr were present in low molecular weight forms and as such were mobile in soil and potentially available for uptake by the cabbage. In contrast, a large proportion (61-87%) of the 239Pu and 241Am was associated with colloidal and high molecular weight material and therefore less available for uptake by plant roots. The contribution from low molecular weight species of 239Pu and 241Am to the total activity in soil solution decreased in the order loam ≥ peat ≥ sand. Association of radionuclides with low molecular weight species of less than 3 kD did not, however, automatically imply availability to plants. (author)

  13. Research on automatic correction of electronic beam path for distributed control

    International Nuclear Information System (INIS)

    Guo Xin; Su Haijun; Li Deming; Wang Shengli; Guo Honglei

    2014-01-01

    Background: The dynamitron, a high-voltage electron irradiation accelerator, is used as a radiation source for industrial and agricultural production. The control system is an important component of the dynamitron. Purpose: The aim is to improve the control system to meet the dynamitron's stability requirements. Methods: A distributed control system for the 1.5-MeV dynamitron is proposed to gain better performance. On this basis, an automatic electron-beam trajectory correction method based on a Cerebellar Model Articulation Controller plus Proportional-Integral-Derivative (CMAC-PID) controller is designed to improve the electron beam extraction system. Results: The distributed control system can meet the control requirements of the accelerator. The stability of the CMAC-PID controller is better than that of a conventional PID controller for the electron-beam trajectory correction system, and hence the CMAC-PID controller provides better protection of the dynamitron when electron beam deflection occurs. Conclusion: The distributed control system and the automatic electron-beam trajectory correction method can effectively improve the performance and reduce the failure probability of the accelerator, thereby enhancing its efficiency. (authors)

  14. Extraction of High Molecular Weight DNA from Fungal Rust Spores for Long Read Sequencing.

    Science.gov (United States)

    Schwessinger, Benjamin; Rathjen, John P

    2017-01-01

    Wheat rust fungi are complex organisms with a complete life cycle that involves two different host plants and five different spore types. During the asexual infection cycle on wheat, rusts produce massive amounts of dikaryotic urediniospores. These spores have two nuclei, each containing one haploid genome. This dikaryotic state is likely to contribute to their evolutionary success, making them some of the major wheat pathogens globally. Despite this, most published wheat rust genomes are highly fragmented and contain very little haplotype-specific sequence information. Current long-read sequencing technologies hold great promise for providing more contiguous and haplotype-phased genome assemblies. Long reads are able to span repetitive regions and phase structural differences between the haplomes. This increased genome resolution enables the identification of complex loci and the study of genome evolution beyond simple nucleotide polymorphisms. Long-read technologies require pure, high molecular weight DNA as input for sequencing. Here, we describe a DNA extraction protocol for rust spores that yields pure double-stranded DNA molecules with molecular weights of >50 kilobase pairs (kbp). The isolated DNA is of sufficient purity for PacBio long-read sequencing, but may require additional purification for other sequencing technologies such as Nanopore and 10x Genomics.

  15. Finding weak points automatically

    International Nuclear Information System (INIS)

    Archinger, P.; Wassenberg, M.

    1999-01-01

    Operators of nuclear power stations have to carry out material tests on selected components at regular intervals. A fully automated test, which achieves clearly higher reproducibility than partly automated variants, would therefore provide a solution. In addition, a fully automated test reduces the radiation dose to the test personnel. (orig.) [de]

  16. A New Automatic Method of Urban Areas Mapping in East Asia from LANDSAT Data

    Science.gov (United States)

    XU, R.; Jia, G.

    2012-12-01

    Cities, as places where human activities are concentrated, account for a small percentage of global land cover but are frequently cited as chief causes of, and solutions to, climate, biogeochemical, and hydrological processes at local, regional, and global scales. Accompanying uncontrolled economic growth, urban sprawl has been attributed to the accelerating integration of East Asia into the world economy, involving dramatic changes in urban form and land use. To understand the impact of urban extent on biogeophysical processes, reliable mapping of built-up areas is particularly essential in eastern cities, which, compared with the West, are characterized by smaller patches, greater fragmentation, and a lower fraction of the urban landscape with natural cover. Segmentation of urban land from other land-cover types using remote sensing imagery can be done by standard classification processes as well as by a logic-rule calculation based on spectral indices and their derivations. Efforts to establish such a logic rule with no threshold for automatic mapping are highly worthwhile. Existing automatic methods are reviewed, and a proposed approach is then introduced, including the calculation of a new index and an improved logic rule. Following this, the existing automatic methods and the proposed approach are compared in a common context. Afterwards, the proposed approach is tested separately on large-, medium-, and small-scale cities in East Asia selected from different LANDSAT images. The results are promising, as the approach can efficiently segment urban areas, even in the more complex eastern cities. Key words: urban extraction; automatic method; logic rule; LANDSAT images; East Asia. [Figure: the proposed approach applied to extraction of urban built-up areas in Guangzhou, China]
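
    Since the record does not spell out its threshold-free index, the sketch below uses a conventional, threshold-based stand-in: an NDBI/NDVI logic rule on (hypothetical) red, NIR and SWIR bands, just to illustrate what a spectral-index logic rule for built-up extraction looks like.

```python
import numpy as np

# Illustrative spectral-index logic rule for built-up extraction (the record's
# own index and rule are threshold-free and differ; band arrays hypothetical).
# NDBI = (SWIR - NIR) / (SWIR + NIR); NDVI = (NIR - Red) / (NIR + Red).

def builtup_mask(red, nir, swir):
    ndbi = (swir - nir) / (swir + nir + 1e-9)
    ndvi = (nir - red) / (nir + red + 1e-9)
    return (ndbi > 0) & (ndvi < 0.2)   # built-up: high NDBI, low vegetation

rng = np.random.default_rng(0)
red, nir, swir = (rng.random((100, 100)).astype(np.float32) for _ in range(3))
mask = builtup_mask(red, nir, swir)
print("built-up fraction:", mask.mean())
```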

  17. Validation and extraction of molecular-geometry information from small-molecule databases.

    Science.gov (United States)

    Long, Fei; Nicholls, Robert A; Emsley, Paul; Gražulis, Saulius; Merkys, Andrius; Vaitkus, Antanas; Murshudov, Garib N

    2017-02-01

    A freely available small-molecule structure database, the Crystallography Open Database (COD), is used for the extraction of molecular-geometry information on small-molecule compounds. The results are used for the generation of new ligand descriptions, which are subsequently used by macromolecular model-building and structure-refinement software. To increase the reliability of the derived data, and therefore the new ligand descriptions, the entries from this database were subjected to very strict validation. The selection criteria made sure that the crystal structures used to derive atom types, bond and angle classes are of sufficiently high quality. Any suspicious entries at a crystal or molecular level were removed from further consideration. The selection criteria included (i) the resolution of the data used for refinement (entries solved at 0.84 Å resolution or higher) and (ii) the structure-solution method (structures must be from a single-crystal experiment and all atoms of generated molecules must have full occupancies), as well as basic sanity checks such as (iii) consistency between the valences and the number of connections between atoms, (iv) acceptable bond-length deviations from the expected values and (v) detection of atomic collisions. The derived atom types and bond classes were then validated using high-order moment-based statistical techniques. The results of the statistical analyses were fed back to fine-tune the atom typing. The developed procedure was repeated four times, resulting in fine-grained atom typing, bond and angle classes. The procedure will be repeated in the future as and when new entries are deposited in the COD. The whole procedure can also be applied to any source of small-molecule structures, including the Cambridge Structural Database and the ZINC database.

  18. Effects of Ultrasound Assisted Extraction in Conjugation with Aid of Actinidin on the Molecular and Physicochemical Properties of Bovine Hide Gelatin

    Directory of Open Access Journals (Sweden)

    Tanbir Ahmad

    2018-03-01

    Full Text Available Actinidin was used to pretreat the bovine hide, and an ultrasonic wave (53 kHz, 500 W) was applied for durations of 2, 4 and 6 h at 60 °C to extract gelatin samples (UA2, UA4 and UA6, respectively). Control (UAC) gelatin was extracted using ultrasound for 6 h at 60 °C without enzyme pretreatment. There was a significant (p < 0.05) increase in gelatin yield as the duration of ultrasound treatment increased, with UA6 giving the highest yield of 19.65%. The gel strength and viscosity of the UAC and UA6 gelatin samples were 627.53 and 502.16 g and 16.33 and 15.60 mPa·s, respectively. Longer ultrasound treatment increased the amino acid content of the extracted gelatin, and UAC exhibited the highest amino acid content. Progressive degradation of polypeptide chains was observed in the protein pattern of the extracted gelatin as the duration of ultrasound extraction increased. Fourier-transform infrared (FTIR) spectroscopy showed loss of molecular order and degradation in UA6. Scanning electron microscopy (SEM) revealed protein aggregation and network formation in the gelatin samples with increasing ultrasound treatment time. The study indicated that ultrasound-assisted gelatin extraction using actinidin gave a high yield of good-quality gelatin.

  19. RNA Polymerase II Second Largest Subunit Molecular Identification of Boletus griseipurpureus Corner From Thailand and Antibacterial Activity of Basidiocarp Extracts.

    Science.gov (United States)

    Aung-Aud-Chariya, Amornrat; Bangrak, Phuwadol; Lumyong, Saisamorn; Phupong, Worrapong; Aggangan, Nelly Siababa; Kamlangdee, Niyom

    2015-03-01

    Boletus griseipurpureus Corner, an edible mushroom, is a putative ectomycorrhizal fungus. Currently, the taxonomic boundary of this mushroom is unclear, and its bitter taste makes it interesting for evaluating its antibacterial properties. The purpose of this study was to identify the genetic variation of this mushroom and also to evaluate any antibacterial activities. Basidiocarps were collected from two north-eastern provinces, Roi Et and Ubon Ratchathani, and from two southern provinces, Songkhla and Surat Thani, in Thailand. Genomic DNA was extracted and the molecular identity was examined using RNA polymerase II second largest subunit (RPB2) analysis. Antibacterial activities of basidiocarp extracts were assessed against Escherichia coli ATCC 25922, Staphylococcus aureus ATCC 29523 and methicillin-resistant Staphylococcus aureus (MRSA) 189 using the agar-well diffusion method. All the samples collected for this study constituted a monophyletic clade, which was closely related to the Boletus group of polypore fungi. For the antibacterial study, it was found that the crude methanol extract of basidiomes inhibited the growth of all bacteria in vitro more than the crude ethyl acetate extract. Basidiomes collected from four locations in Thailand had low genetic variation, and their extracts inhibited the growth of all tested bacteria. The health benefits of this edible species should be evaluated further.

  20. ExcelAutomat: a tool for systematic processing of files as applied to quantum chemical calculations

    Science.gov (United States)

    Laloo, Jalal Z. A.; Laloo, Nassirah; Rhyman, Lydia; Ramasami, Ponnadurai

    2017-07-01

    The processing of the input and output files of quantum chemical calculations often necessitates a spreadsheet as a key component of the workflow. Spreadsheet packages with a built-in programming-language editor can automate the steps involved and thus provide a direct link between processing files and the spreadsheet. This helps reduce user intervention as well as the need to switch between different programs to carry out each step. The ExcelAutomat tool implements this method in Microsoft Excel (MS Excel) using the default Visual Basic for Applications (VBA) programming language. The code in ExcelAutomat was adapted to work with the platform-independent open-source LibreOffice Calc, which also supports VBA. ExcelAutomat provides an interface through the spreadsheet to automate repetitive tasks such as merging input files; splitting, parsing, and compiling data from output files; and generating unique filenames. Selected extracted parameters can be retrieved as variables, which can be included in custom code for a tailored approach. ExcelAutomat works with Gaussian files and is adapted for use with other computational packages, including the non-commercial GAMESS. ExcelAutomat is available as a downloadable MS Excel workbook or as a LibreOffice workbook.
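
    An illustrative Python analogue of one such repetitive task is compiling the final SCF energy from a folder of Gaussian output files into a table; the "SCF Done" marker is standard Gaussian output, while the folder layout is hypothetical (ExcelAutomat itself does this in VBA).

```python
import re
from pathlib import Path

# Scan a folder of Gaussian .log files and tabulate the last SCF energy from
# each. The "SCF Done" line is standard Gaussian output; "outputs/" is a
# hypothetical folder name.

def last_scf_energy(text: str):
    hits = re.findall(r"SCF Done:\s+E\(\S+\)\s+=\s+(-?\d+\.\d+)", text)
    return float(hits[-1]) if hits else None

rows = [(log.stem, last_scf_energy(log.read_text(errors="ignore")))
        for log in sorted(Path("outputs").glob("*.log"))]
for name, energy in rows:
    print(f"{name}\t{energy}")
```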

  1. The automatic component of habit in health behavior: habit as cue-contingent automaticity.

    Science.gov (United States)

    Orbell, Sheina; Verplanken, Bas

    2010-07-01

    Habit might be usefully characterized as a form of automaticity that involves the association of a cue and a response. Three studies examined habitual automaticity in regard to different aspects of the cue-response relationship characteristic of unhealthy and healthy habits. In each study, habitual automaticity was assessed by the Self-Report Habit Index (SRHI). In Study 1 SRHI scores correlated with attentional bias to smoking cues in a Stroop task. Study 2 examined the ability of a habit cue to elicit an unwanted habit response. In a prospective field study, habitual automaticity in relation to smoking when drinking alcohol in a licensed public house (pub) predicted the likelihood of cigarette-related action slips 2 months later after smoking in pubs had become illegal. In Study 3 experimental group participants formed an implementation intention to floss in response to a specified situational cue. Habitual automaticity of dental flossing was rapidly enhanced compared to controls. The studies provided three different demonstrations of the importance of cues in the automatic operation of habits. Habitual automaticity assessed by the SRHI captured aspects of a habit that go beyond mere frequency or consistency of the behavior. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  2. SurfCut: Free-Boundary Surface Extraction

    KAUST Repository

    Algarni, Marei Saeed Mohammed

    2016-09-15

    We present SurfCut, an algorithm for extracting a smooth simple surface with unknown boundary from a noisy 3D image and a seed point. In contrast to existing approaches that extract smooth simple surfaces with boundary, our method requires less user input, i.e., a seed point, rather than a 3D boundary curve. Our method is built on the novel observation that certain ridge curves of a front propagated using the Fast Marching algorithm are likely to lie on the surface. Using the framework of cubical complexes, we design a novel algorithm to robustly extract such ridge curves and form the surface of interest. Our algorithm automatically cuts these ridge curves to form the surface boundary, and then extracts the surface. Experiments show the robustness of our method to errors in the data, and that we achieve higher accuracy with lower computational cost than comparable methods. © Springer International Publishing AG 2016.

  3. Extraction kinetics of uranium (VI) with polyurethane foam

    International Nuclear Information System (INIS)

    Huang, Ting-Chia; Chen, Dong-Hwang; Huang, Shius-Dong; Huang, Ching-Tsven; Shieh, Mu-Chang.

    1993-01-01

    The extraction kinetics of uranium(VI) from aqueous nitrate solution with polyether-based polyurethane foam was investigated in a batch reactor with automatic squeezing. The extraction curves of uranium(VI) concentration in solution vs. extraction time exhibited a rather rapid exponential decay within the first few minutes, followed by a slower exponential decay during the remaining period. This phenomenon can be attributed to the two-phase structure of hard-segment domains and soft-segment matrix in the polyurethane foam. A two-stage rate model expressed as a superposition of two exponential curves was proposed, and the experimental data were fitted to it by an optimization method. The extraction rate of uranium(VI) was also found to increase with increasing temperature, nitrate concentration, and hydration of the nitrate salt cation. (author)
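
    The proposed two-stage rate model is a superposition of two exponentials, which can be fitted directly; the sketch below does so with scipy's curve_fit on synthetic data standing in for the batch-extraction measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit the two-stage rate model: concentration decaying as a superposition of
# a fast and a slow exponential. Synthetic data stand in for measurements.

def two_stage(t, a1, k1, a2, k2):
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

t = np.linspace(0, 120, 60)                # minutes
true = two_stage(t, 0.6, 0.5, 0.4, 0.02)   # fast + slow decay
c = true + 0.01 * np.random.default_rng(3).normal(size=t.size)

popt, _ = curve_fit(two_stage, t, c, p0=(0.5, 0.1, 0.5, 0.01))
print("a1, k1, a2, k2 =", np.round(popt, 3))
```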

  4. [Phenotypic and molecular characterization of a Colombian family with phenylketonuria].

    Science.gov (United States)

    Gélvez, Nancy; Acosta, Johana; López, Greizy; Castro, Derly; Prieto, Juan Carlos; Bermúdez, Martha; Tamayo, Marta L

    2016-09-01

    Phenylketonuria is a metabolic disorder characterized by severe neurological involvement and behavioral disorder, whose early diagnosis enables effective treatment to avoid disease sequelae, thus changing the prognosis. Objective: To characterize a family with phenylketonuria in Colombia at the clinical, biochemical and molecular levels. Materials and methods: The population consisted of seven individuals from a consanguineous family with four children with symptoms suggestive of phenylketonuria. After informed consent was signed, blood and urine samples were taken for colorimetric tests and for high-performance liquid and thin-layer chromatography. DNA extraction and sequencing of the 13 exons of the PAH gene were performed in all subjects. Primers for each exon were designed with the Primer3 software, and sequencing was carried out on an ABI Prism 3100 Avant automatic sequencer. Sequences were analyzed using the SeqScape v2.0 software. Results: We described the clinical and molecular characteristics of a Colombian family with phenylketonuria and confirmed the presence of the mutation c.398_401delATCA. We established a genotype-phenotype correlation, highlighting the notable clinical variability found among the affected patients despite all of them carrying the same mutation. Conclusions: Early recognition of this disease is very important to prevent its neurological and psychological sequelae, given that patients otherwise reach old age without diagnosis or proper management.

  5. Determination of diethylstilbestrol in seawater by molecularly imprinted solid-phase extraction coupled with high-performance liquid chromatography.

    Science.gov (United States)

    He, Xiuping; Mei, Xiaoqi; Wang, Jiangtao; Lian, Ziru; Tan, Liju; Wu, Wei

    2016-01-15

    An effective and highly selective molecularly imprinted material was prepared by suspension polymerization for the isolation and pre-concentration of the synthetic estrogen diethylstilbestrol (DES) in seawater. The obtained molecularly imprinted polymer microspheres (MIPMs) proved to have uniform size and a porous structure, with a maximum adsorption capacity of 8.43 mg g(-1), almost twice that of the non-imprinted polymer microspheres (NIPMs, 4.43 mg g(-1)). The MIPMs showed no significant deterioration of adsorption capacity after five rounds of regeneration. An off-line molecularly imprinted solid-phase extraction (MISPE) method followed by HPLC-DAD was proposed for the detection of DES in seawater, with satisfactory recoveries higher than 77%. Four seawater samples from an aquaculture area were analyzed, and 0.61 ng mL(-1) DES was detected in one sample. The results demonstrated that this method can be used for the rapid separation and clean-up of trace residues of DES in seawater. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Automatic Reverse Engineering of Private Flight Control Protocols of UAVs

    Directory of Open Access Journals (Sweden)

    Ran Ji

    2017-01-01

    Full Text Available The increasing use of civil unmanned aerial vehicles (UAVs) has the potential to threaten public safety and privacy. Therefore, airspace administrators urgently need an effective method to regulate UAVs. Understanding the meaning and format of UAV flight control commands by automatic protocol reverse-engineering techniques is highly beneficial to UAV regulation. To improve our understanding of the meaning and format of UAV flight control commands, this paper proposes a method to automatically analyze the private flight control protocols of UAVs. First, we classify flight control commands collected from a binary network trace into clusters; then, we analyze the meaning of flight control commands by the accumulated error of each cluster; next, we extract the binary format of commands and infer field semantics in these commands; and finally, we infer the location of the check field in each command and the generator polynomial matrix. The proposed approach is validated via experiments on a widely used consumer UAV.
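    The last step, inferring the check field and generator polynomial, typically reduces to testing whether a candidate CRC reproduces the observed check bytes on every captured frame. A self-contained sketch (the frame layout, polynomial, and byte values are invented for illustration):

      def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
          """Bitwise CRC-8 over a byte string for one candidate generator polynomial."""
          crc = init
          for byte in data:
              crc ^= byte
              for _ in range(8):
                  crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
          return crc

      # Hypothetical captured frames: payload bytes followed by a 1-byte check field.
      frames = [bytes([0x10, 0x22, 0x05, 0xA3]), bytes([0x10, 0x23, 0x05, 0x96])]
      for f in frames:
          payload, check = f[:-1], f[-1]
          # A candidate polynomial is kept only if it reproduces every check byte.
          print(hex(check), "match" if crc8(payload) == check else "no match")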

  7. Automatic Compound Annotation from Mass Spectrometry Data Using MAGMa.

    Science.gov (United States)

    Ridder, Lars; van der Hooft, Justin J J; Verhoeven, Stefan

    2014-01-01

    The MAGMa software for automatic annotation of mass spectrometry based fragmentation data was applied to 16 MS/MS datasets of the CASMI 2013 contest. Eight solutions were submitted in category 1 (molecular formula assignments) and twelve in category 2 (molecular structure assignment). The MS/MS peaks of each challenge were matched with in silico generated substructures of candidate molecules from PubChem, resulting in penalty scores that were used for candidate ranking. In 6 of the 12 submitted solutions in category 2, the correct chemical structure obtained the best score, whereas 3 molecules were ranked outside the top 5. All top ranked molecular formulas submitted in category 1 were correct. In addition, we present MAGMa results generated retrospectively for the remaining challenges. Successful application of the MAGMa algorithm required inclusion of the relevant candidate molecules, application of the appropriate mass tolerance and a sufficient degree of in silico fragmentation of the candidate molecules. Furthermore, the effect of the exhaustiveness of the candidate lists and limitations of substructure based scoring are discussed.
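    The ranking step can be caricatured as follows: peaks that no in silico substructure explains accumulate penalty, and candidates are sorted by total penalty. Everything below (tolerances, masses, names) is a toy stand-in, not MAGMa's actual scoring.

      # Toy penalty-based candidate ranking for fragment annotation.
      PPM_TOL = 10e-6   # 10 ppm mass tolerance (illustrative)

      def penalty(peaks, substructure_masses):
          p = 0.0
          for mz, intensity in peaks:
              matched = any(abs(mz - m) <= mz * PPM_TOL for m in substructure_masses)
              if not matched:
                  p += intensity        # unexplained intensity counts against a candidate
          return p

      peaks = [(91.0542, 0.8), (105.0699, 0.5), (120.0808, 1.0)]   # hypothetical MS/MS
      candidates = {
          "candidate_A": [91.0542, 105.0699, 120.0808],
          "candidate_B": [77.0386, 120.0808],
      }
      ranked = sorted(candidates, key=lambda c: penalty(peaks, candidates[c]))
      print(ranked)     # lowest penalty (best explained spectrum) first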

  8. Effective synthesis of magnetic porous molecularly imprinted polymers for efficient and selective extraction of cinnamic acid from apple juices.

    Science.gov (United States)

    Shi, Shuyun; Fan, Dengxin; Xiang, Haiyan; Li, Huan

    2017-12-15

    An effective strategy was proposed to prepare novel magnetic porous molecularly imprinted polymers (MPMIPs) for highly selective extraction of cinnamic acid (CMA) from complex matrices. Characterization and the various parameters affecting adsorption and desorption behaviors were investigated. Results revealed that adsorption of CMA onto MPMIPs followed the Freundlich adsorption isotherm, with a maximum adsorption capacity of 4.35 mg/g, and pseudo-second-order reaction kinetics, with an equilibrium time of 60 min. Subsequently, MPMIPs were successfully used to selectively extract CMA from apple juice with satisfactory recovery (92.7-101.4%). Coupled with high-performance liquid chromatography and ultraviolet detection (HPLC-UV), the limit of detection (LOD) for CMA was 0.006 µg/mL, and the linear range (0.02-10 µg/mL) was wide, with a correlation coefficient of 0.9995. Finally, the CMA contents of two apple juices were determined as 0.132 and 0.120 µg/mL. The results indicate the superiority of MPMIPs for selective extraction. Copyright © 2017 Elsevier Ltd. All rights reserved.
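    For reference, the two models named in the record have standard textbook forms. With $q_e$ the equilibrium adsorption capacity, $C_e$ the equilibrium solution concentration, $q_t$ the capacity at time $t$, and $K_F$, $n$, $k_2$ the fitted constants:

      \[
      q_e = K_F \, C_e^{1/n} \qquad \text{(Freundlich isotherm)}
      \]
      \[
      \frac{t}{q_t} = \frac{1}{k_2 q_e^{2}} + \frac{t}{q_e} \qquad \text{(pseudo-second-order kinetics)}
      \]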

  9. Microprocessor controlled system for automatic and semi-automatic syntheses of radiopharmaceuticals

    International Nuclear Information System (INIS)

    Ruth, T.J.; Adam, M.J.; Morris, D.; Jivan, S.

    1986-01-01

    A computer-based system has been constructed to control the automatic synthesis of 2-deoxy-2-(18F)fluoro-D-glucose and is also being used in the development of an automatic synthesis of L-6-(18F)fluorodopa. (author)

  10. Chemical composition and molecular structure of polysaccharide-protein biopolymer from Durio zibethinus seed: extraction and purification process

    Directory of Open Access Journals (Sweden)

    Amid Bahareh

    2012-10-01

    Full Text Available Abstract Background The biological functions of natural biopolymers from plant sources depend on their chemical composition and molecular structure. In addition, the extraction and further processing conditions significantly influence the chemical and molecular structure of the plant biopolymer. The main objective of the present study was to characterize the chemical and molecular structure of a natural biopolymer from Durio zibethinus seed. Size-exclusion chromatography coupled to multi-angle laser light scattering (SEC-MALS) was applied to analyze the molecular weight (Mw), number average molecular weight (Mn), and polydispersity index (Mw/Mn). Results The most abundant monosaccharides in the carbohydrate composition of durian seed gum were galactose (48.6-59.9%), glucose (37.1-45.1%), arabinose (0.58-3.41%), and xylose (0.3-3.21%). The predominant fatty acids of the lipid fraction from the durian seed gum were palmitic acid (C16:0), palmitoleic acid (C16:1), stearic acid (C18:0), oleic acid (C18:1), linoleic acid (C18:2), and linolenic acid (C18:3). The most abundant amino acids of durian seed gum were: leucine (30.9-37.3%), lysine (6.04-8.36%), aspartic acid (6.10-7.19%), glycine (6.07-7.42%), alanine (5.24-6.14%), glutamic acid (5.57-7.09%), valine (4.5-5.50%), proline (3.87-4.81%), serine (4.39-5.18%), threonine (3.44-6.50%), isoleucine (3.30-4.07%), and phenylalanine (3.11-9.04%). Conclusion The presence of essential amino acids in the chemical structure of durian seed gum reinforces its nutritional value.

  11. Extracting Information from Multimedia Meeting Collections

    OpenAIRE

    Gatica-Perez, Daniel; Zhang, Dong; Bengio, Samy

    2005-01-01

    Multimedia meeting collections, composed of unedited audio and video streams, handwritten notes, slides, and electronic documents that jointly constitute a raw record of complex human interaction processes in the workplace, have attracted interest due to the increasing feasibility of recording them in large quantities, the opportunities for information access and retrieval applications derived from the automatic extraction of relevant meeting information, and the challenges that the extraction of such information poses.

  12. Dietary administration of scallion extract effectively inhibits colorectal tumor growth: cellular and molecular mechanisms in mice.

    Directory of Open Access Journals (Sweden)

    Palanisamy Arulselvan

    Full Text Available Colorectal cancer is a common malignancy and a leading cause of cancer death worldwide. Diet is known to play an important role in the etiology of colon cancer and dietary chemoprevention is receiving increasing attention for prevention and/or alternative treatment of colon cancers. Allium fistulosum L., commonly known as scallion, is popularly used as a spice or vegetable worldwide, and as a traditional medicine in Asian cultures for treating a variety of diseases. In this study we evaluated the possible beneficial effects of dietary scallion on chemoprevention of colon cancer using a mouse model of colon carcinoma (CT-26 cells subcutaneously inoculated into BALB/c mice). Tumor lysates were subjected to western blotting for analysis of key inflammatory markers, ELISA for analysis of cytokines, and immunohistochemistry for analysis of inflammatory markers. Metabolite profiles of scallion extracts were analyzed by LC-MS/MS. Scallion extracts, particularly the hot-water extract, orally fed to mice at 50 mg (dry weight)/kg body weight resulted in significant suppression of tumor growth and enhanced the survival rate of test mice. At the molecular level, scallion extracts inhibited the key inflammatory markers COX-2 and iNOS, and suppressed the expression of various cellular markers known to be involved in tumor apoptosis (apoptosis index), proliferation (cyclin D1 and c-Myc), angiogenesis (VEGF and HIF-1α), and tumor invasion (MMP-9 and ICAM-1), when compared with vehicle control-treated mice. Our findings may warrant further investigation of the use of common scallion as a chemopreventive dietary agent to lower the risk of colon cancer.

  13. Synergistic effect of dicarbollide anions in liquid-liquid extraction: a molecular dynamics study at the octanol-water interface.

    Science.gov (United States)

    Chevrot, G; Schurhammer, R; Wipff, G

    2007-04-28

    We report a molecular dynamics study of chlorinated cobalt bis(dicarbollide) anions [(B(9)C(2)H(8)Cl(3))(2)Co](-) ("CCD(-)") in octanol and at the octanol-water interface, with the main aim of understanding why these hydrophobic species act as strong synergists in assisted liquid-liquid cation extraction. Neat octanol is quite heterogeneous and is found to display dual solvation properties, allowing CCD(-), Cs(+) salts to be well solubilized in the form of diluted pairs or oligomers, without aggregation. At the aqueous interface, octanol behaves as an amphiphile, forming either monolayers or bilayers, depending on the initial state and confinement conditions. In biphasic octanol-water systems, CCD(-) anions are found to mainly partition to the organic phase, thereby attracting Cs(+) or even more hydrophilic counterions such as Eu(3+) into that phase. The remaining CCD(-) anions adsorb at the interface, but are less surface active than at the chloroform interface. Finally, we compare the interfacial behavior of the Eu(BTP)(3)(3+) complex in the absence and in the presence of CCD(-) anions and extractant molecules. It is found that when the CCD(-) anions are concentrated enough, the complex is extracted to the octanol phase. Otherwise, it is trapped at the interface, attracted by water. These results are compared to those obtained with chloroform as the organic phase and discussed in the context of the synergistic effect of CCD(-) in liquid-liquid extraction, pointing to the importance of the dual solvation properties of octanol and of the hydrophobic character of CCD(-) for synergistic extraction of cations.

  14. Automatic Camera Control

    DEFF Research Database (Denmark)

    Burelli, Paolo; Preuss, Mike

    2014-01-01

    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations, i.e. automatically controlling the virtual camera.

  15. Automatic feature design for optical character recognition using an evolutionary search procedure.

    Science.gov (United States)

    Stentiford, F W

    1985-03-01

    An automatic evolutionary search is applied to the problem of feature extraction in an OCR application. A performance measure based on feature independence is used to generate features which do not appear to suffer from peaking effects [17]. Features are extracted from a training set of 30 600 machine-printed 34-class alphanumeric characters derived from British mail. Classification results on the training set and a test set of 10 200 characters are reported for an increasing number of features. A 1.01 percent forced-decision error rate is obtained on the test data using 316 features. The hardware implementation should be cheap and fast to operate. The performance compares favorably with current low-cost OCR page readers.
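    The evolutionary loop behind such feature design can be caricatured as mutation plus selection over candidate features, scored by independence from the features already kept. The sketch below is a toy stand-in under those assumptions, not the paper's procedure.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.integers(0, 2, (500, 64))            # toy binary character images, flattened

      def response(mask):                          # a "feature" = a pixel-mask template
          return (X @ mask) / max(int(mask.sum()), 1)

      def independence(mask, kept):                # reward low correlation with kept features
          r = response(mask)
          return -max((abs(np.corrcoef(r, response(k))[0, 1]) for k in kept), default=0.0)

      kept = []
      for _ in range(10):                          # evolve 10 features, one at a time
          pop = [rng.integers(0, 2, 64) for _ in range(30)]
          for _ in range(20):                      # generations of selection + mutation
              pop.sort(key=lambda m: independence(m, kept), reverse=True)
              pop = pop[:10] + [m ^ (rng.random(64) < 0.05)   # 5% bit-flip mutation
                                for m in pop[:10] for _ in range(2)]
          kept.append(pop[0])
      print(len(kept), "features evolved")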

  16. Chemodosimeter-based fluorescent detection of L-cysteine after extracted by molecularly imprinted polymers.

    Science.gov (United States)

    Cai, Xiaoqiang; Li, Jinhua; Zhang, Zhong; Wang, Gang; Song, Xingliang; You, Jinmao; Chen, Lingxin

    2014-03-01

    A chemodosimeter-based fluorescent detection method coupled with molecularly imprinted polymer (MIP) extraction was developed for the determination of L-cysteine (L-Cys), combining the molecular imprinting technique with a fluorescent chemodosimeter. The MIPs, prepared by precipitation polymerization with L-Cys as the template, possessed a high specific surface area of 145 m(2)/g and good thermal stability without decomposition below 300 °C, and were successfully applied as an adsorbent with excellent selectivity for L-Cys over other amino acids; enantioselectivity was also demonstrated. A novel chemodosimeter, rhodamine B1, was synthesized for discriminating L-Cys from its structurally similar homocysteine and glutathione, as well as various possibly co-existing biospecies, in aqueous solutions, with notable fluorescence enhancement upon addition of L-Cys. As L-Cys was added at increasing concentrations, an emission band peaking at 580 nm appeared and significantly increased in fluorescence intensity, by which L-Cys could be sensed optically. A detection limit as low as 12.5 nM was obtained. Excellent linearity was found within the wide range of 0.05-50 μM (r=0.9996), and reasonable relative standard deviations ranging from 0.3% to 3.5% were attained. Features such as high selectivity, high sensitivity, easy operation and low cost make this MIP-based fluorometry potentially applicable for routine detection of trace L-Cys. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Automatic detection of subglacial lakes in radar sounder data acquired in Antarctica

    Science.gov (United States)

    Ilisei, Ana-Maria; Khodadadzadeh, Mahdi; Dalsasso, Emanuele; Bruzzone, Lorenzo

    2017-10-01

    Subglacial lakes decouple the ice sheet from the underlying bedrock, thus facilitating the sliding of the ice masses towards the borders of the continents and consequently raising the sea level. This has motivated increasing attention to the detection of subglacial lakes. So far, about 70% of the total number of subglacial lakes in Antarctica have been detected by analysing radargrams acquired by radar sounder (RS) instruments. Although the number of radargrams is expected to increase drastically, from both airborne and possible future Earth observation RS missions, currently the main approach to the detection of subglacial lakes in radargrams is visual interpretation. This approach is subjective and extremely time consuming, and thus difficult to apply to a large number of radargrams. In order to address the limitations of visual interpretation and to assist glaciologists in better understanding the relationship between the subglacial environment and the climate system, in this paper we propose a technique for the automatic detection of subglacial lakes. The main contribution of the proposed technique is the extraction of features for discriminating between lake and non-lake basal interfaces. In particular, we propose the extraction of features that locally capture the topography of the basal interface, and the shape and correlation of the basal waveforms. The extracted features are then given as input to a supervised binary classifier based on a Support Vector Machine to perform automatic subglacial lake detection. The effectiveness of the proposed method is proven both quantitatively and qualitatively by applying it to a large dataset acquired in East Antarctica by the MultiChannel Coherent Radar Depth Sounder.
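    Once the per-sample features (local basal topography, waveform shape, waveform correlation) are assembled into vectors, the supervised step is a standard binary SVM. A minimal scikit-learn sketch with placeholder data (the feature values and labels are invented):

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      X = np.random.rand(200, 3)            # placeholder feature vectors per basal sample
      y = np.random.randint(0, 2, 200)      # placeholder labels: 1 = lake, 0 = non-lake

      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
      clf.fit(X[:150], y[:150])             # train on a subset
      print("held-out accuracy:", clf.score(X[150:], y[150:]))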

  18. A novel automated device for rapid nucleic acid extraction utilizing a zigzag motion of magnetic silica beads

    International Nuclear Information System (INIS)

    Yamaguchi, Akemi; Matsuda, Kazuyuki; Uehara, Masayuki; Honda, Takayuki; Saito, Yasunori

    2016-01-01

    We report a novel automated device for nucleic acid extraction, which consists of a mechanical control system and a disposable cassette. The cassette is composed of a bottle, a capillary tube, and a chamber. After sample injection into the bottle, the sample is lysed, and nucleic acids are adsorbed on the surface of magnetic silica beads. These magnetic beads are transported and vibrated through the washing reagents in the capillary tube under the control of the mechanical control system; thus, the nucleic acid is purified without centrifugation. The purified nucleic acid is automatically extracted in 3 min for the polymerase chain reaction (PCR). The nucleic acid extraction depends on the transport speed and the vibration frequency of the magnetic beads, and optimizing these two parameters provided better PCR efficiency than the conventional manual procedure. There was no difference between the detection limits of our novel device and that of the conventional manual procedure. We have already developed the droplet-PCR machine, which can amplify and detect specific nucleic acids rapidly and automatically. Connecting the droplet-PCR machine to our novel automated extraction device enables PCR analysis within 15 min, and this system can be made available for point-of-care testing in clinics as well as general hospitals.
    Highlights:
    • Automatic nucleic acid extraction is performed in 3 min.
    • Zigzag motion of magnetic silica beads yields rapid and efficient extraction.
    • The present device provides better performance than the conventional procedure.

  19. Automatic Shadow Detection and Removal from a Single Image.

    Science.gov (United States)

    Khan, Salman H; Bennamoun, Mohammed; Sohel, Ferdous; Togneri, Roberto

    2016-03-01

    We present a framework to automatically detect and remove shadows in real world scenes from a single image. Previous works on shadow detection put a lot of effort in designing shadow variant and invariant hand-crafted features. In contrast, our framework automatically learns the most relevant features in a supervised manner using multiple convolutional deep neural networks (ConvNets). The features are learned at the super-pixel level and along the dominant boundaries in the image. The predicted posteriors based on the learned features are fed to a conditional random field model to generate smooth shadow masks. Using the detected shadow masks, we propose a Bayesian formulation to accurately extract shadow matte and subsequently remove shadows. The Bayesian formulation is based on a novel model which accurately models the shadow generation process in the umbra and penumbra regions. The model parameters are efficiently estimated using an iterative optimization procedure. Our proposed framework consistently performed better than the state-of-the-art on all major shadow databases collected under a variety of conditions.

  20. Automatic limb identification and sleeping parameters assessment for pressure ulcer prevention.

    Science.gov (United States)

    Baran Pouyan, Maziyar; Birjandtalab, Javad; Nourani, Mehrdad; Matthew Pompeo, M D

    2016-08-01

    Pressure ulcers (PUs) are common among vulnerable patients such as elderly, bedridden and diabetic. PUs are very painful for patients and costly for hospitals and nursing homes. Assessment of sleeping parameters on at-risk limbs is critical for ulcer prevention. An effective assessment depends on automatic identification and tracking of at-risk limbs. An accurate limb identification can be used to analyze the pressure distribution and assess risk for each limb. In this paper, we propose a graph-based clustering approach to extract the body limbs from the pressure data collected by a commercial pressure map system. A robust signature-based technique is employed to automatically label each limb. Finally, an assessment technique is applied to evaluate the experienced stress by each limb over time. The experimental results indicate high performance and more than 94% average accuracy of the proposed approach. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Automatic target classification of man-made objects in synthetic aperture radar images using Gabor wavelet and neural network

    Science.gov (United States)

    Vasuki, Perumal; Roomi, S. Mohamed Mansoor

    2013-01-01

    Processing of synthetic aperture radar (SAR) images has led to the development of automatic target classification approaches. These approaches help to classify individual and mass military ground vehicles. This work aims to develop an automatic target classification technique to classify military targets such as trucks, tanks, armored cars, cannons, and bulldozers. The proposed method consists of three stages: preprocessing, feature extraction, and neural network (NN) classification. The first stage removes speckle noise from a SAR image using a Frost filter and enhances the image by histogram equalization. The second stage uses a Gabor wavelet to extract the image features. The third stage classifies the target with an NN classifier using the image features. The proposed method performs better than counterparts such as the K-nearest neighbor (KNN) classifier on databases like moving and stationary target acquisition and recognition (MSTAR).
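    A minimal OpenCV sketch of the second stage, extracting Gabor responses as features (kernel parameters and the input file are illustrative, not those of the paper):

      import cv2
      import numpy as np

      img = cv2.imread("sar_chip.png", cv2.IMREAD_GRAYSCALE)   # hypothetical SAR chip
      img = cv2.equalizeHist(img)                              # histogram equalization

      features = []
      for theta in np.arange(0, np.pi, np.pi / 4):             # 4 orientations
          # (ksize, sigma, theta, lambda, gamma) chosen for illustration only
          kern = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
          resp = cv2.filter2D(img, cv2.CV_32F, kern)
          features.extend([resp.mean(), resp.std()])           # simple per-band statistics

      print(np.array(features))   # feature vector that would feed the NN classifier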

  2. Motor automaticity in Parkinson’s disease

    Science.gov (United States)

    Wu, Tao; Hallett, Mark; Chan, Piu

    2017-01-01

    Bradykinesia is the most important feature contributing to motor difficulties in Parkinson’s disease (PD). However, the pathophysiology underlying bradykinesia is not fully understood. One important aspect is that PD patients have difficulty in performing learned motor skills automatically, but this problem has been generally overlooked. Here we review motor automaticity associated motor deficits in PD, such as reduced arm swing, decreased stride length, freezing of gait, micrographia and reduced facial expression. Recent neuroimaging studies have revealed some neural mechanisms underlying impaired motor automaticity in PD, including less efficient neural coding of movement, failure to shift automated motor skills to the sensorimotor striatum, instability of the automatic mode within the striatum, and use of attentional control and/or compensatory efforts to execute movements usually performed automatically in healthy people. PD patients lose previously acquired automatic skills due to their impaired sensorimotor striatum, and have difficulty in acquiring new automatic skills or restoring lost motor skills. More investigations on the pathophysiology of motor automaticity, the effect of L-dopa or surgical treatments on automaticity, and the potential role of using measures of automaticity in early diagnosis of PD would be valuable. PMID:26102020

  3. Automatic control of a primary electric thrust subsystem

    Science.gov (United States)

    Macie, T. W.; Macmedan, M. L.

    1975-01-01

    A concept for automatic control of the thrust subsystem has been developed by JPL and participating NASA Centers. This paper reports on progress in implementing the concept at JPL. Control of the Thrust Subsystem (TSS) is performed by the spacecraft computer command subsystem, and telemetry data is extracted by the spacecraft flight data subsystem. The Data and Control Interface Unit, an element of the TSS, provides the interface with the individual elements of the TSS. The control philosophy and implementation guidelines are presented. Control requirements are listed, and the control mechanism, including the serial digital data intercommunication system, is outlined. The paper summarizes progress to Fall 1974.

  4. Automatic QRS complex detection using two-level convolutional neural network.

    Science.gov (United States)

    Xiang, Yande; Lin, Zhitao; Meng, Jianyi

    2018-01-29

    The QRS complex is the most noticeable feature in the electrocardiogram (ECG) signal; therefore, its detection is critical for ECG signal analysis. Existing detection methods largely depend on hand-crafted features and parameters, which may introduce significant computational complexity, especially in transform domains. In addition, fixed features and parameters are not suitable for detecting the various kinds of QRS complexes that arise under different circumstances. In this study, an accurate QRS detection method based on a 1-D convolutional neural network (CNN) is proposed. The CNN consists of object-level and part-level CNNs for automatically extracting ECG morphological features at different granularities. All the extracted morphological features are used by a multi-layer perceptron (MLP) for QRS complex detection. Additionally, a simple ECG signal preprocessing technique containing only a difference operation in the temporal domain is adopted. On the MIT-BIH arrhythmia (MIT-BIH-AR) database, the proposed detection method achieves an overall sensitivity Sen = 99.77%, positive predictivity rate PPR = 99.91%, and detection error rate DER = 0.32%. Performance is also evaluated under different signal-to-noise ratio (SNR) values. Compared with state-of-the-art QRS complex detection approaches, experimental results show that the proposed method achieves comparable accuracy.
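    A skeletal PyTorch rendering of the idea: two parallel 1-D convolutional branches (coarse and fine receptive fields) feeding an MLP head. The layer sizes are placeholders, not the paper's architecture.

      import torch
      import torch.nn as nn

      class TwoLevelQRSNet(nn.Module):
          """Object-level and part-level 1-D conv branches plus an MLP head (sketch)."""
          def __init__(self, win: int = 128):
              super().__init__()
              self.object_branch = nn.Sequential(   # coarse morphology
                  nn.Conv1d(1, 8, kernel_size=15, padding=7), nn.ReLU(), nn.MaxPool1d(4))
              self.part_branch = nn.Sequential(     # fine-grained morphology
                  nn.Conv1d(1, 8, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4))
              self.mlp = nn.Sequential(
                  nn.Flatten(), nn.Linear(2 * 8 * (win // 4), 32), nn.ReLU(),
                  nn.Linear(32, 2))                 # QRS vs non-QRS

          def forward(self, x):                     # x: (batch, 1, win)
              f = torch.cat([self.object_branch(x), self.part_branch(x)], dim=1)
              return self.mlp(f)

      # Difference-based preprocessing in the temporal domain, then a forward pass.
      sig = torch.randn(4, 1, 129)                  # placeholder ECG windows
      x = sig[:, :, 1:] - sig[:, :, :-1]            # first difference, length 128
      print(TwoLevelQRSNet()(x).shape)              # torch.Size([4, 2])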

  5. Automatic dirt trail analysis in dermoscopy images.

    Science.gov (United States)

    Cheng, Beibei; Joe Stanley, R; Stoecker, William V; Osterwise, Christopher T P; Stricklin, Sherea M; Hinton, Kristen A; Moss, Randy H; Oliviero, Margaret; Rabinovitz, Harold S

    2013-02-01

    Basal cell carcinoma (BCC) is the most common cancer in the US. Dermatoscopes are devices used by physicians to facilitate the early detection of these cancers based on the identification of skin lesion structures often specific to BCCs. One new lesion structure, referred to as dirt trails, has the appearance of dark gray, brown or black dots and clods of varying sizes distributed in elongated clusters with indistinct borders, often appearing as curvilinear trails. In this research, we explore a dirt trail detection and analysis algorithm for extracting, measuring, and characterizing dirt trails based on size, distribution, and color in dermoscopic skin lesion images. These dirt trails are then used to automatically discriminate BCC from benign skin lesions. For an experimental data set of 35 BCC images with dirt trails and 79 benign lesion images, a neural network-based classifier achieved an area of 0.902 under the receiver operating characteristic curve using a leave-one-out approach. Results obtained from this study show that automatic detection of dirt trails in dermoscopic images of BCC is feasible. This is important because of the large number of these skin cancers seen every year and the challenge of discovering these earlier with instrumentation. © 2011 John Wiley & Sons A/S.

  6. Comparable efficiency of different extraction protocols for wheat and rye prolamins

    Directory of Open Access Journals (Sweden)

    Peter Socha

    2016-01-01

    Full Text Available The identification and quantification of cereal storage proteins is of interest to many researchers. Their structural and functional properties are usually affected by the way they are extracted. The efficiency of the extraction process depends on the cereal source and the working conditions. Here, we describe various commonly used extraction protocols differing in extraction conditions (pre-extraction of albumins/globulins, sequential extraction of individual protein fractions or co-extraction of gluten proteins, heating or non-heating, reducing or non-reducing conditions). The total protein content of all fractions extracted from commercially available wheat and rye flours was measured by the Bradford method. Tris-Tricine SDS-PAGE was used to determine the molecular weights of wheat gliadins, rye secalins and high-molecular-weight glutelins, which are the main triggering factors causing celiac disease. Moreover, we were able to distinguish individual subunits (α/β-, γ-, ω-gliadins and 40k-γ-, 75k-γ-, ω-secalins) of wheat/rye prolamins. Generally, the modified extraction protocols were more effective than the classical Osborne procedure and yielded higher protein content in all protein fractions. Bradford measurement led to underestimated results in three extraction procedures, while all protein fractions were clearly identified on SDS-PAGE gels. Co-extraction of gluten proteins resulted in the appearance of both the low-molecular-weight fractions (wheat gliadins and rye secalins) and the high-molecular-weight glutelins, which means that it is not necessary to extract gluten proteins separately. Two of the three extraction protocols showed high technical reproducibility, with coefficients of variation of less than 20%. A carefully optimized extraction protocol can be advantageous for further analyses of cereal prolamins.

  7. KRAS detection in colonic tumors by DNA extraction from FTA paper: the molecular touch-prep.

    Science.gov (United States)

    Petras, Melissa L; Lefferts, Joel A; Ward, Brian P; Suriawinata, Arief A; Tsongalis, Gregory J

    2011-12-01

    DNA isolated from formalin-fixed paraffin-embedded (FFPE) tissue is usually more degraded and contains more polymerase chain reaction (PCR) inhibitors than DNA isolated from nonfixed tissue. In addition, the tumor size and cellular heterogeneity found in tissue sections can often impact testing for molecular biomarkers. As a potential remedy to this situation, we evaluated the use of Whatman FTA paper cards for collection of colorectal tumor samples before tissue fixation and for isolation of DNA for use in a real-time PCR-based KRAS mutation assay. Eleven colon tumor samples were collected by making a cut into the fresh tumor and applying the Whatman FTA paper to the cut surface. Matched FFPE tissue blocks from these tumors were also collected for comparison. KRAS mutation analysis was carried out using the Applied Biosystems 7500 Fast Real-time PCR System using 7 independent custom TaqMan PCR assays. Of the 11 colon tumors sampled, 6 were positive for KRAS mutations in both the Whatman FTA paper preparations and corresponding FFPE samples. Whatman FTA paper cards for collection of colorectal tumor samples before tissue fixation and for isolation of DNA have many advantages including ease of use, intrinsic antimicrobial properties, long storage potential (stability of DNA over time), and a faster turnaround time for results. Extracted DNA should be suitable for most molecular diagnostic assays that use PCR techniques. This novel means of DNA preservation from surgical specimens would benefit from additional study and validation as a dependable and practical technique to preserve specimens for molecular testing.

  8. Water-compatible dummy molecularly imprinted resin prepared in aqueous solution for green miniaturized solid-phase extraction of plant growth regulators.

    Science.gov (United States)

    Wang, Mingyu; Chang, Xiaochen; Wu, Xingyu; Yan, Hongyuan; Qiao, Fengxia

    2016-08-05

    A water-compatible dummy molecularly imprinted resin (MIR) was synthesized in water using melamine, urea, and formaldehyde as hydrophilic monomers for co-polycondensation. A triblock copolymer (PEO-PPO-PEO, P123) was used as a porogen to dredge the network structure of the MIR, and N-(1-naphthyl)ethylenediamine dihydrochloride, which has a similar shape and size to the target analytes, served as the dummy template for molecular imprinting. The obtained MIR was used as the adsorbent in a green miniaturized solid-phase extraction (MIR-mini-SPE) of plant growth regulators, and no organic solvent was used in the entire MIR-mini-SPE procedure. The calibration linearity of the MIR-mini-SPE-HPLC method was obtained in the range 5-250 ng mL(-1) for IAA, IPA, IBA, and NAA, with correlation coefficients (r) ≥ 0.9998. Recoveries at three spike levels were in the range of 87.6-100.0% for coconut juice, with relative standard deviations ≤ 8.1%. The MIR-mini-SPE method possesses the advantages of environmental friendliness, simple operation, and high efficiency, so it has the potential to serve as a green pretreatment strategy for the extraction of trace analytes in aqueous samples. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. ZnO nanorod array solid phase micro-extraction fiber coating: fabrication and extraction capability

    International Nuclear Information System (INIS)

    Wang Dan; Zhang Zhuomin; Li Tiemei; Zhang Lan; Chen Guonan; Luo Lin

    2009-01-01

    In this paper, a ZnO nanorod array has been introduced as a coating for headspace solid-phase micro-extraction (HSSPME). The coating shows good extraction capability for volatile organic compounds (VOCs), using BTEX as standards, and can be considered suitable for sampling trace, small-molecule VOC targets. In comparison with a randomly oriented ZnO nanorod HSSPME coating, the ZnO nanorod array HSSPME fiber coating shows better extraction capability, which is attributed to the nanorod array structure of the coating. This novel nanorod array coating also shows good extraction selectivity toward 1-propanethiol.

  10. Intelligent keyframe extraction for video printing

    Science.gov (United States)

    Zhang, Tong

    2004-10-01

    Nowadays most digital cameras have the functionality of taking short video clips, with the length of video ranging from several seconds to a couple of minutes. The purpose of this research is to develop an algorithm which extracts an optimal set of keyframes from each short video clip so that the user could obtain proper video frames to print out. In current video printing systems, keyframes are normally obtained by evenly sampling the video clip over time. Such an approach, however, may not reflect highlights or regions of interest in the video. Keyframes derived in this way may also be improper for video printing in terms of either content or image quality. In this paper, we present an intelligent keyframe extraction approach to derive an improved keyframe set by performing semantic analysis of the video content. For a video clip, a number of video and audio features are analyzed to first generate a candidate keyframe set. These features include accumulative color histogram and color layout differences, camera motion estimation, moving object tracking, face detection and audio event detection. Then, the candidate keyframes are clustered and evaluated to obtain a final keyframe set. The objective is to automatically generate a limited number of keyframes to show different views of the scene; to show different people and their actions in the scene; and to tell the story in the video shot. Moreover, frame extraction for video printing, which is a rather subjective problem, is considered in this work for the first time, and a semi-automatic approach is proposed.
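    The candidate-generation step based on accumulated color-histogram differences can be sketched in a few lines of OpenCV (the file name and threshold are made up):

      import cv2

      cap = cv2.VideoCapture("clip.mp4")            # hypothetical short video clip
      prev_hist, candidates, idx = None, [], 0
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
          hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
          cv2.normalize(hist, hist)
          # Compare against the last accepted keyframe; a large color change
          # marks a new candidate keyframe.
          if prev_hist is None or \
             cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > 0.3:
              candidates.append(idx)
              prev_hist = hist
          idx += 1
      cap.release()
      print(candidates)   # frame indices to cluster and evaluate further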

  11. Image-based automatic recognition of larvae

    Science.gov (United States)

    Sang, Ru; Yu, Guiying; Fan, Weijun; Guo, Tiantai

    2010-08-01

    To date, quarantine pest recognition research has mainly targeted imagoes (adult insects). However, pests in their larval stage are latent, and larvae spread abroad easily with the circulation of agricultural and forest products. This paper presents larvae as new research objects, recognized by means of machine vision, image processing and pattern recognition. More visual information is retained and the recognition rate is improved when color image segmentation is applied to images of larvae. Owing to its affine invariance, perspective invariance and brightness invariance, the scale-invariant feature transform (SIFT) is adopted for feature extraction. A neural network algorithm is utilized for pattern recognition, and automatic identification of larvae images is successfully achieved with satisfactory results.

  12. Automatic Mosaicking of Satellite Imagery Considering the Clouds

    Science.gov (United States)

    Kang, Yifei; Pan, Li; Chen, Qi; Zhang, Tong; Zhang, Shasha; Liu, Zhang

    2016-06-01

    With the rapid development of high resolution remote sensing for earth observation technology, satellite imagery is widely used in the fields of resource investigation, environment protection, and agricultural research. Image mosaicking is an important part of satellite imagery production. However, the existence of clouds leads to lots of disadvantages for automatic image mosaicking, mainly in two aspects: 1) Image blurring may be caused during the process of image dodging, 2) Cloudy areas may be passed through by automatically generated seamlines. To address these problems, an automatic mosaicking method is proposed for cloudy satellite imagery in this paper. Firstly, modified Otsu thresholding and morphological processing are employed to extract cloudy areas and obtain the percentage of cloud cover. Then, cloud detection results are used to optimize the process of dodging and mosaicking. Thus, the mosaic image can be combined with more clear-sky areas instead of cloudy areas. Besides, clear-sky areas will be clear and distortionless. The Chinese GF-1 wide-field-of-view orthoimages are employed as experimental data. The performance of the proposed approach is evaluated in four aspects: the effect of cloud detection, the sharpness of clear-sky areas, the rationality of seamlines and efficiency. The evaluation results demonstrated that the mosaic image obtained by our method has fewer clouds, better internal color consistency and better visual clarity compared with that obtained by traditional method. The time consumed by the proposed method for 17 scenes of GF-1 orthoimages is within 4 hours on a desktop computer. The efficiency can meet the general production requirements for massive satellite imagery.
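    The cloud-extraction step, Otsu thresholding followed by morphological clean-up, can be sketched with OpenCV (the input file and kernel size are placeholders, and plain Otsu stands in for the modified version used in the paper):

      import cv2
      import numpy as np

      gray = cv2.imread("gf1_scene.tif", cv2.IMREAD_GRAYSCALE)   # hypothetical orthoimage
      # Otsu picks the global threshold; bright pixels become cloud candidates.
      _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
      # Opening removes isolated bright speckle; closing fills small holes in clouds.
      kernel = np.ones((7, 7), np.uint8)
      mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
      mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
      cloud_percent = 100.0 * np.count_nonzero(mask) / mask.size
      print(f"cloud cover: {cloud_percent:.1f}%")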

  13. Chelating resin immobilizing carboxymethylated polyethyleneimine for selective solid-phase extraction of trace elements: Effect of the molecular weight of polyethyleneimine and its carboxymethylation rate.

    Science.gov (United States)

    Kagaya, Shigehiro; Kajiwara, Takehiro; Gemmei-Ide, Makoto; Kamichatani, Waka; Inoue, Yoshinori

    2016-01-15

    The effect of the molecular weight of polyethyleneimine (PEI), defined as a compound having two or more ethyleneamine units, and of its carboxymethylation rate (CM/N), represented by the ratio of ion-exchange capacity to the amount of N on the resin, on the selective solid-phase extraction ability of the chelating resin immobilizing carboxymethylated (CM) PEI was investigated. The chelating resins (24 types) were prepared by immobilization of diethylenetriamine, triethylenetetramine, tetraethylenepentamine, pentaethylenehexamine, PEI300 (MW=ca. 300), and PEI600 (MW=ca. 600) on methacrylate resins, followed by carboxymethylation with various amounts of sodium monochloroacetate. When resins with approximately the same CM/N ratio (0.242-0.271) were used, the recovery of Cd, Co, Cu, Fe, Ni, Pb, Ti, Zn, and alkaline earth elements increased with increasing the molecular weight of PEIs under acidic and weakly acidic conditions; however, the extraction behavior of Mo and V was only slightly affected. This was probably due to the increase in N content of the resin, resulting in an increase in carboxylic acid groups; the difference in the molecular weight of PEIs immobilized on the resin exerts an insignificant influence on the selective extraction ability. The CM/N ratio considerably affected the extraction behavior for various elements. Under acidic and neutral conditions, the recovery of Cd, Co, Cu, Fe, Ni, Pb, Ti, and Zn increased with increasing CM/N values. However, under these conditions, the recovery of alkaline earth elements was considerably low when a resin with low CM/N ratio was used. This is presumably attributed to the different stability constants of the complexes of these elements with aminocarboxylic acids and amines, and to the electrostatic repulsion between the elements and the protonated amino groups in the CM-PEI. The recovery of Mo and V decreased or varied with increasing CM/N values, suggesting that the extraction of these elements occurred mainly

  14. RESEARCH ON REMOTE SENSING GEOLOGICAL INFORMATION EXTRACTION BASED ON OBJECT ORIENTED CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    H. Gao

    2018-04-01

    Full Text Available Northern Tibet belongs to the sub-frigid arid climate zone of the plateau. It is rarely visited by people and geological working conditions are very poor. However, the stratum exposures are good and human interference is very small. Therefore, research on the automatic classification and extraction of remote sensing geological information there has typical significance and good application prospects. Based on object-oriented classification in northern Tibet, using Worldview2 high-resolution remote sensing data combined with tectonic information and image enhancement, the lithological spectral features, shape features, spatial locations and topological relations of various kinds of geological information were mined. By setting thresholds in a hierarchical classification, eight kinds of geological information were classified and extracted. An accuracy analysis against existing geological maps shows that the overall accuracy reached 87.8561 %, indicating that the object-oriented method is effective and feasible for this study area and provides a new idea for the automatic extraction of remote sensing geological information.

  15. A contextual image segmentation system using a priori information for automatic data classification in nuclear physics

    International Nuclear Information System (INIS)

    Benkirane, A.; Auger, G.; Chbihi, A.; Bloyet, D.; Plagnol, E.

    1994-01-01

    This paper presents an original approach to solve an automatic data classification problem by means of image processing techniques. The classification is achieved using image segmentation techniques for extracting the meaningful classes. Two types of information are merged for this purpose: the information contained in experimental images and a priori information derived from underlying physics (and adapted to image segmentation problem). This data fusion is widely used at different stages of the segmentation process. This approach yields interesting results in terms of segmentation performances, even in very noisy cases. Satisfactory classification results are obtained in cases where more ''classical'' automatic data classification methods fail. (authors). 25 refs., 14 figs., 1 append

  16. A contextual image segmentation system using a priori information for automatic data classification in nuclear physics

    Energy Technology Data Exchange (ETDEWEB)

    Benkirane, A; Auger, G; Chbihi, A [Grand Accelerateur National d'Ions Lourds (GANIL), 14 - Caen (France); Bloyet, D [Caen Univ., 14 (France); Plagnol, E [Paris-11 Univ., 91 - Orsay (France). Inst. de Physique Nucleaire

    1994-12-31

    This paper presents an original approach to solve an automatic data classification problem by means of image processing techniques. The classification is achieved using image segmentation techniques for extracting the meaningful classes. Two types of information are merged for this purpose: the information contained in experimental images and a priori information derived from underlying physics (and adapted to image segmentation problem). This data fusion is widely used at different stages of the segmentation process. This approach yields interesting results in terms of segmentation performances, even in very noisy cases. Satisfactory classification results are obtained in cases where more "classical" automatic data classification methods fail. (authors). 25 refs., 14 figs., 1 append.

  17. AN AUTOMATIC OPTICAL AND SAR IMAGE REGISTRATION METHOD USING ITERATIVE MULTI-LEVEL AND REFINEMENT MODEL

    Directory of Open Access Journals (Sweden)

    C. Xu

    2016-06-01

    Full Text Available Automatic image registration is a vital yet challenging task, particularly for multi-sensor remote sensing images. Given the diversity of the data, it is unlikely that a single registration algorithm or a single image feature will work satisfactorily for all applications. Focusing on this issue, the main contribution of this paper is an automatic optical-to-SAR image registration method using an iterative multi-level and refinement model. Firstly, a multi-level coarse-to-fine registration strategy is presented: visual saliency features are used to acquire a coarse registration, specific area and line features are then used to refine the registration result, and after that sub-pixel matching is applied using a KNN graph. Secondly, an iterative strategy that involves adaptive parameter adjustment for re-extracting and re-matching features is presented. Considering that almost all feature-based registration methods rely on feature extraction results, the iterative strategy improves the robustness of feature matching, and all parameters can be automatically and adaptively adjusted in the iterative procedure. Thirdly, a uniform level set segmentation model for optical and SAR images is presented to segment conjugate features, and a Voronoi diagram is introduced into spectral point matching (VSPM) to further enhance the matching accuracy between the two sets of matching points. Experimental results show that the proposed method can effectively and robustly generate sufficient, reliable point pairs and provide accurate registration.

  18. Arrhythmia Classification Based on Multi-Domain Feature Extraction for an ECG Recognition System

    Directory of Open Access Journals (Sweden)

    Hongqiang Li

    2016-10-01

    Full Text Available Automatic recognition of arrhythmias is particularly important in the diagnosis of heart diseases. This study presents an electrocardiogram (ECG recognition system based on multi-domain feature extraction to classify ECG beats. An improved wavelet threshold method for ECG signal pre-processing is applied to remove noise interference. A novel multi-domain feature extraction method is proposed; this method employs kernel-independent component analysis in nonlinear feature extraction and uses discrete wavelet transform to extract frequency domain features. The proposed system utilises a support vector machine classifier optimized with a genetic algorithm to recognize different types of heartbeats. An ECG acquisition experimental platform, in which ECG beats are collected as ECG data for classification, is constructed to demonstrate the effectiveness of the system in ECG beat classification. The presented system, when applied to the MIT-BIH arrhythmia database, achieves a high classification accuracy of 98.8%. Experimental results based on the ECG acquisition experimental platform show that the system obtains a satisfactory classification accuracy of 97.3% and is able to classify ECG beats efficiently for the automatic identification of cardiac arrhythmias.
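    The frequency-domain part of such a multi-domain pipeline can be sketched with PyWavelets; the wavelet choice and decomposition depth below are assumptions, not the paper's settings.

      import numpy as np
      import pywt

      beat = np.random.randn(256)                  # placeholder ECG beat segment
      # Discrete wavelet transform: one approximation + several detail subbands.
      coeffs = pywt.wavedec(beat, "db4", level=4)
      # Simple per-subband statistics as frequency-domain features.
      features = np.array([f(c) for c in coeffs for f in (np.mean, np.std)])
      print(features.shape)                        # vector that would feed the SVM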

  19. Technical note: New applications for on-line automated solid phase extraction

    OpenAIRE

    MacFarlane, John D.

    1997-01-01

    This technical note explains the disadvantages of manual solid phase extraction (SPE) techniques and the benefits to be gained with automatic systems. The note reports on a number of general and highly specific applications using the Sample Preparation Unit OSP-2A.

  20. Automatic Photoelectric Telescope Service

    International Nuclear Information System (INIS)

    Genet, R.M.; Boyd, L.J.; Kissell, K.E.; Crawford, D.L.; Hall, D.S.; BDM Corp., McLean, VA; Kitt Peak National Observatory, Tucson, AZ; Dyer Observatory, Nashville, TN)

    1987-01-01

    Automatic observatories have the potential of gathering sizable amounts of high-quality astronomical data at low cost. The Automatic Photoelectric Telescope Service (APT Service) has realized this potential and is routinely making photometric observations of a large number of variable stars. However, without observers to provide on-site monitoring, it was necessary to incorporate special quality checks into the operation of the APT Service at its multiple automatic telescope installation on Mount Hopkins. 18 references

  1. Automatic imitation: A meta-analysis.

    Science.gov (United States)

    Cracco, Emiel; Bardi, Lara; Desmet, Charlotte; Genschow, Oliver; Rigoni, Davide; De Coster, Lize; Radkova, Ina; Deschrijver, Eliane; Brass, Marcel

    2018-05-01

    Automatic imitation is the finding that movement execution is facilitated by compatible and impeded by incompatible observed movements. In the past 15 years, automatic imitation has been studied to understand the relation between perception and action in social interaction. Although research on this topic started in cognitive science, interest quickly spread to related disciplines such as social psychology, clinical psychology, and neuroscience. However, important theoretical questions have remained unanswered. Therefore, in the present meta-analysis, we evaluated seven key questions on automatic imitation. The results, based on 161 studies containing 226 experiments, revealed an overall effect size of gz = 0.95, 95% CI [0.88, 1.02]. Moderator analyses identified automatic imitation as a flexible, largely automatic process that is driven by movement and effector compatibility, but is also influenced by spatial compatibility. Automatic imitation was found to be stronger for forced choice tasks than for simple response tasks, for human agents than for nonhuman agents, and for goalless actions than for goal-directed actions. However, it was not modulated by more subtle factors such as animacy beliefs, motion profiles, or visual perspective. Finally, there was no evidence for a relation between automatic imitation and either empathy or autism. Among other things, these findings point toward actor-imitator similarity as a crucial modulator of automatic imitation and challenge the view that imitative tendencies are an indicator of social functioning. The current meta-analysis has important theoretical implications and sheds light on longstanding controversies in the literature on automatic imitation and related domains. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Automatic tissue image segmentation based on image processing and deep learning

    Science.gov (United States)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in multimodality imaging, especially in fusing structural images offered by CT and MRI with functional images collected by optical or other novel imaging technologies. In addition, when incorporated with a 3D light transport simulation method, image segmentation provides detailed structural descriptions for quantitative visualization of therapeutic light distribution in the human body. Here we used image enhancement, morphological operators, and morphometry methods to extract accurate contours of different tissues such as skull, cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM) in 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic segmentation of the images in a deep-learning manner. We also introduced parallel computing. These approaches greatly reduced the processing time compared to manual and semi-automatic segmentation, which is of great importance for improving speed and accuracy as more and more samples are learned. Our results can be used as criteria when diagnosing diseases such as cerebral atrophy, which is caused by pathological changes in gray matter or white matter. We demonstrated the great potential of combining such image processing and deep learning for automatic tissue image segmentation in personalized medicine, especially in monitoring and treatment.

  3. TCM grammar systems: an approach to aid the interpretation of the molecular interactions in Chinese herbal medicine.

    Science.gov (United States)

    Yan, Jing; Wang, Yun; Luo, Si-Jun; Qiao, Yan-Jiang

    2011-09-01

    Interpreting the molecular interactions in Chinese herbal medicine will help in understanding the molecular mechanisms of Traditional Chinese medicines (TCM) and in predicting new pharmacological effects of TCM. Yet we still lack a method that can integrate the relevant pieces of parsed knowledge about TCM. To solve this problem, a new method named TCM grammar systems is proposed in the present article. The possibility of studying the interactions of TCM at the molecular level using TCM grammar systems was explored using Herba Ephedrae Decoction (HED) as an example. A platform was established based on the formalism of TCM grammar systems, from which the molecular network related to HED can be extracted automatically. The molecular network indicates that the beta2 adrenergic receptor, the glucocorticoid receptor and interleukin-12 are the relatively important targets for the anti-anaphylactic asthma function of HED, and that this function is also related to suppressing the inflammation process. The results show the feasibility of using TCM grammar systems to interpret the molecular mechanism of TCM. Although the results obtained depend entirely on the database, the recombination of existing knowledge in this method provides new insight for interpreting the molecular mechanism of TCM. TCM grammar systems can aid the interpretation of the molecular interactions in TCM to some extent and might be useful for predicting new pharmacological effects of TCM. The method is an in silico technology; in association with experimental techniques, it will play an important role in understanding the molecular mechanisms of TCM. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  4. Magnetic molecularly imprinted polymer for the selective extraction of hesperetin from the dried pericarp of Citrus reticulata Blanco.

    Science.gov (United States)

    Wang, Dan-Dan; Gao, Die; Xu, Wan-Jun; Li, Fan; Yin, Man-Ni; Fu, Qi-Feng; Xia, Zhi-Ning

    2018-07-01

    In the present study, novel magnetic molecularly imprinted polymers for hesperetin were successfully prepared by a surface molecular imprinting method using functionalized Fe3O4 particles as the magnetic cores. Hesperetin as the template, N-isopropylacrylamide as the functional monomer, ethylene glycol dimethacrylate as the crosslinker, 2,2′-azobisisobutyronitrile as the initiator and acetonitrile-methanol (3:1, v/v) as the porogen were applied in the preparation process. Fourier transform infrared spectroscopy, scanning electron microscopy, transmission electron microscopy, X-ray diffraction and vibrating sample magnetometry were applied to characterize the magnetic molecularly imprinted polymers. The adsorption experiments indicated that the magnetic molecularly imprinted polymers exhibited highly selective recognition of hesperetin. The selectivity experiment indicated that the adsorption capacity and selectivity of the polymers for hesperetin were higher than those for luteolin, baicalein and ombuin. Furthermore, the magnetic molecularly imprinted polymers were employed as adsorbents for the extraction and enrichment of hesperetin from the dried pericarp of Citrus reticulata Blanco. The recoveries of hesperetin in the dried pericarp of Citrus reticulata Blanco ranged from 90.5% to 96.9%. A linear range of 0.15-110.72 µg/mL was obtained with a correlation coefficient greater than 0.9991. The limits of detection and quantification of the proposed method were 0.06 µg/mL and 0.15 µg/mL, respectively. Based on three replicate measurements, the intra-day RSD was 0.71% and the inter-day RSD was 2.31%. These results demonstrated that the prepared magnetic molecularly imprinted polymers are an effective material for the selective adsorption and enrichment of hesperetin from natural medicines, fruits and other such matrices. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Automatic Feature Detection, Description and Matching from Mobile Laser Scanning Data and Aerial Imagery

    Science.gov (United States)

    Hussnain, Zille; Oude Elberink, Sander; Vosselman, George

    2016-06-01

    In mobile laser scanning systems, the platform's position is measured by GNSS and IMU, which is often not reliable in urban areas. Consequently, the derived Mobile Laser Scanning Point Cloud (MLSPC) lacks the expected positioning reliability and accuracy. Many of the current solutions are either semi-automatic or unable to achieve pixel-level accuracy. We propose an automatic feature extraction method which involves utilizing corresponding aerial images as a reference data set. The proposed method comprises three steps: image feature detection, description and matching between corresponding patches of nadir aerial and MLSPC ortho images. In the data pre-processing step the MLSPC is patch-wise cropped and converted to ortho images. Furthermore, each aerial image patch covering the area of the corresponding MLSPC patch is also cropped from the aerial image. For feature detection, we implemented an adaptive variant of the Harris operator to automatically detect corner feature points on the vertices of road markings. In the feature description phase, we used the LATCH binary descriptor, which is robust to data from different sensors. For descriptor matching, we developed an outlier filtering technique, which exploits the arrangements of relative Euclidean distances and angles between corresponding sets of feature points. We found that the positioning accuracy of the computed correspondence reached pixel-level accuracy, where the image resolution is 12 cm. Furthermore, the developed approach is reliable when enough road markings are available in the data sets. We conclude that, in urban areas, the developed approach can reliably extract the features necessary to improve the MLSPC accuracy to pixel level.
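
    A compact sketch of the detect-describe-match pipeline is given below, using OpenCV's Harris-based corner detection and the LATCH binary descriptor (opencv-contrib-python assumed); the paper's adaptive Harris variant and its relative distance/angle outlier filter are not reproduced, and the input file names are hypothetical.

    ```python
    # Sketch of the detect/describe/match pipeline (assumes opencv-contrib-python
    # for LATCH; the adaptive Harris variant and the distance/angle outlier
    # filter from the paper are not reproduced).
    import cv2

    def harris_latch_match(ortho_patch, aerial_patch):
        latch = cv2.xfeatures2d.LATCH_create()
        kps, descs = [], []
        for img in (ortho_patch, aerial_patch):
            # Harris-based corner detection (stand-in for the adaptive variant)
            pts = cv2.goodFeaturesToTrack(img, maxCorners=500, qualityLevel=0.01,
                                          minDistance=5, useHarrisDetector=True)
            kp = [cv2.KeyPoint(float(x), float(y), 31) for [[x, y]] in pts]
            kp, desc = latch.compute(img, kp)   # binary LATCH descriptors
            kps.append(kp); descs.append(desc)
        # Hamming-distance matching with cross-check as a first outlier filter
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        return sorted(matcher.match(descs[0], descs[1]), key=lambda m: m.distance)

    mlspc = cv2.imread("mlspc_ortho_patch.png", cv2.IMREAD_GRAYSCALE)   # hypothetical
    aerial = cv2.imread("aerial_patch.png", cv2.IMREAD_GRAYSCALE)       # hypothetical
    matches = harris_latch_match(mlspc, aerial)
    ```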

  6. Cause Information Extraction from Financial Articles Concerning Business Performance

    Science.gov (United States)

    Sakai, Hiroyuki; Masuyama, Shigeru

    We propose a method of extracting cause information from Japanese financial articles concerning business performance. Our method acquires cause information, e.g. "zidousya no uriage ga koutyou" (sales of cars were good). Cause information is useful for investors in selecting companies to invest in. Our method extracts cause information in the form of causal expressions by using statistical information and initial clue expressions automatically. Our method can extract causal expressions without predetermined patterns or complex hand-crafted rules, and is expected to be applicable to other tasks of acquiring phrases that have a particular meaning, not limited to cause information. We compared our method with our previous one, originally proposed for extracting phrases concerning traffic accident causes, and experimental results showed that our new method outperforms the previous one.

  7. Towards population reconstruction : extraction of family relationships from historical documents

    NARCIS (Netherlands)

    Efremova, I.; Montes Garcia, Alejandro; Zhang, J.; Calders, T.G.K.

    2015-01-01

    In this paper we present an approach for the automatic extraction of family relationships from a real-world collection of historical notary acts. We retrieve relationships such as husband - wife, parent - child, widow of, etc. We study two ways to deal with this problem. In our first approach, we

  8. Matrix molecularly imprinted mesoporous sol-gel sorbent for efficient solid-phase extraction of chloramphenicol from milk.

    Science.gov (United States)

    Samanidou, Victoria; Kehagia, Maria; Kabir, Abuzar; Furton, Kenneth G

    2016-03-31

    A highly selective and efficient chloramphenicol-imprinted sol-gel silica-based inorganic polymeric sorbent (sol-gel MIP) was synthesized via a matrix imprinting approach for the extraction of chloramphenicol in milk. Chloramphenicol was used as the template molecule, 3-aminopropyltriethoxysilane (3-APTES) and triethoxyphenylsilane (TEPS) as the functional precursors, tetramethyl orthosilicate (TMOS) as the cross-linker, isopropanol as the solvent/porogen, and HCl as the sol-gel catalyst. A non-imprinted sol-gel polymer (sol-gel NIP) was synthesized under identical conditions in the absence of template molecules for comparison purposes. Both synthesized materials were characterized by Scanning Electron Microscopy (SEM), Fourier Transform Infrared Spectroscopy (FT-IR) and nitrogen adsorption porosimetry, which unambiguously confirmed their significant structural and morphological differences. The synthesized MIP and NIP materials were evaluated as sorbents for molecularly imprinted solid phase extraction (MISPE) of chloramphenicol in milk. The effect of critical extraction parameters (flow rate, elution solvent, sample and eluent volume, selectivity coefficient, retention capacity) was studied in terms of retention and desorption of chloramphenicol. Competition and cross-reactivity tests proved that the sol-gel MIP sorbent possesses significantly higher specific retention and enrichment capacity for chloramphenicol than its non-imprinted analogue. The maximum imprinting factor (IF) was found to be 9.7, whereas the highest adsorption capacity of chloramphenicol by the sol-gel MIP was 23 mg/g. The sol-gel MIP was found to be adequately selective towards chloramphenicol to meet the minimum required performance limit (MRPL) of 0.3 μg/kg set forth by the European Commission after analysis by LC-MS, even without requiring the time-consuming solvent evaporation and sample reconstitution steps often considered an integral part of the solid phase extraction workflow. Intra and

  9. SPS Beam Steering for LHC Extraction

    Energy Technology Data Exchange (ETDEWEB)

    Gianfelice-Wendt, Eliana [Fermilab; Bartosik, Hannes [CERN; Cornelis, Karel [CERN; Norderhaug Drøsdal, Lene [CERN; Goddard, Brennan [CERN; Kain, Verena [CERN; Meddahi, Malika [CERN; Papaphilippou, Yannis [CERN; Wenninger, Jorg [CERN

    2014-07-01

    The CERN Super Proton Synchrotron accelerates beams for the Large Hadron Collider to 450 GeV. In addition it produces beams for fixed target facilities, which adds complexity to SPS operation. During the 2012-2013 run, drifts of the extracted beam trajectories were observed, and lengthy optimizations in the transfer lines were performed to reduce particle losses in the LHC. The observed trajectory drifts are consistent with the measured SPS orbit drifts at extraction. While extensive studies are going on to understand, and possibly suppress, the source of these SPS orbit drifts, the feasibility of automatic beam steering towards a “golden” orbit at the extraction septa, by means of the interlocked correctors, is also being investigated. The challenges and constraints related to the implementation of such a correction in the SPS are described. Simulation results are presented and a possible operational steering strategy is proposed.

  10. Determination of semicarbazide in fish by molecularly imprinted stir bar sorptive extraction coupled with high performance liquid chromatography.

    Science.gov (United States)

    Tang, Tang; Wei, Fangdi; Wang, Xu; Ma, Yujie; Song, Yueyue; Ma, Yunsu; Song, Quan; Xu, Guanhong; Cen, Yao; Hu, Qin

    2018-02-15

    A novel molecularly imprinted stir bar (MI-SB) for sorptive extraction of semicarbazide (SEM) was prepared in the present paper. The coating of the stir bar was characterized by scanning electron microscopy, Fourier-transform infrared spectroscopy, and dynamic and static adsorption tests. The saturated adsorption of the MI-SB was about 4 times that of the non-imprinted stir bar (NI-SB), and the selectivity of the MI-SB for SEM was much better than that of the NI-SB. A method to determine SEM was established by coupling MI-SB sorptive extraction with HPLC-UV. The linear range was 1-100 ng/mL for SEM with a correlation coefficient of 0.9985. The limit of detection was about 0.59 ng/mL, which is below the minimum required performance limit for SEM in meat products regulated by the European Union. The method was applied to the determination of SEM in fish samples with satisfactory results. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. AUTOMATIC SEGMENTATION OF BROADCAST AUDIO SIGNALS USING AUTO ASSOCIATIVE NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    P. Dhanalakshmi

    2010-12-01

    Full Text Available In this paper, we describe automatic segmentation methods for broadcast audio data. Today, digital audio applications are part of our everyday lives, and since there are more and more digital audio databases in place these days, the importance of effective management of audio databases has become prominent. Broadcast audio data is recorded from television and comprises various categories of audio signals. Efficient algorithms for segmenting the broadcast audio data into predefined categories are proposed. Audio features, namely linear prediction coefficients (LPC), linear prediction cepstral coefficients (LPCC), and Mel frequency cepstral coefficients (MFCC), are extracted to characterize the audio data. Auto Associative Neural Networks (AANNs) are used to segment the audio data into predefined categories using the extracted features. Experimental results indicate that the proposed algorithms can produce satisfactory results.
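
    As a rough illustration of the feature extraction stage, the sketch below computes MFCC and LPC features with librosa as a stand-in implementation; the file name, sampling rate and orders are illustrative, and the classification-by-reconstruction-error comment reflects common AANN usage rather than the paper's exact setup.

    ```python
    # Minimal sketch of the feature extraction stage (librosa assumed as a
    # stand-in for the paper's own implementation; parameters are illustrative).
    import librosa
    import numpy as np

    y, sr = librosa.load("broadcast_clip.wav", sr=16000)   # hypothetical file
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # MFCCs per frame
    lpc = librosa.lpc(y, order=12)                         # LPC coefficients
    # Per-frame feature vectors like these would be fed to one autoassociative
    # network per audio category; a frame is then assigned to the category
    # whose AANN reconstructs it with the least error.
    features = mfcc.T                                      # frames x coefficients
    print(features.shape, lpc.shape)
    ```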

  12. Automatic and objective oral cancer diagnosis by Raman spectroscopic detection of keratin with multivariate curve resolution analysis

    Science.gov (United States)

    Chen, Po-Hsiung; Shimada, Rintaro; Yabumoto, Sohshi; Okajima, Hajime; Ando, Masahiro; Chang, Chiou-Tzu; Lee, Li-Tzu; Wong, Yong-Kie; Chiou, Arthur; Hamaguchi, Hiro-O.

    2016-01-01

    We have developed an automatic and objective method for detecting human oral squamous cell carcinoma (OSCC) tissues with Raman microspectroscopy. We measure 196 independent Raman spectra from 196 different points of one oral tissue sample and globally analyze these spectra using Multivariate Curve Resolution (MCR) analysis. Discrimination of OSCC tissues is made automatically and objectively by spectral matching comparison of the MCR-decomposed Raman spectra with the standard Raman spectrum of keratin, a well-established molecular marker of OSCC. We use a total of 24 tissue samples: 10 OSCC and 10 normal tissues from the same 10 patients, plus 3 OSCC and 1 normal tissue from different patients. Following the newly developed protocol presented here, we have been able to detect OSCC tissues with 77 to 92% sensitivity (depending on how positivity is defined) and 100% specificity. The present approach lends itself to a reliable clinical diagnosis of OSCC substantiated by the “molecular fingerprint” of keratin.

  13. Automatic indexing, compiling and classification

    International Nuclear Information System (INIS)

    Andreewsky, Alexandre; Fluhr, Christian.

    1975-06-01

    A review of the principles of automatic indexing is followed by a comparison and summing-up of work by the authors and by a Soviet team from the Moscow INFORM-ELECTRO Institute. The mathematical and linguistic problems of the automatic building of thesauri and automatic classification are examined [fr]

  14. Sieve-based relation extraction of gene regulatory networks from biological literature.

    Science.gov (United States)

    Žitnik, Slavko; Žitnik, Marinka; Zupan, Blaž; Bajec, Marko

    2015-01-01

    Relation extraction is an essential procedure in literature mining. It focuses on extracting semantic relations between parts of text, called mentions. Biomedical literature includes an enormous amount of textual descriptions of biological entities, their interactions and the results of related experiments. To make them available in an explicit, computer-readable format, these relations were at first curated manually in databases. Manual curation was later replaced with automatic or semi-automatic tools with natural language processing capabilities. The current challenge is the development of information extraction procedures that can directly infer more complex relational structures, such as gene regulatory networks. We develop a computational approach for the extraction of gene regulatory networks from textual data. Our method is designed as a sieve-based system and uses linear-chain conditional random fields and rules for relation extraction. With this method we successfully extracted the sporulation gene regulation network of the bacterium Bacillus subtilis for the information extraction challenge at the BioNLP 2013 conference. To enable extraction of distant relations using first-order models, we transform the data into skip-mention sequences. We infer multiple models, each of which is able to extract a different relationship type. Following the shared task, we conducted additional analysis using different system settings, which reduced the reconstruction error of the bacterial sporulation network from 0.73 to 0.68, measured as the slot error rate between the predicted and the reference network. We observe that all relation extraction sieves contribute to the predictive performance of the proposed approach. Also, features constructed by considering mention words and their prefixes and suffixes are the most important features for higher extraction accuracy. Analysis of distances between different mention types in the text shows that our choice of transforming
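
    The following is a minimal sketch of one such sieve as a linear-chain CRF, using sklearn-crfsuite as a stand-in; the skip-mention transformation and the rule-based sieves are not reproduced, and the toy sentences and labels are invented. The word prefix/suffix features mirror the abstract's observation about the most important features.

    ```python
    # Minimal sketch of one relation-extraction sieve as a linear-chain CRF
    # (sklearn-crfsuite assumed as a stand-in; the skip-mention transformation
    # is not reproduced). Prefix/suffix features mirror the abstract's finding.
    import sklearn_crfsuite

    def token_features(sent, i):
        w = sent[i]
        return {"word": w.lower(), "prefix3": w[:3], "suffix3": w[-3:],
                "is_upper": w.isupper(), "is_title": w.istitle()}

    # Toy training data: token sequences with mention-role labels (invented)
    sents = [["sigK", "activates", "gerE"], ["spoIIID", "represses", "sigK"]]
    labels = [["AGENT", "O", "TARGET"], ["AGENT", "O", "TARGET"]]

    X = [[token_features(s, i) for i in range(len(s))] for s in sents]
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                               max_iterations=100)
    crf.fit(X, labels)
    print(crf.predict(X))
    ```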

  15. A novel airport extraction model based on saliency region detection for high spatial resolution remote sensing images

    Science.gov (United States)

    Lv, Wen; Zhang, Libao; Zhu, Yongchun

    2017-06-01

    The airport is one of the most crucial traffic facilities in military and civil fields. Automatic airport extraction in high spatial resolution remote sensing images has many applications, such as regional planning and military reconnaissance. Traditional airport extraction strategies are usually based on prior knowledge and locate the airport target by template matching and classification, which causes high computational complexity and large computing-resource costs for high spatial resolution remote sensing images. In this paper, we propose a novel automatic airport extraction model based on saliency region detection, airport runway extraction and adaptive threshold segmentation. For saliency region detection, we choose the frequency-tuned (FT) model for computing airport saliency using low-level color and luminance features; it is easy and fast to implement and provides full-resolution saliency maps. For airport runway extraction, the Hough transform is adopted to count the number of parallel line segments. For adaptive threshold segmentation, the Otsu threshold segmentation algorithm is used to obtain more accurate airport regions. The experimental results demonstrate that the proposed model outperforms existing saliency analysis models and shows good performance in airport extraction.
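
    A short sketch of the saliency and segmentation stages follows: the frequency-tuned model scores each pixel by the distance of its blurred Lab colour to the image mean, and Otsu's method binarizes the result (the runway Hough-transform stage is omitted; the input file is hypothetical).

    ```python
    # Sketch of frequency-tuned (FT) saliency plus Otsu segmentation
    # (direct implementation of Achanta's FT model; runway verification omitted).
    import cv2
    import numpy as np

    img = cv2.imread("airport_scene.png")                  # hypothetical file
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB).astype(np.float32)
    blurred = cv2.GaussianBlur(lab, (5, 5), 0)
    mean_vec = lab.reshape(-1, 3).mean(axis=0)             # mean Lab colour
    saliency = np.linalg.norm(blurred - mean_vec, axis=2)  # distance to mean
    saliency = cv2.normalize(saliency, None, 0, 255,
                             cv2.NORM_MINMAX).astype(np.uint8)
    # Otsu's threshold separates salient (candidate airport) regions
    _, mask = cv2.threshold(saliency, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ```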

  16. Comparison of extraction chromatography and a procedure based on the molecular recognition method as separation methods in the determination of neptunium and plutonium radionuclides

    International Nuclear Information System (INIS)

    Strisovska, Jana; Galanda, Dusan; Drabova, Veronika; Kuruc, Jozef

    2012-01-01

    The potential of various types of sorbents for the separation of plutonium and neptunium radionuclides was examined. Extraction chromatography and a procedure based on the molecular recognition method were used for the separation. The suitability of the various sorbent types and brands for this purpose was determined. (orig.)

  17. Preparation and evaluation of a novel molecularly imprinted polymer coating for selective extraction of indomethacin from biological samples by electrochemically controlled in-tube solid phase microextraction

    Energy Technology Data Exchange (ETDEWEB)

    Asiabi, Hamid [Department of Chemistry, Tarbiat Modares University, P.O. Box 14115-175, Tehran (Iran, Islamic Republic of); Yamini, Yadollah, E-mail: yyamini@modares.ac.ir [Department of Chemistry, Tarbiat Modares University, P.O. Box 14115-175, Tehran (Iran, Islamic Republic of); Seidi, Shahram; Ghahramanifard, Fazel [Department of Analytical Chemistry, Faculty of Chemistry, K.N. Toosi University of Technology, Tehran (Iran, Islamic Republic of)

    2016-03-24

    In the present work, an automated on-line electrochemically controlled in-tube solid-phase microextraction (EC-in-tube SPME) method coupled with HPLC-UV was developed for the selective extraction and preconcentration of indomethacin as a model analyte in biological samples. Applying an electrical potential can improve the extraction efficiency and provide more convenient manipulation of different properties of the extraction system, including selectivity, clean-up, rate, and efficiency. To further enhance the selectivity and applicability of this method, a novel molecularly imprinted polymer-coated tube was prepared and applied for the extraction of indomethacin. For this purpose, a nanostructured copolymer coating consisting of polypyrrole doped with ethylene glycol dimethacrylate was prepared on the inner surface of a stainless-steel tube by electrochemical synthesis. The characteristics and application of the tubes were investigated. Electron microscopy revealed a cross-linked porous surface, and the average thickness of the MIP coating was 45 μm. Compared with the non-imprinted polymer-coated tubes, the molecularly imprinted coated tube showed special selectivity for indomethacin. Moreover, stable and reproducible responses were obtained without being considerably influenced by interferences commonly existing in biological samples. Under the optimal conditions, the limits of detection were in the range of 0.07–2.0 μg L−1 in different matrices. This method showed good linearity for indomethacin in the range of 0.1–200 μg L−1, with coefficients of determination better than 0.996. The inter- and intra-assay precisions (RSD%, n = 3) were respectively in the range of 3.5–8.4% and 2.3–7.6% at three concentration levels of 7, 70 and 150 μg L−1. The results showed that the proposed method can be successfully applied for the selective analysis of indomethacin in biological samples.

  18. Sequential injection system incorporating a micro extraction column for automatic fractionation of metal ions in solid samples

    DEFF Research Database (Denmark)

    Chomchoei, Roongrat; Miró, Manuel; Hansen, Elo Harald

    2005-01-01

    as to the kinetics of the leaching processes and chemical associations in different soil geological phases. Special attention is also paid to the potential of the microcolumn flowing technique for automatic processing of solid materials with variable homogeneity, as demonstrated with the sewage-amended CRM483 soil...

  19. Automatic inspection of textured surfaces by support vector machines

    Science.gov (United States)

    Jahanbin, Sina; Bovik, Alan C.; Pérez, Eduardo; Nair, Dinesh

    2009-08-01

    Automatic inspection of manufactured products with natural looking textures is a challenging task. Products such as tiles, textile, leather, and lumber project image textures that cannot be modeled as periodic or otherwise regular; therefore, a stochastic modeling of local intensity distribution is required. An inspection system to replace human inspectors should be flexible in detecting flaws such as scratches, cracks, and stains occurring in various shapes and sizes that have never been seen before. A computer vision algorithm is proposed in this paper that extracts local statistical features from grey-level texture images decomposed with wavelet frames into subbands of various orientations and scales. The local features extracted are second order statistics derived from grey-level co-occurrence matrices. Subsequently, a support vector machine (SVM) classifier is trained to learn a general description of normal texture from defect-free samples. This algorithm is implemented in LabVIEW and is capable of processing natural texture images in real-time.
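
    A minimal sketch of this scheme is given below: GLCM-based second-order statistics per patch, with a one-class SVM trained on defect-free patches only; the wavelet-frame subband decomposition is omitted for brevity and the training patches are synthetic stand-ins.

    ```python
    # Sketch of the defect-detection idea: GLCM second-order statistics per
    # patch, one-class SVM trained on defect-free patches only (the paper's
    # wavelet-frame subband decomposition is omitted for brevity).
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.svm import OneClassSVM

    def glcm_features(patch):
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        props = ["contrast", "homogeneity", "energy", "correlation"]
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])

    rng = np.random.default_rng(0)
    normal = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(50)]
    X_train = np.array([glcm_features(p) for p in normal])
    svm = OneClassSVM(nu=0.05, gamma="scale").fit(X_train)  # learns "normal"
    is_defect = svm.predict(X_train) == -1                  # -1 flags outliers
    ```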

  20. Sleep Spindles as an Electrographic Element: Description and Automatic Detection Methods

    Directory of Open Access Journals (Sweden)

    Dorothée Coppieters ’t Wallant

    2016-01-01

    Full Text Available The sleep spindle is a peculiar oscillatory brain pattern which has been associated with a number of sleep functions (isolation from exteroceptive stimuli, memory consolidation) and individual characteristics (intellectual quotient). Oddly enough, the definition of a spindle is both incomplete and restrictive. In consequence, there is no consensus about how to detect spindles. Visual scoring is cumbersome and user-dependent. To analyze spindle activity in a more robust way, automatic sleep spindle detection methods are essential. Various algorithms were developed, depending on individual research interests, which hampers direct comparisons and meta-analyses. In this review, the sleep spindle is first defined physically and topographically. From this general description, we tentatively extract the main characteristics to be detected and analyzed. A non-exhaustive list of automatic spindle detection methods is provided along with a description of their main processing principles. Finally, we propose a technique to assess the detection methods in a robust and comparable way.

  1. Contaminants and nutrients in variable sea areas (Canvas). Application of automatic monitoring stations in the German marine environment

    International Nuclear Information System (INIS)

    Nies, H.; Bruegge, B.; Sterzenbach, D.; Knauth, H.D.; Schroeder, F.

    1999-01-01

    Permanent observation of parameters at sea stations can only be achieved by automatic sampling. The MERMAID technique developed in former projects provides the possibility to run automatic stations within the German MARNET measuring network to obtain on-line data on nutrient concentrations and to collect organic micropollutants and the radionuclide 137Cs by solid phase extraction from seawater for subsequent analysis in the laboratory. The BSH MARNET consists of ten stations located in the German Bight sector of the North Sea and the western Baltic. First results from the time series of nutrient and organic micropollutant concentrations have been presented.

  2. AUTOMATIC LUNG NODULE DETECTION BASED ON STATISTICAL REGION MERGING AND SUPPORT VECTOR MACHINES

    Directory of Open Access Journals (Sweden)

    Elaheh Aghabalaei Khordehchi

    2017-06-01

    Full Text Available Lung cancer is one of the most common diseases in the world; it can be treated if the lung nodules are detected in their early stages of growth. This study develops a new framework for computer-aided detection of pulmonary nodules through a fully-automatic analysis of Computed Tomography (CT) images. In the present work, the multi-layer CT data is fed into a pre-processing step that exploits an adaptive diffusion-based smoothing algorithm in which the parameters are automatically tuned using an adaptation technique. After multiple levels of morphological filtering, the Regions of Interest (ROIs) are extracted from the smoothed images. The Statistical Region Merging (SRM) algorithm is applied to the ROIs in order to segment each layer of the CT data. Extracted segments in consecutive layers are then analyzed in such a way that if they intersect at more than a predefined number of pixels, they are labeled with the same index. The boundaries of segments in adjacent layers which have the same indices are then connected together to form three-dimensional objects as the nodule candidates. After extracting four spectral, one morphological, and one textural feature from all candidates, they are finally classified into nodules and non-nodules using the Support Vector Machine (SVM) classifier. The proposed framework has been applied to two sets of lung CT images and its performance has been compared to that of nine other competing state-of-the-art methods. The considerable efficiency of the proposed approach has been demonstrated quantitatively and validated by clinical experts as well.

  3. Brain extraction in partial volumes T2*@7T by using a quasi-anatomic segmentation with bias field correction.

    Science.gov (United States)

    Valente, João; Vieira, Pedro M; Couto, Carlos; Lima, Carlos S

    2018-02-01

    Poor brain extraction in Magnetic Resonance Imaging (MRI) has negative consequences for several types of post-extraction brain processing, such as tissue segmentation and related statistical measures or pattern recognition algorithms. Current state-of-the-art algorithms for brain extraction work on T1- and T2-weighted images and are not adequate for non-whole-brain images such as T2*FLASH@7T partial volumes. This paper proposes two new methods that work directly on T2*FLASH@7T partial volumes. The first is an improvement of the semi-automatic threshold-with-morphology approach adapted to incomplete volumes. The second method uses an improved version of a current implementation of the fuzzy c-means algorithm with bias correction for brain segmentation. Under high inhomogeneity conditions the performance of the first method degrades, requiring user intervention, which is unacceptable. The second method performed well for all volumes, being entirely automatic. State-of-the-art algorithms for brain extraction are mainly semi-automatic, requiring correct initialization by the user and knowledge of the software; they cannot deal with partial volumes and/or need information from an atlas, which is not available for T2*FLASH@7T. Also, combined volumes suffer from manipulations such as re-sampling, which significantly deteriorates voxel intensity structures, making segmentation tasks difficult. The proposed method can overcome all these difficulties, reaching good results for brain extraction using only T2*FLASH@7T volumes. The development of this work will lead to an improvement of automatic brain lesion segmentation in T2*FLASH@7T volumes, which becomes more important when lesions such as cortical Multiple Sclerosis need to be detected. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Fuzzy concept analysis for semantic knowledge extraction

    OpenAIRE

    De Maio, Carmen

    2012-01-01

    The availability of controlled vocabularies, ontologies, and so on is an enabling feature for providing added value in knowledge management. Nevertheless, the design, maintenance and construction of domain ontologies is a human-intensive and time-consuming task. Knowledge Extraction consists of automatic techniques aimed at identifying and defining relevant concepts and relations of the domain of interest by analyzing structured (relational databases, XML) and unstructured...

  5. Automatic Extraction of Road Markings from Mobile Laser-Point Cloud Using Intensity Data

    Science.gov (United States)

    Yao, L.; Chen, Q.; Qin, C.; Wu, H.; Zhang, S.

    2018-04-01

    With the development of intelligent transportation, high-precision road information data has been widely applied in many fields. This paper proposes a concise and practical way to extract road marking information from point cloud data collected by a mobile mapping system (MMS). The method contains three steps. Firstly, the road surface is segmented through edge detection on scan lines. Then an intensity image is generated by inverse distance weighted (IDW) interpolation, and the road markings are extracted by adaptive threshold segmentation based on an integral image, without intensity calibration. Moreover, noise is reduced by removing small plaque-like pixel groups from the binary image. Finally, the point cloud mapped from the binary image is clustered into marking objects according to Euclidean distance, and a series of algorithms including template matching and feature attribute filtering are used to classify linear markings, arrow markings and guidelines. Processing the point cloud data collected by a RIEGL VUX-1 in the case area shows that the F-score of marking extraction is 0.83 and the average classification rate is 0.9.
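
    The thresholding and denoising steps might look like the sketch below, where OpenCV's mean-based adaptive threshold (an integral-image-style local-mean comparison) stands in for the paper's implementation; the rasterized intensity image is assumed to already exist and the parameters are illustrative.

    ```python
    # Sketch of the marking-extraction step: adaptive (local-mean) thresholding
    # of the rasterized intensity image, then removal of small noise blobs.
    # The IDW rasterization of the point cloud is assumed already done.
    import cv2
    import numpy as np

    intensity = cv2.imread("road_intensity.png", cv2.IMREAD_GRAYSCALE)  # hypothetical
    binary = cv2.adaptiveThreshold(intensity, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, blockSize=31, C=-10)
    # Remove small plaque-like noise components
    n, lbl, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    clean = np.zeros_like(binary)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= 50:   # keep sufficiently large blobs
            clean[lbl == i] = 255
    ```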

  6. Continuous nucleus extraction by optically-induced cell lysis on a batch-type microfluidic platform.

    Science.gov (United States)

    Huang, Shih-Hsuan; Hung, Lien-Yu; Lee, Gwo-Bin

    2016-04-21

    The extraction of a cell's nucleus is an essential technique required for a number of procedures, such as disease diagnosis, genetic replication, and animal cloning. However, existing nucleus extraction techniques are relatively inefficient and labor-intensive. Therefore, this study presents an innovative, microfluidics-based approach featuring optically-induced cell lysis (OICL) for nucleus extraction and collection in an automatic format. In comparison to previous micro-devices designed for nucleus extraction, the new OICL device designed herein is superior in terms of flexibility, selectivity, and efficiency. To facilitate this OICL module for continuous nucleus extraction, we further integrated an optically-induced dielectrophoresis (ODEP) module with the OICL device within the microfluidic chip. This on-chip integration circumvents the need for highly trained personnel and expensive, cumbersome equipment. Specifically, this microfluidic system automates four steps by 1) automatically focusing and transporting cells, 2) releasing the nuclei on the OICL module, 3) isolating the nuclei on the ODEP module, and 4) collecting the nuclei in the outlet chamber. The efficiency of cell membrane lysis and the ODEP nucleus separation was measured to be 78.04 ± 5.70% and 80.90 ± 5.98%, respectively, leading to an overall nucleus extraction efficiency of 58.21 ± 2.21%. These results demonstrate that this microfluidics-based system can successfully perform nucleus extraction, and the integrated platform is therefore promising in cell fusion technology with the goal of achieving genetic replication, or even animal cloning, in the near future.

  7. Synthesis and application of molecularly imprinted polymers for the selective extraction of organophosphorus pesticides from vegetable oils.

    Science.gov (United States)

    Boulanouar, Sara; Combès, Audrey; Mezzache, Sakina; Pichon, Valérie

    2017-09-01

    The increasing use of pesticides in agriculture causes environmental issues and possibly serious health risks to humans and animals. Their determination at trace concentrations in vegetable oils constitutes a significant analytical challenge; their analysis therefore often requires both an extraction and a purification step prior to separation by liquid chromatography (LC) with mass spectrometry (MS) detection. This work aimed at developing sorbents able to selectively extract, from vegetable oil samples, several organophosphorus (OP) pesticides presenting a wide range of physico-chemical properties. Different conditions were screened to prepare molecularly imprinted polymers (MIPs) by a non-covalent approach. The selectivity of the resulting polymers was evaluated by studying the OP retention in pure media on both the MIPs and non-imprinted polymers (NIPs) used as controls. The most promising MIP sorbent was obtained using monocrotophos (MCP) as the template, methacrylic acid (MAA) as the monomer and ethylene glycol dimethacrylate (EGDMA) as the cross-linker with a molar ratio of 1:4:20, respectively. The repeatability of the extraction procedure and of the synthesis procedure was demonstrated in pure media. The capacity of this MIP was 1 mg/g for malathion. This MIP was also able to selectively extract three OPs from almond oil by applying the optimized SPE procedure. Recoveries were between 73 and 99% with SD values between 4 and 6% in this oil sample. The calculated LOQs (between 0.3 and 2 μg/kg) in almond seeds, with an SD between 0.1 and 0.4 μg/kg, were lower than the Maximum Residue Levels (MRLs) established for the corresponding compounds in almond seed. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Man-Made Object Extraction from Remote Sensing Imagery by Graph-Based Manifold Ranking

    Science.gov (United States)

    He, Y.; Wang, X.; Hu, X. Y.; Liu, S. H.

    2018-04-01

    The automatic extraction of man-made objects from remote sensing imagery is useful in many applications. This paper proposes an algorithm for extracting man-made objects automatically by integrating a graph model with the manifold ranking algorithm. Initially, we estimate an a priori value for the man-made objects using symmetry and contrast features. A graph model is established to represent the spatial relationships among pre-segmented superpixels, which are used as the graph nodes. Multiple characteristics, namely colour, texture and main direction, are used to compute the weights of adjacent nodes. Manifold ranking effectively explores the relationships among all the nodes in the feature space, as well as the initial query assignment; it is thus applied to generate a ranking map, which indicates the scores of the man-made objects. The man-made objects are then segmented on the basis of the ranking map. Two typical segmentation algorithms are compared with the proposed algorithm. Experimental results show that the proposed algorithm can extract man-made objects with a high recognition rate and a low omission rate.
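
    For reference, manifold ranking on such a graph has the closed-form solution f = (I - αS)^(-1) y, with S the symmetrically normalized affinity matrix and y the initial query assignment; a toy numpy sketch (graph weights and query vector invented) is shown below.

    ```python
    # Minimal numpy sketch of manifold ranking on a superpixel adjacency graph:
    # the closed form f = (I - alpha*S)^(-1) y with normalized affinity S.
    import numpy as np

    def manifold_ranking(W, y, alpha=0.99):
        d = W.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
        S = D_inv_sqrt @ W @ D_inv_sqrt       # symmetrically normalized affinity
        return np.linalg.solve(np.eye(len(W)) - alpha * S, y)

    # Toy graph: 4 superpixels; node 0 is the a-priori man-made seed
    W = np.array([[0, 1, 0.2, 0],
                  [1, 0, 0.5, 0.1],
                  [0.2, 0.5, 0, 1],
                  [0, 0.1, 1, 0]], dtype=float)
    y = np.array([1.0, 0.0, 0.0, 0.0])        # initial query assignment
    scores = manifold_ranking(W, y)           # ranking map over superpixels
    ```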

  9. Fast Automatic Airport Detection in Remote Sensing Images Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Fen Chen

    2018-03-01

    Full Text Available Fast and automatic detection of airports from remote sensing images is useful for many military and civilian applications. In this paper, a fast automatic detection method is proposed to detect airports from remote sensing images based on convolutional neural networks using the Faster R-CNN algorithm. This method first applies a convolutional neural network to generate candidate airport regions. Based on the features extracted from these proposals, it then uses another convolutional neural network to perform airport detection. By taking the typical elongated linear geometric shape of airports into consideration, some specific improvements to the method are proposed. These approaches successfully improve the quality of positive samples and achieve a better accuracy in the final detection results. Experimental results on an airport dataset, Landsat 8 images, and a Gaofen-1 satellite scene demonstrate the effectiveness and efficiency of the proposed method.
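
    A minimal sketch of the detection backbone is shown below, using torchvision's off-the-shelf Faster R-CNN as a stand-in; the paper's airport-specific improvements (e.g., adaptations for elongated runway shapes) are not reproduced, and the input tensor is a random stand-in for a remote sensing tile.

    ```python
    # Sketch of airport detection with an off-the-shelf Faster R-CNN
    # (torchvision stand-in; the paper's shape-specific improvements are
    # not reproduced).
    import torch
    import torchvision

    # num_classes = 2: background + airport
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
        weights=None, num_classes=2)
    model.eval()
    image = torch.rand(3, 800, 800)            # stand-in remote sensing tile
    with torch.no_grad():
        out = model([image])[0]                # dict of boxes, labels, scores
    keep = out["scores"] > 0.5                 # confidence filtering
    print(out["boxes"][keep])
    ```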

  10. Automatic design of decision-tree induction algorithms tailored to flexible-receptor docking data.

    Science.gov (United States)

    Barros, Rodrigo C; Winck, Ana T; Machado, Karina S; Basgalupp, Márcio P; de Carvalho, André C P L F; Ruiz, Duncan D; de Souza, Osmar Norberto

    2012-11-21

    This paper addresses the prediction of the free energy of binding of a drug candidate with the enzyme InhA, associated with Mycobacterium tuberculosis. This problem arises within rational drug design, where interactions between drug candidates and target proteins are verified through molecular docking simulations. In this application, it is important not only to correctly predict the free energy of binding, but also to provide a comprehensible model that can be validated by a domain specialist. Decision-tree induction algorithms have been successfully used in drug-design related applications, especially considering that decision trees are simple to understand, interpret, and validate. There are several decision-tree induction algorithms available for general use, but each one has a bias that makes it more suitable for a particular data distribution. In this article, we propose and investigate the automatic design of decision-tree induction algorithms tailored to particular drug-enzyme binding data sets. We investigate the performance of our new method for evaluating binding conformations of different drug candidates to InhA, and we analyze our findings with respect to decision-tree accuracy, comprehensibility, and biological relevance. The empirical analysis indicates that our method is capable of automatically generating decision-tree induction algorithms that significantly outperform the traditional C4.5 algorithm with respect to both accuracy and comprehensibility. In addition, we provide a biological interpretation of the rules generated by our approach, reinforcing the importance of comprehensible predictive models in this particular bioinformatics application. We conclude that automatically designing a decision-tree algorithm tailored to molecular docking data is a promising alternative for predicting the free energy of binding of a drug candidate with a flexible receptor.

  11. Localized Segment Based Processing for Automatic Building Extraction from LiDAR Data

    Science.gov (United States)

    Parida, G.; Rajan, K. S.

    2017-05-01

    Current methods for object segmentation, extraction and classification from aerial LiDAR data are manual and tedious. This work proposes a technique for segmenting objects out of LiDAR data. A bottom-up geometric rule-based approach was used initially to devise a way to segment buildings out of the LiDAR datasets; for curved wall surfaces, comparison of localized surface normals was used to segment buildings. The algorithm has been applied to both synthetic datasets and a real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of building objects from a given scene for the synthetic datasets and promising results for the real-world data. The advantage of the proposed work is that it depends on no form of data other than LiDAR. It is an unsupervised method of building segmentation and thus requires no model training, as needed in supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than focusing on the roof; this focus on extracting walls to reconstruct buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to get 2D footprints of the buildings, with further scope to generate 3D models. Thus, the proposed method can be used as a tool to obtain footprints of buildings in urban landscapes, helping in urban planning and the smart cities endeavour.

  12. LOCALIZED SEGMENT BASED PROCESSING FOR AUTOMATIC BUILDING EXTRACTION FROM LiDAR DATA

    Directory of Open Access Journals (Sweden)

    G. Parida

    2017-05-01

    Full Text Available Current methods for object segmentation, extraction and classification from aerial LiDAR data are manual and tedious. This work proposes a technique for segmenting objects out of LiDAR data. A bottom-up geometric rule-based approach was used initially to devise a way to segment buildings out of the LiDAR datasets; for curved wall surfaces, comparison of localized surface normals was used to segment buildings. The algorithm has been applied to both synthetic datasets and a real-world dataset of Vaihingen, Germany. Preliminary results show successful segmentation of building objects from a given scene for the synthetic datasets and promising results for the real-world data. The advantage of the proposed work is that it depends on no form of data other than LiDAR. It is an unsupervised method of building segmentation and thus requires no model training, as needed in supervised techniques. It focuses on extracting the walls of the buildings to construct the footprint, rather than focusing on the roof; this focus on extracting walls to reconstruct buildings from a LiDAR scene is the crux of the proposed method. The current segmentation approach can be used to get 2D footprints of the buildings, with further scope to generate 3D models. Thus, the proposed method can be used as a tool to obtain footprints of buildings in urban landscapes, helping in urban planning and the smart cities endeavour.

  13. DeepSleepNet: A Model for Automatic Sleep Stage Scoring Based on Raw Single-Channel EEG.

    Science.gov (United States)

    Supratak, Akara; Dong, Hao; Wu, Chao; Guo, Yike

    2017-11-01

    This paper proposes a deep learning model, named DeepSleepNet, for automatic sleep stage scoring based on raw single-channel EEG. Most existing methods rely on hand-engineered features, which require prior knowledge of sleep analysis. Only a few of them encode temporal information, such as transition rules, which is important for identifying the next sleep stage, into the extracted features. In the proposed model, we utilize convolutional neural networks to extract time-invariant features, and bidirectional long short-term memory to learn transition rules among sleep stages automatically from EEG epochs. We implement a two-step training algorithm to train our model efficiently. We evaluated our model using different single-channel EEGs (F4-EOG (left), Fpz-Cz, and Pz-Oz) from two public sleep data sets that have different properties (e.g., sampling rate) and scoring standards (AASM and R&K). The results showed that our model achieved overall accuracy and macro F1-scores (MASS: 86.2% and 81.7; Sleep-EDF: 82.0% and 76.9) similar to the state-of-the-art methods (MASS: 85.9% and 80.5; Sleep-EDF: 78.9% and 73.7) on both data sets. This demonstrates that, without changing the model architecture or the training algorithm, our model can automatically learn features for sleep stage scoring from different raw single-channel EEGs from different data sets without utilizing any hand-engineered features.
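
    The overall architecture can be sketched as a per-epoch 1-D CNN followed by a bidirectional LSTM over sequences of 30-s epochs, as below; the layer sizes, kernel widths and two-step training procedure are illustrative assumptions, not the published configuration.

    ```python
    # Minimal sketch of the DeepSleepNet idea: a 1-D CNN for time-invariant
    # per-epoch features, then a bidirectional LSTM over epoch sequences
    # (layer sizes are illustrative, not the published ones).
    import torch
    import torch.nn as nn

    class SleepStager(nn.Module):
        def __init__(self, n_stages=5):
            super().__init__()
            self.cnn = nn.Sequential(                       # per-epoch features
                nn.Conv1d(1, 32, kernel_size=50, stride=6), nn.ReLU(),
                nn.MaxPool1d(8), nn.Conv1d(32, 64, 8), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1))
            self.bilstm = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
            self.head = nn.Linear(128, n_stages)            # W, N1, N2, N3, REM

        def forward(self, x):                               # x: (batch, seq, samples)
            b, s, t = x.shape
            feats = self.cnn(x.reshape(b * s, 1, t)).squeeze(-1).reshape(b, s, -1)
            out, _ = self.bilstm(feats)        # transition rules across epochs
            return self.head(out)              # per-epoch stage logits

    epochs = torch.randn(2, 10, 3000)   # 10 consecutive 30-s epochs at 100 Hz
    logits = SleepStager()(epochs)      # shape (2, 10, 5)
    ```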

  14. Channel selection for automatic seizure detection

    DEFF Research Database (Denmark)

    Duun-Henriksen, Jonas; Kjaer, Troels Wesenberg; Madsen, Rasmus Elsborg

    2012-01-01

    Objective: To investigate the performance of epileptic seizure detection using only a few of the recorded EEG channels and the ability of software to select these channels compared with a neurophysiologist. Methods: Fifty-nine seizures and 1419 h of interictal EEG are used for training and testing of an automatic channel selection method. The characteristics of the seizures are extracted by wavelet analysis and classified by a support vector machine. The best channel selection method is based upon maximum variance during the seizure. Results: Using only three channels, a seizure detection sensitivity of 96% and a false detection rate of 0.14/h were obtained. This corresponds to the performance obtained when channels are selected through visual inspection by a clinical neurophysiologist, and constitutes a 4% improvement in sensitivity compared to seizure detection using channels recorded...

  15. Temperature sensitive molecularly imprinted microspheres for solid-phase dispersion extraction of malachite green, crystal violet and their leuko metabolites

    International Nuclear Information System (INIS)

    Tan, Lei; Chen, Kuncai; He, Rong; Peng, Rongfei; Huang, Cong

    2016-01-01

    This article demonstrates the feasibility of an alternative strategy for producing temperature-sensitive molecularly imprinted microspheres (MIMs) for solid-phase dispersion extraction of malachite green, crystal violet and their leuco metabolites. Thermo-sensitive MIMs can change their structure upon temperature stimulation, which allows capture and release of target molecules to be controlled by temperature. The fabrication technique relies on surface molecular imprinting in acetonitrile using vinyl-modified silica microspheres as solid supports, methacrylic acid and N-isopropyl acrylamide as the functional monomers, ethylene glycol dimethacrylate as the cross-linker, and malachite green as the template. After elution of the template, the MIMs can be used for fairly group-selective solid phase dispersion extraction of malachite green, crystal violet, leucomalachite green, and leucocrystal violet from homogenized fish samples at a given temperature. Following centrifugal separation of the microspheres, the analytes were eluted with a 95:5 mixture of acetonitrile and formic acid, and then quantified by ultra-high performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS) with isotope internal calibration. The detection limits for malachite green, crystal violet and their metabolites are typically 30 ng·kg−1. Positive samples were identified by UHPLC-MS/MS in the positive ionization mode with multiple reaction monitoring. The method was applied to the determination of the dyes and the respective leuco dyes in fish samples, and accuracy and precision were validated by comparative analysis of the samples using neutral alumina columns. (author)

  16. Automatic individual arterial input functions calculated from PCA outperform manual and population-averaged approaches for the pharmacokinetic modeling of DCE-MR images.

    Science.gov (United States)

    Sanz-Requena, Roberto; Prats-Montalbán, José Manuel; Martí-Bonmatí, Luis; Alberich-Bayarri, Ángel; García-Martí, Gracián; Pérez, Rosario; Ferrer, Alberto

    2015-08-01

    To introduce a segmentation method to calculate an automatic arterial input function (AIF) based on principal component analysis (PCA) of dynamic contrast enhanced MR (DCE-MR) imaging and compare it with individual manually selected and population-averaged AIFs using calculated pharmacokinetic parameters. The study included 65 individuals with prostate examinations (27 tumors and 38 controls). Manual AIFs were individually extracted and also averaged to obtain a population AIF. Automatic AIFs were individually obtained by applying PCA to volumetric DCE-MR imaging data and finding the highest correlation of the PCs with a reference AIF. Variability was assessed using coefficients of variation and repeated measures tests. The different AIFs were used as inputs to the pharmacokinetic model and correlation coefficients, Bland-Altman plots and analysis of variance tests were obtained to compare the results. Automatic PCA-based AIFs were successfully extracted in all cases. The manual and PCA-based AIFs showed good correlation (r between pharmacokinetic parameters ranging from 0.74 to 0.95), with differences below the manual individual variability (RMSCV up to 27.3%). The population-averaged AIF showed larger differences (r from 0.30 to 0.61). The automatic PCA-based approach minimizes the variability associated to obtaining individual volume-based AIFs in DCE-MR studies of the prostate. © 2014 Wiley Periodicals, Inc.
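
    A rough sketch of the selection logic: decompose the voxel time-courses with PCA and keep the component whose time-course (up to sign) correlates best with a reference AIF shape. The reference curve and data below are synthetic stand-ins, not the study's protocol.

    ```python
    # Sketch of the PCA-based automatic AIF idea: pick the principal component
    # whose time-course best matches a reference AIF shape (synthetic data).
    import numpy as np
    from sklearn.decomposition import PCA

    def automatic_aif(curves, reference_aif, n_components=10):
        """curves: (n_voxels, n_timepoints) DCE enhancement time-courses."""
        pca = PCA(n_components=n_components).fit(curves)
        comps = pca.components_               # (n_components, n_timepoints)
        # Correlate each PC time-course (either sign) with the reference AIF
        corrs = [max(np.corrcoef(c, reference_aif)[0, 1],
                     np.corrcoef(-c, reference_aif)[0, 1]) for c in comps]
        best = int(np.argmax(corrs))
        c = comps[best]
        if np.corrcoef(c, reference_aif)[0, 1] < 0:
            c = -c                            # flip sign to match the reference
        return c, corrs[best]

    rng = np.random.default_rng(1)
    t = np.linspace(0, 5, 60)
    ref = np.exp(-((t - 0.8) / 0.3) ** 2)     # toy bolus-shaped reference
    curves = rng.normal(size=(500, 60)) + np.outer(rng.random(500), ref)
    aif, r = automatic_aif(curves, ref)
    ```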

  17. AUTOMATIC RECOGNITION OF INDOOR NAVIGATION ELEMENTS FROM KINECT POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    L. Zeng

    2017-09-01

    Full Text Available This paper automatically recognizes the navigation elements defined by the IndoorGML data standard: door, stairway and wall. The data used are indoor 3D point clouds collected by a Kinect v2 by means of ORB-SLAM. Compared with lidar, the Kinect is cheaper and more convenient, but its point clouds suffer from noise, registration errors and large data volume. Hence, we adopt a shape descriptor proposed by Osada, the histogram of distances between two randomly chosen points, merged with other descriptors, in conjunction with a random forest classifier, to recognize the navigation elements (door, stairway and wall) from Kinect point clouds. This research acquires the navigation elements and their 3D location information from each single data frame through segmentation of point clouds, boundary extraction, feature calculation and classification. Finally, this paper utilizes the acquired navigation elements and their information to generate the state data of the indoor navigation module automatically. The experimental results demonstrate a high recognition accuracy of the proposed method.

  18. Automatic Recognition of Indoor Navigation Elements from Kinect Point Clouds

    Science.gov (United States)

    Zeng, L.; Kang, Z.

    2017-09-01

    This paper automatically recognizes the navigation elements defined by the IndoorGML data standard: door, stairway and wall. The data used are indoor 3D point clouds collected by a Kinect v2 by means of ORB-SLAM. Compared with lidar, the Kinect is cheaper and more convenient, but its point clouds suffer from noise, registration errors and large data volume. Hence, we adopt a shape descriptor proposed by Osada, the histogram of distances between two randomly chosen points, merged with other descriptors, in conjunction with a random forest classifier, to recognize the navigation elements (door, stairway and wall) from Kinect point clouds. This research acquires the navigation elements and their 3D location information from each single data frame through segmentation of point clouds, boundary extraction, feature calculation and classification. Finally, this paper utilizes the acquired navigation elements and their information to generate the state data of the indoor navigation module automatically. The experimental results demonstrate a high recognition accuracy of the proposed method.
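
    The descriptor-plus-classifier core of both records can be sketched as below: Osada's D2 histogram of distances between random point pairs, fed to a random forest; the point sets are synthetic stand-ins and the additional merged descriptors are omitted.

    ```python
    # Sketch of Osada's D2 shape descriptor (histogram of distances between
    # random point pairs) fed to a random forest (toy data; the paper's extra
    # descriptors and SLAM preprocessing are omitted).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def d2_descriptor(points, n_pairs=2000, bins=32, rng=None):
        rng = rng or np.random.default_rng(0)
        i = rng.integers(0, len(points), n_pairs)
        j = rng.integers(0, len(points), n_pairs)
        d = np.linalg.norm(points[i] - points[j], axis=1)
        hist, _ = np.histogram(d, bins=bins, range=(0, d.max() + 1e-9))
        return hist / hist.sum()              # normalized distance histogram

    rng = np.random.default_rng(0)
    # Toy segments: flat wall-like patches vs. elongated stair-like point sets
    walls = [rng.uniform([0, 0, 0], [2, 2, 0.05], (500, 3)) for _ in range(20)]
    stairs = [rng.uniform([0, 0, 0], [0.3, 1, 2], (500, 3)) for _ in range(20)]
    X = np.array([d2_descriptor(p) for p in walls + stairs])
    y = np.array([0] * 20 + [1] * 20)         # 0 = wall, 1 = stairway
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    ```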

  19. Automatic Parking Based on a Bird's Eye View Vision System

    Directory of Open Access Journals (Sweden)

    Chunxiang Wang

    2014-03-01

    Full Text Available This paper aims at realizing an automatic parking method through a bird's eye view vision system. With this method, vehicles can detect and recognize parking spaces robustly and in real time. During the parking process, omnidirectional information about the environment can be obtained from four on-board fisheye cameras around the vehicle, which are the main part of the bird's eye view vision system. To achieve this, a polynomial fisheye distortion model is first used for camera calibration. An image mosaicking method based on the Levenberg-Marquardt algorithm is used to combine the four individual images from the fisheye cameras into one omnidirectional bird's eye view image. Secondly, features of the parking spaces are extracted with a Radon-transform-based method. Finally, double circular trajectory planning and a preview control strategy are utilized to realize autonomous parking. Experimental analysis shows that the proposed method achieves effective and robust real-time results in both parking space recognition and automatic parking.

  20. Automatic sets and Delone sets

    International Nuclear Information System (INIS)

    Barbe, A; Haeseler, F von

    2004-01-01

    Automatic sets D ⊂ Z^m are characterized by having a finite number of decimations. They are equivalently generated by fixed points of certain substitution systems, or by certain finite automata. As examples, two-dimensional versions of the Thue-Morse, Baum-Sweet, Rudin-Shapiro and paperfolding sequences are presented. We give a necessary and sufficient condition for an automatic set D ⊂ Z^m to be a Delone set in R^m. The result is then extended to automatic sets that are defined as fixed points of certain substitutions. The morphology of automatic sets is discussed by means of examples
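
    For reference, the Delone property mentioned here combines uniform discreteness and relative density; in standard textbook notation (not quoted from the paper):

    ```latex
    % D \subset \mathbb{R}^m is a Delone set iff it is uniformly discrete
    % and relatively dense:
    \exists\, r > 0 :\ \|x - y\| \ge r \quad \forall\, x \neq y \in D
    \qquad \text{(uniform discreteness)}
    \\
    \exists\, R > 0 :\ \forall\, z \in \mathbb{R}^m\ \exists\, x \in D :\ \|z - x\| \le R
    \qquad \text{(relative density)}
    ```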

  1. Preparation and Evaluation of Core–Shell Magnetic Molecularly Imprinted Polymers for Solid-Phase Extraction and Determination of Sterigmatocystin in Food

    Directory of Open Access Journals (Sweden)

    Jing-Min Liu

    2017-10-01

    Full Text Available Magnetic molecularly imprinted polymers (MMIPs), combining outstanding magnetism with specific selective binding capability for target molecules, have proven to be attractive in separation science and bio-applications. Herein, we propose core-shell magnetic molecularly imprinted polymers for food analysis, employing Fe3O4 particles prepared by a co-precipitation protocol as the magnetic core and an MMIP film on the silica layer for the recognition and adsorption of target analytes. The obtained MMIP materials were fully characterized by scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FT-IR), vibrating sample magnetometry (VSM), and re-binding experiments. Under the optimal conditions, the fabricated Fe3O4@MIPs demonstrated fast adsorption equilibrium, a highly improved imprinting capacity, and excellent specificity for the target sterigmatocystin (ST), and were successfully applied as highly efficient solid-phase extraction materials followed by high-performance liquid chromatography (HPLC) analysis. The MMIP-based solid phase extraction (SPE) method gave a linear response in the range of 0.05–5.0 mg·L−1 with a detection limit of 9.1 µg·L−1. Finally, the proposed method was used for the selective isolation and enrichment of ST in food samples, with recoveries in the range 80.6–88.7% and relative standard deviations (RSD) <5.6%.

  2. Simple Extraction and Molecular Weight Characterization of Fucoidan From Indonesian Sargassum SP.

    OpenAIRE

    Junaidi, Lukman

    2013-01-01

    Fucoidan is a complex polysaccharide found in brown algae that exhibits various biological properties relevant to disease prevention. There are various methods used to extract fucoidan from brown algae, such as using ethanol, acetone, HCl or microwaves. This research aims to extract and characterize fucoidan from Sargassum sp. using simple methods, with the extraction solution, temperature and extraction time as variables. The extraction solutions used were water and HCl. Temperature us...

  3. Human-competitive automatic topic indexing

    CERN Document Server

    Medelyan, Olena

    2009-01-01

    Topic indexing is the task of identifying the main topics covered by a document. These are useful for many purposes: as subject headings in libraries, as keywords in academic publications and as tags on the web. Knowing a document’s topics helps people judge its relevance quickly. However, assigning topics manually is labor intensive. This thesis shows how to generate them automatically in a way that competes with human performance. Three kinds of indexing are investigated: term assignment, a task commonly performed by librarians, who select topics from a controlled vocabulary; tagging, a popular activity of web users, who choose topics freely; and a new method of keyphrase extraction, where topics are equated to Wikipedia article names. A general two-stage algorithm is introduced that first selects candidate topics and then ranks them by significance based on their properties. These properties draw on statistical, semantic, domain-specific and encyclopedic knowledge. They are combined using a machine learn...

  4. Molecularly imprinted solid phase extraction using stable isotope labeled compounds as template and liquid chromatography-mass spectrometry for trace analysis of bisphenol A in water sample

    International Nuclear Information System (INIS)

    Kawaguchi, Migaku; Hayatsu, Yoshio; Nakata, Hisao; Ishii, Yumiko; Ito, Rie; Saito, Koichi; Nakazawa, Hiroyuki

    2005-01-01

    We have developed a molecularly imprinted polymer (MIP) using a stable isotope labeled compound as the template molecule, which we call the "isotope molecularly imprinted polymer" (IMIP). In this study, bisphenol A (BPA) was used as the model compound. A non-imprinted polymer (NIP), MIP, dummy molecularly imprinted polymer (DMIP) and IMIP were prepared by the suspension polymerization method using no template, BPA, 4-tert-butylphenol (BP) and bisphenol A-d16 (BPA-d16), respectively. The polymers were subjected to molecularly imprinted solid-phase extraction (MI-SPE), and the extracted samples were analyzed by liquid chromatography-mass spectrometry (LC-MS). Leakage of BPA-d16 from the IMIP was observed, whereas leakage of BPA was not. The selectivity factors of the MIP and IMIP for BPA were 4.45 and 4.43, respectively; therefore, the IMIP had the same molecular recognition ability as the MIP. When MI-SPE with the IMIP was used, followed by LC-MS, in the analysis of a river water sample, the detection limit of BPA was 1 ppt with high sensitivity. Moreover, the average recovery was higher than 99.8% (R.S.D.: 3.7%) using bisphenol A-13C12 (BPA-13C12) as the surrogate standard. In addition, the IMIP was employed in MI-SPE of BPA in a river water sample analyzed by LC-MS; the concentration of BPA in the river water sample was determined to be 32 pg·mL−1. We confirmed that it is possible to measure trace amounts of a target analyte by MI-SPE using the IMIP.

  5. A novel scheme for automatic nonrigid image registration using deformation invariant feature and geometric constraint

    Science.gov (United States)

    Deng, Zhipeng; Lei, Lin; Zhou, Shilin

    2015-10-01

    Automatic image registration is a vital yet challenging task, particularly for non-rigidly deformed images, which are complicated and common in remote sensing, such as distorted UAV (unmanned aerial vehicle) images or scanned images distorted by platform flutter. Traditional non-rigid image registration methods are based on correctly matched corresponding landmarks, which usually require artificial markers; it is a rather challenging task to locate the points accurately and obtain accurate homonymous point sets. In this paper, we propose an automatic non-rigid image registration algorithm that consists of three main steps. First, we introduce an automatic feature point extraction method based on non-linear scale space and a uniform distribution strategy to extract points that are uniformly distributed along the edges of the image. Next, we propose a hybrid point matching algorithm using the DaLI (Deformation and Light Invariant) descriptor and a local affine-invariant geometric constraint based on a triangulation constructed by the K-nearest-neighbor algorithm. Finally, based on the accurate homonymous point sets, the two images are registered using a TPS (Thin Plate Spline) model. Our method is demonstrated by three deliberately designed experiments: the first two evaluate the distribution of the point sets and the correct matching rate on synthetic and real data, respectively, while the last is designed on non-rigidly deformed remote sensing images. The three experimental results demonstrate the accuracy, robustness, and efficiency of the proposed algorithm compared with other traditional methods.
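
    The final warping step can be sketched with an off-the-shelf thin-plate-spline interpolator. A minimal example, assuming matched landmark pairs have already been produced by the point-matching stage; scipy's RBFInterpolator supplies the TPS kernel, and the landmark coordinates below are invented for illustration.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Hypothetical matched landmarks (row, col); in the paper these come from
    # the DaLI descriptor matching plus the geometric-constraint filtering.
    src_pts = np.array([[10.0, 12.0], [40.0, 80.0], [90.0, 30.0], [120.0, 110.0]])
    dst_pts = np.array([[12.0, 15.0], [43.0, 78.0], [88.0, 33.0], [118.0, 113.0]])

    # Thin-plate-spline mapping from destination coordinates back to source
    # coordinates (the inverse mapping used when resampling an image).
    tps = RBFInterpolator(dst_pts, src_pts, kernel="thin_plate_spline")

    # Evaluate the warp on a regular pixel grid of the destination image.
    h, w = 128, 128
    grid = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    src_coords = tps(grid.reshape(-1, 2).astype(float)).reshape(h, w, 2)

    # src_coords[i, j] is the source-image location to sample for output pixel
    # (i, j), e.g. with scipy.ndimage.map_coordinates.
    ```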

  6. Extracting the exponential behaviors in the market data

    Science.gov (United States)

    Watanabe, Kota; Takayasu, Hideki; Takayasu, Misako

    2007-08-01

    We introduce a mathematical criterion defining bubbles or crashes in financial market price fluctuations by means of exponential fitting of the given data. By applying this criterion we can automatically extract the periods in which bubbles and crashes are identified. From stock market data of the so-called Internet bubble, it is found that the characteristic length of a bubble period is about 100 days.
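
    A minimal sketch of a criterion in this spirit: fit log-price against time in a sliding window and flag windows whose fitted exponential rate exceeds a threshold. The 100-day window echoes the characteristic bubble length reported above; the thresholds and the synthetic price series are hypothetical, not the authors' calibration.

    ```python
    import numpy as np

    def exponential_rate(prices):
        """Fit log(price) ~ a + b*t by least squares; b is the exponential rate."""
        t = np.arange(len(prices))
        b, a = np.polyfit(t, np.log(prices), deg=1)
        return b

    def label_windows(prices, window=100, up=0.002, down=-0.002):
        """Slide a window over the series and label candidate bubble/crash
        periods by the sign and size of the fitted growth rate."""
        labels = []
        for start in range(len(prices) - window):
            rate = exponential_rate(prices[start:start + window])
            labels.append("bubble" if rate > up else
                          "crash" if rate < down else "normal")
        return labels

    # Synthetic series: an exponential run-up followed by a crash.
    t = np.arange(200)
    run_up = 100.0 * np.exp(0.004 * t)
    crash = run_up[-1] * np.exp(-0.01 * np.arange(100))
    print(set(label_windows(np.concatenate([run_up, crash]))))
    ```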

  7. TU-G-204-02: Automatic Sclerotic Bone Metastases Detection in the Pelvic Region From Dual Energy CT

    Energy Technology Data Exchange (ETDEWEB)

    Fehr, D; Schmidtlein, C; Hwang, S; Deasy, J; Veeraraghavan, H [Memorial Sloan Kettering Cancer Center, New York, NY (United States)

    2015-06-15

    Purpose: To automatically detect sclerotic bone metastases in the pelvic region using dual energy computed tomography (DECT). Methods: We developed a two-stage algorithm to automatically detect sclerotic bone metastases in the pelvis from DECT for patients with multiple metastatic bone lesions and with hip implants. The first stage consists of extracting the bone and marrow regions using a support vector machine (SVM) classifier. We employed a novel representation of the DECT images using multi-material decomposition, which represents each voxel as a mixture of different physical materials (e.g. bone+water+fat). Following the extraction of bone and marrow, in the second stage, a bi-histogram equalization method was employed to enhance the contrast and reveal the bone metastases. Next, mean-shift segmentation was performed to separate the voxels by their intensity levels. Finally, shape-based filtering was performed to extract the possible locations of the metastatic lesions using multiple shape criteria: area, eccentricity, major and minor axis, perimeter and skeleton. Results: A radiologist with several years of experience with DECT manually labeled 64 regions consisting of metastatic lesions from 10 different patients, although the patients had many more metastatic lesions throughout the pelvis. Our method correctly identified 46 of the 64 marked regions (72%). In addition, our method identified several other lesions, which can then be validated by the radiologist. The missed lesions were typically very large elongated regions consisting of several islands of very small (<4 mm) lesions. Conclusion: We developed an algorithm to automatically detect sclerotic lesions in the pelvic region from DECT. Preliminary assessment shows that our algorithm generated lesions agreeing with the radiologist-generated candidate regions. Furthermore, our method reveals additional lesions that can be inspected by the radiologist, thereby
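
    The final shape-filtering stage maps naturally onto connected-component analysis. A sketch, assuming a binary candidate mask produced by the contrast-enhancement and mean-shift stages; it uses skimage region properties matching the criteria named above (area, eccentricity, axes), with illustrative thresholds rather than the paper's values.

    ```python
    import numpy as np
    from skimage.measure import label, regionprops

    def filter_lesion_candidates(mask, min_area=20, max_area=5000, max_ecc=0.98):
        """Keep connected components whose shape descriptors fall inside
        plausible lesion ranges (thresholds are illustrative only)."""
        labeled = label(mask)
        keep = np.zeros_like(mask, dtype=bool)
        for region in regionprops(labeled):
            if (min_area <= region.area <= max_area
                    and region.eccentricity <= max_ecc
                    and region.minor_axis_length > 0):
                keep[labeled == region.label] = True
        return keep

    # Toy mask: a compact blob (kept) and a 1-pixel-wide line (rejected).
    mask = np.zeros((64, 64), dtype=bool)
    mask[10:20, 10:22] = True
    mask[40, 5:60] = True
    print(filter_lesion_candidates(mask).sum())  # area of the kept blob
    ```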

  8. Microbes on building materials - Evaluation of DNA extraction protocols as common basis for molecular analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ettenauer, Joerg D., E-mail: joerg.ettenauer@boku.ac.at [VIBT-BOKU, University of Natural Resources and Life Sciences, Department of Biotechnology, Muthgasse 11, A-1190 Vienna (Austria); Pinar, Guadalupe, E-mail: Guadalupe.Pinar@boku.ac.at [VIBT-BOKU, University of Natural Resources and Life Sciences, Department of Biotechnology, Muthgasse 11, A-1190 Vienna (Austria); Lopandic, Ksenija, E-mail: Ksenija.Lopandic@boku.ac.at [VIBT-BOKU, University of Natural Resources and Life Sciences, Department of Biotechnology, Muthgasse 11, A-1190 Vienna (Austria); Spangl, Bernhard, E-mail: Bernhard.Spangl@boku.ac.at [University of Natural Resources and Life Sciences, Department of Landscape, Spatial and Infrastructure Science, Institute of Applied Statistics and Computing (IASC), Gregor Mendel-Str. 33, A-1180 Vienna (Austria); Ellersdorfer, Guenther, E-mail: Guenther.Ellersdorfer@boku.ac.at [VIBT-BOKU, University of Natural Resources and Life Sciences, Department of Biotechnology, Muthgasse 11, A-1190 Vienna (Austria); Voitl, Christian, E-mail: Christian.Voitl@boku.ac.at [VIBT-BOKU, University of Natural Resources and Life Sciences, Department of Biotechnology, Muthgasse 11, A-1190 Vienna (Austria); Sterflinger, Katja, E-mail: Katja.Sterflinger@boku.ac.at [VIBT-BOKU, University of Natural Resources and Life Sciences, Department of Biotechnology, Muthgasse 11, A-1190 Vienna (Austria)

    2012-11-15

    The study of microbial life in building materials is an emerging topic concerning biodeterioration of materials as well as health risks in houses and at working places. Biodegradation and the potential health implications associated with microbial growth in our residences call for more precise methods of quantification and identification. To date, cultivation experiments are commonly used to gain insight into the microbial diversity. Nowadays, molecular techniques for the identification of microorganisms provide efficient methods that can be applied in this field. The efficiency of DNA extraction is decisive for performing a reliable and reproducible quantification of the microorganisms by qPCR or for characterizing the structure of the microbial community. In this study we tested thirteen DNA extraction methods and evaluated their efficiency in terms of (1) the quantity of DNA, (2) the quality and purity of DNA and (3) the ability of the DNA to be amplified in a PCR reaction using three universal primer sets for the ITS region of fungi as well as one primer pair targeting the 16S rRNA of bacteria, with three typical building materials - common plaster, red brick and gypsum cardboard. DNA concentration measurements showed strong variations among the tested methods and materials. Measurement of the DNA yield showed up to three orders of magnitude variation from the same samples, whereas A260/A280 ratios often prognosticated biases in the PCR amplifications. Visualization of the crude DNA extracts and the comparison of DGGE fingerprints showed additional drawbacks of some methods. The FastDNA Spin kit for soil proved to be the best DNA extraction method and provided positive results in all tests with the three building materials. Therefore, we suggest this method as a gold standard for quantification of indoor fungi and bacteria in building materials. -- Highlights: ► Up to thirteen extraction methods were evaluated with three

  9. Automatic estimation of elasticity parameters in breast tissue

    Science.gov (United States)

    Skerl, Katrin; Cochran, Sandy; Evans, Andrew

    2014-03-01

    Shear wave elastography (SWE), a novel ultrasound imaging technique, can provide unique information about cancerous tissue. To estimate elasticity parameters, a region of interest (ROI) is conventionally positioned manually over the stiffest part of the shear wave image (SWI). The aim of this work is to estimate the elasticity parameters, i.e. mean elasticity, maximal elasticity and standard deviation, fully automatically. Ultrasonic SWIs of a breast elastography phantom and of breast tissue in vivo were acquired using the Aixplorer system (SuperSonic Imagine, Aix-en-Provence, France). First, the SWI within the ultrasonic B-mode image was detected using MATLAB and the elasticity values were extracted. The ROI was then automatically positioned over the stiffest part of the SWI and the elasticity parameters were calculated. Finally, all values were saved, together with the patient's study ID, in a spreadsheet that is easily available to physicians and clinical staff for further evaluation. This algorithm simplifies the handling, especially for the performance and evaluation of clinical trials: it gives physicians easy access to the elasticity parameters of examinations from their own and other institutions, reduces clinical time and effort, simplifies evaluation of data in clinical trials, and improves reproducibility.
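
    The core ROI-positioning step can be read as a local-mean search: slide a fixed-size window over the elasticity map, centre the ROI where the local mean is highest, and report the statistics inside it. The record's implementation is in MATLAB; the Python sketch below, with an invented ROI size and phantom values, only illustrates the idea.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def auto_roi_stats(elasticity_map, roi_size=9):
        """Place a square ROI over the stiffest part of a shear wave
        elasticity map and return mean/max/std inside it."""
        # Local mean at every candidate ROI centre; its maximum marks the
        # stiffest neighbourhood.
        local_mean = uniform_filter(elasticity_map, size=roi_size, mode="nearest")
        r, c = np.unravel_index(np.argmax(local_mean), local_mean.shape)
        half = roi_size // 2
        roi = elasticity_map[max(r - half, 0):r + half + 1,
                             max(c - half, 0):c + half + 1]
        return {"row": r, "col": c, "mean": roi.mean(),
                "max": roi.max(), "std": roi.std()}

    # Hypothetical elasticity map (kPa) with one stiff inclusion.
    rng = np.random.default_rng(0)
    emap = rng.normal(20.0, 2.0, size=(80, 80))
    emap[30:40, 50:60] += 60.0
    print(auto_roi_stats(emap))
    ```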

  10. 3D Central Line Extraction of Fossil Oyster Shells

    Science.gov (United States)

    Djuricic, A.; Puttonen, E.; Harzhauser, M.; Mandic, O.; Székely, B.; Pfeifer, N.

    2016-06-01

    Photogrammetry provides a powerful tool to digitally document protected, inaccessible, and rare fossils. This saves manpower relative to current documentation practice and makes the fragile specimens more available for paleontological analysis and public education. In this study, a high resolution orthophoto (0.5 mm) and digital surface models (1 mm) are used to define fossil boundaries, which are then used as input to automatically extract fossil length information via central lines. In general, central lines are widely used in the geosciences as they ease observation, monitoring and evaluation of object dimensions. Here, 3D central lines are used in a novel paleontological context to study fossilized oyster shells with photogrammetric and LiDAR-obtained 3D point cloud data. 3D central lines of 1121 Crassostrea gryphoides oysters of various shapes and sizes were computed in the study. Central line calculation included: i) Delaunay triangulation between the fossil shell boundary points and formation of the Voronoi diagram; ii) extraction of Voronoi vertices and construction of a connected graph tree from them; iii) reduction of the graph to the longest possible central line via Dijkstra's algorithm; iv) extension of the longest central line to the shell boundary and smoothing by adjustment of a cubic spline curve; and v) integration of the central line into the corresponding 3D point cloud. The resulting longest-path estimate for the 3D central line is a size parameter that can be applied in oyster shell age determination in both paleontological and biological applications. Our investigation evaluates the ability and performance of the central line method to measure shell sizes accurately by comparing automatically extracted central lines with manually collected reference data used in paleontological analysis. Our results show that the automatically obtained central line length overestimated the manually collected reference by 1.5% in the test set, which is deemed
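
    Steps i)-iii) amount to a classic medial-axis construction. The 2D sketch below, assuming an ordered boundary polygon, builds the Voronoi diagram of the boundary points, keeps the vertices inside the shape, and extracts the longest shortest path with Dijkstra's algorithm; the study works on 3D point clouds and adds the spline smoothing and boundary extension of steps iv)-v), which are omitted here.

    ```python
    import numpy as np
    import networkx as nx
    from matplotlib.path import Path
    from scipy.spatial import Voronoi

    def central_line(boundary_pts):
        """Approximate central line of a closed 2D shape given ordered
        boundary points: interior Voronoi vertices form a graph whose
        longest shortest path is taken as the central line."""
        vor = Voronoi(boundary_pts)
        inside = Path(boundary_pts).contains_points(vor.vertices)
        g = nx.Graph()
        for v0, v1 in vor.ridge_vertices:
            if v0 >= 0 and v1 >= 0 and inside[v0] and inside[v1]:
                w = float(np.linalg.norm(vor.vertices[v0] - vor.vertices[v1]))
                g.add_edge(v0, v1, weight=w)
        # Longest of all pairwise shortest paths: the weighted graph "diameter".
        best, best_len = None, -1.0
        for src, dists in nx.all_pairs_dijkstra_path_length(g, weight="weight"):
            far = max(dists, key=dists.get)
            if dists[far] > best_len:
                best_len, best = dists[far], (src, far)
        path = nx.dijkstra_path(g, *best, weight="weight")
        return vor.vertices[path], best_len

    # Toy shell outline: an ellipse; the central line runs along its long axis.
    theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    pts, length = central_line(np.c_[60 * np.cos(theta), 20 * np.sin(theta)])
    print(round(length, 1))
    ```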

  11. Automatic delineation of brain regions on MRI and PET images from the pig.

    Science.gov (United States)

    Villadsen, Jonas; Hansen, Hanne D; Jørgensen, Louise M; Keller, Sune H; Andersen, Flemming L; Petersen, Ida N; Knudsen, Gitte M; Svarer, Claus

    2018-01-15

    The increasing use of the pig as a research model in neuroimaging requires standardized processing tools. For example, extraction of regional dynamic time series from brain PET images requires parcellation procedures that benefit from being automated. Manual inter-modality spatial normalization to an MRI atlas is operator-dependent, time-consuming, and can be inaccurate when cortical radiotracer binding or skull uptake is lacking. We therefore developed a parcellated PET template that allows for automatic spatial normalization to PET images of any radiotracer. MRI and [11C]Cimbi-36 PET scans obtained in sixteen pigs formed the basis for the atlas. The high resolution MRI scans allowed for creation of an accurately averaged MRI template. By aligning the within-subject PET scans to their MRI counterparts, an averaged PET template was created in the same space. We developed an automatic procedure for spatial normalization of the averaged PET template to new PET images, thereby facilitating transfer of the atlas regional parcellation. Evaluation of the automatic spatial normalization procedure found the median voxel displacement to be 0.22±0.08 mm using the MRI template with individual MRI images and 0.92±0.26 mm using the PET template with individual [11C]Cimbi-36 PET images. We tested the automatic procedure by assessing eleven PET radiotracers with different kinetics and spatial distributions, using perfusion-weighted images of early PET time frames. We here present an automatic procedure for accurate and reproducible spatial normalization and parcellation of pig PET images of any radiotracer with reasonable blood-brain barrier penetration. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Automatisms: bridging clinical neurology with criminal law.

    Science.gov (United States)

    Rolnick, Joshua; Parvizi, Josef

    2011-03-01

    The law, like neurology, grapples with the relationship between disease states and behavior. Sometimes, the two disciplines share the same terminology, such as automatism. In law, the "automatism defense" is a claim that action was involuntary or performed while unconscious. Someone charged with a serious crime can acknowledge committing the act and yet may go free if, relying on the expert testimony of clinicians, the court determines that the act of crime was committed in a state of automatism. In this review, we explore the relationship between the use of automatism in the legal and clinical literature. We close by addressing several issues raised by the automatism defense: semantic ambiguity surrounding the term automatism, the presence or absence of consciousness during automatisms, and the methodological obstacles that have hindered the study of cognition during automatisms. Copyright © 2010 Elsevier Inc. All rights reserved.

  13. Molecular identification of polymers and anthropogenic particles extracted from oceanic water and fish stomach - A Raman micro-spectroscopy study.

    Science.gov (United States)

    Ghosal, Sutapa; Chen, Michael; Wagner, Jeff; Wang, Zhong-Min; Wall, Stephen

    2018-02-01

    Pacific Ocean trawl samples, stomach contents of laboratory-raised fish as well as fish from the subtropical gyres were analyzed by Raman micro-spectroscopy (RMS) to identify polymer residues and any detectable persistent organic pollutants (POPs). The goal was to access specific molecular information at the individual particle level in order to identify polymer debris in the natural environment. The identification process was aided by a laboratory-generated automated fluorescence removal algorithm. Pacific Ocean trawl samples of plastic debris associated with fish collection sites were analyzed to determine the types of polymers commonly present. Subsequently, stomach contents of fish from these locations were analyzed for ingested polymer debris. Extraction of polymer debris from fish stomachs using KOH versus ultrapure water was evaluated to determine the optimal method of extraction; pulsed ultrasonic extraction in ultrapure water was determined to be the method of choice, with minimal chemical intrusion. The Pacific Ocean trawl samples yielded primarily polyethylene (PE) and polypropylene (PP) particles >1 mm, PE being the most prevalent type. Additional microplastic residues (1 mm - 10 μm) extracted by filtration included a polystyrene (PS) particle in addition to PE and PP. The flame retardant deca-BDE was tentatively identified on some of the PP trawl particles. Polymer residues were also extracted from the stomachs of Atlantic and Pacific Ocean fish. Two types of polymer-related debris were identified in the Atlantic Ocean fish: (1) polymer fragments and (2) fragments with combined polymer and fatty acid signatures. In terms of polymer fragments, only PE and PP were detected in the fish stomachs from both locations. A variety of particles were extracted from oceanic fish as potential plastic pieces based on optical examination; however, subsequent RMS examination identified them as various non-plastic fragments, highlighting the importance
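
    The automated fluorescence removal algorithm mentioned above is lab-specific, but a common generic approach is iterative modified-polynomial baseline fitting: repeatedly fit a low-order polynomial and clip the spectrum down to the fit, so the polynomial settles onto the broad fluorescence background beneath the sharper Raman peaks. A sketch under that assumption, on a synthetic spectrum:

    ```python
    import numpy as np

    def iterative_polyfit_baseline(x, y, degree=5, n_iter=50):
        """Estimate a fluorescence-like baseline: at each iteration, points
        above the current polynomial fit are clipped down, so the fit
        relaxes onto the background under the peaks."""
        y = y.astype(float).copy()
        for _ in range(n_iter):
            base = np.polyval(np.polyfit(x, y, degree), x)
            y = np.minimum(y, base)
        return base

    # Synthetic Raman spectrum: two peaks on a broad curved background.
    x = np.linspace(400, 1800, 700)
    background = 1e-6 * (x - 400) ** 2 + 0.2
    peaks = np.exp(-(x - 1000) ** 2 / 50) + 0.8 * np.exp(-(x - 1450) ** 2 / 80)
    spectrum = background + peaks
    corrected = spectrum - iterative_polyfit_baseline(x, spectrum)
    print(float(corrected.max()))  # close to the taller peak's height of ~1.0
    ```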

  14. Automatic Picking of Foraminifera: Design of the Foraminifera Image Recognition and Sorting Tool (FIRST) Prototype and Results of the Image Classification Scheme

    Science.gov (United States)

    de Garidel-Thoron, T.; Marchant, R.; Soto, E.; Gally, Y.; Beaufort, L.; Bolton, C. T.; Bouslama, M.; Licari, L.; Mazur, J. C.; Brutti, J. M.; Norsa, F.

    2017-12-01

    Foraminifera tests are the main proxy carriers for paleoceanographic reconstructions. Both geochemical and taxonomical studies require large numbers of tests to achieve statistical relevance. To date, the extraction of foraminifera from the sediment coarse fraction is still done by hand and is thus time-consuming. Moreover, the recognition of ecologically relevant morphotypes requires taxonomical skills that are not easily taught. Automatic recognition and extraction of foraminifera would greatly help paleoceanographers overcome these issues. Recent advances in automatic image classification using machine learning open the way to automatic extraction of foraminifera. Here we detail progress on the design of an automatic picking machine as part of the FIRST project. The machine handles 30 pre-sieved samples (100-1000 µm), separating them into individual particles (including foraminifera) and imaging each in pseudo-3D. The particles are classified, and specimens of interest are sorted either for Individual Foraminifera Analyses (44 per slide) and/or for classical multiple analyses (8 morphological classes per slide, up to 1000 individuals per hole). The classification is based on machine learning using Convolutional Neural Networks (CNNs), similar to the approach used in the coccolithophorid imaging system SYRACO. To prove its feasibility, we built two training image datasets of modern planktonic foraminifera containing approximately 2000 and 5000 images, corresponding to 15 and 25 morphological classes respectively. Using a CNN with a residual topology (ResNet) we achieve over 95% correct classification for each dataset. We tested the network on 160,000 images from 45 depths of a sediment core from the Pacific Ocean, for which we have human counts. The current algorithm is able to reproduce the downcore variability in both Globigerinoides ruber and the fragmentation index (r2 = 0.58 and 0.88, respectively). The FIRST prototype yields some promising results for high
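
    A minimal sketch of the classification stage: a residual-topology CNN whose final layer is resized to the morphological classes. The record confirms only "a CNN with a residual topology (ResNet)"; the specific backbone (torchvision's ResNet-18), input size, optimizer and batch below are assumptions for illustration.

    ```python
    import torch
    import torch.nn as nn
    from torchvision import models

    NUM_CLASSES = 25  # the larger of the two datasets described above

    # Residual-topology CNN; the exact backbone is an assumption.
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # One illustrative training step on a random batch standing in for
    # pseudo-3D particle images (3-channel, 224x224 purely for illustration).
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, NUM_CLASSES, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(float(loss))
    ```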

  15. Microbes on building materials — Evaluation of DNA extraction protocols as common basis for molecular analysis

    International Nuclear Information System (INIS)

    Ettenauer, Jörg D.; Piñar, Guadalupe; Lopandic, Ksenija; Spangl, Bernhard; Ellersdorfer, Günther; Voitl, Christian; Sterflinger, Katja

    2012-01-01

    The study of microbial life in building materials is an emerging topic concerning biodeterioration of materials as well as health risks in houses and at working places. Biodegradation and the potential health implications associated with microbial growth in our residences call for more precise methods of quantification and identification. To date, cultivation experiments are commonly used to gain insight into the microbial diversity. Nowadays, molecular techniques for the identification of microorganisms provide efficient methods that can be applied in this field. The efficiency of DNA extraction is decisive for performing a reliable and reproducible quantification of the microorganisms by qPCR or for characterizing the structure of the microbial community. In this study we tested thirteen DNA extraction methods and evaluated their efficiency in terms of (1) the quantity of DNA, (2) the quality and purity of DNA and (3) the ability of the DNA to be amplified in a PCR reaction using three universal primer sets for the ITS region of fungi as well as one primer pair targeting the 16S rRNA of bacteria, with three typical building materials — common plaster, red brick and gypsum cardboard. DNA concentration measurements showed strong variations among the tested methods and materials. Measurement of the DNA yield showed up to three orders of magnitude variation from the same samples, whereas A260/A280 ratios often prognosticated biases in the PCR amplifications. Visualization of the crude DNA extracts and the comparison of DGGE fingerprints showed additional drawbacks of some methods. The FastDNA Spin kit for soil proved to be the best DNA extraction method and provided positive results in all tests with the three building materials. Therefore, we suggest this method as a gold standard for quantification of indoor fungi and bacteria in building materials. -- Highlights: ► Up to thirteen extraction methods were evaluated with three building materials.

  16. Detection of alcoholism based on EEG signals and functional brain network features extraction

    NARCIS (Netherlands)

    Ahmadi, N.; Pei, Y.; Pechenizkiy, M.

    2017-01-01

    Alcoholism is a common disorder that leads to brain defects and associated cognitive, emotional and behavioral impairments. Finding and extracting discriminative biological markers, which are correlated with healthy and alcoholic brain patterns, helps us to utilize automatic methods for

  17. Exploring DBpedia and Wikipedia for Portuguese Semantic Relationship Extraction

    Directory of Open Access Journals (Sweden)

    David Soares Batista

    2013-07-01

    Full Text Available The identification of semantic relationships, as expressed between named entities in text, is an important step for extracting knowledge from large document collections, such as the Web. Previous works have addressed this task for the English language through supervised learning techniques for automatic classification, with the current state of the art involving learning methods based on string kernels. However, such approaches require manually annotated training data for each type of semantic relationship, and have scalability problems when tens or hundreds of different types of relationships have to be extracted. This article discusses an approach for distantly supervised relation extraction over texts written in the Portuguese language, which uses an efficient technique for measuring similarity between relation instances, based on min-wise hashing and on locality-sensitive hashing. In the proposed method, the training examples are automatically collected from Wikipedia, corresponding to sentences that express semantic relationships between pairs of entities extracted from DBpedia. These examples are represented as sets of character quadgrams and other representative elements. The sets are indexed in a data structure that implements the idea of locality-sensitive hashing. To check which semantic relationship is expressed between a given pair of entities referenced in a sentence, the most similar training examples are retrieved, based on an approximation to the Jaccard coefficient obtained through min-hashing, and the relation class is assigned based on the weighted votes of the most similar examples. Tests with a dataset from Wikipedia validate the suitability of the proposed method, showing, for instance, that it is able to extract 10 different types of semantic relations, 8 of them asymmetric, with an average F1 score of 55.6%.
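
    The similarity machinery described here (character quadgrams, min-hashing, locality-sensitive hashing) can be sketched with the datasketch library. The toy Portuguese sentences and relation labels below are invented, and the LSH threshold is set low because the two example sentences share little beyond the "nasceu em" pattern; the authors' system indexes far larger training sets and adds weighted voting.

    ```python
    from datasketch import MinHash, MinHashLSH

    def quadgrams(text):
        """Character 4-grams, one of the representations used for instances."""
        return {text[i:i + 4] for i in range(len(text) - 3)}

    def minhash(text, num_perm=128):
        m = MinHash(num_perm=num_perm)
        for g in quadgrams(text):
            m.update(g.encode("utf8"))
        return m

    # Invented training sentences expressing known DBpedia relations.
    train = {
        "born-in": "Fernando Pessoa nasceu em Lisboa em 1888.",
        "capital-of": "Lisboa é a capital de Portugal.",
    }
    lsh = MinHashLSH(threshold=0.1, num_perm=128)
    for rel, sent in train.items():
        lsh.insert(rel, minhash(sent))

    query = "José Saramago nasceu em Azinhaga em 1922."
    q = minhash(query)
    print(lsh.query(q))                           # expected: ['born-in']
    print(minhash(train["born-in"]).jaccard(q))   # estimated quadgram Jaccard
    ```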

  18. Bengali text summarization by sentence extraction

    OpenAIRE

    Sarkar, Kamal

    2012-01-01

    Text summarization is a process to produce an abstract or a summary by selecting significant portions of the information from one or more texts. In an automatic text summarization process, a text is given to the computer and the computer returns a shorter, less redundant extract or abstract of the original text(s). Many techniques have been developed for summarizing English texts, but very few attempts have been made at Bengali text summarization. This paper presents a method for Bengali ...
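
    A minimal sketch of extractive summarization in this spirit: score each sentence by the document-level frequency of its words and return the top-k sentences in their original order. This is a generic frequency baseline, not Sarkar's Bengali-specific method.

    ```python
    import re
    from collections import Counter

    def summarize(text, k=2):
        """Extractive summary: rank sentences by mean word frequency."""
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        freq = Counter(re.findall(r"\w+", text.lower()))

        def score(sentence):
            tokens = re.findall(r"\w+", sentence.lower())
            return sum(freq[t] for t in tokens) / (len(tokens) or 1)

        top = sorted(sorted(sentences, key=score, reverse=True)[:k],
                     key=sentences.index)  # restore original order
        return " ".join(top)

    doc = ("Text summarization selects the most informative sentences. "
           "Many summarization methods exist for English text. "
           "Few summarization methods exist for Bengali text. "
           "This sketch scores sentences by normalized word frequency.")
    print(summarize(doc, k=2))
    ```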

  19. Preparation of High Purity, High Molecular-Weight Chitin from Ionic Liquids for Use as an Adsorbate for the Extraction of Uranium from Seawater

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, Robin [Univ. of Alabama, Tuscaloosa, AL (United States)

    2013-12-21

    Ensuring a domestic supply of uranium is a key issue facing the wider implementation of nuclear power. Uranium is mostly mined in Kazakhstan, Australia, and Canada, and there are few high-grade uranium reserves left worldwide. Therefore, one of the most appealing potential sources of uranium is the vast quantity dissolved in the oceans (estimated to be 4.4 billion tons worldwide). There have been research efforts centered on finding a means to extract uranium from seawater for decades, but so far none have resulted in an economically viable product, due in part to the fact that the materials that have been successfully demonstrated to date are too costly (in terms of money and energy) to produce on the necessary scale. Ionic Liquids (salts which melt below 100 °C) can completely dissolve raw crustacean shells, leading to recovery of a high purity, high molecular weight chitin powder and to fibers and films which can be spun directly from the extract solution suggesting that continuous processing might be feasible. The work proposed here will utilize the unprecedented control this makes possible over the chitin fiber a) to prepare electrospun nanofibers of very high surface area and in specific architectures, b) to modify the fiber surfaces chemically with selective extractant capacity, and c) to demonstrate their utility in the direct extraction and recovery of uranium from seawater. This approach will 1) provide direct extraction of chitin from shellfish waste thus saving energy over the current industrial process for obtaining chitin; 2) allow continuous processing of nanofibers for very high surface area fibers in an economical operation; 3) provide a unique high molecular weight chitin not available from the current industrial process, leading to stronger, more durable fibers; and 4) allow easy chemical modification of the large surface areas of the fibers for appending uranyl selective functionality providing selectivity and ease of stripping. The

  20. Preparation of High Purity, High Molecular-Weight Chitin from Ionic Liquids for Use as an Adsorbate for the Extraction of Uranium from Seawater

    International Nuclear Information System (INIS)

    Rogers, Robin

    2013-01-01

    Ensuring a domestic supply of uranium is a key issue facing the wider implementation of nuclear power. Uranium is mostly mined in Kazakhstan, Australia, and Canada, and there are few high-grade uranium reserves left worldwide. Therefore, one of the most appealing potential sources of uranium is the vast quantity dissolved in the oceans (estimated to be 4.4 billion tons worldwide). There have been research efforts centered on finding a means to extract uranium from seawater for decades, but so far none have resulted in an economically viable product, due in part to the fact that the materials that have been successfully demonstrated to date are too costly (in terms of money and energy) to produce on the necessary scale. Ionic Liquids (salts which melt below 100 °C) can completely dissolve raw crustacean shells, leading to recovery of a high purity, high molecular weight chitin powder and to fibers and films which can be spun directly from the extract solution suggesting that continuous processing might be feasible. The work proposed here will utilize the unprecedented control this makes possible over the chitin fiber a) to prepare electrospun nanofibers of very high surface area and in specific architectures, b) to modify the fiber surfaces chemically with selective extractant capacity, and c) to demonstrate their utility in the direct extraction and recovery of uranium from seawater. This approach will 1) provide direct extraction of chitin from shellfish waste thus saving energy over the current industrial process for obtaining chitin; 2) allow continuous processing of nanofibers for very high surface area fibers in an economical operation; 3) provide a unique high molecular weight chitin not available from the current industrial process, leading to stronger, more durable fibers; and 4) allow easy chemical modification of the large surface areas of the fibers for appending uranyl selective functionality providing selectivity and ease of stripping. The resulting