WorldWideScience

Sample records for extract quantitative information

  1. Information extraction

    NARCIS (Netherlands)

    Zhang, Lei; Hoede, C.

    2002-01-01

    In this paper we present a new approach to extracting relevant information from natural language text by means of knowledge graphs. We give a multiple-level model based on knowledge graphs for describing template information, and investigate the concept of partial structural parsing. Moreover, we point out that

  2. Quantitative metamaterial property extraction

    CERN Document Server

    Schurig, David

    2015-01-01

    We examine an extraction model for metamaterials, not previously reported, that gives a precise, quantitative and causal representation of S-parameter data over a broad frequency range, up to frequencies where the free-space wavelength is only a modest factor larger than the unit cell dimension. The model comprises superposed, slab-shaped response regions of finite thickness, one for each observed resonance. The resonance dispersion is Lorentzian and thus strictly causal. This new model is compared with previous models for correctness likelihood, including an appropriate Occam's factor for each fit parameter. We find that this new model is by far the most likely to be correct in a Bayesian analysis of model fits to S-parameter simulation data for several classic metamaterial unit cells.

  3. Information extraction system

    Science.gov (United States)

    Lemmond, Tracy D; Hanley, William G; Guensche, Joseph Wendell; Perry, Nathan C; Nitao, John J; Kidwell, Paul Brandon; Boakye, Kofi Agyeman; Glaser, Ron E; Prenger, Ryan James

    2014-05-13

    An information extraction system and methods of operating the system are provided. In particular, an information extraction system for performing meta-extraction of named entities of people, organizations, and locations, as well as relationships and events, from text documents is described herein.

  4. Multimedia Information Extraction

    CERN Document Server

    Maybury, Mark T

    2012-01-01

    The advent of increasingly large consumer collections of audio (e.g., iTunes), imagery (e.g., Flickr), and video (e.g., YouTube) is driving a need not only for multimedia retrieval but also information extraction from and across media. Furthermore, industrial and government collections fuel requirements for stock media access, media preservation, broadcast news retrieval, identity management, and video surveillance.  While significant advances have been made in language processing for information extraction from unstructured multilingual text and extraction of objects from imagery and vid

  5. The Quantitative Theory of Information

    DEFF Research Database (Denmark)

    Topsøe, Flemming; Harremoës, Peter

    2008-01-01

    Information Theory as developed by Shannon and followers is becoming more and more important in a number of sciences. The concepts appear to be just the right ones with intuitively appealing operational interpretations. Furthermore, the information theoretical quantities are connected by powerful … identities and inequalities. The article introduces the concepts code, entropy, divergence, redundancy and mutual information, which are considered to be the most important ones…

  6. Quantitative Information on Oncology Prescription Drug Websites.

    Science.gov (United States)

    Sullivan, Helen W; Aikin, Kathryn J; Squiers, Linda B

    2016-09-02

    Our objective was to determine whether and how quantitative information about drug benefits and risks is presented to consumers and healthcare professionals on cancer-related prescription drug websites. We analyzed the content of 65 active cancer-related prescription drug websites. We assessed the inclusion and presentation of quantitative information for two audiences (consumers and healthcare professionals) and two types of information (drug benefits and risks). Websites were equally likely to present quantitative information for benefits (96.9 %) and risks (95.4 %). However, the amount of the information differed significantly: Both consumer-directed and healthcare-professional-directed webpages were more likely to have quantitative information for every benefit (consumer 38.5 %; healthcare professional 86.1 %) compared with every risk (consumer 3.1 %; healthcare professional 6.2 %). The numeric and graphic presentations also differed by audience and information type. Consumers have access to quantitative information about oncology drugs and, in particular, about the benefits of these drugs. Research has shown that using quantitative information to communicate treatment benefits and risks can increase patients' and physicians' understanding and can aid in treatment decision-making, although some numeric and graphic formats are more useful than others.

  7. Extracting useful information from images

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    2011-01-01

    The paper presents an overview of methods for extracting useful information from digital images. It covers various approaches that utilize different properties of images, like intensity distribution, spatial frequency content and several others. A few case studies including isotropic … and heterogeneous, congruent and non-congruent images are used to illustrate how the described methods work and to compare some of them…

  8. Informed consent in dental extractions.

    Directory of Open Access Journals (Sweden)

    José Luis Capote Femenías

    2009-07-01

    When performing any oral intervention, particularly dental extractions, the specialist should have the oral or written consent of the patient. This consent includes the explanation of all possible complications, whether typical, very serious or personalized in relation to the patient's previous health condition, age, profession, religion or any other characteristic, as well as the possible benefits of the intervention. This article addresses the bioethical aspects of dental extractions in order to determine the main elements that the informed consent should include.

  9. Web-Based Information Extraction Technology

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Information extraction techniques on the Web are a current research hotspot. Many information extraction techniques based on different principles have appeared, with differing capabilities. We classify the existing information extraction techniques by the principle of information extraction and analyze the methods and principles of semantic information adding, schema defining, rule expression, semantic item locating and object locating in the approaches. Based on the above survey and analysis, several open problems are discussed.

  10. Information Extraction and Webpage Understanding

    Directory of Open Access Journals (Sweden)

    M.Sharmila Begum

    2011-11-01

    The two most important tasks in information extraction from the Web are webpage structure understanding and natural language sentence processing. However, little work has been done toward an integrated statistical model for understanding webpage structures and processing natural language sentences within the HTML elements. Our recent work on webpage understanding introduces a joint model of Hierarchical Conditional Random Fields (HCRFs) and extended Semi-Markov Conditional Random Fields (Semi-CRFs) to leverage the page structure understanding results in free text segmentation and labeling. In this top-down integration model, the decision of the HCRF model could guide the decision making of the Semi-CRF model. However, the drawback of the top-down integration strategy is also apparent: the decision of the Semi-CRF model could not be used by the HCRF model to guide its decision making. This paper proposes a novel framework called WebNLP, which enables bidirectional integration of page structure understanding and text understanding in an iterative manner. We have applied the proposed framework to local business entity extraction and Chinese person and organization name extraction. Experiments show that the WebNLP framework achieved significantly better performance than existing methods.

  11. On Bounding Problems of Quantitative Information Flow

    CERN Document Server

    Yasuoka, Hirotoshi

    2011-01-01

    Researchers have proposed formal definitions of quantitative information flow based on information theoretic notions such as the Shannon entropy, the min entropy, the guessing entropy, belief, and channel capacity. This paper investigates the hardness of precisely checking the quantitative information flow of a program according to such definitions. More precisely, we study the "bounding problem" of quantitative information flow, defined as follows: Given a program M and a positive real number q, decide if the quantitative information flow of M is less than or equal to q. We prove that the bounding problem is not a k-safety property for any k (even when q is fixed, for the Shannon-entropy-based definition with the uniform distribution), and therefore is not amenable to the self-composition technique that has been successfully applied to checking non-interference. We also prove complexity theoretic hardness results for the case when the program is restricted to loop-free boolean programs. Specifically, we show...
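
    A minimal sketch of the quantity being bounded may help (ours, not the paper's; the `shannon_qif` helper and the toy programs are illustrative assumptions). For a deterministic program with a uniformly distributed secret, the Shannon-entropy-based flow H(S) − H(S|O) reduces to the entropy H(O) of the induced output distribution:

```python
from collections import Counter
from math import log2

def shannon_qif(program, secrets):
    """Shannon-entropy information flow of a deterministic program under a
    uniform prior on `secrets`: H(S) - H(S|O), which here equals H(O)."""
    counts = Counter(program(s) for s in secrets)
    n = len(secrets)
    return -sum((c / n) * log2(c / n) for c in counts.values())

secrets = range(16)
print(shannon_qif(lambda s: s == 7, secrets))  # password check: ~0.34 bits
print(shannon_qif(lambda s: s % 4, secrets))   # reveals two bits: 2.0
```

    The bounding problem asks whether such a number stays at or below a threshold q; the paper shows this is not a k-safety property for any k.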

  12. Extracting information from multiplex networks.

    Science.gov (United States)

    Iacovacci, Jacopo; Bianconi, Ginestra

    2016-06-01

    Multiplex networks are generalized network structures that are able to describe networks in which the same set of nodes are connected by links that have different connotations. Multiplex networks are ubiquitous since they describe social, financial, engineering, and biological networks as well. Extending our ability to analyze complex networks to multiplex network structures increases greatly the level of information that is possible to extract from big data. For these reasons, characterizing the centrality of nodes in multiplex networks and finding new ways to solve challenging inference problems defined on multiplex networks are fundamental questions of network science. In this paper, we discuss the relevance of the Multiplex PageRank algorithm for measuring the centrality of nodes in multilayer networks and we characterize the utility of the recently introduced indicator function Θ̃^S for describing their mesoscale organization and community structure. As working examples for studying these measures, we consider three multiplex network datasets coming from social science.
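
    To make the layer coupling concrete, here is a simplified two-layer variant (our sketch, not the paper's exact definition; the rule of biasing teleportation by the other layer's score is an assumption for illustration):

```python
import numpy as np

def pagerank(A, alpha=0.85, v=None, iters=200):
    """Power-iteration PageRank on adjacency matrix A (A[i, j] = 1 if j -> i),
    with an optional personalization/teleportation vector v."""
    n = A.shape[0]
    v = np.ones(n) / n if v is None else v / v.sum()
    out_deg = A.sum(axis=0)
    x = v.copy()
    for _ in range(iters):
        share = np.where(out_deg > 0, x / np.maximum(out_deg, 1), 0.0)
        dangling = x[out_deg == 0].sum()        # mass sitting on sink nodes
        x = alpha * (A @ share + dangling * v) + (1 - alpha) * v
    return x / x.sum()

def multiplex_pagerank(A, B, alpha=0.85):
    """Centrality earned on layer A biases where the layer-B walker teleports."""
    return pagerank(B, alpha, v=pagerank(A, alpha))

A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])  # layer A (e.g., friendship)
B = np.array([[0, 0, 1], [1, 0, 1], [0, 1, 0]])  # layer B (e.g., collaboration)
print(multiplex_pagerank(A, B))
```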

  13. Extracting Information from Multiplex Networks

    CERN Document Server

    Iacovacci, Jacopo

    2016-01-01

    Multiplex networks are generalized network structures that are able to describe networks in which the same set of nodes are connected by links that have different connotations. Multiplex networks are ubiquitous since they describe social, financial, engineering and biological networks as well. Extending our ability to analyze complex networks to multiplex network structures increases greatly the level of information that is possible to extract from Big Data. For these reasons, characterizing the centrality of nodes in multiplex networks and finding new ways to solve challenging inference problems defined on multiplex networks are fundamental questions of network science. In this paper we discuss the relevance of the Multiplex PageRank algorithm for measuring the centrality of nodes in multilayer networks and we characterize the utility of the recently introduced indicator function $\widetilde{\Theta}^{S}$ for describing their mesoscale organization and community structure. As working examples for studying thes...

  14. Extracting information from multiplex networks

    Science.gov (United States)

    Iacovacci, Jacopo; Bianconi, Ginestra

    2016-06-01

    Multiplex networks are generalized network structures that are able to describe networks in which the same set of nodes are connected by links that have different connotations. Multiplex networks are ubiquitous since they describe social, financial, engineering, and biological networks as well. Extending our ability to analyze complex networks to multiplex network structures increases greatly the level of information that is possible to extract from big data. For these reasons, characterizing the centrality of nodes in multiplex networks and finding new ways to solve challenging inference problems defined on multiplex networks are fundamental questions of network science. In this paper, we discuss the relevance of the Multiplex PageRank algorithm for measuring the centrality of nodes in multilayer networks and we characterize the utility of the recently introduced indicator function Θ̃^S for describing their mesoscale organization and community structure. As working examples for studying these measures, we consider three multiplex network datasets coming from social science.

  15. Quantitative Information Flow - Verification Hardness and Possibilities

    CERN Document Server

    Yasuoka, Hirotoshi

    2010-01-01

    Researchers have proposed formal definitions of quantitative information flow based on information theoretic notions such as the Shannon entropy, the min entropy, the guessing entropy, and channel capacity. This paper investigates the hardness and possibilities of precisely checking and inferring quantitative information flow according to such definitions. We prove that, even for just comparing which of two programs has the larger flow, none of the definitions is a k-safety property for any k, and therefore is not amenable to the self-composition technique that has been successfully applied to precisely checking non-interference. We also show a complexity theoretic gap with non-interference by proving that, for loop-free boolean programs whose non-interference is coNP-complete, the comparison problem is #P-hard for all of the definitions. For positive results, we show that universally quantifying the distribution in the comparison problem, that is, comparing two programs according to the entropy based definit...

  16. Personalized Web Services for Web Information Extraction

    CERN Document Server

    Jarir, Zahi; Erradi, Mahammed

    2011-01-01

    The field of information extraction from the Web emerged with the growth of the Web and the multiplication of online data sources. This paper is an analysis of information extraction methods. It presents a service-oriented approach for web information extraction considering both web data management and extraction services. We then propose an SOA-based architecture to enhance flexibility and on-the-fly modification of web extraction services. An implementation of the proposed architecture on the middleware level of Java Enterprise Edition (JEE) servers is presented.

  17. Extracting Quantitative Data from Lunar Soil Spectra

    Science.gov (United States)

    Noble, S. K.; Pieters, C. M.; Hiroi, T.

    2005-01-01

    Using the modified Gaussian model (MGM) developed by Sunshine et al. [1] we compared the spectral properties of the Lunar Soil Characterization Consortium (LSCC) suite of lunar soils [2,3] with their petrologic and chemical compositions to obtain quantitative data. Our initial work on Apollo 17 soils [4] suggested that useful compositional data could be elicited from high quality soil spectra. We are now able to expand upon those results with the full suite of LSCC soils that allows us to explore a much wider range of compositions and maturity states. The model is shown to be sensitive to pyroxene abundance and can evaluate the relative portion of high-Ca and low-Ca pyroxenes in the soils. In addition, the dataset has provided unexpected insights into the nature and causes of absorption bands in lunar soils. For example, it was found that two distinct absorption bands are required in the 1.2 μm region of the spectrum. Neither of these bands can be attributed to plagioclase or agglutinates, but both appear to be largely due to pyroxene.
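
    The fitting machinery behind such an analysis can be sketched as follows (a hedged illustration, not Sunshine et al.'s code: the band seeds, widths and noise level are made-up values standing in for real LSCC spectra):

```python
import numpy as np
from scipy.optimize import curve_fit

def mgm_like(wl_um, c0, c1, s1, mu1, w1, s2, mu2, w2):
    """MGM-style model: ln-reflectance = linear continuum in energy plus
    absorption bands that are Gaussian in energy (1/wavelength)."""
    e = 1.0 / wl_um
    band = lambda s, mu, w: s * np.exp(-((e - mu) ** 2) / (2 * w ** 2))
    return c0 + c1 * e + band(s1, mu1, w1) + band(s2, mu2, w2)

wl = np.linspace(0.4, 2.5, 300)                  # wavelengths in micrometers
truth = mgm_like(wl, -0.2, 0.01, -0.15, 1.0, 0.08, -0.10, 0.5, 0.05)
spectrum = truth + np.random.default_rng(0).normal(0, 0.002, wl.size)
# seed the two pyroxene bands near 1 and 2 micrometers (centers in 1/um)
p0 = [-0.2, 0.0, -0.1, 1.0, 0.1, -0.1, 0.5, 0.1]
popt, _ = curve_fit(mgm_like, wl, spectrum, p0=p0)
print(popt[3], popt[6])                          # recovered band centers, 1/um
```

    The strengths and centers of the resolved bands are the kind of outputs that carry the pyroxene abundance and high-Ca/low-Ca information discussed above.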

  18. Information- Theoretic Analysis for the Difficulty of Extracting Hidden Information

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wei-ming; LI Shi-qu; CAO Jia; LIU Jiu-fen

    2005-01-01

    The difficulty of extracting hidden information, which is essentially a kind of secrecy, is analyzed by information-theoretic methods. The relations between key rate, message rate, hiding capacity and difficulty of extraction are studied in terms of the unicity distance of the stego-key, and the theoretical conclusions are used to analyze an actual extraction attack on Least Significant Bit (LSB) steganographic algorithms.
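
    The key quantity here is the unicity distance. In its classic Shannon form (a standard result stated for orientation; the paper adapts it to stego-keys), with key entropy H(K) and per-symbol redundancy D of the observed material:

```latex
\[
  n_0 \;\approx\; \frac{H(K)}{D},
\]
```

    i.e., roughly H(K)/D symbols of observed stego-data suffice, in principle, to determine the key uniquely, so extraction gets harder as key entropy grows or redundancy shrinks.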

  19. Enhanced Pattern Representation in Information Extraction

    Institute of Scientific and Technical Information of China (English)

    廖乐健; 曹元大; 张映波

    2004-01-01

    Traditional pattern representations in information extraction lack the ability to represent domain-specific concepts and are therefore inflexible. To overcome these restrictions, an enhanced pattern representation is designed which includes ontological concepts, neighboring-tree structures and soft constraints. An information-extraction inference engine based on hypothesis generation and conflict resolution is implemented. The proposed technique is successfully applied to an information extraction system for the Chinese-language query front-end of a job-recruitment search engine.

  20. Information Extraction From Chemical Patents

    Directory of Open Access Journals (Sweden)

    Sandra Bergmann

    2012-01-01

    The development of new chemicals or pharmaceuticals is preceded by an in-depth analysis of published patents in this field. This information retrieval is a costly and time-inefficient step when done by a human reader, yet it is mandatory for the potential success of an investment. The goal of the research project UIMA-HPC is to automate and hence speed up the process of knowledge mining about patents. Multi-threaded analysis engines, developed according to UIMA (Unstructured Information Management Architecture) standards, process texts and images in thousands of documents in parallel. UNICORE (UNiform Interface to COmputing Resources) workflow control structures make it possible to dynamically allocate resources for every given task to gain the best CPU-time/real-time ratios in an HPC environment.

  1. Extracting laboratory test information from biomedical text

    Directory of Open Access Journals (Sweden)

    Yanna Shen Kang

    2013-01-01

    Background: No previous study has reported the efficacy of current natural language processing (NLP) methods for extracting laboratory test information from narrative documents. This study investigates the pathology informatics question of how accurately such information can be extracted from text with the current tools and techniques, especially machine learning and symbolic NLP methods. The study data came from a text corpus maintained by the U.S. Food and Drug Administration, containing a rich set of information on laboratory tests and test devices. Methods: The authors developed a symbolic information extraction (SIE) system to extract device- and test-specific information about four types of laboratory test entities: specimens, analytes, units of measure and detection limits. They compared the performance of SIE and three prominent machine-learning-based NLP systems, LingPipe, GATE and BANNER, each implementing a distinct supervised machine learning method: hidden Markov models, support vector machines and conditional random fields, respectively. Results: The machine learning systems recognized laboratory test entities with moderately high recall, but low precision rates. Their recall rates were relatively higher when the number of distinct entity values (e.g., the spectrum of specimens) was very limited or when the lexical morphology of the entity was distinctive (as in units of measure), yet SIE outperformed them with statistically significant margins on extracting specimen, analyte and detection limit information in both precision and F-measure. Its high recall performance was statistically significant on analyte information extraction. Conclusions: Despite its shortcomings against machine learning methods, a well-tailored symbolic system may better discern relevancy among a pile of information of the same type and may outperform a machine learning system by tapping into lexically non-local contextual information such as the document structure.

  2. Application of GIS to Geological Information Extraction

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    GIS, a powerful tool for processing spatial data, is advantageous in its spatial overlaying. In this paper, GIS is applied to the extraction of geological information. Information associated with mineral resources is chosen to delineate the geo-anomalies, the basis of ore-forming anomalies and of mineral-deposit location. This application is illustrated with an example in the Weixi area, Yunnan Province.

  3. Are extraction methods in quantitative assays of pharmacopoeia monographs exhaustive? A comparison with pressurized liquid extraction.

    Science.gov (United States)

    Basalo, Carlos; Mohn, Tobias; Hamburger, Matthias

    2006-10-01

    The extraction methods in selected monographs of the European and the Swiss Pharmacopoeia were compared to pressurized liquid extraction (PLE) with respect to the yield of constituents to be dosed in the quantitative assay for the respective herbal drugs. The study included five drugs, Belladonnae folium, Colae semen, Boldo folium, Tanaceti herba and Agni casti fructus. They were selected to cover different classes of compounds to be analyzed and different extraction methods to be used according to the monographs. Extraction protocols for PLE were optimized by varying the solvents and number of extraction cycles. In PLE, yields > 97 % of extractable analytes were typically achieved with two extraction cycles. For alkaloid-containing drugs, the addition of ammonia prior to extraction significantly increased the yield and reduced the number of extraction cycles required for exhaustive extraction. PLE was in all cases superior to the extraction protocol of the pharmacopoeia monographs (taken as 100 %), with differences ranging from 108 % in case of parthenolide in Tanaceti herba to 343 % in case of alkaloids in Boldo folium.
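
    The effect of extraction cycles is easy to see with a back-of-envelope model (ours, not the paper's: assume each PLE cycle recovers a fixed fraction f of the analyte still in the matrix):

```python
def cumulative_yield(f, n):
    """Fraction extracted after n cycles, each recovering a share f of
    whatever analyte remains in the plant matrix."""
    return 1 - (1 - f) ** n

# A hypothetical per-cycle efficiency of 85 % already clears the ~97 %
# two-cycle benchmark reported above.
print(cumulative_yield(0.85, 2))  # 0.9775
```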

  4. Extraction of information from a single quantum

    OpenAIRE

    Paraoanu, G. S.

    2011-01-01

    We investigate the possibility of performing quantum tomography on a single qubit with generalized partial measurements and the technique of measurement reversal. Using concepts from statistical decision theory, we prove that, somewhat surprisingly, no information can be obtained using this scheme. It is shown that, irrespective of the measurement technique used, extraction of information from single quanta is at odds with other general principles of quantum physics.

  5. DKIE: Open Source Information Extraction for Danish

    DEFF Research Database (Denmark)

    Derczynski, Leon; Field, Camilla Vilhelmsen; Bøgh, Kenneth Sejdenfaden

    2014-01-01

    Danish is a major Scandinavian language spoken daily by around six million people. However, it lacks a unified, open set of NLP tools. This demonstration will introduce DKIE, an extensible open-source toolkit for processing Danish text. We implement an information extraction architecture for Danish...

  6. DKIE: Open Source Information Extraction for Danish

    DEFF Research Database (Denmark)

    Derczynski, Leon; Field, Camilla Vilhelmsen; Bøgh, Kenneth Sejdenfaden

    2014-01-01

    Danish is a major Scandinavian language spoken daily by around six million people. However, it lacks a unified, open set of NLP tools. This demonstration will introduce DKIE, an extensible open-source toolkit for processing Danish text. We implement an information extraction architecture for Danish … independently or with the Stanford NLP toolkit…

  7. Markovian Processes for Quantitative Information Leakage

    DEFF Research Database (Denmark)

    Biondi, Fabrizio

    and randomized processes with Markovian models and to compute their information leakage for a very general model of attacker. We present the QUAIL tool that automates such analysis and is able to compute the information leakage of an imperative WHILE language. Finally, we show how to use QUAIL to analyze some...

  8. Web Information Extraction

    Institute of Scientific and Technical Information of China (English)

    李晶; 陈恩红

    2003-01-01

    With the tremendous amount of information available on the Web, the ability to quickly obtain information has become a crucial problem. It is not enough for us to acquire information only with Web information retrieval technology. Therefore more and more people pay attention to Web information extraction technology. This paper first introduces some concepts of information extraction technology, then introduces and analyzes several typical Web information extraction methods based on the differences in extraction patterns.

  9. Differential Privacy versus Quantitative Information Flow

    CERN Document Server

    Alvim, Mário S; Degano, Pierpaolo; Palamidessi, Catuscia

    2010-01-01

    Differential privacy is a notion of privacy that has become very popular in the database community. Roughly, the idea is that a randomized query mechanism provides sufficient privacy protection if the ratio between the probabilities of two different entries to originate a certain answer is bounded by e^ε. In the fields of anonymity and information flow there is a similar concern for controlling information leakage, i.e. limiting the possibility of inferring the secret information from the observables. In recent years, researchers have proposed to quantify the leakage in terms of the information-theoretic notion of mutual information. There are two main approaches that fall in this category: one based on Shannon entropy, and one based on Rényi's min-entropy. The latter has a connection with the so-called Bayes risk, which expresses the probability of guessing the secret. In this paper, we show how to model the query system in terms of an information-theoretic channel, and we compare the notion of differen...
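
    For reference, the ratio bound mentioned above is the standard ε-differential-privacy condition on the randomized query mechanism K, viewed as a channel:

```latex
% For all adjacent databases x, x' (differing in one entry) and all answers z:
\[
  \frac{\Pr[\,K(x) = z\,]}{\Pr[\,K(x') = z\,]} \;\le\; e^{\varepsilon}.
\]
```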

  10. Automated information extraction from web APIs documentation

    OpenAIRE

    Ly, Papa Alioune; Pedrinaci, Carlos; Domingue, John

    2012-01-01

    A fundamental characteristic of Web APIs is the fact that, de facto, providers hardly follow any standard practices while implementing, publishing, and documenting their APIs. As a consequence, the discovery and use of these services by third parties is significantly hampered. In order to achieve further automation while exploiting Web APIs, we present an approach for automatically extracting relevant technical information from the Web pages documenting them. In particular we have devised two ...

  11. Unsupervised information extraction by text segmentation

    CERN Document Server

    Cortez, Eli

    2013-01-01

    A new unsupervised approach to the problem of Information Extraction by Text Segmentation (IETS) is proposed, implemented and evaluated herein. The authors' approach relies on information available on pre-existing data to learn how to associate segments in the input string with attributes of a given domain relying on a very effective set of content-based features. The effectiveness of the content-based features is also exploited to directly learn from test data structure-based features, with no previous human-driven training, a feature unique to the presented approach. Based on the approach, a

  12. Extracting the information backbone in online system.

    Science.gov (United States)

    Zhang, Qian-Ming; Zeng, An; Shang, Ming-Sheng

    2013-01-01

    Information overload is a serious problem in modern society and many solutions such as recommender system have been proposed to filter out irrelevant information. In the literature, researchers have been mainly dedicated to improving the recommendation performance (accuracy and diversity) of the algorithms while they have overlooked the influence of the topology of the online user-object bipartite networks. In this paper, we find that some information provided by the bipartite networks is not only redundant but also misleading. With such "less can be more" feature, we design some algorithms to improve the recommendation performance by eliminating some links from the original networks. Moreover, we propose a hybrid method combining the time-aware and topology-aware link removal algorithms to extract the backbone which contains the essential information for the recommender systems. From the practical point of view, our method can improve the performance and reduce the computational time of the recommendation system, thus improving both their effectiveness and efficiency.
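
    The flavor of such link-removal filtering can be sketched as follows (an illustrative rule of our own, not the paper's algorithms: it drops links attached to the most popular objects, on the view that they are the least informative):

```python
import networkx as nx

def backbone(G, user_nodes, keep=0.9):
    """Keep the `keep` fraction of user-object links whose object endpoint
    has the lowest degree; drop the rest (links to very popular objects)."""
    edges = sorted(G.edges(user_nodes), key=lambda e: G.degree(e[1]))
    H = nx.Graph()
    H.add_nodes_from(G.nodes(data=True))
    H.add_edges_from(edges[: int(keep * len(edges))])
    return H

# Tiny user-object bipartite example (u* = users, o* = objects).
G = nx.Graph([("u1", "o1"), ("u1", "o2"), ("u2", "o1"),
              ("u3", "o1"), ("u3", "o3")])
core = backbone(G, ["u1", "u2", "u3"], keep=0.6)
print(core.number_of_edges())  # 3 of the 5 links survive
```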

  13. Extracting the information backbone in online system

    CERN Document Server

    Zhang, Qian-Ming; Shang, Ming-Sheng

    2013-01-01

    Information overload is a serious problem in modern society and many solutions such as recommender systems have been proposed to filter out irrelevant information. In the literature, researchers have mainly been dedicated to improving the recommendation performance (accuracy and diversity) of the algorithms while overlooking the influence of the topology of the online user-object bipartite networks. In this paper, we find that some information provided by the bipartite networks is not only redundant but also misleading. With such "less can be more" feature, we design some algorithms to improve the recommendation performance by eliminating some links from the original networks. Moreover, we propose a hybrid method combining the time-aware and topology-aware link removal algorithms to extract the backbone which contains the essential information for the recommender systems. From the practical point of view, our method can improve the performance and reduce the computational time of the recommendation system, thus improving both of...

  14. Quantitative Metabolomics: Analysis on Active Components in Extracts from Kaki Folium

    Institute of Scientific and Technical Information of China (English)

    DAI Li-peng; GU Yuan; YIN Ren-jie; LIU Chang-xiao; SI Duan-yun

    2012-01-01

    Objective: To analyze the active components in extracts from Kaki Folium (KF), a quantitative metabolomics approach was adopted to investigate the number of active components present in the different extracts and their variation. Methods: An LC-MS method was established for the quantitative determination of the active components, taking the mixture with the reference substance as the tested sample. Results: The number and amount of active components varied among the samples of KF extracted with different types of solvents, but rutin, astragalin, and kaempferol were present in all samples. Differences were found between the samples extracted from products on the market and from the raw materials of KF processed by polar solvents with different recipes; however, the three active components were found in all samples examined. Conclusion: These results could be used for the optimization of the raw-material extraction procedure to enhance productivity.

  15. Quantitative Information Flow as Safety and Liveness Hyperproperties

    Directory of Open Access Journals (Sweden)

    Hirotoshi Yasuoka

    2012-07-01

    We employ Clarkson and Schneider's "hyperproperties" to classify various verification problems of quantitative information flow. The results of this paper unify and extend the previous results on the hardness of checking and inferring quantitative information flow. In particular, we identify a subclass of liveness hyperproperties, which we call "k-observable hyperproperties", that can be checked relative to a reachability oracle via self composition.

  16. Recent developments in quantitative graph theory: information inequalities for networks.

    Directory of Open Access Journals (Sweden)

    Matthias Dehmer

    In this article, we tackle a challenging problem in quantitative graph theory. We establish relations between graph entropy measures representing the structural information content of networks. In particular, we prove formal relations between quantitative network measures based on Shannon's entropy to study the relatedness of those measures. In order to establish such information inequalities for graphs, we focus on graph entropy measures based on information functionals. To prove such relations, we use known graph classes whose instances have been proven useful in various scientific areas. Our results extend the foregoing work on information inequalities for graphs.

  17. Digital image processing for information extraction.

    Science.gov (United States)

    Billingsley, F. C.

    1973-01-01

    The modern digital computer has made practical image processing techniques for handling nonlinear operations in both the geometrical and the intensity domains, various types of nonuniform noise cleanup, and the numerical analysis of pictures. An initial requirement is that a number of anomalies caused by the camera (e.g., geometric distortion, MTF roll-off, vignetting, and nonuniform intensity response) must be taken into account or removed to avoid their interference with the information extraction process. Examples illustrating these operations are discussed along with computer techniques used to emphasize details, perform analyses, classify materials by multivariate analysis, detect temporal differences, and aid in human interpretation of photos.

  18. Quantitative information measurement and application for machine component classification codes

    Institute of Scientific and Technical Information of China (English)

    LI Ling-Feng; TAN Jian-rong; LIU Bo

    2005-01-01

    Information embodied in machine component classification codes is internally related to the probability distribution of the code symbols. This paper presents a model, based on Shannon's information theory, that treats codes as an information source. Using information entropy, it preserves the mathematical form and quantitatively measures the information amount of a symbol and a bit in the machine component classification coding system. It also derives the maximum value of the information amount and the corresponding coding scheme when the category of symbols is fixed. Samples are given to show how to evaluate the information amount of component codes and how to optimize a coding system.
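
    The measurement itself is ordinary Shannon entropy applied position by position; a minimal sketch (the component codes below are hypothetical):

```python
from collections import Counter
from math import log2

def symbol_entropy(codes, position):
    """Entropy (bits) of one digit position of a set of classification codes,
    treating that position as an information source."""
    symbols = [code[position] for code in codes]
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in Counter(symbols).values())

codes = ["A12", "A17", "B12", "C19"]   # hypothetical component codes
print(symbol_entropy(codes, 0))        # 1.5 bits for {A: 2, B: 1, C: 1}
```

    For a position with k admissible symbols the entropy peaks at log2 k when the symbols are used equally often, which is the maximum-information coding scheme the abstract refers to.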

  19. Extraction of information from unstructured text

    Energy Technology Data Exchange (ETDEWEB)

    Irwin, N.H.; DeLand, S.M.; Crowder, S.V.

    1995-11-01

    Extracting information from unstructured text has become an emphasis in recent years due to the large amount of text now electronically available. This status report describes the findings and work done by the end of the first year of a two-year LDRD. Requirements of the approach included that it model the information in a domain-independent way. This means that it would differ from current systems by not relying on previously built domain knowledge and that it would do more than keyword identification. Three areas that are discussed and expected to contribute to a solution include (1) identifying key entities through document-level profiling and preprocessing, (2) identifying relationships between entities through sentence-level syntax, and (3) combining the first two with semantic knowledge about the terms.

  20. Extracting the information backbone in online system.

    Directory of Open Access Journals (Sweden)

    Qian-Ming Zhang

    Information overload is a serious problem in modern society and many solutions such as recommender system have been proposed to filter out irrelevant information. In the literature, researchers have been mainly dedicated to improving the recommendation performance (accuracy and diversity) of the algorithms while they have overlooked the influence of the topology of the online user-object bipartite networks. In this paper, we find that some information provided by the bipartite networks is not only redundant but also misleading. With such "less can be more" feature, we design some algorithms to improve the recommendation performance by eliminating some links from the original networks. Moreover, we propose a hybrid method combining the time-aware and topology-aware link removal algorithms to extract the backbone which contains the essential information for the recommender systems. From the practical point of view, our method can improve the performance and reduce the computational time of the recommendation system, thus improving both their effectiveness and efficiency.

  1. Extracting the Information Backbone in Online System

    Science.gov (United States)

    Zhang, Qian-Ming; Zeng, An; Shang, Ming-Sheng

    2013-01-01

    Information overload is a serious problem in modern society and many solutions such as recommender system have been proposed to filter out irrelevant information. In the literature, researchers have been mainly dedicated to improving the recommendation performance (accuracy and diversity) of the algorithms while they have overlooked the influence of the topology of the online user-object bipartite networks. In this paper, we find that some information provided by the bipartite networks is not only redundant but also misleading. With such “less can be more” feature, we design some algorithms to improve the recommendation performance by eliminating some links from the original networks. Moreover, we propose a hybrid method combining the time-aware and topology-aware link removal algorithms to extract the backbone which contains the essential information for the recommender systems. From the practical point of view, our method can improve the performance and reduce the computational time of the recommendation system, thus improving both their effectiveness and efficiency. PMID:23690946

  2. Extraction of quantitative surface characteristics from AIRSAR data for Death Valley, California

    Science.gov (United States)

    Kierein-Young, K. S.; Kruse, F. A.

    1992-01-01

    Polarimetric Airborne Synthetic Aperture Radar (AIRSAR) data were collected for the Geologic Remote Sensing Field Experiment (GRSFE) over Death Valley, California, USA, in Sep. 1989. AIRSAR is a four-look, quad-polarization, three-frequency instrument. It collects measurements at C-band (5.66 cm), L-band (23.98 cm), and P-band (68.13 cm), and has a GIFOV of 10 meters and a swath width of 12 kilometers. Because the radar measures at three wavelengths, different scales of surface roughness are measured. Also, dielectric constants can be calculated from the data. The AIRSAR data were calibrated using in-scene trihedral corner reflectors to remove cross-talk and to calibrate the phase, amplitude, and co-channel gain imbalance. The calibration allows for the extraction of accurate values of rms surface roughness, dielectric constants, sigma(sub 0) backscatter, and polarization information. The radar data sets allow quantitative characterization of the small-scale surface structure of geologic units, providing information about the physical and chemical processes that control the surface morphology. Combining the quantitative information extracted from the radar data with other remotely sensed data sets allows discrimination, identification and mapping of geologic units that may be difficult to discern using conventional techniques.

  3. Extraction of quantifiable information from complex systems

    CERN Document Server

    Dahmen, Wolfgang; Griebel, Michael; Hackbusch, Wolfgang; Ritter, Klaus; Schneider, Reinhold; Schwab, Christoph; Yserentant, Harry

    2014-01-01

    In April 2007, the Deutsche Forschungsgemeinschaft (DFG) approved the Priority Program 1324 “Mathematical Methods for Extracting Quantifiable Information from Complex Systems.” This volume presents a comprehensive overview of the most important results obtained over the course of the program. Mathematical models of complex systems provide the foundation for further technological developments in science, engineering and computational finance. Motivated by the trend toward steadily increasing computer power, ever more realistic models have been developed in recent years. These models have also become increasingly complex, and their numerical treatment poses serious challenges. Recent developments in mathematics suggest that, in the long run, much more powerful numerical solution strategies could be derived if the interconnections between the different fields of research were systematically exploited at a conceptual level. Accordingly, a deeper understanding of the mathematical foundations as w...

  4. From information theory to quantitative description of steric effects.

    Science.gov (United States)

    Alipour, Mojtaba; Safari, Zahra

    2016-07-21

    Immense efforts have been made in the literature to apply the information theory descriptors for investigating the electronic structure theory of various systems. In the present study, the information theoretic quantities, such as Fisher information, Shannon entropy, Onicescu information energy, and Ghosh-Berkowitz-Parr entropy, have been used to present a quantitative description for one of the most widely used concepts in chemistry, namely the steric effects. Taking the experimental steric scales for the different compounds as benchmark sets, there are reasonable linear relationships between the experimental scales of the steric effects and theoretical values of steric energies calculated from information theory functionals. Perusing the results obtained from the information theoretic quantities with the two representations of electron density and shape function, the Shannon entropy has the best performance for the purpose. On the one hand, the usefulness of considering the contributions of functional groups steric energies and geometries, and on the other hand, dissecting the effects of both global and local information measures simultaneously have also been explored. Furthermore, the utility of the information functionals for the description of steric effects in several chemical transformations, such as electrophilic and nucleophilic reactions and host-guest chemistry, has been analyzed. The functionals of information theory correlate remarkably with the stability of systems and experimental scales. Overall, these findings show that the information theoretic quantities can be introduced as quantitative measures of steric effects and provide further evidences of the quality of information theory toward helping theoreticians and experimentalists to interpret different problems in real systems.
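
    For orientation, the Shannon entropy singled out above reads as follows in the two representations compared (these are the standard definitions, with ρ the electron density, N the electron count and σ = ρ/N the shape function):

```latex
\[
  S_\rho = -\int \rho(\mathbf{r})\,\ln \rho(\mathbf{r})\, d\mathbf{r},
  \qquad
  S_\sigma = -\int \sigma(\mathbf{r})\,\ln \sigma(\mathbf{r})\, d\mathbf{r},
  \qquad
  \sigma(\mathbf{r}) = \rho(\mathbf{r})/N .
\]
```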

  5. SEMANTIC INFORMATION EXTRACTION IN UNIVERSITY DOMAIN

    Directory of Open Access Journals (Sweden)

    Swathi

    2012-07-01

    Today’s conventional search engines hardly provide the essential content relevant to the user’s search query. This is because the context and semantics of the request made by the user are not analyzed to the full extent. So here the need for a semantic web search arises. SWS is upcoming in the area of web search, combining Natural Language Processing and Artificial Intelligence. The objective of the work done here is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses an ontology as a knowledge base for the information retrieval process. It is not just a mere keyword search. It is one layer above what Google or any other search engine retrieves by analyzing just the keywords. Here the query is analyzed both syntactically and semantically. The developed system retrieves web results more relevant to the user query through keyword expansion. The results obtained here will be accurate enough to satisfy the request made by the user. The level of accuracy will be enhanced since the query is analyzed semantically. The system will be of great use to developers and researchers who work on the web. The Google results are re-ranked and optimized for providing the relevant links. For ranking, an algorithm has been applied which fetches more apt results for the user query.

  6. The Limitations of Quantitative Social Science for Informing Public Policy

    Science.gov (United States)

    Jerrim, John; de Vries, Robert

    2017-01-01

    Quantitative social science (QSS) has the potential to make an important contribution to public policy. However it also has a number of limitations. The aim of this paper is to explain these limitations to a non-specialist audience and to identify a number of ways in which QSS research could be improved to better inform public policy.

  7. The Limitations of Quantitative Social Science for Informing Public Policy

    Science.gov (United States)

    Jerrim, John; de Vries, Robert

    2017-01-01

    Quantitative social science (QSS) has the potential to make an important contribution to public policy. However it also has a number of limitations. The aim of this paper is to explain these limitations to a non-specialist audience and to identify a number of ways in which QSS research could be improved to better inform public policy.

  8. Respiratory Information Extraction from Electrocardiogram Signals

    KAUST Repository

    Amin, Gamal El Din Fathy

    2010-12-01

    The Electrocardiogram (ECG) is a tool measuring the electrical activity of the heart, and it is extensively used for diagnosis and monitoring of heart diseases. The ECG signal reflects not only the heart activity but also many other physiological processes. The respiratory activity is a prominent process that affects the ECG signal due to the close proximity of the heart and the lungs. In this thesis, several methods for the extraction of respiratory process information from the ECG signal are presented. These methods allow an estimation of the lung volume and the lung pressure from the ECG signal. The potential benefit of this is to eliminate the corresponding sensors used to measure the respiration activity. A reduction of the number of sensors connected to patients will increase patients’ comfort and reduce the costs associated with healthcare. As a further result, the efficiency of diagnosing respirational disorders will increase since the respiration activity can be monitored with a common, widely available method. The developed methods can also improve the detection of respirational disorders that occur while patients are sleeping. Such disorders are commonly diagnosed in sleeping laboratories where the patients are connected to a number of different sensors. Any reduction of these sensors will result in a more natural sleeping environment for the patients and hence a higher sensitivity of the diagnosis.
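
    One standard member of this family of methods is ECG-derived respiration (EDR) from R-peak amplitude modulation; the sketch below illustrates the idea on synthetic data (an illustration of the general technique, not necessarily the thesis's exact algorithms):

```python
import numpy as np
from scipy.signal import find_peaks

def edr_from_ecg(ecg, fs):
    """Respiration modulates R-wave heights, so the envelope of detected
    R-peak amplitudes traces the breathing cycle."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), prominence=0.5)
    return peaks / fs, ecg[peaks]

# Synthetic stand-in: 1 Hz heartbeat spikes, amplitude-modulated by a
# 0.25 Hz (15 breaths/min) respiratory rhythm.
fs = 250
t = np.arange(0, 60, 1 / fs)
beat_idx = np.arange(60) * fs + fs // 2
ecg = np.zeros_like(t)
ecg[beat_idx] = 1 + 0.2 * np.sin(2 * np.pi * 0.25 * t[beat_idx])
times, envelope = edr_from_ecg(ecg, fs)  # envelope oscillates at ~0.25 Hz
```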

  9. Method for Extracting Product Information from TV Commercial

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2011-09-01

    Television (TV) commercials contain important product information that is displayed for only seconds. People who need that information have insufficient time to note it down, or even just to read it. This research work focuses on automatically detecting text and extracting important information from TV commercials, to provide the information in real time and for video indexing. We propose a method for product information extraction from TV commercials using a knowledge-based system with a pattern-matching rule-based method. Implementation and experiments on 50 commercial screenshot images achieved highly accurate results in text extraction and information recognition.

  10. Information Extraction on the Web with Credibility Guarantee

    OpenAIRE

    Nguyen, Thanh Tam

    2015-01-01

    The Web has become the central medium for the valuable information sources that extraction applications rely on. However, such user-generated resources are often plagued by inaccuracies and misinformation due to the inherent openness and uncertainty of the Web. In this work we study the problem of extracting structured information out of Web data with a credibility guarantee. The ultimate goal is not only that as much structured information as possible should be extracted, but also that its credibility should be high. ...

  11. Information Extraction from Large-Multi-Layer Social Networks

    Science.gov (United States)

    2015-08-06

    Social networks often encode community structure using multiple distinct types of links. In this paper we introduce a novel method to extract information from such multi-layer networks, where each type of link forms its own layer. Using the concept…

  12. Visible light scatter as quantitative information source on milk constituents

    DEFF Research Database (Denmark)

    Melentieva, Anastasiya; Kucheryavskiy, Sergey; Bogomolov, Andrey

    2012-01-01

    A. Melenteva (Samara State Technical University, Samara, Russia), S. Kucheryavski (Aalborg University, campus Esbjerg, Denmark), A. Bogomolov (Samara State Technical University). … research area are presented and discussed. Reference: A. Bogomolov, S. Dietrich, B. Boldrini, R.W. Kessler, Food Chemistry (2012), doi:10.1016/j.foodchem.2012.02.077.

  13. A Quantitative Theory of Information and Unsecured Credit

    OpenAIRE

    Kartik Athreya

    2008-01-01

    Over the past three decades four striking features of aggregates in the unsecured credit market have been documented: (1) rising availability of credit along both the intensive and extensive margins, (2) rising debt accumulation, (3) rising bankruptcy rates and discharge in bankruptcy, and (4) rising dispersion in interest rates across households. We provide a quantitative model of unsecured credit to interpret these facts; a novel contribution is that we allow for asymmetric information with...

  14. Sample-based XPath Ranking for Web Information Extraction

    NARCIS (Netherlands)

    Jundt, Oliver; van Keulen, Maurice

    Web information extraction typically relies on a wrapper, i.e., program code or a configuration that specifies how to extract some information from web pages at a specific website. Manually creating and maintaining wrappers is a cumbersome and error-prone task. It may even be prohibitive as some

  15. Extraction and Quantitative HPLC Analysis of Coumarin in Hydroalcoholic Extracts of Mikania glomerata Spreng. ("guaco") Leaves

    Directory of Open Access Journals (Sweden)

    Celeghini Renata M. S.

    2001-01-01

    Methods for the preparation of hydroalcoholic extracts of "guaco" (Mikania glomerata Spreng.) leaves were compared: maceration, maceration under sonication, infusion and supercritical fluid extraction. Evaluation of these methods showed that maceration under sonication gave the best results when considering the ratio of extraction yield to extraction time. A high-performance liquid chromatography (HPLC) procedure for the determination of coumarin in these hydroalcoholic extracts of "guaco" leaves is described. The HPLC method is shown to be sensitive and reproducible.

  16. Quantitative secondary electron imaging for work function extraction at atomic level and layer identification of graphene.

    Science.gov (United States)

    Zhou, Yangbo; Fox, Daniel S; Maguire, Pierce; O'Connell, Robert; Masters, Robert; Rodenburg, Cornelia; Wu, Hanchun; Dapor, Maurizio; Chen, Ying; Zhang, Hongzhou

    2016-02-16

    Two-dimensional (2D) materials usually have a layer-dependent work function, which requires fast and accurate detection for the evaluation of their device performance. A detection technique with high throughput and high spatial resolution has not yet been explored. Using a scanning electron microscope, we have developed and implemented a quantitative analytical technique which allows effective extraction of the work function of graphene. This technique uses the secondary electron contrast and has nanometre-resolved layer information. The measurement of few-layer graphene flakes shows the variation of work function between graphene layers with a precision of less than 10 meV. It is expected that this technique will prove extremely useful for researchers in a broad range of fields due to its revolutionary throughput and accuracy.

  17. Quantitative health research in an emerging information economy.

    Science.gov (United States)

    More, A; Martin, D

    1998-09-01

    This paper is concerned with the changing information environment in the U.K. National Health Service and its implications for the quantitative analysis of health and health care. The traditionally available data series are contrasted with those sources that are being created or enhanced as a result of the post-1991 market-orientation of the health care system. The likely research implications of the commodification of health data are assessed and illustrated with reference to the specific example of the geography of asthma. The paper warns against a future in which large-scale quantitative health research is only possible in relation to projects which may yield direct financial or market benefits to the data providers.

  18. The Agent of extracting Internet Information with Lead Order

    Science.gov (United States)

    Mo, Zan; Huang, Chuliang; Liu, Aijun

    In order to carry out e-commerce better, advanced technologies for accessing business information are urgently needed. An agent is described to deal with the problems of extracting internet information that are caused by the non-standard and inconsistent structure of Chinese websites. The agent comprises three modules, corresponding to the successive stages of the extraction process. An HTTP-tree method and a Lead algorithm are proposed to generate a lead order, with which the required web pages can be retrieved easily. How to structure the extracted natural-language information is also discussed.

  19. A method for the extraction and quantitation of phycoerythrin from algae

    Science.gov (United States)

    Stewart, D. E.

    1982-01-01

    A new technique for the extraction and quantitation of phycoerythrin (PHE) from algal samples is summarized. Results are presented for the analysis of four extracts representing three PHE types from algae, including cryptomonad and cyanophyte types. The method of extraction and an equation for quantitation are given. A graph showing the relationship of concentration and fluorescence units is provided; it may be used with samples fluorescing around 575-580 nm (probably dominated by cryptophytes in estuarine waters) and 560 nm (dominated by cyanophytes characteristic of the open ocean).
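
    The calibration idea amounts to a fluorescence-versus-concentration line that is inverted for unknowns; a hedged sketch (the paper's actual equation and coefficients are not reproduced, and the standards below are hypothetical):

```python
import numpy as np

standards_ug_ml = np.array([0.5, 1.0, 2.0, 4.0])   # hypothetical PHE standards
fluorescence = np.array([12.0, 23.5, 47.0, 95.0])  # hypothetical readings

# Least-squares line through the standards, then invert it for an unknown.
slope, intercept = np.polyfit(standards_ug_ml, fluorescence, 1)
unknown_f = 60.0
print((unknown_f - intercept) / slope)             # estimated ug/mL of PHE
```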

  20. Pattern information extraction from crystal structures

    OpenAIRE

    Okuyan, Erhan

    2005-01-01

    Determining the crystal structure parameters of a material is a quite important issue in crystallography. Knowing the crystal structure parameters helps in understanding the physical behavior of the material. For complex structures, particularly for materials which also contain local symmetry as well as global symmetry, obtaining crystal parameters can be quite hard. This work provides a tool that will extract crystal parameters such as primitive vect...

  1. Ultrasonic Signal Processing Algorithm for Crack Information Extraction on the Keyway of Turbine Rotor Disk

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hong Kyu; Seo, Won Chan; Park, Chan [Pukyong National University, Busan (Korea, Republic of); Lee, Jong O; Son, Young Ho [KIMM, Daejeon (Korea, Republic of)

    2009-10-15

    An ultrasonic signal processing algorithm was developed for extracting information about cracks generated around the keyway of a turbine rotor disk. B-scan images were obtained by using keyway specimens and an ultrasonic scan system with an x-y position controller. The B-scan images were used as input images for two-dimensional signal processing, and the algorithm was constructed with four processing stages: pre-processing, crack candidate region detection, crack region classification and crack information extraction. It is confirmed by experiments that the developed algorithm is effective for the quantitative evaluation of cracks generated around the keyway of a turbine rotor disk.
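
    A schematic of those four stages on a synthetic B-scan (the thresholds, the minimum-area classification rule and the synthetic data are our illustrative assumptions, not the paper's parameters):

```python
import numpy as np
from scipy import ndimage

def extract_crack_info(bscan, noise_sigma=3.0, min_area=20):
    smoothed = ndimage.median_filter(bscan, size=3)       # 1. pre-processing
    mask = smoothed > smoothed.mean() + noise_sigma * smoothed.std()
    labels, n = ndimage.label(mask)                       # 2. candidate regions
    cracks = []
    for i in range(1, n + 1):
        region = np.argwhere(labels == i)
        if len(region) < min_area:                        # 3. classification:
            continue                                      #    reject small echoes
        cracks.append({"area_px": len(region),            # 4. crack information
                       "depth_px": int(np.ptp(region[:, 0]))})
    return cracks

rng = np.random.default_rng(1)
bscan = rng.normal(0, 1, (200, 200))
bscan[60:140, 100:104] += 8.0        # synthetic crack echo in the B-scan
print(extract_crack_info(bscan))     # one surviving region spanning the crack
```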

  2. Real-Time Information Extraction from Big Data

    Science.gov (United States)

    2015-10-01

    Institute for Defense Analyses. Real-Time Information Extraction from Big Data. Jagdeep Shah, Robert M. Rolfe, Francisco L. Loaiza-Lemos. October 7, 2015. Abstract: We are drowning under the 3 Vs (volume, velocity and variety) of big data. Real-time information extraction from big...

  3. An architecture for biological information extraction and representation.

    Science.gov (United States)

    Vailaya, Aditya; Bluvas, Peter; Kincaid, Robert; Kuchinsky, Allan; Creech, Michael; Adler, Annette

    2005-02-15

    Technological advances in biomedical research are generating a plethora of heterogeneous data at a high rate. There is a critical need for extraction, integration and management tools for information discovery and synthesis from these heterogeneous data. In this paper, we present a general architecture, called ALFA, for information extraction and representation from diverse biological data. The ALFA architecture consists of: (i) a networked, hierarchical, hyper-graph object model for representing information from heterogeneous data sources in a standardized, structured format; and (ii) a suite of integrated, interactive software tools for information extraction and representation from diverse biological data sources. As part of our research efforts to explore this space, we have currently prototyped the ALFA object model and a set of interactive software tools for searching, filtering, and extracting information from scientific text. In particular, we describe BioFerret, a meta-search tool for searching and filtering relevant information from the web, and ALFA Text Viewer, an interactive tool for user-guided extraction, disambiguation, and representation of information from scientific text. We further demonstrate the potential of our tools in integrating the extracted information with experimental data and diagrammatic biological models via the common underlying ALFA representation. aditya_vailaya@agilent.com.

  4. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    Science.gov (United States)

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing subject-specific information depends on this extraction. On the basis of WorldView-2 high-resolution data, this study examined the optimal segmentation parameter method for object-oriented image segmentation and high-resolution image information extraction, as follows. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and the combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that supports expert judgment with reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme.
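
    One plausible reading of a weighted mean-variance criterion is an area-weighted average of per-segment variances, computed at each candidate scale. The sketch below substitutes a simple grid partition for a real segmentation and an elbow heuristic for the selection rule, so it is illustrative only:

      import numpy as np

      def weighted_mean_variance(image, block):
          """Area-weighted mean of per-segment variances for a grid 'segmentation'."""
          h, w = image.shape
          weighted, areas = [], []
          for i in range(0, h, block):
              for j in range(0, w, block):
                  seg = image[i:i + block, j:j + block]
                  weighted.append(seg.var() * seg.size)
                  areas.append(seg.size)
          return sum(weighted) / sum(areas)

      def optimal_scale(image, scales=(4, 8, 16, 32, 64)):
          """Pick the scale where the variance curve changes fastest (elbow heuristic)."""
          v = np.array([weighted_mean_variance(image, s) for s in scales])
          roc = np.abs(np.diff(v))  # rate of change between adjacent scales
          return scales[int(np.argmax(roc)) + 1]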

  5. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    Directory of Open Access Journals (Sweden)

    Hongchun Zhu

    Full Text Available Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing subject-specific information depends on this extraction. On the basis of WorldView-2 high-resolution data, this study examined the optimal segmentation parameter method for object-oriented image segmentation and high-resolution image information extraction, as follows. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and the combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that supports expert judgment with reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme.

  6. Information Extraction from Unstructured Text for the Biodefense Knowledge Center

    Energy Technology Data Exchange (ETDEWEB)

    Samatova, N F; Park, B; Krishnamurthy, R; Munavalli, R; Symons, C; Buttler, D J; Cottom, T; Critchlow, T J; Slezak, T

    2005-04-29

    The Bio-Encyclopedia at the Biodefense Knowledge Center (BKC) is being constructed to allow early detection of emerging biological threats to homeland security. It requires highly structured information extracted from a variety of data sources. However, the quantity of new and vital information available from everyday sources cannot be assimilated by hand, so reliable high-throughput information extraction techniques are urgently needed. In support of the BKC, Lawrence Livermore National Laboratory and Oak Ridge National Laboratory, together with the University of Utah, are developing an information extraction system built around the bioterrorism domain. This paper reports two important pieces of our effort integrated in the system: key phrase extraction and semantic tagging. Whereas the two key phrase extraction technologies developed during the course of the project help identify relevant texts, our state-of-the-art semantic tagging system can pinpoint phrases related to emerging biological threats. We are also enhancing and tailoring the Bio-Encyclopedia by augmenting semantic dictionaries and extracting details of important events, such as suspected disease outbreaks. Some of these technologies have already been applied to large corpora of free-text sources vital to the BKC mission, including ProMED-mail, PubMed abstracts, and the DHS's Information Analysis and Infrastructure Protection (IAIP) news clippings. In order to address the challenges involved in incorporating such large amounts of unstructured text, the overall system is focused on precise extraction of the most relevant information for inclusion in the BKC.

  7. Source-specific Informative Prior for i-Vector Extraction

    DEFF Research Database (Denmark)

    Shepstone, Sven Ewan; Lee, Kong Aik; Li, Haizhou

    2015-01-01

    The prior on the i-vector is typically assumed to be non-informative, since for homogeneous datasets there is no gain in generality in using an informative prior. This work shows that extracting i-vectors for a heterogeneous dataset, containing speech samples recorded from multiple sources, using informative priors instead is applicable, and leads to favorable results...

  8. Determining the best extractant and extraction conditions for fulvic acid through qualitative and quantitative analysis of vermicompost

    Directory of Open Access Journals (Sweden)

    Omid Shabnani Moghadam

    2015-06-01

    Full Text Available Fulvic acid is a natural buffer and an appropriate chelating agent that has high ion-exchange ability and increases the absorption of minerals in plants. This paper examined the effects of extractants (sodium hydroxide, sodium polyphosphate, urea and EDTA) and extraction times (1, 7 and 9 days) on the physicochemical properties of vermicompost-derived fulvic acid, and finally compared the various methods against the universal method for humic substances. Qualitative and quantitative analyses, including detection of functional groups (FTIR), measurement of functional groups, spectrophotometric ratios and humification indices, were carried out on the fulvic acid. Results showed that sodium hydroxide extracted the highest amount of fulvic acid and urea the lowest, while the various extractants made negligible changes to the type and quality of the fulvic acid functional groups. Urea-extracted fulvic acid had the highest content of total acidity and phenolic OH functional groups, whereas the most carboxyl functional groups and the highest spectrophotometric ratios were found with the EDTA solution. In the end, the comparison showed that the universal method, despite its low extraction yield, produced fulvic acid with more functional groups and higher efficiency than the other methods.

  9. Overcoming Information Aesthetics: In Defense of a Non-Quantitative Informational Understanding of Artworks

    Directory of Open Access Journals (Sweden)

    Rodrigo Hernández-Ramírez

    2016-11-01

    Full Text Available Attempts to describe aesthetic artefacts through informational models have existed at least since the late 1950s, but they have not been as successful as their proponents expected, nor are they popular among art scholars because of their (mostly) quantitative nature. However, given how deeply information technology has shifted every aspect of our world, it is fair to ask whether aesthetic value remains immune to informational interpretations. This paper discusses the ideas of the late Russian biophysicist Mikhail Volkenstein concerning art and aesthetic value. It contrasts them with Max Bense's 'information aesthetics' and with contemporary philosophical understandings of information. Overall, this paper shows that an informational but not necessarily quantitative approach serves not only as an effective means to describe our interaction with artworks, but also helps explain why purely quantitative models struggle to formalise aesthetic value. Finally, it makes the case that adopting an informational outlook helps overcome the 'analogue vs digital' dichotomy by arguing that the distinction is epistemological rather than ontological, and therefore the two notions need not be incompatible.

  10. Addressing Information Proliferation: Applications of Information Extraction and Text Mining

    Science.gov (United States)

    Li, Jingjing

    2013-01-01

    The advent of the Internet and the ever-increasing capacity of storage media have made it easy to store, deliver, and share enormous volumes of data, leading to a proliferation of information on the Web, in online libraries, on news wires, and almost everywhere in our daily lives. Since our ability to process and absorb this information remains…


  12. Mars Target Encyclopedia: Information Extraction for Planetary Science

    Science.gov (United States)

    Wagstaff, K. L.; Francis, R.; Gowda, T.; Lu, Y.; Riloff, E.; Singh, K.

    2017-06-01

    Mars surface targets / and published compositions / Seek and ye will find. We used text mining methods to extract information from LPSC abstracts about the composition of Mars surface targets. Users can search by element, mineral, or target.

  13. Can we replace curation with information extraction software?

    Science.gov (United States)

    Karp, Peter D

    2016-01-01

    Can we use programs for automated or semi-automated information extraction from scientific texts as practical alternatives to professional curation? I show that the error rates of current information extraction programs are too high to replace professional curation today. Furthermore, current IE programs extract single narrow slivers of information, such as individual protein interactions; they cannot extract the large breadth of information extracted by professional curators for databases such as EcoCyc. Nor can they arbitrate among conflicting statements in the literature as curators can. Therefore, funding agencies should not hobble the curation efforts of existing databases on the assumption that a problem that has stymied Artificial Intelligence researchers for more than 60 years will be solved tomorrow. Semi-automated extraction techniques appear to have significantly more potential, based on a review of recent tools that enhance curator productivity, but a full cost-benefit analysis for these tools is lacking. Without such analysis it is possible to expend significant effort developing information-extraction tools that automate small parts of the overall curation workflow without achieving a significant decrease in curation costs.

  14. Moving Target Information Extraction Based on Single Satellite Image

    Directory of Open Access Journals (Sweden)

    ZHAO Shihu

    2015-03-01

    Full Text Available The spatial and time variant effects in high-resolution satellite push-broom imaging are analyzed, and a spatial and time variant imaging model is established. A moving target information extraction method is then proposed based on a single satellite remote sensing image. The experiment computes the flying speeds of two airplanes using a ZY-3 multispectral image, confirming the validity of the spatial and time variant model and of the moving target information extraction method.
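
    The underlying calculation is simple: a moving object is imaged at slightly different times by the different bands of a push-broom sensor, so its apparent displacement between band images encodes its ground speed. The values below are illustrative placeholders, not measurements from the paper:

      # Apparent displacement of an airplane between two band images of a
      # push-broom sensor (hypothetical values for illustration).
      displacement_px = 4.2
      ground_sample_distance_m = 5.8   # metres per pixel (assumed)
      band_time_lag_s = 0.3            # inter-band imaging delay (assumed)

      speed_m_per_s = displacement_px * ground_sample_distance_m / band_time_lag_s
      print(f"estimated ground speed: {speed_m_per_s:.1f} m/s")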

  15. Qualitative and quantitative analysis of steroidal saponins in crude extract and bark powder of Yucca schidigera Roezl.

    Science.gov (United States)

    Kowalczyk, Mariusz; Pecio, Łukasz; Stochmal, Anna; Oleszek, Wiesław

    2011-08-10

    Steroidal saponins in commercial stem syrup and in bark extract of Yucca schidigera were identified with high-performance liquid chromatography ion trap mass spectrometry and quantitated using ultraperformance liquid chromatography with quadrupole mass spectrometric detection. Fragmentation patterns of yucca saponins were generated using collision-induced dissociation and compared with the fragmentation of authentic standards as well as with published spectrometric information. In addition to the detection of twelve saponins known to occur in Y. schidigera, the collected fragmentation data led to tentative identifications of seven new saponins. A quantitation method for all 19 detected compounds was developed and validated. Samples derived from the syrup and the bark of yucca were quantitatively measured and compared. The results indicate that yucca bark accumulates polar, bidesmosidic saponins, while in the stem steroidal glycosides with middle- and short-length saccharide chains predominate. The newly developed method provides an opportunity to evaluate the composition of yucca products available on the market.

  16. Pattern information extraction from crystal structures

    Science.gov (United States)

    Okuyan, Erhan; Güdükbay, Uğur; Gülseren, Oğuz

    2007-04-01

    Determining the crystal structure parameters of a material is an important issue in crystallography and material science. Knowing the crystal structure parameters helps in understanding the physical behavior of a material. It can be difficult to obtain crystal parameters for complex structures, particularly those materials that show local symmetry as well as global symmetry. This work provides a tool that extracts crystal parameters such as primitive vectors, basis vectors and space groups from the atomic coordinates of crystal structures. A visualization tool for examining crystals is also provided. Accordingly, this work could help crystallographers, chemists and material scientists to analyze crystal structures efficiently. Program summary: Title of program: BilKristal. Catalogue identifier: ADYU_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYU_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: None. Programming language used: C, C++, Microsoft .NET Framework 1.1 and OpenGL Libraries. Computer: Personal computers with Windows operating system. Operating system: Windows XP Professional. RAM: 20-60 MB. No. of lines in distributed program, including test data, etc.: 899 779. No. of bytes in distributed program, including test data, etc.: 9 271 521. Distribution format: tar.gz. External routines/libraries: Microsoft .NET Framework 1.1; for the visualization tool, the graphics card driver should also support OpenGL. Nature of problem: Determining crystal structure parameters of a material is an important issue in crystallography. Knowing the crystal structure parameters helps to understand the physical behavior of a material. For complex structures, particularly for materials which also contain local symmetry as well as global symmetry, obtaining crystal parameters can be quite hard. Solution method: The tool extracts crystal parameters such as primitive vectors and basis vectors, and identifies the space group from...

  17. Integrating Information Extraction Agents into a Tourism Recommender System

    Science.gov (United States)

    Esparcia, Sergio; Sánchez-Anguix, Víctor; Argente, Estefanía; García-Fornes, Ana; Julián, Vicente

    Recommender systems face some problems. On the one hand, information needs to be kept up to date, which can be a costly task if it is not performed automatically. On the other hand, it may be worthwhile to include third-party services in the recommendations, since they improve recommendation quality. In this paper, we present an add-on for the Social-Net Tourism Recommender System that uses information extraction and natural language processing techniques to automatically extract and classify information from the Web. Its goal is to keep the system updated and to obtain information about third-party services that are not offered by service providers inside the system.

  18. Mining knowledge from text repositories using information extraction: A review

    Indian Academy of Sciences (India)

    Sandeep R Sirsat; Dr Vinay Chavan; Dr Shrinivas P Deshpande

    2014-02-01

    There are two approaches to mining text from online repositories. First, when the knowledge to be discovered is expressed directly in the documents to be mined, Information Extraction (IE) alone can serve as an effective tool for such text mining. Second, when the documents contain concrete data in unstructured form rather than abstract knowledge, Information Extraction (IE) can be used to first transform the unstructured data in the document corpus into a structured database, after which state-of-the-art data mining algorithms/tools can identify abstract patterns in this extracted data. This paper presents a review of several methods related to these two approaches.

  19. Comparative and quantitative analysis of antioxidant and scavenging potential of Indigofera tinctoria Linn. extracts

    Institute of Scientific and Technical Information of China (English)

    Rashmi Singh; Shatruhan Sharma; Veena Sharma

    2015-01-01

    OBJECTIVE: To compare and elucidate the antioxidant efficacy of ethanolic and hydroethanolic extracts of Indigofera tinctoria Linn. (Fabaceae family). METHODS: Various in-vitro antioxidant assays and free radical-scavenging assays were performed. Quantitative measurements of various phytoconstituents, reductive abilities and chelating potential were carried out along with standard compounds. Half-maximal inhibitory concentration (IC50) values for the ethanol and hydroethanol extracts were analyzed and compared with the respective standards. RESULTS: Hydroethanolic extracts showed considerably more potent antioxidant activity than ethanol extracts. Hydroethanolic extracts had lower IC50 values than ethanol extracts for DPPH, metal chelation and hydroxyl radical-scavenging capacity (829, 659 and 26.7 μg/mL) but slightly higher values than ethanol for superoxide- and nitric oxide-scavenging activity (P < 0.001 vs standard). Quantitative measurements also showed that the phenolic and flavonoid bioactive phytoconstituents were significantly (P < 0.001) more abundant in hydroethanol extracts (212.920 and 149.770 mg GAE and rutin/g of plant extract, respectively) than in ethanol extracts (211.691 and 132.603 mg GAE and rutin/g of plant extract, respectively). Karl Pearson's correlation analysis (r2) between various antioxidant parameters and bioactive components also associated the antioxidant potential of I. tinctoria with various phytoconstituents, especially phenolics, flavonoids, saponins and tannins. CONCLUSION: This study may help draw the attention of researchers towards the hydroethanol extracts of I. tinctoria, which have a high yield and great prospects in herbal industries for producing inexpensive and powerful herbal products.

  20. Information extraction and transmission techniques for spaceborne synthetic aperture radar images

    Science.gov (United States)

    Frost, V. S.; Yurovsky, L.; Watson, E.; Townsend, K.; Gardner, S.; Boberg, D.; Watson, J.; Minden, G. J.; Shanmugan, K. S.

    1984-01-01

    Information extraction and transmission techniques for synthetic aperture radar (SAR) imagery were investigated. Four interrelated problems were addressed. An optimal tonal SAR image classification algorithm was developed and evaluated. A data compression technique was developed for SAR imagery that is simple and provides 5:1 compression with acceptable image quality. An optimal textural edge detector was developed. Several SAR image enhancement algorithms were proposed, and the effectiveness of each algorithm was compared quantitatively.

  1. Improving information extraction using a probability-based approach

    DEFF Research Database (Denmark)

    Kim, S.; Ahmed, Saeema; Wallace, K.

    2007-01-01

    Valuable product knowledge can be lost when experienced engineers leave or retire. It is becoming essential to retrieve vital information from archived product documents, if it is available. There is, therefore, great interest in ways of extracting relevant and sharable information from documents. A keyword-based search is commonly used, but studies have shown...

  2. The study of the extraction of 3-D informations

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Min Ki [Korea Univ., Seoul (Korea); Kim, Jin Hun; Kim, Hui Yung; Lee, Gi Sik; Lee, Yung Shin [Sokyung Univ., Seoul (Korea)

    1998-04-01

    To extract three-dimensional information from the 3-D real world, two methods are applied: a stereo image method and a virtual reality environment method. 1. Stereo image method: matching methods are applied to pairs of stereo images to find the corresponding points in the two images, and various methods are applied to solve this correspondence problem. 2. Virtual reality environment method: as an alternative way to extract 3-D information, a virtual reality environment is used; it is very useful for finding the 6 DOF of given target points in 3-D space. We considered the accuracy and reliability of the 3-D information. 34 figs., 4 tabs. (Author)

  3. Extracting quantitative three-dimensional unsteady flow direction from tuft flow visualizations

    Science.gov (United States)

    Omata, Noriyasu; Shirayama, Susumu

    2017-10-01

    We focus on the qualitative but widely used method of tuft flow visualization, and propose a method for quantifying it using information technology. By applying stereo image processing and computer vision, the three-dimensional (3D) flow direction in a real environment can be obtained quantitatively. In addition, we show that the flow can be divided temporally by performing appropriate machine learning on the data. Acquisition of flow information in real environments is important for design development, but it is generally considered difficult to apply simulations or quantitative experiments to such environments. Hence, qualitative methods including the tuft method are still in use today. Although attempts have been made previously to quantify such methods, it has not been possible to acquire 3D information. Furthermore, even if quantitative data could be acquired, analysis was often performed empirically or qualitatively. In contrast, we show that our method can acquire 3D information and analyze the measured data quantitatively.

  4. QUANTITATIVE ELISA OF POLYCHLORINATED BIPHENYLS IN AN OILY SOIL MATRIX USING SUPERCRITICAL FLUID EXTRACTION

    Science.gov (United States)

    Soil samples from the GenCorp Lawrence Brownfields site were analyzed with a commercial semi-quantitative enzyme-linked immunosorbent assay (ELISA) using a methanol shake extraction. Many of the soil samples were extremely oily, with total petroleum hydrocarbon levels up to 240...

  5. Extraction of Coupling Information From $Z' \\to jj$

    OpenAIRE

    Rizzo, T. G.

    1993-01-01

    An analysis by the ATLAS Collaboration has recently shown, contrary to popular belief, that a combination of strategic cuts, excellent mass resolution, and detailed knowledge of the QCD backgrounds from direct measurements can be used to extract a signal in the $Z' \\to jj$ channel in excess of $6\\sigma$ for certain classes of extended electroweak models. We explore the possibility that the data extracted from $Z$ dijet peak will have sufficient statistical power as to supply information on th...

  6. PDF text classification to leverage information extraction from publication reports.

    Science.gov (United States)

    Bui, Duy Duc An; Del Fiol, Guilherme; Jonnalagadda, Siddhartha

    2016-06-01

    Data extraction from original study reports is a time-consuming, error-prone process in systematic review development. Information extraction (IE) systems have the potential to assist humans in the extraction task; however, the majority of IE systems were not designed to work on Portable Document Format (PDF) documents, an important and common extraction source for systematic reviews. In a PDF document, narrative content is often mixed with publication metadata or semi-structured text, which adds challenges for the underlying natural language processing algorithm. Our goal is to categorize PDF texts for strategic use by IE systems. We used an open-source tool to extract raw texts from a PDF document and developed a text classification algorithm that follows a multi-pass sieve framework to automatically classify PDF text snippets (for brevity, texts) into TITLE, ABSTRACT, BODYTEXT, SEMISTRUCTURE, and METADATA categories. To validate the algorithm, we developed a gold standard of PDF reports that were included in the development of previous systematic reviews by the Cochrane Collaboration. In a two-step procedure, we evaluated (1) classification performance, comparing it with a machine learning classifier, and (2) the effects of the algorithm on an IE system that extracts clinical outcome mentions. The multi-pass sieve algorithm achieved an accuracy of 92.6%, which was 9.7% higher than that of the machine learning classifier on PDF documents. Text classification is an important prerequisite step to leverage information extraction from PDF documents. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Extracting quantitative measures from EAP: a small clinical study using BFOR.

    Science.gov (United States)

    Hosseinbor, A Pasha; Chung, Moo K; Wu, Yu-Chien; Fleming, John O; Field, Aaron S; Alexander, Andrew L

    2012-01-01

    The ensemble average propagator (EAP) describes the 3D average diffusion process of water molecules, capturing both its radial and angular contents, and hence providing rich information about complex tissue microstructure properties. Bessel Fourier orientation reconstruction (BFOR) is one of several analytical, non-Cartesian EAP reconstruction schemes employing multiple shell acquisitions that have recently been proposed. Such modeling bases have not yet been fully exploited in the extraction of rotationally invariant q-space indices that describe the degree of diffusion anisotropy/restrictivity. Such quantitative measures include the zero-displacement probability (P(o)), mean squared displacement (MSD), q-space inverse variance (QIV), and generalized fractional anisotropy (GFA), and all are simply scalar features of the EAP. In this study, a general relationship between MSD and q-space diffusion signal is derived and an EAP-based definition of GFA is introduced. A significant part of the paper is dedicated to utilizing BFOR in a clinical dataset, comprised of 5 multiple sclerosis (MS) patients and 4 healthy controls, to estimate P(o), MSD, QIV, and GFA of corpus callosum, and specifically, to see if such indices can detect changes between normal appearing white matter (NAWM) and healthy white matter (WM). Although the sample size is small, this study is a proof of concept that can be extended to larger sample sizes in the future.
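
    Two of these indices have compact definitions once the EAP is sampled: MSD is the r²-weighted integral of the propagator, and GFA is the standard deviation of the sampled function normalized by its root mean square. A schematic sketch on a radially sampled, isotropically averaged EAP (illustrative only, not the BFOR implementation itself):

      import numpy as np

      def gfa(samples):
          """Generalized fractional anisotropy: std / rms of the sampled function."""
          s = np.asarray(samples, dtype=float)
          return s.std() / np.sqrt((s ** 2).mean())

      def msd(radii, eap_radial):
          """Mean squared displacement: integral of P(r) * r^2 over 3-D space,
          assuming an isotropically averaged radial profile P(r)."""
          r = np.asarray(radii, dtype=float)
          p = np.asarray(eap_radial, dtype=float)
          return np.trapz(p * r ** 2 * 4 * np.pi * r ** 2, r)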

  8. Ontology-Based Information Extraction for Business Intelligence

    Science.gov (United States)

    Saggion, Horacio; Funk, Adam; Maynard, Diana; Bontcheva, Kalina

    Business Intelligence (BI) requires the acquisition and aggregation of key pieces of knowledge from multiple sources in order to provide valuable information to customers or feed statistical BI models and tools. The massive amount of information available to business analysts makes information extraction and other natural language processing tools key enablers for the acquisition and use of that semantic information. We describe the application of ontology-based extraction and merging in the context of a practical e-business application for the EU MUSING Project where the goal is to gather international company intelligence and country/region information. The results of our experiments so far are very promising and we are now in the process of building a complete end-to-end solution.

  9. Extracting an entanglement signature from only classical mutual information

    Energy Technology Data Exchange (ETDEWEB)

    Starling, David J.; Howell, John C. [Department of Physics and Astronomy, University of Rochester, Rochester, New York 14627 (United States); Broadbent, Curtis J. [Department of Physics and Astronomy, University of Rochester, Rochester, New York 14627 (United States); Rochester Theory Center, University of Rochester, Rochester, New York 14627 (United States)

    2011-09-15

    We introduce a quantity which is formed using classical notions of mutual information and which is computed using the results of projective measurements. This quantity constitutes a sufficient condition for entanglement and represents the amount of information that can be extracted from a bipartite system for spacelike separated observers. In addition to discussion, we provide simulations as well as experimental results for the singlet and maximally correlated mixed states.
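
    The classical mutual information at the heart of such a quantity is computed directly from the joint probabilities of the two observers' measurement outcomes. A minimal sketch, with an illustrative joint distribution standing in for real measurement data:

      import numpy as np

      def mutual_information(joint):
          """I(A;B) in bits from a joint probability table p(a, b)."""
          p = np.asarray(joint, dtype=float)
          p = p / p.sum()
          pa = p.sum(axis=1, keepdims=True)   # marginal of observer A
          pb = p.sum(axis=0, keepdims=True)   # marginal of observer B
          nz = p > 0
          return float((p[nz] * np.log2(p[nz] / (pa @ pb)[nz])).sum())

      # Perfectly anti-correlated outcomes, as for singlet-state measurements
      # along a common axis (illustrative):
      print(mutual_information([[0.0, 0.5], [0.5, 0.0]]))  # -> 1.0 bit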

  10. Extracting clinical information to support medical decision based on standards.

    Science.gov (United States)

    Gomoi, Valentin; Vida, Mihaela; Stoicu-Tivadar, Lăcrămioara; Stoicu-Tivadar, Vasile

    2011-01-01

    The paper presents a method for connecting medical databases to a medical decision system, and describes a service created to extract the necessary information, which is transferred based on standards. Medical decisions can be improved by drawing on inputs from many different medical locations. The developed solution is described for a concrete case concerning the management of chronic pelvic pain, based on information retrieved from diverse healthcare databases.

  11. Extracting an entanglement signature from only classical mutual information

    Science.gov (United States)

    Starling, David J.; Broadbent, Curtis J.; Howell, John C.

    2011-09-01

    We introduce a quantity which is formed using classical notions of mutual information and which is computed using the results of projective measurements. This quantity constitutes a sufficient condition for entanglement and represents the amount of information that can be extracted from a bipartite system for spacelike separated observers. In addition to discussion, we provide simulations as well as experimental results for the singlet and maximally correlated mixed states.

  12. THE METHODS OF EXTRACTING WATER INFORMATION FROM SPOT IMAGE

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Some techniques and methods for deriving water information from SPOT-4 (XI) imagery were investigated and discussed in this paper. An algorithm of decision-tree (DT) classification, which includes several classifiers based on the spectral response characteristics of water bodies and other objects, was developed and put forward to delineate water bodies. Another decision-tree classification algorithm based on both spectral characteristics and auxiliary information from DEM and slope (DTDS) was also designed for water body extraction. In addition, the supervised classification method of maximum-likelihood classification (MLC) and the unsupervised method of the iterative self-organizing data analysis technique (ISODATA) were used to extract water bodies for comparison purposes. An index was designed and used to assess the accuracy of the different methods adopted in the research. Results have shown that water extraction accuracy varied with respect to the various techniques applied: it was low using ISODATA, very high using the DT algorithm, and much higher using both DTDS and MLC.
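
    A decision-tree classifier of this kind is essentially a cascade of spectral and ancillary-data tests. The sketch below uses a hypothetical NDWI-style band ratio and a DEM slope test; the thresholds and band choices are placeholders, not the rules from the paper:

      def classify_water(green, nir, slope_deg):
          """Toy decision tree: a spectral test first, then a DEM-derived slope test."""
          ndwi = (green - nir) / (green + nir + 1e-9)  # water reflects green, absorbs NIR
          if ndwi > 0.1:              # spectral evidence of water (placeholder threshold)
              if slope_deg < 5.0:     # water bodies are flat; reject steep shadowed slopes
                  return "water"
          return "non-water"

      print(classify_water(green=0.30, nir=0.05, slope_deg=1.2))  # -> water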

  13. Quantitation of genistein and genistin in soy dry extracts by UV-Visible spectrophotometric method

    Directory of Open Access Journals (Sweden)

    Isabela da Costa César

    2008-01-01

    Full Text Available This paper describes the development and validation of a UV-Visible spectrophotometric method for the quantitation of genistein and genistin in soy dry extracts, after reaction with aluminum chloride. The method proved to be linear (r² = 0.9999), precise (R.S.D. < 2%), accurate (recovery of 101.56%) and robust. Seven samples of soy dry extracts were analyzed by the validated spectrophotometric method and by RP-HPLC. Genistein concentrations determined by spectrophotometry (0.63% - 16.05%) were slightly higher than the values obtained by HPLC analysis (0.40% - 12.79%); however, the results of both methods showed a strong correlation.

  14. Tumor information extraction in radiology reports for hepatocellular carcinoma patients

    Science.gov (United States)

    Yim, Wen-wai; Denman, Tyler; Kwan, Sharon W.; Yetisgen, Meliha

    2016-01-01

    Hepatocellular carcinoma (HCC) is a deadly disease affecting the liver for which there are many available therapies. Targeting treatments towards specific patient groups necessitates defining patients by stage of disease. Criteria for such stagings include information on tumor number, size, and anatomic location, typically found only in narrative clinical text in the electronic medical record (EMR). Natural language processing (NLP) offers an automatic and scalable means to extract this information, which can further evidence-based research. In this paper, we created a corpus of 101 radiology reports annotated for tumor information. Afterwards we applied machine learning algorithms to extract tumor information. Our inter-annotator partial match agreement scored 0.93 and 0.90 F1 for entities and relations, respectively. Based on the annotated corpus, our sequential labeling entity extraction achieved 0.87 F1 partial match, and our maximum entropy classification relation extraction achieved scores of 0.89 and 0.74 F1 with gold and system entities, respectively. PMID:27570686
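
    Partial-match scoring of the kind reported here credits a predicted entity that overlaps a gold entity, rather than requiring exact boundaries. A minimal scoring sketch under that assumption:

      def overlaps(a, b):
          """True if two (start, end) character spans intersect."""
          return a[0] < b[1] and b[0] < a[1]

      def partial_match_f1(gold, predicted):
          tp = sum(any(overlaps(p, g) for g in gold) for p in predicted)
          precision = tp / len(predicted) if predicted else 0.0
          recall = sum(any(overlaps(g, p) for p in predicted) for g in gold) / len(gold)
          return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

      gold = [(0, 12), (30, 38)]
      pred = [(2, 10), (40, 45)]
      print(round(partial_match_f1(gold, pred), 2))  # -> 0.5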

  15. Information Extraction and Linking in a Retrieval Context

    NARCIS (Netherlands)

    Moens, M.F.; Hiemstra, Djoerd

    We witness a growing interest and capabilities of automatic content recognition (often referred to as information extraction) in various media sources that identify entities (e.g. persons, locations and products) and their semantic attributes (e.g., opinions expressed towards persons or products,

  16. Spatiotemporal Information Extraction from a Historic Expedition Gazetteer

    Directory of Open Access Journals (Sweden)

    Mafkereseb Kassahun Bekele

    2016-11-01

    Full Text Available Historic expeditions are events that are flavored by exploratory, scientific, military or geographic characteristics. Such events are often documented in literature, journey notes or personal diaries. A typical historic expedition involves multiple site visits, and their descriptions contain spatiotemporal and attributive contexts. Expeditions involve movements in space that can be represented by triplet features (location, time and description). However, such features are implicit and innate parts of textual documents. Extracting the geospatial information from these documents requires understanding the contextualized entities in the text. To this end, we developed a semi-automated framework that has multiple Information Retrieval and Natural Language Processing components to extract the spatiotemporal information from a two-volume historic expedition gazetteer. Our framework has three basic components, namely, the Text Preprocessor, the Gazetteer Processing Machine and the JAPE (Java Annotation Pattern Engine) Transducer. We used the Brazilian Ornithological Gazetteer as an experimental dataset and extracted the spatial and temporal entities from entries that refer to three expeditioners' site visits (which took place between 1910 and 1926), and mapped the trajectory of each expedition using the extracted information. Finally, one of the mapped trajectories was manually compared with a historical reference map of that expedition to assess the reliability of our framework.

  17. Preparatory information for third molar extraction: does preference for information and behavioral involvement matter?

    NARCIS (Netherlands)

    van Wijk, A.J.; Buchanan, H.; Coulson, N.; Hoogstraten, J.

    2010-01-01

    Objective: The objectives of the present study were to: (1) evaluate the impact of high versus low information provision in terms of anxiety towards third molar extraction (TME) as well as satisfaction with information provision; and (2) investigate how preference for information and behavioral involvement...

  18. Microplate assay for quantitation of neutral lipids in extracts from microalgae.

    Science.gov (United States)

    Higgins, Brendan T; Thornton-Dunwoody, Alexander; Labavitch, John M; VanderGheynst, Jean S

    2014-11-15

    Lipid quantitation is widespread in the algae literature, but popular methods such as gravimetry, gas chromatography and mass spectrometry (GC-MS), and Nile red cell staining suffer drawbacks, including poor quantitation of neutral lipids, expensive equipment, and variable results among algae species, respectively. A high-throughput microplate assay was developed that uses Nile red dye to quantify neutral lipids that have been extracted from algae cells. Because the algal extracts contained pigments that quenched Nile red fluorescence, a mild bleach solution was used to destroy pigments, resulting in a nearly linear response for lipid quantities in the range of 0.75 to 40 μg. Corn oil was used as a standard for quantitation, although other vegetable oils displayed a similar response. The assay was tested on lipids extracted from three species of Chlorella and resulted in close agreement with triacylglycerol (TAG) levels determined by thin layer chromatography. The assay was found to more accurately measure algal lipids conducive to biodiesel production and nutrition applications than the widely used gravimetric assay. Assay response was also consistent among different species, in contrast to Nile red cell staining procedures. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Extending a geocoding database by Web information extraction

    Science.gov (United States)

    Wu, Yunchao; Niu, Zheng

    2008-10-01

    Local Search has recently attracted much attention. The popular architecture of Local Search is map-and-hyperlinks, which links geo-referenced Web content to a map interface. This architecture shows that a good Local Search depends not only on search engine techniques, but also on a perfect geocoding database. The process of building and updating a geocoding database is laborious and time consuming, making it difficult to keep up with changes in the real world. However, the Web provides a rich source of location-related information, which can serve as a supplementary information source for geocoding. Therefore, this paper introduces how to extract geographic information from Web documents to extend a geocoding database. Our approach involves two major steps. First, geographic named entities are identified and extracted from Web content. Then, the named entities are geocoded and put into storage. In this way, we can extend a geocoding database to provide better local Web search services.
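
    The two major steps, recognizing geographic named entities in Web text and then geocoding and storing them, can be sketched with a toy gazetteer lookup; the gazetteer, the coordinates and the page text are illustrative:

      # Toy gazetteer mapping place names to (lat, lon); a real system would
      # query a geocoding service or database instead.
      GAZETTEER = {"Beijing": (39.9042, 116.4074), "Wuhan": (30.5928, 114.3055)}

      def extract_and_geocode(page_text, store):
          """Step 1: find candidate place names; step 2: geocode and store them."""
          for name, coords in GAZETTEER.items():
              if name in page_text:
                  store[name] = coords
          return store

      db = {}
      extract_and_geocode("New branch office opening in Wuhan this spring.", db)
      print(db)  # {'Wuhan': (30.5928, 114.3055)}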

  20. Spatial Quantitation of Drugs in tissues using Liquid Extraction Surface Analysis Mass Spectrometry Imaging

    Science.gov (United States)

    Swales, John G.; Strittmatter, Nicole; Tucker, James W.; Clench, Malcolm R.; Webborn, Peter J. H.; Goodwin, Richard J. A.

    2016-11-01

    Liquid extraction surface analysis mass spectrometry imaging (LESA-MSI) has been shown to be an effective tissue profiling and imaging technique, producing robust and reliable qualitative distribution images of an analyte or analytes in tissue sections. Here, we expand the use of LESA-MSI beyond qualitative analysis to a quantitative analytical technique by employing a mimetic tissue model previously shown to be applicable for MALDI-MSI quantitation. Liver homogenate was used to generate a viable and molecularly relevant control matrix for spiked drug standards, which can be frozen, sectioned and subsequently analyzed to generate calibration curves for quantifying unknown tissue section samples. The effects of extraction solvent composition, tissue thickness and solvent/tissue contact time were explored prior to any quantitative studies in order to optimize the LESA-MSI method across several different chemical entities. The use of an internal standard to normalize regional differences in ionization response across tissue sections was also investigated. Data are presented comparing quantitative results generated by LESA-MSI to LC-MS/MS. Subsequent analysis of adjacent tissue sections using DESI-MSI is also reported.


  2. The Study on Information Extraction Technology of Seismic Damage

    Directory of Open Access Journals (Sweden)

    Huang Zuo-wei

    2013-01-01

    Full Text Available To improve information extraction for seismic damage assessment and the publishing of earthquake damage information, a rapid technical workflow for earthquake damage assessment was constructed based on past earthquake experience. Taking the Yushu earthquake as an example, this study examines the framework and establishment of an information service system by means of ArcIMS and distributed database technology, analyzes some key technologies, and builds a web publishing architecture for massive remote sensing images. The system implements the joint application of remote sensing image processing technology, database technology and Web GIS technology, and its results could provide an important basis for earthquake damage assessment, emergency management and rescue missions.

  3. Extraction of Information from Images using Dewrapping Techniques

    Directory of Open Access Journals (Sweden)

    Khalid Nazim S. A.

    2010-11-01

    Full Text Available An image containing textual information is called a document image. The textual information in document images is useful in areas like vehicle number plate reading, passport reading, cargo container reading and so on. Thus extracting useful textual information from document images plays an important role in many applications. One of the major challenges in camera document analysis is dealing with wrap and perspective distortions. In spite of the prevalence of dewrapping techniques, there is no standard efficient algorithm for performance evaluation that concentrates on visualization. Wrapping is a common appearance of document images before recognition. Document images were captured with a 2-megapixel mobile camera, and a database was developed with variations in background, size and colour, comprising wrapped, blurred and clean images. This database is explored and text extraction from its document images is performed. For wrapped images, no efficient dewrapping technique has been implemented to date, so text is extracted from wrapped images by maintaining a suitable template database. Further, the text extracted from wrapped or other document images is converted into an editable form such as a Notepad or MS Word document. The experimental results were corroborated on various objects of the database.

  4. Rapid automatic keyword extraction for information retrieval and analysis

    Science.gov (United States)

    Rose, Stuart J [Richland, WA; Cowley,; E, Wendy [Richland, WA; Crow, Vernon L [Richland, WA; Cramer, Nicholas O [Richland, WA

    2012-03-06

    Methods and systems for rapid automatic keyword extraction for information retrieval and analysis. Embodiments can include parsing words in an individual document by delimiters, stop words, or both in order to identify candidate keywords. Word scores for each word within the candidate keywords are then calculated based on a function of co-occurrence degree, co-occurrence frequency, or both. Based on a function of the word scores for words within the candidate keyword, a keyword score is calculated for each of the candidate keywords. A portion of the candidate keywords are then extracted as keywords based, at least in part, on the candidate keywords having the highest keyword scores.
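
    The scoring scheme described, splitting on delimiters and stop words and then scoring words by co-occurrence degree and frequency, can be sketched compactly. The stop-word list below is abbreviated for illustration:

      import re
      from collections import defaultdict

      STOP = {"the", "of", "and", "a", "in", "for", "is", "on", "to", "are"}

      def rake(text):
          # Split into candidate keywords at punctuation and stop words.
          words = re.split(r"[^a-zA-Z]+", text.lower())
          candidates, current = [], []
          for w in words:
              if not w or w in STOP:
                  if current:
                      candidates.append(current)
                  current = []
              else:
                  current.append(w)
          if current:
              candidates.append(current)
          # Score each word by co-occurrence degree divided by frequency.
          freq, degree = defaultdict(int), defaultdict(int)
          for cand in candidates:
              for w in cand:
                  freq[w] += 1
                  degree[w] += len(cand)  # word co-occurs with every word in the phrase
          # Keyword score = sum of member word scores; highest-scoring first.
          scored = [(" ".join(c), sum(degree[w] / freq[w] for w in c)) for c in candidates]
          return sorted(set(scored), key=lambda kv: -kv[1])

      print(rake("Rapid automatic keyword extraction for information retrieval and analysis")[:3])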

  5. Extracting Semantic Information from Visual Data: A Survey

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2016-03-01

    Full Text Available The traditional environment maps built by mobile robots include both metric ones and topological ones. These maps are navigation-oriented and not adequate for service robots to interact with or serve human users who normally rely on the conceptual knowledge or semantic contents of the environment. Therefore, the construction of semantic maps becomes necessary for building an effective human-robot interface for service robots. This paper reviews recent research and development in the field of visual-based semantic mapping. The main focus is placed on how to extract semantic information from visual data in terms of feature extraction, object/place recognition and semantic representation methods.

  6. Quantitative analysis of perfumes in talcum powder by using headspace sorptive extraction.

    Science.gov (United States)

    Ng, Khim Hui; Heng, Audrey; Osborne, Murray

    2012-03-01

    Quantitative analysis of perfume dosage in talcum powder has been a challenge due to interference of the matrix and has so far not been widely reported. In this study, headspace sorptive extraction (HSSE) was validated as a solventless sample preparation method for the extraction and enrichment of perfume raw materials from talcum powder. Sample enrichment is performed on a thick film of poly(dimethylsiloxane) (PDMS) coated onto a magnetic stir bar incorporated in a glass jacket. Sampling is done by placing the PDMS stir bar in the headspace vial by using a holder. The stir bar is then thermally desorbed online with capillary gas chromatography-mass spectrometry. The HSSE method is based on the same principles as headspace solid-phase microextraction (HS-SPME). Nevertheless, a relatively larger amount of extracting phase is coated on the stir bar as compared to SPME. Sample amount and extraction time were optimized in this study. The method has shown good repeatability (with relative standard deviation no higher than 12.5%) and excellent linearity with correlation coefficients above 0.99 for all analytes. The method was also successfully applied in the quantitative analysis of talcum powder spiked with perfume at different dosages. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Quantitative measurement of cerebral oxygen extraction fraction using MRI in patients with MELAS.

    Directory of Open Access Journals (Sweden)

    Lei Yu

    Full Text Available OBJECTIVE: To quantify the cerebral OEF at different phases of stroke-like episodes in patients with mitochondrial myopathy, encephalopathy, lactic acidosis, and stroke-like episodes (MELAS) by using MRI. METHODS: We recruited 32 patients with MELAS confirmed by gene analysis. Conventional MRI scanning, as well as functional MRI including arterial spin labeling and oxygen extraction fraction imaging, was undertaken to obtain the pathological and metabolic information of the brains at different stages of stroke-like episodes in patients. A total of 16 MRI examinations at the acute and subacute phase and 19 examinations at the interictal phase were performed. In addition, 24 healthy volunteers were recruited as control subjects. Six regions of interest were placed in the anterior, middle, and posterior parts of the bilateral hemispheres to measure the OEF of the brain or the lesions. RESULTS: OEF was reduced significantly in the brains of patients at both the acute and subacute phase (0.266 ± 0.026) and at the interictal phase (0.295 ± 0.009), compared with normal controls (0.316 ± 0.025). In the brains at the acute and subacute phase of the episode, 13 ROIs were prescribed on the stroke-like lesions, which showed decreased OEF compared with the contralateral spared brain regions. Increased blood flow was revealed in the stroke-like lesions at the acute and subacute phase, which was confined to the lesions. CONCLUSION: MRI can quantitatively show changes in OEF at different phases of stroke-like episodes. The utilization of oxygen in the brain seems to be reduced more severely after the onset of episodes in MELAS, especially for those brain tissues involved in the episodes.

  8. Extracellular terpenoid hydrocarbon extraction and quantitation from the green microalgae Botryococcus braunii var. Showa.

    Science.gov (United States)

    Eroglu, Ela; Melis, Anastasios

    2010-04-01

    Mechanical fractionation and aqueous or aqueous/organic two-phase partition approaches were applied for the extraction and separation of extracellular terpenoid hydrocarbons from Botryococcus braunii var. Showa. A direct spectrophotometric method was devised for the quantitation of botryococcene and associated carotenoid hydrocarbons extracted by this method. Separation of extracellular botryococcene hydrocarbons from the Botryococcus was achieved upon vortexing of the micro-colonies with glass beads, either in water, followed by buoyant density equilibrium to separate hydrocarbons from biomass, or in the presence of heptane as a solvent, followed by aqueous/organic two-phase separation of the heptane-solubilized hydrocarbons (upper phase) from the biomass (lower aqueous phase). Spectral analysis of the upper heptane phase revealed the presence of two distinct compounds, one absorbing in the UV-C, attributed to botryococcene(s), the other in the blue region of the spectrum, attributed to a carotenoid. Specific extinction coefficients were determined for the absorbance of triterpenes at 190 nm (epsilon = 90 +/- 5 mM(-1) cm(-1)) and carotenoids at 450 nm (epsilon = 165 +/- 5 mM(-1) cm(-1)) in heptane. This enabled application of a direct spectrophotometric method for the quantitation of water- or heptane-extractable botryococcenes and carotenoids. B. braunii var. Showa constitutively accumulates approximately 30% of its dry biomass as extractable (extracellular) botryococcenes, and approximately 0.2% of its dry biomass in the form of a carotenoid. It was further demonstrated that heat treatment of the Botryococcus biomass substantially accelerates the rate and yield of the extraction process. The advances in this work serve as a foundation for cyclic Botryococcus growth, non-toxic extraction of extracellular hydrocarbons, and return of the hydrocarbon-depleted biomass to growth conditions for further product generation.
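
    With the reported extinction coefficients, the spectrophotometric quantitation reduces to a direct Beer-Lambert calculation. A small sketch assuming a 1 cm path length and hypothetical absorbance readings:

      # Extinction coefficients from the abstract (heptane, per mM per cm).
      EPSILON_BOTRYOCOCCENE_190NM = 90.0
      EPSILON_CAROTENOID_450NM = 165.0
      PATH_CM = 1.0  # assumed cuvette path length

      def concentration_mM(absorbance, epsilon, path_cm=PATH_CM):
          """Beer-Lambert law: A = epsilon * c * l  =>  c = A / (epsilon * l)."""
          return absorbance / (epsilon * path_cm)

      # Hypothetical readings of a heptane extract:
      print(concentration_mM(0.45, EPSILON_BOTRYOCOCCENE_190NM))  # triterpenes, mM
      print(concentration_mM(0.33, EPSILON_CAROTENOID_450NM))     # carotenoid, mM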

  9. Advanced applications of natural language processing for performing information extraction

    CERN Document Server

    Rodrigues, Mário

    2015-01-01

    This book explains how to create information extraction (IE) applications that can tap the vast amount of relevant information available in natural language sources: Internet pages, official documents such as laws and regulations, books and newspapers, and the social web. Readers are introduced to the problem of IE and its current challenges and limitations, supported with examples. The book discusses the need to fill the gap between documents, data, and people, and provides a broad overview of the technology supporting IE. The authors present a generic architecture for developing systems that are able to learn how to extract relevant information from natural language documents, and illustrate how to implement working systems using state-of-the-art and freely available software tools. The book also discusses concrete applications illustrating IE uses. · Provides an overview of state-of-the-art technology in information extraction (IE), discussing achievements and limitations for t...

  10. Robust Vehicle and Traffic Information Extraction for Highway Surveillance

    Directory of Open Access Journals (Sweden)

    C.-C. Jay Kuo

    2005-08-01

    Full Text Available A robust vision-based traffic monitoring system for vehicle and traffic information extraction is developed in this research. It is challenging for a highway surveillance system to maintain detection robustness at all times. There are three major problems in detecting and tracking a vehicle: (1) the moving cast shadow effect, (2) the occlusion effect, and (3) nighttime detection. For moving cast shadow elimination, a 2D joint vehicle-shadow model is employed. For occlusion detection, a multiple-camera system is used to detect occlusion so as to extract the exact location of each vehicle. For vehicle nighttime detection, a rear-view monitoring technique is proposed to maintain tracking and detection accuracy. Furthermore, we propose a method to improve the accuracy of background extraction, which usually serves as the first step in any vehicle detection processing. Experimental results are given to demonstrate that the proposed techniques are effective and efficient for vision-based highway surveillance.
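
    Background extraction, named above as the first step of vehicle detection, is often approximated with a running-average model. The learning rate and threshold below are illustrative assumptions, not the paper's method:

      import numpy as np

      class RunningAverageBackground:
          """Maintain a background estimate and flag foreground (vehicle) pixels."""
          def __init__(self, first_frame, alpha=0.02, threshold=25.0):
              self.bg = first_frame.astype(float)
              self.alpha = alpha          # how quickly the background adapts
              self.threshold = threshold  # intensity difference marking foreground

          def update(self, frame):
              frame = frame.astype(float)
              mask = np.abs(frame - self.bg) > self.threshold  # moving pixels
              # Blend only background pixels into the model so vehicles don't pollute it.
              self.bg[~mask] = (1 - self.alpha) * self.bg[~mask] + self.alpha * frame[~mask]
              return mask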

  11. Solvent effects on quantitative analysis of brominated flame retardants with Soxhlet extraction.

    Science.gov (United States)

    Zhong, Yin; Li, Dan; Zhu, Xifen; Huang, Weilin; Peng, Ping'an

    2017-05-18

    Reliable quantification of brominated flame retardants (BFRs) not only ensures compliance with laws and regulations on the use of BFRs in commercial products, but is also key to accurate risk assessments of BFRs. Acetone is a common solvent widely used in the analytical procedures for BFRs, but our recent study found that acetone can react with some BFRs. It is highly likely that such reactions can negatively affect the quantification of BFRs in environmental samples. In this study, the effects of acetone on the extraction yields of three representative BFRs [i.e., decabrominated diphenyl ether (decaBDE), hexabromocyclododecane (HBCD) and tetrabromobisphenol A (TBBPA)] were evaluated in the Soxhlet extraction (SE) system. The results showed that the acetone-based SE procedure had no measurable effect on the recovery efficiency of decaBDE but could substantially lower the extraction yields of both TBBPA and HBCD. After 24 h of extraction, the recovery efficiencies of TBBPA and HBCD by SE were 93 and 78% with acetone, 47 and 70% with 3:1 acetone:n-hexane, and 82 and 94% with 1:1 acetone:n-hexane, respectively. After 72 h of extraction, the extraction efficiencies of TBBPA and HBCD decreased to 68 and 55% with acetone, 0 and 5% with 3:1 acetone/n-hexane mixtures, and 0 and 13% with 1:1 acetone/n-hexane mixtures, respectively. The study suggests that the use of acetone alone or acetone-based mixtures should be restricted in the quantitative analysis of HBCD and TBBPA. We further evaluated nine alternative solvents for the extraction of the three BFRs. The results showed that diethyl ether might react with HBCD and should not be considered as an alternative to acetone-based solvents for the extraction of HBCD.

  12. Reference Information Extraction and Processing Using Conditional Random Fields

    Directory of Open Access Journals (Sweden)

    Tudor Groza

    2012-06-01

    Full Text Available Fostering both the creation and the linking of data with the scope of supporting the growth of the Linked Data Web requires us to improve the acquisition and extraction mechanisms of the underlying semantic metadata. This is particularly important for the scientific publishing domain, where currently most of the datasets are being created in an author-driven, manual manner. In addition, such datasets capture only fragments of the complete metadata, usually omitting important elements such as the references, although these represent valuable information. In this paper we present an approach that aims at dealing with this aspect of the extraction and processing of reference information. The experimental evaluation shows that, currently, our solution handles diverse types of reference format very well, making it usable for, or adaptable to, any area of scientific publishing.
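
    A conditional random field tags each token of a reference string (author, year, title, and so on) from features of the token and its neighbours. The sketch below shows the kind of feature function such a tagger consumes; the features and the sample reference are illustrative assumptions, not the authors' actual feature set.

```python
# Token features of the kind typically fed to a CRF for reference parsing.
import re

def token_features(tokens: list[str], i: int) -> dict:
    """Feature dict for token i of a tokenized reference string."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_capitalized": tok[:1].isupper(),
        "is_year": bool(re.fullmatch(r"\(?(19|20)\d{2}\)?[.,]?", tok)),
        "is_initial": bool(re.fullmatch(r"[A-Z]\.", tok)),
        "has_digit": any(c.isdigit() for c in tok),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i + 1 < len(tokens) else "<EOS>",
    }

ref = "Groza , T. ( 2012 ) Reference Information Extraction .".split()
features = [token_features(ref, i) for i in range(len(ref))]
print(features[4])  # features of the token "2012"
```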

  13. Information extraction from the GER 63-channel spectrometer data

    Science.gov (United States)

    Kiang, Richard K.

    1993-09-01

    The unprecedented data volume in the era of NASA's Mission to Planet Earth (MTPE) demands innovative information extraction methods and advanced processing techniques. Neural network techniques, which are intrinsically distributed parallel processing methods and have shown promising results in analyzing remotely sensed data, could become essential tools in the MTPE era. To evaluate the information content of data with higher dimension and the usefulness of neural networks in analyzing them, measurements from the GER 63-channel airborne imaging spectrometer over Cuprite, Nevada, are used. The data are classified with three-layer perceptrons of various architectures. It is shown that the neural network can achieve a level of performance similar to conventional methods, without the need for an explicit feature extraction step.
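
    As a concrete illustration, a three-layer perceptron of this kind fits in a few lines of NumPy. The 63 inputs mirror the GER channels, but the synthetic data, hidden-layer size, learning rate, and squared-error loss are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_hidden, n_classes = 63, 20, 5

X = rng.random((200, n_channels))          # toy reflectance spectra
y = rng.integers(0, n_classes, 200)        # toy class labels
Y = np.eye(n_classes)[y]                   # one-hot targets

W1 = rng.normal(0, 0.1, (n_channels, n_hidden))
W2 = rng.normal(0, 0.1, (n_hidden, n_classes))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(500):                       # plain batch gradient descent
    H = sigmoid(X @ W1)                    # hidden activations
    P = sigmoid(H @ W2)                    # output activations
    dP = (P - Y) * P * (1 - P)             # squared-error output gradient
    dH = (dP @ W2.T) * H * (1 - H)         # backpropagated hidden gradient
    W2 -= 0.1 * H.T @ dP / len(X)
    W1 -= 0.1 * X.T @ dH / len(X)

print("training accuracy:", np.mean(P.argmax(axis=1) == y))
```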

  14. Extracting Firm Information from Administrative Records: The ASSD Firm Panel

    OpenAIRE

    Fink, Martina; Segalla, Esther; Weber, Andrea; Zulehner, Christine

    2010-01-01

    This paper demonstrates how firm information can be extracted from administrative social security records. We use the Austrian Social Security Database (ASSD) and derive firms from employer identifiers in the universe of private sector workers. To correctly pin down entries and exits, we use a worker flow approach which follows clusters of workers as they move across administrative entities. This procedure enables us to define different types of entry and exit such as start-ups, spinoffs, closur...

  15. OCR++: A Robust Framework For Information Extraction from Scholarly Articles

    OpenAIRE

    Singh, Mayank; Barua, Barnopriyo; Palod, Priyank; Garg, Manvi; Satapathy, Sidhartha; Bushi, Samuel; Ayush, Kumar; Rohith, Krishna Sai; Gamidi, Tulasi; Goyal, Pawan; Mukherjee, Animesh

    2016-01-01

    This paper proposes OCR++, an open-source framework designed for a variety of information extraction tasks from scholarly articles, including metadata (title, author names, affiliation and e-mail), structure (section headings and body text, table and figure headings, URLs and footnotes) and bibliography (citation instances and references). We analyze a diverse set of scientific articles written in English to understand generic writing patterns and formulate rules to develop this hybri...

  16. A new method for precursory information extraction: Slope-difference information method

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A new method for precursory information extraction, the slope-difference information method, is proposed in this paper for daily-mean-value precursory data sequences. Taking Tangshan station as an example, calculations on full-time-domain leveling data are carried out, and the results are tested and compared with those of several other methods. The results indicate that, after optimization, the method is very effective for extracting short-term precursory information from daily mean values. It is therefore valuable for popularization and application.

  17. Extraction of hidden information by efficient community detection in networks

    CERN Document Server

    Lee, Juyong; Lee, Jooyoung

    2012-01-01

    Currently, we are overwhelmed by a deluge of experimental data, and network physics has the potential to become an invaluable method to increase our understanding of large interacting datasets. However, this potential is often unrealized for two reasons: uncovering the hidden community structure of a network, known as community detection, is difficult, and further, even if one has an idea of this community structure, it is not a priori obvious how to efficiently use this information. Here, to address both of these issues, we, first, identify optimal community structure of given networks in terms of modularity by utilizing a recently introduced community detection method. Second, we develop an approach to use this community information to extract hidden information from a network. When applied to a protein-protein interaction network, the proposed method outperforms current state-of-the-art methods that use only the local information of a network. The method is generally applicable to networks from many areas.
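
    The quantity being optimized here is Newman modularity. As a minimal illustration, assuming NetworkX (the paper uses its own detection method, not this greedy optimizer) and a toy graph standing in for a protein-protein interaction network:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

G = nx.karate_club_graph()                      # toy stand-in network
communities = greedy_modularity_communities(G)  # heuristic partition
print(len(communities), "communities, Q =", round(modularity(G, communities), 3))
```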

  18. A Semantic Approach for Geospatial Information Extraction from Unstructured Documents

    Science.gov (United States)

    Sallaberry, Christian; Gaio, Mauro; Lesbegueries, Julien; Loustau, Pierre

    Local cultural heritage document collections are characterized by their content, which is strongly attached to a territory and its land history (i.e., geographical references). Our contribution aims at making the content retrieval process more efficient whenever a query includes geographic criteria. We propose a core model for a formal representation of geographic information. It takes into account characteristics of different modes of expression, such as written language, captures of drawings, maps, photographs, etc. We have developed a prototype that fully implements geographic information extraction (IE) and geographic information retrieval (IR) processes. All PIV prototype processing resources are designed as Web Services. We propose a geographic IE process based on semantic treatment as a supplement to classical IE approaches. We implement geographic IR by using intersection computing algorithms that seek out any intersection between formal geocoded representations of geographic information in a user query and similar representations in document collection indexes.

  19. Puffed cereals with added chamomile - quantitative analysis of polyphenols and optimization of their extraction method.

    Science.gov (United States)

    Blicharski, Tomasz; Oniszczuk, Anna; Olech, Marta; Oniszczuk, Tomasz; Wójtowicz, Agnieszka; Krawczyk, Wojciech; Nowak, Renata

    2017-05-11

    Introduction. Functional food plays an important role in the prevention, management and treatment of chronic diseases. One of the most interesting techniques of functional food production is extrusion-cooking. Functional foods may include such items as puffed cereals, breads and beverages that are fortified with vitamins, some nutraceuticals and herbs. Due to their pharmacological activity, chamomile flowers are among the most popular components added to functional food. The aim of this study was the quantitative analysis of polyphenolic antioxidants, as well as a comparison of various methods for the extraction of phenolic compounds from corn puffed cereals, puffed cereals with an addition of chamomile (3, 5, 10 and 20%) and from Chamomillae anthodium. Materials and Methods. Two modern extraction methods - ultrasound-assisted extraction (UAE) at 40 °C and 60 °C, as well as accelerated solvent extraction (ASE) at 100 °C and 120 °C - were used for the isolation of polyphenols from functional food. Analysis of flavonoids and phenolic acids was carried out using reversed-phase high-performance liquid chromatography and electrospray ionization mass spectrometry (LC-ESI-MS/MS). Results and Conclusions. For most of the analyzed compounds, the highest yields were obtained by ultrasound-assisted extraction. The highest temperature during the ultrasonification process (60 °C) increased the efficiency of extraction without degradation of polyphenols. UAE reaches extraction equilibrium easily and therefore permits shorter extraction times, reducing the energy input. Furthermore, UAE meets the requirements of 'Green Chemistry'.

  20. Quantitative high-performance liquid chromatographic determination of delta 4-3-ketosteroids in adrenocortical extracts.

    Science.gov (United States)

    Ballerini, R; Chinol, M; Ghelardoni, M

    1980-05-30

    A high-performance liquid chromatographic method is described for the determination of seven steroids in adrenocortical extracts showing a delta 4-3-ketonic conjugated system. The seven steroids (cortisol, cortisone, 11-dehydrocorticosterone, corticosterone, 11-deoxycortisol, aldosterone and 11-deoxycorticosterone) were separated with a chloroform-methanol gradient on a 5-micron silica column and with a water-acetonitrile gradient on a 10-micron RP-8 column. Effluents were monitored by UV absorption at 242 nm. Quantitative analysis was performed by comparing peak areas, which are proportional to the amounts of the individual substances (external standard method). The method is rapid, sensitive, easy to perform and reproducible.

  1. Quantitation of recombinant protein in whole cells and cell extracts via solid-state NMR spectroscopy.

    Science.gov (United States)

    Vogel, Erica P; Weliky, David P

    2013-06-25

    Recombinant proteins (RPs) are commonly expressed in bacteria, followed by solubilization and chromatography. Purified RP yield can be diminished by losses at any step, and very different changes in methods are needed to improve the yield depending on where the losses occur. Time and labor can therefore be saved by first identifying the specific reason for the low yield. This study describes a new solid-state nuclear magnetic resonance approach to RP quantitation in whole cells or cell extracts without solubilization or purification. The method is straightforward and inexpensive and requires only ∼50 mL of culture and a low-field spectrometer.

  2. Comparison of proteases in DNA extraction via quantitative polymerase chain reaction.

    Science.gov (United States)

    Eychner, Alison M; Lebo, Roberta J; Elkins, Kelly M

    2015-06-01

    We compared four proteases in the QIAamp DNA Investigator Kit (Qiagen) to extract DNA for use in multiplex polymerase chain reaction (PCR) assays. The aim was to evaluate alternate proteases for improved DNA recovery as compared with proteinase K for forensic, biochemical research, genetic paternity and immigration, and molecular diagnostic purposes. The Quantifiler Kit TaqMan quantitative PCR assay was used to measure the recovery of DNA from human blood, semen, buccal cells, breastmilk, and earwax in addition to low-template samples, including diluted samples, computer keyboard swabs, chewing gum, and cigarette butts. All methods yielded amplifiable DNA from all samples.

  3. THE METHODS OF EXTRACTING WATER INFORMATION FROM SPOT IMAGE

    Institute of Scientific and Technical Information of China (English)

    DU Jin-kang; FENG Xue-zhi; et al.

    2002-01-01

    Some techniques and methods for deriving water information from SPOT-4 (XI) imagery were investigated and discussed in this paper. An algorithm of decision-tree (DT) classification, which includes several classifiers based on the spectral response characteristics of water bodies and other objects, was developed and put forward to delineate water bodies. Another decision-tree classification algorithm, based on both spectral characteristics and auxiliary information from DEM and slope (DTDS), was also designed for water body extraction. In addition, the supervised classification method of maximum-likelihood classification (MLC) and the unsupervised method of the interactive self-organizing data analysis technique (ISODATA) were used to extract water bodies for comparison purposes. An index was designed and used to assess the accuracy of the different methods adopted in the research. Results have shown that water extraction accuracy varied with the technique applied: it was low using ISODATA, very high using the DT algorithm, and higher still using both DTDS and MLC.
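
    One plausible shape for such a decision-tree classifier is a cascade of spectral thresholds, with an optional slope test for the DTDS variant. The band roles below match SPOT-4 (XI) (B1 green, B3 NIR, B4 SWIR), but the index and thresholds are hypothetical, not those of the paper.

```python
import numpy as np

def classify_water(b1_green, b3_nir, b4_swir, dem_slope=None, max_slope=5.0):
    """Boolean water mask from per-pixel reflectance arrays (0-1)."""
    ndwi = (b1_green - b3_nir) / (b1_green + b3_nir + 1e-9)
    water = (ndwi > 0.0) & (b4_swir < 0.05)   # water absorbs NIR and SWIR
    if dem_slope is not None:                 # DTDS variant: flat terrain only
        water &= dem_slope < max_slope
    return water

b1 = np.array([0.12, 0.30])
b3 = np.array([0.04, 0.40])
b4 = np.array([0.02, 0.25])
print(classify_water(b1, b3, b4))             # -> [ True False]
```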

  4. Extraction of spatial information for low-bandwidth telerehabilitation applications

    Directory of Open Access Journals (Sweden)

    Kok Kiong Tan, PhD

    2014-09-01

    Full Text Available Telemedicine applications, based on two-dimensional (2D) video conferencing technology, have been around for the past 15 to 20 years. They have been demonstrated to be acceptable for face-to-face consultations and useful for visual examination of wounds and abrasions. However, certain telerehabilitation assessments need the use of spatial information in order to accurately assess the patient's condition, and sending three-dimensional video data over low-bandwidth networks is extremely challenging. This article proposes an innovative way of extracting the key spatial information from the patient's movement during telerehabilitation assessment based on 2D video, and then presenting the extracted data using graph plots alongside the video to help physicians in assessments, with minimum burden on existing video data transfer. Some common rehabilitation scenarios are chosen for illustration, and experiments are conducted based on skeletal tracking and color detection algorithms using the Microsoft Kinect sensor. Extracted data are analyzed in detail and their usability discussed.

  5. Transliteration normalization for Information Extraction and Machine Translation

    Directory of Open Access Journals (Sweden)

    Yuval Marton

    2014-12-01

    Full Text Available Foreign name transliterations typically include multiple spelling variants. These variants cause data sparseness and inconsistency problems, increase the Out-of-Vocabulary (OOV rate, and present challenges for Machine Translation, Information Extraction and other natural language processing (NLP tasks. This work aims to identify and cluster name spelling variants using a Statistical Machine Translation method: word alignment. The variants are identified by being aligned to the same “pivot” name in another language (the source-language in Machine Translation settings. Based on word-to-word translation and transliteration probabilities, as well as the string edit distance metric, names with similar spellings in the target language are clustered and then normalized to a canonical form. With this approach, tens of thousands of high-precision name transliteration spelling variants are extracted from sentence-aligned bilingual corpora in Arabic and English (in both languages. When these normalized name spelling variants are applied to Information Extraction tasks, improvements over strong baseline systems are observed. When applied to Machine Translation tasks, a large improvement potential is shown.
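
    The clustering step can be pictured as grouping target-language spellings by the pivot name they align to, then normalizing each cluster to a canonical form and discarding outliers by edit distance. A minimal sketch with toy data; the canonical-form rule and distance cutoff are assumptions for illustration.

```python
from collections import defaultdict

def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

# (target-language variant, pivot name) pairs as produced by word alignment
aligned = [("Qaddafi", "القذافي"), ("Gaddafi", "القذافي"), ("Kadafi", "القذافي")]
clusters = defaultdict(list)
for variant, pivot in aligned:
    clusters[pivot].append(variant)

for pivot, variants in clusters.items():
    canonical = min(variants)                 # arbitrary canonical-form rule
    kept = [v for v in variants if edit_distance(v, canonical) <= 3]
    print(pivot, "->", canonical, kept)
```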

  6. [Study on Information Extraction of Clinic Expert Information from Hospital Portals].

    Science.gov (United States)

    Zhang, Yuanpeng; Dong, Jiancheng; Qian, Danmin; Geng, Xingyun; Wu, Huiqun; Wang, Li

    2015-12-01

    Clinic expert information provides important references for residents in need of hospital care. Usually, such information is hidden in the deep web and cannot be directly indexed by search engines. To extract clinic expert information from the deep web, the first challenge is to make a judgment on forms. This paper proposes a novel method based on a domain model, which is a tree structure constructed by the attributes of search interfaces. With this model, search interfaces can be classified to a domain and filled in with domain keywords. Another challenge is to extract information from the returned web pages indexed by search interfaces. To filter the noise information on a web page, a block importance model is proposed. The experiment results indicated that the domain model yielded a precision 10.83% higher than that of the rule-based method, whereas the block importance model yielded an F₁ measure 10.5% higher than that of the XPath method.

  7. Evaluation of a fiber optic immunosensor for quantitating cocaine in coca leaf extracts.

    Science.gov (United States)

    Toppozada, A R; Wright, J; Eldefrawi, A T; Eldefrawi, M E; Johnson, E L; Emche, S D; Helling, C S

    1997-01-01

    A fiber optic evanescent fluoroimmunosensor was used to rapidly detect and quantitate coca alkaloids as cocaine equivalents in leaf extracts of five Erythroxylum species. A monoclonal antibody (mAb) made against benzoylecgonine (BE), a major metabolite of cocaine, was immobilized covalently on quartz fibers and used as the biological sensing element in the portable fluorometer. Benzoylecgonine-fluorescein (BE-FL) was used as the optical signal generator when it bound to the fiber. If present, cocaine competed for the mAb and interfered with the binding of BE-FL, thereby reducing the fluorescence transmitted by the fiber. Calibration curves were prepared by measuring (over 30 s) the rates of fluorescence increase in the absence or presence of cocaine. Ethanol or acid extracts of dry coca leaves were assayed by this fiber optic biosensor, gas chromatography, and a fluorescence polarization immunoassay. Biosensor values of the cocaine content of leaves from five Erythroxylum species were not significantly different from gas chromatography values, but had higher variance. The biosensor assay was rapid and did not require cleanup of the crude leaf extracts. Cocaine in acid extracts was reduced significantly after 4 weeks at 23 degrees C and after 3 weeks at 37 degrees C. Fibers (mAb-coated), stored at 37 degrees C in phosphate-buffered solution (0.02% NaN3), gave stable responses for 14 days.

  8. Extraction of Coupling Information From $Z' \\to jj$

    CERN Document Server

    Rizzo, T G

    1993-01-01

    An analysis by the ATLAS Collaboration has recently shown, contrary to popular belief, that a combination of strategic cuts, excellent mass resolution, and detailed knowledge of the QCD backgrounds from direct measurements can be used to extract a signal in the $Z' \\to jj$ channel in excess of $6\\sigma$ for certain classes of extended electroweak models. We explore the possibility that the data extracted from the $Z'$ dijet peak will have sufficient statistical power to supply information on the couplings of the $Z'$, provided it is used in conjunction with complementary results from the $Z' \\to \\ell^+ \\ell^-$ `discovery' channel. We show, for a 1 TeV $Z'$ produced at the SSC, that this technique can provide a powerful new tool with which to identify the origin of $Z'$'s.

  9. Extraction of coupling information from Z'-->jj

    Science.gov (United States)

    Rizzo, Thomas G.

    1993-11-01

    An analysis by the ATLAS Collaboration has recently shown, contrary to popular belief, that a combination of strategic cuts, excellent mass resolution, and detailed knowledge of the QCD backgrounds from direct measurements can be used to extract a signal in the Z'-->jj channel for certain classes of extended electroweak models. We explore the possibility that the data extracted from the Z' dijet peak will have sufficient statistical power to supply information on the couplings of the Z', provided it is used in conjunction with complementary results from the Z'-->l+l- ``discovery'' channel. We show, for a 1 TeV Z' produced at the SSC, that this technique can provide a powerful new tool with which to identify the origin of Z'. Extensions of this analysis to the CERN LHC as well as for a more massive Z' are discussed.

  10. Extracting Backbones from Weighted Complex Networks with Incomplete Information

    Directory of Open Access Journals (Sweden)

    Liqiang Qian

    2015-01-01

    Full Text Available The backbone is the natural abstraction of a complex network, which can help people understand a networked system in a more simplified form. Traditional backbone extraction methods tend to include many outliers in the backbone. What is more, they often suffer from computational inefficiency: the exhaustive search of all nodes or edges is often prohibitively expensive. In this paper, we propose a backbone extraction heuristic with incomplete information (BEHwII) to find the backbone in a complex weighted network. First, a strict filtering rule is carefully designed to determine which edges are to be preserved or discarded. Second, we present a local search model that examines part of the edges in an iterative way, relying only on local/incomplete knowledge rather than the global view of the network. Experimental results on four real-life networks demonstrate the advantage of BEHwII over the classic disparity filter method in terms of either effectiveness or efficiency.
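
    For reference, the classic disparity filter that BEHwII is compared against keeps an edge when its normalized weight is statistically significant, at level alpha, for at least one endpoint. A compact sketch, assuming NetworkX and an undirected weighted graph:

```python
import networkx as nx

def disparity_backbone(G: nx.Graph, alpha: float = 0.05) -> nx.Graph:
    """Serrano et al. disparity filter: keep edge if (1 - p)^(k-1) < alpha."""
    backbone = nx.Graph()
    for i, j, w in G.edges(data="weight"):
        for node in (i, j):
            k = G.degree(node)
            if k <= 1:                            # single-edge nodes: keep
                backbone.add_edge(i, j, weight=w)
                break
            s = G.degree(node, weight="weight")   # node strength
            if (1 - w / s) ** (k - 1) < alpha:    # significance test
                backbone.add_edge(i, j, weight=w)
                break
    return backbone

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 10.0), ("a", "c", 0.1), ("b", "c", 5.0)])
print(disparity_backbone(G, alpha=0.3).edges(data=True))
```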

  11. Identification and quantitation of sorbitol-based nuclear clarifying agents extracted from common laboratory and consumer plasticware made of polypropylene.

    Science.gov (United States)

    McDonald, Jeffrey G; Cummins, Carolyn L; Barkley, Robert M; Thompson, Bonne M; Lincoln, Holly A

    2008-07-15

    Reported here is the mass spectral identification of sorbitol-based nuclear clarifying agents (NCAs) and the quantitative description of their extractability from common laboratory and household plasticware made of polypropylene. NCAs are frequently added to polypropylene to improve optical clarity, increase performance properties, and aid in the manufacturing process of this plastic. NCA addition makes polypropylene plasticware more aesthetically pleasing to the user and makes the product competitive with other plastic formulations. We show here that several NCAs are readily extracted with either ethanol or water from plastic labware during typical laboratory procedures. Observed levels ranged from a nanogram to micrograms of NCA. NCAs were also detected in extracts from plastic food storage containers; levels ranged from 1 to 10 microg in two of the three brands tested. The electron ionization mass spectra for three sorbitol-based nuclear clarifying agents (1,3:2,4-bis-O-(benzylidene)sorbitol, 1,3:2,4-bis-O-(p-methylbenzylidene)sorbitol, 1,3:2,4-bis-O-(3,4-dimethylbenzylidene)sorbitol) are presented for the native and trimethylsilyl-derivatized compounds together with the collision-induced dissociation mass spectra; gas and liquid chromatographic data are also reported. These NCAs now join other well-known plasticizers such as phthalate esters and bisphenol A as common laboratory contaminants. While the potential toxicity of NCAs in mammalian systems is unknown, the current data provide scientists and consumers the opportunity to make more informed decisions regarding the use of polypropylene plastics.

  12. Knowledge discovery: Extracting usable information from large amounts of data

    Energy Technology Data Exchange (ETDEWEB)

    Whiteson, R.

    1998-12-31

    The threat of nuclear weapons proliferation is a problem of worldwide concern. Safeguards are the key to nuclear nonproliferation, and data is the key to safeguards. The safeguards community has access to a huge and steadily growing volume of data. The advantages of this data-rich environment are obvious: there is a great deal of information which can be utilized. The challenge is to effectively apply proven and developing technologies to find and extract usable information from that data. That information must then be assessed and evaluated to produce the knowledge needed for crucial decision making. Efficient and effective analysis of safeguards data will depend on utilizing technologies to interpret the large, heterogeneous data sets that are available from diverse sources. With an order-of-magnitude increase in the amount of data from a wide variety of technical, textual, and historical sources, there is a vital need to apply advanced computer technologies to support all-source analysis. There are techniques of data warehousing, data mining, and data analysis that can provide analysts with tools that will expedite the extraction of usable information from the huge amounts of data to which they have access. Computerized tools can aid analysts by integrating heterogeneous data, evaluating diverse data streams, automating retrieval of database information, prioritizing inputs, reconciling conflicting data, doing preliminary interpretations, discovering patterns or trends in data, and automating some of the simpler prescreening tasks that are time-consuming and tedious. Thus knowledge discovery technologies can provide a foundation of support for the analyst. Rather than spending time sifting through often irrelevant information, analysts could use their specialized skills in a focused, productive fashion. This would allow them to make their analytical judgments with more confidence and spend more of their time doing what they do best.

  13. INFORMAL HOUSING IN GREECE: A QUANTITATIVE SPATIAL ANALYSIS

    OpenAIRE

    Serafeim POLYZOS; MINETOS, Dionysios

    2009-01-01

    During the last 50 years in Greece, growing demand for urban (residential and industrial) space has resulted in unplanned residential development and informal dwelling construction at the expense of agricultural and forest land uses. Despite the fact that the post-war challenge faced by the state in providing minimal housing for its citizens has been met, the informal settlements phenomenon still proceeds. This situation tends to become an acute problem with serious economic, social and envir...

  14. Extraction of fish body oil from Sardinella longiceps by employing direct steaming method and its quantitative and qualitative assessment

    Directory of Open Access Journals (Sweden)

    Moorthy Pravinkumar

    2015-12-01

    Full Text Available Objective: To analyze the quantitative and qualitative properties of the extracted fish oil from Sardinella longiceps (S. longiceps). Methods: Four size groups of S. longiceps were examined for the extraction of fish oil based on length. The size groups included Group I (size range of 7.1–10.0 cm), Group II (size range of 10.1–13.0 cm), Group III (size range of 13.1–16.0 cm) and Group IV (size range of 16.1–19.0 cm). Fish oil was extracted from the tissues of S. longiceps by the direct steaming method. The oil was then subjected to the determination of specific gravity, refractive index, moisture content, free fatty acids, iodine value, peroxide value, saponification value and observation of colour. Results: The four groups showed different yields of fish oil: Group IV recorded the highest values of (165.00 ± 1.00) mL/kg, followed by Group III [(145.66 ± 1.15) mL/kg] and Group II [(129.33 ± 0.58) mL/kg], whereas Group I recorded the lowest values of (78.33 ± 0.58) mL/kg in the monsoon season, and the average yield was (180.0 ± 4.9) mL/kg fish tissues. These analytical values of the crude oil were well within the acceptable standard values for both fresh and stocked samples. Conclusions: The information generated in the present study pertaining to the quantitative and qualitative analysis of fish oil will serve as a reference baseline for entrepreneurs and industrialists in future for the successful commercial production of fish oil by employing oil sardines.

  15. Extraction of fish body oil from Sardinella longiceps by employing direct steaming method and its quantitative and qualitative assessment

    Institute of Scientific and Technical Information of China (English)

    Moorthy Pravinkumar; Lawrence Xavier Eugien; Chinnathambi Viswanathan; Sirajudeen Mohammad Raffi

    2015-01-01

    Objective: To analyze the quantitative and qualitative properties of the extracted fish oil from Sardinella longiceps (S. longiceps). Methods: Four size groups of S. longiceps were examined for the extraction of fish oil based on length. The size groups included Group I (size range of 7.1–10.0 cm), Group II (size range of 10.1–13.0 cm), Group III (size range of 13.1–16.0 cm) and Group IV (size range of 16.1–19.0 cm). Fish oil was extracted from the tissues of S. longiceps by the direct steaming method. The oil was then subjected to the determination of specific gravity, refractive index, moisture content, free fatty acids, iodine value, peroxide value, saponification value and observation of colour. Results: The four groups showed different yields of fish oil: Group IV recorded the highest values of (165.00 ± 1.00) mL/kg, followed by Group III [(145.66 ± 1.15) mL/kg] and Group II [(129.33 ± 0.58) mL/kg], whereas Group I recorded the lowest values of (78.33 ± 0.58) mL/kg in the monsoon season, and the average yield was (180.0 ± 4.9) mL/kg fish tissues. These analytical values of the crude oil were well within the acceptable standard values for both fresh and stocked samples. Conclusions: The information generated in the present study pertaining to the quantitative and qualitative analysis of fish oil will serve as a reference baseline for entrepreneurs and industrialists in future for the successful commercial production of fish oil by employing oil sardines.

  16. Audio enabled information extraction system for cricket and hockey domains

    CERN Document Server

    Saraswathi, S; B., Sai Vamsi Krishna; S, Suresh Reddy

    2010-01-01

    The proposed system aims at retrieving summarized information from documents collected via a web-based search engine, as per the user query related to the cricket and hockey domains. The system is designed to take voice commands as keywords for the search. The parts of speech in the query are extracted using the natural language extractor for English. Based on the keywords, the search is categorized into two types: (1) concept-wise search, in which the information relevant to the query is retrieved based on the keywords and the concept words related to them, and the retrieved information is summarized using the probabilistic approach and the weighted-means algorithm; (2) keyword search, which extracts the result relevant to the query from the highly ranked documents retrieved by the search engine. The relevant search results are retrieved, and the keywords are then used for the summarizing part. During summarization it follows the weighted and probabilistic approaches in order to identify the data comparable to the k...

  17. Epistemology and Information Seeking Behaviour: Outcome of a Quantitative Research

    Directory of Open Access Journals (Sweden)

    Mahmood Khowsrojerdi

    2009-01-01

    Full Text Available The present study set out to examine the state of information seeking among 158 graduate students at the University of Tehran and its correlation with the students' epistemological beliefs. A survey employing two sets of questionnaires was carried out: one set was devised by the author, while the other was Schommer's 63-item epistemological beliefs questionnaire. Findings indicated a significant positive correlation at the 0.05 level between knowledge organization and students' information relevancy judgments. Furthermore, there was a significant positive correlation at the 0.001 level between knowledge management and time allocation. Learning speed and internalization of past knowledge, however, displayed a negative correlation, significant at the 0.01 level. No significant correlation was observed between the remaining aspects of epistemological beliefs and other aspects of information-seeking behavior.

  18. Using XBRL Technology to Extract Competitive Information from Financial Statements

    Directory of Open Access Journals (Sweden)

    Dominik Ditter

    2011-12-01

    Full Text Available The eXtensible Business Reporting Language, or XBRL, is a reporting format for the automatic and electronic exchange of business and financial data. In XBRL every single reported fact is marked with a unique tag, enabling a full computer-based readout of financial data. It has the potential to improve the collection and analysis of financial data for Competitive Intelligence (e.g., the profiling of publicly available financial statements. The article describes how easily information from XBRL reports can be extracted.
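
    Because every XBRL fact carries a unique tag, a basic extraction is just a walk over the instance document's XML. A minimal sketch with the standard library (Python 3.9+); the instance snippet and concept names are fabricated for illustration.

```python
import xml.etree.ElementTree as ET

INSTANCE = """<xbrl xmlns:us-gaap="http://fasb.org/us-gaap/2023">
  <us-gaap:Revenues contextRef="FY2023" unitRef="USD">1200000</us-gaap:Revenues>
  <us-gaap:NetIncomeLoss contextRef="FY2023" unitRef="USD">150000</us-gaap:NetIncomeLoss>
</xbrl>"""

root = ET.fromstring(INSTANCE)
ns = "{http://fasb.org/us-gaap/2023}"
facts = {el.tag.removeprefix(ns): float(el.text)   # strip namespace prefix
         for el in root if el.tag.startswith(ns)}
print(facts)  # {'Revenues': 1200000.0, 'NetIncomeLoss': 150000.0}
```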

  19. A High Accuracy Method for Semi-supervised Information Extraction

    Energy Technology Data Exchange (ETDEWEB)

    Tratz, Stephen C.; Sanfilippo, Antonio P.

    2007-04-22

    Customization to specific domains of discourse and/or user requirements is one of the greatest challenges for today’s Information Extraction (IE) systems. While demonstrably effective, both rule-based and supervised machine learning approaches to IE customization pose too high a burden on the user. Semi-supervised learning approaches may in principle offer a more resource-effective solution but are still insufficiently accurate to grant realistic application. We demonstrate that this limitation can be overcome by integrating fully-supervised learning techniques within a semi-supervised IE approach, without increasing resource requirements.

  1. On the relation between Differential Privacy and Quantitative Information Flow

    CERN Document Server

    Alvim, Mário S; Chatzikokolakis, Konstantinos; Palamidessi, Catuscia; 10.1007/978-3-642-22012-8_4

    2011-01-01

    Differential privacy is a notion that has emerged in the community of statistical databases, as a response to the problem of protecting the privacy of the database's participants when performing statistical queries. The idea is that a randomized query satisfies differential privacy if the likelihood of obtaining a certain answer for a database $x$ is not too different from the likelihood of obtaining the same answer on adjacent databases, i.e. databases which differ from $x$ for only one individual. Information flow is an area of Security concerned with the problem of controlling the leakage of confidential information in programs and protocols. Nowadays, one of the most established approaches to quantify and to reason about leakage is based on the R\\'enyi min entropy version of information theory. In this paper, we analyze critically the notion of differential privacy in light of the conceptual framework provided by the R\\'enyi min information theory. We show that there is a close relation between differenti...
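
    In the min-entropy framework the paper builds on, leakage is the log-ratio of posterior to prior vulnerability (the adversary's best single-guess probability). A worked toy computation; the prior and channel matrix are made-up values.

```python
import numpy as np

prior = np.array([0.5, 0.25, 0.25])     # pi(x) over three secrets
channel = np.array([[0.8, 0.2],         # C[x, y] = p(y | x)
                    [0.5, 0.5],
                    [0.1, 0.9]])

v_prior = prior.max()                                  # prior vulnerability
v_post = (prior[:, None] * channel).max(axis=0).sum()  # posterior vulnerability
print(f"min-entropy leakage = {np.log2(v_post / v_prior):.3f} bits")
```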

  2. Quantitation of drugs via molecularly imprinted polymer solid phase extraction and electrospray ionization mass spectrometry: benzodiazepines in human plasma

    OpenAIRE

    2011-01-01

    The association of solid phase extraction with molecularly imprinted polymers (MIP) and electrospray ionization mass spectrometry (ESI-MS) is applied to the direct extraction and quantitation of benzodiazepines in human plasma. The target analytes are sequestered by MIP and directly analyzed by ESI-MS. Due to the MIP highly selective extraction, ionic suppression during ESI is minimized; hence no separation is necessary prior to ESI-MS, which greatly increases analytical speed. Benzodiazepine...

  3. Student Use of Quantitative and Qualitative Information on RateMyProfessors.com for Course Selection

    Science.gov (United States)

    Hayes, Matthew W.; Prus, Joseph

    2014-01-01

    The present study examined whether students used qualitative information, quantitative information, or both when making course selection decisions. Participants reviewed information on four hypothetical courses in an advising context before indicating their likelihood to enroll in those courses and ranking them according to preference. Modeled…

  4. Quantitative analysis of extracted phycobilin pigments in cyanobacteria - an assessment of spectrophotometric and spectrofluorometric methods.

    Science.gov (United States)

    Sobiechowska-Sasim, Monika; Stoń-Egiert, Joanna; Kosakowska, Alicja

    2014-01-01

    Phycobilins are an important group of pigments that through complementary chromatic adaptation optimize the light-harvesting process in phytoplankton cells, exhibiting great potential as cyanobacteria species biomarkers. In their extracted form, concentrations of these water-soluble molecules are not easily determined using the chromatographic methods well suited to solvent-soluble pigments. Insights regarding the quantitative spectroscopic analysis of extracted phycobilins also remain limited. Here, we present an in-depth study of two methods that utilize the spectral properties of phycobilins in aqueous extracts. The technical work was carried out using high-purity standards of phycocyanin, phycoerythrin, and allophycocyanin. Calibration parameters for the spectrofluorometer and spectrophotometer were established. This analysis indicated the possibility of detecting pigments in concentrations ranging from 0.001 to 10 μg cm(-3). Fluorescence data revealed a reproducibility of 95 %. The differences in detection limits between the two methods enable the presence of phycobilins to be investigated and their amounts to be monitored from oligotrophic to eutrophic aquatic environments.

  5. Extraction of Profile Information from Cloud Contaminated Radiances. Appendixes 2

    Science.gov (United States)

    Smith, W. L.; Zhou, D. K.; Huang, H.-L.; Li, Jun; Liu, X.; Larar, A. M.

    2003-01-01

    Clouds act to reduce the signal level and may produce noise, depending on the complexity of the cloud properties and the manner in which they are treated in the profile retrieval process. There are essentially three ways to extract profile information from cloud contaminated radiances: (1) cloud-clearing using spatially adjacent cloud contaminated radiance measurements, (2) retrieval based upon the assumption of opaque cloud conditions, and (3) retrieval or radiance assimilation using a physically correct cloud radiative transfer model which accounts for the absorption and scattering of the radiance observed. Cloud clearing extracts the radiance arising from the clear air portion of partly clouded fields of view, permitting soundings to the surface or the assimilation of radiances as in the clear field of view case. However, the accuracy of the clear air radiance signal depends upon the uniformity of cloud height and optical properties across the two fields of view used in the cloud clearing process. The assumption of opaque clouds within the field of view permits relatively accurate profiles to be retrieved down to near cloud top levels, the accuracy near the cloud top level being dependent upon the actual microphysical properties of the cloud. The use of a physically correct cloud radiative transfer model enables accurate retrievals down to cloud top levels and below semi-transparent cloud layers (e.g., cirrus). It should also be possible to assimilate cloudy radiances directly into the model, given a physically correct cloud radiative transfer model, using geometric and microphysical cloud parameters retrieved from the radiance spectra as initial cloud variables in the radiance assimilation process. This presentation reviews the above three ways to extract profile information from cloud contaminated radiances. NPOESS Airborne Sounder Testbed-Interferometer radiance spectra and Aqua satellite AIRS radiance spectra are used to illustrate how cloudy radiances can be used.
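
    Under the simplest single-layer model, each field of view satisfies R_i = (1 - f_i) R_clear + f_i R_cloud, so two adjacent fields of view with different cloud fractions give two equations in two unknowns. A minimal sketch of that solve; the radiances and cloud fractions are hypothetical, and operational cloud clearing is considerably more involved.

```python
import numpy as np

def clear_radiance(r1, r2, f1, f2):
    """Solve R_i = (1 - f_i)*R_clear + f_i*R_cloud for (R_clear, R_cloud)."""
    A = np.array([[1 - f1, f1],
                  [1 - f2, f2]])
    return np.linalg.solve(A, np.array([r1, r2]))  # requires f1 != f2

r_clear, r_cloud = clear_radiance(r1=95.0, r2=80.0, f1=0.2, f2=0.5)
print(f"R_clear = {r_clear:.1f}, R_cloud = {r_cloud:.1f}")
```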

  6. Karst rocky desertification information extraction with EO-1 Hyperion data

    Science.gov (United States)

    Yue, Yuemin; Wang, Kelin; Zhang, Bing; Jiao, Quanjun; Yu, Yizun

    2008-12-01

    Karst rocky desertification is a special kind of land desertification developed under intense human impacts on the vulnerable eco-geo-environment of a karst ecosystem. The process of karst rocky desertification results in simultaneous and complex variations of many interrelated soil, rock and vegetation biogeophysical parameters, rendering it difficult to develop simple and robust remote sensing mapping and monitoring approaches. In this study, we aimed to use Earth Observing 1 (EO-1) Hyperion hyperspectral data to extract karst rocky desertification information. A spectral unmixing model based on a Monte Carlo approach was employed to quantify the fractional cover of photosynthetic vegetation (PV), non-photosynthetic vegetation (NPV) and bare substrates. The results showed that the SWIR (1.9–2.35 μm) portions of the spectrum differed significantly among the PV, NPV and bare rock spectral properties. There are limitations in using the full optical range or only the SWIR (1.9–2.35 μm) region of Hyperion to decompose the image into PV, NPV and bare substrate covers. However, when the tied-SWIR was used, the sub-pixel fractional covers of PV, NPV and bare substrates were accurately estimated. Our study indicates that the "tied-spectrum" method effectively accentuates the spectral characteristics of materials, while a spectral unmixing model based on a Monte Carlo approach is a useful tool to automatically extract mixed ground objects in karst ecosystems. Karst rocky desertification information can be accurately extracted with EO-1 Hyperion. Imaging spectroscopy can provide a powerful methodology toward understanding the extent and spatial pattern of land degradation in karst ecosystems.
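
    The unmixing itself solves the linear mixing model r = E f for nonnegative fractional covers f. A sketch using nonnegative least squares on synthetic endmembers; the paper's Monte Carlo sampling over endmember sets is omitted here.

```python
import numpy as np
from scipy.optimize import nnls

E = np.array([[0.05, 0.30, 0.45],   # rows: bands (e.g., SWIR samples)
              [0.10, 0.35, 0.40],
              [0.20, 0.25, 0.35],
              [0.30, 0.20, 0.30]])  # columns: PV, NPV, bare substrate
pixel = 0.6 * E[:, 0] + 0.3 * E[:, 1] + 0.1 * E[:, 2]  # synthetic mixture

fractions, _ = nnls(E, pixel)       # nonnegative least-squares solve
fractions /= fractions.sum()        # enforce sum-to-one post hoc
print(dict(zip(["PV", "NPV", "bare"], fractions.round(3))))
```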

  7. Automated extraction of chemical structure information from digital raster images

    Directory of Open Access Journals (Sweden)

    Shedden Kerby A

    2009-02-01

    Full Text Available Abstract Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated to a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper aims to provide critical reviews of these systems and also reports our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams from research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface, and the algorithm parameters can be readily changed, to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy in extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links

  8. Extraction of chili, black pepper, and ginger with near-critical CO2, propane, and dimethyl ether: analysis of the extracts by quantitative nuclear magnetic resonance.

    Science.gov (United States)

    Catchpole, Owen J; Grey, John B; Perry, Nigel B; Burgess, Elaine J; Redmond, Wayne A; Porter, Noel G

    2003-08-13

    Ginger, black pepper, and chili powder were extracted using near-critical carbon dioxide, propane, and dimethyl ether on a laboratory scale to determine the overall yield and extraction efficiency for selected pungent components. The temperature dependency of extraction yield and efficiency was also determined for black pepper and chili using propane and dimethyl ether. The pungency of the extracts was determined by using an NMR technique developed for this work. The volatiles contents of ginger and black pepper extracts were also determined. Extraction of all spice types was carried out with acetone to compare overall yields. Subcritical dimethyl ether was as effective at extracting the pungent principles from the spices as supercritical carbon dioxide, although a substantial amount of water was also extracted. Subcritical propane was the least effective solvent. All solvents quantitatively extracted the gingerols from ginger. The yields of capsaicins obtained by supercritical CO(2) and dimethyl ether were similar and approximately double that extracted by propane. The yield of piperines obtained by propane extraction of black pepper was low at approximately 10% of that achieved with dimethyl ether and CO(2), but improved with increasing extraction temperature.

  9. Extraction of hidden information by efficient community detection in networks

    Science.gov (United States)

    Lee, Jooyoung; Lee, Juyong; Gross, Steven

    2013-03-01

    Currently, we are overwhelmed by a deluge of experimental data, and network physics has the potential to become an invaluable method to increase our understanding of large interacting datasets. However, this potential is often unrealized for two reasons: uncovering the hidden community structure of a network, known as community detection, is difficult, and further, even if one has an idea of this community structure, it is not a priori obvious how to efficiently use this information. Here, to address both of these issues, we, first, identify optimal community structure of given networks in terms of modularity by utilizing a recently introduced community detection method. Second, we develop an approach to use this community information to extract hidden information from a network. When applied to a protein-protein interaction network, the proposed method outperforms current state-of-the-art methods that use only the local information of a network. The method is generally applicable to networks from many areas. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 20120001222).

  10. Solid-phase extraction and liquid chromatographic quantitation of quinfamide in biological samples.

    Science.gov (United States)

    Morales, J M; Jung, C H; Alarcón, A; Barreda, A

    2000-09-15

    This paper describes a high-performance liquid chromatographic method for the assay of quinfamide and its main metabolite, 1-(dichloroacetyl)-1,2,3,4-tetrahydro-6-quinolinol, in plasma, urine and feces. It requires 1 ml of biological fluid and an extraction using Sep-Pak cartridges, with acetonitrile for drug elution. Analysis was performed on a CN column (5 μm) using water-acetonitrile-methanol (40:50:10) as the mobile phase, with detection at 269 nm. Results showed that the assay was linear in the range between 0.08 and 2.0 μg/ml. The limit of quantitation was 0.08 μg/ml. The maximum assay coefficient of variation was 14%. Recovery from plasma, urine and feces ranged from 82% to 98%.

  11. Quantitative Mass Spectrometric Analysis and Post-Extraction Stability Assessment of the Euglenoid Toxin Euglenophycin

    Directory of Open Access Journals (Sweden)

    Paul V. Zimba

    2013-09-01

    Full Text Available Euglenophycin is a recently discovered toxin produced by at least one species of euglenoid algae. The toxin has been responsible for several fish mortality events. To facilitate the identification and monitoring of euglenophycin in freshwater ponds, we have developed a specific mass spectrometric method for the identification and quantitation of euglenophycin. The post-extraction stability of the toxin was assessed under various conditions. Euglenophycin was most stable at room temperature. At 8 °C there was a small, but statistically significant, loss in toxin after one day. These methods and knowledge of the toxin’s stability will facilitate identification of the toxin as a causative agent in fish kills and determination of the toxin’s distribution in the organs of exposed fish.

  12. Chemical fingerprint and quantitative analysis for quality control of polyphenols extracted from pomegranate peel by HPLC.

    Science.gov (United States)

    Li, Jianke; He, Xiaoye; Li, Mengying; Zhao, Wei; Liu, Liu; Kong, Xianghong

    2015-06-01

    A simple and efficient HPLC fingerprint method was developed and validated for quality control of the polyphenols extracted from pomegranate peel (PPPs). Ten batches of pomegranate collected from different orchards in Shaanxi Lintong of China were used to establish the fingerprint. For the fingerprint analysis, 15 characteristic peaks were selected to evaluate the similarities of 10 batches of the PPPs. The similarities of the PPPs samples were all more than 0.968, indicating that the samples from different areas of Lintong were consistent. Additionally, simultaneous quantification of eight monophenols (including gallic acid, punicalagin, catechin, chlorogenic acid, caffeic acid, epicatechin, rutin, and ellagic acid) in the PPPs was conducted to interpret the consistency of the quality test. The results demonstrated that the HPLC fingerprint as a characteristic distinguishing method combining similarity evaluation and quantitative analysis can be successfully used to assess the quality and to identify the authenticity of the PPPs.
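
    The similarity scores reported (all above 0.968) come from comparing each sample's vector of characteristic peak areas against a reference fingerprint; a cosine (congruence) coefficient is one common choice, assumed here since the record does not spell out the formula. Peak areas below are made-up numbers.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 15 characteristic peak areas: a reference fingerprint and a noisy sample
reference = np.array([12.1, 5.3, 8.7, 3.2, 9.9, 1.1, 4.4, 6.6,
                      2.2, 7.5, 0.9, 3.8, 5.1, 2.7, 6.0])
sample = reference * np.random.default_rng(1).normal(1.0, 0.05, 15)
print(f"similarity = {cosine_similarity(reference, sample):.3f}")
```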

  13. Quantitative Detection of Trace Malachite Green in Aquiculture Water Samples by Extractive Electrospray Ionization Mass Spectrometry.

    Science.gov (United States)

    Fang, Xiaowei; Yang, Shuiping; Chingin, Konstantin; Zhu, Liang; Zhang, Xinglei; Zhou, Zhiquan; Zhao, Zhanfeng

    2016-01-01

    Exposure to malachite green (MG) may pose great health risks to humans; thus, it is of prime importance to develop fast and robust methods to quantitatively screen the presence of malachite green in water. Herein the application of extractive electrospray ionization mass spectrometry (EESI-MS) has been extended to the trace detection of MG within lake water and aquiculture water, due to the intensive use of MG as a biocide in fisheries. This method has the advantage of obviating offline liquid-liquid extraction or tedious matrix separation prior to the measurement of malachite green in native aqueous medium. The experimental results indicate that the extrapolated detection limit for MG was ~3.8 μg·L(-1) (S/N = 3) in lake water samples and ~0.5 μg·L(-1) in ultrapure water under optimized experimental conditions. The signal intensity of MG showed good linearity over the concentration range of 10-1000 μg·L(-1). Measurement of practical water samples fortified with MG at 0.01, 0.1 and 1.0 mg·L(-1) gave a good validation of the established calibration curve. The average recoveries and relative standard deviation (RSD) of malachite green in lake water and Carassius carassius fish farm effluent water were 115% (6.64% RSD), 85.4% (9.17% RSD) and 96.0% (7.44% RSD), respectively. Overall, the established EESI-MS/MS method has been demonstrated suitable for sensitive and rapid (<2 min per sample) quantitative detection of malachite green in various aqueous media, indicating its potential for online real-time monitoring of real life samples.
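
    The reported linearity and recoveries follow from a standard external calibration. A sketch of that arithmetic, assuming a linear response over the 10-1000 μg/L range; all signal intensities and replicate values are fabricated for illustration.

```python
import numpy as np

conc = np.array([10, 50, 100, 500, 1000], dtype=float)     # standards, ug/L
signal = np.array([210, 1020, 2050, 10150, 20300], float)  # instrument response

slope, intercept = np.polyfit(conc, signal, 1)             # calibration line

def measured_conc(s):
    return (s - intercept) / slope

spiked = 100.0                                             # fortified at 100 ug/L
reps = np.array([1990.0, 2105.0, 2040.0])                  # replicate signals
rec = measured_conc(reps) / spiked * 100
print(f"mean recovery {rec.mean():.1f}%, RSD {rec.std(ddof=1) / rec.mean() * 100:.2f}%")
```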

  14. Automated Extraction of Substance Use Information from Clinical Texts.

    Science.gov (United States)

    Wang, Yan; Chen, Elizabeth S; Pakhomov, Serguei; Arsoniadis, Elliot; Carter, Elizabeth W; Lindemann, Elizabeth; Sarkar, Indra Neil; Melton, Genevieve B

    2015-01-01

    Within clinical discourse, social history (SH) includes important information about substance use (alcohol, drug, and nicotine use) as key risk factors for disease, disability, and mortality. In this study, we developed and evaluated a natural language processing (NLP) system for automated detection of substance use statements and extraction of substance use attributes (e.g., temporal and status) based on Stanford Typed Dependencies. The developed NLP system leveraged linguistic resources and domain knowledge from a multi-site social history study, Propbank and the MiPACQ corpus. The system attained F-scores of 89.8, 84.6 and 89.4 respectively for alcohol, drug, and nicotine use statement detection, as well as average F-scores of 82.1, 90.3, 80.8, 88.7, 96.6, and 74.5 respectively for extraction of attributes. Our results suggest that NLP systems can achieve good performance when augmented with linguistic resources and domain knowledge when applied to a wide breadth of substance use free text clinical notes.
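
    The F-scores quoted above combine precision and recall in the usual way. A tiny worked computation; the counts are illustrative, chosen to land near the reported 89.8.

```python
def f_score(tp: int, fp: int, fn: int) -> float:
    """Balanced F1 from true-positive, false-positive, false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(f"F1 = {100 * f_score(tp=88, fp=10, fn=10):.1f}")  # -> F1 = 89.8
```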

  16. Extraction of neutron spectral information from Bonner-Sphere data

    CERN Document Server

    Haney, J H; Zaidins, C S

    1999-01-01

    We have extended a least-squares method of extracting neutron spectral information from Bonner-sphere data which was previously developed by Zaidins et al. (Med. Phys. 5 (1978) 42). A pulse-height analysis with background stripping was employed, which provided a more accurate count rate for each sphere. Newer response curves by Mares and Schraube (Nucl. Instr. and Meth. A 366 (1994) 461) were included for the moderating spheres and the bare detector which comprise the Bonner spectrometer system. Finally, the neutron energy spectrum of interest was divided, using the philosophy of fuzzy logic, into three trapezoidal regimes corresponding to slow, moderate, and fast neutrons. Spectral data were taken using a PuBe source in two different environments, and the analyzed data are presented for these cases as slow, moderate, and fast neutron fluences. (author)
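
    The unfolding problem described here can be stated compactly. In generic notation (not the paper's own symbols), each sphere's count rate is a response-weighted sum over the three fuzzy energy groups, and the group fluences follow from least squares:

```latex
c_i \;=\; \sum_{j \in \{\text{slow},\,\text{moderate},\,\text{fast}\}} R_{ij}\,\Phi_j ,
\qquad
\hat{\Phi} \;=\; \arg\min_{\Phi \ge 0} \sum_{i=1}^{N_{\text{spheres}}}
\Bigl( c_i - \sum_{j} R_{ij}\,\Phi_j \Bigr)^{2}
```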

  17. ONTOGRABBING: Extracting Information from Texts Using Generative Ontologies

    DEFF Research Database (Denmark)

    Nilsson, Jørgen Fischer; Szymczak, Bartlomiej Antoni; Jensen, P.A.

    2009-01-01

    We describe principles for extracting information from texts using a so-called generative ontology in combination with syntactic analysis. Generative ontologies are introduced as semantic domains for natural language phrases. Generative ontologies extend ordinary finite ontologies with rules for producing recursively shaped terms representing the ontological content (ontological semantics) of NL noun phrases and other phrases. We focus here on achieving a robust, often only partial, ontology-driven parsing of and ascription of semantics to a sentence in the text corpus. The aim of the ontological analysis is primarily to identify paraphrases, thereby achieving a search functionality beyond mere keyword search with synsets. We further envisage use of the generative ontology as a phrase-based rather than word-based browser into text corpora.

  18. Domain-independent information extraction in unstructured text

    Energy Technology Data Exchange (ETDEWEB)

    Irwin, N.H. [Sandia National Labs., Albuquerque, NM (United States). Software Surety Dept.

    1996-09-01

    Extracting information from unstructured text has become an important research area in recent years due to the large amount of text now electronically available. This status report describes the findings and work done during the second year of a two-year Laboratory Directed Research and Development Project. Building on the first year's work of identifying important entities, this report details techniques used to group words into semantic categories and to output templates containing selective document content. Using word profiles and category clustering derived during a training run, the time-consuming knowledge-building task can be avoided. Though the output still lacks completeness when compared to systems with domain-specific knowledge bases, the results do look promising. The two approaches are compatible and could complement each other within the same system. Domain-independent approaches retain appeal as a system that adapts and learns will soon outpace a system with any amount of a priori knowledge.

  19. Querying and Extracting Timeline Information from Road Traffic Sensor Data.

    Science.gov (United States)

    Imawan, Ardi; Indikawati, Fitri Indra; Kwon, Joonho; Rao, Praveen

    2016-08-23

    The escalation of traffic congestion in urban cities has urged many countries to use intelligent transportation system (ITS) centers to collect historical traffic sensor data from multiple heterogeneous sources. By analyzing historical traffic data, we can obtain valuable insights into traffic behavior. Many existing applications have been proposed with limited analysis results because of the inability to cope with several types of analytical queries. In this paper, we propose the QET (querying and extracting timeline information) system, a novel analytical query processing method based on a timeline model for road traffic sensor data. To address query performance, we build a TQ-index (timeline query-index) that exploits spatio-temporal features of timeline modeling. We also propose an intuitive timeline visualization method to display congestion events obtained from specified query parameters. In addition, we demonstrate the benefit of our system through a performance evaluation using a Busan ITS dataset and a Seattle freeway dataset.

  20. Querying and Extracting Timeline Information from Road Traffic Sensor Data

    Directory of Open Access Journals (Sweden)

    Ardi Imawan

    2016-08-01

    The escalation of traffic congestion in urban cities has urged many countries to use intelligent transportation system (ITS) centers to collect historical traffic sensor data from multiple heterogeneous sources. By analyzing historical traffic data, we can obtain valuable insights into traffic behavior. Many existing applications have been proposed with limited analysis results because of the inability to cope with several types of analytical queries. In this paper, we propose the QET (querying and extracting timeline information) system, a novel analytical query processing method based on a timeline model for road traffic sensor data. To address query performance, we build a TQ-index (timeline query-index) that exploits spatio-temporal features of timeline modeling. We also propose an intuitive timeline visualization method to display congestion events obtained from specified query parameters. In addition, we demonstrate the benefit of our system through a performance evaluation using a Busan ITS dataset and a Seattle freeway dataset.

  1. Simultaneous extraction and quantitation of carotenoids, chlorophylls, and tocopherols in Brassica vegetables.

    Science.gov (United States)

    Guzman, Ivette; Yousef, Gad G; Brown, Allan F

    2012-07-25

    Brassica oleracea vegetables, such as broccoli (B. oleracea L. var. italica) and cauliflower (B. oleracea L. var. botrytis), are known to contain bioactive compounds associated with health, including three classes of photosynthetic lipid-soluble compounds: carotenoids, chlorophylls, and tocopherols. Carotenoids and chlorophylls are photosynthetic pigments. Tocopherols have vitamin E activity. Due to genetic and environmental variables, the amounts present in vegetables are not constant. To aid breeders in the development of Brassica cultivars with higher provitamin A and vitamin E contents and antioxidant activity, a more efficient method was developed to quantitate carotenoids, chlorophylls, and tocopherols in the edible portions of broccoli and cauliflower. The novel UPLC method separated five carotenoids, two chlorophylls, and two tocopherols in a single 30 min run, reducing the run time by half compared to previously published protocols. The objective of the study was to develop a faster, more effective extraction and quantitation methodology to screen large populations of Brassica germplasm, thus aiding breeders in producing superior vegetables with enhanced phytonutrient profiles.

  2. Research of information classification and strategy intelligence extract algorithm based on military strategy hall

    Science.gov (United States)

    Chen, Lei; Li, Dehua; Yang, Jie

    2007-12-01

    Constructing a virtual international strategy environment requires many kinds of information, such as economic, political, military, diplomatic, cultural, and scientific information. It is therefore important to build a highly efficient system for automatic information extraction, classification, recombination, and analysis as the foundation and a component of the military strategy hall. This paper first uses an improved boosting algorithm to classify the collected initial information, and then applies a strategy-intelligence extraction algorithm to derive strategic intelligence from that information, helping strategists analyze it.
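
    The abstract does not specify the improved boosting variant, so the sketch below is only a baseline of the same family: a boosted classifier over TF-IDF features for topic categorization (library, corpus, and labels are all assumptions).

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; a real system would train on a large labeled set
docs = [
    "GDP growth and trade balance figures released",
    "naval fleet begins joint military exercise",
    "embassy announces new diplomatic talks",
    "central bank adjusts interest rates",
    "army deploys additional troops to the border",
    "foreign ministers sign cooperation treaty",
]
labels = ["economy", "military", "diplomacy",
          "economy", "military", "diplomacy"]

clf = make_pipeline(TfidfVectorizer(), AdaBoostClassifier(n_estimators=50))
clf.fit(docs, labels)
print(clf.predict(["defense budget funds a new fleet"]))
```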

  3. An Improved DNA Extraction Method for Efficient and Quantitative Recovery of Phytoplankton Diversity in Natural Assemblages.

    Directory of Open Access Journals (Sweden)

    Jian Yuan

    Marine phytoplankton are highly diverse, with different species possessing different cell coverings, posing challenges for thoroughly breaking the cells in DNA extraction while preserving DNA integrity. While quantitative molecular techniques have been increasingly used in phytoplankton research, an effective and simple method broadly applicable to different lineages and natural assemblages is still lacking. In this study, we developed a bead-beating protocol based on our previous experience and tested it against 9 species of phytoplankton representing different lineages and different cell covering rigidities. We found the bead-beating method enhanced the final yield of DNA (up to 2-fold) in comparison with the non-bead-beating method, while also preserving the DNA integrity. When our method was applied to a field sample collected at a subtropical bay located in Xiamen, China, the resultant ITS clone library revealed a highly diverse assemblage of phytoplankton and other micro-eukaryotes, including Archaea, Amoebozoa, Chlorophyta, Ciliophora, Bacillariophyta, Dinophyta, Fungi, Metazoa, etc. The appearance of thecate dinoflagellates, thin-walled phytoplankton and "naked" unicellular organisms indicates that our method could obtain the intact DNA of organisms with different cell coverings. All the results demonstrate that our method is useful for DNA extraction of phytoplankton and environmental surveys of their diversity and abundance.

  4. An Improved DNA Extraction Method for Efficient and Quantitative Recovery of Phytoplankton Diversity in Natural Assemblages

    Science.gov (United States)

    Yuan, Jian; Li, Meizhen; Lin, Senjie

    2015-01-01

    Marine phytoplankton are highly diverse, with different species possessing different cell coverings, posing challenges for thoroughly breaking the cells in DNA extraction while preserving DNA integrity. While quantitative molecular techniques have been increasingly used in phytoplankton research, an effective and simple method broadly applicable to different lineages and natural assemblages is still lacking. In this study, we developed a bead-beating protocol based on our previous experience and tested it against 9 species of phytoplankton representing different lineages and different cell covering rigidities. We found the bead-beating method enhanced the final yield of DNA (up to 2-fold) in comparison with the non-bead-beating method, while also preserving the DNA integrity. When our method was applied to a field sample collected at a subtropical bay located in Xiamen, China, the resultant ITS clone library revealed a highly diverse assemblage of phytoplankton and other micro-eukaryotes, including Archaea, Amoebozoa, Chlorophyta, Ciliophora, Bacillariophyta, Dinophyta, Fungi, Metazoa, etc. The appearance of thecate dinoflagellates, thin-walled phytoplankton and “naked” unicellular organisms indicates that our method could obtain the intact DNA of organisms with different cell coverings. All the results demonstrate that our method is useful for DNA extraction of phytoplankton and environmental surveys of their diversity and abundance. PMID:26218575

  5. Gas purge microsyringe extraction for quantitative direct gas chromatographic-mass spectrometric analysis of volatile and semivolatile chemicals.

    Science.gov (United States)

    Yang, Cui; Piao, Xiangfan; Qiu, Jinxue; Wang, Xiaoping; Ren, Chunyan; Li, Donghao

    2011-03-25

    Sample pretreatment before chromatographic analysis is the most time-consuming and error-prone part of analytical procedures, yet it is a key factor in the final success of the analysis. A quantitative and fast liquid-phase microextraction technique termed gas purge microsyringe extraction (GP-MSE) has been developed for simultaneous direct gas chromatography-mass spectrometry (GC-MS) analysis of volatile and semivolatile chemicals without a cleanup step. Use of a gas flowing system, temperature control and a conventional microsyringe greatly increased the surface area of the liquid-phase micro solvent, and led to quantitative recoveries of both volatile and semivolatile chemicals within a short extraction time of only 2 min. Recoveries of polycyclic aromatic hydrocarbons (PAHs), organochlorine pesticides (OCPs) and alkylphenols (APs) were 85-107%, and reproducibility was between 2.8% and 8.5%. In particular, the technique shows high sensitivity for semivolatile chemicals, which is difficult to achieve with other sample pretreatment techniques such as headspace liquid-phase microextraction. The variables affecting extraction efficiency, such as gas flow rate, extraction time, extracting solvent type, and temperature of sample and extracting solvent, were investigated. Finally, the technique was evaluated by determining PAHs, APs and OCPs in plant and soil samples. The experimental results demonstrated that the technique is economical, sensitive to both volatile and semivolatile chemicals, fast, simple to operate, and quantitative. On-site monitoring of volatile and semivolatile chemicals is now possible using this technique due to the simplification and speed of sample treatment.

  6. Determination of ethyl glucuronide in human hair samples: A multivariate analysis of the impact of extraction conditions on quantitative results.

    Science.gov (United States)

    Mueller, Alexander; Jungen, Hilke; Iwersen-Bergmann, Stefanie; Raduenz, Lars; Lezius, Susanne; Andresen-Streichert, Hilke

    2017-02-01

    Ethyl glucuronide (EtG), a minor metabolite of ethanol, is used as a direct alcohol biomarker for the prolonged detection of ethanol consumption. Hair testing for EtG offers retrospective, long-term detection of ethanol exposure over several months and has gained practical importance in forensic and clinical toxicology. Since quantitative results of EtG hair tests feed into interpretation, rugged quantitation of EtG in the hair matrix is important. Sample preparation is critical in hair testing, and the focus of this study was the extraction of EtG from the hair matrix. The influence of extraction solvent, ultrasonication, incubation temperature, incubation time, solvent amount and hair particle size on quantitative results was investigated in a multifactorial experimental design, using a validated analytical method and twelve different batches of authentic human hair material. Eight series of extraction experiments in a Plackett-Burman setup were carried out on each hair material with the studied factors at high or low levels. The effect of pulverization was further studied in two additional experimental series. Five independent samplings were performed for each run, resulting in a total of 600 determinations. Considerable differences in quantitative EtG results were observed; concentrations above and below interpretative cut-offs were obtained from the same hair materials using different extraction conditions. Statistical analysis revealed extraction solvent and temperature as the most important experimental factors with significant influence on quantitative results. The impact of pulverization depended on other experimental factors, and the different hair matrices themselves proved to be important predictors of extraction efficiency. A standardization of extraction procedures should be discussed, since it would probably reduce interlaboratory variability and improve the quality and acceptance of hair EtG analysis.
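
    For context, a Plackett-Burman screen estimates each factor's main effect as the difference between the mean responses at its high and low settings. A small numpy sketch with the standard 8-run design follows; the EtG responses and the factor-to-column assignment are hypothetical.

```python
import numpy as np

# Standard 8-run Plackett-Burman design: cyclic shifts of a +/- generator
# row plus a final all-minus run give 7 coded (-1/+1) columns.
gen = np.array([1, 1, 1, -1, 1, -1, -1])
design = np.vstack([np.roll(gen, i) for i in range(7)] + [-np.ones(7, int)])

# Hypothetical mean EtG results (pg/mg) for the 8 extraction runs; six
# columns carry the studied factors, the seventh is a dummy column.
y = np.array([28.1, 25.4, 30.2, 22.8, 27.5, 21.9, 24.3, 20.6])

factors = ["solvent", "sonication", "temperature", "time",
           "solvent amount", "particle size", "dummy"]
effects = design.T @ y / 4.0   # mean(+1 runs) minus mean(-1 runs), per column
for name, eff in zip(factors, effects):
    print(f"{name:15s} main effect = {eff:+.2f}")
```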

  7. Method Specific Calibration Corrects for DNA Extraction Method Effects on Relative Telomere Length Measurements by Quantitative PCR

    Science.gov (United States)

    Holland, Rebecca; Underwood, Sarah; Fairlie, Jennifer; Psifidi, Androniki; Ilska, Joanna J.; Bagnall, Ainsley; Whitelaw, Bruce; Coffey, Mike; Banos, Georgios; Nussey, Daniel H.

    2016-01-01

    Telomere length (TL) is increasingly being used as a biomarker in epidemiological, biomedical and ecological studies. A wide range of DNA extraction techniques have been used in telomere experiments and recent quantitative PCR (qPCR) based studies suggest that the choice of DNA extraction method may influence average relative TL (RTL) measurements. Such extraction method effects may limit the use of historically collected DNA samples extracted with different methods. However, if extraction method effects are systematic an extraction method specific (MS) calibrator might be able to correct for them, because systematic effects would influence the calibrator sample in the same way as all other samples. In the present study we tested whether leukocyte RTL in blood samples from Holstein Friesian cattle and Soay sheep measured by qPCR was influenced by DNA extraction method and whether MS calibration could account for any observed differences. We compared two silica membrane-based DNA extraction kits and a salting out method. All extraction methods were optimized to yield enough high quality DNA for TL measurement. In both species we found that silica membrane-based DNA extraction methods produced shorter RTL measurements than the non-membrane-based method when calibrated against an identical calibrator. However, these differences were not statistically detectable when a MS calibrator was used to calculate RTL. This approach produced RTL measurements that were highly correlated across extraction methods (r > 0.76) and had coefficients of variation lower than 10% across plates of identical samples extracted by different methods. Our results are consistent with previous findings that popular membrane-based DNA extraction methods may lead to shorter RTL measurements than non-membrane-based methods. However, we also demonstrate that these differences can be accounted for by using an extraction method-specific calibrator, offering researchers a simple means of accounting for such extraction method effects.

  8. Method Specific Calibration Corrects for DNA Extraction Method Effects on Relative Telomere Length Measurements by Quantitative PCR.

    Science.gov (United States)

    Seeker, Luise A; Holland, Rebecca; Underwood, Sarah; Fairlie, Jennifer; Psifidi, Androniki; Ilska, Joanna J; Bagnall, Ainsley; Whitelaw, Bruce; Coffey, Mike; Banos, Georgios; Nussey, Daniel H

    2016-01-01

    Telomere length (TL) is increasingly being used as a biomarker in epidemiological, biomedical and ecological studies. A wide range of DNA extraction techniques have been used in telomere experiments and recent quantitative PCR (qPCR) based studies suggest that the choice of DNA extraction method may influence average relative TL (RTL) measurements. Such extraction method effects may limit the use of historically collected DNA samples extracted with different methods. However, if extraction method effects are systematic an extraction method specific (MS) calibrator might be able to correct for them, because systematic effects would influence the calibrator sample in the same way as all other samples. In the present study we tested whether leukocyte RTL in blood samples from Holstein Friesian cattle and Soay sheep measured by qPCR was influenced by DNA extraction method and whether MS calibration could account for any observed differences. We compared two silica membrane-based DNA extraction kits and a salting out method. All extraction methods were optimized to yield enough high quality DNA for TL measurement. In both species we found that silica membrane-based DNA extraction methods produced shorter RTL measurements than the non-membrane-based method when calibrated against an identical calibrator. However, these differences were not statistically detectable when a MS calibrator was used to calculate RTL. This approach produced RTL measurements that were highly correlated across extraction methods (r > 0.76) and had coefficients of variation lower than 10% across plates of identical samples extracted by different methods. Our results are consistent with previous findings that popular membrane-based DNA extraction methods may lead to shorter RTL measurements than non-membrane-based methods. However, we also demonstrate that these differences can be accounted for by using an extraction method-specific calibrator, offering researchers a simple means of accounting for such extraction method effects.
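
    To make the calibration idea concrete: RTL is the telomere-to-single-copy-gene ratio of a sample normalized to the same ratio in a calibrator, so a calibrator extracted with the same method lets systematic extraction effects cancel. The sketch below uses the common 2^-ddCt approximation (assumes ~100% PCR efficiency; not the authors' exact pipeline, and all Ct values are hypothetical).

```python
def rtl(ct_telo, ct_scg, cal_ct_telo, cal_ct_scg):
    """Relative telomere length via 2^-ddCt, normalized to a
    method-specific (MS) calibrator extracted with the same method."""
    d_sample = ct_telo - ct_scg        # delta-Ct of the sample
    d_cal = cal_ct_telo - cal_ct_scg   # delta-Ct of the MS calibrator
    return 2 ** -(d_sample - d_cal)

# Same biological sample measured after two extraction methods, each
# normalized against a calibrator extracted with the matching method.
print(rtl(14.2, 18.9, 14.8, 19.0))  # silica membrane kit
print(rtl(13.7, 18.8, 14.3, 18.9))  # salting-out method
```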

  9. Application of Thin-Layer Chromatography to the Quantitative and Qualitative Analysis of Rock and Oil Extracts

    Directory of Open Access Journals (Sweden)

    Huc A. Y.

    2006-11-01

    The technique described here meets the need to miniaturize analyses of oils and rock extracts. Thin-layer chromatography can be used for the qualitative and quantitative analysis of small amounts of sample. This method is capable of analyzing extracts from cuttings (5 to 10 g of rock). This article attempts to make a critical study of the information obtained and to compare it with the results of other analytical methods (liquid chromatography).

  10. HPLC quantitative analysis of rhein and antidermatophytic activity of Cassia fistula pod pulp extracts of various storage conditions.

    Science.gov (United States)

    Chewchinda, Savita; Wuthi-udomlert, Mansuang; Gritsanapan, Wandee

    2013-01-01

    Cassia fistula is well known for its laxative and antifungal properties, which are due to anthraquinone compounds in the pods. This study used HPLC to quantitatively analyze rhein in C. fistula pod pulp decoction extracts kept under various storage conditions. The antifungal activity of the extracts and their hydrolyzed mixture was also evaluated against dermatophytes. The rhein content of all stored decoction extracts remained above 95% (95.69-100.66%) of the initial amount (0.0823 ± 0.001% w/w), with no significant difference between extracts kept in glass vials and in aluminum foil bags. The decoction extract of C. fistula pod pulp and its hydrolyzed mixture containing anthraquinone aglycones were tested against clinical strains of dermatophytes by the broth microdilution technique. The results revealed that C. fistula pod pulp decoction extracts retained good chemical stability and antifungal activity against dermatophytes under various accelerated and real-time storage conditions.

  11. Information Behavior of Japanese Now and the Future : Centering On Quantitative Analysis

    Science.gov (United States)

    Tsuneki, Teruo

    Our information environment has become complicated. Locating a specific behavior within the context of total information behavior is an effective approach when considering how we handle newly emerging media. The purpose of this study is to grasp present information behavior quantitatively from a comprehensive and systematic viewpoint. Collecting data time-sequentially as far as possible, the author 1) clarified the characteristics of Japanese information behavior by comparison with those of people in other countries, and 2) quantitatively projected future information behavior using the Delphi method. He points out that international comparisons of the amount of information behavior, and predictions of its future, should from now on be conducted in more detail and with more care.

  12. A Quantitative Study into the Information Technology Project Portfolio Practice: The Impact on Information Technology Project Deliverables

    Science.gov (United States)

    Yu, Wei

    2013-01-01

    This dissertation applied the quantitative approach to the data gathered from online survey questionnaires regarding the three objects: Information Technology (IT) Portfolio Management, IT-Business Alignment, and IT Project Deliverables. By studying this data, this dissertation uncovered the underlying relationships that exist between the…

  14. A Quantitative Analysis of Extraction of Organic Molecules from Terrestrial Sedimentary Deposits

    Science.gov (United States)

    Kanik, I.; Beegle, L. W.; Abbey, W. A.; Tsapin, A. T.

    2004-12-01

    There are several factors determining the ability to detect organic molecules as part of a robotic astrobiology mission to planets. These include the quantity of organics present in a sample, the efficiency of extracting those organics from the matrix in which they reside (i.e., sample processing), and the detection efficiencies of the analytical instrumentation aboard the robotic platform. Once the detection limits of the analytical instrumentation are established, the efficiency of extraction becomes the overriding factor in the detectability of these molecules and needs to be factored in. We analyzed four different terrestrial field samples that were initially created in aqueous environments and are sedimentary in nature. These particular samples were chosen because they possibly represent a terrestrial analog of Mars [1] and a best-case scenario for finding organic molecules on the Martian surface. The extraction efficiencies of amino acids (the smallest building blocks of life) from the samples using pyrolysis and solvent extraction techniques (with seven different solvents: water, hydrochloric acid, butane, ethanol, isopropanol, methanol, n-propanol) are reported. In order to remove any instrumental bias, we used a standard laboratory bench-top high pressure liquid chromatograph (HPLC). We determined both the absolute quantity of organics and the D/L ratio, to assess how well that information is preserved through the processing step. Acknowledgment: The research described here was carried out at the Jet Propulsion Laboratory, and was sponsored by the NASA PIDDP and ASTID program offices. References: [1] Malin M.C. and Edgett K.S. (2003) Science 302 1931-1934.

  15. Rapid and quantitative determination of 10 major active components in Lonicera japonica Thunb. by ultrahigh pressure extraction-HPLC/DAD

    Science.gov (United States)

    Fan, Li; Lin, Changhu; Duan, Wenjuan; Wang, Xiao; Liu, Jianhua; Liu, Feng

    2015-01-01

    An ultrahigh pressure extraction (UPE)-high performance liquid chromatography (HPLC)/diode array detector (DAD) method was established to evaluate the quality of Lonicera japonica Thunb. Ten active components, including neochlorogenic acid, chlorogenic acid, 4-dicaffeoylquinic acid, caffeic acid, rutin, luteoloside, isochlorogenic acid B, isochlorogenic acid A, isochlorogenic acid C, and quercetin, were qualitatively evaluated and quantitatively determined. Scanning electron microscope images elucidated the bud surface microstructure and extraction mechanism. The optimal extraction conditions of the UPE were 60% methanol solution, 400 MPa of extraction pressure, 3 min of extraction time, and 1:30 (g/mL) solid:liquid ratio. Under the optimized conditions, the total extraction yield of 10 active components was 57.62 mg/g. All the components showed good linearity (r² ≥ 0.9994) and recoveries. This method was successfully applied to quantify 10 components in 22 batches of L. japonica samples from different areas. Compared with heat reflux extraction and ultrasonic-assisted extraction, UPE can be considered as an alternative extraction technique for fast extraction of active ingredient from L. japonica.

  16. Extracting information in spike time patterns with wavelets and information theory.

    Science.gov (United States)

    Lopes-dos-Santos, Vítor; Panzeri, Stefano; Kayser, Christoph; Diamond, Mathew E; Quian Quiroga, Rodrigo

    2015-02-01

    We present a new method to assess the information carried by temporal patterns in spike trains. The method first performs a wavelet decomposition of the spike trains, then uses Shannon information to select a subset of coefficients carrying information, and finally assesses timing information in terms of decoding performance: the ability to identify the presented stimuli from spike train patterns. We show that the method allows: 1) a robust assessment of the information carried by spike time patterns even when this is distributed across multiple time scales and time points; 2) an effective denoising of the raster plots that improves the estimate of stimulus tuning of spike trains; and 3) an assessment of the information carried by temporally coordinated spikes across neurons. Using simulated data, we demonstrate that the Wavelet-Information (WI) method performs better and is more robust to spike time-jitter, background noise, and sample size than well-established approaches, such as principal component analysis, direct estimates of information from digitized spike trains, or a metric-based method. Furthermore, when applied to real spike trains from monkey auditory cortex and from rat barrel cortex, the WI method allows extracting larger amounts of spike timing information. Importantly, the fact that the WI method incorporates multiple time scales makes it robust to the choice of partly arbitrary parameters such as temporal resolution, response window length, number of response features considered, and the number of available trials. These results highlight the potential of the proposed method for accurate and objective assessments of how spike timing encodes information. Copyright © 2015 the American Physiological Society.
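
    The three stages (wavelet transform, information-based coefficient selection, decoding) can be illustrated end to end. The toy sketch below uses PyWavelets and scikit-learn on synthetic binned spike trains; every parameter is illustrative, and this is not the published implementation.

```python
import numpy as np
import pywt  # assumes: pip install PyWavelets
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic data: 200 trials x 64 time bins, two stimulus classes, with a
# stimulus-dependent timing feature injected into bins 16-23 of class 1
X = rng.poisson(1.0, size=(200, 64)).astype(float)
y = rng.integers(0, 2, size=200)
X[y == 1, 16:24] += 1.5

# 1) Wavelet decomposition of each trial's spike train
coeffs = np.array([np.concatenate(pywt.wavedec(x, "haar")) for x in X])

# 2) Keep the coefficients carrying the most stimulus information
mi = mutual_info_classif(coeffs, y, random_state=0)
top = np.argsort(mi)[-10:]

# 3) Assess timing information as decoding performance
# (in practice this would be evaluated on held-out trials)
clf = LogisticRegression(max_iter=1000).fit(coeffs[:, top], y)
print("decoding accuracy:", clf.score(coeffs[:, top], y))
```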

  17. QUALITATIVE AND QUANTITATIVE PROFILE OF CURCUMIN FROM ETHANOLIC EXTRACT OF CURCUMA LONGA

    Directory of Open Access Journals (Sweden)

    Soni Himesh

    2011-04-01

    Turmeric, derived from the plant Curcuma longa, is a gold-colored spice commonly used in the Indian subcontinent, not only for health care but also for the preservation of food and as a yellow dye for textiles. Curcumin, which gives the yellow color to turmeric, was first isolated almost two centuries ago, and its structure as diferuloylmethane was determined in 1910. Since the time of Ayurveda (1900 B.C.), numerous therapeutic activities have been assigned to turmeric for a wide variety of diseases and conditions, including those of the skin, pulmonary, and gastrointestinal systems, aches, pains, wounds, sprains, and liver disorders. Extensive research within the last half century has proven that most of these activities, once associated with turmeric, are due to curcumin. Curcumin has been shown to exhibit antioxidant, anti-inflammatory, antiviral, antibacterial, antifungal, and anticancer activities and thus has a potential against various malignant diseases, diabetes, allergies, arthritis, Alzheimer's disease, and other chronic illnesses. Curcumin can be considered an ideal "Spice for Life". Curcumin is the most important fraction of turmeric and is responsible for its biological activity. In the present work we investigated the qualitative and quantitative determination of curcumin in the ethanolic extract of C. longa. Qualitative estimation was carried out by the thin layer chromatographic (TLC) method. The total phenolic content of the ethanolic extract of C. longa was found to be 11.24 mg GAE/g. The simultaneous determination of the pharmacologically important active curcuminoids, viz. curcumin, demethoxycurcumin and bis-demethoxycurcumin, in Curcuma longa was carried out by spectrophotometric and HPLC techniques. HPLC separation was performed on a Cyber Lab C-18 column (250 x 4.0 mm, 5 μm) using acetonitrile and 0.1% orthophosphoric acid solution in water in the ratio 60:40 (v/v) at a flow rate of 0.5 mL/min. Detection of curcuminoids

  18. Earth Science Data Analytics: Preparing for Extracting Knowledge from Information

    Science.gov (United States)

    Kempler, Steven; Barbieri, Lindsay

    2016-01-01

    Data analytics is the process of examining large amounts of data of a variety of types to uncover hidden patterns, unknown correlations and other useful information. Data analytics is a broad term that includes data analysis, as well as an understanding of the cognitive processes an analyst uses to understand problems and explore data in meaningful ways. Analytics also includes data extraction, transformation, and reduction, utilizing specific tools, techniques, and methods. Turning to data science, definitions of data science sound very similar to those of data analytics (which leads to much of the confusion between the two). But the skills needed for both (co-analyzing large amounts of heterogeneous data, understanding and utilizing relevant tools and techniques, and subject matter expertise), although similar, serve different purposes. Data analytics takes a practitioner's approach, applying expertise and skills to solve issues and gain subject knowledge. Data science is more theoretical (research in itself) in nature, providing strategic actionable insights and new innovative methodologies. Earth Science Data Analytics (ESDA) is the process of examining, preparing, reducing, and analyzing large amounts of spatial (multi-dimensional), temporal, or spectral data using a variety of data types to uncover patterns, correlations and other information, to better understand our Earth. The large variety of datasets (temporal and spatial differences, data types, formats, etc.) invites the need for data analytics skills that combine an understanding of the science domain with data preparation, reduction, and analysis techniques, from a practitioner's point of view. The application of these skills to ESDA is the focus of this presentation. The Earth Science Information Partners (ESIP) Federation Earth Science Data Analytics (ESDA) Cluster was created in recognition of the practical need to facilitate the co-analysis of large amounts of data and information for Earth science. Thus, from a to

  19. High-performance liquid chromatographic (HPLC) separation and quantitation of endogenous glucocorticoids after solid-phase extraction from plasma.

    Science.gov (United States)

    Dawson, R; Kontur, P; Monjan, A

    1984-01-01

    This study describes a method for the extraction and simultaneous measurement of cortisone, cortisol and corticosterone using dexamethasone as an internal standard. Solid-phase extraction of plasma steroids with C18 columns allows the samples to be extracted, washed and concentrated in a single step with minimal sample handling and without the use of large volumes of organic solvents. HPLC separation of the steroids is accomplished within 10 min and the individual steroid peaks are quantitated by UV detection at 239 nm. This assay was examined for linearity, extraction efficiency, precision and potential interference by commonly used drugs. Plasma values of glucocorticoids are reported for samples obtained from human subjects as well as from rats. HPLC was also compared to RIA for the determination of plasma levels of corticosterone in the rat. Solid-phase extraction and assay by HPLC provides a rapid and specific method for the simultaneous determination of plasma glucocorticoids.

  20. Freezing fecal samples prior to DNA extraction affects the Firmicutes to Bacteroidetes ratio determined by downstream quantitative PCR analysis

    DEFF Research Database (Denmark)

    Bahl, Martin Iain; Bergström, Anders; Licht, Tine Rask

    2012-01-01

    Freezing stool samples prior to DNA extraction and downstream analysis is widely used in metagenomic studies of the human microbiota but may affect the inferred community composition. In this study, DNA was extracted either directly or following freeze storage of three homogenized human fecal samples using three different extraction methods. No consistent differences were observed in DNA yields between extractions on fresh and frozen samples; however, differences were observed between extraction methods. Quantitative PCR analysis was subsequently performed on all DNA samples using six different primer pairs targeting 16S rRNA genes of significant bacterial groups, and the community composition was evaluated by comparing specific ratios of the calculated abundances. In seven of nine cases, the Firmicutes to Bacteroidetes 16S rRNA gene ratio was significantly higher in fecal samples that had been frozen compared with identical samples that had not.

  1. Freezing fecal samples prior to DNA extraction affects the Firmicutes to Bacteroidetes ratio determined by downstream quantitative PCR analysis

    DEFF Research Database (Denmark)

    Bahl, Martin Iain; Bergström, Anders; Licht, Tine Rask

    Freezing stool samples prior to DNA extraction and downstream analysis is widely used in metagenomic studies of the human microbiota but may affect the inferred community composition. In this study DNA was extracted either directly or following freeze storage of three homogenized human fecal samples using three different extraction methods. No consistent differences were observed in DNA yields between extractions on fresh and frozen samples, however differences were observed between extraction methods. Quantitative PCR analysis was subsequently performed on all DNA samples using six different primer pairs targeting 16S rRNA genes of significant bacterial groups and the community composition was evaluated by comparing specific ratios of the calculated abundances. In seven out of nine cases the Firmicutes to Bacteroidetes 16S rRNA gene ratio was significantly higher in fecal samples that had been frozen compared with identical samples that had not.
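
    For reference, the compared ratio follows directly from the quantification cycles; assuming equal amplification efficiencies E for the two group-specific assays (a simplification on our part, not necessarily the authors' exact calculation):

```latex
\frac{N_{\text{Firmicutes}}}{N_{\text{Bacteroidetes}}}
  \;=\; \frac{(1+E)^{-Ct_{\text{Firm}}}}{(1+E)^{-Ct_{\text{Bact}}}}
  \;=\; (1+E)^{\,Ct_{\text{Bact}} - Ct_{\text{Firm}}}
```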

  2. In vitro prebiotic effects and quantitative analysis of Bulnesia sarmienti extract

    Directory of Open Access Journals (Sweden)

    Md Ahsanur Reza

    2016-10-01

    Prebiotics are used to influence the growth, colonization, survival, and activity of probiotics, and to enhance the innate immunity, thus improving the health status of the host. The survival, growth, and activity of probiotics are often interfered with by intrinsic factors and indigenous microbes in the gastrointestinal tract. In this study, Bulnesia sarmienti aqueous extract (BSAE) was evaluated for growth-promoting activity on different strains of Lactobacillus acidophilus, and a simple, precise, cost-effective high-performance liquid chromatography (HPLC) method was developed and validated for the determination of active prebiotic ingredients in the extract. Different strains of L. acidophilus (probiotic) were incubated in de Man, Rogosa, and Sharpe (MRS) medium with the supplementation of BSAE in a final concentration of 0.0%, 1.0%, and 3.0% (w/v) as the sole carbon source. Growth of the probiotics was determined by measuring the pH changes and colony-forming units (CFU/mL) using the microdilution method for a period of 24 hours. The HPLC method was designed by optimizing mobile-phase composition, flow rate, column temperature, and detection wavelength. The method was validated according to the requirements for a new method, including accuracy, precision, linearity, limit of detection, limit of quantitation, and specificity. The major prebiotic active ingredients in BSAE were determined using the validated HPLC method. A rapid growth rate of the different strains of L. acidophilus was observed in growth media with BSAE, whereas the decline of pH values of the cultures varied among the strains of probiotics depending on the time of culture. (+)-Catechin and (−)-epicatechin were identified on the basis of their retention time, absorbance spectrum, and mass spectrometry fragmentation pattern. The developed method met the limits of all validation parameters. The prebiotic active components, (+)-catechin and (−)-epicatechin, were quantified as 1.27% and 0

  3. Information flow and controlling in regularization inversion of quantitative remote sensing

    Institute of Scientific and Technical Information of China (English)

    YANG Hua; XU Wangli; ZHAO Hongrui; CHEN Xue; WANG Jindi

    2005-01-01

    To minimize the uncertainty of the inverted parameters to the largest extent by making full use of the limited information in remote sensing data, it is necessary to understand, and then control, the information flow in quantitative remote sensing model inversion. To this end, the paper takes linear kernel-driven model inversion as an example. First, the information flow in different inversion methods is calculated and analyzed; then the effect of controlling the information flow through a multi-stage inversion strategy is studied; finally, an information matrix based on USM is defined for controlling the information flow in inversion. The results show that the decrease in the Shannon entropy of the inverted parameters expresses information flow appropriately. Changing the weight of a priori knowledge in the inversion, or fixing parameters and partitioning datasets in a multi-stage inversion strategy, can control the information flow. In regularization inversion of remote sensing, an information matrix based on USM may be a better tool for quantitatively controlling the information flow.
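
    The entropy-decrease measure of information flow can be sketched in generic notation (not necessarily the paper's exact definitions); for a Gaussian posterior over n parameters with covariance Sigma, the entropy reduces to a log-determinant:

```latex
\Delta H \;=\; H(\theta) - H(\theta \mid D), \qquad
H(\theta) \;=\; -\int p(\theta)\,\ln p(\theta)\,d\theta, \qquad
H_{\text{Gauss}} \;=\; \tfrac{1}{2}\ln\!\bigl((2\pi e)^{n}\,\lvert\Sigma\rvert\bigr)
```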

  4. Quantitative aspects of informed consent: considering the dose response curve when estimating quantity of information.

    Science.gov (United States)

    Lynöe, N; Hoeyer, K

    2005-12-01

    Information is usually supposed to be a prerequisite for people making decisions on whether or not to participate in a clinical trial. Previously conducted studies and research ethics scandals indicate that participants have sometimes lacked important pieces of information. Over the past few decades the quantity of information believed to be adequate has increased significantly, and in some instances a new maxim seems to be in place: the more information, the better the ethics in terms of respecting a participant's autonomy. The authors hypothesise that the dose-response curve from pharmacology or toxicology serves as a model to illustrate that a large amount of written information does not equal optimality. Using the curve as a pedagogical analogy when teaching ethics to students in clinical sciences, and also in engaging in dialogue with research institutions, may promote reflection on how to adjust information in relation to the preferences of individual participants, thereby transgressing the maxim that more information means better ethics.
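
    One way to give the analogy a standard quantitative form (our illustrative formalization, not the authors') is an Emax-type curve, where understanding U rises steeply with the quantity of information q at low doses and saturates, with q50 the quantity giving half-maximal understanding:

```latex
U(q) \;=\; \frac{U_{\max}\, q}{q_{50} + q},
\qquad \lim_{q \to \infty} U(q) \;=\; U_{\max}
```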

  5. Extraction, Identification and Quantitative HPLC Analysis of Flavonoids From Fruit Extracts of Arbutus unedo L from Tiaret Area (Western Algeria

    Directory of Open Access Journals (Sweden)

    Khadidja Bouzid

    2014-12-01

    The aim of the current study was to evaluate the total phenolic and flavonoid content and to investigate the antioxidant capacities of extracts of the fruit of Arbutus unedo L. that grows in the Tiaret area (Western Algeria). The fruit was first extracted with several solvents (chloroform, ethyl acetate, 1-butanol). Total phenolic content and total flavonoid content were evaluated according to the Folin-Ciocalteu procedure and a colorimetric method, respectively. Extract content was determined using a high-performance liquid chromatography (HPLC-UV) method. The total phenolic content of A. unedo L. varied between 12.75±0.06 and 34.17±1.36 mg gallic acid equivalent/g of dry weight of extract. The total flavonoid content varied from 2.18±0.10 to 6.54±1.14 mg catechin equivalent/g. The antioxidant potential of all extracts was evaluated using the 1,1-diphenyl-2-picrylhydrazyl (DPPH) free radical scavenging assay. The ethyl acetate extract had the lowest IC50 (0.009 mg/mL), possibly due to its phenolic compounds, followed by the chloroform extract (IC50 = 0.015 mg/mL), the butanol extract (IC50 = 0.022 mg/mL), and the water extract (IC50 = 0.048 mg/mL); the antioxidant activity of all extracts was better than that of ascorbic acid. The extract obtained under optimum conditions was analyzed by HPLC and five flavonoid compounds were identified: catechin, apigenin, silybin, fisetin, and naringin.

  6. Perceived relevance and information needs regarding food topics and preferred information sources among Dutch adults: results of a quantitative consumer study

    NARCIS (Netherlands)

    Dillen, van S.M.E.; Hiddink, G.J.; Koelen, M.A.; Graaf, de C.; Woerkum, van C.M.J.

    2004-01-01

    Objective: For more effective nutrition communication, it is crucial to identify sources from which consumers seek information. Our purpose was to assess perceived relevance and information needs regarding food topics, and preferred information sources by means of quantitative consumer research.

  7. Quantitative Mitochondrial Proteomics Study on Protective Mechanism of Grape Seed Proanthocyanidin Extracts Against Ischemia/Reperfusion Heart Injury in Rat

    Institute of Scientific and Technical Information of China (English)

    LU Wei-da; QIU Jie; ZHAO Gai-xia; QIE Liang-yi; WEI Xin-bing; GAO Hai-qing

    2012-01-01

    Cardiac ischemia/reperfusion (I/R) injury is a critical condition, often associated with high morbidity and mortality. The cardioprotective effect of grape seed proanthocyanidin extracts (GSPE) against oxidant injury during I/R has been described in previous studies. However, the underlying molecular mechanisms have not been fully elucidated. This study investigated the effect of GSPE on reperfusion arrhythmias, especially ventricular tachycardia (VT) and ventricular fibrillation (VF), on lactic acid accumulation and the ultrastructure of ischemic cardiomyocytes, and on the global changes of mitochondrial proteins, in an in vivo rat heart model of I/R injury. GSPE significantly reduced the incidence of VF and VT, lessened the lactic acid accumulation and attenuated the ultrastructural damage. Twenty differential proteins related to cardiac protection were revealed by isobaric tags for relative and absolute quantitation (iTRAQ) profiling. These proteins were mainly involved in energy metabolism. In addition, monoamine oxidase A (MAOA) was identified. The differential expression of several proteins was validated by Western blot. Our study offers important information on the mechanism of GSPE treatment in ischemic heart disease.

  8. Quantitative Analysis of Qualitative Information from Interviews: A Systematic Literature Review

    Science.gov (United States)

    Fakis, Apostolos; Hilliam, Rachel; Stoneley, Helen; Townend, Michael

    2014-01-01

    Background: A systematic literature review was conducted on mixed methods area. Objectives: The overall aim was to explore how qualitative information from interviews has been analyzed using quantitative methods. Methods: A contemporary review was undertaken and based on a predefined protocol. The references were identified using inclusion and…

  9. Forty Years of the "Journal of Librarianship and Information Science": A Quantitative Analysis, Part I

    Science.gov (United States)

    Furner, Jonathan

    2009-01-01

    This paper reports on the first part of a two-part quantitative analysis of volume 1-40 (1969-2008) of the "Journal of Librarianship and Information Science" (formerly the "Journal of Librarianship"). It provides an overview of the current state of LIS research journal publishing in the UK; a review of the publication and…

  10. A Framework for General Education Assessment: Assessing Information Literacy and Quantitative Literacy with ePortfolios

    Science.gov (United States)

    Hubert, David A.; Lewis, Kati J.

    2014-01-01

    This essay presents the findings of an authentic and holistic assessment, using a random sample of one hundred student General Education ePortfolios, of two of Salt Lake Community College's (SLCC) college-wide learning outcomes: quantitative literacy (QL) and information literacy (IL). Performed by four faculty from biology, humanities, and…

  12. Quantitative and Qualitative Analysis of Nutrition and Food Safety Information in School Science Textbooks of India

    Science.gov (United States)

    Subba Rao, G. M.; Vijayapushapm, T.; Venkaiah, K.; Pavarala, V.

    2012-01-01

    Objective: To assess quantity and quality of nutrition and food safety information in science textbooks prescribed by the Central Board of Secondary Education (CBSE), India for grades I through X. Design: Content analysis. Methods: A coding scheme was developed for quantitative and qualitative analyses. Two investigators independently coded the…

  13. Climate Change Education: Quantitatively Assessing the Impact of a Botanical Garden as an Informal Learning Environment

    Science.gov (United States)

    Sellmann, Daniela; Bogner, Franz X.

    2013-01-01

    Although informal learning environments have been studied extensively, ours is one of the first studies to quantitatively assess the impact of learning in botanical gardens on students' cognitive achievement. We observed a group of 10th graders participating in a one-day educational intervention on climate change implemented in a botanical…

  14. 76 FR 27384 - Agency Information Collection Activity (Veteran Suicide Prevention Online Quantitative Surveys...

    Science.gov (United States)

    2011-05-11

    ... better understand Veterans and their families' awareness of VA's suicide prevention and mental health... AFFAIRS Agency Information Collection Activity (Veteran Suicide Prevention Online Quantitative Surveys.... Veterans Online Survey, VA Form 10-0513. b. Veterans Family Online Survey, VA Form 10-0513a. c....

  15. Quantitative Analysis of Volatile Impurities in Diallyldimethylammonium Chloride Monomer Solution by Gas Chromatography Coupled with Liquid-Liquid Extraction

    Directory of Open Access Journals (Sweden)

    Cheng Liu

    2017-01-01

    A quantitative analysis method for volatile impurities in diallyldimethylammonium chloride (DADMAC) monomer solution was established in this paper. The volatile impurities were quantitatively analyzed by gas chromatography (GC) coupled with liquid-liquid extraction, with trichloromethane as the extraction solvent and n-hexane as the internal standard, and the chromatographic conditions, quantitative methods, and extraction conditions were systematically investigated in detail. The results showed that excellent linear relationships for 5 volatile impurities (dimethylamine, allyldimethylamine, allyl chloride, allyl alcohol, and allyl aldehyde) were obtained in the range of 1–100 mg·L⁻¹. The method also showed good specificity, recovery (95.0%–107.5%), and relative standard deviation (RSD, 1.40%–7.67%). This method could accurately quantify all the volatile impurities in DADMAC monomer solution in a single run with a low detection limit. Furthermore, this method is conducive to the preparation of highly pure DADMAC monomer and the development of national and international standards for DADMAC monomer product quality, and the results could provide a strong foundation for research on the regulation and mechanism of impurity effects on monomer reactivity in polymerization.
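
    Internal-standard quantitation of this kind regresses the analyte/IS peak-area ratio against concentration, which cancels injection-to-injection variation; a minimal sketch with hypothetical numbers:

```python
import numpy as np

# Calibration: analyte/IS peak-area ratios at known concentrations (mg/L),
# with the same amount of n-hexane internal standard in every injection
conc       = np.array([1, 5, 10, 50, 100], dtype=float)
area_ratio = np.array([0.021, 0.104, 0.209, 1.040, 2.080])

slope, intercept = np.polyfit(conc, area_ratio, 1)

# Unknown sample spiked with the same amount of internal standard
unknown_ratio = 0.520
conc_found = (unknown_ratio - intercept) / slope
print(f"impurity concentration ~ {conc_found:.1f} mg/L")  # analyte unspecified
```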

  16. An efficient extraction method for quantitation of adenosine triphosphate in mammalian tissues and cells.

    Science.gov (United States)

    Chida, Junji; Yamane, Kazuhiko; Takei, Tunetomo; Kido, Hiroshi

    2012-05-21

    Firefly bioluminescence is widely used in the measurement of adenosine 5'-triphosphate (ATP) levels in biological materials. For such assays in tissues and cells, ATP must be extracted away from protein in the initial step and extraction efficacy is the main determinant of the assay accuracy. Extraction reagents recommended in the commercially available ATP assay kits are chaotropic reagents, trichloroacetic acid (TCA), perchloric acid (PCA), and ethylene glycol (EG), which extract nucleotides through protein precipitation and/or nucleotidase inactivation. We found that these reagents are particularly useful for measuring ATP levels in materials with relatively low protein concentrations such as blood cells, cultured cells, and bacteria. However, these methods are not suitable for ATP extraction from tissues with high protein concentrations, because some ATP may be co-precipitated with the insolubilized protein during homogenization and extraction, and it could also be precipitated by neutralization in the acid extracts. Here we found that a phenol-based extraction method markedly increased the ATP and other nucleotides extracted from tissues. In addition, phenol extraction does not require neutralization before the luciferin-luciferase assay step. ATP levels analyzed by luciferase assay in various tissues extracted by Tris-EDTA-saturated phenol (phenol-TE) were over 17.8-fold higher than those extracted by TCA and over 550-fold higher than those in EG extracts. Here we report a simple, rapid, and reliable phenol-TE extraction procedure for ATP measurement in tissues and cells by luciferase assay.

  17. Noninvasive Assessment of Oxygen Extraction Fraction in Chronic Ischemia Using Quantitative Susceptibility Mapping at 7 Tesla.

    Science.gov (United States)

    Uwano, Ikuko; Kudo, Kohsuke; Sato, Ryota; Ogasawara, Kuniaki; Kameda, Hiroyuki; Nomura, Jun-Ichi; Mori, Futoshi; Yamashita, Fumio; Ito, Kenji; Yoshioka, Kunihiro; Sasaki, Makoto

    2017-08-01

    The oxygen extraction fraction (OEF) is an effective metric to evaluate metabolic reserve in chronic ischemia. However, OEF is considered to be accurately measured only when using positron emission tomography (PET). Thus, we investigated whether OEF maps generated by magnetic resonance quantitative susceptibility mapping (QSM) at 7 Tesla enabled detection of OEF changes when compared with those obtained with PET. Forty-one patients with chronic stenosis/occlusion of the unilateral internal carotid artery or middle cerebral artery were examined using 7 Tesla MRI and PET scanners. QSM images were obtained from 3-dimensional T2*-weighted images, using a multiple dipole-inversion algorithm. OEF maps were generated based on susceptibility differences between venous structures and brain tissues on QSM images. OEF ratios of the ipsilateral middle cerebral artery territory against the contralateral side were calculated on the QSM-OEF and PET-OEF images, using an anatomic template. The OEF ratio in the middle cerebral artery territory showed significant correlations between QSM-OEF and PET-OEF maps (r=0.69; P<0.001). A QSM-OEF ratio cutoff of 1.09, as determined by receiver operating characteristic analysis, showed a sensitivity and specificity of 0.82 and 0.86, respectively, for a substantial increase in the PET-OEF ratio. Absolute QSM-OEF values were significantly correlated with PET-OEF values in the patients with increased PET-OEF. OEF ratios on QSM-OEF images at 7 Tesla showed a good correlation with those on PET-OEF images in patients with unilateral steno-occlusive internal carotid artery/middle cerebral artery lesions, suggesting that noninvasive OEF measurement by MRI can be a substitute for PET. © 2017 American Heart Association, Inc.
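
    QSM-based OEF estimation generally rests on the susceptibility difference between venous blood and surrounding tissue. One widely used relation, shown here as a sketch of the general approach rather than this paper's exact model, uses the susceptibility difference between fully deoxygenated and fully oxygenated blood (Delta-chi_do) and the hematocrit (Hct):

```latex
\Delta\chi_{\text{vein--tissue}} \;=\; \Delta\chi_{do}\cdot \mathrm{Hct}\cdot \mathrm{OEF}
\quad\Longrightarrow\quad
\mathrm{OEF} \;=\; \frac{\Delta\chi_{\text{vein--tissue}}}{\Delta\chi_{do}\cdot \mathrm{Hct}}
```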

  18. Quantitation of Insulin Analogues in Serum Using Immunoaffinity Extraction, Liquid Chromatography, and Tandem Mass Spectrometry.

    Science.gov (United States)

    Van Der Gugten, J Grace; Wong, Sophia; Holmes, Daniel T

    2016-01-01

    Insulin analysis is used in combination with glucose, C-peptide, beta-hydroxybutyrate, and proinsulin determination for the investigation of adult hypoglycemia. The most common cause is the administration of too much insulin or insulin secretagogue to a diabetic patient or inadequate caloric intake after administration of either. Occasionally there is a question as to whether hypoglycemia has been caused by an exogenous insulin-whether by accident, intent, or even malicious intent. While traditionally this was confirmed by a low or undetectable C-peptide in a hypoglycemic specimen, this finding is not entirely specific and would also be expected in the context of impaired counter-regulatory response, fatty acid oxidation defects, and liver failure-though beta-hydroxybutyrate levels can lend diagnostic clarity. For this reason, insulin is often requested. However, popular automated chemiluminescent immunoassays for insulin have distinctly heterogeneous performance in detecting analogue synthetic insulins with cross-reactivities ranging from near 0 % to greater than 100 %. The ability to detect synthetic insulins is vendor-specific and varies between insulin products. Liquid Chromatography and Tandem Mass Spectrometry (LC-MS/MS) offers a means to circumvent these analytical issues and both quantify synthetic insulins and identify the specific type. We present an immunoaffinity extraction and LC-MS/MS method capable of independent identification and quantitation of native sequence insulins (endogenous, Insulin Regular, Insulin NPH), and analogues Glargine, Lispro, Detemir, and Aspart with an analytical sensitivity for endogenous insulin of between 1 and 2 μU/mL in patient serum samples.

  19. Medicaid Analytic eXtract (MAX) General Information

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicaid Analytic eXtract (MAX) data is a set of person-level data files on Medicaid eligibility, service utilization, and payments. The MAX data are created to...

  1. Cladosporium herbarum extract characterized by means of quantitative immunoelectrophoretic methods with special attention to immediate type allergy.

    Science.gov (United States)

    Lowenstein, H; Aukrust, L; Gravesen, S

    1977-01-01

    Freeze-dried extract of Cladosporium herbarum Link ex Fr. was obtained by growing, harvesting, extracting, centrifuging, dialysing and freeze-drying. Quantitative immunoelectrophoresis using rabbit antibodies revealed the extraction procedure to be reproducible and the extract to be composed of 57 antigens, none of which originated from the substrate used in the growth. The molecular weight distribution and the approximate molecular weight of some antigens of C. herbarum were obtained using gel filtration. The pI distribution and the approximate pIs of a few distinct antigens of C. herbarum were obtained by isoelectric focusing. Preliminary identification of 4 allergens from C. herbarum was performed by means of CRIE (crossed radioimmunoelectrophoresis).

  2. Sender-receiver systems and applying information theory for quantitative synthetic biology.

    Science.gov (United States)

    Barcena Menendez, Diego; Senthivel, Vivek Raj; Isalan, Mark

    2015-02-01

    Sender-receiver (S-R) systems abound in biology, with communication systems sending information in various forms. Information theory provides a quantitative basis for analysing these processes and is being applied to study natural genetic, enzymatic and neural networks. Recent advances in synthetic biology are providing us with a wealth of artificial S-R systems, giving us quantitative control over networks with a finite number of well-characterised components. Combining the two approaches can help to predict how to maximise signalling robustness, and will allow us to make increasingly complex biological computers. Ultimately, pushing the boundaries of synthetic biology will require moving beyond engineering the flow of information and towards building more sophisticated circuits that interpret biological meaning.

  3. Qualitative and Quantitative Data on the Use of the Internet for Archaeological Information

    Directory of Open Access Journals (Sweden)

    Lorna-Jane Richardson

    2015-04-01

    Full Text Available These survey results are from an online survey of 577 UK-based archaeological volunteers, professional archaeologists and archaeological organisations. These data cover a variety of topics related to how and why people access the Internet for information about archaeology, including demographic information, activity relating to accessing information on archaeological topics, archaeological sharing and networking and the use of mobile phone apps and QR codes for public engagement. There is wide scope for further qualitative and quantitative analysis of these data.

  4. Quantitative extraction of bedrock exposed rate based on unmanned aerial vehicle data and TM image in Karst Environment

    Science.gov (United States)

    wang, hongyan; li, qiangzi; du, xin; zhao, longcai

    2016-04-01

    In the karst regions of Southwest China, rocky desertification is one of the most serious forms of land degradation, and the bedrock exposed rate is an important index for assessing its severity. Owing to its large-scale coverage, observation frequency, efficiency, and synthesis of multiple data sources, remote sensing is a promising way to monitor and assess karst rocky desertification at large scales. However, field measurement of the bedrock exposed rate is difficult, and existing remote sensing methods cannot directly extract it owing to the high complexity and heterogeneity of karst environments. Therefore, taking Xingren County as the research area, we developed a quantitative extraction of the bedrock exposed rate based on multi-scale remote sensing data from an unmanned aerial vehicle (UAV) and Landsat TM. Firstly, we used an object-oriented method to classify the UAV imagery accurately and, based on the extracted rock areas, calculated the bedrock exposed rate at a 30 m grid scale. Part of the calculated samples served as training data and the remainder as validation data. Secondly, within each grid cell the band reflectances of the TM data were extracted and several rock and vegetation indices (NDVI, SAVI, etc.) were calculated. Finally, a neural network model was established to estimate the bedrock exposed rate; the correlation coefficient (R) of the network model was 0.855, the correlation coefficient (R) of the validation was 0.677, and the root mean square error (RMSE) was 0.073. Based on this quantitative inversion model, a distribution map of the bedrock exposed rate in Xingren County was obtained. Keywords: bedrock exposed rate, quantitative extraction, UAV and TM data, karst rocky desertification.
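
    A hedged sketch of the regression step described above, assuming scikit-learn; the NDVI/SAVI formulas are standard, but the features, samples and network size are synthetic placeholders rather than the paper's configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Sketch: per 30 m grid cell, TM band reflectances and spectral indices are
# regressed against a UAV-derived bedrock exposed rate. All data below are
# synthetic stand-ins.
rng = np.random.default_rng(0)
n = 500
red = rng.uniform(0.05, 0.40, n)
nir = rng.uniform(0.10, 0.50, n)
ndvi = (nir - red) / (nir + red)                 # vegetation index
savi = 1.5 * (nir - red) / (nir + red + 0.5)     # soil-adjusted index, L = 0.5
X = np.column_stack([red, nir, ndvi, savi])
y = rng.uniform(0.0, 1.0, n)                     # stand-in UAV-derived rates

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
pred = np.clip(model.predict(X_te), 0.0, 1.0)    # exposed rate lies in [0, 1]
rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))
print(f"validation RMSE: {rmse:.3f}")
```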

  5. Puffed cereals with added chamomile – quantitative analysis of polyphenols and optimization of their extraction method

    Directory of Open Access Journals (Sweden)

    Tomasz Blicharski

    2017-05-01

    For most of the analyzed compounds, the highest yields were obtained by ultrasound-assisted extraction (UAE). The highest temperature tested during the ultrasonication process (60°C) increased the efficiency of extraction without degradation of polyphenols. UAE reaches extraction equilibrium easily and therefore permits shorter extraction times, reducing the energy input. Furthermore, UAE meets the requirements of 'Green Chemistry'.

  6. Using text mining techniques to extract phenotypic information from the PhenoCHF corpus

    OpenAIRE

    Alnazzawi, Noha; Thompson, Paul; Batista-Navarro, Riza; Ananiadou, Sophia

    2015-01-01

    Background: Phenotypic information locked away in unstructured narrative text presents significant barriers to information accessibility, both for clinical practitioners and for computerised applications used for clinical research purposes. Text mining (TM) techniques have previously been applied successfully to extract different types of information from text in the biomedical domain. They have the potential to be extended to allow the extraction of information relating to phenotypes from free text…

  7. Automatic Data Extraction from Websites for Generating Aquatic Product Market Information

    Institute of Scientific and Technical Information of China (English)

    YUAN Hong-chun; CHEN Ying; SUN Yue-fu

    2006-01-01

    The massive web-based information resources have led to an increasing demand for effective automatic retrieval of target information for web applications. This paper introduces a web-based data extraction tool that deploys various algorithms to locate, extract and filter tabular data from HTML pages and to transform them into new web-based representations. The tool has been applied in an aquaculture web application platform for extracting and generating aquatic product market information. Results prove that this tool is very effective in extracting the required data from web pages.

  8. Isolation and quantitation of amygdalin in Apricot-kernel and Prunus Tomentosa Thunb. by HPLC with solid-phase extraction.

    Science.gov (United States)

    Lv, Wei-Feng; Ding, Ming-Yu; Zheng, Rui

    2005-08-01

    Apricot-kernel and Prunus Tomentosa Thunb. are traditional Chinese herbal medicines that contain amygdalin as their major active ingredient. In this report, three methods for the extraction of amygdalin from the medicinal materials are compared: ultrasonic extraction with methanol, Soxhlet extraction with methanol, and reflux extraction with water. The results show that reflux extraction with water containing 0.1% citric acid is the best option; the optimal reflux time is 2.5 h at a water bath temperature of 60 degrees C. A solid-phase extraction method using C18 and multiwalled carbon nanotubes as adsorbents was established for the pretreatment of the reflux extract, and the results show that both adsorbents have high adsorptive capacity for amygdalin and a good separation effect. In order to quantitate amygdalin in Apricot-kernel and Prunus Tomentosa Thunb., a reversed-phase high-performance liquid chromatography method using methanol-water (15:85 for the first 30 min, pure methanol thereafter) as the mobile phase was developed, with good results.

  9. Semantic information extracting system for classification of radiological reports in radiology information system (RIS)

    Science.gov (United States)

    Shi, Liehang; Ling, Tonghui; Zhang, Jianguo

    2016-03-01

    Radiologists currently use a variety of terminologies and standards in most hospitals in China, and multiple terminologies may even be used for different sections within one department. In this presentation, we introduce a medical semantic comprehension system (MedSCS) to extract semantic information about clinical findings and conclusions from free-text radiology reports so that the reports can be classified correctly against medical term indexing standards such as RadLex or SNOMED CT. Our system (MedSCS) is based on both rule-based and statistics-based methods, which improves the performance and scalability of MedSCS. In order to evaluate the overall performance of the system and measure the accuracy of the outcomes, we developed computation methods to calculate precision rate, recall rate, F-score and exact confidence intervals.
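
    The evaluation quantities named here are standard; as a short sketch, assuming SciPy and invented classification counts, the following computes precision, recall, F-score and a Clopper-Pearson exact confidence interval of the kind the abstract mentions.

```python
from scipy.stats import beta

# Sketch of the evaluation quantities named in the abstract; the counts from
# classifying radiology reports are invented.
def exact_ci(k, n, alpha=0.05):
    """Clopper-Pearson exact confidence interval for k successes in n trials."""
    lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
    hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
    return lo, hi

def precision_recall_f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

tp, fp, fn = 87, 9, 13                    # hypothetical counts
p, r, f1 = precision_recall_f1(tp, fp, fn)
lo, hi = exact_ci(tp, tp + fp)            # exact CI on the precision rate
print(f"P={p:.3f} ({lo:.3f}-{hi:.3f}), R={r:.3f}, F1={f1:.3f}")
```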

  10. Tagline: Information Extraction for Semi-Structured Text Elements in Medical Progress Notes

    Science.gov (United States)

    Finch, Dezon Kile

    2012-01-01

    Text analysis has become an important research activity in the Department of Veterans Affairs (VA). Statistical text mining and natural language processing have been shown to be very effective for extracting useful information from medical documents. However, neither of these techniques is effective at extracting the information stored in…

  11. Information Extraction, Data Integration, and Uncertain Data Management: The State of The Art

    NARCIS (Netherlands)

    Habib, Mena B.; Keulen, van Maurice

    2011-01-01

    Information extraction, data integration, and uncertain data management are different areas of research that have received much attention over the last two decades. Much research has tackled these areas individually. However, information extraction systems should be integrated with data integration methods…

  12. Semantic Preview Benefit in English: Individual Differences in the Extraction and Use of Parafoveal Semantic Information

    Science.gov (United States)

    Veldre, Aaron; Andrews, Sally

    2016-01-01

    Although there is robust evidence that skilled readers of English extract and use orthographic and phonological information from the parafovea to facilitate word identification, semantic preview benefits have been elusive. We sought to establish whether individual differences in the extraction and/or use of parafoveal semantic information could…

  13. An Effective Approach to Biomedical Information Extraction with Limited Training Data

    Science.gov (United States)

    Jonnalagadda, Siddhartha

    2011-01-01

    In the current millennium, extensive use of computers and the internet caused an exponential increase in information. Few research areas are as important as information extraction, which primarily involves extracting concepts and the relations between them from free text. Limitations in the size of training data, lack of lexicons and lack of…

  14. Information Risk Management: Qualitative or Quantitative? Cross industry lessons from medical and financial fields

    Directory of Open Access Journals (Sweden)

    Upasna Saluja

    2012-06-01

    Full Text Available Enterprises across the world are taking a hard look at their risk management practices. A number of qualitative and quantitative models and approaches are employed by risk practitioners to keep risk under check. As a norm, most organizations end up choosing the more flexible, easier-to-deploy and customizable qualitative models of risk assessment. In practice, such models often call upon practitioners to make qualitative judgments on a relative rating scale, which brings in considerable room for errors, biases and subjectivity. Under the quantitative risk analysis approach, on the other hand, the estimation of risk is connected with the application of numerical measures of some kind. Medical risk management models lend themselves as ideal candidates for deriving lessons for information security risk management: the considerably developed understanding of risk in the medical field, especially survival analysis, can be applied to the risks that information infrastructures face. Similarly, the financial risk management discipline prides itself on perhaps the most quantifiable of risk management models, those for market risk and credit risk; information security risk management can make risk measurement more objective and quantitative by adopting the approach of credit risk. During the recent financial crisis many investors and financial institutions lost money or went bankrupt because they did not apply the basic principles of risk management. Learning from the financial crisis provides some valuable lessons for information risk management.

  15. Ionic liquids based microwave-assisted extraction of lichen compounds with quantitative spectrophotodensitometry analysis.

    Science.gov (United States)

    Bonny, Sarah; Paquin, Ludovic; Carrié, Daniel; Boustie, Joël; Tomasi, Sophie

    2011-11-30

    An ionic-liquid-based extraction method has been applied to the effective extraction of norstictic acid, a common depsidone isolated from Pertusaria pseudocorallina, a crustose lichen. Five 1-alkyl-3-methylimidazolium ionic liquids (ILs) differing in alkyl chain and anion composition were investigated for extraction efficiency. The extracted amount of norstictic acid was determined after recovery on HPTLC with a spectrophotodensitometer. The proposed approaches (IL-based microwave-assisted extraction (IL-MAE) and IL-based heat extraction (IL-HE)) were evaluated against usual solvents such as tetrahydrofuran in heat-reflux extraction (HE) and microwave-assisted extraction (MAE). The results indicated that the characteristics of both the alkyl chain and the anion influenced the extraction of polyphenolic compounds. The sulfate-based ILs [C(1)mim][MSO(4)] and [C(2)mim][ESO(4)] presented the best extraction efficiency for norstictic acid. The reduction of extraction time from HE to MAE (2 h to 5 min) and a non-negligible ratio of norstictic acid in the total extract (28%) support the suitability of the proposed method. The approach was successfully applied to obtain additional compounds from other crustose lichens (Pertusaria amara and Ochrolechia parella).

  16. Towards an information extraction and knowledge formation framework based on Shannon entropy

    Directory of Open Access Journals (Sweden)

    Iliescu Dragoș

    2017-01-01

    Full Text Available This paper addresses the quantification of information, taking the specific domain of nonconforming product management as the information source; the work is a case study. Raw data were gathered from a heavy industrial works company, and information extraction and knowledge formation are considered herein. The method used for estimating information quantity is based on the Shannon entropy formula. The information and entropy spectra are decomposed and analysed to extract specific information and form knowledge from it. The results of the entropy analysis point out the information the organisation needs to acquire, presented as a specific knowledge type.
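
    As a concrete reading of the method, the sketch below computes the Shannon entropy of an invented set of nonconformity categories; the category labels and counts are placeholders, not the company's data.

```python
import math
from collections import Counter

# Sketch of the information-quantity estimate described above: the Shannon
# entropy H = -sum(p_i * log2(p_i)) over nonconformity categories.
records = ["dimensional", "surface", "dimensional", "material",
           "dimensional", "surface", "weld", "material"]
counts = Counter(records)
n = len(records)
entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
print(f"H = {entropy:.3f} bits over {len(counts)} categories")
```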

  17. Information Management Processes for Extraction of Student Dropout Indicators in Courses in Distance Mode

    Directory of Open Access Journals (Sweden)

    Renata Maria Abrantes Baracho

    2016-04-01

    Full Text Available This research addresses the use of information management processes to extract student dropout indicators in distance mode courses. Distance education in Brazil aims to facilitate access to information. The MEC (Ministry of Education) announced, in the second semester of 2013, that the main obstacles faced by institutions offering courses in this mode were students dropping out and the resistance of both educators and students to this mode. The research used a mixed methodology, qualitative and quantitative, to obtain student dropout indicators. The factors found and validated in this research were: lack of interest from students, insufficient training in the use of the virtual learning environment for students, structural problems in the schools chosen to offer the course, students without e-mail, incoherent answers to course activities, and students' lack of knowledge in using the computer tool. The scenario considered was a course offered in distance mode called Aluno Integrado (Integrated Student).

  1. Identifying contributors of DNA mixtures by means of quantitative information of STR typing

    DEFF Research Database (Denmark)

    Tvedebrink, Torben; Eriksen, Poul Svante; Mogensen, Helle Smidt

    2012-01-01

    …identified using polymorphic genetic markers. However, modern typing techniques supply additional quantitative data, which contain very important information about the observed evidence. This is particularly true for cases of DNA mixtures, where more than one individual has contributed to the observed biological stain. This article presents a method for including the quantitative information of short tandem repeat (STR) DNA mixtures in the likelihood ratio (LR). Also, an efficient algorithmic method for finding the best matching combination of DNA mixture profiles is derived and implemented in an on-line tool for two- and three-person DNA mixtures. Finally, we demonstrate for two-person mixtures how this best matching pair of profiles can be used in estimating the likelihood ratio using importance sampling. The reason for using importance sampling for estimating the likelihood ratio is the often vast number…
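
    A toy sketch of the combinatorial core only (not the authors' statistical model, which additionally exploits the quantitative peak information): enumerate two-person genotype pairs that explain the alleles observed at a single STR locus and rank them by a Hardy-Weinberg prior. The locus, alleles and frequencies are invented.

```python
from itertools import combinations_with_replacement

# Invented allele frequencies at one STR locus and the observed mixture.
freqs = {"12": 0.30, "14": 0.25, "16": 0.45}
observed = {"12", "14", "16"}

def genotype_prob(g):
    # Hardy-Weinberg probability of an unordered genotype (a, b).
    a, b = g
    return freqs[a] ** 2 if a == b else 2 * freqs[a] * freqs[b]

genotypes = list(combinations_with_replacement(sorted(freqs), 2))
pairs = [(g1, g2, genotype_prob(g1) * genotype_prob(g2))
         for g1 in genotypes for g2 in genotypes
         if set(g1) | set(g2) == observed]   # the pair must explain every allele

# Best matching combinations by prior probability (top 3).
for g1, g2, prior in sorted(pairs, key=lambda t: -t[2])[:3]:
    print(g1, g2, f"prior={prior:.4f}")
```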

  2. Qualitative and quantitative analyses of Compound Danshen extract based on (1)H NMR method and its application for quality control.

    Science.gov (United States)

    Yan, Kai-Jing; Chu, Yang; Huang, Jian-Hua; Jiang, Miao-Miao; Li, Wei; Wang, Yue-Fei; Huang, Hui-Yong; Qin, Yu-Hui; Ma, Xiao-Hui; Zhou, Shui-Ping; Sun, Henry; Wang, Wei

    2016-11-30

    In this study, a new approach using (1)H NMR spectroscopy combined with chemometric methods was developed for qualitative and quantitative analyses of extracts of Compound Danshen Dripping Pills (CDDP). For the qualitative analysis, metabolites present in Compound Danshen extract (CDE, an extraction intermediate of CDDP) were detected, including phenolic acids, saponins, saccharides, organic acids and amino acids, by the proposed (1)H NMR method, and the metabolite profiles were further analyzed by selected chemometric algorithms to define threshold values for product quality evaluation. Moreover, three main phenolic acids (danshensu, salvianolic acid B, and protocatechuic aldehyde) in CDE were determined simultaneously, and method validation in terms of linearity, precision, repeatability, accuracy, and stability of the dissolved target compounds in solution was performed. The average recoveries varied between 84.20% and 110.75% while the RSDs were below 6.34% for the three phenolic acids. This (1)H NMR method offers an integral view of the extract composition, allows the qualitative and quantitative analysis of CDDP, and has the potential to be a supplementary tool to UPLC/HPLC for quality assessment of Chinese herbal medicines.

  3. Guided protein extraction from formalin-fixed tissues for quantitative multiplex analysis avoids detrimental effects of histological stains.

    Science.gov (United States)

    Becker, Karl-Friedrich; Schott, Christina; Becker, Ingrid; Höfler, Heinz

    2008-05-01

    Formalin fixed and paraffin embedded (FFPE) tissues are the basis for histopathological diagnosis of many diseases around the world. For translational research and routine diagnostics, protein analysis from FFPE tissues is very important. We evaluated the potential influence of six histological stains, including hematoxylin (Mayer and Gill), fast red, light green, methyl blue and toluidine blue, for yield, electrophoretic mobility in 1-D gels, and immunoreactivity of proteins isolated from formalin-fixed breast cancer tissues. Proteins extracted from stained FFPE tissues using a recently established technique were compared with proteins obtained from the same tissues but without prior histological staining. Western blot and quantitative protein lysate microarray analysis demonstrated that histological staining can result in decreased protein yield but may not have much influence on immunoreactivity and electrophoretic mobility. Interestingly, not all staining protocols tested are compatible with subsequent protein analysis. The commonly used hematoxylin staining was found to be suitable for multiplexed quantitative protein measurement technologies although protein extraction was less efficient. For best results we suggest a guided protein extraction method, in which an adjacent hematoxylin/eosin-stained tissue section is used to control dissection of an unstained specimen for subsequent protein extraction and quantification for research and diagnosis.

  4. Automated information extraction of key trial design elements from clinical trial publications.

    Science.gov (United States)

    de Bruijn, Berry; Carini, Simona; Kiritchenko, Svetlana; Martin, Joel; Sim, Ida

    2008-11-06

    Clinical trials are one of the most valuable sources of scientific evidence for improving the practice of medicine. The Trial Bank project aims to improve structured access to trial findings by including formalized trial information into a knowledge base. Manually extracting trial information from published articles is costly, but automated information extraction techniques can assist. The current study highlights a single architecture to extract a wide array of information elements from full-text publications of randomized clinical trials (RCTs). This architecture combines a text classifier with a weak regular expression matcher. We tested this two-stage architecture on 88 RCT reports from 5 leading medical journals, extracting 23 elements of key trial information such as eligibility rules, sample size, intervention, and outcome names. Results prove this to be a promising avenue to help critical appraisers, systematic reviewers, and curators quickly identify key information elements in published RCT articles.
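

    The two-stage architecture can be pictured with a small sketch, assuming scikit-learn; the training sentences, labels and the sample-size regular expression are invented stand-ins, not the study's actual classifier or rules.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stage 1 trains a cheap text classifier to spot candidate sentences; stage 2
# applies a weak regular expression to pull the value out of accepted ones.
train = ["We enrolled 120 patients with diabetes.",
         "A total of 48 subjects were randomized.",
         "The mechanism of action remains unclear.",
         "Previous studies reported conflicting results."]
labels = [1, 1, 0, 0]            # 1 = sentence likely states the sample size

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train, labels)

SAMPLE_SIZE = re.compile(r"\b(\d+)\s+(?:patients|subjects|participants)\b")

def extract_sample_size(sentence):
    if clf.predict([sentence])[0] != 1:   # stage 1: filter candidate sentences
        return None
    m = SAMPLE_SIZE.search(sentence)      # stage 2: weak regex pulls the value
    return int(m.group(1)) if m else None

print(extract_sample_size("We enrolled 36 patients with diabetes."))
```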

  5. Extraction of information of targets based on frame buffer

    Science.gov (United States)

    Han, Litao; Kong, Qiaoli; Zhao, Xiangwei

    2008-10-01

    Of all the modes of perception, vision is the main channel through which an intelligent virtual agent (IVA) obtains environmental information. Behavior simulation of intelligent objects in an interactive virtual environment must be both realistic and computable in real time. This paper proposes a new method of acquiring environmental information. First, visual images are generated by setting a second viewport at the location of the IVA's viewpoint; the target location, distance, azimuth, and other basic geometric and semantic information can then be acquired from these images. Experiments show that the method makes full use of the graphics hardware, with a simple process and high efficiency.

  6. Extracting local information from crowds through betting markets

    Science.gov (United States)

    Weijs, Steven

    2015-04-01

    In this research, a set-up is considered in which users can bet against a forecasting agency to challenge their probabilistic forecasts. From an information theory standpoint, a reward structure is considered that either provides the forecasting agency with better information, paying the successful providers of information for their winning bets, or funds excellent forecasting agencies through users that think they know better. Especially for local forecasts, the approach may help to diagnose model biases and to identify local predictive information that can be incorporated in the models. The challenges and opportunities for implementing such a system in practice are also discussed.

  7. Extraction of spatio-temporal information of earthquake event based on semantic technology

    Science.gov (United States)

    Fan, Hong; Guo, Dan; Li, Huaiyuan

    2015-12-01

    In this paper a web information extraction method is presented that identifies a variety of thematic events using an event knowledge framework derived from text training, and then uses syntactic analysis to extract the key information of each event. By combining the semantic information of the text with domain knowledge of the event, the method makes the extraction of the information of interest more accurate. Web-based earthquake news extraction is taken as an example. The paper first outlines the overall approach, and then details the key algorithms and experiments for extracting seismic events. Finally, accuracy analyses and evaluation experiments are conducted, demonstrating that the proposed method is a promising way of mining hot events.

  8. Extracting Coherent Information from Noise Based Correlation Processing

    Science.gov (United States)

    2015-09-30

    LONG-TERM GOALS: The goal of this research is to establish methodologies to utilize ambient noise in the ocean and to determine what scenarios… PUBLICATIONS: [1] "Monitoring deep-ocean temperatures using acoustic ambient noise," K. W. Woolfe, S. Lani, K. G. Sabra, W. A. Kuperman, Geophys. Res. Lett., 42, 2878–2884, doi:10.1002/2015GL063438 (2015). [2] "Optimized extraction of coherent arrivals from ambient noise correlations in…

  9. Quantitative Ultrasound-Assisted Extraction for Trace-Metal Determination: An Experiment for Analytical Chemistry

    Science.gov (United States)

    Lavilla, Isela; Costas, Marta; Pena-Pereira, Francisco; Gil, Sandra; Bendicho, Carlos

    2011-01-01

    Ultrasound-assisted extraction (UAE) is introduced to upper-level analytical chemistry students as a simple strategy focused on sample preparation for trace-metal determination in biological tissues. Nickel extraction in seafood samples and quantification by electrothermal atomic absorption spectrometry (ETAAS) are carried out by a team of four…

  10. A Novel Quantitative Analysis Model for Information System Survivability Based on Conflict Analysis

    Institute of Scientific and Technical Information of China (English)

    WANG Jian; WANG Huiqiang; ZHAO Guosheng

    2007-01-01

    This paper describes a novel quantitative analysis model for system survivability based on conflict analysis, which provides a direct-viewing survivable situation. Based on the three-dimensional state space of the conflict, each player's efficiency matrix on its credible motion set can be obtained. The player whose desire is strongest initiates the move, and the overall state transition matrix of the information system can then be derived. In addition, the process of modeling and stability analysis of the conflict can be converted into a Markov analysis process, so that the resulting occurrence probabilities of each feasible situation help the players quantitatively judge the likelihood of the situations they pursue in the conflict. Compared with existing methods, which are limited to post-hoc explanation of a system's survivable situation, the proposed model is suitable for quantitatively analyzing and forecasting the future development of system survivability. The experimental results show that the model may be effectively applied to quantitative survivability analysis, and it has good application prospects in practice.
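
    The Markov step lends itself to a short sketch: given a state transition matrix over feasible conflict situations (the 3-state matrix below is invented), the long-run occurrence probability of each situation is the stationary distribution.

```python
import numpy as np

# Stationary distribution pi of a Markov chain: pi P = pi with sum(pi) = 1,
# i.e. the left eigenvector of P for eigenvalue 1.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5]])

vals, vecs = np.linalg.eig(P.T)              # left eigenvectors of P
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()                               # normalize to probabilities
print("occurrence probabilities:", np.round(pi, 3))
```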

  11. Stability Test and Quantitative and Qualitative Analyses of the Amino Acids in Pharmacopuncture Extracted from Scolopendra subspinipes mutilans

    Science.gov (United States)

    Cho, GyeYoon; Han, KyuChul; Yoon, JinYoung

    2015-01-01

    Objectives: Scolopendra subspinipes mutilans (S. subspinipes mutilans) is known as a traditional medicine and includes various amino acids, peptides and proteins. The amino acids in the pharmacopuncture extracted from S. subspinipes mutilans were analyzed quantitatively and qualitatively by using derivatization methods and high performance liquid chromatography (HPLC) over a 12 month period to confirm the preparation's stability. Methods: Amino acids of the pharmacopuncture extracted from S. subspinipes mutilans were derivatized by using O-phthaldialdehyde (OPA) & 9-fluorenyl methoxy carbonyl chloride (FMOC) reagent and were analyzed using HPLC. The amino acids were detected by using a diode array detector (DAD) and a fluorescence detector (FLD) to compare a mixed amino acid standard (STD) to the pharmacopuncture from centipedes. Stability tests on the pharmacopuncture from centipedes were done using HPLC under three conditions: a room temperature test chamber, an acceleration test chamber, and a cold test chamber. Results: The pharmacopuncture from centipedes was prepared by using the method of the Korean Pharmacopuncture Institute (KPI) and through quantitative analyses was shown to contain 9 of the 16 amino acids in the mixed amino acid STD. The amounts of the amino acids in the pharmacopuncture from centipedes were 34.37 ppm of aspartate, 123.72 ppm of arginine, 170.63 ppm of alanine, 59.55 ppm of leucine and 57 ppm of lysine. The relative standard deviation (RSD %) results for the pharmacopuncture from centipedes had a maximum value of 14.95% and a minimum value of 1.795% across the room temperature, acceleration, and cold test chamber stability tests. Conclusion: Stability tests on and quantitative and qualitative analyses of the amino acids in the pharmacopuncture extracted from centipedes were performed by using derivatization methods and HPLC. Through this research, we hope to determine the relationship between time and the…

  12. Extracting Metallic Nanoparticles from Soils for Quantitative Analysis: Method Development Using Engineered Silver Nanoparticles and SP-ICP-MS.

    Science.gov (United States)

    Schwertfeger, D M; Velicogna, Jessica R; Jesmer, Alexander H; Saatcioglu, Selin; McShane, Heather; Scroggins, Richard P; Princz, Juliska I

    2017-02-21

    The lack of an efficient and standardized method to disperse soil particles and quantitatively subsample the nanoparticulate fraction for characterization analyses is hindering progress in assessing the fate and toxicity of metallic engineered nanomaterials in the soil environment. This study investigates various soil extraction and extract preparation techniques for their ability to remove nanoparticulate Ag from a field soil amended with biosolids contaminated with engineered silver nanoparticles (AgNPs), while presenting a suitable suspension for quantitative single-particle inductively coupled plasma mass spectrometry (SP-ICP-MS) analysis. Extraction parameters investigated included reagent type (water, NaNO3, KNO3, tetrasodium pyrophosphate (TSPP), tetramethylammonium hydroxide (TMAH)), soil-to-reagent ratio, homogenization techniques as well as procedures commonly used to separate nanoparticles from larger colloids prior to analysis (filtration, centrifugation, and sedimentation). We assessed the efficacy of the extraction procedure by testing for the occurrence of potential procedural artifacts (dissolution, agglomeration) using a dissolved/particulate Ag mass ratio and by monitoring the amount of Ag mass in discrete particles. The optimal method employed 2.5 mM TSPP used in a 1:100 (m/v) soil-to-reagent ratio, with ultrasonication to enhance particle dispersion and sedimentation to settle out the micrometer-sized particles. A spiked-sample recovery analysis shows that 96% ± 2% of the total Ag mass added as engineered AgNP is recovered, which includes the recovery of 84.1% of the particles added, while particle recovery in a spiked method blank is ∼100%, indicating that both the extraction and settling procedure have a minimal effect on driving transformation processes. A soil dilution experiment showed that the method extracted a consistent proportion of nanoparticulate Ag (9.2% ± 1.4% of the total Ag) in samples containing 100%, 50%, 25%, and 10%…

  13. Advanced remote sensing terrestrial information extraction and applications

    CERN Document Server

    Liang, Shunlin; Wang, Jindi

    2012-01-01

    Advanced Remote Sensing is an application-based reference that provides a single source of mathematical concepts necessary for remote sensing data gathering and assimilation. It presents state-of-the-art techniques for estimating land surface variables from a variety of data types, including optical sensors such as RADAR and LIDAR. Scientists in a number of different fields including geography, geology, atmospheric science, environmental science, planetary science and ecology will have access to critically-important data extraction techniques and their virtually unlimited application

  14. Spoken Language Understanding Systems for Extracting Semantic Information from Speech

    CERN Document Server

    Tur, Gokhan

    2011-01-01

    Spoken language understanding (SLU) is an emerging field between speech and language processing, investigating human/machine and human/human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances, and their applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, using…

  15. Extraction of information about periodic orbits from scattering functions

    CERN Document Server

    Bütikofer, T; Seligman, T H; Bütikofer, Thomas; Jung, Christof; Seligman, Thomas H.

    1999-01-01

    As a contribution to the inverse scattering problem for classical chaotic systems, we show that one can select sequences of intervals of continuity, each of which yields the information about period, eigenvalue and symmetry of one unstable periodic orbit.

  16. NEW METHOD OF EXTRACTING WEAK FAILURE INFORMATION IN GEARBOX BY COMPLEX WAVELET DENOISING

    Institute of Scientific and Technical Information of China (English)

    CHEN Zhixin; XU Jinwu; YANG Debin

    2008-01-01

    The extraction of weak failure information is always the difficulty and focus of fault detection. Aiming at the specific statistical properties of the complex wavelet coefficients of gearbox vibration signals, a new signal-denoising method that uses a local adaptive algorithm based on the dual-tree complex wavelet transform (DT-CWT) is introduced to extract weak failure information in gears, especially impulse components. By taking into account the non-Gaussian probability distribution and the statistical dependencies among the wavelet coefficients of such signals, and by exploiting the near shift-invariance of the DT-CWT, a higher signal-to-noise ratio (SNR) than that of common wavelet denoising methods can be obtained. Experiments on extracting periodic impulses from gearbox vibration signals indicate that the method can extract incipient fault features and hidden information from heavy noise, and it has an excellent effect on identifying weak feature signals in gearbox vibration signals.
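
    As a rough illustration, the sketch below performs magnitude-threshold denoising with the open-source dtcwt Python package (its API is an assumption here); the paper's local adaptive, dependency-aware shrinkage is replaced by a simple robust threshold, so this shows the general DT-CWT workflow only.

```python
import numpy as np
import dtcwt  # open-source DT-CWT package; API assumed

# Synthetic gearbox-like signal: periodic impulse bursts buried in noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 160 * t) * (t % 0.125 < 0.01)
noisy = clean + 0.4 * rng.standard_normal(t.size)

transform = dtcwt.Transform1d()
pyramid = transform.forward(noisy, nlevels=5)
for band in pyramid.highpasses:               # complex coefficients per level
    sigma = np.median(np.abs(band)) / 0.6745  # crude robust noise estimate
    band[np.abs(band) < 3 * sigma] = 0        # keep only strong (impulse) features
denoised = transform.inverse(pyramid).ravel()
print("mean abs error:", float(np.abs(denoised - clean).mean()))
```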

  17. The Extraction Model of Paddy Rice Information Based on GF-1 Satellite WFV Images.

    Science.gov (United States)

    Yang, Yan-jun; Huang, Yan; Tian, Qing-jiu; Wang, Lei; Geng, Jun; Yang, Ran-ran

    2015-11-01

    At present, using the characteristics of paddy rice at different phenophases to identify it in remote sensing images is an efficient approach to information extraction. Paddy rice differs markedly from other vegetation in that paddy fields are largely covered with water at the early growth stage, so the NDWI (normalized difference water index), normally used to extract water information, can reasonably be applied to extract paddy rice at that stage. Using the ratio of NDWI between two phenophases can expand the difference between paddy rice and other surface features, which is an important step in extracting paddy rice with high accuracy. The variation of NDVI (normalized difference vegetation index) across phenophases can then further enhance the accuracy of paddy rice information extraction. This study finds that taking full advantage of the particularity of paddy rice in different phenophases and combining the two indices (NDWI and NDVI) associated with paddy rice can establish a reasonable, accurate and effective extraction model for paddy rice; this is also the main way to improve the accuracy of paddy rice extraction. The present paper takes Lai'an in Anhui Province as the research area and rice as the research object. It constructs the extraction model of paddy rice information using NDVI and NDWI between the tillering stage and the heading stage. The model was then applied to GF1-WFV remote sensing images from July 12, 2013 and August 30, 2013, and it effectively extracted the paddy rice distribution in Lai'an for mapping. Finally, the extraction result was verified and evaluated against field investigation data in the study area. The result shows that the extraction model can quickly and accurately obtain the distribution of rice, and it has very good universality.
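
    A simplified sketch of this index logic follows; it combines an early-stage NDWI threshold with the NDVI gain rather than the paper's calibrated two-date NDWI ratio, and all band values and thresholds are invented placeholders.

```python
import numpy as np

# Flooded early-stage paddies show high NDWI at tillering and a strong NDVI
# rise by heading; thresholds below are placeholders, not the paper's values.
def ndwi(green, nir):
    return (green - nir) / (green + nir)   # water index (McFeeters form)

def ndvi(nir, red):
    return (nir - red) / (nir + red)       # vegetation index

def rice_mask(g1, r1, n1, g2, r2, n2, ndwi_min=0.1, ndvi_gain_min=0.3):
    flooded_early = ndwi(g1, n1) > ndwi_min                  # tillering (July)
    greens_up = ndvi(n2, r2) - ndvi(n1, r1) > ndvi_gain_min  # by heading (Aug)
    return flooded_early & greens_up

g1, r1, n1 = np.array([0.20]), np.array([0.10]), np.array([0.12])  # July bands
g2, r2, n2 = np.array([0.10]), np.array([0.08]), np.array([0.45])  # August bands
print(rice_mask(g1, r1, n1, g2, r2, n2))   # -> [ True]
```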

  18. Rapid and selective extraction, isolation, preconcentration, and quantitation of small RNAs from cell lysate using on-chip isotachophoresis.

    Science.gov (United States)

    Schoch, Reto B; Ronaghi, Mostafa; Santiago, Juan G

    2009-08-07

    We present a technique which enables the separation of small RNAs, such as microRNAs (miRNAs), short interfering RNAs (siRNAs), and Piwi-interacting RNAs (piRNAs), from RNAs of 66 or more nucleotides and other biomolecules contained in a cell lysate. In particular, the method achieves separation of small RNAs from precursor miRNAs (pre-miRNAs) in less than 3 min. We use on-chip isotachophoresis (ITP) for the simultaneous extraction, isolation, preconcentration and quantitation of small RNAs (approximately 22 nucleotides) and employ the high-efficiency sieving matrix Pluronic F-127, a thermo-responsive triblock copolymer which allows convenient microchannel loading at low temperature. We present the isolation of small RNAs from the lysate of 293A human kidney cells, and quantitate the number of short RNA molecules per cell as 2.9x10(7). We estimate this quantity is an aggregate of roughly 500 types of short RNA molecules per 293A cell. Currently, the minimal cell number for small RNA extraction and detection with our method is approximately 900 (from a 5 microL sample volume), and we believe that small RNA analysis from tens of cells is realizable. Techniques for rapid and sensitive extraction and isolation of small RNAs from cell lysate are much needed to further uncover their full range and functionality, including RNA interference studies.

  19. Strategy for the extraction of yeast DNA from artisan agave must for quantitative PCR analysis.

    Science.gov (United States)

    Kirchmayr, Manuel Reinhart; Segura-Garcia, Luis Eduardo; Flores-Berrios, Ericka Patricia; Gschaedler, Anne

    2011-11-01

    An efficient method for the direct extraction of yeast genomic DNA from agave must was developed. The optimized protocol, which was based on silica-adsorption of DNA on microcolumns, included an enzymatic cell wall degradation step followed by prolonged lysis with hot detergent. The resulting extracts were suitable templates for subsequent qPCR assays that quantified mixed yeast populations in artisan Mexican mezcal fermentations. Copyright © 2011 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  1. Information extraction from FN plots of tungsten microemitters

    Energy Technology Data Exchange (ETDEWEB)

    Mussa, Khalil O. [Department of Physics, Mu' tah University, Al-Karak (Jordan); Mousa, Marwan S., E-mail: mmousa@mutah.edu.jo [Department of Physics, Mu' tah University, Al-Karak (Jordan); Fischer, Andreas, E-mail: andreas.fischer@physik.tu-chemnitz.de [Institut für Physik, Technische Universität Chemnitz, Chemnitz (Germany)

    2013-09-15

    Tungsten based microemitter tips have been prepared both clean and coated with dielectric materials. For clean tungsten tips, apex radii have been varied ranging from 25 to 500 nm. These tips were manufactured by electrochemically etching a 0.1 mm diameter high purity (99.95%) tungsten wire at the meniscus of a two molar NaOH solution. The composite micro-emitters considered here consist of a tungsten core coated with different dielectric materials, such as magnesium oxide (MgO), sodium hydroxide (NaOH), tetracyanoethylene (TCNE), and zinc oxide (ZnO). It is worth noting here that the rather unconventional NaOH coating has shown several interesting properties. Various properties of these emitters were measured, including current–voltage (IV) characteristics and the physical shape of the tips. A conventional field emission microscope (FEM) with a tip (cathode)–screen (anode) separation standardized at 10 mm was used to electrically characterize the electron emitters. The system was evacuated down to a base pressure of ∼10⁻⁸ mbar when baked at up to ∼180°C overnight. This allowed measurements of typical field electron emission (FE) characteristics, namely the IV characteristics and the emission images on a conductive phosphorus screen (the anode). Mechanical characterization was performed with an FEI scanning electron microscope (SEM). Within this work, the mentioned experimental results are connected to the theory for analyzing Fowler–Nordheim (FN) plots. We compared and evaluated the data extracted from clean tungsten tips of different radii and determined deviations between the results of the different extraction methods applied. In particular, we derived the apex radii of several clean and coated tungsten tips by both SEM imaging and analyzing FN plots. The aim of this analysis is to support the ongoing discussion on recently developed improvements of the theory for analyzing FN plots related to metal field electron emitters, which in…
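
    Elementary FN-plot analysis can be sketched as a straight-line fit of ln(I/V²) against 1/V; the synthetic current data, the tungsten work function of 4.5 eV and the slope-to-field conversion below are illustrative assumptions, not the refined theory the paper discusses.

```python
import numpy as np

# ln(I/V^2) is linear in 1/V for ideal FN emission, so the FN slope falls out
# of a least-squares line fit. Current data here are synthetic.
V = np.linspace(800.0, 2000.0, 25)          # applied voltage, V
k_true = 4.8e4                              # synthetic FN slope magnitude, V
I = 1e-3 * V**2 * np.exp(-k_true / V)       # ideal FN-like emission current, A

x, y = 1.0 / V, np.log(I / V**2)
slope, intercept = np.polyfit(x, y, 1)      # FN plot: y = intercept + slope*x
print(f"fitted FN slope: {slope:.3e} V (true: {-k_true:.3e} V)")

# Elementary theory: slope = -b * phi**1.5 / beta_V, with b = 6.83e9
# eV^(-3/2) V/m and beta_V the voltage-to-field conversion factor (1/m);
# phi = 4.5 eV is assumed for tungsten.
phi = 4.5
beta_V = -6.83e9 * phi**1.5 / slope
print(f"implied voltage-to-field conversion: {beta_V:.3e} 1/m")
```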

  2. A hybrid DNA extraction method for the qualitative and quantitative assessment of bacterial communities from poultry production samples.

    Science.gov (United States)

    Rothrock, Michael J; Hiett, Kelli L; Gamble, John; Caudill, Andrew C; Cicconi-Hogan, Kellie M; Caporaso, J Gregory

    2014-12-10

    The efficacy of DNA extraction protocols can be highly dependent upon both the type of sample being investigated and the types of downstream analyses performed. Considering that the use of new bacterial community analysis techniques (e.g., microbiomics, metagenomics) is becoming more prevalent in the agricultural and environmental sciences and many environmental samples within these disciplines can be physiochemically and microbiologically unique (e.g., fecal and litter/bedding samples from the poultry production spectrum), appropriate and effective DNA extraction methods need to be carefully chosen. Therefore, a novel semi-automated hybrid DNA extraction method was developed specifically for use with environmental poultry production samples. This method is a combination of the two major types of DNA extraction: mechanical and enzymatic. A two-step intense mechanical homogenization step (using bead-beating specifically formulated for environmental samples) was added to the beginning of the "gold standard" enzymatic DNA extraction method for fecal samples to enhance the removal of bacteria and DNA from the sample matrix and improve the recovery of Gram-positive bacterial community members. Once the enzymatic extraction portion of the hybrid method was initiated, the remaining purification process was automated using a robotic workstation to increase sample throughput and decrease sample processing error. In comparison to the strict mechanical and enzymatic DNA extraction methods, this novel hybrid method provided the best overall combined performance when considering quantitative (using 16S rRNA qPCR) and qualitative (using microbiomics) estimates of the total bacterial communities when processing poultry feces and litter samples.

  3. EXTRACTION AND QUANTITATIVE DETERMINATION OF ASCORBIC ACID FROM BANANA PEEL MUSA ACUMINATA ‘KEPOK’

    Directory of Open Access Journals (Sweden)

    Khairul Anwar Mohamad Said

    2016-04-01

    Full Text Available This paper discusses the extraction of an antioxidant compound, ascorbic acid (vitamin C), from banana peel using an ultrasound-assisted extraction (UAE) method. The type of banana used was Musa acuminata, also known as "Pisang Kepok" in Malaysia. The investigation covers the effects of solvent-to-solid ratio (4.5, 5 and 10 ml/g), sonication time (15, 30 and 45 min) and temperature (30, 45 and 60°C) on the extraction of ascorbic acid from the banana peel, to determine the optimum operating conditions. Of all the extract samples analyzed by redox titration with iodine solution, the highest yield, 0.04939 ± 0.00080 mg, resulted from extraction at 30°C for 15 min with a 5 ml/g solvent-to-solid ratio. KEYWORDS: Musa acuminata; ultrasound-assisted extraction; vitamin C; redox titration

  4. Theory and application for retrieval and fusion of spatial and temporal quantitative information from complex natural environment

    Institute of Scientific and Technical Information of China (English)

    JIN Yaqiu

    2007-01-01

    This paper briefly presents the research progress of the State Major Basic Research Project 2001CB309400, "Theory and Application for Retrieval and Fusion of Spatial and Temporal Quantitative Information from Complex Natural Environment". Based on the rapid advancement of synthetic aperture radar (SAR) imagery technology, the information theory of fully polarimetric scattering and its applications in polarimetric SAR remote sensing are developed. To promote the modeling of passive microwave remote sensing, the vector (polarized) radiative transfer (VRT) theory of complex natural media, such as inhomogeneous, multi-layered and 3-dimensional VRT, is developed. With these theoretical advances, data validation and retrieval algorithms for some typical events and characteristic parameters of earth terrain surfaces, atmosphere, and oceans from operational and experimental remote sensing satellites are studied. Employing remote sensing, radiative transfer simulation, geographic information systems (GIS), land hydrological processes, and data assimilation, the Chinese land data assimilation system (CLDAS) is established. Towards the future development of China's microwave meteorological satellites, and employing remote sensing data from the currently available SSM/I (special sensor microwave/imager), AMSU (advanced microwave sounding unit), MTI (microwave temperature imager), etc., together with ground-based measurements, several operational algorithms and databases for atmospheric precipitation, water vapor and liquid water in clouds, and other hydrological applications are developed. To advance China's SAR and InSAR (interferometric SAR) technologies, the image processing and analysis of ERS (European remote sensing), Radarsat SAR, and Chinese SAR data, etc., and the corresponding software platforms are accomplished. Based on research into multi-information fusion, simulations, identification, and information extraction of targets from complex background clutter scenes are studied. Some…

  5. Enhanced Trace-Fiber Color Discrimination by Electrospray Ionization Mass Spectrometry: A Quantitative and Qualitative Tool for the Analysis of Dyes Extracted from Sub-millimeter Nylon Fibers

    Energy Technology Data Exchange (ETDEWEB)

    None

    2002-09-26

    The application of electrospray-ionization mass spectrometry (ESI-MS) to trace-fiber color analysis is explored using acidic dyes commonly employed to color nylon-based fibers, as well as extracts from dyed nylon fibers. Qualitative information about constituent dyes and quantitative information about the relative amounts of those dyes present on a single fiber become readily available using this technique. Sample requirements for establishing the color-identity of different samples (i.e., comparative trace-fiber analysis) are shown to be sub-millimeter. Absolute verification of dye-mixture identity (beyond the comparison of molecular weights derived from ESI-MS) can be obtained by expanding the technique to include tandem mass spectrometry (ESI-MS/MS). For dyes of unknown origin, the ESI-MS/MS analyses may offer insights into the chemical structure of the compound--information not available from chromatographic techniques alone. This research demonstrates that ESI-MS is viable as a sensitive technique for distinguishing dye constituents extracted from a minute amount of trace fiber evidence. A protocol is suggested to establish/refute the proposition that two fibers--one of which is available in minute quantity only--are of the same origin.

  6. Advanced Extraction of Spatial Information from High Resolution Satellite Data

    Science.gov (United States)

    Pour, T.; Burian, J.; Miřijovský, J.

    2016-06-01

    In this paper the authors processed five satellite images of five different Central European cities, taken by five different sensors. The aim of the paper was to find methods and approaches for evaluating areas of interest and extracting spatial data from them. To this end, the data were first pre-processed using image fusion, mosaicking and segmentation. The results feeding the next step were two polygon layers, the first representing single objects and the second representing city blocks. In the second step, the polygon layers were classified and exported into Esri shapefile format. Classification was partly hierarchical expert-based and partly based on the SEaTH tool, used for separability analysis and thresholding. Final results, along with visual previews, were attached to the original thesis. The results are evaluated visually and statistically in the last part of the paper. In the discussion the authors describe the difficulties of working with data that are large, acquired by different sensors, and thematically diverse.
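
    SEaTH-style separability analysis is commonly based on the Jeffries-Matusita distance between per-class Gaussian feature statistics; the sketch below implements the 1-D form with invented class statistics, as an illustration rather than the thesis' exact procedure.

```python
import numpy as np

# Jeffries-Matusita (JM) distance between two classes, assuming each class's
# feature is 1-D Gaussian; JM lies in [0, 2], with values near 2 meaning the
# classes are well separable by a threshold on that feature.
def jeffries_matusita(m1, s1, m2, s2):
    # Bhattacharyya distance for two 1-D Gaussians:
    b = (m1 - m2) ** 2 / (4.0 * (s1**2 + s2**2)) \
        + 0.5 * np.log((s1**2 + s2**2) / (2.0 * s1 * s2))
    return 2.0 * (1.0 - np.exp(-b))

# e.g., an NDVI-like feature for 'building' vs 'vegetation' objects (invented):
print(jeffries_matusita(0.12, 0.05, 0.55, 0.08))
```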

  7. Extraction of Information on the Technical Effect from a Patent Document

    Science.gov (United States)

    Sakai, Hiroyuki; Nonaka, Hirohumi; Masuyama, Shigeru

    We propose a method for extracting information on the technical effect from a patent document. The information on the technical effect extracted by our method is useful for generating patent maps automatically or for analyzing the technical trend from patent documents. Our method extracts expressions containing the information on the technical effect by using frequent expressions and clue expressions effective for extracting them. The frequent expressions and clue expressions are extracted automatically by using statistical information and initial clue expressions. Our method extracts expressions containing the information on the technical effect without predetermined hand-crafted patterns, and is expected to be applicable to other tasks of acquiring expressions that have a particular meaning (e.g., information on the means for solving the problems), not limited to the information on the technical effect. Our method achieves not only high precision (78.0%) but also high recall (77.6%) by acquiring such clue expressions automatically from patent documents.

  8. An Augmented Classical Least Squares Method for Quantitative Raman Spectral Analysis against Component Information Loss

    Directory of Open Access Journals (Sweden)

    Yan Zhou

    2013-01-01

    Full Text Available We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis that is robust against component information loss. Raman spectral signals with low correlation to the analyte concentrations were selected and used as substitutes for the unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built from the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) on one example: a data set recorded from an analyte concentration determination experiment using Raman spectroscopy. A 2-fold cross-validation with a Venetian-blinds strategy was used to evaluate the predictive power of the proposed method, and one-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed and existing methods. Results indicated that the proposed method effectively increases the robustness of the traditional CLS model against component information loss, with predictive power comparable to that of PLS or PCR.
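
    As a rough illustration of the augmentation step, the sketch below builds a CLS model in which the known concentration matrix is augmented with spectral channels that correlate weakly with the analyte, standing in for the unknown component. All data are synthetic, and the paper's RMSECV-based selection of the number of signals is replaced by a fixed choice of two.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_samples, n_channels = 30, 200

    # Synthetic training spectra D with one known analyte (C) and one unknown component.
    C = rng.random((n_samples, 1))
    unknown = rng.random((n_samples, 1))
    D = (C @ rng.random((1, n_channels)) + unknown @ rng.random((1, n_channels))
         + 0.01 * rng.standard_normal((n_samples, n_channels)))

    # Augment C with the spectral channels least correlated with the analyte,
    # as surrogates for the missing component information.
    corr = np.abs([np.corrcoef(C[:, 0], D[:, j])[0, 1] for j in range(n_channels)])
    C_aug = np.hstack([C, D[:, np.argsort(corr)[:2]]])

    K_aug, *_ = np.linalg.lstsq(C_aug, D, rcond=None)   # CLS model: D ~ C_aug @ K_aug
    c_pred = D[:1] @ np.linalg.pinv(K_aug)              # predict for one spectrum
    print("predicted analyte concentration:", c_pred[0, 0], "true:", C[0, 0])
    ```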

  9. MR urography: Anatomical and quantitative information on congenital malformations in children.

    Science.gov (United States)

    Karaveli, Maria; Katsanidis, Dimitrios; Kalaitzoglou, Ioannis; Haritanti, Afroditi; Sioundas, Anastasios; Dimitriadis, Athanasios; Psarrakos, Kyriakos

    2013-03-01

    Magnetic resonance urography (MRU) is considered to be the next step in uroradiology. This technique combines superb anatomical images and functional information in a single test. In this article, we present the topic of MRU in children and how it has been implemented in Northern Greece so far. The purpose of this study is to demonstrate the potential of MRU in clinical practice, focusing on both the anatomical and the quantitative information this technique can offer. MRU was applied in 25 children (ages 3 to 11 years) diagnosed with different types of congenital malformations. T1 and T2 images were obtained for all patients. Dynamic, contrast-enhanced data were processed, and signal intensity versus time curves were created for all patients from regions of interest (ROIs) selected around the kidneys in order to yield quantitative information on kidney function. From the slopes of these curves we were able to evaluate which kidneys were functional, and from the corticomedullary cross-over point to determine whether the renal system was obstructed. In all 25 cases MRU was sufficient, if not superior to other imaging modalities, to establish a complete diagnosis.

  10. MR urography: Anatomical and quantitative information on congenital malformations in children

    Directory of Open Access Journals (Sweden)

    Maria Karaveli

    2013-01-01

    Full Text Available Background and Aim: Magnetic resonance urography (MRU) is considered to be the next step in uroradiology. This technique combines superb anatomical images and functional information in a single test. In this article, we present the topic of MRU in children and how it has been implemented in Northern Greece so far. The purpose of this study is to demonstrate the potential of MRU in clinical practice, focusing on both the anatomical and the quantitative information this technique can offer. Materials and Methods: MRU was applied in 25 children (ages 3 to 11 years) diagnosed with different types of congenital malformations. T1 and T2 images were obtained for all patients. Dynamic, contrast-enhanced data were processed, and signal intensity versus time curves were created for all patients from regions of interest (ROIs) selected around the kidneys in order to yield quantitative information on kidney function. Results: From the slopes of these curves we were able to evaluate which kidneys were functional, and from the corticomedullary cross-over point to determine whether the renal system was obstructed. Conclusion: In all 25 cases MRU was sufficient, if not superior to other imaging modalities, to establish a complete diagnosis.
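
    The quantitative step in both records above reduces to fitting the early upslope of a signal intensity versus time curve for each kidney ROI. A minimal sketch with hypothetical curves and a hypothetical eight-sample fitting window:

    ```python
    import numpy as np

    # Hypothetical dynamic MRU curves: mean ROI signal intensity over time.
    t = np.arange(0, 300, 15.0)                  # seconds after contrast injection
    left_kidney = 100 + 0.9 * t - 0.0012 * t**2  # strongly enhancing (functional)
    right_kidney = 100 + 0.1 * t                 # poorly enhancing

    def enhancement_slope(time_s, signal, window=8):
        """Slope of the early enhancement phase (first `window` samples)."""
        return np.polyfit(time_s[:window], signal[:window], deg=1)[0]

    for name, roi in [("left", left_kidney), ("right", right_kidney)]:
        print(f"{name} kidney upslope: {enhancement_slope(t, roi):.3f} a.u./s")
    ```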

  11. On Depth Information Extraction from Metal Detector Signals

    NARCIS (Netherlands)

    Schoolderman, A.J.; Wolf, F.J. de; Merlat, L.

    2003-01-01

    Information on the depth of objects detected with a metal detector is useful for the safe excavation of these objects in demining operations. Apart from that, depth information may be used in advanced sensor fusion algorithms for a detection system in which a metal detector is combined with other sensors.

  12. Extracting Conflict-free Information from Multi-labeled Trees

    CERN Document Server

    Deepak, Akshay; McMahon, Michelle M

    2012-01-01

    A multi-labeled tree, or MUL-tree, is a phylogenetic tree where two or more leaves share a label, e.g., a species name. A MUL-tree can imply multiple conflicting phylogenetic relationships for the same set of taxa, but can also contain conflict-free information that is of interest and yet is not obvious. We define the information content of a MUL-tree T as the set of all conflict-free quartet topologies implied by T, and define the maximal reduced form of T as the smallest tree that can be obtained from T by pruning leaves and contracting edges while retaining the same information content. We show that any two MUL-trees with the same information content exhibit the same reduced form. This introduces an equivalence relation in MUL-trees with potential applications to comparing MUL-trees. We present an efficient algorithm to reduce a MUL-tree to its maximally reduced form and evaluate its performance on empirical datasets in terms of both quality of the reduced tree and the degree of data reduction achieved.

  13. A strategy for extracting and analyzing large-scale quantitative epistatic interaction data

    Science.gov (United States)

    Collins, Sean R; Schuldiner, Maya; Krogan, Nevan J; Weissman, Jonathan S

    2006-01-01

    Recently, approaches have been developed for high-throughput identification of synthetic sick/lethal gene pairs. However, these are only a specific example of the broader phenomenon of epistasis, wherein the presence of one mutation modulates the phenotype of another. We present analysis techniques for generating high-confidence quantitative epistasis scores from measurements made using synthetic genetic array and epistatic miniarray profile (E-MAP) technology, as well as several tools for higher-level analysis of the resulting data that are greatly enhanced by the quantitative score and detection of alleviating interactions. PMID:16859555
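
    Quantitative epistasis scores of this kind are commonly defined as the deviation of measured double-mutant fitness from a multiplicative expectation; the sketch below uses that textbook definition with made-up fitness values, not the exact S-score of the E-MAP pipeline.

    ```python
    import numpy as np

    def epistasis_scores(w_single, w_double):
        """epsilon[i, j] = observed double-mutant fitness minus the multiplicative
        expectation w_i * w_j; negative = aggravating, positive = alleviating.
        Only off-diagonal entries are meaningful."""
        return w_double - np.outer(w_single, w_single)

    w = np.array([0.90, 0.80, 0.95])            # hypothetical single-mutant fitness
    w_ab = np.array([[1.00, 0.40, 0.86],
                     [0.40, 1.00, 0.78],
                     [0.86, 0.78, 1.00]])       # measured double-mutant fitness
    print(np.round(epistasis_scores(w, w_ab), 2))
    ```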

  14. Two applications of information extraction to biological science journal articles: enzyme interactions and protein structures.

    Science.gov (United States)

    Humphreys, K; Demetriou, G; Gaizauskas, R

    2000-01-01

    Information extraction technology, as defined and developed through the U.S. DARPA Message Understanding Conferences (MUCs), has proved successful at extracting information primarily from newswire texts and primarily in domains concerned with human activity. In this paper we consider the application of this technology to the extraction of information from scientific journal papers in the area of molecular biology. In particular, we describe how an information extraction system designed to participate in the MUC exercises has been modified for two bioinformatics applications: EMPathIE, concerned with enzyme and metabolic pathways; and PASTA, concerned with protein structure. Progress to date provides convincing grounds for believing that IE techniques will deliver novel and effective ways for scientists to make use of the core literature which defines their disciplines.

  15. Comparison of DNA extraction kits and modification of DNA elution procedure for the quantitation of subdominant bacteria from piggery effluents with real‐time PCR

    National Research Council Canada - National Science Library

    Desneux, Jérémy; Pourcher, Anne‐Marie

    2014-01-01

    Four commercial DNA extraction kits and a minor modification in the DNA elution procedure were evaluated for the quantitation of bacteria in pig manure samples. The PowerSoil®, PowerFecal®, NucleoSpin...

  16. A New Classification Analysis of Customer Requirement Information Based on Quantitative Standardization for Product Configuration

    Directory of Open Access Journals (Sweden)

    Zheng Xiao

    2016-01-01

    Full Text Available Traditional methods used for the classification of customer requirement information are typically based on specific indicators, hierarchical structures, and data formats and involve a qualitative analysis in terms of stationary patterns. Because these methods neither consider the scalability of classification results nor regard subsequent application to product configuration, their classification becomes an isolated operation. However, the transformation of customer requirement information into quantifiable values would lead to a dynamic classification according to specific conditions and would enable an association with product configuration in an enterprise. This paper introduces a classification analysis based on quantitative standardization, which focuses on (i) expressing customer requirement information mathematically and (ii) classifying customer requirement information for product configuration purposes. Our classification analysis treated customer requirement information as follows: first, it was transformed into standardized values mathematically, after which it was classified by calculating the dissimilarity with general customer requirement information related to the product family. Finally, a case study was used to demonstrate and validate the feasibility and effectiveness of the classification analysis.

  17. Improved methods for capture, extraction, and quantitative assay of environmental DNA from Asian bigheaded carp (Hypophthalmichthys spp.).

    Science.gov (United States)

    Turner, Cameron R; Miller, Derryl J; Coyne, Kathryn J; Corush, Joel

    2014-01-01

    Indirect, non-invasive detection of rare aquatic macrofauna using aqueous environmental DNA (eDNA) is a relatively new approach to population and biodiversity monitoring. As such, the sensitivity of monitoring results to different methods of eDNA capture, extraction, and detection is being investigated in many ecosystems and species. One of the first and largest conservation programs with eDNA-based monitoring as a central instrument focuses on Asian bigheaded carp (Hypophthalmichthys spp.), an invasive fish spreading toward the Laurentian Great Lakes. However, the standard eDNA methods of this program have not advanced since their development in 2010. We developed new, quantitative, and more cost-effective methods and tested them against the standard protocols. In laboratory testing, our new quantitative PCR (qPCR) assay for bigheaded carp eDNA was one to two orders of magnitude more sensitive than the existing endpoint PCR assays. When applied to eDNA samples from an experimental pond containing bigheaded carp, the qPCR assay produced a detection probability of 94.8% compared to 4.2% for the endpoint PCR assays. Also, the eDNA capture and extraction method we adapted from aquatic microbiology yielded five times more bigheaded carp eDNA from the experimental pond than the standard method, at a per sample cost over forty times lower. Our new, more sensitive assay provides a quantitative tool for eDNA-based monitoring of bigheaded carp, and the higher-yielding eDNA capture and extraction method we describe can be used for eDNA-based monitoring of any aquatic species.
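
    The qPCR quantitation underlying such an assay rests on a standard curve of Cq against log10 copy number, from which amplification efficiency and unknown sample loads follow. A minimal sketch with hypothetical dilution data:

    ```python
    import numpy as np

    # Hypothetical standard curve: serial dilutions of known copy number vs. Cq.
    copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])
    cq = np.array([18.1, 21.5, 24.9, 28.3, 31.8])

    slope, intercept = np.polyfit(np.log10(copies), cq, 1)
    efficiency = 10 ** (-1 / slope) - 1          # ~1.0 corresponds to 100%
    print(f"slope = {slope:.2f}, amplification efficiency = {efficiency:.1%}")

    def quantify(cq_sample):
        """Copies per reaction for a sample Cq, via the standard curve."""
        return 10 ** ((cq_sample - intercept) / slope)

    print(f"sample at Cq 26.0 ~ {quantify(26.0):.0f} copies")
    ```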

  18. Improved methods for capture, extraction, and quantitative assay of environmental DNA from Asian bigheaded carp (Hypophthalmichthys spp.).

    Directory of Open Access Journals (Sweden)

    Cameron R Turner

    Full Text Available Indirect, non-invasive detection of rare aquatic macrofauna using aqueous environmental DNA (eDNA) is a relatively new approach to population and biodiversity monitoring. As such, the sensitivity of monitoring results to different methods of eDNA capture, extraction, and detection is being investigated in many ecosystems and species. One of the first and largest conservation programs with eDNA-based monitoring as a central instrument focuses on Asian bigheaded carp (Hypophthalmichthys spp.), an invasive fish spreading toward the Laurentian Great Lakes. However, the standard eDNA methods of this program have not advanced since their development in 2010. We developed new, quantitative, and more cost-effective methods and tested them against the standard protocols. In laboratory testing, our new quantitative PCR (qPCR) assay for bigheaded carp eDNA was one to two orders of magnitude more sensitive than the existing endpoint PCR assays. When applied to eDNA samples from an experimental pond containing bigheaded carp, the qPCR assay produced a detection probability of 94.8% compared to 4.2% for the endpoint PCR assays. Also, the eDNA capture and extraction method we adapted from aquatic microbiology yielded five times more bigheaded carp eDNA from the experimental pond than the standard method, at a per sample cost over forty times lower. Our new, more sensitive assay provides a quantitative tool for eDNA-based monitoring of bigheaded carp, and the higher-yielding eDNA capture and extraction method we describe can be used for eDNA-based monitoring of any aquatic species.

  19. Improved Methods for Capture, Extraction, and Quantitative Assay of Environmental DNA from Asian Bigheaded Carp (Hypophthalmichthys spp.)

    Science.gov (United States)

    Turner, Cameron R.; Miller, Derryl J.; Coyne, Kathryn J.; Corush, Joel

    2014-01-01

    Indirect, non-invasive detection of rare aquatic macrofauna using aqueous environmental DNA (eDNA) is a relatively new approach to population and biodiversity monitoring. As such, the sensitivity of monitoring results to different methods of eDNA capture, extraction, and detection is being investigated in many ecosystems and species. One of the first and largest conservation programs with eDNA-based monitoring as a central instrument focuses on Asian bigheaded carp (Hypophthalmichthys spp.), an invasive fish spreading toward the Laurentian Great Lakes. However, the standard eDNA methods of this program have not advanced since their development in 2010. We developed new, quantitative, and more cost-effective methods and tested them against the standard protocols. In laboratory testing, our new quantitative PCR (qPCR) assay for bigheaded carp eDNA was one to two orders of magnitude more sensitive than the existing endpoint PCR assays. When applied to eDNA samples from an experimental pond containing bigheaded carp, the qPCR assay produced a detection probability of 94.8% compared to 4.2% for the endpoint PCR assays. Also, the eDNA capture and extraction method we adapted from aquatic microbiology yielded five times more bigheaded carp eDNA from the experimental pond than the standard method, at a per sample cost over forty times lower. Our new, more sensitive assay provides a quantitative tool for eDNA-based monitoring of bigheaded carp, and the higher-yielding eDNA capture and extraction method we describe can be used for eDNA-based monitoring of any aquatic species. PMID:25474207

  20. Perfusion information extracted from resting state functional magnetic resonance imaging.

    Science.gov (United States)

    Tong, Yunjie; Lindsey, Kimberly P; Hocke, Lia M; Vitaliano, Gordana; Mintzopoulos, Dionyssios; Frederick, Blaise deB

    2017-02-01

    It is widely known that blood oxygenation level dependent (BOLD) contrast in functional magnetic resonance imaging (fMRI) is an indirect measure of neuronal activation through neurovascular coupling. The BOLD signal is also influenced by many non-neuronal physiological fluctuations. In previous resting state (RS) fMRI studies, we identified a moving systemic low frequency oscillation (sLFO) in the BOLD signal and were able to track its passage through the brain. We hypothesized that this seemingly intrinsic signal moves with the blood and that its dynamic patterns therefore represent cerebral blood flow. In this study, we tested this hypothesis by performing Dynamic Susceptibility Contrast (DSC) MRI scans (i.e., bolus tracking) following the RS scans in eight healthy subjects. The dynamic patterns of sLFO derived from the RS data were compared with the bolus flow visually and quantitatively. We found that the flow of sLFO derived from RS fMRI does to a large extent represent the blood flow measured with DSC. The small differences, we hypothesize, are largely due to the methods' differing sensitivity to different vessel types. We conclude that the flow of sLFO in RS data visualized by our time delay method represents the blood flow in the capillaries and veins of the brain.
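
    The "time delay method" mentioned above amounts to finding, for each voxel, the lag that maximizes cross-correlation with a reference sLFO signal. A minimal sketch on synthetic data, where a circular shift stands in for a real hemodynamic delay:

    ```python
    import numpy as np

    def arrival_delay(reference, voxel, fs, max_lag_s=10.0):
        """Lag (s) at which the voxel time course best correlates with the reference."""
        ref = (reference - reference.mean()) / reference.std()
        vox = (voxel - voxel.mean()) / voxel.std()
        lags = np.arange(-int(max_lag_s * fs), int(max_lag_s * fs) + 1)
        corrs = [np.corrcoef(np.roll(ref, k), vox)[0, 1] for k in lags]
        return lags[int(np.argmax(corrs))] / fs

    fs = 2.0                                     # sampling rate (Hz)
    t = np.arange(0, 120, 1 / fs)
    slfo = np.sin(2 * np.pi * 0.05 * t)          # systemic LFO reference signal
    noise = 0.3 * np.random.default_rng(2).standard_normal(t.size)
    voxel = np.roll(slfo, int(3.5 * fs)) + noise # voxel receiving the sLFO 3.5 s later
    print(f"estimated arrival delay: {arrival_delay(slfo, voxel, fs):.2f} s")
    ```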

  1. Financial Information Extraction Using Pre-defined and User-definable Templates in the LOLITA System

    OpenAIRE

    Costantino, Marco; Morgan, Richard G.; Collingham, Russell J.

    1996-01-01

    This paper addresses the issue of information extraction in the financial domain within the framework of a large Natural Language Processing system: LOLITA. The LOLITA system, Large-scale Object-based Linguistic Interactor Translator and Analyser, is a general purpose natural language processing system. Different kinds of applications have been built around the system's core. One of these is the financial information extraction application, which has been designed in close contact with expert...

  2. Extracting information masked by the chaotic signal of a time-delay system.

    Science.gov (United States)

    Ponomarenko, V I; Prokhorov, M D

    2002-08-01

    We further develop the method proposed by Bezruchko et al. [Phys. Rev. E 64, 056216 (2001)] for the estimation of the parameters of time-delay systems from time series. Using this method we demonstrate a possibility of message extraction for a communication system with nonlinear mixing of information signal and chaotic signal of the time-delay system. The message extraction procedure is illustrated using both numerical and experimental data and different kinds of information signals.

  3. A comprehensive method for extraction and quantitative analysis of sterols and secosteroids from human plasma.

    Science.gov (United States)

    McDonald, Jeffrey G; Smith, Daniel D; Stiles, Ashlee R; Russell, David W

    2012-07-01

    We describe the development of a method for the extraction and analysis of 62 sterols, oxysterols, and secosteroids from human plasma using a combination of HPLC-MS and GC-MS. Deuterated standards are added to 200 μl of human plasma. Bulk lipids are extracted with methanol:dichloromethane, the sample is hydrolyzed using a novel procedure, and sterols and secosteroids are isolated using solid-phase extraction (SPE). Compounds are resolved on C₁₈ core-shell HPLC columns and by GC. Sterols and oxysterols are measured using triple quadrupole mass spectrometers, and lathosterol is measured using GC-MS. The detection limit for each compound measured by HPLC-MS was ≤1 ng/ml of plasma. Extraction efficiency was between 85 and 110%; day-to-day variability showed a relative standard error of <10%. Numerous oxysterols were detected, including the side-chain oxysterols 22-, 24-, 25-, and 27-hydroxycholesterol, as well as the ring-structure oxysterols 7α- and 4β-hydroxycholesterol. Intermediates from the cholesterol biosynthetic pathway were also detected, including zymosterol, desmosterol, and lanosterol. The method also allowed the quantification of six secosteroids, including the 25-hydroxylated species of vitamins D₂ and D₃. Application of this method to plasma samples showed that at least 50 samples can be extracted in a routine day.
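
    Quantitation against deuterated internal standards reduces to scaling the analyte/standard peak-area ratio by the spiked amount. A sketch with hypothetical peak areas and a hypothetical deuterated standard:

    ```python
    def conc_by_internal_standard(area_analyte, area_is, is_amount_ng,
                                  plasma_vol_ml, rrf=1.0):
        """Analyte concentration (ng/ml) by isotope dilution; rrf is the relative
        response factor (~1 for a deuterated analogue of the analyte)."""
        return (area_analyte / area_is) * is_amount_ng / rrf / plasma_vol_ml

    # Hypothetical: 20 ng of a deuterated oxysterol standard spiked into 0.2 ml plasma.
    print(conc_by_internal_standard(area_analyte=8.4e5, area_is=5.6e5,
                                    is_amount_ng=20.0, plasma_vol_ml=0.2))  # 150.0 ng/ml
    ```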

  4. Statistical assessment of DNA extraction reagent lot variability in real-time quantitative PCR

    Science.gov (United States)

    Bushon, R.N.; Kephart, C.M.; Koltun, G.F.; Francy, D.S.; Schaefer, F. W.; Lindquist, H.D. Alan

    2010-01-01

    Aims: The aim of this study was to evaluate the variability in lots of a DNA extraction kit using real-time PCR assays for Bacillus anthracis, Francisella tularensis and Vibrio cholerae. Methods and Results: Replicate aliquots of three bacteria were processed in duplicate with three different lots of a commercial DNA extraction kit. This experiment was repeated in triplicate. Results showed that cycle threshold values were statistically different among the different lots. Conclusions: Differences in DNA extraction reagent lots were found to be a significant source of variability for qPCR results. Steps should be taken to ensure the quality and consistency of reagents. Minimally, we propose that standard curves should be constructed for each new lot of extraction reagents, so that lot-to-lot variation is accounted for in data interpretation. Significance and Impact of the Study: This study highlights the importance of evaluating variability in DNA extraction procedures, especially when different reagent lots are used. Consideration of this variability in data interpretation should be an integral part of studies investigating environmental samples with unknown concentrations of organisms. © 2010 The Society for Applied Microbiology.
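
    The lot comparison can be reproduced with a one-way ANOVA on replicate cycle threshold (Ct) values; the values below are hypothetical.

    ```python
    from scipy import stats

    # Hypothetical replicate Ct values for the same sample, three reagent lots.
    lot_a = [24.1, 24.3, 24.0, 24.2]
    lot_b = [24.9, 25.1, 25.0, 24.8]
    lot_c = [24.2, 24.4, 24.1, 24.3]

    f_stat, p_value = stats.f_oneway(lot_a, lot_b, lot_c)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Ct values differ among lots; build a standard curve per lot.")
    ```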

  5. Quantitative and qualitative proteome characteristics extracted from in-depth integrated genomics and proteomics analysis

    NARCIS (Netherlands)

    Low, T.Y.; van Heesch, S.; van den Toorn, H.; Giansanti, P.; Cristobal, A.; Toonen, P.; Schafer, S.; Hubner, N.; van Breukelen, B.; Mohammed, S.; Cuppen, E.; Heck, A.J.R.; Guryev, V.

    2013-01-01

    Quantitative and qualitative protein characteristics are regulated at genomic, transcriptomic, and posttranscriptional levels. Here, we integrated in-depth transcriptome and proteome analyses of liver tissues from two rat strains to unravel the interactions within and between these layers.

  6. Imaged document information location and extraction using an optical correlator

    Science.gov (United States)

    Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.

    1999-12-01

    Today, the paper document is fast becoming a thing of the past. With the rapid development of fast, inexpensive computing and storage devices, many government and private organizations are archiving their documents in electronic form (e.g., personnel records, medical records, patents, etc.). Many of these organizations are converting their paper archives to electronic images, which are then stored in a computer database. Because of this, there is a need to efficiently organize this data into comprehensive and accessible information resources and to provide rapid access to the information contained within these imaged documents. To meet this need, Litton PRC and Litton Data Systems Division are developing a system, the Imaged Document Optical Correlation and Conversion System (IDOCCS), to provide a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides a means for the search and retrieval of information from imaged documents. IDOCCS can be used to rapidly search for key words or phrases within imaged document archives and has the potential to determine the types of languages contained within a document. In addition, IDOCCS can automatically compare an input document with the archived database to determine whether it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited; e.g., imaged documents containing an agency's seal or logo can be singled out. In this paper, we present a description of IDOCCS as well as preliminary performance results and theoretical projections.

  7. CTSS: A Tool for Efficient Information Extraction with Soft Matching Rules for Text Mining

    Directory of Open Access Journals (Sweden)

    A. Christy

    2008-01-01

    Full Text Available The abundance of digitally available information in the modern world has created a demand for structured information. The problem of text mining, which deals with discovering useful information from unstructured text, has attracted the attention of researchers. The role of Information Extraction (IE) software is to identify relevant information in texts, extracting it from a variety of sources and aggregating it to create a single view. Information extraction systems have tended to depend on particular corpora and to suffer from poor recall; developing domain-independent systems with improved recall is therefore an important challenge for IE. In this research, the authors propose a domain-independent algorithm for information extraction, called SOFTRULEMINING, for extracting the aim, methodology and conclusion from technical abstracts. The algorithm combines a trigram model with soft matching rules. A tool, CTSS, was constructed using SOFTRULEMINING and tested on technical abstracts from www.computer.org and www.ansinet.org; the tool improved recall and therefore precision in comparison with other search engines.

  8. ADVANCED EXTRACTION OF SPATIAL INFORMATION FROM HIGH RESOLUTION SATELLITE DATA

    Directory of Open Access Journals (Sweden)

    T. Pour

    2016-06-01

    Full Text Available In this paper, the authors processed five satellite images of five different Middle European cities, each taken by a different sensor. The aim of the paper was to find methods and approaches for evaluating and extracting spatial data from the areas of interest. To this end, the data were first pre-processed using image fusion, mosaicking and segmentation. The results passed to the next step were two polygon layers: one representing single objects and the other representing city blocks. In the second step, the polygon layers were classified and exported into Esri shapefile format. The classification was partly hierarchical and expert-based, and partly based on the SEaTH tool, which computes class separability and derives thresholds. Final results, along with visual previews, were attached to the original thesis. The results are evaluated visually and statistically in the last part of the paper. In the discussion, the authors describe the difficulties of working with large data volumes acquired by different sensors over thematically different areas.

  9. Quantitative spatial analysis of the mouse brain lipidome by pressurized liquid extraction surface analysis

    DEFF Research Database (Denmark)

    Almeida, Reinaldo; Berzina, Zane; Christensen, Eva Arnspang

    2015-01-01

    Here we describe a novel surface sampling technique termed pressurized liquid extraction surface analysis (PLESA), which in combination with a dedicated high-resolution shotgun lipidomics routine enables both quantification and in-depth structural characterization of molecular lipid species. Quantification is supported by the inclusion of internal lipid standards in the extraction solvent. The analysis of lipid microextracts by nanoelectrospray ionization provides a long-lasting ion spray, which in conjunction with a hybrid ion trap-orbitrap mass spectrometer enables identification and quantification of molecular lipid species using a method with successive polarity shifting, high-resolution Fourier transform mass spectrometry (FTMS), and fragmentation analysis. We benchmarked the performance of the PLESA approach for in-depth lipidome analysis by comparing it to conventional lipid extraction of excised tissue homogenates and by mapping the spatial distribution of lipids across the mouse brain.

  10. An Effective Approach to Biomedical Information Extraction with Limited Training Data

    CERN Document Server

    Jonnalagadda, Siddhartha

    2011-01-01

    Overall, the two main contributions of this work are the application of sentence simplification to association extraction and the use of distributional semantics for concept extraction. The proposed work on concept extraction amalgamates for the first time two diverse research areas: distributional semantics and information extraction. This approach offers the advantages of other semi-supervised machine learning systems and, unlike other proposed semi-supervised approaches, can be used on top of different basic frameworks and algorithms. http://gradworks.umi.com/34/49/3449837.html

  11. Quantitative extraction of organic tracer compounds from ambient particulate matter collected on polymer substrates.

    Science.gov (United States)

    Sun, Qinyue; Alexandrova, Olga A; Herckes, Pierre; Allen, Jonathan O

    2009-05-15

    Organic compounds in ambient particulate matter (PM) samples are used as tracers for PM source apportionment. These PM samples are collected using high volume samplers; one such sampler is an impactor in which polyurethane foam (PUF) and polypropylene foam (PPF) are used as the substrates. The polymer substrates have the advantage of limiting particle bounce artifacts during sampling; however, these substrates may contain background organic additives. A protocol of two extractions with isopropanol followed by three extractions with dichloromethane (DCM) was developed for both substrate precleaning and analyte extraction. Some residual organic contaminants were present after precleaning; expressed as concentrations in a 24-h ambient PM sample, the residual amounts were 1 µg m-3 for plasticizers and antioxidants, and 10 ng m-3 for n-alkanes with carbon number lower than 26. The quantification limit for all other organic tracer compounds was approximately 0.1 ng m-3 in a 24-h ambient PM sample. Recovery experiments were done using NIST Standard Reference Material (SRM) Urban Dust (1649a); the average recoveries for polycyclic aromatic hydrocarbons (PAHs) from PPF and PUF substrates were 117 ± 8% and 107 ± 11%, respectively. Replicate extractions were also done using ambient samples collected in Nogales, Arizona. The relative differences between repeat analyses were less than 10% for the 47 organic tracer compounds quantified. After the first extraction of ambient samples, less than 7% of organic tracer compounds remained in the extracted substrates. This method can be used to quantify a suite of semi- and non-polar organic tracer compounds suitable for source apportionment studies in 24-h ambient PM samples.

  12. In vitro evaluation of five different herbal extracts as an antimicrobial endodontic irrigant using real time quantitative polymerase chain reaction

    Directory of Open Access Journals (Sweden)

    Thilla S Vinothkumar

    2013-01-01

    Full Text Available Context: Sodium hypochlorite is the most commonly used irrigant, but it has disadvantages such as high cytotoxicity, so there is a need to find an alternative to 5.25% sodium hypochlorite against the microorganisms Enterococcus faecalis and Candida albicans. The literature indicates that five extracts, namely Terminalia chebula, Myristica fragrans, Aloe barbadensis, Curcuma longa and Azadirachta indica, have properties suitable for a potential endodontic irrigant. Aims: To evaluate the antimicrobial efficacy of various herbal extracts, namely Curcuma longa (CL), Azadirachta indica (AI), Aloe barbadensis (AV), Myristica fragrans (MF) and Terminalia chebula (TC), as endodontic irrigants against Enterococcus faecalis and Candida albicans using real-time quantitative polymerase chain reaction (qPCR). Materials and Methods: Eighty-four extracted teeth were inoculated with Enterococcus faecalis and Candida albicans. A preliminary study was first performed to determine the minimum inhibitory concentration of the extracts. The irrigating groups were divided into five herbal groups and two control groups. After irrigating the teeth, the remaining microbial load was determined using qPCR. Statistical Analysis Used: Statistical analysis was performed using one-way ANOVA/Kruskal-Wallis test with post-hoc Tukey's HSD, with significance set at P < 0.05. Results: Neem was highly efficient, comparable to 5.25% NaOCl, in reducing Enterococcus faecalis and Candida albicans within the root canals when compared with the other extracts. Conclusions: Neem leaf extract has significant antimicrobial efficacy against Enterococcus faecalis and Candida albicans compared to 5.25% sodium hypochlorite.

  13. Quantitation of low concentrations of polysorbates in high protein concentration formulations by solid phase extraction and cobalt-thiocyanate derivatization.

    Science.gov (United States)

    Kim, Justin; Qiu, Jinshu

    2014-01-02

    A spectrophotometric method was developed to quantify low polysorbate (PS) levels in biopharmaceutical formulations containing high protein concentrations. In the method, an Oasis HLB solid phase extraction (SPE) cartridge is used to extract PS from high protein concentration formulations. After loading a sample, the cartridge is washed with 4 M guanidine HCl and 10% (v/v) methanol, and the retained PS is eluted with acetonitrile. Following evaporation of the acetonitrile, aqueous cobalt-thiocyanate reagent is added to react with the polyoxyethylene oxide chain of polysorbates to form a blue PS-cobaltothiocyanate complex. This colored complex is then extracted into methylene chloride and measured spectrophotometrically at 620 nm. The method performance was evaluated on three products containing 30-40 mg/L PS-20 and PS-80 in ≤70 g/L protein formulations. The method was specific (no matrix interference identified in three types of protein formulations), sensitive (quantitation limit of 10 mg/L PS) and robust, with good precision (relative standard deviation ≤6.4%) and accuracy (spike recoveries from 95% to 101%). The linear range of the method for both PS-20 and PS-80 was 10 to 80 mg/L PS. By diluting samples with 6 M guanidine HCl and/or using different methylene chloride volumes to extract the colored complexes of standards and samples, the method can accurately and precisely quantify 40 mg/L PS in up to 300 g/L protein formulations.
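
    External calibration and spike recovery for an assay like this follow a simple pattern; the absorbance values below are hypothetical.

    ```python
    import numpy as np

    # Hypothetical calibration: absorbance at 620 nm vs. PS-20 standards (mg/L).
    conc = np.array([10.0, 20.0, 40.0, 60.0, 80.0])
    absorbance = np.array([0.052, 0.101, 0.198, 0.305, 0.401])
    slope, intercept = np.polyfit(conc, absorbance, 1)

    def ps_concentration(a620):
        """PS concentration (mg/L) from the calibration line."""
        return (a620 - intercept) / slope

    measured = ps_concentration(0.150)
    spiked = ps_concentration(0.350)             # same sample + 40 mg/L spike
    recovery = (spiked - measured) / 40.0 * 100
    print(f"sample: {measured:.1f} mg/L, spike recovery: {recovery:.0f}%")
    ```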

  14. Qualitative and quantitative analysis of Dibenzofuran, Alkyldibenzofurans, and Benzo[b]naphthofurans in crude oils and source rock extracts

    Science.gov (United States)

    Li, Meijun; Ellis, Geoffrey S.

    2015-01-01

    Dibenzofuran (DBF), its alkylated homologues, and benzo[b]naphthofurans (BNFs) are common oxygen-heterocyclic aromatic compounds in crude oils and source rock extracts. A series of positional isomers of alkyldibenzofuran and benzo[b]naphthofuran were identified in mass chromatograms by comparison with internal standards and standard retention indices. The response factors of dibenzofuran in relation to internal standards were obtained by gas chromatography-mass spectrometry analyses of a set of mixed solutions with different concentration ratios. Perdeuterated dibenzofuran and dibenzothiophene are optimal internal standards for quantitative analyses of furan compounds in crude oils and source rock extracts. The average concentration of the total DBFs in oils derived from siliciclastic lacustrine rock extracts from the Beibuwan Basin, South China Sea, was 518 μg/g, which is about 5 times that observed in the oils from carbonate source rocks in the Tarim Basin, Northwest China. The BNFs occur ubiquitously in source rock extracts and related oils of various origins. The results of this work suggest that the relative abundance of benzo[b]naphthofuran isomers, that is, the benzo[b]naphtho[2,1-d]furan/{benzo[b]naphtho[2,1-d]furan + benzo[b]naphtho[1,2-d]furan} ratio, may be a potential molecular geochemical parameter to indicate oil migration pathways and distances.

  15. HPTLC METHOD OF QUANTITATION OF BIOACTIVE MARKER CONSTITUENT PINITOL IN EXTRACTS OF PISONIA GRANDIS (R.BR.)

    Directory of Open Access Journals (Sweden)

    G. Poongothai

    2012-09-01

    Full Text Available Pisonia grandis R.Br. is an evergreen medicinal plant of the four-o'clock family, Nyctaginaceae. Leaves, stems and roots of this plant are extensively used by tribal communities in India to prepare several folk medicines to treat rheumatism, arthritis and diabetes. Recently, pinitol has been reported from the leaves of Pisonia grandis, and the present study was therefore designed to quantify this antidiabetic molecule in various extracts of the leaves of Pisonia grandis by high performance thin layer chromatography (HPTLC). The calibration curve of the stabilized HPTLC method was linear in the range of 1.5 to 7.0 µg per spot, and the correlation coefficient was found to be 0.9718, exhibiting good linearity between concentration and area. The limit of quantification (LOQ) and limit of detection (LOD) were found to be 1.5 and 1.2 µg, respectively. The study also revealed that, among all the extracts, PGSX3 and PGSX4 contain more than 8 µg of pinitol per milligram of extract; a chloroform:methanol mixture (9:1) and 100% ethanol were thus found to be effective solvents for extracting pinitol from Pisonia grandis. This is the first report on the quantitation of pinitol by HPTLC.
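
    LOD and LOQ figures like these are typically derived from the calibration line via the ICH Q2(R1) conventions LOD = 3.3σ/S and LOQ = 10σ/S; whether this paper used exactly that convention is an assumption. A sketch with hypothetical calibration data:

    ```python
    import numpy as np

    # Hypothetical HPTLC calibration: pinitol amount (ug/spot) vs. peak area.
    amount = np.array([1.5, 2.5, 3.5, 5.0, 7.0])
    area = np.array([1180.0, 1990.0, 2750.0, 3980.0, 5540.0])

    slope, intercept = np.polyfit(amount, area, 1)
    sigma = (area - (slope * amount + intercept)).std(ddof=2)  # residual SD

    lod = 3.3 * sigma / slope   # ICH Q2(R1) convention
    loq = 10 * sigma / slope
    print(f"LOD ~ {lod:.2f} ug/spot, LOQ ~ {loq:.2f} ug/spot")
    ```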

  16. Omnidirectional vision systems calibration, feature extraction and 3D information

    CERN Document Server

    Puig, Luis

    2013-01-01

    This work focuses on central catadioptric systems, from the early step of calibration to high-level tasks such as 3D information retrieval. The book opens with a thorough introduction to the sphere camera model, along with an analysis of the relation between this model and actual central catadioptric systems. Then, a new approach to calibrate any single-viewpoint catadioptric camera is described. This is followed by an analysis of existing methods for calibrating central omnivision systems, and a detailed examination of hybrid two-view relations that combine images acquired with uncalibrated cameras.

  17. Data-Driven Information Extraction from Chinese Electronic Medical Records.

    Directory of Open Access Journals (Sweden)

    Dong Xu

    Full Text Available This study aims to propose a data-driven framework that takes unstructured free text narratives in Chinese Electronic Medical Records (EMRs) as input and converts them into structured time-event-description triples, where the description is either an elaboration or an outcome of the medical event. Our framework uses a hybrid approach. It consists of constructing cross-domain core medical lexica, an unsupervised, iterative algorithm to accrue more accurate terms into the lexica, rules to address Chinese writing conventions and temporal descriptors, and a Support Vector Machine (SVM) algorithm that innovatively utilizes Normalized Google Distance (NGD) to estimate the correlation between medical events and their descriptions. The effectiveness of the framework was demonstrated with a dataset of 24,817 de-identified Chinese EMRs. The cross-domain medical lexica were capable of recognizing terms with an F1-score of 0.896. 98.5% of recorded medical events were linked to temporal descriptors. The NGD SVM description-event matching achieved an F1-score of 0.874. The end-to-end time-event-description extraction of our framework achieved an F1-score of 0.846. In terms of named entity recognition, the proposed framework outperforms state-of-the-art supervised learning algorithms (F1-score: 0.896 vs. 0.886). In event-description association, the NGD SVM is superior to an SVM using only local context and semantic features (F1-score: 0.874 vs. 0.838). The framework is data-driven, weakly supervised, and robust against the variations and noises that tend to occur in a large corpus. It addresses Chinese medical writing conventions and variations in writing styles through patterns used for discovering new terms and rules for updating the lexica.
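
    Normalized Google Distance is a closed-form function of term frequencies; a sketch with hypothetical corpus counts (the original work applies it to medical events and their candidate descriptions):

    ```python
    from math import log

    def ngd(fx, fy, fxy, n_docs):
        """Normalized Google Distance from document frequencies:
        ~0 when two terms always co-occur, larger when they are unrelated."""
        return (max(log(fx), log(fy)) - log(fxy)) / (log(n_docs) - min(log(fx), log(fy)))

    # Hypothetical counts over a 24,817-document corpus: an event term appears in
    # 1200 documents, a description term in 800, and they co-occur in 600.
    print(f"NGD = {ngd(1200, 800, 600, 24817):.3f}")
    ```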

  18. Quantitative analysis of gender stereotypes and information aggregation in a national election.

    Directory of Open Access Journals (Sweden)

    Michele Tumminello

    Full Text Available By analyzing a database of a questionnaire answered by a large majority of the candidates and elected members in a parliamentary election, we quantitatively verify that (i) female candidates on average present political profiles which are more compassionate and more concerned with social welfare issues than male candidates, and (ii) the voting procedure acts as a process of information aggregation. Our results show that information aggregation proceeds along at least two distinct paths. In the first case, candidates characterize themselves with a political profile aiming to describe the profile of the majority of voters. This is typically the case for candidates of political parties competing for the center of the various political dimensions. In the second case, candidates choose a political profile manifesting a clear difference from the opposite political profiles endorsed by candidates of a political party positioned at the opposite extreme of some political dimension.

  19. Bridging the pressure gap: Can we get local quantitative structural information at 'near-ambient' pressures?

    Science.gov (United States)

    Woodruff, D. P.

    2016-10-01

    In recent years there have been an increasing number of investigations aimed at 'bridging the pressure gap' between UHV surface science experiments on well-characterised single crystal surfaces and the much higher (ambient and above) pressures relevant to practical catalyst applications. By applying existing photon-in/photon-out methods and developing instrumentation to allow photoelectron emission to be measured in higher-pressure sample environments, it has proved possible to obtain surface compositions and spectroscopic fingerprinting of chemical and molecular states of adsorbed species at pressures up to a few millibars. None of these methods, however, provide quantitative structural information on the local adsorption sites of isolated atomic and molecular adsorbate species under these higher-pressure reaction conditions. Methods for gaining this information are reviewed and evaluated.

  20. Formal modeling and quantitative evaluation for information system survivability based on PEPA

    Institute of Scientific and Technical Information of China (English)

    WANG Jian; WANG Hui-qiang; ZHAO Guo-sheng

    2008-01-01

    Survivability should be considered beyond security for information systems. To assess system survivability accurately and guide its improvement, a formal modeling and analysis method based on stochastic process algebra is proposed in this article. By abstracting the interactive behaviors between intruders and the information system, a survivability-oriented state transition graph of the system is constructed. On that basis, parameters are defined and system behaviors are characterized precisely with performance evaluation process algebra (PEPA), simultaneously considering the influence of different attack modes. Ultimately, the formal model for survivability is established, and quantitative analysis results are obtained with the PEPA Workbench tool. Simulation experiments show the effectiveness and feasibility of the developed method and that it can help direct the design of survivable systems.
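
    PEPA models are evaluated by deriving the underlying continuous-time Markov chain and computing its steady-state distribution. A toy sketch of that numerical step only; the three-state model and its rates are invented, and real PEPA models are far larger.

    ```python
    import numpy as np

    # Toy 3-state survivability model (the CTMC underlying a PEPA description):
    # states = [normal, attacked, recovered]; Q holds transition rates per hour.
    Q = np.array([[-0.5, 0.5, 0.0],
                  [0.0, -1.2, 1.2],
                  [0.8, 0.0, -0.8]])

    # Steady state: solve pi @ Q = 0 subject to sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("P(normal, attacked, recovered) =", np.round(pi, 3))
    print("survivability (time not under attack) =", round(1 - pi[1], 3))
    ```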

  1. Quantitatively Mapping Cellular Viscosity with Detailed Organelle Information via a Designed PET Fluorescent Probe

    Science.gov (United States)

    Liu, Tianyu; Liu, Xiaogang; Spring, David R.; Qian, Xuhong; Cui, Jingnan; Xu, Zhaochao

    2014-06-01

    Viscosity is a fundamental physical parameter that influences diffusion in biological processes. The distribution of intracellular viscosity is highly heterogeneous, and it is challenging to obtain a full map of cellular viscosity with detailed organelle information. In this work, we report probe 1 as the first fluorescent viscosity probe able to quantitatively map cellular viscosity with detailed organelle information based on the photoinduced electron transfer (PET) mechanism. This probe exhibits a significant ratiometric fluorescence intensity enhancement as solvent viscosity increases. The emission intensity increase is attributed to the combined effects of the inhibition of PET due to restricted conformational access (favorable for FRET, but not for PET) and the decreased PET efficiency caused by viscosity-dependent twisted intramolecular charge transfer (TICT). A full map of subcellular viscosity was successfully constructed via ratiometric fluorescence detection and fluorescence lifetime imaging; lysosomal regions were found to possess the highest viscosity in the cell, followed by mitochondrial regions.

  2. Fine mapping of multiple interacting quantitative trait loci using combined linkage disequilibrium and linkage information

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Quantitative trait loci (QTL) and their additive, dominance and epistatic effects play a critical role in complex trait variation. It is often infeasible to detect multiple interacting QTL due to main effects often being confounded by interaction effects.Positioning interacting QTL within a small region is even more difficult. We present a variance component approach nested in an empirical Bayesian method, which simultaneously takes into account additive, dominance and epistatic effects due to multiple interacting QTL. The covariance structure used in the variance component approach is based on combined linkage disequilibrium and linkage (LDL) information. In a simulation study where there are complex epistatic interactions between QTL, it is possible to simultaneously fine map interacting QTL using the proposed approach. The present method combined with LDL information can efficiently detect QTL and their dominance and epistatic effects, making it possible to simultaneously fine map main and epistatic QTL.

  3. Quantitative and qualitative assessment of DNA extracted from saliva for its use in forensic identification

    Directory of Open Access Journals (Sweden)

    Parul Khare

    2014-01-01

    Full Text Available Saliva has long been known for its diagnostic value in several diseases. It also has potential for use in forensic science. Objective: The objective of this study is to compare the quantity and quality of DNA extracted from saliva with those of DNA extracted from blood, in order to assess the feasibility of extracting sufficient DNA from saliva for possible use in forensic identification. Materials and Methods: Blood and saliva samples were collected from 20 volunteers, and DNA extraction was performed using the phenol-chloroform technique. The quantity and quality of isolated DNA were analyzed by spectrophotometry, and the samples were then used to amplify the short tandem repeat (STR) F13 by polymerase chain reaction. Results: The mean quantity of DNA obtained was 48.4 ± 8.2 μg/ml from saliva and 142.5 ± 45.9 μg/ml from blood. Purity of the DNA, assessed by the optical density ratio at 260/280 nm, was found to be optimal in 45% of salivary samples, while the remainder showed minor contamination. Despite this, positive F13 STR amplification was achieved in 75% of salivary DNA samples. Conclusion: The results of this study show that saliva may prove to be a useful source of DNA for forensic purposes.
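
    The spectrophotometric quantity and purity checks reported here follow standard conversions: 1 A260 unit corresponds to about 50 µg/ml of double-stranded DNA, and an A260/A280 ratio near 1.8 indicates pure DNA. A sketch with hypothetical readings:

    ```python
    def dna_quantity_quality(a260, a280, dilution_factor=1.0):
        """dsDNA concentration (ug/ml) and A260/A280 purity ratio; 50 ug/ml per
        A260 unit is the standard double-stranded DNA conversion factor."""
        return a260 * 50.0 * dilution_factor, a260 / a280

    conc, purity = dna_quantity_quality(a260=0.484, a280=0.260, dilution_factor=2)
    print(f"{conc:.1f} ug/ml, A260/A280 = {purity:.2f}")
    if not 1.7 <= purity <= 2.0:
        print("possible protein or phenol contamination")
    ```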

  4. Quantitative HPLC analysis of some marker compounds of hydroalcoholic extracts of Piper aduncum L

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Laura C.P.; Nunomura, Sergio M. [Instituto Nacional de Pesquisas da Amazonia (INPA), Manaus, AM (Brazil). Coordenacao de Pesquisas em Produtos Naturais]. E-mail: smnunomu@inpa.gov.br; Mause, Robert [Siema Eco Essencias da Amazonia Ltda., Manaus, AM (Brazil)

    2005-11-15

    High performance liquid chromatography is one of the major analytical techniques used in the quality control of phytotherapics. This work describes a HPLC method used to determine the major components present in different hydroalcoholic extracts of aerial parts of Piper aduncum. (author)

  5. Results of Studying Astronomy Students’ Science Literacy, Quantitative Literacy, and Information Literacy

    Science.gov (United States)

    Buxner, Sanlyn; Impey, Chris David; Follette, Katherine B.; Dokter, Erin F.; McCarthy, Don; Vezino, Beau; Formanek, Martin; Romine, James M.; Brock, Laci; Neiberding, Megan; Prather, Edward E.

    2017-01-01

    Introductory astronomy courses often serve as terminal science courses for non-science majors and present an opportunity to assess the attitudes toward science, basic scientific knowledge, and scientific analysis skills that these students, who will not become scientists, may carry unchanged after college. Through a series of studies, we have been able to evaluate students' basic science knowledge, attitudes towards science, quantitative literacy, and information literacy. In the Fall of 2015, we conducted a case study of a single undergraduate class of 20 students, administering all relevant surveys. We will present our analysis of the trends in each of these studies as well as the comparison case study. In general, we have found that students' basic scientific knowledge has remained stable over the past quarter century. In all of our studies, there is a strong relationship between student attitudes and their science and quantitative knowledge and skills. Additionally, students' information literacy is strongly connected to their attitudes and basic scientific knowledge. We are currently expanding these studies to include new audiences and will discuss the implications of our findings for instructors.

  6. Perceived relevance and information needs regarding food topics and preferred information sources among Dutch adults: results of a quantitative consumer study

    NARCIS (Netherlands)

    Dillen, van S.M.E.; Hiddink, G.J.; Koelen, M.A.; Graaf, de C.; Woerkum, van C.M.J.

    2004-01-01

    Objective: For more effective nutrition communication, it is crucial to identify the sources from which consumers seek information. Our purpose was to assess perceived relevance and information needs regarding food topics, and preferred information sources, by means of quantitative consumer research.

  7. Perceived relevance and information needs regarding food topics and preferred information sources among Dutch adults: results of a quantitative consumer study

    NARCIS (Netherlands)

    Dillen, van S.M.E.; Hiddink, G.J.; Koelen, M.A.; Graaf, de C.; Woerkum, van C.M.J.

    2004-01-01

    Objective: For more effective nutrition communication, it is crucial to identify the sources from which consumers seek information. Our purpose was to assess perceived relevance and information needs regarding food topics, and preferred information sources, by means of quantitative consumer research.

  8. Extraction of Left Ventricular Ejection Fraction Information from Various Types of Clinical Reports.

    Science.gov (United States)

    Kim, Youngjun; Garvin, Jennifer H; Goldstein, Mary K; Hwang, Tammy S; Redd, Andrew; Bolton, Dan; Heidenreich, Paul A; Meystre, Stéphane M

    2017-02-02

    Efforts to improve the treatment of congestive heart failure, a common and serious medical condition, include the use of quality measures to assess guideline-concordant care. The goal of this study is to identify left ventricular ejection fraction (LVEF) information from various types of clinical notes, and to then use this information for heart failure quality measurement. We analyzed the annotation differences between a new corpus of clinical notes from the Echocardiography, Radiology, and Text Integrated Utility package and other corpora annotated for natural language processing (NLP) research in the Department of Veterans Affairs. These reports contain varying degrees of structure. To examine whether existing LVEF extraction modules we developed in prior research improve the accuracy of LVEF information extraction from the new corpus, we created two sequence-tagging NLP modules trained with a new data set, with or without predictions from the existing LVEF extraction modules. We also conducted a set of experiments to examine the impact of training data size on information extraction accuracy. We found that less training data is needed when reports are highly structured, and that combining predictions from existing LVEF extraction modules improves information extraction when reports have less structured formats and a rich set of vocabulary.
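
    The study itself trains sequence-tagging NLP models, but a rule-based baseline for LVEF mention extraction can be sketched with a regular expression; the pattern below is purely illustrative and far simpler than the validated modules described.

    ```python
    import re

    # Illustrative pattern for LVEF mentions: a single value or a range.
    LVEF_RE = re.compile(
        r"(?:LVEF|ejection fraction|EF)[^.\d]{0,20}"
        r"(\d{1,2}(?:\.\d)?)\s*(?:-|to)?\s*(\d{1,2}(?:\.\d)?)?\s*%",
        re.IGNORECASE)

    def extract_lvef(note):
        """Yield (low, high) LVEF percentages found in a clinical note."""
        for m in LVEF_RE.finditer(note):
            low = float(m.group(1))
            high = float(m.group(2)) if m.group(2) else low
            yield low, high

    note = ("Echo today. LVEF 35-40%. Compared to prior study, "
            "the ejection fraction is 55 %.")
    print(list(extract_lvef(note)))   # [(35.0, 40.0), (55.0, 55.0)]
    ```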

  9. Abstract Information Extraction From Consumer's Comments On Internet Media

    Directory of Open Access Journals (Sweden)

    Kadriye Ergün

    2013-01-01

    Full Text Available In this study, we describe a system developed to summarize consumer comments about products automatically using text mining techniques. Because the data are texts written in natural language, they first undergo morphological analysis. Words and adjectives with positive or negative meaning, which indicate product features in the texts, are then identified. A tree structure is built according to Turkish grammar rules as subordinate and modified words are designated. Software using a depth-first search algorithm over this tree structure was developed, and its results are stored in an SQL database. When these data are queried by any property of a product, numerical information indicating the degree of satisfaction with that property is obtained.

  10. High-resolution gas chromatography/mas spectrometry method for characterization and quantitative analysis of ginkgolic acids in ginkgo biloba plants, extracts, and dietary supplements

    Science.gov (United States)

    A high-resolution GC/MS method with selected ion monitoring (SIM), focusing on the characterization and quantitative analysis of ginkgolic acids (GAs) in Ginkgo biloba L. plant materials, extracts and commercial products, was developed and validated. The method involved sample extraction with (1:1) meth...

  11. A Validated Reverse Phase HPLC Analytical Method for Quantitation of Glycoalkaloids in Solanum lycocarpum and Its Extracts

    Directory of Open Access Journals (Sweden)

    Renata Fabiane Jorge Tiossi

    2012-01-01

    Full Text Available Solanum lycocarpum (Solanaceae) is native to the Brazilian Cerrado. Fruits of this species contain the glycoalkaloids solasonine (SN) and solamargine (SM), which display antiparasitic and anticancer properties. A method has been developed for the extraction and HPLC-UV analysis of SN and SM in different parts of S. lycocarpum, mainly comprising ripe and unripe fruits, leaf, and stem. This analytical method was validated and gave good detection response with linearity over a dynamic range of 0.77-1000.00 μg mL−1 and recovery in the range of 80.92-91.71%, allowing reliable quantitation of the target compounds. Unripe fruits displayed higher concentrations of glycoalkaloids (1.04% ± 0.01 of SN and 0.69% ± 0.00 of SM) than ripe fruits (0.83% ± 0.02 of SN and 0.60% ± 0.01 of SM). Quantitation of glycoalkaloids in the alkaloidic extract gave 45.09% ± 1.14 of SN and 44.37% ± 0.60 of SM, respectively.

  12. A validated UHPLC-tandem mass spectrometry method for quantitative analysis of flavonolignans in milk thistle (Silybum marianum) extracts.

    Science.gov (United States)

    Graf, Tyler N; Cech, Nadja B; Polyak, Stephen J; Oberlies, Nicholas H

    2016-07-15

    Validated methods are needed for the analysis of natural product secondary metabolites. These methods are particularly important to translate in vitro observations to in vivo studies. Herein, a method is reported for the analysis of the key secondary metabolites, a series of flavonolignans and a flavonoid, from an extract prepared from the seeds of milk thistle [Silybum marianum (L.) Gaertn. (Asteraceae)]. This report represents the first UHPLC-MS/MS method validated for quantitative analysis of these compounds. The method takes advantage of the excellent resolution achievable with UHPLC to provide a complete analysis in less than 7 min. The method is validated using both UV and MS detectors, making it applicable in laboratories with different types of analytical instrumentation available. Lower limits of quantitation achieved with this method range from 0.0400 μM to 0.160 μM with UV and from 0.0800 μM to 0.160 μM with MS. The new method is employed to evaluate variability in constituent composition in various commercial S. marianum extracts, and to show that storage of the milk thistle compounds in DMSO leads to degradation.

  13. Trace-Level Volatile Quantitation by DART-MS following Headspace Extraction - Optimization and Validation in Grapes.

    Science.gov (United States)

    Jastrzembski, Jillian A; Bee, Madeleine Y; Sacks, Gavin L

    2017-10-02

    Ambient Ionization - Mass Spectrometry (AI-MS) techniques like Direct Analysis in Real Time (DART) offer the potential for rapid quantitative analyses of trace volatiles in food matrices, but performance is generally limited by the lack of pre-concentration and extraction steps. The sensitivity and selectivity of AI-MS approaches can be improved through solid-phase microextraction (SPME) with appropriate thin-film geometries, e.g., solid phase mesh enhanced sorption from headspace (SPMESH). This work improves the SPMESH-DART-MS approach for use in food analyses, and validates the approach for trace volatile analysis for two compounds in real samples (grape macerates). SPMESH units prepared with different sorbent coatings were evaluated for their ability to extract a range of odor-active volatiles, with polydimethylsiloxane/divinylbenzene giving the most satisfactory results. In combination with high-resolution mass spectrometry (HRMS), detection limits for SPMESH-DART-MS under 4 ng/L in less than 30 s acquisition times could be achieved for some volatiles (3-isobutyl-2-methoxypyrazine (IBMP), β-damascenone). A comparison of SPMESH-DART-MS and SPME-GC-MS quantitation of linalool and IBMP demonstrates excellent agreement between the two methods using real grape samples (r² ≥ 0.90), although linalool measurements appeared to also include isobaric interferences.

  14. Near-quantitative extraction of genomic DNA from various food-borne eubacteria

    Science.gov (United States)

    In this work we have tested a dozen commercial bacterial genomic DNA extraction methodologies on an average of 7.70E6 (± 9.05%), 4.77E8 (± 31.0%), and 5.93E8 (± 4.69%) colony forming units (CFU) associated with 3 cultures (n = 3) each of Brochothrix thermosphacta (Bt), Shigella sonnei (Ss), and Esch...

  15. New cold-fiber headspace solid-phase microextraction device for quantitative extraction of polycyclic aromatic hydrocarbons in sediment.

    Science.gov (United States)

    Ghiasvand, Ali Reza; Hosseinzadeh, Shokouh; Pawliszyn, Janusz

    2006-08-18

    A new automated headspace solid-phase microextraction (HS-SPME) sampling device was developed, with the capability of heating the sample matrix and simultaneously cooling the fiber coating. The device was evaluated for the quantitative extraction of polycyclic aromatic hydrocarbons (PAHs) from solid matrices. The proposed device improves the efficiency of the release of analytes from the matrix, facilitates the mass transfer into the headspace and significantly increases the partition coefficients of the analytes, by creating a temperature gap between the cold-fiber (CF) coating and the hot headspace. The reliability and applicability of previously reported cold-fiber devices are significantly enhanced by this improvement. In addition, it can be easily adopted for full automation of extraction, enrichment and introduction of different samples using commercially available autosampling devices. Sand samples spiked with PAHs were used as solid matrices, and the effects of different experimental parameters were studied, including extraction temperature, extraction time, moisture content, sonication and modifier. Under optimal experimental conditions, linear calibration curves were obtained in the range of 0.0009-1000 ng/g, with regression coefficients higher than 0.99 and detection limits that ranged from 0.3 to 3 pg/g. Reproducible, precise and high throughput extraction, monitoring and quantification of PAHs were achieved with the automated cold-fiber headspace solid-phase microextraction (CF-HS-SPME) device coupled to GC-flame ionization detection. Determination of PAHs in certified reference sediments using the proposed approach exhibited acceptable agreement with the standard values.

  16. [Rapid quantitative analysis of hydrocarbon composition of furfural extract oils using attenuated total reflection infrared spectroscopy].

    Science.gov (United States)

    Li, Na; Yuan, Hong-Fu; Hu, Ai-Qin; Liu, Wei; Song, Chun-Feng; Li, Xiao-Yu; Song, Yi-Chang; He, Qi-Jun; Liu, Sha; Xu, Xiao-Xuan

    2014-07-01

    A rapid analysis system for the hydrocarbon composition of heavy oils was designed using an attenuated total reflection FTIR spectrometer and chemometrics to determine the hydrocarbon composition of furfural extract oils. Sixty-two extract oil samples were collected and their saturates and aromatics contents were determined according to the standard NB/SH/T0509-2010; the total contents of resins plus asphaltenes were then calculated by subtraction, in weight percent. Based on partial least squares (PLS), calibration models for saturates, aromatics, and resin+asphaltene contents were established using attenuated total reflection FTIR spectroscopy, with SEC values of 1.43%, 0.91% and 1.61% and SEP values of 1.56%, 1.24% and 1.81%, respectively, meeting the accuracy and repeatability required by the standard. Compared to the present standard method, the efficiency of hydrocarbon composition analysis for furfural extract oils is significantly improved by the new method, which is rapid and simple. The system could also be used for other heavy oil analyses, with good prospects for extension and application.
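
    A PLS calibration of this kind, mapping ATR-FTIR spectra to hydrocarbon composition and reporting SEC/SEP, can be sketched with scikit-learn. The spectra, targets and component count below are random placeholders, not the paper's data:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(62, 400))         # 62 spectra x 400 wavenumber points
        Y = rng.uniform(0, 100, size=(62, 3))  # saturates, aromatics, resin+asphaltene (wt%)

        X_cal, X_val, Y_cal, Y_val = train_test_split(X, Y, test_size=0.25,
                                                      random_state=0)

        pls = PLSRegression(n_components=8)  # component count normally chosen by CV
        pls.fit(X_cal, Y_cal)

        # SEC/SEP: standard errors of calibration and prediction, per property.
        sec = np.sqrt(((pls.predict(X_cal) - Y_cal) ** 2).mean(axis=0))
        sep = np.sqrt(((pls.predict(X_val) - Y_val) ** 2).mean(axis=0))
        print("SEC:", sec.round(2), "SEP:", sep.round(2))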

  17. Post-processing of Deep Web Information Extraction Based on Domain Ontology

    Directory of Open Access Journals (Sweden)

    PENG, T.

    2013-11-01

    Full Text Available Many methods are utilized to extract and process query results in the deep Web, relying on the different structures of Web pages and the various design modes of databases. However, some semantic meanings and relations are ignored. In this paper, we present an approach for post-processing deep Web query results based on a domain ontology, which can utilize these semantic meanings and relations. A block identification model (BIM) based on node similarity is defined to extract data blocks relevant to a specific domain after reducing noisy nodes. The feature vector of domain books is obtained by a result set extraction model (RSEM) based on the vector space model (VSM). RSEM, in combination with BIM, builds the domain ontology on books, which not only removes the limitations of Web page structure when extracting data information, but also makes use of the semantic meanings of the domain ontology. After extracting basic information from Web pages, a ranking algorithm is adopted to offer an ordered list of data records to users. Experimental results show that BIM and RSEM extract data blocks and build the domain ontology accurately. In addition, relevant data records and basic information are extracted and ranked. The precision and recall results show that the proposed method is feasible and efficient.
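
    Block identification by node similarity rests on comparing vector-space representations of candidate blocks against a domain description. A minimal TF-IDF/cosine-similarity sketch; the block texts, the domain seed and the relevance threshold are illustrative, not taken from the paper:

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Text content of candidate DOM blocks after noisy-node reduction.
        blocks = [
            "Title: Deep Learning Author: Ian Goodfellow Publisher: MIT Press Year: 2016",
            "Title: Pattern Recognition Author: C. Bishop Publisher: Springer Year: 2006",
            "Copyright 2013 BookStore Inc. All rights reserved.",
        ]
        domain_seed = ["book title author publisher year"]  # toy domain description

        vec = TfidfVectorizer().fit(blocks + domain_seed)
        sim = cosine_similarity(vec.transform(blocks), vec.transform(domain_seed))

        THRESHOLD = 0.05  # illustrative cutoff for domain relevance
        for text, s in zip(blocks, sim[:, 0]):
            print(f"{s:.3f} {'KEEP' if s > THRESHOLD else 'DROP'} {text[:45]}")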

  18. STUDY ON EXTRACTING METHODS OF BURIED GEOLOGICAL INFORMATION IN HUAIBEI COAL FIELD

    Institute of Scientific and Technical Information of China (English)

    王四龙; 赵学军; 凌贻棕; 刘玉荣; 宁书年; 侯德文

    1999-01-01

    The features and formation mechanism of buried geological information in the geological, geophysical and remote sensing data of the Huaibei coal field are discussed, and methods for extracting buried tectonic and igneous rock information from various geological data using digital image processing techniques are studied.

  19. Fallen stock data: An essential source of information for quantitative knowledge of equine mortality in France.

    Science.gov (United States)

    Tapprest, J; Morignat, E; Dornier, X; Borey, M; Hendrikx, P; Ferry, B; Calavas, D; Sala, C

    2017-09-01

    Quantitative information about equine mortality is relatively scarce, yet it could be of great value for epidemiological purposes. In France, data from rendering plants are centralised in the Fallen Stock Data Interchange database (FSDI), managed by the French Ministry of Agriculture, while individual equine data are centralised in the French equine census database, SIRE, managed by the French horse and riding institute (IFCE). The objectives were to evaluate whether the combined use of the FSDI and SIRE databases can provide representative and accurate quantitative information on mortality for the French equine population, and to propose enhancements of these databases to improve the quality of the resulting demographic information. This was a descriptive study. Mortality ratios for the French equine population were calculated per year between 2011 and 2014, and temporal variations in equine mortality were modelled over the same period. Survival analyses were performed on a sample of equines traceable in both the FSDI and SIRE databases. Estimates of the annual mortality ratios varied from 3.02 to 3.40% depending on the year. Survival rates of equines 2 years old and over differed according to breed category, with the highest median age at death for ponies. The weekly description of mortality highlighted marked seasonality of deaths whatever the category of equines. Modelling temporal variations in equine mortality also brought to light excess mortality. The main limitation was insufficient traceability of equines between the two databases. The FSDI database provided an initial approach to equine death ratios on a national scale and an original description of temporal variations in mortality. Improvement in the traceability of equines between the FSDI and SIRE databases is needed to enable their combined use, providing a representative description of equine longevity and a more detailed description of temporal variations in mortality. © 2017 The Authors. Equine Veterinary Journal published by John Wiley & Sons Ltd

  20. Analysis of Automated Modern Web Crawling and Testing Tools and Their Possible Employment for Information Extraction

    Directory of Open Access Journals (Sweden)

    Tomas Grigalis

    2012-04-01

    Full Text Available The World Wide Web has become an enormous repository of data. Extracting, integrating and reusing this kind of data has a wide range of applications, including meta-searching, comparison shopping, business intelligence tools and security analysis of information in websites. However, reaching information in modern WEB 2.0 web pages, where the HTML tree is often dynamically modified by various JavaScript codes, new data are added by asynchronous requests to the web server and elements are positioned with the help of cascading style sheets, is a difficult task. The article reviews automated web testing tools for information extraction tasks. Article in Lithuanian

  1. Quantitative Insights into the Fast Pyrolysis of Extracted Cellulose, Hemicelluloses, and Lignin.

    Science.gov (United States)

    Carrier, Marion; Windt, Michael; Ziegler, Bernhard; Appelt, Jörn; Saake, Bodo; Meier, Dietrich; Bridgwater, Anthony

    2017-08-24

    The transformation of lignocellulosic biomass into bio-based commodity chemicals is technically possible. Among thermochemical processes, fast pyrolysis, a relatively mature technology that has now reached a commercial level, produces a high yield of an organic-rich liquid stream. Despite recent efforts to elucidate the degradation paths of biomass during pyrolysis, the selectivity and recovery rates of bio-compounds remain low. In an attempt to clarify the general degradation scheme of biomass fast pyrolysis and provide a quantitative insight, the use of fast pyrolysis microreactors is combined with spectroscopic techniques (i.e., mass spectrometry and NMR spectroscopy) and mixtures of unlabeled and ¹³C-enriched materials. The first stage of the work aimed to select the type of reactor to use to ensure control of the pyrolysis regime. A comparison of the chemical fragmentation patterns of "primary" fast pyrolysis volatiles detected by using GC-MS between two small-scale microreactors showed the inevitable occurrence of secondary reactions. In the second stage, liquid fractions that are also made of primary fast pyrolysis condensates were analyzed by using quantitative liquid-state ¹³C NMR spectroscopy to provide a quantitative distribution of functional groups. The compilation of these results into a map that displays the distribution of functional groups according to the individual and main constituents of biomass (i.e., hemicelluloses, cellulose and lignin) confirmed the origin of individual chemicals within the fast pyrolysis liquids. © 2017 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.

  2. [Absolute quantification of carminic acid in cochineal extract by quantitative NMR].

    Science.gov (United States)

    Sugimoto, Naoki; Tada, Atsuko; Suematsu, Takako; Arifuku, Kazunori; Saito, Takeshi; Ihara, Toshihide; Yoshida, Yuuichi; Kubota, Reiji; Tahara, Maiko; Shimizu, Kumiko; Ito, Sumio; Yamazaki, Takeshi; Kawamura, Yoko; Nishimura, Tetsuji

    2010-01-01

    A quantitative NMR (qNMR) method was applied for the determination of carminic acid. Carminic acid is the main component in cochineal dye that is widely used as a natural food colorant. Since several manufacturers only provide reagent-grade carminic acid, there is no reference material of established purity. To improve the reliability of analytical data, we are developing quantitative nuclear magnetic resonance (qNMR), based on the fact that the intensity of a given NMR resonance is directly proportional to the molar amount of that nucleus in the sample. The purities and contents of carminic acid were calculated from the ratio of the signal intensities of an aromatic proton on carminic acid to nine protons of three methyl groups on DSS-d6 used as the internal standard. The concentration of DSS-d6 itself was corrected using potassium hydrogen phthalate, which is a certified reference material (CRM). The purities of the reagents and the contents of carminic acid in cochineal dye products were determined with SI-traceability as 25.3-92.9% and 4.6-30.5% based on the crystalline formula, carminic acid potassium salt trihydrate, which has been confirmed by X-ray analysis. The qNMR method does not require a reference compound, and is rapid and simple, with an overall analysis time of only 10 min. Our approach thus represents an absolute quantitation method with SI-traceability that should be readily applicable to analysis and quality control of any natural product.
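
    The qNMR quantitation underlying these purity values follows from signal intensity being proportional to the molar amount of resonating nuclei. In generic form (a standard qNMR relation, with symbols defined below rather than taken from the paper):

        % Purity P_a of analyte a against internal standard s (here DSS-d6):
        % I = integrated signal intensity, N = number of nuclei behind the signal,
        % M = molar mass, m = weighed-in mass, P_s = purity of the standard.
        P_a \;=\; \frac{I_a}{I_s}\cdot\frac{N_s}{N_a}\cdot\frac{M_a}{M_s}\cdot\frac{m_s}{m_a}\cdot P_s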

  3. Weak characteristic information extraction from early fault of wind turbine generator gearbox

    Science.gov (United States)

    Xu, Xiaoli; Liu, Xiuli

    2017-04-01

    Given the weak early degradation characteristic information during early fault evolution in gearbox of wind turbine generator, traditional singular value decomposition (SVD)-based denoising may result in loss of useful information. A weak characteristic information extraction based on μ-SVD and local mean decomposition (LMD) is developed to address this problem. The basic principle of the method is as follows: Determine the denoising order based on cumulative contribution rate, perform signal reconstruction, extract and subject the noisy part of signal to LMD and μ-SVD denoising, and obtain denoised signal through superposition. Experimental results show that this method can significantly weaken signal noise, effectively extract the weak characteristic information of early fault, and facilitate the early fault warning and dynamic predictive maintenance.
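
    The denoising-order selection by cumulative contribution rate can be read as: keep the smallest number of singular values whose energy share crosses a threshold. A generic Hankel-matrix SVD denoising sketch under that reading (window length and threshold are arbitrary choices, and this is not the authors' μ-SVD/LMD code):

        import numpy as np

        def svd_denoise(x, window=64, cum_threshold=0.90):
            """Hankel-matrix SVD denoising; order picked by cumulative contribution."""
            n = len(x)
            H = np.array([x[i:i + window] for i in range(n - window + 1)])
            U, s, Vt = np.linalg.svd(H, full_matrices=False)
            k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), cum_threshold)) + 1
            Hk = (U[:, :k] * s[:k]) @ Vt[:k]       # rank-k estimate
            # Anti-diagonal averaging back to a 1-D signal.
            y = np.zeros(n)
            counts = np.zeros(n)
            for i in range(Hk.shape[0]):
                y[i:i + window] += Hk[i]
                counts[i:i + window] += 1
            return y / counts

        t = np.linspace(0, 1, 1024)
        clean = np.sin(2 * np.pi * 35 * t) + 0.3 * np.sin(2 * np.pi * 180 * t)
        noisy = clean + 0.8 * np.random.default_rng(1).normal(size=t.size)
        denoised = svd_denoise(noisy)
        print("noise power before/after:",
              np.mean((noisy - clean)**2).round(3),
              np.mean((denoised - clean)**2).round(3))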

  5. The research of road and vehicle information extraction algorithm based on high resolution remote sensing image

    Science.gov (United States)

    Zhou, Tingting; Gu, Lingjia; Ren, Ruizhi; Cao, Qiong

    2016-09-01

    With the rapid development of remote sensing technology, the spatial and temporal resolution of satellite imagery has greatly increased, and high-spatial-resolution images are becoming increasingly popular for commercial applications. Remote sensing image technology has broad application prospects in intelligent traffic. Compared with traditional traffic information collection methods, vehicle information extraction using high-resolution remote sensing imagery has the advantages of high resolution and wide coverage. This has great guiding significance for urban planning, transportation management, travel route choice and so on. Firstly, this paper preprocessed the acquired high-resolution multi-spectral and panchromatic remote sensing images. After that, on the one hand, histogram equalization and linear enhancement technologies were applied to the preprocessing results in order to obtain the optimal threshold for image segmentation. On the other hand, considering the distribution characteristics of roads, the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) were used to suppress water and vegetation information in the preprocessing results. The above two processing results were then combined. Finally, geometric characteristics were used to complete road information extraction. The extracted road vector was used to limit the target vehicle area. Target vehicle extraction was divided into bright vehicle extraction and dark vehicle extraction, and the extraction results of the two kinds of vehicles were combined to obtain the final results. The experimental results demonstrated that the proposed algorithm achieves high precision in vehicle information extraction from different high-resolution remote sensing images: the average false detection rate was about 5.36%, the average residual rate was about 13.60% and the average accuracy was approximately 91.26%.
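
    The NDVI/NDWI suppression step uses the two standard band ratios. A minimal numpy sketch with random stand-in bands; the band set and the cutoff values are illustrative assumptions, not the paper's parameters:

        import numpy as np

        # Hypothetical multispectral bands scaled to [0, 1] reflectance.
        rng = np.random.default_rng(0)
        green, red, nir = (rng.uniform(0, 1, (512, 512)) for _ in range(3))

        ndvi = (nir - red) / (nir + red + 1e-9)      # vegetation index
        ndwi = (green - nir) / (green + nir + 1e-9)  # water index

        # Suppress vegetation and water before road/vehicle extraction.
        candidate_mask = (ndvi < 0.2) & (ndwi < 0.1)  # illustrative thresholds
        print("candidate pixels:", int(candidate_mask.sum()))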

  6. Quantitative and qualitative proteome characteristics extracted from in-depth integrated genomics and proteomics analysis.

    Science.gov (United States)

    Low, Teck Yew; van Heesch, Sebastiaan; van den Toorn, Henk; Giansanti, Piero; Cristobal, Alba; Toonen, Pim; Schafer, Sebastian; Hübner, Norbert; van Breukelen, Bas; Mohammed, Shabaz; Cuppen, Edwin; Heck, Albert J R; Guryev, Victor

    2013-12-12

    Quantitative and qualitative protein characteristics are regulated at genomic, transcriptomic, and posttranscriptional levels. Here, we integrated in-depth transcriptome and proteome analyses of liver tissues from two rat strains to unravel the interactions within and between these layers. We obtained peptide evidence for 26,463 rat liver proteins. We validated 1,195 gene predictions, 83 splice events, 126 proteins with nonsynonymous variants, and 20 isoforms with nonsynonymous RNA editing. Quantitative RNA sequencing and proteomics data correlate highly between strains but poorly among each other, indicating extensive nongenetic regulation. Our multilevel analysis identified a genomic variant in the promoter of the most differentially expressed gene Cyp17a1, a previously reported top hit in genome-wide association studies for human hypertension, as a potential contributor to the hypertension phenotype in SHR rats. These results demonstrate the power of and need for integrative analysis for understanding genetic control of molecular dynamics and phenotypic diversity in a system-wide manner. Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.

  7. Quantitative and Qualitative Proteome Characteristics Extracted from In-Depth Integrated Genomics and Proteomics Analysis

    Directory of Open Access Journals (Sweden)

    Teck Yew Low

    2013-12-01

    Full Text Available Quantitative and qualitative protein characteristics are regulated at genomic, transcriptomic, and posttranscriptional levels. Here, we integrated in-depth transcriptome and proteome analyses of liver tissues from two rat strains to unravel the interactions within and between these layers. We obtained peptide evidence for 26,463 rat liver proteins. We validated 1,195 gene predictions, 83 splice events, 126 proteins with nonsynonymous variants, and 20 isoforms with nonsynonymous RNA editing. Quantitative RNA sequencing and proteomics data correlate highly between strains but poorly among each other, indicating extensive nongenetic regulation. Our multilevel analysis identified a genomic variant in the promoter of the most differentially expressed gene Cyp17a1, a previously reported top hit in genome-wide association studies for human hypertension, as a potential contributor to the hypertension phenotype in SHR rats. These results demonstrate the power of and need for integrative analysis for understanding genetic control of molecular dynamics and phenotypic diversity in a system-wide manner.

  8. A method for normalizing pathology images to improve feature extraction for quantitative pathology

    Energy Technology Data Exchange (ETDEWEB)

    Tam, Allison [Stanford Institutes of Medical Research Program, Stanford University School of Medicine, Stanford, California 94305 (United States); Barker, Jocelyn [Department of Radiology, Stanford University School of Medicine, Stanford, California 94305 (United States); Rubin, Daniel [Department of Radiology, Stanford University School of Medicine, Stanford, California 94305 and Department of Medicine (Biomedical Informatics Research), Stanford University School of Medicine, Stanford, California 94305 (United States)

    2016-01-15

    Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. The method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer-aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
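
    A rough sketch of the two ICHE stages as the abstract describes them, shifting each image's histogram toward a common centroid and then applying CLAHE, using scikit-image. The target centroid, clip limit and mean-shift centering are illustrative readings of the abstract, not the authors' implementation:

        import numpy as np
        from skimage import exposure

        def iche_normalize(img, target_centroid=0.5, clip_limit=0.01):
            """Intensity centering followed by adaptive histogram equalization."""
            img = img.astype(float)
            img = (img - img.min()) / (img.max() - img.min() + 1e-9)   # to [0, 1]
            img = np.clip(img + (target_centroid - img.mean()), 0, 1)  # center histogram
            return exposure.equalize_adapthist(img, clip_limit=clip_limit)

        batch = [np.random.default_rng(i).uniform(0, 255, (256, 256)) for i in range(3)]
        normalized = [iche_normalize(img) for img in batch]
        print([round(float(n.mean()), 3) for n in normalized])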

  9. The Visual Display of Quantitative Information; Envisioning Information; Visual Explanations: Images and Quantities, Evidence and Narrative (by Edward R. Tufte)

    Science.gov (United States)

    Harris, Harold H.

    1999-02-01

    The Visual Display of Quantitative Information Edward R. Tufte. Graphics Press: Cheshire, CT, 1983. 195 pp. ISBN 0-961-39210-X. $40.00. Envisioning Information Edward R. Tufte. Graphics Press: Cheshire, CT, 1990. 126 pp. ISBN 0-961-39211-8. $48.00. Visual Explanations: Images and Quantities, Evidence and Narrative Edward R. Tufte. Graphics Press: Cheshire, CT, 1997. 156 pp. ISBN 0-9613921-2-6. $45.00. Visual Explanations: Images and Quantities, Evidence and Narrative is the most recent of three books by Edward R. Tufte about the expression of information through graphs, charts, maps, and images. The most important of all the practical advice in these books is found on the first page of the first book, The Visual Display of Quantitative Information. Quantitative graphics should: show the data; induce the viewer to think about the substance rather than the graphical design; avoid distorting what the data have to say; present many numbers in a small space; make large data sets coherent; encourage the eye to compare data; reveal the data at several levels of detail; serve a clear purpose (description, exploration, tabulation, or decoration); and be closely integrated with the statistical and verbal descriptions of a data set. Tufte illustrates these principles through all three books, going to extremes in the care with which he presents examples, both good and bad. He has designed the books so that the reader almost never has to turn a page to see the image, graph, or table that is being described in the text. The books are set in Monotype Bembo, a lead typeface designed so that smaller sizes open the surrounding white space, producing a pleasing balance. Some of the colored pages were put through more than 20 printing steps in order to render the subtle shadings required. The books are printed on heavy paper stock, and the fact that contributing artists, the typeface, the printing company, and the bindery are all credited on one of the back flyleaves is one indication of how

  10. The Technology of Extracting Content Information from Web Page Based on DOM Tree

    Science.gov (United States)

    Yuan, Dingrong; Mo, Zhuoying; Xie, Bing; Xie, Yangcai

    There are huge amounts of information on Web pages, including content information and other, useless information such as navigation, advertisements and animation. To reduce the toil of Web users, we established a technique to extract the content information from a Web page. Firstly, we analyzed the semantics of Web documents with Google's V8 engine and parsed each Web document into a DOM tree. We then traversed the DOM tree and pruned it in light of the characteristics of the Web page's markup language. Finally, we extracted the content information from the Web page. Theory and experiments showed that the technique can simplify a Web page, present the content information to Web users, and supply clean data for application areas such as retrieval, KDD and DM on the Web.
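
    The traverse-and-prune step can be condensed into a standard-library sketch: stream the HTML, skip subtrees rooted at noise tags, and keep the remaining text. The tag blacklist is an illustrative stand-in for the paper's pruning rules, and html.parser substitutes for the V8-based parsing:

        from html.parser import HTMLParser

        PRUNE = {"nav", "script", "style", "footer", "aside"}  # illustrative noise tags

        class ContentExtractor(HTMLParser):
            """Walk the DOM as the document streams; skip pruned subtrees."""
            def __init__(self):
                super().__init__()
                self.depth_in_pruned = 0
                self.chunks = []
            def handle_starttag(self, tag, attrs):
                if tag in PRUNE or self.depth_in_pruned:
                    self.depth_in_pruned += 1
            def handle_endtag(self, tag):
                if self.depth_in_pruned:
                    self.depth_in_pruned -= 1
            def handle_data(self, data):
                if not self.depth_in_pruned and data.strip():
                    self.chunks.append(data.strip())

        html = ("<html><nav>Home | About</nav>"
                "<p>Main article text.</p><footer>© 2024</footer></html>")
        p = ContentExtractor()
        p.feed(html)
        print(" ".join(p.chunks))  # -> "Main article text."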

  11. What do professional forecasters' stock market expectations tell us about herding, information extraction and beauty contests?

    DEFF Research Database (Denmark)

    Rangvid, Jesper; Schmeling, M.; Schrimpf, A.

    2013-01-01

    We study how professional forecasters form equity market expectations based on a new micro-level dataset which includes rich cross-sectional information about individual characteristics. We focus on testing whether agents rely on the beliefs of others, i.e., consensus expectations, when forming t...... that neither information extraction to incorporate dispersed private information, nor herding for reputational reasons can fully explain these results, leaving Keynes' beauty contest argument as a potential candidate for explaining forecaster behavior....

  12. Extraction of Hidden Social Networks from Wiki-Environment Involved in Information Conflict

    OpenAIRE

    Alguliyev, Rasim M.; Ramiz M. Aliguliyev; Irada Y. Alakbarova

    2016-01-01

    Social network analysis is a widely used technique to analyze relationships among wiki-users in Wikipedia. In this paper, a method to identify hidden social networks participating in information conflicts in a wiki-environment is proposed. In particular, we describe how text clustering techniques can be used for the extraction of hidden social networks of wiki-users involved in an information conflict. By clustering unstructured text articles that caused an information conflict we ...

  13. Chemometric study of Andalusian extra virgin olive oils Raman spectra: Qualitative and quantitative information.

    Science.gov (United States)

    Sánchez-López, E; Sánchez-Rodríguez, M I; Marinas, A; Marinas, J M; Urbano, F J; Caridad, J M; Moalem, M

    2016-08-15

    Authentication of extra virgin olive oil (EVOO) is an important topic for the olive oil industry. Fraudulent practices in this sector are a major problem affecting both producers and consumers. This study analyzes the capability of FT-Raman spectroscopy combined with chemometric treatments to predict fatty acid contents (quantitative information), using gas chromatography as the reference technique, and to classify diverse EVOOs as a function of harvest year, olive variety, geographical origin and Andalusian PDO (qualitative information). The optimal number of PLS components that summarizes the spectral information was introduced progressively. For the estimation of the fatty acid composition, the lowest error (both in fitting and prediction) corresponded to MUFA, followed by SAFA and PUFA, though such errors were close to zero in all cases. As regards the qualitative variables, discriminant analysis allowed a correct classification of 94.3%, 84.0%, 89.0% and 86.6% of samples for harvest year, olive variety, geographical origin and PDO, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Separation and quantitation of methenamine in urine by ion-pair extraction.

    Science.gov (United States)

    Strom, J G; Jun, H W

    1986-04-01

    An ion-pair extraction technique is described for separating methenamine, a urinary tract antibacterial agent, from formaldehyde in human urine samples. Separation conditions are developed from extraction constants for the methenamine-bromocresol green ion-pair. The technique involves adsorption of the ion-pair onto a silica cartridge and elution with methylene chloride:1-pentanol (95:5). Methenamine is freed from the ion-pair by the addition of excess tetrabutylammonium iodide and converted to formaldehyde (determined spectrophotometrically) by reaction with ammonia and acetylacetone. Linear standard plots were obtained from urine containing methenamine which was diluted to 10-160 micrograms/mL. The lower limit of detection was 6 micrograms/mL of methenamine. Absolute recovery from urine was greater than or equal to 94.5%. The precision (CV) of detection of methenamine in the presence of formaldehyde was less than 2%, and less than or equal to 4.5% for the detection of formaldehyde in the presence of methenamine. No interferences were noted. The applicability of the method was demonstrated by analysis of human urine levels of both methenamine and formaldehyde following oral administration of a methenamine salt to a volunteer.

  15. Extracting information from the data flood of new solar telescopes. Brainstorming

    CERN Document Server

    Ramos, A Asensio

    2012-01-01

    Extracting magnetic and thermodynamic information from spectropolarimetric observations is a difficult and time consuming task. The amount of science-ready data that will be generated by the new family of large solar telescopes is so large that we will be forced to modify the present approach to inference. In this contribution, I propose several possible ways that might be useful for extracting the thermodynamic and magnetic properties of solar plasmas from such observations quickly.

  16. Extracting Information from the Data Flood of New Solar Telescopes: Brainstorming

    Science.gov (United States)

    Asensio Ramos, A.

    2012-12-01

    Extracting magnetic and thermodynamic information from spectropolarimetric observations is a difficult and time consuming task. The amount of science-ready data that will be generated by the new family of large solar telescopes is so large that we will be forced to modify the present approach to inference. In this contribution, I propose several possible ways that might be useful for extracting the thermodynamic and magnetic properties of solar plasmas from such observations quickly.

  17. Quantitative analysis of sesquiterpene lactones in extract of Arnica montana L. by 1H NMR spectroscopy.

    Science.gov (United States)

    Staneva, Jordanka; Denkova, Pavletta; Todorova, Milka; Evstatieva, Ljuba

    2011-01-01

    (1)H NMR spectroscopy was used as a method for quantitative analysis of the sesquiterpene lactones present in a crude lactone fraction isolated from Arnica montana. Eight main components - tigloyl-, methacryloyl-, isobutyryl- and 2-methylbutyryl-esters of helenalin (H) and 11α,13-dihydrohelenalin (DH) - were identified in the studied sample. The method allows determination of the total amount of sesquiterpene lactones, as well as separate quantification of the helenalin- and 11α,13-dihydrohelenalin-type esters. Furthermore, 6-O-tigloylhelenalin (HT, 1), 6-O-methacryloylhelenalin (HM, 2), 6-O-tigloyl-11α,13-dihydrohelenalin (DHT, 5), and 6-O-methacryloyl-11α,13-dihydrohelenalin (DHM, 6) were quantified as individual components.

  18. Research of building information extraction and evaluation based on high-resolution remote-sensing imagery

    Science.gov (United States)

    Cao, Qiong; Gu, Lingjia; Ren, Ruizhi; Wang, Lang

    2016-09-01

    Building extraction is currently an important application of high-resolution remote sensing imagery. Quite a few algorithms are available for detecting building information; however, most of them still have obvious disadvantages, such as ignoring spectral information or trading extraction rate against extraction accuracy. The purpose of this research is to develop an effective method to detect building information in Chinese GF-1 data. Firstly, image preprocessing is used to normalize the image, and image enhancement is used to highlight the useful information. Secondly, multi-spectral information is analyzed. Subsequently, an improved morphological building index (IMBI) based on remote sensing imagery is proposed to obtain candidate building objects. Furthermore, in order to refine the building objects and remove false objects, post-processing (e.g., shape features, the vegetation index and the water index) is employed. To validate the effectiveness of the proposed algorithm, the omission error (OE), commission error (CE), overall accuracy (OA) and Kappa are used in the final evaluation. The proposed method can not only effectively use spectral information and other basic features, but also avoid extracting excessive interference details from high-resolution remote sensing images. Compared to the original MBI algorithm, the proposed method reduces the OE by 33.14%; at the same time, the Kappa increases by 16.09%. In the experiments, IMBI achieved satisfactory results and outperformed other algorithms in terms of both accuracy and visual inspection.
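
    The four evaluation measures quoted (OE, CE, OA, Kappa) all follow from a binary confusion matrix; a small sketch with invented counts shows the arithmetic:

        # Invented confusion-matrix counts for building vs. background pixels:
        tp, fn = 820, 180    # buildings detected / missed (omission)
        fp, tn = 95, 8905    # false buildings (commission) / background kept
        total = tp + fn + fp + tn

        oe = fn / (tp + fn)            # omission error
        ce = fp / (tp + fp)            # commission error
        oa = (tp + tn) / total         # overall accuracy
        # Expected chance agreement from the row and column marginals:
        pe = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / total ** 2
        kappa = (oa - pe) / (1 - pe)   # chance-corrected agreement

        print(f"OE={oe:.2%}  CE={ce:.2%}  OA={oa:.2%}  Kappa={kappa:.3f}")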

  19. Extracting important information from Chinese Operation Notes with natural language processing methods.

    Science.gov (United States)

    Wang, Hui; Zhang, Weide; Zeng, Qiang; Li, Zuofeng; Feng, Kaiyan; Liu, Lei

    2014-04-01

    Extracting information from unstructured clinical narratives is valuable for many clinical applications. Although Natural Language Processing (NLP) methods have been profoundly studied in electronic medical records (EMR), few studies have explored NLP for extracting information from Chinese clinical narratives. In this study, we report the development and evaluation of methods for extracting tumor-related information from operation notes of hepatic carcinomas written in Chinese. Using 86 operation notes manually annotated by physicians as the training set, we explored both rule-based and supervised machine-learning approaches. Evaluated on 29 unseen operation notes, our best approach yielded 69.6% precision, 58.3% recall and a 63.5% F-score. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Automatically extracting clinically useful sentences from UpToDate to support clinicians' information needs.

    Science.gov (United States)

    Mishra, Rashmi; Del Fiol, Guilherme; Kilicoglu, Halil; Jonnalagadda, Siddhartha; Fiszman, Marcelo

    2013-01-01

    Clinicians raise several information needs in the course of care. Most of these needs can be met by online health knowledge resources such as UpToDate. However, finding relevant information in these resources often requires significant time and cognitive effort. To design and assess algorithms for extracting from UpToDate the sentences that represent the most clinically useful information for patient care decision making. We developed algorithms based on semantic predications extracted with SemRep, a semantic natural language processing parser. Two algorithms were compared against a gold standard composed of UpToDate sentences rated in terms of clinical usefulness. Clinically useful sentences were strongly correlated with predication frequency (correlation = 0.95). The two algorithms did not differ in terms of top-ten precision (53% vs. 49%; p = 0.06). Semantic predications may serve as the basis for extracting clinically useful sentences. Future research is needed to improve the algorithms.

  1. Development of response surface methodology for optimization of extraction parameters and quantitative estimation of embelin from Embelia ribes Burm by high performance liquid chromatography

    Science.gov (United States)

    Alam, Md. Shamsir; Damanhouri, Zoheir A.; Ahmad, Aftab; Abidin, Lubna; Amir, Mohd; Aqil, Mohd; Khan, Shah Alam; Mujeeb, Mohd

    2015-01-01

    Background: Embelia ribes Burm is a widely used medicinal plant for the treatment of different types of disorders in the Indian traditional systems of medicine. Objective: The present work aimed to optimize the extraction parameters of embelin from E. ribes fruits and also to quantify the embelin content in different extracts of the plant. Materials and Methods: Optimization of extraction parameters such as solvent:drug ratio, temperature and time was carried out by response surface methodology (RSM). Quantitative estimation of embelin in different extracts of E. ribes fruits was done through high performance liquid chromatography. Results: The optimal conditions determined for extraction of embelin through RSM were extraction time (27.50 min), extraction temperature (45°C) and solvent:drug ratio (8:1). Under the optimized conditions, the embelin yield (32.71%) was comparable to the predicted yield (31.07%; P > 0.05). These results show that the developed model is satisfactory and suitable for the extraction process of embelin. The analysis of variance showed a high goodness of model fit and the suitability of the RSM method for improving embelin extraction from the fruits of E. ribes. Conclusion: This may be a useful method for the extraction and quantitative estimation of embelin from the fruits of E. ribes. PMID:26109763

  2. Development of response surface methodology for optimization of extraction parameters and quantitative estimation of embelin from Embelia ribes Burm by high performance liquid chromatography

    Directory of Open Access Journals (Sweden)

    Md. Shamsir Alam

    2015-01-01

    Full Text Available Background: Embelia ribes Burm is a widely used medicinal plant for the treatment of different types of disorders in the Indian traditional systems of medicine. Objective: The present work aimed to optimize the extraction parameters of embelin from E. ribes fruits and also to quantify the embelin content in different extracts of the plant. Materials and Methods: Optimization of extraction parameters such as solvent:drug ratio, temperature and time was carried out by response surface methodology (RSM). Quantitative estimation of embelin in different extracts of E. ribes fruits was done through high performance liquid chromatography. Results: The optimal conditions determined for extraction of embelin through RSM were extraction time (27.50 min), extraction temperature (45°C) and solvent:drug ratio (8:1). Under the optimized conditions, the embelin yield (32.71%) was comparable to the predicted yield (31.07%; P > 0.05). These results show that the developed model is satisfactory and suitable for the extraction process of embelin. The analysis of variance showed a high goodness of model fit and the suitability of the RSM method for improving embelin extraction from the fruits of E. ribes. Conclusion: This may be a useful method for the extraction and quantitative estimation of embelin from the fruits of E. ribes.
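
    Response surface methodology of the kind used here fits a second-order polynomial to a designed experiment and locates the stationary point. A compact sketch on synthetic yield data (the design levels and the simulated response are invented; interaction terms are omitted for brevity):

        import numpy as np
        from itertools import product

        # Synthetic full-factorial design: time (min), temperature (°C), solvent:drug ratio.
        levels = {"time": [15, 25, 35], "temp": [35, 45, 55], "ratio": [4, 8, 12]}
        X = np.array(list(product(*levels.values())), dtype=float)

        def simulated_yield(t, T, r):  # hidden response standing in for experiments
            return 32 - 0.01*(t - 27.5)**2 - 0.02*(T - 45)**2 - 0.1*(r - 8)**2

        y = np.array([simulated_yield(*row) for row in X])

        # Second-order model: intercept, linear and squared terms per factor.
        def design(X):
            return np.column_stack([np.ones(len(X)), X, X**2])

        beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)

        # Optimum of each separable quadratic: x* = -b_linear / (2 * b_quadratic).
        opt = [-beta[1 + i] / (2 * beta[4 + i]) for i in range(3)]
        print("predicted optimum (time, temp, ratio):", np.round(opt, 2))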

  3. Quantitative extraction of the jet transport parameter from combined data at RHIC and LHC

    CERN Document Server

    Wang, Xin-Nian

    2014-01-01

    Using theoretical tools developed by the JET Collaboration, in which one employs 2+1D or 3+1D hydrodynamic models for the bulk medium evolution together with jet quenching models, the combined data on suppression of single inclusive hadron spectra at both RHIC and LHC are systematically analyzed with five different approaches to parton energy loss. The jet transport parameter is extracted from the best fits to the data, with values of $\hat{q} \approx 1.2 \pm 0.3$ and $1.9 \pm 0.7$ GeV$^2$/fm in the center of the most central Au+Au collisions at $\sqrt{s}=200$ GeV and Pb+Pb collisions at $\sqrt{s}=2.76$ TeV, respectively, at an initial time $\tau_0=0.6$ fm/$c$ for a quark jet with an initial energy of 10 GeV/$c$.

  4. Quantitative methodology to extract regional magnetotelluric impedances and determine the dimension of the conductivity structure

    Energy Technology Data Exchange (ETDEWEB)

    Groom, R. [PetRos EiKon Incorporated, Ontario (Canada); Kurtz, R.; Jones, A.; Boerner, D. [Geological Survey of Canada, Ontario (Canada)

    1996-05-01

    This paper describes a systematic method for determining the appropriate dimensionality of magnetotelluric (MT) data from a site, and illustrates the application of this method to analyze both synthetic data and real data. Additionally, it describes the extraction of regional impedance responses from multiple sites. This method was examined extensively with synthetic data, and proven to be successful. It was demonstrated for two neighboring sites that the analysis methodology can be extremely useful in unraveling the bulk regional response when hidden by strong three-dimensional effects. Although there may still be some uncertainties remaining in the true levels for the regional responses for stations LIT000 and LITW02, the analysis has provided models which not only fit the data but are consistent for neighboring sites. It was suggested from these data that the stations are seeing significantly different structures. 12 refs.

  5. Extracting additional risk managers information from a risk assessment of Listeria monocytogenes in deli meats

    NARCIS (Netherlands)

    Pérez-Rodríguez, F.; Asselt, van E.D.; García-Gimeno, R.M.; Zurera, G.; Zwietering, M.H.

    2007-01-01

    The risk assessment study of Listeria monocytogenes in ready-to-eat foods conducted by the U.S. Food and Drug Administration is an example of an extensive quantitative microbiological risk assessment that could be used by risk analysts and other scientists to obtain information and by managers and s

  6. Extraction of Informative Blocks from Deep Web Page Using Similar Layout Feature

    OpenAIRE

    Zeng,Jun; Flanagan, Brendan; Hirokawa, Sachio

    2013-01-01

    Due to the explosive growth and popularity of the deep web, information extraction from deep web pages has gained more and more attention. However, the HTML structure of web pages has become more complicated, making it difficult to recognize target content by analyzing the HTML source code alone. In this paper, we propose a method to extract the informative blocks from a deep web page using layout features. We consider the visual rectangular region of an HTML element as a visual block in the web page....

  7. Information extraction for legal knowledge representation – a review of approaches and trends

    Directory of Open Access Journals (Sweden)

    Denis Andrei de Araujo

    2014-11-01

    Full Text Available This work presents an introduction to Information Extraction systems and a survey of the known approaches to Information Extraction in the legal domain. It analyzes with particular attention the techniques that rely on the representation of legal knowledge as a means to achieve better performance, with emphasis on those techniques including ontologies and linguistic support. Some details of the systems' implementations are presented, followed by an analysis of the positive and negative points of each approach, aiming to give the reader a critical perspective on the solutions studied.

  8. Extracting information from two-dimensional electrophoresis gels by partial least squares regression

    DEFF Research Database (Denmark)

    Jessen, Flemming; Lametsch, R.; Bendixen, E.;

    2002-01-01

    Two-dimensional gel electrophoresis (2-DE) produces large amounts of data, and extraction of relevant information from these data demands a cautious and time consuming process of spot pattern matching between gels. The classical approach of data analysis is to detect protein markers that appear ... of all proteins/spots in the gels. In the present study it is demonstrated how information can be extracted by multivariate data analysis. The strategy is based on partial least squares regression followed by variable selection to find proteins that individually or in combination with other proteins vary ...

  9. Information Extraction of High-Resolution Remotely Sensed Image Based on Multiresolution Segmentation

    Directory of Open Access Journals (Sweden)

    Peng Shao

    2014-08-01

    Full Text Available The principle of multiresolution segmentation is presented in detail in this study, and the Canny algorithm is applied for edge detection of a remotely sensed image based on this principle. The target image is divided into regions based on object-oriented multiresolution segmentation and edge detection. Furthermore, an object hierarchy is created, and a series of features (water bodies, vegetation, roads, residential areas, bare land and other information) is extracted using spectral and geometrical features. The results indicate that edge detection has a positive effect on multiresolution segmentation, and the overall accuracy of information extraction reaches 94.6% by the confusion matrix.

  10. Is it Possible to Extract Brain Metabolic Pathways Information from In Vivo H Nuclear Magnetic Resonance Spectroscopy Data?

    CERN Document Server

    de Lara, Alejandro Chinea Manrique

    2010-01-01

    In vivo H nuclear magnetic resonance (NMR) spectroscopy is an important tool for performing non-invasive quantitative assessments of brain tumour glucose metabolism. Brain tumours are considered fast-growth tumours because of their high rate of proliferation. In addition, tumour cells exhibit profound genetic, biochemical and histological differences with respect to the original non-transformed cellular types. Therefore, there is a strong interest from the clinical investigator's point of view in understanding the role of brain metabolites in normal and pathological conditions and especially in the development of early tumour detection techniques. Unfortunately, current diagnosis techniques ignore the dynamic aspects of these signals. It is largely believed that temporal variations of NMR spectra are noisy or simply do not carry enough information to be exploited by any reliable diagnosis procedure. Thus, current diagnosis procedures are mainly based on empirical observations extracted from single avera...

  11. Research in health sciences library and information science: a quantitative analysis.

    Science.gov (United States)

    Dimitroff, A

    1992-10-01

    A content analysis of research articles published between 1966 and 1990 in the Bulletin of the Medical Library Association was undertaken. Four specific questions were addressed: What subjects are of interest to health sciences librarians? Who is conducting this research? How do health sciences librarians conduct their research? Do health sciences librarians obtain funding for their research activities? Bibliometric characteristics of the research articles are described and compared to characteristics of research in library and information science as a whole in terms of subject and methodology. General findings were that most research in health sciences librarianship is conducted by librarians affiliated with academic health sciences libraries (51.8%); most deals with an applied (45.7%) or a theoretical (29.2%) topic; survey (41.0%) or observational (20.7%) research methodologies are used; descriptive quantitative analytical techniques are used (83.5%); and over 25% of research is funded. The average number of authors was 1.85, average article length was 7.25 pages, and average number of citations per article was 9.23. These findings are consistent with those reported in the general library and information science literature for the most part, although specific differences do exist in methodological and analytical areas.

  12. Inexperienced clinicians can extract pathoanatomic information from MRI narrative reports with high reproducibility for use in research/quality assurance

    Directory of Open Access Journals (Sweden)

    Kent Peter

    2011-07-01

    Full Text Available Background: Although reproducibility in reading MRI images amongst radiologists and clinicians has been studied previously, no studies have examined the reproducibility of inexperienced clinicians in extracting pathoanatomic information from magnetic resonance imaging (MRI) narrative reports and transforming that information into quantitative data. However, this process is frequently required in research and quality assurance contexts. The purpose of this study was to examine inter-rater reproducibility (agreement and reliability) among an inexperienced group of clinicians in extracting spinal pathoanatomic information from radiologist-generated MRI narrative reports. Methods: Twenty MRI narrative reports were randomly extracted from an institutional database. A group of three physiotherapy students independently reviewed the reports and coded the presence of 14 common pathoanatomic findings using a categorical electronic coding matrix. Decision rules were developed after initial coding in an effort to resolve ambiguities in narrative reports. This process was repeated a further three times using separate samples of 20 MRI reports until no further ambiguities were identified (total n = 80). Reproducibility between trainee clinicians and two highly trained raters was examined in an arbitrary coding round, with agreement measured using percentage agreement and reliability measured using unweighted Kappa (k). Reproducibility was then examined in another group of three trainee clinicians who had not participated in the production of the decision rules, using another sample of 20 MRI reports. Results: The mean percentage agreement for paired comparisons between the initial trainee clinicians improved over the four coding rounds (97.9-99.4%), although the greatest improvement was observed after the first introduction of coding rules. High inter-rater reproducibility was observed between trainee clinicians across 14 pathoanatomic categories over the

  13. Using text mining techniques to extract phenotypic information from the PhenoCHF corpus.

    Science.gov (United States)

    Alnazzawi, Noha; Thompson, Paul; Batista-Navarro, Riza; Ananiadou, Sophia

    2015-01-01

    Phenotypic information locked away in unstructured narrative text presents significant barriers to information accessibility, both for clinical practitioners and for computerised applications used for clinical research purposes. Text mining (TM) techniques have previously been applied successfully to extract different types of information from text in the biomedical domain. They have the potential to be extended to allow the extraction of information relating to phenotypes from free text. To stimulate the development of TM systems that are able to extract phenotypic information from text, we have created a new corpus (PhenoCHF) that is annotated by domain experts with several types of phenotypic information relating to congestive heart failure. To ensure that systems developed using the corpus are robust to multiple text types, it integrates text from heterogeneous sources, i.e., electronic health records (EHRs) and scientific articles from the literature. We have developed several different phenotype extraction methods to demonstrate the utility of the corpus, and tested these methods on a further corpus, i.e., ShARe/CLEF 2013. Evaluation of our automated methods showed that PhenoCHF can facilitate the training of reliable phenotype extraction systems, which are robust to variations in text type. These results have been reinforced by evaluating our trained systems on the ShARe/CLEF corpus, which contains clinical records of various types. Like other studies within the biomedical domain, we found that solutions based on conditional random fields produced the best results, when coupled with a rich feature set. PhenoCHF is the first annotated corpus aimed at encoding detailed phenotypic information. The unique heterogeneous composition of the corpus has been shown to be advantageous in the training of systems that can accurately extract phenotypic information from a range of different text types. Although the scope of our annotation is currently limited to a single

  14. Understanding the information needs of people with haematological cancers. A meta-ethnography of quantitative and qualitative research.

    Science.gov (United States)

    Atherton, K; Young, B; Salmon, P

    2017-02-10

    Clinical practice in haematological oncology often involves difficult diagnostic and treatment decisions. In this context, understanding patients' information needs and the functions that information serves for them is particularly important. We systematically reviewed qualitative and quantitative evidence on haematological oncology patients' information needs to inform how these needs can best be addressed in clinical practice. PsycINFO, Medline and CINAHL Plus electronic databases were searched for relevant empirical papers published from January 2003 to July 2016. Synthesis of the findings drew on meta-ethnography and meta-study. Most quantitative studies used a survey design and indicated that patients are largely content with the information they receive from physicians, however much or little they actually receive, although a minority of patients are not content with information. Qualitative studies suggest that a sense of being in a caring relationship with a physician allows patients to feel content with the information they have been given, whereas patients who lack such a relationship want more information. The qualitative evidence can help explain the lack of association between the amount of information received and contentment with it in the quantitative research. Trusting relationships are integral to helping patients feel that their information needs have been met. © 2017 John Wiley & Sons Ltd.

  15. Quantitative analysis of proteome extracted from barley crowns grown under different drought conditions.

    Science.gov (United States)

    Vítámvás, Pavel; Urban, Milan O; Škodáček, Zbynek; Kosová, Klára; Pitelková, Iva; Vítámvás, Jan; Renaut, Jenny; Prášil, Ilja T

    2015-01-01

    Barley cultivar Amulet was used to study the quantitative proteome changes through different drought conditions utilizing two-dimensional difference gel electrophoresis (2D-DIGE). Plants were cultivated for 10 days under different drought conditions. To obtain control and differentially drought-treated plants, the soil water content was kept at 65, 35, and 30% of soil water capacity (SWC), respectively. Osmotic potential, water saturation deficit, ¹³C discrimination, and dehydrin accumulation were monitored during sampling of the crowns for proteome analysis. Analysis of the 2D-DIGE gels revealed 105 differentially abundant spots; most were differentially abundant between the controls and drought-treated plants, and 25 spots displayed changes between both drought conditions. Seventy-six protein spots were successfully identified by tandem mass spectrometry. The most frequent functional categories of the identified proteins can be put into the groups of: stress-associated proteins, amino acid metabolism, carbohydrate metabolism, as well as DNA and RNA regulation and processing. Their possible role in the response of barley to drought stress is discussed. Our study has shown that under drought conditions barley cv. Amulet decreased its growth and developmental rates, displayed a shift from aerobic to anaerobic metabolism, and exhibited increased levels of several protective proteins. Comparison of the two drought treatments revealed plant acclimation to milder drought (35% SWC), but plant damage under more severe drought treatment (30% SWC). The results obtained revealed that cv. Amulet is sensitive to drought stress. Additionally, four spots revealing a continuous and significant increase with decreasing SWC (UDP-glucose 6-dehydrogenase, glutathione peroxidase, and two non-identified) could be good candidates for testing of their protein phenotyping capacity together with proteins that were significantly distinguished in both drought treatments.

  16. QUANTITATIVE ANALYSIS OF PROTEOME EXTRACTED FROM BARLEY CROWNS GROWN UNDER DIFFERENT DROUGHT CONDITIONS

    Directory of Open Access Journals (Sweden)

    Pavel eVítámvás

    2015-06-01

    Full Text Available Barley cv. Amulet was used to study the quantitative proteome changes through different drought conditions utilizing two-dimensional difference gel electrophoresis (2D-DIGE). Plants were cultivated for ten days under different drought conditions. To obtain control and differentially drought-treated plants, the soil water content was kept at 65%, 35%, and 30% of soil water capacity (SWC), respectively. Osmotic potential, water saturation deficit, ¹³C discrimination, and dehydrin accumulation were monitored during sampling of the crowns for proteome analysis. Analysis of the 2D-DIGE gels revealed 105 differentially abundant spots; most were differentially abundant between the controls and drought-treated plants, and 25 spots displayed changes between both drought conditions. Seventy-six protein spots were successfully identified by tandem mass spectrometry. The most frequent functional categories of the identified proteins can be put into the groups of: stress-associated proteins, amino acid metabolism, carbohydrate metabolism, as well as DNA & RNA regulation and processing. Their possible role in the response of barley to drought stress is discussed. Our study has shown that under drought conditions barley cultivar Amulet decreased its growth and developmental rates, displayed a shift from aerobic to anaerobic metabolism, and exhibited increased levels of several protective proteins. Comparison of the two drought treatments revealed plant acclimation to the milder drought (35% SWC), but plant damage under the more severe drought treatment (30% SWC). The results obtained revealed that cv. Amulet is sensitive to drought stress. Additionally, four spots revealing a continuous and significant increase with decreasing SWC (UDP-glucose 6-dehydrogenase, glutathione peroxidase, and two non-identified) could be good candidates for testing of their protein phenotyping capacity together with proteins that were significantly distinguished in both drought treatments.

  17. Validation of a quantitative NMR method for suspected counterfeit products exemplified on determination of benzethonium chloride in grapefruit seed extracts.

    Science.gov (United States)

    Bekiroglu, Somer; Myrberg, Olle; Ostman, Kristina; Ek, Marianne; Arvidsson, Torbjörn; Rundlöf, Torgny; Hakkarainen, Birgit

    2008-08-05

    A 1H-nuclear magnetic resonance (NMR) spectroscopy method for quantitative determination of benzethonium chloride (BTC) as a constituent of grapefruit seed extract was developed. The method was validated, assessing its specificity, linearity, range, and precision, as well as accuracy, limit of quantification and robustness. The method includes quantification using an internal reference standard, 1,3,5-trimethoxybenzene, and is regarded as simple, rapid, and easy to implement. A commercial grapefruit seed extract was studied and the experiments were performed on spectrometers operating at two different fields, 300 and 600 MHz proton frequencies, the former with a broad band (BB) probe and the latter equipped with both a BB probe and a CryoProbe. The average concentration for the product sample was 78.0, 77.8 and 78.4 mg/ml using the 300 MHz BB probe, the 600 MHz BB probe and the CryoProbe, respectively. The standard deviations and relative standard deviations (R.S.D., in parentheses) for the average concentrations were 0.2 (0.3%), 0.3 (0.4%) and 0.3 mg/ml (0.4%), respectively.
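
    As a worked illustration of the internal-standard calculation such a qNMR method rests on, the sketch below computes an analyte concentration from integrated peak areas; the function and all numbers are hypothetical assumptions, not values from the study.

    ```python
    # Minimal sketch of internal-standard qNMR quantification; all names and
    # numbers below are illustrative assumptions, not the study's data.
    def qnmr_concentration(I_a, N_a, I_s, N_s, M_a, m_s, M_s, V):
        """Analyte concentration (mg/ml) relative to an internal standard.

        I_a, I_s: integrated signal areas of analyte and standard
        N_a, N_s: protons contributing to each integrated signal
        M_a, M_s: molar masses (g/mol); m_s: standard mass (mg); V: volume (ml)
        """
        molar_ratio = (I_a / N_a) / (I_s / N_s)   # mol analyte per mol standard
        return molar_ratio * (m_s / M_s) * M_a / V

    # Hypothetical integrals for a BTC-like analyte vs 1,3,5-trimethoxybenzene:
    print(qnmr_concentration(I_a=1.95, N_a=2, I_s=3.0, N_s=9,
                             M_a=448.1, m_s=10.0, M_s=168.19, V=1.0))
    ```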

  18. Abacus: a computational tool for extracting and pre-processing spectral count data for label-free quantitative proteomic analysis.

    Science.gov (United States)

    Fermin, Damian; Basrur, Venkatesha; Yocum, Anastasia K; Nesvizhskii, Alexey I

    2011-04-01

    We describe Abacus, a computational tool for extracting spectral counts from MS/MS data sets. The program aggregates data from multiple experiments, adjusts spectral counts to accurately account for peptides shared across multiple proteins, and performs common normalization steps. It can also output the spectral count data at the gene level, thus simplifying the integration and comparison between gene and protein expression data. Abacus is compatible with the widely used Trans-Proteomic Pipeline suite of tools and comes with a graphical user interface making it easy to interact with the program. The main aim of Abacus is to streamline the analysis of spectral count data by providing an automated, easy to use solution for extracting this information from proteomic data sets for subsequent, more sophisticated statistical analysis.
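
    To make the normalization step concrete, here is a minimal sketch of one widely used spectral-count normalization (NSAF); the protein names, counts and lengths are invented, and this is not Abacus code.

    ```python
    # Sketch of Normalized Spectral Abundance Factor (NSAF) normalization,
    # one common step automated by spectral-count tools; data are made up.
    def nsaf(spectral_counts, lengths):
        """Return NSAF per protein: (count/length) scaled to sum to 1."""
        saf = {p: spectral_counts[p] / lengths[p] for p in spectral_counts}
        total = sum(saf.values())
        return {p: round(v / total, 4) for p, v in saf.items()}

    counts = {"P1": 120, "P2": 35, "P3": 8}      # spectral counts per protein
    lengths = {"P1": 450, "P2": 210, "P3": 95}   # protein lengths (residues)
    print(nsaf(counts, lengths))
    ```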

  19. Information retrieval and terminology extraction in online resources for patients with diabetes.

    Science.gov (United States)

    Seljan, Sanja; Baretić, Maja; Kucis, Vlasta

    2014-06-01

    Terminology use, as a means for information retrieval or document indexing, plays an important role in health literacy. Specific types of users, i.e. patients with diabetes, need access to various online resources (in a foreign and/or their native language) when searching for information on self-education in basic diabetic knowledge, on self-care activities regarding the importance of dietetic food, medications and physical exercise, and on self-management of insulin pumps. Automatic extraction of corpus-based terminology from online texts, manuals or professional papers can help in building terminology lists or lists of "browsing phrases" useful in information retrieval or document indexing. Specific terminology lists represent an intermediate step between free-text search and a controlled vocabulary, between users' demands and existing online resources in native and foreign languages. The research, aiming to detect the role of terminology in online resources, is conducted on English and Croatian manuals and Croatian online texts, and divided into three interrelated parts: i) comparison of professional and popular terminology use; ii) evaluation of automatic statistically-based terminology extraction on English and Croatian texts; iii) comparison and evaluation of extracted terminology performed on an English manual using statistical and hybrid approaches. Extracted terminology candidates are evaluated by comparison with three types of reference lists: a list created by a professional medical person, a list of highly professional vocabulary contained in MeSH, and a list created by non-medical persons, made as the intersection of 15 lists. Results report on the use of popular and professional terminology in online diabetes resources, on the evaluation of automatically extracted terminology candidates in English and Croatian texts, and on the comparison of statistical and hybrid extraction methods in English text. Evaluation of automatic and semi-automatic terminology extraction methods is performed by recall

  20. [An improved N-FINDR endmember extraction algorithm based on manifold learning and spatial information].

    Science.gov (United States)

    Tang, Xiao-yan; Gao, Kun; Ni, Guo-qiang; Zhu, Zhen-yu; Cheng, Hao-bo

    2013-09-01

    An improved N-FINDR endmember extraction algorithm combining manifold learning and spatial information is presented under nonlinear mixing assumptions. Firstly, adaptive local tangent space alignment is adopted to seek potential intrinsic low-dimensional structures of hyperspectral high-dimensional data and reduce the original data into a low-dimensional space. Secondly, spatial preprocessing is used by enhancing each pixel vector in spatially homogeneous areas, according to the continuity of the spatial distribution of the materials. Finally, endmembers are extracted by looking for the largest simplex volume. The proposed method can increase the precision of endmember extraction by addressing the nonlinearity of hyperspectral data and taking advantage of spatial information. Experimental results on simulated and real hyperspectral data demonstrate that the proposed approach outperformed the geodesic simplex volume maximization (GSVM), vertex component analysis (VCA) and spatial preprocessing N-FINDR (SPPNFINDR) methods.
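
    The core N-FINDR criterion, searching for the pixel set spanning the largest simplex volume, can be sketched as below; the brute-force search and random data are illustrative assumptions (real N-FINDR iteratively swaps pixels rather than enumerating all combinations).

    ```python
    import math
    from itertools import combinations
    import numpy as np

    def simplex_volume(E):
        """Volume of the simplex whose p vertices (rows of E) live in p-1 dims."""
        p = E.shape[0]
        A = np.vstack([np.ones(p), E.T])                 # (p, p) augmented matrix
        return abs(np.linalg.det(A)) / math.factorial(p - 1)

    def n_findr_exhaustive(X, p):
        """Brute-force endmember search over pixels X (n, p-1); tiny n only."""
        best = max(combinations(range(len(X)), p),
                   key=lambda idx: simplex_volume(X[list(idx)]))
        return list(best)

    rng = np.random.default_rng(0)
    X = rng.random((50, 2))          # 50 pixels reduced to 2-D, i.e. p = 3 endmembers
    print(n_findr_exhaustive(X, 3))  # indices of the maximum-volume pixel triple
    ```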

  1. A method of building information extraction based on mathematical morphology and multiscale

    Science.gov (United States)

    Li, Jing-wen; Wang, Ke; Zhang, Zi-ping; Xue, Long-li; Yin, Shou-qiang; Zhou, Song

    2015-12-01

    To monitor changes in buildings on the Earth's surface, this paper analyzes the distribution characteristics of buildings in remote sensing images and, combining multi-scale image segmentation with the advantages of mathematical morphology, proposes a segmentation method for high-resolution remote sensing images based on multiple scales and mathematical morphology. Building information is then extracted using a multiple fuzzy classification method together with a shadow-based auxiliary method. Compared with k-means classification and the traditional maximum likelihood classification method, the experimental results show that the object-based, multi-scale, mathematical-morphology segmentation and extraction method can accurately extract building structure information and produce clearer classification data, providing a basis and theoretical support for the intelligent monitoring of Earth data.

  2. Extraction and Network Sharing of Forest Vegetation Information based on SVM

    Directory of Open Access Journals (Sweden)

    Zhang Hannv

    2013-05-01

    Full Text Available The support vector machine (SVM) is a new method of data mining, which can deal very well with regression problems (time series analysis), pattern recognition (classification, discriminant analysis) and many other issues. In recent years, SVM has been widely used in computer classification and recognition of remote sensing images. Based on Landsat TM image data, this paper uses a classification method based on the support vector machine to extract forest cover information for the Dahuanggou tree farm of the Changbai Mountain area, and compares it with the conventional maximum likelihood classification. The results show that the extraction accuracy of forest information based on the support vector machine is high, with Kappa values of 0.9810, 0.9716 and 0.9753, exceeding the extraction accuracy of the maximum likelihood method (MLC), whose Kappa value is 0.9634; the method has good maneuverability and practicality.
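
    As a small worked example of the Kappa statistic quoted above, the sketch below computes Cohen's kappa from a confusion matrix; the forest/non-forest counts are invented for illustration.

    ```python
    import numpy as np

    # Cohen's kappa from a confusion matrix; the counts here are made up.
    def cohens_kappa(cm):
        cm = np.asarray(cm, dtype=float)
        n = cm.sum()
        po = np.trace(cm) / n                                 # observed agreement
        pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
        return (po - pe) / (1 - pe)

    cm = [[485, 10],    # rows: reference forest / non-forest
          [5, 500]]     # cols: classified forest / non-forest
    print(round(cohens_kappa(cm), 4))
    ```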

  3. OpenCV-Based Nanomanipulation Information Extraction and the Probe Operation in SEM

    Directory of Open Access Journals (Sweden)

    Dongjie Li

    2015-02-01

    Full Text Available For an established telenanomanipulation system, this paper studies a method of extracting location information and strategies for probe operation. First, the machine learning algorithm of OpenCV was used to extract location information from SEM images, so that nanowires and the probe in SEM images can be automatically tracked and the region of interest (ROI) marked quickly; the locations of the nanowire and probe can then be extracted from the ROI. To study the probe operation strategy, the Van der Waals force between the probe and a nanowire was computed to obtain the relevant operating parameters. With these operating parameters, the nanowire can be pre-operated in a 3D virtual environment and an optimal path for the probe obtained. The actual probe then runs automatically under the telenanomanipulation system's control. Finally, experiments were carried out to verify the above methods, and the results show that the designed methods achieve the expected effect.

  4. Effective Information Extraction Framework for Heterogeneous Clinical Reports Using Online Machine Learning and Controlled Vocabularies

    Science.gov (United States)

    Zheng, Shuai; Ghasemzadeh, Nima; Hayek, Salim S; Quyyumi, Arshed A

    2017-01-01

    Background Extracting structured data from narrated medical reports is challenged by the complexity of heterogeneous structures and vocabularies and often requires significant manual effort. Traditional machine-based approaches lack the capability to take user feedback for improving the extraction algorithm in real time. Objective Our goal was to provide a generic information extraction framework that can support diverse clinical reports and enables a dynamic interaction between a human and a machine that produces highly accurate results. Methods A clinical information extraction system, IDEAL-X, has been built on top of online machine learning. It processes one document at a time, and user interactions are recorded as feedback to update the learning model in real time. The updated model is used to predict values for extraction in subsequent documents. Once prediction accuracy reaches a user-acceptable threshold, the remaining documents may be batch processed. A customizable controlled vocabulary may be used to support extraction. Results Three datasets were used for experiments based on report styles: 100 cardiac catheterization procedure reports, 100 coronary angiographic reports, and 100 integrated reports, each combining a history and physical report, discharge summary, outpatient clinic notes, outpatient clinic letter, and inpatient discharge medication report. Data extraction was performed by 3 methods: online machine learning, controlled vocabularies, and a combination of these. The system delivers results with F1 scores greater than 95%. Conclusions IDEAL-X adopts a unique online machine learning–based approach combined with controlled vocabularies to support data extraction for clinical reports. The system can quickly learn and improve, thus it is highly adaptable. PMID:28487265

  5. Effective Information Extraction Framework for Heterogeneous Clinical Reports Using Online Machine Learning and Controlled Vocabularies.

    Science.gov (United States)

    Zheng, Shuai; Lu, James J; Ghasemzadeh, Nima; Hayek, Salim S; Quyyumi, Arshed A; Wang, Fusheng

    2017-05-09

    Extracting structured data from narrated medical reports is challenged by the complexity of heterogeneous structures and vocabularies and often requires significant manual effort. Traditional machine-based approaches lack the capability to take user feedback for improving the extraction algorithm in real time. Our goal was to provide a generic information extraction framework that can support diverse clinical reports and enables a dynamic interaction between a human and a machine that produces highly accurate results. A clinical information extraction system, IDEAL-X, has been built on top of online machine learning. It processes one document at a time, and user interactions are recorded as feedback to update the learning model in real time. The updated model is used to predict values for extraction in subsequent documents. Once prediction accuracy reaches a user-acceptable threshold, the remaining documents may be batch processed. A customizable controlled vocabulary may be used to support extraction. Three datasets were used for experiments based on report styles: 100 cardiac catheterization procedure reports, 100 coronary angiographic reports, and 100 integrated reports, each combining a history and physical report, discharge summary, outpatient clinic notes, outpatient clinic letter, and inpatient discharge medication report. Data extraction was performed by 3 methods: online machine learning, controlled vocabularies, and a combination of these. The system delivers results with F1 scores greater than 95%. IDEAL-X adopts a unique online machine learning-based approach combined with controlled vocabularies to support data extraction for clinical reports. The system can quickly learn and improve, thus it is highly adaptable.
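
    The document-at-a-time feedback loop described above can be sketched with generic components; IDEAL-X itself is not reproduced here, so the scikit-learn classifier, the label set and the report snippets below are all illustrative assumptions.

    ```python
    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.linear_model import SGDClassifier

    # Sketch of an online extract-predict-correct loop with made-up reports.
    vec = HashingVectorizer(n_features=2**16)
    clf = SGDClassifier()                         # supports incremental partial_fit
    classes = ["normal", "abnormal"]

    stream = [("ejection fraction 55 percent", "normal"),
              ("severe stenosis of the LAD", "abnormal"),
              ("no significant coronary disease", "normal")]

    for text, user_feedback in stream:
        X = vec.transform([text])
        if hasattr(clf, "coef_"):                 # predict once a model exists
            print(text, "->", clf.predict(X)[0])
        # the user's correction updates the model immediately
        clf.partial_fit(X, [user_feedback], classes=classes)
    ```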

  6. A construction scheme of web page comment information extraction system based on frequent subtree mining

    Science.gov (United States)

    Zhang, Xiaowen; Chen, Bingfeng

    2017-08-01

    Based on a frequent subtree mining algorithm, this paper proposes a construction scheme for a web page comment information extraction system, referred to as the FSM system. The overall system architecture and its modules are briefly introduced, the core of the system is then described in detail, and finally a system prototype is given.

  7. An Information Extraction Core System for Real World German Text Processing

    CERN Document Server

    Neumann, G; Baur, J; Becker, M; Braun, C

    1997-01-01

    This paper describes SMES, an information extraction core system for real world German text processing. The basic design criterion of the system is to provide a set of basic, powerful, robust, and efficient natural language components and generic linguistic knowledge sources which can easily be customized for processing different tasks in a flexible manner.

  8. User-centered evaluation of Arizona BioPathway: an information extraction, integration, and visualization system.

    Science.gov (United States)

    Quiñones, Karin D; Su, Hua; Marshall, Byron; Eggers, Shauna; Chen, Hsinchun

    2007-09-01

    Explosive growth in biomedical research has made automated information extraction, knowledge integration, and visualization increasingly important and critically needed. The Arizona BioPathway (ABP) system extracts and displays biological regulatory pathway information from the abstracts of journal articles. This study uses relations extracted from more than 200 PubMed abstracts presented in a tabular and graphical user interface with built-in search and aggregation functionality. This paper presents a task-centered assessment of the usefulness and usability of the ABP system focusing on its relation aggregation and visualization functionalities. Results suggest that our graph-based visualization is more efficient in supporting pathway analysis tasks and is perceived as more useful and easier to use as compared to a text-based literature-viewing method. Relation aggregation significantly contributes to knowledge-acquisition efficiency. Together, the graphic and tabular views in the ABP Visualizer provide a flexible and effective interface for pathway relation browsing and analysis. Our study contributes to pathway-related research and biological information extraction by assessing the value of a multiview, relation-based interface that supports user-controlled exploration of pathway information across multiple granularities.

  9. Information Extraction to Generate Visual Simulations of Car Accidents from Written Descriptions

    NARCIS (Netherlands)

    Nugues, P.; Dupuy, S.; Egges, A.

    2003-01-01

    This paper describes a system to create animated 3D scenes of car accidents from written reports. The text-to-scene conversion process consists of two stages. An information extraction module creates a tabular description of the accident and a visual simulator generates and animates the scene. We

  10. Information extraction with object based support vector machines and vegetation indices

    Science.gov (United States)

    Ustuner, Mustafa; Abdikan, Saygin; Balik Sanli, Fusun

    2016-07-01

    Information extraction from remote sensing data is important for policy and decision makers, as the extracted information provides base layers for many real-world applications. Classification of remotely sensed data is one of the most common methods of extracting information; however, it is still a challenging issue because several factors affect the accuracy of the classification. The resolution of the imagery, the number and homogeneity of land cover classes, the purity of training data and the characteristics of the adopted classifiers are just some of these challenging factors. Object-based image classification has some superiority over pixel-based classification for high resolution images, since it uses geometry and structure information besides spectral information. Vegetation indices are also commonly used in the classification process, since they provide additional spectral information for vegetation, forestry and agricultural areas. In this study, the impacts of the Normalized Difference Vegetation Index (NDVI) and Normalized Difference Red Edge Index (NDRE) on the classification accuracy of RapidEye imagery were investigated. Object-based Support Vector Machines were implemented for the classification of crop types for the study area located in the Aegean region of Turkey. Results demonstrated that the incorporation of NDRE increased the overall classification accuracy from 79.96% to 86.80%, whereas NDVI decreased it from 79.96% to 78.90%. Moreover, it was shown that object-based classification with RapidEye data gives promising results for crop type mapping and analysis.
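
    The two indices are simple band ratios; the sketch below computes both from synthetic reflectance values standing in for RapidEye's red, red-edge and near-infrared bands (the numbers are illustrative).

    ```python
    import numpy as np

    # NDVI and NDRE band ratios; reflectance values below are synthetic.
    def ndvi(nir, red):
        return (nir - red) / (nir + red)

    def ndre(nir, red_edge):
        return (nir - red_edge) / (nir + red_edge)

    nir = np.array([0.45, 0.50])
    red = np.array([0.08, 0.10])
    red_edge = np.array([0.20, 0.22])
    print("NDVI:", ndvi(nir, red), "NDRE:", ndre(nir, red_edge))
    ```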

  11. Design of a Web Information Extraction System

    Institute of Scientific and Technical Information of China (English)

    刘斌; 张晓婧

    2013-01-01

    In order to obtain the scattered information hidden in Web pages, a Web information extraction system is designed. The system first uses an improved HITS topic-selection algorithm for information collection; it then pre-processes the data in the HTML document structure of the Web pages; finally, a DOM-tree-based XPath absolute-path generation algorithm obtains XPath expressions for the marked nodes, and extraction rules are written in the XPath language combined with XSLT technology, yielding a structured database or XML file and thereby achieving the location and extraction of Web information. Extraction experiments on a shopping website show that the system works well and can batch-extract similar Web pages.
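
    A generic sketch of the XPath extraction step follows: tidied HTML is parsed into a DOM and path expressions pull out the target fields. The markup, paths and field names are invented for illustration, and lxml stands in for the JTidy/XSLT stack named in the record.

    ```python
    from lxml import html

    # Toy page standing in for a tidied shopping-site document.
    page = """<html><body><div id="item">
      <span class="name">Example product</span>
      <span class="price">19.99</span>
    </div></body></html>"""

    tree = html.fromstring(page)
    # XPath expressions locate the annotated nodes and pull out their text.
    name = tree.xpath('//div[@id="item"]/span[@class="name"]/text()')[0]
    price = tree.xpath('//div[@id="item"]/span[@class="price"]/text()')[0]
    print({"name": name.strip(), "price": float(price)})
    ```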

  12. Simplified and rapid method for extraction of ergosterol from natural samples and detection with quantitative and semi-quantitative methods using thin-layer chromatography

    OpenAIRE

    2004-01-01

    A new and simplified method for extraction of ergosterol (ergosta-5,7,22-trien-3-beta-ol) from fungi in soil and litter was developed using pre-soaking extraction and paraffin oil for recovery. Recoveries of ergosterol were in the range of 94 - 100% depending on the solvent to oil ratio. Extraction efficiencies equal to heat-assisted extraction treatments were obtained with pre-soaked extraction. Ergosterol was detected with thin-layer chromatography (TLC) using fluorodensitometry with a quan...

  13. Echo time-dependent quantitative susceptibility mapping contains information on tissue properties.

    Science.gov (United States)

    Sood, Surabhi; Urriola, Javier; Reutens, David; O'Brien, Kieran; Bollmann, Steffen; Barth, Markus; Vegh, Viktor

    2017-05-01

    Magnetic susceptibility is a physical property of matter that varies depending on chemical composition and abundance of different molecular species. Interest is growing in mapping of magnetic susceptibility in the human brain using magnetic resonance imaging techniques, but the influences affecting the mapped values are not fully understood. We performed quantitative susceptibility mapping on 7 Tesla (T) multiple echo time gradient recalled echo data and evaluated the trend in 10 regions of the human brain. Temporal plots of susceptibility were performed in the caudate, pallidum, putamen, thalamus, insula, red nucleus, substantia nigra, internal capsule, corpus callosum, and fornix. We implemented an existing three compartment signal model and used optimization to fit the experimental result to assess the influences that could be responsible for our findings. The temporal trend in susceptibility is different for different brain regions, and subsegmentation of specific regions suggests that differences are likely to be attributable to variations in tissue structure and composition. Using a signal model, we verified that a nonlinear temporal behavior in experimentally computed susceptibility within imaging voxels may be the result of the heterogeneous composition of tissue properties. Decomposition of voxel constituents into meaningful parameters may lead to informative measures that reflect changes in tissue microstructure. Magn Reson Med 77:1946-1958, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  14. Modifying the Schwarz Bayesian information criterion to locate multiple interacting quantitative trait loci.

    Science.gov (United States)

    Bogdan, Malgorzata; Ghosh, Jayanta K; Doerge, R W

    2004-06-01

    The problem of locating multiple interacting quantitative trait loci (QTL) can be addressed as a multiple regression problem, with marker genotypes being the regressor variables. An important and difficult part in fitting such a regression model is the estimation of the QTL number and respective interactions. Among the many model selection criteria that can be used to estimate the number of regressor variables, none are used to estimate the number of interactions. Our simulations demonstrate that epistatic terms appearing in a model without the related main effects cause the standard model selection criteria to have a strong tendency to overestimate the number of interactions, and so the QTL number. With this as our motivation we investigate the behavior of the Schwarz Bayesian information criterion (BIC) by explaining the phenomenon of the overestimation and proposing a novel modification of BIC that allows the detection of main effects and pairwise interactions in a backcross population. Results of an extensive simulation study demonstrate that our modified version of BIC performs very well in practice. Our methodology can be extended to general populations and higher-order interactions.

  15. Extraction of arsenic as the diethyl dithiophosphate complex with supercritical fluid and quantitation by cathodic stripping voltammetry.

    Science.gov (United States)

    Arancibia, Verónica; López, Alex; Zúñiga, M Carolina; Segura, Rodrigo

    2006-02-28

    The separation of arsenic based on in situ chelation with ammonium diethyl dithiophosphate (ADDTP) has been carried out using methanol-modified supercritical CO2. Aliquots of extract were added to an electroanalytical cell and arsenic was determined by square wave cathodic stripping voltammetry (SWCSV) at a hanging mercury drop electrode (HMDE). Quantitative extractions of As(DDTP)3 were achieved when the experiments were carried out at a pressure of 2500 psi, a temperature of 90 °C, with 2.0 mL of methanol, 20.0 min of static extraction and 5.0 min of dynamic extraction in the presence of 18 mg of ADDTP. Analysis of arsenic was made using 150 mg/L of Cu(II) in 1 M HCl solution as supporting electrolyte in the presence of ADDTP as ligand. Preconcentration was carried out by deposition at a potential of -0.50 V, and the intermetallic compound CuxAsy was reduced at a potential of -0.77 to -0.82 V, depending on ligand concentration. The results showed that the presence of ligand plays an important role, increasing the method's sensitivity and preventing the oxidation of As(III). The calibration graph of the As(DDTP)3 solution was linear from 0.8 to 12.5 µg/L of arsenic (LOD 0.5 µg/L, R = 0.9992, t(acc) = 60 s). The method was validated using carrot pulp spiked with arsenic solution. This method was applied to the determination of arsenic in samples of carrots, beets and irrigation water. Arsenic in beets was: skin 4.10 ± 0.18 mg/kg; pulp 3.83 ± 0.19 mg/kg; and juice 0.71 ± 0.09 mg/L. Arsenic in carrots was: skin 2.15 ± 0.09 mg/kg; pulp 0.59 ± 0.11 mg/kg; and juice 0.71 ± 0.03 mg/L. Arsenic in water was: Chiu-Chiu 0.08 mg/L, Inacaliri 1.12 mg/L, and Salado river 0.17 ± 0.07 mg/L.

  16. Information Extraction for System-Software Safety Analysis: Calendar Year 2008 Year-End Report

    Science.gov (United States)

    Malin, Jane T.

    2009-01-01

    This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  17. A Framework For Extracting Information From Web Using VTD-XML's XPath

    Directory of Open Access Journals (Sweden)

    C. Subhashini

    2012-03-01

    Full Text Available The exponential growth of the WWW (World Wide Web) is the cause of a vast pool of information as well as several challenges posed by it, such as extracting potentially useful and unknown information from the WWW. Many websites are built with HTML; because of its unstructured layout, it is difficult to obtain effective and precise data from the web using HTML. The advent of XML (Extensible Markup Language) proposes a better solution to extract useful knowledge from the WWW. Web data extraction based on XML technology solves this problem because XML is a general purpose specification for exchanging data over the Web. In this paper, a framework is suggested to extract the data from the web. Here the semi-structured data in the web page is transformed into well-structured data using standard XML technologies, and a new parsing technique called extended VTD-XML (Virtual Token Descriptor for XML) along with an XPath implementation has been used to extract data from the well-structured XML document.

  18. Framework for automatic information extraction from research papers on nanocrystal devices

    Directory of Open Access Journals (Sweden)

    Thaer M. Dieb

    2015-09-01

    Full Text Available To support nanocrystal device development, we have been working on a computational framework to utilize information in research papers on nanocrystal devices. We developed an annotated corpus called “NaDev” (Nanocrystal Device Development) for this purpose. We also proposed an automatic information extraction system called “NaDevEx” (Nanocrystal Device Automatic Information Extraction Framework). NaDevEx aims at extracting information from research papers on nanocrystal devices using the NaDev corpus and machine-learning techniques. However, the characteristics of NaDevEx were not examined in detail. In this paper, we conduct system evaluation experiments for NaDevEx using the NaDev corpus. We discuss three main issues: system performance, compared with human annotators; the effect of paper type (synthesis or characterization) on system performance; and the effects of domain knowledge features (e.g., a chemical named entity recognition system and list of names of physical quantities) on system performance. We found that overall system performance was 89% in precision and 69% in recall. If we consider identification of terms that intersect with correct terms for the same information category as the correct identification, i.e., loose agreement (in many cases, we can find that appropriate head nouns such as temperature or pressure loosely match between two terms), the overall performance is 95% in precision and 74% in recall. The system performance is almost comparable with results of human annotators for information categories with rich domain knowledge information (source material). However, for other information categories, given the relatively large number of terms that exist only in one paper, recall of individual information categories is not high (39–73%); however, precision is better (75–97%). The average performance for synthesis papers is better than that for characterization papers because of the lack of training examples for

  19. Framework for automatic information extraction from research papers on nanocrystal devices.

    Science.gov (United States)

    Dieb, Thaer M; Yoshioka, Masaharu; Hara, Shinjiro; Newton, Marcus C

    2015-01-01

    To support nanocrystal device development, we have been working on a computational framework to utilize information in research papers on nanocrystal devices. We developed an annotated corpus called "NaDev" (Nanocrystal Device Development) for this purpose. We also proposed an automatic information extraction system called "NaDevEx" (Nanocrystal Device Automatic Information Extraction Framework). NaDevEx aims at extracting information from research papers on nanocrystal devices using the NaDev corpus and machine-learning techniques. However, the characteristics of NaDevEx were not examined in detail. In this paper, we conduct system evaluation experiments for NaDevEx using the NaDev corpus. We discuss three main issues: system performance, compared with human annotators; the effect of paper type (synthesis or characterization) on system performance; and the effects of domain knowledge features (e.g., a chemical named entity recognition system and list of names of physical quantities) on system performance. We found that overall system performance was 89% in precision and 69% in recall. If we consider identification of terms that intersect with correct terms for the same information category as the correct identification, i.e., loose agreement (in many cases, we can find that appropriate head nouns such as temperature or pressure loosely match between two terms), the overall performance is 95% in precision and 74% in recall. The system performance is almost comparable with results of human annotators for information categories with rich domain knowledge information (source material). However, for other information categories, given the relatively large number of terms that exist only in one paper, recall of individual information categories is not high (39-73%); however, precision is better (75-97%). The average performance for synthesis papers is better than that for characterization papers.

  20. Clinic expert information extraction based on domain model and block importance model.

    Science.gov (United States)

    Zhang, Yuanpeng; Wang, Li; Qian, Danmin; Geng, Xingyun; Yao, Dengfu; Dong, Jiancheng

    2015-11-01

    To extract expert clinic information from the Deep Web, there are two challenges to face. The first is to make a judgment on forms. A novel method based on a domain model, a tree structure constructed from the attributes of query interfaces, is proposed. With this model, query interfaces can be classified to a domain and filled in with domain keywords. Another challenge is to extract information from the response Web pages indexed by the query interfaces. To filter the noisy information on a Web page, a block importance model is proposed, in which both content and spatial features are taken into account. The experimental results indicate that the domain model yields a precision 4.89% higher than that of the rule-based method, whereas the block importance model yields an F1 measure 10.5% higher than that of the XPath method.

  1. Extraction of Hidden Social Networks from Wiki-Environment Involved in Information Conflict

    Directory of Open Access Journals (Sweden)

    Rasim M. Alguliyev

    2016-03-01

    Full Text Available Social network analysis is a widely used technique to analyze relationships among wiki-users in Wikipedia. In this paper a method to identify hidden social networks participating in information conflicts in a wiki-environment is proposed. In particular, we describe how text clustering techniques can be used to extract the hidden social networks of wiki-users behind an information conflict. By clustering the unstructured text of articles that caused an information conflict, we create a social network of wiki-users. For clustering of the conflict articles, a hybrid weighted fuzzy-c-means method is proposed.

  2. Extraction of spatial information from remotely sensed image data - an example: gloria sidescan sonar images

    Science.gov (United States)

    Chavez, Pat S.; Gardner, James V.

    1994-01-01

    A method to extract spatial amplitude and variability information from remotely sensed digital imaging data is presented. High Pass Filters (HPFs) are used to compute both a Spatial Amplitude Image/Index (SAI) and Spatial Variability Image/Index (SVI) at the local, intermediate, and regional scales. Used as input to principal component analysis and automatic clustering classification, the results indicate that spatial information at scales other than local is useful in the analysis of remotely sensed data. The resultant multi-spatial data set allows the user to study and analyze an image based more on the complete spatial characteristics of an image than only local textural information.
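
    A minimal sketch of the multi-scale high-pass idea follows: an image is high-pass filtered at several window sizes and the local amplitude and variability of the residual are summarized. The kernel sizes standing in for the local, intermediate and regional scales, and the random test image, are assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    # High-pass filtering at one scale, then local amplitude/variability maps.
    def spatial_indices(img, size):
        lowpass = ndimage.uniform_filter(img, size=size)
        hpf = img - lowpass                                   # high-pass residual
        amplitude = ndimage.uniform_filter(np.abs(hpf), size=size)
        variability = ndimage.generic_filter(hpf, np.std, size=size)
        return amplitude, variability

    rng = np.random.default_rng(1)
    img = rng.random((64, 64))                                # synthetic image
    for scale in (3, 9, 27):          # stand-ins for local/intermediate/regional
        sai, svi = spatial_indices(img, scale)
        print(scale, round(float(sai.mean()), 3), round(float(svi.mean()), 3))
    ```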

  3. Visualization and Analysis of Geology Word Vectors for Efficient Information Extraction

    Science.gov (United States)

    Floyd, J. S.

    2016-12-01

    allow one to extract information from hundreds of papers or more and find relationships in less time than it would take to read all of the papers. As machine learning tools become more commonly available, more and more scientists will be able to use and refine these tools for their individual needs.

  4. Integrating semantic information into multiple kernels for protein-protein interaction extraction from biomedical literatures.

    Directory of Open Access Journals (Sweden)

    Lishuang Li

    Full Text Available Protein-Protein Interaction (PPI) extraction is an important task in biomedical information extraction. Presently, many machine learning methods for PPI extraction have achieved promising results. However, the performance is still not satisfactory. One reason is that semantic resources have basically been ignored. In this paper, we propose a multiple-kernel learning-based approach to extract PPIs, combining a feature-based kernel, a tree kernel and a semantic kernel. In particular, we extend the shortest path-enclosed tree kernel (SPT) by a dynamic extension strategy to retrieve richer syntactic information. Our semantic kernel calculates the protein-protein pair similarity and the context similarity based on two semantic resources: WordNet and Medical Subject Headings (MeSH). We evaluate our method with a Support Vector Machine (SVM) and achieve an F-score of 69.40% and an AUC of 92.00%, which shows that our method outperforms most of the state-of-the-art systems by integrating semantic information.

  5. Towards a Quantitative Performance Measurement Framework to Assess the Impact of Geographic Information Standards

    Science.gov (United States)

    Vandenbroucke, D.; Van Orshoven, J.; Vancauwenberghe, G.

    2012-12-01

    Over the last decennia, the use of Geographic Information (GI) has gained importance, in the public as well as the private sector. But even if many spatial data and related information exist, data sets are scattered over many organizations and departments. In practice it remains difficult to find the spatial data sets needed, and to access, obtain and prepare them for use in applications. Therefore Spatial Data Infrastructures (SDI) have been developed to enhance the access, use and sharing of GI. SDIs consist of a set of technological and non-technological components to reach this goal. Since the nineties, many SDI initiatives have seen the light. Ultimately, all these initiatives aim to enhance the flow of spatial data between organizations (users as well as producers) involved in intra- and inter-organizational and even cross-country business processes. However, the flow of information and its re-use in different business processes requires technical and semantic interoperability: the first should guarantee that system components can interoperate and use the data, while the second should guarantee that data content is understood by all users in the same way. GI-standards within the SDI are necessary to make this happen. However, it is not known if this is realized in practice. Therefore the objective of the research is to develop a quantitative framework to assess the impact of GI-standards on the performance of business processes. For that purpose, indicators are defined and tested in several cases throughout Europe. The proposed research will build upon previous work carried out in the SPATIALIST project, which analyzed the impact of different technological and non-technological factors on the SDI-performance of business processes (Dessers et al., 2011). The current research aims to apply quantitative performance measurement techniques, which are frequently used to measure the performance of production processes (Anupindi et al., 2005). Key to reach the research objectives

  6. Orchard spatial information extraction from SPOT-5 image based on CART model

    Science.gov (United States)

    Li, Deyi; Zhang, Shuwen

    2009-07-01

    Orchards are an important agricultural industry and a typical land use type in the Shandong peninsula of China. This article focuses on the automatic extraction of orchard information using SPOT-5 imagery. After analyzing every object's spectrum, we proposed a CART model based on sub-region and hierarchy theory, exploring spectral, texture and topographic attributes. The whole area was divided into a coastal plain region and a hill region based on SRTM data, and each was extracted separately. The accuracy reached 86.40%, much higher than that of the supervised classification method.

  7. Extracting directed information flow networks: an application to genetics and semantics

    CERN Document Server

    Masucci, A P; Hernández-García, E; Kalampokis, A

    2010-01-01

    We introduce a general method to infer the directional information flow between populations whose elements are described by n-dimensional vectors of symbolic attributes. The method is based on the Jensen-Shannon divergence and on the Shannon entropy and has a wide range of applications. We show here the results of two applications: first extracting the network of genetic flow between the meadows of the seagrass Posidonia oceanica, where the meadow elements are specified by sets of microsatellite markers, then extracting the semantic flow network from a set of Wikipedia pages, showing the semantic channels between different areas of knowledge.
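
    The divergence at the heart of the method can be written in a few lines; the sketch below computes the Jensen-Shannon divergence between two symbol-frequency distributions using the Shannon entropy, with toy counts as input.

    ```python
    import numpy as np

    def shannon_entropy(p):
        """Entropy in bits of a probability vector (zero entries ignored)."""
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def jensen_shannon(p, q):
        """JSD(p, q) = H(m) - (H(p) + H(q))/2, with m the midpoint distribution."""
        p = np.asarray(p, float); q = np.asarray(q, float)
        p, q = p / p.sum(), q / q.sum()
        m = 0.5 * (p + q)
        return shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))

    # Toy symbol counts for two populations:
    print(jensen_shannon([9, 1, 0], [4, 3, 3]))
    ```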

  8. A theoretical extraction scheme of transport information based on exclusion models

    Institute of Scientific and Technical Information of China (English)

    Chen Hua; Du Lei; Qu Cheng-Li; Li Wei-Hua; He Liang; Chen Wen-Hao; Sun Peng

    2010-01-01

    In order to explore how to extract more transport information from current fluctuations, a theoretical extraction scheme is presented for a single barrier structure based on exclusion models, which include the counter-flows model and the tunnel model. The first four cumulants of these two exclusion models are computed for a single barrier structure, and their characteristics are obtained. A scheme using the first three cumulants is devised to check whether a transport process follows the counter-flows model, the tunnel model or neither of them. Time series generated by Monte Carlo techniques are adopted to validate the extraction procedure, and the result is reasonable.
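
    For concreteness, the first four cumulants of a measured count series can be estimated as below; the Poisson-distributed Monte Carlo series is a stand-in for the paper's simulated data (for a Poisson process all four cumulants equal the mean, a handy sanity check).

    ```python
    import numpy as np
    from scipy import stats

    def first_four_cumulants(x):
        """Sample estimates of cumulants k1..k4 from a count series."""
        x = np.asarray(x, float)
        k1 = x.mean()                                  # mean
        k2 = x.var()                                   # variance
        k3 = stats.moment(x, moment=3)                 # third central moment
        k4 = stats.moment(x, moment=4) - 3 * k2**2     # fourth cumulant
        return k1, k2, k3, k4

    rng = np.random.default_rng(2)
    counts = rng.poisson(5.0, 100_000)                 # Monte Carlo stand-in
    print([round(float(k), 3) for k in first_four_cumulants(counts)])
    ```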

  9. MO-G-17A-09: Quantitative Autoradiography of Biopsy Specimens Extracted Under PET/CT Guidance

    Energy Technology Data Exchange (ETDEWEB)

    Fanchon, L; Carlin, S; Schmidtlein, C; Humm, J; Yorke, E; Solomon, S; Deasy, J; Kirov, A [Memorial Sloan Kettering Cancer Center, New York, New York (United States); Burger, I [University Hospital of Zurich, Zurich, Switzerland, Zurich (Switzerland)

    2014-06-15

    Purpose: To develop a procedure for accurate determination of PET tracer concentration with high spatial accuracy in situ by performing Quantitative Autoradiography of Biopsy Specimens (QABS) extracted under PET/CT guidance. Methods: Autoradiography (ARG) standards were produced from a gel loaded with a known concentration of FDG, biopsied with 18G and 20G biopsy needles. Specimens obtained with these needles are generally cylindrical: up to 18 mm in length and about 0.8 and 0.6 mm in diameter, respectively. These standards, with similar shape and density to biopsy specimens, were used to generate ARG calibration curves. Quantitative ARG was performed to measure the activity concentration in biopsy specimens extracted from ten patients. The biopsy sites were determined according to PET/CTs obtained in the operating room. Additional CT scans were acquired with the needles in place to confirm correct needle placement. The ARG images were aligned with the needle tip in the PET/CT images using the open source CERR software. The mean SUV calculated from the specimen activities (SUVarg) was compared to that from PET (SUVpet) at the needle locations. Results: Calibration curves show that the relation between ARG signal and activity concentration in those standards is linear for the investigated range (up to 150 kBq/ml). The correlation coefficient of SUVarg with SUVpet is 0.74. Discrepancies between SUVarg and SUVpet can be attributed to the small size of the biopsy specimens compared to the PET resolution. Conclusion: The calibration procedure using surrogate biopsy specimens provided a method for quantifying the activity within biopsy cores obtained under FDG-PET guidance. QABS allows mapping the activity concentration in such biopsy specimens with a resolution of about 1 mm. QABS is a promising tool for verification of biopsy adequacy by comparing specimen activity to that expected from the PET image. A portion of this research was funded by a research grant from

  10. QUANTITATIVE ION-PAIR EXTRACTION OF 4(5)-METHYLIMIDAZOLE FROM CARAMEL COLOR AND ITS DETERMINATION BY REVERSED-PHASE ION-PAIR LIQUID-CHROMATOGRAPHY

    DEFF Research Database (Denmark)

    Thomsen, Mohens; Willumsen, Dorthe

    1981-01-01

    A procedure for quantitative ion-pair extraction of 4(5)-methylimidazole from caramel colour using bis(2-ethylhexyl)phosphoric acid as ion-pairing agent has been developed. Furthermore, a reversed-phase ion-pair liquid chromatographic separation method has been established to analyse the content ...

  11. Information Quality of a Nursing Information System depends on the nurses: A combined quantitative and qualitative evaluation

    NARCIS (Netherlands)

    Michel-Verkerke, M.B.

    2012-01-01

    Purpose Providing access to patient information is the key factor in nurses’ adoption of a Nursing Information System (NIS). In this study the requirements for information quality and the perceived quality of information are investigated. A teaching hospital in the Netherlands has developed a NIS as

  12. 3D local feature BKD to extract road information from mobile laser scanning point clouds

    Science.gov (United States)

    Yang, Bisheng; Liu, Yuan; Dong, Zhen; Liang, Fuxun; Li, Bijun; Peng, Xiangyang

    2017-08-01

    Extracting road information from point clouds obtained through mobile laser scanning (MLS) is essential for autonomous vehicle navigation, and has hence garnered a growing amount of research interest in recent years. However, the performance of such systems is seriously affected by varying point density and noise. This paper proposes a novel three-dimensional (3D) local feature called the binary kernel descriptor (BKD) to extract road information from MLS point clouds. The BKD consists of Gaussian kernel density estimation and binarization components to encode the shape and intensity information of the 3D point clouds, which are fed to a random forest classifier to extract curbs and markings on the road. These are then used to derive road information, such as the number of lanes, the lane width, and intersections. In experiments, the precision and recall of the proposed feature for the detection of curbs and road markings on an urban dataset and a highway dataset were as high as 90%, thus showing that the BKD is accurate and robust against varying point density and noise.

  13. Extraction of palaeochannel information from remote sensing imagery in the east of Chaohu Lake, China

    Institute of Scientific and Technical Information of China (English)

    Xinyuan WANG; Zhenya GUO; Li WU; Cheng ZHU; Hui HE

    2012-01-01

    Palaeochannels are deposits of unconsolidated sediments or semi-consolidated sedimentary rocks deposited in ancient, currently inactive river and stream channel systems. They are distinct from the overbank deposits of currently active river channels, including ephemeral water courses which do not regularly flow. We have introduced a spectral characteristics-based palaeochannel information extraction model for SPOT-5 imagery of a specific time phase, built on an analysis of the remote sensing mechanism and spectral characteristics of the palaeochannel, its distinction from the spatial distribution and spectral features of currently active river channels, and the establishment of remote sensing diagnostic features of the palaeochannel in the image. This model follows the process: supervised classification → farmland masking and principal component analysis → underground palaeochannel information extraction → information combination → palaeochannel system image. The Zhegao River Valley in the east of Chaohu Lake was selected as the study area, and SPOT-5 imagery was used as the data source. The result was satisfactory when this method was applied to extract the palaeochannel information, which can provide a good reference for regional remote sensing archaeology and neotectonic research. However, the applicability of this method needs to be tested further in other areas, as the spatial characteristics and spectral response of palaeochannels might differ.

  14. A study of extraction of petal region on flower picture using HSV color information

    Science.gov (United States)

    Yanagihara, Yoshio; Nakayama, Ryo

    2014-01-01

    Discriminating the kind of flower or recognizing the name of a flower, for example when retrieving from a flower database, is a useful and interesting application. As the contour line of the petal region is useful for such problems, it is important to extract the precise petal region from a flower picture. In this paper, a method that extracts petal regions from a flower picture using HSV color information is proposed, in order to discriminate the kind of flower. The experiments show that the proposed method can extract petal regions at a success rate of about 90%, which is considered satisfactory. In detail, the success rates for one-colored flowers, plural-colored flowers, and white flowers are about 98%, 85%, and 83%, respectively.
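
    An HSV-threshold segmentation of this kind can be sketched with OpenCV; the synthetic red "petal", the hue range and the morphological clean-up step below are illustrative assumptions, not the paper's parameters.

    ```python
    import cv2
    import numpy as np

    # Synthetic image with a red disc standing in for a petal (BGR color order).
    img = np.zeros((120, 120, 3), np.uint8)
    cv2.circle(img, (60, 60), 40, (40, 40, 220), -1)

    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 80, 80), (10, 255, 255))   # assumed red hue band
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    print("petal regions found:", len(contours))
    ```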

  15. Web Page Information Extraction Technology

    Institute of Scientific and Technical Information of China (English)

    邵振凯

    2013-01-01

    With the rapid development of the Internet, the amount of information in Web pages has become very large, and how to quickly and efficiently search for and find valuable information has become an important aspect of Web research. A tag extraction method is proposed for this purpose. A Web page is optimized into a well-formed HTML document with JTidy and parsed into a DOM tree. The tag extraction approach then extracts the leaf-node tags of the DOM tree that contain text content, deletes the tags used to control Web interaction and display, and applies a punctuation-based information extraction method to remove copyright notices and similar information. Extraction experiments on pages from a number of different websites show that the tag extraction method is not only highly general but can also accurately extract the topic information of a page.

  16. Signal Feature Extraction and Quantitative Evaluation of Metal Magnetic Memory Testing for Oil Well Casing Based on Data Preprocessing Technique

    Directory of Open Access Journals (Sweden)

    Zhilin Liu

    2014-01-01

    Full Text Available The metal magnetic memory (MMM) technique is an effective method for detecting stress concentration (SC) zones in oil well casing. It can provide an early diagnosis of microdamage for preventive protection. MMM is a natural space-domain signal which is weak and vulnerable to noise interference, so it is difficult to achieve effective feature extraction of the MMM signal, especially in the hostile subsurface environment of high temperature, high pressure, high humidity, and multiple interfering sources. In this paper, a median filter preprocessing method based on data preprocessing techniques is proposed to eliminate the outlier points of MMM. Then, based on the wavelet transform (WT), an adaptive wavelet denoising method and a data smoothing algorithm are applied in testing the MMM system. By using these data preprocessing techniques, the data are preserved and the noise of the signal is reduced, so the correct localization of the SC zone can be achieved. In the meantime, characteristic parameters in a new diagnostic approach are put forward to ensure reliable determination of the casing danger level through a least squares support vector machine (LS-SVM) and a nonlinear quantitative mapping relationship. The effectiveness and feasibility of this method are verified through experiments.
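
    The outlier-removal and denoising chain can be sketched as follows; the synthetic signal, wavelet choice (db4) and universal soft threshold are illustrative assumptions rather than the paper's settings.

    ```python
    import numpy as np
    import pywt
    from scipy.signal import medfilt

    # Synthetic noisy signal with one outlier spike standing in for MMM data.
    rng = np.random.default_rng(3)
    x = np.sin(np.linspace(0, 6 * np.pi, 512)) + 0.2 * rng.standard_normal(512)
    x[100] = 8.0

    x = medfilt(x, kernel_size=5)                    # median filter kills the spike

    coeffs = pywt.wavedec(x, "db4", level=4)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate, finest level
    thr = sigma * np.sqrt(2 * np.log(len(x)))        # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, "db4")
    print(denoised.shape)                            # (512,)
    ```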

  17. Qualitative and Quantitative Evaluation of Drug and Health Food Products Containing Red Vine Leaf Extracts on the Japanese Market.

    Science.gov (United States)

    Masada, Sayaka; Takahashi, Yutaka; Goda, Yukihiro; Hakamatsuka, Takashi

    2016-09-01

    Red vine leaf extracts (RVLEs) have traditionally been used for leg wellness and are now standardized for use as OTC drugs in Europe. In Japan, one brand of RVLE products was recently approved as a direct OTC drug, and RVLEs are still used as ingredients in health food products. Since there is no mandated criterion for the quality of health food products in Japan, the consistent quality and composition of these products are not assured. Here we analyzed OTC drug and health food products containing RVLEs with different lot numbers by LC/MS. Subsequent multivariate analyses clearly indicated that the quality of the health food products was highly variable compared to that of the drug products. Surprisingly, the component contents in the health foods differed even within the same lot of the same brand. The quantitative analyses of flavonols and stilbene derivatives in the drugs and health foods indicated that the concentration of each substance was kept constant in the drugs but not in the health foods. These results strongly indicate that the quality of RVLEs as a whole was not properly controlled in the manufacturing process of the health foods. Since RVLE is an active ingredient with pharmaceutical evidence and is used in drugs, proper regulation ensuring consistent quality of RVLEs from product to product would be recommended for health foods as well.

  18. Comparison of methods for miRNA extraction from plasma and quantitative recovery of RNA from plasma and cerebrospinal fluid

    Directory of Open Access Journals (Sweden)

    Melissa A McAlexander

    2013-05-01

    Full Text Available Interest in extracellular RNA has intensified as evidence accumulates that these molecules may be useful as indicators of a wide variety of biological conditions. To establish specific extracellular RNA molecules as clinically relevant biomarkers, reproducible recovery from biological samples and reliable measurements of the isolated RNA are paramount. Towards these ends, careful and rigorous comparisons of technical procedures are needed at all steps from sample handling to RNA isolation to RNA measurement protocols. In the investigations described in this methods paper, RT-qPCR was used to examine the apparent recovery of specific endogenous miRNAs and a spiked-in synthetic RNA from blood plasma samples. RNA was isolated using several widely used RNA isolation kits, with or without the addition of glycogen as a carrier. Kits examined included total RNA isolation systems that have been commercially available for several years and commonly adapted for extraction of biofluid RNA, as well as more recently introduced biofluids-specific RNA methods. Our conclusions include the following: some RNA isolation methods appear to be superior to others for the recovery of RNA from biological fluids; addition of a carrier molecule seems to be beneficial for some but not all isolation methods; and partially or fully quantitative recovery of RNA is observed from increasing volumes of plasma and cerebrospinal fluid.

  19. Linking genes to literature: text mining, information extraction, and retrieval applications for biology.

    Science.gov (United States)

    Krallinger, Martin; Valencia, Alfonso; Hirschman, Lynette

    2008-01-01

    Efficient access to information contained in online scientific literature collections is essential for life science research, playing a crucial role from the initial stage of experiment planning to the final interpretation and communication of the results. The biological literature also constitutes the main information source for manual literature curation used by expert-curated databases. Following the increasing popularity of web-based applications for analyzing biological data, new text-mining and information extraction strategies are being implemented. These systems exploit existing regularities in natural language to extract biologically relevant information from electronic texts automatically. The aim of the BioCreative challenge is to promote the development of such tools and to provide insight into their performance. This review presents a general introduction to the main characteristics and applications of currently available text-mining systems for life sciences in terms of the following: the type of biological information demands being addressed; the level of information granularity of both user queries and results; and the features and methods commonly exploited by these applications. The current trend in biomedical text mining points toward an increasing diversification in terms of application types and techniques, together with integration of domain-specific resources such as ontologies. Additional descriptions of some of the systems discussed here are available on the internet at http://zope.bioinfo.cnio.es/bionlp_tools/.

  20. Multilevel spatial semantic model for urban house information extraction automatically from QuickBird imagery

    Science.gov (United States)

    Guan, Li; Wang, Ping; Liu, Xiangnan

    2006-10-01

    Based on an introduction to the characteristics and construction workflow of the spatial semantic model, the feature space and context of house information in high-resolution remote sensing imagery are analyzed, and a house semantic network model for QuickBird imagery is constructed. The accuracy and practicality of the spatial semantic model are then verified by extracting house information automatically from QuickBird imagery, after candidate semantic nodes are extracted from the image by means of grey-level division, window thresholding, and the Hough transform. Sample results indicate type coherence, shape coherence, and area coherence of 96.75%, 89.5%, and 88%, respectively. Extraction is best for houses with rectangular roofs and reasonably good for herringbone and polygonal roofs; results for houses with round roofs are not yet satisfactory, and the semantic model needs further refinement before such cases reach higher applied value.

  1. Extraction of Remote Sensing Information of Banana Under Support of 3S Technology in Guangxi Province

    Science.gov (United States)

    Yang, Xin; Sun, Han; Tan, Zongkun; Ding, Meihua

    This paper presents an automatic approach to extracting banana planting areas in a region of mixed vegetation, hilly terrain, and frequent cloud cover, using moderate-spatial-resolution, high-temporal-resolution MODIS data for Guangxi province in southern China. Because banana growth lasts 9 to 11 months and planted areas shrink during the harvest season, maximum likelihood classification was used to extract banana planting information and its spatial distribution from multi-temporal MODIS-NDVI over Guangxi, with banana training samples selected by GPS. Comparing large and small banana planting regions in the monitored imagery with on-the-spot GPS investigation shows that the banana planting information in the remote sensing imagery is reliable. In this research, multi-temporal MODIS data covering the main banana growing season were received and preprocessed; NDVI temporal profiles of banana were generated; models for planting-area extraction were developed from analysis of the temporal NDVI curves; and a spatial distribution map of banana planting areas in Guangxi in 2006 was created. The study suggests that it is possible to extract planting areas automatically from MODIS data for large areas.
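    The NDVI temporal profiles at the core of this approach come from the standard red/near-infrared band ratio; for MODIS surface reflectance, band 1 is red and band 2 is near-infrared. A minimal sketch of the per-pixel computation (the epsilon guard is an implementation assumption):

      # Sketch: standard NDVI from red and near-infrared reflectance arrays.
      import numpy as np

      def ndvi(red, nir):
          return (nir - red) / (nir + red + 1e-10)  # epsilon avoids division by zero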

  2. An Useful Information Extraction using Image Mining Techniques from Remotely Sensed Image (RSI

    Directory of Open Access Journals (Sweden)

    Dr. C. Jothi Venkateswaran,

    2010-11-01

    Full Text Available Information extraction using mining techniques from remote sensing imagery (RSI) is rapidly gaining attention among researchers and decision makers because of its potential in application-oriented studies. Knowledge discovery from imagery poses many interesting challenges, such as preprocessing the image data set, training the data, and discovering useful image patterns applicable to many new application frontiers. In the image-rich domain of RSI, image mining implies the synergy of data mining and image processing technology, and this culmination of techniques renders a valuable tool for information extraction. It also encompasses the problem of handling a large database of varied image data formats representing various levels of information, such as pixel, local, and regional. In the present paper, various preprocessing corrections and techniques of image mining are discussed.

  3. Quantitative determination, Metal analysis and Antiulcer evaluation of Methanol seeds extract of Citrullus lanatus Thunb (Cucurbitaceae) in Rats

    Institute of Scientific and Technical Information of China (English)

    Okunrobo O Lucky; Uwaya O John; Imafidon E Kate; Osarumwense O Peter; Omorodion E Jude

    2012-01-01

    Objective: The use of herbs in the treatment of diseases is gradually becoming universally accepted, especially in non-industrialized societies. Citrullus lanatus Thunb (Cucurbitaceae), commonly called water melon, is widely consumed in this part of the world as food and medicine. This work was conducted to investigate the phytochemical composition, proximate and metal content of the seed of Citrullus lanatus and to determine the antiulcer action of the methanol seed extract. Methods: Phytochemical screening, proximate and metal content analysis were done using standard procedures, and the antiulcer activity was evaluated against acetylsalicylic acid-induced ulcers. Results: The results revealed the presence of the following phytochemicals: flavonoids, saponins, tannins, alkaloids, and glycosides. Proximate analysis indicated high concentrations of carbohydrate, protein, and fat, while metal analysis showed the presence of sodium, calcium, zinc, and magnesium at levels within the recommended dietary intake. The antiulcer potential of the extract against acetylsalicylic acid-induced ulceration of the gastric mucosa of Wistar rats was evaluated at three doses (200 mg/kg, 400 mg/kg, and 800 mg/kg). The ulcer parameters investigated included ulcer number, ulcer severity, ulcer index, and percentage ulcer protection. The antiulcer activity was compared against ranitidine at 20 mg/kg. The extract exhibited a dose-related antiulcer activity with maximum activity at 800 mg/kg (P<0.001). Conclusions: Proximate and metal content analysis of the seeds indicates that consumption of the seeds of Citrullus lanatus is safe. This study also provides preliminary data, for the first time, that the seeds of Citrullus lanatus possess antiulcer activity in an animal model.

  4. Quantitative determination, Metal analysis and Antiulcer evaluation of Methanol seeds extract of Citrullus lanatus Thunb (Cucurbitaceae) in Rats

    Directory of Open Access Journals (Sweden)

    Okunrobo O. Lucky

    2012-10-01

    Full Text Available Objective: The use of herbs in the treatment of diseases is gradually becoming universally accepted, especially in non-industrialized societies. Citrullus lanatus Thunb (Cucurbitaceae), commonly called water melon, is widely consumed in this part of the world as food and medicine. This work was conducted to investigate the phytochemical composition, proximate and metal content of the seed of Citrullus lanatus and to determine the antiulcer action of the methanol seed extract. Methods: Phytochemical screening, proximate and metal content analysis were done using standard procedures, and the antiulcer activity was evaluated against acetylsalicylic acid-induced ulcers. Results: The results revealed the presence of the following phytochemicals: flavonoids, saponins, tannins, alkaloids, and glycosides. Proximate analysis indicated high concentrations of carbohydrate, protein, and fat, while metal analysis showed the presence of sodium, calcium, zinc, and magnesium at levels within the recommended dietary intake. The antiulcer potential of the extract against acetylsalicylic acid-induced ulceration of the gastric mucosa of Wistar rats was evaluated at three doses (200 mg/kg, 400 mg/kg, and 800 mg/kg). The ulcer parameters investigated included ulcer number, ulcer severity, ulcer index, and percentage ulcer protection. The antiulcer activity was compared against ranitidine at 20 mg/kg. The extract exhibited a dose-related antiulcer activity with maximum activity at 800 mg/kg (P<0.001). Conclusions: Proximate and metal content analysis of the seeds indicates that consumption of the seeds of Citrullus lanatus is safe. This study also provides preliminary data, for the first time, that the seeds of Citrullus lanatus possess antiulcer activity in an animal model.

  5. An Useful Information Extraction using Image Mining Techniques from Remotely Sensed Image (RSI)

    OpenAIRE

    Dr. C. Jothi Venkateswaran,; Murugan, S.; Dr. N. Radhakrishnan

    2010-01-01

    Information extraction using mining techniques from remote sensing imagery (RSI) is rapidly gaining attention among researchers and decision makers because of its potential in application-oriented studies. Knowledge discovery from imagery poses many interesting challenges, such as preprocessing the image data set, training the data and discovering useful image patterns applicable to many new application frontiers. In the image-rich domain of RSI, image mining implies the synergy of data mining and ...

  6. Monadic datalog and the expressive power of languages for Web information extraction

    OpenAIRE

    Gottlob, Georg; Koch, Christoph

    2004-01-01

    Research on information extraction from Web pages (wrapping) has seen much activity recently (particularly systems implementations), but little work has been done on formally studying the expressiveness of the formalisms proposed or on the theoretical foundations of wrapping. In this paper, we first study monadic datalog over trees as a wrapping language. We show that this simple language is equivalent to monadic second order logic (MSO) in its ability to specify wrappers. We believe that MSO...

  7. KneeTex: an ontology-driven system for information extraction from MRI reports.

    Science.gov (United States)

    Spasić, Irena; Zhao, Bo; Jones, Christopher B; Button, Kate

    2015-01-01

    In the realm of knee pathology, magnetic resonance imaging (MRI) has the advantage of visualising all structures within the knee joint, which makes it a valuable tool for increasing diagnostic accuracy and planning surgical treatments. Therefore, clinical narratives found in MRI reports convey valuable diagnostic information. A range of studies have proven the feasibility of natural language processing for information extraction from clinical narratives. However, no study focused specifically on MRI reports in relation to knee pathology, possibly due to the complexity of knee anatomy and a wide range of conditions that may be associated with different anatomical entities. In this paper we describe KneeTex, an information extraction system that operates in this domain. As an ontology-driven information extraction system, KneeTex makes active use of an ontology to strongly guide and constrain text analysis. We used automatic term recognition to facilitate the development of a domain-specific ontology with sufficient detail and coverage for text mining applications. In combination with the ontology, high regularity of the sublanguage used in knee MRI reports allowed us to model its processing by a set of sophisticated lexico-semantic rules with minimal syntactic analysis. The main processing steps involve named entity recognition combined with coordination, enumeration, ambiguity and co-reference resolution, followed by text segmentation. Ontology-based semantic typing is then used to drive the template filling process. We adopted an existing ontology, TRAK (Taxonomy for RehAbilitation of Knee conditions), for use within KneeTex. The original TRAK ontology expanded from 1,292 concepts, 1,720 synonyms and 518 relationship instances to 1,621 concepts, 2,550 synonyms and 560 relationship instances. This provided KneeTex with a very fine-grained lexico-semantic knowledge base, which is highly attuned to the given sublanguage. Information extraction results were evaluated

  8. Road Extraction from High-resolution Remote Sensing Images Based on Multiple Information Fusion

    Directory of Open Access Journals (Sweden)

    LI Xiao-feng

    2016-02-01

    Full Text Available Road extraction from high-resolution remote sensing images is a significant but very difficult task. In particular, the spectra of some buildings are similar to those of roads, which leaves road and building surfaces connected after classification and difficult to distinguish. Based on the cooperation between road surfaces and edges, this paper presents an approach to purifying roads from high-resolution remote sensing images. First, we improve the extraction accuracy of road surfaces and edges separately. A logical combination of these two binary images is used to separate road from non-road objects. The road objects are then confirmed by the cooperation between surfaces and edges, and effective shape indices (e.g., polar moment of inertia and a narrow-extent index) are applied to eliminate non-road objects, refining the road information. Experiments indicate that the proposed approach is efficient at eliminating non-road information and extracting road information from high-resolution remote sensing images.

  9. Automatically extracting clinically useful sentences from UpToDate to support clinicians’ information needs

    Science.gov (United States)

    Mishra, Rashmi; Fiol, Guilherme Del; Kilicoglu, Halil; Jonnalagadda, Siddhartha; Fiszman, Marcelo

    2013-01-01

    Clinicians raise numerous information needs in the course of care. Most of these needs can be met by online health knowledge resources such as UpToDate. However, finding relevant information in these resources often requires significant time and cognitive effort. Objective: To design and assess algorithms for extracting from UpToDate the sentences that represent the most clinically useful information for patient care decision making. Methods: We developed algorithms based on semantic predications extracted with SemRep, a semantic natural language processing parser. Two algorithms were compared against a gold standard composed of UpToDate sentences rated in terms of clinical usefulness. Results: Clinically useful sentences were strongly correlated with predication frequency (correlation = 0.95). The two algorithms did not differ in terms of top-ten precision (53% vs. 49%; p = 0.06). Conclusions: Semantic predications may serve as the basis for extracting clinically useful sentences. Future research is needed to improve the algorithms. PMID:24551389

  10. Feature extraction and learning using context cue and Rényi entropy based mutual information

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    Feature extraction and learning play a critical role in visual perception tasks. We focus on improving the robustness of kernel descriptors (KDES) by embedding context cues and further learning a compact and discriminative feature codebook for feature reduction using Rényi entropy based mutual information ... improving the robustness of CKD. For feature learning and reduction, we propose a novel codebook learning method, based on a Rényi quadratic entropy based mutual information measure called Cauchy-Schwarz Quadratic Mutual Information (CSQMI), to learn a compact and discriminative CKD codebook. Projecting ...

  11. Quantitation of 35S promoter in maize DNA extracts from genetically modified organisms using real-time polymerase chain reaction, part 2: interlaboratory study.

    Science.gov (United States)

    Feinberg, Max; Fernandez, Sophie; Cassard, Sylvanie; Bertheau, Yves

    2005-01-01

    The European Committee for Standardization (CEN) and the European Network of GMO Working Laboratories have proposed a modular strategy for stepwise validation of complex analytical techniques. When applied to the quantitation of genetically modified organisms (GMOs) in food products, the instrumental quantitation step of the technique is validated separately from the DNA extraction step, to better control the sources of uncertainty and facilitate the validation of GMO-specific polymerase chain reaction (PCR) tests. This paper presents the results of an interlaboratory study on the quantitation step of the method standardized by CEN for the detection of a regulatory element commonly inserted in GMO maize-based foods, focusing on quantitation of the P35S promoter using quantitative real-time PCR (QRT-PCR). Fifteen French laboratories participated in the interlaboratory study of the P35S quantitation operating procedure on DNA extract samples, using either the ABI Prism 7700 thermal cycler (Applied Biosystems, Foster City, CA) or the Light Cycler (Roche Diagnostics, Indianapolis, IN). Attention was focused on the DNA extract samples used to calibrate the method and on unknown extract samples. Data were processed according to the recommendations of the ISO 5725 standard. Performance criteria obtained using the robust algorithm were compared to classic data processing after rejection of outliers by the Cochran and Grubbs tests. Two laboratories were detected as outliers by the Grubbs test. The robust precision criteria gave values between the classical values estimated before and after rejection of the outliers. Using the robust method, the relative expanded uncertainty of the quantitation method is about 20% for a 1% Bt176 content, whereas it can reach 40% for a 0.1% Bt176 content. The performance of the quantitation assay is relevant to the application of the European regulation, which has an accepted tolerance interval of about +/-50%. These data
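    The calibration at the core of such a quantitation step is a standard curve relating Ct to the logarithm of the target quantity. A generic sketch of the fit and its inversion, not the CEN operating procedure itself; the calibrant values are illustrative:

      # Sketch: generic qPCR standard curve; fit Ct vs. log10(GMO %), then invert for unknowns.
      import numpy as np

      def fit_standard_curve(log10_conc, ct):
          slope, intercept = np.polyfit(log10_conc, ct, 1)
          efficiency = 10 ** (-1.0 / slope) - 1.0  # ~1.0 corresponds to 100% efficiency
          return slope, intercept, efficiency

      def quantify(ct, slope, intercept):
          return 10 ** ((ct - intercept) / slope)  # GMO content in %

      # Illustrative calibrants at 0.1%, 1%, and 5% Bt176:
      s, b, eff = fit_standard_curve(np.log10([0.1, 1.0, 5.0]), [33.1, 29.8, 27.5])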

  12. Bounds on the entropy generated when timing information is extracted from microscopic systems

    CERN Document Server

    Janzing, D; Janzing, Dominik; Beth, Thomas

    2003-01-01

    We consider Hamiltonian quantum systems with energy bandwidth \Delta E and show that each measurement that determines the time up to an error \Delta t generates at least the entropy (\hbar/(\Delta t \Delta E))^2/2. Our result describes quantitatively to what extent all timing information is quantum information in systems with limited energy. It provides a lower bound on the dissipated energy when timing information of microscopic systems is converted to classical information. This is relevant for low-power computation, since it shows the amount of heat generated whenever a band-limited signal controls a classical bit switch. Our result provides a general bound on the information-disturbance trade-off for von Neumann measurements that distinguish states on the orbits of continuous unitary one-parameter groups with bounded spectrum. In contrast, information gain without disturbance is possible for some completely positive semi-groups. This shows that readout of timing information can be possible without entropy ...

  13. BioDARA: Data Summarization Approach to Extracting Bio-Medical Structuring Information

    Directory of Open Access Journals (Sweden)

    Chung S. Kheau

    2011-01-01

    Full Text Available Problem statement: Due to the ever-growing amount of biomedical datasets stored in multiple tables, Information Extraction (IE) from these datasets is increasingly recognized as one of the crucial technologies in bioinformatics. However, for IE to be practically applicable, adaptability of a system is crucial, considering the extremely diverse demands of biomedical IE applications. One should be able to extract a set of hidden patterns from these biomedical datasets at low cost. Approach: In this study, a new method is proposed, called Bio-medical Data Aggregation for Relational Attributes (BioDARA), for automatic structuring information extraction from biomedical datasets. BioDARA summarizes biomedical data stored in multiple tables in order to facilitate data modeling efforts in a multi-relational setting. BioDARA has the capability to transform biomedical data stored in multiple tables or databases into a vector space model, summarize biomedical data using information retrieval theory, and finally extract frequent patterns that describe the characteristics of these biomedical datasets. Results: The results show that data summarization performed by BioDARA can be beneficial in summarizing biomedical datasets in a complex multi-relational environment, in which biomedical datasets are stored in a multi-level of one-to-many relationships and also in the case of datasets stored in more than one one-to-many relationship with non-target tables. Conclusion: This study concludes that data summarization performed by BioDARA can be beneficial in summarizing biomedical datasets in a complex multi-relational environment, in which biomedical datasets are stored in a multi-level of one-to-many relationships.

  14. EXTRACT

    DEFF Research Database (Denmark)

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra

    2016-01-01

    The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, manual sample annotation is a highly labor-intensive process and requires familiarity with the terminologies used. We have the ... and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15-25% and helps curators to detect terms that would otherwise have been missed. Database URL: https://extract.hcmr.gr/

  15. Information extraction and CT reconstruction of liver images based on diffraction enhanced imaging

    Institute of Scientific and Technical Information of China (English)

    Chunhong Hu; Tao Zhao; Lu Zhang; Hui Li; Xinyan Zhao; Shuqian Luo

    2009-01-01

    X-ray phase-contrast imaging (PCI) is a newly emerging imaging technique that offers high spatial resolution and high contrast of biological soft tissues compared to conventional radiography. Here a biomedical application of diffraction enhanced imaging (DEI) is presented. As one of the PCI methods, DEI derives contrast from many different kinds of sample information, such as the sample's X-ray absorption, refraction gradient, and ultra-small-angle X-ray scattering (USAXS) properties, and this sample information is expressed in three parametric images. Combined with computed tomography (CT), DEI-CT can produce 3D volumetric images of the sample and can be used for investigating the micro-structures of biomedical samples. Our DEI experiments on liver samples were carried out at the topography station of the Beijing Synchrotron Radiation Facility (BSRF). The results show that, using the proposed information extraction method and DEI-CT reconstruction approach, the obtained parametric images clearly display the inner structures of liver tissue and the morphology of blood vessels. Furthermore, the reconstructed 3D view of the liver blood vessels exhibits micro blood vessels with minimum diameters on the order of tens of microns, much better than conventional CT reconstruction at millimeter resolution. In conclusion, both the information extraction method and DEI-CT have the potential for use in the analysis of biomedical micro-structures.

  16. A Feature Extraction Method Based on Information Theory for Fault Diagnosis of Reciprocating Machinery

    Science.gov (United States)

    Wang, Huaqing; Chen, Peng

    2009-01-01

    This paper proposes a feature extraction method based on information theory for fault diagnosis of reciprocating machinery. A method to obtain symptom parameter waves is defined in the time domain using the vibration signals, and an information wave is presented based on information theory, using the symptom parameter waves. A new way to determine the difference spectrum of envelope information waves is also derived, by which the feature spectrum can be extracted clearly and machine faults can be effectively differentiated. This paper also compares the proposed method with the conventional Hilbert-transform-based envelope detection and with a wavelet analysis technique. Practical examples of diagnosis for a rolling element bearing used in a diesel engine are provided to verify the effectiveness of the proposed method. The verification results show that the bearing faults that typically occur in rolling element bearings, such as outer-race, inner-race, and roller defects, can be effectively identified by the proposed method, while these bearing faults are difficult to detect using either of the other techniques it was compared to. PMID:22574021

  17. A Feature Extraction Method Based on Information Theory for Fault Diagnosis of Reciprocating Machinery

    Directory of Open Access Journals (Sweden)

    Huaqing Wang

    2009-04-01

    Full Text Available This paper proposes a feature extraction method based on information theory for fault diagnosis of reciprocating machinery. A method to obtain symptom parameter waves is defined in the time domain using the vibration signals, and an information wave is presented based on information theory, using the symptom parameter waves. A new way to determine the difference spectrum of envelope information waves is also derived, by which the feature spectrum can be extracted clearly and machine faults can be effectively differentiated. This paper also compares the proposed method with the conventional Hilbert-transform-based envelope detection and with a wavelet analysis technique. Practical examples of diagnosis for a rolling element bearing used in a diesel engine are provided to verify the effectiveness of the proposed method. The verification results show that the bearing faults that typically occur in rolling element bearings, such as outer-race, inner-race, and roller defects, can be effectively identified by the proposed method, while these bearing faults are difficult to detect using either of the other techniques it was compared to.

  18. [Method validation according to ISO 15189 and SH GTA 04: application for the extraction of DNA and its quantitative evaluation by a spectrophotometric assay].

    Science.gov (United States)

    Harlé, Alexandre; Lion, Maëva; Husson, Marie; Dubois, Cindy; Merlin, Jean-Louis

    2013-01-01

    According to the French legislation on medical biology (January 16th, 2010), all biological laboratories must be accredited according to ISO 15189 for at least 50% of their activities before the end of 2016. The extraction of DNA from a sample of interest, whether solid or liquid, is one of the critical steps in molecular biology, particularly in somatic or constitutional genetics. The extracted DNA must meet a number of criteria, such as quality, and must also be sufficiently concentrated to allow molecular biology assays such as the detection of somatic mutations. This paper describes the validation of DNA extraction and purification using column-based chromatographic extraction and quantitative determination by spectrophotometric assay, according to ISO 15189 and the accreditation technical guide in human health, SH GTA 04.
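    The spectrophotometric determination validated here rests on the textbook A260 conversion for double-stranded DNA (one absorbance unit at 260 nm corresponds to about 50 µg/mL) plus a purity check on the A260/A280 ratio; these constants are general laboratory conventions, not values from the paper. A minimal sketch:

      # Sketch: dsDNA concentration from A260, with a common purity check.
      def dsdna_concentration(a260, dilution_factor=1.0):
          """dsDNA concentration in ug/mL (50 ug/mL per absorbance unit)."""
          return a260 * 50.0 * dilution_factor

      def purity_ratio(a260, a280):
          """A260/A280; ~1.8-2.0 is the usual acceptance window for DNA."""
          return a260 / a280

      conc = dsdna_concentration(a260=0.35, dilution_factor=10)  # 175 ug/mL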

  19. Head-to-Head Comparison of Ultra-High-Performance Liquid Chromatography with Diode Array Detection versus Quantitative Nuclear Magnetic Resonance for the Quantitative Analysis of the Silymarin Complex in Silybum marianum Fruit Extracts.

    Science.gov (United States)

    Cheilari, Antigoni; Sturm, Sonja; Intelmann, Daniel; Seger, Christoph; Stuppner, Hermann

    2016-02-24

    Quantitative nuclear magnetic resonance (qNMR) spectroscopy is known as an excellent alternative to chromatography-based mixture analysis. NMR spectroscopy is a non-destructive method, needs only limited sample preparation, and can be readily automated. A head-to-head comparison of qNMR with an ultra-high-performance liquid chromatography with diode array detection (uHPLC-DAD) based quantitative analysis of six flavonolignan congeners (silychristin, silydianin, silybin A, silybin B, isosilybin A, and isosilybin B) of the Silybum marianum silymarin complex is presented. Both assays showed similar performance characteristics (linear range, accuracy, precision, and limits of quantitation), with analysis times below 30 min/sample. The assays were applied to industrial S. marianum extracts (AC samples) and to extracts locally prepared from S. marianum fruits (PL samples). An assay comparison by Bland-Altman plots (relative method bias, AC samples: -0.1%, 2SD range ±5.1%; PL samples: -0.3%, 2SD range ±7.8%) and Passing-Bablok regression analysis (slope and intercept for AC and PL samples not significantly different from 1.00 and 0.00, respectively; Spearman's coefficient of rank correlation >0.99) showed that qNMR and uHPLC-DAD can be used interchangeably to quantitate flavonolignans in the silymarin complex.
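    The Bland-Altman comparison reported above (relative bias with ±2SD limits of agreement) reduces to a short computation over paired assay results. A generic sketch, not tied to the study's data:

      # Sketch: Bland-Altman bias and limits of agreement for paired assay results,
      # expressed as relative (%) differences as in the study.
      import numpy as np

      def bland_altman(x, y):
          x, y = np.asarray(x, float), np.asarray(y, float)
          rel_diff = 100.0 * (x - y) / ((x + y) / 2.0)  # percent difference per sample
          bias = rel_diff.mean()
          sd = rel_diff.std(ddof=1)
          return bias, (bias - 2 * sd, bias + 2 * sd)   # bias and 2SD range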

  20. Automated Building Extraction from High-Resolution Satellite Imagery in Urban Areas Using Structural, Contextual, and Spectral Information

    Directory of Open Access Journals (Sweden)

    Curt H. Davis

    2005-08-01

    Full Text Available High-resolution satellite imagery provides an important new data source for building extraction. We demonstrate an integrated strategy for identifying buildings in 1-meter-resolution satellite imagery of urban areas. Buildings are extracted using structural, contextual, and spectral information. First, a series of geodesic opening and closing operations is used to build a differential morphological profile (DMP) that provides image structural information. Building hypotheses are generated and verified through shape analysis applied to the DMP. Second, shadows are extracted using the DMP to provide reliable contextual information for hypothesizing the position and size of adjacent buildings. Seed building rectangles are verified and grown on a finely segmented image. Next, bright buildings are extracted using spectral information. The extraction results from the different information sources are combined after independent extraction. A performance evaluation of the building extraction on an urban test site using IKONOS satellite imagery of the City of Columbia, Missouri, is reported. With the combination of structural, contextual, and spectral information, 72.7% of the building areas are extracted with a quality percentage of 58.8%.
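    The differential morphological profile is built from reconstruction-based (geodesic) openings or closings at increasing structuring-element sizes, with stage-to-stage differences recording structure at each scale. A minimal opening-only sketch using scikit-image; the disk radii are illustrative assumptions:

      # Sketch: opening-by-reconstruction DMP; each slice captures structure at one scale.
      import numpy as np
      from skimage.morphology import disk, erosion, reconstruction

      def dmp_opening(image, radii=(2, 4, 8, 16)):
          previous = image.astype(float)
          profile = []
          for r in radii:
              marker = erosion(image, disk(r)).astype(float)  # geodesic marker
              opened = reconstruction(marker, image.astype(float), method="dilation")
              profile.append(np.abs(previous - opened))       # differential stage
              previous = opened
          return np.stack(profile)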

  1. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

    Full Text Available In this paper, we present a novel texture-image feature for Emotion Sensing in Speech (ESS). The idea is based on the fact that texture images carry emotion-related information, and the feature extraction is derived from the time-frequency representation given by spectrogram images. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the image contrast. The texture image information (TII) derived from the spectrogram image is then extracted using Laws' masks to characterize the emotional state. To evaluate the effectiveness of the proposed emotion recognition across languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, plus one self-recorded database (KHUSC-EmoDB), for cross-corpus evaluation. Results of the proposed ESS system are presented using a support vector machine (SVM) as the classifier. Experimental results show that the proposed TII-based feature extraction, inspired by visual perception, provides significant classification power for ESS systems: the two-dimensional (2-D) TII feature can discriminate between emotions in visual form beyond what pitch and formant tracks convey, and de-noising is more easily accomplished on 2-D images than on 1-D speech.
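    Laws' masks are 2-D filters formed as outer products of short 1-D vectors (level, edge, spot, ripple), and local energy of the filter responses yields the texture features. A minimal sketch with the common 5-tap set; the mask pairing and window size are assumptions, not the paper's configuration:

      # Sketch: Laws' texture-energy feature from a spectrogram-derived image.
      import numpy as np
      from scipy.ndimage import convolve, uniform_filter

      VECTORS = {
          "L5": np.array([1, 4, 6, 4, 1]),     # level
          "E5": np.array([-1, -2, 0, 2, 1]),   # edge
          "S5": np.array([-1, 0, 2, 0, -1]),   # spot
          "R5": np.array([1, -4, 6, -4, 1]),   # ripple
      }

      def laws_energy(image, a="E5", b="S5", window=15):
          mask = np.outer(VECTORS[a], VECTORS[b])               # 5x5 Laws mask
          response = convolve(image.astype(float), mask)        # filter response
          return uniform_filter(np.abs(response), size=window)  # local texture energy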

  2. A weighted information criterion for multiple minor components and its adaptive extraction algorithms.

    Science.gov (United States)

    Gao, Yingbin; Kong, Xiangyu; Zhang, Huihui; Hou, Li'an

    2017-05-01

    Minor components (MCs) play an important role in signal processing and data analysis, so developing MC extraction algorithms is valuable work. Based on the concepts of weighted subspaces and optimum theory, a weighted information criterion is proposed for searching for the optimum solution of a linear neural network. This information criterion exhibits a unique global minimum, attained if and only if the state matrix is composed of the desired MCs of the autocorrelation matrix of an input signal. Using the gradient ascent method and the recursive least squares (RLS) method, two algorithms are developed for multiple-MC extraction. The global convergence of the proposed algorithms is also analyzed by the Lyapunov method. The proposed algorithms can extract multiple MCs in parallel and have an advantage in dealing with high-dimensional matrices. Since the weighting matrix does not require an accurate value, this facilitates the system design of the proposed algorithms for practical applications. The speed and computation advantages of the proposed algorithms are verified through simulations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. An Accurate Integral Method for Vibration Signal Based on Feature Information Extraction

    Directory of Open Access Journals (Sweden)

    Yong Zhu

    2015-01-01

    Full Text Available After summarizing the advantages and disadvantages of current integral methods, a novel vibration-signal integration method based on feature information extraction is proposed. This method takes full advantage of the self-adaptive filter characteristic and waveform-correction feature of ensemble empirical mode decomposition in dealing with nonlinear and nonstationary signals, and merges the strengths of kurtosis, mean square error, energy, and singular value decomposition for signal feature extraction. The values of these four indexes are combined into a feature vector; the characteristic components hidden in the vibration signal are then accurately extracted by Euclidean-distance search, and the desired integral signals are precisely reconstructed. With this method, the interference from invalid signal components such as trend items and noise, which plagues traditional methods, is resolved: the large cumulative error of traditional time-domain integration is effectively overcome, and the large low-frequency error of traditional frequency-domain integration is avoided. Compared with traditional integral methods, this method is outstanding at removing noise while retaining useful feature information, and shows higher accuracy and superiority.
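    For reference, the classical frequency-domain integration that this record measures itself against divides each Fourier coefficient by jω, which is exactly where its large low-frequency error originates. A minimal sketch of that baseline (not the authors' method), with a high-pass cutoff as the usual mitigation; the cutoff value is an assumption:

      # Sketch: classical frequency-domain integration (acceleration -> velocity).
      import numpy as np

      def integrate_fd(accel, fs, f_cut=1.0):
          n = len(accel)
          spec = np.fft.rfft(accel)
          freqs = np.fft.rfftfreq(n, d=1.0 / fs)
          out = np.zeros_like(spec)
          keep = freqs >= f_cut                      # drop bins where 1/omega blows up
          out[keep] = spec[keep] / (1j * 2 * np.pi * freqs[keep])
          return np.fft.irfft(out, n)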

  4. Dual-wavelength in-line phase-shifting interferometry based on two dc-term-suppressed intensities with a special phase shift for quantitative phase extraction.

    Science.gov (United States)

    Xu, Xiaoqing; Wang, Yawei; Xu, Yuanyuan; Jin, Weifeng

    2016-06-01

    To improve phase retrieval in quantitative phase imaging, a new approach to quantitative phase extraction is proposed based on two dual-wavelength intensities, after filtering out the corresponding dc term for each wavelength, in which a special phase shift is used. In this approach, only the combination of the phase-shifting technique and subtraction procedures is needed, and no additional algorithms are required. The thickness of the phase object can be obtained from the phase image, which is related to the synthetic beat wavelength. The feasibility of this method is verified by simulated experiments on optically transparent objects.
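    The synthetic beat wavelength invoked here follows the standard dual-wavelength relation; this is a textbook identity, not a formula quoted from the record, and the exact thickness relation additionally depends on geometry and refractive index:

      \Lambda = \frac{\lambda_1 \lambda_2}{\lvert \lambda_1 - \lambda_2 \rvert},
      \qquad
      h \propto \frac{\Lambda \, \Delta\phi_\Lambda}{2\pi}

    Because \Lambda greatly exceeds either optical wavelength, the unambiguous range of the recovered phase, and hence of the measurable thickness, is correspondingly extended.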

  5. High-resolution gas chromatography/mass spectrometry method for characterization and quantitative analysis of ginkgolic acids in Ginkgo biloba plants, extracts, and dietary supplements.

    Science.gov (United States)

    Wang, Mei; Zhao, Jianping; Avula, Bharathi; Wang, Yan-Hong; Avonto, Cristina; Chittiboyina, Amar G; Wylie, Philip L; Parcher, Jon F; Khan, Ikhlas A

    2014-12-17

    A high-resolution gas chromatography/mass spectrometry (GC/MS) method with selected ion monitoring, focusing on the characterization and quantitative analysis of ginkgolic acids (GAs) in Ginkgo biloba L. plant materials, extracts, and commercial products, was developed and validated. The method involved sample extraction with (1:1) methanol and 10% formic acid, liquid-liquid extraction with n-hexane, and derivatization with trimethylsulfonium hydroxide (TMSH). Separation of two saturated (C13:0 and C15:0) and six unsaturated ginkgolic acid methyl esters with different positional double bonds (C15:1 Δ8 and Δ10; C17:1 Δ8, Δ10, and Δ12; and C17:2) was achieved on a very polar (88% cyanopropyl) aryl-polysiloxane HP-88 capillary GC column. The double-bond positions in the GAs were determined by ozonolysis. The developed GC/MS method was validated according to ICH guidelines, and the quantitation results were verified by comparison with a standard high-performance liquid chromatography method. Nineteen authenticated and commercial G. biloba plant samples and 21 dietary supplements purported to contain G. biloba leaf extracts were analyzed. Finally, the presence of the Ginkgo biloba marker compounds, terpene trilactones and flavonol glycosides, in the dietary supplements was determined by UHPLC/MS and used to confirm the presence of G. biloba leaf extracts in all of the botanical dietary supplements.

  6. Note: Sound recovery from video using SVD-based information extraction.

    Science.gov (United States)

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Chang'an

    2016-08-01

    This note reports an efficient singular value decomposition (SVD) based vibration extraction approach that recovers sound information from silent high-speed video. A high-speed camera with frame rates in the range of 2-10 kHz films the vibrating objects. Sub-images cut from the video frames are transformed into column vectors and assembled into a new matrix. The SVD of this matrix produces orthonormal image bases (OIBs), and image projections onto a specific OIB can be recovered as understandable acoustic signals. Standard frequencies of 256 Hz and 512 Hz tuning forks are extracted offline from their vibrating surfaces, and a 3.35 s speech signal is recovered online, within 1 min, from a piece of paper stimulated by sound waves.
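    The pipeline just described can be sketched directly: each frame's sub-image becomes one column of a matrix, the left singular vectors of that matrix are the orthonormal image bases, and projecting the frame sequence onto one basis yields a 1-D signal sampled at the frame rate. A minimal sketch; mean removal and the choice of basis index are assumptions:

      # Sketch: recover a 1-D vibration signal from high-speed video via SVD.
      import numpy as np

      def recover_signal(frames, basis_index=1):
          """frames: array of shape (n_frames, height, width)."""
          n = frames.shape[0]
          data = frames.reshape(n, -1).T.astype(float)  # pixels x frames
          data -= data.mean(axis=1, keepdims=True)      # remove the static scene
          u, s, vt = np.linalg.svd(data, full_matrices=False)
          return data.T @ u[:, basis_index]             # one sample per frame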

  7. Taming Big Data: An Information Extraction Strategy for Large Clinical Text Corpora.

    Science.gov (United States)

    Gundlapalli, Adi V; Divita, Guy; Carter, Marjorie E; Redd, Andrew; Samore, Matthew H; Gupta, Kalpana; Trautner, Barbara

    2015-01-01

    Concepts of interest for clinical and research purposes are not uniformly distributed in clinical text available in electronic medical records. The purpose of our study was to identify filtering techniques to select 'high yield' documents for increased efficacy and throughput. Using two large corpora of clinical text, we demonstrate the identification of 'high yield' document sets in two unrelated domains: homelessness and indwelling urinary catheters. For homelessness, the high yield set includes homeless program and social work notes. For urinary catheters, concepts were more prevalent in notes from hospitalized patients; nursing notes accounted for a majority of the high yield set. This filtering will enable customization and refining of information extraction pipelines to facilitate extraction of relevant concepts for clinical decision support and other uses.

  8. Note: Sound recovery from video using SVD-based information extraction

    Science.gov (United States)

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Chang'an

    2016-08-01

    This note reports an efficient singular value decomposition (SVD) based vibration extraction approach that recovers sound information from silent high-speed video. A high-speed camera with frame rates in the range of 2-10 kHz films the vibrating objects. Sub-images cut from the video frames are transformed into column vectors and assembled into a new matrix. The SVD of this matrix produces orthonormal image bases (OIBs), and image projections onto a specific OIB can be recovered as understandable acoustic signals. Standard frequencies of 256 Hz and 512 Hz tuning forks are extracted offline from their vibrating surfaces, and a 3.35 s speech signal is recovered online, within 1 min, from a piece of paper stimulated by sound waves.

  9. National information service in mining, mineral processing and extractive metallurgy. [MINTEC

    Energy Technology Data Exchange (ETDEWEB)

    Romaniuk, A.S.; MacDonald, R.J.C.

    1979-03-01

    More than a decade ago, CANMET management recognized the need to make better use of existing technological information in mining and extractive metallurgy, two fields basic to the economic well-being of Canada. At that time there were no indexes or files dedicated to disseminating technical information for the many minerals mined and processed in Canada, including coal. CANMET, with the nation's largest research and library resources in the minerals field, was in a unique position to fill this need. Initial efforts were concentrated on building a mining file, beginning with the identification of world sources of published information, the development of a special thesaurus of terms for language control, and the adoption of a manual indexing/retrieval system. By early 1973, this file held 8,300 references, with source, abstract, and keywords given for each reference. In mid-1973, operations were computerized. Software for indexing and retrieval in batch mode was written by CANMET staff to utilize the hardware facilities of EMR's Computer Science Center. The resulting MINTEC file, one of the few files of technological information produced in Canada, is the basis for the national literature-search service in mining offered by CANMET. Attention is now focused on building a sister file in extractive metallurgy using the system already developed. Published information sources have been identified, and a thesaurus of terms is being compiled and tested. The software developed for CANMET's file-building operations has several features, including the selective dissemination of information and the production from magnetic tape of photo-ready copy for publication, as in a bi-monthly abstracts journal.

  10. The Readability of Electronic Cigarette Health Information and Advice: A Quantitative Analysis of Web-Based Information

    Science.gov (United States)

    Zhu, Shu-Hong; Conway, Mike

    2017-01-01

    Background The popularity and use of electronic cigarettes (e-cigarettes) have increased across all demographic groups in recent years. However, little is currently known about the readability of health information and advice aimed at the general public regarding the use of e-cigarettes. Objective The objective of our study was to examine the readability of publicly available health information as well as advice on e-cigarettes. We compared information and advice available from US government agencies, nongovernment organizations, English-speaking government agencies outside the United States, and for-profit entities. Methods A systematic search for health information and advice on e-cigarettes was conducted using search engines. We manually verified the search results and converted them to plain text for analysis. We then assessed the readability of the collected documents using 4 readability metrics, followed by pairwise comparisons of groups with adjustment for multiple comparisons. Results A total of 54 documents were collected for this study. All 4 readability metrics indicate that all information and advice on e-cigarette use is written at a level higher than that recommended for the general public by National Institutes of Health (NIH) communication guidelines. However, health information and advice written by for-profit entities, many of which were promoting e-cigarettes, were significantly easier to read. Conclusions A substantial proportion of potential and current e-cigarette users are likely to have difficulty fully comprehending Web-based health information regarding e-cigarettes, potentially hindering effective health-seeking behaviors. To comply with NIH communication guidelines, government entities and nongovernment organizations would benefit from improving the readability of e-cigarette information and advice. PMID:28062390
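    Readability assessments of this kind rest on grade-level formulas computed from sentence, word, and syllable counts. The abstract does not name the four metrics used, so the following sketch scores a document with four common stand-ins via the textstat Python package:

      # Sketch: scoring a health-information document with four common readability metrics.
      import textstat

      def readability_report(text):
          return {
              "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
              "smog_index": textstat.smog_index(text),
              "gunning_fog": textstat.gunning_fog(text),
              "coleman_liau_index": textstat.coleman_liau_index(text),
          }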

  11. Automated DICOM metadata and volumetric anatomical information extraction for radiation dosimetry

    Science.gov (United States)

    Papamichail, D.; Ploussi, A.; Kordolaimi, S.; Karavasilis, E.; Papadimitroulas, P.; Syrgiamiotis, V.; Efstathopoulos, E.

    2015-09-01

    Patient-specific dosimetry calculations based on simulation techniques require, as a prerequisite, modeling of the modality system and the creation of voxelized phantoms. This procedure requires knowledge of the scanning parameters and patient information included in a DICOM file, as well as image segmentation. However, extracting this information is complicated and time-consuming. The objective of this study was to develop a simple graphical user interface (GUI) to (i) automatically extract metadata from every slice image of a DICOM file in a single query and (ii) interactively specify the regions of interest (ROIs) without explicit access to the radiology information system. The user-friendly application was developed in the Matlab environment. The user can select a series of DICOM files and manage their text and graphical data. The metadata are automatically formatted and presented to the user as a Microsoft Excel file. The volumetric maps are formed by interactively specifying the ROIs and assigning a specific value to every ROI. The result is stored in DICOM format for data and trend analysis. The developed GUI is easy to use, fast, and constitutes a very useful tool for individualized dosimetry. One future goal is to incorporate remote-access functionality for a PACS server.
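    The tool itself is Matlab-based; purely for illustration, an equivalent single-query metadata sweep can be sketched in Python with pydicom. The tag list and CSV output are assumptions, not the authors' design:

      # Sketch: batch-extract dosimetry-relevant DICOM tags from every slice in a folder.
      from pathlib import Path
      import csv
      import pydicom

      TAGS = ["PatientID", "Modality", "KVP", "SliceThickness", "ExposureTime"]

      def dump_metadata(dicom_dir, out_csv="metadata.csv"):
          with open(out_csv, "w", newline="") as f:
              writer = csv.writer(f)
              writer.writerow(["file"] + TAGS)
              for path in sorted(Path(dicom_dir).glob("*.dcm")):
                  ds = pydicom.dcmread(path, stop_before_pixels=True)  # headers only
                  writer.writerow([path.name] + [getattr(ds, t, "") for t in TAGS])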

  12. Information extraction approaches to unconventional data sources for "Injury Surveillance System": the case of newspapers clippings.

    Science.gov (United States)

    Berchialla, Paola; Scarinzi, Cecilia; Snidero, Silvia; Rahim, Yousif; Gregori, Dario

    2012-04-01

    Injury Surveillance Systems based on traditional hospital records or clinical data have the advantage of being a well-established, highly reliable source of information for active surveillance of specific injuries, such as choking in children. However, they suffer from delays in making data available for analysis, due to inefficiencies in data collection procedures. In this sense, integrating clinical registries with unconventional data sources such as newspaper articles has the advantage of making the system more useful for early alerting. Using such sources is difficult, since the information is only available as free natural-language documents rather than the structured databases required by traditional data mining techniques. Information Extraction (IE) addresses the problem of transforming a corpus of textual documents into a more structured database. In this paper, on a corpus of Italian newspaper articles related to choking in children due to ingestion/inhalation of a foreign body, we compared the performance of three IE algorithms: (a) a classical rule-based system that requires manual annotation of the rules; (b) a rule-based system that allows automatic building of the rules; and (c) a machine learning method based on Support Vector Machines. Although some useful indications can be extracted from the newspaper clippings, this approach is at present far from being routinely implementable for injury surveillance purposes.

  13. Qualitative and quantitative information flow analysis for multi-thread programs

    NARCIS (Netherlands)

    Ngo, Tri Minh

    2014-01-01

    In today's information-based society, guaranteeing information security plays an important role in all aspects of life: communication between citizens and governments, military, companies, financial information systems, web-based services etc. With the increasing popularity of computer systems with

  14. Extraction of depth information for 3D imaging using pixel aperture technique

    Science.gov (United States)

    Choi, Byoung-Soo; Bae, Myunghan; Kim, Sang-Hwan; Lee, Jimin; Oh, Chang-Woo; Chang, Seunghyuk; Park, JongHo; Lee, Sang-Jin; Shin, Jang-Kyoo

    2017-02-01

    Three-dimensional (3D) imaging is an important area that can be applied to face detection, gesture recognition, and 3D reconstruction. In this paper, the extraction of depth information for 3D imaging using a pixel aperture technique is presented. An active pixel sensor (APS) with an in-pixel aperture has been developed for this purpose. In conventional camera systems using a complementary metal-oxide-semiconductor (CMOS) image sensor, the aperture is located behind the camera lens. In our proposed camera system, however, the aperture, implemented in a metal layer of the CMOS process, is located on the White (W) pixel, i.e., a pixel without any color filter on top. Four pixel types, Red (R), Green (G), Blue (B), and White (W), were used for the pixel aperture technique. The RGB pixels produce a defocused image with blur, while the W pixels produce a focused image. The focused image is used as a reference image for extracting depth information for 3D imaging and is compared with the defocused image from the RGB pixels. Depth information can therefore be extracted by comparing the defocused image with the focused image using the depth-from-defocus (DFD) method. The pixel size of the 4-Tr APS is 2.8 μm × 2.8 μm, and the pixel structure was designed and simulated based on a 0.11 μm CMOS image sensor (CIS) process. The optical performance of the pixel aperture technique was evaluated using optical simulation with the finite-difference time-domain (FDTD) method, and the electrical performance was evaluated using TCAD.

  15. A comparison of sorptive extraction techniques coupled to a new quantitative, sensitive, high throughput GC-MS/MS method for methoxypyrazine analysis in wine.

    Science.gov (United States)

    Hjelmeland, Anna K; Wylie, Philip L; Ebeler, Susan E

    2016-02-01

    Methoxypyrazines are volatile compounds found in plants, microbes, and insects that have potent vegetal and earthy aromas. With sensory detection thresholds in the low ng/L range, modest concentrations of these compounds can profoundly impact the aroma quality of foods and beverages, and high levels can lead to consumer rejection. The wine industry routinely analyzes the most prevalent methoxypyrazine, 2-isobutyl-3-methoxypyrazine (IBMP), to aid in harvest decisions, since concentrations decrease during berry ripening. In addition to IBMP, three other methoxypyrazines, IPMP (2-isopropyl-3-methoxypyrazine), SBMP (2-sec-butyl-3-methoxypyrazine), and EMP (2-ethyl-3-methoxypyrazine), have been identified in grapes and/or wine and can impact aroma quality. Despite their routine analysis in the wine industry (mostly IBMP), accurate methoxypyrazine quantitation is hindered by two major challenges: sensitivity and resolution. With extremely low sensory detection thresholds (~8-15 ng/L in wine for IBMP), highly sensitive analytical methods are necessary to quantify methoxypyrazines at trace levels. Here we were able to achieve resolution of IBMP, as well as IPMP, EMP, and SBMP, from co-eluting compounds using one-dimensional chromatography coupled to positive chemical ionization tandem mass spectrometry. Three extraction techniques, HS-SPME (headspace solid-phase microextraction), SBSE (stir bar sorptive extraction), and HSSE (headspace sorptive extraction), were validated and compared. A 30 min extraction time was used for the HS-SPME and SBSE techniques, while 120 min was necessary to achieve sufficient sensitivity for HSSE extractions. All extraction methods have limits of quantitation (LOQ) at or below 1 ng/L for all four methoxypyrazines analyzed, i.e., LOQs at or below reported sensory detection limits in wine. The method is high-throughput, with resolution of all compounds possible within a relatively rapid 27 min GC oven program.

  16. Single-Step RNA Extraction from Different Hydrogel-Embedded Mesenchymal Stem Cells for Quantitative Reverse Transcription-Polymerase Chain Reaction Analysis.

    Science.gov (United States)

    Köster, Natascha; Schmiermund, Alexandra; Grubelnig, Stefan; Leber, Jasmin; Ehlicke, Franziska; Czermak, Peter; Salzig, Denise

    2016-06-01

    For many tissue engineering applications, cells such as human mesenchymal stem cells (hMSCs) must be embedded in hydrogels. The analysis of embedded hMSCs requires RNA extraction, but common extraction procedures often produce low yields and/or poor-quality RNA. We systematically investigated four homogenization methods combined with eight RNA extraction protocols for hMSCs embedded in three common hydrogel types (alginate, agarose, and gelatin). For all three hydrogel types, we found that using liquid nitrogen or a rotor-stator produced low RNA yields, whereas using a microhomogenizer or enzymatic/chemical hydrogel digestion achieved better yields regardless of which extraction protocol was subsequently applied. The hot phenol extraction protocol generally achieved the highest A260 values (representing up to 40.8 μg RNA per 10^6 cells), but the cetyltrimethylammonium bromide (CTAB) method produced RNA of better quality, with A260/A280 and A260/A230 ratios and UV spectra similar to the pure RNA control. The RNA produced by this method was also suitable as a template for endpoint and quantitative reverse transcription-PCR (qRT-PCR), achieving low Ct values of ~20. The prudent choice of hydrogel homogenization and RNA extraction methods can ensure the preparation of high-quality RNA that generates reliable endpoint and quantitative RT-PCR data. We therefore propose a universal method suitable for extracting RNA from cells embedded in all three hydrogel types commonly used for tissue engineering.

  17. Analysis on health information extracted from an urban professional population in Beijing

    Institute of Scientific and Technical Information of China (English)

    ZHANG Tie-mei; ZHANG Yan; LIU Bin; JIA Hong-bo; LIU Yun-jie; ZHU Ling; LUO Sen-lin; HAN Yi-wen; ZHANG Yan; YANG Shu-wen; LIU An-nan; MA Lan-jun; ZHAO Yan-yan

    2011-01-01

    Background The data assembled from a population can provide information on health trends within that population. The aim of this research was to extract basic health information from an urban professional population in Beijing. Methods Data analysis was carried out on a population who underwent a routine medical check-up and were aged >20 years, comprising 30 058 individuals. General information, data from physical examinations, and blood samples were collected using the same methods. Health status was separated into three groups by criteria generated in this study: people with common chronic diseases, people in a sub-clinical situation, and healthy people. The proportions of common diseases and the distribution of health risks across age groups were also analyzed. Results The proportions of people with common chronic diseases, in the sub-clinical group, and in the healthy group were 28.6%, 67.8%, and 3.6%, respectively. There were significant differences in health status between age groups. Hypertension was at the top of the list of self-reported diseases. The proportion of chronic diseases increased significantly in people over 35 years of age, while the proportion of sub-clinical conditions decreased at the same rate. The complex risk factors for health in this population were metabolic disturbances (61.3%), risk for tumor (2.7%), abnormal results of morphological examination (8.2%), and abnormal results of laboratory tests of serum (27.8%). Conclusions Health information can be extracted from the complex data sets generated by health check-ups of the general population. This information should be applied to support the prevention and control of chronic diseases and to direct interventions for patients with risk factors for disease.

  18. Red Tide Information Extraction Based on Multi-source Remote Sensing Data in Haizhou Bay

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    [Objective] The aim was to extract red tide information in Haizhou Bay on the basis of multi-source remote sensing data. [Method] Red tide in Haizhou Bay was studied based on multi-source remote sensing data, such as IRS-P6 data of October 8, 2005, Landsat 5 TM data of May 20, 2006, MODIS 1B data of October 6, 2006 and HY-1B second-grade data of April 22, 2009, which were firstly preprocessed through geometric correction, atmospheric correction, image resizing and so on. At the same time, the synchronous environment mon...

  19. The method of earthquake landslide information extraction with high-resolution remote sensing

    Science.gov (United States)

    Wu, Jian; Chen, Peng; Liu, Yaolin; Wang, Jing

    2014-05-01

    As a kind of secondary geological disaster caused by strong earthquakes, earthquake-induced landslides have drawn much attention in the world due to their severe hazard. High-resolution remote sensing, as a new technology for investigation and monitoring, has been widely applied in landslide susceptibility and hazard mapping. The Ms 8.0 Wenchuan earthquake, which occurred on 12 May 2008, caused many buildings to collapse and half a million people to be injured. Meanwhile, damage caused by earthquake-induced landslides, collapses and debris flows became the major part of the total losses. By analyzing the properties of the Zipingpu landslide that occurred in the Wenchuan earthquake, the present study advances a quick and effective way to extract landslides based on NDVI and slope information; the results were validated with pixel-oriented and object-oriented methods. The main advantage of the idea lies in the fact that it does not require much professional knowledge or data such as crustal movement, geological structure or fractured zones, so researchers can provide landslide monitoring information for earthquake relief as soon as possible. In the pixel-oriented way, the NDVI-differential image as well as the slope image was analyzed and segmented to extract the landslide information. In the object-oriented method, a multi-scale segmentation algorithm was applied to build a three-layer hierarchy. The spectral, textural, shape, location and contextual information of individual object classes, GLCM (Grey Level Co-occurrence Matrix) homogeneity, shape index, etc. were extracted and used to establish the fuzzy decision rule system of each layer for earthquake landslide extraction. Comparison of the results generated by the two methods showed that the object-oriented method could successfully avoid the bright-noise phenomenon in the NDVI-differential image caused by the spectral diversity of high-resolution remote sensing data and achieved better result with an overall
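
    The pixel-oriented route above reduces to two thresholds, one on the NDVI difference and one on terrain slope. Below is a minimal numpy sketch of that idea; the band arrays, threshold values and toy scene are illustrative assumptions, not the paper's calibrated parameters.

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red); epsilon avoids division by zero."""
    return (nir - red) / (nir + red + 1e-9)

def landslide_mask(red_pre, nir_pre, red_post, nir_post, slope_deg,
                   ndvi_drop=0.2, min_slope=10.0):
    """Flag pixels whose NDVI fell sharply after the event AND that lie
    on terrain steep enough to fail (both thresholds are assumed)."""
    d_ndvi = ndvi(red_pre, nir_pre) - ndvi(red_post, nir_post)
    return (d_ndvi > ndvi_drop) & (slope_deg > min_slope)

# Toy 3x3 scene: only the steep, devegetated pixel column fires.
red_pre  = np.full((3, 3), 0.1); nir_pre = np.full((3, 3), 0.5)
red_post = np.tile([0.1, 0.1, 0.4], (3, 1))
nir_post = np.tile([0.5, 0.5, 0.2], (3, 1))
slope    = np.tile([5.0, 20.0, 30.0], (3, 1))
print(landslide_mask(red_pre, nir_pre, red_post, nir_post, slope))
```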

  20. Multi-Paradigm and Multi-Lingual Information Extraction as Support for Medical Web Labelling Authorities

    Directory of Open Access Journals (Sweden)

    Martin Labsky

    2010-10-01

    Full Text Available Until recently, quality labelling of medical web content has been a predominantly manual activity. However, the advances in automated text processing opened the way to computerised support of this activity. The core enabling technology is information extraction (IE). However, the heterogeneity of websites offering medical content imposes particular requirements on the IE techniques to be applied. In the paper we discuss these requirements and describe a multi-paradigm approach to IE addressing them. Experiments on multi-lingual data are reported. The research has been carried out within the EU MedIEQ project.

  1. Enhancing understanding and recall of quantitative information about medical risks: a cross-cultural comparison between Germany and Spain.

    Science.gov (United States)

    Garcia-Retamero, Rocio; Galesic, Mirta; Gigerenzer, Gerd

    2011-05-01

    In two experiments, we analyzed cross-cultural differences in understanding and recalling information about medical risks in two countries--Germany and Spain--whose students differ substantially in their quantitative literacy according to the 2003 Programme for International Student Assessment (PISA; OECD, 2003, 2010). We further investigated whether risk understanding can be enhanced by using visual aids (Experiment 1), and whether different ways of describing risks affect recall (Experiment 2). Results showed that Spanish students are more vulnerable to misunderstanding and forgetting the risk information than their German counterparts. Spanish students, however, benefit more than German students from representing the risk information using ecologically rational formats--which exploit the way information is represented in the human mind. We concluded that our results can have important implications for clinical practice.

  2. [A quantitative and qualitative study on effective information flow for infectious disease control in welfare facilities for the elderly].

    Science.gov (United States)

    Koshida, Mihoko; Inaoka, Yumiko; Iwatsuki, Masakazu; Okayama, Miho; Takehara, Megumi; Tomita, Yasuko; Hironaka, Megumi; Miwa, Satoshi; Sone, Tomofumi; Morita, Takae

    2004-12-01

    To clarify factors associated with effective information transfer among staff of welfare facilities for the elderly, and to propose measures for an appropriate information flow system in welfare facilities and public health centers, communication channels and methods, as well as encouraging factors and barriers, were investigated with respect to a printed medium on the control and management of scabies infections. A self-administered questionnaire survey and an interview survey were conducted with the staff of welfare facilities for the elderly to which the "Control and management manual of scabies infection" had been distributed by the Tama-Tachikawa Public Health Center in Tokyo. A self-administered questionnaire was sent to the managers and chief practitioners of 66 facilities. Responses were obtained from 66 managers and chief practitioners (response rate: 84.8%) and 831 practitioners (response rate: 53.1%). The questionnaire consisted of 20 items for managers and 18 items for chief practitioners, including experience of scabies epidemics in the facility, training experience for the use of the manual, measures for information gathering, and current information flow within the facility. A semi-structured interview survey was conducted with the manager and/or chief practitioner and practitioners in five facilities; the number of respondents was 10. The interview questions covered job description, scabies control measures, dissemination of the manual to the staff, use of the manual, flows of health-related information, and factors associated with information flows. Summarized codes were extracted from transcriptions of the tape recordings and categorized iteratively according to similarity. In the questionnaire survey, differences in information flow by type of facility and professional background were found. Variation was detected in measures for information gathering and in the focus of information management between managers

  3. Quantitative and qualitative optimization of allergen extraction from peanut and selected tree nuts. Part 1. Screening of optimal extraction conditions using a D-optimal experimental design.

    Science.gov (United States)

    L'Hocine, Lamia; Pitre, Mélanie

    2016-03-01

    A D-optimal design was constructed to optimize allergen extraction efficiency simultaneously from roasted, non-roasted, defatted, and non-defatted almond, hazelnut, peanut, and pistachio flours using three non-denaturing aqueous (phosphate, borate, and carbonate) buffers at various conditions of ionic strength, buffer-to-protein ratio, extraction temperature, and extraction duration. Statistical analysis showed that roasting and omission of defatting significantly lowered protein recovery for all nuts. Increasing the temperature and the buffer-to-protein ratio during extraction significantly increased protein recovery, whereas increasing the extraction time had no significant impact. The impact of the three buffers on protein recovery varied significantly among the nuts. Depending on the extraction conditions, protein recovery varied from 19% to 95% for peanut, 31% to 73% for almond, 17% to 64% for pistachio, and 27% to 88% for hazelnut. A modulation of the protein and immunoglobulin E binding profiles of the extracts by buffer type and ionic strength was evidenced, where high protein recovery levels did not always correlate with high immunoreactivity.

  4. Quantitative photoacoustic characterization of blood clot in blood: A mechanobiological assessment through spectral information

    Science.gov (United States)

    Biswas, Deblina; Vasudevan, Srivathsan; Chen, George C. K.; Sharma, Norman

    2017-02-01

    Formation of blood clots, called thrombi, can happen due to hyper-coagulation of blood. Thrombi moving through blood vessels can impede blood flow, an important factor in many critical diseases such as deep vein thrombosis and heart attack. Understanding the mechanical properties of clot formation is vital for assessing the severity of thrombosis and for proper treatment, yet the biomechanics of thrombi is little known to clinicians and not well investigated. Photoacoustic (PA) spectral response, a non-invasive technique, is proposed to investigate the mechanism of blood clot formation through elasticity and also to differentiate clots from blood. A distinct shift (increase in frequency) of the dominant frequency of the PA response during clot formation is reported. In addition, quantitative differentiation of blood clots from blood has been achieved through parameters such as the dominant frequency and spectral energy of the PA spectral response. A nearly twofold increase in dominant frequency was found for blood clots compared to blood. Significant changes in energy also help to quantitatively differentiate clots from the surrounding blood. Our results reveal that the increase in density during clot formation is reflected in the PA spectral response, a significant step towards understanding the mechanobiology of thrombus formation. Hence the proposed tool, in addition to detecting thrombus formation, could reveal mechanical properties of the sample through quantitative photoacoustic spectral parameters.
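
    The two spectral parameters used above, dominant frequency and spectral energy, can be read off a windowed FFT of the photoacoustic time trace. The sketch below is schematic: a synthetic damped sinusoid stands in for the measured A-line, and the sampling rate and signal model are assumptions.

```python
import numpy as np

def pa_spectral_params(signal, fs):
    """Dominant frequency (Hz) and total spectral energy of a PA time trace."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)], float(np.sum(spectrum ** 2))

fs = 50e6                      # 50 MHz sampling rate (assumed)
t = np.arange(2048) / fs
blood = np.sin(2 * np.pi * 2e6 * t) * np.exp(-2e6 * t)   # softer medium
clot  = np.sin(2 * np.pi * 4e6 * t) * np.exp(-2e6 * t)   # stiffer medium
f_blood, _ = pa_spectral_params(blood, fs)
f_clot, e_clot = pa_spectral_params(clot, fs)
print(f_clot / f_blood)        # ~2, mimicking the reported twofold shift
```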

  5. Management and dissemination of MS proteomic data with PROTICdb: example of a quantitative comparison between methods of protein extraction.

    Science.gov (United States)

    Langella, Olivier; Valot, Benoît; Jacob, Daniel; Balliau, Thierry; Flores, Raphaël; Hoogland, Christine; Joets, Johann; Zivy, Michel

    2013-05-01

    High throughput MS-based proteomic experiments generate large volumes of complex data and necessitate bioinformatics tools to facilitate their handling. Needs include means to archive data, to disseminate them to the scientific communities, and to organize and annotate them to facilitate their interpretation. We present here an evolution of PROTICdb, a database software that now handles MS data, including quantification. PROTICdb has been developed to be as independent as possible from the tools used to produce the data. Biological samples and proteomics data are described using ontology terms. A Taverna workflow is embedded, permitting automatic retrieval of information related to identified proteins by querying external databases. Stored data can be displayed graphically, and a "Query Builder" allows users to make sophisticated queries without knowledge of the underlying database structure. All resources can be accessed programmatically using a Java client API or RESTful web services, allowing the integration of PROTICdb into any portal. An example of application is presented, where proteins extracted from a maize leaf sample by four different methods were compared using a label-free shotgun method. Data are available at http://moulon.inra.fr/protic/public. PROTICdb thus provides means for the storage, enrichment, and dissemination of proteomics data. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. A Quantitative and Qualitative Inquiry into Future Teachers' Use of Information and Communications Technology to Develop Students' Information Literacy Skills

    Science.gov (United States)

    Simard, Stéphanie; Karsenti, Thierry

    2016-01-01

    This study aims to understand how preservice programs prepare future teachers to use ICT to develop students' information literacy skills. A survey was conducted from January 2014 through May 2014 with 413 future teachers in four French Canadian universities. In the spring of 2015, qualitative data were also collected from 48 students in their…

  7. The information extraction of Gannan citrus orchard based on the GF-1 remote sensing image

    Science.gov (United States)

    Wang, S.; Chen, Y. L.

    2017-02-01

    The production of Gannan oranges is the largest in China and occupies an important place in the world. Extracting citrus orchards quickly and effectively is of great significance for defense against fruit pathogens, fruit production and industrial planning. The traditional pixel-based spectral extraction of citrus orchards has low classification accuracy and can hardly avoid the "salt-and-pepper" phenomenon; under the influence of noise, the problem of different objects sharing similar spectra is severe. Taking the citrus planting area of Xunwu County, Ganzhou, as the research object, and aiming at the low accuracy of the traditional pixel-based classification method, a decision tree classification method based on an object-oriented rule set is proposed. Firstly, multi-scale segmentation is performed on the GF-1 remote sensing image of the study area. Subsequently, sample objects are selected for statistical analysis of spectral and geometric features. Finally, combined with the concept of decision tree classification, empirical thresholds on single bands, NDVI, band combinations and object geometry are applied hierarchically to extract information for the research area, implementing multi-scale segmentation and hierarchical decision tree classification. The classification results are verified with a confusion matrix, and the overall Kappa index is 87.91%.

  8. A Quantitative Study on Japanese Workers' Awareness to Information Security Using the Data Collected by Web-Based Survey

    Directory of Open Access Journals (Sweden)

    Toshihiko Takemura

    2010-01-01

    Full Text Available Problem statement: Research on information security in social science fields such as economics and business management was not conducted until around 2000, and empirical studies in particular remain scarce. A primary reason is that data on information security countermeasures either do not exist or cannot easily be used where they do. Even in such a research environment, it is necessary to accumulate research, not only to promote academic work but also because of its social role. In this study, the author quantitatively analyzed Japanese workers' awareness of information security. Approach: The author examined whether or not workers' awareness of information security differs across various attributes by using Analysis Of Variance (ANOVA) based on a non-parametric method. Results: Japanese workers' awareness of information security was found to differ across attributes such as organizational attributes and education about information security countermeasures. Conclusion: The author suggests the necessity of enhancing information security education and of introducing firm systems such as an authority handover system and/or a stock option system in order to motivate workers to take efficient information security countermeasures.
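
    The abstract does not name the exact non-parametric test; the Kruskal-Wallis H test is the standard non-parametric counterpart of one-way ANOVA, so a sketch with scipy might look as follows. The awareness scores and grouping are invented for illustration.

```python
from scipy import stats

# Hypothetical Likert-style awareness scores grouped by the amount of
# information-security education the respondent's employer provides.
no_training   = [2, 3, 3, 2, 4, 3, 2]
some_training = [3, 4, 4, 3, 5, 4, 4]
full_program  = [4, 5, 5, 4, 5, 5, 4]

h, p = stats.kruskal(no_training, some_training, full_program)
print(f"H = {h:.2f}, p = {p:.4f}")  # small p -> awareness differs by group
```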

  9. Foreground and Background Lexicons and Word Sense Disambiguation for Information Extraction

    CERN Document Server

    Kilgarriff, A

    1999-01-01

    Lexicon acquisition from machine-readable dictionaries and corpora is currently a dynamic field of research, yet it is often not clear how lexical information so acquired can be used, or how it relates to structured meaning representations. In this paper I look at this issue in relation to Information Extraction (hereafter IE), and one subtask for which both lexical and general knowledge are required, Word Sense Disambiguation (WSD). The analysis is based on the widely-used, but little-discussed distinction between an IE system's foreground lexicon, containing the domain's key terms which map onto the database fields of the output formalism, and the background lexicon, containing the remainder of the vocabulary. For the foreground lexicon, human lexicography is required. For the background lexicon, automatic acquisition is appropriate. For the foreground lexicon, WSD will occur as a by-product of finding a coherent semantic interpretation of the input. WSD techniques as discussed in recent literature are suit...

  10. Non-linear correlation of content and metadata information extracted from biomedical article datasets.

    Science.gov (United States)

    Theodosiou, Theodosios; Angelis, Lefteris; Vakali, Athena

    2008-02-01

    Biomedical literature databases constitute valuable repositories of up to date scientific knowledge. The development of efficient machine learning methods in order to facilitate the organization of these databases and the extraction of novel biomedical knowledge is becoming increasingly important. Several of these methods require the representation of the documents as vectors of variables forming large multivariate datasets. Since the amount of information contained in different datasets is voluminous, an open issue is to combine information gained from various sources to a concise new dataset, which will efficiently represent the corpus of documents. This paper investigates the use of the multivariate statistical approach, called Non-Linear Canonical Correlation Analysis (NLCCA), for exploiting the correlation among the variables of different document representations and describing the documents with only one new dataset. Experiments with document datasets represented by text words, Medical Subject Headings (MeSH) and Gene Ontology (GO) terms showed the effectiveness of NLCCA.
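
    NLCCA itself fits non-linear transformations, but the underlying idea of projecting two document representations onto maximally correlated components can be sketched with scikit-learn's linear CCA; the synthetic text-word and MeSH/GO feature matrices below are placeholders.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_docs = 200
latent = rng.normal(size=(n_docs, 3))      # shared "topics" of the corpus
# Two views of the same documents: e.g. text-word features and MeSH/GO terms.
X = latent @ rng.normal(size=(3, 50)) + 0.1 * rng.normal(size=(n_docs, 50))
Y = latent @ rng.normal(size=(3, 30)) + 0.1 * rng.normal(size=(n_docs, 30))

cca = CCA(n_components=3)
Xc, Yc = cca.fit_transform(X, Y)           # maximally correlated projections
combined = np.hstack([Xc, Yc])             # one concise dataset for both views
print(combined.shape)                      # (200, 6)
```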

  11. Solution of Multiple-Point Statistics to Extracting Information from Remotely Sensed Imagery

    Institute of Scientific and Technical Information of China (English)

    Ge Yong; Bai Hexiang; Cheng Qiuming

    2008-01-01

    Two phenomena, similar objects with different spectra and different objects with similar spectra, often make it difficult to separate and identify all types of geographical objects using spectral information alone. Therefore, there is a need to incorporate the spatial structural and spatial association properties of object surfaces into image processing to improve the classification accuracy of remotely sensed imagery. In the current article, a new method is proposed on the basis of the principle of multiple-point statistics for combining spectral information and spatial information for image classification. The method was validated by applying it to a case study on road extraction based on Landsat TM imagery taken over the Chinese Yellow River delta on August 8, 1999. The classification results show that this new method provides overall better results than traditional methods such as the maximum likelihood classifier (MLC).

  12. Detailed design specification for the ALT Shuttle Information Extraction Subsystem (SIES)

    Science.gov (United States)

    Clouette, G. L.; Fitzpatrick, W. N.

    1976-01-01

    The approach and landing test (ALT) shuttle information extraction system (SIES) is described in terms of general requirements and system characteristics output products and processing options, output products and data sources, and system data flow. The ALT SIES is a data reduction system designed to satisfy certain data processing requirements for the ALT phase of the space shuttle program. The specific ALT SIES data processing requirements are stated in the data reduction complex approach and landing test data processing requirements. In general, ALT SIES must produce time correlated data products as a result of standardized data reduction or special purpose analytical processes. The main characteristics of ALT SIES are: (1) the system operates in a batch (non-interactive) mode; (2) the processing is table driven; (3) it is data base oriented; (4) it has simple operating procedures; and (5) it requires a minimum of run time information.

  13. Audio-Visual Speech Recognition Using Lip Information Extracted from Side-Face Images

    Directory of Open Access Journals (Sweden)

    Iwano Koji

    2007-01-01

    Full Text Available This paper proposes an audio-visual speech recognition method using lip information extracted from side-face images as an attempt to increase noise robustness in mobile environments. Our proposed method assumes that lip images can be captured using a small camera installed in a handset. Two different kinds of lip features, lip-contour geometric features and lip-motion velocity features, are used individually or jointly, in combination with audio features. Phoneme HMMs modeling the audio and visual features are built based on the multistream HMM technique. Experiments conducted using Japanese connected digit speech contaminated with white noise in various SNR conditions show effectiveness of the proposed method. Recognition accuracy is improved by using the visual information in all SNR conditions. These visual features were confirmed to be effective even when the audio HMM was adapted to noise by the MLLR method.

  15. [An object-based information extraction technology for dominant tree species group types].

    Science.gov (United States)

    Tian, Tian; Fan, Wen-yi; Lu, Wei; Xiao, Xiang

    2015-06-01

    Information extraction for dominant tree species group types is difficult in remote sensing image classification; however, object-oriented classification using high spatial resolution remote sensing data is a new way to realize accurate type information extraction. In this paper, taking the Jiangle Forest Farm in Fujian Province as the research area and based on Quickbird image data from 2013, the object-oriented method was adopted to identify farmland, shrub-herbaceous plants, young afforested land, Pinus massoniana, Cunninghamia lanceolata and broad-leaved tree types. Three types of classification factors, including spectral and texture features and different vegetation indices, were used to establish a class hierarchy. According to the different levels, membership functions and decision tree classification rules were adopted. The results showed that the object-oriented method using texture, spectrum and the vegetation indices achieved a classification accuracy of 91.3%, an increase of 5.7% compared with using only texture and spectrum.

  16. Extracting Urban Ground Object Information from Images and LiDAR Data

    Science.gov (United States)

    Yi, Lina; Zhao, Xuesheng; Li, Luan; Zhang, Guifeng

    2016-06-01

    To deal with the problem of urban ground object information extraction, the paper proposes an object-oriented classification method using aerial imagery and LiDAR data. Firstly, we select the optimal segmentation scales for different ground objects and synthesize them to get accurate object boundaries. Then, the ReliefF algorithm is used to select the optimal feature combination and eliminate the Hughes phenomenon. Eventually, a multiple classifier combination method is applied to obtain the classification outcome. In order to validate the feasibility of this method, two experimental regions in Stuttgart, Germany were selected (Regions A and B, covering 0.21 km² and 1.1 km², respectively). The aim of the first experiment, on Region A, was to determine the optimal segmentation scales and classification features; the overall accuracy of the classification reached 93.3%. The purpose of the experiment on Region B was to validate the applicability of this method to a large area, and it turned out to reach 88.4% overall accuracy. The conclusion is that the proposed method performs accurately and efficiently in urban ground information extraction and is of high application value.

  17. Geopositioning with a quadcopter: Extracted feature locations and predicted accuracy without a priori sensor attitude information

    Science.gov (United States)

    Dolloff, John; Hottel, Bryant; Edwards, David; Theiss, Henry; Braun, Aaron

    2017-05-01

    This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to investigate algorithm performance and issues related to the registration of motion imagery and subsequent extraction of feature locations along with predicted accuracy. A case study is included corresponding to a video taken from a quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude (pointing) information. In particular, tie points are automatically measured between adjacent frames using standard optical flow matching techniques from computer vision, an a priori estimate of sensor attitude is then computed based on supplied GPS sensor positions contained in the video metadata and a photogrammetric/search-based structure from motion algorithm, and then a Weighted Least Squares adjustment of all a priori metadata across the frames is performed. Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used other than for evaluation of solution errors and corresponding accuracy.
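
    The tie-point measurement step described above is standard computer vision; a compact version with OpenCV's pyramidal Lucas-Kanade tracker might look like this (the file name and parameter values are placeholders, not those used in the FMV-GTB).

```python
import cv2

def tie_points(gray_a, gray_b, max_corners=500):
    """Match corner features between adjacent 8-bit grayscale frames with
    pyramidal Lucas-Kanade optical flow; returns matched point pairs."""
    pts_a = cv2.goodFeaturesToTrack(gray_a, maxCorners=max_corners,
                                    qualityLevel=0.01, minDistance=7)
    pts_b, status, _err = cv2.calcOpticalFlowPyrLK(gray_a, gray_b, pts_a, None)
    ok = status.ravel() == 1
    return pts_a[ok].reshape(-1, 2), pts_b[ok].reshape(-1, 2)

cap = cv2.VideoCapture("quadcopter.mp4")   # placeholder path
ok0, f0 = cap.read(); ok1, f1 = cap.read()
if ok0 and ok1:
    a, b = tie_points(cv2.cvtColor(f0, cv2.COLOR_BGR2GRAY),
                      cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY))
    print(len(a), "tie points between frames 0 and 1")
```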

  18. High-resolution multispectral satellite imagery for extracting bathymetric information of Antarctic shallow lakes

    Science.gov (United States)

    Jawak, Shridhar D.; Luis, Alvarinho J.

    2016-05-01

    High-resolution pansharpened images from WorldView-2 were used for bathymetric mapping around the Larsemann Hills and the Schirmacher oasis, east Antarctica. We digitized the lake features, manually extracting all lakes in both study areas. To extract bathymetry values from the multispectral imagery, we used two different models: (a) the Stumpf model and (b) the Lyzenga model. Multiband image combinations were used to improve the results of bathymetric information extraction. The derived depths were validated against in-situ measurements and the root mean square error (RMSE) was computed. We also quantified the error between in-situ and satellite-estimated lake depth values. Our results indicated a high correlation (R = 0.60-0.80) between estimated and in-situ depth measurements, with RMSE ranging from 0.10 to 1.30 m. This study suggests that the coastal blue band of the WV-2 imagery retrieves more accurate bathymetry than the other bands. To test the effect of lake size and depth on bathymetry retrieval, we grouped all the lakes by size and depth (reference data), as some of the lakes were open, some semi-frozen and others completely frozen. Several tests were performed on open lakes by size and depth. Based on depth, very shallow lakes gave better correlation (≈0.89) than shallow (≈0.67) and deep lakes (≈0.48). Based on size, large lakes yielded better correlation than medium and small lakes.
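
    Of the two models, the Stumpf band-ratio model is the simpler: depth is regressed linearly on the ratio of log-transformed blue and green reflectances. The sketch below calibrates it against soundings; the attenuation model, the constant n and the depth range are synthetic assumptions.

```python
import numpy as np

def stumpf_ratio(blue, green, n=1000.0):
    """Stumpf log-ratio predictor: ln(n * R_blue) / ln(n * R_green)."""
    return np.log(n * blue) / np.log(n * green)

rng = np.random.default_rng(1)
depth_true = rng.uniform(0.2, 4.0, 50)        # shallow lakes (synthetic)
green = 0.08 * np.exp(-0.30 * depth_true)     # toy attenuation with depth
blue  = 0.10 * np.exp(-0.15 * depth_true)     # blue attenuates more slowly

ratio = stumpf_ratio(blue, green)
m1, m0 = np.polyfit(ratio, depth_true, 1)     # depth = m1 * ratio + m0
depth_est = m1 * ratio + m0
rmse = np.sqrt(np.mean((depth_est - depth_true) ** 2))
print(f"RMSE = {rmse:.2f} m")                 # compare against check points
```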

  19. Overview of image processing tools to extract physical information from JET videos

    Science.gov (United States)

    Craciunescu, T.; Murari, A.; Gelfusa, M.; Tiseanu, I.; Zoita, V.; EFDA Contributors, JET

    2014-11-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the

  20. A quantitative approach to measure road network information based on edge diversity

    Science.gov (United States)

    Wu, Xun; Zhang, Hong; Lan, Tian; Cao, Weiwei; He, Jing

    2015-12-01

    The measure of map information has been one of the key issues in assessing cartographic quality and map generalization algorithms. It is also important for developing efficient approaches to transfer geospatial information. The road network is the most common linear object in the real world. Approximately describing road network information will benefit road map generalization, navigation map production and urban planning. Most current approaches focus on node diversity and suppose that all edges are the same, which is inconsistent with real-life conditions and thus limited for measuring network information. As real-life traffic flows are directed and of different quantities, the original undirected vector road map was first converted to a directed topological connectivity map. Then, in consideration of preferential attachment in complex network studies and the rich-club phenomenon in social networks, from and to weights were assigned to each edge. The from weight of a given edge is defined as the ratio of the connectivity of its end node to the sum of the connectivities of all neighbors of the edge's from node. After obtaining the from and to weights of each edge, the edge information, node information and whole network structure information entropies can be derived based on information theory. The approach has been applied to several 1-square-mile road network samples. Results show that information entropies based on edge diversity successfully describe the structural differences of road networks. This approach complements current map information measurements and can be extended to measure other kinds of geographical objects.
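
    The paper's full from/to weighting involves connectivity ratios, but the final step, turning edge weights into Shannon entropies, is simple. A small hedged sketch on an invented five-edge directed graph:

```python
import math
from collections import defaultdict

def entropy(weights):
    """Shannon entropy H = -sum p_i log2 p_i of a weight distribution."""
    total = sum(weights)
    return -sum(w / total * math.log2(w / total) for w in weights if w > 0)

# Toy directed road graph; flow-based edge weights are assumed values.
edges = {("a", "b"): 4.0, ("b", "a"): 1.0, ("b", "c"): 3.0,
         ("c", "b"): 2.0, ("a", "c"): 2.0}

print("network entropy:", round(entropy(list(edges.values())), 3))

# Node information: entropy of weights on the edges incident to each node.
incident = defaultdict(list)
for (u, v), w in edges.items():
    incident[u].append(w)
    incident[v].append(w)
for node in sorted(incident):
    print(node, round(entropy(incident[node]), 3))
```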

  1. Combining qualitative and quantitative spatial and temporal information in a hierarchical structure: Approximate reasoning for plan execution monitoring

    Science.gov (United States)

    Hoebel, Louis J.

    1993-01-01

    The problem of plan generation (PG) and the problem of plan execution monitoring (PEM), including updating, queries, and resource-bounded replanning, have different reasoning and representation requirements. PEM requires the integration of qualitative and quantitative information. PEM is the receiving of data about the world in which a plan or agent is executing. The problem is to quickly determine the relevance of the data, the consistency of the data with respect to the expected effects, and if execution should continue. Only spatial and temporal aspects of the plan are addressed for relevance in this work. Current temporal reasoning systems are deficient in computational aspects or expressiveness. This work presents a hybrid qualitative and quantitative system that is fully expressive in its assertion language while offering certain computational efficiencies. In order to proceed, methods incorporating approximate reasoning using hierarchies, notions of locality, constraint expansion, and absolute parameters need be used and are shown to be useful for the anytime nature of PEM.

  2. A quantitative assessment of changing trends in internet usage for cancer information.

    LENUS (Irish Health Repository)

    McHugh, Seamus M

    2012-02-01

    BACKGROUND: The internet is an important source of healthcare information. To date, assessment of its use as a source of oncologic information has been restricted to retrospective surveys. METHODS: The cancer-related searches of approximately 361,916,185 people in the United States and the United Kingdom were examined. Data were collected from two separate 100-day periods in 2008 and 2010. RESULTS: In 2008, there were 97,531 searches. The majority of searches related to basic cancer information (18,700, 19%), followed by treatment (8,404, 9%) and diagnosis (6,460, 7%). This compares with 179,025 searches in 2010, an increase to 183% of the 2008 figure. In 2008 breast cancer accounted for 21,102 (21%) individual searches, increasing to 85,825 searches in 2010. In 2010 a total of 0.2% (321) of searches focused on litigation, with those searching for breast cancer information most likely to research this topic (P < 0.001). CONCLUSION: Use of the internet as a source of oncological information is increasing rapidly. These searches encompass the most sensitive cancer-related topics, including prognosis and litigation. It is imperative that efforts now be made to ensure the reliability and comprehensiveness of this information.

  3. Extracting duration information in a picture category decoding task using hidden Markov Models

    Science.gov (United States)

    Pfeiffer, Tim; Heinze, Nicolai; Frysch, Robert; Deouell, Leon Y.; Schoenfeld, Mircea A.; Knight, Robert T.; Rose, Georg

    2016-04-01

    Objective. Adapting classifiers for the purpose of brain signal decoding is a major challenge in brain-computer-interface (BCI) research. In a previous study we showed in principle that hidden Markov models (HMM) are a suitable alternative to the well-studied static classifiers. However, since we investigated a rather straightforward task, advantages from modeling of the signal could not be assessed. Approach. Here, we investigate a more complex data set in order to find out to what extent HMMs, as a dynamic classifier, can provide useful additional information. We show for a visual decoding problem that besides category information, HMMs can simultaneously decode picture duration without an additional training required. This decoding is based on a strong correlation that we found between picture duration and the behavior of the Viterbi paths. Main results. Decoding accuracies of up to 80% could be obtained for category and duration decoding with a single classifier trained on category information only. Significance. The extraction of multiple types of information using a single classifier enables the processing of more complex problems, while preserving good training results even on small databases. Therefore, it provides a convenient framework for online real-life BCI utilizations.
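
    A sketch of the core idea with the hmmlearn package: train a Gaussian HMM on category features only, then read stimulus duration out of how long the Viterbi path dwells in each state. The random feature vectors stand in for real recordings, and the three-state topology is an assumption.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(rng.normal(size=(500, 4)))        # placeholder training features

trial = rng.normal(size=(120, 4))           # one trial, 120 time samples
logprob, states = model.decode(trial, algorithm="viterbi")
dwell = np.bincount(states, minlength=3)    # samples spent in each state
print(dwell)  # dwell time in the "stimulus-on" state tracks picture duration
```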

  4. Providing Quantitative Information and a Nudge to Undergo Stool Testing in a Colorectal Cancer Screening Decision Aid: A Randomized Clinical Trial.

    Science.gov (United States)

    Schwartz, Peter H; Perkins, Susan M; Schmidt, Karen K; Muriello, Paul F; Althouse, Sandra; Rawl, Susan M

    2017-08-01

    Guidelines recommend that patient decision aids should provide quantitative information about probabilities of potential outcomes, but the impact of this information is unknown. Behavioral economics suggests that patients confused by quantitative information could benefit from a "nudge" towards one option. We conducted a pilot randomized trial to estimate the effect sizes of presenting quantitative information and a nudge. Primary care patients (n = 213) eligible for colorectal cancer screening viewed basic screening information and were randomized to view (a) quantitative information (quantitative module), (b) a nudge towards stool testing with the fecal immunochemical test (FIT) (nudge module), (c) neither a nor b, or (d) both a and b. Outcome measures were perceived colorectal cancer risk, screening intent, preferred test, and decision conflict, measured before and after viewing the decision aid, and screening behavior at 6 months. Patients viewing the quantitative module were more likely to be screened than those who did not (P = 0.012). Patients viewing the nudge module had a greater increase in perceived colorectal cancer risk than those who did not (P = 0.041). Those viewing the quantitative module had a smaller increase in perceived risk than those who did not (P = 0.046), and the effect was moderated by numeracy. Among patients with high numeracy who did not view the nudge module, those who viewed the quantitative module had a greater increase in intent to undergo FIT (P = 0.028) than did those who did not. The limitations of this study were the limited sample size and single healthcare system. Adding quantitative information to a decision aid increased uptake of colorectal cancer screening, while adding a nudge to undergo FIT did not increase uptake. Further research on quantitative information in decision aids is warranted.

  5. Quantitative assessment of drivers of recent climate variability: An information theoretic approach

    CERN Document Server

    Bhaskar, Ankush; Vichare, Geeta; Koganti, Triven; Gurubaran, S

    2016-01-01

    Identification and quantification of possible drivers of recent climate variability remain a challenging task. This important issue is addressed by adopting a non-parametric information theory technique, the Transfer Entropy and its normalized variant. It distinctly quantifies the actual information exchanged along with the directional flow of information between any two variables, with no bearing on their common history or inputs, unlike correlation, mutual information etc. Measurements of greenhouse gases (CO2, CH4, and N2O), volcanic aerosols, solar activity (UV radiation, total solar irradiance (TSI) and cosmic ray flux (CR)), the El Niño Southern Oscillation (ENSO) and the Global Mean Temperature Anomaly (GMTA) made during 1984-2005 are utilized to distinguish driving and responding climate signals. Estimates of their relative contributions reveal that CO2 (~24%), CH4 (~19%) and volcanic aerosols (~23%) are the primary contributors to the observed variations in GMTA, while UV (~9%) and ENSO (~12%) act as secondary dri...
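
    Transfer entropy TE(X→Y) measures the extra information x(t) provides about y(t+1) beyond what y(t) already carries. A minimal histogram-based estimator follows; the bin count and the toy coupled series are assumptions, and serious climate applications need far more careful estimation.

```python
import numpy as np

def transfer_entropy(x, y, bins=8):
    """TE(X->Y) = sum p(y+,y,x) log2[ p(y+,y,x) p(y) / (p(y,x) p(y+,y)) ]."""
    y_next, y_now, x_now = y[1:], y[:-1], x[:-1]
    joint, _ = np.histogramdd(np.column_stack([y_next, y_now, x_now]), bins=bins)
    p = joint / joint.sum()
    p_yx = p.sum(axis=0)              # p(y, x)
    p_ny = p.sum(axis=2)              # p(y+, y)
    p_y = p.sum(axis=(0, 2))          # p(y)
    te = 0.0
    for i, j, k in zip(*np.nonzero(p)):
        te += p[i, j, k] * np.log2(p[i, j, k] * p_y[j] / (p_yx[j, k] * p_ny[i, j]))
    return te

# Sanity check: y is driven by lagged x, so TE(x->y) should exceed TE(y->x).
rng = np.random.default_rng(2)
x = rng.normal(size=5001)
y = np.empty_like(x); y[0] = 0.0
y[1:] = 0.8 * x[:-1] + 0.2 * rng.normal(size=5000)
print(transfer_entropy(x, y), transfer_entropy(y, x))
```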

  6. Transforming a research-oriented dataset for evaluation of tactical information extraction technologies

    Science.gov (United States)

    Roy, Heather; Kase, Sue E.; Knight, Joanne

    2016-05-01

    The most representative and accurate data for testing and evaluating information extraction technologies is real-world data. Real-world operational data can provide important insights into human and sensor characteristics, interactions, and behavior. However, several challenges limit the feasibility of experimentation with real-world operational data. Real-world data lacks the precise knowledge of a "ground truth," a critical factor for benchmarking progress of developing automated information processing technologies. Additionally, the use of real-world data is often limited by classification restrictions due to the methods of collection, procedures for processing, and tactical sensitivities related to the sources, events, or objects of interest. These challenges, along with an increase in the development of automated information extraction technologies, are fueling an emerging demand for operationally realistic datasets for benchmarking. An approach to meet this demand is to create synthetic datasets, which are operationally realistic yet unclassified in content. The unclassified nature of these synthetic datasets facilitates the sharing of data between military and academic researchers, thus increasing coordinated testing efforts. This paper describes the expansion and augmentation of two synthetic text datasets, one initially developed through academic research collaborations with the Army. Both datasets feature simulated tactical intelligence reports regarding fictitious terrorist activity occurring within a counterinsurgency (COIN) operation. The datasets were expanded and augmented to create two militarily relevant datasets. The first resulting dataset was created by augmenting and merging the two to create a single larger dataset containing ground-truth. The second resulting dataset was restructured to more realistically represent the format and content of intelligence reports. The dataset transformation effort, the final datasets, and their

  7. Informal payments and health worker effort: a quantitative study from Tanzania.

    Science.gov (United States)

    Lindkvist, Ida

    2013-10-01

    Informal payments--payments made from patients to health personnel in excess of official fees--are widespread in low-income countries. It is not obvious how such payments affect health worker effort. On the one hand, one could argue that because informal payments resemble formal pay for performance schemes, they will incite higher effort in the health sector. On the other hand, health personnel may strategically adjust their base effort downwards to maximise patients' willingness to pay informally for extra services. To explore the relationship between informal payments and health worker effort, we use a unique data set from Tanzania with over 2000 observations on the performance of 156 health workers. Patient data on informal payments are used to assess the likelihood that a particular health worker accepts informal payment. We find that health workers who likely accept payments do not exert higher average effort. They do however have a higher variability in the effort they exert to different patients. These health workers are also less sensitive to the medical condition of the patient. A likely explanation for these findings is that health workers engage in rent seeking and lower baseline effort to induce patients to pay.

  8. Quantitative Analysis of Bioactive Compounds from Aromatic Plants by Means of Dynamic Headspace Extraction and Multiple Headspace Extraction-Gas Chromatography-Mass Spectrometry.

    Science.gov (United States)

    Omar, Jone; Olivares, Maitane; Alonso, Ibone; Vallejo, Asier; Aizpurua-Olaizola, Oier; Etxebarria, Nestor

    2016-04-01

    Seven monoterpenes in four aromatic plants (sage, cardamom, lavender, and rosemary) were quantified in liquid extracts and directly in solid samples by means of dynamic headspace-gas chromatography-mass spectrometry (DHS-GC-MS) and multiple headspace extraction-gas chromatography-mass spectrometry (MHSE), respectively. The monoterpenes were first extracted by means of supercritical fluid extraction (SFE) and analyzed by an optimized DHS-GC-MS. The optimization of the dynamic extraction step and the desorption/cryo-focusing step were tackled independently by experimental design assays. The best working conditions were set at an incubation temperature of 30 °C, an incubation time of 5 min, and a purge volume of 40 mL for the dynamic extraction of these bioactive molecules. The conditions of the desorption/cryo-trapping step from the Tenax TA trap were set as follows: the temperature was increased from 30 to 300 °C at 150 °C/min, while the cryo-trap was maintained at -70 °C. In order to estimate the efficiency of the SFE process, the monoterpenes in the four aromatic plants were also analyzed directly by MHSE, which requires no sample preparation. Good linearity (r² > 0.99) and reproducibility (relative standard deviation < 12%) were obtained for the solid and liquid quantification approaches, in the ranges of 0.5 to 200 ng and 10 to 500 ng/mL, respectively. The developed methods were applied to analyze the concentrations of the seven monoterpenes in the aromatic plants, obtaining concentrations in the ranges of 2 to 6000 ng/g and 0.25 to 110 μg/mg, respectively.

  9. The Study on Height Information Extraction of Cultural Features in Remote Sensing Images Based on Shadow Areas

    Science.gov (United States)

    Bao-Ming, Z.; Hai-Tao, G.; Jun, L.; Zhi-Qing, L.; Hong, H.

    2011-09-01

    Cultural features are important elements of a geospatial information library, and height is an important attribute of cultural features. The existence of height information and its precision have a direct influence on topographic maps, especially large- and medium-scale topographic maps, and on the level of surveying and mapping support. There are many methods for height information extraction, chief among them ground survey (direct field measurement), spatial sensors and photogrammetric methods; automatic extraction, however, remains very difficult. This paper emphasizes a segmentation algorithm for shadow areas under multiple constraints and realizes automatic extraction of height information from shadows. A binary image can be obtained using a gray threshold estimated under the multiple constraints. On the area of interest, spot elimination and region splitting are performed. After region labeling and elimination of non-shadowed regions, the shadow areas of cultural features can be found. The heights of the cultural features can then be calculated using the shadow length, the sun altitude and azimuth angles, and the sensor altitude and azimuth angles. A great many experiments have shown that the mean square error of the extracted height information of cultural features is close to 2 meters and the automatic extraction rate is close to 70%.
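
    For a near-nadir image, the geometry of the last step reduces to H = L · tan(sun elevation), with further terms when the sensor is oblique or the shadow falls on sloped ground. A minimal worked example (the paper's sensor-geometry corrections are omitted):

```python
import math

def feature_height(shadow_len_m, sun_elevation_deg):
    """Height from shadow length for a near-nadir view: H = L * tan(alpha)."""
    return shadow_len_m * math.tan(math.radians(sun_elevation_deg))

# A 12 m shadow under a 40-degree sun implies a building roughly 10 m tall.
print(round(feature_height(12.0, 40.0), 1))   # -> 10.1
```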

  11. Classification and Extraction of Urban Land-Use Information from High-Resolution Image Based on Object Multi-features

    Institute of Scientific and Technical Information of China (English)

    Kong Chunfang; Xu Kai; Wu Chonglong

    2006-01-01

    Urban land provides a suitable location for various economic activities which affect the development of surrounding areas. With rapid industrialization and urbanization, contradictions in land-use have become more noticeable. Urban administrators and decision-makers seek modern methods and technology to provide information support for urban growth. Recently, with the fast development of high-resolution sensor technology, more relevant data can be obtained, which is an advantage in studying the sustainable development of urban land-use. However, these data are only information sources and are a mixture of "information" and "noise": processing, analysis and information extraction from remote sensing data are necessary to provide useful information. This paper extracts urban land-use information from a high-resolution image by using the multi-feature information of image objects, adopting an object-oriented image analysis approach and multi-scale image segmentation technology. A classification and extraction model is set up based on the multi-features of the image objects, in order to provide information for reasonable planning and effective management. This new image analysis approach offers a satisfactory solution for extracting information quickly and efficiently.

  12. Identification and quantitative determination of carbohydrates in ethanolic extracts of two conifers using 13C NMR spectroscopy.

    Science.gov (United States)

    Duquesnoy, Emilie; Castola, Vincent; Casanova, Joseph

    2008-04-07

    We developed a method for the direct identification and quantification of carbohydrates in raw vegetable extracts using 13C NMR spectroscopy without any preliminary step of precipitation or reduction of the components. This method has been validated (accuracy, precision and response linearity) using pure compounds and artificial mixtures before being applied to authentic ethanolic extracts of pine needles, pine wood and pine cones and fir twigs. We determined that carbohydrates represented from 15% to 35% of the crude extracts, in which pinitol was the principal constituent, accompanied by arabinitol, mannitol, glucose and fructose.

  13. Quantitative extraction of nucleotides from frozen muscle samples of Atlantic salmon (Salmo salar) and rainbow trout (Oncorhynchus mykiss): Effects of time taken to sample and extraction method

    DEFF Research Database (Denmark)

    Thomas, P.M.; Bremner, Allan; Pankhurst, N.W.

    2000-01-01

    Muscle excised from the dorsal flank of Atlantic salmon and rainbow trout at death and up to 120 min postmortem (P.M.) was frozen in liquid N2 and stored at -80°C. Following acid extraction on ice (method 1) or dry ice (method 2), samples were analyzed for cyclic nucleotides to determine...

  14. Qualitative and quantitative evaluation of the genomic DNA extracted from GMO and non-GMO foodstuffs with four different extraction methods.

    Science.gov (United States)

    Peano, Clelia; Samson, Maria Cristina; Palmieri, Luisa; Gulli, Mariolina; Marmiroli, Nelson

    2004-11-17

    The presence of DNA in foodstuffs derived from or containing genetically modified organisms (GMO) is the basic requirement for labeling of GMO foods in Council Directive 2001/18/EC (Off. J. Eur. Communities 2001, L 106). In this work, four different methods for DNA extraction were evaluated and compared. To rank the different methods, the quality and quantity of DNA extracted from standards containing known percentages of GMO material and from different food products were considered. The food products analyzed derived from both soybean and maize and were chosen on the basis of the mechanical, technological, and chemical treatments they had been subjected to during processing. The degree of DNA degradation at various stages of food production was evaluated through the amplification of different DNA fragments belonging to endogenous genes of both maize and soybean. Genomic DNA was extracted from Roundup Ready soybean and maize MON810 standard flours according to four different methods, and quantified by real-time Polymerase Chain Reaction (PCR), with the aim of determining the influence of the extraction method on DNA quantification through real-time PCR.

  15. Salbutamol extraction from urine and its stability in different solutions: identification of methylation products and validation of a quantitative analytical method.

    Science.gov (United States)

    Garrido, Bruno Carius; Silva, Mayra Leal Chrisóstomo; Borges, Ricardo Moreira; Padilha, Monica Costa; de Aquino Neto, Francisco Radler

    2013-12-01

    Salbutamol is commonly used in asthma treatment, being considered a short-effect bronchodilator. This drug poses special interest in certain fields of chemical analysis, such as food, clinical and doping analyses, in which it needs to be analyzed with quantitative precision and accuracy. Salbutamol, however, is known to degrade under certain conditions and this is critical if quantitative results must be generated. The present work aimed to investigate salbutamol extraction from urine samples, to determine whether salbutamol is unstable in other solvents as well as in urine samples, to elucidate the structures of the possible degradation products and to validate an analytical method using the extraction procedure evaluated. Stability investigations were performed in urine at different pH values, in methanol and acetone at different temperatures. Semi-preparative liquid chromatography was performed for the isolation of degradation products, and gas chromatography coupled to mass spectrometry as well as nuclear magnetic resonance were used for identification. Three unreported methylation products were detected in methanolic solutions and had their structures elucidated. Urine samples showed a reduction in salbutamol concentration of up to 25.8% after 5 weeks. These results show that special care must be taken regarding salbutamol quantitative analyses, since degradation either in standard solutions or in urine could lead to incorrect values.

  16. Quantitative Analysis of Total Petroleum Hydrocarbons in Soils: Comparison between Reflectance Spectroscopy and Solvent Extraction by 3 Certified Laboratories

    Directory of Open Access Journals (Sweden)

    Guy Schwartz

    2012-01-01

    Full Text Available The commonly used analytic method for assessing total petroleum hydrocarbons (TPH) in soil, EPA method 418.1, is usually based on extraction with 1,1,2-trichlorotrifluoroethane (Freon 113) and FTIR spectroscopy of the extracted solvent. This method is widely used for initial site investigation due to the relatively low price per sample. It is known that the extraction efficiency varies depending on the extracting solvent and other sample properties. This study's main goal was to evaluate reflectance spectroscopy as a tool for TPH assessment, as compared with three commercial certified laboratories using traditional methods. Large variations were found between the results of the three commercial laboratories, both internally (average deviation up to 20%) and between laboratories (average deviation up to 103%). The reflectance spectroscopy method was found to be as good as the commercial laboratories in terms of accuracy and could be a viable field-screening tool that is rapid, environmentally friendly, and cost effective.

  17. EVALUATION OF DIFFERENT METHODS FOR THE EXTRACTION OF DNA FROM FUNGAL CONIDIA BY QUANTITATIVE COMPETITIVE PCR ANALYSIS

    Science.gov (United States)

    Five different DNA extraction methods were evaluated for their effectiveness in recovering PCR templates from the conidia of a series of fungal species often encountered in indoor air. The test organisms were Aspergillus versicolor, Penicillium chrysogenum, Stachybotrys chartaru...

  18. Development of a quantitative extraction method for amygdalin without enzymatic hydrolysis from tonin (Persicae Semen) by high-performance liquid chromatography.

    Science.gov (United States)

    Hwang, Eun-Young; Lee, Sang-Soo; Lee, Je-Hyun; Hong, Seon-Pyo

    2002-08-01

    Tonin (Persicae Semen) is an herbal medicine that contains amygdalin as its major ingredient. In water, amygdalin is decomposed into benzaldehyde, HCN, and glucose by emulsin, a hydrolytic enzyme present in tonin. A useful and practical method for the optimal extraction of amygdalin without enzymatic hydrolysis is therefore required. The extraction yield of amygdalin from natural-formula tonin was 0.1% from crude powders, 1.4% from small pieces, 3.5% from half pieces, and 2.4% from whole pieces. The extraction yield from outer-shell-eliminated tonin was 0.3% from crude powders, 1.4% from small pieces, and 3.5% from both half and whole pieces. The extraction yield of amygdalin was highest when pieces no smaller than half size were used.

  19. Enriching a document collection by integrating information extraction and PDF annotation

    Science.gov (United States)

    Powley, Brett; Dale, Robert; Anisimoff, Ilya

    2009-01-01

    Modern digital libraries offer all the hyperlinking possibilities of the World Wide Web: when a reader finds a citation of interest, in many cases she can now click on a link to be taken to the cited work. This paper presents work aimed at providing the same ease of navigation for legacy PDF document collections that were created before the possibility of integrating hyperlinks into documents was ever considered. To achieve our goal, we need to carry out two tasks: first, we need to identify and link citations and references in the text with high reliability; and second, we need the ability to determine physical PDF page locations for these elements. We demonstrate the use of a high-accuracy citation extraction algorithm which significantly improves on earlier reported techniques, and a technique for integrating PDF processing with a conventional text-stream based information extraction pipeline. We demonstrate these techniques in the context of a particular document collection, this being the ACL Anthology; but the same approach can be applied to other document sets.

  20. Extraction of Face Contour Information

    Institute of Scientific and Technical Information of China (English)

    原瑾

    2011-01-01

    Edge extraction has important research value in pattern recognition, machine vision, image analysis, and image coding. Face detection is a prerequisite for face recognition. Addressing face localization within face detection, this paper proposes a technique for extracting face contour information to identify the main facial region. Several edge detection operators are first introduced, and a dynamic threshold method is then proposed to improve image thresholding, which increases edge detection accuracy.
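
    The pipeline described is gradient-operator edge detection with a dynamically chosen threshold. Since the paper's exact operators and threshold rule are not reproduced here, the following sketch stands in with a Sobel gradient and a mean-plus-k·sigma threshold that adapts to each image:

```python
import numpy as np
from scipy import ndimage

def detect_edges_dynamic(image, k=1.0):
    """Sobel gradient magnitude thresholded at mean + k*std.

    The mean + k*std rule is an illustrative stand-in for the paper's
    dynamic threshold, which is not specified in the abstract.
    """
    img = image.astype(float)
    gx = ndimage.sobel(img, axis=1)   # horizontal gradient
    gy = ndimage.sobel(img, axis=0)   # vertical gradient
    magnitude = np.hypot(gx, gy)
    threshold = magnitude.mean() + k * magnitude.std()  # adapts per image
    return magnitude > threshold

# Example on a synthetic image with one bright square (face stand-in).
image = np.zeros((64, 64))
image[16:48, 16:48] = 255
edges = detect_edges_dynamic(image)
print(edges.sum(), "edge pixels found")
```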

  1. Success Rates by Software Development Methodology in Information Technology Project Management: A Quantitative Analysis

    Science.gov (United States)

    Wright, Gerald P.

    2013-01-01

    Despite over half a century of Project Management research, project success rates are still too low. Organizations spend a tremendous amount of valuable resources on Information Technology projects and seek to maximize the utility gained from their efforts. The author investigated the impact of software development methodology choice on ten…

  2. Quantitative Modeling of Human Performance in Information Systems. Technical Research Note 232.

    Science.gov (United States)

    Baker, James D.

    1974-01-01

    A general information system model was developed which focuses on man and considers the computer only as a tool. The ultimate objective is to produce a simulator which will yield measures of system performance under different mixes of equipment, personnel, and procedures. The model is structured around three basic dimensions: (1) data flow and…

  3. Quantitative Analysis of Non-Financial Motivators and Job Satisfaction of Information Technology Professionals

    Science.gov (United States)

    Mieszczak, Gina L.

    2013-01-01

    Organizations depend extensively on Information Technology professionals to drive and deliver technology solutions quickly, efficiently, and effectively to achieve business goals and profitability. It has been demonstrated that professionals with experience specific to the company are valuable assets, and their departure puts technology projects…

  4. Quantitative assessment of drivers of recent global temperature variability: an information theoretic approach

    Science.gov (United States)

    Bhaskar, Ankush; Ramesh, Durbha Sai; Vichare, Geeta; Koganti, Triven; Gurubaran, S.

    2017-02-01

    Identification and quantification of the possible drivers of recent global temperature variability remains a challenging task. This important issue is addressed by adopting a non-parametric information-theoretic technique, the Transfer Entropy, and its normalized variant. It distinctly quantifies the actual information exchanged, along with the directional flow of information, between any two variables, with no bearing on their common history or inputs, unlike correlation, mutual information, etc. Measurements of greenhouse gases (CO2, CH4, and N2O), volcanic aerosols, solar activity (UV radiation, total solar irradiance (TSI), and cosmic ray flux (CR)), El Niño Southern Oscillation (ENSO), and Global Mean Temperature Anomaly (GMTA) made during 1984-2005 are utilized to distinguish the driving and responding signals of global temperature variability. Estimates of their relative contributions reveal that CO2 (~24%), CH4 (~19%), and volcanic aerosols (~23%) are the primary contributors to the observed variations in GMTA. UV (~9%) and ENSO (~12%) act as secondary drivers of variations in the GMTA, while the remaining factors play a marginal role in the observed recent global temperature variability. Interestingly, ENSO and GMTA mutually drive each other at varied time lags. This study assists future modelling efforts in climate science.
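
    Transfer entropy from X to Y measures how much knowing x_t reduces uncertainty about y_{t+1} beyond what y_t already provides: T_{X→Y} = Σ p(y_{t+1}, y_t, x_t) · log[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]. A crude histogram-based estimator for illustration only (the study's normalized variant and estimation details are not reproduced):

```python
import numpy as np

def transfer_entropy(x, y, bins=4):
    """Estimate T_{X->Y} in bits with first-order histories and
    equal-width binning; a rough estimator for illustration."""
    xt, yt, y1 = x[:-1], y[:-1], y[1:]
    joint, _ = np.histogramdd(np.column_stack([y1, yt, xt]), bins=bins)
    p = joint / joint.sum()            # p(y_{t+1}, y_t, x_t)
    p_yt_xt = p.sum(axis=0)            # p(y_t, x_t)
    p_y1_yt = p.sum(axis=2)            # p(y_{t+1}, y_t)
    p_yt = p.sum(axis=(0, 2))          # p(y_t)
    te = 0.0
    for i, j, k in np.argwhere(p > 0):
        num = p[i, j, k] * p_yt[j]
        den = p_y1_yt[i, j] * p_yt_xt[j, k]
        te += p[i, j, k] * np.log2(num / den)
    return te

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=2000)  # y driven by lagged x
print(transfer_entropy(x, y), ">", transfer_entropy(y, x))
```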

  6. Metaproteomics: extracting and mining proteome information to characterize metabolic activities in microbial communities.

    Science.gov (United States)

    Abraham, Paul E; Giannone, Richard J; Xiong, Weili; Hettich, Robert L

    2014-06-17

    Contemporary microbial ecology studies usually employ one or more "omics" approaches to investigate the structure and function of microbial communities. Among these, metaproteomics aims to characterize the metabolic activities of the microbial membership, providing a direct link between the genetic potential and functional metabolism. The successful deployment of metaproteomics research depends on the integration of high-quality experimental and bioinformatic techniques for uncovering the metabolic activities of a microbial community in a way that is complementary to other "meta-omic" approaches. The essential, quality-defining informatics steps in metaproteomics investigations are: (1) construction of the metagenome, (2) functional annotation of predicted protein-coding genes, (3) protein database searching, (4) protein inference, and (5) extraction of metabolic information. In this article, we provide an overview of current bioinformatic approaches and software implementations in metaproteome studies in order to highlight the key considerations needed for successful implementation of this powerful community-biology tool.

  7. Metaproteomics: extracting and mining proteome information to characterize metabolic activities in microbial communities

    Energy Technology Data Exchange (ETDEWEB)

    Abraham, Paul E [ORNL]; Giannone, Richard J [ORNL]; Xiong, Weili [ORNL]; Hettich, Robert (Bob) L [ORNL]

    2014-01-01

    Contemporary microbial ecology studies usually employ one or more omics approaches to investigate the structure and function of microbial communities. Among these, metaproteomics aims to characterize the metabolic activities of the microbial membership, providing a direct link between the genetic potential and functional metabolism. The successful deployment of metaproteomics research depends on the integration of high-quality experimental and bioinformatic techniques for uncovering the metabolic activities of a microbial community in a way that is complementary to other meta-omic approaches. The essential, quality-defining informatics steps in metaproteomics investigations are: (1) construction of the metagenome, (2) functional annotation of predicted protein-coding genes, (3) protein database searching, (4) protein inference, and (5) extraction of metabolic information. In this article, we provide an overview of current bioinformatic approaches and software implementations in metaproteome studies in order to highlight the key considerations needed for successful implementation of this powerful community-biology tool.

  8. Optimal Extraction of Cosmological Information from Supernova Data in the Presence of Calibration Uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Alex G.; Miquel, Ramon

    2005-09-26

    We present a new technique to extract the cosmological information from high-redshift supernova data in the presence of calibration errors and extinction due to dust. While in the traditional technique the distance modulus of each supernova is determined separately, in our approach we determine all distance moduli at once, in a process that achieves a significant degree of self-calibration. The result is a much reduced sensitivity of the cosmological parameters to the calibration uncertainties. As an example, for a strawman mission similar to that outlined in the SNAP satellite proposal, the increased precision obtained with the new approach is roughly equivalent to a factor of five decrease in the calibration uncertainty.

  9. Information Extraction for System-Software Safety Analysis: Calendar Year 2007 Year-End Report

    Science.gov (United States)

    Malin, Jane T.

    2008-01-01

    This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis on the models to identify possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations; 4) perform discrete-time-based simulation on the models to investigate scenarios where these paths may play a role in failures and mishaps; and 5) identify resulting candidate scenarios for software integration testing. This paper describes new challenges in a NASA abort system case, and enhancements made to develop the integrated tool set.

  10. Developing a Process Model for the Forensic Extraction of Information from Desktop Search Applications

    Directory of Open Access Journals (Sweden)

    Timothy Pavlic

    2008-03-01

    Full Text Available Desktop search applications can contain cached copies of files that were deleted from the file system. Forensic investigators see this as a potential source of evidence, as documents deleted by suspects may still exist in the cache. Whilst there have been attempts at recovering data collected by desktop search applications, there is no methodology governing the process, nor discussion on the most appropriate means to do so. This article seeks to address this issue by developing a process model that can be applied when developing an information extraction application for desktop search applications, discussing preferred methods and the limitations of each. This work represents a more structured approach than other forms of current research.

  11. EnvMine: A text-mining system for the automatic extraction of contextual information

    Directory of Open Access Journals (Sweden)

    de Lorenzo Victor

    2010-06-01

    Full Text Available Background: For ecological studies, it is crucial to have adequate descriptions of the environments and samples being studied. Such descriptions must be given in terms of physicochemical characteristics, allowing a direct comparison between different environments that would otherwise be difficult. The characterization must also include the precise geographical location, to make possible the study of geographical distributions and biogeographical patterns. Currently, there is no schema for annotating these environmental features, and the data have to be extracted from textual sources (published articles). So far, this had to be performed by manual inspection of the corresponding documents. To facilitate this task, we have developed EnvMine, a set of text-mining tools devoted to retrieving contextual information (physicochemical variables and geographical locations) from textual sources of any kind. Results: EnvMine is capable of retrieving the physicochemical variables cited in the text by means of accurate identification of their associated units of measurement. In this task, the system achieves a recall (percentage of items retrieved) of 92% with less than 1% error. A Bayesian classifier was also tested for distinguishing parts of the text describing environmental characteristics from others dealing with, for instance, experimental settings. Regarding the identification of geographical locations, the system takes advantage of existing databases such as GeoNames to achieve 86% recall with 92% precision. The identification of a location also includes the determination of its exact coordinates (latitude and longitude), thus allowing the calculation of the distance between individual locations. Conclusion: EnvMine is a very efficient method for extracting contextual information from different text sources, like published articles or web pages. This tool can help in determining the precise location and physicochemical

  12. Intelligent information extraction to aid science decision making in autonomous space exploration

    Science.gov (United States)

    Merényi, Erzsébet; Tasdemir, Kadim; Farrand, William H.

    2008-04-01

    Effective scientific exploration of remote targets such as solar system objects increasingly calls for autonomous data analysis and decision making on-board. Today, robots in space missions are programmed to traverse from one location to another without regard to what they might be passing by. By not processing data as they travel, they can miss important discoveries, or will need to travel back if scientists on Earth find the data warrant backtracking. This is a suboptimal use of resources even on relatively close targets such as the Moon or Mars. The farther mankind ventures into space, the longer the delay in communication, due to which interesting findings from data sent back to Earth are made too late to command a (roving, floating, or orbiting) robot to further examine a given location. However, autonomous commanding of robots in scientific exploration can only be as reliable as the scientific information extracted from the data that is collected and provided for decision making. In this paper, we focus on the discovery scenario, where information extraction is accomplished with unsupervised clustering. For high-dimensional data with complicated structure, detailed segmentation that identifies all significant groups and discovers the small, surprising anomalies in the data, is a challenging task at which conventional algorithms often fail. We approach the problem with precision manifold learning using self-organizing neural maps with non-standard features developed in the course of our research. We demonstrate the effectiveness and robustness of this approach on multi-spectral imagery from the Mars Exploration Rovers Pancam, and on synthetic hyperspectral imagery.
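
    The information-extraction engine described here is unsupervised clustering with self-organizing maps (SOMs). The sketch below implements only a conventional SOM training loop on synthetic "spectra"; the non-standard manifold-learning features the authors developed are not reproduced:

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a small self-organizing map (conventional SOM only)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.dstack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            lr = lr0 * (1 - step / n_steps)             # decaying rate
            sigma = sigma0 * (1 - step / n_steps) + 0.5  # shrinking radius
            # Best-matching unit: node whose weights are closest to x.
            bmu = np.unravel_index(
                np.argmin(((weights - x) ** 2).sum(axis=2)), grid)
            # Gaussian neighborhood pull toward the sample.
            d2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
            g = np.exp(-d2 / (2 * sigma ** 2))[..., None]
            weights += lr * g * (x - weights)
            step += 1
    return weights

# Cluster synthetic 5-band "spectra" drawn from two materials.
rng = np.random.default_rng(1)
spectra = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(4, 1, (200, 5))])
som = train_som(spectra)
```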

  13. Combined use of phenotypic and genotypic information in sampling animals for genotyping in detection of quantitative trait loci

    DEFF Research Database (Denmark)

    Ansari-Mahyari, S; Berg, P

    2008-01-01

    Conventional selective genotyping, which uses the extreme phenotypes (EP), was compared with alternative criteria for finding the most informative animals to genotype with respect to mapping quantitative trait loci (QTL). The alternative sampling strategies were based on minimizing the sampling error of the estimated QTL effect (MinERR) and maximizing the likelihood ratio test (MaxLRT), using both phenotypic and genotypic information. For comparison, animals were randomly genotyped either within or across families. One hundred data sets were simulated, each with 30 half-sib families and 120 daughters per family. The strategies were compared in these datasets with respect to the estimated effect and position of a QTL within a previously defined genomic region when genotyping 10, 20, or 30% of the animals. Combined linkage disequilibrium and linkage analysis (LDLA) was applied in a variance component approach. Power to detect QTL

  14. Quantitative analysis of access strategies to remote information in network services

    DEFF Research Database (Denmark)

    Olsen, Rasmus Løvenstein; Schwefel, Hans-Peter; Hansen, Martin Bøgsted

    2006-01-01

    Remote access to dynamically changing information elements is a required functionality for various network services, including routing and instances of context-sensitive networking. Three fundamentally different strategies for such access are investigated in this paper: (1) a reactive approach......, network delay characterization) and specific requirements on mismatch probability, traffic overhead, and access delay. Finally, the analysis is applied to the use-case of context-sensitive service discovery....

  15. Personal information of adolescents on the Internet: A quantitative content analysis of MySpace.

    Science.gov (United States)

    Hinduja, Sameer; Patchin, Justin W

    2008-02-01

    Many youth have recently embraced online social networking sites such as MySpace (myspace.com) to meet their social and relational needs. While manifold benefits stem from participating in such web-based environments, the popular media has been quick to demonize MySpace even though an extremely small proportion of its users have been victimized due to irresponsible or naive usage of the technology it affords. Major concerns revolve around the possibility of sexual predators and pedophiles finding and then assaulting adolescents who carelessly or unwittingly reveal identifiable information on their personal profile pages. The current study sought to empirically ascertain the type of information youth are publicly posting through an extensive content analysis of randomly sampled MySpace profile pages. Among other findings, 8.8% revealed their full name, 57% included a picture, 27.8% listed their school, and 0.3% provided their telephone number. When considered in its proper context, these results indicate that the problem of personal information disclosure on MySpace may not be as widespread as many assume, and that the overwhelming majority of adolescents are using the web site responsibly. Implications for Internet safety among adolescents and future research regarding adolescent Internet use are discussed.

  16. A comprehensive method for extraction and quantitative analysis of sterols and secosteroids from human plasma

    Science.gov (United States)

    McDonald, Jeffrey G.; Smith, Daniel D.; Stiles, Ashlee R.; Russell, David W.

    2012-01-01

    We describe the development of a method for the extraction and analysis of 62 sterols, oxysterols, and secosteroids from human plasma using a combination of HPLC-MS and GC-MS. Deuterated standards are added to 200 μl of human plasma. Bulk lipids are extracted with methanol:dichloromethane, the sample is hydrolyzed using a novel procedure, and sterols and secosteroids are isolated using solid-phase extraction (SPE). Compounds are resolved on C18 core-shell HPLC columns and by GC. Sterols and oxysterols are measured using triple quadrupole mass spectrometers, and lathosterol is measured using GC-MS. Detection for each compound measured by HPLC-MS was ≤1 ng/ml of plasma. Extraction efficiency was between 85 and 110%; day-to-day variability showed a relative standard error of <10%. Numerous oxysterols were detected, including the side-chain oxysterols 22-, 24-, 25-, and 27-hydroxycholesterol, as well as the ring-structure oxysterols 7α- and 4β-hydroxycholesterol. Intermediates from the cholesterol biosynthetic pathway were also detected, including zymosterol, desmosterol, and lanosterol. This method also allowed the quantification of six secosteroids, including the 25-hydroxylated species of vitamins D2 and D3. Application of this method to plasma samples revealed that at least 50 samples could be extracted in a routine day. PMID:22517925

  17. Blind Analysis of Fortified Pesticide Residues in Carrot Extracts using GC-MS to Evaluate Qualitative and Quantitative Performance

    Science.gov (United States)

    Unlike quantitative analysis, the quality of the qualitative results in the analysis of pesticide residues in food is generally ignored in practice. Instead, chemists tend to rely on advanced mass spectrometric techniques and general subjective guidelines or fixed acceptability criteria when makin...

  18. Determination and quality evaluation of green tea extracts through qualitative and quantitative analysis of multi-components by single marker (QAMS).

    Science.gov (United States)

    Li, Da-Wei; Zhu, Ming; Shao, Yun-Dong; Shen, Zhe; Weng, Chen-Chen; Yan, Wei-Dong

    2016-04-15

    The quality of tea is mainly attributed to tea polyphenols and caffeine. In this paper, a new strategy for the quality evaluation of green tea extracts was explored and verified through qualitative and quantitative analysis of multi-components by single marker (QAMS). A Taguchi design was introduced to evaluate the fluctuations of the relative conversion factors (fx) of tea catechins, gallic acid, and caffeine to epigallocatechin gallate. The regression model (Sig. = 0.000) and the agreement (R² > 0.999) between QAMS and the normal external standard method demonstrated the consistency of the two methods. Hierarchical cluster analysis and canonical discriminant analysis were employed to classify 26 batches of commercial Longjing green tea extracts (LJGTEs) collected from different producers. The results showed a significant difference in component profile between samples from different origins. The QAMS method was verified to be an alternative and promising method to comprehensively and effectively control the quality of LJGTEs from different origins.
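
    QAMS replaces per-analyte calibration with pre-determined relative conversion factors against a single marker. A minimal sketch, assuming the common convention f_x = (A_s/C_s)/(A_x/C_x), so that C_x = A_x · f_x / (A_s/C_s); all factor values, peak areas, and concentrations below are invented, not taken from the paper:

```python
# Hypothetical relative conversion factors to the marker (EGCG),
# pre-determined with standards during method development.
f = {"caffeine": 0.85, "gallic_acid": 1.30, "EGC": 1.10}

def quantify(peak_areas, egcg_area, egcg_conc):
    """Estimate concentrations from peak areas via the marker response."""
    k_marker = egcg_area / egcg_conc          # response factor of EGCG
    return {name: area * f[name] / k_marker   # C_x = A_x * f_x / k_s
            for name, area in peak_areas.items()}

sample_areas = {"caffeine": 5400.0, "gallic_acid": 820.0, "EGC": 2300.0}
print(quantify(sample_areas, egcg_area=9100.0, egcg_conc=50.0))  # µg/mL
```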

  19. Extraction and quantitation of coumarin from cinnamon and its effect on enzymatic browning in fresh apple juice: a bioinformatics approach to illuminate its antibrowning activity.

    Science.gov (United States)

    Thada, Rajarajeshwari; Chockalingam, Shivashri; Dhandapani, Ramesh Kumar; Panchamoorthy, Rajasekar

    2013-06-05

    Enzymatic browning by polyphenoloxidase (PPO) affects food quality and taste in fruits and vegetables. Thus, the study was designed to reduce browning in apple juice by coumarin. The ethanolic extract of cinnamon was prepared and its coumarin content was quantitated by HPLC, using authentic coumarin (AC) as standard. The effect of cinnamon extract (CE) and AC on enzymatic browning, its time dependent effects, and the specific activity of PPO and peroxidase (POD) were studied in apple juice. The docking of coumarin with PPO and POD was also performed to elucidate its antibrowning mechanism. The CE (73%) and AC (82%) showed better reduction in browning, maintained its antibrowning effect at all time points, and significantly (p < 0.05) reduced the specific activity of PPO and POD when compared with controls. Coumarin showed strong interaction with binding pockets of PPO and POD, suggesting its potential use as inhibitor to enzyme mediated browning in apple juice.

  20. An information theory based approach for quantitative evaluation of man-machine interface complexity

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Hyun Gook

    1999-02-15

    In complex and high-risk work conditions, especially such as in nuclear power plants, human understanding of the plant is highly cognitive and thus largely dependent on the effectiveness of the man-machine interface system. In order to provide more effective and reliable operating conditions for future nuclear power plants, more credible and easy-to-use evaluation methods will greatly help in designing interface systems more efficiently. In this study, in order to analyze human-machine interactions, I propose the Human-processor Communication (HPC) model, which is based on the information flow concept. It identifies the information flow around a human-processor. Information flow has two aspects: appearance and content. Based on the HPC model, I propose two kinds of measures for evaluating a user interface from the viewpoint of these two aspects of information flow. They measure the communicative complexity of each aspect. For the evaluation of the appearance aspect, I propose three complexity measures: Operation Complexity, Transition Complexity, and Screen Complexity. Each of these measures has its own physical meaning. Two experiments carried out in this work support the utility of these measures. The result of the quiz game experiment shows that as the complexity of the task context increases, the usage of the interface system becomes more complex. The experimental results of the three example systems (digital view, LDP-style view, and hierarchy view) show the utility of the proposed complexity measures. For the evaluation of the content aspect, I propose the degree of informational coincidence, R(K, P), as a measure of the usefulness of an alarm-processing system. It is designed to perform user-oriented evaluation based on the informational entropy concept. It will be especially useful in early design phases because designers can estimate the usefulness of an alarm system by short calculations instead
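
    The proposed measures are built on information flow; as a rough illustration of the underlying entropy idea (not the paper's exact Operation, Transition, or Screen Complexity definitions), one can score an interface by the Shannon entropy of its observed operation sequence:

```python
import math
from collections import Counter

def shannon_entropy(events):
    """Shannon entropy (bits) of an observed event sequence."""
    counts = Counter(events)
    n = len(events)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Hypothetical operation log: screen-to-screen transitions an operator
# performs; more varied transitions -> higher communicative complexity.
transitions = ["overview->alarms", "alarms->trend", "trend->overview",
               "overview->alarms", "alarms->trend", "overview->controls"]
print(f"transition complexity ~ {shannon_entropy(transitions):.2f} bits")
```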

  1. Towards quantitative measures of Information Security: A Cloud Computing case study

    Directory of Open Access Journals (Sweden)

    Mouna Jouini

    2015-05-01

    Full Text Available Cloud computing is a prospering technology that most organizations consider a cost-effective strategy for managing Information Technology (IT). It delivers computing services as a public utility rather than a personal one. However, despite the significant benefits, these technologies present many challenges, including less control and a lack of security. In this paper, we illustrate the use of cyber security metrics to define an economic security model for cloud computing systems. We also suggest two cyber security measures in order to better understand system threats and thus propose appropriate countermeasures to mitigate them.

  2. Quantitative and qualitative validations of a sonication-based DNA extraction approach for PCR-based molecular biological analyses.

    Science.gov (United States)

    Dai, Xiaohu; Chen, Sisi; Li, Ning; Yan, Han

    2016-05-15

    The aim of this study was to comprehensively validate the sonication-based DNA extraction method, with a view to replacing the so-called 'standard DNA extraction method', the commercial kit method. Microbial cells in digested sludge samples, which contain relatively high amounts of PCR-inhibitory substances such as humic acid and protein, were used as the experimental material. A procedure involving solid/liquid separation of the sludge sample and dilution of both DNA templates and inhibitors was established, the minimum template amounts for PCR-based analyses were determined, and an in-depth understanding was obtained from bias analysis by pyrosequencing; together these confirmed the suitability of the sonication-based DNA extraction method. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. A comparison of techniques for extracting emissivity information from thermal infrared data for geologic studies

    Science.gov (United States)

    Hook, Simon J.; Gabell, A. R.; Green, A. A.; Kealy, P. S.

    1992-01-01

    This article evaluates three techniques developed to extract emissivity information from multispectral thermal infrared data. The techniques are the assumed Channel 6 emittance model, thermal log residuals, and alpha residuals. These techniques were applied to calibrated, atmospherically corrected thermal infrared multispectral scanner (TIMS) data acquired over Cuprite, Nevada in September 1990. Results indicate that the two new techniques (thermal log residuals and alpha residuals) provide two distinct advantages over the assumed Channel 6 emittance model. First, they permit emissivity information to be derived from all six TIMS channels. The assumed Channel 6 emittance model only permits emissivity values to be derived from five of the six TIMS channels. Second, both techniques are less susceptible to noise than the assumed Channel 6 emittance model. The disadvantage of both techniques is that laboratory data must be converted to thermal log residuals or alpha residuals to facilitate comparison with similarly processed image data. An additional advantage of the alpha residual technique is that the processed data are scene-independent unlike those obtained with the other techniques.
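
    The residual techniques work because, in the Wien regime, wavelength-scaled log radiance separates into an emissivity term plus a temperature term that is identical in every channel, so subtracting the per-pixel channel mean largely cancels temperature. A hedged numerical sketch of that idea (illustrative wavelengths, emissivities, and omitted constants; not the actual TIMS processing chain):

```python
import numpy as np

C2 = 1.4388e4  # second radiation constant, um*K

def alpha_style_residuals(radiance, wavelengths):
    """Wien-regime residuals whose shape tracks the emissivity spectrum.

    radiance: (n_pixels, n_channels) spectral radiance
    wavelengths: (n_channels,) channel-center wavelengths in um
    lambda*ln(L) = lambda*ln(eps) + channel constants - C2/T; the -C2/T
    term is channel-independent, so removing the per-pixel channel mean
    removes the temperature dependence.
    """
    x = wavelengths * np.log(radiance)
    return x - x.mean(axis=1, keepdims=True)

# Demo: one emissivity spectrum observed at two temperatures yields
# the same residual spectrum.
lam = np.array([8.3, 8.7, 9.1, 9.8, 10.7, 11.7])   # um, TIMS-like bands
eps = np.array([0.95, 0.90, 0.97, 0.96, 0.98, 0.97])
radiance = np.vstack([eps * np.exp(-C2 / (lam * T)) / lam**5
                      for T in (300.0, 320.0)])
print(alpha_style_residuals(radiance, lam))
```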

  4. How do granites solidify? Information from quantitative textural studies using cathodoluminescence and other techniques

    Science.gov (United States)

    Higgins, Michael

    2017-04-01

    The qualitative and quantitative study of granitic textures (microstructures) has been somewhat neglected, as compared to mafic rocks. Certainly some granite samples are not readily susceptible to textural analysis, particularly if they are altered, but many acidic rocks can be examined in the same way as mafic rocks, using the same techniques. The earliest studies were of K-feldspar megacrysts in granitoids, a component that can be easily quantified by direct measurement in the field and image analysis of stained slabs. However, analysis of thin sections requires other techniques. Although the main components of granites, plagioclase, K-feldspar and quartz, can be readily distinguished in thin section by experienced petrographers, they cannot be quantified readily from optical images using automatic or semi-automatic image analysis methods. An alternative approach is to use cold-cathode cathodoluminescence (CL). This microscope-based method easily distinguishes these three phases and can also identify alteration. Minor colour differences and zonation in CL can sometimes reveal the presence of different crystal populations. Apatite, zircon and other minor phases are also imaged, but all silicate minerals that contain iron do not luminesce. A combination of CL and unpolarised light can be used to classify a thin section into almost all significant phases. In these phase maps adjoining crystals of the same phase are amalgamated. Segmenting the phase maps into crystal maps requires the addition of a cross-polarised image and manual crystal tracing, but provides much richer data. CL images of unaltered granites can reveal a wealth of different textures which will be illustrated with granitoid samples from the Illapel Plutonic suite, Chile and elsewhere. The overall goal is to understand the solidification process. CL was used to select the least altered samples and a mosaic of about half a thin section was produced for each sample. Plagioclase is always the earliest

  5. Quantitative 31P NMR for Simultaneous Trace Analysis of Organophosphorus Pesticides in Aqueous Media Using the Stir Bar Sorptive Extraction Method

    Science.gov (United States)

    Ansari, S.; Talebpour, Z.; Molaabasi, F.; Bijanzadeh, H. R.; Khazaeli, S.

    2016-09-01

    The analysis of pesticides in water samples is of primary concern for quality control laboratories due to the toxicity of these compounds and their associated public health risk. A novel analytical method based on stir bar sorptive extraction (SBSE), followed by 31P quantitative nuclear magnetic resonance (31P QNMR), has been developed for simultaneously monitoring and determining four organophosphorus pesticides (OPPs) in aqueous media. The effects of factors on the extraction efficiency of OPPs were investigated using a Draper-Lin small composite design. An optimal sample volume of 4.2 mL, extraction time of 96 min, extraction temperature of 42°C, and desorption time of 11 min were obtained. The results showed reasonable linearity ranges for all pesticides with correlation coefficients greater than 0.9920. The limit of quantification (LOQ) ranged from 0.1 to 2.60 mg/L, and the recoveries of spiked river water samples were from 82 to 94% with relative standard deviation (RSD) values less than 4%. The results show that this method is simple, selective, rapid, and can be applied to other sample matrices.

  6. Extraction of Artemisinin, an Active Antimalarial Phytopharmaceutical from Dried Leaves of Artemisia annua L., Using Microwaves and a Validated HPTLC-Visible Method for Its Quantitative Determination

    Directory of Open Access Journals (Sweden)

    Himanshu Misra

    2014-01-01

    Full Text Available A simple, rapid, precise, and accurate high-performance thin-layer chromatographic (HPTLC) method coupled with visible densitometric detection of artemisinin is developed and validated. Samples of dried Artemisia annua leaves were extracted via microwaves using different solvents. The method offers a shorter extraction time for artemisinin from leaves under the influence of electromagnetic radiation. Results obtained from microwave-assisted extraction (MAE) were compared with hot Soxhlet extraction. Chromatographic separation of artemisinin from the plant extract was performed on a silica gel 60 F254 HPTLC plate using n-hexane : ethyl acetate as mobile phase in the ratio of 75 : 25, v/v. The plate was developed at room temperature (25 ± 2.0°C). Artemisinin separation on the thin-layer plate was visualized after post-chromatographic derivatization with anisaldehyde-sulphuric acid reagent. The HPTLC plate was scanned in a CAMAG TLC Scanner 3 at 540 nm. Artemisinin responses were found to be linear over a range of 400–2800 ng spot⁻¹ with a correlation coefficient of 0.99754. Limits of detection and quantification were 40 and 80 ng spot⁻¹, respectively. The HPTLC method was validated in terms of system suitability, precision, accuracy, sensitivity (LOD and LOQ), and robustness. Additionally, calculation of plate efficiency and flow constant were included as components of validation. Extracts prepared from different parts of the plant (leaves, branches, main stem, and roots) were analyzed for artemisinin content: artemisinin content was higher in the leaf extract than in the branch and main-stem extracts, and no artemisinin was detected in the root extract. The developed HPTLC-visible method of artemisinin determination will be very useful for pharmaceutical industries, which are involved in monitoring of artemisinin content during different growth stages (in vitro and in vivo of A. annua for qualitative

  7. Quantitative Methods for Evaluating the Informal Economy. Case Study at the Level of Romania

    Directory of Open Access Journals (Sweden)

    Tudorel ANDREI

    2010-07-01

    Full Text Available The evaluation of the hidden economy involves major difficulties related to the use of an adequate methodology and to building the database needed for estimating econometric models and economic variables. Studies conducted so far have shown that the size and forms of the informal economy differ from one country to another. The transition of the economies of the former socialist countries led to an increase in the size of the hidden economy. The highest levels are recorded in some former Soviet republics and in some South American countries. Evaluations made for Romania estimate that the hidden economy accounts for approximately 30% of Gross Domestic Product. This paper evaluates the size of the hidden economy on the basis of an econometric approach which models the cash outside the banking system as a function of various factors and of its use in official and hidden economy transactions.

  8. Information Extraction and Dependency on Open Government Data (OGD) for Environmental Monitoring

    Science.gov (United States)

    Abdulmuttalib, Hussein

    2016-06-01

    Environmental monitoring practices support decision makers in government and private institutions, as well as environmentalists and planners, among others. This support helps them act towards the sustainability of our environment and take efficient measures to protect human beings in general, but it is difficult to extract useful information from Open Government Data (OGD) and to assure its quality for this purpose. Monitoring itself comprises detecting changes as they happen, or within the mitigation period range, which means that any data source used for monitoring should reflect the period of environmental monitoring; otherwise it is of little use, effectively history. In this paper, the extraction and structuring of information from OGD that can be useful to environmental monitoring is assessed, looking into availability and usefulness for a given type of environmental monitoring, and checking its repetition period and dependences. The assessment is performed on a small sample selected from OGD, bearing in mind the type of environmental change monitored, such as the increase and concentration of built-up areas, the reduction of green areas, or the change of temperature in a specific area. The World Bank noted in its blog that data is open if it is both technically open and legally open. The use of Open Data is thus regulated by published terms of use, or an agreement implying some conditions, without violating the two conditions mentioned above. Within the scope of this paper I wish to share the experience of using some OGD to support an environmental monitoring task performed to mitigate the production of carbon dioxide, by regulating energy consumption and by properly designing the test area's landscapes, thus using Geodesign tactics, and to add to the results achieved by many

  9. A hybrid DNA extraction method for the qualitative and quantitative assessment of bacterial communities from poultry production samples

    Science.gov (United States)

    The efficacy of DNA extraction protocols can be highly dependent upon both the type of sample being investigated and the types of downstream analyses performed. Considering that the use of new bacterial community analysis techniques (e.g., microbiomics, metagenomics) is becoming more prevalent in th...

  10. Method development towards qualitative and semi-quantitative analysis of multiple pesticides from food surfaces and extracts by desorption electrospray ionization mass spectrometry as a preselective tool for food control.

    Science.gov (United States)

    Gerbig, Stefanie; Stern, Gerold; Brunn, Hubertus E; Düring, Rolf-Alexander; Spengler, Bernhard; Schulz, Sabine

    2017-03-01

    Direct analysis of fruit and vegetable surfaces is an important tool for in situ detection of food contaminants such as pesticides. We tested three different ways to prepare samples for the qualitative desorption electrospray ionization mass spectrometry (DESI-MS) analysis of 32 pesticides found on nine authentic fruits collected from food control. The best recovery rates for topically applied pesticides (88%) were found by analyzing the surface of a glass slide which had been rubbed against the surface of the food. Pesticide concentrations in all samples were at or below the maximum residue level allowed. In addition to the high sensitivity of the method for qualitative analysis, quantitative or at least semi-quantitative information is needed in food control. We developed a DESI-MS method for the simultaneous determination of linear calibration curves of multiple pesticides of the same chemical class using normalization to one internal standard (ISTD). The method was first optimized for food extracts and subsequently evaluated for the quantification of pesticides in three authentic food extracts. Next, pesticides and the ISTD were applied directly onto food surfaces, and the corresponding calibration curves were obtained. The determination of linear calibration curves was still feasible, as demonstrated for three different food surfaces. This proof-of-principle method was used to simultaneously quantify two pesticides on an authentic sample, showing that the method developed could serve as a fast and simple preselective tool for disclosure of pesticide regulation violations. Graphical abstract: Multiple pesticide residues were detected and quantified in situ from an authentic set of food items and extracts in a proof-of-principle study.
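
    The quantitative step rests on fitting a line to analyte/ISTD intensity ratios at known concentrations and inverting it for unknowns. A small illustrative sketch of that workflow (concentrations and ratios are hypothetical):

```python
import numpy as np

# Hypothetical calibration: analyte/ISTD intensity ratios measured at
# known spiked concentrations (ng/mL) for one pesticide.
conc = np.array([10, 25, 50, 100, 250], dtype=float)
ratio = np.array([0.11, 0.26, 0.54, 1.05, 2.60])  # I_analyte / I_ISTD

slope, intercept = np.polyfit(conc, ratio, 1)
r2 = np.corrcoef(conc, ratio)[0, 1] ** 2

def concentration(sample_ratio):
    """Invert the calibration line for an unknown sample."""
    return (sample_ratio - intercept) / slope

print(f"R^2 = {r2:.4f}, est. conc = {concentration(0.80):.1f} ng/mL")
```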

  11. Urban Built-Up Area Extraction from Landsat TM/ETM+ Images Using Spectral Information and Multivariate Texture

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2014-08-01

    Full Text Available Urban built-up area information is required by various applications. However, urban built-up area extraction using moderate resolution satellite data, such as Landsat series data, is still a challenging task due to significant intra-urban heterogeneity and spectral confusion with other land cover types. In this paper, a new method that combines spectral information and multivariate texture is proposed. The multivariate textures are separately extracted from multispectral data using a multivariate variogram with different distance measures, i.e., Euclidean, Mahalanobis, and spectral angle distances. The multivariate textures and the spectral bands are then combined for urban built-up area extraction. Because the urban built-up area is the only target class, a one-class classifier, the one-class support vector machine, is used. For comparison, the classical gray-level co-occurrence matrix (GLCM) is also used to extract image texture. The proposed method was evaluated using bi-temporal Landsat TM/ETM+ data of two megacity areas in China. Results demonstrated that the proposed method outperformed the use of spectral information alone and the joint use of the spectral information and the GLCM texture. In particular, the inclusion of multivariate variogram textures with spectral angle distance achieved the best results. The proposed method provides an effective way of extracting urban built-up areas from Landsat series images and could be applicable to other applications.
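
    Because built-up area is the only target class, training needs examples of that class alone. A minimal sketch of that step with scikit-learn's OneClassSVM on synthetic spectral-plus-texture feature vectors (the paper's variogram textures are not recomputed here):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Hypothetical per-pixel features: six spectral bands plus one texture
# measure; training uses "built-up" pixels only (the single target class).
rng = np.random.default_rng(0)
built_up = rng.normal(loc=0.6, scale=0.1, size=(300, 7))
scene = np.vstack([rng.normal(0.6, 0.1, (100, 7)),    # built-up-like
                   rng.normal(0.2, 0.1, (100, 7))])   # other land cover

# nu bounds the fraction of training pixels treated as outliers.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(built_up)
labels = clf.predict(scene)  # +1 = built-up, -1 = other
print((labels == 1).sum(), "pixels mapped as built-up")
```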

  12. Selection of Suitable DNA Extraction Methods for Genetically Modified Maize 3272, and Development and Evaluation of an Event-Specific Quantitative PCR Method for 3272.

    Science.gov (United States)

    Takabatake, Reona; Masubuchi, Tomoko; Futo, Satoshi; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Teshima, Reiko; Kurashima, Takeyo; Mano, Junichi; Kitta, Kazumi

    2016-01-01

    A novel real-time PCR-based analytical method was developed for the event-specific quantification of the genetically modified (GM) maize event 3272. We first attempted to obtain genomic DNA from this maize using a DNeasy Plant Maxi kit and a DNeasy Plant Mini kit, which have been widely utilized in our previous studies, but DNA extraction yields from 3272 were markedly lower than those from non-GM maize seeds. No such lowering of DNA extraction yields was observed with GM quicker or Genomic-tip 20/G. We chose GM quicker for evaluation of the quantitative method. We prepared a standard plasmid for 3272 quantification. The conversion factor (Cf), which is required to calculate the amount of a genetically modified organism (GMO), was experimentally determined for two real-time PCR instruments, the Applied Biosystems 7900HT (ABI 7900) and the Applied Biosystems 7500 (ABI 7500). The determined Cf values were 0.60 and 0.59 for the ABI 7900 and the ABI 7500, respectively. To evaluate the developed method, a blind test was conducted as part of an interlaboratory study. Trueness and precision were evaluated as the bias and the reproducibility relative standard deviation (RSDr), respectively; the determined values were similar to those in our previous validation studies. The limit of quantitation for the method was estimated to be 0.5% or less, and we concluded that the developed method is suitable and practical for the detection and quantification of 3272.
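
    Event-specific quantification of this kind conventionally converts the ratio of event-specific to endogenous-gene copy numbers into a GMO percentage via the conversion factor Cf. A hedged sketch of that arithmetic, using the Cf reported above but invented copy numbers:

```python
def gmo_percent(event_copies, endogenous_copies, cf):
    """GMO amount (%) = (event/endogenous copy ratio) / Cf * 100."""
    return (event_copies / endogenous_copies) / cf * 100.0

# Illustrative copy numbers as read off plasmid standard curves.
print(gmo_percent(event_copies=1200.0, endogenous_copies=400000.0,
                  cf=0.60))  # ABI 7900 Cf from the study -> 0.5 (%)
```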

  13. An Informed Approach to Improving Quantitative Literacy and Mitigating Math Anxiety in Undergraduates Through Introductory Science Courses

    Science.gov (United States)

    Follette, K.; McCarthy, D.

    2012-08-01

    Current trends in the teaching of high school and college science avoid numerical engagement because nearly all students lack basic arithmetic skills and experience anxiety when encountering numbers. Nevertheless, such skills are essential to science and vital to becoming savvy consumers, citizens capable of recognizing pseudoscience, and discerning interpreters of statistics in ever-present polls, studies, and surveys in which our society is awash. Can a general-education collegiate course motivate students to value numeracy and to improve their quantitative skills in what may well be their final opportunity in formal education? We present a tool to assess whether skills in numeracy/quantitative literacy can be fostered and improved in college students through the vehicle of non-major introductory courses in astronomy. Initial classroom applications define the magnitude of this problem and indicate that significant improvements are possible. Based on these initial results we offer this tool online and hope to collaborate with other educators, both formal and informal, to develop effective mechanisms for encouraging all students to value and improve their skills in basic numeracy.

  14. Presenting quantitative information about decision outcomes: a risk communication primer for patient decision aid developers.

    Science.gov (United States)

    Trevena, Lyndal J; Zikmund-Fisher, Brian J; Edwards, Adrian; Gaissmaier, Wolfgang; Galesic, Mirta; Han, Paul K J; King, John; Lawson, Margaret L; Linder, Suzanne K; Lipkus, Isaac; Ozanne, Elissa; Peters, Ellen; Timmermans, Danielle; Woloshin, Steven

    2013-01-01

    Making evidence-based decisions often requires comparison of two or more options. Research-based evidence may exist which quantifies how likely the outcomes are for each option. Understanding these numeric estimates improves patients' risk perception and leads to better informed decision making. This paper summarises current "best practices" in communication of evidence-based numeric outcomes for developers of patient decision aids (PtDAs) and other health communication tools. An expert consensus group of fourteen researchers from North America, Europe, and Australasia identified eleven main issues in risk communication. Two experts for each issue wrote a "state of the art" summary of best evidence, drawing on the PtDA, health, psychological, and broader scientific literature. In addition, commonly used terms were defined and a set of guiding principles and key messages derived from the results. The eleven key components of risk communication were: 1) Presenting the chance an event will occur; 2) Presenting changes in numeric outcomes; 3) Outcome estimates for test and screening decisions; 4) Numeric estimates in context and with evaluative labels; 5) Conveying uncertainty; 6) Visual formats; 7) Tailoring estimates; 8) Formats for understanding outcomes over time; 9) Narrative methods for conveying the chance of an event; 10) Important skills for understanding numerical estimates; and 11) Interactive web-based formats. Guiding principles from the evidence summaries advise that risk communication formats should reflect the task required of the user, should always define a relevant reference class (i.e., denominator) over time, should aim to use a consistent format throughout documents, should avoid "1 in x" formats and variable denominators, consider the magnitude of numbers used and the possibility of format bias, and should take into account the numeracy and graph literacy of the audience. A substantial and rapidly expanding evidence base exists for risk
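
    One recommendation above, avoiding "1 in x" formats and variable denominators in favor of a consistent reference class, is straightforward to operationalize. A small illustrative helper (not from the paper):

```python
def to_common_denominator(risks, denominator=1000):
    """Re-express '1 in x' risks as 'N out of <denominator>'.

    risks: mapping of outcome name -> x, as in '1 in x'.
    """
    return {name: f"{denominator / x:.0f} out of {denominator}"
            for name, x in risks.items()}

# '1 in 8' vs '1 in 250' is hard to compare; a fixed denominator helps.
print(to_common_denominator({"side effect A": 8, "side effect B": 250}))
# {'side effect A': '125 out of 1000', 'side effect B': '4 out of 1000'}
```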

  15. Quantitative determination of volatile organic compounds (VOC) in milk by multiple dynamic headspace extraction and GC-MS.

    Science.gov (United States)

    Ciccioli, Paolo; Brancaleoni, Enzo; Frattoni, Massimiliano; Fedele, Vincenzo; Claps, Salvatore; Signorelli, Federica

    2004-01-01

    A method for the accurate determination of volatile organic compounds (VOC) in milk samples has been developed and tested. It combines multiple dynamic headspace extraction with GC-MS. Absolute amounts of VOC in the liquid phase are obtained by determining the first order kinetic dependence of the stepwise extraction of the analytes and internal standards from the liquid matrix. Compounds released from milk were collected on a train of traps filled with different solid sorbents to cover all components having a number of carbon atoms ranging from 4 to 15. They were analysed by GC-MS after thermal desorption of VOC from the collecting traps. Quantification of VOC in milk was performed using deuterated compounds as internal standards. The method was used to follow seasonal variations of monoterpenes in goat milk and to detect the impact of air pollution on the quality of milk.
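
    In multiple (stepwise) headspace extraction, each step removes a constant fraction of the analyte, so peak areas decay geometrically: A_i = A_1·e^(−q(i−1)), and the exhaustive total is the geometric sum A_1/(1 − e^(−q)). A minimal sketch of that estimation with invented peak areas:

```python
import numpy as np

def total_area_mhe(areas):
    """Estimate the exhaustive peak area from stepwise MHE areas.

    Fits ln(A_i) = ln(A_1) - q*(i-1), then total = A_1 / (1 - exp(-q)).
    """
    steps = np.arange(len(areas))
    slope, ln_a1 = np.polyfit(steps, np.log(areas), 1)  # slope = -q
    a1, q = np.exp(ln_a1), -slope
    return a1 / (1.0 - np.exp(-q))

areas = [1000.0, 640.0, 410.0, 262.0]   # ~36% of analyte removed per step
print(f"total area = {total_area_mhe(areas):.0f}")
# Ratio to a deuterated internal standard then yields absolute amounts.
```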

  16. Quantitative determination, Metal analysis and Antiulcer evaluation of Methanol seeds extract of Citrullus lanatus Thunb (Cucurbitaceae) in Rats

    OpenAIRE

    Okunrobo O. Lucky; Uwaya O. John; Imafidon E. Kate; Osarumwense O. Peter; Omorodion E. Jude

    2012-01-01

    Objective: The use of herbs in the treatment of diseases is gradually becoming universally accepted, especially in non-industrialized societies. Citrullus lanatus Thunb (Cucurbitaceae), commonly called watermelon, is widely consumed in this part of the world as food and medicine. This work was conducted to investigate the phytochemical composition, proximate composition, and metal content of the seed of Citrullus lanatus, and to determine the antiulcer action of the methanol seed extract....

  17. Composition of Essential Oils and Ethanol Extracts of the Leaves of Lippia Species: Identification, Quantitation and Antioxidant Capacity

    OpenAIRE

    Trevisan, Maria T. S.; Marques, Ricardo A.; Silva, Maria G. V.; Scherer, Dominique; Haubner, Roswitha; Ulrich, Cornelia M.; Owen, Robert W.

    2016-01-01

    The principal components of essential oils, obtained by steam hydrodistillation from the fresh leaves of five species of the genus Lippia, namely Lippia gracilis AV, Lippia sidoides Mart., Lippia alba carvoneifera, Lippia alba citralifera and Lippia alba myrceneifera, and of ethanol extracts, were evaluated. The greatest antioxidant capacity (IC50 = 980 µg/mL; p < 0.05), assessed by the HPLC-based hypoxanthine/xanthine oxidase assay, was determined in the essential oil obtained from Lippia alba ...

  18. Quantitation of volatile oils in ground cumin by supercritical fluid extraction and gas chromatography with flame ionization detection.

    Science.gov (United States)

    Heikes, D L; Scott, B; Gorzovalitis, N A

    2001-01-01

    Ground cumin is used as a flavoring agent in a number of ethnic cuisines. The chemical entities, which primarily establish its characteristically pungent flavor, are found in the volatile oil of cumin. Fixed oils and carbohydrates tend to round out the harshness of the volatile oil components. However, the quantity of volatile oil is commonly the measure of the quality of this spice. For several decades, the spice industry has used a classical distillation procedure for the determination of volatile oil in cumin and other spices. However, the method is cumbersome and requires nearly 8 h to complete. Supercritical fluid extraction with capillary gas chromatography-flame ionization detection is utilized in the formulation of a rapid, accurate, and specific method for the determination of volatile oil in ground cumin. Samples are extracted in a static-dynamic mode with CO2 at 550 bar and 100 degrees C. Toluene is used as a static modifier addition. The extracted volatile oil, collected in toluene, is analyzed directly using tetradecane as the internal standard. Integration is performed as grouped peaks to include all chemical entities found in cumin volatile oil recovered from the official distillation procedure. Results from this procedure compare favorably with those obtained by the official procedure (coefficient of correlation = 0.995, 24 samples).

  19. An optimized microplate assay system for quantitative evaluation of plant cell wall-degrading enzyme activity of fungal culture extracts.

    Science.gov (United States)

    King, Brian C; Donnelly, Marie K; Bergstrom, Gary C; Walker, Larry P; Gibson, Donna M

    2009-03-01

    Developing enzyme cocktails for cellulosic biomass hydrolysis complementary to current cellulase systems is a critical step needed for economically viable biofuels production. Recent genomic analysis indicates that some plant pathogenic fungi are likely a largely untapped resource in which to prospect for novel hydrolytic enzymes for biomass conversion. In order to develop high throughput screening assays for enzyme bioprospecting, a standardized microplate assay was developed for rapid analysis of polysaccharide hydrolysis by fungal extracts, incorporating biomass substrates. Fungi were grown for 10 days on cellulose- or switchgrass-containing media to produce enzyme extracts for analysis. Reducing sugar released from filter paper, Avicel, corn stalk, switchgrass, carboxymethylcellulose, and arabinoxylan was quantified using a miniaturized colorimetric assay based on 3,5-dinitrosalicylic acid. Significant interactions were identified among fungal species, growth media composition, assay substrate, and temperature. Within a small sampling of plant pathogenic fungi, some extracts had crude activities comparable to or greater than T. reesei, particularly when assayed at lower temperatures and on biomass substrates. This microplate assay system should prove useful for high-throughput bioprospecting for new sources of novel enzymes for biofuel production.

  20. Combining qualitative and quantitative operational research methods to inform quality improvement in pathways that span multiple settings.

    Science.gov (United States)

    Crowe, Sonya; Brown, Katherine; Tregay, Jenifer; Wray, Jo; Knowles, Rachel; Ridout, Deborah A; Bull, Catherine; Utley, Martin

    2017-08-01

    Improving integration and continuity of care across sectors within resource constraints is a priority in many health systems. Qualitative operational research methods of problem structuring have been used to address quality improvement in services involving multiple sectors but not in combination with quantitative operational research methods that enable targeting of interventions according to patient risk. We aimed to combine these methods to augment and inform an improvement initiative concerning infants with congenital heart disease (CHD) whose complex care pathway spans multiple sectors. Soft systems methodology was used to consider systematically changes to services from the perspectives of community, primary, secondary and tertiary care professionals and a patient group, incorporating relevant evidence. Classification and regression tree (CART) analysis of national audit datasets was conducted along with data visualisation designed to inform service improvement within the context of limited resources. A 'Rich Picture' was developed capturing the main features of services for infants with CHD pertinent to service improvement. This was used, along with a graphical summary of the CART analysis, to guide discussions about targeting interventions at specific patient risk groups. Agreement was reached across representatives of relevant health professions and patients on a coherent set of targeted recommendations for quality improvement. These fed into national decisions about service provision and commissioning. When tackling complex problems in service provision across multiple settings, it is important to acknowledge and work with multiple perspectives systematically and to consider targeting service improvements in response to confined resources. Our research demonstrates that applying a combination of qualitative and quantitative operational research methods is one approach to doing so that warrants further consideration. Published by the BMJ Publishing Group
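
    The quantitative strand described here is CART analysis of audit data to identify patient risk groups. A toy illustration of that step with scikit-learn's decision tree on synthetic data (the national audit variables and outcomes are not reproduced):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
# Hypothetical infant-level features: weight-for-age z-score and number
# of comorbidities; outcome = adverse event within follow-up.
X = np.column_stack([rng.normal(0, 1, n), rng.integers(0, 4, n)])
risk = 1 / (1 + np.exp(-(-1.5 - 1.0 * X[:, 0] + 0.8 * X[:, 1])))
y = rng.random(n) < risk

# Shallow tree: interpretable splits that define candidate risk groups.
tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=50)
tree.fit(X, y)
print(export_text(tree, feature_names=["weight_z", "comorbidities"]))
```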