WorldWideScience

Sample records for domain-independent information extraction

  1. Domain-independent information extraction in unstructured text

    Energy Technology Data Exchange (ETDEWEB)

    Irwin, N.H. [Sandia National Labs., Albuquerque, NM (United States). Software Surety Dept.]

    1996-09-01

    Extracting information from unstructured text has become an important research area in recent years due to the large amount of text now electronically available. This status report describes the findings and work done during the second year of a two-year Laboratory Directed Research and Development project. Building on the first year's work of identifying important entities, this report details techniques used to group words into semantic categories and to output templates containing selective document content. Using word profiles and category clustering derived during a training run, the time-consuming knowledge-building task can be avoided. Though the output still lacks completeness when compared to systems with domain-specific knowledge bases, the results look promising. The two approaches are compatible and could complement each other within the same system. Domain-independent approaches retain their appeal because a system that adapts and learns will soon outpace a system with any amount of a priori knowledge.
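
    The pipeline sketched here, clustering words into semantic categories from profiles learned in a training run, can be illustrated with a small sketch. Everything below (the toy corpus, the +/-2 context window, and the choice of k-means) is an assumption for illustration; the report does not specify these details.

      # Minimal sketch: cluster words into semantic categories by their
      # co-occurrence "profiles". Corpus, window size, and k-means are
      # illustrative assumptions, not the method from the report.
      from collections import Counter
      import numpy as np
      from sklearn.cluster import KMeans

      docs = ["the reactor core temperature rose",
              "the turbine speed rose quickly",
              "alice emailed bob about the reactor",
              "bob phoned alice about the turbine"]
      tokens = [d.split() for d in docs]
      vocab = sorted({w for t in tokens for w in t})
      idx = {w: i for i, w in enumerate(vocab)}

      # Word profile = counts of neighbouring words within a +/-2 window.
      profiles = np.zeros((len(vocab), len(vocab)))
      for t in tokens:
          for i, w in enumerate(t):
              for j in range(max(0, i - 2), min(len(t), i + 3)):
                  if j != i:
                      profiles[idx[w], idx[t[j]]] += 1

      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(profiles)
      for k in range(3):
          print(k, [w for w in vocab if labels[idx[w]] == k])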

  2. Understanding disciplinary vocabularies using a full-text enabled domain-independent term extraction approach.

    Science.gov (United States)

    Yan, Erjia; Williams, Jake; Chen, Zheng

    2017-01-01

    Publication metadata help deliver rich analyses of scholarly communication. However, research concepts and ideas are more effectively expressed through unstructured fields such as full texts. Thus, the goals of this paper are to employ a full-text enabled method to extract terms relevant to disciplinary vocabularies and, through them, to understand the relationships between disciplines. This paper uses an efficient, domain-independent term extraction method to extract disciplinary vocabularies from a large multidisciplinary corpus of PLoS ONE publications. It finds a power-law pattern in the frequency distributions of terms present in each discipline, indicating a semantic richness potentially sufficient for further study and advanced analysis. The salient relationships amongst these vocabularies become apparent on application of a principal component analysis. For example, Mathematics and Computer and Information Sciences were found to have similar vocabulary use patterns, along with Engineering and Physics, while Chemistry, the Earth Sciences, and the Social Sciences were found to exhibit contrasting vocabulary use patterns. These results have implications for studies of scholarly communication as scholars attempt to identify the epistemological cultures of disciplines, and a full-text-based methodology could lead to machine learning applications in the automated classification of scholarly work according to disciplinary vocabularies.
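
    The two measurements named in this abstract, a power-law term-frequency pattern and a PCA over disciplinary vocabularies, are standard computations. A hedged numpy sketch with fabricated per-discipline term counts standing in for the extracted vocabularies:

      # Sketch: estimate a power-law slope from a rank/frequency plot and
      # compare disciplinary vocabularies with PCA. The counts below are
      # fabricated stand-ins for per-discipline term frequencies.
      import numpy as np

      rng = np.random.default_rng(0)
      n_terms = 500
      # Zipf-like synthetic term counts for 4 "disciplines".
      X = np.vstack([rng.zipf(1.8, size=n_terms) for _ in range(4)]).astype(float)

      # Log-log regression of frequency vs. rank estimates the power-law slope.
      freqs = np.sort(X[0])[::-1]
      ranks = np.arange(1, n_terms + 1)
      slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
      print("estimated power-law slope:", round(slope, 2))

      # PCA via SVD on the discipline-by-term matrix (rows centred).
      Xc = np.log1p(X) - np.log1p(X).mean(axis=0)
      U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
      coords = U[:, :2] * S[:2]   # each discipline as a point in PC space
      print(coords)               # nearby points = similar vocabulary use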

  3. Information extraction system

    Science.gov (United States)

    Lemmond, Tracy D; Hanley, William G; Guensche, Joseph Wendell; Perry, Nathan C; Nitao, John J; Kidwell, Paul Brandon; Boakye, Kofi Agyeman; Glaser, Ron E; Prenger, Ryan James

    2014-05-13

    An information extraction system and methods of operating the system are provided. In particular, an information extraction system for performing meta-extraction of named entities of people, organizations, and locations, as well as relationships and events, from text documents is described herein.

  4. Multimedia Information Extraction

    CERN Document Server

    Maybury, Mark T

    2012-01-01

    The advent of increasingly large consumer collections of audio (e.g., iTunes), imagery (e.g., Flickr), and video (e.g., YouTube) is driving a need not only for multimedia retrieval but also for information extraction from and across media. Furthermore, industrial and government collections fuel requirements for stock media access, media preservation, broadcast news retrieval, identity management, and video surveillance. While significant advances have been made in language processing for information extraction from unstructured multilingual text and extraction of objects from imagery and video…

  5. Challenges in Managing Information Extraction

    Science.gov (United States)

    Shen, Warren H.

    2009-01-01

    This dissertation studies information extraction (IE), the problem of extracting structured information from unstructured data. Example IE tasks include extracting person names from news articles, product information from e-commerce Web pages, street addresses from emails, and names of emerging music bands from blogs. IE is an increasingly…

  6. A domain-independent descriptive design model and its application to structured reflection on design processes

    NARCIS (Netherlands)

    Reymen, Isabelle; Hammer, D.K.; Kroes, P.A.; van Aken, J.E.; Dorst, C.H.; Bax, M.F.T.; Basten, T.

    2006-01-01

    Domain-independent models of the design process are an important means of facilitating interdisciplinary communication and supporting multidisciplinary design. Many so-called domain-independent models are, however, not really domain independent. We state that to be domain independent, the…

  7. Scenario Customization for Information Extraction

    National Research Council Canada - National Science Library

    Yangarber, Roman

    2001-01-01

    Information Extraction (IE) is an emerging NLP technology, whose function is to process unstructured, natural language text, to locate specific pieces of information, or facts, in the text, and to use these facts to fill a database...

  8. How to Program a Domain Independent Tracer for Explanations

    Science.gov (United States)

    Ishizaka, Alessio; Lusti, Markus

    2006-01-01

    Explanations are essential in the teaching process. Tracers are one possibility to provide students with explanations in an intelligent tutoring system. Their development can be divided into four steps: (a) the definition of the trace model; (b) the extraction of the information from this model; (c) the analysis and abstraction of the extracted…

  9. Extracting useful information from images

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    2011-01-01

    The paper presents an overview of methods for extracting useful information from digital images. It covers various approaches that utilize different properties of images, like intensity distribution, spatial frequency content, and several others. A few case studies including isotropic and heter…

  10. Extracting information from multiplex networks

    Science.gov (United States)

    Iacovacci, Jacopo; Bianconi, Ginestra

    2016-06-01

    Multiplex networks are generalized network structures able to describe networks in which the same set of nodes is connected by links with different connotations. Multiplex networks are ubiquitous, describing social, financial, engineering, and biological networks alike. Extending our ability to analyze complex networks to multiplex network structures greatly increases the amount of information that can be extracted from big data. For these reasons, characterizing the centrality of nodes in multiplex networks and finding new ways to solve challenging inference problems defined on multiplex networks are fundamental questions of network science. In this paper, we discuss the relevance of the Multiplex PageRank algorithm for measuring the centrality of nodes in multilayer networks, and we characterize the utility of the recently introduced indicator function Θ̃^S for describing their mesoscale organization and community structure. As working examples for studying these measures, we consider three multiplex network datasets coming from social science.
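
    A simplified sketch of a two-layer Multiplex PageRank in the spirit of the algorithm named above: the PageRank computed on one layer biases both the walk and the teleportation on the second layer. The matrices and damping factor below are toy assumptions, and this is one simplified variant, not the paper's exact definition.

      # Two-layer Multiplex PageRank sketch: centrality x from layer A
      # biases the random walk and teleportation on layer B.
      import numpy as np

      def pagerank(A, alpha=0.85, tol=1e-10):
          n = A.shape[0]
          out = A.sum(axis=1, keepdims=True)
          P = np.where(out > 0, A / np.where(out == 0, 1, out), 1.0 / n)
          x = np.full(n, 1.0 / n)
          while True:
              x_new = alpha * x @ P + (1 - alpha) / n
              if np.abs(x_new - x).sum() < tol:
                  return x_new
              x = x_new

      def multiplex_pagerank(A, B, alpha=0.85, tol=1e-10):
          x = pagerank(A)                 # centrality from layer A
          n = B.shape[0]
          W = B * x                       # bias layer-B links toward central nodes
          out = W.sum(axis=1, keepdims=True)
          P = np.where(out > 0, W / np.where(out == 0, 1, out), x / x.sum())
          v = x / x.sum()                 # biased teleportation vector
          y = np.full(n, 1.0 / n)
          while True:
              y_new = alpha * y @ P + (1 - alpha) * v
              if np.abs(y_new - y).sum() < tol:
                  return y_new
              y = y_new

      A = np.array([[0,1,1,0],[1,0,1,0],[1,1,0,1],[0,0,1,0]], float)
      B = np.array([[0,0,1,1],[0,0,0,1],[1,0,0,1],[1,1,1,0]], float)
      print(multiplex_pagerank(A, B))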

  11. Transductive Pattern Learning for Information Extraction

    National Research Council Canada - National Science Library

    McLernon, Brian; Kushmerick, Nicholas

    2006-01-01

    … We present TPLEX, a semi-supervised learning algorithm for information extraction that can acquire extraction patterns from a small amount of labelled text in conjunction with a large amount of unlabelled text…

  12. Information Extraction for Social Media

    NARCIS (Netherlands)

    Habib, M. B.; Keulen, M. van

    2014-01-01

    The rapid growth of IT in the last two decades has led to a growth in the amount of information available online. A new style of sharing information is social media, a continuously and instantly updated source of information. In this position paper, we propose a framework for…

  13. Information Extraction From Chemical Patents

    Directory of Open Access Journals (Sweden)

    Sandra Bergmann

    2012-01-01

    The development of new chemicals or pharmaceuticals is preceded by an in-depth analysis of published patents in this field. This information retrieval is a costly and time-inefficient step when done by a human reader, yet it is mandatory for the potential success of an investment. The goal of the research project UIMA-HPC is to automate and hence speed up the process of knowledge mining about patents. Multi-threaded analysis engines, developed according to UIMA (Unstructured Information Management Architecture) standards, process texts and images in thousands of documents in parallel. UNICORE (UNiform Interface to COmputing Resources) workflow control structures make it possible to dynamically allocate resources for every given task to gain the best CPU-time/realtime ratios in an HPC environment.

  14. A Domain Independent Framework for Extracting Linked Semantic Data from Tables

    Science.gov (United States)

    2012-01-01

    Once again, ψ₂ will assign a score to each entity, which can be used to rank the entities. Thus, ψ₂ = exp(w₂ᵀ · f₂(R_{i,j}, E_{i,j})), where w₂ is the weight vector, E_{i,j} is the candidate entity, and R_{i,j} is the string value in column i and row j. The feature vector f₂ is composed as follows: …
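
    The reconstructed score is an ordinary log-linear model; a tiny sketch of ranking candidate entities for one table cell, with invented features and weights:

      # Toy log-linear ranking psi2(E) = exp(w2 . f2(R, E)) over candidate
      # entities for one table cell; features and weights are invented.
      import numpy as np

      w2 = np.array([1.5, 0.8, -0.4])            # assumed learned weights
      candidates = {
          # f2 = [string similarity, type match, ambiguity penalty]
          "dbpedia:Baltimore":        np.array([0.9, 1.0, 0.2]),
          "dbpedia:Baltimore_Ravens": np.array([0.7, 0.0, 0.1]),
      }
      scores = {e: float(np.exp(w2 @ f)) for e, f in candidates.items()}
      for e, s in sorted(scores.items(), key=lambda kv: -kv[1]):
          print(f"{e}: {s:.2f}")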

  15. Extracting Information from Multimedia Meeting Collections

    OpenAIRE

    Gatica-Perez, Daniel; Zhang, Dong; Bengio, Samy

    2005-01-01

    Multimedia meeting collections, composed of unedited audio and video streams, handwritten notes, slides, and electronic documents that jointly constitute a raw record of complex human interaction processes in the workplace, have attracted interest due to the increasing feasibility of recording them in large quantities, the opportunities for information access and retrieval applications derived from the automatic extraction of relevant meeting information, and the challenges that the ext…

  16. DKIE: Open Source Information Extraction for Danish

    DEFF Research Database (Denmark)

    Derczynski, Leon; Field, Camilla Vilhelmsen; Bøgh, Kenneth Sejdenfaden

    2014-01-01

    Danish is a major Scandinavian language spoken daily by around six million people. However, it lacks a unified, open set of NLP tools. This demonstration will introduce DKIE, an extensible open-source toolkit for processing Danish text. We implement an information extraction architecture for Danish...

  17. Domain Independent Vocabulary Generation and Its Use in Category-based Small Footprint Language Model

    Directory of Open Access Journals (Sweden)

    KIM, K.-H.

    2011-02-01

    The work in this paper pertains to domain-independent vocabulary generation and its use in a category-based small-footprint Language Model (LM). Two major constraints on conventional LMs in embedded environments are limited memory capacity and data sparsity for domain-specific applications. This data sparsity adversely affects vocabulary coverage and LM performance. To overcome these constraints, we define a set of domain-independent categories using a Part-Of-Speech (POS) tagged corpus. We then generate a domain-independent vocabulary based on this set using the corpus and a knowledge base, and propose a mathematical framework for a category-based LM using this set. In this LM, one word can be assigned multiple categories. To reduce memory requirements, we propose a tree-based data structure. In addition, we determine the history length of the category n-gram and the independence assumption applied to category history generation. The proposed vocabulary generation method yields at least a 13.68% relative improvement in coverage for an SMS text corpus, where data are sparse due to the difficulties of data collection. The proposed category-based LM requires only 215 KB, which is 55% and 13% of the memory required by the conventional category-based LM and the word-based LM, respectively. It also improves performance, achieving 54.9% and 60.6% perplexity reductions relative to the conventional category-based LM and the word-based LM, in terms of normalized perplexity.
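
    A minimal sketch of the category-bigram idea with a nested-dict tree as the storage structure; the category inventory, corpus, and add-k smoothing are toy assumptions rather than the paper's POS-derived setup:

      # Category-based bigram LM sketch with a tree (nested dict) store.
      from collections import defaultdict

      word2cats = {"send": {"VERB"}, "text": {"NOUN", "VERB"},  # multiple categories
                   "a": {"DET"}, "message": {"NOUN"}}

      counts = defaultdict(lambda: defaultdict(int))  # tree: cat -> next cat -> count
      corpus = [["send", "a", "text"], ["text", "a", "message"]]
      for sent in corpus:
          cats = [min(word2cats[w]) for w in sent]    # crude category disambiguation
          for c1, c2 in zip(cats, cats[1:]):
              counts[c1][c2] += 1

      def p_cat(c2, c1, V=4, k=1.0):                  # add-k smoothed P(c2 | c1)
          node = counts[c1]
          return (node[c2] + k) / (sum(node.values()) + k * V)

      print(p_cat("NOUN", "DET"))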

  18. Unsupervised information extraction by text segmentation

    CERN Document Server

    Cortez, Eli

    2013-01-01

    A new unsupervised approach to the problem of Information Extraction by Text Segmentation (IETS) is proposed, implemented, and evaluated herein. The authors' approach relies on information available in pre-existing data to learn how to associate segments in the input string with attributes of a given domain, relying on a very effective set of content-based features. The effectiveness of the content-based features is also exploited to learn structure-based features directly from test data, with no previous human-driven training, a feature unique to the presented approach. Based on the approach, a…

  19. Extracting the information backbone in online system.

    Science.gov (United States)

    Zhang, Qian-Ming; Zeng, An; Shang, Ming-Sheng

    2013-01-01

    Information overload is a serious problem in modern society, and many solutions, such as recommender systems, have been proposed to filter out irrelevant information. In the literature, researchers have mainly dedicated themselves to improving the recommendation performance (accuracy and diversity) of the algorithms, while overlooking the influence of the topology of the online user-object bipartite networks. In this paper, we find that some information provided by the bipartite networks is not only redundant but also misleading. With this "less can be more" feature, we design algorithms that improve recommendation performance by eliminating some links from the original networks. Moreover, we propose a hybrid method combining time-aware and topology-aware link removal algorithms to extract the backbone that contains the essential information for recommender systems. From the practical point of view, our method can improve the performance and reduce the computational time of the recommendation system, thus improving both its effectiveness and efficiency.
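
    A hedged sketch of the link-removal idea on a toy user-object bipartite network, combining a time-aware cut (drop the oldest links) with a topology-aware cut (trim redundant links of over-popular objects). The thresholds are illustrative choices, not the paper's tuned values:

      # Backbone extraction sketch by link removal from a bipartite network.
      from collections import Counter

      # (user, object, timestamp) links of a toy bipartite network
      links = [("u1", "o1", 1), ("u1", "o2", 9), ("u2", "o1", 2),
               ("u2", "o3", 8), ("u3", "o1", 3), ("u3", "o4", 7)]

      def backbone(links, keep_recent=0.8, max_obj_degree=2):
          links = sorted(links, key=lambda l: l[2])
          links = links[int((1 - keep_recent) * len(links)):]  # time-aware cut
          deg = Counter(o for _, o, _ in links)
          kept, seen = [], Counter()
          # topology-aware cut: keep at most max_obj_degree recent links
          # per over-popular object, dropping the redundant rest
          for u, o, t in sorted(links, key=lambda l: -l[2]):
              if deg[o] <= max_obj_degree or seen[o] < max_obj_degree:
                  kept.append((u, o, t))
                  seen[o] += 1
          return kept

      print(backbone(links))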

  20. Information extraction from muon radiography data

    International Nuclear Information System (INIS)

    Borozdin, K.N.; Asaki, T.J.; Chartrand, R.; Hengartner, N.W.; Hogan, G.E.; Morris, C.L.; Priedhorsky, W.C.; Schirato, R.C.; Schultz, L.J.; Sottile, M.J.; Vixie, K.R.; Wohlberg, B.E.; Blanpied, G.

    2004-01-01

    Scattering muon radiography was recently proposed as a technique for detection and 3-D imaging of dense high-Z objects. High-energy cosmic ray muons are deflected in matter through multiple Coulomb scattering. By measuring the deflection angles we are able to reconstruct the configuration of high-Z material in the object. We discuss methods for information extraction from muon radiography data. Tomographic methods widely used in medical imaging have been applied to a specific muon radiography information source. An alternative simple technique, based on counting highly scattered muons in voxels, seems to be efficient in many simulated scenes. SVM-based classifiers and clustering algorithms may allow detection of compact high-Z objects without full image reconstruction. The efficiency of muon radiography can be increased using additional information sources, such as momentum estimation, stopping power measurement, and detection of muonic atom emission.

  1. Extracting the information backbone in online system.

    Directory of Open Access Journals (Sweden)

    Qian-Ming Zhang

    Information overload is a serious problem in modern society, and many solutions, such as recommender systems, have been proposed to filter out irrelevant information. In the literature, researchers have mainly dedicated themselves to improving the recommendation performance (accuracy and diversity) of the algorithms, while overlooking the influence of the topology of the online user-object bipartite networks. In this paper, we find that some information provided by the bipartite networks is not only redundant but also misleading. With this "less can be more" feature, we design algorithms that improve recommendation performance by eliminating some links from the original networks. Moreover, we propose a hybrid method combining time-aware and topology-aware link removal algorithms to extract the backbone that contains the essential information for recommender systems. From the practical point of view, our method can improve the performance and reduce the computational time of the recommendation system, thus improving both its effectiveness and efficiency.

  2. Extracting the Information Backbone in Online System

    Science.gov (United States)

    Zhang, Qian-Ming; Zeng, An; Shang, Ming-Sheng

    2013-01-01

    Information overload is a serious problem in modern society, and many solutions, such as recommender systems, have been proposed to filter out irrelevant information. In the literature, researchers have mainly dedicated themselves to improving the recommendation performance (accuracy and diversity) of the algorithms, while overlooking the influence of the topology of the online user-object bipartite networks. In this paper, we find that some information provided by the bipartite networks is not only redundant but also misleading. With this "less can be more" feature, we design algorithms that improve recommendation performance by eliminating some links from the original networks. Moreover, we propose a hybrid method combining time-aware and topology-aware link removal algorithms to extract the backbone that contains the essential information for recommender systems. From the practical point of view, our method can improve the performance and reduce the computational time of the recommendation system, thus improving both its effectiveness and efficiency. PMID:23690946

  3. Chaotic spectra: How to extract dynamic information

    International Nuclear Information System (INIS)

    Taylor, H.S.; Gomez Llorente, J.M.; Zakrzewski, J.; Kulander, K.C.

    1988-10-01

    Nonlinear dynamics is applied to chaotic, unassignable atomic and molecular spectra with the aim of extracting detailed information about the regular dynamic motions that exist over short intervals of time. It is shown how this motion can be extracted from high-resolution spectra by doing low-resolution studies or by Fourier transforming limited regions of the spectrum. These motions mimic those of periodic orbits (PO) and are inserts into the dominant chaotic motion. Considering these inserts and the PO as a dynamically decoupled region of space, resonant scattering theory and stabilization methods enable us to compute ladders of resonant states which interact with the chaotic quasi-continuum computed, in principle, from basis sets placed off the PO. The interaction of the resonances with the quasicontinuum explains the low-resolution spectra seen in such experiments. It also allows one to associate low-resolution features with a particular PO. The motion on the PO thereby supplies the molecular movements whose quantization causes the low-resolution spectra. Characteristic properties of the periodic-orbit-based resonances are discussed. The method is illustrated on the photoabsorption spectrum of the hydrogen atom in a strong magnetic field and on the photodissociation spectrum of H₃⁺. Other molecular systems currently under investigation using this formalism are also mentioned. 53 refs., 10 figs., 2 tabs.

  4. Extraction of quantifiable information from complex systems

    CERN Document Server

    Dahmen, Wolfgang; Griebel, Michael; Hackbusch, Wolfgang; Ritter, Klaus; Schneider, Reinhold; Schwab, Christoph; Yserentant, Harry

    2014-01-01

    In April 2007, the Deutsche Forschungsgemeinschaft (DFG) approved the Priority Program 1324 "Mathematical Methods for Extracting Quantifiable Information from Complex Systems." This volume presents a comprehensive overview of the most important results obtained over the course of the program. Mathematical models of complex systems provide the foundation for further technological developments in science, engineering, and computational finance. Motivated by the trend toward steadily increasing computer power, ever more realistic models have been developed in recent years. These models have also become increasingly complex, and their numerical treatment poses serious challenges. Recent developments in mathematics suggest that, in the long run, much more powerful numerical solution strategies could be derived if the interconnections between the different fields of research were systematically exploited at a conceptual level. Accordingly, a deeper understanding of the mathematical foundations as w…

  5. Extraction of temporal information in functional MRI

    Science.gov (United States)

    Singh, M.; Sungkarat, W.; Jeong, Jeong-Won; Zhou, Yongxia

    2002-10-01

    The temporal resolution of functional MRI (fMRI) is limited by the shape of the haemodynamic response function (hrf) and the vascular architecture underlying the activated regions. Typically, the temporal resolution of fMRI is on the order of 1 s. We have developed a new data processing approach to extract temporal information on a pixel-by-pixel basis at the level of 100 ms from fMRI data. Instead of correlating or fitting the time-course of each pixel to a single reference function, which is the common practice in fMRI, we correlate each pixel's time-course with a series of reference functions that are shifted with respect to each other by 100 ms. The reference function yielding the highest correlation coefficient for a pixel is then used as a time marker for that pixel. A Monte Carlo simulation and an experimental study of this approach were performed to estimate the temporal resolution as a function of the signal-to-noise ratio (SNR) in the time-course of a pixel. Assuming a known and stationary hrf, the simulation and experimental studies suggest a lower limit on the temporal resolution of approximately 100 ms at an SNR of 3. The multireference function approach was also applied to extract timing information from an event-related motor movement study where the subjects flexed a finger on cue. The event was repeated 19 times with the event's presentation staggered to yield an approximately 100-ms temporal sampling of the haemodynamic response over the entire presentation cycle. The timing differences among different regions of the brain activated by the motor task were clearly visualized and quantified by this method. The results suggest that it is possible to achieve a temporal resolution of ~200 ms in practice with this approach.
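
    The multireference scheme itself is easy to sketch: correlate a pixel's time-course against a bank of reference functions shifted in 100 ms steps and keep the best-correlated shift as that pixel's timing marker. The Gaussian stand-in for the haemodynamic response and all sizes below are assumptions:

      # Multireference timing sketch: pick the 100 ms shift whose shifted
      # reference correlates best with a noisy pixel time-course.
      import numpy as np

      dt = 0.1                                      # 100 ms grid
      t = np.arange(0, 20, dt)
      hrf = np.exp(-0.5 * ((t - 5.0) / 1.2) ** 2)   # toy haemodynamic response

      rng = np.random.default_rng(1)
      true_shift = 0.7                              # seconds
      pixel = np.interp(t - true_shift, t, hrf, left=0) + rng.normal(0, 0.3, t.size)

      shifts = np.arange(0.0, 2.0, dt)
      refs = np.array([np.interp(t - s, t, hrf, left=0) for s in shifts])
      r = [np.corrcoef(pixel, ref)[0, 1] for ref in refs]
      print("estimated onset shift: %.1f s" % shifts[int(np.argmax(r))])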

  6. Optical Aperture Synthesis Object's Information Extracting Based on Wavelet Denoising

    International Nuclear Information System (INIS)

    Fan, W J; Lu, Y

    2006-01-01

    Wavelet denoising is studied to improve the extraction of Fourier information about OAS (optical aperture synthesis) objects. Translation-invariant wavelet denoising, based on Donoho's wavelet soft-threshold denoising, is investigated to remove pseudo-Gibbs artifacts from wavelet soft-threshold images. OAS object information extraction based on translation-invariant wavelet denoising is studied. The study shows that wavelet threshold denoising can improve the precision and repeatability of extracting object information from the interferogram, and that information extraction with translation-invariant wavelet denoising is better than with soft-threshold wavelet denoising alone.
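
    A 1-D sketch of translation-invariant ("cycle spinning") soft-threshold denoising with PyWavelets: denoise circularly shifted copies, unshift, and average, which is what suppresses the pseudo-Gibbs artifacts discussed above. The test signal, wavelet, and threshold are toy assumptions:

      # Cycle-spinning wavelet soft-threshold denoising sketch (1-D).
      import numpy as np
      import pywt

      rng = np.random.default_rng(0)
      x = np.repeat([0.0, 4.0, -2.0, 3.0], 64)       # piecewise-constant test signal
      y = x + rng.normal(0, 0.5, x.size)              # noisy observation

      def soft_denoise(sig, thr=1.0, wavelet="db4", level=4):
          coeffs = pywt.wavedec(sig, wavelet, level=level)
          coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)[: sig.size]

      shifts = range(8)                               # average over 8 circular shifts
      den = np.mean([np.roll(soft_denoise(np.roll(y, -s)), s) for s in shifts], axis=0)
      print("rmse plain :", np.sqrt(np.mean((soft_denoise(y) - x) ** 2)))
      print("rmse cycle :", np.sqrt(np.mean((den - x) ** 2)))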

  7. Respiratory Information Extraction from Electrocardiogram Signals

    KAUST Repository

    Amin, Gamal El Din Fathy

    2010-12-01

    The Electrocardiogram (ECG) is a tool measuring the electrical activity of the heart, and it is extensively used for diagnosis and monitoring of heart diseases. The ECG signal reflects not only the heart activity but also many other physiological processes. The respiratory activity is a prominent process that affects the ECG signal due to the close proximity of the heart and the lungs. In this thesis, several methods for the extraction of respiratory process information from the ECG signal are presented. These methods allow an estimation of the lung volume and the lung pressure from the ECG signal. The potential benefit of this is to eliminate the corresponding sensors used to measure the respiration activity. A reduction of the number of sensors connected to patients will increase patients’ comfort and reduce the costs associated with healthcare. As a further result, the efficiency of diagnosing respirational disorders will increase since the respiration activity can be monitored with a common, widely available method. The developed methods can also improve the detection of respirational disorders that occur while patients are sleeping. Such disorders are commonly diagnosed in sleeping laboratories where the patients are connected to a number of different sensors. Any reduction of these sensors will result in a more natural sleeping environment for the patients and hence a higher sensitivity of the diagnosis.
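
    One common way to derive respiration from ECG, not necessarily the thesis' method, exploits the breathing-induced modulation of R-peak amplitudes; a hedged sketch on a synthetic signal:

      # ECG-derived respiration sketch: detect R peaks and interpolate
      # their amplitudes into a respiratory waveform. Signal is synthetic.
      import numpy as np
      from scipy.signal import find_peaks

      fs = 250                                           # Hz, assumed sampling rate
      t = np.arange(0, 30, 1 / fs)
      resp = 0.2 * np.sin(2 * np.pi * 0.25 * t)          # 15 breaths/min modulation
      ecg = np.zeros_like(t)
      for bt in np.arange(0.5, 30, 0.8):                 # ~75 bpm synthetic beats
          ecg += np.exp(-0.5 * ((t - bt) / 0.01) ** 2)   # narrow "R waves"
      ecg *= 1 + resp                                    # amplitude modulation

      peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))
      edr = np.interp(t, t[peaks], ecg[peaks])           # ECG-derived respiration
      print("correlation with true breathing: %.2f" % np.corrcoef(edr, 1 + resp)[0, 1])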

  8. Sample-based XPath Ranking for Web Information Extraction

    NARCIS (Netherlands)

    Jundt, Oliver; van Keulen, Maurice

    Web information extraction typically relies on a wrapper, i.e., program code or a configuration that specifies how to extract some information from web pages at a specific website. Manually creating and maintaining wrappers is a cumbersome and error-prone task. It may even be prohibitive as some…

  9. The Agent of extracting Internet Information with Lead Order

    Science.gov (United States)

    Mo, Zan; Huang, Chuliang; Liu, Aijun

    In order to carry out e-commerce better, advanced technologies for accessing business information are urgently needed. An agent is described to deal with the problems of extracting internet information that are caused by the non-standard and skimble-scamble structure of Chinese websites. The agent comprises three modules, which correspond to the separate stages of the extraction process. A method of HTTP trees and a kind of Lead algorithm are proposed to generate a lead order, with which the required web pages can be retrieved easily. How to transform the extracted natural-language information into structured form is also discussed.

  10. Cause Information Extraction from Financial Articles Concerning Business Performance

    Science.gov (United States)

    Sakai, Hiroyuki; Masuyama, Shigeru

    We propose a method of extracting cause information from Japanese financial articles concerning business performance. Our method acquires cause information, e.g., "自動車の売り上げが好調 (zidousya no uriage ga koutyou: Sales of cars were good)". Cause information is useful for investors in selecting companies in which to invest. Our method extracts cause information in the form of causal expressions, automatically, using statistical information and initial clue expressions. It can extract causal expressions without predetermined patterns or complex hand-written rules, and it is expected to apply to other tasks of acquiring phrases with a particular meaning, not limited to cause information. We compared our method with our previous one, originally proposed for extracting phrases concerning traffic accident causes, and experimental results showed that our new method outperforms the previous one.

  11. Can we replace curation with information extraction software?

    Science.gov (United States)

    Karp, Peter D

    2016-01-01

    Can we use programs for automated or semi-automated information extraction from scientific texts as practical alternatives to professional curation? I show that error rates of current information extraction programs are too high to replace professional curation today. Furthermore, current IEP programs extract single narrow slivers of information, such as individual protein interactions; they cannot extract the large breadth of information extracted by professional curators for databases such as EcoCyc. Nor can they arbitrate among conflicting statements in the literature as curators can. Therefore, funding agencies should not hobble the curation efforts of existing databases on the assumption that a problem that has stymied Artificial Intelligence researchers for more than 60 years will be solved tomorrow. Semi-automated extraction techniques appear to have significantly more potential, based on a review of recent tools that enhance curator productivity, but a full cost-benefit analysis for these tools is lacking. Without such analysis, it is possible to expend significant effort developing information-extraction tools that automate small parts of the overall curation workflow without achieving a significant decrease in curation costs.

  12. Mining knowledge from text repositories using information extraction ...

    Indian Academy of Sciences (India)

    Keywords: information extraction (IE); text mining; text repositories; knowledge discovery from text. … Evaluation in terms of precision and recall requires extensive experimentation, owing to the lack of public tagged corpora. …

  13. Mars Target Encyclopedia: Information Extraction for Planetary Science

    Science.gov (United States)

    Wagstaff, K. L.; Francis, R.; Gowda, T.; Lu, Y.; Riloff, E.; Singh, K.

    2017-06-01

    Mars surface targets / and published compositions / Seek and ye will find. We used text mining methods to extract information from LPSC abstracts about the composition of Mars surface targets. Users can search by element, mineral, or target.

  14. Integrating Information Extraction Agents into a Tourism Recommender System

    Science.gov (United States)

    Esparcia, Sergio; Sánchez-Anguix, Víctor; Argente, Estefanía; García-Fornes, Ana; Julián, Vicente

    Recommender systems face some problems. On the one hand, information needs to be kept up to date, which can be a costly task if not performed automatically. On the other hand, it may be interesting to include third-party services in the recommendation, since they improve its quality. In this paper, we present an add-on for the Social-Net Tourism Recommender System that uses information extraction and natural language processing techniques to automatically extract and classify information from the Web. Its goal is to keep the system updated and to obtain information about third-party services that are not offered by service providers inside the system.

  15. Addressing Information Proliferation: Applications of Information Extraction and Text Mining

    Science.gov (United States)

    Li, Jingjing

    2013-01-01

    The advent of the Internet and the ever-increasing capacity of storage media have made it easy to store, deliver, and share enormous volumes of data, leading to a proliferation of information on the Web, in online libraries, on news wires, and almost everywhere in our daily lives. Since our ability to process and absorb this information remains…

  16. Information extraction from multi-institutional radiology reports.

    Science.gov (United States)

    Hassanpour, Saeed; Langlotz, Curtis P

    2016-01-01

    The radiology report is the most important source of clinical imaging information. It documents critical information about the patient's health and the radiologist's interpretation of medical findings. It also communicates information to the referring physicians and records that information for future clinical and research use. Although efforts to structure some radiology report information through predefined templates are beginning to bear fruit, a large portion of radiology report information is entered in free text. The free text format is a major obstacle for rapid extraction and subsequent use of information by clinicians, researchers, and healthcare information systems. This difficulty is due to the ambiguity and subtlety of natural language, the complexity of described images, and variations among different radiologists and healthcare organizations. As a result, radiology reports are used only once by the clinician who ordered the study and are rarely used again for research and data mining. In this work, machine learning techniques and a large multi-institutional radiology report repository are used to extract the semantics of the radiology report and overcome the barriers to the re-use of radiology report information in clinical research and other healthcare applications. We describe a machine learning system to annotate radiology reports and extract report contents according to an information model. This information model covers the majority of clinically significant contents in radiology reports and is applicable to a wide variety of radiology study types. Our automated approach uses discriminative sequence classifiers for named-entity recognition to extract and organize clinically significant terms and phrases consistent with the information model. We evaluated our information extraction system on 150 radiology reports from three major healthcare organizations and compared its results to a commonly used non-machine-learning information extraction method. We…

  17. Fine-grained information extraction from German transthoracic echocardiography reports.

    Science.gov (United States)

    Toepfer, Martin; Corovic, Hamo; Fette, Georg; Klügl, Peter; Störk, Stefan; Puppe, Frank

    2015-11-12

    Information extraction techniques that get structured representations out of unstructured data make a large amount of clinically relevant information about patients accessible for semantic applications. These methods typically rely on standardized terminologies that guide the process. Many languages and clinical domains, however, lack appropriate resources and tools, as well as evaluations of their applications, especially if detailed conceptualizations of the domain are required. For instance, German transthoracic echocardiography reports have not been targeted sufficiently before, despite their importance for clinical trials. This work therefore aimed at the development and evaluation of an information extraction component with a fine-grained terminology that enables the recognition of almost all relevant information stated in German transthoracic echocardiography reports at the University Hospital of Würzburg. A domain expert validated and iteratively refined an automatically inferred base terminology. The terminology was used by an ontology-driven information extraction system that outputs attribute-value pairs. The final component has been mapped to the central elements of a standardized terminology, and it has been evaluated on documents with different layouts. The final system achieved state-of-the-art precision (micro average .996) and recall (micro average .961) on 100 test documents that represent more than 90% of all reports. In particular, principal aspects as defined in a standardized external terminology were recognized with F1 = .989 (micro average) and F1 = .963 (macro average). As a result of keyword matching and restrained concept extraction, the system obtained high precision also on unstructured or exceptionally short documents, and documents with uncommon layouts. The developed terminology and the proposed information extraction system allow fine-grained information to be extracted from German semi-structured transthoracic echocardiography reports…

  18. Extraction of Information of Audio-Visual Contents

    Directory of Open Access Journals (Sweden)

    Carlos Aguilar

    2011-10-01

    In this article we show how Channel Theory (Barwise and Seligman, 1997) can be used to model the process of information extraction performed by audiences of audio-visual content. To do this, we rely on the concepts proposed by Channel Theory and, especially, its treatment of representational systems. We then show how the information an agent is capable of extracting from the content depends on the number of channels he is able to establish between the content and the set of classifications he is able to discriminate. The agent can attempt to extract information through these channels from the totality of the content; however, we discuss the advantages of extracting from its constituents in order to obtain a greater number of informational items that represent it. After showing how the extraction process proceeds for each channel, we propose a method of representing all the informative values an agent can obtain from a content item using a matrix constituted by the channels the agent is able to establish on the content (source classifications) and the ones he can understand as individual (destination classifications). We finally show how this representation reflects the evolution of the informative items through the evolution of the audio-visual content.

  19. Semantic Information Extraction of Lanes Based on Onboard Camera Videos

    Science.gov (United States)

    Tang, L.; Deng, T.; Ren, C.

    2018-04-01

    In the field of autonomous driving, the semantic information of lanes is very important. This paper proposes a method for the automatic detection of lanes and the extraction of semantic information from onboard camera videos. The proposed method first detects the edges of lanes using the grayscale gradient direction and fits them with an improved Probabilistic Hough transform; it then uses the vanishing point principle to calculate the geometric position of each lane, and uses lane characteristics to extract lane semantic information through decision-tree classification. In the experiment, 216 road video images captured by a camera mounted on a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
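
    A hedged OpenCV sketch of the detection front end: edge detection (Canny as a stand-in for the paper's grayscale-gradient step), probabilistic Hough line fitting, and a vanishing point from the intersection of the two longest line segments. The file name and all thresholds are placeholders:

      # Lane-detection front-end sketch with OpenCV.
      import cv2
      import numpy as np

      img = cv2.imread("frame.jpg")                  # hypothetical road frame
      if img is None:
          raise SystemExit("no input frame")
      gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
      edges = cv2.Canny(gray, 80, 160)               # stand-in for the paper's
                                                     # grayscale-gradient edge step
      lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                              minLineLength=60, maxLineGap=20)

      def intersection(l1, l2):
          # Intersect two segments extended to infinite lines (homogeneous coords).
          (x1, y1, x2, y2), (x3, y3, x4, y4) = l1, l2
          a = np.cross([x1, y1, 1], [x2, y2, 1])
          b = np.cross([x3, y3, 1], [x4, y4, 1])
          p = np.cross(a, b)
          return (p[0] / p[2], p[1] / p[2]) if p[2] != 0 else None

      if lines is not None and len(lines) >= 2:
          segs = sorted(lines[:, 0], key=lambda s: -np.hypot(s[2]-s[0], s[3]-s[1]))
          print("vanishing point estimate:", intersection(segs[0], segs[1]))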

  20. Knowledge Dictionary for Information Extraction on the Arabic Text Data

    Directory of Open Access Journals (Sweden)

    Wahyu Jauharis Saputra

    2013-04-01

    Information extraction is an early stage in the analysis of textual data; it is required to obtain information from textual data that can be used for subsequent analysis, such as classification and categorization. Textual data are strongly influenced by language. Arabic is gaining significant attention in many studies because the Arabic language is very different from others and, in contrast to other languages, tools for and research on Arabic are still lacking. The information extracted using a knowledge dictionary is a concept of expression. A knowledge dictionary is usually constructed manually by an expert; this takes a long time and is specific to a single problem. This paper proposes a method for automatically building a knowledge dictionary. The dictionary is formed by classifying sentences having the same concept, assuming that they will have a high similarity value. The extracted concepts can be used as features for subsequent computational processes such as classification or categorization. The dataset used in this paper was an Arabic text dataset. The extraction results were tested using a decision tree classifier; the highest precision value obtained was 71.0%, while the highest recall value was 75.0%.

  1. Ontology-Based Information Extraction for Business Intelligence

    Science.gov (United States)

    Saggion, Horacio; Funk, Adam; Maynard, Diana; Bontcheva, Kalina

    Business Intelligence (BI) requires the acquisition and aggregation of key pieces of knowledge from multiple sources in order to provide valuable information to customers or feed statistical BI models and tools. The massive amount of information available to business analysts makes information extraction and other natural language processing tools key enablers for the acquisition and use of that semantic information. We describe the application of ontology-based extraction and merging in the context of a practical e-business application for the EU MUSING Project where the goal is to gather international company intelligence and country/region information. The results of our experiments so far are very promising and we are now in the process of building a complete end-to-end solution.

  2. NAMED ENTITY RECOGNITION FROM BIOMEDICAL TEXT -AN INFORMATION EXTRACTION TASK

    Directory of Open Access Journals (Sweden)

    N. Kanya

    2016-07-01

    Biomedical Text Mining targets the extraction of significant information from biomedical archives. BioTM encompasses Information Retrieval (IR) and Information Extraction (IE). Information Retrieval fetches the relevant biomedical literature documents from various repositories, such as PubMed and MedLine, based on a search query; the IR process ends with the generation of a corpus of the relevant documents retrieved from the publication databases. The IE task includes preprocessing of the documents, Named Entity Recognition (NER), and relationship extraction, drawing on natural language processing, data mining techniques, and machine learning algorithms. The preprocessing task includes tokenization, stop-word removal, shallow parsing, and Parts-Of-Speech tagging. The NER phase involves recognizing well-defined objects such as genes, proteins, or cell lines. This leads to the next phase, the extraction of relationships (IE). The work was based on the machine learning algorithm Conditional Random Fields (CRF).
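
    A minimal sketch of the CRF-based NER step with the sklearn-crfsuite package: token feature dicts in, BIO entity labels out. The two training sentences and the feature template are toy assumptions, not the paper's setup:

      # CRF sequence-labeling sketch for biomedical NER (BIO labels).
      import sklearn_crfsuite

      def feats(sent, i):
          w = sent[i]
          return {"lower": w.lower(), "istitle": w.istitle(),
                  "suffix3": w[-3:], "prev": sent[i-1].lower() if i else "<s>"}

      train = [(["BRCA1", "regulates", "DNA", "repair"], ["B-GENE", "O", "O", "O"]),
               (["TP53", "is", "a", "tumor", "suppressor"], ["B-GENE", "O", "O", "O", "O"])]
      X = [[feats(s, i) for i in range(len(s))] for s, _ in train]
      y = [labels for _, labels in train]

      crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
      crf.fit(X, y)
      test = ["BRCA2", "regulates", "repair"]
      print(crf.predict([[feats(test, i) for i in range(len(test))]]))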

  3. A Two-Step Resume Information Extraction Algorithm

    Directory of Open Access Journals (Sweden)

    Jie Chen

    2018-01-01

    With the rapid growth of Internet-based recruiting, there are a great number of personal resumes in recruiting systems. To gain more attention from recruiters, most resumes are written in diverse formats, including varying font sizes, font colours, and table cells. However, this diversity of format is harmful to data mining tasks such as resume information extraction, automatic job matching, and candidate ranking. Supervised methods and rule-based methods have been proposed to extract facts from resumes, but they strongly rely on hierarchical structure information and large amounts of labelled data, which are hard to collect in reality. In this paper, we propose a two-step resume information extraction approach. In the first step, the raw text of a resume is segmented into different resume blocks. To achieve this, we design a novel feature, Writing Style, to model sentence syntax information; besides word and punctuation indexes, word lexical attributes and the prediction results of classifiers are included in Writing Style. In the second step, multiple classifiers are employed to identify the different attributes of fact information in resumes. Experimental results on a real-world dataset show that the algorithm is feasible and effective.

  4. Optimum detection for extracting maximum information from symmetric qubit sets

    International Nuclear Information System (INIS)

    Mizuno, Jun; Fujiwara, Mikio; Sasaki, Masahide; Akiba, Makoto; Kawanishi, Tetsuya; Barnett, Stephen M.

    2002-01-01

    We demonstrate a class of optimum detection strategies for extracting the maximum information from sets of equiprobable, real, symmetric qubit states of a single photon. These optimum strategies were predicted by Sasaki et al. [Phys. Rev. A 59, 3325 (1999)]. The peculiar aspect is that detection with at least three outputs suffices for optimum extraction of information, regardless of the number of signal elements. The cases of ternary (or trine), quinary, and septenary polarization signals are studied, where a standard von Neumann detection (a projection onto a binary orthogonal basis) fails to access the maximum information. Our experiments demonstrate that it is possible with present technologies to attain about 96% of the theoretical limit.

  5. Extracting Semantic Information from Visual Data: A Survey

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2016-03-01

    The traditional environment maps built by mobile robots include both metric and topological ones. These maps are navigation-oriented and not adequate for service robots that must interact with or serve human users, who normally rely on the conceptual knowledge or semantic content of the environment. The construction of semantic maps therefore becomes necessary for building an effective human-robot interface for service robots. This paper reviews recent research and development in the field of visual-based semantic mapping. The main focus is placed on how to extract semantic information from visual data in terms of feature extraction, object/place recognition, and semantic representation methods.

  6. Rapid automatic keyword extraction for information retrieval and analysis

    Science.gov (United States)

    Rose, Stuart J [Richland, WA]; Cowley, Wendy E [Richland, WA]; Crow, Vernon L [Richland, WA]; Cramer, Nicholas O [Richland, WA]

    2012-03-06

    Methods and systems for rapid automatic keyword extraction for information retrieval and analysis. Embodiments can include parsing words in an individual document by delimiters, stop words, or both in order to identify candidate keywords. Word scores for each word within the candidate keywords are then calculated based on a function of co-occurrence degree, co-occurrence frequency, or both. Based on a function of the word scores for words within the candidate keyword, a keyword score is calculated for each of the candidate keywords. A portion of the candidate keywords are then extracted as keywords based, at least in part, on the candidate keywords having the highest keyword scores.
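
    The steps in this abstract map directly onto a minimal RAKE-style implementation: split candidates at stop words and delimiters, score words by co-occurrence degree over frequency, and rank candidates by summed word scores. The stop-word list below is a tiny illustrative stand-in:

      # Minimal RAKE-style keyword extraction following the abstract's steps.
      import re
      from collections import defaultdict

      STOP = {"for", "and", "of", "the", "a", "in", "is", "are", "on", "to"}

      def rake(text, top=3):
          words = re.split(r"[^a-zA-Z]+", text.lower())
          candidates, cur = [], []
          for w in words:                      # split candidates at stop words
              if w and w not in STOP:
                  cur.append(w)
              elif cur:
                  candidates.append(cur); cur = []
          if cur:
              candidates.append(cur)
          freq, degree = defaultdict(int), defaultdict(int)
          for c in candidates:
              for w in c:
                  freq[w] += 1
                  degree[w] += len(c)          # co-occurrence degree within candidate
          score = lambda c: sum(degree[w] / freq[w] for w in c)
          return sorted({" ".join(c): score(c) for c in candidates}.items(),
                        key=lambda kv: -kv[1])[:top]

      print(rake("rapid automatic keyword extraction for information retrieval and analysis"))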

  7. Robust Vehicle and Traffic Information Extraction for Highway Surveillance

    Directory of Open Access Journals (Sweden)

    Yeh Chia-Hung

    2005-01-01

    A robust vision-based traffic monitoring system for vehicle and traffic information extraction is developed in this research. It is challenging to maintain detection robustness at all times for a highway surveillance system. There are three major problems in detecting and tracking a vehicle: (1) the moving cast shadow effect, (2) the occlusion effect, and (3) nighttime detection. For moving cast shadow elimination, a 2D joint vehicle-shadow model is employed. For occlusion detection, a multiple-camera system is used to detect occlusion and extract the exact location of each vehicle. For vehicle nighttime detection, a rear-view monitoring technique is proposed to maintain tracking and detection accuracy. Furthermore, we propose a method to improve the accuracy of background extraction, which usually serves as the first step in any vehicle detection processing. Experimental results are given to demonstrate that the proposed techniques are effective and efficient for vision-based highway surveillance.

  8. Advanced applications of natural language processing for performing information extraction

    CERN Document Server

    Rodrigues, Mário

    2015-01-01

    This book explains how to create information extraction (IE) applications that are able to tap the vast amount of relevant information available in natural language sources: Internet pages, official documents such as laws and regulations, books and newspapers, and the social web. Readers are introduced to the problem of IE and its current challenges and limitations, supported with examples. The book discusses the need to fill the gap between documents, data, and people, and provides a broad overview of the technology supporting IE. The authors present a generic architecture for developing systems that are able to learn how to extract relevant information from natural language documents, and illustrate how to implement working systems using state-of-the-art and freely available software tools. The book also discusses concrete applications illustrating IE uses. • Provides an overview of state-of-the-art technology in information extraction (IE), discussing achievements and limitations for t…

  9. Improving information extraction using a probability-based approach

    DEFF Research Database (Denmark)

    Kim, S.; Ahmed, Saeema; Wallace, K.

    2007-01-01

    Information plays a crucial role during the entire life-cycle of a product. It has been shown that engineers frequently consult colleagues to obtain the information they require to solve problems. However, the industrial world is now more transient, and key personnel move to other companies or retire. It is becoming essential to retrieve vital information from archived product documents, if it is available. There is, therefore, great interest in ways of extracting relevant and sharable information from documents. A keyword-based search is commonly used, but studies have shown… To improve the recall while maintaining the high precision, a learning approach that makes identification decisions based on a probability model, rather than simply looking up the presence of pre-defined variations, looks promising. This paper presents the results of developing such a probability-based entity…

  10. Transliteration normalization for Information Extraction and Machine Translation

    Directory of Open Access Journals (Sweden)

    Yuval Marton

    2014-12-01

    Foreign name transliterations typically include multiple spelling variants. These variants cause data sparseness and inconsistency problems, increase the Out-of-Vocabulary (OOV) rate, and present challenges for Machine Translation, Information Extraction, and other natural language processing (NLP) tasks. This work aims to identify and cluster name spelling variants using a Statistical Machine Translation method: word alignment. The variants are identified by being aligned to the same "pivot" name in another language (the source language in Machine Translation settings). Based on word-to-word translation and transliteration probabilities, as well as the string edit distance metric, names with similar spellings in the target language are clustered and then normalized to a canonical form. With this approach, tens of thousands of high-precision name transliteration spelling variants are extracted from sentence-aligned bilingual corpora in Arabic and English (in both languages). When these normalized name spelling variants are applied to Information Extraction tasks, improvements over strong baseline systems are observed. When applied to Machine Translation tasks, a large improvement potential is shown.
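
    A sketch of the normalization step, under the assumption that word alignment has already grouped variants to the same pivot: cluster aligned names by edit distance, then map each to a canonical spelling. The name list is a fabricated example of alignment output:

      # Cluster transliteration variants by edit distance and normalize.
      def edit_distance(a, b):
          d = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              prev, d[0] = d[0], i
              for j, cb in enumerate(b, 1):
                  prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (ca != cb))
          return d[-1]

      # variants aligned to the same pivot name (assumed alignment output)
      aligned = ["Qaddafi", "Gaddafi", "Kadhafi", "Qadhdhafi", "Mubarak"]
      clusters = []
      for name in aligned:
          for c in clusters:
              if min(edit_distance(name, m) for m in c) <= 2:
                  c.append(name); break
          else:
              clusters.append([name])
      canonical = {m: c[0] for c in clusters for m in c}  # first variant as canon
      print(clusters)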

  11. Abscisic Acid Regulates Inflammation via Ligand-binding Domain-independent Activation of Peroxisome Proliferator-activated Receptor γ*

    Science.gov (United States)

    Bassaganya-Riera, Josep; Guri, Amir J.; Lu, Pinyi; Climent, Montse; Carbo, Adria; Sobral, Bruno W.; Horne, William T.; Lewis, Stephanie N.; Bevan, David R.; Hontecillas, Raquel

    2011-01-01

    Abscisic acid (ABA) has shown efficacy in the treatment of diabetes and inflammation; however, its molecular targets and the mechanisms of action underlying its immunomodulatory effects remain unclear. This study investigates the role of peroxisome proliferator-activated receptor γ (PPAR γ) and lanthionine synthetase C-like 2 (LANCL2) as molecular targets for ABA. We demonstrate that ABA increases PPAR γ reporter activity in RAW 264.7 macrophages and increases ppar γ expression in vivo, although it does not bind to the ligand-binding domain of PPAR γ. LANCL2 knockdown studies provide evidence that ABA-mediated activation of macrophage PPAR γ is dependent on lancl2 expression. Consistent with the association of LANCL2 with G proteins, we provide evidence that ABA increases cAMP accumulation in immune cells. ABA suppresses LPS-induced prostaglandin E2 and MCP-1 production via a PPAR γ-dependent mechanism possibly involving activation of PPAR γ and suppression of NF-κB and nuclear factor of activated T cells. LPS challenge studies in PPAR γ-expressing and immune cell-specific PPAR γ null mice demonstrate that ABA down-regulates toll-like receptor 4 expression in macrophages and T cells in vivo through a PPAR γ-dependent mechanism. Global transcriptomic profiling and confirmatory quantitative RT-PCR suggest novel candidate targets and demonstrate that ABA treatment mitigates the effect of LPS on the expression of genes involved in inflammation, metabolism, and cell signaling, in part, through PPAR γ. In conclusion, ABA decreases LPS-mediated inflammation and regulates innate immune responses through a bifurcating pathway involving LANCL2 and an alternative, ligand-binding domain-independent mechanism of PPAR γ activation. PMID:21088297

  12. Abscisic acid regulates inflammation via ligand-binding domain-independent activation of peroxisome proliferator-activated receptor gamma.

    Science.gov (United States)

    Bassaganya-Riera, Josep; Guri, Amir J; Lu, Pinyi; Climent, Montse; Carbo, Adria; Sobral, Bruno W; Horne, William T; Lewis, Stephanie N; Bevan, David R; Hontecillas, Raquel

    2011-01-28

    Abscisic acid (ABA) has shown efficacy in the treatment of diabetes and inflammation; however, its molecular targets and the mechanisms of action underlying its immunomodulatory effects remain unclear. This study investigates the role of peroxisome proliferator-activated receptor γ (PPAR γ) and lanthionine synthetase C-like 2 (LANCL2) as molecular targets for ABA. We demonstrate that ABA increases PPAR γ reporter activity in RAW 264.7 macrophages and increases ppar γ expression in vivo, although it does not bind to the ligand-binding domain of PPAR γ. LANCL2 knockdown studies provide evidence that ABA-mediated activation of macrophage PPAR γ is dependent on lancl2 expression. Consistent with the association of LANCL2 with G proteins, we provide evidence that ABA increases cAMP accumulation in immune cells. ABA suppresses LPS-induced prostaglandin E(2) and MCP-1 production via a PPAR γ-dependent mechanism possibly involving activation of PPAR γ and suppression of NF-κB and nuclear factor of activated T cells. LPS challenge studies in PPAR γ-expressing and immune cell-specific PPAR γ null mice demonstrate that ABA down-regulates toll-like receptor 4 expression in macrophages and T cells in vivo through a PPAR γ-dependent mechanism. Global transcriptomic profiling and confirmatory quantitative RT-PCR suggest novel candidate targets and demonstrate that ABA treatment mitigates the effect of LPS on the expression of genes involved in inflammation, metabolism, and cell signaling, in part, through PPAR γ. In conclusion, ABA decreases LPS-mediated inflammation and regulates innate immune responses through a bifurcating pathway involving LANCL2 and an alternative, ligand-binding domain-independent mechanism of PPAR γ activation.

  13. Knowledge discovery: Extracting usable information from large amounts of data

    International Nuclear Information System (INIS)

    Whiteson, R.

    1998-01-01

    The threat of nuclear weapons proliferation is a problem of worldwide concern. Safeguards are the key to nuclear nonproliferation, and data is the key to safeguards. The safeguards community has access to a huge and steadily growing volume of data. The advantages of this data-rich environment are obvious: there is a great deal of information which can be utilized. The challenge is to effectively apply proven and developing technologies to find and extract usable information from that data. That information must then be assessed and evaluated to produce the knowledge needed for crucial decision making. Efficient and effective analysis of safeguards data will depend on utilizing technologies to interpret the large, heterogeneous data sets that are available from diverse sources. With an order-of-magnitude increase in the amount of data from a wide variety of technical, textual, and historical sources, there is a vital need to apply advanced computer technologies to support all-source analysis. There are techniques of data warehousing, data mining, and data analysis that can provide analysts with tools to expedite extracting usable information from the huge amounts of data to which they have access. Computerized tools can aid analysts by integrating heterogeneous data, evaluating diverse data streams, automating retrieval of database information, prioritizing inputs, reconciling conflicting data, doing preliminary interpretations, discovering patterns or trends in data, and automating some of the simpler prescreening tasks that are time-consuming and tedious. Thus knowledge discovery technologies can provide a foundation of support for the analyst. Rather than spending time sifting through often irrelevant information, analysts could use their specialized skills in a focused, productive fashion. This would allow them to make their analytical judgments with more confidence and spend more of their time doing what they do best.

  14. Evolving spectral transformations for multitemporal information extraction using evolutionary computation

    Science.gov (United States)

    Momm, Henrique; Easson, Greg

    2011-01-01

    Remote sensing plays an important role in assessing temporal changes in land features. The challenge often resides in the conversion of large quantities of raw data into actionable information in a timely and cost-effective fashion. To address this issue, research was undertaken to develop an innovative methodology integrating biologically-inspired algorithms with standard image classification algorithms to improve information extraction from multitemporal imagery. Genetic programming was used as the optimization engine to evolve feature-specific candidate solutions in the form of nonlinear mathematical expressions of the image spectral channels (spectral indices). The temporal generalization capability of the proposed system was evaluated by addressing the task of building rooftop identification from a set of images acquired at different dates in a cross-validation approach. The proposed system generates robust solutions (kappa values > 0.75 for stage 1 and > 0.4 for stage 2) despite the statistical differences between the scenes caused by land use and land cover changes coupled with variable environmental conditions, and the lack of radiometric calibration between images. Based on our results, the use of nonlinear spectral indices enhanced the spectral differences between features improving the clustering capability of standard classifiers and providing an alternative solution for multitemporal information extraction.
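
    A minimal sketch of the idea above, using simple selection over a fixed pool of nonlinear band expressions in place of full genetic programming; the band arrays, labels, candidate expressions, and the separability fitness are illustrative assumptions, not the authors' implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical per-pixel reflectance bands and binary rooftop labels.
        n = 1000
        bands = {name: rng.random(n) for name in ("blue", "green", "red", "nir")}
        labels = rng.integers(0, 2, n)

        # Candidate nonlinear spectral indices (stand-ins for evolved GP trees).
        candidates = [
            lambda b: (b["nir"] - b["red"]) / (b["nir"] + b["red"] + 1e-9),
            lambda b: np.sqrt(b["red"] * b["green"]) - b["blue"],
            lambda b: np.log1p(b["nir"]) * b["red"] - b["green"] ** 2,
        ]

        def fitness(index_values, labels):
            """Score an index by class separability (difference of class means)."""
            return abs(index_values[labels == 1].mean() - index_values[labels == 0].mean())

        # A GP engine would also mutate and recombine expressions; here we only select.
        best = max(candidates, key=lambda f: fitness(f(bands), labels))
        print("best candidate fitness:", fitness(best(bands), labels))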

  15. Recognition techniques for extracting information from semistructured documents

    Science.gov (United States)

    Della Ventura, Anna; Gagliardi, Isabella; Zonta, Bruna

    2000-12-01

    Archives of optical documents are more and more massively employed, the demand driven also by the new norms sanctioning the legal value of digital documents, provided they are stored on supports that are physically unalterable. On the supply side there is now a vast and technologically advanced market, where optical memories have solved the problem of the duration and permanence of data at costs comparable to those for magnetic memories. The remaining bottleneck in these systems is the indexing. The indexing of documents with a variable structure, while still not completely automated, can be machine supported to a large degree with evident advantages both in the organization of the work, and in extracting information, providing data that is much more detailed and potentially significant for the user. We present here a system for the automatic registration of correspondence to and from a public office. The system is based on a general methodology for the extraction, indexing, archiving, and retrieval of significant information from semi-structured documents. This information, in our prototype application, is distributed among the database fields of sender, addressee, subject, date, and body of the document.
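
    As a rough illustration of field extraction from such semi-structured correspondence, the sketch below pulls the database fields named above out of a toy letter with regular expressions; the field labels and layout are assumptions, not the prototype's actual recognition rules.

        import re

        # Toy letter; real documents vary far more in layout.
        letter = """From: Ufficio Protocollo
        To: Anna Rossi
        Subject: Richiesta documenti
        Date: 12/10/2000

        Si trasmette la documentazione richiesta."""

        FIELD_PATTERNS = {
            "sender": re.compile(r"^\s*From:\s*(.+)$", re.MULTILINE),
            "addressee": re.compile(r"^\s*To:\s*(.+)$", re.MULTILINE),
            "subject": re.compile(r"^\s*Subject:\s*(.+)$", re.MULTILINE),
            "date": re.compile(r"^\s*Date:\s*(.+)$", re.MULTILINE),
        }

        # Header fields are matched line by line; everything after the first
        # blank line is treated as the body of the document.
        record = {field: (m.group(1).strip() if (m := pat.search(letter)) else None)
                  for field, pat in FIELD_PATTERNS.items()}
        record["body"] = letter.split("\n\n", 1)[-1].strip()
        print(record)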

  16. Automated extraction of chemical structure information from digital raster images

    Directory of Open Access Journals (Sweden)

    Shedden Kerby A

    2009-02-01

    Background: To search for chemical structures in research articles, diagrams or text representing molecules need to be translated into a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often presented as analog diagrams of chemical structures embedded in digital raster images. Several software systems have been developed to automate the analog-to-digital conversion of chemical structure diagrams in scientific research articles, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results: This paper provides critical reviews of these systems and also reports our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams from research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be run independently in sequence from a graphical user interface, and the algorithm parameters can be readily changed, to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy of extracted molecular substructure patterns. Conclusion: The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links
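
    The bond-line recognition step can be pictured with a standard edge-plus-Hough pipeline, sketched below on a synthetic image; this is a generic OpenCV illustration of line detection, not ChemReader's actual algorithm, and all parameters are arbitrary.

        import cv2
        import numpy as np

        # Synthetic stand-in for a scanned diagram: white page, two dark "bonds".
        img = np.full((100, 100), 255, np.uint8)
        cv2.line(img, (10, 80), (50, 20), 0, 2)
        cv2.line(img, (50, 20), (90, 80), 0, 2)

        # Edge detection followed by a probabilistic Hough transform.
        edges = cv2.Canny(img, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=20,
                                minLineLength=15, maxLineGap=5)
        if lines is not None:
            for x1, y1, x2, y2 in lines[:, 0]:
                print(f"bond candidate: ({x1},{y1}) -> ({x2},{y2})")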

  17. Information Extraction, Data Integration, and Uncertain Data Management: The State of The Art

    NARCIS (Netherlands)

    Habib, Mena Badieh; van Keulen, Maurice

    2011-01-01

    Information extraction, data integration, and uncertain data management are different areas of research that have received considerable attention over the last two decades. Many researchers have tackled these areas individually. However, information extraction systems should be integrated with data integration

  18. Information Extraction for Clinical Data Mining: A Mammography Case Study.

    Science.gov (United States)

    Nassif, Houssam; Woods, Ryan; Burnside, Elizabeth; Ayvaci, Mehmet; Shavlik, Jude; Page, David

    2009-01-01

    Breast cancer is the leading cause of cancer mortality in women between the ages of 15 and 54. During mammography screening, radiologists use a strict lexicon (BI-RADS) to describe and report their findings. Mammography records are then stored in a well-defined database format (NMD). Lately, researchers have applied data mining and machine learning techniques to these databases. They successfully built breast cancer classifiers that can help in early detection of malignancy. However, the validity of these models depends on the quality of the underlying databases. Unfortunately, most databases suffer from inconsistencies, missing data, inter-observer variability and inappropriate term usage. In addition, many databases are not compliant with the NMD format and/or solely consist of text reports. BI-RADS feature extraction from free text and consistency checks between recorded predictive variables and text reports are crucial to addressing this problem. We describe a general scheme for concept information retrieval from free text given a lexicon, and present a BI-RADS feature extraction algorithm for clinical data mining. It consists of a syntax analyzer, a concept finder and a negation detector. The syntax analyzer preprocesses the input into individual sentences. The concept finder uses a semantic grammar based on the BI-RADS lexicon and the experts' input. It parses sentences, detecting BI-RADS concepts. Once a concept is located, a lexical scanner checks for negation. Our method can handle multiple latent concepts within the text, filtering out ultrasound concepts. On our dataset, our algorithm achieves 97.7% precision, 95.5% recall and an F1-score of 0.97. It outperforms manual feature extraction at the 5% statistical significance level.
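
    The syntax analyzer / concept finder / negation detector cascade can be caricatured in a few lines; the mini-lexicon and negation cues below are illustrative stand-ins for the BI-RADS lexicon and the semantic grammar described above.

        import re

        LEXICON = {"mass": "Mass", "calcification": "Calcification",
                   "architectural distortion": "ArchitecturalDistortion"}
        NEGATION_CUES = ("no ", "without ", "negative for ")

        def extract_concepts(report):
            findings = []
            # Crude sentence splitting stands in for the syntax analyzer.
            for sentence in re.split(r"(?<=[.!?])\s+", report.lower()):
                for term, concept in LEXICON.items():
                    if term in sentence:
                        # Lexical scan to the left of the concept for negation cues.
                        prefix = sentence.split(term)[0]
                        negated = any(cue in prefix for cue in NEGATION_CUES)
                        findings.append((concept, "absent" if negated else "present"))
            return findings

        report = "There is a spiculated mass in the left breast. No calcification is seen."
        print(extract_concepts(report))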

  19. INFORMATION EXTRACTION IN TOMB PIT USING HYPERSPECTRAL DATA

    Directory of Open Access Journals (Sweden)

    X. Yang

    2018-04-01

    Hyperspectral data are characterized by many continuous bands, large data volume, redundancy, and non-destructive acquisition. These characteristics make it possible to use hyperspectral data to study cultural relics. In this paper, hyperspectral imaging technology is adopted to recognize the images on the bottom of an ancient tomb located in Shanxi province. There are many black remains on the bottom surface of the tomb, which are suspected to be meaningful texts or paintings. Firstly, the hyperspectral data are preprocessed to obtain the reflectance of the region of interest; for convenience of computation and storage, the original reflectance values are multiplied by 10000. Secondly, three methods are used to extract the symbols at the bottom of the ancient tomb. Finally, morphology is used to connect the symbols, yielding fifteen reference images. The results show that information extraction based on hyperspectral data can provide a better visual experience, which is beneficial to the study of ancient tombs by researchers, and provides references for archaeological research findings.
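
    A compact sketch of the preprocessing-and-extraction flow described above, run on a synthetic cube; the band indices, the 10000 scaling, and the dark-pixel threshold are illustrative choices, and the paper's three extraction methods are reduced here to a single band-ratio rule.

        import numpy as np

        # Hypothetical reflectance cube (rows x cols x bands), scaled by 10000.
        rng = np.random.default_rng(1)
        cube = (rng.random((50, 50, 120)) * 10000).astype(np.int16)

        # Simple extraction rule: a band ratio highlighting dark residues,
        # then a percentile threshold to mask candidate symbol pixels.
        band_a, band_b = 30, 90
        ratio = cube[:, :, band_a].astype(float) / (cube[:, :, band_b] + 1.0)
        mask = ratio < np.percentile(ratio, 10)
        print("candidate symbol pixels:", int(mask.sum()))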

  20. Information Extraction in Tomb Pit Using Hyperspectral Data

    Science.gov (United States)

    Yang, X.; Hou, M.; Lyu, S.; Ma, S.; Gao, Z.; Bai, S.; Gu, M.; Liu, Y.

    2018-04-01

    Hyperspectral data are characterized by many continuous bands, large data volume, redundancy, and non-destructive acquisition. These characteristics make it possible to use hyperspectral data to study cultural relics. In this paper, hyperspectral imaging technology is adopted to recognize the images on the bottom of an ancient tomb located in Shanxi province. There are many black remains on the bottom surface of the tomb, which are suspected to be meaningful texts or paintings. Firstly, the hyperspectral data are preprocessed to obtain the reflectance of the region of interest; for convenience of computation and storage, the original reflectance values are multiplied by 10000. Secondly, three methods are used to extract the symbols at the bottom of the ancient tomb. Finally, morphology is used to connect the symbols, yielding fifteen reference images. The results show that information extraction based on hyperspectral data can provide a better visual experience, which is beneficial to the study of ancient tombs by researchers, and provides references for archaeological research findings.

  1. Automated Extraction of Substance Use Information from Clinical Texts.

    Science.gov (United States)

    Wang, Yan; Chen, Elizabeth S; Pakhomov, Serguei; Arsoniadis, Elliot; Carter, Elizabeth W; Lindemann, Elizabeth; Sarkar, Indra Neil; Melton, Genevieve B

    2015-01-01

    Within clinical discourse, social history (SH) includes important information about substance use (alcohol, drug, and nicotine use) as key risk factors for disease, disability, and mortality. In this study, we developed and evaluated a natural language processing (NLP) system for automated detection of substance use statements and extraction of substance use attributes (e.g., temporal and status) based on Stanford Typed Dependencies. The developed NLP system leveraged linguistic resources and domain knowledge from a multi-site social history study, Propbank and the MiPACQ corpus. The system attained F-scores of 89.8, 84.6 and 89.4 respectively for alcohol, drug, and nicotine use statement detection, as well as average F-scores of 82.1, 90.3, 80.8, 88.7, 96.6, and 74.5 respectively for extraction of attributes. Our results suggest that NLP systems can achieve good performance when augmented with linguistic resources and domain knowledge when applied to a wide breadth of substance use free text clinical notes.

  2. Extracting and Using Photon Polarization Information in Radiative B Decays

    Energy Technology Data Exchange (ETDEWEB)

    Grossman, Yuval

    2000-05-09

    The authors discuss the uses of conversion electron pairs for extracting photon polarization information in weak radiative B decays. Both cases of leptons produced through a virtual and a real photon are considered. Measurements of the angular correlation between the (Kπ) and (e⁺e⁻) decay planes in B → K*(→ Kπ)γ(*)(→ e⁺e⁻) decays can be used to determine the helicity amplitudes in the radiative B → K*γ decays. A large right-handed helicity amplitude in B̄ decays is a signal of new physics. The time-dependent CP asymmetry in the B⁰ decay angular correlation is shown to measure sin 2β and cos 2β with little hadronic uncertainty.

  3. Extraction of neutron spectral information from Bonner-Sphere data

    CERN Document Server

    Haney, J H; Zaidins, C S

    1999-01-01

    We have extended a least-squares method of extracting neutron spectral information from Bonner-Sphere data which was previously developed by Zaidins et al. (Med. Phys. 5 (1978) 42). A pulse-height analysis with background stripping is employed which provided a more accurate count rate for each sphere. Newer response curves by Mares and Schraube (Nucl. Instr. and Meth. A 366 (1994) 461) were included for the moderating spheres and the bare detector which comprise the Bonner spectrometer system. Finally, the neutron energy spectrum of interest was divided using the philosophy of fuzzy logic into three trapezoidal regimes corresponding to slow, moderate, and fast neutrons. Spectral data was taken using a PuBe source in two different environments and the analyzed data is presented for these cases as slow, moderate, and fast neutron fluences. (author)
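
    To make the unfolding step concrete, here is a minimal least-squares sketch with a hypothetical four-sphere response matrix over the three trapezoidal regimes; the numbers are invented for illustration and do not come from the published response curves.

        import numpy as np

        # Hypothetical response matrix: count rate per unit fluence for four
        # spheres (rows) in the slow/moderate/fast regimes (columns).
        R = np.array([[0.8, 0.3, 0.1],
                      [0.4, 0.7, 0.3],
                      [0.2, 0.5, 0.6],
                      [0.1, 0.2, 0.9]])
        rates = np.array([52.0, 61.0, 55.0, 48.0])  # background-stripped count rates

        # Least-squares estimate of the fluence in each regime
        # (non-negativity constraints are omitted in this sketch).
        fluence, *_ = np.linalg.lstsq(R, rates, rcond=None)
        for name, phi in zip(("slow", "moderate", "fast"), fluence):
            print(f"{name} neutron fluence: {phi:.1f}")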

  4. ONTOGRABBING: Extracting Information from Texts Using Generative Ontologies

    DEFF Research Database (Denmark)

    Nilsson, Jørgen Fischer; Szymczak, Bartlomiej Antoni; Jensen, P.A.

    2009-01-01

    We describe principles for extracting information from texts using a so-called generative ontology in combination with syntactic analysis. Generative ontologies are introduced as semantic domains for natural language phrases. Generative ontologies extend ordinary finite ontologies with rules for producing recursively shaped terms representing the ontological content (ontological semantics) of NL noun phrases and other phrases. We focus here on achieving a robust, often only partial, ontology-driven parsing of and ascription of semantics to a sentence in the text corpus. The aim of the ontological analysis is primarily to identify paraphrases, thereby achieving a search functionality beyond mere keyword search with synsets. We further envisage use of the generative ontology as a phrase-based rather than word-based browser into text corpora.

  5. Information extraction and knowledge graph construction from geoscience literature

    Science.gov (United States)

    Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo; Chen, Jingwen

    2018-03-01

    Geoscience literature published online is an important part of open data, and brings both challenges and opportunities for data analysis. Compared with studies of numerical geoscience data, there are limited works on information extraction and knowledge discovery from textual geoscience data. This paper presents a workflow and a few empirical case studies for that topic, with a focus on documents written in Chinese. First, we set up a hybrid corpus combining the generic and geology terms from geology dictionaries to train Chinese word segmentation rules of the Conditional Random Fields model. Second, we used the word segmentation rules to parse documents into individual words, and removed the stop-words from the segmentation results to get a corpus constituted of content-words. Third, we used a statistical method to analyze the semantic links between content-words, and we selected the chord and bigram graphs to visualize the content-words and their links as nodes and edges in a knowledge graph, respectively. The resulting graph presents a clear overview of key information in an unstructured document. This study proves the usefulness of the designed workflow, and shows the potential of leveraging natural language processing and knowledge graph technologies for geoscience.
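
    The bigram-graph step reduces to counting adjacent content-word pairs once segmentation and stop-word removal are done; the sketch below uses hypothetical English tokens in place of the segmented Chinese corpus.

        from collections import Counter
        from itertools import pairwise  # Python 3.10+

        # Content words of one document after segmentation and stop-word removal.
        tokens = ["granite", "intrusion", "granite", "pluton", "intrusion",
                  "zircon", "age", "zircon", "granite"]

        # Nodes are content words; weighted edges are bigram co-occurrences.
        nodes = Counter(tokens)
        edges = Counter(pairwise(tokens))
        for (a, b), weight in edges.most_common(3):
            print(f"{a} -> {b} (weight {weight})")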

  6. Data Assimilation to Extract Soil Moisture Information from SMAP Observations

    Directory of Open Access Journals (Sweden)

    Jana Kolassa

    2017-11-01

    This study compares different methods to extract soil moisture information through the assimilation of Soil Moisture Active Passive (SMAP) observations. Neural network (NN) and physically-based SMAP soil moisture retrievals were assimilated into the National Aeronautics and Space Administration (NASA) Catchment model over the contiguous United States for April 2015 to March 2017. By construction, the NN retrievals are consistent with the global climatology of the Catchment model soil moisture. Assimilating the NN retrievals without further bias correction improved the surface and root zone correlations against in situ measurements from 14 SMAP core validation sites (CVS) by 0.12 and 0.16, respectively, over the model-only skill, and reduced the surface and root zone unbiased root-mean-square error (ubRMSE) by 0.005 m³ m⁻³ and 0.001 m³ m⁻³, respectively. The assimilation reduced the average absolute surface bias against the CVS measurements by 0.009 m³ m⁻³, but increased the root zone bias by 0.014 m³ m⁻³. Assimilating the NN retrievals after a localized bias correction yielded slightly lower surface correlation and ubRMSE improvements, but generally the skill differences were small. The assimilation of the physically-based SMAP Level-2 passive soil moisture retrievals using a global bias correction yielded similar skill improvements, as did the direct assimilation of locally bias-corrected SMAP brightness temperatures within the SMAP Level-4 soil moisture algorithm. The results show that global bias correction methods may be able to extract more independent information from SMAP observations compared to local bias correction methods, but without accurate quality control and observation error characterization they are also more vulnerable to adverse effects from retrieval errors related to uncertainties in the retrieval inputs and algorithm. Furthermore, the results show that using global bias correction approaches without a
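
    The contrast between local and global bias correction hinges on mapping retrievals into the model climatology; below is a minimal CDF-matching sketch of that idea, with synthetic soil moisture values standing in for SMAP retrievals and Catchment model output.

        import numpy as np

        def cdf_match(retrieval, model_clim):
            """Quantile-map retrievals into the model climatology."""
            ranks = np.searchsorted(np.sort(retrieval), retrieval) / len(retrieval)
            return np.quantile(model_clim, ranks)

        rng = np.random.default_rng(2)
        retrieval = rng.normal(0.30, 0.08, 500).clip(0, 1)   # hypothetical retrievals
        model_clim = rng.normal(0.22, 0.05, 500).clip(0, 1)  # hypothetical model values

        corrected = cdf_match(retrieval, model_clim)
        print("retrieval mean:", round(float(retrieval.mean()), 3),
              "-> corrected mean:", round(float(corrected.mean()), 3))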

  7. Multi-Filter String Matching and Human-Centric Entity Matching for Information Extraction

    Science.gov (United States)

    Sun, Chong

    2012-01-01

    More and more information is being generated in text documents, such as Web pages, emails and blogs. To effectively manage this unstructured information, one broadly used approach includes locating relevant content in documents, extracting structured information and integrating the extracted information for querying, mining or further analysis. In…

  8. Earth Science Data Analytics: Preparing for Extracting Knowledge from Information

    Science.gov (United States)

    Kempler, Steven; Barbieri, Lindsay

    2016-01-01

    Data analytics is the process of examining large amounts of data of a variety of types to uncover hidden patterns, unknown correlations and other useful information. Data analytics is a broad term that includes data analysis, as well as an understanding of the cognitive processes an analyst uses to understand problems and explore data in meaningful ways. Analytics also includes data extraction, transformation, and reduction, utilizing specific tools, techniques, and methods. Turning to data science, definitions of data science sound very similar to those of data analytics (which leads to a lot of the confusion between the two). But the skills needed for both, co-analyzing large amounts of heterogeneous data, understanding and utilizing relevant tools and techniques, and subject matter expertise, although similar, serve different purposes. Data analytics takes a practitioner's approach, applying expertise and skills to solve issues and gain subject knowledge. Data science is more theoretical (research in itself) in nature, providing strategic actionable insights and new innovative methodologies. Earth Science Data Analytics (ESDA) is the process of examining, preparing, reducing, and analyzing large amounts of spatial (multi-dimensional), temporal, or spectral data using a variety of data types to uncover patterns, correlations and other information, to better understand our Earth. The large variety of datasets (temporal and spatial differences, data types, formats, etc.) invites the need for data analytics skills that combine an understanding of the science domain with data preparation, reduction, and analysis techniques, from a practitioner's point of view. The application of these skills to ESDA is the focus of this presentation. The Earth Science Information Partners (ESIP) Federation Earth Science Data Analytics (ESDA) Cluster was created in recognition of the practical need to facilitate the co-analysis of large amounts of data and information for Earth science. Thus, from a to

  9. Testing the reliability of information extracted from ancient zircon

    Science.gov (United States)

    Kielman, Ross; Whitehouse, Martin; Nemchin, Alexander

    2015-04-01

    Studies combining zircon U-Pb chronology, trace element distribution as well as O and Hf isotope systematics are a powerful way to gain understanding of the processes shaping Earth's evolution, especially in detrital populations where constraints from the original host are missing. Such studies of the Hadean detrital zircon population abundant in sedimentary rocks in Western Australia have involved analysis of an unusually large number of individual grains, but have also highlighted potential problems with the approach, only apparent when multiple analyses are obtained from individual grains. A common feature of the Hadean as well as many early Archaean zircon populations is their apparent inhomogeneity, which reduces confidence in conclusions based on studies combining chemistry and isotopic characteristics of zircon. In order to test the reliability of information extracted from early Earth zircon, we report results from one of the first in-depth multi-method studies of zircon from a relatively simple early Archaean magmatic rock, used as an analogue to ancient detrital zircon. The approach involves making multiple SIMS analyses in individual grains in order to be comparable to the most advanced studies of detrital zircon populations. The investigated sample is a relatively undeformed, non-migmatitic ca. 3.8 Ga tonalite collected a few km south of the Isua Greenstone Belt, southwest Greenland. Extracted zircon grains can be combined into three different groups based on the behavior of their U-Pb systems: (i) grains that show internally consistent and concordant ages and define an average age of 3805±15 Ma, taken to be the age of the rock; (ii) grains that are distributed close to the concordia line, but with significant variability between multiple analyses, suggesting ancient Pb loss; and (iii) grains that have multiple analyses distributed along a discordia pointing towards a zero intercept, indicating geologically recent Pb loss. This overall behavior has

  10. Extraction of CT dose information from DICOM metadata: automated Matlab-based approach.

    Science.gov (United States)

    Dave, Jaydev K; Gingold, Eric L

    2013-01-01

    The purpose of this study was to extract exposure parameters and dose-relevant indexes of CT examinations from information embedded in DICOM metadata. DICOM dose report files were identified and retrieved from a PACS. An automated software program was used to extract from these files information from the structured elements in the DICOM metadata relevant to exposure. Extracting information from DICOM metadata eliminated potential errors inherent in techniques based on optical character recognition, yielding 100% accuracy.
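
    In the same spirit, a few lines of pydicom suffice to read exposure-related elements from DICOM metadata; the file name is a placeholder, and real dose reports often keep these values inside structured report sequences rather than as the top-level elements queried here.

        import pydicom

        ds = pydicom.dcmread("ct_dose_report.dcm")  # placeholder path

        # Keyword lookup of exposure-related DICOM elements (None if absent).
        for keyword in ("KVP", "XRayTubeCurrent", "ExposureTime", "CTDIvol"):
            print(f"{keyword}: {ds.get(keyword)}")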

  11. Medicaid Analytic eXtract (MAX) General Information

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicaid Analytic eXtract (MAX) data is a set of person-level data files on Medicaid eligibility, service utilization, and payments. The MAX data are created to...

  12. Information Extraction with Character-level Neural Networks and Free Noisy Supervision

    OpenAIRE

    Meerkamp, Philipp; Zhou, Zhengyi

    2016-01-01

    We present an architecture for information extraction from text that augments an existing parser with a character-level neural network. The network is trained using a measure of consistency of extracted data with existing databases as a form of noisy supervision. Our architecture combines the ability of constraint-based information extraction systems to easily incorporate domain knowledge and constraints with the ability of deep neural networks to leverage large amounts of data to learn compl...

  13. Semantics-based information extraction for detecting economic events

    NARCIS (Netherlands)

    A.C. Hogenboom (Alexander); F. Frasincar (Flavius); K. Schouten (Kim); O. van der Meer

    2013-01-01

    As today's financial markets are sensitive to breaking news on economic events, accurate and timely automatic identification of events in news items is crucial. Unstructured news items originating from many heterogeneous sources have to be mined in order to extract knowledge useful for

  14. Tagline: Information Extraction for Semi-Structured Text Elements in Medical Progress Notes

    Science.gov (United States)

    Finch, Dezon Kile

    2012-01-01

    Text analysis has become an important research activity in the Department of Veterans Affairs (VA). Statistical text mining and natural language processing have been shown to be very effective for extracting useful information from medical documents. However, neither of these techniques is effective at extracting the information stored in…

  15. An Effective Approach to Biomedical Information Extraction with Limited Training Data

    Science.gov (United States)

    Jonnalagadda, Siddhartha

    2011-01-01

    In the current millennium, extensive use of computers and the internet caused an exponential increase in information. Few research areas are as important as information extraction, which primarily involves extracting concepts and the relations between them from free text. Limitations in the size of training data, lack of lexicons and lack of…

  16. A rapid extraction of landslide disaster information research based on GF-1 image

    Science.gov (United States)

    Wang, Sai; Xu, Suning; Peng, Ling; Wang, Zhiyi; Wang, Na

    2015-08-01

    In recent years, landslide disasters have occurred frequently because of seismic activity. They bring great harm to people's lives and have attracted intense attention from the state and extensive concern from society. In the field of geological disasters, landslide information extraction based on remote sensing has been controversial, but high-resolution remote sensing imagery can effectively improve the accuracy of information extraction with its rich texture and geometric information. Therefore, it is feasible to extract information on earthquake-triggered landslides with serious surface damage and large scale. Taking Wenchuan county as the study area, this paper uses a multi-scale segmentation method to extract landslide image objects from domestic GF-1 images and DEM data, using the estimation of scale parameter tool to determine the optimal segmentation scale. After comprehensively analyzing the characteristics of landslides in high-resolution imagery and selecting the spectral, texture, geometric and landform features of the image, extraction rules are established to extract landslide disaster information. The extraction results show 20 landslides with a total area of 521279.31. Compared with visual interpretation results, the extraction accuracy is 72.22%. This study indicates that it is efficient and feasible to extract earthquake landslide disaster information based on high-resolution remote sensing, and it provides important technical support for post-disaster emergency investigation and disaster assessment.

  17. Towards an information extraction and knowledge formation framework based on Shannon entropy

    Directory of Open Access Journals (Sweden)

    Iliescu Dragoș

    2017-01-01

    The subject of information quantity is approached in this paper, considering the specific domain of nonconforming product management as the information source. This work represents a case study. Raw data were gathered from a heavy industrial works company, with information extraction and knowledge formation considered herein. The method used for information quantity estimation is based on the Shannon entropy formula. The information and entropy spectra are decomposed and analysed for the extraction of specific information and the formation of knowledge. The results of the entropy analysis point out the information that needs to be acquired by the involved organisation, presented as a specific knowledge type.
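
    The entropy estimate itself is a one-liner over the empirical symbol distribution; the nonconformity categories below are hypothetical examples of the kind of records analysed.

        import math
        from collections import Counter

        def shannon_entropy(symbols):
            """H = -sum(p * log2(p)) over the empirical distribution."""
            counts = Counter(symbols)
            total = sum(counts.values())
            return -sum((c / total) * math.log2(c / total) for c in counts.values())

        # Hypothetical nonconforming-product categories from inspection records.
        defects = ["dimension", "surface", "dimension", "material",
                   "surface", "dimension", "dimension"]
        print(f"entropy: {shannon_entropy(defects):.3f} bits/symbol")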

  18. Extracting local information from crowds through betting markets

    Science.gov (United States)

    Weijs, Steven

    2015-04-01

    In this research, a set-up is considered in which users can bet against a forecasting agency to challenge their probabilistic forecasts. From an information theory standpoint, a reward structure is considered that either provides the forecasting agency with better information, paying the successful providers of information for their winning bets, or funds excellent forecasting agencies through users that think they know better. Especially for local forecasts, the approach may help to diagnose model biases and to identify local predictive information that can be incorporated in the models. The challenges and opportunities for implementing such a system in practice are also discussed.

  19. Spoken Language Understanding Systems for Extracting Semantic Information from Speech

    CERN Document Server

    Tur, Gokhan

    2011-01-01

    Spoken language understanding (SLU) is an emerging field in between speech and language processing, investigating human/ machine and human/ human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances and its applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, usin

  20. Sifting Through Chaos: Extracting Information from Unstructured Legal Opinions.

    Science.gov (United States)

    Oliveira, Bruno Miguel; Guimarães, Rui Vasconcellos; Antunes, Luís; Rodrigues, Pedro Pereira

    2018-01-01

    Abiding by the law is, in some cases, a delicate balance between the rights of different players. Re-using health records is such a case. While the law grants reuse rights to public administration documents, in which health records produced in public health institutions are included, it also grants privacy to personal records. To safeguard the correct usage of data, public hospitals in Portugal employ jurists who are responsible for allowing or withholding access rights to health records. To help decision making, these jurists can consult the legal opinions issued by the national committee on public administration documents usage. While these legal opinions are of undeniable value, due to their doctrinal contribution, they are only available in a format best suited for printing, forcing individual consultation of each document, with no option whatsoever for clustered search, filtering or indexing, which are standard operations nowadays in a document management system. When having to decide on tens of data requests a day, it becomes unfeasible to consult the hundreds of legal opinions already available. With the objective of creating a modern document management system, we devised an open, platform-agnostic system that compiles the legal opinions, extracts their contents and produces metadata, allowing for fast searching and filtering of said legal opinions.

  1. Information extraction from FN plots of tungsten microemitters

    Energy Technology Data Exchange (ETDEWEB)

    Mussa, Khalil O. [Department of Physics, Mu'tah University, Al-Karak (Jordan); Mousa, Marwan S., E-mail: mmousa@mutah.edu.jo [Department of Physics, Mu'tah University, Al-Karak (Jordan); Fischer, Andreas, E-mail: andreas.fischer@physik.tu-chemnitz.de [Institut für Physik, Technische Universität Chemnitz, Chemnitz (Germany)

    2013-09-15

    Tungsten based microemitter tips have been prepared both clean and coated with dielectric materials. For clean tungsten tips, apex radii have been varied ranging from 25 to 500 nm. These tips were manufactured by electrochemical etching a 0.1 mm diameter high purity (99.95%) tungsten wire at the meniscus of two molar NaOH solution. Composite micro-emitters considered here are consisting of a tungsten core coated with different dielectric materials—such as magnesium oxide (MgO), sodium hydroxide (NaOH), tetracyanoethylene (TCNE), and zinc oxide (ZnO). It is worthwhile noting here, that the rather unconventional NaOH coating has shown several interesting properties. Various properties of these emitters were measured including current–voltage (IV) characteristics and the physical shape of the tips. A conventional field emission microscope (FEM) with a tip (cathode)–screen (anode) separation standardized at 10 mm was used to electrically characterize the electron emitters. The system was evacuated down to a base pressure of ∼10⁻⁸ mbar when baked at up to ∼180°C overnight. This allowed measurements of typical field electron emission (FE) characteristics, namely the IV characteristics and the emission images on a conductive phosphorus screen (the anode). Mechanical characterization has been performed through a FEI scanning electron microscope (SEM). Within this work, the mentioned experimental results are connected to the theory for analyzing Fowler–Nordheim (FN) plots. We compared and evaluated the data extracted from clean tungsten tips of different radii and determined deviations between the results of different extraction methods applied. In particular, we derived the apex radii of several clean and coated tungsten tips by both SEM imaging and analyzing FN plots. The aim of this analysis is to support the ongoing discussion on recently developed improvements of the theory for analyzing FN plots related to metal field electron emitters, which in
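
    The core of FN-plot analysis is a straight-line fit of ln(I/V²) against 1/V; the sketch below recovers the slope from synthetic IV data, with all constants invented for illustration (relating the slope to work function, field conversion factor, and apex radius is where the theory under discussion comes in).

        import numpy as np

        # Synthetic FN-like IV data: I proportional to V^2 * exp(-B/V).
        V = np.linspace(2000.0, 4000.0, 20)      # volts
        I = 1e-12 * V**2 * np.exp(-6.5e4 / V)    # amperes (illustrative)

        # FN coordinates and linear fit; the slope estimates -B.
        x = 1.0 / V
        y = np.log(I / V**2)
        slope, intercept = np.polyfit(x, y, 1)
        print(f"FN slope: {slope:.3e} V (constructed value: -6.5e4)")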

  2. Information extraction from FN plots of tungsten microemitters

    International Nuclear Information System (INIS)

    Mussa, Khalil O.; Mousa, Marwan S.; Fischer, Andreas

    2013-01-01

    Tungsten based microemitter tips have been prepared both clean and coated with dielectric materials. For clean tungsten tips, apex radii have been varied ranging from 25 to 500 nm. These tips were manufactured by electrochemical etching a 0.1 mm diameter high purity (99.95%) tungsten wire at the meniscus of two molar NaOH solution. Composite micro-emitters considered here are consisting of a tungsten core coated with different dielectric materials—such as magnesium oxide (MgO), sodium hydroxide (NaOH), tetracyanoethylene (TCNE), and zinc oxide (ZnO). It is worthwhile noting here, that the rather unconventional NaOH coating has shown several interesting properties. Various properties of these emitters were measured including current–voltage (IV) characteristics and the physical shape of the tips. A conventional field emission microscope (FEM) with a tip (cathode)–screen (anode) separation standardized at 10 mm was used to electrically characterize the electron emitters. The system was evacuated down to a base pressure of ∼10⁻⁸ mbar when baked at up to ∼180°C overnight. This allowed measurements of typical field electron emission (FE) characteristics, namely the IV characteristics and the emission images on a conductive phosphorus screen (the anode). Mechanical characterization has been performed through a FEI scanning electron microscope (SEM). Within this work, the mentioned experimental results are connected to the theory for analyzing Fowler–Nordheim (FN) plots. We compared and evaluated the data extracted from clean tungsten tips of different radii and determined deviations between the results of different extraction methods applied. In particular, we derived the apex radii of several clean and coated tungsten tips by both SEM imaging and analyzing FN plots. The aim of this analysis is to support the ongoing discussion on recently developed improvements of the theory for analyzing FN plots related to metal field electron emitters, which in

  3. Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction

    Science.gov (United States)

    Zang, Y.; Yang, B.

    2018-04-01

    3D laser technology is widely used to collect the surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. To solve this problem, most existing methods extract important points based on a fixed scale. However, the geometric features of a 3D object come from various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply the perceptual metric Just-Noticeable-Difference to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for optimal information extraction from objects.

  4. OPTIMAL INFORMATION EXTRACTION OF LASER SCANNING DATASET BY SCALE-ADAPTIVE REDUCTION

    Directory of Open Access Journals (Sweden)

    Y. Zang

    2018-04-01

    3D laser technology is widely used to collect the surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. To solve this problem, most existing methods extract important points based on a fixed scale. However, the geometric features of a 3D object come from various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply the perceptual metric Just-Noticeable-Difference to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for optimal information extraction from objects.

  5. Information extraction from FN plots of tungsten microemitters.

    Science.gov (United States)

    Mussa, Khalil O; Mousa, Marwan S; Fischer, Andreas

    2013-09-01

    Tungsten based microemitter tips have been prepared both clean and coated with dielectric materials. For clean tungsten tips, apex radii have been varied ranging from 25 to 500 nm. These tips were manufactured by electrochemical etching a 0.1 mm diameter high purity (99.95%) tungsten wire at the meniscus of two molar NaOH solution. Composite micro-emitters considered here are consisting of a tungsten core coated with different dielectric materials-such as magnesium oxide (MgO), sodium hydroxide (NaOH), tetracyanoethylene (TCNE), and zinc oxide (ZnO). It is worthwhile noting here, that the rather unconventional NaOH coating has shown several interesting properties. Various properties of these emitters were measured including current-voltage (IV) characteristics and the physical shape of the tips. A conventional field emission microscope (FEM) with a tip (cathode)-screen (anode) separation standardized at 10 mm was used to electrically characterize the electron emitters. The system was evacuated down to a base pressure of ∼10⁻⁸ mbar when baked at up to ∼180 °C overnight. This allowed measurements of typical field electron emission (FE) characteristics, namely the IV characteristics and the emission images on a conductive phosphorus screen (the anode). Mechanical characterization has been performed through a FEI scanning electron microscope (SEM). Within this work, the mentioned experimental results are connected to the theory for analyzing Fowler-Nordheim (FN) plots. We compared and evaluated the data extracted from clean tungsten tips of different radii and determined deviations between the results of different extraction methods applied. In particular, we derived the apex radii of several clean and coated tungsten tips by both SEM imaging and analyzing FN plots. The aim of this analysis is to support the ongoing discussion on recently developed improvements of the theory for analyzing FN plots related to metal field electron emitters, which in particular

  6. Study on methods and techniques of aeroradiometric weak information extraction for sandstone-hosted uranium deposits based on GIS

    International Nuclear Information System (INIS)

    Han Shaoyang; Ke Dan; Hou Huiqun

    2005-01-01

    Weak information extraction is one of the important research topics in current sandstone-type uranium prospecting in China. This paper introduces the concept of aeroradiometric weak information extraction, discusses the formation theories of aeroradiometric weak information, and establishes some effective mathematical models for weak information extraction. The models are implemented on a GIS software platform, and application tests of weak information extraction are completed in known uranium mineralized areas. Research results prove that the prospective areas of sandstone-type uranium deposits can be rapidly delineated by extracting aeroradiometric weak information. (authors)

  7. Extraction of Graph Information Based on Image Contents and the Use of Ontology

    Science.gov (United States)

    Kanjanawattana, Sarunya; Kimura, Masaomi

    2016-01-01

    A graph is an effective form of data representation used to summarize complex information. Explicit information such as the relationship between the X- and Y-axes can be easily extracted from a graph by applying human intelligence. However, implicit knowledge such as information obtained from other related concepts in an ontology also resides in…

  8. Extracting information of fixational eye movements through pupil tracking

    Science.gov (United States)

    Xiao, JiangWei; Qiu, Jian; Luo, Kaiqin; Peng, Li; Han, Peng

    2018-01-01

    Human eyes are never completely static, even when fixating on a stationary point. These irregular, small movements, which consist of micro-tremors, micro-saccades and drifts, prevent the fading of the images that enter our eyes. The importance of researching fixational eye movements has been experimentally demonstrated recently. However, the characteristics of fixational eye movements and their roles in the visual process have not been explained clearly, because these signals could hardly be extracted completely until now. In this paper, we developed a new eye movement detection device with a high-speed camera. This device includes a beam splitter mirror, an infrared light source and a high-speed digital video camera with a frame rate of 200 Hz. To avoid the influence of head shaking, we made the device wearable by fixing the camera on a safety helmet. Using this device, pupil tracking experiments were conducted. By localizing the pupil center and performing spectrum analysis, the envelope frequency spectra of micro-saccades, micro-tremors and drifts are clearly shown. The experimental results show that the device is feasible and effective, so that it can be applied in further characteristic analysis.
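
    A sketch of the spectrum-analysis step: given pupil-center positions sampled at 200 Hz, detrending separates the slow drift from the tremor band; the signal below is synthetic, with an invented 85 Hz tremor component.

        import numpy as np
        from scipy.signal import detrend

        fs = 200.0                       # camera frame rate in Hz
        t = np.arange(0, 5, 1 / fs)
        rng = np.random.default_rng(3)

        # Synthetic horizontal pupil-center trace: drift + tremor + noise.
        x = 0.02 * t + 1e-3 * np.sin(2 * np.pi * 85 * t) \
            + 5e-4 * rng.standard_normal(t.size)

        # Remove the linear drift, then inspect the amplitude spectrum.
        spectrum = np.abs(np.fft.rfft(detrend(x)))
        freqs = np.fft.rfftfreq(x.size, 1 / fs)
        print("dominant component near", freqs[spectrum[1:].argmax() + 1], "Hz")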

  9. Extracting Social Networks and Contact Information From Email and the Web

    National Research Council Canada - National Science Library

    Culotta, Aron; Bekkerman, Ron; McCallum, Andrew

    2005-01-01

    ...-suited for such information extraction tasks. By recursively calling itself on new people discovered on the Web, the system builds a social network with multiple degrees of separation from the user...

  10. Lithium NLP: A System for Rich Information Extraction from Noisy User Generated Text on Social Media

    OpenAIRE

    Bhargava, Preeti; Spasojevic, Nemanja; Hu, Guoning

    2017-01-01

    In this paper, we describe the Lithium Natural Language Processing (NLP) system - a resource-constrained, high-throughput and language-agnostic system for information extraction from noisy user generated text on social media. Lithium NLP extracts a rich set of information including entities, topics, hashtags and sentiment from text. We discuss several real world applications of the system currently incorporated in Lithium products. We also compare our system with existing commercial and acad...

  11. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    Science.gov (United States)

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special subject information depends on this extraction. On the basis of WorldView-2 high-resolution data and the optimal segmentation parameters method of object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor parameter and compact factor parameters were computed with the use of the control variables and the combination of the heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  12. Overview of ImageCLEF 2017: information extraction from images

    OpenAIRE

    Ionescu, Bogdan; Müller, Henning; Villegas, Mauricio; Arenas, Helbert; Boato, Giulia; Dang Nguyen, Duc Tien; Dicente Cid, Yashin; Eickhoff, Carsten; Seco de Herrera, Alba G.; Gurrin, Cathal; Islam, Bayzidul; Kovalev, Vassili; Liauchuk, Vitali; Mothe, Josiane; Piras, Luca

    2017-01-01

    This paper presents an overview of the ImageCLEF 2017 evaluation campaign, an event that was organized as part of the CLEF (Conference and Labs of the Evaluation Forum) labs 2017. ImageCLEF is an ongoing initiative (started in 2003) that promotes the evaluation of technologies for annotation, indexing and retrieval for providing information access to collections of images in various usage scenarios and domains. In 2017, the 15th edition of ImageCLEF, three main tasks were proposed and one pil...

  13. Statistical techniques to extract information during SMAP soil moisture assimilation

    Science.gov (United States)

    Kolassa, J.; Reichle, R. H.; Liu, Q.; Alemohammad, S. H.; Gentine, P.

    2017-12-01

    Statistical techniques permit the retrieval of soil moisture estimates in a model climatology while retaining the spatial and temporal signatures of the satellite observations. As a consequence, the need for bias correction prior to an assimilation of these estimates is reduced, which could result in a more effective use of the independent information provided by the satellite observations. In this study, a statistical neural network (NN) retrieval algorithm is calibrated using SMAP brightness temperature observations and modeled soil moisture estimates (similar to those used to calibrate the SMAP Level 4 DA system). Daily values of surface soil moisture are estimated using the NN and then assimilated into the NASA Catchment model. The skill of the assimilation estimates is assessed based on a comprehensive comparison to in situ measurements from the SMAP core and sparse network sites as well as the International Soil Moisture Network. The NN retrieval assimilation is found to significantly improve the model skill, particularly in areas where the model does not represent processes related to agricultural practices. Additionally, the NN method is compared to assimilation experiments using traditional bias correction techniques. The NN retrieval assimilation is found to more effectively use the independent information provided by SMAP resulting in larger model skill improvements than assimilation experiments using traditional bias correction techniques.

  14. Research on Crowdsourcing Emergency Information Extraction of Based on Events' Frame

    Science.gov (United States)

    Yang, Bo; Wang, Jizhou; Ma, Weijun; Mao, Xi

    2018-01-01

    At present, common information extraction methods cannot accurately extract structured emergency event information, general information retrieval tools cannot completely identify emergency geographic information, and neither approach provides an accurate assessment of its extraction results. This paper therefore proposes an emergency information collection technology based on an event framework, designed to solve the problem of emergency information extraction. It mainly includes an emergency information extraction model (EIEM), a complete address recognition method (CARM) and an accuracy evaluation model of emergency information (AEMEI). EIEM extracts structured emergency information and compensates for the lack of network data acquisition in emergency mapping. CARM uses a hierarchical model and the shortest path algorithm and allows toponym pieces to be joined into a full address. AEMEI analyzes the results of the emergency event and summarizes the advantages and disadvantages of the event framework. Experiments show that the event frame technology can solve the problem of emergency information extraction and provides reference cases for other applications. When an emergency disaster is about to occur, the relevant departments can query data on emergencies that have occurred in the past and make arrangements ahead of schedule for defense and disaster reduction. The technology decreases the number of casualties and the amount of property damage in the country and the world, which is of great significance to the state and society.

  15. [Extraction of management information from the national quality assurance program].

    Science.gov (United States)

    Stausberg, Jürgen; Bartels, Claus; Bobrowski, Christoph

    2007-07-15

    Starting with clinically motivated projects, the national quality assurance program has established a legally obligatory framework. Annual feedback of results is an important means of quality control. The annual reports cover quality-related information with high granularity. A synopsis for corporate management is missing, however. Therefore, the results of the University Clinics in Greifswald, Germany, have been analyzed and aggregated to support hospital management. Strengths were identified by the ranking of results within the state for each quality indicator, weaknesses by the comparison with national reference values. The assessment was aggregated per clinical discipline and per category (indication, process, and outcome). A composition of quality indicators has been claimed multiple times, but a coherent concept is still missing. The method presented establishes a plausible summary of strengths and weaknesses of a hospital from the point of view of the national quality assurance program. Nevertheless, further adaptation of the program is needed to better assist corporate management.

  16. Extracting of implicit information in English advertising texts with phonetic and lexical-morphological means

    Directory of Open Access Journals (Sweden)

    Traikovskaya Natalya Petrovna

    2015-12-01

    The article deals with phonetic and lexical-morphological language means participating in the process of extracting implicit information in English-language advertising texts for men and women. The functioning of phonetic means of the English language is not the basis for the implication of information in advertising texts. Lexical and morphological means act as markers of relevant information, serving as activators of implicit information in the texts of advertising.

  17. Post-processing of Deep Web Information Extraction Based on Domain Ontology

    Directory of Open Access Journals (Sweden)

    PENG, T.

    2013-11-01

    Many methods are utilized to extract and process query results in the deep Web, relying on the different structures of Web pages and various design modes of databases. However, some semantic meanings and relations are ignored. In this paper, we present an approach for post-processing deep Web query results based on domain ontology, which can utilize semantic meanings and relations. A block identification model (BIM) based on node similarity is defined to extract data blocks that are relevant to a specific domain after reducing noisy nodes. A feature vector of domain books is obtained by a result set extraction model (RSEM) based on the vector space model (VSM). RSEM, in combination with BIM, builds the domain ontology on books, which can not only remove the limits of Web page structures when extracting data information, but also make use of the semantic meanings of the domain ontology. After extracting basic information from Web pages, a ranking algorithm is adopted to offer an ordered list of data records to users. Experimental results show that BIM and RSEM extract data blocks and build the domain ontology accurately. In addition, relevant data records and basic information are extracted and ranked. Precision and recall results show that our proposed method is feasible and efficient.
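
    The block identification step can be pictured as cosine similarity between term vectors in the VSM; the vectors below (dimensions: title, author, ISBN, price, advert, login) and the threshold are invented to illustrate keeping domain-relevant blocks and dropping noisy ones.

        import numpy as np

        def cosine(u, v):
            return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

        # Hypothetical term-frequency vectors for page blocks and the book domain.
        domain_vector = np.array([3.0, 3.0, 2.0, 2.0, 0.0, 0.0])
        blocks = {
            "result_list": np.array([4.0, 3.0, 2.0, 3.0, 0.0, 0.0]),
            "sidebar_ads": np.array([0.0, 0.0, 0.0, 1.0, 5.0, 2.0]),
        }

        THRESHOLD = 0.7  # illustrative relevance cut-off
        for name, vec in blocks.items():
            sim = cosine(vec, domain_vector)
            print(f"{name}: similarity={sim:.2f} ->",
                  "keep" if sim >= THRESHOLD else "drop")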

  18. a Statistical Texture Feature for Building Collapse Information Extraction of SAR Image

    Science.gov (United States)

    Li, L.; Yang, H.; Chen, Q.; Liu, X.

    2018-04-01

    Synthetic Aperture Radar (SAR) has become one of the most important ways to extract post-disaster collapsed building information, due to its extreme versatility and almost all-weather, day-and-night working capability. In view of the fact that the inherent statistical distribution of speckle in SAR images is not used to extract collapsed building information, this paper proposes a novel texture feature based on statistical models of SAR images to extract collapsed buildings. In the proposed feature, the texture parameter of the G0 distribution of SAR images is used to reflect the uniformity of the target and thus to extract collapsed buildings. This feature not only considers the statistical distribution of SAR images, providing a more accurate description of object texture, but also applies to the extraction of collapsed building information from single-, dual- or full-polarization SAR data. RADARSAT-2 data of the Yushu earthquake, acquired on April 21, 2010, are used to present and analyze the performance of the proposed method. In addition, the applicability of this feature to SAR data with different polarizations is also analysed, which provides decision support for data selection in collapsed building information extraction.

  19. A method for automating the extraction of specialized information from the web

    NARCIS (Netherlands)

    Lin, L.; Liotta, A.; Hippisley, A.; Hao, Y.; Liu, J.; Wang, Y.; Cheung, Y-M.; Yin, H.; Jiao, L.; Ma, j.; Jiao, Y-C.

    2005-01-01

    The World Wide Web can be viewed as a gigantic distributed database including millions of interconnected hosts some of which publish information via web servers or peer-to-peer systems. We present here a novel method for the extraction of semantically rich information from the web in a fully

  20. Information analysis of iris biometrics for the needs of cryptology key extraction

    Directory of Open Access Journals (Sweden)

    Adamović Saša

    2013-01-01

    Full Text Available The paper presents a rigorous analysis of iris biometric information for the synthesis of an optimized system for the extraction of a high-quality cryptographic key. Estimates of local entropy and mutual information were used to identify the segments of the iris most suitable for this purpose. In order to optimize parameters, the corresponding wavelet transforms were tuned to obtain the highest possible entropy and the lowest mutual information in the transform domain, which set the framework for the synthesis of systems for the extraction of truly random sequences from iris biometrics without compromising authentication properties. [Project of the Ministry of Science of the Republic of Serbia, no. TR32054 and no. III44006]
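
    A minimal sketch of the entropy and mutual-information screening idea, assuming binarized iris-code segments; the segment layout and sizes are invented for illustration.

```python
# Rank binary iris-code segments by Shannon entropy and by mutual information
# with a reference segment: high entropy and low MI favour key extraction.
import numpy as np

def entropy(bits):
    p = bits.mean()
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def mutual_information(a, b):
    mi = 0.0
    for x in (0, 1):
        for y in (0, 1):
            pxy = np.mean((a == x) & (b == y))
            px, py = np.mean(a == x), np.mean(b == y)
            if pxy > 0:
                mi += pxy * np.log2(pxy / (px * py))
    return mi

rng = np.random.default_rng(0)
code = rng.integers(0, 2, size=(8, 256))        # 8 toy iris-code segments
for i, seg in enumerate(code):
    print(i, round(entropy(seg), 3), round(mutual_information(seg, code[0]), 3))
```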

  1. MedTime: a temporal information extraction system for clinical narratives.

    Science.gov (United States)

    Lin, Yu-Kai; Chen, Hsinchun; Brown, Randall A

    2013-12-01

    Temporal information extraction from clinical narratives is of critical importance to many clinical applications. We participated in the EVENT/TIMEX3 track of the 2012 i2b2 clinical temporal relations challenge and present our temporal information extraction system, MedTime. MedTime comprises a cascade of rule-based and machine-learning pattern recognition procedures. It achieved a micro-averaged F-measure of 0.88 in recognizing both clinical events and temporal expressions. We proposed and evaluated three time normalization strategies to normalize relative time expressions in clinical texts. The accuracy was 0.68 in normalizing temporal expressions of dates, times, durations, and frequencies. This study demonstrates and evaluates the integration of rule-based and machine-learning-based approaches for high-performance temporal information extraction from clinical narratives.
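
    MedTime itself is not reconstructed here, but the flavor of rule-based normalization of relative time expressions against an anchor date can be sketched as follows; the patterns and anchor are illustrative only.

```python
# Toy rule-based normalizer for relative temporal expressions.
import re
from datetime import date, timedelta

def normalize(expr, anchor):
    expr = expr.lower().strip()
    if expr == "today":
        return anchor
    if expr == "yesterday":
        return anchor - timedelta(days=1)
    m = re.match(r"(\d+)\s+(day|week|month)s?\s+ago", expr)
    if m:
        n, unit = int(m.group(1)), m.group(2)
        days = {"day": 1, "week": 7, "month": 30}[unit]   # coarse month length
        return anchor - timedelta(days=n * days)
    return None   # unhandled expression

admission = date(2012, 6, 15)                   # anchor: admission date
print(normalize("3 days ago", admission))       # 2012-06-12
print(normalize("two weeks ago", admission))    # None: spelled-out numbers unhandled
```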

  2. Research of building information extraction and evaluation based on high-resolution remote-sensing imagery

    Science.gov (United States)

    Cao, Qiong; Gu, Lingjia; Ren, Ruizhi; Wang, Lang

    2016-09-01

    Building extraction is currently important in the application of high-resolution remote sensing imagery. Quite a few algorithms are available for detecting building information; however, most of them still have obvious disadvantages, such as ignoring spectral information or trading extraction rate off against extraction accuracy. The purpose of this research is to develop an effective method to detect building information in Chinese GF-1 data. Firstly, image preprocessing is used to normalize the image, and image enhancement is used to highlight the useful information in the image. Secondly, multi-spectral information is analyzed. Subsequently, an improved morphological building index (IMBI) based on remote sensing imagery is proposed to obtain the candidate building objects. Furthermore, in order to refine the building objects and remove false objects, post-processing (e.g., shape features, the vegetation index and the water index) is employed. To validate the effectiveness of the proposed algorithm, the omission error (OE), commission error (CE), overall accuracy (OA) and Kappa are used in the final evaluation. The proposed method can not only effectively use spectral information and other basic features, but also avoid extracting excessive interference details from high-resolution remote sensing images. Compared to the original MBI algorithm, the proposed method reduces the OE by 33.14%; at the same time, the Kappa increases by 16.09%. In the experiments, IMBI achieved satisfactory results and outperformed the other algorithms in terms of both accuracy and visual inspection.
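
    A simplified sketch of the morphological-index idea: bright, compact structures respond strongly to multi-scale white top-hats of a brightness image. The original MBI (and the paper's IMBI) uses directional linear structuring elements and further post-processing; square windows are used here only to keep the sketch short.

```python
# Simplified multi-scale white top-hat "building index" (illustrative only).
import numpy as np
from scipy import ndimage

def simple_mbi(brightness, scales=(3, 7, 11, 15)):
    responses = []
    for s in scales:
        opened = ndimage.grey_opening(brightness, size=(s, s))
        responses.append(brightness - opened)        # white top-hat at scale s
    return np.mean(responses, axis=0)                # average multi-scale response

img = np.zeros((60, 60)); img[20:30, 20:32] = 1.0    # toy bright "building"
candidates = simple_mbi(img) > 0.3                   # threshold -> candidate mask
print(candidates.sum(), "candidate pixels")
```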

  3. Information Extraction of High-Resolution Remotely Sensed Image Based on Multiresolution Segmentation

    Directory of Open Access Journals (Sweden)

    Peng Shao

    2014-08-01

    Full Text Available The principle of multiresolution segmentation is presented in detail in this study, and the Canny algorithm is applied for edge detection of a remotely sensed image based on this principle. The target image was divided into regions based on object-oriented multiresolution segmentation and edge detection. Furthermore, an object hierarchy was created, and a series of features (water bodies, vegetation, roads, residential areas, bare land and other information) were extracted using spectral and geometrical features. The results indicate that edge detection has a positive effect on multiresolution segmentation, and the overall accuracy of information extraction reaches 94.6% according to the confusion matrix.
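
    A minimal sketch of the edge-detection step using scikit-image's Canny detector; in the study the edges constrain the multiresolution segmentation, which is omitted here, and all parameter values are illustrative.

```python
# Canny edge map of a noisy synthetic "image" (illustrative parameters).
import numpy as np
from skimage import feature, filters

image = np.zeros((80, 80))
image[20:60, 20:60] = 1.0
image += np.random.default_rng(0).normal(0, 0.05, image.shape)

smoothed = filters.gaussian(image, sigma=1.0)
edges = feature.canny(smoothed, sigma=1.5)   # boolean edge map
print(edges.sum(), "edge pixels")
```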

  4. End-to-end information extraction without token-level supervision

    DEFF Research Database (Denmark)

    Palm, Rasmus Berg; Hovy, Dirk; Laws, Florian

    2017-01-01

    Most state-of-the-art information extraction approaches rely on token-level labels to find the areas of interest in text. Unfortunately, these labels are time-consuming and costly to create, and consequently, not available for many real-life IE tasks. To make matters worse, token-level labels...... and output text. We evaluate our model on the ATIS data set, MIT restaurant corpus and the MIT movie corpus and compare to neural baselines that do use token-level labels. We achieve competitive results, within a few percentage points of the baselines, showing the feasibility of E2E information extraction...

  5. Extraction Method for Earthquake-Collapsed Building Information Based on High-Resolution Remote Sensing

    International Nuclear Information System (INIS)

    Chen, Peng; Wu, Jian; Liu, Yaolin; Wang, Jing

    2014-01-01

    At present, the extraction of earthquake disaster information from remote sensing data relies on visual interpretation. However, this technique cannot effectively and quickly obtain precise information for earthquake relief and emergency management. Collapsed buildings in the town of Zipingpu after the Wenchuan earthquake were used as a case study to validate two rapid extraction methods for earthquake-collapsed building information, based on pixel-oriented and object-oriented theories. The pixel-oriented method is based on multi-layer regional segments that embody the core layers and segments of the object-oriented method. The key idea is to mask, layer by layer, all image information, including that on the collapsed buildings. Compared with traditional techniques, the pixel-oriented method is innovative because it allows considerably faster computer processing. As for the object-oriented method, a multi-scale segmentation algorithm was applied to build a three-layer hierarchy. By analyzing the spectrum, texture, shape, location, and context of individual object classes in different layers, a fuzzy rule system was established for the extraction of earthquake-collapsed building information. We compared the two sets of results using three criteria: precision assessment, visual effect, and principle. Both methods can extract earthquake-collapsed building information quickly and accurately. The object-oriented method successfully overcomes the salt-and-pepper noise caused by the spectral diversity of high-resolution remote sensing data and addresses the problems of "same object, different spectra" and "same spectrum, different objects". With an overall accuracy of 90.38%, the method achieves more scientific and accurate results than the pixel-oriented method (76.84%). The object-oriented image analysis method can be extensively applied in the extraction of earthquake disaster information based on high-resolution remote sensing

  6. Using text mining techniques to extract phenotypic information from the PhenoCHF corpus.

    Science.gov (United States)

    Alnazzawi, Noha; Thompson, Paul; Batista-Navarro, Riza; Ananiadou, Sophia

    2015-01-01

    Phenotypic information locked away in unstructured narrative text presents significant barriers to information accessibility, both for clinical practitioners and for computerised applications used for clinical research purposes. Text mining (TM) techniques have previously been applied successfully to extract different types of information from text in the biomedical domain. They have the potential to be extended to allow the extraction of information relating to phenotypes from free text. To stimulate the development of TM systems that are able to extract phenotypic information from text, we have created a new corpus (PhenoCHF) that is annotated by domain experts with several types of phenotypic information relating to congestive heart failure. To ensure that systems developed using the corpus are robust to multiple text types, it integrates text from heterogeneous sources, i.e., electronic health records (EHRs) and scientific articles from the literature. We have developed several different phenotype extraction methods to demonstrate the utility of the corpus, and tested these methods on a further corpus, i.e., ShARe/CLEF 2013. Evaluation of our automated methods showed that PhenoCHF can facilitate the training of reliable phenotype extraction systems, which are robust to variations in text type. These results have been reinforced by evaluating our trained systems on the ShARe/CLEF corpus, which contains clinical records of various types. Like other studies within the biomedical domain, we found that solutions based on conditional random fields produced the best results, when coupled with a rich feature set. PhenoCHF is the first annotated corpus aimed at encoding detailed phenotypic information. The unique heterogeneous composition of the corpus has been shown to be advantageous in the training of systems that can accurately extract phenotypic information from a range of different text types. Although the scope of our annotation is currently limited to a single
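
    The CRF-plus-rich-features recipe that worked best in this study can be sketched with the open-source sklearn-crfsuite package; the sentences, labels and feature set below are toy stand-ins, not the PhenoCHF data or the authors' features.

```python
# Toy CRF for phenotype-style sequence labelling with sklearn-crfsuite.
import sklearn_crfsuite

def token_features(sent, i):
    w = sent[i]
    return {
        "lower": w.lower(),
        "is_title": w.istitle(),
        "suffix3": w[-3:],
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

sents = [["Patient", "has", "congestive", "heart", "failure"],
         ["No", "peripheral", "edema", "noted"]]
labels = [["O", "O", "B-PHEN", "I-PHEN", "I-PHEN"],
          ["O", "B-PHEN", "I-PHEN", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))
```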

  7. Terrain Extraction by Integrating Terrestrial Laser Scanner Data and Spectral Information

    Science.gov (United States)

    Lau, C. L.; Halim, S.; Zulkepli, M.; Azwan, A. M.; Tang, W. L.; Chong, A. K.

    2015-10-01

    The extraction of true terrain points from unstructured laser point cloud data is an important process for producing an accurate digital terrain model (DTM). However, most spatial filtering methods utilize only the geometric data to discriminate terrain points from non-terrain points. Point cloud filtering can also be improved by using the spectral information available with some scanners. Therefore, the objective of this study is to investigate the effectiveness of using the three channels (red, green and blue) of the colour image, captured by the built-in digital camera available in some terrestrial laser scanners (TLS), for terrain extraction. In this study, data acquisition was conducted at a mini replica landscape at Universiti Teknologi Malaysia (UTM), Skudai campus, using a Leica ScanStation C10. The spectral information of the coloured point clouds from selected sample classes was extracted for spectral analysis. Coloured points falling within the corresponding preset spectral thresholds are identified as points of that specific feature class. This terrain extraction process was implemented in Matlab. Results demonstrate that a passive image with higher spectral resolution is required in order to improve the output, because the low quality of the colour images captured by the sensor contributes to low separability in spectral reflectance. In conclusion, this study shows that spectral information can be used as a parameter for terrain extraction.
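
    A minimal numpy sketch of the spectral-threshold idea: coloured TLS points are kept when their RGB values fall inside a preset band for the terrain class. The thresholds and points are invented for illustration.

```python
# Keep coloured points whose RGB values fall inside a preset "terrain" band.
import numpy as np

# columns: x, y, z, r, g, b  (one row per TLS point)
points = np.array([
    [1.0, 2.0, 0.1, 120,  90,  60],   # brownish ground point
    [1.2, 2.1, 1.5,  40, 140,  50],   # green vegetation point
    [1.4, 2.2, 0.2, 130, 100,  70],   # ground point
])

low  = np.array([100,  60,  40])      # lower RGB bound for terrain
high = np.array([180, 130, 100])      # upper RGB bound for terrain
rgb = points[:, 3:6]
mask = np.all((rgb >= low) & (rgb <= high), axis=1)
terrain = points[mask]
print(terrain[:, :3])                 # xyz of extracted terrain points
```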

  8. Information retrieval and terminology extraction in online resources for patients with diabetes.

    Science.gov (United States)

    Seljan, Sanja; Baretić, Maja; Kucis, Vlasta

    2014-06-01

    Terminology use, as a means for information retrieval or document indexing, plays an important role in health literacy. Specific types of users, i.e. patients with diabetes, need access to various online resources (in foreign and/or native languages) when searching for information on self-education in basic diabetic knowledge, on self-care activities regarding the importance of dietetic food, medications and physical exercise, and on self-management of insulin pumps. Automatic extraction of corpus-based terminology from online texts, manuals or professional papers can help in building terminology lists or lists of "browsing phrases" useful in information retrieval or document indexing. Specific terminology lists represent an intermediate step between free-text search and controlled vocabulary, between users' demands and existing online resources in native and foreign languages. The research, aiming to detect the role of terminology in online resources, is conducted on English and Croatian manuals and Croatian online texts, and is divided into three interrelated parts: i) comparison of professional and popular terminology use; ii) evaluation of automatic statistically-based terminology extraction on English and Croatian texts; and iii) comparison and evaluation of extracted terminology performed on an English manual using statistical and hybrid approaches. Extracted terminology candidates are evaluated by comparison with three types of reference lists: a list created by a medical professional, a list of highly professional vocabulary contained in MeSH, and a list created by non-medical persons, made as the intersection of 15 lists. Results report on the use of popular and professional terminology in online diabetes resources, on the evaluation of automatically extracted terminology candidates in English and Croatian texts, and on the comparison of statistical and hybrid extraction methods on an English text. Evaluation of automatic and semi-automatic terminology extraction methods is performed by recall

  9. OpenCV-Based Nanomanipulation Information Extraction and the Probe Operation in SEM

    Directory of Open Access Journals (Sweden)

    Dongjie Li

    2015-02-01

    Full Text Available For the established telenanomanipulation system, a method of extracting location information and strategies for probe operation were studied in this paper. First, OpenCV machine learning algorithms were used to extract location information from SEM images, so that nanowires and the probe in SEM images can be automatically tracked and the region of interest (ROI) can be marked quickly. The locations of the nanowire and probe can then be extracted from the ROI. To study the probe operation strategy, the Van der Waals force between the probe and a nanowire was computed, from which the relevant operating parameters can be obtained. With these operating parameters, the nanowire can be pre-operated in a 3D virtual environment and an optimal path for the probe can be obtained. The actual probe then runs automatically under the telenanomanipulation system's control. Finally, experiments were carried out to verify the above methods, and the results show that the designed methods achieve the expected effect.
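
    The ROI-extraction step can be approximated with plain OpenCV template matching, a simpler stand-in for the machine-learning tracking the paper uses; the frame and template here are synthetic.

```python
# Locate a probe-like feature in a synthetic SEM frame by template matching.
import cv2
import numpy as np

frame = np.zeros((200, 200), dtype=np.uint8)
frame[90:110, 90:95] = 255                      # toy bright probe
template = frame[85:115, 85:100].copy()         # template cut around the probe

result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
x, y = max_loc                                  # top-left corner of best match
h, w = template.shape
print("ROI:", (x, y, w, h), "score:", round(float(max_val), 3))
```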

  10. Methods to extract information on the atomic and molecular states from scientific abstracts

    International Nuclear Information System (INIS)

    Sasaki, Akira; Ueshima, Yutaka; Yamagiwa, Mitsuru; Murata, Masaki; Kanamaru, Toshiyuki; Shirado, Tamotsu; Isahara, Hitoshi

    2005-01-01

    We propose a new application of information technology to recognize and extract expressions of atomic and molecular states from electronic forms of scientific abstracts. The present results will help scientists to understand the atomic states as well as the physics discussed in the articles. Combined with internet search engines, it will make it possible to collect not only atomic and molecular data but also broader scientific information over a wide range of research fields. (author)

  11. System and method for extracting physiological information from remotely detected electromagnetic radiation

    NARCIS (Netherlands)

    2016-01-01

    The present invention relates to a device and a method for extracting physiological information indicative of at least one health symptom from remotely detected electromagnetic radiation. The device comprises an interface (20) for receiving a data stream comprising remotely detected image data

  12. System and method for extracting physiological information from remotely detected electromagnetic radiation

    NARCIS (Netherlands)

    2015-01-01

    The present invention relates to a device and a method for extracting physiological information indicative of at least one health symptom from remotely detected electromagnetic radiation. The device comprises an interface (20) for receiving a data stream comprising remotely detected image data

  13. Network and Ensemble Enabled Entity Extraction in Informal Text (NEEEEIT) final report

    Energy Technology Data Exchange (ETDEWEB)

    Kegelmeyer, Philip W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shead, Timothy M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dunlavy, Daniel M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    This SAND report summarizes the activities and outcomes of the Network and Ensemble Enabled Entity Extraction in Informal Text (NEEEEIT) LDRD project, which addressed improving the accuracy of conditional random fields for named entity recognition through the use of ensemble methods.

  14. A construction scheme of web page comment information extraction system based on frequent subtree mining

    Science.gov (United States)

    Zhang, Xiaowen; Chen, Bingfeng

    2017-08-01

    Based on a frequent subtree mining algorithm, this paper proposes a construction scheme for a web page comment information extraction system, referred to as the FSM system. The overall system architecture and its modules are briefly introduced, the core of the system is then described in detail, and finally a system prototype is given.
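
    A toy sketch of the frequent-subtree idea behind such a system: serialize the tag structure of every DOM subtree and count occurrences, so that structures repeated across comment blocks can serve as extraction templates. The HTML and the support threshold are invented.

```python
# Count repeated DOM subtree shapes as candidate comment-block templates.
from collections import Counter
import xml.etree.ElementTree as ET

html = """<root>
  <div><span/><p/></div>
  <div><span/><p/></div>
  <div><img/></div>
</root>"""

def shape(node):
    """Serialize a subtree's tag structure, ignoring text and attributes."""
    return "<%s>%s" % (node.tag, "".join(shape(c) for c in node))

tree = ET.fromstring(html)
counts = Counter(shape(el) for el in tree.iter())
frequent = {s: n for s, n in counts.items() if n >= 2}   # support threshold = 2
print(frequent)    # {'<div><span><p>': 2, '<span>': 2, '<p>': 2}
```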

  15. EXTRACT

    DEFF Research Database (Denmark)

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra

    2016-01-01

    The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, manual sample annotation is a highly labor intensive process and requires familiarity with the terminologies used. We have the..., organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15-25% and helps curators to detect terms that would otherwise have been missed. Database URL: https://extract.hcmr.gr/

  16. Semi-automatic building extraction in informal settlements from high-resolution satellite imagery

    Science.gov (United States)

    Mayunga, Selassie David

    The extraction of man-made features from digital remotely sensed images is considered an important step underpinning management of human settlements in any country. Man-made features, and buildings in particular, are required for a variety of applications such as urban planning, creation of geographical information system (GIS) databases and urban city models. Traditional man-made feature extraction methods are very expensive in terms of equipment, are labour intensive, need well-trained personnel and cannot cope with changing environments, particularly in dense urban settlement areas. This research presents an approach for extracting buildings in dense informal settlement areas using high-resolution satellite imagery. The proposed system uses a novel strategy of extracting a building by measuring a single point at the approximate centre of the building. The fine measurement of the building outline is then effected using a modified snake model. The original snake model on which this framework is based incorporates an external constraint energy term which is tailored to preserving the convergence properties of the snake model; its application to unstructured objects would negatively affect their actual shapes. The external constraint energy term was therefore removed from the original snake model formulation, giving the model the ability to cope with the high variability of building shapes in informal settlement areas. The proposed building extraction system was tested on two areas with different situations. The first area was Tungi in Dar Es Salaam, Tanzania, where three sites were tested. This area is characterized by informal settlements, which are illegally formed within the city boundaries. The second area was Oromocto in New Brunswick, Canada, where two sites were tested. The Oromocto area is mostly flat and the buildings are constructed using similar materials. Qualitative and quantitative measures were employed to evaluate the accuracy of the results as well as the performance
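
    The seed-point-plus-snake strategy can be sketched with scikit-image's standard active contour (not the authors' modified energy formulation): a circle initialized around a measured centre point is refined toward the building outline. All values are illustrative.

```python
# Refine a circular initialization toward a toy building outline with a snake.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

image = np.zeros((120, 120))
image[40:80, 35:85] = 1.0                       # toy bright building roof

cx, cy, r = 60.0, 60.0, 35.0                    # single measured centre + radius
theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([cx + r * np.sin(theta), cy + r * np.cos(theta)])

snake = active_contour(gaussian(image, 3.0), init,
                       alpha=0.015, beta=10.0, gamma=0.001)
print(snake.shape)                              # refined outline vertices
```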

  17. RESEARCH ON REMOTE SENSING GEOLOGICAL INFORMATION EXTRACTION BASED ON OBJECT ORIENTED CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    H. Gao

    2018-04-01

    Full Text Available Northern Tibet belongs to the sub-cold arid climate zone of the plateau. It is rarely visited by people, and geological working conditions are very poor. However, the stratum exposures are good and human interference is very small. Therefore, research on the automatic classification and extraction of remote sensing geological information has typical significance and good application prospects. Based on object-oriented classification in northern Tibet, using Worldview2 high-resolution remote sensing data combined with tectonic information and image enhancement, the lithological spectral features, shape features, spatial locations and topological relations of various kinds of geological information are mined. By setting thresholds in a hierarchical classification, eight kinds of geological information were classified and extracted. Compared with existing geological maps, the accuracy analysis shows that the overall accuracy reached 87.8561 %, indicating that the object-oriented classification method is effective and feasible for this study area and provides a new idea for the automatic extraction of remote sensing geological information.

  18. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories.

    Science.gov (United States)

    Yang, Wei; Ai, Tinghua; Lu, Wei

    2018-04-19

    Crowdsourced trajectory data are an important resource for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively, ensuring there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors from the areas of Voronoi cells and the lengths of triangle edges; the road boundary detection model is then established by integrating the boundary descriptors and trajectory movement features (e.g., direction) within the DT. Third, the boundary detection model is used to detect the road boundary from the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting road boundaries from low-frequency GPS traces, multiple types of road structure, and different time intervals. Compared with two existing methods, the automatically extracted boundary information proved to be of higher quality.
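
    A minimal scipy sketch of the descriptor step: build the Delaunay triangulation of tracking points and treat unusually long triangle edges as candidate road-boundary evidence. The paper additionally uses Voronoi cell areas and movement direction, omitted here; the traces are synthetic.

```python
# Long Delaunay edges between two synthetic "lanes" hint at road boundaries.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
pts = np.vstack([np.column_stack([np.linspace(0, 100, 60), rng.normal(0, 0.5, 60)]),
                 np.column_stack([np.linspace(0, 100, 60), rng.normal(8, 0.5, 60)])])

tri = Delaunay(pts)
edges = set()
for simplex in tri.simplices:
    for i in range(3):
        a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
        edges.add((a, b))

lengths = np.array([np.linalg.norm(pts[a] - pts[b]) for a, b in edges])
boundary_like = lengths > np.percentile(lengths, 90)   # long cross-road edges
print(boundary_like.sum(), "candidate boundary edges")
```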

  19. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories

    Directory of Open Access Journals (Sweden)

    Wei Yang

    2018-04-01

    Full Text Available Crowdsourced trajectory data are an important resource for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively, ensuring there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors from the areas of Voronoi cells and the lengths of triangle edges; the road boundary detection model is then established by integrating the boundary descriptors and trajectory movement features (e.g., direction) within the DT. Third, the boundary detection model is used to detect the road boundary from the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting road boundaries from low-frequency GPS traces, multiple types of road structure, and different time intervals. Compared with two existing methods, the automatically extracted boundary information proved to be of higher quality.

  20. YAdumper: extracting and translating large information volumes from relational databases to structured flat files.

    Science.gov (United States)

    Fernández, José M; Valencia, Alfonso

    2004-10-12

    Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodic dumping of information requires considerable CPU time, disk and memory resources. YAdumper has been developed as a purpose-specific tool to deal with the integral structured information download of relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance.

  1. Extracting information from two-dimensional electrophoresis gels by partial least squares regression

    DEFF Research Database (Denmark)

    Jessen, Flemming; Lametsch, R.; Bendixen, E.

    2002-01-01

    Two-dimensional gel electrophoresis (2-DE) produces large amounts of data and extraction of relevant information from these data demands a cautious and time consuming process of spot pattern matching between gels. The classical approach of data analysis is to detect protein markers that appear or disappear depending on the experimental conditions. Such biomarkers are found by comparing the relative volumes of individual spots in the individual gels. Multivariate statistical analysis and modelling of 2-DE data for comparison and classification is an alternative approach utilising the combination of all proteins/spots in the gels. In the present study it is demonstrated how information can be extracted by multivariate data analysis. The strategy is based on partial least squares regression followed by variable selection to find proteins that individually or in combination with other proteins vary......
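
    The strategy can be sketched with scikit-learn's PLS regression: spot volumes (one column per spot) are regressed on the response, and spots with large absolute coefficients are kept as candidate discriminating proteins. The data and the selection rule are illustrative, not the study's.

```python
# PLS regression on toy spot-volume data, then crude coefficient-based selection.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))            # 20 gels x 50 spot volumes
y = X[:, 3] - 0.5 * X[:, 17] + rng.normal(0, 0.1, 20)   # 2 informative spots

pls = PLSRegression(n_components=2)
pls.fit(X, y)
coef = np.abs(pls.coef_).ravel()
selected = np.argsort(coef)[-5:]         # keep spots with largest |coefficient|
print("candidate spots:", sorted(selected))
```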

  2. From remote sensing data about information extraction for 3D geovisualization - Development of a workflow

    International Nuclear Information System (INIS)

    Tiede, D.

    2010-01-01

    With the increased availability of high (spatial) resolution remote sensing imagery since the late nineties, the need to develop operative workflows for the automated extraction, provision and communication of information from such data has grown. Monitoring requirements, aimed at the implementation of environmental or conservation targets, management of (environmental-) resources, and regional planning, as well as international initiatives, especially the joint initiative of the European Commission and ESA (European Space Agency) for Global Monitoring for Environment and Security (GMES), also play a major part. This thesis addresses the development of an integrated workflow for the automated provision of information derived from remote sensing data. Considering the applied data and fields of application, this work aims to design the workflow to be as generic as possible. The following research questions are discussed: What are the requirements of a workflow architecture that seamlessly links the individual workflow elements in a timely manner and effectively secures the accuracy of the extracted information? How can the workflow retain its efficiency if large amounts of data are processed? How can the workflow be improved with regard to automated object-based image analysis (OBIA)? Which recent developments could be of use? What are the limitations, or which workarounds could be applied, in order to generate relevant results? How can relevant information be prepared in a target-oriented way and communicated effectively? How can the more recently developed, freely available virtual globes be used for the delivery of conditioned information under consideration of the third dimension as an additional, explicit carrier of information? Based on case studies comprising different data sets and fields of application, it is demonstrated how methods to extract and process information as well as to effectively communicate results can be improved and successfully combined within one workflow. It is shown that (1

  3. Addressing Risk Assessment for Patient Safety in Hospitals through Information Extraction in Medical Reports

    Science.gov (United States)

    Proux, Denys; Segond, Frédérique; Gerbier, Solweig; Metzger, Marie Hélène

    Hospital Acquired Infections (HAI) are a real burden for doctors and risk surveillance experts. The impact on patients' health and the related healthcare costs are very significant and a major concern even for rich countries. Furthermore, the data required to evaluate the threat are generally not available to experts, which prevents fast reaction. However, recent advances in computational intelligence techniques such as information extraction, risk pattern detection in documents and decision support systems now make it possible to address this problem.

  4. From Specific Information Extraction to Inferences: A Hierarchical Framework of Graph Comprehension

    Science.gov (United States)

    2004-09-01

    The skill to interpret the information displayed in graphs is so important that the National Council of Teachers of Mathematics has created guidelines to ensure that students learn these skills (NCTM: Standards for Mathematics, 2003). These guidelines are based primarily on the extraction of ... graphical perception.

  5. Extracting breathing rate information from a wearable reflectance pulse oximeter sensor.

    Science.gov (United States)

    Johnston, W S; Mendelson, Y

    2004-01-01

    The integration of multiple vital physiological measurements could help combat medics and field commanders to better predict a soldier's health condition and enhance their ability to perform remote triage procedures. In this paper we demonstrate the feasibility of extracting accurate breathing rate information from a photoplethysmographic signal that was recorded by a reflectance pulse oximeter sensor mounted on the forehead and subsequently processed by simple time-domain filtering and frequency-domain Fourier analysis.
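
    The described processing chain, band-pass filtering around plausible respiratory frequencies followed by a Fourier analysis, can be sketched as follows; the synthetic PPG and the filter settings are illustrative.

```python
# Estimate breathing rate from a synthetic PPG: band-pass + dominant FFT peak.
import numpy as np
from scipy import signal

fs = 100.0                                    # sample rate (Hz)
t = np.arange(0, 60, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 0.25 * t)  # cardiac + respiratory

b, a = signal.butter(2, [0.1, 0.5], btype="bandpass", fs=fs)
resp = signal.filtfilt(b, a, ppg)             # keep ~6-30 breaths/min band

freqs = np.fft.rfftfreq(len(resp), 1 / fs)
spectrum = np.abs(np.fft.rfft(resp))
rate_hz = freqs[np.argmax(spectrum)]
print(round(rate_hz * 60, 1), "breaths/min")  # ~15 for the 0.25 Hz component
```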

  6. Extraction of land cover change information from ENVISAT-ASAR data in Chengdu Plain

    Science.gov (United States)

    Xu, Wenbo; Fan, Jinlong; Huang, Jianxi; Tian, Yichen; Zhang, Yong

    2006-10-01

    Land cover data are essential to most global change research objectives, including the assessment of current environmental conditions and the simulation of future environmental scenarios that ultimately lead to public policy development. The Chinese Academy of Sciences generated a nationwide land cover database in order to carry out the quantification and spatial characterization of land use/cover changes (LUCC) in the 1990s. In order to improve the reliability of the database, it must be updated regularly, but it is difficult to obtain remote sensing data for extracting land cover change information at large scale. In particular, it is hard to acquire optical remote sensing data over the Chengdu plain, so the objective of this research was to evaluate multitemporal ENVISAT advanced synthetic aperture radar (ASAR) data for extracting land cover change information. Based on fieldwork and the nationwide 1:100000 land cover database, the paper assesses several land cover changes in the Chengdu plain, for example: crop to buildings, forest to buildings, and forest to bare land. The results show that ENVISAT ASAR data have great potential for extracting land cover change information.

  7. KneeTex: an ontology-driven system for information extraction from MRI reports.

    Science.gov (United States)

    Spasić, Irena; Zhao, Bo; Jones, Christopher B; Button, Kate

    2015-01-01

    In the realm of knee pathology, magnetic resonance imaging (MRI) has the advantage of visualising all structures within the knee joint, which makes it a valuable tool for increasing diagnostic accuracy and planning surgical treatments. Therefore, clinical narratives found in MRI reports convey valuable diagnostic information. A range of studies have proven the feasibility of natural language processing for information extraction from clinical narratives. However, no study focused specifically on MRI reports in relation to knee pathology, possibly due to the complexity of knee anatomy and a wide range of conditions that may be associated with different anatomical entities. In this paper we describe KneeTex, an information extraction system that operates in this domain. As an ontology-driven information extraction system, KneeTex makes active use of an ontology to strongly guide and constrain text analysis. We used automatic term recognition to facilitate the development of a domain-specific ontology with sufficient detail and coverage for text mining applications. In combination with the ontology, high regularity of the sublanguage used in knee MRI reports allowed us to model its processing by a set of sophisticated lexico-semantic rules with minimal syntactic analysis. The main processing steps involve named entity recognition combined with coordination, enumeration, ambiguity and co-reference resolution, followed by text segmentation. Ontology-based semantic typing is then used to drive the template filling process. We adopted an existing ontology, TRAK (Taxonomy for RehAbilitation of Knee conditions), for use within KneeTex. The original TRAK ontology expanded from 1,292 concepts, 1,720 synonyms and 518 relationship instances to 1,621 concepts, 2,550 synonyms and 560 relationship instances. This provided KneeTex with a very fine-grained lexico-semantic knowledge base, which is highly attuned to the given sublanguage. Information extraction results were evaluated

  8. SAR matrices: automated extraction of information-rich SAR tables from large compound data sets.

    Science.gov (United States)

    Wassermann, Anne Mai; Haebel, Peter; Weskamp, Nils; Bajorath, Jürgen

    2012-07-23

    We introduce the SAR matrix data structure that is designed to elucidate SAR patterns produced by groups of structurally related active compounds, which are extracted from large data sets. SAR matrices are systematically generated and sorted on the basis of SAR information content. Matrix generation is computationally efficient and enables processing of large compound sets. The matrix format is reminiscent of SAR tables, and SAR patterns revealed by different categories of matrices are easily interpretable. The structural organization underlying matrix formation is more flexible than standard R-group decomposition schemes. Hence, the resulting matrices capture SAR information in a comprehensive manner.

  9. Comparison of methods of extracting information for meta-analysis of observational studies in nutritional epidemiology

    Directory of Open Access Journals (Sweden)

    Jong-Myon Bae

    2016-01-01

    Full Text Available OBJECTIVES: A common method for conducting a quantitative systematic review (QSR) for observational studies related to nutritional epidemiology is the "highest versus lowest intake" method (HLM), in which only the information concerning the effect size (ES) of the highest category of a food item is collected, relative to its lowest category. However, in the interval collapsing method (ICM), a method suggested to enable maximum utilization of all available information, the ES information is collected by collapsing all categories into a single category. This study aimed to compare the ES and summary effect size (SES) between the HLM and ICM. METHODS: A QSR evaluating citrus fruit intake and the risk of pancreatic cancer and calculating the SES by using the HLM was selected. The ES and SES were estimated by performing a meta-analysis using the fixed-effect model. The directionality and statistical significance of the ES and SES were used as criteria for determining the concordance between the HLM and ICM outcomes. RESULTS: No significant differences were observed in the directionality of the SES extracted by using the HLM or ICM. The application of the ICM, which uses a broader information base, yielded more consistent ES and SES, and narrower confidence intervals, than the HLM. CONCLUSIONS: The ICM is advantageous over the HLM owing to its higher statistical accuracy in extracting information for QSR on nutritional epidemiology. The application of the ICM should hence be recommended for future studies.
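
    The fixed-effect pooling used to obtain the SES is standard inverse-variance weighting over per-study (log) effect sizes, regardless of whether each ES came from the HLM or the ICM; the numbers below are invented for the worked example.

```python
# Fixed-effect meta-analysis: inverse-variance pooling of log effect sizes.
import numpy as np

log_es = np.array([-0.22, -0.10, -0.35])      # per-study log relative risks (toy)
se     = np.array([0.10, 0.08, 0.15])         # their standard errors (toy)

w = 1.0 / se**2                               # inverse-variance weights
pooled = np.sum(w * log_es) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(np.exp(pooled), np.exp(ci[0]), np.exp(ci[1]))   # SES and 95% CI as RR
```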

  10. Feature extraction and learning using context cue and Rényi entropy based mutual information

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    information. In particular, for feature extraction, we develop a new set of kernel descriptors, Context Kernel Descriptors (CKD), which enhance the original KDES by embedding the spatial context into the descriptors. Context cues contained in the context kernel enforce some degree of spatial consistency, thus improving the robustness of CKD. For feature learning and reduction, we propose a novel codebook learning method, based on a Rényi quadratic entropy based mutual information measure called Cauchy-Schwarz Quadratic Mutual Information (CSQMI), to learn a compact and discriminative CKD codebook. Projecting...... as the information about the underlying labels of the CKD using CSQMI. Thus the resulting codebook and reduced CKD are discriminative. We verify the effectiveness of our method on several public image benchmark datasets such as YaleB, Caltech-101 and CIFAR-10, as well as a challenging chicken feet dataset of our own

  11. Method of extracting significant trouble information of nuclear power plants using probabilistic analysis technique

    International Nuclear Information System (INIS)

    Shimada, Yoshio; Miyazaki, Takamasa

    2005-01-01

    In order to analyze and evaluate large amounts of trouble information from overseas nuclear power plants, it is necessary to select the information that is significant in terms of both safety and reliability. In this research, a method was developed for efficiently and simply classifying the degrees of importance of components in terms of safety and reliability, paying attention to the root-cause components appearing in the information. Regarding safety, the reactor core damage frequency (CDF), which is used in the probabilistic analysis of a reactor, was used. Regarding reliability, the automatic plant trip probability (APTP), which is used in the probabilistic analysis of automatic reactor trips, was used. These two aspects were reflected in the development of criteria for classifying the degrees of importance of components. By applying these criteria, a simple method of extracting significant trouble information from overseas nuclear power plants was developed. (author)

  12. Automated concept-level information extraction to reduce the need for custom software and rules development.

    Science.gov (United States)

    D'Avolio, Leonard W; Nguyen, Thien M; Goryachev, Sergey; Fiore, Louis D

    2011-01-01

    Despite at least 40 years of promising empirical performance, very few clinical natural language processing (NLP) or information extraction systems currently contribute to medical science or care. The authors address this gap by reducing the need for custom software and rules development with a graphical user interface-driven, highly generalizable approach to concept-level retrieval. A 'learn by example' approach combines features derived from open-source NLP pipelines with open-source machine learning classifiers to automatically and iteratively evaluate top-performing configurations. The Fourth i2b2/VA Shared Task Challenge's concept extraction task provided the data sets and metrics used to evaluate performance. Top F-measure scores for each of the tasks were medical problems (0.83), treatments (0.82), and tests (0.83). Recall lagged precision in all experiments, while precision was near or above 0.90 in all tasks. With no customization for the tasks and less than 5 min of end-user time to configure and launch each experiment, the average F-measure was 0.83, one point behind the mean F-measure of the 22 entrants in the competition. Strong precision scores indicate the potential of applying the approach to more specific clinical information extraction tasks. There was not one best configuration, supporting an iterative approach to model creation. Acceptable levels of performance can be achieved using fully automated and generalizable approaches to concept-level information extraction. The described implementation and related documentation are available for download.

  13. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

    Full Text Available In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS). This idea is based on the fact that texture images carry emotion-related information. The feature extraction is derived from a time-frequency representation, the spectrogram image. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII) derived from the spectrogram image is extracted using Laws' masks to characterize the emotional state. In order to evaluate the effectiveness of the proposed emotion recognition in different languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, and one self-recorded database (KHUSC-EmoDB), to evaluate the cross-corpus performance. The results of the proposed ESS system are presented using a support vector machine (SVM) as the classifier. Experimental results show that the proposed TII-based feature extraction, inspired by visual perception, can provide significant classification performance for ESS systems. The two-dimensional (2-D) TII feature can provide discrimination between different emotions in visual expressions beyond what pitch and formant tracks convey. In addition, de-noising in 2-D images can be completed more easily than de-noising in 1-D speech.

  14. An Accurate Integral Method for Vibration Signal Based on Feature Information Extraction

    Directory of Open Access Journals (Sweden)

    Yong Zhu

    2015-01-01

    Full Text Available After summarizing the advantages and disadvantages of current integral methods, a novel vibration signal integral method based on feature information extraction is proposed. This method takes full advantage of the self-adaptive filtering and waveform-correction properties of ensemble empirical mode decomposition in dealing with nonlinear and nonstationary signals, and merges the strengths of kurtosis, mean square error, energy, and singular value decomposition for signal feature extraction. The values of these four indexes are combined into a feature vector. Then, the connotative characteristic components in the vibration signal are accurately extracted by a Euclidean distance search, and the desired integral signals are precisely reconstructed. With this method, the interference from invalid components such as trend items and noise, which plagues traditional methods, is solved. The large cumulative error of traditional time-domain integration is effectively overcome, and the large low-frequency error of traditional frequency-domain integration is successfully avoided. Compared with the traditional integral methods, this method is outstanding at removing noise while retaining useful feature information, and shows higher accuracy and superiority.
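
    For contrast with the proposed method, the classical frequency-domain integration it improves on (division by jω, with low-frequency bins suppressed to control drift) can be sketched as follows; the cutoff and signal are illustrative.

```python
# Frequency-domain integration of a toy acceleration signal ("omega arithmetic").
import numpy as np

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
accel = np.sin(2 * np.pi * 50 * t)            # toy acceleration signal

A = np.fft.rfft(accel)
f = np.fft.rfftfreq(len(accel), 1 / fs)
omega = 2 * np.pi * f
V = np.zeros_like(A)
keep = f > 5.0                                 # suppress drift-prone low bins
V[keep] = A[keep] / (1j * omega[keep])         # integrate: divide by j*omega
velocity = np.fft.irfft(V, n=len(accel))
print(np.abs(velocity).max())                  # ~1/(2*pi*50) for the 50 Hz tone
```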

  15. A cascade of classifiers for extracting medication information from discharge summaries

    Directory of Open Access Journals (Sweden)

    Halgrim Scott

    2011-07-01

    Full Text Available Background: Extracting medication information from clinical records has many potential applications, and recently published research, systems, and competitions reflect an interest therein. Much of the early extraction work involved rules and lexicons, but more recently machine learning has been applied to the task. Methods: We present a hybrid system consisting of two parts. The first part, field detection, uses a cascade of statistical classifiers to identify medication-related named entities. The second part uses simple heuristics to link those entities into medication events. Results: The system achieved performance that is comparable to other approaches to the same task. This performance is further improved by adding features that reference external medication name lists. Conclusions: This study demonstrates that our hybrid approach outperforms purely statistical or rule-based systems. The study also shows that a cascade of classifiers works better than a single classifier in extracting medication information. The system is available as is upon request from the first author.

  16. Three-dimensional information extraction from GaoFen-1 satellite images for landslide monitoring

    Science.gov (United States)

    Wang, Shixin; Yang, Baolin; Zhou, Yi; Wang, Futao; Zhang, Rui; Zhao, Qing

    2018-05-01

    To more efficiently use GaoFen-1 (GF-1) satellite images for landslide emergency monitoring, a Digital Surface Model (DSM) can be generated from GF-1 across-track stereo image pairs to build a terrain dataset. This study proposes a landslide 3D information extraction method based on the terrain changes of slope objects. The slope objects are merged from segmented image objects that have similar aspects, and the terrain changes are calculated from the post-disaster Digital Elevation Model (DEM) from GF-1 and the pre-disaster DEM from GDEM V2. A high mountain landslide that occurred in Wenchuan County, Sichuan Province, is used for a 3D information extraction test. The extracted total area of the landslide is 22.58 ha; the displaced earth volume is 652,100 m3; and the average sliding direction is 263.83°. The corresponding accuracies are 0.89, 0.87 and 0.95, respectively. Thus, the proposed method expands the application of GF-1 satellite images to the field of landslide emergency monitoring.
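
    The terrain-change step reduces to simple arithmetic: the displaced volume is the cell-wise DEM difference, integrated over the changed slope objects, times the cell area. The numbers below are toy values, not the Wenchuan results.

```python
# Displaced volume from a pre/post DEM difference over a change mask.
import numpy as np

cell = 10.0                                    # DEM cell size (m)
pre  = np.full((50, 50), 1200.0)               # pre-disaster DEM (GDEM V2 role)
post = pre.copy()
post[10:30, 10:40] -= 2.5                      # toy 2.5 m surface lowering

diff = post - pre
mask = np.abs(diff) > 0.5                      # cells with real terrain change
volume = np.abs(diff[mask]).sum() * cell**2    # displaced volume (m^3)
print(volume)                                  # 600 cells * 2.5 m * 100 m^2 = 150000
```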

  17. THE EXTRACTION OF INDOOR BUILDING INFORMATION FROM BIM TO OGC INDOORGML

    Directory of Open Access Journals (Sweden)

    T.-A. Teo

    2017-07-01

    Full Text Available Indoor Spatial Data Infrastructure (indoor-SDI) is an important SDI for geospatial analysis and location-based services. A Building Information Model (BIM) has a high degree of detail in the geometric and semantic information of a building. This study proposes direct conversion schemes to extract indoor building information from BIM into OGC IndoorGML. The major steps of the research include (1) topological conversion from the building model into an indoor network model; and (2) generation of IndoorGML. The topological conversion is the major process of generating and mapping nodes and edges from IFC to IndoorGML. A node represents every space (e.g. IfcSpace) and object (e.g. IfcDoor) in the building, while an edge shows the relationships between nodes. According to the definition of IndoorGML, the topological model in the dual space is also represented as a set of nodes and edges. These definitions of IndoorGML are the same as in the indoor network. Therefore, we can extract the necessary data in the indoor network and easily convert them into IndoorGML based on the IndoorGML schema. The experiment utilized a real BIM model to examine the proposed method. The experimental results indicate that the 3D indoor model (i.e. the IndoorGML model) can be automatically derived from the IFC model by the proposed procedure. In addition, the geometry and attributes of building elements are completely and correctly converted from BIM to indoor-SDI.

  18. Methods from Information Extraction from LIDAR Intensity Data and Multispectral LIDAR Technology

    Science.gov (United States)

    Scaioni, M.; Höfle, B.; Baungarten Kersting, A. P.; Barazzetti, L.; Previtali, M.; Wujanz, D.

    2018-04-01

    LiDAR is a consolidated technology for topographic mapping and 3D reconstruction, which is implemented on several platforms. On the other hand, the exploitation of the geometric information has been coupled with the use of laser intensity, which may provide additional data for multiple purposes. This option has been emphasized by the availability of sensors working at different wavelengths, able to provide additional information for the classification of surfaces and objects. Several applications of monochromatic and multi-spectral LiDAR data have already been developed in different fields: geosciences, agriculture, forestry, building and cultural heritage. The use of intensity data to extract measures of point cloud quality has also been developed. This paper gives an overview of the state of the art of these techniques and presents the modern technologies for the acquisition of multispectral LiDAR data. In addition, the ISPRS WG III/5 on 'Information Extraction from LiDAR Intensity Data' has collected and made available a few open data sets to support scholars doing research in this field. This service is presented, and the data sets delivered so far are described.

  19. The effect of informed consent on stress levels associated with extraction of impacted mandibular third molars.

    Science.gov (United States)

    Casap, Nardy; Alterman, Michael; Sharon, Guy; Samuni, Yuval

    2008-05-01

    To evaluate the effect of informed consent on stress levels associated with removal of impacted mandibular third molars. A total of 60 patients scheduled for extraction of impacted mandibular third molars participated in this study. The patients were unaware of the study's objectives. Data from 20 patients established the baseline levels of electrodermal activity (EDA). The remaining 40 patients were randomly assigned to 2 equal groups receiving either a detailed informed consent document, disclosing the possible risks involved with the surgery, or a simplified version. Pulse, blood pressure, and EDA were monitored before, during, and after completion of the consent document. Changes in EDA, but not in blood pressure, were measured on completion of either version of the consent document. A greater increase in EDA was associated with the detailed version of the consent document (P = .004). A similar concomitant, although nonsignificant, increase in pulse values was observed on completion of both versions. Completion of an over-disclosed informed consent document is thus associated with changes in physiological parameters. The results suggest that an overly detailed listing and disclosure of risks before extraction of impacted mandibular third molars can increase patient stress.

  20. METHODS FROM INFORMATION EXTRACTION FROM LIDAR INTENSITY DATA AND MULTISPECTRAL LIDAR TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    M. Scaioni

    2018-04-01

    Full Text Available LiDAR is a consolidated technology for topographic mapping and 3D reconstruction, which is implemented on several platforms. On the other hand, the exploitation of the geometric information has been coupled with the use of laser intensity, which may provide additional data for multiple purposes. This option has been emphasized by the availability of sensors working at different wavelengths, able to provide additional information for the classification of surfaces and objects. Several applications of monochromatic and multi-spectral LiDAR data have already been developed in different fields: geosciences, agriculture, forestry, building and cultural heritage. The use of intensity data to extract measures of point cloud quality has also been developed. This paper gives an overview of the state of the art of these techniques and presents the modern technologies for the acquisition of multispectral LiDAR data. In addition, the ISPRS WG III/5 on ‘Information Extraction from LiDAR Intensity Data’ has collected and made available a few open data sets to support scholars doing research in this field. This service is presented, and the data sets delivered so far are described.

  1. About increasing informativity of diagnostic system of asynchronous electric motor by extracting additional information from values of consumed current parameter

    Science.gov (United States)

    Zhukovskiy, Y.; Korolev, N.; Koteleva, N.

    2018-05-01

    This article is devoted to expanding the possibilities of assessing the technical state of asynchronous electric drives from their current consumption, and to increasing the information capacity of diagnostic methods under conditions of limited access to the equipment and incomplete information. The spectral analysis of the electric drive current can be supplemented by an analysis of the components of the current Park's vector. The evolution of the hodograph at the moment defects appear and develop was studied using the example of current asymmetry in the phases of an induction motor. The result of the study is a set of new diagnostic parameters for the asynchronous electric drive. It was shown that the proposed diagnostic parameters allow the type and level of a defect to be determined without stopping the equipment and taking it out of service for repair. Modern digital control and monitoring systems can use the proposed parameters, based on the stator current of the electrical machine, to improve the accuracy and reliability of obtaining diagnostic patterns and predicting their changes, in order to improve equipment maintenance systems. This approach can also be used in systems and objects where there are significant parasitic vibrations and unsteady loads. The extraction of useful information can be carried out in electric drive systems that include a power electric converter.
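
    A hedged sketch of the Park's vector components computed from the three phase currents: a healthy machine traces a near-circular hodograph, while phase asymmetry distorts it, which is what the proposed parameters quantify. The transformation is the standard one used in Park's vector approaches; the currents are synthetic.

```python
# Park's vector hodograph from three-phase currents with slight asymmetry.
import numpy as np

t = np.arange(0, 0.1, 1e-4)
w = 2 * np.pi * 50
ia = 10.0 * np.sin(w * t)
ib = 10.8 * np.sin(w * t - 2 * np.pi / 3)      # slight asymmetry in phase B
ic = 10.0 * np.sin(w * t + 2 * np.pi / 3)

# standard Park's vector (alpha-beta) components
i_d = np.sqrt(2 / 3) * ia - (1 / np.sqrt(6)) * (ib + ic)
i_q = (1 / np.sqrt(2)) * (ib - ic)

radius = np.hypot(i_d, i_q)
distortion = (radius.max() - radius.min()) / radius.mean()
print(round(distortion, 4))                    # ~0 for a perfectly symmetric machine
```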

  2. Multi-Paradigm and Multi-Lingual Information Extraction as Support for Medical Web Labelling Authorities

    Directory of Open Access Journals (Sweden)

    Martin Labsky

    2010-10-01

    Full Text Available Until recently, quality labelling of medical web content has been a predominantly manual activity. However, advances in automated text processing have opened the way to computerised support of this activity. The core enabling technology is information extraction (IE). However, the heterogeneity of websites offering medical content imposes particular requirements on the IE techniques to be applied. In the paper we discuss these requirements and describe a multi-paradigm approach to IE that addresses them. Experiments on multi-lingual data are reported. The research has been carried out within the EU MedIEQ project.

  3. Scholarly Information Extraction Is Going to Make a Quantum Leap with PubMed Central (PMC).

    Science.gov (United States)

    Matthies, Franz; Hahn, Udo

    2017-01-01

    With the increasing availability of complete full texts (journal articles), rather than their surrogates (titles, abstracts), as resources for text analytics, entirely new opportunities arise for information extraction and text mining from scholarly publications. Yet, we gathered evidence that a range of problems are encountered in full-text processing when biomedical text analytics simply reuses existing NLP pipelines that were developed on the basis of abstracts rather than full texts. We conducted experiments with four different relation extraction engines, all of which were top performers in previous BioNLP Event Extraction Challenges. We found that abstract-trained engines lose up to 6.6 F-score percentage points when run on full-text data. Hence, the reuse of existing abstract-based NLP software in a full-text scenario is considered harmful because of heavy performance losses. Given the current lack of annotated full-text resources to train on, our study quantifies the price paid for this shortcut.

  4. Accurate facade feature extraction method for buildings from three-dimensional point cloud data considering structural information

    Science.gov (United States)

    Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia

    2018-05-01

    Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient approach to 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes a highly accurate method for extracting building facade features from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping between the image features and the 3D PCD, and optimization of the initial 3D PCD facade features using structural information. The method is validated using a case study, whose results show that it extracts the 3D PCD facade features of buildings more accurately and continuously. In addition, its effectiveness is demonstrated by comparison with the range-image extraction method and with the optical-image extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.

  5. Developing an Approach to Prioritize River Restoration using Data Extracted from Flood Risk Information System Databases.

    Science.gov (United States)

    Vimal, S.; Tarboton, D. G.; Band, L. E.; Duncan, J. M.; Lovette, J. P.; Corzo, G.; Miles, B.

    2015-12-01

    Prioritizing river restoration requires information on river geometry. In many US states, detailed river geometry has been collected for floodplain mapping and is available in Flood Risk Information Systems (FRIS). In particular, North Carolina has, for its 100 counties, developed a database of numerous HEC-RAS models, which are available through its Flood Risk Information System (FRIS). These models, which include over 260 variables, were developed and updated by numerous contractors. They contain detailed surveyed or LiDAR-derived cross-sections and modeled flood extents for different extreme-event return periods. In this work, data from over 4700 HEC-RAS models were integrated and upscaled to utilize detailed cross-section information and 100-year modeled flood extent information, enabling river restoration prioritization for the entire state of North Carolina. We developed procedures to extract geomorphic properties such as the entrenchment ratio and the incision ratio from these models. The entrenchment ratio quantifies the vertical containment of a river, and thereby its vulnerability to flooding, while the incision ratio quantifies depth per unit width. A map of the entrenchment ratio for the whole state was derived by linking these model results to a geodatabase, and a ranking of highly entrenched counties was obtained, enabling prioritization for flood allowance and mitigation. The results were shared through HydroShare, and web maps were developed for their visualization using the Google Maps Engine API.
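
    A minimal sketch of this kind of geomorphic-property extraction, assuming Rosgen-style definitions (flood-prone width measured at twice the maximum bankfull depth; entrenchment ratio = flood-prone width / bankfull width) and a toy cross-section standing in for HEC-RAS survey data; the function names are hypothetical:

```python
import numpy as np

def channel_ratios(station, elevation, bankfull_elev):
    """Entrenchment and incision ratios for one surveyed cross-section."""
    thalweg = elevation.min()
    max_depth = bankfull_elev - thalweg
    flood_prone_elev = thalweg + 2.0 * max_depth   # Rosgen convention (assumed)

    def width_at(level):
        wet = station[elevation <= level]           # crude width estimate
        return wet.max() - wet.min() if wet.size else 0.0

    bankfull_width = width_at(bankfull_elev)
    flood_prone_width = width_at(flood_prone_elev)
    entrenchment = flood_prone_width / bankfull_width
    incision = max_depth / bankfull_width           # depth per unit width, as in the text
    return entrenchment, incision

# Toy trapezoidal cross-section (station and elevation in metres)
station = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
elevation = np.array([8.0, 4.0, 1.0, 0.0, 1.0, 4.0, 8.0])
print(channel_ratios(station, elevation, bankfull_elev=2.0))
```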

  6. Extracting Low-Frequency Information from Time Attenuation in Elastic Waveform Inversion

    Science.gov (United States)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong

    2017-03-01

    Low-frequency information is crucial for recovering background velocity, but the lack of low-frequency information in field data makes inversion impractical without accurate initial models. Laplace-Fourier domain waveform inversion can recover a smooth model from real data without low-frequency information, which can then be used as an ideal starting model for subsequent inversion. In general, it also starts with low frequencies and includes higher frequencies at later inversion stages, the difference being that its ultralow-frequency information comes from the Laplace-Fourier domain. Meanwhile, a direct implementation of the Laplace-transformed wavefield using frequency-domain inversion is also very convenient. However, because broad frequency bands are often used in pure time-domain waveform inversion, it is difficult to extract wavefields dominated by low frequencies in that case. In this paper, low-frequency components are constructed by introducing time attenuation into the recorded residuals; the rest of the method is identical to traditional time-domain inversion. Time windowing and frequency filtering are also applied to mitigate the ambiguity of the inverse problem. We can therefore start at low frequencies and move to higher frequencies. The experiment shows that the proposed method can achieve a good inversion result starting from a linear initial model and records without low-frequency information.
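
    The core of the idea, damping the recorded residuals so that their usable information content shifts towards low frequencies, can be sketched in a few lines (the synthetic trace and damping constants are invented for illustration):

```python
import numpy as np

def damp_residual(residual, dt, sigma):
    """Apply time attenuation exp(-sigma * t) to a residual trace.

    Damping suppresses late arrivals and, as in the Laplace-Fourier
    idea, emphasizes the low-frequency content of the residual.
    """
    t = np.arange(residual.shape[-1]) * dt
    return residual * np.exp(-sigma * t)

# Toy residual: two band-limited arrivals on a 2 s trace sampled at 2 ms
dt = 0.002
t = np.arange(0.0, 2.0, dt)
residual = np.sin(2 * np.pi * 15 * t) * np.exp(-((t - 0.4) / 0.05) ** 2) \
         + np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 1.5) / 0.05) ** 2)

for sigma in (0.0, 2.0, 8.0):
    damped = damp_residual(residual, dt, sigma)
    spec = np.abs(np.fft.rfft(damped))
    freqs = np.fft.rfftfreq(len(damped), dt)
    print(f"sigma={sigma:4.1f}  dominant frequency ~ {freqs[np.argmax(spec)]:5.1f} Hz")
```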

  7. Audio-Visual Speech Recognition Using Lip Information Extracted from Side-Face Images

    Directory of Open Access Journals (Sweden)

    Koji Iwano

    2007-03-01

    Full Text Available This paper proposes an audio-visual speech recognition method using lip information extracted from side-face images, as an attempt to increase noise robustness in mobile environments. Our proposed method assumes that lip images can be captured using a small camera installed in a handset. Two different kinds of lip features, lip-contour geometric features and lip-motion velocity features, are used individually or jointly, in combination with audio features. Phoneme HMMs modeling the audio and visual features are built based on the multistream HMM technique. Experiments conducted using Japanese connected-digit speech contaminated with white noise at various SNRs show the effectiveness of the proposed method. Recognition accuracy is improved by using the visual information in all SNR conditions. These visual features were confirmed to be effective even when the audio HMM was adapted to noise by the MLLR method.

  8. Approaching the largest ‘API’: extracting information from the Internet with Python

    Directory of Open Access Journals (Sweden)

    Jonathan E. Germann

    2018-02-01

    Full Text Available This article explores the need for libraries to algorithmically access and manipulate the world’s largest API: the Internet. The billions of pages on the ‘Internet API’ (HTTP, HTML, CSS, XPath, DOM, etc.) are easily accessible and manipulable. Libraries can assist in creating meaning through the datafication of information on the world wide web. Because most information is created for human consumption, some programming is required for automated extraction. Python is an easy-to-learn programming language with extensive packages and community support for web page automation. Four Python packages (Urllib, Selenium, BeautifulSoup, Scrapy) can automate almost any web page for projects of all sizes. An example warrant data project illustrates how well Python packages can manipulate web pages to create meaning through assembling custom datasets.
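
    A minimal sketch of the Urllib + BeautifulSoup route described above (the URL and the table being scraped are placeholders, not the article's warrant data source):

```python
from urllib.request import urlopen

from bs4 import BeautifulSoup

url = "https://example.org/warrants"        # hypothetical page
with urlopen(url) as response:
    html = response.read()

soup = BeautifulSoup(html, "html.parser")
# Collect the text of every table cell into a custom dataset
rows = [
    [cell.get_text(strip=True) for cell in tr.find_all("td")]
    for tr in soup.find_all("tr")
]
print(rows[:5])
```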

  9. DEVELOPMENT OF AUTOMATIC EXTRACTION METHOD FOR ROAD UPDATE INFORMATION BASED ON PUBLIC WORK ORDER OUTLOOK

    Science.gov (United States)

    Sekimoto, Yoshihide; Nakajo, Satoru; Minami, Yoshitaka; Yamaguchi, Syohei; Yamada, Harutoshi; Fuse, Takashi

    Recently, the disclosure of statistical data representing the financial effects or burden of public works, through the websites of national and local governments, has enabled discussion of macroscopic financial trends. However, it is still difficult to grasp, nationwide, how each spot has been changed by public works. The purpose of this research is to collect, at reasonable cost, the road update information provided by various road managers, in order to realize efficient updating of maps such as car navigation maps. In particular, we develop a system that, by combining several web mining technologies, automatically extracts the public works concerned and registers summaries, including position information, in a database from the public work order outlooks released by each local government. Finally, we collect and register several tens of thousands of records from websites all over Japan and confirm the feasibility of our method.

  10. Geopositioning with a quadcopter: Extracted feature locations and predicted accuracy without a priori sensor attitude information

    Science.gov (United States)

    Dolloff, John; Hottel, Bryant; Edwards, David; Theiss, Henry; Braun, Aaron

    2017-05-01

    This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to investigate algorithm performance and issues related to the registration of motion imagery and the subsequent extraction of feature locations along with their predicted accuracy. A case study is included corresponding to video taken from a quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude (pointing) information. In particular, tie points are automatically measured between adjacent frames using standard optical-flow matching techniques from computer vision; an a priori estimate of sensor attitude is then computed based on the supplied GPS sensor positions contained in the video metadata and a photogrammetric/search-based structure-from-motion algorithm; and a Weighted Least Squares adjustment of all a priori metadata across the frames is then performed. Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used other than for evaluation of solution errors and corresponding accuracy.
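
    The tie-point measurement step can be sketched with standard OpenCV calls (a generic sparse optical-flow example, not the FMV-GTB code; the frame file names are placeholders):

```python
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect well-textured corner features in the first frame
pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=500,
                                   qualityLevel=0.01, minDistance=10)

# Track them into the next frame with pyramidal Lucas-Kanade flow
pts_curr, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None)

# Keep only successfully tracked points as tie-point pairs
good = status.ravel() == 1
tie_points = np.hstack([pts_prev[good].reshape(-1, 2),
                        pts_curr[good].reshape(-1, 2)])
print(f"{len(tie_points)} tie points measured")
```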

  11. Inexperienced clinicians can extract pathoanatomic information from MRI narrative reports with high reproducibility for use in research/quality assurance

    DEFF Research Database (Denmark)

    Kent, Peter; Briggs, Andrew M; Albert, Hanne Birgit

    2011-01-01

    Background Although reproducibility in reading MRI images amongst radiologists and clinicians has been studied previously, no studies have examined the reproducibility of inexperienced clinicians in extracting pathoanatomic information from magnetic resonance imaging (MRI) narrative reports and t...

  12. [Extraction of buildings three-dimensional information from high-resolution satellite imagery based on Barista software].

    Science.gov (United States)

    Zhang, Pei-feng; Hu, Yuan-man; He, Hong-shi

    2010-05-01

    The demand for accurate and up-to-date spatial information on urban buildings is becoming more and more important for urban planning, environmental protection, and other fields. Today's commercial high-resolution satellite imagery offers the potential to extract the three-dimensional information of urban buildings. This paper extracted the three-dimensional information of urban buildings from QuickBird imagery and validated the precision of the extraction based on the Barista software. It was shown that extracting three-dimensional building information from high-resolution satellite imagery with Barista software has the advantages of a low demand for professional expertise, broad applicability, simple operation, and high precision. Point positioning and height determination accuracies at the one-pixel level could be achieved provided that the digital elevation model (DEM) and the sensor orientation model were of sufficiently high precision and the off-nadir view angle was favourable.

  13. Overview of image processing tools to extract physical information from JET videos

    Science.gov (United States)

    Craciunescu, T.; Murari, A.; Gelfusa, M.; Tiseanu, I.; Zoita, V.; EFDA Contributors, JET

    2014-11-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the

  14. Overview of image processing tools to extract physical information from JET videos

    International Nuclear Information System (INIS)

    Craciunescu, T; Tiseanu, I; Zoita, V; Murari, A; Gelfusa, M

    2014-01-01


  15. Extraction and Analysis of Information Related to Research & Development Declared Under an Additional Protocol

    International Nuclear Information System (INIS)

    Idinger, J.; Labella, R.; Rialhe, A.; Teller, N.

    2015-01-01

    The additional protocol (AP) provides important tools to strengthen and improve the effectiveness and efficiency of the safeguards system. Safeguards are designed to verify that States comply with their international commitments not to use nuclear material or to engage in nuclear-related activities for the purpose of developing nuclear weapons or other nuclear explosive devices. Under an AP based on INFCIRC/540, a State must provide to the IAEA additional information about, and inspector access to, all parts of its nuclear fuel cycle. In addition, the State has to supply information about its nuclear fuel cycle-related research and development (R&D) activities. The majority of States declare their R&D activities under AP Articles 2.a.(i), 2.a.(x), and 2.b.(i) as part of their initial declarations and annual updates under the AP. In order to verify the consistency and completeness of the information provided by States under the AP, the Agency has started to analyze declared R&D information by identifying interrelationships between States in different R&D areas relevant to safeguards. The paper outlines the quality of the R&D information provided by States to the Agency, describes how the extraction and analysis of relevant declarations are currently carried out at the Agency, and specifies what kinds of difficulties arise during evaluation with respect to cross-linking international projects and finding gaps in reporting. In addition, the paper considers how the quality of reporting on R&D activities under the AP, and the process of assessing R&D information, could be improved. (author)

  16. Zone analysis in biology articles as a basis for information extraction.

    Science.gov (United States)

    Mizuta, Yoko; Korhonen, Anna; Mullen, Tony; Collier, Nigel

    2006-06-01

    In the field of biomedicine, an overwhelming amount of experimental data has become available as a result of the high throughput of research in this domain. The amount of results reported has now grown beyond the limits of what can be managed by manual means. This makes it increasingly difficult for the researchers in this area to keep up with the latest developments. Information extraction (IE) in the biological domain aims to provide an effective automatic means to dynamically manage the information contained in archived journal articles and abstract collections and thus help researchers in their work. However, while considerable advances have been made in certain areas of IE, pinpointing and organizing factual information (such as experimental results) remains a challenge. In this paper we propose tackling this task by incorporating into IE information about rhetorical zones, i.e. classification of spans of text in terms of argumentation and intellectual attribution. As the first step towards this goal, we introduce a scheme for annotating biological texts for rhetorical zones and provide a qualitative and quantitative analysis of the data annotated according to this scheme. We also discuss our preliminary research on automatic zone analysis, and its incorporation into our IE framework.

  17. Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

    Directory of Open Access Journals (Sweden)

    Li Yao

    2016-01-01

    Full Text Available Both static features and motion features have shown promising performance in the human activity recognition task. However, the information included in these features is insufficient for complex human activities. In this paper, we propose extracting the relational information of static features and motion features for human activity recognition. The videos are represented by a classical Bag-of-Words (BoW) model, which has proved useful in many works. To get a compact and discriminative codebook of small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. After that, to further capture strong relational information, we construct a bipartite graph to model the relationship between the words of the different feature sets. We then use a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm's projective function. We test our work on several datasets and obtain very promising results.
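
    A sketch of the bipartite-graph idea, using spectral co-clustering as a stand-in for the paper's k-way partition (rows are static-feature words, columns are motion-feature words; the co-occurrence matrix here is random placeholder data):

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

rng = np.random.default_rng(0)
# Placeholder: how often each static word co-occurs with each motion word
cooccurrence = rng.poisson(1.0, size=(200, 150)) + 1e-9

model = SpectralCoclustering(n_clusters=20, random_state=0)
model.fit(cooccurrence)

# Words sharing a bicluster label form one entry of the new codebook,
# so similar static and motion words end up grouped together.
static_word_cluster = model.row_labels_
motion_word_cluster = model.column_labels_
print(static_word_cluster[:10], motion_word_cluster[:10])
```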

  18. MedEx: a medication information extraction system for clinical narratives

    Science.gov (United States)

    Stenner, Shane P; Doan, Son; Johnson, Kevin B; Waitman, Lemuel R; Denny, Joshua C

    2010-01-01

    Medication information is one of the most important types of clinical data in electronic medical records. It is critical for healthcare safety and quality, as well as for clinical research that uses electronic medical record data. However, medication data are often recorded in clinical notes as free text. As such, they are not accessible to other computerized applications that rely on coded data. We describe a new natural language processing system (MedEx), which extracts medication information from clinical notes. MedEx was initially developed using discharge summaries. An evaluation using a data set of 50 discharge summaries showed it performed well in identifying not only drug names (F-measure 93.2%) but also signature information, such as strength, route, and frequency, with F-measures of 94.5%, 93.9%, and 96.0%, respectively. We then applied MedEx, unchanged, to outpatient clinic visit notes. It performed similarly, with F-measures over 90%, on a set of 25 clinic visit notes. PMID:20064797

  19. Videomicroscopic extraction of specific information on cell proliferation and migration in vitro

    International Nuclear Information System (INIS)

    Debeir, Olivier; Megalizzi, Veronique; Warzee, Nadine; Kiss, Robert; Decaestecker, Christine

    2008-01-01

    In vitro cell imaging is a useful exploratory tool for cell behavior monitoring with a wide range of applications in cell biology and pharmacology. Combined with appropriate image analysis techniques, this approach has been shown to provide useful information on the detection and dynamic analysis of cell events. In this context, numerous efforts have been focused on cell migration analysis. In contrast, the cell division process has been the subject of fewer investigations. The present work focuses on this latter aspect and shows that, complementing cell migration data, interesting information related to cell division can be extracted from phase-contrast time-lapse image series, in particular the cell division duration, which is not provided by standard cell assays using endpoint analyses. We illustrate our approach by analyzing the effects induced by two sigma-1 receptor ligands (haloperidol and 4-IBP) on the behavior of two glioma cell lines, using two in vitro cell models, i.e., the low-density individual cell model and the high-density scratch wound model. This illustration also shows that the data provided by our approach are suggestive as to the mechanism of action of compounds, and are thus capable of informing the appropriate selection of further time-consuming and more expensive biological evaluations required to elucidate a mechanism.

  20. 5W1H Information Extraction with CNN-Bidirectional LSTM

    Science.gov (United States)

    Nurdin, A.; Maulidevi, N. U.

    2018-03-01

    In this work, information about who did what, when, where, why, and how was extracted from Indonesian news articles by combining a Convolutional Neural Network and a Bidirectional Long Short-Term Memory network. A Convolutional Neural Network can learn semantically meaningful representations of sentences; a Bidirectional LSTM can analyze the relations among words in the sequence. We also use word2vec word embeddings for word representation. By combining these algorithms, we obtained an F-measure of 0.808. Our experiments show that CNN-BLSTM outperforms shallow methods, namely IBk, C4.5, and Naïve Bayes, whose F-measures are 0.655, 0.645, and 0.595, respectively.
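
    A minimal Keras sketch of such a CNN-Bidirectional-LSTM tagger (layer sizes, vocabulary and label counts are placeholders; the paper's exact architecture may differ):

```python
import tensorflow as tf

VOCAB, MAXLEN, N_LABELS = 20000, 100, 13   # placeholder sizes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAXLEN,)),
    # word2vec-sized embeddings; weights could be initialised from a
    # pretrained word2vec model, as in the paper
    tf.keras.layers.Embedding(VOCAB, 300),
    # convolution learns local (n-gram) sentence representations
    tf.keras.layers.Conv1D(128, 3, padding="same", activation="relu"),
    # BiLSTM models relations among words across the whole sequence
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True)),
    # per-token softmax over role labels (e.g. BIO tags for 5W1H)
    tf.keras.layers.Dense(N_LABELS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```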

  1. Metaproteomics: extracting and mining proteome information to characterize metabolic activities in microbial communities.

    Science.gov (United States)

    Abraham, Paul E; Giannone, Richard J; Xiong, Weili; Hettich, Robert L

    2014-06-17

    Contemporary microbial ecology studies usually employ one or more "omics" approaches to investigate the structure and function of microbial communities. Among these, metaproteomics aims to characterize the metabolic activities of the microbial membership, providing a direct link between the genetic potential and functional metabolism. The successful deployment of metaproteomics research depends on the integration of high-quality experimental and bioinformatic techniques for uncovering the metabolic activities of a microbial community in a way that is complementary to other "meta-omic" approaches. The essential, quality-defining informatics steps in metaproteomics investigations are: (1) construction of the metagenome, (2) functional annotation of predicted protein-coding genes, (3) protein database searching, (4) protein inference, and (5) extraction of metabolic information. In this article, we provide an overview of current bioinformatic approaches and software implementations in metaproteome studies in order to highlight the key considerations needed for successful implementation of this powerful community-biology tool. Copyright © 2014 John Wiley & Sons, Inc.

  2. Developing a Process Model for the Forensic Extraction of Information from Desktop Search Applications

    Directory of Open Access Journals (Sweden)

    Timothy Pavlic

    2008-03-01

    Full Text Available Desktop search applications can contain cached copies of files that were deleted from the file system. Forensic investigators see this as a potential source of evidence, as documents deleted by suspects may still exist in the cache. Whilst there have been attempts at recovering data collected by desktop search applications, there is no methodology governing the process, nor discussion on the most appropriate means to do so. This article seeks to address this issue by developing a process model that can be applied when developing an information extraction application for desktop search applications, discussing preferred methods and the limitations of each. This work represents a more structured approach than other forms of current research.

  3. An innovative method for extracting isotopic information from low-resolution gamma spectra

    International Nuclear Information System (INIS)

    Miko, D.; Estep, R.J.; Rawool-Sullivan, M.W.

    1998-01-01

    A method is described for the extraction of isotopic information from attenuated gamma-ray spectra using the gross-count material basis set (GC-MBS) model. This method solves for the isotopic composition of an unknown mixture of isotopes attenuated through an absorber of unknown material. For binary isotopic combinations the problem is nonlinear in only one variable and is easily solved using standard line optimization techniques. Results are presented for NaI spectrum analyses of various binary combinations of enriched uranium, depleted uranium, low-burnup Pu, ¹³⁷Cs, and ¹³³Ba attenuated through a suite of absorbers ranging in Z from polyethylene through lead. The GC-MBS method results are compared to those computed using ordinary response function fitting and with a simple net peak area method. The GC-MBS method was found to be significantly more accurate than the other methods over the range of absorbers and isotopic blends studied.
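
    The one-variable character of the binary problem can be illustrated with a generic scalar optimization (synthetic basis spectra for illustration only, not the GC-MBS model with its absorber term):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
channels = np.arange(1024)
basis_a = np.exp(-0.5 * ((channels - 300) / 20.0) ** 2)   # placeholder responses
basis_b = np.exp(-0.5 * ((channels - 600) / 25.0) ** 2)
measured = 0.7 * basis_a + 0.3 * basis_b + rng.normal(0, 0.01, channels.size)

def misfit(f):
    # Least-squares mismatch between the measured spectrum and a
    # mixture with fraction f of isotope A
    model = f * basis_a + (1.0 - f) * basis_b
    return np.sum((measured - model) ** 2)

result = minimize_scalar(misfit, bounds=(0.0, 1.0), method="bounded")
print(f"estimated fraction of isotope A: {result.x:.3f}")
```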

  4. EnvMine: A text-mining system for the automatic extraction of contextual information

    Directory of Open Access Journals (Sweden)

    de Lorenzo Victor

    2010-06-01

    Full Text Available Abstract Background: For ecological studies, it is crucial to count on adequate descriptions of the environments and samples being studied. Such descriptions must be given in terms of their physicochemical characteristics, allowing a direct comparison between different environments that would otherwise be difficult. The characterization must also include the precise geographical location, to make possible the study of geographical distributions and biogeographical patterns. Currently, there is no schema for annotating these environmental features, and such data have to be extracted from textual sources (published articles). So far, this had to be performed by manual inspection of the corresponding documents. To facilitate this task, we have developed EnvMine, a set of text-mining tools devoted to retrieving contextual information (physicochemical variables and geographical locations) from textual sources of any kind. Results: EnvMine is capable of retrieving the physicochemical variables cited in the text by means of the accurate identification of their associated units of measurement. In this task, the system achieves a recall (percentage of items retrieved) of 92% with less than 1% error. A Bayesian classifier was also tested for distinguishing parts of the text describing environmental characteristics from others dealing with, for instance, experimental settings. Regarding the identification of geographical locations, the system takes advantage of existing databases such as GeoNames to achieve 86% recall with 92% precision. The identification of a location also includes the determination of its exact coordinates (latitude and longitude), thus allowing the calculation of distances between individual locations. Conclusion: EnvMine is a very efficient method for extracting contextual information from different text sources, such as published articles or web pages. This tool can help in determining the precise location and physicochemical
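
    A toy illustration of unit-driven extraction of physicochemical variables (the unit list and pattern are invented and far simpler than EnvMine's actual lexicon):

```python
import re

UNITS = r"(?:°C|mg/L|g/L|mM|µM|m)"          # placeholder unit lexicon
pattern = re.compile(
    rf"(?P<value>-?\d+(?:\.\d+)?)\s*(?P<unit>{UNITS})\b")

text = ("Samples were taken at 35 m depth; temperature was 12.5 °C "
        "and salinity was 35 g/L.")
for m in pattern.finditer(text):
    print(m.group("value"), m.group("unit"))
```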

  5. Extraction of prospecting information of uranium deposit based on high spatial resolution satellite data. Taking bashibulake region as an example

    International Nuclear Information System (INIS)

    Yang Xu; Liu Dechang; Zhang Jielin

    2008-01-01

    In this study, the significance and content of prospecting information for uranium deposits are expounded. Quickbird high spatial resolution satellite data are used to extract prospecting information for uranium deposits in the Bashibulake area in the north of the Tarim Basin. Using pertinent image processing methods, information on the ore-bearing bed, ore-controlling structures and mineralized alteration has been extracted. The results show high consistency with the field survey. The aim of this study is to explore the practicability of high spatial resolution satellite data for mineral prospecting, and to broaden thinking on prospecting in similar areas. (authors)

  6. Extracting chemical information from high-resolution Kβ X-ray emission spectroscopy

    Science.gov (United States)

    Limandri, S.; Robledo, J.; Tirao, G.

    2018-06-01

    High-resolution X-ray emission spectroscopy allows studying the chemical environment of a wide variety of materials. Chemical information can be obtained by fitting the X-ray spectra and observing the behavior of certain spectral features. Spectral changes can also be quantified by means of statistical parameters calculated by considering the spectrum as a probability distribution. Another possibility is to perform statistical multivariate analysis, such as principal component analysis. In this work, the performance of these procedures for extracting chemical information from X-ray emission spectra is studied for mixtures of Mn²⁺ and Mn⁴⁺ oxides. A detailed analysis of the parameters obtained, as well as of the associated uncertainties, is given. The methodologies are also applied to the characterization of the Mn oxidation state in the double perovskite oxides Ba₁₊ₓLa₁₋ₓMnSbO₆ (with 0 ≤ x ≤ 0.7). The results show that statistical parameters and multivariate analysis are the most suitable for the analysis of this kind of spectra.
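
    Treating the spectrum as a probability distribution, the moment-based parameters reduce to a few lines (a synthetic Mn Kβ-like spectrum is used here for illustration):

```python
import numpy as np

energy = np.linspace(6470.0, 6520.0, 2000)             # eV, around Mn K-beta
spectrum = np.exp(-0.5 * ((energy - 6490.5) / 2.0) ** 2) \
         + 0.25 * np.exp(-0.5 * ((energy - 6478.0) / 3.0) ** 2)  # satellite line

p = spectrum / spectrum.sum()                           # normalise to a distribution
mean = np.sum(p * energy)                               # first moment
var = np.sum(p * (energy - mean) ** 2)                  # second central moment
skew = np.sum(p * (energy - mean) ** 3) / var ** 1.5    # shape parameter
peak = energy[np.argmax(spectrum)]
print(f"mean {mean:.2f} eV, std {np.sqrt(var):.2f} eV, "
      f"skewness {skew:.2f}, peak {peak:.2f} eV")
```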

  7. Information Extraction of Tourist Geological Resources Based on 3d Visualization Remote Sensing Image

    Science.gov (United States)

    Wang, X.

    2018-04-01

    Tourism geological resources are of high value for admiration, scientific research and universal education, and need to be protected and rationally utilized. In the past, most remote sensing investigations of tourism geological resources used two-dimensional remote sensing interpretation, which made some geological heritages difficult to interpret and led to the omission of some information. The aim of this paper is to assess the value of a method that uses three-dimensional visual remote sensing images to extract information on geological heritages. The Skyline software system is applied to fuse 0.36 m aerial images and a 5 m interval DEM to establish a digital earth model. Based on three-dimensional shape, colour tone, shadow, texture and other image features, the distribution of tourism geological resources in Shandong Province and the locations of geological heritage sites were obtained, covering geological structures, DaiGu landforms, granite landforms, volcanic landforms, sandy landforms, waterscapes, etc. The results show that remote sensing interpretation using this method is highly recognizable, making the interpretation more accurate and comprehensive.

  8. Information Management Processes for Extraction of Student Dropout Indicators in Courses in Distance Mode

    Directory of Open Access Journals (Sweden)

    Renata Maria Abrantes Baracho

    2016-04-01

    Full Text Available This research addresses the use of information management processes to extract student dropout indicators in distance-mode courses. Distance education in Brazil aims to facilitate access to information. The MEC (Ministry of Education) announced, in the second semester of 2013, that the main obstacles faced by institutions offering courses in this mode were student dropout and the resistance of both educators and students to the mode. The research used a mixed methodology, qualitative and quantitative, to obtain student dropout indicators. The factors found and validated in this research were: lack of interest on the part of students, insufficient student training in the use of the virtual learning environment, structural problems in the schools chosen to offer the course, students without e-mail, incoherent answers to course activities, and students' lack of knowledge in using the computer tool. The scenario considered was a course offered in distance mode called Aluno Integrado (Integrated Student).

  9. Measuring nuclear reaction cross sections to extract information on neutrinoless double beta decay

    Science.gov (United States)

    Cavallaro, M.; Cappuzzello, F.; Agodi, C.; Acosta, L.; Auerbach, N.; Bellone, J.; Bijker, R.; Bonanno, D.; Bongiovanni, D.; Borello-Lewin, T.; Boztosun, I.; Branchina, V.; Bussa, M. P.; Calabrese, S.; Calabretta, L.; Calanna, A.; Calvo, D.; Carbone, D.; Chávez Lomelí, E. R.; Coban, A.; Colonna, M.; D'Agostino, G.; De Geronimo, G.; Delaunay, F.; Deshmukh, N.; de Faria, P. N.; Ferraresi, C.; Ferreira, J. L.; Finocchiaro, P.; Fisichella, M.; Foti, A.; Gallo, G.; Garcia, U.; Giraudo, G.; Greco, V.; Hacisalihoglu, A.; Kotila, J.; Iazzi, F.; Introzzi, R.; Lanzalone, G.; Lavagno, A.; La Via, F.; Lay, J. A.; Lenske, H.; Linares, R.; Litrico, G.; Longhitano, F.; Lo Presti, D.; Lubian, J.; Medina, N.; Mendes, D. R.; Muoio, A.; Oliveira, J. R. B.; Pakou, A.; Pandola, L.; Petrascu, H.; Pinna, F.; Reito, S.; Rifuggiato, D.; Rodrigues, M. R. D.; Russo, A. D.; Russo, G.; Santagati, G.; Santopinto, E.; Sgouros, O.; Solakci, S. O.; Souliotis, G.; Soukeras, V.; Spatafora, A.; Torresi, D.; Tudisco, S.; Vsevolodovna, R. I. M.; Wheadon, R. J.; Yildirin, A.; Zagatto, V. A. B.

    2018-02-01

    Neutrinoless double beta decay (0νββ) is considered the best potential resource for accessing the absolute neutrino mass scale. Moreover, if observed, it will signal that neutrinos are their own anti-particles (Majorana particles). Presently, this physics case is one of the most important lines of research "beyond the Standard Model" and might guide the way towards a Grand Unified Theory of fundamental interactions. Since the 0νββ decay process involves nuclei, its analysis necessarily implies nuclear structure issues. In the NURE project, supported by a Starting Grant of the European Research Council (ERC), nuclear reactions of double charge-exchange (DCE) are used as a tool to extract information on the 0νββ Nuclear Matrix Elements. In DCE reactions and ββ decay the initial and final nuclear states are indeed the same, and the transition operators have a similar structure. Thus the measurement of DCE absolute cross-sections can give crucial information on ββ matrix elements. In a wider view, the NUMEN international collaboration plans a major upgrade of the INFN-LNS facilities in the coming years in order to increase the experimental production of the nuclei of interest by at least two orders of magnitude, thus making feasible a systematic study of all the cases of interest as candidates for 0νββ.

  10. Unsupervised Symbolization of Signal Time Series for Extraction of the Embedded Information

    Directory of Open Access Journals (Sweden)

    Yue Li

    2017-03-01

    Full Text Available This paper formulates an unsupervised algorithm for the symbolization of signal time series to capture the embedded dynamic behavior. The key idea is to convert the time series of a digital signal into a string of (spatially discrete) symbols from which the embedded dynamic information can be extracted in an unsupervised manner (i.e., with no requirement for labeling of the time series). The main challenges here are: (1) definition of the symbol assignment for the time series; (2) identification of the partitioning segment locations in the signal space of the time series; and (3) construction of probabilistic finite-state automata (PFSA) from the symbol strings that contain temporal patterns. The reported work addresses these challenges by maximizing the mutual information measures between symbol strings and PFSA states. The proposed symbolization method has been validated by numerical simulation as well as by experimentation in a laboratory environment. The performance of the proposed algorithm has been compared to that of two commonly used time-series partitioning algorithms.
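
    A minimal sketch of the pipeline; for simplicity it uses quantile (equal-mass) partitioning and a first-order transition matrix, whereas the paper selects the partition by maximizing mutual information between symbol strings and PFSA states:

```python
import numpy as np

def symbolize(x, n_symbols):
    """Assign each sample a symbol via quantile (equal-mass) partitioning."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_symbols + 1)[1:-1])
    return np.searchsorted(edges, x)

def transition_matrix(symbols, n_symbols):
    """Row-normalised counts of symbol-to-symbol transitions."""
    counts = np.zeros((n_symbols, n_symbols))
    for a, b in zip(symbols[:-1], symbols[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 5000)
signal = np.sin(2 * np.pi * 1.5 * t) + 0.1 * rng.normal(size=t.size)

symbols = symbolize(signal, n_symbols=4)
print(transition_matrix(symbols, 4).round(2))
```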

  11. A methodology for the extraction of quantitative information from electron microscopy images at the atomic level

    International Nuclear Information System (INIS)

    Galindo, P L; Pizarro, J; Guerrero, E; Guerrero-Lebrero, M P; Scavello, G; Yáñez, A; Sales, D L; Herrera, M; Molina, S I; Núñez-Moraleda, B M; Maestre, J M

    2014-01-01

    In this paper we describe a methodology developed at the University of Cadiz (Spain) in the past few years for the extraction of quantitative information from electron microscopy images at the atomic level. This work is based on the coordinated and synergistic activity of several research groups that have been working together over the last decade in two different and complementary fields: Materials Science and Computer Science. The aim of our joint research has been to develop innovative high-performance computing techniques and simulation methods in order to address computationally challenging problems in the analysis, modelling and simulation of materials at the atomic scale, providing significant advances with respect to existing techniques. The methodology involves several fundamental areas of research, including the analysis of high-resolution electron microscopy images, materials modelling, image simulation and 3D reconstruction using quantitative information from experimental images. These techniques for analysis, modelling and simulation allow optimizing the control and functionality of devices developed using the materials under study, and have been tested using data obtained from experimental samples.

  12. Dual-wavelength phase-shifting digital holography selectively extracting wavelength information from wavelength-multiplexed holograms.

    Science.gov (United States)

    Tahara, Tatsuki; Mori, Ryota; Kikunaga, Shuhei; Arai, Yasuhiko; Takaki, Yasuhiro

    2015-06-15

    Dual-wavelength phase-shifting digital holography that selectively extracts wavelength information from five wavelength-multiplexed holograms is presented. Specific phase shifts for respective wavelengths are introduced to remove the crosstalk components and extract only the object wave at the desired wavelength from the holograms. Object waves in multiple wavelengths are selectively extracted by utilizing 2π ambiguity and the subtraction procedures based on phase-shifting interferometry. Numerical results show the validity of the proposed technique. The proposed technique is also experimentally demonstrated.

  13. Information Extraction and Dependency on Open Government Data (ogd) for Environmental Monitoring

    Science.gov (United States)

    Abdulmuttalib, Hussein

    2016-06-01

    Environmental monitoring practices support decision makers in government and private institutions, as well as environmentalists and planners, among others. This support helps them act towards the sustainability of our environment and take efficient measures to protect human beings in general, but it is difficult to extract useful information from Open Government Data (OGD) and to assure its quality for this purpose. Moreover, monitoring comprises detecting changes as they happen, or within the mitigation period, which means that any data source used for monitoring should reflect the period of environmental monitoring concerned; otherwise it is of little use beyond history. In this paper, the extraction and structuring of information from OGD that can be useful for environmental monitoring is assessed, looking into availability, usefulness for environmental monitoring of a certain type, repetition period, and dependencies. The assessment is performed on a small sample selected from OGD, bearing in mind the type of environmental change monitored, such as the increase and concentration of built-up areas and the reduction of green areas, or monitoring the change of temperature in a specific area. The World Bank mentioned in its blog that data is open only if it is both technically open and legally open. The use of open data is thus regulated by published terms of use, or by an agreement that imposes conditions without violating these two requirements. Within the scope of this paper I wish to share the experience of using OGD to support environmental monitoring work performed to mitigate the production of carbon dioxide, by regulating energy consumption and by properly designing the test area's landscapes using Geodesign tactics, and meanwhile to add to the results achieved by many

  14. Machine learning classification of surgical pathology reports and chunk recognition for information extraction noise reduction.

    Science.gov (United States)

    Napolitano, Giulio; Marshall, Adele; Hamilton, Peter; Gavin, Anna T

    2016-06-01

    Machine learning techniques for the text mining of cancer-related clinical documents have not been sufficiently explored. Here some techniques are presented for the pre-processing of free-text breast cancer pathology reports, with the aim of facilitating the extraction of information relevant to cancer staging. The first technique was implemented using the freely available software RapidMiner to classify the reports according to their general layout: 'semi-structured' and 'unstructured'. The second technique was developed using the open source language engineering framework GATE and aimed at predicting chunks of the report text containing information pertaining to the cancer morphology, the tumour size, its hormone receptor status and the number of positive nodes. The classifiers were trained and tested respectively on sets of 635 and 163 manually classified or annotated reports from the Northern Ireland Cancer Registry. The best result, 99.4% accuracy - with only one semi-structured report predicted as unstructured - was produced by the layout classifier with the k-nearest-neighbours algorithm, using the binary term occurrence word vector type with stopword filtering and pruning. For chunk recognition, the best results were found using the PAUM algorithm with the same parameters for all cases, except for the prediction of chunks containing cancer morphology. For semi-structured reports, precision ranged from 0.97 to 0.94 and recall from 0.92 to 0.83, while for unstructured reports precision ranged from 0.91 to 0.64 and recall from 0.68 to 0.41. Poor results were found when the classifier was trained on semi-structured reports but tested on unstructured ones. These results show that it is possible and beneficial to predict the layout of reports, and that the accuracy of predicting which segments of a report may contain certain information is sensitive to the report layout and the type of information sought.

  15. Study of time-frequency characteristics of single snores: extracting new information for sleep apnea diagnosis

    Energy Technology Data Exchange (ETDEWEB)

    Castillo Escario, Y.; Blanco Almazan, D.; Camara Vazquez, M.A.; Jane Campos, R.

    2016-07-01

    Obstructive sleep apnea (OSA) is a highly prevalent chronic disease, especially among elderly and obese populations. Although it constitutes a huge health and economic problem, most patients remain undiagnosed due to limitations in current diagnostic strategies. It is therefore essential to find cost-effective diagnostic alternatives. One of these novel approaches is the analysis of acoustic snoring signals. Snoring is an early symptom of OSA which carries pathophysiological information of high diagnostic value. For this reason, the main objective of this work is to study the characteristics of single snores of different types, from healthy and OSA subjects. To do so, we analyzed snoring signals from previous databases and developed an experimental protocol to record simulated OSA-related sounds and to characterize the response of two commercial tracheal microphones. Automatic programs for filtering, downsampling, event detection and time-frequency analysis were built in MATLAB. We found that time-frequency maps and spectral parameters (central, mean and peak frequency, and energy in the 100-500 Hz band) allow distinguishing the regular snores of healthy subjects from non-regular snores and the snores of OSA subjects. Regarding the two commercial microphones, we found that one of them was a suitable snoring sensor, while the other had a too restricted frequency response. Future work shall include a higher number of episodes and subjects, but our study has shown how important the differences between regular and non-regular snores can be for OSA diagnosis, and how much clinically relevant information can be extracted from the time-frequency maps and spectral parameters of single snores. (Author)
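
    The spectral parameters named above (peak, mean and central frequency, plus relative energy in the 100-500 Hz band) can be sketched as follows, here in Python rather than the authors' MATLAB, on a synthetic snore-like signal:

```python
import numpy as np
from scipy.signal import welch

fs = 4000                                    # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(0)
snore = np.sin(2 * np.pi * 140 * t) + 0.5 * np.sin(2 * np.pi * 280 * t) \
      + 0.2 * rng.normal(size=t.size)

f, pxx = welch(snore, fs=fs, nperseg=1024)   # power spectral density
band = (f >= 100) & (f <= 500)

peak_freq = f[np.argmax(pxx)]
mean_freq = np.sum(f * pxx) / np.sum(pxx)                # power-weighted mean
cum = np.cumsum(pxx) / np.sum(pxx)
central_freq = f[np.searchsorted(cum, 0.5)]              # median (central) frequency
band_energy = np.trapz(pxx[band], f[band]) / np.trapz(pxx, f)
print(peak_freq, mean_freq, central_freq, band_energy)
```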

  16. Quantum measurement information as a key to energy extraction from local vacuums

    International Nuclear Information System (INIS)

    Hotta, Masahiro

    2008-01-01

    In this paper, a protocol is proposed in which energy extraction from local vacuum states is possible by using quantum measurement information about the vacuum state of quantum fields. In the protocol, Alice, who stays at a spatial point, excites the ground state of the fields by a local measurement. Consequently, wave packets generated by Alice's measurement propagate through the vacuum to spatial infinity. Let us assume that Bob stays away from Alice and fails to catch the excitation energy when the wave packets pass in front of him. Next, Alice announces her local measurement result to Bob by classical communication. Bob performs a local unitary operation depending on the measurement result. In this process, positive energy is released from the fields to Bob's apparatus performing the unitary operation. In the field systems, wave packets are generated with negative energy around Bob's location. Soon afterwards, the negative-energy wave packets begin to chase the positive-energy wave packets generated by Alice and form loosely bound states.

  17. Oxygen octahedra picker: A software tool to extract quantitative information from STEM images

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yi, E-mail: y.wang@fkf.mpg.de; Salzberger, Ute; Sigle, Wilfried; Eren Suyolcu, Y.; Aken, Peter A. van

    2016-09-15

    In perovskite oxide based materials and hetero-structures there are often strong correlations between oxygen octahedral distortions and functionality. Thus, atomistic understanding of the octahedral distortion, which requires accurate measurements of atomic column positions, will greatly help to engineer their properties. Here, we report the development of a software tool to extract quantitative information of the lattice and of BO₆ octahedral distortions from STEM images. Center-of-mass and 2D Gaussian fitting methods are implemented to locate positions of individual atom columns. The precision of atomic column distance measurements is evaluated on both simulated and experimental images. The application of the software tool is demonstrated using practical examples. - Highlights: • We report a software tool for mapping atomic positions from HAADF and ABF images. • It enables quantification of both crystal lattice and oxygen octahedral distortions. • We test the measurement accuracy and precision on simulated and experimental images. • It works well for different orientations of perovskite structures and interfaces.
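
    The 2D Gaussian fitting step for locating one atomic column can be sketched as follows (a synthetic image patch stands in for a crop from a real HAADF/ABF image; not the tool's actual code):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, sx, sy, offset):
    # Elliptical 2D Gaussian, flattened for curve_fit
    x, y = xy
    return (amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                           + (y - y0) ** 2 / (2 * sy ** 2))) + offset).ravel()

# Synthetic 21x21 patch around one column, with noise
yy, xx = np.mgrid[0:21, 0:21].astype(float)
true = gauss2d((xx, yy), 1.0, 10.3, 9.7, 2.0, 2.2, 0.1).reshape(21, 21)
patch = true + 0.02 * np.random.default_rng(0).normal(size=true.shape)

p0 = (patch.max(), 10, 10, 2, 2, patch.min())      # rough initial guess
popt, _ = curve_fit(gauss2d, (xx, yy), patch.ravel(), p0=p0)
print(f"column position: ({popt[1]:.3f}, {popt[2]:.3f}) px")
```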

  18. Note on difference spectra for fast extraction of global image information.

    CSIR Research Space (South Africa)

    Van Wyk, BJ

    2007-06-01

    Full Text Available FOR FAST EXTRACTION OF GLOBAL IMAGE INFORMATION. B.J. van Wyk*, M.A. van Wyk* and F. van den Bergh**. *French South African Technical Institute in Electronics (F'SATIE) at the Tshwane University of Technology, Private Bag X680, Pretoria 0001. **Remote Sensing Research Group, Meraka Institute...
  19. Analysis Methods for Extracting Knowledge from Large-Scale WiFi Monitoring to Inform Building Facility Planning

    DEFF Research Database (Denmark)

    Ruiz-Ruiz, Antonio; Blunck, Henrik; Prentow, Thor Siiger

    2014-01-01

    The optimization of logistics in large building complexes with many resources, such as hospitals, requires realistic facility management and planning. Current planning practices rely foremost on manual observations or coarse unverified assumptions and therefore do not properly scale or provide realistic data to inform facility planning. In this paper, we propose analysis methods to extract knowledge from large sets of network-collected WiFi traces to better inform facility management and planning in large building complexes. The analysis methods, which build on a rich set of temporal and spatial... Spatio-temporal visualization tools built on top of these methods enable planners to inspect and explore the extracted information to inform facility-planning activities. To evaluate the methods, we present results for a large hospital complex covering more than 10 hectares. The evaluation is based on Wi...

  20. Extraction as a source of additional information when concentrations in multicomponent systems are simultaneously determined

    International Nuclear Information System (INIS)

    Perkov, I.G.

    1988-01-01

    Using the photometric determination of Nd and Sm in their joint presence as an example, the possibility of using the influence of extraction to increase the analytical signal is considered. It is shown that interligand exchange in extracts, in combination with the simultaneous determination of concentrations, can be used as a simple means of increasing the accuracy of the determination. 5 refs.; 2 figs.; 3 tabs

  1. Validation and extraction of molecular-geometry information from small-molecule databases.

    Science.gov (United States)

    Long, Fei; Nicholls, Robert A; Emsley, Paul; Gražulis, Saulius; Merkys, Andrius; Vaitkus, Antanas; Murshudov, Garib N

    2017-02-01

    A freely available small-molecule structure database, the Crystallography Open Database (COD), is used for the extraction of molecular-geometry information on small-molecule compounds. The results are used for the generation of new ligand descriptions, which are subsequently used by macromolecular model-building and structure-refinement software. To increase the reliability of the derived data, and therefore the new ligand descriptions, the entries from this database were subjected to very strict validation. The selection criteria made sure that the crystal structures used to derive atom types, bond and angle classes are of sufficiently high quality. Any suspicious entries at a crystal or molecular level were removed from further consideration. The selection criteria included (i) the resolution of the data used for refinement (entries solved at 0.84 Å resolution or higher) and (ii) the structure-solution method (structures must be from a single-crystal experiment and all atoms of generated molecules must have full occupancies), as well as basic sanity checks such as (iii) consistency between the valences and the number of connections between atoms, (iv) acceptable bond-length deviations from the expected values and (v) detection of atomic collisions. The derived atom types and bond classes were then validated using high-order moment-based statistical techniques. The results of the statistical analyses were fed back to fine-tune the atom typing. The developed procedure was repeated four times, resulting in fine-grained atom typing, bond and angle classes. The procedure will be repeated in the future as and when new entries are deposited in the COD. The whole procedure can also be applied to any source of small-molecule structures, including the Cambridge Structural Database and the ZINC database.

  2. Extracting respiratory information from seismocardiogram signals acquired on the chest using a miniature accelerometer

    International Nuclear Information System (INIS)

    Pandia, Keya; Inan, Omer T; Kovacs, Gregory T A; Giovangrandi, Laurent

    2012-01-01

    Seismocardiography (SCG) is a non-invasive measurement of the vibrations of the chest caused by the heartbeat. SCG signals can be measured using a miniature accelerometer attached to the chest, and are thus well-suited for unobtrusive and long-term patient monitoring. Additionally, SCG contains information relating to both cardiovascular and respiratory systems. In this work, algorithms were developed for extracting three respiration-dependent features of the SCG signal: intensity modulation, timing interval changes within each heartbeat, and timing interval changes between successive heartbeats. Simultaneously with a reference respiration belt, SCG signals were measured from 20 healthy subjects and a respiration rate was estimated using each of the three SCG features and the reference signal. The agreement between each of the three accelerometer-derived respiration rate measurements was computed with respect to the respiration rate derived from the reference respiration belt. The respiration rate obtained from the intensity modulation in the SCG signal was found to be in closest agreement with the respiration rate obtained from the reference respiration belt: the bias was found to be 0.06 breaths per minute with a 95% confidence interval of −0.99 to 1.11 breaths per minute. The limits of agreement between the respiration rates estimated using SCG (intensity modulation) and the reference were within the clinically relevant ranges given in existing literature, demonstrating that SCG could be used for both cardiovascular and respiratory monitoring. Furthermore, phases of each of the three SCG parameters were investigated at four instances of a respiration cycle—start inspiration, peak inspiration, start expiration, and peak expiration—and during breath hold (apnea). The phases of the three SCG parameters observed during the respiration cycle were congruent with existing literature and physiologically expected trends. (paper)
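
    A sketch of the first of the three features, estimating respiration rate from the intensity modulation of the SCG, via an envelope plus a dominant-frequency estimate (toy signal; the paper's algorithm details may differ):

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 100.0                                   # Hz, assumed SCG sampling rate
t = np.arange(0, 60, 1 / fs)
resp = 0.25 * np.sin(2 * np.pi * 0.25 * t)   # 15 breaths/min modulation
scg = (1 + resp) * np.sin(2 * np.pi * 1.2 * t)   # toy heartbeat carrier

envelope = np.abs(hilbert(scg))              # intensity modulation
b, a = butter(2, 0.7 / (fs / 2), btype="low")
envelope = filtfilt(b, a, envelope)          # keep only the respiratory band

spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)
rate = freqs[np.argmax(spec)] * 60
print(f"estimated respiration rate: {rate:.1f} breaths/min")
```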

  3. Extracting key information from historical data to quantify the transmission dynamics of smallpox

    Directory of Open Access Journals (Sweden)

    Brockmann Stefan O

    2008-08-01

    Full Text Available Abstract Background Quantification of the transmission dynamics of smallpox is crucial for optimizing intervention strategies in the event of a bioterrorist attack. This article reviews basic methods and findings in mathematical and statistical studies of smallpox which estimate key transmission parameters from historical data. Main findings First, critically important aspects in extracting key information from historical data are briefly summarized. We mention different sources of heterogeneity and potential pitfalls in utilizing historical records. Second, we discuss how smallpox spreads in the absence of interventions and how the optimal timing of quarantine and isolation measures can be determined. Case studies demonstrate the following. (1) The upper confidence limit of the 99th percentile of the incubation period is 22.2 days, suggesting that quarantine should last 23 days. (2) The highest frequency (61.8%) of secondary transmissions occurs 3–5 days after onset of fever so that infected individuals should be isolated before the appearance of rash. (3) The U-shaped age-specific case fatality implies a vulnerability of infants and elderly among non-immune individuals. Estimates of the transmission potential are subsequently reviewed, followed by an assessment of vaccination effects and of the expected effectiveness of interventions. Conclusion Current debates on bio-terrorism preparedness indicate that public health decision making must account for the complex interplay and balance between vaccination strategies and other public health measures (e.g. case isolation and contact tracing), taking into account the frequency of adverse events to vaccination. In this review, we summarize what has already been clarified and point out needs to analyze previous smallpox outbreaks systematically.
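
    As a rough illustration of how such a percentile estimate and its upper confidence limit might be obtained, the sketch below fits a lognormal incubation-period model to hypothetical case data and uses a parametric bootstrap; the reviewed studies' actual estimators and data differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# hypothetical incubation periods (days) standing in for historical case records
incubation = rng.lognormal(mean=np.log(12.0), sigma=0.3, size=120)

# fit a lognormal (location fixed at zero) and estimate the 99th percentile
shape, _, scale = stats.lognorm.fit(incubation, floc=0)
p99 = stats.lognorm.ppf(0.99, shape, scale=scale)

# parametric bootstrap for the upper 95% confidence limit of that percentile
boot = []
for _ in range(1000):
    resample = stats.lognorm.rvs(shape, scale=scale, size=len(incubation),
                                 random_state=rng)
    s, _, sc = stats.lognorm.fit(resample, floc=0)
    boot.append(stats.lognorm.ppf(0.99, s, scale=sc))
upper = np.percentile(boot, 97.5)

print(f"99th percentile: {p99:.1f} days; upper confidence limit: {upper:.1f} days")
```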

  4. Evaluation of needle trap micro-extraction and solid-phase micro-extraction: Obtaining comprehensive information on volatile emissions from in vitro cultures.

    Science.gov (United States)

    Oertel, Peter; Bergmann, Andreas; Fischer, Sina; Trefz, Phillip; Küntzel, Anne; Reinhold, Petra; Köhler, Heike; Schubert, Jochen K; Miekisch, Wolfram

    2018-05-14

    Volatile organic compounds (VOCs) emitted from in vitro cultures may reveal information on species and metabolism. Owing to low nmol L−1 concentration ranges, pre-concentration techniques are required for gas chromatography-mass spectrometry (GC-MS) based analyses. This study was intended to compare the efficiency of established micro-extraction techniques - solid-phase micro-extraction (SPME) and needle-trap micro-extraction (NTME) - for the analysis of complex VOC patterns. For SPME, a 75 μm Carboxen®/polydimethylsiloxane fiber was used. The NTME needle was packed with divinylbenzene, Carbopack X and Carboxen 1000. The headspace was sampled bi-directionally. Seventy-two VOCs were calibrated by reference standard mixtures in the range of 0.041-62.24 nmol L−1 by means of GC-MS. Both pre-concentration methods were applied to profile VOCs from cultures of Mycobacterium avium ssp. paratuberculosis. Limits of detection ranged from 0.004 to 3.93 nmol L−1 (median = 0.030 nmol L−1) for NTME and from 0.001 to 5.684 nmol L−1 (median = 0.043 nmol L−1) for SPME. NTME showed advantages in assessing polar compounds such as alcohols. SPME showed advantages in reproducibility but disadvantages in sensitivity for N-containing compounds. Micro-extraction techniques such as SPME and NTME are well suited for trace VOC profiling over cultures if the limitations of each technique are taken into account. Copyright © 2018 John Wiley & Sons, Ltd.

  5. A COMPARATIVE ANALYSIS OF WEB INFORMATION EXTRACTION TECHNIQUES DEEP LEARNING vs. NAÏVE BAYES vs. BACK PROPAGATION NEURAL NETWORKS IN WEB DOCUMENT EXTRACTION

    Directory of Open Access Journals (Sweden)

    J. Sharmila

    2016-01-01

    Full Text Available Web mining research is becoming more essential nowadays because a great deal of information is managed through the web, and web usage is expanding in an uncontrolled way. A particular framework is required for handling such a large amount of information in the web space. Web mining is ordinarily divided into three major divisions: web content mining, web usage mining and web structure mining. Tak-Lam Wong proposed a web content mining methodology with the aid of Bayesian Networks (BN), learning to separate web data and discover characteristics based on the Bayesian approach. Motivated by that investigation, we propose a web content mining methodology based on a deep learning algorithm. The deep learning algorithm offers an advantage over BN in that BN does not incorporate a learning architecture like the proposed system. The main objective of this investigation is web document extraction using different classification algorithms and their analysis. This work extracts the data from web URLs and compares three classification algorithms: a deep learning algorithm, a naive Bayes algorithm and a back-propagation neural network (BPNN) algorithm. Deep learning is a powerful set of techniques for learning in neural networks that is applied in areas such as computer vision, speech recognition, natural language processing and biometrics; it is a simple classification technique, is applied to subsets of extensive fields, and requires less time for classification. Naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong independence assumptions between the features. The BPNN algorithm is then used for classification. The training and testing datasets contain many URLs, from which the content is extracted.

  6. Information extraction from dynamic PS-InSAR time series using machine learning

    Science.gov (United States)

    van de Kerkhof, B.; Pankratius, V.; Chang, L.; van Swol, R.; Hanssen, R. F.

    2017-12-01

    Due to the increasing number of SAR satellites, with shorter repeat intervals and higher resolutions, SAR data volumes are exploding. Time series analyses of SAR data, i.e. Persistent Scatterer (PS) InSAR, enable the deformation monitoring of the built environment at an unprecedented scale, with hundreds of scatterers per km2, updated weekly. Potential hazards, e.g. due to failure of aging infrastructure, can be detected at an early stage. Yet, this requires the operational data processing of billions of measurement points, over hundreds of epochs, updating this data set dynamically as new data come in, and testing whether points (start to) behave in an anomalous way. Moreover, the quality of PS-InSAR measurements is ambiguous and heterogeneous, which will yield false positives and false negatives. Such analyses are numerically challenging. Here we extract relevant information from PS-InSAR time series using machine learning algorithms. We cluster (group together) time series with similar behaviour, even though they may not be spatially close, such that the results can be used for further analysis. First we reduce the dimensionality of the dataset in order to be able to cluster the data, since applying clustering techniques to high-dimensional datasets often results in unsatisfactory results. Our approach is to apply t-distributed Stochastic Neighbor Embedding (t-SNE), a machine learning algorithm for dimensionality reduction of high-dimensional data to a 2D or 3D map, and to cluster this result using Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The results show that we are able to detect and cluster time series with similar behaviour, which is the starting point for more extensive analysis into the underlying driving mechanisms. The results of the methods are compared to conventional hypothesis testing as well as a Self-Organising Map (SOM) approach. Hypothesis testing is robust and takes the stochastic nature of the observations into account.
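
    The reduce-then-cluster step described above can be sketched in a few lines of scikit-learn. The synthetic "time series" below stand in for PS displacement histories, and the perplexity and DBSCAN parameters are illustrative only.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.manifold import TSNE

rng = np.random.default_rng(1)

# synthetic displacement time series over 50 epochs: subsiding, seasonal, stable
t = np.linspace(0.0, 1.0, 50)
series = np.vstack([
    np.outer(rng.uniform(5, 10, 100), t),                     # linear subsidence
    np.outer(rng.uniform(2, 4, 100), np.sin(2 * np.pi * t)),  # seasonal motion
    rng.normal(0.0, 0.1, (100, 50)),                          # stable points
])

# reduce each series to a 2D embedding, then cluster by density
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(series)
labels = DBSCAN(eps=3.0, min_samples=10).fit_predict(embedding)

print("clusters:", sorted(set(labels) - {-1}),
      "| noise points:", int(np.sum(labels == -1)))
```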

  7. Synthesis of High-Frequency Ground Motion Using Information Extracted from Low-Frequency Ground Motion

    Science.gov (United States)

    Iwaki, A.; Fujiwara, H.

    2012-12-01

    Broadband ground motion computations of scenario earthquakes are often based on hybrid methods that are the combinations of deterministic approach in lower frequency band and stochastic approach in higher frequency band. Typical computation methods for low-frequency and high-frequency (LF and HF, respectively) ground motions are the numerical simulations, such as finite-difference and finite-element methods based on three-dimensional velocity structure model, and the stochastic Green's function method, respectively. In such hybrid methods, LF and HF wave fields are generated through two different methods that are completely independent of each other, and are combined at the matching frequency. However, LF and HF wave fields are essentially not independent as long as they are from the same event. In this study, we focus on the relation among acceleration envelopes at different frequency bands, and attempt to synthesize HF ground motion using the information extracted from LF ground motion, aiming to propose a new method for broad-band strong motion prediction. Our study area is the Kanto area, Japan. We use the K-NET and KiK-net surface acceleration data and compute RMS envelopes in five frequency bands: 0.5-1.0 Hz, 1.0-2.0 Hz, 2.0-4.0 Hz, 4.0-8.0 Hz, and 8.0-16.0 Hz. Taking the ratio of the envelopes of adjacent bands, we find that the envelope ratios have stable shapes at each site. The empirical envelope-ratio characteristics are combined with the low-frequency envelope of the target earthquake to synthesize HF ground motion. We have applied the method to M5-class earthquakes and an M7 target earthquake that occurred in the vicinity of the Kanto area, and successfully reproduced the observed HF ground motion of the target earthquake. The method can be applied to a broad-band ground motion simulation for a scenario earthquake by combining numerically-computed low-frequency (~1 Hz) ground motion with the empirical envelope-ratio characteristics to generate broadband ground motion.
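
    A minimal sketch of the band-wise RMS envelope and adjacent-band ratio computation follows, assuming Butterworth band-pass filters and a one-second moving RMS window; both choices are illustrative stand-ins for the authors' processing.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

BANDS = [(0.5, 1.0), (1.0, 2.0), (2.0, 4.0), (4.0, 8.0), (8.0, 16.0)]  # Hz

def rms_envelope(acc, fs, band, window_s=1.0):
    """Band-pass an acceleration record and return its moving RMS envelope."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, acc)
    n = int(window_s * fs)
    kernel = np.ones(n) / n
    return np.sqrt(np.convolve(filtered**2, kernel, mode="same"))

fs = 100
t = np.arange(0, 40, 1 / fs)
acc = np.random.default_rng(2).normal(size=t.size) * np.exp(-0.1 * t)  # toy record

envelopes = [rms_envelope(acc, fs, band) for band in BANDS]
# ratios of adjacent-band envelopes, the site-stable quantity described above
ratios = [hi / (lo + 1e-12) for lo, hi in zip(envelopes, envelopes[1:])]
print(len(ratios), "ratio traces of", ratios[0].size, "samples each")
```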

  8. TempoWordNet: a lexical resource for temporal information extraction

    OpenAIRE

    Hasanuzzaman , Mohammed

    2016-01-01

    The ability to capture the time information conveyed in natural language, where that information is expressed either explicitly, or implicitly, or connotative, is essential to many natural language processing applications such as information retrieval, question answering, automatic summarization, targeted marketing, loan repayment forecasting, and understanding economic patterns. Associating word senses with temporal orientation to grasp the temporal information in language is relatively stra...

  9. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction

    OpenAIRE

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2011-01-01

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. This tool is optimized for processing biomedical scien...

  10. The BEL information extraction workflow (BELIEF): evaluation in the BioCreative V BEL and IAT track

    OpenAIRE

    Madan, Sumit; Hodapp, Sven; Senger, Philipp; Ansari, Sam; Szostak, Justyna; Hoeng, Julia; Peitsch, Manuel; Fluck, Juliane

    2016-01-01

    Network-based approaches have become extremely important in systems biology to achieve a better understanding of biological mechanisms. For network representation, the Biological Expression Language (BEL) is well designed to collate findings from the scientific literature into biological network models. To facilitate encoding and biocuration of such findings in BEL, a BEL Information Extraction Workflow (BELIEF) was developed. BELIEF provides a web-based curation interface, the BELIEF Dashboa...

  11. An Investigation of the Relationship Between Automated Machine Translation Evaluation Metrics and User Performance on an Information Extraction Task

    Science.gov (United States)

    2007-01-01

    more reliable than BLEU and that it is easier to understand in terms familiar to NLP researchers. METEOR researchers at Carnegie Mellon...essential elements of information from output generated by three types of Arabic-English MT engines. The information extraction experiment was one of three...reviewing the task hierarchy and examining the MT output of several engines. A small, prior pilot experiment to evaluate Arabic-English MT engines for

  12. Comparison of Qinzhou bay wetland landscape information extraction by three methods

    Directory of Open Access Journals (Sweden)

    X. Chang

    2014-04-01

    and OO is 219 km2, 193.70 km2 and 217.40 km2 respectively. The result indicates that SC performs best, followed by the OO approach and then the DT method when used to extract Qinzhou Bay coastal wetland.

  13. Extracting topographic structure from digital elevation data for geographic information-system analysis

    Science.gov (United States)

    Jenson, Susan K.; Domingue, Julia O.

    1988-01-01

    Software tools have been developed at the U.S. Geological Survey's EROS Data Center to extract topographic structure and to delineate watersheds and overland flow paths from digital elevation models. The tools are special-purpose FORTRAN programs interfaced with general-purpose raster and vector spatial analysis and relational data base management packages.
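
    A core step in such topographic tools is the D8 flow-direction pass, in which each cell drains to its steepest downslope neighbour. The Python sketch below illustrates that step on a toy grid; it is a generic textbook version, not the EROS FORTRAN code.

```python
import numpy as np

# D8 neighbour offsets and conventional power-of-two direction codes
OFFSETS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]
CODES = [64, 128, 1, 2, 4, 8, 16, 32]

def d8_flow_direction(dem, cell=30.0):
    """Assign each interior cell the code of its steepest downslope
    neighbour (0 where no neighbour is lower)."""
    out = np.zeros(dem.shape, dtype=int)
    for r in range(1, dem.shape[0] - 1):
        for c in range(1, dem.shape[1] - 1):
            best_code, best_slope = 0, 0.0
            for (dr, dc), code in zip(OFFSETS, CODES):
                dist = cell * (2 ** 0.5 if dr and dc else 1.0)
                slope = (dem[r, c] - dem[r + dr, c + dc]) / dist
                if slope > best_slope:
                    best_code, best_slope = code, slope
            out[r, c] = best_code
    return out

dem = np.array([[9, 9, 9, 9],
                [9, 5, 4, 9],
                [9, 3, 2, 9],
                [9, 9, 1, 9]], dtype=float)
print(d8_flow_direction(dem))  # interior cells drain toward the low corner
```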

  14. Systematically extracting metal- and solvent-related occupational information from free-text responses to lifetime occupational history questionnaires.

    Science.gov (United States)

    Friesen, Melissa C; Locke, Sarah J; Tornow, Carina; Chen, Yu-Cheng; Koh, Dong-Hee; Stewart, Patricia A; Purdue, Mark; Colt, Joanne S

    2014-06-01

    Lifetime occupational history (OH) questionnaires often use open-ended questions to capture detailed information about study participants' jobs. Exposure assessors use this information, along with responses to job- and industry-specific questionnaires, to assign exposure estimates on a job-by-job basis. An alternative approach is to use information from the OH responses and the job- and industry-specific questionnaires to develop programmable decision rules for assigning exposures. As a first step in this process, we developed a systematic approach to extract the free-text OH responses and convert them into standardized variables that represented exposure scenarios. Our study population comprised 2408 subjects, reporting 11991 jobs, from a case-control study of renal cell carcinoma. Each subject completed a lifetime OH questionnaire that included verbatim responses, for each job, to open-ended questions including job title, main tasks and activities (task), tools and equipment used (tools), and chemicals and materials handled (chemicals). Based on a review of the literature, we identified exposure scenarios (occupations, industries, tasks/tools/chemicals) expected to involve possible exposure to chlorinated solvents, trichloroethylene (TCE) in particular, lead, and cadmium. We then used a SAS macro to review the information reported by study participants to identify jobs associated with each exposure scenario; this was done using previously coded standardized occupation and industry classification codes, and a priori lists of associated key words and phrases related to possibly exposed tasks, tools, and chemicals. Exposure variables representing the occupation, industry, and task/tool/chemicals exposure scenarios were added to the work history records of the study respondents. Our identification of possibly TCE-exposed scenarios in the OH responses was compared to an expert's independently assigned probability ratings to evaluate whether we missed identifying

  15. MIDAS. An algorithm for the extraction of modal information from experimentally determined transfer functions

    International Nuclear Information System (INIS)

    Durrans, R.F.

    1978-12-01

    In order to design reactor structures to withstand the large flow and acoustic forces present, it is necessary to know something of their dynamic properties. In many cases these properties cannot be predicted theoretically and it is necessary to determine them experimentally. The algorithm MIDAS (Modal Identification for the Dynamic Analysis of Structures) which has been developed at B.N.L. for extracting these structural properties from experimental data is described. (author)

  16. Extracting Information about the Initial State from the Black Hole Radiation.

    Science.gov (United States)

    Lochan, Kinjalk; Padmanabhan, T

    2016-02-05

    The crux of the black hole information paradox is related to the fact that the complete information about the initial state of a quantum field in a collapsing spacetime is not available to future asymptotic observers, belying the expectations from a unitary quantum theory. We study the imprints of the initial quantum state contained in a specific class of distortions of the black hole radiation and identify the classes of in states that can be partially or fully reconstructed from the information contained within. Even for the general in state, we can uncover some specific information. These results suggest that a classical collapse scenario ignores this richness of information in the resulting spectrum and a consistent quantum treatment of the entire collapse process might allow us to retrieve much more information from the spectrum of the final radiation.

  17. Point Cloud Classification of Tesserae from Terrestrial Laser Data Combined with Dense Image Matching for Archaeological Information Extraction

    Science.gov (United States)

    Poux, F.; Neuville, R.; Billen, R.

    2017-08-01

    Reasoning from information extraction given by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud from both terrestrial laser scanning and dense image matching. Using 18 features, including sensor-biased data, each tessera in the high-density point cloud from the 3D-captured complex mosaics of Germigny-des-prés (France) is segmented via a colour-based multi-scale abstraction that extracts connected components. A 2D surface and outline polygon of each tessera is generated by a RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify every tessera based on its size, surface, shape, material properties and its neighbours' classes. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.
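
    The per-tessera plane-and-outline step can be condensed into a small RANSAC loop plus a convex hull, as sketched below on a synthetic planar patch; the iteration count and inlier threshold are illustrative.

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(3)

def ransac_plane(points, n_iter=200, threshold=0.002):
    """Fit a plane n.x = d to 3D points with a basic RANSAC loop;
    returns the model and the boolean inlier mask."""
    best_model, best_inliers = None, None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(normal) < 1e-12:
            continue  # degenerate (collinear) sample
        normal = normal / np.linalg.norm(normal)
        d = normal @ sample[0]
        inliers = np.abs(points @ normal - d) < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (normal, d), inliers
    return best_model, best_inliers

# toy tessera: a small, slightly noisy planar patch (coordinates in metres)
xy = rng.uniform(0.0, 0.01, (300, 2))
z = 0.05 + rng.normal(0.0, 0.0005, 300)
points = np.column_stack([xy, z])

(normal, d), inliers = ransac_plane(points)
# outline polygon: convex hull of the inliers projected onto the plane (here ~xy)
hull = ConvexHull(points[inliers][:, :2])
print("inliers:", int(inliers.sum()), "| outline vertices:", len(hull.vertices))
```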

  18. Lung region extraction based on the model information and the inversed MIP method by using chest CT images

    International Nuclear Information System (INIS)

    Tomita, Toshihiro; Miguchi, Ryosuke; Okumura, Toshiaki; Yamamoto, Shinji; Matsumoto, Mitsuomi; Tateno, Yukio; Iinuma, Takeshi; Matsumoto, Toru.

    1997-01-01

    We developed a lung region extraction method based on the model information and the inversed MIP method in the Lung Cancer Screening CT (LSCT). The original model is composed of typical 3-D lung contour lines, a body axis, an apical point, and a convex hull. First, the body axis, the apical point, and the convex hull are automatically extracted from the input image. Next, the model is properly transformed to fit those of the input image by the affine transformation. Using the same affine transformation coefficients, the typical lung contour lines are also transferred, and these correspond to rough contour lines of the input image. Experimental results for 68 samples showed this method to be quite promising. (author)

  19. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction.

    Science.gov (United States)

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2010-11-13

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. This tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested our tool on its impact to the task of PPI extraction and it improved the f-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.

  20. Unsupervised improvement of named entity extraction in short informal context using disambiguation clues

    NARCIS (Netherlands)

    Habib, Mena Badieh; van Keulen, Maurice

    2012-01-01

    Short context messages (like tweets and SMS’s) are a potentially rich source of continuously and instantly updated information. Shortness and informality of such messages are challenges for Natural Language Processing tasks. Most efforts done in this direction rely on machine learning techniques

  1. Automated Methods to Extract Patient New Information from Clinical Notes in Electronic Health Record Systems

    Science.gov (United States)

    Zhang, Rui

    2013-01-01

    The widespread adoption of Electronic Health Record (EHR) has resulted in rapid text proliferation within clinical care. Clinicians' use of copying and pasting functions in EHR systems further compounds this by creating a large amount of redundant clinical information in clinical documents. A mixture of redundant information (especially outdated…

  2. Extracting principles for information management adaptability during crisis response : A dynamic capability view

    NARCIS (Netherlands)

    Bharosa, N.; Janssen, M.F.W.H.A.

    2010-01-01

    During crises, relief agency commanders have to make decisions in a complex and uncertain environment, requiring them to continuously adapt to unforeseen environmental changes. In the process of adaptation, the commanders depend on information management systems for information. Yet there are still

  3. Extracting protein dynamics information from overlapped NMR signals using relaxation dispersion difference NMR spectroscopy.

    Science.gov (United States)

    Konuma, Tsuyoshi; Harada, Erisa; Sugase, Kenji

    2015-12-01

    Protein dynamics plays important roles in many biological events, such as ligand binding and enzyme reactions. NMR is mostly used for investigating such protein dynamics in a site-specific manner. Recently, NMR has been actively applied to large proteins and intrinsically disordered proteins, which are attractive research targets. However, signal overlap, which is often observed for such proteins, hampers accurate analysis of NMR data. In this study, we have developed a new methodology called relaxation dispersion difference that can extract conformational exchange parameters from overlapped NMR signals measured using relaxation dispersion spectroscopy. In relaxation dispersion measurements, the signal intensities of fluctuating residues vary according to the Carr-Purcell-Meiboom-Gill pulsing interval, whereas those of non-fluctuating residues are constant. Therefore, subtraction of each relaxation dispersion spectrum from that with the highest signal intensities, measured at the shortest pulsing interval, leaves only the signals of the fluctuating residues. This is the principle of the relaxation dispersion difference method. This new method enabled us to extract exchange parameters from overlapped signals of heme oxygenase-1, which is a relatively large protein. The results indicate that the structural flexibility of a kink in the heme-binding site is important for efficient heme binding. Relaxation dispersion difference requires neither selectively labeled samples nor modification of pulse programs; thus it will have wide applications in protein dynamics analysis.

  4. Extracting protein dynamics information from overlapped NMR signals using relaxation dispersion difference NMR spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Konuma, Tsuyoshi [Icahn School of Medicine at Mount Sinai, Department of Structural and Chemical Biology (United States); Harada, Erisa [Suntory Foundation for Life Sciences, Bioorganic Research Institute (Japan); Sugase, Kenji, E-mail: sugase@sunbor.or.jp, E-mail: sugase@moleng.kyoto-u.ac.jp [Kyoto University, Department of Molecular Engineering, Graduate School of Engineering (Japan)

    2015-12-15

    Protein dynamics plays important roles in many biological events, such as ligand binding and enzyme reactions. NMR is mostly used for investigating such protein dynamics in a site-specific manner. Recently, NMR has been actively applied to large proteins and intrinsically disordered proteins, which are attractive research targets. However, signal overlap, which is often observed for such proteins, hampers accurate analysis of NMR data. In this study, we have developed a new methodology called relaxation dispersion difference that can extract conformational exchange parameters from overlapped NMR signals measured using relaxation dispersion spectroscopy. In relaxation dispersion measurements, the signal intensities of fluctuating residues vary according to the Carr-Purcell-Meiboom-Gill pulsing interval, whereas those of non-fluctuating residues are constant. Therefore, subtraction of each relaxation dispersion spectrum from that with the highest signal intensities, measured at the shortest pulsing interval, leaves only the signals of the fluctuating residues. This is the principle of the relaxation dispersion difference method. This new method enabled us to extract exchange parameters from overlapped signals of heme oxygenase-1, which is a relatively large protein. The results indicate that the structural flexibility of a kink in the heme-binding site is important for efficient heme binding. Relaxation dispersion difference requires neither selectively labeled samples nor modification of pulse programs; thus it will have wide applications in protein dynamics analysis.
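
    The subtraction principle described in these two records reduces to a simple array operation. The schematic sketch below applies it to synthetic peak intensities; it is an illustration of the idea, not real NMR data processing.

```python
import numpy as np

# synthetic peak intensities for 5 residues at 4 CPMG pulsing intervals
# (rows ordered from the longest to the shortest pulsing interval)
intensities = np.array([
    [0.70, 1.00, 0.60, 1.00, 0.75],  # longest interval: fluctuating residues attenuated
    [0.80, 1.00, 0.70, 1.00, 0.85],
    [0.90, 1.00, 0.85, 1.00, 0.95],
    [1.00, 1.00, 1.00, 1.00, 1.00],  # shortest interval: highest intensities
])

reference = intensities[-1]           # spectrum at the shortest pulsing interval
difference = reference - intensities  # non-fluctuating residues cancel out

# residues whose difference exceeds the noise floor are the fluctuating ones
fluctuating = np.where(difference.max(axis=0) > 0.05)[0]
print("fluctuating residues:", fluctuating)  # [0 2 4]
```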

  5. Wavelet analysis of molecular dynamics: Efficient extraction of time-frequency information in ultrafast optical processes

    International Nuclear Information System (INIS)

    Prior, Javier; Castro, Enrique; Chin, Alex W.; Almeida, Javier; Huelga, Susana F.; Plenio, Martin B.

    2013-01-01

    New experimental techniques based on nonlinear ultrafast spectroscopies have been developed over the last few years, and have been demonstrated to provide powerful probes of quantum dynamics in different types of molecular aggregates, including both natural and artificial light harvesting complexes. Fourier transform-based spectroscopies have been particularly successful, yet “complete” spectral information normally necessitates the loss of all information on the temporal sequence of events in a signal. This information though is particularly important in transient or multi-stage processes, in which the spectral decomposition of the data evolves in time. By going through several examples of ultrafast quantum dynamics, we demonstrate that the use of wavelets provides an efficient and accurate way to simultaneously acquire both temporal and frequency information about a signal, and we argue that this greatly aids the elucidation and interpretation of the physical processes responsible for non-stationary spectroscopic features, such as those encountered in coherent excitonic energy transport.
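
    A minimal illustration of the wavelet idea: a continuous wavelet transform of a signal whose frequency content changes mid-way retains both when and at what frequency the change happens. The sketch below assumes the PyWavelets package and a Morlet wavelet; the scales and test signal are arbitrary.

```python
import numpy as np
import pywt  # PyWavelets

# a signal whose spectral content changes mid-way: 5 Hz, then 20 Hz
fs = 200
t = np.arange(0, 4, 1 / fs)
signal = np.where(t < 2, np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 20 * t))

# continuous wavelet transform: |coefficients| form a time-frequency map
scales = np.arange(1, 64)
coef, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)
power = np.abs(coef)

# dominant frequency in each half of the record
half = len(t) // 2
early = freqs[power[:, :half].mean(axis=1).argmax()]
late = freqs[power[:, half:].mean(axis=1).argmax()]
print(f"dominant early: {early:.1f} Hz, late: {late:.1f} Hz")  # ~5 Hz, ~20 Hz
```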

  6. Extracting information from an ensemble of GCMs to reliably assess future global runoff change

    NARCIS (Netherlands)

    Sperna Weiland, F.C.; Beek, L.P.H. van; Weerts, A.H.; Bierkens, M.F.P.

    2011-01-01

    Future runoff projections derived from different global climate models (GCMs) show large differences. Therefore, within this study, the information from multiple GCMs has been combined to better assess hydrological changes. For projections of precipitation and temperature the Reliability ensemble

  7. Investigation of the Impact of Extracting and Exchanging Health Information by Using Internet and Social Networks.

    Science.gov (United States)

    Pistolis, John; Zimeras, Stelios; Chardalias, Kostas; Roupa, Zoe; Fildisis, George; Diomidous, Marianna

    2016-06-01

    Social networks (1) have been embedded in our daily life for a long time. They constitute a powerful tool used nowadays for both searching and exchanging information on different issues through Internet search engines (Google, Bing, etc.) and social networks (Facebook, Twitter, etc.). This paper presents the results of research on the frequency and the type of usage of the Internet and social networks by the general public and health professionals. The objectives of the research were focused on investigating how frequently both individuals and health practitioners seek and meticulously search for health information in social media. Exchanging information is a procedure that raises issues of reliability and quality of information. In this research, advanced statistical techniques are used to investigate the participants' profiles in using social networks for searching and exchanging information on health issues. Based on the answers, 93% of the people use the Internet to find information on health subjects. Considering principal component analysis, the most important health subjects were nutrition (0.719%), respiratory issues (0.79%), cardiological issues (0.777%), psychological issues (0.667%) and total (73.8%). The research results, based on different statistical techniques, revealed that 61.2% of the males and 56.4% of the females intended to use social networks for searching medical information. Based on the principal component analysis, the most important sources the participants mentioned were the use of the Internet and social networks for exchanging information on health issues. These sources proved to be of paramount importance to the participants of the study. The same holds for nursing, medical and administrative staff in hospitals.

  8. Amplitude extraction in pseudoscalar-meson photoproduction: towards a situation of complete information

    International Nuclear Information System (INIS)

    Nys, Jannes; Vrancx, Tom; Ryckebusch, Jan

    2015-01-01

    A complete set for pseudoscalar-meson photoproduction is a minimum set of observables from which one can determine the underlying reaction amplitudes unambiguously. The complete sets considered in this work involve single- and double-polarization observables. It is argued that for extracting amplitudes from data, the transversity representation of the reaction amplitudes offers advantages over alternate representations. It is shown that with the available single-polarization data for the p(γ,K+)Λ reaction, the energy and angular dependence of the moduli of the normalized transversity amplitudes in the resonance region can be determined to a fair accuracy. Determining the relative phases of the amplitudes from double-polarization observables is far less evident. (paper)

  9. The Analysis of Tree Species Distribution Information Extraction and Landscape Pattern Based on Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Yi Zeng

    2017-08-01

    Full Text Available The forest ecosystem is the largest terrestrial vegetation type and plays an irreplaceable role of unique value. At the landscape scale, research on forest landscape pattern has become a topic of great current interest, and within it the study of forest canopy structure is particularly important: canopy structure determines the pathways and strength of energy flow in forests and, to some extent, influences how the ecosystem adjusts to climate and species diversity. Extracting the factors that influence canopy structure and analysing the vegetation distribution pattern are therefore especially important. To address these problems, remote sensing technology is applied to the study, being superior to other technical means because of its timeliness and large-scale monitoring capability. Taking Lingkong Mountain as the study area, the paper uses remote sensing images to analyse the forest distribution pattern and obtain the spatial characteristics of the canopy structure distribution, with DEM data as the basic data for extracting the factors that influence canopy structure. The distribution pattern of trees is further analysed using terrain parameters, spatial analysis tools and quantitative simulation of surface processes. The Hydrological Analysis tool is used to build a distributed hydrological model, and corresponding algorithms are applied to determine surface water flow paths, the river network and basin boundaries. Results show that, at the landscape scale, the distributions of the dominant tree species form patches, and these distributions show spatial heterogeneity that is closely related to terrain factors. Overlay analysis of aspect, slope and the forest distribution pattern yields the areas most suitable for stand growth and the better living conditions.

  10. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    Science.gov (United States)

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues which draw attention to solution-relevant information, and aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.

  11. The Application of Chinese High-Spatial Remote Sensing Satellite Image in Land Law Enforcement Information Extraction

    Science.gov (United States)

    Wang, N.; Yang, R.

    2018-04-01

    Chinese high-resolution (HR) remote sensing satellites have made a huge leap in the past decade. Commercial satellite datasets such as GF-1, GF-2 and ZY-3 images have emerged in recent years; their panchromatic (PAN) resolutions are 2 m, 1 m and 2.1 m, and their multispectral (MS) resolutions are 8 m, 4 m and 5.8 m, respectively. Chinese HR satellite imagery can be downloaded free of charge for public welfare purposes. Local governments have begun to employ more professional technicians to improve traditional land management technology. This paper focuses on analysing the actual requirements of the applications in government land law enforcement in Guangxi Autonomous Region. 66 counties in Guangxi Autonomous Region were selected for illegal land utilization spot extraction from fused Chinese HR images. The procedure comprises: A. Defining illegal land utilization spot types. B. Data collection: GF-1, GF-2 and ZY-3 datasets were acquired in the first half of 2016 and other auxiliary data were collected in 2015. C. Batch processing: HR images were preprocessed in batch through the ENVI/IDL tool. D. Illegal land utilization spot extraction by visual interpretation. E. Obtaining attribute data with the ArcGIS Geoprocessor (GP) model. F. Thematic mapping and surveying. Through analysing the results for 42 counties, law enforcement officials found 1092 illegal land use spots and 16 suspected illegal mining spots. The results show that Chinese HR satellite images have great potential for feature information extraction and that the processing procedure is robust.

  12. Implementation of generalized quantum measurements: Superadditive quantum coding, accessible information extraction, and classical capacity limit

    International Nuclear Information System (INIS)

    Takeoka, Masahiro; Fujiwara, Mikio; Mizuno, Jun; Sasaki, Masahide

    2004-01-01

    Quantum-information theory predicts that when the transmission resource is doubled in quantum channels, the amount of information transmitted can be increased more than twice by a quantum-channel coding technique, whereas the increase is at most twice in classical information theory. This remarkable feature, the superadditive quantum-coding gain, can be implemented by appropriate choices of code words and corresponding quantum decoding which requires a collective quantum measurement. Recently, an experimental demonstration was reported [M. Fujiwara et al., Phys. Rev. Lett. 90, 167906 (2003)]. The purpose of this paper is to describe our experiment in detail. Particularly, a design strategy of quantum-collective decoding in physical quantum circuits is emphasized. We also address the practical implication of the gain on communication performance by introducing the quantum-classical hybrid coding scheme. We show how the superadditive quantum-coding gain, even in a small code length, can boost the communication performance of conventional coding techniques

  13. Extraction of basic roadway information for non-state roads in Florida : [summary].

    Science.gov (United States)

    2015-07-01

    The Florida Department of Transportation (FDOT) maintains a map of all the roads in Florida, : containing over one and a half million road links. For planning purposes, a wide variety : of information, such as stop lights, signage, lane number, and s...

  14. Extracting additional risk managers information from a risk assessment of Listeria monocytogenes in deli meats

    NARCIS (Netherlands)

    Pérez-Rodríguez, F.; Asselt, van E.D.; García-Gimeno, R.M.; Zurera, G.; Zwietering, M.H.

    2007-01-01

    The risk assessment study of Listeria monocytogenes in ready-to-eat foods conducted by the U.S. Food and Drug Administration is an example of an extensive quantitative microbiological risk assessment that could be used by risk analysts and other scientists to obtain information and by managers and

  15. Synthetic aperture radar ship discrimination, generation and latent variable extraction using information maximizing generative adversarial networks

    CSIR Research Space (South Africa)

    Schwegmann, Colin P

    2017-07-01

    Full Text Available such as Synthetic Aperture Radar imagery. To aid in the creation of improved machine learning-based ship detection and discrimination methods this paper applies a type of neural network known as an Information Maximizing Generative Adversarial Network. Generative...

  16. You had me at "Hello": Rapid extraction of dialect information from spoken words.

    Science.gov (United States)

    Scharinger, Mathias; Monahan, Philip J; Idsardi, William J

    2011-06-15

    Research on the neuronal underpinnings of speaker identity recognition has identified voice-selective areas in the human brain with evolutionary homologues in non-human primates who have comparable areas for processing species-specific calls. Most studies have focused on estimating the extent and location of these areas. In contrast, relatively few experiments have investigated the time-course of speaker identity, and in particular, dialect processing and identification by electro- or neuromagnetic means. We show here that dialect extraction occurs speaker-independently, pre-attentively and categorically. We used Standard American English and African-American English exemplars of 'Hello' in a magnetoencephalographic (MEG) Mismatch Negativity (MMN) experiment. The MMN as an automatic change detection response of the brain reflected dialect differences that were not entirely reducible to acoustic differences between the pronunciations of 'Hello'. Source analyses of the M100, an auditory evoked response to the vowels suggested additional processing in voice-selective areas whenever a dialect change was detected. These findings are not only relevant for the cognitive neuroscience of language, but also for the social sciences concerned with dialect and race perception. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. Extraction of indirectly captured information for use in a comparison of offline pH measurement technologies.

    Science.gov (United States)

    Ritchie, Elspeth K; Martin, Elaine B; Racher, Andy; Jaques, Colin

    2017-06-10

    Understanding the causes of discrepancies in pH readings of a sample can allow more robust pH control strategies to be implemented. It was found that 59.4% of differences between two offline pH measurement technologies for an historical dataset lay outside an expected instrument error range of ±0.02 pH. A new variable, Osmo_Res, was created using multiple linear regression (MLR) to extract information indirectly captured in the recorded measurements for osmolality. Principal component analysis and time series analysis were used to validate the expansion of the historical dataset with the new variable Osmo_Res. MLR was used to identify variables strongly correlated (p<0.05) with differences in pH readings by the two offline pH measurement technologies. These included concentrations of specific chemicals (e.g. glucose) and Osmo_Res, indicating culture medium and bolus feed additions as possible causes of discrepancies between the offline pH measurement technologies. Temperature was also identified as statistically significant. It is suggested that this was a result of differences in pH-temperature compensations employed by the pH measurement technologies. In summary, a method for extracting indirectly captured information has been demonstrated, and it has been shown that competing pH measurement technologies were not necessarily interchangeable at the desired level of control (±0.02 pH). Copyright © 2017 Elsevier B.V. All rights reserved.
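
    One plausible reading of the Osmo_Res construction, sketched below, is the residual of a multiple linear regression of osmolality on the recorded chemical concentrations; the variable names and data are hypothetical and the authors' exact regression is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)

# hypothetical culture records: chemical concentrations and measured osmolality
glucose = rng.uniform(2.0, 8.0, 200)   # g/L
lactate = rng.uniform(0.0, 3.0, 200)   # g/L
feed = rng.uniform(0.0, 1.0, 200)      # bolus feed volume fraction
osmolality = 280 + 6 * glucose + 4 * lactate + 15 * feed + rng.normal(0, 2, 200)

X = np.column_stack([glucose, lactate, feed])
model = LinearRegression().fit(X, osmolality)

# the residual: the part of osmolality not explained by the recorded chemicals,
# i.e. information captured only indirectly by the osmolality measurement
osmo_res = osmolality - model.predict(X)
print("R^2 of the MLR:", round(model.score(X, osmolality), 3))
print("residual std:", round(float(osmo_res.std()), 2))
```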

  18. Extracting 3d Semantic Information from Video Surveillance System Using Deep Learning

    Science.gov (United States)

    Zhang, J. S.; Cao, J.; Mao, B.; Shen, D. Q.

    2018-04-01

    At present, intelligent video analysis technology is widely used in various fields. Object tracking is an important part of intelligent video surveillance, but traditional target tracking based on the pixel coordinate system of the image still has some unavoidable problems: tracking based on pixels cannot reflect the real position information of targets, and it is difficult to track objects across scenes. Based on an analysis of Zhengyou Zhang's camera calibration method, this paper presents a method of target tracking in the target's space coordinate system, obtained by converting the 2-D pixel coordinates of the target into 3-D coordinates. As can be seen from the experimental results, our method recovers the real position changes of targets well and can accurately obtain the trajectory of the target in space.
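
    A common way to realize the 2-D-to-3-D conversion, assuming the target moves on a known ground plane (Z = 0), is to back-project the pixel ray and intersect it with that plane. The sketch below does this with illustrative intrinsics and extrinsics standing in for values from Zhang-style calibration; the paper's full tracking pipeline is not reproduced.

```python
import numpy as np

# intrinsics K and extrinsics R, t of the form produced by Zhang-style
# calibration; the values below are illustrative stand-ins
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])  # camera centre 5 m from the ground plane

def pixel_to_ground(u, v):
    """Back-project pixel (u, v) onto the world ground plane Z = 0."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    ray_world = R.T @ ray_cam                           # rotate ray into world frame
    centre = -R.T @ t                                   # camera centre, world frame
    s = -centre[2] / ray_world[2]                       # scale to intersect Z = 0
    return centre + s * ray_world

print(pixel_to_ground(320, 240))  # principal ray hits the world origin
print(pixel_to_ground(400, 240))  # an offset pixel lands 0.5 m to the side
```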

  19. Information extracting and processing with diffraction enhanced imaging of X-ray

    International Nuclear Information System (INIS)

    Chen Bo; Chinese Academy of Science, Beijing; Chen Chunchong; Jiang Fan; Chen Jie; Ming Hai; Shu Hang; Zhu Peiping; Wang Junyue; Yuan Qingxi; Wu Ziyu

    2006-01-01

    X-ray imaging at high energies has been used for many years in many fields. Conventional X-ray imaging is based on differences in absorption within a sample, so it is difficult to distinguish different tissues of a biological sample because their differences in absorption are small. The authors use the diffraction enhanced imaging (DEI) method, taking images of absorption, extinction, scattering and refractivity. Finally, the authors present high-resolution pictures with all of this information combined. (authors)

  20. Architecture and data processing alternatives for the TSE computer. Volume 2: Extraction of topological information from an image by the Tse computer

    Science.gov (United States)

    Jones, J. R.; Bodenheimer, R. E.

    1976-01-01

    A simple programmable Tse processor organization and arithmetic operations necessary for extraction of the desired topological information are described. Hardware additions to this organization are discussed along with trade-offs peculiar to the Tse computing concept. An improved organization is presented along with the complementary software for the various arithmetic operations. The performance of the two organizations is compared in terms of speed, power, and cost. Software routines developed to extract the desired information from an image are included.

  1. What do professional forecasters' stock market expectations tell us about herding, information extraction and beauty contests?

    DEFF Research Database (Denmark)

    Rangvid, Jesper; Schmeling, M.; Schrimpf, A.

    2013-01-01

    We study how professional forecasters form equity market expectations based on a new micro-level dataset which includes rich cross-sectional information about individual characteristics. We focus on testing whether agents rely on the beliefs of others, i.e., consensus expectations, when forming their own forecasts. We find strong evidence that the average of all forecasters' beliefs influences an individual's own forecast. This effect is stronger for young and less experienced forecasters as well as forecasters whose pay depends more on performance relative to a benchmark. Further tests indicate

  2. CLASSIFICATION OF INFORMAL SETTLEMENTS THROUGH THE INTEGRATION OF 2D AND 3D FEATURES EXTRACTED FROM UAV DATA

    Directory of Open Access Journals (Sweden)

    C. M. Gevaert

    2016-06-01

    Full Text Available Unmanned Aerial Vehicles (UAVs) are capable of providing very high resolution and up-to-date information to support informal settlement upgrading projects. In order to provide accurate basemaps, urban scene understanding through the identification and classification of buildings and terrain is imperative. However, common characteristics of informal settlements such as small, irregular buildings with heterogeneous roof material and large presence of clutter challenge state-of-the-art algorithms. Especially the dense buildings and steeply sloped terrain cause difficulties in identifying elevated objects. This work investigates how 2D radiometric and textural features, 2.5D topographic features, and 3D geometric features obtained from UAV imagery can be integrated to obtain a high classification accuracy in challenging classification problems for the analysis of informal settlements. It compares the utility of pixel-based and segment-based features obtained from an orthomosaic and DSM with point-based and segment-based features extracted from the point cloud to classify an unplanned settlement in Kigali, Rwanda. Findings show that the integration of 2D and 3D features leads to higher classification accuracies.

  3. A method to extract quantitative information in analyzer-based x-ray phase contrast imaging

    International Nuclear Information System (INIS)

    Pagot, E.; Cloetens, P.; Fiedler, S.; Bravin, A.; Coan, P.; Baruchel, J.; Haertwig, J.; Thomlinson, W.

    2003-01-01

    Analyzer-based imaging is a powerful phase-sensitive technique that generates improved contrast compared to standard absorption radiography. Combining numerically two images taken on either side at ±1/2 of the full width at half-maximum (FWHM) of the rocking curve provides images of 'pure refraction' and of 'apparent absorption'. In this study, a similar approach is made by combining symmetrical images with respect to the peak of the analyzer rocking curve but at general positions, ±α·FWHM. These two approaches do not consider the ultrasmall angle scattering produced by the object independently, which can lead to inconsistent results. An accurate way to separately retrieve the quantitative information intrinsic to the object is proposed. It is based on a statistical analysis of the local rocking curve, and allows one to overcome the problems encountered using the previous approaches

  4. Breast cancer and quality of life: medical information extraction from health forums.

    Science.gov (United States)

    Opitz, Thomas; Aze, Jérome; Bringay, Sandra; Joutard, Cyrille; Lavergne, Christian; Mollevi, Caroline

    2014-01-01

    Internet health forums are a rich textual resource with content generated through free exchanges among patients and, in certain cases, health professionals. We tackle the problem of retrieving clinically relevant information from such forums, with relevant topics being defined from clinical auto-questionnaires. Texts in forums are largely unstructured and noisy, calling for adapted preprocessing and query methods. We minimize the number of false negatives in queries by using a synonym tool to achieve query expansion of initial topic keywords. To avoid false positives, we propose a new measure based on a statistical comparison of frequent co-occurrences in a large reference corpus (Web) to keep only relevant expansions. Our work is motivated by a study of breast cancer patients' health-related quality of life (QoL). We consider topics defined from a breast-cancer specific QoL-questionnaire. We quantify and structure occurrences in posts of a specialized French forum and outline important future developments.
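
    The co-occurrence filter can be illustrated with plain pointwise mutual information over toy corpus counts: candidate expansions that co-occur with the topic keyword far more often than chance are kept. The measure proposed in the paper differs; the sketch below only conveys the filtering idea, and all counts and words are invented.

```python
import math

# toy counts from a reference corpus: document frequencies and co-occurrences
N = 1_000_000                                     # documents in the corpus
df = {"fatigue": 5000, "tiredness": 3000, "bank": 8000}
cooc_with_topic = {"tiredness": 900, "bank": 40}  # co-occurrences with "fatigue"

def pmi(candidate, topic="fatigue"):
    """Pointwise mutual information between a candidate expansion and the topic."""
    p_joint = cooc_with_topic[candidate] / N
    return math.log2(p_joint / ((df[topic] / N) * (df[candidate] / N)))

# keep only expansions whose association with the topic is clearly positive
kept = [w for w in cooc_with_topic if pmi(w) > 2.0]
print({w: round(pmi(w), 2) for w in cooc_with_topic}, "->", kept)  # keeps "tiredness"
```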

  5. EXTRACTION OF BENTHIC COVER INFORMATION FROM VIDEO TOWS AND PHOTOGRAPHS USING OBJECT-BASED IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    M. T. L. Estomata

    2012-07-01

    Full Text Available Mapping benthic cover in deep waters comprises a very small proportion of studies in the field of research. The majority of benthic cover mapping makes use of satellite images and, usually, classification is carried out only for shallow waters. To map the seafloor in optically deep waters, underwater videos and photos are needed. Some researchers have applied this method to underwater photos, but made use of different classification methods such as neural networks and rapid classification via down-sampling. In this study, an attempt was made to use accurate bathymetric data obtained using a multi-beam echo sounder (MBES) as complementary data with the underwater photographs. Due to the absence of a motion reference unit (MRU), which applies correction to the data gathered by the MBES, the accuracy of the said depth data was compromised. Nevertheless, even with the absence of accurate bathymetric data, object-based image analysis (OBIA), which used rule sets based on information such as shape, size, area, relative distance, and spectral information, was still applied. Compared to pixel-based classifications, OBIA was able to classify more specific benthic cover types other than coral and sand, such as rubble and fish. Through the use of rule sets on area, less than or equal to 700 pixels for fish and between 700 and 10,000 pixels for rubble, as well as standard deviation values to distinguish texture, fish and rubble were identified. OBIA produced benthic cover maps that had higher overall accuracy, 93.78±0.85%, as compared to pixel-based methods that had an average accuracy of only 87.30±6.11% (p-value = 0.0001, α = 0.05).
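
    The area and texture rules quoted above translate directly into a small decision function over segment attributes. In the sketch below the area thresholds are those given in the abstract, while the texture (standard deviation) cutoff and the sand/coral fallbacks are illustrative.

```python
def classify_segment(area_px, spectral_std):
    """Rule-based benthic cover label for an image segment. Area thresholds
    follow the abstract; the texture cutoff and fallbacks are illustrative."""
    if area_px <= 700:
        return "fish"
    if area_px <= 10_000:
        # high spectral standard deviation marks the rough texture of rubble
        return "rubble" if spectral_std > 12.0 else "sand"
    return "coral" if spectral_std > 12.0 else "sand"

segments = [(450, 5.0), (3000, 20.0), (3000, 6.0), (25000, 18.0)]
for area, std in segments:
    print(f"area={area:>6} std={std:>5} -> {classify_segment(area, std)}")
```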

  6. Information Extraction and Interpretation Analysis of Mineral Potential Targets Based on ETM+ Data and GIS technology: A Case Study of Copper and Gold Mineralization in Burma

    International Nuclear Information System (INIS)

    Wenhui, Du; Yongqing, Chen; Nana, Guo; Yinglong, Hao; Pengfei, Zhao; Gongwen, Wang

    2014-01-01

    Mineralization-alteration and structure information extraction plays important roles in mineral resource prospecting and assessment using remote sensing data and Geographical Information System (GIS) technology. Taking copper and gold mines in Burma as an example, the authors adopt band ratios, threshold segmentation and principal component analysis (PCA) to extract the hydroxyl alteration information from ETM+ remote sensing images. A digital elevation model (DEM) (30 m spatial resolution) and ETM+ data were used to extract the linear and circular faults that are associated with copper and gold mineralization. Combining geological data with the above information, the weights-of-evidence method and the C-A fractal model were used to integrate the evidence and identify the ore-forming favourable zones in this area. Research results show that the high-grade potential targets coincide with the known copper and gold deposits, and the integrated information can be used in further exploration and mineral resource decision-making.
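
    The band-ratio and PCA steps can be sketched on a toy multispectral stack as below; the 5/7 ratio for hydroxyl-bearing alteration is a standard ETM+ choice consistent with the abstract, while the percentile threshold and scene values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# toy ETM+ scene: 7 bands as a (band, row, col) reflectance stack
scene = rng.uniform(0.1, 0.6, (7, 100, 100))

# band ratio 5/7 highlights hydroxyl-bearing alteration minerals
ratio_57 = scene[4] / scene[6]                    # bands are 0-indexed here
altered = ratio_57 > np.percentile(ratio_57, 95)  # illustrative threshold

# principal component analysis across bands
flat = scene.reshape(7, -1)
flat = flat - flat.mean(axis=1, keepdims=True)
eigvals, eigvecs = np.linalg.eigh(flat @ flat.T / flat.shape[1])
pcs = (eigvecs.T @ flat).reshape(7, 100, 100)     # components, ascending variance

print("flagged pixels:", int(altered.sum()),
      "| top-PC variance share:", round(float(eigvals[-1] / eigvals.sum()), 3))
```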

  7. The BEL information extraction workflow (BELIEF): evaluation in the BioCreative V BEL and IAT track.

    Science.gov (United States)

    Madan, Sumit; Hodapp, Sven; Senger, Philipp; Ansari, Sam; Szostak, Justyna; Hoeng, Julia; Peitsch, Manuel; Fluck, Juliane

    2016-01-01

    Network-based approaches have become extremely important in systems biology to achieve a better understanding of biological mechanisms. For network representation, the Biological Expression Language (BEL) is well designed to collate findings from the scientific literature into biological network models. To facilitate encoding and biocuration of such findings in BEL, a BEL Information Extraction Workflow (BELIEF) was developed. BELIEF provides a web-based curation interface, the BELIEF Dashboard, that incorporates text mining techniques to support the biocurator in the generation of BEL networks. The underlying UIMA-based text mining pipeline (BELIEF Pipeline) uses several named entity recognition processes and relationship extraction methods to detect concepts and BEL relationships in literature. The BELIEF Dashboard allows easy curation of the automatically generated BEL statements and their context annotations. Resulting BEL statements and their context annotations can be syntactically and semantically verified to ensure consistency in the BEL network. In summary, the workflow supports experts in different stages of systems biology network building. Based on the BioCreative V BEL track evaluation, we show that the BELIEF Pipeline automatically extracts relationships with an F-score of 36.4% and fully correct statements can be obtained with an F-score of 30.8%. Participation in the BioCreative V Interactive task (IAT) track with BELIEF revealed a System Usability Scale (SUS) score of 67. Considering the complexity of the task for new users (learning BEL, working with a completely new interface, and performing complex curation), a score so close to the overall SUS average highlights the usability of BELIEF. Database URL: BELIEF is available at http://www.scaiview.com/belief/. © The Author(s) 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Citizen-Centric Urban Planning through Extracting Emotion Information from Twitter in an Interdisciplinary Space-Time-Linguistics Algorithm

    Directory of Open Access Journals (Sweden)

    Bernd Resch

    2016-07-01

    Full Text Available Traditional urban planning processes typically happen in offices and behind desks. Modern types of civic participation can enhance those processes by acquiring citizens’ ideas and feedback in participatory sensing approaches like “People as Sensors”. As such, citizen-centric planning can be achieved by analysing Volunteered Geographic Information (VGI) data such as Twitter tweets and posts from other social media channels. These user-generated data comprise several information dimensions, such as spatial and temporal information, and textual content. However, in previous research, these dimensions were generally examined separately in single-disciplinary approaches, which does not allow for holistic conclusions in urban planning. This paper introduces TwEmLab, an interdisciplinary approach towards extracting citizens’ emotions in different locations within a city. More concretely, we analyse tweets in three dimensions (space, time, and linguistics), based on similarities between each pair of tweets as defined by a specific set of functional relationships in each dimension. We use a graph-based semi-supervised learning algorithm to classify the data into discrete emotions (happiness, sadness, fear, anger/disgust, none). Our proposed solution allows tweets to be classified into emotion classes in a multi-parametric approach. Additionally, we created a manually annotated gold standard that can be used to evaluate TwEmLab’s performance. Our experimental results show that we are able to identify tweets carrying emotions and that our approach bears extensive potential to reveal new insights into citizens’ perceptions of the city.
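
    A minimal sketch of the graph-based semi-supervised step is given below; scikit-learn's LabelSpreading stands in for the paper's specific algorithm, and the TF-IDF features cover only the linguistic dimension, so treat it as a toy illustration rather than TwEmLab itself.

```python
# Hedged sketch of graph-based semi-supervised emotion classification;
# the texts and label codes are made up.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.semi_supervised import LabelSpreading

texts = ["sunny picnic in the park", "stuck in traffic again", "lovely concert tonight"]
labels = np.array([0, 1, -1])   # 0 = happiness, 1 = anger/disgust, -1 = unlabelled

X = TfidfVectorizer().fit_transform(texts).toarray()   # linguistic features only
model = LabelSpreading(kernel="knn", n_neighbors=2).fit(X, labels)
print(model.transduction_)      # inferred labels, including the unlabelled tweet
```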

  9. Eodataservice.org: Big Data Platform to Enable Multi-disciplinary Information Extraction from Geospatial Data

    Science.gov (United States)

    Natali, S.; Mantovani, S.; Barboni, D.; Hogan, P.

    2017-12-01

    In 1999, US Vice-President Al Gore outlined the concept of `Digital Earth' as a multi-resolution, three-dimensional representation of the planet to find, visualise and make sense of vast amounts of geo-referenced information on physical and social environments, allowing users to navigate through space and time, accessing historical and forecast data to support scientists, policy-makers, and any other user. The eodataservice platform (http://eodataservice.org/) implements the Digital Earth concept: eodataservice is a cross-domain platform that makes available a large set of multi-year global environmental collections allowing data discovery, visualization, combination, processing and download. It implements a "virtual datacube" approach where data stored in distributed data centers are made available via standardized OGC-compliant interfaces. Dedicated web-based Graphic User Interfaces (based on the ESA-NASA WebWorldWind technology) as well as web-based notebooks (e.g. Jupyter notebook), desktop GIS tools and command line interfaces can be used to access and manipulate the data. The platform can be fully customized to users' needs. So far eodataservice has been used for the following thematic applications: high-resolution satellite data distribution; land surface monitoring using SAR surface deformation data; atmosphere, ocean and climate applications; climate-health applications; urban environment monitoring; safeguard of cultural heritage sites; and support to farmers and (re)insurances in the agricultural field. In the current work, the EO Data Service concept is presented as a key enabling technology; furthermore, various examples are provided to demonstrate the high level of interdisciplinarity of the platform.

  10. A METHOD OF EXTRACTING SHORELINE BASED ON SEMANTIC INFORMATION USING DUAL-WAVELENGTH LiDAR DATA

    Directory of Open Access Journals (Sweden)

    C. Yao

    2017-09-01

    Full Text Available A shoreline is a spatially varying separation between water and land. By utilizing dual-wavelength LiDAR point data together with the semantic information that a shoreline often appears beyond the water surface profile and is observable on the beach, the paper generates the shoreline as follows: (1) Obtain the water surface profile: first we obtain the water surface by roughly selecting water points based on several features of water bodies, then apply a least-squares fitting method to get the whole water trend surface. We then get the ground surface connecting the under-water surface by both the TIN progressive filtering method and a surface interpolation method. After that, the two fitted surfaces are intersected to get the water surface profile of the island. (2) Obtain the sandy beach: we grid all points and select the water-surface-profile grid points as seeds, then extract sandy beach points based on an eight-neighbourhood method and features, yielding all sandy beaches. (3) Get the island shoreline: first we get the sandy beach shoreline based on intensity information, using a threshold value to distinguish wet and dry areas; this gives the shoreline of several sandy beaches. To some extent, the shoreline has the same height values within a small area, so all the sandy shoreline points are used to fit a plane P, and the intersection line of the ground surface and the shoreline plane P can be regarded as the island shoreline. Comparison with the surveyed shoreline shows that the proposed method can successfully extract the shoreline.
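
    The final plane-fitting step can be sketched directly; the least-squares formulation below and the toy points are assumptions for illustration, not the paper's code.

```python
# Hedged sketch of the last step above: fit a plane z = a*x + b*y + c to the
# sandy-shoreline points by least squares. Variable names are illustrative.
import numpy as np

def fit_shoreline_plane(points):
    """points: (N, 3) array of shoreline point coordinates (x, y, z)."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c); the island shoreline is the intersection of
                   # this plane with the interpolated ground surface

pts = np.array([[0.0, 0.0, 1.02], [5.0, 1.0, 1.05], [2.0, 4.0, 0.98], [7.0, 6.0, 1.01]])
a, b, c = fit_shoreline_plane(pts)
```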

  11. An audit of the reliability of influenza vaccination and medical information extracted from eHealth records in general practice.

    Science.gov (United States)

    Regan, Annette K; Gibbs, Robyn A; Effler, Paul V

    2018-05-31

    To evaluate the reliability of information in general practice (GP) electronic health records (EHRs), 2100 adult patients were randomly selected for interview regarding the presence of specific medical conditions and recent influenza vaccination. Agreement between self-report and data extracted from EHRs was compared using Cohen's kappa coefficient (κ) and interpreted in accordance with Altman's Kappa Benchmarking criteria; 377 (18%) patients declined participation, and 608 (29%) could not be contacted. Of the 1115 (53%) remaining, 856 (77%) were active patients (≥3 visits to the GP practice in the last two years) who provided complete information for analysis. Although a higher proportion of patients self-reported being vaccinated or having a medical condition compared to the EHR (50.7% vs 36.9%, and 39.4% vs 30.3%, respectively), there was "good" agreement between self-report and EHR for both vaccination status (κ = 0.67) and medical conditions (κ = 0.66). These findings suggest EHRs may be useful for public health surveillance.
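
    Cohen's kappa corrects observed agreement for agreement expected by chance, κ = (p_o − p_e) / (1 − p_e). A minimal sketch with made-up vaccination data:

```python
# Hedged sketch: Cohen's kappa for self-report vs EHR agreement; the toy
# arrays below are invented, not the study's data.
import numpy as np
from sklearn.metrics import cohen_kappa_score

self_report = np.array([1, 1, 0, 1, 0, 0, 1, 0])  # 1 = vaccinated
ehr_record  = np.array([1, 0, 0, 1, 0, 0, 1, 1])

kappa = cohen_kappa_score(self_report, ehr_record)
print(f"kappa = {kappa:.2f}")  # 0.61-0.80 counts as "good" on Altman's benchmark
```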

  12. Extracting information on the spatial variability in erosion rate stored in detrital cooling age distributions in river sands

    Science.gov (United States)

    Braun, Jean; Gemignani, Lorenzo; van der Beek, Peter

    2018-03-01

    One of the main purposes of detrital thermochronology is to provide constraints on the regional-scale exhumation rate and its spatial variability in actively eroding mountain ranges. Procedures that use cooling age distributions coupled with hypsometry and thermal models have been developed in order to extract quantitative estimates of erosion rate and its spatial distribution, assuming steady state between tectonic uplift and erosion. This hypothesis precludes the use of these procedures to assess the likely transient response of mountain belts to changes in tectonic or climatic forcing. Other methods are based on an a priori knowledge of the in situ distribution of ages to interpret the detrital age distributions. In this paper, we describe a simple method that, using the observed detrital mineral age distributions collected along a river, allows us to extract information about the relative distribution of erosion rates in an eroding catchment without relying on a steady-state assumption, the value of thermal parameters or an a priori knowledge of in situ age distributions. The model is based on a relatively low number of parameters describing lithological variability among the various sub-catchments and their sizes and only uses the raw ages. The method we propose is tested against synthetic age distributions to demonstrate its accuracy and the optimum conditions for its use. In order to illustrate the method, we invert age distributions collected along the main trunk of the Tsangpo-Siang-Brahmaputra river system in the eastern Himalaya. From the inversion of the cooling age distributions we predict present-day erosion rates of the catchments along the Tsangpo-Siang-Brahmaputra river system, as well as some of its tributaries. We show that detrital age distributions contain dual information about present-day erosion rate, i.e., from the predicted distribution of surface ages within each catchment and from the relative contribution of any given catchment to the
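
    The core idea can be illustrated with a toy forward model: a downstream detrital age distribution is a mixture of sub-catchment age distributions weighted by erosion rate times area, and the mixture can be inverted for the relative erosion rates. Everything below (distributions, areas, rates) is synthetic and only sketches the principle, not the paper's implementation.

```python
# Hedged sketch: invert a synthetic downstream age mixture for relative
# erosion rates with non-negative least squares.
import numpy as np
from scipy.optimize import nnls

ages = np.linspace(0.0, 20.0, 200)                      # Ma
def pdf(mu, sig):                                       # catchment age PDF
    p = np.exp(-0.5 * ((ages - mu) / sig) ** 2)
    return p / p.sum()

P = np.column_stack([pdf(3, 1), pdf(8, 2), pdf(14, 1.5)])  # 3 sub-catchments
area = np.array([1.0, 2.0, 1.5])                           # relative areas
true_erosion = np.array([2.0, 0.5, 1.0])                   # synthetic rates
w = area * true_erosion
observed = P @ (w / w.sum())                               # downstream mixture

w_hat, _ = nnls(P, observed)                               # invert the mixture
erosion_hat = w_hat / area
print(erosion_hat / erosion_hat.max())                     # relative rates
```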

  13. Extracting information on the spatial variability in erosion rate stored in detrital cooling age distributions in river sands

    Directory of Open Access Journals (Sweden)

    J. Braun

    2018-03-01

    Full Text Available One of the main purposes of detrital thermochronology is to provide constraints on the regional-scale exhumation rate and its spatial variability in actively eroding mountain ranges. Procedures that use cooling age distributions coupled with hypsometry and thermal models have been developed in order to extract quantitative estimates of erosion rate and its spatial distribution, assuming steady state between tectonic uplift and erosion. This hypothesis precludes the use of these procedures to assess the likely transient response of mountain belts to changes in tectonic or climatic forcing. Other methods are based on an a priori knowledge of the in situ distribution of ages to interpret the detrital age distributions. In this paper, we describe a simple method that, using the observed detrital mineral age distributions collected along a river, allows us to extract information about the relative distribution of erosion rates in an eroding catchment without relying on a steady-state assumption, the value of thermal parameters or an a priori knowledge of in situ age distributions. The model is based on a relatively low number of parameters describing lithological variability among the various sub-catchments and their sizes and only uses the raw ages. The method we propose is tested against synthetic age distributions to demonstrate its accuracy and the optimum conditions for its use. In order to illustrate the method, we invert age distributions collected along the main trunk of the Tsangpo–Siang–Brahmaputra river system in the eastern Himalaya. From the inversion of the cooling age distributions we predict present-day erosion rates of the catchments along the Tsangpo–Siang–Brahmaputra river system, as well as some of its tributaries. We show that detrital age distributions contain dual information about present-day erosion rate, i.e., from the predicted distribution of surface ages within each catchment and from the relative contribution of

  14. A COMPARATIVE ANALYSIS OF WEB INFORMATION EXTRACTION TECHNIQUES: DEEP LEARNING vs. NAÏVE BAYES vs. BACK PROPAGATION NEURAL NETWORKS IN WEB DOCUMENT EXTRACTION

    OpenAIRE

    J. Sharmila; A. Subramani

    2016-01-01

    Web mining research is becoming more important these days because a large amount of information is managed through the web, and web usage is expanding in an uncontrolled way. A dedicated framework is required for handling such an extensive amount of information in the web space. Web mining is classified into three major divisions: web content mining, web usage mining and web structure mining. Tak-Lam Wong has proposed a web content mining methodolog...

  15. An analytical framework for extracting hydrological information from time series of small reservoirs in a semi-arid region

    Science.gov (United States)

    Annor, Frank; van de Giesen, Nick; Bogaard, Thom; Eilander, Dirk

    2013-04-01

    small reservoirs in the Upper East Region of Ghana. Reservoirs without obvious large seepage losses (field survey) were selected. To verify this, stable water isotope samples were collected from groundwater upstream and downstream of each reservoir. By looking at possible enrichment of downstream groundwater, a good estimate of seepage can be made in addition to estimates of evaporation. We estimated the evaporative losses and compared them with field measurements using eddy-correlation measurements. Lastly, we determined the cumulative surface runoff curves for the small reservoirs. We will present this analytical framework for extracting hydrological information from time series of small reservoirs and show the first results for our study region of northern Ghana.

  16. Extraction of wind and temperature information from hybrid 4D-Var assimilation of stratospheric ozone using NAVGEM

    Science.gov (United States)

    Allen, Douglas R.; Hoppel, Karl W.; Kuhl, David D.

    2018-03-01

    Extraction of wind and temperature information from stratospheric ozone assimilation is examined within the context of the Navy Global Environmental Model (NAVGEM) hybrid 4-D variational assimilation (4D-Var) data assimilation (DA) system. Ozone can improve the wind and temperature through two different DA mechanisms: (1) through the flow-of-the-day ensemble background error covariance that is blended together with the static background error covariance and (2) via the ozone continuity equation in the tangent linear model and adjoint used for minimizing the cost function. All experiments assimilate actual conventional data in order to maintain a similar realistic troposphere. In the stratosphere, the experiments assimilate simulated ozone and/or radiance observations in various combinations. The simulated observations are constructed for a case study based on a 16-day cycling truth experiment (TE), which is an analysis with no stratospheric observations. The impact of ozone on the analysis is evaluated by comparing the experiments to the TE for the last 6 days, allowing for a 10-day spin-up. Ozone assimilation benefits the wind and temperature when data are of sufficient quality and frequency. For example, assimilation of perfect (no applied error) global hourly ozone data constrains the stratospheric wind and temperature to within ∼2 m s−1 and ∼1 K. This demonstrates that there is dynamical information in the ozone distribution that can potentially be used to improve the stratosphere. This is particularly important for the tropics, where radiance observations have difficulty constraining wind due to breakdown of geostrophic balance. Global ozone assimilation provides the largest benefit when the hybrid blending coefficient is an intermediate value (0.5 was used in this study), rather than 0.0 (no ensemble background error covariance) or 1.0 (no static background error covariance), which is consistent with other hybrid DA studies. When perfect global ozone is
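
    The hybrid blending of mechanism (1) reduces to B_hybrid = beta * B_ens + (1 - beta) * B_static, with beta = 0.5 as in the study. A minimal sketch follows; the matrices are tiny stand-ins, not NAVGEM fields.

```python
# Hedged sketch of a hybrid background error covariance blend.
import numpy as np

def hybrid_covariance(B_static, ensemble, beta=0.5):
    """ensemble: (n_members, n_state) array of background states."""
    anomalies = ensemble - ensemble.mean(axis=0)
    B_ens = anomalies.T @ anomalies / (len(ensemble) - 1)  # flow-of-the-day
    return beta * B_ens + (1.0 - beta) * B_static

rng = np.random.default_rng(0)
B_static = np.eye(4)                      # toy static covariance
members = rng.normal(size=(20, 4))        # toy 20-member ensemble
B = hybrid_covariance(B_static, members, beta=0.5)
```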

  17. Extraction of wind and temperature information from hybrid 4D-Var assimilation of stratospheric ozone using NAVGEM

    Directory of Open Access Journals (Sweden)

    D. R. Allen

    2018-03-01

    Full Text Available Extraction of wind and temperature information from stratospheric ozone assimilation is examined within the context of the Navy Global Environmental Model (NAVGEM) hybrid 4-D variational assimilation (4D-Var) data assimilation (DA) system. Ozone can improve the wind and temperature through two different DA mechanisms: (1) through the flow-of-the-day ensemble background error covariance that is blended together with the static background error covariance and (2) via the ozone continuity equation in the tangent linear model and adjoint used for minimizing the cost function. All experiments assimilate actual conventional data in order to maintain a similar realistic troposphere. In the stratosphere, the experiments assimilate simulated ozone and/or radiance observations in various combinations. The simulated observations are constructed for a case study based on a 16-day cycling truth experiment (TE), which is an analysis with no stratospheric observations. The impact of ozone on the analysis is evaluated by comparing the experiments to the TE for the last 6 days, allowing for a 10-day spin-up. Ozone assimilation benefits the wind and temperature when data are of sufficient quality and frequency. For example, assimilation of perfect (no applied error) global hourly ozone data constrains the stratospheric wind and temperature to within ∼2 m s−1 and ∼1 K. This demonstrates that there is dynamical information in the ozone distribution that can potentially be used to improve the stratosphere. This is particularly important for the tropics, where radiance observations have difficulty constraining wind due to breakdown of geostrophic balance. Global ozone assimilation provides the largest benefit when the hybrid blending coefficient is an intermediate value (0.5 was used in this study), rather than 0.0 (no ensemble background error covariance) or 1.0 (no static background error covariance), which is consistent with other hybrid DA studies. When

  18. Extraction and analysis of reducing alteration information of oil-gas in Bashibulake uranium ore district based on ASTER remote sensing data

    International Nuclear Information System (INIS)

    Ye Fawang; Liu Dechang; Zhao Yingjun; Yang Xu

    2008-01-01

    Beginning with an analysis of the spectral characteristics of sandstone affected by oil-gas reducing alteration in the Bashibulake ore district, an extraction technique for reducing alteration information based on ASTER data is presented. Several remote sensing anomaly zones of reducing alteration information, similar to those in the uranium deposit, are interpreted in the study area. On the basis of the above study, these alteration anomalies are further classified by exploiting the multiple SWIR bands of ASTER data, and the geological significance of each class of alteration anomaly is discussed. As a result, alteration anomalies favourable for uranium prospecting are selected, which provides important information for uranium exploration in the periphery of the Bashibulake uranium ore area. (authors)

  19. A simple method to extract information on anisotropy of particle fluxes from spin-modulated counting rates of cosmic ray telescopes

    International Nuclear Information System (INIS)

    Hsieh, K.C.; Lin, Y.C.; Sullivan, J.D.

    1975-01-01

    A simple method to extract information on anisotropy of particle fluxes from data collected by cosmic ray telescopes on spinning spacecraft, but without sectored accumulators, is presented. Application of this method to specific satellite data demonstrates that it requires no prior assumption about the form of the angular distribution of the fluxes; furthermore, self-consistency ensures the validity of the results thus obtained. The examples show perfect agreement with the corresponding magnetic field directions.
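
    One simple way to recover a first-order anisotropy from spin-modulated counting rates is a linear least-squares fit of a first harmonic to the spin-phase histogram. The sketch below illustrates that idea on synthetic counts; it is not the paper's exact procedure.

```python
# Hedged sketch: fit c(phi) = A0 + A1*cos(phi) + B1*sin(phi) to spin-binned
# counts and read off the anisotropy amplitude and direction.
import numpy as np

phi = np.linspace(0, 2 * np.pi, 16, endpoint=False)   # spin-phase bins
counts = 100 + 12 * np.cos(phi - 0.7) + np.random.default_rng(1).poisson(5, 16)

M = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
(a0, a1, b1), *_ = np.linalg.lstsq(M, counts, rcond=None)

amplitude = np.hypot(a1, b1) / a0        # fractional anisotropy amplitude
direction = np.arctan2(b1, a1)           # phase of maximum flux (radians)
```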

  20. Computerized extraction of information on the quality of diabetes care from free text in electronic patient records of general practitioners

    NARCIS (Netherlands)

    Voorham, Jaco; Denig, Petra

    2007-01-01

    Objective: This study evaluated a computerized method for extracting numeric clinical measurements related to diabetes care from free text in electronic patient records (EPR) of general practitioners. Design and Measurements: Accuracy of this number-oriented approach was compared to manual chart

  1. An image-processing strategy to extract important information suitable for a low-size stimulus pattern in a retinal prosthesis.

    Science.gov (United States)

    Chen, Yili; Fu, Jixiang; Chu, Dawei; Li, Rongmao; Xie, Yaoqin

    2017-11-27

    A retinal prosthesis is designed to help the blind obtain some sight. It consists of an external part and an internal part. The external part is made up of a camera, an image processor and an RF transmitter; the internal part is made up of an RF receiver, an implant chip and a microelectrode array. Currently, the number of microelectrodes is in the hundreds, and the mechanism by which an electrode stimulates the optic nerve is not fully understood. A simple hypothesis is that the pixels in an image correspond to the electrodes. The images captured by the camera should therefore be processed by suitable strategies so that they correspond to the stimulation delivered by the electrodes, which raises the question of how to obtain the important information from the captured image. Here, we use a region-of-interest (ROI) extraction algorithm to retain the important information and remove the redundant information. This paper explains the details of the principles and functions of the ROI. Because we are investigating a real-time system, a fast ROI extraction algorithm is needed; we therefore simplified the ROI algorithm and used it in the external image-processing digital signal processing (DSP) system of the retinal prosthesis. The results show that our image-processing strategies are suitable for a real-time retinal prosthesis and can eliminate redundant information while providing useful information for expression in a low-size image.
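
    A minimal sketch of the ROI idea follows, assuming a variance-based salience scan and nearest-neighbour downsampling to the electrode grid; both choices are assumptions for illustration, not the paper's algorithm.

```python
# Hedged sketch: crop a salient ROI from a camera frame and downsample it to
# an electrode-grid-sized stimulus pattern.
import numpy as np

def roi_stimulus(frame, grid=(10, 10), win=32):
    """frame: 2-D grayscale array; returns a grid-sized stimulus pattern."""
    h, w = frame.shape
    best, best_xy = -1.0, (0, 0)
    for y in range(0, h - win, win // 2):          # coarse salience scan
        for x in range(0, w - win, win // 2):
            v = frame[y:y+win, x:x+win].var()
            if v > best:
                best, best_xy = v, (y, x)
    y, x = best_xy
    roi = frame[y:y+win, x:x+win]
    ys = np.linspace(0, win - 1, grid[0]).astype(int)  # nearest-neighbour
    xs = np.linspace(0, win - 1, grid[1]).astype(int)  # downsampling
    return roi[np.ix_(ys, xs)]

frame = np.random.default_rng(2).random((120, 160))    # toy camera frame
pattern = roi_stimulus(frame)   # 10 x 10 values for the electrode grid
```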

  2. Multineuronal vectorization is more efficient than time-segmental vectorization for information extraction from neuronal activities in the inferior temporal cortex.

    Science.gov (United States)

    Kaneko, Hidekazu; Tamura, Hiroshi; Tate, Shunta; Kawashima, Takahiro; Suzuki, Shinya S; Fujita, Ichiro

    2010-08-01

    In order for patients with disabilities to control assistive devices with their own neural activity, multineuronal spike trains must be efficiently decoded because only limited computational resources can be used to generate prosthetic control signals in portable real-time applications. In this study, we compare the abilities of two vectorizing procedures (multineuronal and time-segmental) to extract information from spike trains during the same total neuron-seconds. In the multineuronal vectorizing procedure, we defined a response vector whose components represented the spike counts of one to five neurons. In the time-segmental vectorizing procedure, a response vector consisted of components representing a neuron's spike counts for one to five time-segment(s) of a response period of 1 s. Spike trains were recorded from neurons in the inferior temporal cortex of monkeys presented with visual stimuli. We examined whether the amount of information of the visual stimuli carried by these neurons differed between the two vectorizing procedures. The amount of information calculated with the multineuronal vectorizing procedure, but not the time-segmental vectorizing procedure, significantly increased with the dimensions of the response vector. We conclude that the multineuronal vectorizing procedure is superior to the time-segmental vectorizing procedure in efficiently extracting information from neuronal signals.
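
    The two vectorizing procedures can be made concrete with synthetic spike trains; the sketch below builds both kinds of response vector for k = 3. Data shapes and firing rates are illustrative.

```python
# Hedged sketch contrasting the two procedures: multineuronal (spike counts
# of k neurons over the whole response) versus time-segmental (one neuron's
# counts in k time bins). Spike data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
spikes = (rng.random((5, 1000)) < 0.02).astype(int)   # 5 neurons, 1000 ms bins

def multineuronal_vector(spike_counts, k):
    """Response vector = total spike counts of neurons 1..k."""
    return spike_counts[:k].sum(axis=1)

def time_segmental_vector(spike_counts, k, neuron=0):
    """Response vector = one neuron's counts in k equal segments of 1 s."""
    segs = np.array_split(spike_counts[neuron], k)
    return np.array([s.sum() for s in segs])

v_multi = multineuronal_vector(spikes, k=3)    # 3 neurons x whole second
v_time  = time_segmental_vector(spikes, k=3)   # 1 neuron x 3 segments
```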

  3. Extraction of Pluvial Flood Relevant Volunteered Geographic Information (VGI) by Deep Learning from User Generated Texts and Photos

    Directory of Open Access Journals (Sweden)

    Yu Feng

    2018-01-01

    Full Text Available In recent years, pluvial floods caused by extreme rainfall events have occurred frequently. Especially in urban areas, they lead to serious damage and endanger citizens’ safety. Therefore, real-time information about such events is desirable. With the increasing popularity of social media platforms, such as Twitter or Instagram, information provided by voluntary users becomes a valuable source for emergency response. Many applications have been built for disaster detection and flood mapping using crowdsourcing. Most of the applications so far have merely used keyword filtering or classical language processing methods to identify disaster-relevant documents based on user-generated texts. As the reliability of social media information is often under criticism, the precision of information retrieval plays a significant role in further analyses. Thus, in this paper, high-quality eyewitness reports of rainfall and flooding events are retrieved from social media by applying deep learning approaches to user-generated texts and photos. Subsequently, events are detected through spatiotemporal clustering and visualized together with these high-quality eyewitness reports in a web map application. Analyses and case studies are conducted during flooding events in Paris, London and Berlin.
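
    A minimal sketch of a text classifier for flood-relevant tweets is given below; a small PyTorch EmbeddingBag model stands in for the paper's deep architectures, and the vocabulary and training examples are toy stand-ins.

```python
# Hedged sketch: classify tweets as flood-relevant or not.
import torch
import torch.nn as nn

vocab = {"<unk>": 0, "flood": 1, "rain": 2, "street": 3, "sunny": 4, "party": 5}
def encode(text):
    return torch.tensor([vocab.get(t, 0) for t in text.lower().split()])

class TweetClassifier(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)    # mean-pools token vectors
        self.fc = nn.Linear(dim, 2)                    # relevant / not relevant
    def forward(self, tokens, offsets):
        return self.fc(self.emb(tokens, offsets))

texts = ["flood on our street", "rain rain flood", "sunny party"]
labels = torch.tensor([1, 1, 0])
toks = [encode(t) for t in texts]
offsets = torch.tensor([0] + [len(t) for t in toks[:-1]]).cumsum(0)
model = TweetClassifier(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()
for _ in range(50):                                    # tiny training loop
    opt.zero_grad()
    loss = loss_fn(model(torch.cat(toks), offsets), labels)
    loss.backward()
    opt.step()
```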

  4. Extracting Information about the Electronic Quality of Organic Solar-Cell Absorbers from Fill Factor and Thickness

    Science.gov (United States)

    Kaienburg, Pascal; Rau, Uwe; Kirchartz, Thomas

    2016-08-01

    Understanding the fill factor in organic solar cells remains challenging due to its complex dependence on a multitude of parameters. By means of drift-diffusion simulations, we thoroughly analyze the fill factor of such low-mobility systems and demonstrate its dependence on a collection coefficient defined in this work. We systematically discuss the effect of different recombination mechanisms, space-charge regions, and contact properties. Based on these findings, we are able to interpret the thickness dependence of the fill factor for different experimental studies from the literature. The presented model provides a facile method to extract the photoactive layer's electronic quality, which is of particular importance for the fill factor. We illustrate that over the past 15 years, the electronic quality has not continuously improved, although organic solar-cell efficiencies have increased steadily over the same period. Only recent reports show the synthesis of polymers for semiconducting films of high electronic quality that are able to produce new efficiency records.

  5. Investigating the feasibility of using partial least squares as a method of extracting salient information for the evaluation of digital breast tomosynthesis

    Science.gov (United States)

    Zhang, George Z.; Myers, Kyle J.; Park, Subok

    2013-03-01

    Digital breast tomosynthesis (DBT) has shown promise for improving the detection of breast cancer, but it has not yet been fully optimized due to a large space of system parameters to explore. A task-based statistical approach [1] is a rigorous method for evaluating and optimizing this promising imaging technique with the use of optimal observers such as the Hotelling observer (HO). However, the high data dimensionality found in DBT has been the bottleneck for the use of a task-based approach in DBT evaluation. To reduce data dimensionality while extracting salient information for performing a given task, efficient channels have to be used for the HO. In the past few years, 2D Laguerre-Gauss (LG) channels, which are a complete basis for stationary backgrounds and rotationally symmetric signals, have been utilized for DBT evaluation [2,3]. But since background and signal statistics from DBT data are neither stationary nor rotationally symmetric, LG channels may not be efficient in providing reliable performance trends as a function of system parameters. Recently, partial least squares (PLS) has been shown to generate efficient channels for the Hotelling observer in detection tasks involving random backgrounds and signals [4]. In this study, we investigate the use of PLS as a method for extracting salient information from DBT in order to better evaluate such systems.
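
    A sketch of PLS-derived channels on stand-in data is given below; scikit-learn's PLSRegression is assumed as the implementation, and the random images are not DBT data.

```python
# Hedged sketch: learn a small set of PLS channels that compress
# high-dimensional images while preserving signal-relevant variation.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n, d = 200, 64 * 64                      # 200 training images, 64x64 pixels
X = rng.normal(size=(n, d))              # random backgrounds
signal = rng.normal(size=d)
y = rng.integers(0, 2, size=n)           # 1 = signal present
X[y == 1] += 0.3 * signal

pls = PLSRegression(n_components=10).fit(X, y.astype(float))
channels = pls.x_weights_                # (d, 10) channel matrix
channelized = X @ channels               # low-dimensional features for the HO
```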

  6. Extraction of information on macromolecular interactions from fluorescence micro-spectroscopy measurements in the presence and absence of FRET

    Science.gov (United States)

    Raicu, Valerică

    2018-06-01

    Investigations of static or dynamic interactions between proteins or other biological macromolecules in living cells often rely on the use of fluorescent tags with two different colors in conjunction with adequate theoretical descriptions of Förster Resonance Energy Transfer (FRET) and molecular-level micro-spectroscopic technology. One such method based on these general principles is FRET spectrometry, which allows determination of the quaternary structure of biomolecules from cell-level images of the distributions, or spectra of occurrence frequency of FRET efficiencies. Subsequent refinements allowed combining FRET frequency spectra with molecular concentration information, thereby providing the proportion of molecular complexes with various quaternary structures as well as their binding/dissociation energies. In this paper, we build on the mathematical principles underlying FRET spectrometry to propose two new spectrometric methods, which have distinct advantages compared to other methods. One of these methods relies on statistical analysis of color mixing in subpopulations of fluorescently tagged molecules to probe molecular association stoichiometry, while the other exploits the color shift induced by FRET to also derive geometric information in addition to stoichiometry. The appeal of the first method stems from its sheer simplicity, while the strength of the second consists in its ability to provide structural information.
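
    The geometric information mentioned above ultimately rests on the distance dependence of FRET efficiency, E = 1/(1 + (r/R0)^6), where R0 is the Förster distance of the dye pair. A tiny sketch (the R0 value is illustrative):

```python
# Hedged sketch: FRET efficiency as a function of donor-acceptor distance.
import numpy as np

def fret_efficiency(r_nm, R0_nm=5.0):
    return 1.0 / (1.0 + (r_nm / R0_nm) ** 6)

r = np.linspace(2.0, 10.0, 5)
print(dict(zip(r.round(1), fret_efficiency(r).round(3))))
```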

  7. Information

    International Nuclear Information System (INIS)

    Boyard, Pierre.

    1981-01-01

    The fear of nuclear energy, and more particularly of radioactive waste, is analyzed in its sociological context. Everyone agrees on the need for information; information is available, but its dissemination is problematic. Reactions of the public are analyzed, and journalists, scientists and teachers all have a role to play. [fr]

  8. Gobi information extraction based on a decision tree classification method

    Institute of Scientific and Technical Information of China (English)

    冯益明; 智长贵; 姚爱冬

    2013-01-01

    Gobi is one of the main landscape types of the earth's surface in the arid region of northwestern China, with a total area of 458 000-757 000 km2, accounting for 4.8%-7.9% of China's total land area. The gobi holds abundant natural resources such as minerals, wind energy and solar power. Meanwhile, many modern cities and towns and some important traffic routes have been constructed in the gobi region, which plays an important role in the economic development of western China. It is therefore important to carry out gobi research under current social and economic conditions, and accurately revealing the distribution and area of gobi is the basis and premise of such research. At present, fieldwork is difficult due to the harsh natural conditions and sparse population of the gobi region, which has led to a scarcity of research documents on the situation, distribution, type classification, transformation and utilization of gobi. The region studied in this paper is a typical gobi distribution region, located in Ejina County in Inner Mongolia, China; its climate is characterized by scarce rain, high evaporation, abundant sunshine, large temperature differences and frequent wind-blown sand. Landsat TM5 and TM7 remote sensing imagery from the plant growth seasons of 2005-2010, a DEM with 30 m spatial resolution, administrative maps, present land use maps, field investigation data and related documents were used as the basic data sources. First, the non-gobi regions were excluded in GIS software by analyzing the DEM. Then, based on an analysis of the spectral characteristics of different typical ground objects, a knowledge-based decision tree information extraction model was constructed to classify the remote sensing imagery, and eroded gobi and accumulated gobi were separated relatively accurately. The overall accuracy of the extracted gobi information reached 91.57%. There were few materials in China on using
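
    A hedged sketch of a decision-tree pixel classifier of the kind described follows; the band/terrain features, class labels and training values are all illustrative, none taken from the paper.

```python
# Hedged sketch: classify pixels as eroded gobi, accumulated gobi, or non-gobi.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# columns: TM band 3, TM band 4, TM band 5, slope (deg); rows: training pixels
X = np.array([[0.18, 0.22, 0.30, 1.0],    # eroded gobi
              [0.15, 0.19, 0.26, 0.5],    # eroded gobi
              [0.25, 0.28, 0.38, 0.8],    # accumulated gobi
              [0.27, 0.30, 0.41, 0.3],    # accumulated gobi
              [0.08, 0.30, 0.12, 0.2],    # non-gobi (vegetation/water)
              [0.06, 0.35, 0.10, 0.1]])   # non-gobi
y = np.array([0, 0, 1, 1, 2, 2])          # 0=eroded, 1=accumulated, 2=non-gobi

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[0.2, 0.21, 0.31, 0.9]]))   # classify a new pixel
```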

  9. Nuclear expert web mining system: monitoring and analysis of nuclear acceptance by information retrieval and opinion extraction on the Internet

    Energy Technology Data Exchange (ETDEWEB)

    Reis, Thiago; Barroso, Antonio C.O.; Imakuma, Kengo, E-mail: thiagoreis@usp.b, E-mail: barroso@ipen.b, E-mail: kimakuma@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    This paper presents a research initiative that aims to collect nuclear-related information and to analyze opinionated texts by mining the hypertextual data environment and social network web sites on the Internet. Unlike previous approaches that employed traditional statistical techniques, a novel Web Mining approach is proposed, built using the concept of Expert Systems, for massive and autonomous data collection and analysis. The initial step has been accomplished, resulting in a framework design that is able to gradually encompass a set of evolving techniques, methods, and theories, in such a way that this work will build a platform upon which new research can be performed more easily by substituting modules or plugging in new ones. Upon completion, it is expected that this research will contribute to the understanding of the population's views on nuclear technology and its acceptance. (author)

  10. Nuclear expert web mining system: monitoring and analysis of nuclear acceptance by information retrieval and opinion extraction on the Internet

    International Nuclear Information System (INIS)

    Reis, Thiago; Barroso, Antonio C.O.; Imakuma, Kengo

    2011-01-01

    This paper presents a research initiative that aims to collect nuclear-related information and to analyze opinionated texts by mining the hypertextual data environment and social network web sites on the Internet. Unlike previous approaches that employed traditional statistical techniques, a novel Web Mining approach is proposed, built using the concept of Expert Systems, for massive and autonomous data collection and analysis. The initial step has been accomplished, resulting in a framework design that is able to gradually encompass a set of evolving techniques, methods, and theories, in such a way that this work will build a platform upon which new research can be performed more easily by substituting modules or plugging in new ones. Upon completion, it is expected that this research will contribute to the understanding of the population's views on nuclear technology and its acceptance. (author)

  11. Machine-learned solutions for three stages of clinical information extraction: the state of the art at i2b2 2010.

    Science.gov (United States)

    de Bruijn, Berry; Cherry, Colin; Kiritchenko, Svetlana; Martin, Joel; Zhu, Xiaodan

    2011-01-01

    As clinical text mining continues to mature, its potential as an enabling technology for innovations in patient care and clinical research is becoming a reality. A critical part of that process is rigorous benchmark testing of natural language processing methods on realistic clinical narrative. In this paper, the authors describe the design and performance of three state-of-the-art text-mining applications from the National Research Council of Canada on evaluations within the 2010 i2b2 challenge. The three systems perform three key steps in clinical information extraction: (1) extraction of medical problems, tests, and treatments, from discharge summaries and progress notes; (2) classification of assertions made on the medical problems; (3) classification of relations between medical concepts. Machine learning systems performed these tasks using large-dimensional bags of features, as derived from both the text itself and from external sources: UMLS, cTAKES, and Medline. Performance was measured per subtask, using micro-averaged F-scores, as calculated by comparing system annotations with ground-truth annotations on a test set. The systems ranked high among all submitted systems in the competition, with the following F-scores: concept extraction 0.8523 (ranked first); assertion detection 0.9362 (ranked first); relationship detection 0.7313 (ranked second). For all tasks, we found that the introduction of a wide range of features was crucial to success. Importantly, our choice of machine learning algorithms allowed us to be versatile in our feature design, and to introduce a large number of features without overfitting and without encountering computing-resource bottlenecks.
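
    As one concrete stage, assertion classification can be sketched as a feature-bag linear classifier; the features and toy sentences below are illustrative and far simpler than the large feature sets the systems actually used.

```python
# Hedged sketch: classify the assertion status of a medical concept mention.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def features(sentence, concept):
    toks = sentence.lower().split()
    return {"concept=" + concept: 1,
            "has_negation": int(any(t in {"no", "denies", "without"} for t in toks)),
            "has_possible": int("possible" in toks),
            **{"tok=" + t: 1 for t in toks}}

train = [("patient denies chest pain", "chest pain", "absent"),
         ("possible pneumonia on x-ray", "pneumonia", "possible"),
         ("chest pain started yesterday", "chest pain", "present")]

X = [features(s, c) for s, c, _ in train]
y = [lab for *_, lab in train]
clf = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000)).fit(X, y)
print(clf.predict([features("patient without fever", "fever")]))
```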

  12. Comparing success levels of different neural network structures in extracting discriminative information from the response patterns of a temperature-modulated resistive gas sensor

    Science.gov (United States)

    Hosseini-Golgoo, S. M.; Bozorgi, H.; Saberkari, A.

    2015-06-01

    Performances of three neural networks, consisting of a multi-layer perceptron, a radial basis function, and a neuro-fuzzy network with local linear model tree training algorithm, in modeling and extracting discriminative features from the response patterns of a temperature-modulated resistive gas sensor are quantitatively compared. For response pattern recording, a voltage staircase containing five steps each with a 20 s plateau is applied to the micro-heater of the sensor, when 12 different target gases, each at 11 concentration levels, are present. In each test, the hidden layer neuron weights are taken as the discriminatory feature vector of the target gas. These vectors are then mapped to a 3D feature space using linear discriminant analysis. The discriminative information content of the feature vectors are determined by the calculation of the Fisher’s discriminant ratio, affording quantitative comparison among the success rates achieved by the different neural network structures. The results demonstrate a superior discrimination ratio for features extracted from local linear neuro-fuzzy and radial-basis-function networks with recognition rates of 96.27% and 90.74%, respectively.
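
    The Fisher's discriminant ratio used above to quantify discriminative information has a simple two-class, per-feature form; a sketch on synthetic feature vectors (the data and this particular variant are illustrative):

```python
# Hedged sketch: Fisher's discriminant ratio as a scalar separability measure.
import numpy as np

def fisher_ratio(Xa, Xb):
    """Xa, Xb: (n_samples, n_features) arrays for two classes."""
    num = (Xa.mean(axis=0) - Xb.mean(axis=0)) ** 2
    den = Xa.var(axis=0) + Xb.var(axis=0)
    return num / den                         # one ratio per feature

rng = np.random.default_rng(0)
gas_a = rng.normal(0.0, 1.0, size=(50, 3))   # feature vectors for gas A
gas_b = rng.normal(1.5, 1.0, size=(50, 3))   # feature vectors for gas B
print(fisher_ratio(gas_a, gas_b))
```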

  13. Comparing success levels of different neural network structures in extracting discriminative information from the response patterns of a temperature-modulated resistive gas sensor

    International Nuclear Information System (INIS)

    Hosseini-Golgoo, S M; Bozorgi, H; Saberkari, A

    2015-01-01

    Performances of three neural networks, consisting of a multi-layer perceptron, a radial basis function, and a neuro-fuzzy network with local linear model tree training algorithm, in modeling and extracting discriminative features from the response patterns of a temperature-modulated resistive gas sensor are quantitatively compared. For response pattern recording, a voltage staircase containing five steps each with a 20 s plateau is applied to the micro-heater of the sensor, when 12 different target gases, each at 11 concentration levels, are present. In each test, the hidden layer neuron weights are taken as the discriminatory feature vector of the target gas. These vectors are then mapped to a 3D feature space using linear discriminant analysis. The discriminative information content of the feature vectors are determined by the calculation of the Fisher’s discriminant ratio, affording quantitative comparison among the success rates achieved by the different neural network structures. The results demonstrate a superior discrimination ratio for features extracted from local linear neuro-fuzzy and radial-basis-function networks with recognition rates of 96.27% and 90.74%, respectively. (paper)

  14. Extracting drug mechanism and pharmacodynamic information from clinical electroencephalographic data using generalised semi-linear canonical correlation analysis

    International Nuclear Information System (INIS)

    Brain, P; Strimenopoulou, F; Ivarsson, M; Wilson, F J; Diukova, A; Wise, R G; Berry, E; Jolly, A; Hall, J E

    2014-01-01

    Conventional analysis of clinical resting electroencephalography (EEG) recordings typically involves assessment of spectral power in pre-defined frequency bands at specific electrodes. EEG is a potentially useful technique in drug development for measuring the pharmacodynamic (PD) effects of a centrally acting compound and hence to assess the likelihood of success of a novel drug based on pharmacokinetic–pharmacodynamic (PK–PD) principles. However, the need to define the electrodes and spectral bands to be analysed a priori is limiting where the nature of the drug-induced EEG effects is initially not known. We describe the extension to human EEG data of a generalised semi-linear canonical correlation analysis (GSLCCA), developed for small animal data. GSLCCA uses data from the whole spectrum, the entire recording duration and multiple electrodes. It provides interpretable information on the mechanism of drug action and a PD measure suitable for use in PK–PD modelling. Data from a study with low (analgesic) doses of the μ-opioid agonist, remifentanil, in 12 healthy subjects were analysed using conventional spectral edge analysis and GSLCCA. At this low dose, the conventional analysis was unsuccessful but plausible results consistent with previous observations were obtained using GSLCCA, confirming that GSLCCA can be successfully applied to clinical EEG data. (paper)
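
    The core canonical-correlation idea behind GSLCCA can be sketched with plain linear CCA, which here stands in for the generalised semi-linear version: find weights over EEG spectral features that correlate maximally with an assumed drug-concentration profile. All data below are synthetic.

```python
# Hedged sketch: a linear CCA stand-in for the GSLCCA idea.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 120                                   # time points
eeg = rng.normal(size=(n, 40))            # 40 spectral features (channels x bands)
pk = np.exp(-np.linspace(0, 5, n))        # assumed drug concentration profile
eeg[:, 7] += 2.0 * pk                     # embed a drug effect in one feature

cca = CCA(n_components=1).fit(eeg, pk.reshape(-1, 1))
pd_signal = cca.transform(eeg)            # PD measure usable in PK-PD modelling
```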

  15. Extracting the Beat: An Experience-dependent Complex Integration of Multisensory Information Involving Multiple Levels of the Nervous System

    Directory of Open Access Journals (Sweden)

    Laurel J. Trainor

    2009-04-01

    Full Text Available In a series of studies we have shown that movement (or vestibular stimulation) that is synchronized to every second or every third beat of a metrically ambiguous rhythm pattern biases people to perceive the meter as a march or as a waltz, respectively. Riggle (this volume) claims that we postulate an "innate", "specialized brain unit" for beat perception that is "directly" influenced by vestibular input. In fact, to the contrary, we argue that experience likely plays a large role in the development of rhythmic auditory-movement interactions, and that rhythmic processing in the brain is widely distributed and includes subcortical and cortical areas involved in sound processing and movement. Further, we argue that vestibular and auditory information are integrated at various subcortical and cortical levels along with input from other sensory modalities, and it is not clear which levels are most important for rhythm processing or, indeed, what a "direct" influence of vestibular input would mean. Finally, we argue that vestibular input to sound location mechanisms may be involved, but likely cannot explain the influence of vestibular input on the perception of auditory rhythm. This remains an empirical question for future research.

  16. Improving the extraction of crisis information in the context of flood, fire, and landslide rapid mapping using SAR and optical remote sensing data

    Science.gov (United States)

    Martinis, Sandro; Clandillon, Stephen; Twele, André; Huber, Claire; Plank, Simon; Maxant, Jérôme; Cao, Wenxi; Caspard, Mathilde; May, Stéphane

    2016-04-01

    Optical and radar satellite remote sensing have proven to provide essential crisis information in case of natural disasters, humanitarian relief activities and civil security issues in a growing number of cases through mechanisms such as the Copernicus Emergency Management Service (EMS) of the European Commission or the International Charter 'Space and Major Disasters'. The aforementioned programs and initiatives make use of satellite-based rapid mapping services aimed at delivering reliable and accurate crisis information after natural hazards. Although these services are increasingly operational, they need to be continuously updated and improved through research and development (R&D) activities. The principal objective of ASAPTERRA (Advancing SAR and Optical Methods for Rapid Mapping), the ESA-funded R&D project being described here, is to improve, automate and, hence, speed-up geo-information extraction procedures in the context of natural hazards response. This is performed through the development, implementation, testing and validation of novel image processing methods using optical and Synthetic Aperture Radar (SAR) data. The methods are mainly developed based on data of the German radar satellites TerraSAR-X and TanDEM-X, the French satellite missions Pléiades-1A/1B as well as the ESA missions Sentinel-1/2 with the aim to better characterize the potential and limitations of these sensors and their synergy. The resulting algorithms and techniques are evaluated in real case applications during rapid mapping activities. The project is focussed on three types of natural hazards: floods, landslides and fires. Within this presentation an overview of the main methodological developments in each topic is given and demonstrated in selected test areas. The following developments are presented in the context of flood mapping: a fully automated Sentinel-1 based processing chain for detecting open flood surfaces, a method for the improved detection of flooded vegetation
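
    A common building block of automated SAR flood mapping is histogram thresholding of backscatter, since smooth open water appears dark; the Otsu-based sketch below is a simplified stand-in for the Sentinel-1 processing chain described, on synthetic data.

```python
# Hedged sketch: split open water from land in a SAR backscatter image.
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)
sar = rng.normal(-8.0, 2.0, size=(128, 128))               # dB backscatter, land
sar[40:80, 30:90] = rng.normal(-17.0, 1.5, size=(40, 60))  # smooth water is dark

t = threshold_otsu(sar)
flood_mask = sar < t                       # candidate open-water pixels
print(t, flood_mask.mean())
```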

  17. The duality of gaze: Eyes extract and signal social information during sustained cooperative and competitive dyadic gaze

    Directory of Open Access Journals (Sweden)

    Michelle eJarick

    2015-09-01

    Full Text Available In contrast to nonhuman primate eyes, which have a dark sclera surrounding a dark iris, human eyes have a white sclera that surrounds a dark iris. This high contrast morphology allows humans to determine quickly and easily where others are looking and infer what they are attending to. In recent years an enormous body of work has used photos and schematic images of faces to study these aspects of social attention, e.g., the selection of the eyes of others and the shift of attention to where those eyes are directed. However, evolutionary theory holds that humans did not develop a high contrast morphology simply to use the eyes of others as attentional cues; rather they sacrificed camouflage for communication, that is, to signal their thoughts and intentions to others. In the present study we demonstrate the importance of this by taking as our starting point the hypothesis that a cornerstone of nonverbal communication is the eye contact between individuals and the time that it is held. In a single simple study we show experimentally that the effect of eye contact can be quickly and profoundly altered merely by having participants, who had never met before, play a game in a cooperative or competitive manner. After the game participants were asked to make eye contact for a prolonged period of time (10 minutes). Those who had played the game cooperatively found this terribly difficult to do, repeatedly talking and breaking gaze. In contrast, those who had played the game competitively were able to stare quietly at each other for a sustained period. Collectively these data demonstrate that when looking at the eyes of a real person one both acquires and signals information to the other person. This duality of gaze is critical to nonverbal communication, with the nature of that communication shaped by the relationship between individuals, e.g., cooperative or competitive.

  18. Monitoring Strategies of Earth Dams by Ground-Based Radar Interferometry: How to Extract Useful Information for Seismic Risk Assessment.

    Science.gov (United States)

    Di Pasquale, Andrea; Nico, Giovanni; Pitullo, Alfredo; Prezioso, Giuseppina

    2018-01-16

    The aim of this paper is to describe how ground-based radar interferometry can provide displacement measurements of earth dam surfaces and of the vibration frequencies of their main concrete infrastructures. Many dams were built several decades ago and were not equipped with in situ sensors embedded in the structure at the time of construction. Earth dams have scattering properties similar to landslides, for which the Ground-Based Synthetic Aperture Radar (GBSAR) technique has so far been extensively applied to study ground displacements. In this work, SAR and Real Aperture Radar (RAR) configurations are used for the measurement of earth dam surface displacements and vibration frequencies of concrete structures, respectively. A methodology for the acquisition of SAR data and the rendering of results is described. The geometrical correction factor, needed to transform the Line-of-Sight (LoS) displacement measurements of GBSAR into an estimate of the horizontal displacement vector of the dam surface, is derived. Furthermore, a methodology for the acquisition of RAR data and the representation of displacement temporal profiles and vibration frequency spectra of dam concrete structures is presented. For this study a Ku-band ground-based radar, equipped with horn antennas having different radiation patterns, has been used. Four case studies, using different radar acquisition strategies specifically developed for the monitoring of earth dams, are examined. The results of this work show the information that a Ku-band ground-based radar can provide to structural engineers for a non-destructive seismic assessment of earth dams.
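
    The LoS-to-horizontal correction mentioned above is, in its simplest form, a cosine projection of the measured displacement onto the assumed motion direction; the sketch below uses that standard form, which may differ in detail from the correction factor derived in the paper.

```python
# Hedged sketch: project a Line-of-Sight (LoS) displacement onto an assumed
# horizontal motion direction. Angles and values are illustrative.
import numpy as np

def los_to_horizontal(d_los, incidence_deg, heading_deg):
    """incidence_deg: LoS angle from vertical; heading_deg: angle between the
    LoS azimuth and the assumed horizontal displacement direction."""
    cos_factor = np.sin(np.radians(incidence_deg)) * np.cos(np.radians(heading_deg))
    return d_los / cos_factor

d_h = los_to_horizontal(d_los=2.1e-3, incidence_deg=70.0, heading_deg=20.0)  # metres
```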

  19. Informe (Report)

    Directory of Open Access Journals (Sweden)

    Egon Lichetenberger

    1950-10-01

    Full Text Available Report by Dr. Egon Lichetenberger to the Board of Directors (Consejo Directivo) of the Faculty on the specialization course in Pathological Anatomy sponsored by the Kellogg Foundation (Department of Pathology).

  20. BioCreative V track 4: a shared task for the extraction of causal network information using the Biological Expression Language.

    Science.gov (United States)

    Rinaldi, Fabio; Ellendorff, Tilia Renate; Madan, Sumit; Clematide, Simon; van der Lek, Adrian; Mevissen, Theo; Fluck, Juliane

    2016-01-01

    Automatic extraction of biological network information is one of the most desired and most complex tasks in biological and medical text mining. Track 4 at BioCreative V attempts to approach this complexity using fragments of large-scale manually curated biological networks, represented in Biological Expression Language (BEL), as training and test data. BEL is an advanced knowledge representation format which has been designed to be both human readable and machine processable. The specific goal of track 4 was to evaluate text mining systems capable of automatically constructing BEL statements from given evidence text, and of retrieving evidence text for given BEL statements. Given the complexity of the task, we designed an evaluation methodology which gives credit to partially correct statements. We identified various levels of information expressed by BEL statements, such as entities, functions, relations, and introduced an evaluation framework which rewards systems capable of delivering useful BEL fragments at each of these levels. The aim of this evaluation method is to help identify the characteristics of the systems which, if combined, would be most useful for achieving the overall goal of automatically constructing causal biological networks from text.

  1. Information extraction from airborne cameras

    NARCIS (Netherlands)

    Dijk, J.; Elands, P.J.M.; Burghouts, G.; Van Eekeren, A.W.M.

    2015-01-01

    In this paper we evaluate the added value of image interpretation techniques for EO sensors mounted on a UAV for operators in an operational setting. We start by evaluating the technological support available for strategic and tactical purposes in a real-time scenario. We discuss different variations

  2. A hybrid approach for robust multilingual toponym extraction and disambiguation

    NARCIS (Netherlands)

    Habib, Mena Badieh; van Keulen, Maurice

Toponym extraction and disambiguation are key topics recently addressed by the fields of Information Extraction and Geographical Information Retrieval. They are highly interdependent processes: not only does toponym extraction effectiveness affect disambiguation, but also

  3. Mapping Robinia Pseudoacacia Forest Health Conditions by Using Combined Spectral, Spatial, and Textural Information Extracted from IKONOS Imagery and Random Forest Classifier

    Directory of Open Access Journals (Sweden)

    Hong Wang

    2015-07-01

    Full Text Available The textural and spatial information extracted from very high resolution (VHR) remote sensing imagery provides complementary information for applications in which the spectral information is not sufficient for identification of spectrally similar landscape features. In this study grey-level co-occurrence matrix (GLCM) textures and a local statistical analysis Getis statistic (Gi), computed from IKONOS multispectral (MS) imagery acquired from the Yellow River Delta in China, along with a random forest (RF) classifier, were used to discriminate Robinia pseudoacacia tree health levels. Specifically, eight GLCM texture features (mean, variance, homogeneity, dissimilarity, contrast, entropy, angular second moment, and correlation) were first calculated from the IKONOS NIR band (Band 4) to determine an optimal window size (13 × 13) and an optimal direction (45°). Then, the optimal window size and direction were applied to the three other IKONOS MS bands (blue, green, and red) for calculating the eight GLCM textures. Next, an optimal distance value (5) and an optimal neighborhood rule (Queen's case) were determined for calculating the four Gi features from the four IKONOS MS bands. Finally, different RF classification results of the three forest health conditions were created: (1) an overall accuracy (OA) of 79.5% produced using the four MS band reflectances only; (2) an OA of 97.1% created with the eight GLCM features calculated from IKONOS Band 4 with the optimal window size of 13 × 13 and direction 45°; (3) an OA of 93.3% created with all 32 GLCM features calculated from the four IKONOS MS bands with a window size of 13 × 13 and direction of 45°; (4) an OA of 94.0% created using the four Gi features calculated from the four IKONOS MS bands with the optimal distance value of 5 and Queen's neighborhood rule; and (5) an OA of 96.9% created with the combined 16 spectral (four), spatial (four), and textural (eight) features. The most important feature ranked by RF
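
    The GLCM step can be sketched with scikit-image for a single 13 × 13 window at 45°; the synthetic window and the subset of texture properties below are illustrative.

```python
# Hedged sketch: per-window GLCM texture features of the kind fed to the RF.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
window = rng.integers(0, 64, size=(13, 13)).astype(np.uint8)  # one 13x13 window

glcm = graycomatrix(window, distances=[1], angles=[np.pi / 4],  # 45 degrees
                    levels=64, symmetric=True, normed=True)

texture = {prop: graycoprops(glcm, prop)[0, 0]
           for prop in ("homogeneity", "dissimilarity", "contrast",
                        "energy", "correlation")}
print(texture)   # per-window texture features for the classifier
```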

  4. Identification of "pathologs" (disease-related genes from the RIKEN mouse cDNA dataset using human curation plus FACTS, a new biological information extraction system

    Directory of Open Access Journals (Sweden)

    Socha Luis A

    2004-04-01

    Full Text Available Abstract Background A major goal in the post-genomic era is to identify and characterise disease susceptibility genes and to apply this knowledge to disease prevention and treatment. Rodents and humans have remarkably similar genomes and share closely related biochemical, physiological and pathological pathways. In this work we utilised the latest information on the mouse transcriptome as revealed by the RIKEN FANTOM2 project to identify novel human disease-related candidate genes. We define a new term "patholog" to mean a homolog of a human disease-related gene encoding a product (transcript, anti-sense or protein) potentially relevant to disease. Rather than just focus on Mendelian inheritance, we applied the analysis to all potential pathologs regardless of their inheritance pattern. Results Bioinformatic analysis and human curation of 60,770 RIKEN full-length mouse cDNA clones produced 2,578 sequences that showed similarity (70–85% identity) to known human-disease genes. Using a newly developed biological information extraction and annotation tool (FACTS) in parallel with human expert analysis of 17,051 MEDLINE scientific abstracts we identified 182 novel potential pathologs. Of these, 36 were identified by computational tools only, 49 by human expert analysis only and 97 by both methods. These pathologs were related to neoplastic (53%), hereditary (24%), immunological (5%), cardio-vascular (4%), or other (14%) disorders. Conclusions Large scale genome projects continue to produce a vast amount of data with potential application to the study of human disease. For this potential to be realised we need intelligent strategies for data categorisation and the ability to link sequence data with relevant literature. This paper demonstrates the power of combining human expert annotation with FACTS, a newly developed bioinformatics tool, to identify novel pathologs from within large-scale mouse transcript datasets.

  5. Extraction of compositional and hydration information of sulfates from laser-induced plasma spectra recorded under Mars atmospheric conditions - Implications for ChemCam investigations on Curiosity rover

    Energy Technology Data Exchange (ETDEWEB)

    Sobron, Pablo, E-mail: pablo.sobron@asc-csa.gc.ca [Department of Earth and Planetary Sciences and McDonnell Center for the Space Sciences, Washington University, St. Louis, MO 63130 (United States); Wang, Alian [Department of Earth and Planetary Sciences and McDonnell Center for the Space Sciences, Washington University, St. Louis, MO 63130 (United States); Sobron, Francisco [Unidad Asociada UVa-CSIC a traves del Centro de Astrobiologia, Parque Tecnologico de Boecillo, Parcela 203, Boecillo (Valladolid), 47151 (Spain)

    2012-02-15

    Given the volume of spectral data required for providing accurate compositional information, and thereby insight into mineralogy and petrology, from laser-induced breakdown spectroscopy (LIBS) measurements, fast data processing tools are a must. This is particularly true during the tactical operations of rover-based planetary exploration missions such as the Mars Science Laboratory rover, Curiosity, which will carry a remote LIBS spectrometer in its science payload. We have developed: an automated fast pre-processing sequence of algorithms for converting a series of LIBS spectra (typically 125) recorded from a single target into a reliable SNR-enhanced spectrum; a dedicated routine to quantify its spectral features; and a set of calibration curves using standard hydrous and multi-cation sulfates. These calibration curves allow deriving the elemental compositions and the degrees of hydration of various hydrous sulfates, one of the two major types of secondary minerals found on Mars. Our quantitative tools are built upon calibration-curve modeling, through the correlation of the elemental concentrations and the peak areas of the atomic emission lines observed in the LIBS spectra of standard samples. At present, we can derive the elemental concentrations of K, Na, Ca, Mg, Fe, Al, S, O, and H in sulfates, as well as the hydration degrees of Ca- and Mg-sulfates, from LIBS spectra obtained in both Earth atmosphere and Mars atmospheric conditions in a Planetary Environment and Analysis Chamber (PEACh). In addition, structural information can be potentially obtained for various Fe-sulfates. - Highlights: ► Routines for LIBS spectral data fast automated processing. ► Identification of elements and determination of the elemental composition. ► Calibration curves for sulfate samples in Earth and Mars atmospheric conditions. ► Fe curves probably related to the crystalline structure of Fe-sulfates. ► Extraction of degree of hydration in hydrous Mg-, Ca-, and Fe-sulfates.
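
    The calibration-curve step lends itself to a short sketch: integrate a baseline-corrected emission line, regress peak area against known concentration for a set of standards, then invert the fit for an unknown. The arrays (`wl`, `standard_spectra`, `standard_conc`, `unknown_spectrum`) and the choice of the Mg II 279.55 nm line are illustrative assumptions, not values taken from the paper.

        import numpy as np

        def peak_area(wavelength, intensity, line, half_width=0.3):
            # integrate one emission line over a narrow window after removing
            # a straight-line baseline drawn between the window edges
            m = (wavelength > line - half_width) & (wavelength < line + half_width)
            baseline = np.linspace(intensity[m][0], intensity[m][-1], m.sum())
            return np.trapz(intensity[m] - baseline, wavelength[m])

        # hypothetical standards: known Mg concentrations vs measured peak areas
        areas = np.array([peak_area(wl, spec, 279.55) for spec in standard_spectra])
        slope, intercept = np.polyfit(areas, standard_conc, 1)

        # invert the calibration curve for an unknown target spectrum
        conc_mg = slope * peak_area(wl, unknown_spectrum, 279.55) + intercept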

  6. The Curvelet Transform in the analysis of 2-D GPR data: Signal enhancement and extraction of orientation-and-scale-dependent information

    Science.gov (United States)

    Tzanis, Andreas

    2013-04-01

    wavelet transform: whereas discrete wavelets are designed to provide sparse representations of functions with point singularities, curvelets are designed to provide sparse representations of functions with singularities on curves. This work investigates the utility of the CT in processing noisy GPR data from geotechnical and archaeometric surveys. The analysis has been performed with the Fast Discrete CT (FDCT - Candès et al., 2006) available from http://www.curvelet.org/ and adapted for use by the matGPR software (Tzanis, 2010). The adaptation comprises a set of driver functions that compute and display the curvelet decomposition of the input GPR section and then allow for the interactive exclusion/inclusion of data (wavefront) components at different scales and angles by cancellation/restoration of the corresponding curvelet coefficients. In this way it is possible to selectively reconstruct the data so as to abstract/retain information of given scales and orientations. It is demonstrated that the CT can be used to: (a) Enhance the S/N ratio by cancelling directional noise wavefronts of any angle of emergence, with particular reference to clutter. (b) Extract geometric information for further scrutiny, e.g. distinguish signals from small and large aperture fractures, faults, bedding etc. (c) Investigate the characteristics of signal propagation (hence material properties), albeit indirectly. This is possible because signal attenuation and temporal localization are closely associated, so that scale and spatio-temporal localization are also closely related. Thus, interfaces embedded in low attenuation domains will tend to produce sharp reflections rich in high frequencies and fine-scale localization. Conversely, interfaces in high attenuation domains will tend to produce dull reflections rich in low frequencies and broad localization. At a single scale and with respect to points (a) and (b) above, the results of the CT processor are comparable to those of the tuneable

  7. Feature extraction from high resolution satellite imagery as an input to the development and rapid update of a METRANS geographic information system (GIS).

    Science.gov (United States)

    2011-06-01

    This report describes an accuracy assessment of extracted features derived from three : subsets of Quickbird pan-sharpened high resolution satellite image for the area of the : Port of Los Angeles, CA. Visual Learning Systems Feature Analyst and D...

  8. Extraction process

    International Nuclear Information System (INIS)

    Rendall, J.S.; Cahalan, M.J.

    1979-01-01

    A process is described for extracting at least two desired constituents from a mineral, using a liquid reagent which produces the constituents, or compounds thereof, in separable form and independently extracting those constituents, or compounds. The process is especially valuable for the extraction of phosphoric acid and metal values from acidulated phosphate rock, the slurry being contacted with selective extractants for phosphoric acid and metal (e.g. uranium) values. In an example, uranium values are oxidized to uranyl form and extracted using an ion exchange resin. (U.K.)

  9. Solvent extraction

    Energy Technology Data Exchange (ETDEWEB)

    Coombs, D.M.; Latimer, E.G.

    1988-01-05

    It is an object of this invention to provide for the demetallization and general upgrading of heavy oil via a solvent extraction process, and to improve the efficiency of solvent extraction operations. The yield and demetallization of product oil from heavy, high-metal-content oil is maximized by solvent extractions which employ any or all of the following techniques: premixing of a minor amount of the solvent with feed and using countercurrent flow for the remaining solvent; use of certain solvent/feed ratios; use of segmental baffle tray extraction column internals and the proper extraction column residence time. The solvent premix/countercurrent flow feature of the invention substantially improves extractions where temperatures and pressures above the critical point of the solvent are used. By using this technique, a greater yield of extract oil can be obtained at the same metals content, or a lower metals-containing extract oil product can be obtained at the same yield. Furthermore, the premixing of part of the solvent with the feed before countercurrent extraction gives high extract oil yields and high-quality demetallization. The solvent/feed ratio features of the invention substantially lower the capital and operating costs for such processes while not suffering a loss in selectivity for metals rejection. The column internals and residence time features of the invention further improve the extractor metals rejection at a constant yield or allow for an increase in extract oil yield at a constant extract oil metals content. 13 figs., 3 tabs.

  10. Extraction method

    International Nuclear Information System (INIS)

    Stary, J.; Kyrs, M.; Navratil, J.; Havelka, S.; Hala, J.

    1975-01-01

    Definitions of the basic terms and relations are given, and current knowledge is described of the possibilities of extracting elements, oxides, covalently bound halogenides and heteropolyacids. The greatest attention is devoted to a detailed analysis of the extraction of chelates and ion associates using diverse agents. For both types of compounds, detailed separation conditions are given and the effects of the individual factors are listed. Attention is also devoted to extractions using mixtures of organic agents and their synergic effects, and to extractions in non-aqueous solvents. The effects of radiation on extraction and the main types of apparatus used for laboratory extractions are described. (L.K.)

  11. Gold mineralogy and extraction

    Energy Technology Data Exchange (ETDEWEB)

    Cashion, J.D.; Brown, L.J. [Monash University, Physics Department (Australia)

    1998-12-15

    Several examples are examined in which Moessbauer spectroscopic analysis of gold mineral samples, treated concentrates and extracted species has provided information not obtainable by competing techniques. Descriptions are given of current work on bacterial oxidation of pyritic ores and on the adsorbed species from gold extracted from cyanide and chloride solutions onto activated carbon and polyurethane foams. The potential benefits for the gold mining industry from Moessbauer studies and some limitations on the use of the technique are also discussed.

  12. Gold mineralogy and extraction

    International Nuclear Information System (INIS)

    Cashion, J.D.; Brown, L.J.

    1998-01-01

    Several examples are examined in which Moessbauer spectroscopic analysis of gold mineral samples, treated concentrates and extracted species has provided information not obtainable by competing techniques. Descriptions are given of current work on bacterial oxidation of pyritic ores and on the adsorbed species from gold extracted from cyanide and chloride solutions onto activated carbon and polyurethane foams. The potential benefits for the gold mining industry from Moessbauer studies and some limitations on the use of the technique are also discussed

  13. Vacuum extraction

    DEFF Research Database (Denmark)

    Maagaard, Mathilde; Oestergaard, Jeanett; Johansen, Marianne

    2012-01-01

    Objectives. To develop and validate an Objective Structured Assessment of Technical Skills (OSATS) scale for vacuum extraction. Design. Two-part study design: primarily, development of a procedure-specific checklist for vacuum extraction; hereafter, validation of the developed OSATS scale for vacuum extraction.

  14. Electromembrane extraction

    DEFF Research Database (Denmark)

    Huang, Chuixiu; Chen, Zhiliang; Gjelstad, Astrid

    2017-01-01

    Electromembrane extraction (EME) was inspired by solid-phase microextraction and developed from hollow fiber liquid-phase microextraction in 2006 by applying an electric field over the supported liquid membrane (SLM). EME provides rapid extraction, efficient sample clean-up and selectivity based...

  15. Bevalac extraction

    International Nuclear Information System (INIS)

    Kalnins, J.G.; Krebs, G.; Tekawa, M.; Cowles, D.; Byrne, T.

    1992-02-01

    This report will describe some of the general features of the Bevatron extraction system, primarily the dependence of the beam parameters and extraction magnet currents on the Bevalac field. The extraction magnets considered are: PFW, XP1, XP2, XS1, XS2, XM1, XM2, XM3, XQ3A and XQ3B. This study is based on 84 past tunes (from 1987 to the present) of various ions (p, He, O, Ne, Si, S, Ar, Ca, Ti, Fe, Nb, La, Au and U), for Bevalac fields from 1.749 to 12.575 kG, where all tunes included a complete set of beam line wire chamber pictures. The circulating beam intensity inside the Bevalac is measured with Beam Induction Electrodes (BIE) in the South Tangent Tank. The extracted beam intensity is usually measured with the Secondary Emission Monitor (SEM) in the F1-Box. For most of the tunes the extraction efficiency, as given by the SEM/BIE ratio, was not recorded in the MCR Log Book, but plotting the available Log Book data as a function of the Bevalac field (see Fig. 9), we find that the extraction efficiency is typically between 30-60% with feedback spill

  16. Information and satisfaction regarding dystocic delivery: a study of 85 patients who underwent instrumental extraction or emergency caesarean section at term

    OpenAIRE

    Dupré, Laura

    2012-01-01

    Introduction: Currently, in France, 33.1% of deliveries require the intervention of an obstetrician for instrumental extraction or an emergency caesarean section. The unpredictable and urgent nature of intrapartum obstetric situations is likely to alter the satisfaction of the mother and the couple. PNP (antenatal preparation) classes are a means of informing patients in advance about this type of birth. Objectives: The main objective of our study was to evaluate the number...

  17. The use of process information for verification of inventory in solvent extraction contactors in near-real-time accounting for reprocessing plants

    International Nuclear Information System (INIS)

    Hakkila, E.A.; Barnes, J.W.; Hafer, J.F.

    1988-01-01

    Near-real-time accounting is being studied as a technique for improving the timeliness of accounting in nuclear fuel reprocessing plants. A major criticism of near-real-time accounting is the perceived disclosure of proprietary data for IAEA verification, particularly in verifying the inventory of solvent extraction contactors. This study indicates that the contribution of uncertainties in estimating the inventory of pulsed columns or mixer settlers may be insignificant compared to uncertainties in measured throughput and measurable inventory for most reprocessing plants, and verification may not be a serious problem. Verification can become a problem for plants with low throughput and low inventory in tanks if contactor inventory variations or uncertainties are greater than ~25%. Each plant must be evaluated with respect to its specific inventory and throughput characteristics

  18. ReportSites - A Computational Method to Extract Positional and Physico- Chemical Information from Large-Scale Proteomic Post-Translational Modification Datasets

    DEFF Research Database (Denmark)

    Edwards, Alistair; Edwards, Gregory; Larsen, Martin Røssel

    2012-01-01

    ...translational modification data sets, wherein patterns of sequence surrounding processed sites may reveal more about the functional and structural requirements of the modification and the biochemical processes that regulate them. Results: We developed ReportSites using a test set of phosphoproteomic data from rat ... physico-chemical environment (local pI and hydrophobicity). These were then also compared to corresponding values extracted from the full database to allow comparison of phosphorylation trends. Conclusions: ReportSites enabled physico-chemical aspects of protein phosphorylation to be deciphered in a test set of eleven thousand phospho sites. Basic properties of modified proteins, such as site location in the context of the complete protein, were also documented. This program can be easily adapted to any post-translational modification (or, indeed, to any defined amino acid sequence), or expanded to include more...

  19. Extraction of chemical information of suspensions using radiative transfer theory to remove multiple scattering effects: application to a model multicomponent system.

    Science.gov (United States)

    Steponavičius, Raimundas; Thennadil, Suresh N

    2011-03-15

    The effectiveness of a scatter correction approach based on decoupling absorption and scattering effects through the use of the radiative transfer theory to invert a suitable set of measurements is studied by considering a model multicomponent suspension. The method was used in conjunction with partial least-squares regression to build calibration models for estimating the concentration of two types of analytes: an absorbing (nonscattering) species and a particulate (absorbing and scattering) species. The performances of the models built by this approach were compared with those obtained by applying empirical scatter correction approaches to diffuse reflectance, diffuse transmittance, and collimated transmittance measurements. It was found that the method provided appreciable improvement in model performance for the prediction of both types of analytes. The study indicates that, as long as the bulk absorption spectra are accurately extracted, no further empirical preprocessing to remove light scattering effects is required.
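
    As a sketch of the regression step: once bulk absorption spectra have been recovered by the radiative-transfer inversion, a partial least-squares model maps them to analyte concentration. The arrays `A_train`/`A_test` (bulk absorption spectra) and `c_train`/`c_test` (concentrations), and the component count, are assumed placeholders rather than the paper's settings.

        from sklearn.cross_decomposition import PLSRegression
        from sklearn.metrics import mean_squared_error

        # A_train, A_test: bulk absorption spectra after scatter decoupling (hypothetical)
        pls = PLSRegression(n_components=8)   # component count chosen by cross-validation
        pls.fit(A_train, c_train)
        c_pred = pls.predict(A_test)
        rmsep = mean_squared_error(c_test, c_pred) ** 0.5   # prediction error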

  20. EXPANDING EXTRACTIONS

    NARCIS (Netherlands)

    Dietzenbacher, Erik; Lahr, Michael L.

    2013-01-01

    In this paper, we generalize hypothetical extraction techniques. We suggest that the effect of certain economic phenomena can be measured by removing them from an input-output (I-O) table and by rebalancing the set of I-O accounts. The difference between the two sets of accounts yields the
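
    The classic hypothetical extraction calculation that this paper generalizes can be shown in a few lines: solve the Leontief system with and without a sector and compare total outputs. The 3-sector coefficients and final demands below are invented purely for illustration.

        import numpy as np

        # illustrative technical-coefficient matrix A and final-demand vector f
        A = np.array([[0.2, 0.3, 0.1],
                      [0.1, 0.1, 0.4],
                      [0.3, 0.2, 0.2]])
        f = np.array([100.0, 150.0, 80.0])

        x_full = np.linalg.solve(np.eye(3) - A, f)   # baseline gross outputs

        # hypothetically extract sector 1: zero its row, column and final demand
        A_ex, f_ex = A.copy(), f.copy()
        A_ex[1, :], A_ex[:, 1], f_ex[1] = 0.0, 0.0, 0.0
        x_ex = np.linalg.solve(np.eye(3) - A_ex, f_ex)

        linkage = x_full.sum() - x_ex.sum()   # output attributable to the extracted sector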

  1. VT Mineral Resources - MRDS Extract

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) MRDSVT is an extract from the Mineral Resources Data System (MRDS) covering the State of Vermont only. MRDS database contains the records provided...

  2. Extractive Summarisation of Medical Documents

    OpenAIRE

    Abeed Sarker; Diego Molla; Cecile Paris

    2012-01-01

    Background Evidence Based Medicine (EBM) practice requires practitioners to extract evidence from published medical research when answering clinical queries. Due to the time-consuming nature of this practice, there is a strong motivation for systems that can automatically summarise medical documents and help practitioners find relevant information. Aim The aim of this work is to propose an automatic query-focused, extractive summarisation approach that selects informative sentences from medic...

  3. How to employ $\overline{B}{}_d^0 \to J/\psi(\pi\eta, \overline{K}K)$ decays to extract information on $\pi\eta$ scattering

    Science.gov (United States)

    Albaladejo, M.; Daub, J. T.; Hanhart, C.; Kubis, B.; Moussallam, B.

    2017-04-01

    We demonstrate that dispersion theory allows one to deduce crucial information on $\pi\eta$ scattering from the final-state interactions of the light mesons visible in the spectral distributions of the decays $\overline{B}{}_d^0 \to J/\psi(\pi^0\eta, K^+K^-, K^0\overline{K}{}^0)$. Thus high-quality measurements of these differential observables are highly desired. The corresponding rates are predicted to be of the same order of magnitude as those for $\overline{B}{}_d^0 \to J/\psi\,\pi^+\pi^-$ measured recently at LHCb, letting the corresponding measurement appear feasible.

  4. How to employ $\overline{B}{}_d^0 \to J/\psi(\pi\eta, \overline{K}K)$ decays to extract information on $\pi\eta$ scattering

    International Nuclear Information System (INIS)

    Albaladejo, M.; Daub, J.T.; Hanhart, C.; Kubis, B.; Moussallam, B.

    2017-01-01

    We demonstrate that dispersion theory allows one to deduce crucial information on $\pi\eta$ scattering from the final-state interactions of the light mesons visible in the spectral distributions of the decays $\overline{B}{}_d^0 \to J/\psi(\pi^0\eta, K^+K^-, K^0\overline{K}{}^0)$. Thus high-quality measurements of these differential observables are highly desired. The corresponding rates are predicted to be of the same order of magnitude as those for $\overline{B}{}_d^0 \to J/\psi\,\pi^+\pi^-$ measured recently at LHCb, letting the corresponding measurement appear feasible.

  5. Pressurized Hot Water Extraction of anthocyanins from red onion: A study on extraction and degradation rates

    Energy Technology Data Exchange (ETDEWEB)

    Petersson, Erik V.; Liu Jiayin; Sjoeberg, Per J.R.; Danielsson, Rolf [Uppsala University, Department of Physical and Analytical Chemistry, P.O. Box 599, SE-751 24, Uppsala (Sweden); Turner, Charlotta, E-mail: Charlotta.Turner@kemi.uu.se [Uppsala University, Department of Physical and Analytical Chemistry, P.O. Box 599, SE-751 24, Uppsala (Sweden)

    2010-03-17

    Pressurized Hot Water Extraction (PHWE) is a quick, efficient and environmentally friendly technique for extractions. However, when using PHWE to extract thermally unstable analytes, extraction and degradation effects occur at the same time, and thereby compete. At first, the extraction effect dominates, but degradation effects soon take over. In this paper, extraction and degradation rates of anthocyanins from red onion were studied with experiments in a static batch reactor at 110 °C. A total extraction curve was calculated with data from the actual extraction and degradation curves, showing that more anthocyanins, 21-36% depending on the species, could be extracted if no degradation occurred, but then longer extraction times would be required than those needed to reach the peak level in the apparent extraction curves. The results give information about the different kinetic processes competing during an extraction procedure.
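
    The competition the authors describe can be reproduced with a simple two-step first-order model: extraction from the matrix at rate k_e feeds the extract, which degrades at rate k_d, so the apparent (measured) curve peaks while extraction is still incomplete. The rate constants below are assumed for illustration, not the paper's fitted values.

        import numpy as np

        M0, ke, kd = 1.0, 0.05, 0.01   # initial content, extraction/degradation rates (assumed)
        t = np.linspace(0.0, 300.0, 601)

        # degradation-free cumulative extraction vs the apparent, measured curve
        total = M0 * (1.0 - np.exp(-ke * t))
        apparent = M0 * ke / (kd - ke) * (np.exp(-ke * t) - np.exp(-kd * t))

        # the apparent curve peaks before extraction is complete
        t_peak = np.log(ke / kd) / (ke - kd)
        print(t_peak, total[np.searchsorted(t, t_peak)] - apparent.max())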

  6. How to employ $\overline{B}{}_d^0 \to J/\psi(\pi\eta, \overline{K}K)$ decays to extract information on $\pi\eta$ scattering

    Energy Technology Data Exchange (ETDEWEB)

    Albaladejo, M. [Instituto de Física Corpuscular (IFIC), Centro Mixto CSIC-Universidad de Valencia,Institutos de Investigación de Paterna,Aptdo. 22085, 46071 Valencia (Spain); Daub, J.T. [Helmholtz-Institut für Strahlen- und Kernphysik (Theorie) andBethe Center for Theoretical Physics, Universität Bonn,53115 Bonn (Germany); Hanhart, C. [Institut für Kernphysik, Institute for Advanced Simulation and Jülich Center for Hadron Physics,Forschungszentrum Jülich, 52425 Jülich (Germany); Kubis, B. [Helmholtz-Institut für Strahlen- und Kernphysik (Theorie) andBethe Center for Theoretical Physics, Universität Bonn,53115 Bonn (Germany); Moussallam, B. [Groupe de Physique Théorique IPN (UMR8608), Université Paris-Sud 11,91406 Orsay (France)

    2017-04-04

    We demonstrate that dispersion theory allows one to deduce crucial information on $\pi\eta$ scattering from the final-state interactions of the light mesons visible in the spectral distributions of the decays $\overline{B}{}_d^0 \to J/\psi(\pi^0\eta, K^+K^-, K^0\overline{K}{}^0)$. Thus high-quality measurements of these differential observables are highly desired. The corresponding rates are predicted to be of the same order of magnitude as those for $\overline{B}{}_d^0 \to J/\psi\,\pi^+\pi^-$ measured recently at LHCb, letting the corresponding measurement appear feasible.

  7. Extractive metallurgy. Recent advances

    International Nuclear Information System (INIS)

    Stevenson, E.J.

    1977-01-01

    Detailed technical information derived from patents issued since 1975 on extractive metallurgy is presented. In part one, concerning copper, the major areas covered are: smelting and roasting; acid leaching; ammonia leach processes; cuprous chloride and ferric chloride; and recovery of copper values from solution. Part two covers other metals, including: nickel and cobalt; ocean floor nodules; lead, zinc, molybdenum and manganese; precious metals; and uranium, titanium, tantalum, rhenium, gallium, and other metals

  8. DISCOVERING OPTIMUM METHOD TO EXTRACT DEPTH INFORMATION FOR NEARSHORE COASTAL WATERS FROM SENTINEL-2A IMAGERY- CASE STUDY: NAYBAND BAY, IRAN

    Directory of Open Access Journals (Sweden)

    K. Kabiri

    2017-09-01

    Full Text Available The capabilities of Sentinel-2A imagery to determine bathymetric information in shallow coastal waters were examined. In this regard, two Sentinel-2A images (acquired in February and March 2016 in calm weather and relatively low turbidity) were selected from Nayband Bay, located in the northern Persian Gulf. In addition, a precise and accurate bathymetric map of the study area was obtained and used both for calibrating the models and for validating the results. Traditional linear and ratio transform techniques, as well as a novel integrated method, were employed to determine depth values. All possible combinations of the three bands (Band 2: blue, 458-523 nm; Band 3: green, 543-578 nm; Band 4: red, 650-680 nm; spatial resolution: 10 m) were considered (11 options) using the traditional linear and ratio transform techniques, together with 10 model options for the integrated method. The accuracy of each model was assessed by comparing the determined bathymetric information with field-measured values. The correlation coefficients (R²) and root mean square errors (RMSE) for validation points were calculated for all models and for the two satellite images. When compared with the linear transform method, the method employing ratio transformation with a combination of all three bands yielded more accurate results (R²Mar = 0.795, R²Feb = 0.777, RMSEMar = 1.889 m, and RMSEFeb = 2.039 m). Although most of the integrated transform methods (specifically the method including all bands and band ratios) yielded the highest accuracy, these improvements were not significant; hence, the ratio transformation was selected as the optimum method.
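
    The ratio transform in its basic two-band form (after Stumpf et al., 2003) is a one-line predictor calibrated by least squares against reference depths. The arrays (`blue_samples`, `green_samples`, `depth_samples`, and the full band rasters) and the constant n = 1000 are illustrative assumptions; the study above evaluates combinations over three bands.

        import numpy as np

        n = 1000.0   # fixed scaling constant commonly used with the ratio transform

        def ratio_predictor(blue, green):
            # the log-ratio of two bands varies roughly linearly with depth in clear water
            return np.log(n * blue) / np.log(n * green)

        # calibrate against reference depths at sample pixels (hypothetical arrays)
        r = ratio_predictor(blue_samples, green_samples)
        m1, m0 = np.polyfit(r, depth_samples, 1)

        depth_map = m1 * ratio_predictor(blue_band, green_band) + m0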

  9. A Two-Radius Circular Array Method: Extracting Independent Information on Phase Velocities of Love Waves From Microtremor Records From a Simple Seismic Array

    Science.gov (United States)

    Tada, T.; Cho, I.; Shinozaki, Y.

    2005-12-01

    We have invented a Two-Radius (TR) circular array method of microtremor exploration, an algorithm that makes it possible to estimate phase velocities of Love waves by analyzing horizontal-component records of microtremors obtained with an array of seismic sensors placed around circumferences of two different radii. The data recording may be done either simultaneously around the two circles or in two separate sessions with sensors distributed around each circle. Both Rayleigh and Love waves are present in the horizontal components of microtremors, but in the data processing of our TR method, all information on the Rayleigh waves ends up cancelled out, and information on the Love waves alone is left to be analyzed. Also, unlike the popularly used frequency-wavenumber spectral (F-K) method, our TR method does not resolve individual plane-wave components arriving from different directions and analyze their "vector" phase velocities, but instead directly evaluates their "scalar" phase velocities --- phase velocities that contain no information on the arrival direction of waves --- through a mathematical procedure which involves azimuthal averaging. The latter feature leads us to expect that, with our TR method, it is possible to conduct phase velocity analysis with smaller numbers of sensors, with higher stability, and up to longer-wavelength ranges than with the F-K method. With a view to investigating the capabilities and limitations of our TR method in practical application to real data, we have deployed circular seismic arrays of different sizes at a test site in Japan where the underground structure is well documented through geophysical exploration. Ten seismic sensors were placed equidistantly around two circumferences, five around each circle, with varying combinations of radii ranging from several meters to several tens of meters, and simultaneous records of microtremors around circles of two different radii were analyzed with our TR method to produce

  10. AUTOMATIC LUNG NODULE SEGMENTATION USING AUTOSEED REGION GROWING WITH MORPHOLOGICAL MASKING (ARGMM) AND FEATURE EXTRACTION THROUGH COMPLETE LOCAL BINARY PATTERN AND MICROSCOPIC INFORMATION PATTERN

    Directory of Open Access Journals (Sweden)

    Senthil Kumar

    2015-04-01

    Full Text Available An efficient Autoseed Region Growing with Morphological Masking (ARGMM) is implemented in this paper on Lung CT slices to segment lung nodules, which may be a potential indicator of lung cancer. The segmentation of lung nodules is carried out in this paper through multi-thresholding, ARGMM and level set evolution. ARGMM takes twice the time compared to level set, but the number of suspected segmented nodules is doubled, which makes sure that no potentially cancerous nodules go unnoticed at the earlier stages of diagnosis. It is very important not to alarm the patient merely by finding the presence of nodules in a Lung CT scan; only 40 percent of nodules may be cancerous. Hence, in this paper an efficient shape and texture analysis is computed to quantitatively describe the segmented lung nodules. The frequency spectrum of the lung nodules is developed and its frequency-domain features are computed. The Complete Local Binary Pattern of lung nodules is computed in this paper by constructing the combined histogram of the Sign and Magnitude Local Binary Patterns. The Local Configuration Pattern is also determined in this work for lung nodules to numerically model the microscopic information of nodule patterns.
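
    For the texture-description step, here is a sketch of the sign component of the local binary pattern using scikit-image; the complete (CLBP) variant described above additionally histograms the magnitude of the local differences, which is omitted here for brevity. The patch array and parameters are assumptions.

        import numpy as np
        from skimage.feature import local_binary_pattern

        def lbp_signature(nodule_patch, P=8, R=1):
            # rotation-invariant uniform LBP codes over the segmented nodule,
            # summarised as a normalised histogram (the "sign" part of CLBP)
            codes = local_binary_pattern(nodule_patch, P, R, method='uniform')
            hist, _ = np.histogram(codes, bins=np.arange(P + 3), density=True)
            return hist

    Such histograms can then feed the quantitative shape and texture analysis the abstract describes.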

  11. Extracting oils

    Energy Technology Data Exchange (ETDEWEB)

    Patart, G

    1926-03-15

    In the hydrogenation or extraction of by-products from organic substances at high temperatures and pressures, the gases or liquids, or both, used are those which are already heated and compressed during industrial operations such as exothermic synthesizing reactions such as the production of methanol from hydrogen and carbon monoxide in a catalytic process. Gases from this reaction may be passed upwardly through a digester packed with pine wood while liquid from the same catalytic process is passed downwardly through the material. The issuing liquid contains methanol, pine oil, acetone, isopropyl alcohol, and acetic acid. The gases contain additional hydrogen, carbon monoxide, methane, ethylene, and its homologs which are condensed upon the catalyser to liquid hydrocarbons. Petroleum oils and coal may be treated similarly.

  12. Extractable Work from Correlations

    Directory of Open Access Journals (Sweden)

    Martí Perarnau-Llobet

    2015-10-01

    Full Text Available Work and quantum correlations are two fundamental resources in thermodynamics and quantum information theory. In this work, we study how to use correlations among quantum systems to optimally store work. We analyze this question for isolated quantum ensembles, where the work can be naturally divided into two contributions: a local contribution from each system and a global contribution originating from correlations among systems. We focus on the latter and consider quantum systems that are locally thermal, so that any extractable work can only come from correlations. We compute the maximum extractable work for general entangled states, separable states, and states with fixed entropy. Our results show that while entanglement gives an advantage for small quantum ensembles, this gain vanishes for a large number of systems.
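
    The quantity being maximized here, the work extractable by a unitary (the ergotropy), has a closed form: pair the state's eigenvalues sorted in descending order with the Hamiltonian's eigenvalues sorted ascending. A small sketch under assumed units (Pauli-z local Hamiltonians) shows a Bell state whose marginals are maximally mixed, so no work is locally extractable, yet the global ergotropy is positive, i.e. stored purely in correlations.

        import numpy as np

        def ergotropy(rho, H):
            # W = tr(rho H) - sum_k r_k e_k, with populations r descending
            # and energies e ascending (the passive-state energy)
            r = np.sort(np.linalg.eigvalsh(rho))[::-1]
            e = np.sort(np.linalg.eigvalsh(H))
            return float(np.real(np.trace(rho @ H)) - r @ e)

        sz = np.diag([1.0, -1.0])
        H2 = np.kron(sz, np.eye(2)) + np.kron(np.eye(2), sz)   # two non-interacting qubits

        phi = np.zeros(4); phi[0] = phi[3] = 1 / np.sqrt(2)    # Bell state (|00> + |11>)/sqrt(2)
        rho = np.outer(phi, phi)

        # each marginal is maximally mixed (zero local ergotropy), yet globally W = 2
        print(ergotropy(rho, H2))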

  13. Extracting Macroscopic Information from Web Links.

    Science.gov (United States)

    Thelwall, Mike

    2001-01-01

    Discussion of Web-based link analysis focuses on an evaluation of Ingwersen's proposed external Web Impact Factor for the original use of the Web, namely the interlinking of academic research. Studies relationships between academic hyperlinks and research activities for British universities and discusses the use of search engines for Web link…

  14. Compressive Information Extraction: A Dynamical Systems Approach

    Science.gov (United States)

    2016-01-24

    significantly benefit society. Systems endowed with activity analysis capabilities can prevent crime, allow elderly people to continue living independently... keynote speaker at the 2014 IEEE International Conference on Distributed Smart Cameras... The theoretical framework developed under... (FBI Deputy Director, June 2013) and the Hon. Theresa May (U.K. Home Secretary, Sept. 2014). It was also covered in a N.Y. Times article that appeared

  15. Respiratory Information Extraction from Electrocardiogram Signals

    KAUST Repository

    Amin, Gamal El Din Fathy

    2010-01-01

    The Electrocardiogram (ECG) is a tool measuring the electrical activity of the heart, and it is extensively used for diagnosis and monitoring of heart diseases. The ECG signal reflects not only the heart activity but also many other physiological

  16. Corpora and Data Preparation for Information Extraction

    Science.gov (United States)

    1993-09-01

    technical publications in fields such as communications, airline transportation, rubber & plastics, and food marketing. The Japanese-language... types in the U.S., for example, avocado farms, electric popcorn popper sales, management consulting. The template-filling task required that products

  17. Extraction of compositional and hydration information of sulfates from laser-induced plasma spectra recorded under Mars atmospheric conditions — Implications for ChemCam investigations on Curiosity rover

    International Nuclear Information System (INIS)

    Sobron, Pablo; Wang, Alian; Sobron, Francisco

    2012-01-01

    Given the volume of spectral data required for providing accurate compositional information and thereby insight in mineralogy and petrology from laser-induced breakdown spectroscopy (LIBS) measurements, fast data processing tools are a must. This is particularly true during the tactical operations of rover-based planetary exploration missions such as the Mars Science Laboratory rover, Curiosity, which will carry a remote LIBS spectrometer in its science payload. We have developed: an automated fast pre-processing sequence of algorithms for converting a series of LIBS spectra (typically 125) recorded from a single target into a reliable SNR-enhanced spectrum; a dedicated routine to quantify its spectral features; and a set of calibration curves using standard hydrous and multi-cation sulfates. These calibration curves allow deriving the elemental compositions and the degrees of hydration of various hydrous sulfates, one of the two major types of secondary minerals found on Mars. Our quantitative tools are built upon calibration-curve modeling, through the correlation of the elemental concentrations and the peak areas of the atomic emission lines observed in the LIBS spectra of standard samples. At present, we can derive the elemental concentrations of K, Na, Ca, Mg, Fe, Al, S, O, and H in sulfates, as well as the hydration degrees of Ca- and Mg-sulfates, from LIBS spectra obtained in both Earth atmosphere and Mars atmospheric conditions in a Planetary Environment and Analysis Chamber (PEACh). In addition, structural information can be potentially obtained for various Fe-sulfates. - Highlights: ► Routines for LIBS spectral data fast automated processing. ► Identification of elements and determination of the elemental composition. ► Calibration curves for sulfate samples in Earth and Mars atmospheric conditions. ► Fe curves probably related to the crystalline structure of Fe-sulfates. ► Extraction of degree of hydration in hydrous Mg-, Ca-, and Fe-sulfates.

  18. Domain-independent planning for services in uncertain and dynamic environments

    NARCIS (Netherlands)

    Kaldeli, Eirini

    2013-01-01

    Smart planning makes the network flexible. Artificial intelligence planning systems can help automate certain network services in a more flexible way. Consider, for example, the Smart Home full of smart devices. Most existing planning approaches for Web

  19. Conversational Interfaces: A Domain-Independent Architecture for Task-Oriented Dialogues

    Science.gov (United States)

    2002-12-12

    possible ways that I could get there; for instance, I could fly, drive, walk, bicycle, or skateboard there. Now imagine, further, that you don't care... your house. Of course the situation can rapidly get more complicated. It might be that you live too far away for me to skateboard or walk, and that I

  20. Rules Extraction with an Immune Algorithm

    Directory of Open Access Journals (Sweden)

    Deqin Yan

    2007-12-01

    Full Text Available In this paper, a method of extracting rules from information systems with immune algorithms is proposed. The design of the immune algorithm is based on a sharing mechanism for extracting rules. The principle of sharing and competing for resources in the sharing mechanism is consistent with the relationship of sharing and rivalry among rules. In order to extract rules efficiently, a new concept of flexible confidence and rule measurement is introduced. Experiments demonstrate that the proposed method is effective.

  1. Phytochemical and antimicrobial screening of extracts of Aquilaria ...

    African Journals Online (AJOL)

    STORAGESEVER

    2008-10-20

    Oct 20, 2008 ... ... Academy of Management and Information Technology, ... and fats and glycosides in methanol extracts whereas saponins, fixed oils ... The methanol extract of the leaf gave the highest zone of inhibition against B. subtilis.

  2. Toponym Extraction and Disambiguation Enhancement Using Loops of Feedback

    NARCIS (Netherlands)

    Habib, Mena Badieh; van Keulen, Maurice; Fred, A.; Dietz, J.L.G.; Liu, K.; Filipe, J.

    2013-01-01

    Toponym extraction and disambiguation have received much attention in recent years. Typical fields addressing these topics are information retrieval, natural language processing, and semantic web. This paper addresses two problems with toponym extraction and disambiguation. First, almost no existing

  3. Extracting Tag Hierarchies

    Science.gov (United States)

    Tibély, Gergely; Pollner, Péter; Vicsek, Tamás; Palla, Gergely

    2013-01-01

    Tagging items with descriptive annotations or keywords is a very natural way to compress and highlight information about the properties of the given entity. Over the years several methods have been proposed for extracting a hierarchy between the tags for systems with a "flat", egalitarian organization of the tags, which is very common when the tags correspond to free words given by numerous independent people. Here we present a complete framework for automated tag hierarchy extraction based on tag occurrence statistics. Along with proposing new algorithms, we are also introducing different quality measures enabling the detailed comparison of competing approaches from different aspects. Furthermore, we set up a synthetic, computer generated benchmark providing a versatile tool for testing, with a couple of tunable parameters capable of generating a wide range of test beds. Beside the computer generated input we also use real data in our studies, including a biological example with a pre-defined hierarchy between the tags. The encouraging similarity between the pre-defined and reconstructed hierarchy, as well as the seemingly meaningful hierarchies obtained for other real systems indicate that tag hierarchy extraction is a very promising direction for further research with a great potential for practical applications. Tags have become very prevalent nowadays in various online platforms ranging from blogs through scientific publications to protein databases. Furthermore, tagging systems dedicated for voluntary tagging of photos, films, books, etc. with free words are also becoming popular. The emerging large collections of tags associated with different objects are often referred to as folksonomies, highlighting their collaborative origin and the “flat” organization of the tags opposed to traditional hierarchical categorization. Adding a tag hierarchy corresponding to a given folksonomy can very effectively help narrowing or broadening the scope of search
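
    As a toy illustration of hierarchy extraction from tag occurrence statistics (a simplified heuristic in the spirit of the paper, not the authors' algorithms): assign each tag, as its parent, the strictly more frequent tag it co-occurs with most often; tags with no such partner become roots.

        from collections import Counter
        from itertools import combinations

        def extract_hierarchy(tagged_items):
            # tagged_items: iterable of tag sets; returns {tag: parent or None (root)}
            freq, cooc = Counter(), Counter()
            for tags in tagged_items:
                freq.update(tags)
                cooc.update(frozenset(p) for p in combinations(tags, 2))
            parent = {}
            for t in freq:
                # candidate parents: strictly more frequent tags, ranked by
                # co-occurrence with t (ties broken by overall frequency)
                cands = [(cooc[frozenset((t, u))], freq[u], u)
                         for u in freq if freq[u] > freq[t]]
                best = max(cands, default=(0, 0, None))
                parent[t] = best[2] if best[0] > 0 else None
            return parent

        items = [{'music', 'jazz'}, {'music', 'jazz', 'sax'}, {'music', 'rock'}]
        print(extract_hierarchy(items))   # jazz, rock and sax attach under music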

  4. Extracting tag hierarchies.

    Directory of Open Access Journals (Sweden)

    Gergely Tibély

    Full Text Available Tagging items with descriptive annotations or keywords is a very natural way to compress and highlight information about the properties of the given entity. Over the years several methods have been proposed for extracting a hierarchy between the tags for systems with a "flat", egalitarian organization of the tags, which is very common when the tags correspond to free words given by numerous independent people. Here we present a complete framework for automated tag hierarchy extraction based on tag occurrence statistics. Along with proposing new algorithms, we are also introducing different quality measures enabling the detailed comparison of competing approaches from different aspects. Furthermore, we set up a synthetic, computer generated benchmark providing a versatile tool for testing, with a couple of tunable parameters capable of generating a wide range of test beds. Beside the computer generated input we also use real data in our studies, including a biological example with a pre-defined hierarchy between the tags. The encouraging similarity between the pre-defined and reconstructed hierarchy, as well as the seemingly meaningful hierarchies obtained for other real systems indicate that tag hierarchy extraction is a very promising direction for further research with a great potential for practical applications. Tags have become very prevalent nowadays in various online platforms ranging from blogs through scientific publications to protein databases. Furthermore, tagging systems dedicated for voluntary tagging of photos, films, books, etc. with free words are also becoming popular. The emerging large collections of tags associated with different objects are often referred to as folksonomies, highlighting their collaborative origin and the "flat" organization of the tags opposed to traditional hierarchical categorization. Adding a tag hierarchy corresponding to a given folksonomy can very effectively help narrowing or broadening the scope of

  5. Extracting tag hierarchies.

    Science.gov (United States)

    Tibély, Gergely; Pollner, Péter; Vicsek, Tamás; Palla, Gergely

    2013-01-01

    Tagging items with descriptive annotations or keywords is a very natural way to compress and highlight information about the properties of the given entity. Over the years several methods have been proposed for extracting a hierarchy between the tags for systems with a "flat", egalitarian organization of the tags, which is very common when the tags correspond to free words given by numerous independent people. Here we present a complete framework for automated tag hierarchy extraction based on tag occurrence statistics. Along with proposing new algorithms, we are also introducing different quality measures enabling the detailed comparison of competing approaches from different aspects. Furthermore, we set up a synthetic, computer generated benchmark providing a versatile tool for testing, with a couple of tunable parameters capable of generating a wide range of test beds. Beside the computer generated input we also use real data in our studies, including a biological example with a pre-defined hierarchy between the tags. The encouraging similarity between the pre-defined and reconstructed hierarchy, as well as the seemingly meaningful hierarchies obtained for other real systems indicate that tag hierarchy extraction is a very promising direction for further research with a great potential for practical applications. Tags have become very prevalent nowadays in various online platforms ranging from blogs through scientific publications to protein databases. Furthermore, tagging systems dedicated for voluntary tagging of photos, films, books, etc. with free words are also becoming popular. The emerging large collections of tags associated with different objects are often referred to as folksonomies, highlighting their collaborative origin and the "flat" organization of the tags opposed to traditional hierarchical categorization. Adding a tag hierarchy corresponding to a given folksonomy can very effectively help narrowing or broadening the scope of search. Moreover

  6. Sequence complexity and work extraction

    International Nuclear Information System (INIS)

    Merhav, Neri

    2015-01-01

    We consider a simplified version of a solvable model by Mandal and Jarzynski, which constructively demonstrates the interplay between work extraction and the increase of the Shannon entropy of an information reservoir which is in contact with a physical system. We extend Mandal and Jarzynski’s main findings in several directions: first, we allow sequences of correlated bits rather than just independent bits. Second, at least for the case of binary information, we show that, in fact, the Shannon entropy is only one measure of complexity of the information that must increase in order for work to be extracted. The extracted work can also be upper bounded in terms of the increase in other quantities that measure complexity, like the predictability of future bits from past ones. Third, we provide an extension to the case of non-binary information (i.e. a larger alphabet), and finally, we extend the scope to the case where the incoming bits (before the interaction) form an individual sequence, rather than a random one. In this case, the entropy before the interaction can be replaced by the Lempel–Ziv (LZ) complexity of the incoming sequence, a fact that gives rise to an entropic meaning of the LZ complexity, not only in information theory, but also in physics. (paper)
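
    The LZ complexity invoked here can be estimated with the standard exhaustive-history parsing (Lempel-Ziv, 1976): count the phrases that cannot be copied from the already-seen prefix. A minimal sketch; a compressible (periodic) sequence parses into far fewer phrases than a random one of the same length.

        import random

        def lz76_complexity(s):
            # number of phrases in the LZ76 parsing of the string s
            i, c, n = 0, 0, len(s)
            while i < n:
                l = 1
                # grow the phrase while it can be copied from the prior history
                # (the search window includes the phrase's start, allowing overlap)
                while i + l <= n and s[i:i + l] in s[:i + l - 1]:
                    l += 1
                c += 1
                i += l
            return c

        random.seed(0)
        rand = ''.join(random.choice('01') for _ in range(100))
        print(lz76_complexity('01' * 50), lz76_complexity(rand))   # periodic << random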

  7. Ocean Thermal Extractable Energy Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Ascari, Matthew [Lockheed Martin Corporation, Bethesda, MD (United States)

    2012-10-28

    The Ocean Thermal Extractable Energy Visualization (OTEEV) project focuses on assessing the Maximum Practicably Extractable Energy (MPEE) from the world’s ocean thermal resources. MPEE is defined as being sustainable and technically feasible, given today’s state-of-the-art ocean energy technology. Under this project the OTEEV team developed a comprehensive Geospatial Information System (GIS) dataset and software tool, and used the tool to provide a meaningful assessment of MPEE from the global and domestic U.S. ocean thermal resources.

  8. Uranium extraction technology

    International Nuclear Information System (INIS)

    1993-01-01

    In 1983 the Nuclear Energy Agency of the Organisation for Economic Co-operation and Development (OECD/NEA) and the IAEA jointly published a book on Uranium Extraction Technology. A primary objective of this report was to document the significant technological developments that took place during the 1970s. The purpose of this present publication is to update and expand the original book. It includes background information about the principle of the unit operations used in uranium ore processing and summarizes the current state of the art. The publication also seeks to preserve the technology and the operating 'know-how' developed over the past ten years. This publication is one of a series of Technical Reports on uranium ore processing that have been prepared by the Division of Nuclear Fuel Cycle and Waste Management at the IAEA. A complete list of these reports is included as an addendum. Refs, figs and tabs

  9. VT Lake Champlain (extracted from VHDCARTO) - polygon

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) LKCH5K is an extract of Lake Champlain that is derived from VHDCARTO. The following metadata is from VHDCARTO. VHDCARTO is a simplified version of...

  10. Using the DOM Tree for Content Extraction

    Directory of Open Access Journals (Sweden)

    David Insa

    2012-10-01

    Full Text Available The main information of a webpage is usually mixed in among menus, advertisements, panels, and other not necessarily related information, and it is often difficult to automatically isolate this information. This is precisely the objective of content extraction, a research area of wide interest due to its many applications. Content extraction is useful not only for the final human user, but it is also frequently used as a preprocessing stage of different systems that need to extract the main content of a web document to avoid the treatment and processing of other useless information. Another interesting application where content extraction is particularly useful is displaying webpages on small screens such as those of mobile phones or PDAs. In this work we present a new technique for content extraction that uses the DOM tree of the webpage to analyze the hierarchical relations of the elements in the webpage. Thanks to this information, the technique achieves considerable recall and precision. Using the DOM structure for content extraction gives us the benefits of other approaches based on the syntax of the webpage (such as characters, words and tags), but it also gives us very precise information regarding the related components in a block, thus producing very cohesive blocks.
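
    A minimal sketch of DOM-based content extraction in the spirit of the technique (not the authors' exact scoring): walk the parsed tree and keep the subtree that maximizes text not devoted to links, a crude proxy for block cohesion. BeautifulSoup stands in as the DOM implementation; the tag lists and weights are assumptions.

        from bs4 import BeautifulSoup

        def main_block(html):
            soup = BeautifulSoup(html, 'html.parser')
            for tag in soup(['script', 'style', 'nav', 'header', 'footer']):
                tag.decompose()   # discard obvious non-content containers

            def score(el):
                # favour long cohesive text, penalise link-heavy blocks (menus, panels)
                text = el.get_text(' ', strip=True)
                link_text = ' '.join(a.get_text(' ', strip=True)
                                     for a in el.find_all('a'))
                return len(text) - 2 * len(link_text)

            blocks = soup.find_all(['article', 'section', 'div', 'td'])
            best = max(blocks, key=score) if blocks else soup
            return best.get_text(' ', strip=True)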

  11. Fixed kernel regression for voltammogram feature extraction

    International Nuclear Information System (INIS)

    Acevedo Rodriguez, F J; López-Sastre, R J; Gil-Jiménez, P; Maldonado Bascón, S; Ruiz-Reyes, N

    2009-01-01

    Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in voltammograms obtained using this technique is difficult. In this paper, we propose the use of fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals
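
    A sketch of the idea under stated assumptions: fit the voltammogram as a weighted sum of Gaussian kernels placed at fixed potentials, and keep the least-squares weights as a compact feature vector. The centre count and kernel width below are illustrative, not the paper's tuned values.

        import numpy as np

        def kernel_features(potential, current, n_centers=12, width=0.15):
            # fixed Gaussian kernels spanning the potential range; the fitted
            # weights compress the whole voltammogram into n_centers coefficients
            centers = np.linspace(potential.min(), potential.max(), n_centers)
            Phi = np.exp(-(potential[:, None] - centers[None, :]) ** 2
                         / (2 * width ** 2))
            coef, *_ = np.linalg.lstsq(Phi, current, rcond=None)
            return coef   # feed these coefficients to the classifier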

  12. Extractive Summarisation of Medical Documents

    Directory of Open Access Journals (Sweden)

    Abeed Sarker

    2012-09-01

    Full Text Available Background Evidence Based Medicine (EBM) practice requires practitioners to extract evidence from published medical research when answering clinical queries. Due to the time-consuming nature of this practice, there is a strong motivation for systems that can automatically summarise medical documents and help practitioners find relevant information. Aim The aim of this work is to propose an automatic query-focused, extractive summarisation approach that selects informative sentences from medical documents. Method We use a corpus that is specifically designed for summarisation in the EBM domain. We use approximately half the corpus for deriving important statistics associated with the best possible extractive summaries. We take into account factors such as sentence position, length, sentence content, and the type of the query posed. Using the statistics from the first set, we evaluate our approach on a separate set. Evaluation of the qualities of the generated summaries is performed automatically using ROUGE, which is a popular tool for evaluating automatic summaries. Results Our summarisation approach outperforms all baselines (best baseline score: 0.1594; our score: 0.1653). Further improvements are achieved when query types are taken into account. Conclusion The quality of extractive summarisation in the medical domain can be significantly improved by incorporating domain knowledge and statistics derived from a specialised corpus. Such techniques can therefore be applied for content selection in end-to-end summarisation systems.
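
    A toy version of query-focused sentence selection using the factors the abstract names (position, length, and content overlap with the query). The weights are invented for illustration; the paper derives its statistics from a specialised corpus rather than fixing them by hand.

        def select_sentences(sentences, query, k=3):
            q = set(query.lower().split())
            def score(item):
                i, s = item
                words = s.lower().split()
                overlap = len(q & set(words)) / (len(q) or 1)   # query-content match
                position = 1.0 / (1 + i)                        # earlier sentences favoured
                length = min(len(words), 25) / 25.0             # damp very short fragments
                return 0.6 * overlap + 0.2 * position + 0.2 * length
            top = sorted(enumerate(sentences), key=score, reverse=True)[:k]
            return [s for _, s in sorted(top)]                  # restore document order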

  13. Extract of Acanthospermum hispidum

    African Journals Online (AJOL)

    Administrator

    quantitatively. Acute toxicity study of the extract was conducted, and diabetic rats induced using alloxan (80 mg/kg ... Type 2 diabetes is one of the leading causes of mortality and ..... (2011): Phytochemical screening and extraction - A review.

  14. Extracts against Various Pathogens

    Directory of Open Access Journals (Sweden)

    Ritika Chauhan

    2013-07-01

    The present study shows that the tested lichen Parmotrema sp. extracts demonstrated a strong antimicrobial effect, suggesting that the active components of the methanol extracts of the investigated lichen Parmotrema sp. can be used as natural antimicrobial agents against pathogens.

  15. using Supercritical Fluid Extraction

    African Journals Online (AJOL)

    Methods: Supercritical CO2 extraction technology was adopted in this experiment to study the process of extraction of volatile oil from Polygonatum odoratum while gas chromatograph-mass spectrometer ..... Saponin rich fractions from.

  16. Inform@ed space

    DEFF Research Database (Denmark)

    Bjerrum, Peter; Olsen, Kasper Nefer

    2001-01-01

    Inform@ed space: Sensorial Perception And Computer Enhancement - contribution to the Nordisk Arkitekturforskningsforening (Nordic Association of Architectural Research) IT conference, AAA, April 2001.

  17. Equilibrium and extraction mechanism from monomeric and polymeric species of zirconium in solution. Part 2

    International Nuclear Information System (INIS)

    Azevedo, H.L.P. de.

    1980-01-01

    The mechanism of extraction and the equilibrium of chemical species from zirconium solutions were studied. The multiple extraction method was used to identify the species involved in the extraction process, and qualitative information was obtained about the equilibrium between extractable species (monomers) and non-extractable species (polymers) in the aqueous phase. (M.J.C.) [pt]

  18. Using Local Grammar for Entity Extraction from Clinical Reports

    Directory of Open Access Journals (Sweden)

    Aicha Ghoulam

    2015-06-01

    Full Text Available Information Extraction (IE) is a natural language processing (NLP) task whose aim is to analyze texts written in natural language to extract structured and useful information such as named entities and semantic relations linking these entities. Information extraction is an important task for many applications such as bio-medical literature mining, customer care, community websites, and personal information management. The increasing information available in patient clinical reports is difficult to access; as it is often in unstructured text form, doctors need tools that enable them to access and search this information. Hence, a system for extracting this information in a structured form can benefit healthcare professionals. The work presented in this paper uses a local grammar approach to extract medical named entities from French patient clinical reports. Experimental results show that the proposed approach achieved an F-measure of 90.06%.

  19. Optimization-based Method for Automated Road Network Extraction

    International Nuclear Information System (INIS)

    Xiong, D

    2001-01-01

    Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper reviews the literature on the subject of road extraction and describes a study of an optimization-based method for automated road network extraction

  20. Mining of the social network extraction

    Science.gov (United States)

    Nasution, M. K. M.; Hardi, M.; Syah, R.

    2017-01-01

    The use of the Web as social media is steadily gaining ground in the study of social actor behaviour. However, information on the Web can only be interpreted according to the capabilities of the method used, such as superficial methods for extracting social networks. Each method has its features and drawbacks: it cannot reveal the behaviour of social actors directly, but it carries hidden information about them. Therefore, this paper aims to reveal such information through social network mining. Social behaviour can be expressed through a set of words extracted from lists of snippets.
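
    Superficial (co-occurrence based) extraction of a social tie reduces to comparing hit counts for two actor names queried alone and together. The function and the counts below are illustrative assumptions, not the authors' measure.

        def tie_strength(hits_a, hits_b, hits_ab):
            # Jaccard-style association from web co-occurrence counts
            union = hits_a + hits_b - hits_ab
            return hits_ab / union if union else 0.0

        # hypothetical hit counts for two researchers and their joint query
        print(tie_strength(12000, 8000, 950))   # ~0.05: weak but non-trivial tie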

  1. Chemopreventive and Antiproliferative Effect of Andrographis Paniculata Extract

    Directory of Open Access Journals (Sweden)

    Agrawal RC

    2017-06-01

    Full Text Available An Andrographis paniculata leaf and stem extract was studied in HeLa cell lines by in vitro methods, and its anti-promoting effect was studied in a skin tumour model. Dose-dependent cytotoxicity of the Andrographis paniculata stem and leaf extracts was observed in HeLa cell lines. Prevention of bone marrow micronucleus formation by the Andrographis paniculata leaf and stem extract was also observed, and reductions in tumour numbers were recorded. The glutathione level was increased in the liver of animals which received the Andrographis extract along with DMBA + croton oil. These observations provide revealing information about the anticancer, antiproliferative and antimutagenic effects of the Andrographis paniculata extract.

  2. Data Extraction Based on Page Structure Analysis

    Directory of Open Access Journals (Sweden)

    Ren Yichao

    2017-01-01

    Full Text Available The information we need suffers from problems such as dispersion and differing organizational structure. In addition, because of the existence of unstructured data like natural language and images, extracting local content from pages is extremely difficult. In light of the problems above, this article applies a method combining a page structure analysis algorithm with a page data extraction algorithm to accomplish the gathering of network data. In this way, the problem that traditional complex extraction models perform poorly when dealing with large-scale data is solved, and page data extraction efficiency is boosted to a new level. In the meantime, the article also compares pages and content of different types between methods based on the page DOM structure and on HTML regularities of distribution, from which a more efficient extraction method can be identified.
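
    A minimal sketch of the two-step idea, assuming the beautifulsoup4 library and an invented sample page: structure analysis first locates the repeating container, then data extraction pulls fields from each repeated block.

```python
from bs4 import BeautifulSoup  # assumes beautifulsoup4 is installed

html = """
<html><body>
  <div id="nav">Home | About</div>
  <div class="article"><h1>Title A</h1><p>Body text A.</p></div>
  <div class="article"><h1>Title B</h1><p>Body text B.</p></div>
</body></html>
"""

# Step 1: page structure analysis -- find the repeating container
# whose children share the same tag signature.
soup = BeautifulSoup(html, "html.parser")
articles = soup.find_all("div", class_="article")

# Step 2: page data extraction -- pull fields out of each repeated block.
records = [
    {"title": a.h1.get_text(strip=True), "body": a.p.get_text(strip=True)}
    for a in articles
]
print(records)
```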

  3. Audio feature extraction using probability distribution function

    Science.gov (United States)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It is also known to be used recently for biometric and multimedia information retrieval systems. This technology stems from successive research on audio feature extraction analysis. The Probability Distribution Function (PDF) is a statistical method which is usually used as one of the processing steps in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed which uses only the PDF as the feature extraction method itself for speech analysis purposes. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF values for each frame of sampled voice signals, obtained from a number of individuals, are plotted. From the experimental results, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
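
    A minimal sketch of the proposed idea, assuming NumPy: each frame of the signal is reduced to a normalised amplitude histogram, i.e. an empirical PDF, which serves directly as the feature vector. Frame length and bin count below are illustrative choices, not the paper's settings.

```python
import numpy as np

def pdf_features(signal: np.ndarray, frame_len: int = 512, bins: int = 32):
    """Per-frame probability distribution of sample amplitudes.

    Each frame is reduced to a normalised histogram (an empirical PDF),
    which serves directly as the feature vector for that frame.
    """
    n_frames = len(signal) // frame_len
    feats = []
    for i in range(n_frames):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        hist, _ = np.histogram(frame, bins=bins, range=(-1.0, 1.0), density=True)
        feats.append(hist)
    return np.array(feats)

# Illustrative signal; real input would be a sampled voice recording.
rng = np.random.default_rng(0)
voice = np.clip(rng.normal(0, 0.2, 48000), -1, 1)
print(pdf_features(voice).shape)  # (frames, bins)
```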

  4. DESIGNING AN EVENT EXTRACTION SYSTEM

    Directory of Open Access Journals (Sweden)

    Botond BENEDEK

    2017-06-01

    Full Text Available In the Internet world, the amount of information available reaches very high levels. In order to find specific information, tools were created that automatically crawl the existing web pages and update their databases with the latest information on the Internet. In order to systematize the search and achieve a result in a concrete form, another step is needed for processing the information returned by the search engine and generating the response in a more organized form. Centralizing events of a certain type is useful first of all for creating a news service. Through this system we pursue a system for extracting knowledge (events) from Internet documents. The system will recognize events of a certain type (weather, sports, politics, text data mining, etc.) depending on how it is trained (the concepts it has in its dictionary). These events can be provided to the user, or the system can also extract the context in which the event occurred, to indicate the initial form in which the event was embedded.
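
    A hypothetical sketch of the dictionary-driven recognition step in Python; the event lexicon and the sentences are invented, and the real system would be trained rather than hand-coded.

```python
# A dictionary of concept words stands in for the trained concept;
# both the lexicon and the sample sentences are invented.
EVENT_LEXICON = {
    "weather": {"storm", "rain", "flood", "temperature"},
    "sports": {"match", "goal", "tournament", "score"},
}

def extract_events(sentences):
    """Label each sentence with matching event types plus its context."""
    events = []
    for sent in sentences:
        tokens = {t.strip(".,").lower() for t in sent.split()}
        for etype, lexicon in EVENT_LEXICON.items():
            if tokens & lexicon:
                events.append({"type": etype, "context": sent})
    return events

docs = [
    "Heavy rain caused a flood in the valley.",
    "The final match ended with a late goal.",
]
print(extract_events(docs))
```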

  5. Extraction with supercritical gases

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, G M; Wilke, G; Stahl, E

    1980-01-01

    The contents of this book derive from a symposium held on the 5th and 6th of June 1978 in the ''Haus der Technik'' in Essen. Contributions were made on separation with supercritical gases, fluid extraction of hops, spices and tobacco, physicochemical principles of extraction, phase equilibria and critical curves of binary ammonia-hydrocarbon mixtures, a quick method for the microanalytical evaluation of the dissolving power of supercritical gases, chromatography with supercritical fluids, the separation of nonvolatile substances by means of compressed gases in countercurrent processes, large-scale industrial plant for extraction with supercritical gases, and the development and design of plant for high-pressure extraction of natural products.

  6. Incorporating Relation Paths in Neural Relation Extraction

    OpenAIRE

    Zeng, Wenyuan; Lin, Yankai; Liu, Zhiyuan; Sun, Maosong

    2016-01-01

    Distantly supervised relation extraction has been widely used to find novel relational facts from plain text. To predict the relation between a pair of target entities, existing methods rely solely on the direct sentences containing both entities. In fact, there are also many sentences containing only one of the target entities, which provide rich and useful information for relation extraction. To address this issue, we build inference chains between two target entities via intermediate...

  7. Antioxidants: Characterization, natural sources, extraction and analysis

    OpenAIRE

    OROIAN, MIRCEA; Escriche Roberto, Mª Isabel

    2015-01-01

    [EN] Recently many review papers regarding antioxidants from different sources and different extraction and quantification procedures have been published. However, none of them has all the information regarding antioxidants (chemistry, sources, extraction and quantification). This article tries to take a different perspective on antioxidants for the new researcher involved in this field. Antioxidants from fruit, vegetables and beverages play an important role in human health, fo...

  8. Rate phenomena in uranium extraction by amines

    International Nuclear Information System (INIS)

    Coleman, C.F.; McDowell, W.J.

    1979-01-01

    Kinetics studies and other rate measurements are reviewed in the amine extraction of uranium and of some other related and associated metal ions. Equilibration is relatively fast in the uranium sulfate systems most important to uranium hydrometallurgy. Significantly slow equilibration has been encountered in some other systems. Most of the recorded rate information, both qualitative and quantitative, has come from exploratory and process-development work, while some kinetics studies have been directed specifically toward elucidation of extraction mechanisms. 71 references

  9. Exhaustive extraction of peptides by electromembrane extraction

    DEFF Research Database (Denmark)

    Huang, Chuixiu; Gjelstad, Astrid; Pedersen-Bjergaard, Stig

    2015-01-01

    trifluoroacetate, and leu-enkephalin were extracted from 600 μL of 25 mM phosphate buffer (pH 3.5), through a supported liquid membrane (SLM) containing di-(2-ethylhexyl)-phosphate (DEHP) dissolved in an organic solvent, and into 600 μL of an acidified aqueous acceptor solution using a thin flat membrane-based EME...

  10. Diluent effects in solvent extraction. The Effects of Diluents in Solvent Extraction - a literature study

    International Nuclear Information System (INIS)

    Loefstroem-Engdahl, Elin; Aneheim, Emma; Ekberg, Christian; Foreman, Mark; Skarnemark, Gunnar

    2010-01-01

    The fact that the choice of organic diluent is important for a solvent extraction process goes without saying. Several factors, e.g. price, flash point, viscosity and polarity, each have their place in the planning of a solvent extraction system. This high number of variables makes the lack of compilations concerning diluent effects an interesting topic. Often, research interest in a specific extraction system focuses on the extractant used and the complexes formed during extraction. The diluents used are often classical ones, even though it has been shown that the choice of diluent can affect extraction as well as separation in an extraction system. This large field cannot, of course, be fully summarized in one article, but an attempt is made to point out important steps in the understanding of diluent effects in solvent extraction. To make the information concerning diluent effects and applications more easily accessible, this review offers a selective summary of the literature on diluent effects in solvent extraction. (authors)

  11. Foundations of a Marxist Theory of the Political Economy of Information: Trade Secrets and Intellectual Property, and the Production of Relative Surplus Value and the Extraction of Rent-Tribute

    Directory of Open Access Journals (Sweden)

    Jakob Rigi

    2014-12-01

    Full Text Available The aim of this article is to sketch a preliminary outline of a Marxist theory of the political economy of information. It defines information as a symbolic form that can be digitally copied. This definition is purely formal and disregards epistemological, ideological, and functional aspects. The article argues that the value of information defined in this sense tends to zero and therefore the price of information is rent. However, information plays a central role in the production of relative surplus value on the one hand, and the distribution of the total social surplus value in forms of surplus profits and rents, on the other. Thus, the hegemony of information technologies in contemporary productive forces has not made Marx’s theory of value irrelevant. On the contrary, the political economy of information can only be understood in the light of this theory. The article demonstrates that the capitalist production and distribution of surplus value at the global level forms the foundation of the political economy of information.

  12. Plant extraction process

    DEFF Research Database (Denmark)

    2006-01-01

    A method for producing a plant extract comprises incubating a plant material with an enzyme composition comprising a lipolytic enzyme.

  13. AGS slow extraction improvements

    International Nuclear Information System (INIS)

    Glenn, J.W.; Smith, G.A.; Sandberg, J.N.; Repeta, L.; Weisberg, H.

    1979-01-01

    Improvement of the straightness of the F5 copper septum increased the AGS slow extraction efficiency from approx. 80% to approx. 90%. Installation of an electrostatic septum at H2O, 24 betatron wavelengths upstream of F5, further improved the extraction efficiency to approx. 97%

  14. Extraction of metal values

    Energy Technology Data Exchange (ETDEWEB)

    Dalton, R F

    1988-10-19

    Metal values (especially uranium values) are extracted from aqueous solutions of metal oxyions in the absence of halogen ion using an imidazole of defined formula. Especially preferred extractants are 1-alkyl imidazoles and benzimidazoles having from 7 to 25 carbon atoms in the alkyl group.

  15. (Lamiaceae) root extracts

    African Journals Online (AJOL)

    Purpose: To evaluate the larvicidal, nematicidal, antifeedant, and antifungal effects of 10 solvent extracts of Mentha spicata root. Methods: Ten solvent extracts were investigated for their total flavonoid and phenolic content and screened for larvicidal, nematicidal, antifeedant, and antifungal activities. The total phenolic ...

  16. Is there any need for domain-dependent control information? A reply

    Energy Technology Data Exchange (ETDEWEB)

    Minton, S. [USC Information Sciences Inst., Marina del Rey, CA (United States)

    1996-12-31

    In this paper, we consider the role that domain-dependent control knowledge plays in problem solving systems. Ginsberg and Geddis have claimed that domain-dependent control information has no place in declarative systems; instead, they say, such information should be derived from declarative facts about the domain plus domain-independent principles. We dispute their conclusion, arguing that it is impractical to generate control knowledge solely on the basis of logical derivations. We propose that simplifying abstractions are crucial for deriving control knowledge, and, as a result, empirical utility evaluation of the resulting rules will frequently be necessary to validate the utility of derived control knowledge. We illustrate our arguments with examples from two implemented systems.

  17. Analysis of geologic terrain models for determination of optimum SAR sensor configuration and optimum information extraction for exploration of global non-renewable resources. Pilot study: Arkansas Remote Sensing Laboratory, part 1, part 2, and part 3

    Science.gov (United States)

    Kaupp, V. H.; Macdonald, H. C.; Waite, W. P.; Stiles, J. A.; Frost, F. S.; Shanmugam, K. S.; Smith, S. A.; Narayanan, V.; Holtzman, J. C. (Principal Investigator)

    1982-01-01

    Computer-generated radar simulations and mathematical geologic terrain models were used to establish the optimum radar sensor operating parameters for geologic research. An initial set of mathematical geologic terrain models was created for three basic landforms and families of simulated radar images were prepared from these models for numerous interacting sensor, platform, and terrain variables. The tradeoffs between the various sensor parameters and the quantity and quality of the extractable geologic data were investigated as well as the development of automated techniques of digital SAR image analysis. Initial work on a texture analysis of SEASAT SAR imagery is reported. Computer-generated radar simulations are shown for combinations of two geologic models and three SAR angles of incidence.

  18. Extraction chromatography of actinides

    International Nuclear Information System (INIS)

    Muller, W.

    1978-01-01

    Extraction chromatography of actinides in oxidation states from 2 to 6 is reviewed. Data are given on the use in this method of neutral (TBP), basic (substituted ammonium salts) and acidic [di-(2-ethylhexyl)-phosphoric acid (D2EHPA)] extracting agents, as well as ketones, esters, alcohols and β-diketones. Using the example of actinide separation with D2EHPA, the factors influencing the efficiency of chromatographic separation are discussed (nature and particle size of the carrier materials, amount of extracting agent on the carrier, temperature and elution rate).

  19. The organophosphorus extractants

    International Nuclear Information System (INIS)

    Zaoul, B.; Attou, M.; Azzouz, A.

    1989-07-01

    This work consists of a bibliographic review dealing with the chemistry of phosphorus and organophosphorus compounds, and especially with the main extracting agents used in uranium ore treatment. In this context, special interest is devoted to TBP, D2EHPA and TOPO. The content of this work is based on a large bibliography of about one hundred references covering the nomenclature, classification and chemical structures of the organophosphorus compounds, as well as the synthesis methods, purification and analysis of the main extracting agents used in uranium extraction.

  20. Substoichiometric extraction of phosphorus

    International Nuclear Information System (INIS)

    Shigematsu, T.; Kudo, K.

    1981-01-01

    A study of the substoichiometric extraction of phosphorus is described. Phosphorus was extracted in the form of ternary compounds such as ammonium phosphomolybdate, 8-hydroxyquinolinium phosphomolybdate, tetraphenylarsonium phosphomolybdate and tri-n-octylamine phosphomolybdate. Consequently, phosphorus was extracted substoichiometrically by the addition of a substoichiometric amount of molybdenum for the four phosphomolybdate compounds. On the other hand, phosphorus could be separated substoichiometrically with a substoichiometric amount of tetraphenylarsonium chloride or tri-n-octylamine. Stoichiometric ratios of these ternary compounds obtained substoichiometrically were 1:12:3 for phosphorus, molybdenum and organic reagent. The applicability of these compounds to phosphorus determination is also discussed. (author)

  1. Counter current extraction of phosphoric acid: Food grade acid production

    International Nuclear Information System (INIS)

    Shlewit, H.; AlIbrahim, M.

    2009-01-01

    Extraction, scrubbing and stripping of phosphoric acid from Syrian wet-process phosphoric acid were carried out using a micro-pilot plant of mixer-settler type with a capacity of 8 l/h. Tributyl phosphate (TBP)/di-isopropyl ether (DIPE) in kerosene was used as extractant. Extraction and stripping equilibrium curves were evaluated. The number of extraction and stripping stages needed to achieve a convenient and feasible yield was determined. A detailed flow sheet was suggested for the proposed continuous process. The data obtained include useful information for the design of a phosphoric acid extraction plant. The produced phosphoric acid was characterized using different analytical techniques. (author)

  2. Hydroalcohol Fruit Peel Extract

    African Journals Online (AJOL)

    L) fruit peel using 80 % ethanol-induced gastric ulcer model in rats. Methods: Male ... Conclusion: The study indicates the antiulcer properties of the methanol extracts of north white ... experimentation, Cimetidine was obtained from.

  3. CARICA PAPAYA EXTRACTS

    African Journals Online (AJOL)

    DR. AMINU

    2Department of Science Laboratory Technology, Hussaini Adamu Federal Polytechnic, Kazaure ... Powdered leaves of Carica papaya (L.) were extracted with ethanol and partitioned in .... about 15 minutes indicated the presence of saponins.

  4. Comparison of mentha extracts obtained by different extraction methods

    Directory of Open Access Journals (Sweden)

    Milić Slavica

    2006-01-01

    Full Text Available Different methods of mentha extraction, such as steam distillation, extraction by methylene chloride (Soxhlet extraction) and supercritical fluid extraction (SFE) by carbon dioxide (CO2), were investigated. SFE by CO2 was performed at a pressure of 100 bar and a temperature of 40°C. The extraction yield, as well as the qualitative and quantitative composition of the obtained extracts, determined by the GC-MS method, were compared.

  5. Biologically active extracts with kidney affections applications

    Science.gov (United States)

    Pascu (Neagu), Mihaela; Pascu, Daniela-Elena; Cozea, Andreea; Bunaciu, Andrei A.; Miron, Alexandra Raluca; Nechifor, Cristina Aurelia

    2015-12-01

    This paper aims to select plant materials rich in bioflavonoid compounds from herbs known for their performance in the prevention and therapy of renal diseases, namely kidney stones and urinary infections (renal lithiasis, nephritis, urethritis, cystitis, etc.). It presents a comparative study of the composition of medicinal plant extracts belonging to the Ericaceae family: cranberry (fruit and leaves), Vaccinium vitis-idaea L., and bilberry (fruit), Vaccinium myrtillus L. Concentrated extracts obtained from the medicinal plants used in this work were analyzed from structural, morphological and compositional points of view using different techniques: chromatographic methods (HPLC), scanning electron microscopy, infrared and UV spectrophotometry, as well as a kinetic model. Liquid chromatography was able to identify the specific compound of the Ericaceae family, arbutoside, present in all three extracts, as well as specific components of each species, mostly from the class of polyphenols. The identification and quantitative determination of the active ingredients in these extracts can give information related to their therapeutic effects.

  6. Adaptive web data extraction policies

    Directory of Open Access Journals (Sweden)

    Provetti, Alessandro

    2008-12-01

    Full Text Available Web data extraction is concerned, among other things, with routine data accessing and downloading from continuously-updated dynamic Web pages. There is a relevant trade-off between the rate at which the external Web sites are accessed and the computational burden on the accessing client. We address the problem by proposing a predictive model, typical of the Operating Systems literature, of the rate-of-update of each Web source. The presented model has been implemented into a new version of the Dynamo project: a middleware that assists in generating informative RSS feeds out of traditional HTML Web sites. To be effective (i.e., to make RSS feeds timely and informative) and to be scalable, Dynamo needs careful tuning and customization of its polling policies, which are described in detail.
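
    The record does not give the policy equations, but a predictive rate-of-update model can be sketched with an exponential moving average, as below; the constants and class name are illustrative assumptions, not Dynamo's actual policy.

```python
# Minimal sketch: estimate a source's update rate with an exponential
# moving average and poll at a fraction of the estimated interval.

class AdaptivePoller:
    def __init__(self, initial_interval=3600.0, alpha=0.3):
        self.est_interval = initial_interval  # seconds between updates
        self.alpha = alpha

    def observe(self, seconds_since_last_change: float) -> None:
        """Blend each observed update gap into the estimate."""
        self.est_interval = (self.alpha * seconds_since_last_change
                             + (1 - self.alpha) * self.est_interval)

    def next_poll_delay(self) -> float:
        # Poll twice per expected update so few changes are missed,
        # but never hammer the server with sub-minute polls.
        return max(60.0, self.est_interval / 2)

poller = AdaptivePoller()
for gap in (1800, 2400, 900):  # observed update gaps in seconds
    poller.observe(gap)
print(round(poller.next_poll_delay()))
```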

  7. Beam Extraction and Transport

    CERN Document Server

    Kalvas, T.

    2013-12-16

    This chapter gives an introduction to low-energy beam transport systems, and discusses the typically used magnetostatic elements (solenoid, dipoles and quadrupoles) and electrostatic elements (einzel lens, dipoles and quadrupoles). The ion beam emittance, beam space-charge effects and the physics of ion source extraction are introduced. Typical computer codes for analysing and designing ion optical systems are mentioned, and the trajectory tracking method most often used for extraction simulations is described in more detail.

  8. The Importance of Data Quality in Using Health Information Exchange (HIE) Networks to Improve Health Outcomes: Case Study of a HIE Extracted Dataset of Patients with Congestive Heart Failure Participating in a Regional HIE

    Science.gov (United States)

    Cartron-Mizeracki, Marie-Astrid

    2016-01-01

    Expenditures on health information technology (HIT) by healthcare organizations are growing exponentially, and its value is the subject of criticism and skepticism. Because HIT is viewed as capable of improving major health care indicators, the government offers incentives to health care providers and organizations to implement solutions.…

  9. Residual and Destroyed Accessible Information after Measurements

    Science.gov (United States)

    Han, Rui; Leuchs, Gerd; Grassl, Markus

    2018-04-01

    When quantum states are used to send classical information, the receiver performs a measurement on the signal states. The amount of information extracted is often not optimal due to the receiver's measurement scheme and experimental apparatus. For quantum nondemolition measurements, there is potentially some residual information in the postmeasurement state, while part of the information has been extracted and the rest is destroyed. Here, we propose a framework to characterize a quantum measurement by how much information it extracts and destroys, and how much information it leaves in the residual postmeasurement state. The concept is illustrated for several receivers discriminating coherent states.

  10. Extracting knowledge from protein structure geometry

    DEFF Research Database (Denmark)

    Røgen, Peter; Koehl, Patrice

    2013-01-01

    potential from geometric knowledge extracted from native and misfolded conformers of protein structures. This new potential, Metric Protein Potential (MPP), has two main features that are key to its success. Firstly, it is composite in that it includes local and nonlocal geometric information on proteins...

  11. Legislative Districts, In order for others to use the information in the Census TIGER database in a geographic information system or for other geographic applications, the Census Bureau releases to the public extracts of the database in the form of TIGER/Line files., Published in 2006, 1:24000 (1in=2000ft) scale, Louisiana State University (LSU).

    Data.gov (United States)

    NSGIC Education | GIS Inventory — Legislative Districts dataset current as of 2006. In order for others to use the information in the Census TIGER database in a geographic information system or for...

  12. Information and Informality

    DEFF Research Database (Denmark)

    Larsson, Magnus; Segerstéen, Solveig; Svensson, Cathrin

    2011-01-01

    leaders on the basis of their possession of reliable knowledge in technical as well as organizational domains. The informal leaders engaged in interpretation and brokering of information and knowledge, as well as in mediating strategic values and priorities on both formal and informal arenas. Informal...... leaders were thus seen to function on the level of the organization as a whole, and in cooperation with formal leaders. Drawing on existing theory of leadership in creative and professional contexts, this cooperation can be specified to concern task structuring. The informal leaders in our study...... contributed to task structuring through sensemaking activities, while formal leaders focused on aspects such as clarifying output expectations, providing feedback, project structure, and diversity....

  13. Tevatron extraction microcomputer

    International Nuclear Information System (INIS)

    Chapman, L.; Finley, D.A.; Harrison, M.; Merz, W.

    1985-01-01

    Extraction in the Fermilab Tevatron is controlled by a multi-processor Multibus microcomputer system called QXR (Quad eXtraction Regulator). QXR monitors several analog beam signals and controls three sets of power supplies: the ''bucker'' and ''pulse'' magnets at a rate of 5760 Hz, and the ''QXR'' magnets at 720 Hz. QXR supports multiple slow spills (up to a total of 35 seconds) with multiple fast pulses intermixed. It linearizes the slow spill and bucks out the high frequency components. Fast extraction is done by outputting a variable pulse waveform. Closed loop learning techniques are used to improve performance from cycle to cycle for both slow and fast extraction. The system is connected to the Tevatron clock system so that it can track the machine cycle. QXR is also connected to the rest of the Fermilab control system, ACNET. Through ACNET, human operators and central computers can monitor and control extraction through communications with QXR. The controls hardware and software both employ some standard and some specialized components. This paper gives an overview of QXR as a control system; another paper (1) summarizes performance

  14. Tevatron extraction microcomputer

    International Nuclear Information System (INIS)

    Chapman, L.; Finley, D.A.; Harrison, M.; Merz, W.

    1985-06-01

    Extraction in the Fermilab Tevatron is controlled by a multi-processor Multibus microcomputer system called QXR (Quad eXtraction Regulator). QXR monitors several analog beam signals and controls three sets of power supplies: the ''bucker'' and ''pulse'' magnets at a rate of 5760 Hz, and the ''QXR'' magnets at 720 Hz. QXR supports multiple slow spills (up to a total of 35 seconds) with multiple fast pulses intermixed. It linearizes the slow spill and bucks out the high frequency components. Fast extraction is done by outputting a variable pulse waveform. Closed loop learning techniques are used to improve performance from cycle to cycle for both slow and fast extraction. The system is connected to the Tevatron clock system so that it can track the machine cycle. QXR is also connected to the rest of the Fermilab control system, ACNET. Through ACNET, human operators and central computers can monitor and control extraction through communications with QXR. The controls hardware and software both employ some standard and some specialized components. This paper gives an overview of QXR as a control system; another paper summarizes performance

  15. Isoflavones hydrolisis and extraction

    Directory of Open Access Journals (Sweden)

    Jozilene Fernandes Farias dos Santos

    2012-12-01

    Full Text Available Isoflavones are found in leguminous species and are phytoestrogens widely used by industry for their beneficial effects as estrogen mimics, antioxidant action and anti-cancer activity. The identification and quantification of isoflavones in plants is a need due to the high demand of industry. Several methods are used for their extraction, using organic solvents (methanol, ethanol and acetonitrile). Samples from five legume species from the Instituto de Zootecnia (IZ) Forage Gene Bank were tested. All seeds received a hydrothermic treatment, immersed in pure water at 50°C for 12 hours, and were then oven-dried. In this work we tested extraction using the hydrothermic treatment alone and the hydrothermic treatment combined with a methanol extraction protocol. Seeds were ground; half of the samples were resuspended in PBS (phosphate buffer) and the other half were submitted to 4 mL of methanol with 1% acetic acid, soaked for 5 hours and shaken every 15 minutes at room temperature. The five legume species in which we quantified isoflavones by enzyme immunoassay (EIA) were: Calopogonium mucunoides, Bauhinia sp., Cajanus cajan, Galactia martii and Leucaena leucocephala. The extraction procedure is a recommendation of the AOAC (Association of Official Analytical Chemists) for isoflavone quantification. Our results show an increase of extraction using 80% methanol plus 1% acetic acid in comparison to the hydrothermic procedure alone (Figure 1).

  16. Ultrasound-Assisted Extraction: Effect of Extraction Time and Solvent ...

    African Journals Online (AJOL)

    Purpose: To investigate the influence of extraction conditions assisted by ultrasound on the quality of extracts obtained from Mesembryanthemum edule shoots. Methods: The extraction procedure was carried out in an ultrasonic bath. The effect of two solvents (methanol and ethanol) and two extraction times (5 and 10 min) ...

  17. The dynamic information architecture system : a simulation framework to provide interoperability for process models

    International Nuclear Information System (INIS)

    Hummel, J. R.; Christiansen, J. H.

    2002-01-01

    As modeling and simulation becomes a more important part of the day-to-day activities in industry and government, organizations are being faced with the vexing problem of how to integrate a growing suite of heterogeneous models both within their own organizations and between organizations. The Argonne National Laboratory, which is operated by the University of Chicago for the United States Department of Energy, has developed the Dynamic Information Architecture System (DIAS) to address such problems. DIAS is an object-oriented, subject domain independent framework that is used to integrate legacy or custom-built models and applications. In this paper we will give an overview of the features of DIAS and give examples of how it has been used to integrate models in a number of applications. We shall also describe some of the key supporting DIAS tools that provide seamless interoperability between models and applications
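
    DIAS itself is not described in code in this record, but the adapter idea behind such subject-domain-independent frameworks can be sketched in a few lines of Python: every legacy model is wrapped so the framework sees one uniform interface. All class and method names below are invented for illustration.

```python
from abc import ABC, abstractmethod

# A domain-independent framework talks to every model through one
# interface; each legacy model gets a thin adapter.

class ModelAdapter(ABC):
    @abstractmethod
    def step(self, state: dict) -> dict:
        """Advance the wrapped model and return the updated state."""

class LegacyWeatherModel:
    def run_hour(self, temp_c):          # pre-existing, non-standard API
        return temp_c + 0.1

class WeatherAdapter(ModelAdapter):
    def __init__(self):
        self.model = LegacyWeatherModel()

    def step(self, state):
        # Translate between the framework's state dict and the legacy API.
        state["temp_c"] = self.model.run_hour(state["temp_c"])
        return state

class Framework:
    def __init__(self, adapters):
        self.adapters = adapters

    def simulate(self, state, steps):
        for _ in range(steps):
            for adapter in self.adapters:
                state = adapter.step(state)
        return state

print(Framework([WeatherAdapter()]).simulate({"temp_c": 20.0}, 3))
```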

  18. Substoichiometric extraction of chromium

    International Nuclear Information System (INIS)

    Shigematsu, T.; Kudo, K.

    1980-01-01

    Substoichiometric extraction of chromium with tetraphenylarsonium chloride (TPACl), tri-n-octylamine (TNOA), diethylammonium diethyldithiocarbamate (DDDC) and ammonium pyrrolidinedithiocarbamate (APDC) was examined in detail. Chromium can be extracted substoichiometrically in a pH range, which is 1.1-2.6 for the TPACl compound, 0.6-2.3 for the TNOA compound, 5.1-6.4 for the DDDC chelate and 3.9-4.9 for the APDC chelate. Chromium in high-purity calcium carbonate, Orchard Leaves (NBS SRM-1571) and Brewers Yeast (NBS SRM-1569) was determined by neutron activation analysis combined with substoichiometric extraction by DDDC and APDC. The values of 2.0±0.02 ppm and 2.6±0.2 ppm were obtained for Brewers Yeast and Orchard Leaves, respectively. These values were in good agreement with those reported by NBS. The reaction mechanism and the reaction ratio between hexavalent chromium and dithiocarbamate are also discussed. (author)

  19. Fully Convolutional Network Based Shadow Extraction from GF-2 Imagery

    Science.gov (United States)

    Li, Z.; Cai, G.; Ren, H.

    2018-04-01

    There are many shadows on high spatial resolution satellite images, especially in urban areas. Although shadows on imagery severely affect the extraction of land cover or land use information, they provide auxiliary information for building extraction, for which it is hard to achieve satisfactory accuracy through image classification alone. This paper focuses on building shadow extraction, designing a fully convolutional network trained on samples collected from GF-2 satellite imagery in the urban region of Changchun city. By means of spatial filtering and the calculation of adjacency relationships along the sunlight direction, small patches from vegetation or bridges were eliminated from the preliminarily extracted shadows. Finally, the building shadows were separated. The building shadow information extracted by the proposed method was compared with the results from traditional object-oriented supervised classification algorithms, showing that the deep learning network approach can improve the accuracy to a large extent.
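
    A minimal sketch of a fully convolutional shadow classifier, assuming PyTorch; the layer sizes are illustrative and not the paper's architecture, but the shape of the idea (convolutional encoder, 1x1 classification head, upsampling back to a per-pixel map) is the same.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),   # GF-2 has 4 bands
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # A 1x1 conv plus upsampling keeps the network fully
        # convolutional, so it emits a per-pixel shadow probability map.
        self.head = nn.Sequential(
            nn.Conv2d(32, 1, 1),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.head(self.encoder(x))

patch = torch.rand(1, 4, 128, 128)   # one 4-band image patch
print(TinyFCN()(patch).shape)        # torch.Size([1, 1, 128, 128])
```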

  20. Genotoxicity of plant extracts

    Directory of Open Access Journals (Sweden)

    Vera M. F. Vargas

    1991-01-01

    Full Text Available Aqueous extracts of seven species used in Brazilian popular medicine (Achyrocline satureoides, Iodina rhombifolia, Desmodium incanum, Baccharis anomala, Tibouchina asperior, Luehea divaricata, Maytenus ilicifolia) were screened for the presence of mutagenic activity in the Ames test (Salmonella/microsome). Positive results were obtained for A. satureoides, B. anomala and L. divaricata with microsomal activation. As shown elsewhere (Vargas et al., 1990), the metabolites of the A. satureoides extract also show the capacity to induce prophage and/or the SOS response in the microscreen phage induction assay and the SOS spot chromotest.

  1. Nano-electromembrane extraction

    DEFF Research Database (Denmark)

    Payán, María D Ramos; Li, Bin; Petersen, Nickolaj J.

    2013-01-01

    as extraction selectivity. Compared with conventional EME, the acceptor phase volume in nano-EME was down-scaled by a factor of more than 1000. This resulted in a very high enrichment capacity. With loperamide as an example, an enrichment factor exceeding 500 was obtained in only 5 min of extraction...... electrophoresis (CE). In that way the sample preparation performed by nano-EME was coupled directly with a CE separation. Separation performance of 42,000-193,000 theoretical plates could easily be obtained by this direct sample preparation and injection technique that both provided enrichment as well...

  2. Uranium extraction at Rossing

    International Nuclear Information System (INIS)

    Kesler, S.B.; Fahrbach, D.O.E.

    1982-01-01

    Rossing Uranium Ltd. operates a large open pit uranium mine and extraction plant at a remote site in the Namib desert. Production started at the plant in 1978. A ferric leach process was introduced later, and the new leach plant began commissioning in October 1981. The process has proved to be reliable and easily controlled. Ferric iron is supplied through recovery from the acid plant calcine, and levels can be maintained above the design levels. Leach extractions were increased more than expected when this process was adopted, and the throughput has been considerably reduced, allowing cost savings in mining and milling

  3. Extraction spectrophotometric analyzer

    International Nuclear Information System (INIS)

    Batik, J.; Vitha, F.

    1985-01-01

    Automation is discussed of extraction spectrophotometric determination of uranium in a solution. Uranium is extracted from accompanying elements in an HCl medium with a solution of tributyl phosphate in benzene. The determination is performed by measuring absorbance at 655 nm in a single-phase ethanol-water-benzene-tributyl phosphate medium. The design is described of an analyzer consisting of an analytical unit and a control unit. The analyzer performance promises increased productivity of labour, improved operating and hygiene conditions, and mainly more accurate results of analyses. (J.C.)

  4. Extraction Methods, Variability Encountered in

    NARCIS (Netherlands)

    Bodelier, P.L.E.; Nelson, K.E.

    2014-01-01

    Synonyms: Bias in DNA extraction methods; Variation in DNA extraction methods. Definition: The variability in extraction methods is defined as differences in quality and quantity of DNA observed using various extraction protocols, leading to differences in outcome of microbial community composition.

  5. The methodology of semantic analysis for extracting physical effects

    Science.gov (United States)

    Fomenkova, M. A.; Kamaev, V. A.; Korobkin, D. M.; Fomenkov, S. A.

    2017-01-01

    The paper presents a new methodology of semantic analysis for extracting physical effects. This methodology is based on the Tuzov ontology, which formally describes the Russian language. In this paper, semantic patterns are described for extracting structural physical information in the form of physical effects, and a new text analysis algorithm is described.

  6. Extracting information from 0νββ decay and LHC pp-cross sections: Limits on the left-right mixing angle and right-handed boson mass

    Science.gov (United States)

    Civitarese, O.; Suhonen, J.; Zuber, K.

    2015-10-01

    The existence of massive neutrinos forces the extension of the Standard Model of electroweak interactions to accommodate them and/or right-handed currents. This is one of the fundamental questions in today's physics. Its consequences would reflect upon several decay processes, like the very exotic nuclear double-beta decay. On the other hand, high-energy proton-proton reactions of the type performed at the LHC accelerator can provide information about the existence of a right-handed generation of the W and Z bosons. Here we address the possibility of performing a joint analysis of the results reported by the ATLAS and CMS collaborations (σ(pp → 2l + jets)) and the latest measurements of nuclear double-beta decays reported by the GERDA and EXO collaborations.

  7. Extracting information from 0νββ decay and LHC pp-cross sections: Limits on the left-right mixing angle and right-handed boson mass

    Energy Technology Data Exchange (ETDEWEB)

    Civitarese, O., E-mail: osvaldo.civitarese@fisica.unlp.edu.ar [Departamento de Física, UNLP, C.C. 67, (1900) La Plata (Argentina); Suhonen, J. [University of Jyvaskyla, Department of Physics, P.O. Box 35, FI-40014 (Finland); Zuber, K. [Department of Physics, TU-University, Dresden (Germany)

    2015-10-28

    The existence of massive neutrinos forces the extension of the Standard Model of electroweak interactions to accommodate them and/or right-handed currents. This is one of the fundamental questions in today's physics. Its consequences would reflect upon several decay processes, like the very exotic nuclear double-beta decay. On the other hand, high-energy proton-proton reactions of the type performed at the LHC accelerator can provide information about the existence of a right-handed generation of the W and Z bosons. Here we address the possibility of performing a joint analysis of the results reported by the ATLAS and CMS collaborations (σ(pp → 2l + jets)) and the latest measurements of nuclear double-beta decays reported by the GERDA and EXO collaborations.

  8. Large datasets: Segmentation, feature extraction, and compression

    Energy Technology Data Exchange (ETDEWEB)

    Downing, D.J.; Fedorov, V.; Lawkins, W.F.; Morris, M.D.; Ostrouchov, G.

    1996-07-01

    Large data sets with more than several million multivariate observations (tens of megabytes or gigabytes of stored information) are difficult or impossible to analyze with traditional software. The amount of output which must be scanned quickly dilutes the ability of the investigator to confidently identify all the meaningful patterns and trends which may be present. The purpose of this project is to develop both a theoretical foundation and a collection of tools for automated feature extraction that can be easily customized to specific applications. Cluster analysis techniques are applied as a final step in the feature extraction process, which helps make data surveying simple and effective.

  9. Grape Seed Extract

    Science.gov (United States)

    ... Greece people have used grapes, grape leaves, and sap for health purposes. Grape seed extract was developed ... sharing research results, and educating the public. Its resources include publications (such as Dietary ...

  10. SPS slow extraction septa

    CERN Multimedia

    CERN PhotoLab

    1979-01-01

    SPS long straight section (LSS) with a series of 5 septum tanks for slow extraction (view in the direction of the proton beam). There are 2 of these: in LSS2, towards the N-Area; in LSS6 towards the W-Area. See also Annual Report 1975, p.175.

  11. SPS extraction systems

    CERN Multimedia

    CERN PhotoLab

    1973-01-01

    One of the 3-m long electrostatics septa. The septum itself consists of 0.15 mm thick molybdenum wires with a 1.5 mm pitch. Each of the two SPS extraction systems will contain four of these electrostatic septa.

  12. LEAR: antiproton extraction lines

    CERN Multimedia

    Photographic Service

    1992-01-01

    Antiprotons, decelerated in LEAR to a momentum of 100 MeV/c (kinetic energy of 5.3 MeV), were delivered to the experiments in an "Ultra-Slow Extraction", dispensing some 1E9 antiprotons over times counted in hours. Beam-splitters and a multitude of beam-lines allowed several users to be supplied simultaneously.

  13. Concepts for immobilized extractants

    International Nuclear Information System (INIS)

    Paine, R.T.

    1993-01-01

    This paper addresses the problem of cleaning actinides from geomedia. In the past, actinides were often released to the ground because of their tendency to bind tightly to forms of geomedia, and in addition spills have occurred over time. Remediating these areas involves finding ways either to guarantee the retention of the actinides in the geomedia, or to extract them and leave the soils clean. One possible way to clean soils is to wash them, which for actinide extraction means the use of ligands that bind competitively with actinides in the presence of soil fractions. An array of organic ligands is known to bind with actinides, but the larger problem of handling these ligands in a manner which allows concentration of the actinides is still open. The author addresses work to bind such ligands to different types of matrices which can then be used in packed extraction columns to remove actinides from flow streams, with the actinides finally concentrated by minimal-volume backflushing to extract them from the column.

  14. Information Space, Information Field, Information Environment

    Directory of Open Access Journals (Sweden)

    Victor Ya. Tsvetkov

    2014-08-01

    Full Text Available The article analyzes information space, information field and information environment. It shows that information space can be natural or artificial; that the information field is a substantive and processual object which articulates the property of space; and that the information environment is tied to some object, acts as its surroundings, and is considered with regard to it. This enables information environment to be defined as a subset of information space, which gives its passive description. Information environment can also be defined as a subset of an information field, which corresponds to its active description.

  15. Uranium extraction from phosphoric acid

    International Nuclear Information System (INIS)

    Araujo Figueiredo, C. de

    1984-01-01

    The recovery of uranium from phosphoric liquor by two extraction processes is studied. First, uranium is reduced to the tetravalent state and extracted by dioctylpyrophosphoric acid. Re-extraction is performed with concentrated phosphoric acid containing an oxidizing agent. The re-extract is submitted to the second process, in which uranium is extracted by di-ethylhexylphosphoric acid and trioctylphosphine oxide. (M.A.C.) [pt

  16. Comparative analysis of extracted heights from topographic maps ...

    African Journals Online (AJOL)

    Topographic maps represent the three-dimensional landscape by providing relief information in the form of contours in addition to plan information on which natural and man-made landmarks are quite accurately represented. Height information, extractible from topographic maps, comes in handy for most land use planning.

  17. Academic Activities Transaction Extraction Based on Deep Belief Network

    Directory of Open Access Journals (Sweden)

    Xiangqian Wang

    2017-01-01

    Full Text Available Extracting information about academic activity transactions from unstructured documents is a key problem in the analysis of the academic behaviour of researchers. An academic activity transaction includes five elements: person, activity, object, attributes, and time phrase. The traditional method of information extraction is to extract shallow text features and then to recognize advanced features from the text with supervision. Since the information processing of the different levels is completed in steps, the errors generated in the various steps accumulate and affect the accuracy of the final results. However, because the Deep Belief Network (DBN) model has the ability to learn advanced features from shallow text features automatically and without supervision, this model is employed to extract academic activity transactions. In addition, we use character-based features to describe the raw features of the named entities of academic activities, so as to improve the accuracy of named entity recognition. In this paper, the accuracy of academic activity extraction using character-based and word-based feature vectors to express the text features is compared, and both are compared with traditional text information extraction based on Conditional Random Fields. The results show that the DBN model is more effective for the extraction of academic activity transaction information.
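
    A hypothetical sketch of a character-based feature vector in Python with NumPy: each string is mapped to fixed-length character n-gram counts via the hashing trick, the kind of raw representation that could feed a DBN's visible layer. The dimensions and the hashing choice are illustrative assumptions, not the paper's.

```python
import numpy as np

def char_feature_vector(text: str, n: int = 2, dim: int = 256) -> np.ndarray:
    """Fixed-length vector of hashed character n-gram counts."""
    vec = np.zeros(dim, dtype=np.float32)
    for i in range(len(text) - n + 1):
        gram = text[i:i + n]
        vec[hash(gram) % dim] += 1.0   # hashing trick: gram -> bucket
    # L2-normalise so strings of different lengths stay comparable
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

# Such vectors would be the visible-layer input to the DBN.
print(char_feature_vector("attended ICML 2016 in New York")[:8])
```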

  18. artery disease guidelines with extracted knowledge from data mining

    Directory of Open Access Journals (Sweden)

    Peyman Rezaei-Hachesu

    2017-06-01

    Conclusion: Guidelines confirm the results achieved with data mining (DM) techniques and help to rank important risk factors based on national and local information. Evaluation of the extracted rules determined new patterns for CAD patients.

  19. How Do Physicians Become Medical Experts? A Test of Three Competing Theories: Distinct Domains, Independent Influence and Encapsulation Models

    Science.gov (United States)

    Violato, Claudio; Gao, Hong; O'Brien, Mary Claire; Grier, David; Shen, E.

    2018-01-01

    The distinction between basic sciences and clinical knowledge which has led to a theoretical debate on how medical expertise is developed has implications for medical school and lifelong medical education. This longitudinal, population based observational study was conducted to test the fit of three theories--knowledge encapsulation, independent…

  20. Akt1 binds focal adhesion kinase via the Akt1 kinase domain independently of the pleckstrin homology domain.

    Science.gov (United States)

    Basson, M D; Zeng, B; Wang, S

    2015-10-01

    Akt1 and focal adhesion kinase (FAK) are protein kinases that play key roles in normal cell signaling. Individually, aberrant expression of these kinases has been linked to a variety of cancers. Together, Akt1/FAK interactions facilitate cancer metastasis by increasing cell adhesion under conditions of increased extracellular pressure. Pathological and iatrogenic sources of pressure arise from tumor growth against constraining stroma or direct perioperative manipulation. We previously reported that a 15 mmHg increase in extracellular pressure causes Akt1 both to interact directly with FAK and to phosphorylate and activate it. We investigated the nature of the Akt1/FAK binding by creating truncations of recombinant FAK, conjugated to glutathione S-transferase (GST), to pull down full-length Akt1. Western blots probing for Akt1 showed that FAK/Akt1 binding persisted in FAK truncations consisting of only amino acids 1-126, FAK(NT1), which contains the F1 subdomain of its band 4.1, ezrin, radixin, and moesin (FERM) domain. Using FAK(NT1) as bait, we then pulled down truncated versions of recombinant Akt1 conjugated to HA (human influenza hemagglutinin). Probes for GST-FAK(NT1) showed Akt1-FAK binding to occur in the absence of both the Akt1 N-terminal pleckstrin homology (PH) domain and its adjacent hinge region. The Akt1 C-terminal regulatory domain was equally unnecessary for Akt1/FAK co-immunoprecipitation. Truncations involving the Akt1 catalytic domain showed that the domain by itself was enough to pull down FAK. Additionally, a fragment spanning from the PH domain to halfway through the catalytic domain demonstrated increased FAK binding compared to full-length Akt1. These results begin to delineate the Akt1/FAK interaction and can be used to manipulate their force-activated signal interactions. Furthermore, the finding that the N-terminal half of the Akt1 catalytic domain binds so strongly to FAK when cleaved from the rest of the protein may suggest a means of developing novel inhibitors that target this specific Akt1/FAK interaction.

  1. A Job with a Future? Delay Discounting, Magnitude Effects, and Domain Independence of Utility for Career Decisions.

    Science.gov (United States)

    Schoenfelder, Thomas E.; Hantula, Donald A.

    2003-01-01

    Seniors (n=20) assessed two job offers with differences in domain (salary/tasks), delay (career-long earnings), and magnitude (initial salary offer). Contrary to discounted utility theory, choices reflected nonconstant discount rates for future salary/tasks (delay effect), lower discount rates for salary/preferred tasks (magnitude effect), and a…

  2. Strategies for the extraction and analysis of non-extractable polyphenols from plants.

    Science.gov (United States)

    Domínguez-Rodríguez, Gloria; Marina, María Luisa; Plaza, Merichel

    2017-09-08

    The majority of studies of phenolic compounds from plants focus on the extractable fraction derived from an aqueous or aqueous-organic extraction. However, an important fraction of polyphenols is ignored because it remains retained in the extraction residue. These are the so-called non-extractable polyphenols (NEPs): high molecular weight polymeric polyphenols or individual low molecular weight phenolics associated with macromolecules. The scarce information available about NEPs shows that these compounds possess interesting biological activities, which is why interest in their study has been increasing in recent years. Furthermore, the extraction and characterization of NEPs are considered a challenge because the analytical methodologies developed so far present some limitations. Thus, the present literature review summarizes current knowledge of NEPs and the different methodologies for the extraction of these compounds, with a particular focus on hydrolysis treatments. Besides, this review provides information on the most recent developments in the purification, separation, identification and quantification of NEPs from plants. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Biological activity and chemical profile of Lavatera thuringiaca L. extracts obtained by different extraction approaches.

    Science.gov (United States)

    Mašković, Pavle Z; Veličković, Vesna; Đurović, Saša; Zeković, Zoran; Radojković, Marija; Cvetanović, Aleksandra; Švarc-Gajić, Jaroslava; Mitić, Milan; Vujić, Jelena

    2018-01-01

    Lavatera thuringiaca L. is a herbaceous perennial plant from the Malvaceae family, known for its biological activity and richness in polyphenolic compounds. Despite this, information regarding its biological activity and chemical profile is still insufficient. The aim of this study was to investigate the biological potential and chemical profile of Lavatera thuringiaca L., as well as the influence of the applied extraction technique on them. Two conventional and four non-conventional extraction techniques were applied in order to obtain extracts rich in bioactive compounds. Extracts were further tested for total phenolics, flavonoids, condensed tannins, gallotannins and anthocyanins contents using spectrophotometric assays. The polyphenolic profile was established using HPLC-DAD analysis. Biological activity was investigated with regard to antioxidant, cytotoxic and antibacterial activities, using four antioxidant assays, three different cell lines for cytotoxic activity and fifteen bacterial strains for antibacterial activity. Results showed that subcritical water extraction (SCW) dominated over the other extraction techniques, with the SCW extract exhibiting the highest biological activity. The study indicates that the plant Lavatera thuringiaca L. may be used as a potential source of biologically active compounds. Copyright © 2017 Elsevier GmbH. All rights reserved.

  4. The extraction and chromatographic determination of the essentials oils from Ocimum basilicum L. by different techniques

    Energy Technology Data Exchange (ETDEWEB)

    Soran, Maria Loredana; Varodi, Codruta; Lung, Ildiko; Surducan, Emanoil; Surducan, Vasile [National Institute for Research and Development of Isotopic and Molecular Technologies, 65-103 Donath, 400293 Cluj-Napoca (Romania); Cobzac, Simona Codruta, E-mail: loredana.soran@itim-cj.r [Babes-Bolyai University, Faculty of Chemistry and Chemical Engineering, 11 Arany Janos, 400028 Cluj-Napoca (Romania)

    2009-08-01

    Three different techniques (maceration, sonication and extraction in a microwave field) were used for the extraction of essential oils from Ocimum basilicum L. The extracts were analyzed by the TLC/HPTLC technique and fingerprint information was obtained. GC-FID was used to characterize the extraction efficiency and to identify the terpenic bioactive compounds. The most efficient extraction technique was maceration, followed by microwave and ultrasound. The best extraction solvent system was ethyl ether + ethanol (1:1, v/v). The main compounds identified in the Ocimum basilicum L. extracts were: α- and β-pinene (mixture), limonene, citronellol, and geraniol.

  5. The extraction and chromatographic determination of the essentials oils from Ocimum basilicum L. by different techniques

    International Nuclear Information System (INIS)

    Soran, Maria Loredana; Varodi, Codruta; Lung, Ildiko; Surducan, Emanoil; Surducan, Vasile; Cobzac, Simona Codruta

    2009-01-01

    Three different techniques (maceration, sonication and extraction in a microwave field) were used for the extraction of essential oils from Ocimum basilicum L. The extracts were analyzed by the TLC/HPTLC technique and fingerprint information was obtained. GC-FID was used to characterize the extraction efficiency and to identify the terpenic bioactive compounds. The most efficient extraction technique was maceration, followed by microwave and ultrasound. The best extraction solvent system was ethyl ether + ethanol (1:1, v/v). The main compounds identified in the Ocimum basilicum L. extracts were: α- and β-pinene (mixture), limonene, citronellol, and geraniol.

  6. Antioxidants: Characterization, natural sources, extraction and analysis.

    Science.gov (United States)

    Oroian, Mircea; Escriche, Isabel

    2015-08-01

    Recently many review papers regarding antioxidants from different sources and different extraction and quantification procedures have been published. However none of them has all the information regarding antioxidants (chemistry, sources, extraction and quantification). This article tries to take a different perspective on antioxidants for the new researcher involved in this field. Antioxidants from fruit, vegetables and beverages play an important role in human health, for example preventing cancer and cardiovascular diseases, and lowering the incidence of different diseases. In this paper the main classes of antioxidants are presented: vitamins, carotenoids and polyphenols. Recently, many analytical methodologies involving diverse instrumental techniques have been developed for the extraction, separation, identification and quantification of these compounds. Antioxidants have been quantified by different researchers using one or more of these methods: in vivo, in vitro, electrochemical, chemiluminescent, electron spin resonance, chromatography, capillary electrophoresis, nuclear magnetic resonance, near infrared spectroscopy and mass spectrometry methods. Copyright © 2015. Published by Elsevier Ltd.

  7. The dramatic extraction construction in French

    Directory of Open Access Journals (Sweden)

    Anne Abeillé

    2009-01-01

    Full Text Available Relying on spoken corpora (Corpaix, CRFP) and on previous studies (Sabio 1995, 1996), we identify a construction common in spoken French, which we analyze as a particular case of extraction: a. dix sept ans il a. (Seventeen years he has) [Corpaix]; b. deux cigarettes j'ai fumé. (Two cigarettes I smoked) [on the fly]. The construction can only be a root clause and a declarative clause. Its interpretation is that of a thetic proposition. On the other hand, it is not associated with a unique information structure, since it is compatible with a focus-ground partition, with the extracted constituent as a narrow focus, or with an all-focus interpretation. We call this construction 'dramatic extraction', and the extracted element a 'center' (i.e. a focus or a figure). We formalize our analysis in HPSG grammar.

  8. Semantic Location Extraction from Crowdsourced Data

    Science.gov (United States)

    Koswatte, S.; Mcdougall, K.; Liu, X.

    2016-06-01

    Crowdsourced Data (CSD) has recently received increased attention in many application areas, including disaster management. Convenience of production and use, data currency and abundance are some of the key reasons for this high interest. Conversely, quality issues like incompleteness, credibility and relevancy prevent the direct use of such data in important applications like disaster management. Moreover, the availability of location information in CSD is problematic, as it remains very low on many crowdsourced platforms such as Twitter. Also, the recorded location is mostly related to the mobile device or user location and often does not represent the event location. In CSD, the event location is discussed descriptively in the comments, in addition to the recorded location (which is generated by means of the mobile device's GPS or the mobile communication network). This study attempts to semantically extract CSD location information with the help of an ontological gazetteer and other available resources. 2011 Queensland flood tweets and Ushahidi Crowd Map data were semantically analysed to extract location information with the support of the Queensland Gazetteer, which was converted to an ontological gazetteer, and a global gazetteer. Some preliminary results show that the use of ontologies and semantics can improve the accuracy of place name identification in CSD and the process of location information extraction.

  9. SEMANTIC LOCATION EXTRACTION FROM CROWDSOURCED DATA

    Directory of Open Access Journals (Sweden)

    S. Koswatte

    2016-06-01

    Full Text Available Crowdsourced Data (CSD) has recently received increased attention in many application areas, including disaster management. Convenience of production and use, data currency and abundance are some of the key reasons for this high interest. Conversely, quality issues like incompleteness, credibility and relevancy prevent the direct use of such data in important applications like disaster management. Moreover, the availability of location information in CSD is problematic, as it remains very low on many crowdsourced platforms such as Twitter. Also, the recorded location is mostly related to the mobile device or user location and often does not represent the event location. In CSD, the event location is discussed descriptively in the comments, in addition to the recorded location (which is generated by means of the mobile device's GPS or the mobile communication network). This study attempts to semantically extract CSD location information with the help of an ontological gazetteer and other available resources. 2011 Queensland flood tweets and Ushahidi Crowd Map data were semantically analysed to extract location information with the support of the Queensland Gazetteer, which was converted to an ontological gazetteer, and a global gazetteer. Some preliminary results show that the use of ontologies and semantics can improve the accuracy of place name identification in CSD and the process of location information extraction.
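
    The gazetteer lookup at the heart of both versions of this record can be sketched in a few lines; the place names, coordinates, and tweet below are invented stand-ins, and a real pipeline would query the ontological Queensland Gazetteer rather than a flat Python dict.

    ```python
    import re

    # Hypothetical mini-gazetteer; a real system would load the Queensland
    # Gazetteer and normalise names through an ontology.
    GAZETTEER = {
        "brisbane": (-27.4698, 153.0251),
        "toowoomba": (-27.5606, 151.9539),
        "lockyer valley": (-27.5500, 152.3000),
    }

    def extract_locations(text):
        """Return (place, coordinates) pairs found in free text."""
        found = []
        lowered = text.lower()
        for place, coords in GAZETTEER.items():
            # Word-boundary match avoids hits inside longer words.
            if re.search(r"\b" + re.escape(place) + r"\b", lowered):
                found.append((place, coords))
        return found

    tweet = "Flood waters rising fast near Toowoomba and the Lockyer Valley"
    print(extract_locations(tweet))
    # [('toowoomba', (-27.5606, 151.9539)), ('lockyer valley', (-27.55, 152.3))]
    ```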

  10. Automated Water Extraction Index

    DEFF Research Database (Denmark)

    Feyisa, Gudina Legese; Meilby, Henrik; Fensholt, Rasmus

    2014-01-01

    Classifying surface cover types and analyzing changes are among the most common applications of remote sensing. One of the most basic classification tasks is to distinguish water bodies from dry land surfaces. Landsat imagery is among the most widely used sources of data in remote sensing of water resources; and although several techniques of surface water extraction using Landsat data are described in the literature, their application is constrained by low accuracy in various situations. Besides, with the use of techniques such as single-band thresholding and two-band indices, identifying an appropriate threshold yielding the highest possible accuracy is a challenging and time-consuming task, as threshold values vary with location and time of image acquisition. The purpose of this study was therefore to devise an index that consistently improves water extraction accuracy in the presence…
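
    The abstract contrasts the new index with single-band thresholding and two-band indices; a minimal sketch of the two-band baseline (using the classic NDWI formula as a stand-in, since the AWEI formula itself is not given here) looks like this:

    ```python
    import numpy as np

    def ndwi_water_mask(green, nir, threshold=0.0):
        """Classify water pixels with a two-band index.

        NDWI = (green - nir) / (green + nir); values above the threshold are
        labelled water. The paper's AWEI refines this idea so that a single
        threshold generalises better across scenes.
        """
        green = green.astype(np.float64)
        nir = nir.astype(np.float64)
        index = (green - nir) / (green + nir + 1e-12)  # avoid divide-by-zero
        return index > threshold

    # Toy 2x2 scene: high green/low NIR (water) vs. the reverse (land).
    green = np.array([[0.30, 0.05], [0.28, 0.04]])
    nir   = np.array([[0.05, 0.40], [0.06, 0.35]])
    print(ndwi_water_mask(green, nir))
    # [[ True False]
    #  [ True False]]
    ```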

  11. Apparatus for extracting petroleum

    Energy Technology Data Exchange (ETDEWEB)

    Coogan, J

    1921-01-18

    An apparatus for extracting petroleum from petroleum bearing sand or shale is described comprising a container for liquids, the container being divided into a plurality of compartments, an agitator mounted within the container and below the liquid level and having its forward end opening into one of the compartments, means for delivering sand or shale to the forward end of the agitator, means for subjecting the sand or shale to the action of a solvent for the petroleum while the sand or shale is being agitated and is submerged, the first-mentioned compartment being adapted to receive the extracted petroleum and means for removing the treated sand or shale from adjacent the rear end of the agitator.

  12. Solid phase extraction membrane

    Science.gov (United States)

    Carlson, Kurt C [Nashville, TN; Langer, Roger L [Hudson, WI

    2002-11-05

    A wet-laid, porous solid phase extraction sheet material that contains both active particles and binder and that possesses excellent wet strength is described. The binder is present in a relatively small amount while the particles are present in a relatively large amount. The sheet material is sufficiently strong and flexible so as to be pleatable so that, for example, it can be used in a cartridge device.

  13. Gold and uranium extraction

    International Nuclear Information System (INIS)

    James, G.S.; Davidson, R.J.

    1977-01-01

    A process for extracting gold and uranium from an ore containing them both comprising the steps of pulping the finely comminuted ore with a suitable cyanide solution at an alkaline pH, acidifying the pulp for uranium dissolution, adding carbon activated for gold recovery to the pulp at a suitable stage, separating the loaded activated carbon from the pulp, and recovering gold from the activated carbon and uranium from solution

  14. Solvent extraction columns

    International Nuclear Information System (INIS)

    Middleton, P.; Smith, J.R.

    1979-01-01

    In pulsed columns for use in solvent extraction processes, e.g. the reprocessing of nuclear fuel, the horizontal perforated plates inside the column are separated by interplate spacers manufactured from metallic neutron absorbing material. The spacer may be in the form of a spiral or concentric circles separated by radial limbs, or may be of egg-box construction. Suitable neutron absorbing materials include stainless steel containing boron or gadolinium, hafnium metal or alloys of hafnium. (UK)

  15. Uso de ontologias para a extração de informações em atos jurídicos em uma instituição pública Use of ontologies for the automatic information extraction in legal acts in a state institution

    Directory of Open Access Journals (Sweden)

    Bruno Ventorim Gabrielli

    2005-01-01

    Full Text Available With the expansion of the Internet and the general availability of information, there is a growing desire on the part of citizens and organizations to have at their disposal not only information concerning third parties, but also information about themselves or that directly affects them. This context includes norms in general and, more specifically, the acts issued by the public service. This work presents an automated tool, using automatic information extraction techniques, intended to extract the main information contained in the administrative acts of the Federal University of Viçosa (UFV), in order to facilitate the broad use of that information which, being public in nature, is of interest beyond the boundaries of the issuing body. This required the extraction and structuring of the information contained in the many electronic documents dispersed across the issuing offices. The tool makes use of an ontology built specifically for this purpose, enabling the generation of a knowledge base whose content reflects the mandatory fields necessary to characterize an administrative act. Juridical documents possess their own form of language that may be 'obscure' to the lay reader; however, juridical documents are in general rich in information that, if shared, can aid other professionals and justice-related organizations.
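
    A skeletal version of such field extraction can be sketched with plain regular expressions; the patterns, field names, and the sample act below are hypothetical, whereas the actual tool drives extraction from its ontology of mandatory fields.

    ```python
    import re

    # Hypothetical patterns standing in for the fields the UFV ontology marks
    # as mandatory for an administrative act (number, date, appointee).
    PATTERNS = {
        "act_number": re.compile(r"Act\s+No\.\s*(\d+/\d{4})"),
        "date": re.compile(r"dated\s+(\d{1,2} \w+ \d{4})"),
        "person": re.compile(r"appoints\s+([A-Z][\w']+(?: [A-Z][\w']+)+)"),
    }

    def extract_act_fields(text):
        """Fill a record with whichever mandatory fields the text contains."""
        record = {}
        for field, pattern in PATTERNS.items():
            match = pattern.search(text)
            if match:
                record[field] = match.group(1)
        return record

    act = "Act No. 123/2005, dated 14 March 2005, appoints Maria Souza as head."
    print(extract_act_fields(act))
    # {'act_number': '123/2005', 'date': '14 March 2005', 'person': 'Maria Souza'}
    ```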

  16. Influence of Extraction Parameters on Hydroalcohol Extracts of the ...

    African Journals Online (AJOL)

    The parameter that had the greatest influence on the extraction process was alcohol concentration … rules and processing steps [2]. As part … Table 1: Extractive batch numbers with the respective factors and levels studied in the factorial design.

  17. Plant extracts as radioprotectors

    Energy Technology Data Exchange (ETDEWEB)

    Baydoun, S; Al-Oudat, M [Atomic Energy Commission, Department of Radiation Protection and Nuclear Safety, Damascus (Syrian Arab Republic); Al-Achkar, W [Atomic Energy Commission, Department of Radiobiology and Health, Damascus (Syrian Arab Republic)

    1996-09-01

    Several studies show that the extracts of some plants, namely those containing vitamins or sulfide components, have radioprotective properties against the effects of ionizing radiation. In Syria, many of these plants are available. This experiment was conducted in order to test the ability of ten different plants to protect against radiation damage. These plants are Daucus carota L., Brassica oleracea L., Aloe vera L., Opuntia ficus-indica, Allium cepa L., Capsicum annuum L., Scilla maritima L., Allium sativum L., Rubus sanctus L. and Rosa canina L. Their effects on the protection of E. coli growth after exposure to the LD50 of gamma radiation (100 Gy) were investigated. Two concentrations of each plant extract were tested, both less than 1%. Our results indicate that the protection depends on the plant. The radioprotection factors ranged between 1.42 and 2.39. The best results were obtained using the extracts of Allium sativum L. (2.01), Opuntia ficus-indica (2.14) and Capsicum annuum L. (2.39). (author) 16 refs., 2 tabs., 4 figs.

  18. Plant extracts as radioprotectors

    International Nuclear Information System (INIS)

    Baydoun, S.; Al-Oudat, M.; Al-Achkar, W.

    1996-09-01

    Several studies show that the extracts of some plants, namely those containing vitamins or sulfide components, have radioprotective properties against the effects of ionizing radiation. In Syria, many of these plants are available. This experiment was conducted in order to test the ability of ten different plants to protect against radiation damage. These plants are Daucus carota L., Brassica oleracea L., Aloe vera L., Opuntia ficus-indica, Allium cepa L., Capsicum annuum L., Scilla maritima L., Allium sativum L., Rubus sanctus L. and Rosa canina L. Their effects on the protection of E. coli growth after exposure to the LD50 of gamma radiation (100 Gy) were investigated. Two concentrations of each plant extract were tested, both less than 1%. Our results indicate that the protection depends on the plant. The radioprotection factors ranged between 1.42 and 2.39. The best results were obtained using the extracts of Allium sativum L. (2.01), Opuntia ficus-indica (2.14) and Capsicum annuum L. (2.39). (author) 16 refs., 2 tabs., 4 figs.

  19. Plant extracts as radioprotectors

    International Nuclear Information System (INIS)

    Baydoun, S.; Al-Oudat, M.; Al-Achkar, W.

    1997-01-01

    Several studies show that the extracts of some plants, namely those containing vitamins or sulfide components, have radioprotective properties against the effects of ionizing radiation. In Syria, many of these plants are available. This experiment was conducted in order to test the ability of ten different plants to protect against radiation damage. These plants are Daucus carota L., Brassica oleracea L., Aloe vera L., Opuntia ficus-indica, Allium cepa L., Capsicum annuum L., Scilla maritima L., Allium sativum L., Rubus sanctus L. and Rosa canina L. Their effects on the protection of E. coli growth after exposure to the LD50 of gamma radiation (100 Gy) were investigated. Two concentrations of each plant extract were tested, both less than 1%. Our results indicate that the protection depends on the plant. The radioprotection factors ranged between 1.42 and 2.39. The best results were obtained using the extracts of Allium sativum L. (2.01), Opuntia ficus-indica (2.14) and Capsicum annuum L. (2.39). (author)

  20. Solvent extraction of zirconium

    International Nuclear Information System (INIS)

    Kim, S.S.; Yoon, J.H.

    1981-01-01

    The extraction of zirconium(IV) from an aqueous solution of constant ionic strength with versatic acid-10 dissolved in benzene was studied as a function of pH and of the concentrations of zirconium(IV) and the organic acid. The effects of sulphate and chloride ions on the extraction of zirconium(IV) were briefly examined. It was revealed that ZrOR₂·2RH is the predominant species of extracted zirconium(IV) in versatic acid-10. The chemical equation and the apparent equilibrium constant were determined as follows: (ZrO²⁺)aq + 2(R₂H₂)org = (ZrOR₂·2RH)org + 2(H⁺)aq, with K_Zr = [ZrOR₂·2RH]org·[H⁺]² / ([ZrO²⁺]aq·[R₂H₂]org²) = 3.3 × 10⁻⁷. The synergistic effects of TBP and D2EHPA were also studied. In the mixed solvent with 0.1 M TBP a synergistic effect was observed, while the mixed solvent with D2EHPA showed an antisynergistic effect. (Author)
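
    From the reconstructed equilibrium, a one-step mass-action rearrangement (added here for clarity; it is not part of the original abstract) makes the pH dependence of the distribution ratio D explicit:

    ```latex
    % distribution ratio D from the extraction equilibrium
    D \equiv \frac{[\mathrm{ZrOR_2\cdot 2RH}]_{\mathrm{org}}}{[\mathrm{ZrO^{2+}}]_{\mathrm{aq}}}
      = K_{\mathrm{Zr}}\,\frac{[\mathrm{R_2H_2}]_{\mathrm{org}}^{2}}{[\mathrm{H^+}]^{2}}
    \quad\Longrightarrow\quad
    \log D = \log K_{\mathrm{Zr}} + 2\log[\mathrm{R_2H_2}]_{\mathrm{org}} + 2\,\mathrm{pH}.
    ```

    The slope of 2 in log D versus pH reflects the two protons released per extracted zirconium ion.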

  1. EXTRACTION OF MONOAZO DYES BY HYDROPHILIC EXTRACTANTS FROM AQUEOUS SOLUTIONS

    Directory of Open Access Journals (Sweden)

    Y. I. Korenman

    2012-01-01

    Full Text Available The extraction of monoazo dyes E102, E122, E110, E124 and E129 from aqueous solutions with hydrophilic solvents (alcohols, esters, ketones) and polymers (poly-N-vinylamides, polyethylene glycol) was studied. The main regularities of extraction were established. The distribution coefficients and degrees of extraction of the dyes were estimated. The influence of the nature of the solvents and polymers on the extraction of the dyes from aqueous solutions was established.

  2. Extraction of ultrashort DNA molecules from herbarium specimens.

    Science.gov (United States)

    Gutaker, Rafal M; Reiter, Ella; Furtwängler, Anja; Schuenemann, Verena J; Burbano, Hernán A

    2017-02-01

    DNA extracted from herbarium specimens is highly fragmented; therefore, it is crucial to use extraction protocols that retrieve short DNA molecules. Improvements in extraction and DNA library preparation protocols for animal remains have allowed efficient retrieval of molecules shorter than 50 bp. Here, we applied these improvements to DNA extraction protocols for herbarium specimens and evaluated extraction performance by shotgun sequencing, which allows an accurate estimation of the distribution of DNA fragment lengths. Extraction with N-phenacylthiazolium bromide (PTB) buffer decreased median fragment length by 35% when compared with cetyl-trimethyl ammonium bromide (CTAB); modifying the binding conditions of DNA to silica allowed for an additional decrease of 10%. We did not observe a further decrease in length for single-stranded DNA (ssDNA) versus double-stranded DNA (dsDNA) library preparation methods. Our protocol enables the retrieval of ultrashort molecules from herbarium specimens, which will help to unlock the genetic information stored in herbaria.

  3. Understanding extractive bleed : wood extractives: distribution, properties, and classes

    Science.gov (United States)

    Edward Burke; Norm Slavik; Tony Bonura; Dennis Connelly; Tom Faris; Arnie Nebelsick; Brent Stuart; Sam Williams; Alex C. Wiedenhoeft

    2010-01-01

    Color, odor, and natural durability of heartwood are characteristics imparted by a class of chemicals in wood known collectively as extractives. Wood is converted by the tree from sapwood to heartwood by the deposition of extractives, typically many years after the growth ring undergoing this change was formed by the tree. Extractives are thus not a part of the wood…

  4. ACTIVITIES OF ACACIA NILOTICA EXTRACTS

    African Journals Online (AJOL)

    DR. AMINU

    Sensitivity tests of crude extract fractions of the plant extracts, using ethanol, chloroform, methanol, petroleum ether, water and ethyl acetate, were investigated on nine bacterial isolates. … These were obtained by punching the filter paper with…

  5. Dynamics of Agricultural Groundwater Extraction

    NARCIS (Netherlands)

    Hellegers, P.J.G.J.; Zilberman, D.; Ierland, van E.C.

    2001-01-01

    Agricultural shallow groundwater extraction can result in desiccation of neighbouring nature reserves and degradation of groundwater quality in the Netherlands, whereas both externalities are often not considered when agricultural groundwater extraction patterns are being determined. A model is…

  6. Language extraction from zinc sulfide

    Science.gov (United States)

    Varn, Dowman Parks

    2001-09-01

    Recent advances in the analysis of one-dimensional temporal and spatial series allow for detailed characterization of disorder and computation in physical systems. One such system that has defied theoretical understanding since its discovery in 1912 is polytypism. Polytypes are layered compounds, exhibiting crystallinity in two dimensions, yet having complicated stacking sequences in the third direction. They can show both ordered and disordered sequences, sometimes each in the same specimen. We demonstrate a method for extracting two-layer correlation information from ZnS diffraction patterns and employ a novel technique for epsilon-machine reconstruction. We solve a long-standing problem, that of determining structural information for disordered materials from their diffraction patterns, for this special class of disorder. Our solution offers the most complete possible statistical description of the disorder. Furthermore, from our reconstructed epsilon-machines we find the effective range of the interlayer interaction in these materials, as well as the configurational energy of both ordered and disordered specimens. Finally, we can determine the 'language' (in terms of the Chomsky hierarchy) these small rocks speak, and we find that regular languages are sufficient to describe them.

  7. Screening antimicrobial activity of various extracts of Urtica dioica.

    Science.gov (United States)

    Modarresi-Chahardehi, Amir; Ibrahim, Darah; Fariza-Sulaiman, Shaida; Mousavi, Leila

    2012-12-01

    … antimicrobial activity extracts from extraction method I (33 out of 152 crude extracts) and 6.82% from extraction method II (13 out of 190 crude extracts). However, crude extracts from method I exhibited better antimicrobial activity against the Gram-positive bacteria than against the Gram-negative bacteria. The positive results of screening medicinal plants for antibacterial activity constitute primary information for further phytochemical and pharmacological studies. Therefore, the extracts could be suitable as antimicrobial agents in the pharmaceutical and food industries.

  8. Screening antimicrobial activity of various extracts of Urtica dioica

    Directory of Open Access Journals (Sweden)

    Amir Modarresi-Chahardehi

    2012-12-01

    … antimicrobial activity extracts from extraction method I (33 out of 152 crude extracts) and 6.82% from extraction method II (13 out of 190 crude extracts). However, crude extracts from method I exhibited better antimicrobial activity against the Gram-positive bacteria than against the Gram-negative bacteria. The positive results of screening medicinal plants for antibacterial activity constitute primary information for further phytochemical and pharmacological studies. Therefore, the extracts could be suitable as antimicrobial agents in the pharmaceutical and food industries.

  9. Improvements in solvent extraction columns

    International Nuclear Information System (INIS)

    Aughwane, K.R.

    1987-01-01

    Solvent extraction columns are used in the reprocessing of irradiated nuclear fuel. For an effective reprocessing operation, a solvent extraction column is required which is capable of distributing the feed over most of the column. The patent describes improvements in solvent extraction columns which allow the feed to be distributed over a greater length of the column than was previously possible. (U.K.)

  10. AGRICULTURAL USES OF SEAWEEDS EXTRACTS

    Directory of Open Access Journals (Sweden)

    Monica Popescu

    2013-12-01

    Full Text Available Marine bioactive substances extracted from seaweed are currently used in food and animal feed, as raw materials in industry, and in therapeutic applications. Most of the products based on marine algae are extracted from the brown alga Ascophyllum nodosum. The use of seaweed extracts in agriculture is beneficial because it reduces the amount of chemical fertilizers required while obtaining organic yields.

  11. On extraction reagents for hydrometallurgy

    International Nuclear Information System (INIS)

    Zolotov, Yu.A.

    1975-01-01

    Fundamental requirements for extractants are considered. Ways of obtaining selective extractants are discussed, in particular on the basis of achievements in coordination chemistry. Attention is drawn to the expediency of studying, as potential extractants, flotation reagents, oil additives, pesticides, and accelerators of caoutchouc (rubber) vulcanization.

  12. DOCUMENT IMAGE REGISTRATION FOR IMPOSED LAYER EXTRACTION

    Directory of Open Access Journals (Sweden)

    Surabhi Narayan

    2017-02-01

    Full Text Available Extraction of filled-in information from document images in the presence of a template poses challenges due to geometrical distortion. A filled-in document image consists of a null background, a general-information foreground and a vital-information imposed layer. A template document image consists of a null background and a general-information foreground layer. In this paper a novel document image registration technique is proposed to extract the imposed layer from an input document image. A convex polygon is constructed around the content of the input and the template image using the convex hull. The vertices of the convex polygons of input and template are paired based on minimum Euclidean distance. Each vertex of the input convex polygon is subjected to transformation for the permutable combinations of rotation and scaling; translation is handled by a tight crop. For every transformation of the input vertices, the minimum Hausdorff distance (MHD) is computed. The minimum Hausdorff distance identifies the rotation and scaling values by which the input image should be transformed to align it to the template. Since transformation is an estimation process, the components in the input image do not overlay exactly on the components in the template; therefore a connected-component technique is applied to extract contour boxes at word level and identify partially overlapping components. Geometrical features such as density, area and degree of overlap are extracted and compared between partially overlapping components to identify and eliminate components common to the input image and the template image. The residue constitutes the imposed layer. Experimental results indicate the efficacy of the proposed model and its computational complexity. Experiments were conducted on a variety of filled-in forms, applications and bank cheques; data sets were generated as test sets for comparative analysis.
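
    The rotation-and-scaling search described above can be sketched compactly; the parameter grids and the use of SciPy's directed Hausdorff distance are illustrative assumptions, since the paper's exact search ranges and MHD implementation are not given in the abstract.

    ```python
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def best_alignment(input_pts, template_pts,
                       angles_deg=range(-10, 11, 2),
                       scales=(0.9, 1.0, 1.1)):
        """Grid-search rotation and scale minimising the Hausdorff distance.

        input_pts, template_pts: (n, 2) arrays of convex-hull vertices.
        Returns (angle, scale, distance) of the best transformation.
        """
        best = (0.0, 1.0, np.inf)
        centred = input_pts - input_pts.mean(axis=0)   # tight crop handles translation
        template = template_pts - template_pts.mean(axis=0)
        for angle in angles_deg:
            theta = np.deg2rad(angle)
            rot = np.array([[np.cos(theta), -np.sin(theta)],
                            [np.sin(theta),  np.cos(theta)]])
            for scale in scales:
                moved = scale * centred @ rot.T
                # Symmetric Hausdorff distance between the two vertex sets.
                d = max(directed_hausdorff(moved, template)[0],
                        directed_hausdorff(template, moved)[0])
                if d < best[2]:
                    best = (angle, scale, d)
        return best
    ```

    The returned (angle, scale) pair plays the role of the transformation that MHD selects for aligning the input image to the template.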

  13. Information management

    Science.gov (United States)

    Ricks, Wendell; Corker, Kevin

    1990-01-01

    Research on Primary Flight Display (PFD) information management and cockpit display of information management is presented in viewgraph form. Topics covered include the information management problem in the cockpit, information management burdens, the key characteristics of an information manager, the interface management system handling the flow of information and the dialogs between the system and the pilot, and the overall system architecture.

  14. Resinous constituent extracting process

    Energy Technology Data Exchange (ETDEWEB)

    Sayer, W F

    1947-10-07

    The method of recovering oily constituents from coal or oil shale comprises saturating the coal or oil shale in a sealed vessel with an organic solvent having a boiling point at atmospheric pressure not exceeding 220°C, elevating the temperature within the vessel to a temperature below the cracking temperature of the constituents, and maintaining the pressure within the vessel below 51 pounds, to extract the oily material from the coal or oil shale and subsequently separate the solvent from the oily material.

  15. Extraction of uranium from seawater

    International Nuclear Information System (INIS)

    Kanno, M.

    1977-01-01

    Nuclear power generation is thought to be very important in Japan; however, known domestic uranium resources in Japan are very scarce. Extraction of uranium from seawater has therefore been pursued since 1962, beginning at the Japan Tobacco and Salt Public Corporation. A number of results have also been obtained by Kyoto University, the Shikoku Government Industrial Research Institute, Tokyo University and others. In order to investigate the technical and economic feasibility of extracting uranium and other resources from seawater, a research program was started in fiscal 1975, sponsored by the Ministry of International Trade and Industry with a budget of about $440,000. In this program, the conceptual design of two types of model plants, the 'column type' and the 'tidal type', was drawn up on design bases set with available information. It was found that several problems await solution, but none are technically fatal. Adsorption tests were carried out with more than eleven types of adsorbents, including titanium hydroxide, and it was found that titanium hydroxide made from titanyl sulphate and urea had the largest uranium adsorption capacity among them. Elution experiments were performed only with ammonium carbonate; the efficiency at 60°C was three times higher than that at 20°C. A few long-term column operations were conducted, mainly with granulated titanium hydroxide adsorbent, for 15-60 days. The maximum yield of uranium throughout the adsorption and elution operations was over 20%, and the maximum concentration of uranium in the eluate was 7 ppm.

  16. Cross domains Arabic named entity recognition system

    Science.gov (United States)

    Al-Ahmari, S. Saad; Abdullatif Al-Johar, B.

    2016-07-01

    Named Entity Recognition (NER) plays an important role in many Natural Language Processing (NLP) applications such as Information Extraction (IE), Question Answering (QA), text clustering, text summarization and word sense disambiguation. This paper presents the development and implementation of a domain-independent system to recognize three types of Arabic named entities. The system works on the basis of a set of domain-independent grammar rules along with an Arabic part-of-speech tagger, in addition to gazetteers and lists of trigger words. The experimental results show that the system performed as well as other systems, with better results in some cases on cross-domain corpora.
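
    The rule-plus-gazetteer design can be illustrated with a minimal sketch; the gazetteer entries, trigger words, and example sentence are invented stand-ins for the system's Arabic resources.

    ```python
    # Minimal sketch of gazetteer + trigger-word NER; the word lists are
    # invented stand-ins for the system's Arabic gazetteers and triggers.
    GAZETTEER = {"Riyadh": "LOCATION", "Jeddah": "LOCATION", "Aramco": "ORGANIZATION"}
    TRIGGERS = {"Mr.": "PERSON", "Dr.": "PERSON", "University": "ORGANIZATION"}

    def tag_entities(tokens):
        """Assign entity types from exact gazetteer hits and preceding triggers."""
        tags = ["O"] * len(tokens)
        for i, token in enumerate(tokens):
            if token in GAZETTEER:                      # gazetteer rule
                tags[i] = GAZETTEER[token]
            elif i > 0 and tokens[i - 1] in TRIGGERS:   # trigger-word rule
                tags[i] = TRIGGERS[tokens[i - 1]]
        return list(zip(tokens, tags))

    print(tag_entities("Dr. Salim visited Riyadh".split()))
    # [('Dr.', 'O'), ('Salim', 'PERSON'), ('visited', 'O'), ('Riyadh', 'LOCATION')]
    ```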

  17. Microbiological metal extraction processes

    International Nuclear Information System (INIS)

    Torma, A.E.

    1991-01-01

    Application of biotechnological principles in mineral processing, especially in hydrometallurgy, has created new opportunities and challenges for these industries. During the 1950s and 60s, mining wastes and unused complex mineral resources were successfully treated in bacterially assisted heap and dump leaching processes for copper and uranium. The interest in bioleaching processes is a consequence of the economic advantages associated with these techniques. For example, copper can be produced from mining wastes for about 1/3 to 1/2 of the cost of copper production by the conventional smelting process from high-grade sulfide concentrates. The economic viability of bioleaching technology led to its worldwide acceptance by the extractive industries. During the 1970s this technology grew into a more structured discipline called 'biohydrometallurgy'. Currently, bioleaching techniques are ready to be used, in addition to copper and uranium, for the extraction of cobalt, nickel, zinc and precious metals, and for the desulfurization of high-sulfur pyritic coals. As a developing technology, the microbiological leaching of the less common and rare metals has yet to reach commercial maturity; however, research in this area is very active. In addition, in the foreseeable future biotechnological methods may also be applied to the treatment of high-grade ores and mineral concentrates using adapted native and/or genetically engineered microorganisms. (author)

  18. Carcinogenicity of soil extracts

    Energy Technology Data Exchange (ETDEWEB)

    Shcherbak, N P

    1970-01-01

    A total of 270 3-month-old mice, hybrids of the C57BL and CBA strains, which are highly susceptible to carcinogens, were painted on the skin (2-3 administrations/week) with 3-4 drops of (1) a concentrated benzene extract of soil taken near a petroleum refinery with a 3,4-benzpyrene (BP) content of 0.22%; (2) a 0.22% solution of pure BP in benzene; (3) a concentrated benzene extract of soil taken from an old residential area of Moscow (BP content 0.0004%); (4) a 0.0004% BP solution in benzene; and (5) pure benzene. Only mice in the first 2 groups developed tumors. In group (1), 8 mice had papillomas, 46 had skin cancer, 1 had a sarcoma and 2 had plasmocytomas. In group (2) all 60 animals had skin cancer. Lung metastases were present at autopsy in 5 mice in group (1) and in 10 mice in group (2); in some cases, these tumors were multiple. Lymph node metastases were found in 6 mice in group (1) and in 10 mice in group (2). Tumors developed more slowly in group (1) than in group (2).

  19. Extractive text summarization system to aid data extraction from full text in systematic review development.

    Science.gov (United States)

    Bui, Duy Duc An; Del Fiol, Guilherme; Hurdle, John F; Jonnalagadda, Siddhartha

    2016-12-01

    Extracting data from publication reports is a standard process in systematic review (SR) development. However, the data extraction process still relies too much on manual effort which is slow, costly, and subject to human error. In this study, we developed a text summarization system aimed at enhancing productivity and reducing errors in the traditional data extraction process. We developed a computer system that used machine learning and natural language processing approaches to automatically generate summaries of full-text scientific publications. The summaries at the sentence and fragment levels were evaluated in finding common clinical SR data elements such as sample size, group size, and PICO values. We compared the computer-generated summaries with human written summaries (title and abstract) in terms of the presence of necessary information for the data extraction as presented in the Cochrane review's study characteristics tables. At the sentence level, the computer-generated summaries covered more information than humans do for systematic reviews (recall 91.2% vs. 83.8%, p<0.001). They also had a better density of relevant sentences (precision 59% vs. 39%, p<0.001). At the fragment level, the ensemble approach combining rule-based, concept mapping, and dictionary-based methods performed better than individual methods alone, achieving an 84.7% F-measure. Computer-generated summaries are potential alternative information sources for data extraction in systematic review development. Machine learning and natural language processing are promising approaches to the development of such an extractive summarization system. Copyright © 2016 Elsevier Inc. All rights reserved.
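
    A toy version of the rule-based component of such a system can be sketched as a cue-pattern sentence ranker; the patterns and the sample text below are invented, and the actual system ensembles rules with concept mapping and dictionary methods.

    ```python
    import re

    # Hypothetical cue patterns for systematic-review data elements; the
    # paper's ensemble layers concept mapping and dictionaries on top of rules.
    CUES = [
        (re.compile(r"\b(\d+)\s+(patients|participants|subjects)\b", re.I), "sample size"),
        (re.compile(r"\brandomi[sz]ed\b", re.I), "design"),
        (re.compile(r"\b(placebo|control group)\b", re.I), "comparator"),
    ]

    def rank_sentences(text, top_k=2):
        """Score sentences by how many data-element cues they contain."""
        sentences = re.split(r"(?<=[.!?])\s+", text)
        scored = []
        for sentence in sentences:
            hits = [label for pattern, label in CUES if pattern.search(sentence)]
            scored.append((len(hits), hits, sentence))
        scored.sort(key=lambda item: item[0], reverse=True)
        return scored[:top_k]

    report = ("We randomized 120 patients to treatment or placebo. "
              "Follow-up lasted two years. Outcomes improved significantly.")
    for score, hits, sentence in rank_sentences(report):
        print(score, hits, sentence)
    ```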

  20. How extractive industries affect health: Political economy underpinnings and pathways.

    Science.gov (United States)

    Schrecker, Ted; Birn, Anne-Emanuelle; Aguilera, Mariajosé

    2018-06-07

    A systematic and theoretically informed analysis of how extractive industries affect health outcomes and health inequities is overdue. Informed by the work of Saskia Sassen on "logics of extraction," we adopt an expansive definition of extractive industries to include (for example) large-scale foreign acquisitions of agricultural land for export production. To ground our analysis in concrete place-based evidence, we begin with a brief review of four case examples of major extractive activities. We then analyze the political economy of extractivism, focusing on the societal structures, processes, and relationships of power that drive and enable extraction. Next, we examine how this global order shapes and interacts with politics, institutions, and policies at the state/national level contextualizing extractive activity. Having provided necessary context, we posit a set of pathways that link the global political economy and national politics and institutional practices surrounding extraction to health outcomes and their distribution. These pathways involve both direct health effects, such as toxic work and environmental exposures and assassination of activists, and indirect effects, including sustained impoverishment, water insecurity, and stress-related ailments. We conclude with some reflections on the need for future research on the health and health equity implications of the global extractive order. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.

  1. Supercritical fluid extraction of hops

    Directory of Open Access Journals (Sweden)

    ZORAN ZEKOVIC

    2007-01-01

    Full Text Available Five cultivars of hop were extracted by supercritical fluid extraction using carbon dioxide (SFE–CO2) as the extractant. The extraction (50 g of hop sample, using a CO2 flow rate of 97.725 L/h) was done in two steps: 1. extraction at 150 bar and 40°C for 2.5 h (giving the sample of series A) and, after that, the same hop sample was extracted in the second step: 2. extraction at 300 bar and 40°C for 2.5 h (giving the sample of series B). The Magnum cultivar was chosen for the investigation of the extraction kinetics. For the qualitative and quantitative analysis of the obtained hop extracts, the GC-MS method was used. Two of the four most common compounds of hop aroma (α-humulene and β-caryophyllene) were detected in the samples of series A. In addition, isomerized α-acids and a high content of β-acids were detected. The α-acid content in the samples of series B was highest in the extract of the Magnum cultivar (a bitter variety of hop). The low content of α-acids in all the other hop samples resulted in extracts with α-acid contents below the prescribed level.

  2. 78 FR 64905 - Carriage of Conditionally Permitted Shale Gas Extraction Waste Water in Bulk

    Science.gov (United States)

    2013-10-30

    ...-ZA31 Carriage of Conditionally Permitted Shale Gas Extraction Waste Water in Bulk AGENCY: Coast Guard... availability of a proposed policy letter concerning the carriage of shale gas extraction waste water in bulk... transport shale gas extraction waste water in bulk. The policy letter also defines the information the Coast...

  3. Biologically active extracts with kidney affections applications

    International Nuclear Information System (INIS)

    Pascu, Mihaela; Pascu, Daniela-Elena; Cozea, Andreea; Bunaciu, Andrei A.; Miron, Alexandra Raluca; Nechifor, Cristina Aurelia

    2015-01-01

    Highlights: • The paper highlighted the compositional similarities and differences between the three extracts of bilberry and cranberry fruit derived from the same Ericaceae family. • An antioxidant activity method, different cellulose membranes, a Whatman filter and a Langmuir kinetic model were used. • Arbutoside presence in all three extracts of bilberry and cranberry fruit explains their use in urinary infections, cystitis and colibacillosis. • Following these research studies, it was established that the fruits of bilberry and cranberry (fruit and leaves) significantly reduce the risk of urinary infections, and work effectively to protect against free radicals and inflammation. - Abstract: This paper aims to select plant materials rich in bioflavonoid compounds, derived from herbs known for their performance in the prevention and therapy of renal diseases, namely kidney stones and urinary infections (renal lithiasis, nephritis, urethritis, cystitis, etc.). It presents a comparative study of the composition of medicinal plant extracts belonging to the Ericaceae family: cranberry (fruit and leaves), Vaccinium vitis-idaea L., and bilberry (fruit), Vaccinium myrtillus L. Concentrated extracts obtained from the medicinal plants used in this work were analyzed from structural, morphological and compositional points of view using different techniques: chromatographic methods (HPLC), scanning electron microscopy, infrared and UV spectrophotometry, and a kinetic model. Liquid chromatography was able to identify the compound specific to the Ericaceae family present in all three extracts, arbutoside, as well as specific components of each species, mostly from the class of polyphenols. The identification and quantitative determination of the active ingredients of these extracts can give information related to their therapeutic effects.

  4. Biologically active extracts with kidney affections applications

    Energy Technology Data Exchange (ETDEWEB)

    Pascu, Mihaela, E-mail: mihhaela_neagu@yahoo.com [SC HOFIGAL S.A., Analytical Research Department, 2 Intr. Serelor, Bucharest-4 042124 (Romania); Politehnica University of Bucharest, Faculty of Applied Chemistry and Material Science, 1-5 Polizu Street, 11061 Bucharest (Romania); Pascu, Daniela-Elena [Politehnica University of Bucharest, Faculty of Applied Chemistry and Material Science, 1-5 Polizu Street, 11061 Bucharest (Romania); Cozea, Andreea [SC HOFIGAL S.A., Analytical Research Department, 2 Intr. Serelor, Bucharest-4 042124 (Romania); Transilvania University of Brasov, Faculty of Food and Tourism, 148 Castle Street, 500036 Brasov (Romania); Bunaciu, Andrei A. [SCIENT – Research Center for Instrumental Analysis, S.C. CROMATEC-PLUS S.R.L., 18 Sos. Cotroceni, Bucharest 060114 (Romania); Miron, Alexandra Raluca; Nechifor, Cristina Aurelia [Politehnica University of Bucharest, Faculty of Applied Chemistry and Material Science, 1-5 Polizu Street, 11061 Bucharest (Romania)

    2015-12-15

    Highlights: • The paper highlighted the compositional similarities and differences between the three extracts of bilberry and cranberry fruit derived from the same Ericaceae family. • An antioxidant activity method, different cellulose membranes, a Whatman filter and a Langmuir kinetic model were used. • Arbutoside presence in all three extracts of bilberry and cranberry fruit explains their use in urinary infections, cystitis and colibacillosis. • Following these research studies, it was established that the fruits of bilberry and cranberry (fruit and leaves) significantly reduce the risk of urinary infections, and work effectively to protect against free radicals and inflammation. - Abstract: This paper aims to select plant materials rich in bioflavonoid compounds, derived from herbs known for their performance in the prevention and therapy of renal diseases, namely kidney stones and urinary infections (renal lithiasis, nephritis, urethritis, cystitis, etc.). It presents a comparative study of the composition of medicinal plant extracts belonging to the Ericaceae family: cranberry (fruit and leaves), Vaccinium vitis-idaea L., and bilberry (fruit), Vaccinium myrtillus L. Concentrated extracts obtained from the medicinal plants used in this work were analyzed from structural, morphological and compositional points of view using different techniques: chromatographic methods (HPLC), scanning electron microscopy, infrared and UV spectrophotometry, and a kinetic model. Liquid chromatography was able to identify the compound specific to the Ericaceae family present in all three extracts, arbutoside, as well as specific components of each species, mostly from the class of polyphenols. The identification and quantitative determination of the active ingredients of these extracts can give information related to their therapeutic effects.

  5. Automatic River Network Extraction from LIDAR Data

    Science.gov (United States)

    Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.

    2016-06-01

    National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) within the hydrography theme. The goal is to get an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: generation of hydrological terrain models with 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network); finally, production was launched. The key points of this work have been managing a big data environment of more than 160,000 LiDAR data files, together with the infrastructure to store (up to 40 Tb between results and intermediate files) and process them, using local virtualization and the Amazon Web Service (AWS), which allowed this automatic production to be completed within 6 months; software stability (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and human resources management have also been important. The result of this production has been an accurate automatic river network extraction for the whole country, with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.

  6. AUTOMATIC RIVER NETWORK EXTRACTION FROM LIDAR DATA

    Directory of Open Access Journals (Sweden)

    E. N. Maderal

    2016-06-01

    Full Text Available National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) within the hydrography theme. The goal is to get an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: generation of hydrological terrain models with 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network); finally, production was launched. The key points of this work have been managing a big data environment of more than 160,000 LiDAR data files, together with the infrastructure to store (up to 40 Tb between results and intermediate files) and process them, using local virtualization and the Amazon Web Service (AWS), which allowed this automatic production to be completed within 6 months; software stability (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and human resources management have also been important. The result of this production has been an accurate automatic river network extraction for the whole country, with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.
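
    The 'flow accumulation river network' criterion mentioned in both versions of this record is typically computed with a D8-style pass over the terrain model; the sketch below is a textbook toy on a tiny grid, not IGN-ES's production pipeline.

    ```python
    import numpy as np

    def d8_flow_accumulation(dem):
        """Toy D8 flow accumulation over a (rows, cols) elevation grid.

        Each cell sends its accumulated area to its steepest downslope
        neighbour; cells are processed from highest to lowest so every
        upstream contribution is ready when a cell is visited.
        """
        rows, cols = dem.shape
        acc = np.ones_like(dem, dtype=np.float64)  # each cell contributes itself
        order = np.column_stack(
            np.unravel_index(np.argsort(dem, axis=None)[::-1], dem.shape))
        neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                      (0, 1), (1, -1), (1, 0), (1, 1)]
        for r, c in order:
            best, target = 0.0, None
            for dr, dc in neighbours:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    drop = (dem[r, c] - dem[nr, nc]) / np.hypot(dr, dc)
                    if drop > best:
                        best, target = drop, (nr, nc)
            if target is not None:          # flat cells and pits keep their water
                acc[target] += acc[r, c]
        return acc

    dem = np.array([[5., 4., 3.],
                    [4., 3., 2.],
                    [3., 2., 1.]])
    print(d8_flow_accumulation(dem))  # accumulation peaks at the lowest corner
    ```

    Thresholding the accumulation grid yields candidate channels, which a production system would then reconcile with the topographic network criterion.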

  7. Information Crisis

    CERN Document Server

    Losavio, Michael

    2012-01-01

    Information Crisis discusses the scope and types of information available online and teaches readers how to critically assess it and analyze potentially dangerous information, especially when teachers, editors, or other information gatekeepers are not available to assess the information for them. Chapters and topics include: the Internet as an information tool; critical analysis; legal issues, traps, and tricks; protecting personal safety and identity; and types of online information.

  8. A Validation of Parafoveal Semantic Information Extraction in Reading Chinese

    Science.gov (United States)

    Zhou, Wei; Kliegl, Reinhold; Yan, Ming

    2013-01-01

    Parafoveal semantic processing has recently been well documented in reading Chinese sentences, presumably because of language-specific features. However, because of a large variation of fixation landing positions on pretarget words, some preview words actually were located in foveal vision when readers' eyes landed close to the end of the…

  9. Geographical Information Extraction With Remote Sensing. Part III - Appendices.

    Science.gov (United States)

    1998-08-01

    images available. [Appendix excerpt from TNO report FEL-98-A077, Appendix B: FACC feature-code definitions, e.g. DA020 barren ground (kale grond), defined as a land surface void of vegetation or other specific cover; other recoverable codes include BI020 dam, BH080 lake/pond, BH090 inundation area and EA010 cropland, together with attribute categories such as usage context (private, military, agricultural, etc.; see FACC) and VEG vegetation characteristics (type of plant or plantings).]

  10. Extracting Information Based on Partial or Complete Network Data

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 … Depth- and breadth-first search network traversal do not perform well overall … the network was created using Python's preferential attachment model … [13] J. Boland, T. Haynes, and L. Lawson, "Domination from a distance," Congr. …

  11. Source-specific Informative Prior for i-Vector Extraction

    DEFF Research Database (Denmark)

    Shepstone, Sven Ewan; Lee, Kong Aik; Li, Haizhou

    2015-01-01

    An i-vector is a low-dimensional fixed-length representation of a variable-length speech utterance, and is defined as the posterior mean of a latent variable conditioned on the observed feature sequence of an utterance. The assumption is that the prior for the latent variable is non…

  12. Information extraction from topographic map using colour and shape ...

    Indian Academy of Sciences (India)

    1095–1117. © Indian Academy of Sciences … Shape analysis methods can be classified according to many different criteria. The first criterion … [fragment of Eq. (6), in which the quantities compared are the discrete Fourier descriptors of the query image and the original image, respectively].

  13. Extracting Information from the Gravitational Redshift of Compact ...

    Indian Academy of Sciences (India)

    Abstract. Essential macroscopic internal properties of compact objects … angular velocity regardless of the value of Z. Similar reasoning applies to any other quantity … even allows one to sharpen this concept, to be discussed in the next section.

  14. Information Extraction from Large-Multi-Layer Social Networks

    Science.gov (United States)

    2015-08-06

    …mization [4]. Methods that fall into this category include spectral algorithms, modularity methods, and methods that rely on statistical inference …

  15. Biomedical Information Extraction: Mining Disease Associated Genes from Literature

    Science.gov (United States)

    Huang, Zhong

    2014-01-01

    Disease associated gene discovery is a critical step to realize the future of personalized medicine. However empirical and clinical validation of disease associated genes are time consuming and expensive. In silico discovery of disease associated genes from literature is therefore becoming the first essential step for biomarker discovery to…

  16. Acquiring Information from Wider Scope to Improve Event Extraction

    Science.gov (United States)

    2012-05-01

    …film". 2.3.2 Argument Constraint: Even if the scenario is well detected, there is no guarantee of identifying the event correctly. Think about words … from 2003 newswire, with the same genre and time period as the ACE 2005 data, to avoid possible influences of variations in genre or time period on the…

  17. Extracting Information Based on Partial or Complete Network Data

    African Journals Online (AJOL)

    pc

    2018-03-05

    Mar 5, 2018 … The k-Vertex Maximum Domination, introduced by Miyano and Ono in [4] … the network was created using Python's random network model ER(n, m) to … Proceedings of the Seventeenth Computing: The Australasian Theory Symposium …

  18. Extracting Fuel Efficiency Information From the Car Dashboard ...

    Indian Academy of Sciences (India)

    for one particular type of trip, say, city driving, long driving, hilly driving, congested … to refer to the distance traveled by the car divided by the volume of the fuel used to cover … From now on, we must restrict our attention to interpolation curves.

  19. On Robust Information Extraction from High-Dimensional Data

    Czech Academy of Sciences Publication Activity Database

    Kalina, Jan

    2014-01-01

    Roč. 9, č. 1 (2014), s. 131-144 ISSN 1452-4864 Grant - others:GA ČR(CZ) GA13-01930S Institutional support: RVO:67985807 Keywords : data mining * high-dimensional data * robust econometrics * outliers * machine learning Subject RIV: IN - Informatics, Computer Science

  20. Unclassified information on tritium extraction and purification technology: attachment 1

    International Nuclear Information System (INIS)

    McNorrill, P.L.

    1976-01-01

    Several tritium recovery and purification techniques developed at non-production sites are described in the unclassified and declassified literature. Heating of irradiated Li-Al alloy under vacuum to release tritium is described in declassified reports of Argonne National Laboratory. Use of palladium membranes to separate hydrogen isotopes from other gases is described by Argonne, KAPL, and others. Declassified KAPL reports describe tritium sorption on palladium beds and suggest fractional absorption as a means of isotope separation. A thermal diffusion column for tritium enrichment is described in a Canadian report. Mound Laboratory reports describe theoretical and experimental studies of thermal diffusion columns. Oak Ridge reports tabulate 'shape factors' for thermal diffusion columns. Unclassified journals contain many articles on thermal diffusion theory, experiments, and separation of gas mixtures by thermal diffusion columns; much of this material can be readily extended to the separation of hydrogen-tritium mixtures. Cryogenic distillation for tritium recovery is described in the Mound Laboratory reports. Process equipment such as pumps, valves, Hopcalite beds, and uranium beds is described in reports by ANL, KAPL, and MLM, and in WASH-1269, Tritium Control Technology

  1. Real-Time Information Extraction from Big Data

    Science.gov (United States)

    2015-10-01

    Introduction: Enormous amounts of data are being generated by a large number of sensors and devices (the Internet of Things, IoT), and this data is… brief summary in Section 7. Data Access Patterns for Current and Big Data Systems: Many current solution architectures rely on accessing data resident… by highly skilled human experts based on their intuition and vast knowledge. We do not have, and cannot produce, enough experts to fill our…

  2. Textual information access statistical models

    CERN Document Server

    Gaussier, Eric

    2013-01-01

    This book presents statistical models that have recently been developed within several research communities to access information contained in text collections. The problems considered are linked to applications aiming at facilitating information access: information extraction and retrieval; text classification and clustering; opinion mining; and comprehension aids (automatic summarization, machine translation, visualization). In order to give the reader as complete a description as possible, the focus is placed on the probability models used in the applications.

  3. Information science team

    Science.gov (United States)

    Billingsley, F.

    1982-01-01

    Concerns are expressed about the data handling aspects of system design and about enabling technology for data handling and data analysis. The status, contributing factors, critical issues, and recommendations for investigation are listed for data handling, rectification and registration, and information extraction. Potential support for individual P.I. research tasks, systematic data system design, and system operation is discussed. The need for an airborne spectrometer-class instrument for fundamental research at high spectral and spatial resolution is indicated. Geographic information system formatting and labelling techniques, very large scale integration, and methods for providing multitype data sets must also be developed.

  4. Steroid hormones in environmental matrices: extraction method comparison.

    Science.gov (United States)

    Andaluri, Gangadhar; Suri, Rominder P S; Graham, Kendon

    2017-11-09

    The U.S. Environmental Protection Agency (EPA) has developed methods for the analysis of steroid hormones in water, soil, sediment, and municipal biosolids by HRGC/HRMS (EPA Method 1698). Following the guidelines provided in US-EPA Method 1698, the extraction methods were validated with reagent water and applied to municipal wastewater, surface water, and municipal biosolids, using GC/MS/MS for the analysis of the nine most commonly detected steroid hormones. This is the first reported comparison of the separatory funnel extraction (SFE), continuous liquid-liquid extraction (CLLE), and Soxhlet extraction methods developed by the U.S. EPA. Furthermore, a solid phase extraction (SPE) method was also developed in-house for the extraction of steroid hormones from aquatic environmental samples. This study provides valuable information regarding the robustness of the different extraction methods. Statistical analysis of the data showed that SPE-based methods provided better recovery efficiencies and lower variability for the steroid hormones, followed by SFE. The analytical methods developed in-house for the extraction of biosolids showed a wide recovery range; however, the variability was low (≤ 7% RSD). Soxhlet extraction and CLLE are lengthy procedures and have been shown to provide highly variable recovery efficiencies. The results of this study offer guidance toward better sample preparation strategies in analytical methods for steroid hormone analysis, and SPE adds to the choices in environmental sample analysis.
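
    The comparison of recovery efficiency and relative standard deviation across methods boils down to a small calculation; the replicate values below are invented for illustration.

    ```python
    import statistics

    def recovery_stats(measured, spiked):
        """Per-method recovery efficiencies (%) and their relative std. dev."""
        recoveries = [100.0 * m / spiked for m in measured]
        mean = statistics.mean(recoveries)
        rsd = 100.0 * statistics.stdev(recoveries) / mean  # % RSD
        return mean, rsd

    # Invented replicate measurements (ng) for a 100 ng spike, one list per method.
    methods = {
        "SPE":  [92.0, 95.5, 90.8, 94.1],
        "SFE":  [85.2, 88.0, 79.5, 90.3],
        "CLLE": [70.1, 95.0, 60.2, 88.7],
    }
    for name, values in methods.items():
        mean, rsd = recovery_stats(values, spiked=100.0)
        print(f"{name}: recovery {mean:.1f}%, RSD {rsd:.1f}%")
    ```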

  5. Oxygen Extraction from Minerals

    Science.gov (United States)

    Muscatello, Tony

    2017-01-01

    Oxygen, whether used as part of rocket bipropellant or for astronaut life support, is a key consumable for space exploration and commercialization. In Situ Resource Utilization (ISRU) has been proposed many times as a method for making space exploration more cost effective and sustainable. On planetary and asteroid surfaces the presence of minerals in the regolith that contain oxygen is very common, making them a potential oxygen resource. The majority of research and development for oxygen extraction from minerals has been for lunar regolith, although this work would generally be applicable to regolith at other locations in space. This presentation will briefly survey the major methods investigated for oxygen extraction from regolith with a focus on the current status of those methods and possible future development pathways. The major oxygen production methods are (1) extraction from lunar ilmenite (FeTiO3) with either hydrogen or carbon monoxide, (2) carbothermal reduction of iron oxides and silicates with methane, and (3) molten regolith electrolysis (MRE) of silicates. Methods (1) and (2) have also been investigated in a two-step process using CO reduction and carbon deposition followed by carbothermal reduction. All three processes have byproducts that could also be used as resources. Hydrogen or carbon monoxide reduction produces iron metal in small amounts that could potentially be used as construction material. Carbothermal reduction also makes iron metal along with silicon metal and a glass with possible applications. MRE produces iron, silicon, aluminum, titanium, and glass, with higher silicon yields than carbothermal reduction. On Mars and possibly on some moons and asteroids, water is present in the form of mineral hydrates, hydroxyl (-OH) groups on minerals, and/or water adsorbed on mineral surfaces. Heating of the minerals can liberate the water, which can be electrolyzed to provide a source of oxygen as well. The chemistry of these processes, some key...

  6. Lithium extractive metallurgy

    International Nuclear Information System (INIS)

    Josa, J.M.; Merino, J.L.

    1985-01-01

    The Nuclear Fusion National Program depends on lithium supplies. Extractive metallurgy development is subordinate to the localization and evaluation of ore resources. Nowadays lithium raw materials usable with present technology consist of pegmatite ore and brine. The Instituto Geologico y Minero Espanol (IGME) found lepidolite, amblygonite and spodumene in pegmatite ores in different areas of Spain. However, an evaluation of resources has not been made. Different Spanish surface and underground brines are to be sampled and analyzed. If none of these contain significant levels of lithium, the Junta de Energia Nuclear (JEN) will try to reach an agreement with IGME for ENUSA (Empresa Nacional del Uranio, S.A.) to explore pegmatite-ore bodies in different locations. Different work stages (laboratory tests, pilot plant tests and a commercial plant) are foreseen if deposits are found. (author)

  7. Influence of Extraction Parameters on Hydroalcohol Extracts of the ...

    African Journals Online (AJOL)

    ... the influence of alcohol concentration (50, 70 and 90 % v/v), extraction time (2, 6 and 10 h), and particle size of the herbal drug (0.25, 0.5 and 1.0 mm) on the pH, dry residue and myrsinoic acid B (MAB) content of hydroalcoholic extracts by high performance liquid chromatography (HPLC) method. Results: For the extracts, ...

  8. Supercritical carbon dioxide hop extraction

    Directory of Open Access Journals (Sweden)

    Pfaf-Šovljanski Ivana I.

    2005-01-01

    Full Text Available The hop of Magnum cultivar was extracted using supercritical carbon dioxide (SFE-CO2) as extractant. Extraction was carried out in two steps: the first at 150 bar and 40°C for 2.5 h (Extract A), and the second being extraction of the same hop sample at 300 bar and 40°C for 2.5 h (Extract B). The extraction kinetics of the hop-SFE-CO2 system was investigated. Two of the four most common compounds of hop aroma (α-humulene and β-caryophyllene) were detected in Extract A. Isomerised α-acids and β-acids were detected too. The α-acid content in Extract B was high (meaning Magnum is a bitter variety of hop). Mathematical modeling of the Magnum cultivar extraction results was performed using an empirical model, a characteristic-time model, and a simple single-sphere model. The characteristic-time model equations fitted the experimental results best; the empirical model equation fitted the results well, while the simple single-sphere model equation approximated the results poorly.

  9. Characterization of an Antimicrobial Extract from Elaeagnus angustifolia

    Directory of Open Access Journals (Sweden)

    Mohammad Hossein Dehghan

    2014-07-01

    Full Text Available Background: According to ethnobotanical data, Elaeagnus angustifolia fruit has wound healing activity, anti-inflammatory effects and antifebrile properties. Objectives: This study was performed because, to the best of our knowledge, there has been no scientific report on the characterization of the antimicrobial effect of E. angustifolia extract. Materials and Methods: An aqueous extract of Elaeagnus angustifolia was prepared and antimicrobial activity tests were performed on various target cultures. The minimal inhibitory concentration (MIC) and minimal bactericidal concentration (MBC) of the extract were determined using the broth dilution technique. To characterize the extract, shelf life, thermal and pH stability, and the effects of detergents such as Tween 80, Tween 20, Triton X100, toluene and enzymes on the antimicrobial activity of Elaeagnus angustifolia extract were examined. Results: The MIC values ranged from 7.5 to 0.1 mg/mL, showing maximum activity (1.62 mg/mL) against E. coli. Similarly, the MBC of the extract against E. coli was 1.62 mg/mL. Antimicrobial activity of the extract was relatively stable when kept in the refrigerator for 60 days. The antimicrobial activity of Elaeagnus angustifolia extract was stable at temperatures up to 70°C. After exposure of the Elaeagnus angustifolia extract to different pH solutions in the range of 4-10, almost 100% residual activity was found against E. coli at pH 4, 5, 6, and 7. Treatment of the extract with detergents, lipase and lysozyme eliminated its antimicrobial activity. Conclusions: Our study gives an indication of the presence of promising antimicrobial compounds and provides basic information about the nature of the Elaeagnus angustifolia extract. Future studies should elucidate the components responsible for the antimicrobial activity of these extracts against target cultures.

  10. Sophia: An Expedient UMLS Concept Extraction Annotator.

    Science.gov (United States)

    Divita, Guy; Zeng, Qing T; Gundlapalli, Adi V; Duvall, Scott; Nebeker, Jonathan; Samore, Matthew H

    2014-01-01

    An opportunity exists for meaningful concept extraction and indexing from large corpora of clinical notes in the Veterans Affairs (VA) electronic medical record. Currently available tools such as MetaMap, cTAKES and HITex do not scale up to address this big data need. Sophia, a rapid UMLS concept extraction annotator, was developed to fulfill a mandate and address extraction where high throughput is needed while preserving performance. We report on the development, testing and benchmarking of Sophia against MetaMap and cTAKES. Sophia demonstrated improved recall compared to cTAKES and MetaMap (0.71 vs 0.66 and 0.38). The overall f-score was similar to that of cTAKES and an improvement over MetaMap (0.53 vs 0.57 and 0.43). With regard to speed of processing records, we noted Sophia to be several times faster than cTAKES and the scaled-out MetaMap service. Sophia offers a viable alternative for high-throughput information extraction tasks.
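
    For orientation, high-throughput concept annotators of this kind are often built around fast lexicon lookup. The toy sketch below (not Sophia's actual algorithm; the mini-lexicon and concept IDs are hypothetical) illustrates longest-match dictionary annotation:

    ```python
    # Toy dictionary-based concept annotator (illustrative only; not Sophia's algorithm).
    lexicon = {  # hypothetical phrase -> concept-ID pairs
        "myocardial infarction": "C-0001",
        "diabetes mellitus": "C-0002",
        "infarction": "C-0003",
    }

    def annotate(text):
        tokens = text.lower().split()
        hits, i = [], 0
        max_len = max(len(p.split()) for p in lexicon)
        while i < len(tokens):
            for n in range(max_len, 0, -1):  # prefer the longest match
                phrase = " ".join(tokens[i:i + n])
                if phrase in lexicon:
                    hits.append((phrase, lexicon[phrase]))
                    i += n
                    break
            else:
                i += 1
        return hits

    print(annotate("History of myocardial infarction and diabetes mellitus"))
    # [('myocardial infarction', 'C-0001'), ('diabetes mellitus', 'C-0002')]
    ```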

  11. Membrane extraction instead of solvent extraction - what does it give

    International Nuclear Information System (INIS)

    Macasek, F.

    1989-01-01

    Membrane extraction, i.e. separation in double-emulsion systems, is analyzed theoretically as a three-phase distribution process. Its efficiency is evaluated from the point of view of chemical equilibria and diffusion transport kinetics. The main advantages of membrane extraction over solvent extraction are higher yields (for preconcentration) and higher capacity for recovery of solutes. A pertraction factor and a multiplication factor are defined; they are convenient parameters for numerical characterization of solute distribution, system capacity, process economics, and separation kinetics (for both linear and non-linear extraction isotherms). 17 refs.; 4 figs

  12. Figure text extraction in biomedical literature.

    Directory of Open Access Journals (Sweden)

    Daehyun Kim

    2011-01-01

    Full Text Available Figures are ubiquitous in biomedical full-text articles, and they represent important biomedical knowledge. However, the sheer volume of biomedical publications has made it necessary to develop computational approaches for accessing figures. Therefore, we are developing the Biomedical Figure Search engine (http://figuresearch.askHERMES.org) to allow bioscientists to access figures efficiently. Since text frequently appears in figures, automatically extracting such text may assist the task of mining information from figures. Little research, however, has been conducted exploring text extraction from biomedical figures. We first evaluated an off-the-shelf Optical Character Recognition (OCR) tool on its ability to extract text from figures appearing in biomedical full-text articles. We then developed a Figure Text Extraction Tool (FigTExT) to improve the performance of the OCR tool for figure text extraction through the use of three innovative components: image preprocessing, character recognition, and text correction. We first developed image preprocessing to enhance image quality and to improve text localization. Then we adapted the off-the-shelf OCR tool to the improved text localization for character recognition. Finally, we developed and evaluated a novel text correction framework by taking advantage of figure-specific lexicons. The evaluation on 382 figures (9,643 figure texts in total) randomly selected from PubMed Central full-text articles shows that FigTExT performed with 84% precision, 98% recall, and 90% F1-score for text localization and with 62.5% precision, 51.0% recall and 56.2% F1-score for figure text extraction. When limiting figure texts to those judged by domain experts to be important content, FigTExT performed with 87.3% precision, 68.8% recall, and 77% F1-score. FigTExT significantly improved the performance of the off-the-shelf OCR tool we used, which on its own performed with 36.6% precision, 19.3% recall, and 25.3% F1-score for...
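
    For reference, the reported F1-scores follow from the standard harmonic mean of precision and recall; e.g. for text localization:

    ```latex
    F_1 = \frac{2PR}{P + R} = \frac{2 \times 0.84 \times 0.98}{0.84 + 0.98} \approx 0.90
    ```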

  13. Utilizing a Value of Information Framework to Improve Ore Collection and Classification Procedures

    National Research Council Canada - National Science Library

    Phillips, Julia A

    2006-01-01

    .... We use a value of information framework (VOI) to consider the economic feasibility of a mine purchasing additional information on extracted ore type to reduce the uncertainty of extracted ore grade quality...

  14. Codifying unstructured data: A Natural Language Processing approach to extract rich data from clinical letters

    Directory of Open Access Journals (Sweden)

    Arron Lacey

    2017-04-01

    Clix Enrich can be used to accurately extract SNOMED concepts from clinical letters. The resulting datasets are readily available to link to existing EHRs, and can be linked to EHRs that adopt the SNOMED coding structure, or backward compatible hierarchies. Clix Enrich comes with out-of-the-box extraction methods but the optimum way to extract the correct information would be to build in custom queries, thus requiring clinical expertise to validate extraction.

  15. Smart Extraction and Analysis System for Clinical Research.

    Science.gov (United States)

    Afzal, Muhammad; Hussain, Maqbool; Khan, Wajahat Ali; Ali, Taqdir; Jamshed, Arif; Lee, Sungyoung

    2017-05-01

    With the increasing use of electronic health records (EHRs), there is a growing need to expand the utilization of EHR data to support clinical research. The key challenge in achieving this goal is the unavailability of smart systems and methods to overcome the issues of data preparation, structuring, and sharing for smooth clinical research. We developed a robust analysis system called the smart extraction and analysis system (SEAS) that consists of two subsystems: (1) the information extraction system (IES), for extracting information from clinical documents, and (2) the survival analysis system (SAS), for descriptive and predictive analysis to compile survival statistics and predict the future chance of survival. The IES subsystem is based on a novel permutation-based pattern recognition method that extracts information from unstructured clinical documents. Similarly, the SAS subsystem is based on a classification and regression tree (CART)-based prediction model for survival analysis. SEAS is evaluated and validated on a real-world case study of head and neck cancer. The overall information extraction accuracy of the system for semistructured text is recorded at 99%, while that for unstructured text is 97%. Furthermore, the automated, unstructured information extraction has reduced the average time spent on manual data entry by 75%, without compromising the accuracy of the system. Moreover, around 88% of patients are found in a terminal or dead state for the highest clinical stage of disease (level IV). Similarly, there is an ∼36% probability of a patient being alive if at least one of the lifestyle risk factors was positive. We presented our work on the development of SEAS to replace costly and time-consuming manual methods with smart, automatic extraction of information and survival prediction methods. SEAS has reduced the time and energy that human resources spend unnecessarily on manual tasks.
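
    The SAS subsystem's predictive step is CART-based; as a rough, hedged illustration of that model family (the features, data, and decision rule below are synthetic stand-ins, not the paper's variables), scikit-learn's CART implementation can be used:

    ```python
    # CART-style survival classification (sketch only; synthetic data).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(1)
    # Columns: clinical stage (1-4), count of positive lifestyle risk factors (0-3)
    X = np.column_stack([rng.integers(1, 5, 300), rng.integers(0, 4, 300)])
    y = ((X[:, 0] < 4) | (X[:, 1] > 0)).astype(int)  # 1 = alive under a toy rule

    cart = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(cart.predict([[4, 1], [4, 0]]))  # stage IV with / without a risk factor
    ```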

  16. Text feature extraction based on deep learning: a review.

    Science.gov (United States)

    Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan

    2017-01-01

    Selection of text feature items is a basic and important task for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features. Hand-designing an effective feature is a lengthy process, whereas deep learning can acquire new effective feature representations from training data for new applications. As a new feature extraction method, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data instead of adopting handcrafted features, which mainly depend on the prior knowledge of designers and can hardly take advantage of big data. Deep learning can automatically learn feature representations from big data, using models with millions of parameters. This paper first outlines the common methods used in text feature extraction, then expands on frequently used deep learning methods in text feature extraction and its applications, and forecasts the application of deep learning in feature extraction.
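
    As a concrete point of contrast for the "handcrafted features" the review mentions, the sketch below builds a fixed TF-IDF representation (corpus and settings are illustrative); a deep-learning pipeline would instead learn the representation, e.g. through an embedding layer trained end-to-end:

    ```python
    # Handcrafted (non-learned) text features: a TF-IDF baseline.
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "deep learning learns feature representations from big data",
        "traditional methods rely on handcrafted features and prior knowledge",
    ]

    vectorizer = TfidfVectorizer(ngram_range=(1, 2))  # unigrams + bigrams, fixed a priori
    X = vectorizer.fit_transform(docs)                # sparse document-term matrix

    print(X.shape)  # dimensionality is fixed by the vocabulary, not learned
    ```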

  17. Producing ashless coal extracts by microwave irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Ozgur Sonmez; Elife Sultan Giray [Mersin University, Mersin (Turkey). Department of Chemistry

    2011-06-15

    To produce ashless coal extracts, three Turkish coals were extracted with N-methyl-2-pyrrolidinone (NMP), an NMP/ethylenediamine (EDA) (17/1, vol/vol) mixture and an NMP/tetralin (9/1, vol/vol) mixture through thermal extraction and microwave extraction. Solvent extraction by microwave irradiation (MI) was found to be more effective than thermal extraction. The extraction yield of coals in NMP was enhanced by the addition of a little EDA, but tetralin addition showed variances according to the extraction method used. While tetralin addition caused a decrease in the thermal extraction yield, it increased the yield of extraction by MI. Following the extraction, solid extracts were produced with ash contents ranging from 0.11% to 1.1%. The ash contents of solid extracts obtained from microwave extraction are less than those of solid extracts obtained from thermal extraction. 34 refs., 7 figs., 5 tabs.

  18. Uranium extraction in phosphoric acid

    International Nuclear Information System (INIS)

    Araujo Figueiredo, C. de

    1984-01-01

    Uranium is recovered from the phosphoric liquor produced from the concentrate obtained from the phosphorus-uraniferous mineral of the Itataia mines (CE, Brazil). The proposed process consists of two extraction cycles. In the first one, uranium is reduced to its tetravalent state and then extracted by dioctylpyrophosphoric acid diluted in kerosene. Re-extraction is carried out with concentrated phosphoric acid containing an oxidising agent to convert uranium to its hexavalent state. This extract (from the first cycle) is submitted to the second cycle, where uranium is extracted with DEPA-TOPO (di-2-ethylhexylphosphoric acid/tri-n-octylphosphine oxide) in kerosene. The extract is then washed, and uranium is back-extracted and precipitated as a commercial concentrate. The organic phase is recovered. Results from discontinuous tests were satisfactory, enabling operational conditions to be established for the performance of a continuous test in a micro-pilot plant. (Author) [pt]

  19. Unsymmetrical phosphate as extractant for the extraction of nitric acid

    International Nuclear Information System (INIS)

    Gaikwad, R.H.; Jayaram, R.V.

    2016-01-01

    Tri-n-butyl phosphate (TBP) was first used as an extractant in 1944, during the Manhattan project, for the separation of actinides, and was further explored by Warf in 1949 for the extraction of Ce(IV) from aqueous nitric acid. TBP was subsequently used as the extractant in the Plutonium Uranium Recovery by Extraction (PUREX) process. To meet the stringent requirements of the nuclear industry, TBP has been extensively investigated. In spite of its wide applicability, TBP suffers from various disadvantages such as high aqueous solubility, third phase formation, and chemical and radiation degradation leading to the formation of undesired products. It also suffers from incomplete decontamination of the actinides from fission products. Various attempts have been made to overcome the problems associated with TBP by using higher homologues of TBP such as tri-isoamyl phosphate (TiAP), tri-sec-butyl phosphate (TsBP) and tri-amyl phosphate (TAP). It was found that in some cases the results were considerably better than those obtained with TBP for uranium/thorium extraction. The extraction of nitric acid by TBP and its higher homologues, which are symmetrical, is well documented. However, no solvent has emerged as clearly superior to TBP. Herein we report the extraction of nitric acid with neutral unsymmetrical phosphates and study them as extractants for nitric acid. Dibutyl sec-butyl phosphate, dibutyl pentyl phosphate and dibutyl heptyl phosphate were synthesised for this purpose, and the extraction of nitric acid was studied in n-dodecane. The results indicate that the substitution of one of the alkyl groups of the symmetrical phosphate adjacent to the phosphoryl (P=O) group does not have any pronounced effect on the extraction capacity for nitric acid. (author)

  20. Uranium refining by solvent extraction

    International Nuclear Information System (INIS)

    Kraikaew, J.

    1996-01-01

    Yellow cake refining was studied at both laboratory and semi-pilot scales. The process units mainly consist of dissolution and filtration, solvent extraction, and precipitation and filtration. The effect of flow ratio (organic flow rate/aqueous flow rate) on the working efficiency of the solvent extraction process was studied. Detailed studies were carried out on the extraction, scrubbing and stripping processes. The purity of the yellow cake product obtained is as high as 90.32% U3O8.

  1. Partial information decomposition as a unified approach to the specification of neural goal functions.

    Science.gov (United States)

    Wibral, Michael; Priesemann, Viola; Kay, Jim W; Lizier, Joseph T; Phillips, William A

    2017-03-01

    In many neural systems anatomical motifs are present repeatedly, but despite their structural similarity they can serve very different tasks. A prime example of such a motif is the canonical microcircuit of the six-layered neo-cortex, which is repeated across cortical areas and is involved in a number of different tasks (e.g. sensory, cognitive, or motor tasks). This observation has spawned interest in finding a common underlying principle, a 'goal function', of information processing implemented in this structure. By definition such a goal function, if universal, cannot be cast in processing-domain-specific language (e.g. 'edge filtering', 'working memory'). Thus, to formulate such a principle, we have to use a domain-independent framework. Information theory offers such a framework. However, while the classical framework of information theory focuses on the relation between one input and one output (Shannon's mutual information), we argue that neural information processing crucially depends on the combination of multiple inputs to create the output of a processor. To account for this, we use a very recent extension of Shannon information theory, called partial information decomposition (PID). PID allows one to quantify the information that several inputs provide individually (unique information), redundantly (shared information) or only jointly (synergistic information) about the output. First, we review the framework of PID. Then we apply it to reevaluate and analyze several earlier proposals of information-theoretic neural goal functions (predictive coding, infomax and coherent infomax, efficient coding). We find that PID allows these goal functions to be compared in a common framework, and also provides a versatile approach to designing new goal functions from first principles. Building on this, we design and analyze a novel goal function, called 'coding with synergy', which builds on combining external input and prior knowledge in a synergistic manner. We suggest that...
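
    In compact form, PID splits the joint mutual information of two inputs about an output into four non-negative parts (notation varies across the PID literature; the labels below follow common usage):

    ```latex
    I(Y; X_1, X_2) =
        \underbrace{U_1}_{\text{unique to } X_1} +
        \underbrace{U_2}_{\text{unique to } X_2} +
        \underbrace{S}_{\text{shared (redundant)}} +
        \underbrace{C}_{\text{synergistic}}
    ```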

  2. Analysis of Technique to Extract Data from the Web for Improved Performance

    Science.gov (United States)

    Gupta, Neena; Singh, Manish

    2010-11-01

    The World Wide Web is rapidly guiding the world into an amazing new electronic world, where everyone can publish anything in electronic form and extract almost any information. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, extracts records from HTML files automatically. Ontologies can achieve a high degree of accuracy in data extraction. We analyze a method for data extraction, OBDE (Ontology-Based Data Extraction), which automatically extracts query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and query result pages from different web sites within the same domain. Then, the constructed domain ontology is used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.

  3. New extraction technique for alkaloids

    Directory of Open Access Journals (Sweden)

    Djilani Abdelouaheb

    2006-01-01

    Full Text Available A method for the extraction of natural products has been developed. Compared with existing methods, the new technique is rapid, more efficient and consumes less solvent. Extraction of alkaloids from natural products such as Hyoscyamus muticus, Datura stramonium and Ruta graveolens consists of the use of a sonicated solution containing a surfactant as the extracting agent. The alkaloids are precipitated with Mayer reagent, dissolved in an alkaline solution, and then extracted with chloroform. This article compares the results obtained with those of other methods, showing clearly the advantages of the new method.

  4. ANTHOCYANINS ALIPHATIC ALCOHOLS EXTRACTION FEATURES

    Directory of Open Access Journals (Sweden)

    P. N. Savvin

    2015-01-01

    Full Text Available Anthocyanins are red pigments that give color to a wide range of fruits, berries and flowers. In the food industry they are widely known as a dye, the food additive E163. Ethanol or acidified water are traditionally used to extract them from natural vegetable raw materials, but in some technologies these solvents are unacceptable. In order to expand the use of anthocyanins as colorants and antioxidants, pigment extraction with alcohols differing in the structure of the carbon skeleton and in the position and number of hydroxyl groups was explored. To isolate the anthocyanins, raw materials were extracted sequentially twice at t = 60°C for 1.5 hours. The extracts were evaluated using classical spectrophotometric methods and modern express chromaticity methods. The color of black currant extracts depends on the length of the carbon skeleton and the position of the hydroxyl group: alcohols of normal structure give a higher optical density and a larger red color component than alcohols of isomeric structure. This is due to their different ability to form hydrogen bonds when isolating anthocyanins, and to other intermolecular interactions. During storage, blackcurrant extracts undergo significant structural changes of the recovered pigments, which leads to a significant change in color; this variation is stronger the greater the length of the carbon skeleton and the more branched the extractant molecules. Extraction with polyols (ethylene glycol, glycerol) is less effective than with the corresponding monohydric alcohols. However, these extracts keep significantly better because of the reducing ability of the polyols when interacting with polyphenolic compounds.

  5. Passive vapor extraction feasibility study

    International Nuclear Information System (INIS)

    Rohay, V.J.

    1994-01-01

    Demonstration of a passive vapor extraction remediation system is planned for sites in the 200 West Area used in the past for the disposal of waste liquids containing carbon tetrachloride. The passive vapor extraction units will consist of a 4-in.-diameter pipe, a check valve, a canister filled with granular activated carbon, and a wind turbine. The check valve will prevent inflow of air that otherwise would dilute the soil gas and make its subsequent extraction less efficient. The granular activated carbon is used to adsorb the carbon tetrachloride from the air. The wind turbine enhances extraction rates on windy days. Passive vapor extraction units will be designed and operated to meet all applicable or relevant and appropriate requirements. Based on a cost analysis, passive vapor extraction was found to be a cost-effective method for remediation of soils containing lower concentrations of volatile contaminants. Passive vapor extraction used on wells that average 10 stdft3/min air flow rates was found to be more cost effective than active vapor extraction for concentrations below 500 parts per million by volume (ppm) of carbon tetrachloride. For wells that average 5 stdft3/min air flow rates, passive vapor extraction is more cost effective below 100 ppm.

  6. Supercritical fluid extraction of uranium

    International Nuclear Information System (INIS)

    Kumar, Pradeep

    2017-01-01

    Uranium being a strategic material, its separation and purification are of utmost importance in the nuclear industry, for which solvent extraction is employed. During solvent extraction a significant quantity of radioactive liquid waste is generated, which is of environmental concern. In recent decades supercritical fluid extraction (SFE) has emerged as a promising alternative to solvent extraction owing to its inherent advantages of reduced liquid waste generation and process simplification. In this paper a brief overview is given of the research work carried out so far on SFE of uranium by BARC.

  7. Sterically hindered solvent extractants

    International Nuclear Information System (INIS)

    Solka, J.L.; Reis, A.H. Jr.; Mason, G.W.; Lewey, S.M.; Peppard, D.F.

    1978-01-01

    Di-t-pentylphosphinic acid, [C(CH₃)₂(CH₂CH₃)]₂PO(OH), H[Dt-PeP], has been shown by single-crystal X-ray diffraction data to be dimeric in the solid state. H[Dt-PeP] crystallizes in the centrosymmetric orthorhombic space group Cmca, with unit cell parameters a = 17.694(7), b = 11.021(4), and c = 13.073(5) Å, and Z = 8, indicating that the molecule must conform to a crystallographic mirror plane or 2-fold axis. A measured density of 1.088 g/cm³ is in good agreement with a calculated value of 1.074 g/cm³ for a unit cell volume of 2549.3 ų and a formula weight of 206.25 g. A total of 646 three-dimensional X-ray data were collected on an automated XRD-490 G.E. diffractometer. The structure was solved using a combination of direct methods, Patterson, Fourier, and least-squares refinement techniques. Refinement of the data indicates that H[Dt-PeP] is dimeric and contains a mirror plane in which the hydrogen-bonded, eight-membered ring lies. A structural disorder involving principally the ethylene carbon, but affecting the methyl carbons as well, precluded a precise determination of the carbon positions and severely reduced the precision of the final refinement. In the liquid-liquid extraction system consisting of a solution of H[Dt-PeP] in benzene vs an acidic aqueous chloride phase, the extraction of UO₂²⁺ follows the stoichiometry: UO₂²⁺(A) + 2(HY)₂(O) = UO₂(HY₂)₂(O) + 2H⁺(A), where (HY)₂ represents the dimer of H[Dt-PeP] and A and O represent the mutually equilibrated aqueous and organic phases. The expression for the distribution ratio, k, for UO₂²⁺ is given. (author)
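
    From this stoichiometry, the distribution ratio takes the usual mass-action form (a sketch of the kind of expression the abstract refers to, not a verbatim reproduction):

    ```latex
    k = \frac{[\mathrm{UO_2(HY_2)_2}]_O}{[\mathrm{UO_2^{2+}}]_A}
      = K_{eq}\,\frac{[(\mathrm{HY})_2]_O^{\,2}}{[\mathrm{H^+}]_A^{\,2}}
    ```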

  8. Quantum-entanglement storage and extraction in quantum network node

    Science.gov (United States)

    Shan, Zhuoyu; Zhang, Yong

    Quantum computing and quantum communication have become popular research topics. Nitrogen-vacancy (NV) centers in diamond have shown great advantages for implementing quantum information processing. The generation of entanglement between NV centers represents a fundamental prerequisite for all quantum information technologies. In this paper, we propose a scheme to realize high-fidelity storage and extraction of quantum entanglement information based on NV centers at room temperature. We store the entangled information of a pair of entangled photons in the Bell state into the nuclear spins of two NV centers, which makes these two NV centers entangled. We then illustrate how to extract the entangled information from the NV centers to prepare on-demand entangled states for optical quantum information processing. The strategy of engineering entanglement demonstrated here may pave the way towards an NV-center-based quantum network.

  9. Plutonium and americium extraction studies with bifunctional organophosphorus extractants

    International Nuclear Information System (INIS)

    Navratil, J.D.

    1985-01-01

    Neutral bifunctional organophosphorus extractants, such as octylphenyl-N,N-diisobutylcarbamoylmethylphosphine oxide (CMPO) and dihexyl-N,N-diethylcarbamoylmethylphosphonate (CMP), are under study at the Rocky Flats Plant (RFP) to remove plutonium and americium from 7M nitric acid waste. These compounds extract trivalent actinides from strong nitric acid, a property which distinguishes them from monofunctional organophosphorus reagents. Furthermore, the reagents extract the hydrolytic plutonium(IV) polymer which is present in the acid waste stream. The compounds extract trivalent actinides with a 3:1 stoichiometry, whereas tetra- and hexavalent actinides are extracted with a stoichiometry of 2:1. Preliminary studies indicate that the extracted plutonium polymer complex contains one to two molecules of CMP per plutonium ion and that the plutonium(IV) maintains a polymeric structure. Recent studies by Horwitz and co-workers conclude that the CMPO and CMP reagents behave as monodentate ligands. At RFP, three techniques are being tested for using CMP and CMPO to remove plutonium and americium from nitric acid waste streams: liquid-liquid extraction, extraction chromatography, and solid-supported liquid membranes. Recent tests of the last two techniques will be briefly described. In all the experiments, CMP was an 84% pure material from Bray Oil Co. and CMPO was 98% pure from M and T Chemicals.

  10. determination of lipophilic extractives in ionic liquid extracts

    African Journals Online (AJOL)


  11. Analytical procedures for identifying anthocyanins in natural extracts

    International Nuclear Information System (INIS)

    Marco, Paulo Henrique; Poppi, Ronei Jesus; Scarminio, Ieda Spacino

    2008-01-01

    Anthocyanins are among the most important plant pigments. Due to their potential benefits for human health, there is considerable interest in these natural pigments. Nonetheless, there is great difficulty in finding a technique that can identify structurally similar compounds and estimate the number and concentration of the species present. Many techniques have been tried in the search for the best methodology to extract information from these systems. In this paper, a review of the most important procedures is given, from the extraction to the identification of anthocyanins in natural extracts. (author)

  12. Extraction of trapped gases in ice cores for isotope analysis

    International Nuclear Information System (INIS)

    Leuenberger, M.; Bourg, C.; Francey, R.; Wahlen, M.

    2002-01-01

    The use of ice cores for paleoclimatic investigations is discussed in terms of their application for dating, temperature indication, spatial time marker synchronization, trace gas fluxes, solar variability indication and changes in the Dole effect. The different existing techniques for the extraction of gases from ice cores are discussed. These techniques, all to be carried out under vacuum, are melt-extraction, dry-extraction methods and the sublimation technique. Advantages and disadvantages of the individual methods are listed. An extensive list of references is provided for further detailed information. (author)

  13. Social information

    Directory of Open Access Journals (Sweden)

    Luiz Fernando de Barros Campos

    Full Text Available Based on Erving Goffman's work, the article aims to discuss a definition of information centered on the type conveyed by individuals in a multimodal way, encompassing language and body in situations of co-presence, where face-to-face interaction occurs, and influencing inter-subjective formation of the self. Six types of information are highlighted: material information, expressive information, ritualized information, meta-information, strategic information, and information displays. It is argued that the construction of this empirical object tends to dissolve the tension among material, cognitive and pragmatic aspects, constituting an example of the necessary integration among them. Some vulnerable characteristics of the theory are critically mentioned and it is suggested that the concept of information displays could provide a platform to approach the question of the interaction order in its relations with the institutional and social orders, and consequently, to reassess the scope of the notion of social information analyzed.

  14. Nonlocal Intracranial Cavity Extraction

    Science.gov (United States)

    Manjón, José V.; Eskildsen, Simon F.; Coupé, Pierrick; Romero, José E.; Collins, D. Louis; Robles, Montserrat

    2014-01-01

    Automatic and accurate methods to estimate normalized regional brain volumes from MRI data are valuable tools which may help to obtain an objective diagnosis and followup of many neurological diseases. To estimate such regional brain volumes, the intracranial cavity volume (ICV) is often used for normalization. However, the high variability of brain shape and size due to normal intersubject variability, normal changes occurring over the lifespan, and abnormal changes due to disease makes the ICV estimation problem challenging. In this paper, we present a new approach to perform ICV extraction based on the use of a library of prelabeled brain images to capture the large variability of brain shapes. To this end, an improved nonlocal label fusion scheme based on BEaST technique is proposed to increase the accuracy of the ICV estimation. The proposed method is compared with recent state-of-the-art methods and the results demonstrate an improved performance both in terms of accuracy and reproducibility while maintaining a reduced computational burden. PMID:25328511

  15. Nonlocal Intracranial Cavity Extraction

    Directory of Open Access Journals (Sweden)

    José V. Manjón

    2014-01-01

    Full Text Available Automatic and accurate methods to estimate normalized regional brain volumes from MRI data are valuable tools which may help to obtain an objective diagnosis and followup of many neurological diseases. To estimate such regional brain volumes, the intracranial cavity volume (ICV) is often used for normalization. However, the high variability of brain shape and size due to normal intersubject variability, normal changes occurring over the lifespan, and abnormal changes due to disease makes the ICV estimation problem challenging. In this paper, we present a new approach to perform ICV extraction based on the use of a library of prelabeled brain images to capture the large variability of brain shapes. To this end, an improved nonlocal label fusion scheme based on the BEaST technique is proposed to increase the accuracy of the ICV estimation. The proposed method is compared with recent state-of-the-art methods and the results demonstrate an improved performance both in terms of accuracy and reproducibility while maintaining a reduced computational burden.

  16. Study on the extraction kinetics of U(IV) extraction with neutral phosphoric extractant

    International Nuclear Information System (INIS)

    Lin Zhou; Liao Shishu; Li Zhou

    1995-04-01

    The extraction kinetics of U(IV) in the diisooctyl isobutylphosphonate system has been studied using the single drop method. The effects of the concentrations of U(IV), HCl and extractant on the extraction rate have been examined. At a fixed HCl concentration the extraction rate equation has been acquired, and under conditions of varying HCl concentration the extraction rate of U(IV) is proportional to [HCl]^1.51. The effect of operation temperature was also examined, and the calculated apparent activation energy is 23.24 kJ/mol. From the experimental results, the extraction reaction process and the rate-controlling step have been deduced. (4 figs., 5 tabs.)
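
    The reported quantities fit the standard rate-law and Arrhenius forms (the dependence on U(IV) and extractant concentrations is left symbolic, since the abstract states only the HCl order and the activation energy):

    ```latex
    r \propto [\mathrm{HCl}]^{1.51}, \qquad
    k(T) = A\,e^{-E_a/RT}, \qquad E_a \approx 23.24\ \mathrm{kJ\,mol^{-1}}
    ```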

  17. EXTRACTING KNOWLEDGE FROM DATA - DATA MINING

    Directory of Open Access Journals (Sweden)

    DIANA ELENA CODREANU

    2011-04-01

    Full Text Available Managers of economic organizations have at their disposal a large volume of information and in practice face an avalanche of information, yet they cannot operate by studying reports containing large volumes of detailed, uncorrelated data, because the good of an organization may be decided in fractions of time. Thus, to take the best and most effective decisions in real time, managers need the correct information presented quickly, in a synthetic way, but relevant enough to allow predictions and analysis. This paper aims to highlight solutions for extracting knowledge from data, namely data mining. This technology does not merely verify hypotheses but aims at discovering new knowledge, so that an economic organization can cope with fierce competition in the market.

  18. Pathology report data extraction from relational database using R, with extraction from reports on melanoma of skin as an example.

    Science.gov (United States)

    Ye, Jay J

    2016-01-01

    Different methods have been described for data extraction from pathology reports with varying degrees of success. Here a technique for directly extracting data from a relational database is described. Our department uses synoptic reports modified from College of American Pathologists (CAP) Cancer Protocol Templates to report most of our cancer diagnoses. Choosing the melanoma of skin synoptic report as an example, the R scripting language extended with the RODBC package was used to query the pathology information system database. Reports containing the melanoma of skin synoptic report from the past 4 and a half years were retrieved and individual data elements were extracted. Using the retrieved list of cases, the database was queried a second time to retrieve/extract the lymph node staging information in the subsequent reports from the same patients. 426 synoptic reports corresponding to unique lesions of melanoma of skin were retrieved, and data elements of interest were extracted into an R data frame. The distribution of Breslow depth of melanomas grouped by year is used as an example of intra-report data extraction and analysis. When the new pN staging information was present in the subsequent reports, 82% (77/94) was precisely retrieved (pN0, pN1, pN2 and pN3). An additional 15% (14/94) was retrieved with some ambiguity (positive, or knowing there was an update). The specificity was 100% for both. The relationship between Breslow depth and lymph node status was graphed as an example of lesion-specific multi-report data extraction and analysis. R extended with the RODBC package is a simple and versatile approach well-suited for the above tasks. The success or failure of the retrieval and extraction depended largely on whether the reports were formatted and whether the contents of the elements were consistently phrased. This approach can be easily modified and adopted for other pathology information systems that use a relational database for data management.
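
    The paper scripts this workflow in R with the RODBC package; a minimal Python analogue with pyodbc conveys the same direct-from-database idea (the DSN, table, column names, and report phrasing below are hypothetical):

    ```python
    # Minimal analogue of the R/RODBC workflow described above (names hypothetical).
    import re
    import pyodbc

    conn = pyodbc.connect("DSN=pathology_lis")  # hypothetical ODBC data source
    cursor = conn.cursor()
    cursor.execute(
        "SELECT accession_no, report_text FROM surgical_reports "
        "WHERE report_text LIKE '%MELANOMA OF SKIN: SYNOPTIC REPORT%'"
    )

    # Pull one data element per report, relying on consistent synoptic phrasing.
    breslow = re.compile(r"Breslow depth:\s*([\d.]+)\s*mm", re.IGNORECASE)
    for accession_no, text in cursor.fetchall():
        match = breslow.search(text)
        if match:
            print(accession_no, float(match.group(1)))
    ```

    As the authors note, extraction of this kind succeeds only to the degree that the synoptic elements are consistently phrased.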

  19. Behavioral changes during dental appointments in children having tooth extractions

    Directory of Open Access Journals (Sweden)

    Mariana Gonzalez Cademartori

    2017-01-01

    Full Text Available Background: Tooth extractions are associated with anxiety-related situations that can cause behavioral problems in pediatric dental clinics. Aim: We aimed to describe the behavior of children during tooth extraction appointments, compare it to their behavior in preceding and subsequent dental appointments, and assess behavioral differences according to gender, age, type of dentition, and reason for extraction. Settings and Design: This was a retrospective study based on information obtained from records of children between 6 and 13 years of age who were cared for at the Dentistry School in Pelotas, Brazil. Materials and Methods: Child behavior was assessed during the dental appointment that preceded the tooth extraction, during the tooth extraction appointment, and during the subsequent dental appointment using the Venham Behavior Rating Scale. Statistical Analysis: Results were analyzed using the Pearson Chi-square and McNemar tests. Results: Eighty-nine children were included. Cooperative behavior prevailed in all the dental appointments. The prevalence of "mild/intense protest" was higher in the tooth extraction appointments than in the previous or subsequent dental appointments (P < 0.001). No significant differences in behavior were detected between types of dentition (primary or permanent teeth), reasons for extraction, or gender. Conclusion: In this sample of children treated at a dental school, the occurrence of uncooperative behavior was higher during the tooth extraction appointments than in the preceding and subsequent dental appointments.

  20. Effect of soybean extract after tooth extraction on osteoblast numbers

    Directory of Open Access Journals (Sweden)

    Rosa Sharon Suhono

    2011-06-01

    Full Text Available Background: Many studies have been done to find natural materials that may increase and promote bone healing after trauma and surgery. One natural material that has been studied is soybean extract, which contains phytoestrogen, a non-steroidal compound found in plants that may bind to estrogen receptors and has estrogen-like activity. Purpose: The aim of this study was to investigate the effect of soybean extract feeding on the number of osteoblast cells in the alveolar bone socket after mandibular tooth extraction. Methods: This study was performed on male Rattus norvegicus strain Wistar rats. Seventeen rats divided into three groups were used. Group 1 was fed a 0.2% carboxymethyl cellulose (CMC) solution for seven days, and the left mandibular central incisor was extracted; group 2 was fed soybean extract for seven days and the left mandibular central incisor was extracted; group 3 received the left mandibular central incisor extraction followed by soybean extract feeding for seven days after the extraction. All groups were sacrificed on the seventh day post-extraction, and the alveolar bone sockets were taken for histopathological observation. The tissues were processed and stained using hematoxylin and eosin to identify the osteoblast cells. The number of osteoblast cells was counted using the Image Tool program. The data were analyzed statistically using the One-Way ANOVA test. Results: Significant differences were found in the number of osteoblast cells in alveolar bone after tooth extraction between groups. Group 2 (fed soybean extract before extraction) was higher than group 1 (fed CMC) and group 3 (fed soybean extract after extraction). Conclusion: Soybean extract feeding given for seven days pre-tooth extraction can increase the number of osteoblast cells compared with the group that was not given soybean extract and with the group given soybean extract feeding for seven days post-extraction.

  1. Extraction systems of the SPS

    CERN Multimedia

    CERN PhotoLab

    1973-01-01

    A pair of prototype septum magnets for the extraction systems of the SPS. Each of the two extraction systems will contain eighteen of these septum magnets (eight with a 4 mm septum and ten with a 16 mm septum) mounted in pairs in nine vacuum tanks.

  2. [Endoscopic extraction of gallbladder calculi].

    Science.gov (United States)

    Kühner, W; Frimberger, E; Ottenjann, R

    1984-06-29

    Endoscopic extraction of gallbladder stones was performed, as far as we know for the first time, in three patients with combined choledochocystolithiasis. Following endoscopic papillotomy (EPT) and subsequent mechanical lithotripsy of multiple choledochal concrements measuring up to 3 cm, the gallbladder stones were successfully extracted with a Dormia basket through the cystic duct. The patients have remained free of complications after the endoscopic intervention.

  3. Extractive separation of tellurium(4)

    International Nuclear Information System (INIS)

    Gawali, S.B.; Shinde, V.M.

    1977-01-01

    A method is described for the extraction of tellurium (4) from hydrobromic acid media using 4-methyl-2-pentanol as an extractant. The method affords the determination of tellurium after its separation from Se, Au, Cu, Pb, Fe, Os, V and Al. (author)

  4. Solids recycling in solvent extraction

    International Nuclear Information System (INIS)

    Robinson, L.F.

    1980-01-01

    In an extraction process for extracting values from a first stream into a substantially immiscible second stream using a multi-compartmental rotary contactor, unwanted solids formed in the contactor and discharged at least partly with the first stream are separated and re-entered into the contactor intermediate the points at which the streams are discharged. (author)

  5. Scorebox extraction from mobile sports videos using Support Vector Machines

    Science.gov (United States)

    Kim, Wonjun; Park, Jimin; Kim, Changick

    2008-08-01

    The scorebox plays an important role in understanding the contents of sports videos. However, the tiny scorebox may give small-display viewers an uncomfortable experience in grasping the game situation. In this paper, we propose a novel framework to extract the scorebox from sports video frames. We first extract candidates by using accumulated intensity and edge information after a short learning period. Since there are various types of scoreboxes inserted in sports videos, multiple attributes need to be used for efficient extraction. Based on those attributes, the information gain is computed and the top three ranked attributes in terms of information gain are selected as a three-dimensional feature vector for Support Vector Machines (SVM) to distinguish the scorebox from other candidates, such as logos and advertisement boards. The proposed method is tested on various videos of sports games, and experimental results show its efficiency and robustness.
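
    A hedged sketch of the selection-plus-classification step (scikit-learn's mutual_info_classif stands in for the paper's information-gain ranking, and the data is synthetic):

    ```python
    # Rank candidate attributes by estimated information gain, keep the top three,
    # and train an SVM on them -- mirroring the paper's pipeline in spirit.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 6))                      # 6 candidate attributes per region
    y = (X[:, 0] + X[:, 2] - X[:, 5] > 0).astype(int)  # 1 = scorebox, 0 = other

    selector = SelectKBest(mutual_info_classif, k=3).fit(X, y)
    clf = SVC(kernel="rbf").fit(selector.transform(X), y)

    print("selected attributes:", np.flatnonzero(selector.get_support()))
    print("training accuracy:", clf.score(selector.transform(X), y))
    ```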

  6. Parallel artificial liquid membrane extraction

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Rasmussen, Knut Einar; Parmer, Marthe Petrine

    2013-01-01

    This paper reports development of a new approach towards analytical liquid-liquid-liquid membrane extraction, termed parallel artificial liquid membrane extraction. A donor plate and an acceptor plate create a sandwich, in which each sample (human plasma) and acceptor solution is separated by an artificial liquid membrane. Parallel artificial liquid membrane extraction is a modification of hollow-fiber liquid-phase microextraction, where the hollow fibers are replaced by flat membranes in a 96-well plate format.

  7. Uranium extraction from phosphoric acid

    International Nuclear Information System (INIS)

    Lounis, A.

    1983-05-01

    A study has been carried out on the extraction of uranium from phosphoric acid produced in Algeria. First of all, the Algerian phosphoric acid produced by SONATRACH was characterised. This study helped us to synthesize a phosphoric acid that enabled us to pass from laboratory tests to pilot-scale tests. We then examined the extraction and stripping parameters: diluent, D2EHPA/TOPO ratio and oxidising agent. The laboratory experiments enabled us to set the optimum conditions for the choice of diluent, extractant concentration, ratio of the synergic mixture, oxidant concentration and redox potential. The equilibrium isotherms led to the determination of the number of theoretical stages for uranium extraction and stripping; the extraction from phosphoric acid was then verified on a pilot scale (using a mixer-settler).

  8. Improving IUE High Dispersion Extraction

    Science.gov (United States)

    Lawton, Patricia J.; VanSteenberg, M. E.; Massa, D.

    2007-01-01

    We present a different method to extract high dispersion International Ultraviolet Explorer (IUE) spectra from the New Spectral Image Processing System (NEWSIPS) geometrically and photometrically corrected (SIHI) images of the echellogram. The new algorithm corrects many of the deficiencies that exist in the NEWSIPS high dispersion (SIHI) spectra. Specifically, it does a much better job of accounting for the overlap of the higher echelle orders, it eliminates a significant time dependency in the extracted spectra (which can be traced to the background model used in the NEWSIPS extractions), and it can extract spectra from echellogram images that are more highly distorted than the NEWSIPS extraction routines can handle. Together, these improvements yield a set of IUE high dispersion spectra whose scientific integrity is significantly better than that of the NEWSIPS products. This work has been supported by NASA ADP grants.

  9. Utilizing linkage disequilibrium information from Indian Genome ...

    Indian Academy of Sciences (India)

    Using LD information derived from Indian Genome Variation database (IGVdb) on populations .... Line diagram represents the SNPs selected in Indian (upper panel) and CEPH .... out procedure for extracting DNA from human nucleated cells.

  10. Information Concepts

    CERN Document Server

    Marchionini, Gary

    2010-01-01

    Information is essential to all human activity, and information in electronic form both amplifies and augments human information interactions. This lecture surveys some of the different classical meanings of information, focuses on the ways that electronic technologies are affecting how we think about these senses of information, and introduces an emerging sense of information that has implications for how we work, play, and interact with others. The evolutions of computers and electronic networks and people's uses and adaptations of these tools manifest a dynamic space called cyberspace...

  11. Information Myopia

    Directory of Open Access Journals (Sweden)

    Nadi Helena Presser

    2016-04-01

    Full Text Available This article reflects on the ways information is appropriated in organizations. The notion of Information Myopia is characterized by a lack of knowledge about the informational capabilities available in organizations, revealing a narrow view of the information environment. The analysis focused on the process of renewing the software license contracts of a large multinational group, in order to manage its organizational assets in information technology. The information collected, explained and justified allowed the elaboration of an action proposal, which enabled the creation of new organizational knowledge. In its theoretical dimension, the value of information was materialized by its use, in a collective process of organizational learning.

  12. Information cultures

    DEFF Research Database (Denmark)

    Skouvig, Laura

    2017-01-01

    The purpose of this article is to suggest a genealogy of the concept of information beyond the 20th century. The article discusses how the concept of information culture might provide a way of formulating such a genealogical strategy. It approaches this purpose by providing a general narrative of premodern information cultures, examining works on early-modern scholars and 18th-century savants, and discussing what seems to be a Foucauldian rupture in the conceptualization of information in 19th-century England. The findings of the article are situated in the thinking that a genealogy of information would reveal that information had specific purposes in specific settings.

  13. Selectivity in extraction of copper and indium with chelate extractants

    International Nuclear Information System (INIS)

    Zivkovic, D.

    2003-01-01

    Simultaneous extraction of copper and indium with chelate extractants (LIX 84 and D2EHPA) is described. The stoichiometry of the metal-organic complexes, examined using the method of equimolar ratios, indicated CuR2 and InR3 forms of the hydrophobic extracted species. A linear correlation was obtained between the logarithm of the distribution coefficients and the chelate agent concentration and pH, respectively. Selectivity is generally higher at higher concentrations of chelate agents in the organic phase, and decreases with increasing concentration of hydrogen ions in the feed phase. (Original)
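
    The reported linear correlations are what the standard chelate-extraction equilibrium predicts (a sketch; HR denotes the chelating extractant, with n = 2 for Cu and n = 3 for In, matching the CuR2 and InR3 stoichiometries):

    ```latex
    \mathrm{M}^{n+}_{(aq)} + n\,\mathrm{HR}_{(org)}
        \rightleftharpoons \mathrm{MR}_{n\,(org)} + n\,\mathrm{H}^{+}_{(aq)},
    \qquad
    \log D = \log K_{ex} + n \log [\mathrm{HR}] + n\,\mathrm{pH}
    ```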

  14. INFORMATION MANAGER

    African Journals Online (AJOL)

    USER

    Use of Online Information Sources in Federal University Medical Libraries in North West Geo-Political Zone of Nigeria. By ... on the quality of information services provided to clients. .... North West Zone need to identify the importance of.

  15. Informational Urbanism

    Directory of Open Access Journals (Sweden)

    Wolfgang G. Stock

    2015-10-01

    Full Text Available Contemporary and future cities are often labeled as "smart cities," "ubiquitous cities," "knowledge cities" and "creative cities." Informational urbanism includes all aspects of information and knowledge with regard to urban regions. "Informational city" is an umbrella term uniting the divergent trends of information-related city research. Informational urbanism is an interdisciplinary endeavor incorporating computer science and information science on the one side, and urbanism, architecture, (city) economics, and (city) sociology on the other. In our research project on informational cities, we visited more than 40 metropolises and smaller towns all over the world. In this paper, we sketch the theoretical background on a journey from Max Weber to the Internet of Things, introduce our research methods, and describe the main results on characteristics of informational cities as prototypical cities of the emerging knowledge society.

  16. Copyright Information

    Science.gov (United States)

    MedlinePlus copyright information: https://medlineplus.gov/copyright.html (see also the MedlinePlus FAQs: https://medlineplus.gov/faq/faq.html).

  17. Output-Sensitive Pattern Extraction in Sequences

    DEFF Research Database (Denmark)

    Grossi, Roberto; Menconi, Giulia; Pisanti, Nadia

    2014-01-01

    Genomic Analysis, Plagiarism Detection, Data Mining, Intrusion Detection, Spam Fighting and Time Series Analysis are just some examples of applications where extraction of recurring patterns in sequences of objects is one of the main computational challenges. Several notions of patterns exist; of particular interest are the maximal ones, where attempting to shrink or extend them causes a loss of significant information (where the number of occurrences changes). Output-sensitive algorithms have been proposed to enumerate and list these patterns, taking polynomial time O(n^c) per pattern for constant c > 1, which is impractical for massive sequences of very large length.
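
    To make the notion of a maximal pattern concrete, here is a deliberately naive Python sketch that enumerates maximal repeated substrings of a string (exact patterns only, no don't-care positions). It illustrates the definition, not the paper's output-sensitive algorithm, whose whole point is to avoid this brute-force enumeration; the function name and example input are illustrative only.

        from collections import defaultdict

        def maximal_repeats(s, min_occ=2):
            """Enumerate left- and right-maximal repeated substrings of s.

            A repeat is maximal if extending it by one character on either
            side strictly reduces its number of occurrences. Naive O(n^3)-style
            illustration of the definition only.
            """
            n = len(s)
            occ = defaultdict(list)          # substring -> list of start positions
            for i in range(n):
                for j in range(i + 1, n + 1):
                    occ[s[i:j]].append(i)

            def count(t):
                return len(occ.get(t, []))

            result = []
            for t, positions in occ.items():
                c = len(positions)
                if c < min_occ:
                    continue
                # Right-maximal: every one-character right extension occurs fewer times.
                right_max = all(count(t + ch) < c for ch in set(s))
                # Left-maximal: same condition on the left.
                left_max = all(count(ch + t) < c for ch in set(s))
                if right_max and left_max:
                    result.append((t, c))
            return result

        if __name__ == "__main__":
            print(maximal_repeats("abcabcab"))   # [('ab', 3), ('abcab', 2)]

    Shrinking "abcab" to "abca" or "bcab" does not change the occurrence count, so only the longer string is reported; this is exactly the redundancy that maximality removes.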

  18. Quantum Phase Extraction in Isospectral Electronic Nanostructures

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Christopher

    2010-04-28

    Quantum phase is not a direct observable and is usually determined by interferometric methods. We present a method to map complete electron wave functions, including internal quantum phase information, from measured single-state probability densities. We harness the mathematical discovery of drum-like manifolds bearing different shapes but identical resonances, and construct quantum isospectral nanostructures possessing matching electronic structure but divergent physical structure. Quantum measurement (scanning tunneling microscopy) of these 'quantum drums' [degenerate two-dimensional electron states on the Cu(111) surface confined by individually positioned CO molecules] reveals that isospectrality provides an extra topological degree of freedom enabling robust quantum state transplantation and phase extraction.
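
    The role of the measured densities can be stated compactly. A hedged sketch of the idea, with rho the measured single-state probability density and T the transplantation map relating eigenstates of the two isospectral domains (notation assumed for illustration):

        \rho(\mathbf{r}) = |\psi(\mathbf{r})|^{2}
        \quad\Longrightarrow\quad
        \psi(\mathbf{r}) = \sqrt{\rho(\mathbf{r})}\, e^{i\varphi(\mathbf{r})}

        \psi_{B} = T\,\psi_{A}
        \qquad \text{(transplantation between isospectral domains } A, B\text{)}

    Measuring rho alone leaves the phase undetermined; for the real bound states of such hard-wall structures the phase reduces to a sign pattern, and the transplantation relation between the two isospectral confinements supplies the extra constraint that fixes it.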

  19. Bengali text summarization by sentence extraction

    OpenAIRE

    Sarkar, Kamal

    2012-01-01

    Text summarization is a process that produces an abstract or a summary by selecting the significant portions of the information in one or more texts. In an automatic text summarization process, a text is given to the computer and the computer returns a shorter, less redundant extract or abstract of the original text(s). Many techniques have been developed for summarizing English texts, but very few attempts have been made at Bengali text summarization. This paper presents a method for Bengali ...
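
    Sentence extraction is typically done by scoring each sentence and keeping the top-ranked ones in document order. A minimal Python sketch of a generic frequency baseline, assuming whitespace/punctuation tokenization; it illustrates the extractive approach in general, not the specific features of the paper's Bengali system:

        import re
        from collections import Counter

        def extractive_summary(text, num_sentences=3):
            """Score sentences by average word frequency and return the
            top-ranked ones in their original order."""
            sentences = re.split(r'(?<=[.!?])\s+', text.strip())
            freq = Counter(re.findall(r'\w+', text.lower()))

            def score(sentence):
                tokens = re.findall(r'\w+', sentence.lower())
                if not tokens:
                    return 0.0
                # Average frequency normalizes away sentence length.
                return sum(freq[t] for t in tokens) / len(tokens)

            ranked = sorted(range(len(sentences)),
                            key=lambda i: score(sentences[i]), reverse=True)
            chosen = sorted(ranked[:num_sentences])   # restore document order
            return ' '.join(sentences[i] for i in chosen)

    Real systems replace the raw-frequency score with TF-IDF, positional, or cue-word features, but the extract-and-reorder skeleton stays the same.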

  20. Design and application of PDF model for extracting

    Science.gov (United States)

    Xiong, Lei

    2013-07-01

    In order to reduce the submission workflow of an editorial department system from two steps to one, this paper advocates transplanting the PDF information-extraction technology of the PDF reader into the IEEE Xplore contribution system and combining it with batch uploading, so that editors can upload roughly 1 GB of PDF files in a single batch. The computer then automatically extracts the title, author, address, e-mail, abstract, and keywords of each paper for later retrieval, saving the editorial department considerable labor, material, and financial cost.
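
    A minimal Python sketch of such batch extraction, using the third-party pypdf library (pip install pypdf); the field mapping and helper names are assumptions for illustration, not the contribution system's actual code:

        from pathlib import Path
        from pypdf import PdfReader

        def extract_record(pdf_path):
            """Pull basic bibliographic fields from one PDF. Title and author
            come from the document info dictionary when present; the first
            page's text is kept so a downstream parser can locate the
            abstract and keywords."""
            reader = PdfReader(pdf_path)
            meta = reader.metadata
            return {
                "file": str(pdf_path),
                "title": meta.title if meta else None,
                "author": meta.author if meta else None,
                "first_page": reader.pages[0].extract_text() or "",
            }

        def extract_batch(folder):
            """Process every PDF in a folder, mirroring the batch-upload step."""
            return [extract_record(p) for p in sorted(Path(folder).glob("*.pdf"))]

    Since PDF metadata is often missing or stale, a production pipeline would fall back to parsing the first-page text for the title block, which is why it is retained in the record.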