WorldWideScience

Sample records for extract additional information

  1. Information extraction

    NARCIS (Netherlands)

    Zhang, Lei; Hoede, C.

    2002-01-01

    In this paper we present a new approach to extracting relevant information from natural language text by means of knowledge graphs. We give a multiple-level model based on knowledge graphs for describing template information, and investigate the concept of partial structural parsing. Moreover, we point out that

  2. Extracting additional risk managers information from a risk assessment of Listeria monocytogenes in deli meats

    NARCIS (Netherlands)

    Pérez-Rodríguez, F.; Asselt, van E.D.; García-Gimeno, R.M.; Zurera, G.; Zwietering, M.H.

    2007-01-01

    The risk assessment study of Listeria monocytogenes in ready-to-eat foods conducted by the U.S. Food and Drug Administration is an example of an extensive quantitative microbiological risk assessment that could be used by risk analysts and other scientists to obtain information and by managers and s

  3. Residuals of autoregressive model providing additional information for feature extraction of pattern recognition-based myoelectric control.

    Science.gov (United States)

    Pan, Lizhi; Zhang, Dingguo; Sheng, Xinjun; Zhu, Xiangyang

    2015-01-01

    Myoelectric control based on pattern recognition has been studied for several decades. Autoregressive (AR) features are among the most widely used feature extraction methods in myoelectric control studies. Almost all previous studies used only the AR coefficients, without the residuals of the AR model, for classification. However, the residuals of the AR model contain important amplitude information of the electromyography (EMG) signals. In this study, we added the residuals to the AR features (AR+re) and compared the performance with the classical sixth-order AR coefficients. We tested six unilateral transradial amputees and eight able-bodied subjects for eleven hand and wrist motions. The classification accuracy (CA) of the intact side for amputee subjects and the right hand for able-bodied subjects showed that the CA of AR+re features was slightly but significantly higher than that of classical AR features (p = 0.009), which meant that the residuals could provide additional information to classical AR features for classification. Interestingly, the CA of the affected side for amputee subjects showed no significant difference between the AR+re features and classical AR features (p > 0.05). We attributed this to the fact that the amputee subjects could not use their affected side to produce EMG patterns as consistent as those of their intact side or the dominant hand of the able-bodied subjects. Since the residuals are already available once the AR coefficients have been computed, the results of this study suggest adding the residuals to classical AR features to potentially improve the performance of pattern recognition-based myoelectric control.
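
    For concreteness, the following is a minimal sketch, in Python, of how an "AR+re" feature vector of the kind described could be formed; the window length, the least-squares AR estimator and the use of the residual RMS as the extra feature are illustrative assumptions, since the abstract does not fix these details.

    ```python
    import numpy as np

    def ar_plus_residual_features(emg_window, order=6):
        """Fit an order-p autoregressive (AR) model to one EMG channel by
        least squares and append a residual term to the coefficients.

        The residual (RMS of the one-step prediction error) carries the
        amplitude information that the AR coefficients alone discard.
        """
        x = np.asarray(emg_window, dtype=float)
        # Lagged design matrix: predict x[t] from x[t-1], ..., x[t-order].
        X = np.column_stack(
            [x[order - k - 1 : len(x) - k - 1] for k in range(order)])
        y = x[order:]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        residual_rms = np.sqrt(np.mean((y - X @ coeffs) ** 2))
        return np.append(coeffs, residual_rms)  # the "AR+re" feature vector

    # Example: one 200-sample analysis window of a single EMG channel.
    rng = np.random.default_rng(0)
    print(ar_plus_residual_features(rng.standard_normal(200)).shape)  # (7,)
    ```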

  4. Information extraction system

    Science.gov (United States)

    Lemmond, Tracy D; Hanley, William G; Guensche, Joseph Wendell; Perry, Nathan C; Nitao, John J; Kidwell, Paul Brandon; Boakye, Kofi Agyeman; Glaser, Ron E; Prenger, Ryan James

    2014-05-13

    An information extraction system and methods of operating the system are provided. In particular, an information extraction system for performing meta-extraction of named entities of people, organizations, and locations, as well as relationships and events, from text documents is described herein.

  5. Multimedia Information Extraction

    CERN Document Server

    Maybury, Mark T

    2012-01-01

    The advent of increasingly large consumer collections of audio (e.g., iTunes), imagery (e.g., Flickr), and video (e.g., YouTube) is driving a need not only for multimedia retrieval but also information extraction from and across media. Furthermore, industrial and government collections fuel requirements for stock media access, media preservation, broadcast news retrieval, identity management, and video surveillance.  While significant advances have been made in language processing for information extraction from unstructured multilingual text and extraction of objects from imagery and vid

  6. Extracting useful information from images

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    2011-01-01

    The paper presents an overview of methods for extracting useful information from digital images. It covers various approaches that utilize different properties of images, like intensity distribution, spatial frequency content and several others. A few case studies, including isotropic … and heterogeneous, congruent and non-congruent images, are used to illustrate how the described methods work and to compare some of them …

  7. Informed consent in dental extractions.

    Directory of Open Access Journals (Sweden)

    José Luis Capote Femenías

    2009-07-01

    Full Text Available When performing any oral intervention, particularly dental extractions, the specialist should have the oral or written consent of the patient. This consent includes the explanation of all possible complications, whether typical, very serious or personalized ones associated with the previous health condition, age, profession, religion or any other characteristic of the patient, as well as the possible benefits of the intervention. This article addresses the bioethical aspects of dental extractions, in order to determine the main elements that the informed consent should include.

  8. Web-Based Information Extraction Technology

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Information extraction techniques on the Web are a current research hotspot. Many information extraction techniques based on different principles have appeared, with differing capabilities. We classify the existing information extraction techniques by the principle of information extraction and analyze the methods and principles of semantic information adding, schema defining, rule expression, semantic item locating and object locating in these approaches. Based on the above survey and analysis, several open problems are discussed.

  9. Information Extraction and Webpage Understanding

    Directory of Open Access Journals (Sweden)

    M.Sharmila Begum

    2011-11-01

    Full Text Available The two most important tasks in information extraction from the Web are webpage structure understanding and natural language sentence processing. However, little work has been done toward an integrated statistical model for understanding webpage structures and processing natural language sentences within the HTML elements. Our recent work on webpage understanding introduces a joint model of Hierarchical Conditional Random Fields (HCRFs) and extended Semi-Markov Conditional Random Fields (Semi-CRFs) to leverage the page structure understanding results in free text segmentation and labeling. In this top-down integration model, the decision of the HCRF model could guide the decision making of the Semi-CRF model. However, the drawback of the top-down integration strategy is also apparent, i.e., the decision of the Semi-CRF model could not be used by the HCRF model to guide its decision making. This paper proposed a novel framework called WebNLP, which enables bidirectional integration of page structure understanding and text understanding in an iterative manner. We have applied the proposed framework to local business entity extraction and Chinese person and organization name extraction. Experiments show that the WebNLP framework achieved significantly better performance than existing methods.

  10. 47 CFR 25.111 - Additional information.

    Science.gov (United States)

    2010-10-01

    47 CFR 25.111 (Telecommunication; Federal Communications Commission; Common Carrier Services; Satellite Communications): … frequencies to be used for tracking, telemetry and control functions of DBS systems. …

  11. Extracting information from multiplex networks.

    Science.gov (United States)

    Iacovacci, Jacopo; Bianconi, Ginestra

    2016-06-01

    Multiplex networks are generalized network structures that are able to describe networks in which the same set of nodes are connected by links that have different connotations. Multiplex networks are ubiquitous since they describe social, financial, engineering, and biological networks as well. Extending our ability to analyze complex networks to multiplex network structures increases greatly the level of information that is possible to extract from big data. For these reasons, characterizing the centrality of nodes in multiplex networks and finding new ways to solve challenging inference problems defined on multiplex networks are fundamental questions of network science. In this paper, we discuss the relevance of the Multiplex PageRank algorithm for measuring the centrality of nodes in multilayer networks and we characterize the utility of the recently introduced indicator function Θ̃(S) for describing their mesoscale organization and community structure. As working examples for studying these measures, we consider three multiplex network datasets coming from social science.
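
    As a rough illustration of the idea (not the paper's exact definition), the sketch below implements a simplified multiplicative-style variant of Multiplex PageRank in Python: the PageRank scores computed on one layer bias both the walk and the teleportation of a PageRank computation on a second layer. The toy adjacency matrices and the coupling rule are assumptions made for illustration.

    ```python
    import numpy as np

    def biased_pagerank(A, w, d=0.85, n_iter=200):
        """PageRank in which both the random walk and the teleportation are
        biased toward nodes with large weight w (here: centralities taken
        from another layer). Assumes every node has at least one neighbor."""
        w = np.asarray(w, dtype=float)
        W = A * w                              # W[i, j] = A[i, j] * w[j]
        P = W / W.sum(axis=1, keepdims=True)   # row-stochastic walk matrix
        v = w / w.sum()                        # biased teleportation
        x = np.full(A.shape[0], 1.0 / A.shape[0])
        for _ in range(n_iter):
            x = d * (P.T @ x) + (1 - d) * v
        return x

    # Toy two-layer multiplex on the same four nodes.
    layer_a = np.array([[0,1,1,0],[1,0,1,0],[1,1,0,1],[0,0,1,0]], dtype=float)
    layer_b = np.array([[0,1,0,0],[1,0,1,1],[0,1,0,1],[0,1,1,0]], dtype=float)

    rank_a = biased_pagerank(layer_a, np.ones(4))  # plain PageRank on layer A
    rank_ab = biased_pagerank(layer_b, rank_a)     # layer-A rank biases layer B
    print(np.round(rank_ab, 3))
    ```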

  12. Extracting Information from Multiplex Networks

    CERN Document Server

    Iacovacci, Jacopo

    2016-01-01

    Multiplex networks are generalized network structures that are able to describe networks in which the same set of nodes are connected by links that have different connotations. Multiplex networks are ubiquitous since they describe social, financial, engineering and biological networks as well. Extending our ability to analyze complex networks to multiplex network structures increases greatly the level of information that is possible to extract from Big Data. For these reasons characterizing the centrality of nodes in multiplex networks and finding new ways to solve challenging inference problems defined on multiplex networks are fundamental questions of network science. In this paper we discuss the relevance of the Multiplex PageRank algorithm for measuring the centrality of nodes in multilayer networks and we characterize the utility of the recently introduced indicator function $\\widetilde{\\Theta}^{S}$ for describing their mesoscale organization and community structure. As working examples for studying thes...

  14. Implementation of Additional Information Extraction and Image Display from MR Images

    Institute of Scientific and Technical Information of China (English)

    万遂人; 顾翠艳; 孙钰

    2013-01-01

    The additional overlay information in MR images can provide guidance for magnetic resonance spectroscopy (MRS) and help physicians locate lesions, so displaying it accurately is of considerable importance for computer-aided diagnosis from medical images. However, several commonly used DICOM (Digital Imaging and Communications in Medicine) viewers lose this additional information, which matters for MRS research, when displaying DICOM files: their basic function is simply to display images, and they cannot fully render some vendor-specific DICOM images, such as Siemens/GE magnetic resonance images, where the localization information is often lost. Using magnetic resonance spectroscopy data provided by Jiangsu Province People's Hospital, the extraction of pixel data and of the additional information was studied in detail, together with display experiments on DICOM images, and a display program named dicomreader was developed in the Eclipse environment using the Java language and the open-source toolkit dcm4che-2.0.23. Experimental results show that dicomreader quickly converts DICOM files into displayable images and accurately renders the additional information; the extracted information can be used by researchers to locate lesions such as brain tumors, reconstruct brain tissue in three dimensions, segment MRI images, or support clinical diagnosis based on the spectral information.
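
    The program described above is written in Java with dcm4che; as a cross-check of the same idea in another stack, here is a minimal sketch using Python and pydicom (a different, widely available DICOM toolkit). The file name is a placeholder, and only the standard overlay-plane mechanism is shown.

    ```python
    import pydicom

    ds = pydicom.dcmread("mrs_slice.dcm")  # hypothetical input file
    pixels = ds.pixel_array                # the MR image itself

    # Overlay planes live in the repeating groups 0x6000, 0x6002, ...;
    # overlay_array unpacks the 1-bit-packed Overlay Data element
    # (gggg,3000) into an ordinary 2-D numpy array.
    if (0x6000, 0x3000) in ds:
        overlay = ds.overlay_array(0x6000)
        print("overlay marks", int(overlay.sum()), "pixels")
        # Burn the overlay into the image for display, as a viewer would.
        display = pixels.copy()
        display[overlay > 0] = pixels.max()
    ```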

  15. MIXED HEDGING UNDER ADDITIVE MARKET PRICE INFORMATION

    Institute of Scientific and Technical Information of China (English)

    Haifeng YAN; Jianqi YANG; Limin LIU

    2008-01-01

    Assume that there is additional market information in the financial market, represented by n given T-contingent claims. These special claims, with observed prices at time 0, can only be traded at time 0; hence, investment opportunities increase. By means of the techniques developed by Gouriéroux et al. (1998), the mixed hedging problem is considered; in particular, the price of the contingent claim and the optimal hedging strategy are obtained. An explicit description of the mean-variance efficient solution is given after analyzing the mean-variance efficient frontier problem.

  16. [Information about phosphorus additives and nutritional counseling].

    Science.gov (United States)

    Kido, Shinsuke; Nomura, Kengo; Sasaki, Shohei; Shiozaki, Yuji; Segawa, Hiroko; Tatsumi, Sawako

    2012-10-01

    Hyperphosphatemia is a common disorder in patients with chronic kidney disease (CKD) and may result in hyperparathyroidism and renal osteodystrophy. Hyperphosphatemia may also contribute to vascular calcification and increased mortality. Hence, correction and prevention of hyperphosphatemia is a main component of the management of CKD. This goal is usually approached both by administering phosphorus binders and by restricting dietary phosphorus (P) intake. Dietary P intake is derived largely from foods with high protein content or from food additives and is an important determinant of P balance in patients with CKD. Phosphate food additives can dramatically increase the amount of P consumed in the daily diet, especially because P is more readily absorbed in its inorganic form. In addition, information about the P content and type in prepared foods is often unavailable or misleading. Therefore, during dietary counseling of patients with CKD, we recommend that they consider both the absolute dietary P content and the P-to-protein ratio of foods and meals, including food additives.

  17. Personalized Web Services for Web Information Extraction

    CERN Document Server

    Jarir, Zahi; Erradi, Mahammed

    2011-01-01

    The field of information extraction from the Web emerged with the growth of the Web and the multiplication of online data sources. This paper is an analysis of information extraction methods. It presents a service-oriented approach for web information extraction considering both web data management and extraction services. We then propose an SOA-based architecture to enhance flexibility and on-the-fly modification of web extraction services. An implementation of the proposed architecture is presented on the middleware level of Java Enterprise Edition (JEE) servers.

  18. Mate extract as feed additive for improvement of beef quality

    DEFF Research Database (Denmark)

    de Zawadzki, Andressa; Arrivetti, Leandro de O.R.; Vidal, Marília P.

    2017-01-01

    Mate (Ilex paraguariensis A.St.-Hil.) is generally recognized as safe (GRAS status) and has a high content of alkaloids, saponins, and phenolic acids. Addition of mate extract to broiler feed has been shown to increase the oxidative stability of chicken meat; however, its effect on beef quality ...

  19. Information- Theoretic Analysis for the Difficulty of Extracting Hidden Information

    Institute of Scientific and Technical Information of China (English)

    ZHANG Wei-ming; LI Shi-qu; CAO Jia; LIU Jiu-fen

    2005-01-01

    The difficulty of extracting hidden information, which is essentially a kind of secrecy, is analyzed by information-theoretic methods. The relations between key rate, message rate, hiding capacity and difficulty of extraction are studied in terms of the unicity distance of the stego-key, and the theoretical conclusions are used to analyze actual extraction attacks on Least Significant Bit (LSB) steganographic algorithms.
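
    For reference, Shannon's classical unicity distance, which the stego-key analysis parallels (the paper's stego-specific definition is not reproduced here), reads

    $$ n_0 \;\approx\; \frac{H(K)}{D}, \qquad D \;=\; \log_2 |\mathcal{A}| - H_L , $$

    where $H(K)$ is the entropy of the (stego-)key, $|\mathcal{A}|$ is the alphabet size, $H_L$ is the per-symbol entropy of the cover source, and $D$ is its per-symbol redundancy: an attacker needs roughly $n_0$ observed symbols before the key, and hence the hidden message, is uniquely determined.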

  20. Enhanced Pattern Representation in Information Extraction

    Institute of Scientific and Technical Information of China (English)

    廖乐健; 曹元大; 张映波

    2004-01-01

    Traditional pattern representations in information extraction lack the ability to represent domain-specific concepts and are therefore inflexible. To overcome these restrictions, an enhanced pattern representation is designed which includes ontological concepts, neighboring-tree structures and soft constraints. An information-extraction inference engine based on hypothesis generation and conflict resolution is implemented. The proposed technique is successfully applied to an information extraction system for the Chinese-language query front-end of a job-recruitment search engine.

  1. 24 CFR 1710.116 - Additional information.

    Science.gov (United States)

    2010-04-01

    24 CFR 1710.116 (Violations and litigations): this information need appear only if any of the questions are answered in the … violation of a Federal, state or local law concerned with the environment, land sales, securities sales … pending against them any criminal proceedings in any court? (OILSR suspension notices on …)

  2. Information Extraction From Chemical Patents

    Directory of Open Access Journals (Sweden)

    Sandra Bergmann

    2012-01-01

    Full Text Available The development of new chemicals or pharmaceuticals is preceded by an in-depth analysis of published patents in this field. This information retrieval is a costly and time-inefficient step when done by a human reader, yet it is mandatory for the potential success of an investment. The goal of the research project UIMA-HPC is to automate and hence speed up the process of knowledge mining from patents. Multi-threaded analysis engines, developed according to UIMA (Unstructured Information Management Architecture) standards, process texts and images in thousands of documents in parallel. UNICORE (UNiform Interface to COmputing Resources) workflow control structures make it possible to dynamically allocate resources for every given task to achieve the best CPU-time/real-time ratios in an HPC environment.

  3. YOGHURT STRUCTURE MODIFICATION WITH AMARANTH EXTRACT AND TRANSGLUTAMINASE ADDITION

    OpenAIRE

    A. G. Shleikin; N. P. Danilov; A. E. Argymbaeva; S. V. Rykov

    2015-01-01

    Obtaining functional dairy products is an important way to improve the diet of the population. This article is devoted to the development of a yoghurt with the enzyme transglutaminase. Transglutaminase (EC 2.3.2.13) cross-links proteins, strengthening the structure of the product. The concentration of transglutaminase used was 4 U/g of protein (0.1 %). Amaranth extract, prepared from amaranth flour at a 5 % concentration relative to the fermented sample volume, was used as a vegetable additive...

  4. Extracting laboratory test information from biomedical text

    Directory of Open Access Journals (Sweden)

    Yanna Shen Kang

    2013-01-01

    Full Text Available Background: No previous study has reported the efficacy of current natural language processing (NLP) methods for extracting laboratory test information from narrative documents. This study investigates the pathology informatics question of how accurately such information can be extracted from text with the current tools and techniques, especially machine learning and symbolic NLP methods. The study data came from a text corpus maintained by the U.S. Food and Drug Administration, containing a rich set of information on laboratory tests and test devices. Methods: The authors developed a symbolic information extraction (SIE) system to extract device- and test-specific information about four types of laboratory test entities: specimens, analytes, units of measure and detection limits. They compared the performance of SIE with three prominent machine learning based NLP systems, LingPipe, GATE and BANNER, each implementing a distinct supervised machine learning method: hidden Markov models, support vector machines and conditional random fields, respectively. Results: The machine learning systems recognized laboratory test entities with moderately high recall but low precision rates. Their recall rates were relatively higher when the number of distinct entity values (e.g., the spectrum of specimens) was very limited or when the lexical morphology of the entity was distinctive (as in units of measure), yet SIE outperformed them with statistically significant margins on extracting specimen, analyte and detection limit information in both precision and F-measure. Its high recall performance was statistically significant on analyte information extraction. Conclusions: Despite its shortcomings against machine learning methods, a well-tailored symbolic system may better discern relevancy among a pile of information of the same type and may outperform a machine learning system by tapping into lexically non-local contextual information such as the document structure.

  5. 25 CFR 103.14 - Can BIA request additional information?

    Science.gov (United States)

    2010-04-01

    25 CFR 103.14 (Indians): BIA may require the lender to provide additional information whenever BIA believes it needs the information to properly evaluate a new lender, guaranty application, or insurance …

  6. Application of GIS to Geological Information Extraction

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    GIS, a powerful tool for processing spatial data, is advantageous in its spatial overlaying. In this paper, GIS is applied to the extraction of geological information. Information associated with mineral resources is chosen to delineate the geo-anomalies, which are the basis of ore-forming anomalies and of mineral-deposit location. This application is illustrated with an example from the Weixi area, Yunnan Province.

  7. 46 CFR 535.606 - Requests for additional information.

    Science.gov (United States)

    2010-10-01

    46 CFR 535.606 (Shipping; Agreements): The Commission may request from the … responses to a request for additional information shall be submitted to the Director, Bureau of Trade …

  8. 32 CFR 1698.3 - Requests for additional information.

    Science.gov (United States)

    2010-07-01

    32 CFR 1698.3 (National Defense; Advisory Opinions): (a) The Director may request additional appropriate information from the requester for an advisory opinion. (b) The Director will forward a copy of …

  9. 21 CFR 207.31 - Additional drug listing information.

    Science.gov (United States)

    2010-04-01

    21 CFR 207.31 (Food and Drugs; Procedures for Domestic Drug Establishments): (a) In addition to … the following information by letter or by Federal Register notice: (1) For a particular prescription drug …

  10. Extraction of information from a single quantum

    OpenAIRE

    Paraoanu, G. S.

    2011-01-01

    We investigate the possibility of performing quantum tomography on a single qubit with generalized partial measurements and the technique of measurement reversal. Using concepts from statistical decision theory, we prove that, somewhat surprisingly, no information can be obtained using this scheme. It is shown that, irrespective of the measurement technique used, extraction of information from single quanta is at odds with other general principles of quantum physics.

  11. DKIE: Open Source Information Extraction for Danish

    DEFF Research Database (Denmark)

    Derczynski, Leon; Field, Camilla Vilhelmsen; Bøgh, Kenneth Sejdenfaden

    2014-01-01

    Danish is a major Scandinavian language spoken daily by around six million people. However, it lacks a unified, open set of NLP tools. This demonstration will introduce DKIE, an extensible open-source toolkit for processing Danish text. We implement an information extraction architecture for Danish...

  12. DKIE: Open Source Information Extraction for Danish

    DEFF Research Database (Denmark)

    Derczynski, Leon; Field, Camilla Vilhelmsen; Bøgh, Kenneth Sejdenfaden

    2014-01-01

    Danish is a major Scandinavian language spoken daily by around six million people. However, it lacks a unified, open set of NLP tools. This demonstration will introduce DKIE, an extensible open-source toolkit for processing Danish text. We implement an information extraction architecture for Danish … independently or with the Stanford NLP toolkit.

  13. 10 CFR 1.3 - Sources of additional information.

    Science.gov (United States)

    2010-01-01

    10 CFR 1.3 (Energy; Nuclear Regulatory Commission; Statement of Organization and General Information): (a) A statement of the NRC's organization, policies, procedures …

  14. 19 CFR 111.60 - Request for additional information.

    Science.gov (United States)

    2010-04-01

    19 CFR 111.60 (Customs Duties; Monetary Penalty in Lieu of Suspension or Revocation): If, in … concerning the alleged misconduct, he may request that information in writing. The broker's request must set …

  15. Extracting an entanglement signature from only classical mutual information

    Energy Technology Data Exchange (ETDEWEB)

    Starling, David J.; Howell, John C. [Department of Physics and Astronomy, University of Rochester, Rochester, New York 14627 (United States); Broadbent, Curtis J. [Department of Physics and Astronomy, University of Rochester, Rochester, New York 14627 (United States); Rochester Theory Center, University of Rochester, Rochester, New York 14627 (United States)

    2011-09-15

    We introduce a quantity which is formed using classical notions of mutual information and which is computed using the results of projective measurements. This quantity constitutes a sufficient condition for entanglement and represents the amount of information that can be extracted from a bipartite system for spacelike separated observers. In addition to discussion, we provide simulations as well as experimental results for the singlet and maximally correlated mixed states.

  16. Extracting an entanglement signature from only classical mutual information

    Science.gov (United States)

    Starling, David J.; Broadbent, Curtis J.; Howell, John C.

    2011-09-01

    We introduce a quantity which is formed using classical notions of mutual information and which is computed using the results of projective measurements. This quantity constitutes a sufficient condition for entanglement and represents the amount of information that can be extracted from a bipartite system for spacelike separated observers. In addition to discussion, we provide simulations as well as experimental results for the singlet and maximally correlated mixed states.

  17. Web Information Extraction

    Institute of Scientific and Technical Information of China (English)

    李晶; 陈恩红

    2003-01-01

    With the tremendous amount of information available on the Web, the ability to quickly obtain information has become a crucial problem. It is not enough to acquire information with Web information retrieval technology alone, and therefore more and more attention is being paid to Web information extraction technology. This paper first introduces some concepts of information extraction technology, then introduces and analyzes several typical Web information extraction methods based on the differences in their extraction patterns.

  18. Automated information extraction from web APIs documentation

    OpenAIRE

    Ly, Papa Alioune; Pedrinaci, Carlos; Domingue, John

    2012-01-01

    A fundamental characteristic of Web APIs is the fact that, de facto, providers hardly follow any standard practices while implementing, publishing, and documenting their APIs. As a consequence, the discovery and use of these services by third parties is significantly hampered. In order to achieve further automation while exploiting Web APIs we present an approach for automatically extracting relevant technical information from the Web pages documenting them. In particular we have devised two ...

  19. Unsupervised information extraction by text segmentation

    CERN Document Server

    Cortez, Eli

    2013-01-01

    A new unsupervised approach to the problem of Information Extraction by Text Segmentation (IETS) is proposed, implemented and evaluated herein. The authors' approach relies on information available in pre-existing data to learn how to associate segments in the input string with attributes of a given domain, using a very effective set of content-based features. The effectiveness of the content-based features is also exploited to directly learn structure-based features from test data, with no previous human-driven training, a feature unique to the presented approach. Based on the approach, a

  20. Extracting the information backbone in online system.

    Science.gov (United States)

    Zhang, Qian-Ming; Zeng, An; Shang, Ming-Sheng

    2013-01-01

    Information overload is a serious problem in modern society and many solutions such as recommender systems have been proposed to filter out irrelevant information. In the literature, researchers have been mainly dedicated to improving the recommendation performance (accuracy and diversity) of the algorithms while they have overlooked the influence of the topology of the online user-object bipartite networks. In this paper, we find that some information provided by the bipartite networks is not only redundant but also misleading. With such "less can be more" feature, we design some algorithms to improve the recommendation performance by eliminating some links from the original networks. Moreover, we propose a hybrid method combining the time-aware and topology-aware link removal algorithms to extract the backbone which contains the essential information for the recommender systems. From the practical point of view, our method can improve the performance and reduce the computational time of the recommendation system, thus improving both their effectiveness and efficiency.
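
    The sketch below illustrates the flavor of topology-aware link removal on a user-object bipartite network in Python; scoring a link by the degree of its object end (treating links to very popular objects as the most redundant) is a simplified criterion assumed here for illustration, not necessarily the paper's exact rule.

    ```python
    from collections import Counter

    # Toy user-object bipartite network as a list of (user, object) links.
    links = [("u1", "o1"), ("u1", "o2"), ("u2", "o1"),
             ("u2", "o3"), ("u3", "o1"), ("u3", "o2")]

    object_degree = Counter(obj for _, obj in links)

    def backbone(links, fraction_removed=0.3):
        """Keep the least redundant links: rank links by the popularity of
        their object end and drop the most popular-ended ones first."""
        ranked = sorted(links, key=lambda link: object_degree[link[1]])
        keep = len(links) - int(fraction_removed * len(links))
        return ranked[:keep]

    print(backbone(links))  # the reduced network fed to the recommender
    ```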

  1. Extracting the information backbone in online system

    CERN Document Server

    Zhang, Qian-Ming; Shang, Ming-Sheng

    2013-01-01

    Information overload is a serious problem in modern society and many solutions such as recommender systems have been proposed to filter out irrelevant information. In the literature, researchers have mainly dedicated themselves to improving the recommendation performance (accuracy and diversity) of the algorithms while overlooking the influence of the topology of the online user-object bipartite networks. In this paper, we find that some information provided by the bipartite networks is not only redundant but also misleading. With such "less can be more" feature, we design some algorithms to improve the recommendation performance by eliminating some links from the original networks. Moreover, we propose a hybrid method combining the time-aware and topology-aware link removal algorithms to extract the backbone which contains the essential information for the recommender systems. From the practical point of view, our method can improve the performance and reduce the computational time of the recommendation system, thus improve both of...

  2. Addition of Selenium to Carica papaya Linn Pulp Extract Enhances ...

    African Journals Online (AJOL)

    Atomic absorption spectrophotometry was used for the analysis of the elements. … Wound area was monitored with a camera and evaluated by software. … wound healing, while the negative control took 14 days and other treatment … extract contained 9 % protein but no tannin, while the water extract contained … distilled water.

  3. THE METHODS OF EXTRACTING WATER INFORMATION FROM SPOT IMAGE

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Some techniques and methods for deriving water information from SPOT-4 (XI) imagery were investigated and discussed in this paper. An algorithm of decision-tree (DT) classification, which includes several classifiers based on the spectral response characteristics of water bodies and other objects, was developed and put forward to delineate water bodies. Another algorithm of decision-tree classification based on both spectral characteristics and auxiliary information from DEM and slope (DTDS) was also designed for water-body extraction. In addition, the supervised classification method of maximum-likelihood classification (MLC) and the unsupervised method of the interactive self-organizing data analysis technique (ISODATA) were used to extract water bodies for comparison purposes. An index was designed and used to assess the accuracy of the different methods adopted in the research. Results have shown that water extraction accuracy varied with the technique applied: it was low using ISODATA, very high using the DT algorithm, and much higher using both DTDS and MLC.
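
    A toy sketch of a spectral decision-tree (DT) classifier of this kind, in Python, is shown below; the band combinations and every threshold are illustrative placeholders rather than the paper's fitted values, and the slope test indicates how the DTDS variant folds in DEM-derived auxiliary information.

    ```python
    def is_water_dt(green, nir, swir, slope=None):
        """Toy decision-tree water classifier for SPOT-4 (XI)-style bands
        (reflectances in [0, 1]). All thresholds are illustrative only."""
        # Water reflects in the green band and absorbs strongly in the NIR.
        ndwi = (green - nir) / (green + nir + 1e-9)
        if ndwi <= 0.1:
            return False
        if swir >= 0.08:        # water is very dark in the SWIR band
            return False
        # DTDS-style auxiliary rule: terrain shadows mimic water spectrally
        # but sit on steep slopes, so a DEM-derived slope test rejects them.
        if slope is not None and slope > 8.0:
            return False
        return True

    print(is_water_dt(green=0.10, nir=0.04, swir=0.02, slope=1.0))  # True
    ```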

  4. Digital image processing for information extraction.

    Science.gov (United States)

    Billingsley, F. C.

    1973-01-01

    The modern digital computer has made practical image processing techniques for handling nonlinear operations in both the geometrical and the intensity domains, various types of nonuniform noise cleanup, and the numerical analysis of pictures. An initial requirement is that a number of anomalies caused by the camera (e.g., geometric distortion, MTF roll-off, vignetting, and nonuniform intensity response) must be taken into account or removed to avoid their interference with the information extraction process. Examples illustrating these operations are discussed along with computer techniques used to emphasize details, perform analyses, classify materials by multivariate analysis, detect temporal differences, and aid in human interpretation of photos.

  5. Extraction of information from unstructured text

    Energy Technology Data Exchange (ETDEWEB)

    Irwin, N.H.; DeLand, S.M.; Crowder, S.V.

    1995-11-01

    Extracting information from unstructured text has become an emphasis in recent years due to the large amount of text now electronically available. This status report describes the findings and work done by the end of the first year of a two-year LDRD. Requirements of the approach included that it model the information in a domain-independent way. This means that it would differ from current systems by not relying on previously built domain knowledge and that it would do more than keyword identification. Three areas that are discussed and expected to contribute to a solution include (1) identifying key entities through document-level profiling and preprocessing, (2) identifying relationships between entities through sentence-level syntax, and (3) combining the first two with semantic knowledge about the terms.

  6. A Public Opinion Survey on Correctional Education: Does Additional Information on Efficacy Lead to Additional Support?

    Science.gov (United States)

    Waterland, Keri Lynn

    2009-01-01

    Though much research has been done on the efficacy of correctional education on reducing recidivism rates for prison inmates, there is little research on the effect that information about the efficacy of correctional education has on public opinion. This study examined whether providing additional information regarding the efficacy of correctional…

  7. Extracting the information backbone in online system.

    Directory of Open Access Journals (Sweden)

    Qian-Ming Zhang

    Full Text Available Information overload is a serious problem in modern society and many solutions such as recommender systems have been proposed to filter out irrelevant information. In the literature, researchers have been mainly dedicated to improving the recommendation performance (accuracy and diversity) of the algorithms while they have overlooked the influence of the topology of the online user-object bipartite networks. In this paper, we find that some information provided by the bipartite networks is not only redundant but also misleading. With such "less can be more" feature, we design some algorithms to improve the recommendation performance by eliminating some links from the original networks. Moreover, we propose a hybrid method combining the time-aware and topology-aware link removal algorithms to extract the backbone which contains the essential information for the recommender systems. From the practical point of view, our method can improve the performance and reduce the computational time of the recommendation system, thus improving both their effectiveness and efficiency.

  8. Extracting the Information Backbone in Online System

    Science.gov (United States)

    Zhang, Qian-Ming; Zeng, An; Shang, Ming-Sheng

    2013-01-01

    Information overload is a serious problem in modern society and many solutions such as recommender system have been proposed to filter out irrelevant information. In the literature, researchers have been mainly dedicated to improving the recommendation performance (accuracy and diversity) of the algorithms while they have overlooked the influence of topology of the online user-object bipartite networks. In this paper, we find that some information provided by the bipartite networks is not only redundant but also misleading. With such “less can be more” feature, we design some algorithms to improve the recommendation performance by eliminating some links from the original networks. Moreover, we propose a hybrid method combining the time-aware and topology-aware link removal algorithms to extract the backbone which contains the essential information for the recommender systems. From the practical point of view, our method can improve the performance and reduce the computational time of the recommendation system, thus improving both of their effectiveness and efficiency. PMID:23690946

  9. Effect of Apacox, a feed additive containing herb extracts and ...

    African Journals Online (AJOL)

    … submitted to conventional spectrophotometry (Shimadzu Model UV-160A, Tokyo) … oregano essential oil on performance of chickens and on iron-induced lipid oxidation of breast and thigh … in crude plant extract using high-performance liquid chromatography with UV-visible photodiode-array detection mass spectrometry.

  10. Extracts of medicinal plants as functional beer additives

    Directory of Open Access Journals (Sweden)

    Đorđević Sofija

    2016-01-01

    Full Text Available This paper determines the level of antioxidant activity of beer to which sensorially acceptable amounts of selected extracts of medicinal plants were added, with the aim of obtaining a beer with increased functional and new sensory features. For the purposes of this study, a commercial lager beer of the Pils type and extracts of the herbal drugs Melissae folium, Thymi herba, Juniperi fructus, Urticae radix and Lupuli strobuli were used. Total phenols were analyzed by the method of Folin-Ciocalteu, and the antioxidant activity of samples using the FRAP and DPPH tests. Sensory evaluation of the beer was conducted on 80 subjects, using a nine-level hedonic scale. The results showed that the content of total phenols was highest in the beers to which thyme, juniper and lemon balm were added (384.22, 365.38 and 363.08 mg GAE/L, respectively), representing increases of 37.09, 30.36 and 29.55 %, respectively, compared to the commercial lager beer. Values of antioxidant activity were correlated with the content of total phenols. The extract of lemon balm blended best with the baseline commercial lager beer in terms of sensory acceptability. The new beer, enriched with lemon balm, had a pleasant, appealing and harmonious flavor and aroma.

  11. Extraction of quantifiable information from complex systems

    CERN Document Server

    Dahmen, Wolfgang; Griebel, Michael; Hackbusch, Wolfgang; Ritter, Klaus; Schneider, Reinhold; Schwab, Christoph; Yserentant, Harry

    2014-01-01

    In April 2007, the  Deutsche Forschungsgemeinschaft (DFG) approved the  Priority Program 1324 “Mathematical Methods for Extracting Quantifiable Information from Complex Systems.” This volume presents a comprehensive overview of the most important results obtained over the course of the program.   Mathematical models of complex systems provide the foundation for further technological developments in science, engineering and computational finance.  Motivated by the trend toward steadily increasing computer power, ever more realistic models have been developed in recent years. These models have also become increasingly complex, and their numerical treatment poses serious challenges.   Recent developments in mathematics suggest that, in the long run, much more powerful numerical solution strategies could be derived if the interconnections between the different fields of research were systematically exploited at a conceptual level. Accordingly, a deeper understanding of the mathematical foundations as w...

  12. Reference Information Extraction and Processing Using Conditional Random Fields

    Directory of Open Access Journals (Sweden)

    Tudor Groza

    2012-06-01

    Full Text Available Fostering both the creation and the linking of data with the scope of supporting the growth of the Linked Data Web requires us to improve the acquisition and extraction mechanisms of the underlying semantic metadata. This is particularly important for the scientific publishing domain, where currently most of the datasets are being created in an author-driven, manual manner. In addition, such datasets capture only fragments of the complete metadata, usually omitting important elements such as the references, although they represent valuable information. In this paper we present an approach that aims at dealing with this aspect of extraction and processing of reference information. The experimental evaluation shows that, currently, our solution handles diverse types of reference formats very well, thus making it usable for, or adaptable to, any area of scientific publishing.

  13. SEMANTIC INFORMATION EXTRACTION IN UNIVERSITY DOMAIN

    Directory of Open Access Journals (Sweden)

    Swathi

    2012-07-01

    Full Text Available Today's conventional search engines hardly provide the essential content relevant to the user's search query, because the context and semantics of the user's request are not analyzed to the full extent. Hence the need for semantic web search arises. Semantic web search (SWS) is an emerging area of web search which combines Natural Language Processing and Artificial Intelligence. The objective of the work done here is to design, develop and implement a semantic search engine, SIEU (Semantic Information Extraction in University Domain), confined to the university domain. SIEU uses an ontology as a knowledge base for the information retrieval process. It is not just a mere keyword search; it works one layer above what Google or any other search engine retrieves by analyzing just keywords, since here the query is analyzed both syntactically and semantically. The developed system retrieves web results more relevant to the user query through keyword expansion. The results obtained will be accurate enough to satisfy the request made by the user, and the level of accuracy is enhanced since the query is analyzed semantically. The system will be of great use to developers and researchers who work on the web. The Google results are re-ranked and optimized to provide the relevant links. For ranking, an algorithm has been applied which fetches more apt results for the user query.
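
    A toy illustration of ontology-driven keyword expansion of the kind described is sketched below in Python; the mini university ontology and the expansion rule (add synonyms plus direct subclasses) are invented for illustration and are not SIEU's actual knowledge base.

    ```python
    # Invented mini-ontology: term -> synonyms and narrower (subclass) terms.
    ONTOLOGY = {
        "course":    {"synonyms": ["subject", "class"],
                      "narrower": ["lecture", "lab"]},
        "professor": {"synonyms": ["faculty", "lecturer"],
                      "narrower": []},
    }

    def expand_query(query):
        """Expand each query keyword with its synonyms and subclasses, so
        retrieval can match semantically related pages, not just exact words."""
        terms = []
        for word in query.lower().split():
            entry = ONTOLOGY.get(word, {})
            terms += [word] + entry.get("synonyms", []) + entry.get("narrower", [])
        return terms

    print(expand_query("professor course"))
    # ['professor', 'faculty', 'lecturer', 'course', 'subject', 'class',
    #  'lecture', 'lab']
    ```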

  14. 12 CFR 980.7 - Examinations; requests for additional information.

    Science.gov (United States)

    2010-01-01

    12 CFR 980.7 (New Business Activities; Examinations; requests for additional information): (a) General … whether a new business activity is consistent with the housing finance and community lending mission of the Banks … nothing in this part shall limit the right of the Finance Board at any time to …

  15. Additional energy-information relations in thermodynamics of small systems

    Science.gov (United States)

    Uzdin, Raam

    2017-09-01

    The Clausius inequality form of the second law of thermodynamics relates information changes (entropy) to changes in the first moment of the energy (heat and indirectly also work). Are there similar relations between other moments of the energy distribution, and other information measures, or is the Clausius inequality a one of a kind instance of the energy-information paradigm? If there are additional relations, can they be used to make predictions on measurable quantities? Changes in the energy distribution beyond the first moment (average heat or work) are especially important in small systems which are often very far from thermal equilibrium. The additional energy-information relations (AEIR's), here derived, provide positive answers to the two questions above and add another layer to the fundamental connection between energy and information. To illustrate the utility of the new AEIR's, we find scenarios where the AEIR's yield tighter constraints on performance (e.g., in thermal machines) compared to the second law. To obtain the AEIR's we use the Bregman divergence—a mathematical tool found to be highly suitable for energy-information studies. The quantum version of the AEIR's provides a thermodynamic meaning to various quantum coherence measures. It is intriguing to fully map the regime of validity of the AEIR's and extend the present results to more general scenarios including continuous systems and particles exchange with the baths.
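
    For reference, the Bregman divergence named in the abstract is, in its standard form,

    $$ D_\phi(p \,\|\, q) \;=\; \phi(p) - \phi(q) - \langle \nabla\phi(q),\, p - q \rangle \;\ge\; 0 $$

    for a strictly convex $\phi$. Choosing $\phi(p) = \sum_i p_i \ln p_i$ recovers the relative entropy, whose contractivity underlies the Clausius inequality; the abstract indicates that other choices of $\phi$ are what yield the additional relations tying other information measures to higher moments of the energy.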

  16. Respiratory Information Extraction from Electrocardiogram Signals

    KAUST Repository

    Amin, Gamal El Din Fathy

    2010-12-01

    The Electrocardiogram (ECG) is a tool measuring the electrical activity of the heart, and it is extensively used for diagnosis and monitoring of heart diseases. The ECG signal reflects not only the heart activity but also many other physiological processes. The respiratory activity is a prominent process that affects the ECG signal due to the close proximity of the heart and the lungs. In this thesis, several methods for the extraction of respiratory process information from the ECG signal are presented. These methods allow an estimation of the lung volume and the lung pressure from the ECG signal. The potential benefit of this is to eliminate the corresponding sensors used to measure the respiration activity. A reduction of the number of sensors connected to patients will increase patients’ comfort and reduce the costs associated with healthcare. As a further result, the efficiency of diagnosing respirational disorders will increase since the respiration activity can be monitored with a common, widely available method. The developed methods can also improve the detection of respirational disorders that occur while patients are sleeping. Such disorders are commonly diagnosed in sleeping laboratories where the patients are connected to a number of different sensors. Any reduction of these sensors will result in a more natural sleeping environment for the patients and hence a higher sensitivity of the diagnosis.
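
    One classical technique in this family, sketched below in Python, derives a surrogate respiratory signal from the beat-to-beat modulation of R-peak amplitudes (respiration shifts the heart's electrical axis and the electrode distances, modulating the ECG amplitude); the crude peak detector and the synthetic test signal are assumptions for illustration, not necessarily the thesis's exact method.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def edr_from_ecg(ecg, fs):
        """Amplitude-based ECG-derived respiration (EDR): the sequence of
        beat-to-beat R-peak amplitudes traces the respiratory waveform."""
        # Crude R-peak detector: prominent peaks at least 0.4 s apart.
        peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                              prominence=0.5 * np.std(ecg))
        return peaks / fs, ecg[peaks]  # beat times [s], EDR samples

    # Synthetic test: 1 Hz "heartbeats" amplitude-modulated at 0.25 Hz,
    # mimicking a respiration rate of 15 breaths per minute.
    fs = 250
    t = np.arange(0, 30, 1 / fs)
    ecg = (1 + 0.3 * np.sin(2 * np.pi * 0.25 * t)) * (np.mod(t, 1.0) < 0.02)
    beat_times, edr = edr_from_ecg(ecg, fs)
    print(len(beat_times), "beats; EDR spans",
          edr.min().round(2), "to", edr.max().round(2))
    ```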

  17. Method for Extracting Product Information from TV Commercial

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2011-09-01

    Full Text Available Television (TV) commercials contain important product information that is displayed for only seconds. People who need that information have insufficient time to note it, or even just to read it. This research work focuses on automatically detecting text and extracting important information from TV commercials, to provide the information in real time and for video indexing. We propose a method for product information extraction from TV commercials using a knowledge-based system with a pattern-matching, rule-based method. Implementation and experiments on 50 commercial screenshot images achieved highly accurate results for text extraction and information recognition.

  18. Information Extraction on the Web with Credibility Guarantee

    OpenAIRE

    Nguyen, Thanh Tam

    2015-01-01

    The Web has become the central medium for the valuable sources that information extraction applications draw on. However, such user-generated resources are often plagued by inaccuracies and misinformation due to the inherent openness and uncertainty of the Web. In this work we study the problem of extracting structured information from Web data with a credibility guarantee. The ultimate goal is that not only should the structured information be extracted as completely as possible, but its credibility should also be high. ...

  19. Information Extraction from Large-Multi-Layer Social Networks

    Science.gov (United States)

    2015-08-06

    In this paper we introduce a novel method to extract information from such multi-layer networks, where each type of link forms its own layer. Using the concept … Social networks often encode community structure using multiple distinct …

  20. Automatic Addition of Genre Information in a Japanese Dictionary

    Directory of Open Access Journals (Sweden)

    Raoul BLIN

    2012-10-01

    Full Text Available This article presents the method used for the automatic addition of genre information to the Japanese entries in a Japanese-French dictionary. The dictionary is intended for a wide audience, ranging from learners of Japanese as a second language to researchers. The genre characterization is based on the statistical analysis of corpora representing different genres. We will discuss the selection of genres and corpora, the tool and method of analysis, the difficulties encountered during this analysis and their solutions.

  1. THE METHODS OF EXTRACTING WATER INFORMATION FROM SPOT IMAGE

    Institute of Scientific and Technical Information of China (English)

    DU Jin-kang; FENG Xue-zhi; et al.

    2002-01-01

    Some techniques and methods for deriving water information from SPOT-4 (XI) imagery were investigated and discussed in this paper. An algorithm of decision-tree (DT) classification, which includes several classifiers based on the spectral response characteristics of water bodies and other objects, was developed and put forward to delineate water bodies. Another algorithm of decision-tree classification based on both spectral characteristics and auxiliary information from DEM and slope (DTDS) was also designed for water-body extraction. In addition, the supervised classification method of maximum-likelihood classification (MLC) and the unsupervised method of the interactive self-organizing data analysis technique (ISODATA) were used to extract water bodies for comparison purposes. An index was designed and used to assess the accuracy of the different methods adopted in the research. Results have shown that water extraction accuracy varied with the technique applied: it was low using ISODATA, very high using the DT algorithm, and much higher using both DTDS and MLC.

  2. Factors affecting high-pressure solvent extraction (accelerated solvent extraction) of additives from polymers.

    Science.gov (United States)

    Vandenburg, H J; Clifford, A A; Bartle, K D; Zhu, S A; Carroll, J; Newton, I D; Garden, L M

    1998-05-01

    Irganox 1010 (pentaerythritol tetrakis[3-(3,5-di-tert-butyl-4-hydroxyphenyl)] propionate) is successfully extracted from polypropylene using solvents at high temperatures and pressures in a homemade accelerated solvent extraction system. For example, using freeze-ground polymer, 90% extraction is possible within 5 min with 2-propanol at 150 °C. Extraction curves for 2-propanol and acetone fit well to the "hot ball" model previously developed for supercritical fluid extraction. Diffusion coefficients are determined for extractions with 2-propanol, acetone, and cyclohexane over a range of temperatures, and the activation energies for the diffusion are 134, 107, and 61 kJ/mol, respectively. The lower figures for acetone and cyclohexane indicate that these solvents swell the polymer more than 2-propanol does. At too high a temperature the polymer dissolves in the solvent, which causes blockage of the transfer lines. For maximum extraction rates, the highest temperature for each solvent that avoids dissolution of the polymer should be used. The use of mixed solvents is investigated and shows advantages in some cases, the aim being to produce a solvent that will swell the polymer but not dissolve it.
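
    For reference, the "hot ball" model cited (developed by Bartle, Clifford and co-workers for supercritical fluid extraction) treats each polymer particle as a sphere of radius $r$ from which the additive diffuses; in its usual statement, the fraction of additive remaining after time $t$ is

    $$ \frac{m_t}{m_0} \;=\; \frac{6}{\pi^2} \sum_{n=1}^{\infty} \frac{1}{n^2}\, \exp\!\left( -\frac{n^2 \pi^2 D t}{r^2} \right), $$

    so $\ln(m_t/m_0)$ becomes linear at long times with slope $-\pi^2 D / r^2$, which is how the diffusion coefficients $D$ are read off the extraction curves; an Arrhenius fit $D = D_0 e^{-E_a/RT}$ across temperatures then gives the quoted activation energies.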

  3. Preliminary studies of alternative feed additives for broilers: Alternanthera brasiliana extract, propolis extract and linseed oil

    Directory of Open Access Journals (Sweden)

    MW Biavatti

    2003-05-01

    Full Text Available The influence of alternative treatments using fluidextracts of Alternanthera brasiliana, propolis resin and linseed oil on the performance and blood biochemistry of broilers was evaluated. The study was done with five treatments: basal diet (negative control); basal diet + 40 ppm avilamycin and 120 ppm monensin (positive control); basal diet + A. brasiliana extract (180 mL/200 kg of feed); basal diet + propolis extract (200 mL/200 kg of feed); and basal diet + linseed oil (2.5%, replacing soybean oil). Propolis and A. brasiliana extracts improved broiler performance from 14 to 21 days, whereas linseed oil had no effect. The findings of this experiment revealed that A. brasiliana and propolis extracts can be used as antimicrobials, but further studies are necessary to find the best concentration in broiler diets.

  4. Sample-based XPath Ranking for Web Information Extraction

    NARCIS (Netherlands)

    Jundt, Oliver; van Keulen, Maurice

    Web information extraction typically relies on a wrapper, i.e., program code or a configuration that specifies how to extract some information from web pages at a specific website. Manually creating and maintaining wrappers is a cumbersome and error-prone task. It may even be prohibitive as some

  5. The Agent of extracting Internet Information with Lead Order

    Science.gov (United States)

    Mo, Zan; Huang, Chuliang; Liu, Aijun

    In order to carry out e-commerce better, advanced technologies for accessing business information are urgently needed. An agent is described to deal with the problems of extracting internet information caused by the non-standard and disorganized structure of Chinese websites. The agent comprises three modules, each responsible for one stage of the extraction process. An HTTP-tree method and a Lead algorithm are proposed to generate a lead order, with which the required web pages can be retrieved easily. How to transform the extracted natural-language information into a structured form is also discussed.

  6. Pattern information extraction from crystal structures

    OpenAIRE

    Okuyan, Erhan

    2005-01-01

    Cataloged from PDF version of article. Determining crystal structure parameters of a material is a quite important issue in crystallography. Knowing the crystal structure parameters helps to understand physical behavior of material. For complex structures, particularly for materials which also contain local symmetry as well as global symmetry, obtaining crystal parameters can be quite hard. This work provides a tool that will extract crystal parameters such as primitive vect...

  7. How to retrieve additional information from the multiplicity distributions

    CERN Document Server

    Wilk, Grzegorz

    2016-01-01

    Multiplicity distributions $P(N)$ measured in multiparticle production processes are most frequently described by the Negative Binomial Distribution (NBD). However, with increasing collision energy some systematic discrepancies become more and more apparent. They are usually attributed to the possible multi-source structure of the production process and described using a multi-NBD form of the multiplicity distribution. We investigate the possibility of keeping a single NBD but with its parameters depending on the multiplicity $N$. This is done by modifying the widely known clan model of particle production leading to the NBD form of $P(N)$. This is then confronted with the approach based on the so-called cascade-stochastic formalism which is based on different types of recurrence relations defining $P(N)$. We demonstrate that a combination of both approaches allows the retrieval of additional valuable information from the multiplicity distributions, namely the oscillatory behavior of the counting statistics a...

  8. Real-Time Information Extraction from Big Data

    Science.gov (United States)

    2015-10-01

    Institute for Defense Analyses. Real-Time Information Extraction from Big Data. Jagdeep Shah, Robert M. Rolfe, Francisco L. Loaiza-Lemos. October 7, 2015. Abstract: We are drowning under the 3 Vs (volume, velocity and variety) of big data. Real-time information extraction from big...

  9. An architecture for biological information extraction and representation.

    Science.gov (United States)

    Vailaya, Aditya; Bluvas, Peter; Kincaid, Robert; Kuchinsky, Allan; Creech, Michael; Adler, Annette

    2005-02-15

    Technological advances in biomedical research are generating a plethora of heterogeneous data at a high rate. There is a critical need for extraction, integration and management tools for information discovery and synthesis from these heterogeneous data. In this paper, we present a general architecture, called ALFA, for information extraction and representation from diverse biological data. The ALFA architecture consists of: (i) a networked, hierarchical, hyper-graph object model for representing information from heterogeneous data sources in a standardized, structured format; and (ii) a suite of integrated, interactive software tools for information extraction and representation from diverse biological data sources. As part of our research efforts to explore this space, we have currently prototyped the ALFA object model and a set of interactive software tools for searching, filtering, and extracting information from scientific text. In particular, we describe BioFerret, a meta-search tool for searching and filtering relevant information from the web, and ALFA Text Viewer, an interactive tool for user-guided extraction, disambiguation, and representation of information from scientific text. We further demonstrate the potential of our tools in integrating the extracted information with experimental data and diagrammatic biological models via the common underlying ALFA representation. aditya_vailaya@agilent.com.
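
    As a rough illustration of the kind of structure such a networked hyper-graph object model provides, a toy sketch (not the actual ALFA model; all names and types are hypothetical):

    ```python
    # Toy hyper-graph object model: nodes carry typed attributes, and a
    # hyper-edge may connect any number of nodes, e.g. tying an extracted
    # relation to the document it came from.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        kind: str                 # e.g. "protein", "document"
        attrs: dict = field(default_factory=dict)

    @dataclass
    class HyperEdge:
        label: str                # e.g. "interacts-with"
        members: list = field(default_factory=list)

    p53 = Node("protein", {"name": "TP53"})
    mdm2 = Node("protein", {"name": "MDM2"})
    paper = Node("document", {"pmid": "0000000"})   # placeholder id

    # one hyper-edge ties the extracted interaction to its textual source
    evidence = HyperEdge("interacts-with", [p53, mdm2, paper])
    print([n.attrs for n in evidence.members])
    ```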

  10. Automated extraction of chemical structure information from digital raster images

    Directory of Open Access Journals (Sweden)

    Shedden Kerby A

    2009-02-01

    Full Text Available Abstract Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated to a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper aims to provide critical reviews of these systems and also reports our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams from research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface (and the algorithm parameters can be readily changed) to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy in extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links

  11. Information Extraction from Unstructured Text for the Biodefense Knowledge Center

    Energy Technology Data Exchange (ETDEWEB)

    Samatova, N F; Park, B; Krishnamurthy, R; Munavalli, R; Symons, C; Buttler, D J; Cottom, T; Critchlow, T J; Slezak, T

    2005-04-29

    The Bio-Encyclopedia at the Biodefense Knowledge Center (BKC) is being constructed to allow early detection of emerging biological threats to homeland security. It requires highly structured information extracted from a variety of data sources. However, the quantity of new and vital information available from everyday sources cannot be assimilated by hand, and therefore reliable high-throughput information extraction techniques are much anticipated. In support of the BKC, Lawrence Livermore National Laboratory and Oak Ridge National Laboratory, together with the University of Utah, are developing an information extraction system built around the bioterrorism domain. This paper reports two important pieces of our effort integrated in the system: key phrase extraction and semantic tagging. Whereas the two key phrase extraction technologies developed during the course of the project help identify relevant texts, our state-of-the-art semantic tagging system can pinpoint phrases related to emerging biological threats. Also, we are enhancing and tailoring the Bio-Encyclopedia by augmenting semantic dictionaries and extracting details of important events, such as suspected disease outbreaks. Some of these technologies have already been applied to large corpora of free text sources vital to the BKC mission, including ProMED-mail, PubMed abstracts, and the DHS's Information Analysis and Infrastructure Protection (IAIP) news clippings. In order to address the challenges involved in incorporating such large amounts of unstructured text, the overall system is focused on precise extraction of the most relevant information for inclusion in the BKC.

  12. Source-specific Informative Prior for i-Vector Extraction

    DEFF Research Database (Denmark)

    Shepstone, Sven Ewan; Lee, Kong Aik; Li, Haizhou

    2015-01-01

    ...non-informative, since for homogeneous datasets there is no gain in generality in using an informative prior. This work shows that extracting i-vectors for a heterogeneous dataset, containing speech samples recorded from multiple sources, using informative priors instead is applicable, and leads to favorable results...

  13. Querying and Extracting Timeline Information from Road Traffic Sensor Data.

    Science.gov (United States)

    Imawan, Ardi; Indikawati, Fitri Indra; Kwon, Joonho; Rao, Praveen

    2016-08-23

    The escalation of traffic congestion in urban cities has urged many countries to use intelligent transportation system (ITS) centers to collect historical traffic sensor data from multiple heterogeneous sources. By analyzing historical traffic data, we can obtain valuable insights into traffic behavior. Many existing applications have been proposed with limited analysis results because of the inability to cope with several types of analytical queries. In this paper, we propose the QET (querying and extracting timeline information) system, a novel analytical query processing method based on a timeline model for road traffic sensor data. To address query performance, we build a TQ-index (timeline query-index) that exploits spatio-temporal features of timeline modeling. We also propose an intuitive timeline visualization method to display congestion events obtained from specified query parameters. In addition, we demonstrate the benefit of our system through a performance evaluation using a Busan ITS dataset and a Seattle freeway dataset.

  14. Querying and Extracting Timeline Information from Road Traffic Sensor Data

    Directory of Open Access Journals (Sweden)

    Ardi Imawan

    2016-08-01

    Full Text Available The escalation of traffic congestion in urban cities has urged many countries to use intelligent transportation system (ITS) centers to collect historical traffic sensor data from multiple heterogeneous sources. By analyzing historical traffic data, we can obtain valuable insights into traffic behavior. Many existing applications have been proposed with limited analysis results because of the inability to cope with several types of analytical queries. In this paper, we propose the QET (querying and extracting timeline information) system, a novel analytical query processing method based on a timeline model for road traffic sensor data. To address query performance, we build a TQ-index (timeline query-index) that exploits spatio-temporal features of timeline modeling. We also propose an intuitive timeline visualization method to display congestion events obtained from specified query parameters. In addition, we demonstrate the benefit of our system through a performance evaluation using a Busan ITS dataset and a Seattle freeway dataset.
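
    To make the timeline-query idea concrete, a toy sketch: congestion events are stored as sorted time intervals and a query returns those overlapping a window. A real TQ-index would layer spatio-temporal indexing on top of this; the data and names here are illustrative:

    ```python
    # Timeline query over congestion events modeled as (start, end, road)
    # intervals, with times in minutes past midnight.
    from bisect import bisect_right

    events = sorted([
        (420, 510, "Gwangan Bridge"),
        (480, 540, "I-5, Seattle"),
        (1020, 1140, "Haeundae-ro"),
    ])

    def query(window_start, window_end):
        # events are sorted by start time; ignore those starting after the
        # window, then keep those that have not ended before it begins
        hi = bisect_right(events, (window_end, float("inf"), ""))
        return [e for e in events[:hi] if e[1] >= window_start]

    print(query(500, 600))   # -> both morning-rush events
    ```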

  15. Lotus seed epicarp extract as potential antioxidant and anti-obesity additive in Chinese Cantonese Sausage.

    Science.gov (United States)

    Qi, Suijian; Zhou, Delong

    2013-02-01

    The antioxidative activities of a lotus seed epicarp extract at different concentrations (6.25, 12.5, 25, 50 and 100 μg mL⁻¹) in pork homogenates representative of Chinese Cantonese Sausage were evaluated using three methods: thiobarbituric acid-reactive substances (TBARS) values, peroxide values (POVs) and acid values (AVs). The cytotoxic and anti-obesity effects of the lotus seed epicarp extracts were also evaluated using an in vitro 3T3-L1 preadipocyte cell model. Results showed that the lotus seed epicarp extracts were non-toxic and effective in inhibiting preadipocyte differentiation. Supplementation of pork homogenate with lotus seed epicarp extracts was effective in retarding lipid oxidation. Moreover, the antioxidative and preadipocyte differentiation inhibition effects of the lotus seed epicarp extracts were dose-dependent. Thus, the lotus seed epicarp extract might be a good candidate as an antioxidant and anti-obesity natural additive in Chinese Cantonese Sausage.

  16. Addressing Information Proliferation: Applications of Information Extraction and Text Mining

    Science.gov (United States)

    Li, Jingjing

    2013-01-01

    The advent of the Internet and the ever-increasing capacity of storage media have made it easy to store, deliver, and share enormous volumes of data, leading to a proliferation of information on the Web, in online libraries, on news wires, and almost everywhere in our daily lives. Since our ability to process and absorb this information remains…

  17. Mars Target Encyclopedia: Information Extraction for Planetary Science

    Science.gov (United States)

    Wagstaff, K. L.; Francis, R.; Gowda, T.; Lu, Y.; Riloff, E.; Singh, K.

    2017-06-01

    Mars surface targets / and published compositions / Seek and ye will find. We used text mining methods to extract information from LPSC abstracts about the composition of Mars surface targets. Users can search by element, mineral, or target.

  18. Can we replace curation with information extraction software?

    Science.gov (United States)

    Karp, Peter D

    2016-01-01

    Can we use programs for automated or semi-automated information extraction from scientific texts as practical alternatives to professional curation? I show that error rates of current information extraction programs are too high to replace professional curation today. Furthermore, current IEP programs extract single narrow slivers of information, such as individual protein interactions; they cannot extract the large breadth of information extracted by professional curators for databases such as EcoCyc. They also cannot arbitrate among conflicting statements in the literature as curators can. Therefore, funding agencies should not hobble the curation efforts of existing databases on the assumption that a problem that has stymied Artificial Intelligence researchers for more than 60 years will be solved tomorrow. Semi-automated extraction techniques appear to have significantly more potential, based on a review of recent tools that enhance curator productivity. But a full cost-benefit analysis for these tools is lacking. Without such analysis it is possible to expend significant effort developing information-extraction tools that automate small parts of the overall curation workflow without achieving a significant decrease in curation costs.

  19. How to retrieve additional information from the multiplicity distributions

    Science.gov (United States)

    Wilk, Grzegorz; Włodarczyk, Zbigniew

    2017-01-01

    Multiplicity distributions (MDs) P(N) measured in multiparticle production processes are most frequently described by the negative binomial distribution (NBD). However, with increasing collision energy some systematic discrepancies have become more and more apparent. They are usually attributed to the possible multi-source structure of the production process and described using a multi-NBD form of the MD. We investigate the possibility of keeping a single NBD but with its parameters depending on the multiplicity N. This is done by modifying the widely known clan model of particle production leading to the NBD form of P(N). This is then confronted with the approach based on the so-called cascade-stochastic formalism which is based on different types of recurrence relations defining P(N). We demonstrate that a combination of both approaches allows the retrieval of additional valuable information from the MDs, namely the oscillatory behavior of the counting statistics apparently visible in the high energy data.
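
    The recurrence-relation view is easy to make concrete for the plain NBD, where the ratio P(N+1)/P(N) is fixed by a function g(N) that is linear in N; the modified clan model discussed above effectively lets the NBD parameters depend on N, deforming g(N). A sketch with illustrative parameters:

    ```python
    # Plain NBD generated from its recurrence relation
    #   (N + 1) P(N+1) = g(N) P(N),  g(N) = p * (k + N),
    # cross-checked against scipy's closed form.
    import numpy as np
    from scipy.stats import nbinom

    k, p = 2.5, 0.6            # illustrative values, not fitted to data
    N_max = 20

    P = np.empty(N_max + 1)
    P[0] = (1.0 - p) ** k      # NBD value at N = 0
    for N in range(N_max):
        P[N + 1] = p * (k + N) / (N + 1) * P[N]

    # scipy convention: nbinom.pmf(N, k, q) with q = 1 - p
    assert np.allclose(P, nbinom.pmf(np.arange(N_max + 1), k, 1.0 - p))
    print(P[:5].round(4))
    ```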

  1. Automatic Addition of Genre Information in a Japanese Dictionary

    Directory of Open Access Journals (Sweden)

    BLIN, Raoul

    2012-10-01

    Full Text Available This article presents the method used for the automatic addition of genre information to the Japanese entries in a Japanese-French dictionary. The dictionary is intended for a wide audience, ranging from learners of Japanese as a second language to researchers. The genre characterization is based on the statistical analysis of corpora representing different genres. We will discuss the selection of genres and corpora, the tool and method of analysis, the difficulties encountered during this analysis and their solutions.

  2. Moving Target Information Extraction Based on Single Satellite Image

    Directory of Open Access Journals (Sweden)

    ZHAO Shihu

    2015-03-01

    Full Text Available The spatial and time variant effects in high-resolution satellite push-broom imaging are analyzed, and a spatial and time variant imaging model is established. A moving target information extraction method is proposed based on a single satellite remote sensing image. The experiment computes the flying speeds of two airplanes using a ZY-3 multispectral image, proving the validity of the spatial and time variant model and the moving target information extraction method.

  3. Pattern information extraction from crystal structures

    Science.gov (United States)

    Okuyan, Erhan; Güdükbay, Uğur; Gülseren, Oğuz

    2007-04-01

    Determining the crystal structure parameters of a material is an important issue in crystallography and material science. Knowing the crystal structure parameters helps in understanding the physical behavior of material. It can be difficult to obtain crystal parameters for complex structures, particularly those materials that show local symmetry as well as global symmetry. This work provides a tool that extracts crystal parameters such as primitive vectors, basis vectors and space groups from the atomic coordinates of crystal structures. A visualization tool for examining crystals is also provided. Accordingly, this work could help crystallographers, chemists and material scientists to analyze crystal structures efficiently. Program summary: Title of program: BilKristal. Catalogue identifier: ADYU_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYU_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Programming language used: C, C++, Microsoft .NET Framework 1.1 and OpenGL libraries. Computer: personal computers with Windows operating system. Operating system: Windows XP Professional. RAM: 20-60 MB. No. of lines in distributed program, including test data, etc.: 899 779. No. of bytes in distributed program, including test data, etc.: 9 271 521. Distribution format: tar.gz. External routines/libraries: Microsoft .NET Framework 1.1; for the visualization tool, the graphics card driver should also support OpenGL. Nature of problem: determining crystal structure parameters of a material is a quite important issue in crystallography. Knowing the crystal structure parameters helps to understand physical behavior of material. For complex structures, particularly for materials which also contain local symmetry as well as global symmetry, obtaining crystal parameters can be quite hard. Solution method: the tool extracts crystal parameters such as primitive vectors, basis vectors and identifies the space group from

  4. Integrating Information Extraction Agents into a Tourism Recommender System

    Science.gov (United States)

    Esparcia, Sergio; Sánchez-Anguix, Víctor; Argente, Estefanía; García-Fornes, Ana; Julián, Vicente

    Recommender systems face some problems. On the one hand, information needs to be kept up to date, which can be a costly task if it is not performed automatically. On the other hand, it may be interesting to include third-party services in the recommendation, since they improve its quality. In this paper, we present an add-on for the Social-Net Tourism Recommender System that uses information extraction and natural language processing techniques in order to automatically extract and classify information from the Web. Its goal is to keep the system updated and to obtain information about third-party services that are not offered by service providers inside the system.

  5. Mining knowledge from text repositories using information extraction: A review

    Indian Academy of Sciences (India)

    Sandeep R Sirsat; Dr Vinay Chavan; Dr Shrinivas P Deshpande

    2014-02-01

    There are two approaches to mining text from online repositories. First, when the knowledge to be discovered is expressed directly in the documents to be mined, Information Extraction (IE) alone can serve as an effective tool for such text mining. Second, when the documents contain concrete data in unstructured form rather than abstract knowledge, Information Extraction (IE) can be used to first transform the unstructured data in the document corpus into a structured database, and then some state-of-the-art data mining algorithms/tools can be used to identify abstract patterns in this extracted data. This paper presents a review of several methods related to these two approaches.

  6. Simultaneous extraction of flavonoids from Chamaecyparis obtusa using deep eutectic solvents as additives of conventional extraction solvents.

    Science.gov (United States)

    Tang, Baokun; Park, Ha Eun; Row, Kyung Ho

    2015-01-01

    Three flavonoids (quercetin, myricetin and amentoflavone) were extracted from Chamaecyparis obtusa leaves using deep eutectic solvents (DESs) as additives to conventional extraction solvents. Sixteen DESs were synthesized from different salts and hydrogen bond donors. C. obtusa was extracted under optimal conditions of methanol as the solvent in a heating process (60°C) for 120 min at a solid/liquid ratio of 80%. Under these optimal conditions, a good linear relationship was observed at analyte concentrations ranging from 5.0 to 200.0 μg/mL (R² > 0.999). The extraction recovery ranged from 96.7 to 103.3%, with inter- and intraday relative standard deviations of <4.97%. Under the optimal conditions, the quantities of quercetin, myricetin and amentoflavone extracted from C. obtusa were 325.90, 8.66 and 50.34 µg/mL, respectively. Overall, DESs are expected to have a wide range of applications. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. 21 CFR 71.4 - Samples; additional information.

    Science.gov (United States)

    2010-04-01

    ... samples of the color additive, articles used as components thereof, or of the food, drug, or cosmetic in... additive, or articles used as components thereof, or of the food, drug, or cosmetic in which the color... respect to the safety of the color additive or the physical or technical effect it produces. The date...

  8. Improving information extraction using a probability-based approach

    DEFF Research Database (Denmark)

    Kim, S.; Ahmed, Saeema; Wallace, K.

    2007-01-01

    or retire. It is becoming essential to retrieve vital information from archived product documents, if it is available. There is, therefore, great interest in ways of extracting relevant and sharable information from documents. A keyword-based search is commonly used, but studies have shown...

  9. Improvement of the cloud point extraction of uranyl ions by the addition of ionic liquids.

    Science.gov (United States)

    Gao, Song; Sun, Taoxiang; Chen, Qingde; Shen, Xinghai

    2013-12-15

    The cloud point extraction (CPE) of uranyl ions by different kinds of extractants in Triton X-114 (TX-114) micellar solution was investigated upon the addition of ionic liquids (ILs) with various anions, i.e., bromide (Br⁻), tetrafluoroborate (BF₄⁻), hexafluorophosphate (PF₆⁻) and bis[(trifluoromethyl)sulfonyl]imide (NTf₂⁻). A significant increase in extraction efficiency was found on the addition of NTf₂⁻-based ILs when using the neutral extractant tri-octylphosphine oxide (TOPO), and the extraction efficiency remained high at both near-neutral and high acidity. However, the CPE with acidic extractants, e.g., bis(2-ethylhexyl) phosphoric acid (HDEHP) and 8-hydroxyquinoline (8-HQ), which are only effective under near-neutral conditions, was not improved by ILs. The results of zeta potential and ¹⁹F NMR measurements indicated that the anion NTf₂⁻ penetrated into the TX-114 micelles and was enriched in the surfactant-rich phase during the CPE process. Meanwhile, NTf₂⁻ may act as a counterion in the CPE of UO₂²⁺ by TOPO. Furthermore, the addition of IL increased the separation factor of UO₂²⁺ and La³⁺, which implied that in the micelle TOPO, NTf₂⁻ and NO₃⁻ established a soft template for UO₂²⁺. Therefore, the combination of CPE and IL provided supramolecular recognition to concentrate UO₂²⁺ efficiently and selectively.

  10. The study of the extraction of 3-D information

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Min Ki [Korea Univ., Seoul (Korea); Kim, Jin Hun; Kim, Hui Yung; Lee, Gi Sik; Lee, Yung Shin [Sokyung Univ., Seoul (Korea)

    1998-04-01

    To extract three-dimensional information from the 3-D real world, two methods are applied: the stereo image method and the virtual reality environment method. 1. Stereo image method: matching methods are applied to pairs of stereo images to find the corresponding points in the two images, and various methods are applied to solve this correspondence problem. 2. Virtual reality environment method: as an alternative way to extract 3-D information, a virtual reality environment is used. It is very useful for finding the 6 DOF of given target points in 3-D space. The accuracy and reliability of the 3-D information are considered. 34 figs., 4 tabs. (Author)
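
    A toy sketch of the correspondence step in the stereo image method, matching patches along one scanline by the sum of absolute differences (synthetic data; the window size and search range are arbitrary choices):

    ```python
    # Block matching: for a pixel at x in the left image, find the shift d
    # minimizing the SAD cost against the right image; depth ~ 1/d.
    import numpy as np

    rng = np.random.default_rng(2)
    left = rng.integers(0, 255, (1, 64)).astype(float)   # synthetic scanline
    right = np.roll(left, -5, axis=1)                    # true shift: 5 px

    def disparity(left, right, x, window=3, max_d=10):
        patch_l = left[0, x:x + window]
        costs = [np.abs(patch_l - right[0, x - d:x - d + window]).sum()
                 for d in range(max_d)]
        return int(np.argmin(costs))

    print(disparity(left, right, x=30))   # -> 5 for this synthetic pair
    ```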

  11. 40 CFR 141.154 - Required additional health information.

    Science.gov (United States)

    2010-07-01

    ... in drinking water is primarily from materials and components associated with service lines and home... following lead-specific information: (1) A short informational statement about lead in drinking water and... the potential for lead exposure by flushing your tap for 30 seconds to 2 minutes before using...

  12. Chromatographic Evaluation and Characterization of Components of Gentian Root Extract Used as Food Additives.

    Science.gov (United States)

    Amakura, Yoshiaki; Yoshimura, Morio; Morimoto, Sara; Yoshida, Takashi; Tada, Atsuko; Ito, Yusai; Yamazaki, Takeshi; Sugimoto, Naoki; Akiyama, Hiroshi

    2016-01-01

    Gentian root extract is used as a bitter food additive in Japan. We investigated the constituents of this extract to acquire the chemical data needed for standardized specifications. Fourteen known compounds were isolated in addition to a mixture of gentisin and isogentisin: anofinic acid, 2-methoxyanofinic acid, furan-2-carboxylic acid, 5-hydroxymethyl-2-furfural, 2,3-dihydroxybenzoic acid, isovitexin, gentiopicroside, loganic acid, sweroside, vanillic acid, gentisin 7-O-primeveroside, isogentisin 3-O-primeveroside, 6'-O-glucosylgentiopicroside, and swertiajaposide D. Moreover, a new compound, loganic acid 7-(2'-hydroxy-3'-O-β-D-glucopyranosyl)benzoate (1), was also isolated. HPLC was used to analyze gentiopicroside and amarogentin, defined as the main constituents of gentian root extract in the List of Existing Food Additives in Japan.

  13. Extraction of Coupling Information From $Z' \\to jj$

    OpenAIRE

    Rizzo, T. G.

    1993-01-01

    An analysis by the ATLAS Collaboration has recently shown, contrary to popular belief, that a combination of strategic cuts, excellent mass resolution, and detailed knowledge of the QCD backgrounds from direct measurements can be used to extract a signal in the $Z' \to jj$ channel in excess of $6\sigma$ for certain classes of extended electroweak models. We explore the possibility that the data extracted from the $Z'$ dijet peak will have sufficient statistical power as to supply information on th...

  14. PDF text classification to leverage information extraction from publication reports.

    Science.gov (United States)

    Bui, Duy Duc An; Del Fiol, Guilherme; Jonnalagadda, Siddhartha

    2016-06-01

    Data extraction from original study reports is a time-consuming, error-prone process in systematic review development. Information extraction (IE) systems have the potential to assist humans in the extraction task; however, the majority of IE systems were not designed to work on Portable Document Format (PDF) documents, an important and common extraction source for systematic reviews. In a PDF document, narrative content is often mixed with publication metadata or semi-structured text, which adds challenges to the underlying natural language processing algorithm. Our goal is to categorize PDF texts for strategic use by IE systems. We used an open-source tool to extract raw texts from a PDF document and developed a text classification algorithm that follows a multi-pass sieve framework to automatically classify PDF text snippets (for brevity, texts) into TITLE, ABSTRACT, BODYTEXT, SEMISTRUCTURE, and METADATA categories. To validate the algorithm, we developed a gold standard of PDF reports that were included in the development of previous systematic reviews by the Cochrane Collaboration. In a two-step procedure, we evaluated (1) classification performance, compared with a machine learning classifier, and (2) the effects of the algorithm on an IE system that extracts clinical outcome mentions. The multi-pass sieve algorithm achieved an accuracy of 92.6%, which was 9.7% higher than the machine learning classifier ... PDF documents. Text classification is an important prerequisite step to leverage information extraction from PDF documents. Copyright © 2016 Elsevier Inc. All rights reserved.
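
    To illustrate the multi-pass sieve idea, a toy classifier in which ordered, high-precision rules fire first and whatever remains falls through to BODYTEXT; the rules below are invented stand-ins, not the authors' actual sieves:

    ```python
    # Multi-pass sieve over PDF text snippets: each pass either assigns a
    # label or defers to the next, more general pass.
    import re

    def classify(snippet, position):
        text = snippet.strip()
        if position == 0 and len(text) < 200:
            return "TITLE"
        if re.match(r"(?i)abstract\b", text):
            return "ABSTRACT"
        if re.search(r"\bdoi:|received.*accepted", text, re.I):
            return "METADATA"
        if sum(ch.isdigit() for ch in text) > 0.3 * max(len(text), 1):
            return "SEMISTRUCTURE"      # e.g. table fragments
        return "BODYTEXT"               # default, final pass

    doc = ["A Trial of X vs Y", "Abstract: We compared ...",
           "Patients were randomized ...", "12.3  45.6  78.9"]
    print([classify(s, i) for i, s in enumerate(doc)])
    ```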

  15. 77 FR 67655 - Agency Information Collection Activities; Proposed Collection; Comment Request; Food Additive...

    Science.gov (United States)

    2012-11-13

    ... Collection; Comment Request; Food Additive Petitions and Investigational Food Additive Exemptions; Extension... comment in response to the notice. This notice solicits comments on food additive petitions regarding... of information technology. Food Additive Petitions and Investigational Food Additive Exemptions,...

  16. Ontology-Based Information Extraction for Business Intelligence

    Science.gov (United States)

    Saggion, Horacio; Funk, Adam; Maynard, Diana; Bontcheva, Kalina

    Business Intelligence (BI) requires the acquisition and aggregation of key pieces of knowledge from multiple sources in order to provide valuable information to customers or feed statistical BI models and tools. The massive amount of information available to business analysts makes information extraction and other natural language processing tools key enablers for the acquisition and use of that semantic information. We describe the application of ontology-based extraction and merging in the context of a practical e-business application for the EU MUSING Project where the goal is to gather international company intelligence and country/region information. The results of our experiments so far are very promising and we are now in the process of building a complete end-to-end solution.

  17. Effect of solvent addition sequence on lycopene extraction efficiency from membrane neutralized caustic peeled tomato waste.

    Science.gov (United States)

    Phinney, David M; Frelka, John C; Cooperstone, Jessica L; Schwartz, Steven J; Heldman, Dennis R

    2017-01-15

    Lycopene is a high-value nutraceutical, and its isolation from waste streams is often desirable to maximize profits. This research investigated the effect of solvent addition order and composition on lycopene extraction efficiency from a commercial tomato waste stream (pH 12.5, solids ∼5%) that was neutralized using membrane filtration. Constant volume dilution (CVD) was used to desalinate the caustic salt to neutralize the waste. Acetone, ethanol and hexane were used as direct or blended additions. Extraction efficiency was defined as the amount of lycopene extracted divided by the total lycopene in the sample. The CVD operation reduced the active alkali of the waste from 0.66 to ... lycopene efficiently from tomato processing byproducts.

  18. The effect of water plant extracts addition on the oxidative stability of meat products

    Directory of Open Access Journals (Sweden)

    Karolina M. Wójciak

    2011-06-01

    Full Text Available Background. Natural antioxidants extracted from plants include catechins and epigallocatechins (green tea), rosmariquinone and rosmaridiphenol (rosemary), and capsaicinoids (red pepper). They can be used as alternatives to synthetic antioxidants because of their equivalent or greater effect on the inhibition of lipid oxidation and the protection of haem pigment (nitrosohemachrome). The aim of the study was to compare the effect of the addition of green tea extract, red pepper extract and rosemary extract during the curing process on colour and lipid stability during refrigerated storage of meat products. Material and methods. The pork meat was ground (10 mm plate) and divided into four equal parts. To the first part (control sample, C) a curing mixture was added in the amount of 2.2% in relation to the meat, dissolved in water. To the remaining parts the same curing mixtures were added in the same proportion, dissolved in 0.5% water plant extracts: green tea (GT), red pepper (P) and rosemary (R), respectively. All samples were left at 4°C for 24 hours. After curing, samples were stuffed in casings and then heated in water until a final internal temperature of 70°C was reached. All samples were stored up to 30 days at 4°C. Acidity, oxidation-reduction potential, thiobarbituric acid reactive substances (TBARS) and surface colour (Hunter L*, a* and b* values) were measured directly after production and after 10, 20 and 30 days of chilled storage. Results. The addition of the plant extracts (pepper, green tea, rosemary) to the pork meat samples did not significantly change the acidity of the samples during chilled storage. All plant extracts effectively reduced lipid oxidation in cooked pork meat compared to the control. Pepper extract was effective in maintaining redness because of its reducing activity (a low redox potential value and low TBARS values in the sample during chilled storage). Conclusions. Addition of pepper extract and green tea extract in

  19. Defense Satellite Communications: DOD Needs Additional Information to Improve Procurements

    Science.gov (United States)

    2015-07-01

    ... information available, DOD spent over $1 billion leasing commercial SATCOM. In prior work, GAO found that some major DOD users of commercial ... Committee on Armed Services, United States Senate. The Department of Defense (DOD) leases commercial satellite communications (SATCOM) to support a variety

  20. The effect of selected supercritical CO2 plant extract addition on user properties of shower gels

    Directory of Open Access Journals (Sweden)

    Vogt Otmar

    2014-12-01

    Full Text Available Formulations of washing cosmetics, i.e. shower gels, containing extracts obtained by the supercritical CO2 extraction process as active ingredients were developed. The subject of the study was the analysis of the physicochemical and user properties of the obtained products. In this work, supercritical CO2 extracts of black currant seeds, strawberry seeds, hop cones and mint leaves were used. The formulation contains a mixture of surfactants (disodium cocoamphodiacetate, disodium laureth sulfosuccinate, cocoamide DEA, cocamidopropyl betaine, sodium laureth sulfate). Various thickening agents were applied to obtain the desired rheological properties of the cosmetics. Among others, sorbitol acetal derivatives, methylhydroxypropylcellulose and C10-30 alkyl acrylate crosspolymer were used. For stable products, the effect of extract addition (black currant seeds, strawberry seeds, mint and hops, obtained from the supercritical CO2 extraction process) on cosmetic properties such as pH, viscosity, detergency and foaming ability was determined. The obtained results showed that the extracts could be used as components of shower gels.

  1. Extracting clinical information to support medical decision based on standards.

    Science.gov (United States)

    Gomoi, Valentin; Vida, Mihaela; Stoicu-Tivadar, Lăcrămioara; Stoicu-Tivadar, Vasile

    2011-01-01

    The paper presents a method for connecting medical databases to a medical decision system, and describes a service created to extract the necessary information, which is transferred based on standards. The medical decision can be improved based on many inputs from different medical locations. The developed solution is described for a concrete case concerning the management of chronic pelvic pain, based on information retrieved from diverse healthcare databases.

  2. Tumor information extraction in radiology reports for hepatocellular carcinoma patients

    Science.gov (United States)

    Yim, Wen-wai; Denman, Tyler; Kwan, Sharon W.; Yetisgen, Meliha

    2016-01-01

    Hepatocellular carcinoma (HCC) is a deadly disease affecting the liver for which there are many available therapies. Targeting treatments towards specific patient groups necessitates defining patients by stage of disease. Criteria for such stagings include information on tumor number, size, and anatomic location, typically only found in narrative clinical text in the electronic medical record (EMR). Natural language processing (NLP) offers an automatic and scalable means to extract this information, which can further evidence-based research. In this paper, we created a corpus of 101 radiology reports annotated for tumor information. Afterwards we applied machine learning algorithms to extract tumor information. Our inter-annotator partial match agreement scored 0.93 and 0.90 F1 for entities and relations, respectively. Based on the annotated corpus, our sequential labeling entity extraction achieved 0.87 F1 partial match, and our maximum entropy classification relation extraction achieved scores of 0.89 and 0.74 F1 with gold and system entities, respectively. PMID:27570686

  3. Information Extraction and Linking in a Retrieval Context

    NARCIS (Netherlands)

    Moens, M.F.; Hiemstra, Djoerd

    We witness a growing interest and capabilities of automatic content recognition (often referred to as information extraction) in various media sources that identify entities (e.g. persons, locations and products) and their semantic attributes (e.g., opinions expressed towards persons or products,

  4. Spatiotemporal Information Extraction from a Historic Expedition Gazetteer

    Directory of Open Access Journals (Sweden)

    Mafkereseb Kassahun Bekele

    2016-11-01

    Full Text Available Historic expeditions are events that are flavored by exploratory, scientific, military or geographic characteristics. Such events are often documented in literature, journey notes or personal diaries. A typical historic expedition involves multiple site visits, and their descriptions contain spatiotemporal and attributive contexts. Expeditions involve movements in space that can be represented by triplet features (location, time and description). However, such features are implicit and innate parts of textual documents. Extracting the geospatial information from these documents requires understanding the contextualized entities in the text. To this end, we developed a semi-automated framework that has multiple Information Retrieval and Natural Language Processing components to extract the spatiotemporal information from a two-volume historic expedition gazetteer. Our framework has three basic components, namely, the Text Preprocessor, the Gazetteer Processing Machine and the JAPE (Java Annotation Pattern Engine) Transducer. We used the Brazilian Ornithological Gazetteer as an experimental dataset and extracted the spatial and temporal entities from entries that refer to three expeditioners' site visits (which took place between 1910 and 1926) and mapped the trajectory of each expedition using the extracted information. Finally, one of the mapped trajectories was manually compared with a historical reference map of that expedition to assess the reliability of our framework.

  5. Preparatory information for third molar extraction: does preference for information and behavioral involvement matter?

    NARCIS (Netherlands)

    van Wijk, A.J.; Buchanan, H.; Coulson, N.; Hoogstraten, J.

    2010-01-01

    Objective: The objectives of the present study were to: (1) evaluate the impact of high versus low information provision in terms of anxiety towards third molar extraction (TME) as well as satisfaction with information provision. (2) Investigate how preference for information and behavioral

  6. Extending a geocoding database by Web information extraction

    Science.gov (United States)

    Wu, Yunchao; Niu, Zheng

    2008-10-01

    Local Search has recently attracted much attention, and its popular architecture is map-and-hyperlinks, which links geo-referenced Web content to a map interface. This architecture shows that a good Local Search depends not only on search engine techniques, but also on a complete geocoding database. The process of building and updating a geocoding database is laborious and time consuming, so it is usually difficult to keep up with changes in the real world. However, the Web provides a rich resource of location-related information, which can serve as a supplementary source for geocoding. Therefore, this paper introduces how to extract geographic information from Web documents to extend a geocoding database. Our approach involves two major steps. First, geographic named entities are identified and extracted from Web content. Then, the named entities are geocoded and put into storage. In this way, we can extend a geocoding database to provide better local Web search services.
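
    A minimal sketch of the harvesting step described here: detect candidate place names in Web text and queue the unknown ones for geocoding. The candidate detector, names and coordinates are all illustrative; a real system would use trained named-entity recognition:

    ```python
    # Extend a geocoding table with location mentions found on the Web.
    import re

    gazetteer = {"Beijing": (39.9042, 116.4074)}   # existing geocoding DB

    web_snippets = [
        "Our new branch in Haidian District opened near Tsinghua Science Park.",
        "Visit us at Beijing Capital Airport, Terminal 3.",
    ]

    candidates = set()
    for text in web_snippets:
        # naive candidate detection: runs of two or more capitalized words
        for m in re.finditer(r"(?:[A-Z][a-z]+ ?){2,}", text):
            candidates.add(m.group().strip())

    # unknown names are queued for geocoding before insertion into the DB
    print(sorted(c for c in candidates if c not in gazetteer))
    ```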

  7. A critical review on the spray drying of fruit extract: effect of additives on physicochemical properties.

    Science.gov (United States)

    Krishnaiah, Duduku; Nithyanandam, Rajesh; Sarbatly, Rosalam

    2014-01-01

    Spray drying accomplishes drying while particles are suspended in the air and is one method in the family of suspended particle processing systems, along with fluid-bed drying, flash drying, spray granulation, spray agglomeration, spray reaction, spray cooling, and spray absorption. This drying process is unique because it involves both particle formation and drying. The present paper reviews spray drying of fruit extracts, such as acai, acerola pomace, gac, mango, orange, cactus pear, opuntia stricta fruit, watermelon, and durian, and the effects of additives on physicochemical properties such as antioxidant activity, total carotenoid content, lycopene and β-carotene content, hygroscopy, moisture content, volatile retention, stickiness, color, solubility, glass transition temperature, bulk density, rehydration, caking, appearance under electron microscopy, and X-ray powder diffraction. The literature clearly demonstrates that the effect of additives and encapsulation play a vital role in determining the physicochemical properties of fruit extract powder. The technical difficulties in spray drying of fruit extracts can be overcome by modifying the spray dryer design. It also reveals that spray drying is a novel technology for converting fruit extract into powder form.

  8. The Study on Information Extraction Technology of Seismic Damage

    Directory of Open Access Journals (Sweden)

    Huang Zuo-wei

    2013-01-01

    Full Text Available In order to improve information extraction technology for seismic damage assessment and the publishing of earthquake damage information, and based on past earthquake experience, a technical flow for rapid earthquake damage assessment was constructed. This study, taking the Yushu earthquake as an example, examines the framework and establishment of an information service system by means of ArcIMS and distributed database technology. It analyzes some key technologies and builds a web publishing architecture for massive remote sensing images. The system implements the joint application of remote sensing image processing technology, database technology and Web GIS technology; the result provides an important basis for earthquake damage assessment, emergency management and rescue missions.

  9. Extraction of Information from Images using Dewrapping Techniques

    Directory of Open Access Journals (Sweden)

    Khalid Nazim S. A.

    2010-11-01

    Full Text Available An image containing textual information is called a document image. The textual information in document images is useful in areas like vehicle number plate reading, passport reading, cargo container reading and so on. Thus extracting useful textual information from document images plays an important role in many applications. One of the major challenges in camera document analysis is dealing with wrap and perspective distortions. In spite of the prevalence of dewrapping techniques, there is no standard efficient algorithm for performance evaluation that concentrates on visualization. Wrapping is a common appearance of document images before recognition. The document images were captured with a 2-megapixel mobile camera, and a database was developed with variations in background, size and colour, containing wrapped, blurred and clean images. This database is explored and text extraction from the document images is performed. Since no efficient dewrapping techniques have been implemented to date, text is extracted from the wrapped images by maintaining a suitable template database. Further, the text extracted from wrapped or other document images is converted into an editable form such as Notepad or an MS Word document. The experimental results were corroborated on various objects of the database.

  10. Rapid automatic keyword extraction for information retrieval and analysis

    Science.gov (United States)

    Rose, Stuart J; Cowley, Wendy E; Crow, Vernon L; Cramer, Nicholas O (Richland, WA)

    2012-03-06

    Methods and systems for rapid automatic keyword extraction for information retrieval and analysis. Embodiments can include parsing words in an individual document by delimiters, stop words, or both in order to identify candidate keywords. Word scores for each word within the candidate keywords are then calculated based on a function of co-occurrence degree, co-occurrence frequency, or both. Based on a function of the word scores for words within the candidate keyword, a keyword score is calculated for each of the candidate keywords. A portion of the candidate keywords are then extracted as keywords based, at least in part, on the candidate keywords having the highest keyword scores.
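
    The abstract effectively specifies the algorithm, so a compact sketch is straightforward (the stop-word list and input text are illustrative):

    ```python
    # Compact RAKE sketch: candidate keywords are maximal runs of non-stop
    # words; each word is scored by co-occurrence degree / frequency, and a
    # candidate's score is the sum over its member words.
    import re
    from collections import defaultdict

    STOP = {"of", "for", "and", "the", "a", "an", "in", "is", "on",
            "to", "are", "used"}

    def rake(text, top=3):
        words = re.findall(r"[a-z]+", text.lower())
        phrases, current = [], []
        for w in words:                 # delimit candidates at stop words
            if w in STOP:
                if current:
                    phrases.append(current)
                    current = []
            else:
                current.append(w)
        if current:
            phrases.append(current)

        freq, degree = defaultdict(int), defaultdict(int)
        for ph in phrases:
            for w in ph:
                freq[w] += 1
                degree[w] += len(ph)    # co-occurrence degree within phrase
        scores = {" ".join(ph): sum(degree[w] / freq[w] for w in ph)
                  for ph in phrases}
        return sorted(scores, key=scores.get, reverse=True)[:top]

    print(rake("Rapid automatic keyword extraction is used for information "
               "retrieval and analysis of document collections"))
    ```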

  11. Extracting Semantic Information from Visual Data: A Survey

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2016-03-01

    Full Text Available The traditional environment maps built by mobile robots include both metric ones and topological ones. These maps are navigation-oriented and not adequate for service robots to interact with or serve human users who normally rely on the conceptual knowledge or semantic contents of the environment. Therefore, the construction of semantic maps becomes necessary for building an effective human-robot interface for service robots. This paper reviews recent research and development in the field of visual-based semantic mapping. The main focus is placed on how to extract semantic information from visual data in terms of feature extraction, object/place recognition and semantic representation methods.

  12. A Modified Time-Delay Addition Method to Extract Resistive Leakage Current of MOSA

    Science.gov (United States)

    Khodsuz, Masume; Mirzaie, Mohammad

    2016-12-01

    Metal oxide surge arresters are among the most important equipment for protecting power systems against switching and lightning over-voltages. High-energy stresses and environmental conditions are the main factors that degrade surge arresters. To verify that surge arresters are in good condition, monitoring is necessary. The majority of surge arrester monitoring techniques are based on decomposing the total leakage current into its capacitive and resistive components. This paper introduces a new approach, based on a time-delay addition method, to extract the resistive current from the total leakage current without measuring the voltage signal. The surge arrester model for calculating leakage current was implemented in ATP-EMTP, and the signal processing was done using MATLAB. Experimental tests were performed to show the accuracy of the resistive leakage current extracted by the proposed method.

  13. Information technology - Telecommunications and information exchange between systems - Private integrated services network - Specification, functional model and information flows - Call interception additional network feature

    CERN Document Server

    International Organization for Standardization. Geneva

    2003-01-01

    Information technology - Telecommunications and information exchange between systems - Private integrated services network - Specification, functional model and information flows - Call interception additional network feature

  14. USE OF SPONGE, Callyspongia basilana EXTRACT AS ADDITIVE MATERIAL ON TIGER SHRIMP CULTURE

    Directory of Open Access Journals (Sweden)

    Rosmiati Rosmiati

    2010-06-01

    Full Text Available Blue shrimp disease is one of the main problems in tiger shrimp culture. It reduces shrimp quality, which eventually decreases its market price. Blue shrimp disease is caused by a deficiency of nutrients and additive materials such as carotene, which function as vitamin sources for important metabolic processes and the formation of the colour profile in shrimp and fish. The aims of this study were to examine the effect of applying the carotenoid extract of the sponge Callyspongia basilana as an additive material on the ability of shrimp to return to a normal state after suffering blue shrimp disease and on the survival rate of shrimp, and to find the optimal concentration of sponge carotenoid extract to cure the diseased shrimp. The study consisted of two steps, namely: (1) extraction of sponge carotenoid by maceration and fractionation using acetone and petroleum ether solvents, and (2) application of the carotenoid extract to the diseased shrimp. The research was arranged in a complete randomized design with four treatments: (A) control (without carotenoid extract); and (B), (C) and (D) carotenoid extract additions of 3 mg/L, 6 mg/L and 9 mg/L, respectively, with three replications each. The test animals were blue diseased tiger shrimp at a density of 15 ind./container, 7.5-9.5 cm in size and with an average weight of 5.5-10.0 g. The study showed that Callyspongia basilana carotenoid extract was able to return blue diseased shrimp to normal within six days at a concentration of 9 mg/L. The highest survival rate was found in treatment D (93.3%), while the lowest was in the control population (13.3%); the other two treatments gave 80.0% (C) and 73.3% (B). The averages of water quality parameters such as temperature, dissolved oxygen, pH, salinity, nitrite and ammonia were in the suitable range for the growth and survival of tiger shrimp.

  15. Post-processing of Deep Web Information Extraction Based on Domain Ontology

    Directory of Open Access Journals (Sweden)

    PENG, T.

    2013-11-01

    Full Text Available Many methods are utilized to extract and process query results in the deep Web, relying on the different structures of Web pages and various design modes of databases. However, some semantic meanings and relations are ignored. In this paper, we present an approach for post-processing deep Web query results based on domain ontology, which can utilize these semantic meanings and relations. A block identification model (BIM) based on node similarity is defined to extract data blocks that are relevant to a specific domain after reducing noisy nodes. A feature vector of domain books is obtained by a result set extraction model (RSEM) based on the vector space model (VSM). RSEM, in combination with BIM, builds the domain ontology on books, which not only removes the limit of Web page structures when extracting data information, but also makes use of the semantic meanings of the domain ontology. After extracting basic information from Web pages, a ranking algorithm is adopted to offer an ordered list of data records to users. Experimental results show that BIM and RSEM extract data blocks and build the domain ontology accurately. In addition, relevant data records and basic information are extracted and ranked. The precision and recall show that our proposed method is feasible and efficient.
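
    The vector-space side of this approach can be sketched compactly: represent each candidate block as a term-frequency vector and rank blocks by cosine similarity to a domain feature vector (toy data, not the paper's actual features):

    ```python
    # Rank text blocks by cosine similarity to a "book domain" vector.
    import math
    from collections import Counter

    def tf_vector(text):
        return Counter(text.lower().split())

    def cosine(u, v):
        dot = sum(u[t] * v[t] for t in u if t in v)
        norm = math.sqrt(sum(x * x for x in u.values())) * \
               math.sqrt(sum(x * x for x in v.values()))
        return dot / norm if norm else 0.0

    domain = tf_vector("book title author publisher isbn price edition")
    blocks = [
        "title Deep Web Mining author J Smith publisher Acme isbn 123 price 30",
        "home login register contact about sitemap",      # noisy block
    ]
    ranked = sorted(blocks, key=lambda b: cosine(tf_vector(b), domain),
                    reverse=True)
    print(ranked[0])
    ```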

  16. Earth Science Data Analytics: Preparing for Extracting Knowledge from Information

    Science.gov (United States)

    Kempler, Steven; Barbieri, Lindsay

    2016-01-01

    Data analytics is the process of examining large amounts of data of a variety of types to uncover hidden patterns, unknown correlations and other useful information. Data analytics is a broad term that includes data analysis, as well as an understanding of the cognitive processes an analyst uses to understand problems and explore data in meaningful ways. Analytics also includes data extraction, transformation, and reduction, utilizing specific tools, techniques, and methods. Turning to data science, definitions of data science sound very similar to those of data analytics (which leads to a lot of the confusion between the two). But the skills needed for both, co-analyzing large amounts of heterogeneous data, understanding and utilizing relevant tools and techniques, and subject matter expertise, although similar, serve different purposes. Data analytics takes a practitioner's approach, applying expertise and skills to solve issues and gain subject knowledge. Data science is more theoretical (research in itself) in nature, providing strategic actionable insights and new innovative methodologies. Earth Science Data Analytics (ESDA) is the process of examining, preparing, reducing, and analyzing large amounts of spatial (multi-dimensional), temporal, or spectral data using a variety of data types to uncover patterns, correlations and other information, to better understand our Earth. The large variety of datasets (temporal and spatial differences, data types, formats, etc.) invites the need for data analytics skills that cover the science domain as well as data preparation, reduction, and analysis techniques from a practitioner's point of view. The application of these skills to ESDA is the focus of this presentation. The Earth Science Information Partners (ESIP) Federation Earth Science Data Analytics (ESDA) Cluster was created in recognition of the practical need to facilitate the co-analysis of large amounts of data and information for Earth science. Thus, from a to

  17. Advanced applications of natural language processing for performing information extraction

    CERN Document Server

    Rodrigues, Mário

    2015-01-01

    This book explains how to create information extraction (IE) applications that are able to tap the vast amount of relevant information available in natural language sources: Internet pages, official documents such as laws and regulations, books and newspapers, and the social web. Readers are introduced to the problem of IE and its current challenges and limitations, supported with examples. The book discusses the need to fill the gap between documents, data, and people, and provides a broad overview of the technology supporting IE. The authors present a generic architecture for developing systems that are able to learn how to extract relevant information from natural language documents, and illustrate how to implement working systems using state-of-the-art and freely available software tools. The book also discusses concrete applications illustrating IE uses. Provides an overview of state-of-the-art technology in information extraction (IE), discussing achievements and limitations for t...

  18. Robust Vehicle and Traffic Information Extraction for Highway Surveillance

    Directory of Open Access Journals (Sweden)

    C.-C. Jay Kuo

    2005-08-01

    Full Text Available A robust vision-based traffic monitoring system for vehicle and traffic information extraction is developed in this research. It is challenging to maintain detection robustness at all times for a highway surveillance system. There are three major problems in detecting and tracking a vehicle: (1) the moving cast shadow effect, (2) the occlusion effect, and (3) nighttime detection. For moving cast shadow elimination, a 2D joint vehicle-shadow model is employed. For occlusion detection, a multiple-camera system is used to detect occlusion so as to extract the exact location of each vehicle. For vehicle nighttime detection, a rear-view monitoring technique is proposed to maintain tracking and detection accuracy. Furthermore, we propose a method to improve the accuracy of background extraction, which usually serves as the first step in any vehicle detection processing. Experimental results are given to demonstrate that the proposed techniques are effective and efficient for vision-based highway surveillance.

  19. Information extraction from the GER 63-channel spectrometer data

    Science.gov (United States)

    Kiang, Richard K.

    1993-09-01

    The unprecedented data volume in the era of NASA's Mission to Planet Earth (MTPE) demands innovative information extraction methods and advanced processing techniques. Neural network techniques, which are intrinsic to distributed parallel processing and have shown promising results in analyzing remotely sensed data, could become essential tools in the MTPE era. To evaluate the information content of data with higher dimension and the usefulness of neural networks in analyzing them, measurements from the GER 63-channel airborne imaging spectrometer over Cuprite, Nevada, are used. The data are classified with a 3-layer Perceptron of various architectures. It is shown that the neural network can achieve a level of performance similar to conventional methods, without the need for an explicit feature extraction step.
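    A minimal sketch of this kind of classification, assuming scikit-learn and substituting synthetic 63-channel spectra for the actual GER imaging spectrometer data; the class prototypes and noise level are invented:

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        n_classes, n_per_class, n_channels = 4, 100, 63
        # Synthetic spectra: one noisy prototype signature per surface class.
        prototypes = rng.random((n_classes, n_channels))
        X = np.vstack([p + 0.05 * rng.standard_normal((n_per_class, n_channels))
                       for p in prototypes])
        y = np.repeat(np.arange(n_classes), n_per_class)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        # A 3-layer Perceptron: input layer, one hidden layer, output layer.
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
        clf.fit(X_tr, y_tr)
        print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")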

  20. Extracting Firm Information from Administrative Records: The ASSD Firm Panel

    OpenAIRE

    Fink, Martina; Segalla, Esther; Weber, Andrea; Zulehner, Christine

    2010-01-01

    This paper demonstrates how firm information can be extracted from administrative social security records. We use the Austrian Social Security Database (ASSD) and derive firms from employer identifiers in the universe of private sector workers. To correctly pin down entries and exits we use a worker flow approach which follows clusters of workers as they move across administrative entities. This procedure enables us to define different types of entry and exit such as start-ups, spinoffs, closur...

  1. OCR++: A Robust Framework For Information Extraction from Scholarly Articles

    OpenAIRE

    Singh, Mayank; Barua, Barnopriyo; Palod, Priyank; Garg, Manvi; Satapathy, Sidhartha; Bushi, Samuel; Ayush, Kumar; Rohith, Krishna Sai; Gamidi, Tulasi; Goyal, Pawan; Mukherjee, Animesh

    2016-01-01

    This paper proposes OCR++, an open-source framework designed for a variety of information extraction tasks from scholarly articles, including metadata (title, author names, affiliation and e-mail), structure (section headings and body text, table and figure headings, URLs and footnotes) and bibliography (citation instances and references). We analyze a diverse set of scientific articles written in the English language to understand generic writing patterns and formulate rules to develop this hybri...

  2. A new method for precursory information extraction: Slope-difference information method

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A new method for precursory information extraction, i.e., the slope-difference information method, is proposed in this paper for daily-mean-value precursory data sequences. Taking Tangshan station as an example, calculations on full-time-domain leveling data are made, tested, and compared with several other methods. The results indicate that the method is very effective for extracting short-term precursory information from daily mean values once optimization is performed. Therefore, it is valuable for popularization and application.

  3. Extraction of hidden information by efficient community detection in networks

    CERN Document Server

    Lee, Juyong; Lee, Jooyoung

    2012-01-01

    Currently, we are overwhelmed by a deluge of experimental data, and network physics has the potential to become an invaluable method to increase our understanding of large interacting datasets. However, this potential is often unrealized for two reasons: uncovering the hidden community structure of a network, known as community detection, is difficult, and further, even if one has an idea of this community structure, it is not a priori obvious how to efficiently use this information. Here, to address both of these issues, we, first, identify optimal community structure of given networks in terms of modularity by utilizing a recently introduced community detection method. Second, we develop an approach to use this community information to extract hidden information from a network. When applied to a protein-protein interaction network, the proposed method outperforms current state-of-the-art methods that use only the local information of a network. The method is generally applicable to networks from many areas.

  4. A Semantic Approach for Geospatial Information Extraction from Unstructured Documents

    Science.gov (United States)

    Sallaberry, Christian; Gaio, Mauro; Lesbegueries, Julien; Loustau, Pierre

    Local cultural heritage document collections are characterized by their content, which is strongly attached to a territory and its land history (i.e., geographical references). Our contribution aims at making the content retrieval process more efficient whenever a query includes geographic criteria. We propose a core model for a formal representation of geographic information. It takes into account characteristics of different modes of expression, such as written language, captures of drawings, maps, photographs, etc. We have developed a prototype that fully implements geographic information extraction (IE) and geographic information retrieval (IR) processes. All PIV prototype processing resources are designed as Web Services. We propose a geographic IE process based on semantic treatment as a supplement to classical IE approaches. We implement geographic IR by using intersection computing algorithms that seek out any intersection between formal geocoded representations of geographic information in a user query and similar representations in document collection indexes.

  5. Impact of green tea extract addition on oxidative changes in the lipid fraction of pastry products

    Directory of Open Access Journals (Sweden)

    Anna Żbikowska

    2017-03-01

    Full Text Available Background. Alongside flour, fat is the key ingredient of sponge cakes, including those with long shelf lives. It is an unstable food component, whose quality and nutritional safety depend on the composition and presence of oxidation products. Consumption of fat oxidation products adversely affects the human body and contributes to the incidence of a number of medical conditions. Qualitative changes in fats extracted from thermostatted sponge cakes with and without antioxidant additions were determined in this study. Material and methods. Two types of antioxidant were used: natural green tea extract in three doses (0.02%, 0.2% and 1.0%) and synthetic BHA (0.02%), added to 100% solid bakery shortening. Sponge cakes were thermostatted at 63°C for twenty-eight days. The quality of the lipid fraction was analyzed: the amounts of primary (PV) and secondary (AnV) oxidation products were determined, and a Rancimat test was performed. Results. The antioxidants added to the fats varied in the degree to which they inhibited lipid oxidation. The peroxide value after twenty-eight days of thermostatting ranged from 3.57 meq O/kg (BHA) and 11.14 meq O/kg (1% extract) to 62.85 meq O/kg (control sample). In turn, the AnV after the storage period ranged from 4.84 (BHA) and 6.71 (1% extract) to 16.83 (control sample). Conclusion. The best protection against oxidation was achieved by BHA: the longest induction time and the lowest peroxide and anisidine values after twenty-eight days of thermostatting were obtained for this antioxidant. Nonetheless, the results demonstrated that it is possible to use commercially available green tea extract to slow the adverse process of fat oxidation in sponge cake products.

  6. Pharmacopeial HPLC identification methods are not sufficient to detect adulterations in commercial bilberry (Vaccinium myrtillus) extracts. Anthocyanin profile provides additional clues.

    Science.gov (United States)

    Govindaraghavan, Suresh

    2014-12-01

    Current pharmacopeias provide HPLC anthocyanin profiles to identify commercial bilberry extracts. However, the pharmacopeial identification protocols may not be sufficient to distinguish genuine bilberry extracts from adulterated material. This is primarily due to the non-inclusion of literature-reported anthocyanin profile and compositional variations in bilberry sourced from different geographical regions. Using anthocyanin profiles of both authentic bilberry extracts and literature reports, we attempted to provide an appropriate identification protocol for genuine bilberry extracts. We compared HPLC anthocyanin profiles of selected 'suspected' adulterant species and adulterant-spiked bilberry extracts to decipher clues for inferring adulteration. The clues include the appearance of new anthocyanin peaks and changes in the compositional ratios of anthocyanins. In addition, we attempted to identify likely adulterants based on 'economic motivation' and marketplace information, and appropriate clues to identify them in adulterated commercial bilberry extracts.

  7. Extraction of spatial information for low-bandwidth telerehabilitation applications

    Directory of Open Access Journals (Sweden)

    Kok Kiong Tan, PhD

    2014-09-01

    Full Text Available Telemedicine applications, based on two-dimensional (2D) video conferencing technology, have been around for the past 15 to 20 yr. They have been demonstrated to be acceptable for face-to-face consultations and useful for visual examination of wounds and abrasions. However, certain telerehabilitation assessments need the use of spatial information in order to accurately assess the patient's condition, and sending three-dimensional video data over low-bandwidth networks is extremely challenging. This article proposes an innovative way of extracting the key spatial information from the patient's movement during telerehabilitation assessment based on 2D video, and then presenting the extracted data using graph plots alongside the video to help physicians in assessments with minimum burden on existing video data transfer. Some common rehabilitation scenarios are chosen for illustration, and experiments are conducted based on skeletal tracking and color detection algorithms using the Microsoft Kinect sensor. Extracted data are analyzed in detail and their usability discussed.

  8. Transliteration normalization for Information Extraction and Machine Translation

    Directory of Open Access Journals (Sweden)

    Yuval Marton

    2014-12-01

    Full Text Available Foreign name transliterations typically include multiple spelling variants. These variants cause data sparseness and inconsistency problems, increase the Out-of-Vocabulary (OOV) rate, and present challenges for Machine Translation, Information Extraction and other natural language processing (NLP) tasks. This work aims to identify and cluster name spelling variants using a Statistical Machine Translation method: word alignment. The variants are identified by being aligned to the same “pivot” name in another language (the source language in Machine Translation settings). Based on word-to-word translation and transliteration probabilities, as well as the string edit distance metric, names with similar spellings in the target language are clustered and then normalized to a canonical form. With this approach, tens of thousands of high-precision name transliteration spelling variants are extracted from sentence-aligned bilingual corpora in Arabic and English (in both languages). When these normalized name spelling variants are applied to Information Extraction tasks, improvements over strong baseline systems are observed. When applied to Machine Translation tasks, a large improvement potential is shown.
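    The clustering-and-normalization step can be sketched as follows; the pivot alignments, counts, and the distance threshold of 3 are invented placeholders, and the plain Levenshtein routine stands in for the paper's combination of alignment probabilities and edit distance:

        from collections import Counter, defaultdict

        def edit_distance(a, b):
            """Levenshtein distance via a one-row dynamic program."""
            dp = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                prev, dp[0] = dp[0], i
                for j, cb in enumerate(b, 1):
                    prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                             prev + (ca != cb))
            return dp[-1]

        # Hypothetical alignments: (pivot name, target spelling, alignment count).
        aligned = [("muhammad", "Mohammed", 40), ("muhammad", "Muhammad", 55),
                   ("muhammad", "Mohamed", 30), ("muhammad", "Tripoli", 2)]

        clusters = defaultdict(Counter)
        for pivot, spelling, count in aligned:
            clusters[pivot][spelling] += count

        for pivot, variants in clusters.items():
            canonical, _ = variants.most_common(1)[0]
            # Spellings far from the canonical form are likely alignment noise,
            # not genuine transliteration variants.
            kept = [v for v in variants
                    if edit_distance(v.lower(), canonical.lower()) <= 3]
            print(pivot, "->", canonical, "kept variants:", kept)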

  9. [Study on Information Extraction of Clinic Expert Information from Hospital Portals].

    Science.gov (United States)

    Zhang, Yuanpeng; Dong, Jiancheng; Qian, Danmin; Geng, Xingyun; Wu, Huiqun; Wang, Li

    2015-12-01

    Clinic expert information provides important references for residents in need of hospital care. Usually, such information is hidden in the deep web and cannot be directly indexed by search engines. To extract clinic expert information from the deep web, the first challenge is to make a judgment on forms. This paper proposes a novel method based on a domain model, which is a tree structure constructed by the attributes of search interfaces. With this model, search interfaces can be classified to a domain and filled in with domain keywords. Another challenge is to extract information from the returned web pages indexed by search interfaces. To filter the noise information on a web page, a block importance model is proposed. The experiment results indicated that the domain model yielded a precision 10.83% higher than that of the rule-based method, whereas the block importance model yielded an F₁ measure 10.5% higher than that of the XPath method.

  10. Extraction of Coupling Information From $Z' \\to jj$

    CERN Document Server

    Rizzo, T G

    1993-01-01

    An analysis by the ATLAS Collaboration has recently shown, contrary to popular belief, that a combination of strategic cuts, excellent mass resolution, and detailed knowledge of the QCD backgrounds from direct measurements can be used to extract a signal in the $Z' \to jj$ channel in excess of $6\sigma$ for certain classes of extended electroweak models. We explore the possibility that the data extracted from the $Z'$ dijet peak will have sufficient statistical power to supply information on the couplings of the $Z'$, provided it is used in conjunction with complementary results from the $Z' \to \ell^+ \ell^-$ `discovery' channel. We show, for a 1 TeV $Z'$ produced at the SSC, that this technique can provide a powerful new tool with which to identify the origin of $Z'$'s.

  11. Extraction of coupling information from Z'-->jj

    Science.gov (United States)

    Rizzo, Thomas G.

    1993-11-01

    An analysis by the ATLAS Collaboration has recently shown, contrary to popular belief, that a combination of strategic cuts, excellent mass resolution, and detailed knowledge of the QCD backgrounds from direct measurements can be used to extract a signal in the Z'-->jj channel for certain classes of extended electroweak models. We explore the possibility that the data extracted from the Z' dijet peak will have sufficient statistical power to supply information on the couplings of the Z', provided it is used in conjunction with complementary results from the Z'-->l+l- ``discovery'' channel. We show, for a 1 TeV Z' produced at the SSC, that this technique can provide a powerful new tool with which to identify the origin of the Z'. Extensions of this analysis to the CERN LHC as well as to a more massive Z' are discussed.

  12. Extracting Backbones from Weighted Complex Networks with Incomplete Information

    Directory of Open Access Journals (Sweden)

    Liqiang Qian

    2015-01-01

    Full Text Available The backbone is a natural abstraction of a complex network, which can help people understand a networked system in a more simplified form. Traditional backbone extraction methods tend to include many outliers in the backbone. What is more, they often suffer from computational inefficiency: the exhaustive search of all nodes or edges is often prohibitively expensive. In this paper, we propose a backbone extraction heuristic with incomplete information (BEHwII) to find the backbone in a complex weighted network. First, a strict filtering rule is carefully designed to determine which edges are preserved or discarded. Second, we present a local search model that examines part of the edges in an iterative way, relying only on local/incomplete knowledge rather than a global view of the network. Experimental results on four real-life networks demonstrate the advantage of BEHwII over the classic disparity filter method in terms of both effectiveness and efficiency.
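    The classic disparity filter used as the baseline above can be sketched as follows (a minimal version of the significance test of Serrano et al., 2009; networkx is assumed, and the toy graph and alpha value are illustrative):

        import networkx as nx

        def disparity_filter(G, alpha=0.05):
            """Keep an edge if it is statistically significant for either endpoint.

            For a node of degree k, an edge carrying fraction p of the node's
            total strength survives when (1 - p) ** (k - 1) < alpha.
            """
            backbone = nx.Graph()
            for u in G:
                k = G.degree(u)
                if k < 2:
                    continue
                strength = sum(d["weight"] for _, _, d in G.edges(u, data=True))
                for _, v, d in G.edges(u, data=True):
                    p = d["weight"] / strength
                    if (1 - p) ** (k - 1) < alpha:
                        backbone.add_edge(u, v, weight=d["weight"])
            return backbone

        G = nx.Graph()
        G.add_weighted_edges_from([("a", "b", 10.0), ("a", "c", 0.1),
                                   ("a", "d", 0.1), ("b", "c", 5.0)])
        print(disparity_filter(G, alpha=0.3).edges(data=True))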

  13. Knowledge discovery: Extracting usable information from large amounts of data

    Energy Technology Data Exchange (ETDEWEB)

    Whiteson, R.

    1998-12-31

    The threat of nuclear weapons proliferation is a problem of worldwide concern. Safeguards are the key to nuclear nonproliferation, and data is the key to safeguards. The safeguards community has access to a huge and steadily growing volume of data. The advantages of this data-rich environment are obvious: there is a great deal of information which can be utilized. The challenge is to effectively apply proven and developing technologies to find and extract usable information from that data. That information must then be assessed and evaluated to produce the knowledge needed for crucial decision making. Efficient and effective analysis of safeguards data will depend on utilizing technologies to interpret the large, heterogeneous data sets that are available from diverse sources. With an order-of-magnitude increase in the amount of data from a wide variety of technical, textual, and historical sources, there is a vital need to apply advanced computer technologies to support all-source analysis. There are techniques of data warehousing, data mining, and data analysis that can provide analysts with tools to expedite the extraction of usable information from the huge amounts of data to which they have access. Computerized tools can aid analysts by integrating heterogeneous data, evaluating diverse data streams, automating retrieval of database information, prioritizing inputs, reconciling conflicting data, doing preliminary interpretations, discovering patterns or trends in data, and automating some of the simpler prescreening tasks that are time consuming and tedious. Thus knowledge discovery technologies can provide a foundation of support for the analyst. Rather than spending time sifting through often irrelevant information, analysts could use their specialized skills in a focused, productive fashion. This would allow them to make their analytical judgments with more confidence and spend more of their time doing what they do best.

  15. Audio enabled information extraction system for cricket and hockey domains

    CERN Document Server

    Saraswathi, S; B., Sai Vamsi Krishna; S, Suresh Reddy

    2010-01-01

    The proposed system aims at retrieving summarized information from documents collected via a web-based search engine, according to user queries related to the cricket and hockey domains. The system is designed to take voice commands as keywords for search. The parts of speech in the query are extracted using a natural language extractor for English. Based on the keywords, the search is categorized into two types: (1) concept-wise search, where information relevant to the query is retrieved based on the keywords and the concept words related to them, and the retrieved information is summarized using a probabilistic approach and a weighted means algorithm; (2) keyword search, which extracts the results relevant to the query from the highly ranked documents retrieved by the search engine. The relevant search results are retrieved and the keywords are then used for summarization. During summarization the system follows the weighted and probabilistic approaches in order to identify the data comparable to the k...

  16. Using XBRL Technology to Extract Competitive Information from Financial Statements

    Directory of Open Access Journals (Sweden)

    Dominik Ditter

    2011-12-01

    Full Text Available The eXtensible Business Reporting Language, or XBRL, is a reporting format for the automatic and electronic exchange of business and financial data. In XBRL every single reported fact is marked with a unique tag, enabling a full computer-based readout of financial data. It has the potential to improve the collection and analysis of financial data for Competitive Intelligence (e.g., the profiling of publicly available financial statements. The article describes how easily information from XBRL reports can be extracted.
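    A minimal sketch of reading tagged facts out of an XBRL instance with the Python standard library; the tiny document and the us-gaap namespace URI below are illustrative placeholders, not a real filing:

        import xml.etree.ElementTree as ET

        XBRL_SAMPLE = """<?xml version="1.0"?>
        <xbrl xmlns="http://www.xbrl.org/2003/instance"
              xmlns:us-gaap="http://fasb.org/us-gaap/2023">
          <us-gaap:Revenues contextRef="FY2023" unitRef="USD">1200000</us-gaap:Revenues>
          <us-gaap:NetIncomeLoss contextRef="FY2023" unitRef="USD">150000</us-gaap:NetIncomeLoss>
        </xbrl>"""

        GAAP = "{http://fasb.org/us-gaap/2023}"
        root = ET.fromstring(XBRL_SAMPLE)
        # Every fact is a uniquely tagged element, so extraction is a dict build.
        facts = {el.tag.split("}", 1)[1]: float(el.text)
                 for el in root if el.tag.startswith(GAAP)}
        print(facts)  # {'Revenues': 1200000.0, 'NetIncomeLoss': 150000.0}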

  17. A High Accuracy Method for Semi-supervised Information Extraction

    Energy Technology Data Exchange (ETDEWEB)

    Tratz, Stephen C.; Sanfilippo, Antonio P.

    2007-04-22

    Customization to specific domains of dis-course and/or user requirements is one of the greatest challenges for today’s Information Extraction (IE) systems. While demonstrably effective, both rule-based and supervised machine learning approaches to IE customization pose too high a burden on the user. Semi-supervised learning approaches may in principle offer a more resource effective solution but are still insufficiently accurate to grant realistic application. We demonstrate that this limitation can be overcome by integrating fully-supervised learning techniques within a semi-supervised IE approach, without increasing resource requirements.

  18. Study on Gold(Ⅰ) Solvent Extraction from Alkaline Cyanide Solution by TBP with Addition of Surfactant

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    The new solvent extraction system for gold(Ⅰ) from alkaline cyanide solution by TBP with addition of surfactant in aqueous phase was studied. The effect of various factors, such as equilibrium pH, constitution of organic phase, molar ratio of CPB∶Au(CN)2-, extraction time, aqueous/organic phase ratio, different initial gold concentration, equilibrium temperature, different diluent, different types of extractants and surfactants etc., was inspected. The results show that gold(Ⅰ) can be extracted quantitatively by controlling the quantity of surfactant (CPB); both the equilibrium pH and diluent hardly influence percent extraction. Gold(Ⅰ) percent extraction reaches more than 98% under the optimal experimental conditions. 30% vol TBP diluted by sulphonating kerosene can load gold(Ⅰ) to rather high levels. Loading capacity is in excess of 38 g/L. The extraction mechanism is discussed and the overall extraction reaction is deduced.

  19. 75 FR 61572 - Additional Identifying Information Associated With Persons Whose Property and Interests in...

    Science.gov (United States)

    2010-10-05

    ... and Taking Certain Other Actions'' AGENCY: Office of Foreign Assets Control, Treasury. ACTION: Notice. SUMMARY: The Treasury Department's Office of Foreign Assets Control (``OFAC'') is publishing additional... Office of Foreign Assets Control Additional Identifying Information Associated With Persons...

  20. Extraction of Profile Information from Cloud Contaminated Radiances. Appendixes 2

    Science.gov (United States)

    Smith, W. L.; Zhou, D. K.; Huang, H.-L.; Li, Jun; Liu, X.; Larar, A. M.

    2003-01-01

    Clouds act to reduce the signal level and may produce noise, depending on the complexity of the cloud properties and the manner in which they are treated in the profile retrieval process. There are essentially three ways to extract profile information from cloud contaminated radiances: (1) cloud-clearing using spatially adjacent cloud contaminated radiance measurements, (2) retrieval based upon the assumption of opaque cloud conditions, and (3) retrieval or radiance assimilation using a physically correct cloud radiative transfer model which accounts for the absorption and scattering of the radiance observed. Cloud clearing extracts the radiance arising from the clear-air portion of partly clouded fields of view, permitting soundings to the surface or the assimilation of radiances as in the clear field of view case. However, the accuracy of the clear-air radiance signal depends upon the uniformity of cloud height and optical properties across the two fields of view used in the cloud clearing process. The assumption of opaque clouds within the field of view permits relatively accurate profiles to be retrieved down to near cloud top levels, the accuracy near the cloud top level being dependent upon the actual microphysical properties of the cloud. The use of a physically correct cloud radiative transfer model enables accurate retrievals down to cloud top levels and below semi-transparent cloud layers (e.g., cirrus). It should also be possible to assimilate cloudy radiances directly into the model given a physically correct cloud radiative transfer model, using geometric and microphysical cloud parameters retrieved from the radiance spectra as initial cloud variables in the radiance assimilation process. This presentation reviews the above three ways to extract profile information from cloud contaminated radiances. NPOESS Airborne Sounder Testbed-Interferometer radiance spectra and Aqua satellite AIRS radiance spectra are used to illustrate how cloudy radiances can be used
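    For way (1), one common textbook formulation, offered here as a hedged sketch rather than the authors' exact procedure, assumes two adjacent fields of view that share the same clear-column radiance and cloud type and differ only in cloud fraction:

        import numpy as np

        def cloud_clear(r1, r2, n_star):
            """Clear-column radiance from two adjacent cloudy fields of view.

            n_star = N1 / N2 is the ratio of the two cloud fractions and must
            differ from 1; both FOVs must share clear radiance and cloud type.
            """
            return (r1 - n_star * r2) / (1.0 - n_star)

        # Synthetic check: build cloudy radiances from a known clear spectrum.
        r_clear = np.array([95.0, 80.0, 60.0])   # hypothetical channel radiances
        r_cloud = np.array([40.0, 35.0, 30.0])
        n1, n2 = 0.2, 0.5                        # cloud fractions in the two FOVs
        r1 = (1 - n1) * r_clear + n1 * r_cloud
        r2 = (1 - n2) * r_clear + n2 * r_cloud
        print(cloud_clear(r1, r2, n1 / n2))      # recovers r_clear exactly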

  1. Karst rocky desertification information extraction with EO-1 Hyperion data

    Science.gov (United States)

    Yue, Yuemin; Wang, Kelin; Zhang, Bing; Jiao, Quanjun; Yu, Yizun

    2008-12-01

    Karst rocky desertification is a special kind of land desertification developed under intense human impacts on the vulnerable eco-geo-environment of a karst ecosystem. The process of karst rocky desertification results in simultaneous and complex variations of many interrelated soil, rock and vegetation biogeophysical parameters, rendering it difficult to develop simple and robust remote sensing mapping and monitoring approaches. In this study, we aimed to use Earth Observing 1 (EO-1) Hyperion hyperspectral data to extract karst rocky desertification information. A spectral unmixing model based on a Monte Carlo approach was employed to quantify the fractional cover of photosynthetic vegetation (PV), non-photosynthetic vegetation (NPV) and bare substrates. The results showed that the SWIR (1.9-2.35 μm) portions of the spectrum differed significantly among PV, NPV and bare rock spectral properties. There are limitations in using the full optical range or only the SWIR (1.9-2.35 μm) region of Hyperion to decompose images into PV, NPV and bare substrate covers. However, when the tied-SWIR was used, the sub-pixel fractional covers of PV, NPV and bare substrates were accurately estimated. Our study indicates that the "tied-spectrum" method effectively accentuates the spectral characteristics of materials, while a spectral unmixing model based on the Monte Carlo approach is a useful tool to automatically extract mixed ground objects in karst ecosystems. Karst rocky desertification information can be accurately extracted with EO-1 Hyperion. Imaging spectroscopy can provide a powerful methodology toward understanding the extent and spatial pattern of land degradation in karst ecosystems.
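    The core of the unmixing step can be illustrated with a plain linear, non-negative least-squares solve; this is a deterministic simplification of the paper's Monte Carlo approach, and the endmember spectra below are invented:

        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical SWIR endmember spectra: columns are PV, NPV, bare rock.
        E = np.array([[0.30, 0.10, 0.45],
                      [0.25, 0.40, 0.50],
                      [0.10, 0.55, 0.60],
                      [0.05, 0.50, 0.65]])

        true_fractions = np.array([0.2, 0.3, 0.5])
        pixel = E @ true_fractions               # a perfectly mixed pixel

        fractions, _ = nnls(E, pixel)            # non-negative unmixing
        fractions /= fractions.sum()             # enforce sum-to-one
        print(dict(zip(["PV", "NPV", "bare"], fractions.round(3))))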

  2. The effect of filler addition and oven temperature to the antioxidant quality in the drying of Physalis angulata leaf extract obtained by subcritical water extraction

    Science.gov (United States)

    Susanti, R. F.; Natalia, Desy

    2016-11-01

    In traditional medicine, Physalis angulata, well known as ceplukan in Indonesia, has been used to cure several diseases via conventional extraction in hot water. Investigations of Physalis angulata extract activity in modern medicine have typically utilized organic solvents such as ethanol, methanol, chloroform and hexane for extraction. In this research, subcritical water was used as the solvent instead of an organic solvent to extract the Physalis angulata leaf. The focus of this research was the investigation of extract drying conditions in the presence of a filler to preserve the antioxidant quality of the Physalis angulata extract. The filler, which is inert, was added to the extract during drying to help absorb the water while protecting the extract from heat exposure. The effects of filler type, filler concentration and oven drying temperature on antioxidant quality, covering total phenol content and antioxidant activity, were investigated. Aerosil and microcrystalline cellulose (MCC) were utilized as fillers, with concentrations varied from 0-30 wt% for MCC and 0-15 wt% for aerosil. The oven drying temperature was varied from 40-60°C. The results showed that, compared to the extract dried without filler, total phenol content and antioxidant activity were improved upon addition of filler. The higher the filler concentration, the better the antioxidant quality; this was limited, however, by the homogeneity of the filler in the extract. Both variables (oven temperature and concentration) played an important role in improving the extract quality of Physalis angulata leaf, as they relate to the drying time, which can be minimized to protect the extract from heat deterioration. In addition, the filler helped to provide a powder form of the extract instead of the typical extract form, which is sticky and oily.

  3. Additive interaction of carbon dots extracted from soluble coffee and biogenic silver nanoparticles against bacteria

    Science.gov (United States)

    Andrade, Patricia F.; Nakazato, Gerson; Durán, Nelson

    2017-06-01

    The presence of carbon dots (CDs) in carbohydrate-based foods is known, and CDs extracted from coffee grounds and instant coffee have also been reported. CDs from soluble coffee revealed an average size of 4.4 nm. The CDs were well-dispersed in water and fluorescent, and were characterized by XPS, XRD analysis, fluorescence and FTIR spectra. The MIC value determined by serial micro-dilution assays for CDs was 250 μg/mL on S. aureus ATCC 25923 and >1000 μg/mL on E. coli ATCC 25922; for biogenically synthesized silver nanoparticles it was 6.7 μg/mL. A checkerboard assay combining ½ MIC values (125 μg/mL of carbon dots and 3.4 μg/mL of silver nanoparticles), following the fractional inhibitory concentration (FIC) index methodology, gave an FIC value of 1.0 on S. aureus, meaning additive interaction. In general, the unfunctionalized CDs proved inefficient as antibacterial compounds on their own; however, the CDs extracted from coffee powder together with silver nanoparticles appear interesting as an antibacterial association.
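    The FIC arithmetic behind the reported additive interaction is simple enough to reproduce directly; the interpretation thresholds in the comment are one common convention, not taken from this abstract:

        def fic_index(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
            """Fractional inhibitory concentration index for a two-agent combination."""
            return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

        # Values from the abstract: CDs 250 -> 125 ug/mL, AgNPs 6.7 -> 3.4 ug/mL.
        fic = fic_index(250.0, 6.7, 125.0, 3.4)
        # One common convention: <= 0.5 synergy, > 0.5 to 1 additive interaction.
        print(f"FIC index = {fic:.2f}")  # ~1.0 -> additive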

  4. Information extraction with object based support vector machines and vegetation indices

    Science.gov (United States)

    Ustuner, Mustafa; Abdikan, Saygin; Balik Sanli, Fusun

    2016-07-01

    Information extraction from remote sensing data is important for policy and decision makers, as the extracted information provides base layers for many real-world applications. Classification of remotely sensed data is one of the most common methods of extracting information; however, it is still a challenging issue because several factors affect the accuracy of the classification. The resolution of the imagery, the number and homogeneity of land cover classes, the purity of the training data and the characteristics of the adopted classifiers are just some of these challenging factors. Object based image classification has some superiority over pixel based classification for high resolution images, since it uses geometry and structure information besides spectral information. Vegetation indices are also commonly used in the classification process, since they provide additional spectral information for vegetation, forestry and agricultural areas. In this study, the impacts of the Normalized Difference Vegetation Index (NDVI) and Normalized Difference Red Edge Index (NDRE) on the classification accuracy of RapidEye imagery were investigated. Object based Support Vector Machines were implemented for the classification of crop types in the study area, located in the Aegean region of Turkey. Results demonstrated that the incorporation of NDRE increased the overall classification accuracy from 79.96% to 86.80%, whereas NDVI decreased it from 79.96% to 78.90%. Moreover, it was shown that object based classification with RapidEye data gives promising results for crop type mapping and analysis.
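    Both indices are simple band ratios; a sketch on RapidEye's band layout (band 3 red, band 4 red edge, band 5 near-infrared), with invented reflectance values:

        import numpy as np

        def ndvi(nir, red):
            # Normalized Difference Vegetation Index
            return (nir - red) / (nir + red)

        def ndre(nir, red_edge):
            # Normalized Difference Red Edge Index
            return (nir - red_edge) / (nir + red_edge)

        # Invented surface reflectances for two pixels.
        red = np.array([0.08, 0.12])
        red_edge = np.array([0.20, 0.22])
        nir = np.array([0.45, 0.40])
        print("NDVI:", ndvi(nir, red).round(3))
        print("NDRE:", ndre(nir, red_edge).round(3))

    Stacking such index layers alongside the original spectral bands is the usual way to hand them to an object-based SVM classifier as additional features.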

  5. Extraction of hidden information by efficient community detection in networks

    Science.gov (United States)

    Lee, Jooyoung; Lee, Juyong; Gross, Steven

    2013-03-01

    Currently, we are overwhelmed by a deluge of experimental data, and network physics has the potential to become an invaluable method to increase our understanding of large interacting datasets. However, this potential is often unrealized for two reasons: uncovering the hidden community structure of a network, known as community detection, is difficult, and further, even if one has an idea of this community structure, it is not a priori obvious how to efficiently use this information. Here, to address both of these issues, we, first, identify optimal community structure of given networks in terms of modularity by utilizing a recently introduced community detection method. Second, we develop an approach to use this community information to extract hidden information from a network. When applied to a protein-protein interaction network, the proposed method outperforms current state-of-the-art methods that use only the local information of a network. The method is generally applicable to networks from many areas. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No. 20120001222).

  6. 36 CFR 1290.3 - Sources of assassination records and additional records and information.

    Science.gov (United States)

    2010-07-01

    ... Sources of assassination records and additional records and information. Assassination records and... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Sources of assassination records and additional records and information. 1290.3 Section 1290.3 Parks, Forests, and Public Property...

  7. 38 CFR 53.10 - Decision makers, notifications, and additional information.

    Science.gov (United States)

    2010-07-01

    ... RETENTION OF NURSES AT STATE VETERANS HOMES § 53.10 Decision makers, notifications, and additional information. The Chief Consultant, Geriatrics and Extended Care, will make all determinations regarding..., notifications, and additional information. 53.10 Section 53.10 Pensions, Bonuses, and Veterans'...

  8. 49 CFR 260.25 - Additional information for Applicants not having a credit rating.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Additional information for Applicants not having a... Financial Assistance § 260.25 Additional information for Applicants not having a credit rating. Each application submitted by Applicants not having a recent credit rating from one or more nationally...

  9. Can metabolomics in addition to genomics add to prognostic and predictive information in breast cancer?

    Science.gov (United States)

    Howell, Anthony

    2010-11-16

    Genomic data from breast cancers provide additional prognostic and predictive information that is beginning to be used for patient management. The question arises whether additional information derived from other 'omic' approaches such as metabolomics can provide additional information. In an article published this month in BMC Cancer, Borgan et al. add metabolomic information to genomic measures in breast tumours and demonstrate, for the first time, that it may be possible to further define subgroups of patients which could be of value clinically. See research article: http://www.biomedcentral.com/1471-2407/10/628.

  10. Automated Extraction of Substance Use Information from Clinical Texts.

    Science.gov (United States)

    Wang, Yan; Chen, Elizabeth S; Pakhomov, Serguei; Arsoniadis, Elliot; Carter, Elizabeth W; Lindemann, Elizabeth; Sarkar, Indra Neil; Melton, Genevieve B

    2015-01-01

    Within clinical discourse, social history (SH) includes important information about substance use (alcohol, drug, and nicotine use) as key risk factors for disease, disability, and mortality. In this study, we developed and evaluated a natural language processing (NLP) system for automated detection of substance use statements and extraction of substance use attributes (e.g., temporal and status) based on Stanford Typed Dependencies. The developed NLP system leveraged linguistic resources and domain knowledge from a multi-site social history study, Propbank and the MiPACQ corpus. The system attained F-scores of 89.8, 84.6 and 89.4 respectively for alcohol, drug, and nicotine use statement detection, as well as average F-scores of 82.1, 90.3, 80.8, 88.7, 96.6, and 74.5 respectively for extraction of attributes. Our results suggest that NLP systems can achieve good performance when augmented with linguistic resources and domain knowledge when applied to a wide breadth of substance use free text clinical notes.

  11. Chemical characteristic and functional properties of arenga starch-taro (Colocasia esculanta L.) flour noodle with turmeric extracts addition

    Science.gov (United States)

    Ervika Rahayu N., H.; Ariani, Dini; Miftakhussolikhah, E., Maharani P.; Yudi, P.

    2017-01-01

    Arenga starch-taro (Colocasia esculanta L.) flour noodle is an alternative carbohydrate source made from 75% arenga starch and 25% taro flour, but it has a different color from commercial noodle products. The addition of natural color from turmeric may change consumer preference and affect the chemical characteristics and functional properties of the noodle. This research aims to identify the chemical characteristics and functional properties of arenga starch-taro flour noodle with turmeric extract addition. Extraction was performed using five amounts of turmeric rhizome (0.06, 0.12, 0.18, 0.24, and 0.30 g fresh weight/ml water). Noodles were then made, and chemical characteristics (proximate analysis) as well as functional properties (amylose, resistant starch, dietary fiber, antioxidant activity) were evaluated. The results showed that the addition of turmeric extract did not significantly change protein, fat, carbohydrate, amylose, or resistant starch content, while antioxidant activity increased (by 23.41%) with the addition of turmeric extract.

  12. Extraction of neutron spectral information from Bonner-Sphere data

    CERN Document Server

    Haney, J H; Zaidins, C S

    1999-01-01

    We have extended a least-squares method of extracting neutron spectral information from Bonner-sphere data which was previously developed by Zaidins et al. (Med. Phys. 5 (1978) 42). A pulse-height analysis with background stripping is employed, which provides a more accurate count rate for each sphere. Newer response curves by Mares and Schraube (Nucl. Instr. and Meth. A 366 (1994) 461) were included for the moderating spheres and the bare detector which comprise the Bonner spectrometer system. Finally, the neutron energy spectrum of interest was divided, using the philosophy of fuzzy logic, into three trapezoidal regimes corresponding to slow, moderate, and fast neutrons. Spectral data were taken using a PuBe source in two different environments, and the analyzed data are presented for these cases as slow, moderate, and fast neutron fluences. (author)
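    Once the response curves are binned into the three regimes, the unfolding reduces to a small linear system; a sketch with an invented response matrix, where non-negative least squares stands in for the paper's least-squares fit and keeps the fluences physical:

        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical response matrix: rows = spheres, cols = (slow, moderate, fast).
        R = np.array([[0.9, 0.4, 0.1],    # bare detector
                      [0.5, 0.8, 0.3],    # small moderator
                      [0.2, 0.6, 0.9],    # large moderator
                      [0.1, 0.3, 0.7]])   # largest moderator

        true_fluence = np.array([2.0, 1.0, 3.0])   # slow, moderate, fast
        counts = R @ true_fluence                  # idealized stripped count rates

        fluence, _ = nnls(R, counts)               # keeps fluences >= 0
        print(dict(zip(["slow", "moderate", "fast"], fluence.round(3))))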

  13. ONTOGRABBING: Extracting Information from Texts Using Generative Ontologies

    DEFF Research Database (Denmark)

    Nilsson, Jørgen Fischer; Szymczak, Bartlomiej Antoni; Jensen, P.A.

    2009-01-01

    We describe principles for extracting information from texts using a so-called generative ontology in combination with syntactic analysis. Generative ontologies are introduced as semantic domains for natural language phrases. Generative ontologies extend ordinary finite ontologies with rules for producing recursively shaped terms representing the ontological content (ontological semantics) of NL noun phrases and other phrases. We focus here on achieving a robust, often only partial, ontology-driven parsing of and ascription of semantics to a sentence in the text corpus. The aim of the ontological analysis is primarily to identify paraphrases, thereby achieving a search functionality beyond mere keyword search with synsets. We further envisage use of the generative ontology as a phrase-based rather than word-based browser into text corpora.

  14. Domain-independent information extraction in unstructured text

    Energy Technology Data Exchange (ETDEWEB)

    Irwin, N.H. [Sandia National Labs., Albuquerque, NM (United States). Software Surety Dept.

    1996-09-01

    Extracting information from unstructured text has become an important research area in recent years due to the large amount of text now electronically available. This status report describes the findings and work done during the second year of a two-year Laboratory Directed Research and Development Project. Building on the first year's work of identifying important entities, this report details techniques used to group words into semantic categories and to output templates containing selective document content. Using word profiles and category clustering derived during a training run, the time-consuming knowledge-building task can be avoided. Though the output still lacks completeness when compared to systems with domain-specific knowledge bases, the results do look promising. The two approaches are compatible and could complement each other within the same system. Domain-independent approaches retain appeal, as a system that adapts and learns will soon outpace a system with any amount of a priori knowledge.

  15. Evaluation of Yucca schidigera extract as feed additive on performance of broiler chicks in winter season

    Directory of Open Access Journals (Sweden)

    Sarada Prasanna Sahoo

    2015-04-01

    Full Text Available Aim: Yucca schidigera extract has been successfully used as a feed additive in the poultry industry. It enhances growth and productivity in broiler production. Hence, the present study was designed to analyze the effect of Y. schidigera extract on growth, carcass quality and behavior, along with its economic utility in broiler rearing. Materials and Methods: A total of 120 day-old broiler chicks of equal sex ratio were randomly divided into a Yucca supplemented treatment group and a control group, each having 60 birds in three replications of 20. The feeding management and rearing conditions were similar for all the groups as per the standard, except for the Yucca supplementation in the treatment group @ 125 mg/kg of feed. The parameters with respect to growth, carcass, behavior, and litter content were recorded as per standard procedures. Results: Yucca supplementation effectively enhanced growth by 173 g in the 6th week with lower feed intake than the control group, which ultimately proves a better feed conversion rate, protein efficiency ratio, and energy efficiency ratio in broiler production. The eviscerated weight of 58.50% for the treatment group was significantly higher (p<0.05) than 54.10% in the control group. The breast meat yield of the Yucca group (32.23%) was significantly higher (p<0.05) than that of the control (30.33%). Agonistic behavioral expressions were more frequent in the control group than in the treatment group. A profit of 43.68% was obtained by use of Yucca supplementation in the diet on a live weight basis. A numerically lower percentage of moisture was present in the Yucca treated group than in the control. Conclusion: From this study, it can be concluded that Yucca supplementation has an important role in augmenting broilers' growth performance; efficiency in utilizing feed, protein and energy; and survivability. Hence, use of Yucca powder in broiler rations could be beneficial for maintaining the litter quality, which directly

  16. Evaluation of Yucca schidigera extract as feed additive on performance of broiler chicks in winter season.

    Science.gov (United States)

    Sahoo, Sarada Prasanna; Kaur, Daljeet; Sethi, A P S; Sharma, A; Chandra, M

    2015-04-01

    Yucca schidigera extract has been successfully used as a feed additive in the poultry industry. It enhances growth and productivity in broiler production. Hence, the present study was designed to analyze the effect of Y. schidigera extract on growth, carcass quality and behavior, along with its economic utility in broiler rearing. A total of 120 day-old broiler chicks of equal sex ratio were randomly divided into a Yucca supplemented treatment group and a control group, each having 60 birds in three replications of 20. The feeding management and rearing conditions were similar for all the groups as per the standard, except for the Yucca supplementation in the treatment group @ 125 mg/kg of feed. The parameters with respect to growth, carcass, behavior, and litter content were recorded as per standard procedures. Yucca supplementation effectively enhanced growth by 173 g in the 6(th) week with lower feed intake than the control group, which ultimately proves a better feed conversion rate, protein efficiency ratio, and energy efficiency ratio in broiler production. The eviscerated weight of 58.50% for the treatment group was significantly higher (p<0.05) than 54.10% in the control group. The breast meat yield of the Yucca group (32.23%) was significantly higher (p<0.05) than that of the control (30.33%). A profit of 43.68% was obtained by use of Yucca supplementation in the diet on a live weight basis. Numerically, a lower percentage of moisture was present in the Yucca treated group than in the control. From this study, it can be concluded that Yucca supplementation has an important role in augmenting broilers' growth performance, efficiency to utilize feed, protein and energy, and survivability. Hence, use of Yucca powder in broiler rations could be beneficial to maintain the litter quality, which directly enhances the productivity in broiler production without any adverse effect.

  17. 75 FR 77645 - Agency Information Collection Activities; Proposed Collection; Comment Request; Color Additive...

    Science.gov (United States)

    2010-12-13

    ... Collection; Comment Request; Color Additive Certification Requests and Recordkeeping AGENCY: Food and Drug... certification of color additives manufactured for use in foods, drugs, cosmetics or medical devices in the... of information technology. Color Additive Certification Requests and Recordkeeping--21 CFR Part...

  18. Research of information classification and strategy intelligence extract algorithm based on military strategy hall

    Science.gov (United States)

    Chen, Lei; Li, Dehua; Yang, Jie

    2007-12-01

    Constructing a virtual international strategy environment needs many kinds of information, such as economics, politics, military affairs, diplomacy, culture, science, etc. So it is very important to build an automatic information extraction, classification, recombination and analysis management system with high efficiency as the foundation and component of the military strategy hall. This paper first uses an improved boosting algorithm to classify the obtained initial information, and then uses a strategy intelligence extraction algorithm to extract strategy intelligence from the initial information to help strategists analyze it.
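    The first stage (boosted classification of collected documents into strategy-relevant categories) might look like the scikit-learn sketch below; the categories, snippets, and pipeline choices are invented stand-ins for the paper's improved boosting algorithm:

        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline

        docs = ["GDP growth slowed amid trade deficits",
                "troop deployment along the disputed border",
                "new trade tariffs and currency policy",
                "naval exercise and missile tests announced"]
        labels = ["economy", "military", "economy", "military"]

        # TF-IDF features feed boosted decision stumps.
        clf = make_pipeline(TfidfVectorizer(), AdaBoostClassifier(n_estimators=50))
        clf.fit(docs, labels)
        print(clf.predict(["border missile drills reported"]))  # expected: ['military']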

  19. Which is the best grape seed additive for frankfurters: extract, oil or flour?

    Science.gov (United States)

    Özvural, Emin Burçin; Vural, Halil

    2014-03-15

    Grape seed products (winery by-products) are valuable vegetable sources for enhancing the quality of meat products. In this study, 21 treatments of frankfurters in three different groups, including 0%, 0.01%, 0.03%, 0.05%, 0.1%, 0.3% and 0.5% grape seed extract (GSE); 0%, 1%, 2%, 4%, 6%, 8% and 10% grape seed oil (GSO); and 0%, 0.5%, 1%, 2%, 3%, 4% and 5% grape seed flour (GSF), were produced in order to compare the differences among them during refrigerated storage for 90 days. Increasing the level of GSO made the frankfurters lighter in color (P < 0.05). Lipid oxidation of all 21 frankfurter treatments was under the limit of deterioration (2.0 mg malonaldehyde kg⁻¹ treatment) during the 90 days of storage. However, increasing the amount of the additives (GSE, GSO and GSF) led to a decrease in overall acceptability for each group. In a general comparison of the three frankfurter groups in terms of lipid oxidation, the TBARS (thiobarbituric acid reactive substances) values of the frankfurters including GSE and GSF were found to be similar, but the frankfurters containing GSO exhibited the highest lipid oxidation (P < 0.05). While the products including GSE were the most acceptable group in terms of overall acceptability, the group produced with GSF received the lowest scores (P < 0.05). Although the three grape seed products have partially undesirable effects on the sensory characteristics of the frankfurters, all these additives showed different positive influences in the production of frankfurters. The results showed that the group of frankfurters including GSE was the best of the three different groups of products on the basis of the lipid oxidation and overall acceptability results. © 2013 Society of Chemical Industry.

  20. Polymer-additive extraction via pressurized fluids and organic solvents of variously cross-linked poly(methylmethacrylates).

    Science.gov (United States)

    Nazem, N; Taylor, L T

    2002-04-01

    Variously cross-linked poly(methylmethacrylates) (PMMAs) are synthesized with three additives incorporated at theoretically 1000 microg of additive per gram of prepared polymer. The additives are Irganox 1010, Irganox 1076, and Irgafos 168. The in-house synthesized polyacrylates are then subjected to supercritical fluid extraction (SFE) to determine if additive recovery is a function of percent cross-linking. Although considerable work in this regard has been performed with non-cross-linked polyolefins, the literature is lacking regarding polyacrylates. Some additive degradation apparently occurs during the synthesis, as judged by the increased complexity of the extract high-performance liquid chromatographic trace and the low percent recoveries observed, especially for the Irganoxes. For low polymer cross-linking (1%), it appears that both PMMA synthetic reproducibility and readily observed polymer swelling during SFE are serious issues that adversely affect additive percent recovery and the precision of results. Higher percent cross-linking yields more consistent analytical data than low percent cross-linking, even though the amount of additive extracted in all PMMA samples (regardless of cross-linking percentage) is essentially the same whether the extraction is via SFE or liquid-solid extraction with methylene chloride. Results for comparably cross-linked poly(ethylmethacrylate) and poly(butylmethacrylate) are similar to PMMA.

  1. Addition of Grape Seed Extract Renders Phosphoric Acid a Collagen-stabilizing Etchant.

    Science.gov (United States)

    Liu, Y; Dusevich, V; Wang, Y

    2014-08-01

    Previous studies found that grape seed extract (GSE), which is rich in proanthocyanidins, could protect demineralized dentin collagen from collagenolytic activities following clinically relevant treatment. Because of proanthocyanidins' adverse interference with resin polymerization, it was believed that GSE should be applied and then rinsed off in a separate step, which in effect increases the complexity of the bonding procedure. The present study aimed to investigate the feasibility of combining GSE treatment with phosphoric acid etching to address this issue. It is also the first attempt to formulate collagen-cross-linking dental etchants. Based on Fourier-transform infrared spectroscopy and a digestion assay, it was established that in the presence of 20% to 5% phosphoric acid, 30 sec of GSE treatment rendered demineralized dentin collagen inert to bacterial collagenase digestion. Based on this positive result, the simultaneous dentin etching and collagen protection of GSE-containing phosphoric acid were evaluated on the premise of a 30-second etching time. According to micro-Raman spectroscopy, the formulation containing 20% phosphoric acid was found to lead to overetching. Based on scanning and transmission electron microscopy, this same formulation exhibited unsynchronized phosphoric acid and GSE penetration. Therefore, the addition of GSE did render phosphoric acid a collagen-stabilizing etchant, but the preferable phosphoric acid concentration should be <20%.

  2. Linking genes to literature: text mining, information extraction, and retrieval applications for biology.

    Science.gov (United States)

    Krallinger, Martin; Valencia, Alfonso; Hirschman, Lynette

    2008-01-01

    Efficient access to information contained in online scientific literature collections is essential for life science research, playing a crucial role from the initial stage of experiment planning to the final interpretation and communication of the results. The biological literature also constitutes the main information source for manual literature curation used by expert-curated databases. Following the increasing popularity of web-based applications for analyzing biological data, new text-mining and information extraction strategies are being implemented. These systems exploit existing regularities in natural language to extract biologically relevant information from electronic texts automatically. The aim of the BioCreative challenge is to promote the development of such tools and to provide insight into their performance. This review presents a general introduction to the main characteristics and applications of currently available text-mining systems for life sciences in terms of the following: the type of biological information demands being addressed; the level of information granularity of both user queries and results; and the features and methods commonly exploited by these applications. The current trend in biomedical text mining points toward an increasing diversification in terms of application types and techniques, together with integration of domain-specific resources such as ontologies. Additional descriptions of some of the systems discussed here are available on the internet http://zope.bioinfo.cnio.es/bionlp_tools/.

  3. Extracting information in spike time patterns with wavelets and information theory.

    Science.gov (United States)

    Lopes-dos-Santos, Vítor; Panzeri, Stefano; Kayser, Christoph; Diamond, Mathew E; Quian Quiroga, Rodrigo

    2015-02-01

    We present a new method to assess the information carried by temporal patterns in spike trains. The method first performs a wavelet decomposition of the spike trains, then uses Shannon information to select a subset of coefficients carrying information, and finally assesses timing information in terms of decoding performance: the ability to identify the presented stimuli from spike train patterns. We show that the method allows: 1) a robust assessment of the information carried by spike time patterns even when this is distributed across multiple time scales and time points; 2) an effective denoising of the raster plots that improves the estimate of stimulus tuning of spike trains; and 3) an assessment of the information carried by temporally coordinated spikes across neurons. Using simulated data, we demonstrate that the Wavelet-Information (WI) method performs better and is more robust to spike time-jitter, background noise, and sample size than well-established approaches, such as principal component analysis, direct estimates of information from digitized spike trains, or a metric-based method. Furthermore, when applied to real spike trains from monkey auditory cortex and from rat barrel cortex, the WI method allows extracting larger amounts of spike timing information. Importantly, the fact that the WI method incorporates multiple time scales makes it robust to the choice of partly arbitrary parameters such as temporal resolution, response window length, number of response features considered, and the number of available trials. These results highlight the potential of the proposed method for accurate and objective assessments of how spike timing encodes information. Copyright © 2015 the American Physiological Society.
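
    As a rough illustration of the decompose/select/decode recipe described above, here is a minimal sketch assuming PyWavelets and scikit-learn; the synthetic trials, the Haar basis, and the top-10 coefficient cut are placeholder choices, and scikit-learn's mutual-information estimator stands in for the paper's Shannon-information selection step.

```python
import numpy as np
import pywt
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy data: 200 trials x 64 time bins of binned spike counts, 2 stimuli.
stimulus_labels = rng.integers(0, 2, 200)
spike_counts = rng.poisson(1.0, (200, 64)) + stimulus_labels[:, None]

# 1) Wavelet decomposition of each trial (all coefficients, all scales).
coeffs = np.array([np.concatenate(pywt.wavedec(trial, "haar"))
                   for trial in spike_counts])

# 2) Keep the coefficients carrying the most information about the stimulus.
mi = mutual_info_classif(coeffs, stimulus_labels, random_state=0)
selected = coeffs[:, np.argsort(mi)[-10:]]

# 3) Assess timing information as decoding performance.
print("decoding accuracy:",
      cross_val_score(SVC(), selected, stimulus_labels, cv=5).mean())
```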

  4. Response Surface Optimization of Rotenone Using Natural Alcohol-Based Deep Eutectic Solvent as Additive in the Extraction Medium Cocktail

    Directory of Open Access Journals (Sweden)

    Zetty Shafiqa Othman

    2017-01-01

    Full Text Available Rotenone is a biopesticide with a potent effect on aquatic life and insect pests. In Asia, it can be isolated from the roots of Derris species (Derris elliptica and Derris malaccensis). A previous study revealed that an alcohol-based deep eutectic solvent (DES) extracted a high yield of rotenone (an isoflavonoid) with efficiency comparable to a binary ionic liquid solvent system ([BMIM]OTf) and an organic solvent (acetone). Therefore, this study set out to determine the optimum parameters (solvent ratio, extraction time, and agitation rate) for extracting the highest yield of rotenone at much lower cost and in a more environmentally friendly way, using response surface methodology (RSM) based on a central composite rotatable design (CCRD). With RSM, linear polynomial equations were obtained for predicting the concentration and yield of extracted rotenone. A verification experiment confirmed the validity of both predicted models. The results revealed that the optimum solvent ratio, extraction time, and agitation rate were 2 : 8 (DES : acetonitrile), 19.34 hours, and 199.32 rpm, respectively. At the optimum conditions, the rotenone extraction process using the DES binary solvent system gave a 3.5-fold increase in rotenone concentration, 0.49 ± 0.07 mg/ml, and a yield of 0.35 ± 0.06% (w/w) compared to the control extract (acetonitrile only). The rotenone concentration and yield were significantly influenced by the binary solvent ratio and extraction time (P < 0.05) but not by agitation rate. For that reason, optimal extraction conditions using an alcohol-based deep eutectic solvent (DES) as a green additive in the extraction medium cocktail increase the rotenone concentration and yield that can be extracted.
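
    To make the RSM step concrete, the sketch below fits a second-order polynomial to a CCRD-style design and reads the optimum off a grid; the design, the yield surface, and every number are invented for illustration and are not the paper's data.

```python
import numpy as np
from itertools import combinations, product

def quadratic_design(X):
    """Design matrix for the standard second-order RSM model: 1, x_i, x_i^2, x_i*x_j."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

# CCRD-style design in coded units: 2^3 factorial + axial (alpha = 1.68) + center points.
factorial = np.array(list(product([-1, 1], repeat=3)), float)
axial = 1.68 * np.vstack([v * np.eye(3)[i] for i in range(3) for v in (-1, 1)])
X = np.vstack([factorial, axial, np.zeros((2, 3))])

def true_yield(x):
    # Hypothetical response, standing in for measured rotenone yield.
    return 0.35 - 0.04*(x[:, 0]-0.4)**2 - 0.03*(x[:, 1]-0.2)**2 - 0.02*x[:, 2]**2

y = true_yield(X) + np.random.default_rng(1).normal(0, 0.005, len(X))

# Least-squares fit of the response surface, then locate the coded optimum on a grid.
beta, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
grid = np.array(np.meshgrid(*[np.linspace(-1.68, 1.68, 25)] * 3)).reshape(3, -1).T
print("coded optimum:", grid[np.argmax(quadratic_design(grid) @ beta)])
```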

  5. Imaged document information location and extraction using an optical correlator

    Science.gov (United States)

    Stalcup, Bruce W.; Dennis, Phillip W.; Dydyk, Robert B.

    1999-12-01

    Today, the paper document is fast becoming a thing of the past. With the rapid development of fast, inexpensive computing and storage devices, many government and private organizations are archiving their documents in electronic form (e.g., personnel records, medical records, patents, etc.). Many of these organizations are converting their paper archives to electronic images, which are then stored in a computer database. Because of this, there is a need to efficiently organize this data into comprehensive and accessible information resources and provide for rapid access to the information contained within these imaged documents. To meet this need, Litton PRC and Litton Data Systems Division are developing a system, the Imaged Document Optical Correlation and Conversion System (IDOCCS), to provide a total solution to the problem of managing and retrieving textual and graphic information from imaged document archives. At the heart of IDOCCS, optical correlation technology provides a means for the search and retrieval of information from imaged documents. IDOCCS can be used to rapidly search for key words or phrases within the imaged document archives and has the potential to determine the types of languages contained within a document. In addition, IDOCCS can automatically compare an input document with the archived database to determine if it is a duplicate, thereby reducing the overall resources required to maintain and access the document database. Embedded graphics on imaged pages can also be exploited, e.g., imaged documents containing an agency's seal or logo can be singled out. In this paper, we present a description of IDOCCS as well as preliminary performance results and theoretical projections.

  6. Mangiferin has an additive effect on the apoptotic properties of hesperidin in Cyclopia sp. tea extracts.

    Directory of Open Access Journals (Sweden)

    Rafal Bartoszewski

    Full Text Available A variety of biological pro-health activities have been reported for mangiferin and hesperidin, two major phenolic compounds of Honeybush (Cyclopia sp.) tea extracts. Given their increasing popularity, there is a need for understanding the mechanisms underlying the biological effects of these compounds. In this study, we used real-time cytotoxicity cellular analysis of the Cyclopia sp. extracts on HeLa cells and found that the higher hesperidin content in non-fermented "green" extracts correlated with their higher cytotoxicity compared to the fermented extracts. We also found that mangiferin had a modulatory effect on the apoptotic effects of hesperidin. Quantitative PCR analysis of hesperidin-induced changes in the apoptotic gene expression profile indicated that two death receptor pathway members, TRADD and TRAMP, were upregulated. The results of this study suggest that hesperidin mediates apoptosis in HeLa cells through the extrinsic pathway for programmed cell death.

  7. Mangiferin Has an Additive Effect on the Apoptotic Properties of Hesperidin in Cyclopia sp. Tea Extracts

    Science.gov (United States)

    Bartoszewski, Rafal; Hering, Anna; Marszałł, Marcin; Stefanowicz Hajduk, Justyna; Bartoszewska, Sylwia; Kapoor, Niren; Kochan, Kinga; Ochocka, Renata

    2014-01-01

    A variety of biological pro-health activities have been reported for mangiferin and hesperidin, two major phenolic compounds of Honeybush (Cyclopia sp.) tea extracts. Given their increasing popularity, there is a need for understanding the mechanisms underlying the biological effects of these compounds. In this study, we used real-time cytotoxicity cellular analysis of the Cyclopia sp. extracts on HeLa cells and found that the higher hesperidin content in non-fermented "green" extracts correlated with their higher cytotoxicity compared to the fermented extracts. We also found that mangiferin had a modulatory effect on the apoptotic effects of hesperidin. Quantitative PCR analysis of hesperidin-induced changes in the apoptotic gene expression profile indicated that two death receptor pathway members, TRADD and TRAMP, were upregulated. The results of this study suggest that hesperidin mediates apoptosis in HeLa cells through the extrinsic pathway for programmed cell death. PMID:24633329

  8. The effect of filler addition and oven temperature to the antioxidant quality in the drying of Physalis angulata fruit extract obtained by subcritical water extraction

    Science.gov (United States)

    Susanti, R. F.; Christianto, G.

    2016-01-01

    Physalis angulata, or ceplukan, is a medicinal herb that grows naturally in Indonesia. It has been used in traditional medicine to treat several diseases and is also reported to have antimycobacterial, antileukemic, and antipyretic activities. In this research, Physalis angulata fruit was investigated for its antioxidant capacity. To avoid the toxic organic solvents commonly used in conventional extraction, a subcritical water extraction method was used. During drying, an inert filler was added to the extract; it absorbs water and changes the oily, sticky form of the extract into a powder. The effects of filler type, filler concentration, and drying temperature on antioxidant quality, covering total phenols, flavonoids, and antioxidant activity, were investigated. The results showed that total phenols, flavonoids, and antioxidant activity were improved by the addition of filler because the drying time was shorter than for the extract without filler. The filler absorbs water and protects the extract from heat exposure during drying. The combination of high temperature and shorter drying time is beneficial for protecting the antioxidants in the extract. Comparison of filler types showed that Aerosil performed better than microcrystalline cellulose (MCC).

  9. Activation and stabilization of the hydroperoxide lyase enzymatic extract from mint leaves (Mentha spicata) using selected chemical additives.

    Science.gov (United States)

    Akacha, Najla B; Karboune, Salwa; Gargouri, Mohamed; Kermasha, Selim

    2010-03-01

    The effects of selected lyoprotecting excipients and chemical additives on the specific activity and thermal stability of the hydroperoxide lyase (HPL) enzymatic extract from mint leaves were investigated. The addition of KCl (5%, w/w) and dextran (2.5%, w/w) to the enzymatic extract prior to lyophilization increased the HPL specific activity by 2.0- and 1.2-fold, respectively, compared to the control lyophilized extract. Half-life (t1/2) measurements showed that KCl enhanced HPL stability by 1.3- to 2.3-fold during long-term storage at -20 and 4 degrees Celsius. Among the additives used throughout this study, glycine appeared to be the most effective. In addition to its activation effect, glycine also enhanced the HPL thermal stability. In contrast, polyhydroxyl-containing additives were not effective for stabilizing the HPL enzymatic extract, and there was no significant increase in HPL activity or thermal stability in the presence of Triton X-100. The results also showed that in the presence of glycine (10%), the catalytic efficiency of HPL was 2.45-fold higher than without additive.

  10. "The Dose Makes the Poison": Informing Consumers About the Scientific Risk Assessment of Food Additives.

    Science.gov (United States)

    Bearth, Angela; Cousin, Marie-Eve; Siegrist, Michael

    2016-01-01

    Intensive risk assessment is required before the approval of food additives. During this process, based on the toxicological principle of "the dose makes the poison," maximum usage doses are assessed. However, most consumers are not aware of these efforts to ensure the safety of food additives and are therefore sceptical, even though food additives bring certain benefits to consumers. This study investigated the effect of a short video, which explains the scientific risk assessment and regulation of food additives, on consumers' perceptions and acceptance of food additives. The primary goal of this study was to inform consumers and enable them to construct their own risk-benefit assessment and make informed decisions about food additives. The secondary goal was to investigate whether people have different perceptions of food additives of artificial (i.e., aspartame) or natural origin (i.e., steviolglycoside). To attain these research goals, an online experiment was conducted on 185 Swiss consumers. Participants were randomly assigned to either the experimental group, which was shown a video about the scientific risk assessment of food additives, or the control group, which was shown a video about a topic irrelevant to the study. After watching the video, the respondents knew significantly more, expressed more positive thoughts and feelings, perceived lower risk, and showed more acceptance than prior to watching the video. Thus, it appears that informing consumers about complex food safety topics, such as the scientific risk assessment of food additives, is possible, and using a carefully developed information video is a successful strategy for informing consumers.

  11. Green propolis extract as additive in the diet for lambs in feedlot

    Directory of Open Access Journals (Sweden)

    Camila Celeste Brandão Ferreira Ítavo

    2011-09-01

    Full Text Available The objective of this paper is to assess the effects of including different levels of green propolis extract in the diet of feedlot lambs on ingestive behavior, nutrient digestibility, physiological parameters, and performance. Eight lambs were distributed in a double Latin square with four treatments corresponding to the inclusion levels (4, 8, 12, and 16 mL) of green propolis ethanolic extract (30 g of ground crude propolis infused in a 100-mL hydroalcoholic solution, 700 mL/L). The diets were composed of Brachiaria brizantha cv. MG5 hay and a commercial concentrate (roughage:concentrate ratio of 50:50 on a dry matter basis). No effect was observed on intakes of dry matter (31.2 g/kg of BW), crude protein, ether extract, neutral detergent fiber, non-fibrous carbohydrates, or total digestible nutrients (TDN). No significant effect was seen on the digestibility coefficients, which averaged 65.94% of TDN. The green propolis extract levels did not have a significant effect on behavioral or physiological parameters. Seeking to maximize feeding efficiency, the inclusion of 7.60 mL/day of green propolis extract (2.1189 mg of dry matter and 0.1123 mg of flavonoids) in the diet of feedlot lambs is recommended.

  12. Antibacterial, anthelmintic and antioxidant activity of Argyreia elliptica extracts: Activity enhancement by the addition of metal salts

    Directory of Open Access Journals (Sweden)

    M K Prashanth

    2013-05-01

    Full Text Available Summary. Argyreia elliptica extracts were prepared with solvents of different polarity (petroleum ether, chloroform, ethyl acetate, and methanol) and evaluated for their antibacterial, anthelmintic, and antioxidant properties for the first time. Antioxidant activity was analyzed using different in vitro tests, namely the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and superoxide radical scavenging methods. Quantitative determination of phenols was carried out using spectrophotometric methods. In addition, the extracts were screened for their biological activity in the absence and presence of metal salt [Fe(III) and Zn(II)] ions. Results indicate that the tested bacterial strains were most sensitive to the chloroform (CE) and methanol (ME) extracts. The ethyl acetate (EA), CE, and ME extracts showed potent radical scavenging activity. The CE and ME extracts showed the highest total phenolic content, and their anthelmintic and antioxidant activities were enhanced in combination with Fe(III). The extract-Zn(II) ion combinations showed enhanced antibacterial activity against the tested bacterial strains compared to the extracts alone. Industrial relevance. Herbal medicines have gained increasing attention worldwide for the treatment of various diseases because of their effectiveness and small side effects compared to synthetic drugs. In general, essential trace elements play a very important role in biological systems, and therapeutic activity depends on some trace elements. The present research reports the phytochemical screening of Argyreia elliptica leaf extracts. The antibacterial, anthelmintic, and in vitro antioxidant activity of the extracts and their metal salt combinations was studied. The results scientifically establish the efficacy of the plant extracts and their metal salt combinations as antibacterial, anthelmintic, and antioxidant agents. Keywords. Argyreia elliptica; antioxidant; antibacterial activity; total phenolic content.

  13. Effect of lithium salts addition on the ionic liquid based extraction of essential oil from Farfarae Flos.

    Science.gov (United States)

    Li, Zhen-Yu; Zhang, Sha-Sha; Jie-Xing; Qin, Xue-Mei

    2015-01-01

    In this study, an ionic liquid (IL)-based extraction approach was successfully applied to the extraction of essential oil from Farfarae Flos, and the effect of lithium chloride was also investigated. The results indicated that oil yields can be increased by the ILs and that the extraction time can be reduced significantly (from 4 h to 2 h) compared with conventional water distillation. The effect of added lithium chloride differed according to the structure of the IL: the oil yield appears related to the structure of the cation, while the chemical composition of the essential oil appears related to the anion. The reduced extraction time and markedly higher efficiency (improvements of 5.41-62.17%) obtained by combining a lithium salt with a suitable IL support the suitability of the proposed approach.

  14. Variation in lipid extractability by solvent in microalgae. Additional criterion for selecting species and strains for biofuel production from microalgae.

    Science.gov (United States)

    Mendoza, Héctor; Carmona, Laura; Assunção, Patricia; Freijanes, Karen; de la Jara, Adelina; Portillo, Eduardo; Torres, Alicia

    2015-12-01

    The lipid extractability of 14 microalgae species and strains was assessed using organic solvents (methanol and chloroform). The high variability detected indicated the potential for applying this parameter as an additional criterion for microalgae screening in industrial processes such as biofuel production from microalgae. Species without cell walls presented higher extractability than species with cell walls. Analysis of cell integrity by flow cytometry and staining with propidium iodide showed a significant correlation between higher resistance to the physical treatments of cell rupture by sonication and the lipid extractability of the microalgae. The results highlight the cell wall as a determining factor in the inter- and intraspecific variability in lipid extraction treatments. Copyright © 2015. Published by Elsevier Ltd.

  15. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

    Full Text Available In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS). The idea is based on the fact that texture images carry emotion-related information. The feature extraction is derived from time-frequency representations of spectrogram images. First, we render the spectrogram as a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII) derived from the spectrogram image is extracted using Laws' masks to characterize emotional state. To evaluate the effectiveness of the proposed emotion recognition in different languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, and one self-recorded database (KHUSC-EmoDB) to evaluate cross-corpus performance. Results of the proposed ESS system are presented using a support vector machine (SVM) as the classifier. Experimental results show that the proposed TII-based feature extraction, inspired by visual perception, provides significant classification power for ESS systems. The two-dimensional (2-D) TII feature can discriminate between different emotions in visual form beyond what pitch and formant tracks convey. In addition, de-noising in 2-D images is more easily accomplished than de-noising in 1-D speech.
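
    A minimal sketch of the Laws'-mask texture-energy step described above, applied to a random array standing in for the contrast-enhanced spectrogram image; the particular 1-D kernels and the mean-energy statistic are common choices, not necessarily the paper's exact configuration.

```python
import numpy as np
from scipy.signal import convolve2d

L5 = np.array([1, 4, 6, 4, 1])     # level kernel
E5 = np.array([-1, -2, 0, 2, 1])   # edge kernel
S5 = np.array([-1, 0, 2, 0, -1])   # spot kernel

rng = np.random.default_rng(1)
spectrogram_img = rng.random((64, 64))  # placeholder for the enhanced spectrogram

features = []
for row_k in (L5, E5, S5):
    for col_k in (L5, E5, S5):
        mask = np.outer(row_k, col_k)                              # 5x5 Laws mask
        energy = np.abs(convolve2d(spectrogram_img, mask, mode="valid"))
        features.append(energy.mean())                             # texture energy
print(len(features), "texture energy features")
```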

  16. Additional human exposure information for gasoline substance risk assessment (period 2002-2007)

    Energy Technology Data Exchange (ETDEWEB)

    Bomer, R.; Carter, M.; Dmytrasz, B.; Mulari, M.; Pizzella, G.; Roth, S.; Van de Sandt, P.

    2009-06-15

    This report provides an update on human exposure information for gasoline-related activities for which previous assessments had suggested that exposure was either elevated or highly variable or available data were considered out-of-date. In addition, data are presented for several activities for which no information had been available previously. The occupational exposure activities described in this report include railcar loading, refinery maintenance, laboratory operations, aviation gasoline refuelling, gasoline pump maintenance and repair, gasoline pump calibration, and the operation of gasoline-powered gardening equipment. In addition, general public exposure levels are described, particularly relating to residency near service stations.

  17. Medicaid Analytic eXtract (MAX) General Information

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicaid Analytic eXtract (MAX) data is a set of person-level data files on Medicaid eligibility, service utilization, and payments. The MAX data are created to...

  19. Solvent extraction as additional purification method for postconsumer plastic packaging waste

    NARCIS (Netherlands)

    Thoden van Velzen, E.U.; Jansen, M.

    2011-01-01

    An existing solvent extraction process currently used to convert lightly polluted post-industrial packaging waste into high quality re-granulates was tested under laboratory conditions with highly polluted post-consumer packaging waste originating from municipal solid refuse waste. The objective was

  20. 78 FR 49117 - Listing of Color Additives Exempt From Certification; Spirulina Extract

    Science.gov (United States)

    2013-08-13

    ... biomass of the cyanobacterium A. platensis, also called spirulina. Spirulina is a blue-green filamentous cyanobacterium that occurs naturally in freshwater and marine habitats. It has a long history as a food in many... safe use of spirulina extract made from the dried biomass of the cyanobacterium Arthrospira platensis...

  2. Using text mining techniques to extract phenotypic information from the PhenoCHF corpus

    OpenAIRE

    Alnazzawi, Noha; Thompson, Paul; Batista-Navarro, Riza; Ananiadou, Sophia

    2015-01-01

    Background Phenotypic information locked away in unstructured narrative text presents significant barriers to information accessibility, both for clinical practitioners and for computerised applications used for clinical research purposes. Text mining (TM) techniques have previously been applied successfully to extract different types of information from text in the biomedical domain. They have the potential to be extended to allow the extraction of information relating to phenotypes from fre...

  3. Automatic Data Extraction from Websites for Generating Aquatic Product Market Information

    Institute of Scientific and Technical Information of China (English)

    YUAN Hong-chun; CHEN Ying; SUN Yue-fu

    2006-01-01

    The massive web-based information resources have led to an increasing demand for effective automatic retrieval of target information for web applications. This paper introduces a web-based data extraction tool that deploys various algorithms to locate, extract and filter tabular data from HTML pages and to transform them into new web-based representations. The tool has been applied in an aquaculture web application platform for extracting and generating aquatic product market information. Results prove that this tool is very effective in extracting the required data from web pages.
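
    The locate/extract/filter pipeline for tabular data can be sketched roughly as follows, assuming the requests and BeautifulSoup libraries; the URL and the keyword filter are hypothetical, not the tool's actual configuration.

```python
import requests
from bs4 import BeautifulSoup

def extract_tables(url, keyword=None):
    """Locate all HTML tables on a page, extract cell text, keep matching tables."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    tables = []
    for table in soup.find_all("table"):
        rows = [[cell.get_text(strip=True) for cell in tr.find_all(["th", "td"])]
                for tr in table.find_all("tr")]
        # Filter step: keep only tables mentioning the target commodity.
        if keyword is None or any(keyword in cell for row in rows for cell in row):
            tables.append(rows)
    return tables

# e.g. extract_tables("https://example.com/market.html", keyword="tilapia")
```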

  4. 24 CFR 1710.200 - Instructions for Statement of Record, Additional Information and Documentation.

    Science.gov (United States)

    2010-04-01

    ... 24 Housing and Urban Development 5 2010-04-01 2010-04-01 false Instructions for Statement of Record, Additional Information and Documentation. 1710.200 Section 1710.200 Housing and Urban Development Regulations Relating to Housing and Urban Development (Continued) OFFICE OF ASSISTANT SECRETARY FOR HOUSING-FEDERAL HOUSING COMMISSIONER,...

  5. 75 FR 20672 - Additional Identifying Information Associated With Persons Whose Property and Interests in...

    Science.gov (United States)

    2010-04-20

    ... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF THE TREASURY Office of Foreign Assets Control Additional Identifying Information Associated With Persons Whose... piracy and armed robbery at sea off the coast of Somalia. Section 1 of the Order blocks, with...

  6. 26 CFR 1.852-7 - Additional information required in returns of shareholders.

    Science.gov (United States)

    2010-04-01

    ... shareholders. 1.852-7 Section 1.852-7 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY... Trusts § 1.852-7 Additional information required in returns of shareholders. Any person who fails or....852-6 requires the company to demand from its shareholders shall submit as a part of his income...

  7. 21 CFR 71.15 - Confidentiality of data and information in color additive petitions.

    Science.gov (United States)

    2010-04-01

    ... § 20.61 of this chapter. (3) Adverse reaction reports, product experience reports, consumer complaints... 21 Food and Drugs 1 2010-04-01 2010-04-01 false Confidentiality of data and information in color additive petitions. 71.15 Section 71.15 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF...

  8. Semantic information extracting system for classification of radiological reports in radiology information system (RIS)

    Science.gov (United States)

    Shi, Liehang; Ling, Tonghui; Zhang, Jianguo

    2016-03-01

    Radiologists currently use a variety of terminologies and standards in most hospitals in China, and multiple terminologies may even be used for different sections within one department. In this presentation, we introduce a medical semantic comprehension system (MedSCS) to extract semantic information about clinical findings and conclusions from free-text radiology reports so that the reports can be classified correctly against medical term indexing standards such as RadLex or SNOMED CT. Our system (MedSCS) is based on both rule-based and statistics-based methods, which improves the performance and scalability of MedSCS. In order to evaluate the overall performance of the system and measure the accuracy of the outcomes, we developed computational methods to calculate precision, recall, F-score, and exact confidence intervals.
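
    For reference, the evaluation metrics named above can be computed as follows; the counts are invented, and the exact interval shown is the standard Clopper-Pearson construction, which may or may not match the authors' exact method.

```python
from scipy.stats import beta

tp, fp, fn = 420, 35, 50  # hypothetical counts from report classification
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f_score = 2 * precision * recall / (precision + recall)

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) 95% confidence interval for a binomial proportion."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

print(f"P={precision:.3f} R={recall:.3f} F={f_score:.3f}",
      "95% CI for precision:", clopper_pearson(tp, tp + fp))
```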

  9. Optimization of isolation and cultivation of bacterial endophytes through addition of plant extract to nutrient media.

    Science.gov (United States)

    Eevers, N; Gielen, M; Sánchez-López, A; Jaspers, S; White, J C; Vangronsveld, J; Weyens, N

    2015-07-01

    Many endophytes have beneficial effects on plants and can be exploited in biotechnological applications. Studies hypothesize that only 0.001-1% of all plant-associated bacteria are cultivable. Moreover, even after successful isolation, many endophytic bacteria show reduced regrowth capacity. This research aimed to optimize the isolation process and the subsequent culturing of these bacteria. We compared several minimal and complex media in a screening. Besides the media themselves, two gelling agents and the addition of plant extract to the media were investigated as ways to enhance the number and diversity of endophytes as well as regrowth capacity after isolation. In this work, 869 medium delivered the highest numbers of cultivable bacteria as well as the highest diversity. When comparing gelling agents, no differences were observed in the numbers of bacteria. Adding plant extract to the media led to a slight increase in diversity. However, when plant extract was added to improve regrowth capacity, sharp increases in viable bacteria occurred in both rich and minimal media.

  10. The effectiveness of mangosteen rind extract as additional therapy on chronic periodontitis (Clinical trials

    Directory of Open Access Journals (Sweden)

    Ina Hendiani

    2017-03-01

    Full Text Available ABSTRACT   Introduction: Periodontitis is an inflammatory disease that attacks the periodontal tissue, which comprises the gingiva, periodontal ligament, cementum, and alveolar bone, and is caused mainly by bacterial plaque or other specific dominant types of bacteria. The purpose of this study was to determine the clinical therapeutic effect of mangosteen peel extract gel as an adjunct to scaling and root planing in patients with chronic periodontitis. This research was expected to support new treatments in dentistry, particularly in periodontics, as supporting material for the treatment of chronic periodontitis. Methods: A quasi-experimental, split-mouth study of 14 chronic periodontitis patients. Mangosteen rind was dried at room temperature, macerated in ethanol, then evaporated and decanted for 3 days until a condensed extract was obtained, which was formulated into a gel. Subjects were patients with chronic periodontitis in at least 2 teeth with pockets ≥ 5 mm. Clinical parameters of pocket depth, gingival bleeding, and clinical epithelial attachment level were measured at baseline and 1 month after treatment. Data were analyzed using the t-test. Results: The comparison of mean pocket depth, gingival index, gingival bleeding, and epithelial attachment level before and after treatment showed significant differences on both the test and control sides. Conclusion: Mangosteen rind gel as an adjunct to scaling and root planing reduces pocket depth, gingival index, and gingival bleeding, and improves clinical epithelial attachment.

  11. Tagline: Information Extraction for Semi-Structured Text Elements in Medical Progress Notes

    Science.gov (United States)

    Finch, Dezon Kile

    2012-01-01

    Text analysis has become an important research activity in the Department of Veterans Affairs (VA). Statistical text mining and natural language processing have been shown to be very effective for extracting useful information from medical documents. However, neither of these techniques is effective at extracting the information stored in…

  12. Information Extraction, Data Integration, and Uncertain Data Management: The State of The Art

    NARCIS (Netherlands)

    Habib, Mena B.; Keulen, van Maurice

    2011-01-01

    Information extraction, data integration, and uncertain data management are different areas of research that have received vast focus in the last two decades. Many researchers have tackled these areas individually. However, information extraction systems should be integrated with data integration methods

  13. Semantic Preview Benefit in English: Individual Differences in the Extraction and Use of Parafoveal Semantic Information

    Science.gov (United States)

    Veldre, Aaron; Andrews, Sally

    2016-01-01

    Although there is robust evidence that skilled readers of English extract and use orthographic and phonological information from the parafovea to facilitate word identification, semantic preview benefits have been elusive. We sought to establish whether individual differences in the extraction and/or use of parafoveal semantic information could…

  14. An Effective Approach to Biomedical Information Extraction with Limited Training Data

    Science.gov (United States)

    Jonnalagadda, Siddhartha

    2011-01-01

    In the current millennium, extensive use of computers and the internet caused an exponential increase in information. Few research areas are as important as information extraction, which primarily involves extracting concepts and the relations between them from free text. Limitations in the size of training data, lack of lexicons and lack of…

  15. Information Extraction, Data Integration, and Uncertain Data Management: The State of The Art

    NARCIS (Netherlands)

    Habib, Mena Badieh; van Keulen, Maurice

    2011-01-01

    Information extraction, data integration, and uncertain data management are different areas of research that have received vast focus in the last two decades. Many researchers have tackled these areas individually. However, information extraction systems should be integrated with data integration methods

  16. Recycling of red muds with the extraction of metals and special additions to cement

    Science.gov (United States)

    Zinoveev, D. V.; Diubanov, V. G.; Shutova, A. V.; Ziniaeva, M. V.

    2015-01-01

    The liquid-phase reduction of iron oxides from red mud is studied experimentally. It is shown that, in addition to metal, pyrometallurgical processing of red mud can produce a slag suitable for use in the construction industry. It is further shown that Portland cement with mineral additions, as well as a high-aluminate expansive addition for cement, can be produced from this slag.

  17. Spatial interpolation of hourly rainfall – effect of additional information, variogram inference and storm properties

    Directory of Open Access Journals (Sweden)

    A. Verworn

    2010-08-01

    Full Text Available Hydrological modelling of floods relies on precipitation data with a high resolution in space and time. A reliable spatial representation of short time step rainfall is often difficult to achieve due to low network density. In this study, hourly precipitation was spatially interpolated with the multivariate geostatistical method kriging with external drift (KED), using additional information from topography, rainfall data from the denser daily networks, and weather radar data. Investigations were carried out for several flood events in the period between 2000 and 2005 caused by different meteorological conditions. The 125 km radius around the radar station Ummendorf in northern Germany covered the overall study region. One objective was to assess the effect of different approaches to semivariogram estimation on the interpolation performance for short time step rainfall. Another objective was the refined application of kriging with external drift. Special attention was given not only to finding the most relevant additional information, but also to combining the additional information in the best possible way. A multi-step interpolation procedure was applied to better account for sub-regions without rainfall.

    The impact of different semivariogram types on the interpolation performance was low. While it varied over the events, an averaged semivariogram was sufficient overall. Weather radar data were the most valuable additional information for KED for convective summer events. For interpolation of stratiform winter events, using daily rainfall as additional information was sufficient. The application of the multi-step procedure significantly helped to improve the representation of fractional precipitation coverage.
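
    For readers unfamiliar with KED, the predictor takes the standard geostatistical form below (general background, not a formula quoted from this study): the estimate at an ungauged location x_0 is a weighted sum of gauge values, with weights minimizing the estimation variance subject to unbiasedness constraints on each drift variable:

```latex
\hat{Z}(x_0) = \sum_{i=1}^{n} \lambda_i \, Z(x_i), \qquad
\sum_{i=1}^{n} \lambda_i = 1, \qquad
\sum_{i=1}^{n} \lambda_i \, Y_k(x_i) = Y_k(x_0), \quad k = 1, \dots, K
```

    Here Z denotes the hourly gauge observations and the Y_k are the external drift variables (in this study: topography, daily-network rainfall, or radar data).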

  19. Towards an information extraction and knowledge formation framework based on Shannon entropy

    Directory of Open Access Journals (Sweden)

    Iliescu Dragoș

    2017-01-01

    Full Text Available The subject of information quantity is approached in this paper, with the specific domain of nonconforming product management as the information source. The work is a case study: raw data were gathered from a heavy industrial works company and used for information extraction and knowledge formation. The method for estimating information quantity is based on the Shannon entropy formula. The information and entropy spectra are decomposed and analysed to extract specific information and form knowledge. The entropy analysis points out the information that needs to be acquired by the organisation involved, presented as a specific knowledge type.
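
    A minimal sketch of the Shannon entropy estimate described above, with invented nonconformity categories standing in for the company's records:

```python
import math
from collections import Counter

# Hypothetical nonconformity records, one category label per nonconforming product.
records = ["dimension", "surface", "dimension", "material", "surface", "dimension"]

counts = Counter(records)
n = sum(counts.values())
# Shannon entropy: H = -sum(p_i * log2(p_i)) over category frequencies.
H = -sum((c / n) * math.log2(c / n) for c in counts.values())
print(f"H = {H:.3f} bits per record")  # information needed to specify a defect type
```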

  20. The influence of barley straw extract addition on the growth of duckweed (Lemna valdiviana Phil. under laboratory conditions

    Directory of Open Access Journals (Sweden)

    Pęczuła W.

    2014-01-01

    Full Text Available Because of their ability to form dense mats in small waterbodies, duckweeds are often considered nuisance plants in some freshwaters. To date, few techniques have been tested for managing duckweeds, and all of them appear to have some disadvantages. In an attempt to find a new, effective management tool, a laboratory experiment was performed assessing the influence of barley straw (BS) extract addition, a substance used in algal bloom control, on the growth of the duckweed Lemna valdiviana. Responses to two concentrations of BS extract were quantified by measuring changes in duckweed biomass and root length. The results showed that plants receiving the extract increased their biomass more slowly than the controls; however, only those receiving the smaller amount of BS differed significantly from the controls. Furthermore, BS addition stimulated root growth in both experimental tanks; mean root length was higher, although the differences were not statistically significant. As possible explanations for the observed changes we suggest that: (1) the growth inhibition of Lemna valdiviana under exposure to BS extract might be induced by the uptake of organic compounds, some of which (phenolic substances) are probably toxic; and (2) competitive interactions with the microbial communities that develop on the duckweed roots might also play a role.

  1. Extracting duration information in a picture category decoding task using hidden Markov Models

    Science.gov (United States)

    Pfeiffer, Tim; Heinze, Nicolai; Frysch, Robert; Deouell, Leon Y.; Schoenfeld, Mircea A.; Knight, Robert T.; Rose, Georg

    2016-04-01

    Objective. Adapting classifiers for the purpose of brain signal decoding is a major challenge in brain-computer-interface (BCI) research. In a previous study we showed in principle that hidden Markov models (HMM) are a suitable alternative to the well-studied static classifiers. However, since we investigated a rather straightforward task, advantages from modeling of the signal could not be assessed. Approach. Here, we investigate a more complex data set in order to find out to what extent HMMs, as a dynamic classifier, can provide useful additional information. We show for a visual decoding problem that besides category information, HMMs can simultaneously decode picture duration without an additional training required. This decoding is based on a strong correlation that we found between picture duration and the behavior of the Viterbi paths. Main results. Decoding accuracies of up to 80% could be obtained for category and duration decoding with a single classifier trained on category information only. Significance. The extraction of multiple types of information using a single classifier enables the processing of more complex problems, while preserving good training results even on small databases. Therefore, it provides a convenient framework for online real-life BCI utilizations.
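
    A minimal Viterbi-decoding sketch illustrating the kind of state-path behavior the study exploits; the two-state model and its matrices are toy values, not a trained brain-signal classifier. The dwell time of the Viterbi path in a state is the sort of quantity that can correlate with stimulus duration.

```python
import numpy as np

def viterbi(obs, log_A, log_B, log_pi):
    """Most likely state path for a discrete-emission HMM (log domain)."""
    T, N = len(obs), log_A.shape[0]
    delta = np.full((T, N), -np.inf)   # best log-probability ending in each state
    psi = np.zeros((T, N), dtype=int)  # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A      # N x N predecessor scores
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                   # backtrack
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

A = np.array([[0.9, 0.1], [0.2, 0.8]])  # sticky transitions -> long dwell times
B = np.array([[0.8, 0.2], [0.3, 0.7]])  # emission probabilities
path = viterbi([0, 0, 1, 1, 1, 0], np.log(A), np.log(B), np.log([0.5, 0.5]))
print(path)  # run lengths along this path carry duration-like information
```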

  2. Information technology - Telecommunications and information exchange between systems - Private integrated services network - Inter-exchange signalling protocol - Path replacement additional network feature

    CERN Document Server

    International Organization for Standardization. Geneva

    2003-01-01

    Information technology - Telecommunications and information exchange between systems - Private integrated services network - Inter-exchange signalling protocol - Path replacement additional network feature

  3. Information technology - Telecommunications and information exchange between systems - Private Integrated Services Network - Inter-exchange signalling protocol - Call interception additional network feature

    CERN Document Server

    International Organization for Standardization. Geneva

    2003-01-01

    Information technology - Telecommunications and information exchange between systems - Private Integrated Services Network - Inter-exchange signalling protocol - Call interception additional network feature

  4. Effect of Additional Information on Consumer Acceptance: An Example with Pomegranate Juice and Green Tea Blends

    Directory of Open Access Journals (Sweden)

    Federica Higa

    2017-07-01

    Full Text Available Pomegranate juice (PJ) and green tea (GT) products have increased in popularity because of their beneficial health properties. Consumers look for healthier beverages and rely on labels, claims, and product packaging when choosing a product. The objectives of this study were to determine (1) the sensory profiles and acceptance of PJ and GT blends; (2) whether additional information would have an effect on consumer acceptance; and (3) the total phenolic content (TPC) of the samples. Six PJ and GT blends were evaluated by a descriptive panel in order to explore sensory differences in flavor characteristics. A consumer panel (n = 100) evaluated the samples before and after beneficial health information about the samples was provided. The blends higher in tea concentration were higher in green and GT-like flavors, and lower in berry, beet, floral, sweetness, and cherry flavors. The overall liking scores of all samples increased after the information was provided to the consumers. The blend highest in PJ and lowest in GT was liked the most. In addition, as PJ content increased, so did the TPC. These results may be of interest to the beverage industry, providing information on consumer liking of beverage blends and on how health-related claims affect consumer acceptance.

  5. Genetic and epigenetic changes in oilseed rape (Brassica napus L. extracted from intergeneric allopolyploid and additions with Orychophragmus

    Directory of Open Access Journals (Sweden)

    Mayank eGautam

    2016-04-01

    Full Text Available ABSTRACT Allopolyploidization with the merger of the genomes from different species has been shown to be associated with genetic and epigenetic changes. But the maintenance of such alterations related to one parental species after the genome is extracted from the allopolyploid remains to be detected. In this study, the genome of Brassica napus L. (2n=38, genomes AACC) was extracted from its intergeneric allohexaploid (2n=62, genomes AACCOO) with another crucifer Orychophragmus violaceus (2n=24, genome OO), by backcrossing and development of alien addition lines. B. napus-type plants identified in the self-pollinated progenies of nine monosomic additions were analyzed by the methods of amplified fragment length polymorphism (AFLP), sequence-specific amplified polymorphism (SSAP), and methylation-sensitive amplified polymorphism (MSAP). They showed modifications to certain extents in genomic components (loss and gain of DNA segments and transposons, introgression of alien DNA segments) and DNA methylation, compared with the B. napus donor. The significant differences in the changes between the B. napus types extracted from these additions likely resulted from the different effects of individual alien chromosomes. Particularly, the additions which harbored the O. violaceus chromosome carrying dominant rRNA genes over those of B. napus tended to result in the development of plants which showed fewer changes, suggesting a role of the expression levels of alien rRNA genes in genomic stability. These results provided new cues for the genetic alterations in one parental genome that are maintained even after the genome becomes independent.

  6. Genetic and Epigenetic Changes in Oilseed Rape (Brassica napus L.) Extracted from Intergeneric Allopolyploid and Additions with Orychophragmus.

    Science.gov (United States)

    Gautam, Mayank; Dang, Yanwei; Ge, Xianhong; Shao, Yujiao; Li, Zaiyun

    2016-01-01

    Allopolyploidization with the merger of the genomes from different species has been shown to be associated with genetic and epigenetic changes. But the maintenance of such alterations related to one parental species after the genome is extracted from the allopolyploid remains to be detected. In this study, the genome of Brassica napus L. (2n = 38, genomes AACC) was extracted from its intergeneric allohexaploid (2n = 62, genomes AACCOO) with another crucifer Orychophragmus violaceus (2n = 24, genome OO), by backcrossing and development of alien addition lines. B. napus-type plants identified in the self-pollinated progenies of nine monosomic additions were analyzed by the methods of amplified fragment length polymorphism, sequence-specific amplified polymorphism, and methylation-sensitive amplified polymorphism. They showed modifications to certain extents in genomic components (loss and gain of DNA segments and transposons, introgression of alien DNA segments) and DNA methylation, compared with B. napus donor. The significant differences in the changes between the B. napus types extracted from these additions likely resulted from the different effects of individual alien chromosomes. Particularly, the additions which harbored the O. violaceus chromosome carrying dominant rRNA genes over those of B. napus tended to result in the development of plants which showed fewer changes, suggesting a role of the expression levels of alien rRNA genes in genomic stability. These results provided new cues for the genetic alterations in one parental genome that are maintained even after the genome becomes independent.

  7. 78 FR 68713 - Listing of Color Additives Exempt From Certification; Spirulina Extract; Confirmation of...

    Science.gov (United States)

    2013-11-15

    ... final rule published August 13, 2013 (78 FR 49117), is confirmed as September 13, 2013. FOR FURTHER... the Federal Register of August 13, 2013 (78 FR 49117), we amended the color additive regulations...

  8. Automated information extraction of key trial design elements from clinical trial publications.

    Science.gov (United States)

    de Bruijn, Berry; Carini, Simona; Kiritchenko, Svetlana; Martin, Joel; Sim, Ida

    2008-11-06

    Clinical trials are one of the most valuable sources of scientific evidence for improving the practice of medicine. The Trial Bank project aims to improve structured access to trial findings by including formalized trial information into a knowledge base. Manually extracting trial information from published articles is costly, but automated information extraction techniques can assist. The current study highlights a single architecture to extract a wide array of information elements from full-text publications of randomized clinical trials (RCTs). This architecture combines a text classifier with a weak regular expression matcher. We tested this two-stage architecture on 88 RCT reports from 5 leading medical journals, extracting 23 elements of key trial information such as eligibility rules, sample size, intervention, and outcome names. Results prove this to be a promising avenue to help critical appraisers, systematic reviewers, and curators quickly identify key information elements in published RCT articles.
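
    The second stage, the weak regular expression matcher, can be sketched as follows for one element (sample size); the pattern and the flagged sentences are illustrative, not the study's actual rules.

```python
import re

# Weak pattern: a number followed by a participant noun, applied only to
# sentences the first-stage text classifier has already flagged as relevant.
SAMPLE_SIZE = re.compile(r"\b(\d{2,5})\s+(?:patients|participants|subjects)\b", re.I)

flagged_sentences = [
    "A total of 312 patients were randomly assigned to treatment or placebo.",
    "Eligible participants were adults aged 18-65 years.",
]
for sentence in flagged_sentences:
    m = SAMPLE_SIZE.search(sentence)
    if m:
        print("sample size candidate:", m.group(1))
```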

  9. Improvement of bread dough quality by Bacillus subtilis SPB1 biosurfactant addition: optimized extraction using response surface methodology.

    Science.gov (United States)

    Mnif, Inès; Besbes, Souheil; Ellouze-Ghorbel, Raoudha; Ellouze-Chaabouni, Semia; Ghribi, Dhouha

    2013-09-01

    Statistically based experimental designs were applied to Bacillus subtilis SPB1 biosurfactant extraction, and the extracted biosurfactant was tested as an additive in dough formulation. The Plackett-Burman screening method showed that methanol volume, agitation speed, and operating temperature affect biosurfactant extraction. Their effects were studied and adjusted using response surface methodology. The optimal values were identified as 5 mL methanol, 180 rpm, and 25 °C, yielding predicted responses of 2.1 ± 0.06 for the purification factor and 87.47% ± 1.58 for the retention yield. A study of the incorporation of purified lipopeptide powder into the dough preparation, in comparison with a commercial surfactant (soya lecithin), revealed that SPB1 biosurfactant significantly improves the textural properties of dough (hardness, springiness, cohesion, and adhesion), especially at 0.5 g kg⁻¹. At the same concentration (0.5 g kg⁻¹), the effect of SPB1 biosurfactant was more pronounced than that of soya lecithin. This biosurfactant also considerably enhanced gas retention capacity during fermentation. These results show that SPB1 biosurfactant could be of great interest in the bread-making industry. A method for preparative extraction of lipopeptide biosurfactant with methanol as the extraction solvent has been effectively established. © 2013 Society of Chemical Industry.

  10. Extraction of information of targets based on frame buffer

    Science.gov (United States)

    Han, Litao; Kong, Qiaoli; Zhao, Xiangwei

    2008-10-01

    Among all modes of perception, vision is the main channel through which an intelligent virtual agent (IVA) obtains environmental information. Realistic, real-time computation of the behavior of intelligent objects in an interactive virtual environment is required. This paper proposes a new method of acquiring environmental information. First, visual images are generated by setting a second viewport at the IVA's viewpoint; then the target location, distance, azimuth, and other basic geometric and semantic information can be acquired from the images. Experiments show that the method exploits the performance of graphics hardware effectively, with a simple process and high efficiency.

  11. Extracting local information from crowds through betting markets

    Science.gov (United States)

    Weijs, Steven

    2015-04-01

    In this research, a set-up is considered in which users can bet against a forecasting agency to challenge its probabilistic forecasts. From an information theory standpoint, a reward structure is considered that either provides the forecasting agency with better information, paying the successful providers of information for their winning bets, or funds excellent forecasting agencies through users who think they know better. Especially for local forecasts, the approach may help to diagnose model biases and to identify local predictive information that can be incorporated in the models. The challenges and opportunities for implementing such a system in practice are also discussed.
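
    One concrete reward structure of the kind discussed, sketched under the assumption of a logarithmic scoring rule: the bettor is paid the log-likelihood ratio of their probability against the agency's on the realized outcome, which is positive exactly when the bettor's forecast was better calibrated to what happened.

```python
import math

def log_score_payoff(p_user, p_agency, outcome):
    """Payoff in bits: positive if the user's probability beat the agency's."""
    pu = p_user if outcome else 1 - p_user
    pa = p_agency if outcome else 1 - p_agency
    return math.log2(pu / pa)

# A local bettor says 80% chance of rain where the agency says 40%; it rains.
print(f"{log_score_payoff(0.8, 0.4, True):+.2f} bits")
```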

  12. Extraction of spatio-temporal information of earthquake event based on semantic technology

    Science.gov (United States)

    Fan, Hong; Guo, Dan; Li, Huaiyuan

    2015-12-01

    In this paper, a web information extraction method is presented that identifies a variety of thematic events using an event knowledge framework derived from text training, and then uses syntactic analysis to extract key event information. By combining the semantic information of the text with domain knowledge of the event, the method extracts the information of interest more accurately. Web-based earthquake news extraction is taken as an example. The paper first outlines the overall approach, then details the key algorithm and the experiments on seismic event extraction. Finally, accuracy analysis and evaluation experiments are conducted, demonstrating that the proposed method is a promising way of mining hot events.

  13. Extracting Coherent Information from Noise Based Correlation Processing

    Science.gov (United States)

    2015-09-30

    LONG-TERM GOALS: The goal of this research is to establish methodologies to utilize ambient noise in the ocean and to determine what scenarios... PUBLICATIONS: [1] "Monitoring deep-ocean temperatures using acoustic ambient noise," K. W. Woolfe, S. Lani, K. G. Sabra, and W. A. Kuperman, Geophys. Res. Lett., 42, 2878–2884, doi:10.1002/2015GL063438 (2015). [2] "Optimized extraction of coherent arrivals from ambient noise correlations in...

  14. 26 CFR 301.6223(c)-1 - Additional information regarding partners furnished to the Internal Revenue Service.

    Science.gov (United States)

    2010-04-01

    ... shown on the partnership return, the Internal Revenue Service will use additional information as... additional information at any time by filing a written statement with the Internal Revenue Service. However...) of this section. (f) Internal Revenue Service may use other information. In addition to...

  15. Addition of Astra-Ben 20 to Sequester Aflatoxin During Protein Extraction of Contaminated Peanut Meal

    Science.gov (United States)

    Peanut meal is an excellent source of high quality protein; however, the relatively high aflatoxin concentrations typically associated with this commodity currently limit applications within the feed market, in addition to being prohibitive for any future food ingredient markets. Accordingly, the e...

  16. Advanced remote sensing terrestrial information extraction and applications

    CERN Document Server

    Liang, Shunlin; Wang, Jindi

    2012-01-01

    Advanced Remote Sensing is an application-based reference that provides a single source of mathematical concepts necessary for remote sensing data gathering and assimilation. It presents state-of-the-art techniques for estimating land surface variables from a variety of data types, including optical, RADAR, and LIDAR sensors. Scientists in a number of different fields, including geography, geology, atmospheric science, environmental science, planetary science and ecology, will have access to critically important data extraction techniques and their virtually unlimited applications.

  17. Spoken Language Understanding Systems for Extracting Semantic Information from Speech

    CERN Document Server

    Tur, Gokhan

    2011-01-01

    Spoken language understanding (SLU) is an emerging field in between speech and language processing, investigating human/ machine and human/ human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances and its applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, usin

  18. Semiconducting boron carbides with better charge extraction through the addition of pyridine moieties

    Science.gov (United States)

    Echeverria, Elena; Dong, Bin; Peterson, George; Silva, Joseph P.; Wilson, Ethiyal R.; Sky Driver, M.; Jun, Young-Si; Stucky, Galen D.; Knight, Sean; Hofmann, Tino; Han, Zhong-Kang; Shao, Nan; Gao, Yi; Mei, Wai-Ning; Nastasi, Michael; Dowben, Peter A.; Kelber, Jeffry A.

    2016-09-01

    The plasma-enhanced chemical vapor deposition (PECVD) of pyridine co-deposited with 1,2-dicarbadodecaborane, 1,2-B10C2H12 (orthocarborane), results in semiconducting boron carbide composite films with significantly better charge extraction than plasma-enhanced chemical vapor deposited semiconducting boron carbide synthesized from orthocarborane alone. The PECVD pyridine/orthocarborane-based semiconducting boron carbide composites, with pyridine/orthocarborane ratios of ~3:1 or 9:1, exhibit indirect band gaps of 1.8 eV or 1.6 eV, respectively. These energies are less than the corresponding exciton energies of 2.0 eV-2.1 eV. Capacitance/voltage and current/voltage measurements indicate hole carrier lifetimes of ~350 µs for the PECVD pyridine/orthocarborane-based semiconducting boron carbide composite (3:1) films, compared to values of ⩽35 µs for PECVD semiconducting boron carbide films fabricated without pyridine. The hole carrier lifetime values are significantly longer than the initial exciton decay times, in the region of ~0.05 ns and 0.27 ns for PECVD semiconducting boron carbide films with and without pyridine, respectively, as suggested by time-resolved photoluminescence. These data indicate enhanced electron-hole separation and charge carrier lifetimes in PECVD pyridine/orthocarborane-based semiconducting boron carbide and are consistent with the results of zero-bias neutron voltaic measurements indicating significantly enhanced charge collection efficiency.

  19. Extraction of information about periodic orbits from scattering functions

    CERN Document Server

    Bütikofer, T; Seligman, T H; Bütikofer, Thomas; Jung, Christof; Seligman, Thomas H.

    1999-01-01

    As a contribution to the inverse scattering problem for classical chaotic systems, we show that one can select sequences of intervals of continuity, each of which yields the information about period, eigenvalue and symmetry of one unstable periodic orbit.

  20. NEW METHOD OF EXTRACTING WEAK FAILURE INFORMATION IN GEARBOX BY COMPLEX WAVELET DENOISING

    Institute of Scientific and Technical Information of China (English)

    CHEN Zhixin; XU Jinwu; YANG Debin

    2008-01-01

    The extraction of weak failure information is a persistent difficulty and focus of fault detection. Targeting the specific statistical properties of the complex wavelet coefficients of gearbox vibration signals, a new signal-denoising method using a locally adaptive algorithm based on the dual-tree complex wavelet transform (DT-CWT) is introduced to extract weak failure information in gears, and in particular to extract impulse components. By taking into account the non-Gaussian probability distribution and the statistical dependencies among the wavelet coefficients of such signals, and by exploiting the near shift-invariance of the DT-CWT, a higher signal-to-noise ratio (SNR) than with common wavelet denoising methods can be obtained. Experiments on extracting periodic impulses from gearbox vibration signals indicate that the method can extract incipient fault features and hidden information from heavy noise, and that it is effective at identifying weak feature signals in gearbox vibration signals.
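
    A minimal sketch of the decompose-threshold-reconstruct pipeline behind such denoising, in Python. Since the dual-tree complex wavelet transform is not part of the standard PyWavelets library, an ordinary discrete wavelet transform with a universal soft threshold stands in for the paper's locally adaptive DT-CWT scheme; the wavelet choice and threshold rule are illustrative assumptions.

        # Wavelet soft-threshold denoising (ordinary DWT standing in for DT-CWT).
        import numpy as np
        import pywt

        def denoise(signal, wavelet="db8", level=4):
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            # Noise level estimated from the finest detail band (MAD rule).
            sigma = np.median(np.abs(coeffs[-1])) / 0.6745
            thresh = sigma * np.sqrt(2 * np.log(len(signal)))  # universal threshold
            # Soft-threshold every detail band; keep the approximation band.
            coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
            return pywt.waverec(coeffs, wavelet)

        # Example: a periodic impulse train (a crude gear-fault surrogate) in noise.
        t = np.linspace(0, 1, 4096)
        clean = np.where((t * 50) % 1 < 0.01, 1.0, 0.0)
        noisy = clean + 0.3 * np.random.randn(t.size)
        recovered = denoise(noisy)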

  1. The Extraction Model of Paddy Rice Information Based on GF-1 Satellite WFV Images.

    Science.gov (United States)

    Yang, Yan-jun; Huang, Yan; Tian, Qing-jiu; Wang, Lei; Geng, Jun; Yang, Ran-ran

    2015-11-01

    At present, using the characteristics of paddy rice at different phenophases to identify it in remote sensing images is an efficient approach to information extraction. Because the surface of paddy fields holds a large amount of water in the early growth stage, a property that clearly distinguishes paddy rice from other vegetation, the NDWI (normalized difference water index), normally used to extract water information, can reasonably be applied to extract paddy rice at this stage. Taking the ratio of NDWI at two phenophases widens the difference between paddy rice and other surface features, which is important for high-accuracy extraction. The variation of NDVI (normalized differential vegetation index) across phenophases then further improves the accuracy of the extraction. This study finds that taking full advantage of the particular behavior of paddy rice at different phenophases and combining the two indices (NDWI and NDVI) yields a reasonable, accurate and effective extraction model of paddy rice, and that this is the main way to improve extraction accuracy. The present paper takes Lai'an in Anhui Province as the research area and rice as the research object. It constructs an extraction model of paddy rice information using NDVI and NDWI between the tillering stage and the heading stage, and applies the model to GF1-WFV remote sensing images from July 12, 2013 and August 30, 2013, effectively extracting and mapping the paddy rice distribution in Lai'an. Finally, the extraction result was verified and evaluated against field investigation data in the study area. The result shows that the extraction model can quickly and accurately obtain the distribution of rice and that it generalizes well.
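
    The two indices at the core of the model are simple band ratios. A minimal sketch in Python; the band order assumed for GF-1 WFV (blue, green, red, NIR) and the thresholds are illustrative assumptions, not values from the paper.

        # NDVI/NDWI-based paddy mask from two phenophases (illustrative thresholds).
        import numpy as np

        def ndvi(red, nir):
            return (nir - red) / (nir + red + 1e-10)

        def ndwi(green, nir):
            return (green - nir) / (green + nir + 1e-10)

        def paddy_mask(img_tillering, img_heading,
                       water_ratio_thresh=1.2, ndvi_gain_thresh=0.2):
            """Each image is a (4, rows, cols) array in assumed B, G, R, NIR order."""
            _, g1, r1, n1 = img_tillering
            _, g2, r2, n2 = img_heading
            # Flooded paddies look water-like early and vegetated later.
            water_ratio = ndwi(g1, n1) / (ndwi(g2, n2) + 1e-10)
            ndvi_gain = ndvi(r2, n2) - ndvi(r1, n1)
            return (water_ratio > water_ratio_thresh) & (ndvi_gain > ndvi_gain_thresh)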

  2. Information extraction from FN plots of tungsten microemitters

    Energy Technology Data Exchange (ETDEWEB)

    Mussa, Khalil O. [Department of Physics, Mu' tah University, Al-Karak (Jordan); Mousa, Marwan S., E-mail: mmousa@mutah.edu.jo [Department of Physics, Mu' tah University, Al-Karak (Jordan); Fischer, Andreas, E-mail: andreas.fischer@physik.tu-chemnitz.de [Institut für Physik, Technische Universität Chemnitz, Chemnitz (Germany)

    2013-09-15

    Tungsten-based microemitter tips have been prepared both clean and coated with dielectric materials. For clean tungsten tips, apex radii have been varied from 25 to 500 nm. These tips were manufactured by electrochemically etching a 0.1 mm diameter high-purity (99.95%) tungsten wire at the meniscus of a two-molar NaOH solution. The composite micro-emitters considered here consist of a tungsten core coated with different dielectric materials, such as magnesium oxide (MgO), sodium hydroxide (NaOH), tetracyanoethylene (TCNE), and zinc oxide (ZnO). It is worth noting that the rather unconventional NaOH coating has shown several interesting properties. Various properties of these emitters were measured, including current–voltage (IV) characteristics and the physical shape of the tips. A conventional field emission microscope (FEM) with a tip (cathode)–screen (anode) separation standardized at 10 mm was used to electrically characterize the electron emitters. The system was evacuated down to a base pressure of ∼10⁻⁸ mbar when baked at up to ∼180 °C overnight. This allowed measurements of typical field electron emission (FE) characteristics, namely the IV characteristics and the emission images on a conductive phosphorus screen (the anode). Mechanical characterization was performed with an FEI scanning electron microscope (SEM). Within this work, the experimental results are connected to the theory for analyzing Fowler–Nordheim (FN) plots. We compared and evaluated the data extracted from clean tungsten tips of different radii and determined deviations between the results of the different extraction methods applied. In particular, we derived the apex radii of several clean and coated tungsten tips by both SEM imaging and analysis of FN plots. The aim of this analysis is to support the ongoing discussion on recently developed improvements of the theory for analyzing FN plots related to metal field electron emitters, which in
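
    For readers unfamiliar with the analysis, a Fowler-Nordheim plot graphs ln(I/V²) against 1/V, which is linear for ideal cold field emission. A minimal sketch of recovering its slope and intercept from measured IV data; converting the slope into an apex radius or field enhancement factor depends on the particular FN theory variant under discussion and is not attempted here.

        # Least-squares fit of the Fowler-Nordheim coordinates of an IV curve.
        import numpy as np

        def fn_plot_fit(voltage, current):
            v = np.asarray(voltage, dtype=float)
            i = np.asarray(current, dtype=float)
            x = 1.0 / v           # abscissa: 1/V
            y = np.log(i / v**2)  # ordinate: ln(I/V^2)
            slope, intercept = np.polyfit(x, y, 1)
            return slope, intercept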

  3. Advanced Extraction of Spatial Information from High Resolution Satellite Data

    Science.gov (United States)

    Pour, T.; Burian, J.; Miřijovský, J.

    2016-06-01

    In this paper the authors processed five satellite images of five different Middle-European cities taken by five different sensors. The aim of the paper was to find methods and approaches for evaluating and extracting spatial data from the areas of interest. To this end, the data were first pre-processed using image fusion, mosaicking and segmentation. The results carried into the next step were two polygon layers: the first representing single objects and the second representing city blocks. In the second step, the polygon layers were classified and exported into Esri shapefile format. Classification was partly hierarchical expert-based and partly based on the SEaTH tool, used for separability distinction and thresholding. Final results, along with visual previews, were attached to the original thesis. Results are evaluated visually and statistically in the last part of the paper. In the discussion, the authors describe the difficulties of working with large datasets taken by different sensors and differing thematically.

  4. Extraction of Information on the Technical Effect from a Patent Document

    Science.gov (United States)

    Sakai, Hiroyuki; Nonaka, Hirohumi; Masuyama, Shigeru

    We propose a method for extracting information on the technical effect from a patent document. The information extracted by our method is useful for automatically generating patent maps (see, e.g., Figure 1) or analyzing technical trends in patent documents. Our method extracts expressions containing information on the technical effect by using frequent expressions and clue expressions that are effective for this purpose; both are acquired automatically using statistical information and a set of initial clue expressions. Our method thus extracts expressions containing information on the technical effect without hand-crafted patterns, and is expected to be applicable to other tasks that acquire expressions with a particular meaning (e.g., information on the means for solving a problem), not limited to the technical effect. Our method achieves not only high precision (78.0%) but also high recall (77.6%) by acquiring such clue expressions automatically from patent documents.
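
    A toy sketch of the underlying idea, clue-expression matching over sentences; the clue list here is invented for illustration and corresponds to what the paper's method would bootstrap automatically.

        # Select candidate "technical effect" sentences by clue expressions.
        import re

        CLUES = ["improve", "reduce", "enhance", "prevent", "high accuracy"]  # invented

        def extract_effect_sentences(text):
            sentences = re.split(r"(?<=[.!?])\s+", text)
            return [s for s in sentences
                    if any(clue in s.lower() for clue in CLUES)]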

  5. On Depth Information Extraction from Metal Detector Signals

    NARCIS (Netherlands)

    Schoolderman, A.J.; Wolf, F.J. de; Merlat, L.

    2003-01-01

    Information on the depth of objects detected with the help of a metal detector is useful for safe excavation of these objects in demining operations. Apart from that, depth information may be used in advanced sensor fusion algorithms for a detection system where a metal detector is combined with e.g.

  6. Extracting Conflict-free Information from Multi-labeled Trees

    CERN Document Server

    Deepak, Akshay; McMahon, Michelle M

    2012-01-01

    A multi-labeled tree, or MUL-tree, is a phylogenetic tree where two or more leaves share a label, e.g., a species name. A MUL-tree can imply multiple conflicting phylogenetic relationships for the same set of taxa, but can also contain conflict-free information that is of interest and yet is not obvious. We define the information content of a MUL-tree T as the set of all conflict-free quartet topologies implied by T, and define the maximal reduced form of T as the smallest tree that can be obtained from T by pruning leaves and contracting edges while retaining the same information content. We show that any two MUL-trees with the same information content exhibit the same reduced form. This introduces an equivalence relation in MUL-trees with potential applications to comparing MUL-trees. We present an efficient algorithm to reduce a MUL-tree to its maximally reduced form and evaluate its performance on empirical datasets in terms of both quality of the reduced tree and the degree of data reduction achieved.

  7. PAT-1 safety analysis report addendum author responses to request for additional information.

    Energy Technology Data Exchange (ETDEWEB)

    Weiner, Ruth F.; Schmale, David T.; Kalan, Robert J.; Akin, Lili A.; Miller, David Russell; Knorovsky, Gerald Albert; Yoshimura, Richard Hiroyuki; Lopez, Carlos; Harding, David Cameron; Jones, Perry L.; Morrow, Charles W.

    2010-09-01

    The Plutonium Air Transportable Package, Model PAT-1, is certified under Title 10, Code of Federal Regulations Part 71 by the U.S. Nuclear Regulatory Commission (NRC) per Certificate of Compliance (CoC) USA/0361B(U)F-96 (currently Revision 9). The National Nuclear Security Administration (NNSA) submitted SAND Report SAND2009-5822 to NRC that documented the incorporation of plutonium (Pu) metal as a new payload for the PAT-1 package. NRC responded with a Request for Additional Information (RAI), identifying information needed in connection with its review of the application. The purpose of this SAND report is to provide the authors responses to each RAI. SAND Report SAND2010-6106 containing the proposed changes to the Addendum is provided separately.

  8. Two applications of information extraction to biological science journal articles: enzyme interactions and protein structures.

    Science.gov (United States)

    Humphreys, K; Demetriou, G; Gaizauskas, R

    2000-01-01

    Information extraction technology, as defined and developed through the U.S. DARPA Message Understanding Conferences (MUCs), has proved successful at extracting information primarily from newswire texts and primarily in domains concerned with human activity. In this paper we consider the application of this technology to the extraction of information from scientific journal papers in the area of molecular biology. In particular, we describe how an information extraction system designed to participate in the MUC exercises has been modified for two bioinformatics applications: EMPathIE, concerned with enzyme and metabolic pathways; and PASTA, concerned with protein structure. Progress to date provides convincing grounds for believing that IE techniques will deliver novel and effective ways for scientists to make use of the core literature which defines their disciplines.

  9. Architecture and data processing alternatives for the TSE computer. Volume 2: Extraction of topological information from an image by the Tse computer

    Science.gov (United States)

    Jones, J. R.; Bodenheimer, R. E.

    1976-01-01

    A simple programmable Tse processor organization and the arithmetic operations necessary for extraction of the desired topological information are described. Hardware additions to this organization are discussed along with trade-offs peculiar to the Tse computing concept. An improved organization is presented along with the complementary software for the various arithmetic operations. The performance of the two organizations is compared in terms of speed, power, and cost. Software routines developed to extract the desired information from an image are included.

  10. Financial Information Extraction Using Pre-defined and User-definable Templates in the LOLITA System

    OpenAIRE

    Costantino, Marco; Morgan, Richard G.; Collingham, Russell J.

    1996-01-01

    This paper addresses the issue of information extraction in the financial domain within the framework of a large Natural Language Processing system: LOLITA. The LOLITA system, Large-scale Object-based Linguistic Interactor Translator and Analyser, is a general purpose natural language processing system. Different kinds of applications have been built around the system's core. One of these is the financial information extraction application, which has been designed in close contact with expert...

  11. Extracting information masked by the chaotic signal of a time-delay system.

    Science.gov (United States)

    Ponomarenko, V I; Prokhorov, M D

    2002-08-01

    We further develop the method proposed by Bezruchko et al. [Phys. Rev. E 64, 056216 (2001)] for estimating the parameters of time-delay systems from time series. Using this method, we demonstrate the possibility of message extraction for a communication system with nonlinear mixing of an information signal and the chaotic signal of a time-delay system. The message extraction procedure is illustrated using both numerical and experimental data and different kinds of information signals.

  12. Transforming a research-oriented dataset for evaluation of tactical information extraction technologies

    Science.gov (United States)

    Roy, Heather; Kase, Sue E.; Knight, Joanne

    2016-05-01

    The most representative and accurate data for testing and evaluating information extraction technologies is real-world data. Real-world operational data can provide important insights into human and sensor characteristics, interactions, and behavior. However, several challenges limit the feasibility of experimentation with real-world operational data. Real-world data lacks the precise knowledge of a "ground truth," a critical factor for benchmarking progress of developing automated information processing technologies. Additionally, the use of real-world data is often limited by classification restrictions due to the methods of collection, procedures for processing, and tactical sensitivities related to the sources, events, or objects of interest. These challenges, along with an increase in the development of automated information extraction technologies, are fueling an emerging demand for operationally realistic datasets for benchmarking. An approach to meet this demand is to create synthetic datasets that are operationally realistic yet unclassified in content. The unclassified nature of these synthetic datasets facilitates the sharing of data between military and academic researchers, thus increasing coordinated testing efforts. This paper describes the expansion and augmentation of two synthetic text datasets, one initially developed through academic research collaborations with the Army. Both datasets feature simulated tactical intelligence reports regarding fictitious terrorist activity occurring within a counterinsurgency (COIN) operation. The datasets were expanded and augmented to create two militarily relevant datasets. The first resulting dataset was created by augmenting and merging the two to create a single larger dataset containing ground truth. The second resulting dataset was restructured to more realistically represent the format and content of intelligence reports. The dataset transformation effort, the final datasets, and their

  13. CTSS: A Tool for Efficient Information Extraction with Soft Matching Rules for Text Mining

    Directory of Open Access Journals (Sweden)

    A. Christy

    2008-01-01

    Full Text Available The abundance of information available digitally in the modern world has created a demand for structured information. The problem of text mining, which deals with discovering useful information in unstructured text, has attracted the attention of researchers. The role of Information Extraction (IE) software is to identify relevant information in texts, extracting it from a variety of sources and aggregating it to create a single view. Information extraction systems depend on particular corpora and are poor in recall. Making such systems domain-independent while improving recall is therefore an important challenge for IE. In this research, the authors propose a domain-independent algorithm for information extraction, called SOFTRULEMINING, for extracting the aim, methodology and conclusion from technical abstracts. The algorithm combines a trigram model with soft-matching rules. A tool, CTSS, was constructed using SOFTRULEMINING and tested with technical abstracts from www.computer.org and www.ansinet.org; the tool improved recall and therefore precision in comparison with other search engines.
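
    A toy illustration of soft matching: a rule fires when a phrase is sufficiently similar to the pattern rather than identical to it. The pattern and threshold are invented for illustration; the paper combines such rules with a trigram model.

        # Soft matching via a similarity ratio instead of exact string equality.
        from difflib import SequenceMatcher

        def soft_match(pattern, phrase, threshold=0.8):
            ratio = SequenceMatcher(None, pattern.lower(), phrase.lower()).ratio()
            return ratio >= threshold

        # A rule anchored on "the aim of this paper" still fires on a variant form:
        print(soft_match("the aim of this paper", "the aims of this paper"))  # True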

  14. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    Science.gov (United States)

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images, and the accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data, this study examined the optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction through the following processes. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed using control variables and a combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert judgment through reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme.
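
    A hedged sketch of scale selection by a mean-variance style criterion: each candidate scale is scored by the area-weighted variance of the resulting segments. The segmentation backend (felzenszwalb from scikit-image) and the exact scoring are stand-ins for the paper's improved weighted mean-variance method.

        # Score candidate segmentation scales by area-weighted segment variance.
        import numpy as np
        from skimage.segmentation import felzenszwalb

        def weighted_mean_variance(image, labels):
            score = 0.0
            for lab in np.unique(labels):
                mask = labels == lab
                score += mask.sum() * image[mask].var()
            return score / labels.size

        def best_scale(image, scales=(50, 100, 200, 400)):
            scores = {s: weighted_mean_variance(image, felzenszwalb(image, scale=s))
                      for s in scales}
            # Whether to minimize or maximize depends on the criterion's exact form;
            # minimizing within-segment variance is the usual homogeneity reading.
            return min(scores, key=scores.get)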

  15. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters.

    Directory of Open Access Journals (Sweden)

    Hongchun Zhu

    Full Text Available Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images, and the accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data, this study examined the optimal-segmentation-parameter method for object-oriented image segmentation and high-resolution image information extraction through the following processes. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed using control variables and a combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting the information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert judgment through reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme.

  16. ADVANCED EXTRACTION OF SPATIAL INFORMATION FROM HIGH RESOLUTION SATELLITE DATA

    Directory of Open Access Journals (Sweden)

    T. Pour

    2016-06-01

    Full Text Available In this paper the authors processed five satellite images of five different Middle-European cities taken by five different sensors. The aim of the paper was to find methods and approaches for evaluating and extracting spatial data from the areas of interest. To this end, the data were first pre-processed using image fusion, mosaicking and segmentation. The results carried into the next step were two polygon layers: the first representing single objects and the second representing city blocks. In the second step, the polygon layers were classified and exported into Esri shapefile format. Classification was partly hierarchical expert-based and partly based on the SEaTH tool, used for separability distinction and thresholding. Final results, along with visual previews, were attached to the original thesis. Results are evaluated visually and statistically in the last part of the paper. In the discussion, the authors describe the difficulties of working with large datasets taken by different sensors and differing thematically.

  17. An Effective Approach to Biomedical Information Extraction with Limited Training Data

    CERN Document Server

    Jonnalagadda, Siddhartha

    2011-01-01

    Overall, the two main contributions of this work are the application of sentence simplification to association extraction, as described above, and the use of distributional semantics for concept extraction. The proposed work on concept extraction amalgamates for the first time two diverse research areas: distributional semantics and information extraction. This approach retains all the advantages offered by other semi-supervised machine learning systems and, unlike other proposed semi-supervised approaches, can be used on top of different basic frameworks and algorithms. http://gradworks.umi.com/34/49/3449837.html

  18. Omnidirectional vision systems calibration, feature extraction and 3D information

    CERN Document Server

    Puig, Luis

    2013-01-01

    This work focuses on central catadioptric systems, from the early step of calibration to high-level tasks such as 3D information retrieval. The book opens with a thorough introduction to the sphere camera model, along with an analysis of the relation between this model and actual central catadioptric systems. Then, a new approach to calibrate any single-viewpoint catadioptric camera is described.  This is followed by an analysis of existing methods for calibrating central omnivision systems, and a detailed examination of hybrid two-view relations that combine images acquired with uncalibrated

  19. Data-Driven Information Extraction from Chinese Electronic Medical Records.

    Directory of Open Access Journals (Sweden)

    Dong Xu

    Full Text Available This study aims to propose a data-driven framework that takes unstructured free text narratives in Chinese Electronic Medical Records (EMRs) as input and converts them into structured time-event-description triples, where the description is either an elaboration or an outcome of the medical event. Our framework uses a hybrid approach. It consists of constructing cross-domain core medical lexica, an unsupervised, iterative algorithm to accrue more accurate terms into the lexica, rules to address Chinese writing conventions and temporal descriptors, and a Support Vector Machine (SVM) algorithm that innovatively utilizes Normalized Google Distance (NGD) to estimate the correlation between medical events and their descriptions. The effectiveness of the framework was demonstrated with a dataset of 24,817 de-identified Chinese EMRs. The cross-domain medical lexica were capable of recognizing terms with an F1-score of 0.896. 98.5% of recorded medical events were linked to temporal descriptors. The NGD SVM description-event matching achieved an F1-score of 0.874. The end-to-end time-event-description extraction of our framework achieved an F1-score of 0.846. In terms of named entity recognition, the proposed framework outperforms state-of-the-art supervised learning algorithms (F1-score: 0.896 vs. 0.886). In event-description association, the NGD SVM is superior to SVM using only local context and semantic features (F1-score: 0.874 vs. 0.838). The framework is data-driven, weakly supervised, and robust against the variations and noises that tend to occur in a large corpus. It addresses Chinese medical writing conventions and variations in writing styles through patterns used for discovering new terms and rules for updating the lexica.
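
    The Normalized Google Distance that feeds the SVM has a standard closed form, transcribed directly below; f(x) and f(y) are the occurrence counts of two terms, f(x, y) their co-occurrence count, and N the corpus size, all taken from whatever index the system queries.

        # Normalized Google Distance from occurrence counts.
        from math import log

        def ngd(fx, fy, fxy, n):
            if fxy == 0:
                return float("inf")  # terms never co-occur: maximally distant
            return ((max(log(fx), log(fy)) - log(fxy))
                    / (log(n) - min(log(fx), log(fy))))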

  20. Emerging Technologies in the Built Environment: Geographic Information Science (GIS), 3D Printing, and Additive Manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    New, Joshua Ryan [ORNL

    2014-01-01

    Abstract 1: Geographic information systems emerged as a computer application in the late 1960s, led in part by projects at ORNL. The concept of a GIS has shifted through time in response to new applications and new technologies, and is now part of a much larger world of geospatial technology. This presentation discusses the relationship of GIS and estimating hourly and seasonal energy consumption profiles in the building sector at spatial scales down to the individual parcel. The method combines annual building energy simulations for city-specific prototypical buildings and commonly available geospatial data in a GIS framework. Abstract 2: This presentation focuses on 3D printing technologies and how they have rapidly evolved over the past couple of years. At a basic level, 3D printing produces physical models quickly and easily from 3D CAD, BIM (Building Information Models), and other digital data. Many AEC firms have adopted 3D printing as part of commercial building design development and project delivery. This presentation includes an overview of 3D printing, discusses its current use in building design, and talks about its future in relation to the HVAC industry. Abstract 3: This presentation discusses additive manufacturing and how it is revolutionizing the design of commercial and residential facilities. Additive manufacturing utilizes a broad range of direct manufacturing technologies, including electron beam melting, ultrasonic, extrusion, and laser metal deposition for rapid prototyping. While there is some overlap with the 3D printing talk, this presentation focuses on the materials aspect of additive manufacturing and also some of the more advanced technologies involved with rapid prototyping. These technologies include design of carbon fiber composites, lightweight metals processing, transient field processing, and more.

  1. IMPACTS OF LIGNIN CONTENTS AND YEAST EXTRACT ADDITION ON THE INTERACTION BETWEEN SPRUCE PULPS AND CRUDE RECOMBINANT PAENIBACILLUS ENDOGLUCANASE

    Directory of Open Access Journals (Sweden)

    Chun-Han Ko

    2011-02-01

    Full Text Available Crude recombinant Paenibacillus endoglucanase was employed to investigate its ability to gain access into and degrade spruce pulps having different lignin and pentosan contents. Since yeast extract is commonly present in simultaneous saccharification and fermentation processes as a nitrogen source, its effect on the accessibility and degradability of the crude endoglucanase was examined. Pulps with higher lignin contents adsorbed more overall protein. Protein impurities other than the recombinant Paenibacillus endoglucanase were found to be preferentially adsorbed on the surfaces of pulps with higher lignin contents. The addition of yeast extract further enhanced these trends, which might reduce non-productive binding by pulp lignin. Pulps with higher lignin contents were more difficult to degrade with the crude endoglucanase; the reduction in degree of polymerization (DP) of the pulps was more sensitive to the dosage of endoglucanase applied. The presence of yeast extract increased the DP degradation rate constants but decreased the release of reducing sugars during hydrolysis for pulps with higher lignin contents.

  2. Brazilian propolis extract used as an additive to decrease methane emissions from the rumen microbial population in vitro.

    Science.gov (United States)

    Santos, Nadine Woruby; Zeoula, Lucia Maria; Yoshimura, Emerson Henri; Machado, Erica; Macheboeuf, Didier; Cornu, Agnès

    2016-06-01

    Propolis is a product that is rich in phenolic compounds and can be used in animal nutrition as a dietary additive. In this study, the effects of a Brazilian green propolis extract on rumen fermentation and gas production were determined. The fate of propolis phenolic compounds in the rumen medium was also investigated. Fermentation was done in 24-h batches over three periods. Inocula were obtained from cows fed grassland hay and concentrate. Propolis extract in a hydroalcoholic solution was applied at increasing doses to the substrate (1 to 56 g/kg). The fermentation substrate consisted of a mixture of alfalfa hay, soybean meal, and wheat grain on a dry matter basis. After 24 h of fermentation, seven new compounds were observed in the medium in amounts that correlated with the propolis dose. The dose of propolis extract linearly decreased the pH of the medium and linearly increased propionate production, which reduced the acetate-to-propionate ratio and influenced the total production of short-chain fatty acids. Propolis also linearly reduced methane production and increased the carbon dioxide-to-methane ratio. Ammonia nitrogen levels and in vitro digestibility of organic matter were similar among the treatments. The combination of increased propionate production and decreased methane production suggests better energy utilization from the feed.

  3. Extraction of Left Ventricular Ejection Fraction Information from Various Types of Clinical Reports.

    Science.gov (United States)

    Kim, Youngjun; Garvin, Jennifer H; Goldstein, Mary K; Hwang, Tammy S; Redd, Andrew; Bolton, Dan; Heidenreich, Paul A; Meystre, Stéphane M

    2017-02-02

    Efforts to improve the treatment of congestive heart failure, a common and serious medical condition, include the use of quality measures to assess guideline-concordant care. The goal of this study is to identify left ventricular ejection fraction (LVEF) information from various types of clinical notes, and to then use this information for heart failure quality measurement. We analyzed the annotation differences between a new corpus of clinical notes from the Echocardiography, Radiology, and Text Integrated Utility package and other corpora annotated for natural language processing (NLP) research in the Department of Veterans Affairs. These reports contain varying degrees of structure. To examine whether existing LVEF extraction modules we developed in prior research improve the accuracy of LVEF information extraction from the new corpus, we created two sequence-tagging NLP modules trained with a new data set, with or without predictions from the existing LVEF extraction modules. We also conducted a set of experiments to examine the impact of training data size on information extraction accuracy. We found that less training data is needed when reports are highly structured, and that combining predictions from existing LVEF extraction modules improves information extraction when reports have less structured formats and a rich set of vocabulary.

  4. Abstract Information Extraction From Consumer's Comments On Internet Media

    Directory of Open Access Journals (Sweden)

    Kadriye Ergün

    2013-01-01

    Full Text Available In this study, a system developed to automatically evaluate and summarize comments about a product using text mining techniques is described. Because the data are texts written in natural language, they first go through a morphological analysis process. Words and adjectives with positive or negative meaning, which indicate product features in the texts, are determined. A tree structure is established according to Turkish grammar rules, with subordinate and modified words designated. Software using a depth-first search algorithm on the tree structure was developed, and its results are stored in an SQL database. When these data are queried for any property of the product, numerical information indicating the degree of satisfaction with that property is obtained.
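
    A toy sketch of the tree walk: a depth-first search over the parse tree pairs feature words with the polarity adjectives that modify them. The tree encoding and the (English) lexicons are invented for illustration; the described system works on Turkish grammatical structures.

        # Depth-first pairing of feature words with polarity modifiers.
        POSITIVE = {"good", "fast", "durable"}
        NEGATIVE = {"bad", "slow", "fragile"}

        def collect(node, results):
            """node = (word, children); walk the tree depth-first."""
            word, children = node
            for child in children:
                cw = child[0]
                if cw in POSITIVE:
                    results.append((word, cw, +1))
                elif cw in NEGATIVE:
                    results.append((word, cw, -1))
                collect(child, results)
            return results

        tree = ("battery", [("durable", []), ("screen", [("fragile", [])])])
        print(collect(tree, []))  # [('battery', 'durable', 1), ('screen', 'fragile', -1)]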

  5. Effect of diet supplemented with propolis extract and probiotic additives on performance, carcass characteristics and meat composition of broiler chickens

    Directory of Open Access Journals (Sweden)

    Peter Haščík

    2016-05-01

    Full Text Available The present research focused on the effects of propolis extract and a probiotic preparation based on Lactobacillus fermentum (1 × 10⁹ CFU per 1 g of carrier medium) on the performance, carcass characteristics and meat composition of broiler chickens. The experiment was performed with 360 one-day-old Ross 308 broiler chicks of mixed sex. The chicks were randomly allocated into 3 groups (n = 120 chicks per group), namely control (C) and experimental (E1, E2). Each group consisted of 3 replicated pens with 40 broiler chickens per pen. The experiment employed a randomized design, and the dietary treatments were as follows: 1. basal diet with no supplementation as control (group C); 2. basal diet plus 400 mg propolis extract per 1 kg of feed mixture (group E1); 3. basal diet plus 3.3 g probiotic preparation added to drinking water (group E2). The groups were otherwise kept under the same conditions. The fattening period lasted 42 days. Feed mixtures were produced without any antibiotic preparations or coccidiostats. As regards the performance of the broilers, all the investigated parameters were improved after addition of the supplements, especially after probiotic supplementation; however, neither propolis extract nor probiotic in the diet had any significant effect (p ≥ 0.05) on performance. Meat composition was evaluated as proximate composition (dry matter, crude protein, fat and ash), cholesterol content and energy value in the most valuable parts of chicken meat (breast and thigh muscles). Statistically significant results (p ≤ 0.05) were attained in fat, ash and cholesterol content, as well as energy value, in both breast and thigh muscles after propolis supplementation. To sum up, the present study demonstrated the promising potential of propolis extract and probiotic to enhance the performance, carcass characteristics and meat composition under the conditions of the experiment with, however, statistical significance of results in a few

  6. MR-tomography of the breast - additional information in chosen cases

    Energy Technology Data Exchange (ETDEWEB)

    Friedrich, M.; Semmler, W.

    1987-04-01

    Magnetic resonance tomography (MRT), as the newest imaging procedure in breast diagnosis, has to be compared with the combined use of conventional imaging methods. Many advantages of MRT that have been emphasized in comparison with mammography are not superior to the complementary use of mammography and sonography. Additional diagnostic information is yielded by MRT only in the following cases: differentiation between the fibrous fibroadenoma of the elderly woman and circumscribed carcinoma with similar mammographic and sonographic appearances; distinction of stellate fibrous scar tissue from the scirrhous configuration of the more vascular tumour fibrosis around a carcinoma by using contrast-medium MRT; differentiation of periductal fibrosis from inflammatory or carcinomatous ductal and periductal infiltration; and better delineation of fat or silicone implants. Finally, a significant advantage of mammography over all other imaging methods should be pointed out: the detailed visualization of microcalcifications for the early diagnosis of breast carcinoma.

  7. STUDY ON EXTRACTING METHODS OF BURIED GEOLOGICAL INFORMATION IN HUAIBEI COAL FIELD

    Institute of Scientific and Technical Information of China (English)

    王四龙; 赵学军; 凌贻棕; 刘玉荣; 宁书年; 侯德文

    1999-01-01

    This paper discusses the features and formation mechanism of buried geological information in geological, geophysical and remote sensing data from the Huaibei coal field, and studies methods for extracting buried tectonic and igneous rock information from various geological data using digital image processing techniques.

  8. Analysis of Automated Modern Web Crawling and Testing Tools and Their Possible Employment for Information Extraction

    Directory of Open Access Journals (Sweden)

    Tomas Grigalis

    2012-04-01

    Full Text Available The World Wide Web has become an enormous repository of data. Extracting, integrating and reusing this kind of data has a wide range of applications, including meta-searching, comparison shopping, business intelligence tools and security analysis of information in websites. However, reaching information in modern WEB 2.0 pages, where the HTML tree is often dynamically modified by various JavaScript code, new data are added by asynchronous requests to the web server, and elements are positioned with the help of cascading style sheets, is a difficult task. The article reviews automated web testing tools for information extraction tasks. (Article in Lithuanian)

  9. Additional information to the in vitro antioxidant activity of Ginkgo biloba L

    NARCIS (Netherlands)

    Lugasi, A; Horvahovich, P; Dworschák, E

    1999-01-01

    The in vitro antioxidant and free radical scavenging activity of the ethanol extract from Ginkgo biloba L. was examined in different systems. The extract showed hydrogen-donating ability, reducing power, copper-binding properties, and free radical scavenging activity in a H2O2/•OH-luminol system, and it co

  10. Weak characteristic information extraction from early fault of wind turbine generator gearbox

    Science.gov (United States)

    Xu, Xiaoli; Liu, Xiuli

    2017-04-01

    Given the weak degradation characteristic information present during early fault evolution in the gearbox of a wind turbine generator, traditional singular value decomposition (SVD)-based denoising may result in the loss of useful information. A weak-characteristic-information extraction method based on μ-SVD and local mean decomposition (LMD) is developed to address this problem. The basic principle of the method is as follows: determine the denoising order based on the cumulative contribution rate, perform signal reconstruction, extract the noisy part of the signal and subject it to LMD and μ-SVD denoising, and obtain the denoised signal through superposition. Experimental results show that this method can significantly weaken signal noise, effectively extract the weak characteristic information of early faults, and facilitate early fault warning and dynamic predictive maintenance.
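
    A hedged sketch of one step, choosing the SVD denoising order by cumulative contribution rate: embed the signal in a Hankel-style trajectory matrix, keep the leading singular values whose cumulative contribution reaches a threshold, and reconstruct by anti-diagonal averaging. The μ-SVD weighting and the LMD stage of the paper's method are not reproduced.

        # SVD denoising with the order set by cumulative contribution rate.
        import numpy as np

        def svd_denoise(signal, window=128, contribution=0.90):
            n = len(signal)
            traj = np.lib.stride_tricks.sliding_window_view(signal, window)
            u, s, vt = np.linalg.svd(traj, full_matrices=False)
            # Denoising order: smallest k whose cumulative contribution >= threshold.
            k = int(np.searchsorted(np.cumsum(s) / s.sum(), contribution)) + 1
            approx = (u[:, :k] * s[:k]) @ vt[:k]
            # Anti-diagonal averaging maps the rank-k matrix back to a 1-D signal.
            out = np.zeros(n)
            counts = np.zeros(n)
            for i in range(traj.shape[0]):
                out[i:i + window] += approx[i]
                counts[i:i + window] += 1
            return out / counts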

  11. Weak characteristic information extraction from early fault of wind turbine generator gearbox

    Science.gov (United States)

    Xu, Xiaoli; Liu, Xiuli

    2017-09-01

    Given the weak degradation characteristic information present during early fault evolution in the gearbox of a wind turbine generator, traditional singular value decomposition (SVD)-based denoising may result in the loss of useful information. A weak-characteristic-information extraction method based on μ-SVD and local mean decomposition (LMD) is developed to address this problem. The basic principle of the method is as follows: determine the denoising order based on the cumulative contribution rate, perform signal reconstruction, extract the noisy part of the signal and subject it to LMD and μ-SVD denoising, and obtain the denoised signal through superposition. Experimental results show that this method can significantly weaken signal noise, effectively extract the weak characteristic information of early faults, and facilitate early fault warning and dynamic predictive maintenance.

  12. The research of road and vehicle information extraction algorithm based on high resolution remote sensing image

    Science.gov (United States)

    Zhou, Tingting; Gu, Lingjia; Ren, Ruizhi; Cao, Qiong

    2016-09-01

    With the rapid development of remote sensing technology, the spatial and temporal resolution of satellite imagery has increased greatly, and high-spatial-resolution images are becoming increasingly popular for commercial applications. Remote sensing image technology thus has broad application prospects in intelligent traffic. Compared with traditional traffic information collection methods, vehicle information extraction from high-resolution remote sensing images has the advantages of high resolution and wide coverage, which is of great guiding significance for urban planning, transportation management, travel route choice and so on. First, the acquired high-resolution multi-spectral and panchromatic remote sensing images were preprocessed. Then, on the one hand, histogram equalization and linear enhancement were applied to the preprocessing results in order to obtain the optimal threshold for image segmentation; on the other hand, considering the distribution characteristics of roads, the normalized difference vegetation index (NDVI) and normalized difference water index (NDWI) were used to suppress vegetation and water information in the preprocessing results. The two processing results were then combined, and geometric characteristics were used to complete the road information extraction. The extracted road vector was used to limit the target vehicle area. Target vehicle extraction was divided into bright-vehicle extraction and dark-vehicle extraction, and the extraction results for the two kinds of vehicles were combined to obtain the final results. The experimental results demonstrate that the proposed algorithm achieves high precision in vehicle information extraction for different high-resolution remote sensing images: the average false detection rate was about 5.36%, the average missed detection rate was about 13.60% and the average accuracy was approximately 91.26%.

  13. A comparison of techniques for extracting emissivity information from thermal infrared data for geologic studies

    Science.gov (United States)

    Hook, Simon J.; Gabell, A. R.; Green, A. A.; Kealy, P. S.

    1992-01-01

    This article evaluates three techniques developed to extract emissivity information from multispectral thermal infrared data. The techniques are the assumed Channel 6 emittance model, thermal log residuals, and alpha residuals. These techniques were applied to calibrated, atmospherically corrected thermal infrared multispectral scanner (TIMS) data acquired over Cuprite, Nevada in September 1990. Results indicate that the two new techniques (thermal log residuals and alpha residuals) provide two distinct advantages over the assumed Channel 6 emittance model. First, they permit emissivity information to be derived from all six TIMS channels. The assumed Channel 6 emittance model only permits emissivity values to be derived from five of the six TIMS channels. Second, both techniques are less susceptible to noise than the assumed Channel 6 emittance model. The disadvantage of both techniques is that laboratory data must be converted to thermal log residuals or alpha residuals to facilitate comparison with similarly processed image data. An additional advantage of the alpha residual technique is that the processed data are scene-independent unlike those obtained with the other techniques.

  14. The Technology of Extracting Content Information from Web Page Based on DOM Tree

    Science.gov (United States)

    Yuan, Dingrong; Mo, Zhuoying; Xie, Bing; Xie, Yangcai

    There are huge amounts of information on Web pages, which include content information and other useless information, such as navigation bars, advertisements and flash animations. To reduce the toil of Web users, we established a technique to extract the content information from a Web page. Firstly, we analyzed the semantics of Web documents with the V8 engine of Google and parsed each Web document into a DOM tree. We then traversed the DOM tree and pruned it in light of the characteristics of the Web page's edit language. Finally, we extracted the content information from the Web page. Theory and experiments showed that the technique can simplify the Web page, present the content information to Web users, and supply clean data for application areas such as retrieval, KDD and DM on the Web.
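
    A minimal sketch of the prune-then-extract idea on a DOM tree, using BeautifulSoup in place of the V8-based parsing the paper describes; the tag blacklist is an illustrative stand-in for the paper's pruning rules derived from Web edit-language characteristics.

        # Prune noise subtrees from the DOM, then emit the remaining text.
        from bs4 import BeautifulSoup

        NOISE_TAGS = ["script", "style", "nav", "aside", "header", "footer", "iframe"]

        def extract_content(html):
            soup = BeautifulSoup(html, "html.parser")
            for tag in soup(NOISE_TAGS):
                tag.decompose()  # drop navigation, ads, flash placeholders, etc.
            return soup.get_text(separator="\n", strip=True)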

  15. What do professional forecasters' stock market expectations tell us about herding, information extraction and beauty contests?

    DEFF Research Database (Denmark)

    Rangvid, Jesper; Schmeling, M.; Schrimpf, A.

    2013-01-01

    We study how professional forecasters form equity market expectations based on a new micro-level dataset which includes rich cross-sectional information about individual characteristics. We focus on testing whether agents rely on the beliefs of others, i.e., consensus expectations, when forming t...... that neither information extraction to incorporate dispersed private information, nor herding for reputational reasons can fully explain these results, leaving Keynes' beauty contest argument as a potential candidate for explaining forecaster behavior....

  16. Extraction of Hidden Social Networks from Wiki-Environment Involved in Information Conflict

    OpenAIRE

    Alguliyev, Rasim M.; Ramiz M. Aliguliyev; Irada Y. Alakbarova

    2016-01-01

    Social network analysis is a widely used technique to analyze relationships among wiki-users in Wikipedia. In this paper a method to identify hidden social networks participating in information conflicts in a wiki-environment is proposed. In particular, we describe how text clustering techniques can be used to extract the hidden social networks of wiki-users involved in an information conflict. By clustering unstructured text articles that caused an information conflict we

  17. Extracting information from the data flood of new solar telescopes. Brainstorming

    CERN Document Server

    Ramos, A Asensio

    2012-01-01

    Extracting magnetic and thermodynamic information from spectropolarimetric observations is a difficult and time consuming task. The amount of science-ready data that will be generated by the new family of large solar telescopes is so large that we will be forced to modify the present approach to inference. In this contribution, I propose several possible ways that might be useful for extracting the thermodynamic and magnetic properties of solar plasmas from such observations quickly.

  18. Extracting Information from the Data Flood of New Solar Telescopes: Brainstorming

    Science.gov (United States)

    Asensio Ramos, A.

    2012-12-01

    Extracting magnetic and thermodynamic information from spectropolarimetric observations is a difficult and time consuming task. The amount of science-ready data that will be generated by the new family of large solar telescopes is so large that we will be forced to modify the present approach to inference. In this contribution, I propose several possible ways that might be useful for extracting the thermodynamic and magnetic properties of solar plasmas from such observations quickly.

  19. Research of building information extraction and evaluation based on high-resolution remote-sensing imagery

    Science.gov (United States)

    Cao, Qiong; Gu, Lingjia; Ren, Ruizhi; Wang, Lang

    2016-09-01

    Building extraction is currently important in the application of high-resolution remote sensing imagery. Quite a few algorithms are available for detecting building information; however, most of them still have obvious disadvantages, such as ignoring spectral information or trading extraction rate against extraction accuracy. The purpose of this research is to develop an effective method to detect building information in Chinese GF-1 data. Firstly, image preprocessing is used to normalize the image, and image enhancement is used to highlight the useful information. Secondly, multi-spectral information is analyzed. Subsequently, an improved morphological building index (IMBI) based on remote sensing imagery is proposed to obtain the candidate building objects. Furthermore, in order to refine the building objects and remove false objects, post-processing (e.g., shape features, the vegetation index and the water index) is employed. To validate the effectiveness of the proposed algorithm, the omission error (OE), commission error (CE), overall accuracy (OA) and Kappa are used in the final evaluation. The proposed method can not only effectively use spectral information and other basic features, but also avoid extracting excessive interference details from high-resolution remote sensing images. Compared to the original MBI algorithm, the proposed method reduces the OE by 33.14%; at the same time, Kappa increases by 16.09%. In the experiments, IMBI achieved satisfactory results and outperformed other algorithms in terms of both accuracy and visual inspection.
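
    A hedged sketch of the morphological core of such a building index: bright, compact structures respond strongly to a white top-hat computed over a range of structuring-element sizes. The disk-shaped elements stand in for the directional profiles of the original MBI, and the paper's IMBI refinements and post-processing are omitted.

        # Mean white top-hat response over several scales as a crude building index.
        import numpy as np
        from skimage.morphology import white_tophat, disk

        def building_index(gray, radii=(3, 7, 11, 15)):
            responses = [white_tophat(gray, disk(r)) for r in radii]
            return np.mean(responses, axis=0)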

  20. Systematics of the family Plectopylidae in Vietnam with additional information on Chinese taxa (Gastropoda, Pulmonata, Stylommatophora

    Directory of Open Access Journals (Sweden)

    Barna Páll-Gergely

    2015-01-01

    Full Text Available Vietnamese species from the family Plectopylidae are revised based on the type specimens of all known taxa, more than 600 historical non-type museum lots, and almost 200 newly-collected samples. Altogether more than 7000 specimens were investigated. The revision has revealed that species diversity of the Vietnamese Plectopylidae was previously overestimated. Overall, thirteen species names (anterides Gude, 1909, bavayi Gude, 1901, congesta Gude, 1898, fallax Gude, 1909, gouldingi Gude, 1909, hirsuta Möllendorff, 1901, jovia Mabille, 1887, moellendorffi Gude, 1901, persimilis Gude, 1901, pilsbryana Gude, 1901, soror Gude, 1908, tenuis Gude, 1901, verecunda Gude, 1909) were synonymised with other species. In addition to these, Gudeodiscus hemmeni sp. n. and G. messageri raheemi ssp. n. are described from north-western Vietnam. Sixteen species and two subspecies are recognized from Vietnam. The reproductive anatomy of eight taxa is described. Based on anatomical information, Halongella gen. n. is erected to include Plectopylis schlumbergeri and P. fruhstorferi. Additionally, the genus Gudeodiscus is subdivided into two subgenera (Gudeodiscus and Veludiscus subgen. n.) on the basis of the morphology of the reproductive anatomy and the radula. The Chinese G. phlyarius werneri Páll-Gergely, 2013 is moved to synonymy of G. phlyarius. A spermatophore was found in the organ situated next to the gametolytic sac in one specimen. This suggests that this organ in the Plectopylidae is a diverticulum. Statistically significant evidence is presented for the presence of calcareous hook-like granules inside the penis being associated with the absence of embryos in the uterus in four genera. This suggests that these probably play a role in mating periods before disappearing when embryos develop. Sicradiscus mansuyi is reported from China for the first time.

  1. Extracting important information from Chinese Operation Notes with natural language processing methods.

    Science.gov (United States)

    Wang, Hui; Zhang, Weide; Zeng, Qiang; Li, Zuofeng; Feng, Kaiyan; Liu, Lei

    2014-04-01

    Extracting information from unstructured clinical narratives is valuable for many clinical applications. Although Natural Language Processing (NLP) methods have been studied in depth for electronic medical records (EMR), few studies have explored NLP for extracting information from Chinese clinical narratives. In this study, we report the development and evaluation of methods for extracting tumor-related information from operation notes of hepatic carcinomas written in Chinese. Using 86 operation notes manually annotated by physicians as the training set, we explored both rule-based and supervised machine-learning approaches. Evaluated on 29 unseen operation notes, our best approach yielded 69.6% precision, 58.3% recall and a 63.5% F-score. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Automatically extracting clinically useful sentences from UpToDate to support clinicians' information needs.

    Science.gov (United States)

    Mishra, Rashmi; Del Fiol, Guilherme; Kilicoglu, Halil; Jonnalagadda, Siddhartha; Fiszman, Marcelo

    2013-01-01

    Clinicians raise several information needs in the course of care. Most of these needs can be met by online health knowledge resources such as UpToDate. However, finding relevant information in these resources often requires significant time and cognitive effort. To design and assess algorithms for extracting from UpToDate the sentences that represent the most clinically useful information for patient care decision making. We developed algorithms based on semantic predications extracted with SemRep, a semantic natural language processing parser. Two algorithms were compared against a gold standard composed of UpToDate sentences rated in terms of clinical usefulness. Clinically useful sentences were strongly correlated with predication frequency (correlation= 0.95). The two algorithms did not differ in terms of top ten precision (53% vs. 49%; p=0.06). Semantic predications may serve as the basis for extracting clinically useful sentences. Future research is needed to improve the algorithms.
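
    The ranking principle described (clinically useful sentences correlate strongly with predication frequency) can be illustrated with a toy score: count how often each predication occurs across the document and score each sentence by the frequencies of the predications it contains. This assumes predications have already been produced by a parser such as SemRep; the scoring below is a simplification, not the paper's exact algorithm:

        from collections import Counter

        def rank_sentences(sentences, predications_per_sentence):
            # predications_per_sentence: one list of (subject, relation, object)
            # tuples per sentence, e.g. ("metformin", "TREATS", "diabetes")
            freq = Counter(p for preds in predications_per_sentence for p in preds)
            scores = [sum(freq[p] for p in preds)
                      for preds in predications_per_sentence]
            ranked = sorted(zip(scores, sentences), reverse=True)
            return [s for _, s in ranked]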

  3. Extraction of Informative Blocks from Deep Web Page Using Similar Layout Feature

    OpenAIRE

    Zeng,Jun; Flanagan, Brendan; Hirokawa, Sachio

    2013-01-01

    Due to the explosive growth and popularity of the deep web, information extraction from deep web pages has gained more and more attention. However, the HTML structure of web pages has become more complicated, making it difficult to recognize target content by analyzing the HTML source code alone. In this paper, we propose a method to extract the informative blocks from a deep web page using its layout features. We consider the visual rectangular region of an HTML element as a visual block in the web page....

  4. Information extraction for legal knowledge representation – a review of approaches and trends

    Directory of Open Access Journals (Sweden)

    Denis Andrei de Araujo

    2014-11-01

    Full Text Available This work presents an introduction to Information Extraction systems and a survey of the known approaches to Information Extraction in the legal area. It analyzes with particular attention the techniques that rely on the representation of legal knowledge as a means to achieve better performance, with emphasis on those including ontologies and linguistic support. Some details of the systems' implementations are presented, followed by an analysis of the strengths and weaknesses of each approach, aiming to give the reader a critical perspective on the solutions studied.

  5. Extracting information from two-dimensional electrophoresis gels by partial least squares regression

    DEFF Research Database (Denmark)

    Jessen, Flemming; Lametsch, R.; Bendixen, E.;

    2002-01-01

    Two-dimensional gel electrophoresis (2-DE) produces large amounts of data, and extraction of relevant information from these data demands a cautious and time-consuming process of spot pattern matching between gels. The classical approach of data analysis is to detect protein markers that appear ... of all proteins/spots in the gels. In the present study it is demonstrated how information can be extracted by multivariate data analysis. The strategy is based on partial least squares regression followed by variable selection to find proteins that individually or in combination with other proteins vary...
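
    A minimal sketch of the PLS-based strategy in Python, assuming a spot-volume matrix X (gels × spots) and a response y (e.g., a treatment variable); the variable selection here is a simple filter on the magnitude of the PLS coefficients, one common choice rather than the study's exact procedure:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        X = np.random.rand(20, 500)   # 20 gels, 500 spot volumes (placeholder data)
        y = np.random.rand(20)        # response, e.g. treatment or storage time

        pls = PLSRegression(n_components=3).fit(X, y)
        coef = np.ravel(pls.coef_)
        top_spots = np.argsort(np.abs(coef))[::-1][:10]
        print("candidate marker spots:", top_spots)   # spots most related to y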

  6. Ultrasonic Signal Processing Algorithm for Crack Information Extraction on the Keyway of Turbine Rotor Disk

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hong Kyu; Seo, Won Chan; Park, Chan [Pukyong National University, Busan (Korea, Republic of); Lee, Jong O; Son, Young Ho [KIMM, Daejeon (Korea, Republic of)

    2009-10-15

    An ultrasonic signal processing algorithm was developed for extracting information on cracks generated around the keyway of a turbine rotor disk. B-scan images were obtained using keyway specimens and an ultrasonic scan system with an x-y position controller. The B-scan images were used as input images for two-dimensional signal processing, and the algorithm was constructed with four processing stages: pre-processing, crack candidate region detection, crack region classification and crack information extraction. It is confirmed by experiments that the developed algorithm is effective for the quantitative evaluation of cracks generated around the keyway of a turbine rotor disk.

  7. Information Extraction of High-Resolution Remotely Sensed Image Based on Multiresolution Segmentation

    Directory of Open Access Journals (Sweden)

    Peng Shao

    2014-08-01

    Full Text Available The principle of multiresolution segmentation is presented in detail in this study, and the Canny algorithm is applied for edge-detection of a remotely sensed image based on this principle. The target image was divided into regions based on object-oriented multiresolution segmentation and edge-detection. Furthermore, an object hierarchy was created, and a series of features (water bodies, vegetation, roads, residential areas, bare land) and other information were extracted by spectral and geometrical features. The results indicate that edge-detection has a positive effect on multiresolution segmentation, and the overall accuracy of information extraction reaches 94.6% by the confusion matrix.
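
    The edge-detection step can be reproduced with a few lines of scikit-image; the trivial labelling of non-edge regions below merely stands in for the object-oriented multiresolution segmentation described, and sigma is illustrative:

        import numpy as np
        from skimage import feature, measure

        def edge_guided_regions(band, sigma=2.0):
            edges = feature.canny(band, sigma=sigma)   # boolean Canny edge map
            # label connected non-edge areas as candidate segments
            return measure.label(~edges, connectivity=1)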

  8. Using text mining techniques to extract phenotypic information from the PhenoCHF corpus.

    Science.gov (United States)

    Alnazzawi, Noha; Thompson, Paul; Batista-Navarro, Riza; Ananiadou, Sophia

    2015-01-01

    Phenotypic information locked away in unstructured narrative text presents significant barriers to information accessibility, both for clinical practitioners and for computerised applications used for clinical research purposes. Text mining (TM) techniques have previously been applied successfully to extract different types of information from text in the biomedical domain. They have the potential to be extended to allow the extraction of information relating to phenotypes from free text. To stimulate the development of TM systems that are able to extract phenotypic information from text, we have created a new corpus (PhenoCHF) that is annotated by domain experts with several types of phenotypic information relating to congestive heart failure. To ensure that systems developed using the corpus are robust to multiple text types, it integrates text from heterogeneous sources, i.e., electronic health records (EHRs) and scientific articles from the literature. We have developed several different phenotype extraction methods to demonstrate the utility of the corpus, and tested these methods on a further corpus, i.e., ShARe/CLEF 2013. Evaluation of our automated methods showed that PhenoCHF can facilitate the training of reliable phenotype extraction systems, which are robust to variations in text type. These results have been reinforced by evaluating our trained systems on the ShARe/CLEF corpus, which contains clinical records of various types. Like other studies within the biomedical domain, we found that solutions based on conditional random fields produced the best results, when coupled with a rich feature set. PhenoCHF is the first annotated corpus aimed at encoding detailed phenotypic information. The unique heterogeneous composition of the corpus has been shown to be advantageous in the training of systems that can accurately extract phenotypic information from a range of different text types. Although the scope of our annotation is currently limited to a single
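
    Since the best results were obtained with conditional random fields over a rich feature set, a compact sketch of a CRF phenotype tagger may help; it uses the sklearn-crfsuite package as one possible implementation, and the features and BIO labels are generic illustrations rather than the PhenoCHF feature set:

        import sklearn_crfsuite

        def word2features(sent, i):
            w = sent[i]
            return {
                "lower": w.lower(),
                "is_title": w.istitle(),
                "suffix3": w[-3:],
                "prev": sent[i - 1].lower() if i > 0 else "<s>",
                "next": sent[i + 1].lower() if i < len(sent) - 1 else "</s>",
            }

        # toy training data: one tokenized sentence with BIO phenotype labels
        sent = ["severe", "heart", "failure"]
        X_train = [[word2features(sent, i) for i in range(len(sent))]]
        y_train = [["O", "B-PHENO", "I-PHENO"]]

        crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                                   max_iterations=50)
        crf.fit(X_train, y_train)
        print(crf.predict(X_train))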

  9. Studies on the effects of carbon:nitrogen ratio, inoculum type and yeast extract addition on jasmonic acid production by Botryodiplodia theobromae Pat. strain RC1

    National Research Council Canada - National Science Library

    Eng Sánchez, Felipe; Gutiérrez-Rojas, Mariano; Favela-Torres, Ernesto

    2008-01-01

    .... Studies concerning the effects of carbon:nitrogen ratio (C/Nr: 17, 35 and 70), type of inoculum (spores or mycelium) and yeast extract addition to the media on jasmonic acid production by Botryodiplodia theobromae were evaluated...

  10. Information retrieval and terminology extraction in online resources for patients with diabetes.

    Science.gov (United States)

    Seljan, Sanja; Baretić, Maja; Kucis, Vlasta

    2014-06-01

    Terminology use, as a means for information retrieval or document indexing, plays an important role in health literacy. Specific types of users, i.e., patients with diabetes, need access to various online resources (in foreign and/or native languages) when searching for information on self-education in basic diabetes knowledge, on self-care activities regarding the importance of dietetic food, medications and physical exercise, and on self-management of insulin pumps. Automatic extraction of corpus-based terminology from online texts, manuals or professional papers can help in building terminology lists or lists of "browsing phrases" useful in information retrieval or document indexing. Specific terminology lists represent an intermediate step between free text search and controlled vocabulary, between users' demands and existing online resources in native and foreign languages. The research, aiming to detect the role of terminology in online resources, is conducted on English and Croatian manuals and Croatian online texts, and divided into three interrelated parts: i) comparison of professional and popular terminology use; ii) evaluation of automatic statistically-based terminology extraction on English and Croatian texts; iii) comparison and evaluation of extracted terminology performed on an English manual using statistical and hybrid approaches. Extracted terminology candidates are evaluated by comparison with three types of reference lists: a list created by a medical professional, a list of highly professional vocabulary contained in MeSH, and a list created by non-medical persons, made as the intersection of 15 lists. Results report on the use of popular and professional terminology in online diabetes resources, on the evaluation of automatically extracted terminology candidates in English and Croatian texts and on the comparison of statistical and hybrid extraction methods in English text. Evaluation of automatic and semi-automatic terminology extraction methods is performed by recall
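
    A minimal illustration of statistically-based term candidate extraction is TF-IDF over word n-grams; real pipelines add linguistic filters (part-of-speech patterns, domain stoplists) on top, and the toy corpus below is invented:

        from sklearn.feature_extraction.text import TfidfVectorizer

        docs = [
            "insulin pump settings and basal rate management",
            "self-care activities include diet and physical exercise",
            "basal rate adjustment for insulin pump users",
        ]
        vec = TfidfVectorizer(ngram_range=(1, 3), stop_words="english")
        tfidf = vec.fit_transform(docs)

        # rank n-gram candidates by their maximum TF-IDF weight in any document
        scores = tfidf.max(axis=0).toarray().ravel()
        terms = vec.get_feature_names_out()
        for t, s in sorted(zip(terms, scores), key=lambda x: -x[1])[:5]:
            print(f"{t}\t{s:.3f}")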

  11. Butanedithiol Solvent Additive Extracting Fullerenes from Donor Phase To Improve Performance and Photostability in Polymer Solar Cells.

    Science.gov (United States)

    Xie, Yuanpeng; Hu, Xiaotian; Yin, Jingping; Zhang, Lin; Meng, Xiangchuan; Xu, Guodong; Ai, Qingyun; Zhou, Weihua; Chen, Yiwang

    2017-03-22

    In this work, we demonstrated that excited poly[4,8-bis(5-(2-ethylhexyl)thiophen-2-yl)benzo[1,2-b;4,5-b']dithiophene-2,6-diyl-alt-(4-(2-ethylhexyl)-3-fluorothieno[3,4-b]thiophene-)-2-carboxylate-2,6-diyl)] (PTB7-Th) is degraded by [6,6]-phenyl-C71-butyric acid methyl ester (PC71BM) or by the photolysis fragment of 1,8-diiodooctane (DIO) in the presence of oxygen and under irradiation with red light. According to previous reports, the fragment of DIO may be involved in the reaction directly. Our work indicates that PC71BM is not directly involved in the reaction, but acts as a catalyst to promote the reaction of excited donors with oxygen. Thus, PTB7-Th urgently needs a nonresidual, iodine-free additive to replace DIO and remove the fullerene from the donor phase at the same time. Taking into consideration PC71BM solubility and the boiling-point difference between solvent additives and host solvents, 1,4-butanedithiol was selected to fabricate PTB7-Th:PC71BM-based solar cells, achieving a best power conversion efficiency (PCE) of 10.2% (8.5% for PTB7:PC71BM). Iodine-free butanedithiol not only prevents the excited polymer from reacting with the photolysis fragment of DIO but also suppresses the degradation of excited PTB7-Th caused by the synergistic effect between the fullerene and oxygen, by extracting free/trapped PC71BM from the donor phase. Eventually, the film prepared with 1,4-butanedithiol shows higher stability than the film prepared without any additive, and much better than the film with DIO, in macro-/micromorphology, light absorption, and device performance.

  12. [An improved N-FINDR endmember extraction algorithm based on manifold learning and spatial information].

    Science.gov (United States)

    Tang, Xiao-yan; Gao, Kun; Ni, Guo-qiang; Zhu, Zhen-yu; Cheng, Hao-bo

    2013-09-01

    An improved N-FINDR endmember extraction algorithm combining manifold learning and spatial information is presented under nonlinear mixing assumptions. Firstly, adaptive local tangent space alignment is adopted to seek potential intrinsic low-dimensional structures of hyperspectral high-dimensional data and reduce the original data to a low-dimensional space. Secondly, spatial preprocessing is applied by enhancing each pixel vector in spatially homogeneous areas, according to the continuity of the spatial distribution of the materials. Finally, endmembers are extracted by looking for the largest simplex volume. The proposed method can increase the precision of endmember extraction by addressing the nonlinearity of hyperspectral data and taking advantage of spatial information. Experimental results on simulated and real hyperspectral data demonstrate that the proposed approach outperformed the geodesic simplex volume maximization (GSVM), vertex component analysis (VCA) and spatial preprocessing N-FINDR (SPPNFINDR) methods.
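
    The simplex-volume criterion at the heart of all N-FINDR variants is easy to state in NumPy. The sketch below assumes pixels already reduced to p-1 dimensions (the paper uses adaptive local tangent space alignment for this; plain PCA is a common substitute) and shows one greedy swap pass:

        import math
        import numpy as np

        def simplex_volume(endmembers):
            # endmembers: (p, p-1) array, one candidate endmember per row
            p = endmembers.shape[0]
            E = np.vstack([np.ones((1, p)), endmembers.T])   # (p, p) matrix
            return abs(np.linalg.det(E)) / math.factorial(p - 1)

        def nfindr_pass(endmembers, pixels):
            # replace an endmember by a pixel whenever the simplex grows
            best = endmembers.copy()
            for i in range(best.shape[0]):
                for x in pixels:
                    trial = best.copy()
                    trial[i] = x
                    if simplex_volume(trial) > simplex_volume(best):
                        best = trial
            return best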

  13. A method of building information extraction based on mathematical morphology and multiscale

    Science.gov (United States)

    Li, Jing-wen; Wang, Ke; Zhang, Zi-ping; Xue, Long-li; Yin, Shou-qiang; Zhou, Song

    2015-12-01

    With a view to monitoring changes of buildings on the Earth's surface, and by analyzing the distribution characteristics of buildings in remote sensing images, this paper combines multi-scale image segmentation with the advantages of mathematical morphology and proposes a segmentation method for high resolution remote sensing images based on multiple scales combined with mathematical morphology. A multiple fuzzy classification method and a shadow-based auxiliary method are then used to extract building information. Compared with k-means classification and the traditional maximum likelihood classification method, the experimental results show that the object-based multi-scale morphological segmentation and extraction method can accurately extract building structures and yield clearer classification data, providing a basis and theoretical support for the intelligent monitoring of Earth observation data.

  14. Extraction and Network Sharing of Forest Vegetation Information based on SVM

    Directory of Open Access Journals (Sweden)

    Zhang Hannv

    2013-05-01

    Full Text Available The support vector machine (SVM) is a relatively new method of data mining which handles regression problems (time series analysis), pattern recognition (classification, discriminant analysis) and many other issues very well. In recent years, SVM has been widely used in computer classification and recognition of remote sensing images. Based on Landsat TM image data, this paper uses an SVM-based classification method to extract the forest cover information of the Dahuanggou tree farm in the Changbai Mountain area, and compares it with conventional maximum likelihood classification. The results show that the extraction accuracy of forest information based on the support vector machine, with Kappa values of 0.9810, 0.9716 and 0.9753, exceeds the extraction accuracy of the maximum likelihood method (MLC, Kappa value 0.9634); the method has good maneuverability and practicality.
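
    For readers who want to reproduce the comparison, the SVM classification and the Kappa statistic are both available in scikit-learn; the band values and labels below are placeholders:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.metrics import cohen_kappa_score

        X = np.random.rand(200, 6)            # e.g. six Landsat TM band values
        y = np.random.randint(0, 2, 200)      # 1 = forest, 0 = other

        clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X[:150], y[:150])
        pred = clf.predict(X[150:])
        print("Kappa:", cohen_kappa_score(y[150:], pred))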

  15. OpenCV-Based Nanomanipulation Information Extraction and the Probe Operation in SEM

    Directory of Open Access Journals (Sweden)

    Dongjie Li

    2015-02-01

    Full Text Available For an established telenanomanipulation system, methods of extracting location information and strategies for probe operation were studied in this paper. First, OpenCV machine-learning algorithms were used to extract location information from SEM images, so that nanowires and the probe in SEM images can be tracked automatically and the region of interest (ROI) can be marked quickly. The locations of the nanowire and the probe can then be extracted from the ROI. To study the probe operation strategy, the Van der Waals force between the probe and a nanowire was computed to obtain the relevant operating parameters. With these parameters, the nanowire can be pre-operated in a 3D virtual environment and an optimal path for the probe can be obtained. The actual probe then runs automatically under the telenanomanipulation system's control. Finally, experiments were carried out to verify these methods, and the results show that the designed methods achieve the expected effect.
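
    One standard OpenCV way to localize a probe tip in an SEM frame is template matching, sketched below; the file names are placeholders, and the system described uses OpenCV's machine-learning components rather than exactly this:

        import cv2

        frame = cv2.imread("sem_frame.png", cv2.IMREAD_GRAYSCALE)      # placeholder
        template = cv2.imread("probe_tip.png", cv2.IMREAD_GRAYSCALE)   # placeholder

        result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > 0.8:                     # confidence threshold
            x, y = max_loc                    # top-left corner of the match
            h, w = template.shape
            print("probe ROI:", (x, y, w, h))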

  16. Effective Information Extraction Framework for Heterogeneous Clinical Reports Using Online Machine Learning and Controlled Vocabularies

    Science.gov (United States)

    Zheng, Shuai; Ghasemzadeh, Nima; Hayek, Salim S; Quyyumi, Arshed A

    2017-01-01

    Background: Extracting structured data from narrated medical reports is challenged by the complexity of heterogeneous structures and vocabularies and often requires significant manual effort. Traditional machine-based approaches lack the capability to incorporate user feedback for improving the extraction algorithm in real time. Objective: Our goal was to provide a generic information extraction framework that can support diverse clinical reports and enables a dynamic interaction between a human and a machine that produces highly accurate results. Methods: A clinical information extraction system, IDEAL-X, has been built on top of online machine learning. It processes one document at a time, and user interactions are recorded as feedback to update the learning model in real time. The updated model is used to predict values for extraction in subsequent documents. Once prediction accuracy reaches a user-acceptable threshold, the remaining documents may be batch processed. A customizable controlled vocabulary may be used to support extraction. Results: Three datasets were used for experiments based on report styles: 100 cardiac catheterization procedure reports, 100 coronary angiographic reports, and 100 integrated reports, each combining a history and physical report, discharge summary, outpatient clinic notes, outpatient clinic letter, and inpatient discharge medication report. Data extraction was performed by 3 methods: online machine learning, controlled vocabularies, and a combination of these. The system delivers results with F1 scores greater than 95%. Conclusions: IDEAL-X adopts a unique online machine learning-based approach combined with controlled vocabularies to support data extraction for clinical reports. The system can quickly learn and improve, thus it is highly adaptable. PMID:28487265

  17. Effective Information Extraction Framework for Heterogeneous Clinical Reports Using Online Machine Learning and Controlled Vocabularies.

    Science.gov (United States)

    Zheng, Shuai; Lu, James J; Ghasemzadeh, Nima; Hayek, Salim S; Quyyumi, Arshed A; Wang, Fusheng

    2017-05-09

    Extracting structured data from narrated medical reports is challenged by the complexity of heterogeneous structures and vocabularies and often requires significant manual effort. Traditional machine-based approaches lack the capability to take user feedbacks for improving the extraction algorithm in real time. Our goal was to provide a generic information extraction framework that can support diverse clinical reports and enables a dynamic interaction between a human and a machine that produces highly accurate results. A clinical information extraction system IDEAL-X has been built on top of online machine learning. It processes one document at a time, and user interactions are recorded as feedbacks to update the learning model in real time. The updated model is used to predict values for extraction in subsequent documents. Once prediction accuracy reaches a user-acceptable threshold, the remaining documents may be batch processed. A customizable controlled vocabulary may be used to support extraction. Three datasets were used for experiments based on report styles: 100 cardiac catheterization procedure reports, 100 coronary angiographic reports, and 100 integrated reports-each combines history and physical report, discharge summary, outpatient clinic notes, outpatient clinic letter, and inpatient discharge medication report. Data extraction was performed by 3 methods: online machine learning, controlled vocabularies, and a combination of these. The system delivers results with F1 scores greater than 95%. IDEAL-X adopts a unique online machine learning-based approach combined with controlled vocabularies to support data extraction for clinical reports. The system can quickly learn and improve, thus it is highly adaptable.
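
    The document-at-a-time loop such a system rests on can be sketched with scikit-learn's partial_fit as a stand-in for IDEAL-X's internal model; the field labels and sample sentences are invented:

        from sklearn.feature_extraction.text import HashingVectorizer
        from sklearn.linear_model import SGDClassifier

        vec = HashingVectorizer(n_features=2**16)
        clf = SGDClassifier(loss="log_loss")
        classes = ["ejection_fraction", "stenosis", "other"]   # hypothetical fields

        stream = [("LVEF estimated at 55 percent", "ejection_fraction"),
                  ("LAD stenosis of 70 percent", "stenosis")]

        first = True
        for text, label in stream:            # one document at a time
            X = vec.transform([text])
            if not first:                     # predict, let the user correct it
                print(text, "->", clf.predict(X)[0])
            clf.partial_fit(X, [label], classes=classes if first else None)
            first = False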

  18. A construction scheme of web page comment information extraction system based on frequent subtree mining

    Science.gov (United States)

    Zhang, Xiaowen; Chen, Bingfeng

    2017-08-01

    Based on a frequent subtree mining algorithm, this paper proposes a construction scheme for a web page comment information extraction system based on frequent subtree mining, referred to as the FSM system. The overall system architecture and its modules are introduced briefly, the core of the system is described in detail, and finally a system prototype is given.

  19. An Information Extraction Core System for Real World German Text Processing

    CERN Document Server

    Neumann, G; Baur, J; Becker, M; Braun, C

    1997-01-01

    This paper describes SMES, an information extraction core system for real world German text processing. The basic design criterion of the system is to provide a set of basic, powerful, robust, and efficient natural language components and generic linguistic knowledge sources which can easily be customized for processing different tasks in a flexible manner.

  20. User-centered evaluation of Arizona BioPathway: an information extraction, integration, and visualization system.

    Science.gov (United States)

    Quiñones, Karin D; Su, Hua; Marshall, Byron; Eggers, Shauna; Chen, Hsinchun

    2007-09-01

    Explosive growth in biomedical research has made automated information extraction, knowledge integration, and visualization increasingly important and critically needed. The Arizona BioPathway (ABP) system extracts and displays biological regulatory pathway information from the abstracts of journal articles. This study uses relations extracted from more than 200 PubMed abstracts presented in a tabular and graphical user interface with built-in search and aggregation functionality. This paper presents a task-centered assessment of the usefulness and usability of the ABP system focusing on its relation aggregation and visualization functionalities. Results suggest that our graph-based visualization is more efficient in supporting pathway analysis tasks and is perceived as more useful and easier to use as compared to a text-based literature-viewing method. Relation aggregation significantly contributes to knowledge-acquisition efficiency. Together, the graphic and tabular views in the ABP Visualizer provide a flexible and effective interface for pathway relation browsing and analysis. Our study contributes to pathway-related research and biological information extraction by assessing the value of a multiview, relation-based interface that supports user-controlled exploration of pathway information across multiple granularities.

  1. Information Extraction to Generate Visual Simulations of Car Accidents from Written Descriptions

    NARCIS (Netherlands)

    Nugues, P.; Dupuy, S.; Egges, A.

    2003-01-01

    This paper describes a system to create animated 3D scenes of car accidents from written reports. The text-to-scene conversion process consists of two stages. An information extraction module creates a tabular description of the accident and a visual simulator generates and animates the scene. We

  2. Design of Web Information Extraction System

    Institute of Scientific and Technical Information of China (English)

    刘斌; 张晓婧

    2013-01-01

    In order to obtain the scattered information hidden in Web pages, a Web information extraction system is designed. The system first uses a modified HITS topic-selection algorithm for information collection; it then performs data pre-processing on the HTML document structure of the Web pages; finally, an XPath absolute-path generation algorithm based on the DOM tree is used to obtain the XPath expression of each marked node, and extraction rules are written in the XPath language combined with XSLT, producing a structured database or XML file and thereby achieving the location and extraction of Web information. Extraction experiments on a shopping website show that the system works well and can batch-extract similar Web pages.
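
    The XPath extraction step generalizes to any DOM toolkit; the sketch below uses Python's lxml for brevity (the system itself pairs XPath with XSLT), and the sample HTML and expressions are illustrative:

        from lxml import html

        page = """
        <html><body>
          <div class="item"><span class="name">Kettle</span>
                            <span class="price">19.90</span></div>
          <div class="item"><span class="name">Teapot</span>
                            <span class="price">24.50</span></div>
        </body></html>
        """
        doc = html.fromstring(page)
        for item in doc.xpath('//div[@class="item"]'):
            name = item.xpath('./span[@class="name"]/text()')[0]
            price = item.xpath('./span[@class="price"]/text()')[0]
            print(name, price)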

  3. Additional information on heavy quark parameters from charged lepton forward-backward asymmetry

    Energy Technology Data Exchange (ETDEWEB)

    Turczyk, Sascha [PRISMA Cluster of Excellence & Mainz Institute for Theoretical Physics,Johannes Gutenberg University,55099 Mainz (Germany)

    2016-04-20

    The determination of $|V_{cb}|$ using inclusive and exclusive (semi-)leptonic decays exhibits a long-standing tension of varying ${\cal O}(3\sigma)$ significance. For the inclusive determination the decay rate is expanded in $1/m_b$ using the heavy quark expansion, and from moments of physical observables the higher order heavy quark parameters are extracted from experimental data in order to assess $|V_{cb}|$ from the normalisation. The drawbacks are high correlations, both theoretical and experimental, among these observables. We will scrutinise the inclusive determination in order to add a new and less correlated observable. This observable is related to the decay angle of the charged lepton and can help to constrain the important heavy quark parameters in a new way. It may validate the currently seemingly stable extraction of $|V_{cb}|$ from inclusive decays or hint at possible issues, and may even be sensitive to New Physics operators.

  4. Additional Information on Heavy Quark Parameters from Charged Lepton Forward-Backward Asymmetry

    CERN Document Server

    Turczyk, Sascha

    2016-01-01

    The determination of $|V_{cb}|$ using inclusive and exclusive (semi-)leptonic decays exhibits a long-standing tension of varying ${\cal O}(3\sigma)$ significance. For the inclusive determination the decay rate is expanded in $1/m_b$ using the heavy quark expansion, and from moments of physical observables the higher order heavy quark parameters are extracted from experimental data in order to assess $|V_{cb}|$ from the normalisation. The drawbacks are high correlations, both theoretical and experimental, among these observables. We will scrutinise the inclusive determination in order to add a new and less correlated observable. This observable is related to the decay angle of the charged lepton and can help to constrain the important heavy quark parameters in a new way. It may validate the currently seemingly stable extraction of $|V_{cb}|$ from inclusive decays or hint at possible issues, and may even be sensitive to New Physics operators.
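
    For reference, the charged-lepton forward-backward asymmetry mentioned in both versions of this record is conventionally defined from the lepton decay angle $\theta_\ell$ as (a standard definition, not a formula quoted from the paper):

        $$A_{FB} = \frac{\Gamma(\cos\theta_\ell > 0) - \Gamma(\cos\theta_\ell < 0)}{\Gamma(\cos\theta_\ell > 0) + \Gamma(\cos\theta_\ell < 0)}$$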

  5. Information Extraction for System-Software Safety Analysis: Calendar Year 2008 Year-End Report

    Science.gov (United States)

    Malin, Jane T.

    2009-01-01

    This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.

  6. A Framework For Extracting Information From Web Using VTD-XML's XPath

    Directory of Open Access Journals (Sweden)

    C. Subhashini

    2012-03-01

    Full Text Available The exponential growth of the WWW (World Wide Web) is the cause of a vast pool of information as well as several challenges posed by it, such as extracting potentially useful and unknown information from the WWW. Many websites are built with HTML, and because of its unstructured layout it is difficult to obtain effective and precise data from the web using HTML. The advent of XML (Extensible Markup Language) offers a better solution for extracting useful knowledge from the WWW. Web data extraction based on XML technology solves this problem because XML is a general-purpose specification for exchanging data over the Web. In this paper, a framework is suggested to extract data from the web. The semi-structured data in a web page is transformed into well-structured data using standard XML technologies, and a new parsing technique called extended VTD-XML (Virtual Token Descriptor for XML), together with an XPath implementation, is used to extract data from the well-structured XML document.

  7. Framework for automatic information extraction from research papers on nanocrystal devices

    Directory of Open Access Journals (Sweden)

    Thaer M. Dieb

    2015-09-01

    Full Text Available To support nanocrystal device development, we have been working on a computational framework to utilize information in research papers on nanocrystal devices. We developed an annotated corpus called "NaDev" (Nanocrystal Device Development) for this purpose. We also proposed an automatic information extraction system called "NaDevEx" (Nanocrystal Device Automatic Information Extraction Framework). NaDevEx aims at extracting information from research papers on nanocrystal devices using the NaDev corpus and machine-learning techniques. However, the characteristics of NaDevEx were not examined in detail. In this paper, we conduct system evaluation experiments for NaDevEx using the NaDev corpus. We discuss three main issues: system performance, compared with human annotators; the effect of paper type (synthesis or characterization) on system performance; and the effects of domain knowledge features (e.g., a chemical named entity recognition system and a list of names of physical quantities) on system performance. We found that overall system performance was 89% in precision and 69% in recall. If we consider identification of terms that intersect with correct terms for the same information category as correct identification, i.e., loose agreement (in many cases, appropriate head nouns such as temperature or pressure loosely match between two terms), the overall performance is 95% in precision and 74% in recall. The system performance is almost comparable with the results of human annotators for information categories with rich domain knowledge information (source material). However, for other information categories, given the relatively large number of terms that exist only in one paper, recall of individual information categories is not high (39-73%); however, precision is better (75-97%). The average performance for synthesis papers is better than that for characterization papers because of the lack of training examples for

  8. Framework for automatic information extraction from research papers on nanocrystal devices.

    Science.gov (United States)

    Dieb, Thaer M; Yoshioka, Masaharu; Hara, Shinjiro; Newton, Marcus C

    2015-01-01

    To support nanocrystal device development, we have been working on a computational framework to utilize information in research papers on nanocrystal devices. We developed an annotated corpus called " NaDev" (Nanocrystal Device Development) for this purpose. We also proposed an automatic information extraction system called "NaDevEx" (Nanocrystal Device Automatic Information Extraction Framework). NaDevEx aims at extracting information from research papers on nanocrystal devices using the NaDev corpus and machine-learning techniques. However, the characteristics of NaDevEx were not examined in detail. In this paper, we conduct system evaluation experiments for NaDevEx using the NaDev corpus. We discuss three main issues: system performance, compared with human annotators; the effect of paper type (synthesis or characterization) on system performance; and the effects of domain knowledge features (e.g., a chemical named entity recognition system and list of names of physical quantities) on system performance. We found that overall system performance was 89% in precision and 69% in recall. If we consider identification of terms that intersect with correct terms for the same information category as the correct identification, i.e., loose agreement (in many cases, we can find that appropriate head nouns such as temperature or pressure loosely match between two terms), the overall performance is 95% in precision and 74% in recall. The system performance is almost comparable with results of human annotators for information categories with rich domain knowledge information (source material). However, for other information categories, given the relatively large number of terms that exist only in one paper, recall of individual information categories is not high (39-73%); however, precision is better (75-97%). The average performance for synthesis papers is better than that for characterization papers because of the lack of training examples for characterization papers

  9. EFFECTS OF ARTICHOKE (CYNARA SCOLYMUS L.) EXTRACT ADDITION ON MICROBIOLOGICAL AND PHYSICO-CHEMICAL PROPERTIES OF PROBIOTIC YOGURT

    Directory of Open Access Journals (Sweden)

    Jalal Ehsani

    2015-06-01

    Full Text Available In this study, the effects of the addition of artichoke (Cynara scolymus L.) leaf extract into yogurt (0 or 0.5%) on biochemical parameters (pH, titratable acidity) and the viability of probiotic bacteria (Lactobacillus acidophilus LA-5, Bifidobacterium lactis BB-12) during fermentation and over 28 days of refrigerated storage (4°C) were investigated. Moreover, the amounts of syneresis, total phenolic content, antioxidant activity and sensory attributes of the yogurts at the end of fermentation were assessed. Yogurts contained either the two yogurt bacteria (Streptococcus thermophilus and Lactobacillus delbrueckii ssp. bulgaricus: ABY) or only S. thermophilus (ABT) as adjunct culture to the probiotics. Yogurts containing Cynara scolymus L. (ABT-C and ABY-C) had a faster acidity increase, shorter incubation time and greater final titratable acidity than the control yogurts (ABT and ABY). Also, yogurts containing Cynara scolymus L. had lower syneresis, higher total phenolic content and greater antioxidant activity. ABT-C yogurt had the greatest viability of probiotics. In the sensory evaluation, the highest total score was generally obtained by ABT yogurt, whereas the lowest total score belonged to ABT-C yogurt.

  10. Antioxidant and bacteriostatic effects of the addition of extract of quillay polyphenols (Quillaja saponaria) in the marinade of broiler chicken

    Directory of Open Access Journals (Sweden)

    MA Fellenberg

    2011-03-01

    Full Text Available The nutritional and sensorial characteristics of chicken meat can be affected by oxidative rancidity, a process of lipid oxidation in the meat that constitutes one of the main forms of food deterioration. This problem may be prevented or reduced by adding antioxidants to the meat during the process of marination. In the present study, the addition of a polyphenol-rich quillay extract (QLPerm®) at 5 levels (0, 0.05, 0.10, 0.15, and 0.20%) to the marinade of chicken meat was evaluated. The marinated meat was stored under refrigeration (6 °C) for 0, 2, 4, 6, and 8 days. Basal and induced lipid oxidation was evaluated by TBARS analysis. Microbiological quality was assessed by total coliform and mesophilic aerobe counts. The application of this natural antioxidant reduced, in some cases, lipid oxidation of the meat, improved its microbiological quality, and did not leave any perceivable residues as analyzed by a sensory evaluation panel.

  11. Clinic expert information extraction based on domain model and block importance model.

    Science.gov (United States)

    Zhang, Yuanpeng; Wang, Li; Qian, Danmin; Geng, Xingyun; Yao, Dengfu; Dong, Jiancheng

    2015-11-01

    To extract expert clinic information from the Deep Web, there are two challenges to face. The first is to make a judgment on forms. A novel method is proposed based on a domain model, which is a tree structure constructed from the attributes of query interfaces. With this model, query interfaces can be classified to a domain and filled in with domain keywords. Another challenge is to extract information from the response Web pages indexed by the query interfaces. To filter the noisy information on a Web page, a block importance model is proposed, in which both content and spatial features are taken into account. The experimental results indicate that the domain model yields a precision 4.89% higher than that of the rule-based method, whereas the block importance model yields an F1 measure 10.5% higher than that of the XPath method.

  12. Extraction of Hidden Social Networks from Wiki-Environment Involved in Information Conflict

    Directory of Open Access Journals (Sweden)

    Rasim M. Alguliyev

    2016-03-01

    Full Text Available Social network analysis is a widely used technique to analyze relationships among wiki-users in Wikipedia. In this paper a method to identify hidden social networks participating in information conflicts in a wiki-environment is proposed. In particular, we describe how text clustering techniques can be used to extract the hidden social networks of wiki-users involved in an information conflict. By clustering the unstructured text of articles that caused an information conflict, we create the social network of wiki-users. For clustering the conflict articles, a hybrid weighted fuzzy-c-means method is proposed.
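
    Plain fuzzy c-means (the base of the hybrid weighted variant proposed) is short enough to sketch in NumPy; the document vectors X would come from, e.g., TF-IDF of the conflict articles:

        import numpy as np

        def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            U = rng.random((len(X), c))
            U /= U.sum(axis=1, keepdims=True)   # memberships, rows sum to 1
            for _ in range(iters):
                W = U ** m
                centers = (W.T @ X) / W.sum(axis=0)[:, None]
                d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
                U = 1.0 / (d + 1e-12) ** (2.0 / (m - 1.0))
                U /= U.sum(axis=1, keepdims=True)
            return centers, U

        X = np.random.rand(30, 5)               # placeholder document vectors
        centers, memberships = fuzzy_cmeans(X)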

  13. Extraction of spatial information from remotely sensed image data - an example: GLORIA sidescan sonar images

    Science.gov (United States)

    Chavez, Pat S.; Gardner, James V.

    1994-01-01

    A method to extract spatial amplitude and variability information from remotely sensed digital imaging data is presented. High Pass Filters (HPFs) are used to compute both a Spatial Amplitude Image/Index (SAI) and Spatial Variability Image/Index (SVI) at the local, intermediate, and regional scales. Used as input to principal component analysis and automatic clustering classification, the results indicate that spatial information at scales other than local is useful in the analysis of remotely sensed data. The resultant multi-spatial data set allows the user to study and analyze an image based more on the complete spatial characteristics of an image than only local textural information.
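
    A local-scale version of the two layers can be sketched with a simple box high-pass filter; larger window sizes would give the intermediate and regional scales:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def spatial_layers(img, size=3):
            img = img.astype(float)
            hpf = img - uniform_filter(img, size=size)    # high-pass detail
            sai = np.abs(hpf)                             # spatial amplitude image
            # local variance of the detail as a spatial variability image
            svi = uniform_filter(hpf**2, size=size) - uniform_filter(hpf, size=size)**2
            return sai, svi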

  14. 14 CFR 121.317 - Passenger information requirements, smoking prohibitions, and additional seat belt requirements.

    Science.gov (United States)

    2010-01-01

    ... prohibitions, and additional seat belt requirements. 121.317 Section 121.317 Aeronautics and Space FEDERAL... prohibitions, and additional seat belt requirements. (a) Except as provided in paragraph (l) of this section... paragraph (l) of this section, the “Fasten Seat Belt” sign shall be turned on during any movement on...

  15. Integrating semantic information into multiple kernels for protein-protein interaction extraction from biomedical literatures.

    Directory of Open Access Journals (Sweden)

    Lishuang Li

    Full Text Available Protein-Protein Interaction (PPI) extraction is an important task in biomedical information extraction. Presently, many machine learning methods for PPI extraction have achieved promising results. However, the performance is still not satisfactory. One reason is that semantic resources have been largely ignored. In this paper, we propose a multiple-kernel learning-based approach to extract PPIs, combining a feature-based kernel, a tree kernel and a semantic kernel. In particular, we extend the shortest path-enclosed tree (SPT) kernel by a dynamic extension strategy to retrieve richer syntactic information. Our semantic kernel calculates the protein-protein pair similarity and the context similarity based on two semantic resources: WordNet and Medical Subject Headings (MeSH). We evaluate our method with a Support Vector Machine (SVM) and achieve an F-score of 69.40% and an AUC of 92.00%, which shows that our method outperforms most of the state-of-the-art systems by integrating semantic information.
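
    Multiple-kernel learning in its simplest form combines precomputed Gram matrices as a weighted sum before training an SVM; the toy matrices below stand in for the feature, tree and semantic kernels, and the weights are illustrative:

        import numpy as np
        from sklearn.svm import SVC

        n = 40
        rng = np.random.default_rng(0)
        A, B, C = (rng.random((n, 8)) for _ in range(3))
        K_feat, K_tree, K_sem = A @ A.T, B @ B.T, C @ C.T   # toy Gram matrices
        y = rng.integers(0, 2, n)

        K = 0.5 * K_feat + 0.3 * K_tree + 0.2 * K_sem       # convex combination
        clf = SVC(kernel="precomputed").fit(K, y)
        print(clf.predict(K[:5]))   # rows: kernel values of test vs. training items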

  16. Orchard spatial information extraction from SPOT-5 image based on CART model

    Science.gov (United States)

    Li, Deyi; Zhang, Shuwen

    2009-07-01

    Orchards are an important agricultural industry and a typical land use type in the Shandong peninsula of China. This article focuses on automatic extraction of orchard information using SPOT-5 imagery. After analyzing the spectrum of every object class, we propose a CART model based on sub-region and hierarchy theory, exploring spectral, texture and topographic attributes. The whole area was divided into a coastal plain region and a hill region based on SRTM data, and each was extracted separately. The accuracy reached 86.40%, much higher than that of the supervised classification method.
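
    scikit-learn's decision tree implements CART, so a rule model of this kind can be prototyped quickly; the attribute columns are illustrative:

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        # columns: [NDVI, texture, elevation] per pixel (placeholder values)
        X = np.random.rand(300, 3)
        y = np.random.randint(0, 2, 300)      # 1 = orchard, 0 = other

        cart = DecisionTreeClassifier(max_depth=3).fit(X, y)
        print(export_text(cart, feature_names=["ndvi", "texture", "elevation"]))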

  17. Extracting directed information flow networks: an application to genetics and semantics

    CERN Document Server

    Masucci, A P; Hernández-García, E; Kalampokis, A

    2010-01-01

    We introduce a general method to infer the directional information flow between populations whose elements are described by n-dimensional vectors of symbolic attributes. The method is based on the Jensen-Shannon divergence and on the Shannon entropy and has a wide range of applications. We show here the results of two applications: first extracting the network of genetic flow between the meadows of the seagrass Posidonia oceanica, where the meadow elements are specified by sets of microsatellite markers, then extracting the semantic flow network from a set of Wikipedia pages, showing the semantic channels between different areas of knowledge.
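
    The Jensen-Shannon divergence the method builds on is a few lines of NumPy; inferring the direction of flow involves further steps not shown here:

        import numpy as np

        def jensen_shannon(p, q):
            p, q = np.asarray(p, float), np.asarray(q, float)
            p, q = p / p.sum(), q / q.sum()
            m = 0.5 * (p + q)
            def h(x):                         # Shannon entropy in bits
                x = x[x > 0]
                return -np.sum(x * np.log2(x))
            return h(m) - 0.5 * h(p) - 0.5 * h(q)

        print(jensen_shannon([9, 1, 0], [4, 3, 3]))   # 0 only when p == q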

  18. A theoretical extraction scheme of transport information based on exclusion models

    Institute of Scientific and Technical Information of China (English)

    Chen Hua; Du Lei; Qu Cheng-Li; Li Wei-Hua; He Liang; Chen Wen-Hao; Sun Peng

    2010-01-01

    In order to explore how to extract more transport information from current fluctuations, a theoretical extraction scheme is presented for a single barrier structure based on exclusion models, which include the counter-flows model and the tunnel model. The first four cumulants of these two exclusion models are computed for a single barrier structure, and their characteristics are obtained. A scheme using the first three cumulants is devised to check whether a transport process follows the counter-flows model, the tunnel model, or neither. Time series generated by Monte Carlo techniques are adopted to validate the extraction procedure, and the result is reasonable.

  19. Study of total dry matter and protein extraction from canola meal as affected by the pH, salt addition and use of zeta-potential/turbidimetry analysis to optimize the extraction conditions.

    Science.gov (United States)

    Gerzhova, Alina; Mondor, Martin; Benali, Marzouk; Aider, Mohammed

    2016-06-15

    Total dry matter and proteins were differentially and preferentially extracted from canola meal (CM) under different conditions. The pH of the extraction medium, the CM concentration and the salt concentration were found to influence the extractability of total dry matter and proteins from CM differently, with the pH of the extracting medium having the most significant effect. The maximal total dry matter extractability (42.8±1.18%) was obtained with 5% CM at pH 12 without salt addition, whereas the maximal total protein extractability (58.12±1.47%) was obtained with 15% CM under the same conditions. The minimal extractability of dry matter (26.63±0.67%) was obtained with 5% CM at pH 10 without added salt, and the minimal protein extractability was observed in 10% CM at pH 10, in 0.01 NaCl. Turbidity and ζ-potential measurements indicated that pH 5 was the optimum condition for the highest protein extraction yield. SDS-PAGE analysis showed that salt addition contributes to higher solubility of canola proteins, specifically the cruciferin fraction, although it reduces napin extraction.

  20. 3D local feature BKD to extract road information from mobile laser scanning point clouds

    Science.gov (United States)

    Yang, Bisheng; Liu, Yuan; Dong, Zhen; Liang, Fuxun; Li, Bijun; Peng, Xiangyang

    2017-08-01

    Extracting road information from point clouds obtained through mobile laser scanning (MLS) is essential for autonomous vehicle navigation, and has hence garnered a growing amount of research interest in recent years. However, the performance of such systems is seriously affected by varying point density and noise. This paper proposes a novel three-dimensional (3D) local feature called the binary kernel descriptor (BKD) to extract road information from MLS point clouds. The BKD consists of Gaussian kernel density estimation and binarization components to encode the shape and intensity information of the 3D point clouds, which are fed to a random forest classifier to extract curbs and markings on the road. These are then used to derive road information, such as the number of lanes, the lane width, and intersections. In experiments, the precision and recall of the proposed feature for the detection of curbs and road markings on an urban dataset and a highway dataset were as high as 90%, showing that the BKD is accurate and robust against varying point density and noise.

  1. Extraction of palaeochannel information from remote sensing imagery in the east of Chaohu Lake, China

    Institute of Scientific and Technical Information of China (English)

    Xinyuan WANG; Zhenya GUO; Li WU; Cheng ZHU; Hui HE

    2012-01-01

    Palaeochannels are deposits of unconsolidated sediments or semi-consolidated sedimentary rocks deposited in ancient, currently inactive river and stream channel systems. They are distinct from the overbank deposits of currently active river channels, including ephemeral water courses which do not regularly flow. We introduce a spectral-characteristics-based palaeochannel information extraction model for SPOT-5 imagery of a particular time phase, built on an analysis of the remote sensing mechanism and spectral characteristics of palaeochannels, on their distinction from the spatial distribution and spectral features of currently active river channels, and on the establishment of remote sensing diagnostic features of palaeochannels in imagery. The model follows the process: supervised classification → farmland masking and principal component analysis → underground palaeochannel information extraction → information combination → palaeochannel system image. The Zhegao River Valley in the east of Chaohu Lake was selected as the study area, and SPOT-5 imagery was used as the data source. The method was successfully applied to extract palaeochannel information with satisfactory results, which can provide a good reference for regional remote sensing archaeology and neotectonic research. However, the applicability of this method needs to be tested further in other areas, as the spatial characteristics and spectral response of palaeochannels might differ.

  2. A study of extraction of petal region on flower picture using HSV color information

    Science.gov (United States)

    Yanagihara, Yoshio; Nakayama, Ryo

    2014-01-01

    Discriminating the kind of a flower or recognizing its name is a useful and interesting application, for example when retrieving a flower database. As the contour line of the petal region is useful for such problems, it is important to extract the precise petal region from a flower picture. In this paper, a method which extracts petal regions in a flower picture using HSV color information is proposed, so as to discriminate the kind of flower. The experiments show that the proposed method can extract petal regions at a success rate of about 90%, which is considered satisfactory. In detail, the success rates for one-colored flowers, plural-colored flowers, and white flowers are about 98%, 85%, and 83%, respectively.
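
    HSV-based petal segmentation reduces to a color-range threshold; the sketch below uses OpenCV, with an illustrative range for reddish petals and a placeholder file name (the study's actual thresholds are not reproduced here):

        import cv2
        import numpy as np

        img = cv2.imread("flower.jpg")                   # placeholder path
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

        lower = np.array([0, 80, 60])                    # illustrative bounds
        upper = np.array([12, 255, 255])
        mask = cv2.inRange(hsv, lower, upper)

        petals = cv2.bitwise_and(img, img, mask=mask)    # keep petal pixels only
        cv2.imwrite("petals.png", petals)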

  3. Extraction process of palm kernel cake as a source of mannan for feed additive on poultry diet

    Science.gov (United States)

    Tafsin, M.; Hanafi, N. D.; Yusraini, E.

    2017-05-01

    Palm kernel cake (PKC) is a by-product of palm kernel oil extraction and is found in large quantities in Indonesia. The inclusion of PKC in poultry diets is limited due to nutritional problems such as anti-nutritional properties (mannan). On the other hand, mannan-containing polysaccharides play various biological roles, particularly in enhancing the immune response and controlling pathogens in poultry. The research objective was to establish an extraction process for PKC, and the work was conducted at the Animal Nutrition and Feed Science Laboratory, Agricultural Faculty, University of Sumatera Utara. Various extraction methods were used in this experiment, including fraction analysis using 7 sieves, followed by water and acetic acid extraction. The results indicated that PKC had different particle sizes according to sieve size and was dominated by the 850 μm particle size. The analysis of sugar content indicated that each particle size behaved differently under hot water extraction: the 180-850 μm particle sizes had higher sugar content than coarse PKC (2000-3000 μm). The total sugar recovered varied between 0.9-3.2% of the PKC extracted. A grinding treatment followed by hot water extraction (100-120 °C, 1 h) increased the total sugar content beyond the previous treatments, reaching 8% of the PKC extracted. The use of acetic acid decreased the total amount of sugar extracted from PKC. It is concluded that treatment at high temperature (110-120 °C) for 1 h gives the highest yield of sugar extracted from PKC.

  4. Web Page Information Extraction Technology

    Institute of Scientific and Technical Information of China (English)

    邵振凯

    2013-01-01

    With the rapid development of the Internet, the amount of information in Web pages has become very large, and how to quickly and efficiently search for and find valuable information has become an important aspect of Web research. To this end, a tag extraction method is proposed. Web pages are optimized into well-formed HTML documents with JTidy and parsed into a DOM tree. The tag extraction approach then extracts from the DOM tree the leaf-node tags that contain text content, removes the tags used to control Web page interaction and display, and uses a punctuation-based information extraction method to remove copyright notices and similar information. Extraction experiments on a number of different websites show that the tag extraction method not only has great generality but can also accurately extract the topic information of a page.

  5. Computerized analysis of isometric tension studies provides important additional information about vasomotor activity

    Directory of Open Access Journals (Sweden)

    Vincent M.B.

    1997-01-01

    Full Text Available Concentration-response curves of isometric tension studies on isolated blood vessels are obtained traditionally. Although parameters such as Imax, EC50 and pA2 may be readily calculated, this method does not provide information on the temporal profile of the responses or the actual nature of the reaction curves. Computerized data acquisition systems can be used to obtain average data that represent a new source of otherwise inaccessible information, since early and late responses may be observed separately in detail

  6. Twenty-five additional cases of trisomy 9 mosaic: Birth information, medical conditions, and developmental status.

    Science.gov (United States)

    Bruns, Deborah A; Campbell, Emily

    2015-05-01

    Limited literature exists on children and adults diagnosed with the mosaic form of trisomy 9. Data from the Tracking Rare Incidence Syndromes (TRIS) project have provided physical characteristics and medical conditions for 14 individuals. This article provides TRIS Survey results for 25 additional cases at two data points (birth and survey completion) as well as developmental status. Results confirmed a number of phenotypic features and medical conditions. A number of cardiac anomalies were reported, along with feeding and respiratory difficulties in the immediate postnatal period. Developmental status data indicated a range in functioning level, with skills up to the 36- to 48-month range. Strengths were also noted across the sample in language and communication, fine motor skills and social-emotional development. Implications for professionals caring for children with this genetic condition are offered.

  7. Multilevel spatial semantic model for urban house information extraction automatically from QuickBird imagery

    Science.gov (United States)

    Guan, Li; Wang, Ping; Liu, Xiangnan

    2006-10-01

    After an introduction to the characteristics and construction flow of spatial semantic models, the feature space and context of house information in high resolution remote sensing images are analyzed, and a house semantic network model for QuickBird imagery is constructed. Furthermore, the accuracy and practicability of the spatial semantic model are verified by extracting house information automatically from QuickBird imagery, after extracting candidate semantic nodes from the image using a grey division method, a window threshold method and the Hough transform. Sample results indicate that type coherence, shape coherence and area coherence are 96.75%, 89.5% and 88%, respectively. The extraction of houses with rectangular roofs performs best, and that of houses with herringbone and polygonal roofs is acceptable; however, the extraction of houses with round roofs is not satisfactory, and the semantic model needs further refinement for these cases to attain higher applied value.

  8. Extraction of Remote Sensing Information of Banana Under Support of 3S Technology in Guangxi Province

    Science.gov (United States)

    Yang, Xin; Sun, Han; Tan, Zongkun; Ding, Meihua

    This paper presents an automatic approach to extracting planting areas in a hilly region with mixed vegetation and frequent cloud cover, using moderate-spatial-resolution, high-temporal-resolution MODIS data for Guangxi province in southern China. Because banana growth lasts 9 to 11 months and planted areas shrink during the harvest season, maximum likelihood classification was used to extract banana planting information and its spatial distribution from multi-temporal MODIS-NDVI data over Guangxi, with banana training regions selected by GPS. Comparison of large and small banana planting regions in the monitored imagery against on-the-spot GPS surveys shows that the banana planting information in the remote sensing imagery is reliable. In this research, multi-temporal MODIS data covering the main banana growing season were received and preprocessed; NDVI temporal profiles of banana were generated; models for planting-area extraction were developed based on analysis of the temporal NDVI curves; and a spatial distribution map of banana planting areas in Guangxi in 2006 was created. The study suggests that it is possible to extract planting areas automatically from MODIS data over large areas.
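
    As a rough sketch of the classification step, the fragment below applies a Gaussian maximum-likelihood classifier (here via scikit-learn's QDA, which fits one Gaussian per class, i.e. the classic remote-sensing ML classifier) to NDVI time-series profiles; the data layout and variable names are illustrative assumptions:

        # Hypothetical sketch: per-pixel NDVI profiles classified by a
        # Gaussian maximum-likelihood classifier.
        import numpy as np
        from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

        def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
            return (nir - red) / (nir + red + 1e-9)     # guard against /0

        # train_profiles: (n_samples, n_dates) NDVI series from GPS training plots
        # train_labels:   1 = banana, 0 = other land cover
        def classify_scene(train_profiles, train_labels, scene_profiles):
            clf = QuadraticDiscriminantAnalysis()       # one Gaussian per class
            clf.fit(train_profiles, train_labels)
            return clf.predict(scene_profiles)          # flattened class map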

  9. 75 FR 62404 - Agency Information Collection Activities; Proposed Collection; Comment Request; Additional...

    Science.gov (United States)

    2010-10-08

    ... ``time and extent'' application (TEA) to be submitted to us by any party for our consideration to include... submit reports, keep records, or provide information to a third party. Section 3506(c)(2)(A) of the PRA... misbranded (2002 TEA final rule). The regulations in Sec. 330.14 state that OTC drug products introduced into...

  10. 75 FR 68608 - Information Collection; Request for Authorization of Additional Classification and Rate, Standard...

    Science.gov (United States)

    2010-11-08

    ...(c), and 5.15 (records to be kept by employers under the Fair Labor Standards Act (FLSA), 29 CFR 516... . SUPPLEMENTARY INFORMATION: A. Purpose This regulation prescribes labor standards for federally financed and assisted construction contracts subject to the Davis-Bacon and Related Acts (DBRA), as well as labor...

  11. 18 CFR 33.3 - Additional information requirements for applications involving horizontal competitive impacts.

    Science.gov (United States)

    2010-04-01

    ... transmission rights held by the potential supplier that are not committed to long-term transactions. For each... reserve existing transmission capacity needed for native load growth and network transmission customer... years. (ii) Transmission capability data must include the following information: (A) Transmission...

  12. 19 CFR 141.89 - Additional information for certain classes of merchandise.

    Science.gov (United States)

    2010-04-01

    ... Is it made on a base or platform of wood? P Does it have open toes or open heels? Q Is it made by the... analysis or mill test certificate. Iron oxide (T.D. 49989, 50107)—For iron oxide to which a reduced rate of... covering such matter, the following information: (1) Heading 4901—(a) Whether the books are:...

  13. 13 CFR 126.403 - May SBA require additional information from a HUBZone SBC?

    Science.gov (United States)

    2010-01-01

    ... information from a HUBZone SBC? 126.403 Section 126.403 Business Credit and Assistance SMALL BUSINESS... HUBZone SBC? (a) At the discretion of the D/HUB, SBA has the right to require that a HUBZone SBC submit... adverse inference from the failure of a HUBZone SBC to cooperate with a program examination or...

  14. 26 CFR 54.9802-3T - Additional requirements prohibiting discrimination based on genetic information (temporary).

    Science.gov (United States)

    2010-04-01

    ... both parents). (A) First-degree relatives include parents, spouses, siblings, and children. (B) Second... request only the minimum amount of information necessary to make a decision regarding payment. Because the results of the test are not necessary for the issuer to make a decision regarding the payment of A's...

  15. 17 CFR 229.1118 - (Item 1118) Reports and additional information.

    Science.gov (United States)

    2010-04-01

    ... 1934 AND ENERGY POLICY AND CONSERVATION ACT OF 1975-REGULATION S-K Asset-Backed Securities (Regulation... or entities under which reports about the asset-backed securities will be filed with the Securities... holders or information about the asset-backed securities will be made available in this manner. (3) If...

  16. An Useful Information Extraction using Image Mining Techniques from Remotely Sensed Image (RSI

    Directory of Open Access Journals (Sweden)

    Dr. C. Jothi Venkateswaran,

    2010-11-01

    Full Text Available Information extraction using mining techniques from remote sensing images (RSI) is rapidly gaining attention among researchers and decision makers because of its potential in application oriented studies. Knowledge discovery from images poses many interesting challenges such as preprocessing the image data set, training the data and discovering useful image patterns applicable to many new application frontiers. In the image rich domain of RSI, image mining implies the synergy of data mining and image processing technology. Such a culmination of techniques renders a valuable tool in information extraction. This also encompasses the problem of handling a large database of varied image data formats representing various levels of information such as pixel, local and regional. In the present paper, various preprocessing corrections and techniques of image mining are discussed.

  17. Different approaches for extracting information from the co-occurrence matrix.

    Directory of Open Access Journals (Sweden)

    Loris Nanni

    Full Text Available In 1979 Haralick famously introduced a method for analyzing the texture of an image: a set of statistics extracted from the co-occurrence matrix. In this paper we investigate novel sets of texture descriptors extracted from the co-occurrence matrix; in addition, we compare and combine different strategies for extending these descriptors. The following approaches are compared: the standard approach proposed by Haralick, two methods that consider the co-occurrence matrix as a three-dimensional shape, a gray-level run-length set of features and the direct use of the co-occurrence matrix projected onto a lower dimensional subspace by principal component analysis. Texture descriptors are extracted from the co-occurrence matrix evaluated at multiple scales. Moreover, the descriptors are extracted not only from the entire co-occurrence matrix but also from subwindows. The resulting texture descriptors are used to train a support vector machine and ensembles. Results show that our novel extraction methods improve the performance of standard methods. We validate our approach across six medical datasets representing different image classification problems using the Wilcoxon signed rank test. The source code used for the approaches tested in this paper will be available at: http://www.dei.unipd.it/wdyn/?IDsezione=3314&IDgruppo_pass=124&preview=.
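
    For readers who want to reproduce the baseline, the sketch below computes multi-scale co-occurrence matrices and a few classic Haralick statistics with scikit-image (spelled greycomatrix/greycoprops in older releases); the distances, angles and chosen statistics are illustrative, not the paper's exact configuration:

        # Hypothetical sketch of Haralick-style texture features from GLCMs.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        PROPS = ("contrast", "homogeneity", "energy", "correlation")

        def texture_descriptor(gray: np.ndarray, distances=(1, 2, 4)) -> np.ndarray:
            # gray: 2-D uint8 image; one normalized symmetric GLCM per
            # distance/angle pair evaluates texture at several scales.
            glcm = graycomatrix(gray, distances=list(distances),
                                angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                                levels=256, symmetric=True, normed=True)
            # Each statistic yields an (n_distances, n_angles) array.
            return np.hstack([graycoprops(glcm, p).ravel() for p in PROPS])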

  18. 30 CFR 75.1200-1 - Additional information on mine map.

    Science.gov (United States)

    2010-07-01

    ... SAFETY AND HEALTH MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Maps § 75.1200-1 Additional... symbols; (g) The location of railroad tracks and public highways leading to the mine, and mine buildings... permanent base line points coordinated with the underground and surface mine traverses, and the location and...

  19. Information extraction and transmission techniques for spaceborne synthetic aperture radar images

    Science.gov (United States)

    Frost, V. S.; Yurovsky, L.; Watson, E.; Townsend, K.; Gardner, S.; Boberg, D.; Watson, J.; Minden, G. J.; Shanmugan, K. S.

    1984-01-01

    Information extraction and transmission techniques for synthetic aperture radar (SAR) imagery were investigated. Four interrelated problems were addressed. An optimal tonal SAR image classification algorithm was developed and evaluated. A data compression technique was developed for SAR imagery which is simple and provides a 5:1 compression with acceptable image quality. An optimal textural edge detector was developed. Several SAR image enhancement algorithms have been proposed. The effectiveness of each algorithm was compared quantitatively.

  20. An Useful Information Extraction using Image Mining Techniques from Remotely Sensed Image (RSI)

    OpenAIRE

    Dr. C. Jothi Venkateswaran,; Murugan, S.; Dr. N. Radhakrishnan

    2010-01-01

    Information extraction using mining techniques from remote sensing image (RSI) is rapidly gaining attention among researchers and decision makers because of its potential in application oriented studies. Knowledge discovery from image poses many interesting challenges such as preprocessing the image data set, training the data and discovering useful image patterns applicable to many new application frontiers. In the image rich domain of RSI, image mining implies the synergy of data mining and ...

  1. Monadic datalog and the expressive power of languages for Web information extraction

    OpenAIRE

    Gottlob, Georg; Koch, Christoph

    2004-01-01

    Research on information extraction from Web pages (wrapping) has seen much activity recently (particularly systems implementations), but little work has been done on formally studying the expressiveness of the formalisms proposed or on the theoretical foundations of wrapping. In this paper, we first study monadic datalog over trees as a wrapping language. We show that this simple language is equivalent to monadic second order logic (MSO) in its ability to specify wrappers. We believe that MSO...

  2. KneeTex: an ontology-driven system for information extraction from MRI reports.

    Science.gov (United States)

    Spasić, Irena; Zhao, Bo; Jones, Christopher B; Button, Kate

    2015-01-01

    In the realm of knee pathology, magnetic resonance imaging (MRI) has the advantage of visualising all structures within the knee joint, which makes it a valuable tool for increasing diagnostic accuracy and planning surgical treatments. Therefore, clinical narratives found in MRI reports convey valuable diagnostic information. A range of studies have proven the feasibility of natural language processing for information extraction from clinical narratives. However, no study focused specifically on MRI reports in relation to knee pathology, possibly due to the complexity of knee anatomy and a wide range of conditions that may be associated with different anatomical entities. In this paper we describe KneeTex, an information extraction system that operates in this domain. As an ontology-driven information extraction system, KneeTex makes active use of an ontology to strongly guide and constrain text analysis. We used automatic term recognition to facilitate the development of a domain-specific ontology with sufficient detail and coverage for text mining applications. In combination with the ontology, high regularity of the sublanguage used in knee MRI reports allowed us to model its processing by a set of sophisticated lexico-semantic rules with minimal syntactic analysis. The main processing steps involve named entity recognition combined with coordination, enumeration, ambiguity and co-reference resolution, followed by text segmentation. Ontology-based semantic typing is then used to drive the template filling process. We adopted an existing ontology, TRAK (Taxonomy for RehAbilitation of Knee conditions), for use within KneeTex. The original TRAK ontology expanded from 1,292 concepts, 1,720 synonyms and 518 relationship instances to 1,621 concepts, 2,550 synonyms and 560 relationship instances. This provided KneeTex with a very fine-grained lexico-semantic knowledge base, which is highly attuned to the given sublanguage. Information extraction results were evaluated

  3. Implications of the Cressie-Read Family of Additive Divergences for Information Recovery

    Directory of Open Access Journals (Sweden)

    George G. Judge

    2012-12-01

    Full Text Available To address the unknown nature of probability-sampling models, in this paper we use information theoretic concepts and the Cressie-Read (CR) family of information divergence measures to produce a flexible family of probability distributions, likelihood functions, estimators, and inference procedures. The usual case in statistical modeling is that the noisy indirect data are observed and known and the sampling model-error distribution-probability space, consistent with the data, is unknown. To address the unknown sampling process underlying the data, we consider a convex combination of two or more estimators derived from members of the flexible CR family of divergence measures and optimize that combination to select an estimator that minimizes expected quadratic loss. Sampling experiments are used to illustrate the finite sample properties of the resulting estimator and the nature of the recovered sampling distribution.
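
    For reference, the CR power-divergence family referred to above has the standard form (a well-known result, stated here for convenience):

        I(p, q; \gamma) = \frac{1}{\gamma(\gamma + 1)} \sum_{i=1}^{n} p_i \left[ \left( \frac{p_i}{q_i} \right)^{\gamma} - 1 \right],

    where the scalar \gamma indexes the family member; in the limits \gamma \to 0 and \gamma \to -1 it recovers the Kullback-Leibler divergence and its reverse (the empirical-likelihood case), respectively.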

  4. Road Extraction from High-resolution Remote Sensing Images Based on Multiple Information Fusion

    Directory of Open Access Journals (Sweden)

    LI Xiao-feng

    2016-02-01

    Full Text Available Road extraction from high-resolution remote sensing images is considered a significant but very difficult task. In particular, the spectra of some buildings are similar to those of roads, which leaves the two surface types connected after classification and difficult to distinguish. Based on the cooperation between road surfaces and edges, this paper presents an approach to purifying roads from high-resolution remote sensing images. First, we improve the extraction accuracy of road surfaces and edges separately. The logical cooperation between these two binary images is used to separate road from non-road objects, and the road objects are confirmed by the cooperation between surfaces and edges. Effective shape indices (e.g., polar moment of inertia and narrow extent index) are then applied to eliminate non-road objects, refining the road information. The experiments indicate that the proposed approach is efficient at eliminating non-road information and extracting road information from high-resolution remote sensing images.
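
    A minimal sketch of the surface/edge cooperation idea, assuming boolean masks are already available; region eccentricity stands in here for the paper's polar moment of inertia and narrow extent index, and the thresholds are illustrative:

        # Hypothetical sketch of surface/edge cooperation for road refinement.
        import numpy as np
        from skimage.measure import label, regionprops
        from skimage.morphology import binary_dilation, disk

        def refine_roads(surface_mask: np.ndarray, edge_mask: np.ndarray) -> np.ndarray:
            # surface_mask, edge_mask: boolean images of the same shape.
            # Keep only surface pixels supported by a nearby edge response.
            support = binary_dilation(edge_mask, disk(3))
            candidates = surface_mask & support
            out = np.zeros_like(candidates)
            for region in regionprops(label(candidates)):
                # Roads are long and narrow; building residue is compact.
                if region.eccentricity > 0.95 and region.area > 200:
                    out[tuple(region.coords.T)] = True
            return out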

  5. Automatically extracting clinically useful sentences from UpToDate to support clinicians’ information needs

    Science.gov (United States)

    Mishra, Rashmi; Fiol, Guilherme Del; Kilicoglu, Halil; Jonnalagadda, Siddhartha; Fiszman, Marcelo

    2013-01-01

    Clinicians raise several information needs in the course of care. Most of these needs can be met by online health knowledge resources such as UpToDate. However, finding relevant information in these resources often requires significant time and cognitive effort. Objective: To design and assess algorithms for extracting from UpToDate the sentences that represent the most clinically useful information for patient care decision making. Methods: We developed algorithms based on semantic predications extracted with SemRep, a semantic natural language processing parser. Two algorithms were compared against a gold standard composed of UpToDate sentences rated in terms of clinical usefulness. Results: Clinically useful sentences were strongly correlated with predication frequency (correlation= 0.95). The two algorithms did not differ in terms of top ten precision (53% vs. 49%; p=0.06). Conclusions: Semantic predications may serve as the basis for extracting clinically useful sentences. Future research is needed to improve the algorithms. PMID:24551389

  6. Defense Health Care: Additional Information Needed about Mental Health Provider Staffing Needs

    Science.gov (United States)

    2015-01-01

    ...of mental health providers. These... improve these services. See Executive Order 13625, Improving Access to Mental Health Services for Veterans, Service Members, and Military Families (Aug. 31, 2012). ...beneficiary population has missing information for one or more risk factor data elements. PHRAMS assigns these individuals to an "unknown" group

  7. Defense Inventory: DOD Needs Additional Information for Managing War Reserve Levels of Meals Ready to Eat

    Science.gov (United States)

    2015-05-01

    affect future demand. Without obtaining this information from the military services, DLA may be limited in its ability to optimize the supply chain across the department. Forecasting demand for supplies has been a long... DLA uses various supply-chain strategies to balance cost with readiness in meeting the need for items identified as WRM and needed

  8. PLM Process and Information Mapping for Mass Customization Based on Additive Manufacturing

    OpenAIRE

    Senzi Zancul, Eduardo,; Delage e Silva, Gabriel; Durão, Luiz,; Rocha, Alexandre,

    2015-01-01

    Part 15: PLM Processes and Applications. Mass customization (MC) aims to support product individualization while maintaining scale advantages. There are different options for allowing individual client influence throughout the product production process. Most efforts to bring the customer closer to the manufacturing of their customized product are concentrated in the assembly stage, given the complexity of considering individual needs from the design stage onward. Emerging information management...

  9. Feature extraction and learning using context cue and Rényi entropy based mutual information

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    Feature extraction and learning play a critical role in visual perception tasks. We focus on improving the robustness of the kernel descriptors (KDES) by embedding context cues and further learning a compact and discriminative feature codebook for feature reduction using Rényi entropy based mutual information ... improving the robustness of CKD. For feature learning and reduction, we propose a novel codebook learning method, based on a Rényi quadratic entropy based mutual information measure called Cauchy-Schwarz Quadratic Mutual Information (CSQMI), to learn a compact and discriminative CKD codebook. Projecting...

  10. Additive insulinogenic action of Opuntia ficus-indica cladode and fruit skin extract and leucine after exercise in healthy males

    Science.gov (United States)

    2013-01-01

    Background Oral intake of a specific extract of Opuntia ficus-indica cladode and fruit skin (OpunDia™) (OFI) has been shown to increase serum insulin concentration while reducing blood glucose level for a given amount of glucose ingestion after an endurance exercise bout in healthy young volunteers. However, it is unknown whether OFI-induced insulin stimulation after exercise is of the same magnitude as the stimulation by other insulinogenic agents like leucine, and whether OFI can interact with those agents. Therefore, the aims of the present study were: 1) to compare the degree of insulin stimulation by OFI with the effect of leucine administration; 2) to determine whether OFI and leucine have an additive action on insulin stimulation post-exercise. Methods Eleven subjects participated in a randomized double-blind cross-over study involving four experimental sessions. In each session the subjects successively underwent a 2-h oral glucose tolerance test (OGTT) after a 30-min cycling bout at ~70% VO2max. At t0 and t60 during the OGTT, subjects ingested 75 g glucose and capsules containing either 1) a placebo; 2) 1000 mg OFI; 3) 3 g leucine; or 4) 1000 mg OFI + 3 g leucine. Blood samples were collected before and at 30-min intervals during the OGTT for determination of blood glucose and serum insulin. Results Whereas no effect of leucine was measured, OFI reduced blood glucose at t90 by ~7% and the area under the glucose curve by ~15%, and increased serum insulin concentration at t90 by ~35% compared to placebo (P<0.05). From t60 to the end of the OGTT, serum insulin concentration was higher in OFI+leucine than in placebo, which resulted in a higher area under the insulin curve (+40%, P<0.05). Conclusion Carbohydrate-induced insulin stimulation post-exercise can be further increased by the combination of OFI with leucine. OFI and leucine could be interesting ingredients to include together in recovery drinks to resynthesize muscle glycogen faster post
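
    For readers unfamiliar with the outcome measure, the area under the curve for 30-min OGTT samples is a plain trapezoidal integral; the values below are illustrative, not data from the study:

        # Illustrative trapezoidal AUC for 30-min OGTT samples.
        import numpy as np

        t = np.array([0, 30, 60, 90, 120])                 # minutes
        insulin = np.array([8.0, 45.0, 60.0, 70.0, 50.0])  # illustrative values
        auc = np.trapz(insulin, t)                         # area under the curve
        print(f"insulin AUC over the OGTT: {auc:.0f} (units x min)")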

  11. BioDARA: Data Summarization Approach to Extracting Bio-Medical Structuring Information

    Directory of Open Access Journals (Sweden)

    Chung S. Kheau

    2011-01-01

    Full Text Available Problem statement: Due to the ever growing amount of biomedical datasets stored in multiple tables, Information Extraction (IE) from these datasets is increasingly recognized as one of the crucial technologies in bioinformatics. However, for IE to be practically applicable, the adaptability of a system is crucial, considering the extremely diverse demands of biomedical IE applications. One should be able to extract a set of hidden patterns from these biomedical datasets at low cost. Approach: In this study, a new method called Bio-medical Data Aggregation for Relational Attributes (BioDARA) is proposed for automatically extracting structured information from biomedical datasets. BioDARA summarizes biomedical data stored in multiple tables in order to facilitate data modeling efforts in a multi-relational setting. BioDARA is able to transform biomedical data stored in multiple tables or databases into a vector space model, summarize biomedical data using Information Retrieval theory and, finally, extract frequent patterns that describe the characteristics of these biomedical datasets. Results: The results show that data summarization performed by DARA can be beneficial in summarizing biomedical datasets in a complex multi-relational environment, in which biomedical datasets are stored in multiple levels of one-to-many relationships, and also in the case of datasets stored in more than one one-to-many relationship with non-target tables. Conclusion: This study concludes that data summarization performed by BioDARA can be beneficial in summarizing biomedical datasets in a complex multi-relational environment, in which biomedical datasets are stored in multiple levels of one-to-many relationships.

  12. EXTRACT

    DEFF Research Database (Denmark)

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra

    2016-01-01

    The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, manual sample annotation is a highly labor intensive process and requires familiarity with the terminologies used. We have ... and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15-25% and helps curators to detect terms that would otherwise have been missed. Database URL: https://extract.hcmr.gr/

  13. Information extraction and CT reconstruction of liver images based on diffraction enhanced imaging

    Institute of Scientific and Technical Information of China (English)

    Chunhong Hu; Tao Zhao; Lu Zhang; Hui Li; Xinyan Zhao; Shuqian Luo

    2009-01-01

    X-ray phase-contrast imaging (PCI) is a new emerging imaging technique that provides higher spatial resolution and higher contrast of biological soft tissues than conventional radiography. Herein a biomedical application of diffraction enhanced imaging (DEI) is presented. As one of the PCI methods, DEI derives contrast from many different kinds of sample information, such as the sample's X-ray absorption, refraction gradient and ultra-small-angle X-ray scattering (USAXS) properties, and the sample information is expressed by three parametric images. Combined with computed tomography (CT), DEI-CT can produce 3D volumetric images of the sample and can be used for investigating micro-structures of biomedical samples. Our DEI experiments on liver samples were implemented at the topography station of the Beijing Synchrotron Radiation Facility (BSRF). The results show that by using our information extraction method and DEI-CT reconstruction approach, the obtained parametric images clearly display the inner structures of liver tissues and the morphology of blood vessels. Furthermore, the reconstructed 3D view of the liver blood vessels reveals micro blood vessels with minimum diameters on the order of tens of microns, much better than conventional CT reconstruction at millimeter resolution. In conclusion, both the information extraction method and DEI-CT have potential for use in the analysis of biomedical micro-structures.

  14. A Feature Extraction Method Based on Information Theory for Fault Diagnosis of Reciprocating Machinery

    Science.gov (United States)

    Wang, Huaqing; Chen, Peng

    2009-01-01

    This paper proposes a feature extraction method based on information theory for fault diagnosis of reciprocating machinery. A method to obtain symptom parameter waves is defined in the time domain using the vibration signals, and an information wave is presented based on information theory, using the symptom parameter waves. A new way to determine the difference spectrum of envelope information waves is also derived, by which the feature spectrum can be extracted clearly and machine faults can be effectively differentiated. This paper also compares the proposed method with the conventional Hilbert-transform-based envelope detection and with a wavelet analysis technique. Practical examples of diagnosis for a rolling element bearing used in a diesel engine are provided to verify the effectiveness of the proposed method. The verification results show that the bearing faults that typically occur in rolling element bearings, such as outer-race, inner-race, and roller defects, can be effectively identified by the proposed method, while these bearing faults are difficult to detect using either of the other techniques it was compared to. PMID:22574021

  15. A Feature Extraction Method Based on Information Theory for Fault Diagnosis of Reciprocating Machinery

    Directory of Open Access Journals (Sweden)

    Huaqing Wang

    2009-04-01

    Full Text Available This paper proposes a feature extraction method based on information theory for fault diagnosis of reciprocating machinery. A method to obtain symptom parameter waves is defined in the time domain using the vibration signals, and an information wave is presented based on information theory, using the symptom parameter waves. A new way to determine the difference spectrum of envelope information waves is also derived, by which the feature spectrum can be extracted clearly and machine faults can be effectively differentiated. This paper also compares the proposed method with the conventional Hilbert-transform-based envelope detection and with a wavelet analysis technique. Practical examples of diagnosis for a rolling element bearing used in a diesel engine are provided to verify the effectiveness of the proposed method. The verification results show that the bearing faults that typically occur in rolling element bearings, such as outer-race, inner-race, and roller defects, can be effectively identified by the proposed method, while these bearing faults are difficult to detect using either of the other techniques it was compared to.
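
    As context for the comparison above, the conventional Hilbert-transform envelope analysis that the paper benchmarks against can be sketched in a few lines; bearing defects appear as peaks in the envelope spectrum at the characteristic fault frequencies. This is the baseline technique, not the authors' information-wave method:

        # Sketch of the conventional envelope-spectrum baseline.
        import numpy as np
        from scipy.signal import hilbert

        def envelope_spectrum(x: np.ndarray, fs: float):
            envelope = np.abs(hilbert(x))        # amplitude envelope
            envelope -= envelope.mean()          # remove DC before the FFT
            spectrum = np.abs(np.fft.rfft(envelope))
            freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
            return freqs, spectrum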

  16. Automated Building Extraction from High-Resolution Satellite Imagery in Urban Areas Using Structural, Contextual, and Spectral Information

    Directory of Open Access Journals (Sweden)

    Curt H. Davis

    2005-08-01

    Full Text Available High-resolution satellite imagery provides an important new data source for building extraction. We demonstrate an integrated strategy for identifying buildings in 1-meter resolution satellite imagery of urban areas. Buildings are extracted using structural, contextual, and spectral information. First, a series of geodesic opening and closing operations are used to build a differential morphological profile (DMP) that provides image structural information. Building hypotheses are generated and verified through shape analysis applied to the DMP. Second, shadows are extracted using the DMP to provide reliable contextual information to hypothesize the position and size of adjacent buildings. Seed building rectangles are verified and grown on a finely segmented image. Next, bright buildings are extracted using spectral information. The extraction results from the different information sources are combined after independent extraction. Performance evaluation of the building extraction on an urban test site using IKONOS satellite imagery of the City of Columbia, Missouri, is reported. With the combination of structural, contextual, and spectral information, 72.7% of the building areas are extracted with a quality percentage of 58.8%.
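
    The first stage above, the differential morphological profile, is compact enough to sketch; here plain openings stand in for the paper's geodesic (reconstruction-based) operators, and the structuring-element radii are illustrative assumptions:

        # Hypothetical DMP sketch: differences between openings at
        # increasing structuring-element sizes.
        import numpy as np
        from skimage.morphology import opening, disk

        def differential_morphological_profile(gray, radii=(1, 2, 4, 8, 16)):
            levels = [gray.astype(np.float32)]
            for r in radii:
                levels.append(opening(gray, disk(r)).astype(np.float32))
            # Band r responds strongly where bright structures of roughly
            # that size are removed by the opening.
            return np.stack([levels[i] - levels[i + 1] for i in range(len(radii))])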

  17. The effect of yeast extract addition on quality of fermented sausages at low NaCl content.

    Science.gov (United States)

    Campagnol, Paulo Cezar Bastianello; dos Santos, Bibiana Alves; Wagner, Roger; Terra, Nelcindo Nascimento; Pollonio, Marise Aparecida Rodrigues

    2011-03-01

    Fermented sausages with 25% or 50% of their NaCl replaced by KCl and supplemented with 1% or 2% concentrations of yeast extract were produced. The sausage production process was monitored with physical, chemical and microbiological analyses. After production, the sausage samples were submitted to a consumer study and their volatile compounds were extracted by solid-phase microextraction and analyzed by GC-MS. The replacement of NaCl by KCl did not significantly influence the physical, chemical or microbiological characteristics. The sensory quality of the fermented sausages with a 50% replacement was poor compared with the full-salt control samples. The use of yeast extract at a 2% concentration increased volatile compounds that arose from amino acids and carbohydrate catabolism. These compounds contributed to the suppression of the sensory-quality defects caused by the KCl introduction, thus enabling the production of safe fermented sausages that have acceptable sensory qualities with half as much sodium content.

  18. A weighted information criterion for multiple minor components and its adaptive extraction algorithms.

    Science.gov (United States)

    Gao, Yingbin; Kong, Xiangyu; Zhang, Huihui; Hou, Li'an

    2017-05-01

    Minor components (MCs) play an important role in signal processing and data analysis, so developing MC extraction algorithms is valuable work. Based on the concepts of weighted subspaces and optimum theory, a weighted information criterion is proposed for searching for the optimum solution of a linear neural network. This information criterion exhibits a unique global minimum that is attained if and only if the state matrix is composed of the desired MCs of the autocorrelation matrix of an input signal. Using a gradient ascent method and a recursive least squares (RLS) method, two algorithms are developed for multiple-MC extraction. The global convergence of the proposed algorithms is also analyzed by the Lyapunov method. The proposed algorithms can extract multiple MCs in parallel and have an advantage in dealing with high-dimensional matrices. Since the weighting matrix does not require an accurate value, this facilitates the system design of the proposed algorithms for practical applications. The speed and computation advantages of the proposed algorithms are verified through simulations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. An Accurate Integral Method for Vibration Signal Based on Feature Information Extraction

    Directory of Open Access Journals (Sweden)

    Yong Zhu

    2015-01-01

    Full Text Available After summarizing the advantages and disadvantages of current integral methods, a novel vibration signal integration method based on feature information extraction is proposed. The method takes full advantage of the self-adaptive filter characteristic and waveform correction feature of ensemble empirical mode decomposition in dealing with nonlinear and nonstationary signals. This research merges the strengths of kurtosis, mean square error, energy, and singular value decomposition for signal feature extraction, combining the values of the four indices into a feature vector. The characteristic components of the vibration signal are then accurately extracted by Euclidean distance search, and the desired integral signals are precisely reconstructed. With this method, the interference from invalid signal components such as trend items and noise, which plagues traditional methods, is satisfactorily resolved. The large cumulative error of traditional time-domain integration is effectively overcome, and the large low-frequency error of traditional frequency-domain integration is successfully avoided. Compared with traditional integral methods, this method is better at removing noise while retaining useful feature information, and shows higher accuracy and superiority.

  20. 36 CFR 1290.4 - Types of materials included in scope of assassination record and additional records and information.

    Science.gov (United States)

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false Types of materials included in scope of assassination record and additional records and information. 1290.4 Section 1290.4 Parks... COLLECTION ACT OF 1992 (JFK ACT) § 1290.4 Types of materials included in scope of assassination record...

  1. The benefit of using additional hydrological information from earth observations and reanalysis data on water allocation decisions in irrigation districts

    Science.gov (United States)

    Kaune, Alexander; López, Patricia; Werner, Micha; de Fraiture, Charlotte

    2017-04-01

    Hydrological information on water availability and demand is vital for sound water allocation decisions in irrigation districts, particularly in times of water scarcity. However, sub-optimal water allocation decisions are often taken with incomplete hydrological information, which may lead to agricultural production loss. In this study we evaluate the benefit of additional hydrological information from earth observations and reanalysis data in supporting decisions in irrigation districts. Current water allocation decisions were emulated through heuristic operational rules for water scarce and water abundant conditions in the selected irrigation districts. The Dynamic Water Balance Model based on the Budyko framework was forced with precipitation datasets from interpolated ground measurements, remote sensing and reanalysis data, to determine the water availability for irrigation. Irrigation demands were estimated based on estimates of potential evapotranspiration and coefficient for crops grown, adjusted with the interpolated precipitation data. Decisions made using both current and additional hydrological information were evaluated through the rate at which sub-optimal decisions were made. The decisions made using an amended set of decision rules that benefit from additional information on demand in the districts were also evaluated. Results show that sub-optimal decisions can be reduced in the planning phase through improved estimates of water availability. Where there are reliable observations of water availability through gauging stations, the benefit of the improved precipitation data is found in the improved estimates of demand, equally leading to a reduction of sub-optimal decisions.

  2. Note: Sound recovery from video using SVD-based information extraction.

    Science.gov (United States)

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Chang'an

    2016-08-01

    This note reports an efficient singular value decomposition (SVD)-based vibration extraction approach that recovers sound information from silent high-speed video. A high-speed camera with frame rates in the range of 2-10 kHz is used to film the vibrating objects. Sub-images cut from the video frames are transformed into column vectors and assembled into a new matrix. The SVD of this matrix produces orthonormal image bases (OIBs), and image projections onto a specific OIB can be recovered as intelligible acoustic signals. Standard 256 Hz and 512 Hz tuning-fork tones are extracted offline from their vibrating surfaces, and a 3.35 s speech signal is recovered online, within 1 min, from a piece of paper stimulated by sound waves.
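
    The linear-algebra core of the note is small enough to sketch; the mean removal and the choice of basis index below are illustrative assumptions rather than the authors' exact settings:

        # Hypothetical sketch of the SVD readout.
        import numpy as np

        def recover_signal(frames: np.ndarray, basis_index: int = 1) -> np.ndarray:
            # frames: (n_frames, h, w) grayscale sub-images from the video
            n = frames.shape[0]
            A = frames.reshape(n, -1).T.astype(np.float64)  # columns = frames
            A -= A.mean(axis=1, keepdims=True)              # drop the static scene
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            # Columns of U are the OIBs; rows of Vt are the projection time
            # series of the frames onto each basis.
            return Vt[basis_index]                          # candidate audio track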

  3. Taming Big Data: An Information Extraction Strategy for Large Clinical Text Corpora.

    Science.gov (United States)

    Gundlapalli, Adi V; Divita, Guy; Carter, Marjorie E; Redd, Andrew; Samore, Matthew H; Gupta, Kalpana; Trautner, Barbara

    2015-01-01

    Concepts of interest for clinical and research purposes are not uniformly distributed in clinical text available in electronic medical records. The purpose of our study was to identify filtering techniques to select 'high yield' documents for increased efficacy and throughput. Using two large corpora of clinical text, we demonstrate the identification of 'high yield' document sets in two unrelated domains: homelessness and indwelling urinary catheters. For homelessness, the high yield set includes homeless program and social work notes. For urinary catheters, concepts were more prevalent in notes from hospitalized patients; nursing notes accounted for a majority of the high yield set. This filtering will enable customization and refining of information extraction pipelines to facilitate extraction of relevant concepts for clinical decision support and other uses.
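
    A minimal sketch of this kind of yield-based filtering, assuming a simple document structure and a user-supplied concept matcher (both hypothetical):

        # Hypothetical sketch: keep note types whose sampled concept yield
        # exceeds a threshold.
        from collections import defaultdict

        def high_yield_types(docs, has_concept, threshold=0.10):
            hits, totals = defaultdict(int), defaultdict(int)
            for doc in docs:                    # doc: {"type": ..., "text": ...}
                totals[doc["type"]] += 1
                if has_concept(doc["text"]):    # user-supplied concept matcher
                    hits[doc["type"]] += 1
            return {t for t in totals if hits[t] / totals[t] >= threshold}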

  4. Note: Sound recovery from video using SVD-based information extraction

    Science.gov (United States)

    Zhang, Dashan; Guo, Jie; Lei, Xiujun; Zhu, Chang'an

    2016-08-01

    This note reports an efficient singular value decomposition (SVD)-based vibration extraction approach that recovers sound information from silent high-speed video. A high-speed camera with frame rates in the range of 2-10 kHz is used to film the vibrating objects. Sub-images cut from the video frames are transformed into column vectors and assembled into a new matrix. The SVD of this matrix produces orthonormal image bases (OIBs), and image projections onto a specific OIB can be recovered as intelligible acoustic signals. Standard 256 Hz and 512 Hz tuning-fork tones are extracted offline from their vibrating surfaces, and a 3.35 s speech signal is recovered online, within 1 min, from a piece of paper stimulated by sound waves.

  5. National information service in mining, mineral processing and extractive metallurgy. [MINTEC

    Energy Technology Data Exchange (ETDEWEB)

    Romaniuk, A.S.; MacDonald, R.J.C.

    1979-03-01

    More than a decade ago, CANMET management recognized the need to make better use of existing technological information in mining and extractive metallurgy, two fields basic to the economic well-being of Canada. There were at that time no indexes or files dedicated to disseminating technical information for the many minerals mined and processed in Canada, including coal. CANMET, with the nation's largest research and library resources in the minerals field, was in a unique position to fill this need. Initial efforts were concentrated on building a mining file, beginning with identification of world sources of published information, development of a special thesaurus of terms for language control and adoption of a manual indexing/retrieval system. By early 1973, this file held 8,300 references, with source, abstract and keywords given for each reference. In mid-1973, operations were computerized. Software for indexing and retrieval by batch mode was written by CANMET staff to utilize the hardware facilities of EMR's Computer Science Center. The resulting MINTEC file, one of the few files of technological information produced in Canada, is the basis for the national literature search service in mining offered by CANMET. Attention is now focussed on building a sister file in extractive metallurgy using the system already developed. Published information sources have been identified and a thesaurus of terms is being compiled and tested. The software developed for CANMET's file-building operations has several features, including the selective dissemination of information and production from magnetic tape of photo-ready copy for publication, as in a bi-monthly abstracts journal.

  6. Applications of the Addition of extract and cinnamon leaf flour in the Diet on the Quality of Meat of Catfish

    Directory of Open Access Journals (Sweden)

    Suardi Laheng

    2016-04-01

    Full Text Available This study aimed to evaluate the effect of cinnamon (Cinnamomum burmannii) leaf extract and flour in the diet on the meat quality of catfish (Pangasianodon hypophthalmus). Catfish with a weight of 319.64 ± 35.99 g/net were reared in 9 net cages of 2 x 1 x 1.5 m3 at a density of 15 fish/net for 60 days. The fish were fed diets containing cinnamon leaf at the following doses: 0% cinnamon leaf, 0.1% cinnamon leaf extract, and 1% cinnamon leaf flour, twice a day at a feeding rate of 3% of average body weight. The results showed that the cinnamon leaf extract and flour treatments decreased body fat and meat fat levels by 10.31-12.27% and 37.26-50.23%, respectively, compared to controls (p<0.05); the cinnamon leaf extract treatment was more effective in improving catfish meat quality, yielding a compact meat texture, white flesh color and a slightly sweet taste. Keywords: cinnamon leaf, meat quality, Pangasianodon hypophthalmus

  7. Applications of the Addition of extract and cinnamon leaf flour in the Diet on the Quality of Meat of Catfish

    Directory of Open Access Journals (Sweden)

    Suardi Laheng

    2016-04-01

    Full Text Available This study aimed to evaluate the effect of cinnamon (Cinnamomum burmannii) leaf extract and flour in the diet on the meat quality of catfish (Pangasianodon hypophthalmus). Catfish with a weight of 319.64 ± 35.99 g/net were reared in 9 net cages of 2 x 1 x 1.5 m3 at a density of 15 fish/net for 60 days. The fish were fed diets containing cinnamon leaf at the following doses: 0% cinnamon leaf, 0.1% cinnamon leaf extract, and 1% cinnamon leaf flour, twice a day at a feeding rate of 3% of average body weight. The results showed that the cinnamon leaf extract and flour treatments decreased body fat and meat fat levels by 10.31-12.27% and 37.26-50.23%, respectively, compared to controls (p<0.05); the cinnamon leaf extract treatment was more effective in improving catfish meat quality, yielding a compact meat texture, white flesh color and a slightly sweet taste.

  8. Study on Dynamic Behavior of Additional Investment in Exhaustible Resources Extraction

    Institute of Scientific and Technical Information of China (English)

    葛世龙; 唐丁祥; 周德群

    2011-01-01

    Nowadays, the extraction recovery rate of exhaustible resources in China is generally lower than in other countries. Resource firms can increase the recovery rate and reduce resource waste by raising management and technology levels through additional investment. Based on the standard Hotelling model, a three-stage dynamic game model is built to analyze price discrimination and the dynamic additional investment behavior of resource firms, solved by backward induction. In the third stage, the firms can apply price discrimination based on the prices and market shares obtained in the second stage, and the competitive strategy is derived from this information. In the second stage, the two firms compete in price and record the purchase information of consumers; given the profits and prices of the third stage, the equilibrium of price competition and consumer demand is analyzed. In the first stage, only one firm makes an additional investment that reduces extraction costs. The conditions for the existence of a unique subgame perfect Nash equilibrium of the dynamic game and the firm's optimal additional investment scale are derived. The results show that for a firm to invest, the return on additional investment must be high enough. When the return on additional investment is very high, a small additional investment yields a large marginal cost reduction and a strong competitive advantage, and the firm without additional investment may even be driven out of the market. For a given return on additional investment, the firm's incentive to invest depends on its patience: the more attention the firm pays to future profits, the larger the scale of additional investment will be.

  9. Utilization of plant bioactives as feed additives for poultry: The effect of Aloe vera gel and its extract on performance of broilers

    OpenAIRE

    A.P Sinurat; T Purwadaria; M.H Togatorop; T Pasaribu

    2003-01-01

    Feed additives are commonly added to poultry feed as growth promoters or to improve feed efficiency. The most common feed additives used are antibiotics at sub-therapeutic doses, although there is controversy over their impact on human health. Previous results showed that Aloe vera gel could improve feed efficiency in broilers, and an in vitro study showed that the extract has an antibacterial effect. Therefore, a further experiment was designed to study the response of broilers to Aloe vera ge...

  10. Effect of Solvent Additive on Generation, Recombination, and Extraction in PTB7:PCBM Solar Cells : A Conclusive Experimental and Numerical Simulation Study

    NARCIS (Netherlands)

    Kniepert, Juliane; Lange, Ilja; Heidbrink, Jan; Kurpiers, Jona; Brenner, Thomas J. K.; Koster, L. Jan Anton; Neher, Dieter

    2015-01-01

    Time-delayed collection field (TDCF), bias-assisted charge extraction (BACE), and space charge-limited current (SCLC) measurements are combined with complete numerical device simulations to unveil the effect of the solvent additive 1,8-diiodooctane (DIO) on the performance of PTB7:PCBM bulk

  11. Automated DICOM metadata and volumetric anatomical information extraction for radiation dosimetry

    Science.gov (United States)

    Papamichail, D.; Ploussi, A.; Kordolaimi, S.; Karavasilis, E.; Papadimitroulas, P.; Syrgiamiotis, V.; Efstathopoulos, E.

    2015-09-01

    Patient-specific dosimetry calculations based on simulation techniques have as a prerequisite the modeling of the modality system and the creation of voxelized phantoms. This procedure requires knowledge of the scanning parameters and patient information included in a DICOM file, as well as image segmentation. However, the extraction of this information is complicated and time-consuming. The objective of this study was to develop a simple graphical user interface (GUI) to (i) automatically extract metadata from every slice image of a DICOM file in a single query and (ii) interactively specify the regions of interest (ROIs) without explicit access to the radiology information system. The user-friendly application was developed in the Matlab environment. The user can select a series of DICOM files and manage their text and graphical data. The metadata are automatically formatted and presented to the user as a Microsoft Excel file. The volumetric maps are formed by interactively specifying the ROIs and assigning a specific value to every ROI. The result is stored in DICOM format for data and trend analysis. The developed GUI is easy to use, fast, and constitutes a very useful tool for individualized dosimetry. One of the future goals is to incorporate remote access to PACS server functionality.
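
    A minimal sketch of the metadata-extraction step, assuming pydicom and a flat directory of .dcm slices; the selected tags and the CSV output (rather than Excel) are illustrative choices, not the authors' implementation:

        # Hypothetical sketch: tabulate per-slice DICOM metadata with pydicom.
        from pathlib import Path
        import csv
        import pydicom

        def dump_metadata(dicom_dir: str, out_csv: str) -> None:
            rows = []
            for path in sorted(Path(dicom_dir).glob("*.dcm")):
                ds = pydicom.dcmread(path, stop_before_pixels=True)
                rows.append({
                    "file": path.name,
                    "SliceLocation": getattr(ds, "SliceLocation", ""),
                    "KVP": getattr(ds, "KVP", ""),
                    "ExposureTime": getattr(ds, "ExposureTime", ""),
                })
            if not rows:
                return
            with open(out_csv, "w", newline="") as f:
                writer = csv.DictWriter(f, fieldnames=list(rows[0]))
                writer.writeheader()
                writer.writerows(rows)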

  12. Information extraction approaches to unconventional data sources for "Injury Surveillance System": the case of newspapers clippings.

    Science.gov (United States)

    Berchialla, Paola; Scarinzi, Cecilia; Snidero, Silvia; Rahim, Yousif; Gregori, Dario

    2012-04-01

    Injury Surveillance Systems based on traditional hospital records or clinical data have the advantage of being a well established, highly reliable source of information for active surveillance of specific injuries, like choking in children. However, they suffer the drawback of delays in making data available for analysis, due to inefficiencies in data collection procedures. In this sense, the integration of clinically based registries with unconventional data sources like newspaper articles has the advantage of making the system more useful for early alerting. Usage of such sources is difficult since the information is only available in the form of free natural-language documents rather than the structured databases required by traditional data mining techniques. Information Extraction (IE) addresses the problem of transforming a corpus of textual documents into a more structured database. In this paper, on a corpus of Italian newspaper articles related to choking in children due to ingestion/inhalation of foreign bodies, we compared the performance of three IE algorithms: (a) a classical rule-based system which requires manual annotation of the rules; (b) a rule-based system which allows for the automatic building of rules; and (c) a machine learning method based on Support Vector Machines. Although some useful indications can be extracted from the newspaper clippings, this approach is at present far from being routinely implemented for injury surveillance purposes.

  13. Visualization and Analysis of Geology Word Vectors for Efficient Information Extraction

    Science.gov (United States)

    Floyd, J. S.

    2016-12-01

    allow one to extract information from hundreds of papers or more and find relationships in less time than it would take to read all of the papers. As machine learning tools become more commonly available, more and more scientists will be able to use and refine these tools for their individual needs.
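
    A minimal sketch of the word-vector workflow implied here, using gensim's Word2Vec on a toy corpus; the sentences and query term are purely illustrative:

        # Hypothetical sketch: nearest neighbours in a geology word-vector space.
        from gensim.models import Word2Vec

        sentences = [["basalt", "erupted", "along", "the", "ridge", "axis"],
                     ["hydrothermal", "vents", "host", "sulfide", "deposits"],
                     ["pillow", "basalt", "formed", "at", "the", "ridge"]]

        model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, epochs=50)
        print(model.wv.most_similar("basalt", topn=3))  # related geology terms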

  14. Extraction of depth information for 3D imaging using pixel aperture technique

    Science.gov (United States)

    Choi, Byoung-Soo; Bae, Myunghan; Kim, Sang-Hwan; Lee, Jimin; Oh, Chang-Woo; Chang, Seunghyuk; Park, JongHo; Lee, Sang-Jin; Shin, Jang-Kyoo

    2017-02-01

    Three-dimensional (3D) imaging is an important area that can be applied to face detection, gesture recognition, and 3D reconstruction. In this paper, the extraction of depth information for 3D imaging using a pixel aperture technique is presented. An active pixel sensor (APS) with an in-pixel aperture has been developed for this purpose. In conventional camera systems using a complementary metal-oxide-semiconductor (CMOS) image sensor, the aperture is located behind the camera lens. In our proposed camera system, however, the aperture, implemented in a metal layer of the CMOS process, is located on the White (W) pixels, i.e., pixels without any color filter on top. Four pixel types, Red (R), Green (G), Blue (B), and White (W), were used for the pixel aperture technique. The RGB pixels produce a defocused image with blur, while the W pixels produce a focused image. The focused image is used as a reference image to extract the depth information for 3D imaging, and can be compared with the defocused image from the RGB pixels. Depth information can therefore be extracted by comparing the defocused image with the focused image using the depth from defocus (DFD) method. The pixel size for the 4-tr APS is 2.8 μm × 2.8 μm, and the pixel structure was designed and simulated based on a 0.11 μm CMOS image sensor (CIS) process. Optical performance of the pixel aperture technique was evaluated using optical simulation with the finite-difference time-domain (FDTD) method, and electrical performance was evaluated using TCAD.

  15. Analysis on health information extracted from an urban professional population in Beijing

    Institute of Scientific and Technical Information of China (English)

    ZHANG Tie-mei; ZHANG Yan; LIU Bin; JIA Hong-bo; LIU Yun-jie; ZHU Ling; LUO Sen-lin; HAN Yi-wen; ZHANG Yan; YANG Shu-wen; LIU An-nan; MA Lan-jun; ZHAO Yan-yan

    2011-01-01

    Background The assembled data from a population can provide information on health trends within that population. The aim of this research was to extract basic health information from an urban professional population in Beijing. Methods Data analysis was carried out on a population who underwent a routine medical check-up and were aged >20 years, comprising 30 058 individuals. General information, physical examination data and blood samples were collected by the same method. Health status was separated into three groups by criteria generated in this study: people with common chronic diseases, people in a sub-clinical condition, and healthy people. The proportions of common diseases and the distribution of health risks across age groups were also analyzed. Results The proportions of people with common chronic diseases, in the sub-clinical group and in the healthy group were 28.6%, 67.8% and 3.6%, respectively. There were significant differences in health status between age groups. Hypertension was at the top of the list of self-reported diseases. The proportion of chronic diseases increased significantly in people over 35 years of age, while the proportion of sub-clinical conditions decreased at the same rate. The complex risk factors to health in this population were metabolic disturbances (61.3%), risk for tumor (2.7%), abnormal results of morphological examination (8.2%) and abnormal results of serum lab tests (27.8%). Conclusions Health information could be extracted from a complex data set from the health check-ups of a general population. This information should be applied to support the prevention and control of chronic diseases, as well as to direct interventions for patients with risk factors for disease.

  16. Red Tide Information Extraction Based on Multi-source Remote Sensing Data in Haizhou Bay

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    [Objective] The aim was to extract red tide information in Haizhou Bay on the basis of multi-source remote sensing data. [Method] Red tide in Haizhou Bay was studied based on multi-source remote sensing data, such as IRS-P6 data on October 8, 2005, Landsat 5-TM data on May 20, 2006, MODIS 1B data on October 6, 2006 and HY-1B second-grade data on April 22, 2009, which were first preprocessed through geometric correction, atmospheric correction, image resizing and so on. At the same time, the synchronous environment mon...

  17. The method of earthquake landslide information extraction with high-resolution remote sensing

    Science.gov (United States)

    Wu, Jian; Chen, Peng; Liu, Yaolin; Wang, Jing

    2014-05-01

    As a kind of secondary geological disaster caused by strong earthquakes, earthquake-induced landslides have drawn much attention in the world due to their severe hazard. High-resolution remote sensing, as a new technology for investigation and monitoring, has been widely applied in landslide susceptibility and hazard mapping. The Ms 8.0 Wenchuan earthquake, which occurred on 12 May 2008, caused many buildings to collapse and injured half a million people. Meanwhile, damage caused by earthquake-induced landslides, collapses and debris flows became the major part of total losses. By analyzing the properties of the Zipingpu landslide that occurred in the Wenchuan earthquake, the present study advances a quick and effective way to extract landslides based on NDVI and slope information, and the results were validated with pixel-oriented and object-oriented methods. The main advantage of the idea lies in the fact that it does not need much professional knowledge or data such as crustal movement, geological structure, fractured zones, etc., so researchers can provide landslide monitoring information for earthquake relief as soon as possible. In the pixel-oriented approach, the NDVI-difference image as well as the slope image was analyzed and segmented to extract landslide information. In the object-oriented method, a multi-scale segmentation algorithm was applied to build a three-layer hierarchy. The spectral, textural, shape, location and contextual information of individual object classes, together with GLCM (Grey Level Co-occurrence Matrix) measures such as homogeneity and shape index, were extracted and used to establish the fuzzy decision rule system of each layer for earthquake landslide extraction. Comparison of the results generated by the two methods showed that the object-oriented method could successfully avoid the bright-noise phenomenon in the NDVI-difference image caused by the spectral diversity of high-resolution remote sensing data and achieved a better result with an overall
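
    The pixel-oriented rule described above reduces to a two-condition mask; the thresholds below are illustrative assumptions, not the values used in the study:

        # Hypothetical sketch of the two-condition landslide rule.
        import numpy as np

        def landslide_mask(ndvi_pre, ndvi_post, slope_deg,
                           ndvi_drop=-0.2, min_slope=20.0):
            # Vegetation loss (NDVI decrease) on steep terrain flags landslides.
            change = ndvi_post - ndvi_pre
            return (change < ndvi_drop) & (slope_deg > min_slope)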

  18. Multi-Paradigm and Multi-Lingual Information Extraction as Support for Medical Web Labelling Authorities

    Directory of Open Access Journals (Sweden)

    Martin Labsky

    2010-10-01

    Full Text Available Until recently, quality labelling of medical web content has been a predominantly manual activity. However, the advances in automated text processing opened the way to computerised support of this activity. The core enabling technology is information extraction (IE). However, the heterogeneity of websites offering medical content imposes particular requirements on the IE techniques to be applied. In the paper we discuss these requirements and describe a multi-paradigm approach to IE addressing them. Experiments on multi-lingual data are reported. The research has been carried out within the EU MedIEQ project.

  19. Dimension reduction: additional benefit of an optimal filter for independent component analysis to extract event-related potentials.

    Science.gov (United States)

    Cong, Fengyu; Leppänen, Paavo H T; Astikainen, Piia; Hämäläinen, Jarmo; Hietanen, Jari K; Ristaniemi, Tapani

    2011-09-30

    The present study addresses the benefits of a linear optimal filter (OF) for independent component analysis (ICA) in extracting brain event-related potentials (ERPs). A filter such as a digital filter is usually considered a denoising tool. In filtering ERP recordings with an OF, the ERP's topography should not be changed by the filter, and the output should still be describable by the linear transformation. Moreover, an OF designed for a specific ERP source or component may remove noise, reduce the overlap of sources, and even reject some non-targeted sources in the ERP recordings. The OF can thus accomplish denoising and dimension reduction (reducing the number of sources) simultaneously. We demonstrated these effects using two datasets, one containing visual and the other auditory ERPs. The results showed that the method combining OF and ICA extracted much more reliable components than ICA alone did, and that the OF removed some non-targeted sources and made the underdetermined model of EEG recordings approach a determined one. Thus, we suggest designing an OF based on the properties of an ERP to filter recordings before using ICA decomposition to extract the targeted ERP component. Copyright © 2011 Elsevier B.V. All rights reserved.
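
    A minimal sketch of the filter-then-unmix idea, with a zero-phase Butterworth band-pass standing in for the optimal filter and scikit-learn's FastICA for the ICA step; the sampling rate, pass-band and component count are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.decomposition import FastICA

    def filter_then_ica(eeg, fs=250.0, band=(1.0, 12.0), n_components=10):
        """eeg: (n_channels, n_samples). A zero-phase linear band-pass leaves
        the ERP topography unchanged; ICA then unmixes the filtered data."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=1)
        ica = FastICA(n_components=n_components, random_state=0)
        sources = ica.fit_transform(filtered.T).T  # (n_components, n_samples)
        return sources, ica.mixing_                # columns ~ component topographies
    ```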

  20. Enhancing the oxidative stability of rice crackers by addition of the ethanolic extract of phytochemicals from Cratoxylum formosum Dyer.

    Science.gov (United States)

    Maisuthisakul, Pitchaon; Gordon, Michael H; Pongsawatmanit, Rungnaphar; Suttajit, Maitree

    2007-01-01

    Cratoxylum formosum Dyer is consumed throughout the year as food and medicine in Thailand. It contains large amounts of chlorogenic acid and quinic acid derivatives. The antioxidative activity of the extract was studied in refined soybean oil coated on rice crackers without any seasoning. The crackers were stored under accelerated oxidation conditions at 40 °C and 80% relative humidity (RH) in the dark for 18 days. The oxidative state of each sample was monitored by analyzing the peroxide value (PV) and thiobarbituric acid reactive substances (TBARS), as well as by odor analysis using quantitative descriptive analysis (QDA). The C. formosum extract was more effective than alpha-tocopherol because metal ions present in the crackers rendered alpha-tocopherol less effective as an antioxidant. Sensory odor attributes of the rice crackers were related more closely to TBARS than to PV values in linear regression analysis. The present study indicated that C. formosum extract is a promising source of a natural food antioxidant and was effective in inhibiting lipid oxidation in rice crackers.

  1. Functional network and its application to extract information from chaotic communication

    Institute of Scientific and Technical Information of China (English)

    李卫斌; 焦李成

    2004-01-01

    In a chaotic communication system, the useful signal is hidden in the chaotic signal, so general methods do not work well. Owing to the random features of chaotic signals, a functional-network-based method is presented. In this method, the neural functions are selected from a complete function set for the functional network to reconstruct the chaotic signal, so that the useful signal hidden in the chaotic background is extracted. In addition, the learning algorithm is presented, and an example demonstrates its good performance.

  2. Is it Possible to Extract Brain Metabolic Pathways Information from In Vivo H Nuclear Magnetic Resonance Spectroscopy Data?

    CERN Document Server

    de Lara, Alejandro Chinea Manrique

    2010-01-01

    In vivo H nuclear magnetic resonance (NMR) spectroscopy is an important tool for performing non-invasive quantitative assessments of brain tumour glucose metabolism. Brain tumours are considered fast-growth tumours because of their high rate of proliferation. In addition, tumour cells exhibit profound genetic, biochemical and histological differences with respect to the original non-transformed cell types. Therefore, there is strong interest from the clinical investigator's point of view in understanding the role of brain metabolites in normal and pathological conditions, and especially in the development of early tumour detection techniques. Unfortunately, current diagnosis techniques ignore the dynamic aspects of these signals. It is largely believed that temporal variations of NMR spectra are noisy or simply do not carry enough information to be exploited by any reliable diagnosis procedure. Thus, current diagnosis procedures are mainly based on empirical observations extracted from single avera...

  3. The information extraction of Gannan citrus orchard based on the GF-1 remote sensing image

    Science.gov (United States)

    Wang, S.; Chen, Y. L.

    2017-02-01

    The production of Gannan oranges is the largest in China and occupies an important position worldwide. Extracting citrus orchards quickly and effectively has great significance for fruit pathogen defense, fruit production and industrial planning. The traditional pixel-based spectral extraction of citrus orchards has low classification accuracy and can hardly avoid the "salt-and-pepper" phenomenon; under the influence of noise, the problem of different objects sharing similar spectra is severe. Taking the citrus planting area of Xunwu County, Ganzhou, as the research object, and aiming at the low accuracy of the traditional pixel-based classification method, a decision-tree classification method based on an object-oriented rule set is proposed. Firstly, multi-scale segmentation is performed on the GF-1 remote sensing image of the study area. Subsequently, sample objects are selected for statistical analysis of spectral and geometric features. Finally, combining the concept of decision-tree classification, empirical thresholds on single bands, NDVI, band combinations and object geometry are applied hierarchically to extract the information of the research area (a simplified sketch follows), implementing multi-scale segmentation with hierarchical decision-tree classification. The classification results are verified with a confusion matrix, and the overall Kappa index is 87.91%.
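
    A minimal sketch of a hierarchical, object-oriented rule set of the kind described above, assuming segmentation has already produced a per-object feature table; the feature names and thresholds are hypothetical placeholders, not the paper's empirically tuned rules.

    ```python
    import pandas as pd

    def classify_objects(objs: pd.DataFrame) -> pd.Series:
        """objs: one row per segmented image object, with hypothetical columns
        ndvi, area_m2 and shape_index. Rules are applied hierarchically."""
        cls = pd.Series("other", index=objs.index)
        veg = objs["ndvi"] > 0.3                           # level 1: vegetation
        cls[veg] = "other_vegetation"
        citrus = veg & objs["shape_index"].between(1.2, 2.5) \
                     & (objs["area_m2"] > 500)             # level 2: orchard geometry
        cls[citrus] = "citrus_orchard"
        return cls
    ```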

  4. [Studies on the effects of carbon:nitrogen ratio, inoculum type and yeast extract addition on jasmonic acid production by Botryodiplodia theobromae Pat. strain RC1].

    Science.gov (United States)

    Eng Sánchez, Felipe; Gutiérrez-Rojas, Mariano; Favela-Torres, Ernesto

    2008-09-30

    Jasmonic acid is a native plant growth regulator produced by algae, microorganisms and higher plants. This regulator is involved in the activation of defence mechanisms against pathogens and wounding in plants. The effects of the carbon:nitrogen ratio (C/N: 17, 35 and 70), the type of inoculum (spores or mycelium) and the addition of yeast extract to the media on jasmonic acid production by Botryodiplodia theobromae were evaluated. Jasmonic acid production was stimulated at a carbon:nitrogen ratio of 17. Jasmonic acid productivity was 1.7 and 1.3 times higher in media inoculated with mycelium and in media with yeast extract, respectively.

  5. The Exponentially Embedded Family of Distributions for Effective Data Representation, Information Extraction, and Decision Making

    Science.gov (United States)

    2013-03-01

    unlimited. This is equivalent to Gram-Schmidt orthogonalization for Gaussian PDFs (see Figure 2). [Figure 2 (Best Approximation): shows the true PDF, the reference PDF under H0, and the best approximation constructed using the additional information of a second sensor; the one- and two-sensor construction equations are not recoverable from the source.]

  6. Foreground and Background Lexicons and Word Sense Disambiguation for Information Extraction

    CERN Document Server

    Kilgarriff, A

    1999-01-01

    Lexicon acquisition from machine-readable dictionaries and corpora is currently a dynamic field of research, yet it is often not clear how lexical information so acquired can be used, or how it relates to structured meaning representations. In this paper I look at this issue in relation to Information Extraction (hereafter IE), and one subtask for which both lexical and general knowledge are required, Word Sense Disambiguation (WSD). The analysis is based on the widely-used, but little-discussed distinction between an IE system's foreground lexicon, containing the domain's key terms which map onto the database fields of the output formalism, and the background lexicon, containing the remainder of the vocabulary. For the foreground lexicon, human lexicography is required. For the background lexicon, automatic acquisition is appropriate. For the foreground lexicon, WSD will occur as a by-product of finding a coherent semantic interpretation of the input. WSD techniques as discussed in recent literature are suit...

  7. Non-linear correlation of content and metadata information extracted from biomedical article datasets.

    Science.gov (United States)

    Theodosiou, Theodosios; Angelis, Lefteris; Vakali, Athena

    2008-02-01

    Biomedical literature databases constitute valuable repositories of up to date scientific knowledge. The development of efficient machine learning methods in order to facilitate the organization of these databases and the extraction of novel biomedical knowledge is becoming increasingly important. Several of these methods require the representation of the documents as vectors of variables forming large multivariate datasets. Since the amount of information contained in different datasets is voluminous, an open issue is to combine information gained from various sources to a concise new dataset, which will efficiently represent the corpus of documents. This paper investigates the use of the multivariate statistical approach, called Non-Linear Canonical Correlation Analysis (NLCCA), for exploiting the correlation among the variables of different document representations and describing the documents with only one new dataset. Experiments with document datasets represented by text words, Medical Subject Headings (MeSH) and Gene Ontology (GO) terms showed the effectiveness of NLCCA.
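
    A minimal sketch of the idea using scikit-learn's linear CCA as a stand-in for the non-linear variant (NLCCA) studied in the paper: two document representations are projected onto maximally correlated directions and the projections are concatenated into one compact dataset. The data here are random placeholders.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    X_words = rng.normal(size=(200, 100))  # placeholder bag-of-words vectors
    X_mesh = rng.normal(size=(200, 50))    # placeholder MeSH-term vectors

    cca = CCA(n_components=10)
    Zw, Zm = cca.fit_transform(X_words, X_mesh)
    combined = np.hstack([Zw, Zm])         # one concise (200, 20) representation
    ```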

  8. Solution of Multiple——Point Statistics to Extracting Information from Remotely Sensed Imagery

    Institute of Scientific and Technical Information of China (English)

    Ge Yong; Bai Hexiang; Cheng Qiuming

    2008-01-01

    The two phenomena of similar objects with different spectra and of different objects with similar spectra often make it difficult to separate and identify all types of geographical objects using only spectral information. Therefore, there is a need to incorporate the spatial structural and spatial association properties of the surfaces of objects into image processing to improve the accuracy of classification of remotely sensed imagery. In the current article, a new method is proposed on the basis of the principle of multiple-point statistics for combining spectral information and spatial information for image classification. The method was validated by applying it to a case study on road extraction based on Landsat TM imagery taken over the Chinese Yellow River delta on August 8, 1999. The classification results show that this new method provides overall better results than traditional methods such as the maximum likelihood classifier (MLC).

  9. Detailed design specification for the ALT Shuttle Information Extraction Subsystem (SIES)

    Science.gov (United States)

    Clouette, G. L.; Fitzpatrick, W. N.

    1976-01-01

    The approach and landing test (ALT) shuttle information extraction system (SIES) is described in terms of general requirements and system characteristics output products and processing options, output products and data sources, and system data flow. The ALT SIES is a data reduction system designed to satisfy certain data processing requirements for the ALT phase of the space shuttle program. The specific ALT SIES data processing requirements are stated in the data reduction complex approach and landing test data processing requirements. In general, ALT SIES must produce time correlated data products as a result of standardized data reduction or special purpose analytical processes. The main characteristics of ALT SIES are: (1) the system operates in a batch (non-interactive) mode; (2) the processing is table driven; (3) it is data base oriented; (4) it has simple operating procedures; and (5) it requires a minimum of run time information.

  10. Audio-Visual Speech Recognition Using Lip Information Extracted from Side-Face Images

    Directory of Open Access Journals (Sweden)

    Iwano Koji

    2007-01-01

    Full Text Available This paper proposes an audio-visual speech recognition method using lip information extracted from side-face images as an attempt to increase noise robustness in mobile environments. Our proposed method assumes that lip images can be captured using a small camera installed in a handset. Two different kinds of lip features, lip-contour geometric features and lip-motion velocity features, are used individually or jointly, in combination with audio features. Phoneme HMMs modeling the audio and visual features are built based on the multistream HMM technique. Experiments conducted using Japanese connected digit speech contaminated with white noise in various SNR conditions show effectiveness of the proposed method. Recognition accuracy is improved by using the visual information in all SNR conditions. These visual features were confirmed to be effective even when the audio HMM was adapted to noise by the MLLR method.

  12. Multiplicativity of p-norms of completely positive maps and the additivity problem in quantum information theory

    Energy Technology Data Exchange (ETDEWEB)

    Holevo, A S [Steklov Mathematical Institute, Russian Academy of Sciences (Russian Federation)

    2006-04-30

    The additivity problem is one of the most profound mathematical problems of quantum information theory. From an analytical point of view it is closely related to the multiplicative property, with respect to tensor products, of norms of maps on operator spaces equipped with the Schatten norms (a non-commutative analogue of l_p-norms). In this paper we survey the current state of the problem.
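
    In standard notation (not necessarily the paper's), the multiplicativity property in question can be stated schematically as follows, with the Schatten p-norm defined in the last display:

    ```latex
    \|\Phi \otimes \Psi\|_{1 \to p} \;=\; \|\Phi\|_{1 \to p}\, \|\Psi\|_{1 \to p},
    \qquad
    \|\Phi\|_{1 \to p} \;=\; \sup_{\rho \neq 0} \frac{\|\Phi(\rho)\|_{p}}{\|\rho\|_{1}},
    \qquad
    \|X\|_{p} = \left(\mathrm{Tr}\,|X|^{p}\right)^{1/p}.
    ```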

  13. [An object-based information extraction technology for dominant tree species group types].

    Science.gov (United States)

    Tian, Tian; Fan, Wen-yi; Lu, Wei; Xiao, Xiang

    2015-06-01

    Information extraction for dominant tree species group types is difficult in remote sensing image classification; however, object-oriented classification using high spatial resolution remote sensing data is a new way to realize accurate type information extraction. In this paper, taking the Jiangle Forest Farm in Fujian Province as the research area and based on Quickbird image data from 2013, the object-oriented method was adopted to identify farmland, shrub-herbaceous plants, young afforested land, Pinus massoniana, Cunninghamia lanceolata and broad-leaved tree types. Three types of classification factors, including spectral, texture and different vegetation indices, were used to establish a class hierarchy. According to the different levels, membership functions and decision-tree classification rules were adopted. The results showed that the object-oriented method using texture, spectrum and the vegetation indices achieved a classification accuracy of 91.3%, which was 5.7% higher than that obtained by using only texture and spectrum.

  14. Extracting Urban Ground Object Information from Images and LiDAR Data

    Science.gov (United States)

    Yi, Lina; Zhao, Xuesheng; Li, Luan; Zhang, Guifeng

    2016-06-01

    To deal with the problem of urban ground object information extraction, this paper proposes an object-oriented classification method using aerial imagery and LiDAR data. Firstly, we select the optimal segmentation scales for different ground objects and synthesize them to obtain accurate object boundaries. Then, the ReliefF algorithm is used to select the optimal feature combination and eliminate the Hughes phenomenon. Eventually, a multiple classifier combination method is applied to obtain the classification result. To validate the feasibility of this method, two experimental regions in Stuttgart, Germany were selected (Regions A and B, covering 0.21 km² and 1.1 km², respectively). The aim of the first experiment, on Region A, was to determine the optimal segmentation scales and classification features; the overall classification accuracy reached 93.3%. The purpose of the experiment on Region B was to validate the applicability of the method to a large area, where it reached 88.4% overall accuracy. In conclusion, the proposed method performs accurately and efficiently in urban ground information extraction and is of high application value.
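
    One simple realization of the "multiple classifier combination" step is soft voting over heterogeneous classifiers; the sketch below uses scikit-learn with synthetic stand-ins for the per-object image/LiDAR features, so the feature set and classifier choices are assumptions rather than the paper's.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    # Placeholder object features (e.g. spectral means, LiDAR height statistics).
    X, y = make_classification(n_samples=500, n_features=12, n_informative=8,
                               n_classes=4, random_state=0)

    ensemble = VotingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("svm", SVC(probability=True, random_state=0)),
                    ("knn", KNeighborsClassifier())],
        voting="soft")                 # average predicted class probabilities
    ensemble.fit(X, y)
    print(ensemble.score(X, y))
    ```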

  15. Geopositioning with a quadcopter: Extracted feature locations and predicted accuracy without a priori sensor attitude information

    Science.gov (United States)

    Dolloff, John; Hottel, Bryant; Edwards, David; Theiss, Henry; Braun, Aaron

    2017-05-01

    This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to investigate algorithm performance and issues related to the registration of motion imagery and subsequent extraction of feature locations along with predicted accuracy. A case study is included corresponding to a video taken from a quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude (pointing) information. In particular, tie points are automatically measured between adjacent frames using standard optical flow matching techniques from computer vision, an a priori estimate of sensor attitude is then computed based on supplied GPS sensor positions contained in the video metadata and a photogrammetric/search-based structure from motion algorithm, and then a Weighted Least Squares adjustment of all a priori metadata across the frames is performed. Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used other than for evaluation of solution errors and corresponding accuracy.
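
    A minimal sketch of the tie-point measurement step using OpenCV's corner detection and pyramidal Lucas-Kanade optical flow; parameter values are illustrative, and the subsequent structure-from-motion and weighted least-squares adjustment steps are not shown.

    ```python
    import cv2

    def tie_points(frame_a, frame_b, max_corners=400):
        """Track corners from one grayscale frame into the next and return the
        successfully matched point pairs as (N, 2) arrays."""
        pts_a = cv2.goodFeaturesToTrack(frame_a, maxCorners=max_corners,
                                        qualityLevel=0.01, minDistance=7)
        pts_b, status, _err = cv2.calcOpticalFlowPyrLK(frame_a, frame_b, pts_a, None)
        ok = status.ravel() == 1
        return pts_a[ok].reshape(-1, 2), pts_b[ok].reshape(-1, 2)
    ```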

  16. High-resolution multispectral satellite imagery for extracting bathymetric information of Antarctic shallow lakes

    Science.gov (United States)

    Jawak, Shridhar D.; Luis, Alvarinho J.

    2016-05-01

    High-resolution pansharpened images from WorldView-2 were used for bathymetric mapping around the Larsemann Hills and the Schirmacher oasis, east Antarctica. We digitized the lake features, manually extracting all the lakes in both study areas. In order to extract bathymetry values from the multispectral imagery we used two different models: (a) the Stumpf model and (b) the Lyzenga model. Multiband image combinations were used to improve the results of the bathymetric information extraction. The derived depths were validated against in-situ measurements and the root mean square error (RMSE) was computed. We also quantified the error between in-situ and satellite-estimated lake depth values. Our results indicated a high correlation (R = 0.60-0.80) between estimated and in-situ depth measurements, with RMSE ranging from 0.10 to 1.30 m. This study suggests that the coastal blue band of the WV-2 imagery retrieves more accurate bathymetry information than the other bands. To test the effect of lake size and depth on bathymetry retrieval, we grouped all the lakes by size and depth (reference data), as some of the lakes were open, some semi-frozen and others completely frozen. Several tests were performed on the open lakes by size and depth. Based on depth, very shallow lakes provided better correlation (≈ 0.89) than shallow (≈ 0.67) and deep lakes (≈ 0.48). Based on size, large lakes yielded better correlation than medium and small lakes.
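
    A minimal sketch of the Stumpf band-ratio model named above, in which the log-ratio of two band reflectances is regressed against in-situ depths; the constant n and the blue/green band pairing are conventional choices and may differ from the study's calibration.

    ```python
    import numpy as np

    def stumpf_depth(r_blue, r_green, depths_insitu, calib_mask, n=1000.0):
        """depth ~ m1 * ln(n*R_blue)/ln(n*R_green) + m0, with m1, m0 fitted by
        linear regression at the calibration pixels selected by calib_mask."""
        ratio = np.log(n * r_blue) / np.log(n * r_green)
        m1, m0 = np.polyfit(ratio[calib_mask], depths_insitu, 1)
        return m1 * ratio + m0
    ```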

  17. Overview of image processing tools to extract physical information from JET videos

    Science.gov (United States)

    Craciunescu, T.; Murari, A.; Gelfusa, M.; Tiseanu, I.; Zoita, V.; EFDA Contributors, JET

    2014-11-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the

  18. Citizen-Centric Urban Planning through Extracting Emotion Information from Twitter in an Interdisciplinary Space-Time-Linguistics Algorithm

    Directory of Open Access Journals (Sweden)

    Bernd Resch

    2016-07-01

    Full Text Available Traditional urban planning processes typically happen in offices and behind desks. Modern types of civic participation can enhance those processes by acquiring citizens' ideas and feedback in participatory sensing approaches like "People as Sensors". As such, citizen-centric planning can be achieved by analysing Volunteered Geographic Information (VGI) data such as Twitter tweets and posts from other social media channels. These user-generated data comprise several information dimensions, such as spatial and temporal information, and textual content. However, in previous research, these dimensions were generally examined separately in single-disciplinary approaches, which does not allow for holistic conclusions in urban planning. This paper introduces TwEmLab, an interdisciplinary approach towards extracting citizens' emotions in different locations within a city. More concretely, we analyse tweets in three dimensions (space, time, and linguistics), based on similarities between each pair of tweets as defined by a specific set of functional relationships in each dimension. We use a graph-based semi-supervised learning algorithm to classify the data into discrete emotions (happiness, sadness, fear, anger/disgust, none). Our proposed solution allows tweets to be classified into emotion classes in a multi-parametric approach. Additionally, we created a manually annotated gold standard that can be used to evaluate TwEmLab's performance. Our experimental results show that we are able to identify tweets carrying emotions and that our approach bears extensive potential to reveal new insights into citizens' perceptions of the city.
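
    A minimal sketch of graph-based semi-supervised classification using scikit-learn's LabelSpreading as a generic stand-in for TwEmLab's algorithm; the per-tweet feature vectors, neighbourhood size and labelled fraction are placeholder assumptions.

    ```python
    import numpy as np
    from sklearn.semi_supervised import LabelSpreading

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 5))     # placeholder per-tweet features combining
                                      # spatial, temporal and linguistic similarity
    y = np.full(300, -1)              # -1 marks unlabelled tweets
    y[:30] = rng.integers(0, 5, 30)   # a few gold-standard labels (5 classes)

    model = LabelSpreading(kernel="knn", n_neighbors=10)
    model.fit(X, y)                   # labels diffuse through the k-NN graph
    emotions = model.transduction_    # inferred emotion class for every tweet
    ```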

  19. Improving the extraction of Ara h 6 (a peanut allergen) from a chocolate-based matrix for immunosensing detection: Influence of time, temperature and additives.

    Science.gov (United States)

    Alves, Rita C; Pimentel, Filipa B; Nouws, Henri P A; Silva, Túlio H B; Oliveira, M Beatriz P P; Delerue-Matos, Cristina

    2017-03-01

    The extraction of Ara h 6 (a peanut allergen) from a complex chocolate-based food matrix was optimized by testing different temperatures, extraction times, and the influence of additives (NaCl and skimmed milk powder), in a total of 36 different conditions. Analyses were carried out using an electrochemical immunosensor. Three conditions were selected since they allowed the extraction of the highest levels of Ara h 6. These extractions were performed using 2 g of sample and 20 ml of Tris-HNO3 (pH = 8) containing: (a) 0.1 M NaCl and 2 g of skimmed milk powder at 21 °C for 60 min; (b) 1 M NaCl and 1 g of skimmed milk powder at 21 °C for 60 min; and (c) 2 g of skimmed milk powder at 60 °C for 60 min. Recoveries were similar to or higher than 94.7%. This work highlights the importance of adjusting extraction procedures to the target analyte and the food matrix components. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. The Study on Height Information Extraction of Cultural Features in Remote Sensing Images Based on Shadow Areas

    Science.gov (United States)

    Bao-Ming, Z.; Hai-Tao, G.; Jun, L.; Zhi-Qing, L.; Hong, H.

    2011-09-01

    Cultural features are important elements in a geospatial information library, and height is an important attribute of cultural features. The existence and precision of height information have a direct influence on topographic maps, especially the quality of large- and medium-scale topographic maps, and on the level of surveying and mapping support. There are many methods for height information extraction, chief among them ground survey (direct field measurement), spatial sensors and photogrammetric methods; however, automatic extraction is very difficult. This paper emphasizes a segmentation algorithm for shadow areas under multiple constraints and realizes automatic extraction of height information from shadows. A binarized image can be obtained using a grey threshold estimated under the multiple constraints. In the area of interest, spot elimination and region splitting are performed. After region labeling and elimination of non-shadowed regions, the shadow areas of cultural features can be found. The heights of the cultural features can then be calculated from the shadow length together with the sun altitude and azimuth angles and the sensor altitude and azimuth angles (see the sketch below). A great many experiments have shown that the mean square error of the extracted height information is close to 2 meters and the automatic extraction rate is close to 70%.
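
    On flat terrain the core geometric step reduces to simple trigonometry; the sketch below shows that first-order relation and omits the sensor altitude/azimuth corrections the paper applies.

    ```python
    import math

    def height_from_shadow(shadow_len_m, sun_elev_deg):
        """First-order building height from shadow length and sun elevation,
        assuming flat terrain and a fully visible shadow."""
        return shadow_len_m * math.tan(math.radians(sun_elev_deg))

    # e.g. a 12 m shadow under a 40 degree sun elevation -> about 10.1 m
    print(height_from_shadow(12.0, 40.0))
    ```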

  2. Classification and Extraction of Urban Land-Use Information from High-Resolution Image Based on Object Multi-features

    Institute of Scientific and Technical Information of China (English)

    Kong Chunfang; Xu Kai; Wu Chonglong

    2006-01-01

    Urban land provides a suitable location for various economic activities which affect the development of surrounding areas. With rapid industrialization and urbanization, the contradictions in land-use become more noticeable. Urban administrators and decision-makers seek modern methods and technology to provide information support for urban growth. Recently, with the fast development of high-resolution sensor technology, more relevant data can be obtained, which is an advantage in studying the sustainable development of urban land-use. However, these data are only information sources and are a mixture of "information" and "noise". Processing, analysis and information extraction from remote sensing data are necessary to provide useful information. This paper extracts urban land-use information from a high-resolution image by using the multi-feature information of the image objects, and adopts an object-oriented image analysis approach and multi-scale image segmentation technology. A classification and extraction model is set up based on the multiple features of the image objects, in order to provide information for reasonable planning and effective management. This new image analysis approach offers a satisfactory solution for extracting information quickly and efficiently.

  3. Effect of the addition of soy lecithin and Yucca schidigera extract on the properties of gelatin and glycerol based biodegradable films

    OpenAIRE

    Tatiana P. Dias; Grosso,Carlos R.F.; Caroline Andreuccetti; Rosemary A. de Carvalho; Tomás Galicia-García; Fernando Martinez-Bustos

    2013-01-01

    Gelatin-based films containing soy lecithin or Yucca schidigera extract and glycerol as plasticizer were produced by casting and characterized for their mechanical properties, water vapor permeability (WVP), water solubility, opacity and morphology. The addition of glycerol reduced the tensile strength, with a difference of ~ 68% between the values for the minimum and maximum concentrations evaluated, both for the plasticizer and the surfactant. Elongation values reached 52% and 40%, for film...

  4. Enriching a document collection by integrating information extraction and PDF annotation

    Science.gov (United States)

    Powley, Brett; Dale, Robert; Anisimoff, Ilya

    2009-01-01

    Modern digital libraries offer all the hyperlinking possibilities of the World Wide Web: when a reader finds a citation of interest, in many cases she can now click on a link to be taken to the cited work. This paper presents work aimed at providing the same ease of navigation for legacy PDF document collections that were created before the possibility of integrating hyperlinks into documents was ever considered. To achieve our goal, we need to carry out two tasks: first, we need to identify and link citations and references in the text with high reliability; and second, we need the ability to determine physical PDF page locations for these elements. We demonstrate the use of a high-accuracy citation extraction algorithm which significantly improves on earlier reported techniques, and a technique for integrating PDF processing with a conventional text-stream based information extraction pipeline. We demonstrate these techniques in the context of a particular document collection, this being the ACL Anthology; but the same approach can be applied to other document sets.

  5. Extraction of Face Contour Information

    Institute of Scientific and Technical Information of China (English)

    原瑾

    2011-01-01

    Edge extraction has important research value in pattern recognition, machine vision, image analysis, image coding and related fields. Face detection technology is a prerequisite for face recognition. Aiming at face localization within face detection, this paper proposes a technique for extracting face contour information in order to identify the main region of the face. Several edge detection operators are first introduced, and a dynamic threshold method is then proposed to improve the image threshold, which increases the edge detection accuracy.
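
    A minimal sketch of one common realization of the dynamic-threshold idea: derive per-image Canny thresholds from Otsu's threshold instead of fixing them globally. The 0.5 ratio between the low and high thresholds is a conventional heuristic, not a value taken from the paper.

    ```python
    import cv2

    def dynamic_threshold_edges(gray):
        """gray: single-channel uint8 image. Otsu picks a global threshold from
        the histogram; Canny then uses it as the high hysteresis threshold."""
        otsu_t, _ = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        return cv2.Canny(gray, 0.5 * otsu_t, otsu_t)
    ```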

  6. Metaproteomics: extracting and mining proteome information to characterize metabolic activities in microbial communities.

    Science.gov (United States)

    Abraham, Paul E; Giannone, Richard J; Xiong, Weili; Hettich, Robert L

    2014-06-17

    Contemporary microbial ecology studies usually employ one or more "omics" approaches to investigate the structure and function of microbial communities. Among these, metaproteomics aims to characterize the metabolic activities of the microbial membership, providing a direct link between the genetic potential and functional metabolism. The successful deployment of metaproteomics research depends on the integration of high-quality experimental and bioinformatic techniques for uncovering the metabolic activities of a microbial community in a way that is complementary to other "meta-omic" approaches. The essential, quality-defining informatics steps in metaproteomics investigations are: (1) construction of the metagenome, (2) functional annotation of predicted protein-coding genes, (3) protein database searching, (4) protein inference, and (5) extraction of metabolic information. In this article, we provide an overview of current bioinformatic approaches and software implementations in metaproteome studies in order to highlight the key considerations needed for successful implementation of this powerful community-biology tool.

  8. Optimal Extraction of Cosmological Information from Supernova Datain the Presence of Calibration Uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Alex G.; Miquel, Ramon

    2005-09-26

    We present a new technique to extract the cosmological information from high-redshift supernova data in the presence of calibration errors and extinction due to dust. While in the traditional technique the distance modulus of each supernova is determined separately, in our approach we determine all distance moduli at once, in a process that achieves a significant degree of self-calibration. The result is a much reduced sensitivity of the cosmological parameters to the calibration uncertainties. As an example, for a strawman mission similar to that outlined in the SNAP satellite proposal, the increased precision obtained with the new approach is roughly equivalent to a factor of five decrease in the calibration uncertainty.

  9. Information Extraction for System-Software Safety Analysis: Calendar Year 2007 Year-End Report

    Science.gov (United States)

    Malin, Jane T.

    2008-01-01

    This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis on the models to identify possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations; 4) perform discrete-time-based simulation on the models to investigate scenarios where these paths may play a role in failures and mishaps; and 5) identify resulting candidate scenarios for software integration testing. This paper describes new challenges in a NASA abort system case, and enhancements made to develop the integrated tool set.

  10. Developing a Process Model for the Forensic Extraction of Information from Desktop Search Applications

    Directory of Open Access Journals (Sweden)

    Timothy Pavlic

    2008-03-01

    Full Text Available Desktop search applications can contain cached copies of files that were deleted from the file system. Forensic investigators see this as a potential source of evidence, as documents deleted by suspects may still exist in the cache. Whilst there have been attempts at recovering data collected by desktop search applications, there is no methodology governing the process, nor discussion on the most appropriate means to do so. This article seeks to address this issue by developing a process model that can be applied when developing an information extraction application for desktop search applications, discussing preferred methods and the limitations of each. This work represents a more structured approach than other forms of current research.

  11. Combining information preserved in fluvial topography and strath terraces to extract rock uplift rates in the Apennines

    Science.gov (United States)

    Fox, M.; Brandon, M. T.

    2015-12-01

    Longitudinal river profiles respond to changes in tectonic uplift rates through climate-modulated erosion; rock uplift rate information is therefore recorded in fluvial topography, and extracting this information provides crucial constraints on tectonic processes. In addition to the shape of the modern river profile, paleo-river profiles can often be mapped in the field by connecting strath terraces. These strath terraces act as markers that record complex incision histories in response to rock uplift rates that vary in space and time. We exploit an analytical solution to the linear version (n=1) of the stream-power equation to efficiently extract uplift histories from river networks and strath terraces. The general solution states that the elevation of a point in a river channel is equal to the time integral of its uplift history, where integration is carried out over the time required for an uplift signal to propagate from the baselevel of the river network to the point of interest. A similar expression can be written for each strath terrace in the dataset. Through discretization of these expressions into discrete timesteps and spatial nodes, a linear system of equations can be solved using linear inverse methods. In this way, strath terraces and river profiles can be interpreted in an internally consistent framework, without the requirement that the river profile be in a steady state. We apply our approach to the Northern Apennines, where strath terraces have been extensively mapped and dated. Comparison of our inferred rock uplift rate history with modern rock uplift rates enables us to distinguish short-term deformation on a buried thrust fault from long-term mountain-building processes.
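
    In standard linear stream-power notation (z elevation, U uplift rate, K erodibility, A drainage area, m the area exponent, x distance upstream of the baselevel x_b), the relation the abstract describes can be written schematically as:

    ```latex
    \frac{\partial z}{\partial t} = U(t) - K A^{m}\,\frac{\partial z}{\partial x},
    \qquad
    z(x,t) = \int_{t-\tau(x)}^{t} U(t')\,dt',
    \qquad
    \tau(x) = \int_{x_b}^{x} \frac{dx'}{K\,A(x')^{m}},
    ```

    where τ(x) is the time needed for an uplift signal to travel from baselevel to the point x; this reconstruction follows standard theory rather than the paper's exact notation.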

  12. EnvMine: A text-mining system for the automatic extraction of contextual information

    Directory of Open Access Journals (Sweden)

    de Lorenzo Victor

    2010-06-01

    Full Text Available Abstract Background For ecological studies, it is crucial to count on adequate descriptions of the environments and samples being studied. Such a description must be done in terms of their physicochemical characteristics, allowing a direct comparison between different environments that would be difficult to do otherwise. Also the characterization must include the precise geographical location, to make possible the study of geographical distributions and biogeographical patterns. Currently, there is no schema for annotating these environmental features, and these data have to be extracted from textual sources (published articles). So far, this had to be performed by manual inspection of the corresponding documents. To facilitate this task, we have developed EnvMine, a set of text-mining tools devoted to retrieve contextual information (physicochemical variables and geographical locations) from textual sources of any kind. Results EnvMine is capable of retrieving the physicochemical variables cited in the text, by means of the accurate identification of their associated units of measurement. In this task, the system achieves a recall (percentage of items retrieved) of 92% with less than 1% error. Also a Bayesian classifier was tested for distinguishing parts of the text describing environmental characteristics from others dealing with, for instance, experimental settings. Regarding the identification of geographical locations, the system takes advantage of existing databases such as GeoNames to achieve 86% recall with 92% precision. The identification of a location includes also the determination of its exact coordinates (latitude and longitude), thus allowing the calculation of distances between the individual locations. Conclusion EnvMine is a very efficient method for extracting contextual information from different text sources, like published articles or web pages. This tool can help in determining the precise location and physicochemical

  13. Intelligent information extraction to aid science decision making in autonomous space exploration

    Science.gov (United States)

    Merényi, Erzsébet; Tasdemir, Kadim; Farrand, William H.

    2008-04-01

    Effective scientific exploration of remote targets such as solar system objects increasingly calls for autonomous data analysis and decision making on-board. Today, robots in space missions are programmed to traverse from one location to another without regard to what they might be passing by. By not processing data as they travel, they can miss important discoveries, or will need to travel back if scientists on Earth find the data warrant backtracking. This is a suboptimal use of resources even on relatively close targets such as the Moon or Mars. The farther mankind ventures into space, the longer the delay in communication, due to which interesting findings from data sent back to Earth are made too late to command a (roving, floating, or orbiting) robot to further examine a given location. However, autonomous commanding of robots in scientific exploration can only be as reliable as the scientific information extracted from the data that is collected and provided for decision making. In this paper, we focus on the discovery scenario, where information extraction is accomplished with unsupervised clustering. For high-dimensional data with complicated structure, detailed segmentation that identifies all significant groups and discovers the small, surprising anomalies in the data, is a challenging task at which conventional algorithms often fail. We approach the problem with precision manifold learning using self-organizing neural maps with non-standard features developed in the course of our research. We demonstrate the effectiveness and robustness of this approach on multi-spectral imagery from the Mars Exploration Rovers Pancam, and on synthetic hyperspectral imagery.
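
    A minimal sketch of self-organizing-map clustering of multispectral pixels, using the third-party minisom package as a generic stand-in for the authors' SOM variant with non-standard features; map size, band count and training length are placeholders.

    ```python
    import numpy as np
    from minisom import MiniSom  # third-party package: pip install minisom

    rng = np.random.default_rng(0)
    spectra = rng.random((1000, 11))   # placeholder multispectral pixels,
                                       # one 11-band spectrum per pixel

    # A 20x20 map preserves the topology of the spectral manifold, so rare,
    # anomalous spectra end up on small, isolated groups of neurons.
    som = MiniSom(20, 20, 11, sigma=1.5, learning_rate=0.5, random_seed=0)
    som.train_random(spectra, 10000)
    clusters = np.array([som.winner(s) for s in spectra])  # (row, col) per pixel
    ```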

  14. Phenol-oxidizing enzyme expression in Lentinula edodes by the addition of sawdust extract, aromatic compounds, or copper in liquid culture media.

    Science.gov (United States)

    Tanesaka, Eiji; Takeda, Hironori; Yoshida, Motonobu

    2013-01-01

    This study examined how the addition of a sawdust extract from Castanopsis cuspidata, several aromatic compounds, and copper affected the expression of a phenol-oxidizing enzyme in the white-rot basidiomycete Lentinula edodes. Compared to liquid media that had not been supplemented with sawdust extract (MYPG), MYPG containing low (MYPG-S100) or high (MYPG-S500) concentrations of sawdust extract had a marked effect on the promotion of mycelial growth. No manganese peroxidase (MnP) production was observed in either MYPG or MYPG-S100 media until 35 days after inoculation. However, MnP production was enhanced by culture in MYPG-S500, with a marked increase observed suddenly at 14 days after inoculation. Northern blot analysis revealed that transcription of the lemnp2 gene, coding for extracellular MnP, was first observed at detectable levels at day 10 after inoculation in MYPG-S500, increasing gradually thereafter until days 22-25. However, laccase (Lcc) production was not observed in any of the media until 35 days after inoculation. Addition of 10 mM aromatic compounds (1,2-benzenediol, 2-methoxyphenol, hydroquinone, and 4-anisidine) to the MYPG-S500 medium completely inhibited MnP production and did not enhance Lcc production. While the addition of 1 or 2 mM Cu2+ (CuSO4·5H2O) to MYPG-S500 medium completely inhibited MnP production, the Cu2+ addition caused a marked increase in Lcc production at 17 and 6 days after the addition, respectively.

  15. Utilization of plant bioactives as feed additives for poultry: The effect of Aloe vera gel and its extract on performance of broilers

    Directory of Open Access Journals (Sweden)

    A.P Sinurat

    2003-10-01

    Full Text Available Feed additives are commonly added to poultry feed as growth promoters or to improve feed efficiency. The most common feed additive is antibiotic at sub-therapeutic doses, although there is controversy about its impact on human health. Previous results showed that Aloe vera gel could improve feed efficiency in broilers, and an in vitro study showed that the extract has an antibacterial effect. Therefore, a further experiment was designed to study the response of broilers to Aloe vera gel or its extract as feed additives. Aloe vera was prepared as dry gel or chloroform extract and included in the diet at levels of 0.25, 0.50 and 1.00 g/kg (equal to dry gel). Standard diets with or without antibiotic were also formulated as controls, and a commercial diet was included for comparison. The diets were fed to broilers from day old to 5 weeks. Each treatment had 9 replicates with 6 chicks/replicate. Parameters observed were feed consumption, weight gain and feed conversion ratio. Carcass yield, abdominal fat level, relative weight of liver, gizzard and digestive tract, and length of digestive tract were also measured at the end of the feeding trial. The results showed that Aloe gel and its extract did not significantly influence body weight gain and feed consumption of broilers (P>0.05), but improved feed conversion slightly (3.50%). The response in this trial was similar to those of the commercial diet and the diet with added antibiotic. There was no significant (P>0.05) effect of Aloe vera bioactives on carcass yield, abdominal fat level or relative liver weight. However, Aloe vera gel and its extract tended to increase gizzard weight and gastrointestinal weight and length. The Aloe vera gel and its extract also reduced the total count of aerobic bacteria in the digesta of the digestive tract. It is concluded that Aloe vera gel improves feed efficiency in broilers by increasing the size of the digestive tract and reducing the total count of aerobic bacteria in

  16. Improvement of microbiological safety and sensorial quality of pork jerky by electron beam irradiation and by addition of onion peel extract and barbecue flavor

    Science.gov (United States)

    Kim, Hyun-Joo; Jung, Samooel; Yong, Hae In; Bae, Young Sik; Kang, Suk Nam; Kim, Il Suk; Jo, Cheorun

    2014-05-01

    The combined effects of electron-beam (EB) irradiation and the addition of onion peel (OP) extract and barbecue flavor (BF) on the inactivation of foodborne pathogens and the quality of pork jerky were investigated. Prepared pork jerky samples were irradiated (0, 1, 2, and 4 kGy) and stored for 2 months at 25 °C. The D10 values of Listeria monocytogenes, Escherichia coli, and Salmonella typhimurium in the OP-treated samples were 0.19, 0.18, and 0.19 kGy, whereas those in the control were 0.25, 0.23, and 0.20 kGy, respectively. Irradiated samples with OP extract and BF had substantially lower total aerobic bacterial counts than the control. Samples with added OP extract and BF also had lower peroxide values than the control. Sensory evaluation indicated that the overall acceptability of the treated samples was unchanged at doses up to 2 kGy. Therefore, EB irradiation combined with OP extract and BF improved the microbiological safety of pork jerky with no negative effects on its quality.

  17. The effects of green tea extract additive feeds on the growth performance and survival rate of the giant freshwater prawn (Macrobrachium rosenbergii

    Directory of Open Access Journals (Sweden)

    Pimpimol, T.

    2005-02-01

    Full Text Available This study was designed to examine the effects of green tea extract (GTE) additive feeds on growth performance in the giant freshwater prawn. Two separate trials were conducted using two different stages of prawn for initial stocking: one with small post-larvae (PL10), the other with 5.6 g prawns. A completely randomized design was applied in this study. The small post-larvae (PL10) were raised in cement tanks (1 × 1.5 m²). Three treatments with three replications each were applied as follows: treatment 1 (control) was the commercial pellet feed; treatments 2 and 3 were feeds with 1% and 2% green tea extract, respectively. The assay was run for 8 weeks, and the prawns were randomly selected for weight determination every week. The results showed no significant difference in specific growth rate or survival (P > 0.05), but the feed conversion ratio was reduced in prawns fed the green tea extracts (P < 0.05). Therefore, green tea extract has potential as a growth enhancer in giant freshwater prawn culture.

  18. Extraction of orientation-and-scale-dependent information from GPR B-scans with tunable two-dimensional wavelet filters

    Science.gov (United States)

    Tzanis, A.

    2012-04-01

    GPR is an invaluable tool for civil and geotechnical engineering applications. One of the most significant objectives of such applications is the detection of fractures, inclined interfaces, empty or filled cavities frequently associated with jointing/faulting and a host of other oriented features. These types of target, especially fractures, are usually not good reflectors and are spatially localized. Their scale is therefore a factor significantly affecting their detectability. Quite frequently, systemic or extraneous noise, or other significant structural characteristics swamp the data with information which blurs, or even masks reflections from such targets, rendering their recognition difficult. This paper reports a method of extracting information (isolating) oriented and scale-dependent structural characteristics, based on oriented two-dimensional B-spline wavelet filters and Gabor wavelet filters. In addition to their advantageous properties (e.g. compact support, orthogonality etc), B-spline wavelets comprise a family with a broad spectrum of frequency localization properties and frequency responses that mimic, more or less, the shape of the radar source wavelet. For instance, the Ricker wavelet is also approximated by derivatives of Cardinal B-splines. An oriented two-dimensional B-spline filter is built by sidewise arranging a number of identical one-dimensional wavelets to create a matrix, tapering the edge-parallel direction with an orthogonal window function and rotating the resulting matrix to the desired orientation. The length of the one-dimensional wavelet (edge-normal direction) determines the width of the topographic features to be isolated. The number of parallel wavelets (edge-parallel direction) determines the feature length over which to smooth. The Gabor wavelets were produced by a Gabor kernel that is a product of an elliptical Gaussian and a complex plane wave: it is two-dimensional by definition. Their applications have hitherto focused
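
    A minimal sketch of the Gabor branch: build a kernel tuned to one orientation and scale and convolve it with the B-scan, here via OpenCV's getGaborKernel; kernel size, wavelength and aspect ratio are illustrative, and the B-spline filter construction described above is not shown.

    ```python
    import cv2
    import numpy as np

    def oriented_response(bscan, theta_deg, wavelength_px=12, aspect=0.3):
        """Enhance reflectors dipping at theta_deg in a GPR B-scan by convolving
        with a Gabor kernel; wavelength_px sets the spatial scale to isolate."""
        kernel = cv2.getGaborKernel(ksize=(31, 31), sigma=wavelength_px / 2.0,
                                    theta=np.deg2rad(theta_deg),
                                    lambd=wavelength_px, gamma=aspect, psi=0)
        return cv2.filter2D(bscan.astype(np.float32), -1, kernel)
    ```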

  19. Synthesis and characterization of methylcellulose from cellulose extracted from mango seeds for use as a mortar additive

    Directory of Open Access Journals (Sweden)

    Júlia G. Vieira

    2012-01-01

    Full Text Available Methylcellulose was produced from the fibers of Mangifera indica L. Ubá mango seeds. MCD and MCI methylcellulose samples were made by heterogeneous methylation, using dimethyl sulfate and iodomethane as alkylating agents, respectively. The materials produced were characterized for their thermal properties (DSC and TGA), crystallinity (XRD) and Degree of Substitution (DS) by the chemical route. The cellulose derivatives were employed as mortar additives in order to improve mortar workability and adhesion to the substrate. These properties were evaluated by means of the consistency index (CI) and bond tensile strength (TS) tests. The methylcellulose (MCD and MCI) samples had CI increased by 27.75 and 71.54% and TS increased by 23.33 and 29.78%, respectively, in comparison to the reference sample. Therefore, the polymers can be used to produce adhesive mortars.

  20. Information Management Processes for Extraction of Student Dropout Indicators in Courses in Distance Mode

    Directory of Open Access Journals (Sweden)

    Renata Maria Abrantes Baracho

    2016-04-01

    Full Text Available This research addresses the use of information management processes to extract student dropout indicators in distance mode courses. Distance education in Brazil aims to facilitate access to information. The MEC (Ministry of Education) announced, in the second semester of 2013, that the main obstacles faced by institutions offering courses in this mode were student dropout and the resistance of both educators and students to this mode. The research used a mixed methodology, qualitative and quantitative, to obtain student dropout indicators. The factors found and validated in this research were: lack of interest from students, insufficient training in the use of the virtual learning environment for students, structural problems in the schools chosen to offer the course, students without e-mail, incoherent answers to course activities, and lack of knowledge on the part of the student when using the computer tool. The scenario considered was a course offered in distance mode called Aluno Integrado (Integrated Student).

  1. Use of an additive canthaxanthin based and annatto extract in diets of laying hens and its effect on the color of the yolk and the egg shelf life

    Directory of Open Access Journals (Sweden)

    Víctor Rojas V.

    2015-09-01

    Full Text Available The aim of this study was to evaluate the use of an additive based on canthaxanthin and annatto extract (Bixa orellana L.) in diets of laying hens and its effect on the color of the yolk and the egg shelf life. A total of 864 laying hens, 34 to 45 weeks old, were used, distributed in a completely randomized design with six replicates per treatment. Treatments were T0 (control diet), T1 (T0 + 30 g of canthaxanthin and annatto extract) and T2 (T0 + 60 g of canthaxanthin and annatto extract). The results were 88.6, 91.9 and 90.8% for laying percentage; 60.5, 61.6 and 61.5 g for egg weight; and 53.6, 56.4 and 55.7 g for egg mass. The yolk color at 7 °C was 6, 9 and 12 on the Roche scale; by Minolta colorimetry, "L" was 42.10, 40.24 and 39.65, "a" was 0.07, 3.68 and 6.44, and "b" was 19.35, 18.36 and 18.18. Shelf life at 7 °C was 81, 86 and 90 HU. Lipid peroxidation was 0.10, 0.07 and 0.05 μmol MDA.g-1 yolk, for T0, T1 and T2 respectively. All variables showed statistically significant differences between treatments (p < 0.05). Feed consumption was 103.9, 109.2 and 107.5 g and feed conversion was 1.94, 1.93 and 1.92. It is concluded that the addition of canthaxanthin and annatto extract at 30 and 60 g t-1 of feed, compared to the control, improved performance parameters, yolk color and egg shelf life.

  2. Information Extraction and Dependency on Open Government Data (ogd) for Environmental Monitoring

    Science.gov (United States)

    Abdulmuttalib, Hussein

    2016-06-01

    Environmental monitoring practices support decision makers in government and private institutions, as well as environmentalists and planners, among others. This support helps them act towards the sustainability of our environment and take efficient measures for protecting human beings in general, but it is difficult to extract useful information from 'OGD' and assure its quality for the purpose. On the other hand, monitoring itself comprises detecting changes as they happen, or within the mitigation period range, which means that any source of data to be used for monitoring should replicate the information related to the period of environmental monitoring; otherwise it is of little more than historical value. In this paper the extraction and structuring of information from Open Government Data 'OGD' that can be useful to environmental monitoring is assessed, looking into its availability, its usefulness to environmental monitoring of a certain type, its repetition period, and its dependences. The assessment is performed on a small sample selected from OGD, bearing in mind the type of environmental change monitored, such as the increase and concentration of built-up areas, the reduction of green areas, or the change of temperature in a specific area. The World Bank mentioned in its blog that data is open if it satisfies both conditions of being technically open and legally open. The use of Open Data is thus regulated by published terms of use, or an agreement which implies some conditions, without violating the two conditions mentioned above. Within the scope of the paper I wish to share the experience of using some OGD to support an environmental monitoring task performed to mitigate the production of carbon dioxide, by regulating energy consumption and by properly designing the test area's landscapes using Geodesign tactics, and meanwhile wish to add to the results achieved by many

  3. Urban Built-Up Area Extraction from Landsat TM/ETM+ Images Using Spectral Information and Multivariate Texture

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2014-08-01

    Full Text Available Urban built-up area information is required by various applications. However, urban built-up area extraction using moderate resolution satellite data, such as Landsat series data, is still a challenging task due to significant intra-urban heterogeneity and spectral confusion with other land cover types. In this paper, a new method that combines spectral information and multivariate texture is proposed. The multivariate textures are separately extracted from multispectral data using a multivariate variogram with different distance measures, i.e., Euclidean, Mahalanobis and spectral angle distances. The multivariate textures and the spectral bands are then combined for urban built-up area extraction. Because the urban built-up area is the only target class, a one-class classifier, one-class support vector machine, is used. For comparison, the classical gray-level co-occurrence matrix (GLCM is also used to extract image texture. The proposed method was evaluated using bi-temporal Landsat TM/ETM+ data of two megacity areas in China. Results demonstrated that the proposed method outperformed the use of spectral information alone and the joint use of the spectral information and the GLCM texture. In particular, the inclusion of multivariate variogram textures with spectral angle distance achieved the best results. The proposed method provides an effective way of extracting urban built-up areas from Landsat series images and could be applicable to other applications.
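
    A minimal sketch of the kind of multivariate variogram texture described here, using the spectral angle as the inter-pixel distance and a one-class SVM as the built-up detector, might look as follows; Python with numpy/scikit-learn is assumed, the window and lag sizes are illustrative, and this is not the authors' implementation:

      import numpy as np
      from sklearn.svm import OneClassSVM

      def spectral_angle(a, b):
          # Spectral angle distance between two pixel spectra
          cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
          return np.arccos(np.clip(cos, -1.0, 1.0))

      def variogram_texture(img, lag=1, win=7):
          # Multivariate variogram texture: half the mean squared spectral-angle
          # distance between pixel pairs separated by `lag` within a local window.
          # img is an H x W x B multispectral array.
          H, W, _ = img.shape
          r = win // 2
          gamma = np.zeros((H, W))
          for i in range(r, H - r):
              for j in range(r, W - r):
                  block = img[i - r:i + r + 1, j - r:j + r + 1]
                  d = [spectral_angle(block[y, x], block[y, x + lag]) ** 2
                       for y in range(win) for x in range(win - lag)]
                  gamma[i, j] = 0.5 * np.mean(d)
          return gamma

      # Built-up is the only target class, so a one-class SVM is trained on
      # feature vectors (spectral bands + texture) of known built-up pixels:
      # clf = OneClassSVM(kernel="rbf", nu=0.1).fit(X_builtup)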

  4. Modal damping factor detected with an impulse-forced vibration method provides additional information on osseointegration during dental implant healing.

    Science.gov (United States)

    Feng, Sheng-Wei; Chang, Wei-Jen; Lin, Che-Tong; Lee, Sheng-Yang; Teng, Nai-Chia; Huang, Haw-Ming

    2015-01-01

    To evaluate whether resonance frequency (RF) analysis combined with modal damping factor (MDF) analysis provides additional information on dental implant healing status. In in vitro tests, epoxy resin was used to simulate the implant healing process. The RF and MDF values of the implants were measured during the entire polymerization process. Implant stability quotient (ISQ) and Periotest values (PTVs) from Osstell and Periotest devices were used to validate the apparatus. In in vivo experiments, vibrational analysis was performed on 17 dental implants in 12 patients. The RF and MDF values of the tested implants were recorded during the first 10 weeks after surgery. The effects of jaw types and primary stability on MDF healing curves were analyzed. In the in vitro model, the RF values obtained from the apparatus used in this study were similar to those obtained from the Osstell device. Unlike the Periotest healing curve, the MDF curve showed a 1.8-fold increase during the early phase. In clinical experiments, the mean RF values were unchanged during the first 2 weeks and increased continuously until 6 weeks. The corresponding mean MDF value decreased over time and reached 0.045 ± 0.011 at 10 weeks, approximately 50% lower than the initial value. Although the RF values of the implants with higher initial frequency remained unchanged during the healing period, the MDF values decreased significantly. Analysis of RF combined with MDF provides additional information on dental implant healing status. MDF analysis can detect changes in the implant/bone complex during the healing period even in implants with higher RF values.
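
    The record does not specify how the MDF is computed from the impulse-forced response, but one common estimator of a modal damping factor is the half-power (-3 dB) bandwidth method; a minimal Python sketch under that assumption (numpy; a single, well-separated resonance peak) is:

      import numpy as np

      def modal_damping(freqs, amp):
          # Half-power bandwidth estimate: zeta ~ (f2 - f1) / (2 * fn),
          # where f1, f2 bracket the resonance peak at 1/sqrt(2) of its amplitude.
          k = np.argmax(amp)
          fn, half = freqs[k], amp[k] / np.sqrt(2.0)
          below_lo = np.where(amp[:k] <= half)[0]
          below_hi = np.where(amp[k:] <= half)[0]
          if len(below_lo) == 0 or len(below_hi) == 0:
              raise ValueError("half-power points not bracketed by the data")
          f1, f2 = freqs[below_lo[-1]], freqs[k + below_hi[0]]
          return (f2 - f1) / (2.0 * fn)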

  5. Effect of drying and co-matrix addition on the yield and quality of supercritical CO₂ extracted pumpkin (Cucurbita moschata Duch.) oil.

    Science.gov (United States)

    Durante, Miriana; Lenucci, Marcello S; D'Amico, Leone; Piro, Gabriella; Mita, Giovanni

    2014-04-01

    In this work a process for obtaining high vitamin E and carotenoid yields by supercritical carbon dioxide (SC-CO₂) extraction from pumpkin (Cucurbita moschata Duch.) is described. The results show that the use of a vacuum oven-dried [residual moisture ~8%] and milled (70 mesh sieve) pumpkin flesh matrix increased the SC-CO₂ extraction yields of total vitamin E and carotenoids by ~12.0- and ~8.5-fold, respectively, with respect to a freeze-dried and milled flesh matrix. The addition of milled (35 mesh) pumpkin seeds as co-matrix (1:1, w/w) allowed a further ~1.6-fold increase in carotenoid yield, in addition to a valuable enrichment of the extracted oil in vitamin E (274 mg/100 g oil) and polyunsaturated fatty acids. These findings encourage further studies to scale up the process for possible industrial production of high-quality bioactive ingredients from pumpkin, useful in functional food or cosmeceutical formulations.

  6. Effect of the addition of soy lecithin and Yucca schidigera extract on the properties of gelatin and glycerol based biodegradable films

    Directory of Open Access Journals (Sweden)

    Tatiana P. Dias

    2013-01-01

    Full Text Available Gelatin-based films containing soy lecithin or Yucca schidigera extract and glycerol as plasticizer were produced by casting and characterized for their mechanical properties, water vapor permeability (WVP), water solubility, opacity and morphology. The addition of glycerol reduced the tensile strength, with a difference of ~68% between the values for the minimum and maximum concentrations evaluated, both for the plasticizer and the surfactant. Elongation values reached 52% and 40% for films containing yucca extract and lecithin, respectively, when higher amounts of plasticizer and surfactant were added. Lower values of WVP were obtained when the intermediate concentration of glycerol (20 g plasticizer/100 g protein) was used, reaching 0.14 and 0.15 g mm/m² h kPa, respectively, for films containing yucca extract and lecithin. The solubility was not affected by adding plasticizer and/or surfactants. The morphologies of the inner sections of the films, regardless of the type of surfactant used, were compact, without pores or phase separation, indicating efficient incorporation of the compounds added to the protein matrix.

  7. Extracting information on urban impervious surface from GF-1 data in Tianjin City of China

    Science.gov (United States)

    Li, Bin; Meng, Qingyan; Wu, Jun; Gu, Xingfa

    2015-09-01

    The urban impervious surface, an important part of the city system, has a great influence on the ecological environment in urban areas, and its coverage is an important indicator for the evaluation of urbanization. Remote sensing data have prominent features, being information-rich and accurate, and can provide a data basis for large-area extraction of the impervious surface. The GF-1 satellite is the first satellite of the high-resolution earth observation system of China. Using domestically produced GF-1 satellite remote sensing imagery as the data source, this research combined the V-I-S model and a linear spectral mixture model to estimate the impervious surface of Tianjin City, and then employed remote sensing imagery with higher resolution to test the precision of the estimated results. The results show not only that this method achieves high precision, but also that Tianjin City has wide impervious surface coverage in general, with an especially high coverage rate both in the center and in the coastal areas, where it reaches seventy percent. City managers can use these data to guide city management and city planning.
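
    A minimal sketch of the linear spectral mixture step (solving each pixel as a nonnegative combination of V-I-S endmembers, then normalizing the abundances) could look as follows; Python with numpy/scipy is assumed and the endmember spectra are hypothetical:

      import numpy as np
      from scipy.optimize import nnls

      def unmix(pixel, endmembers):
          # Solve pixel ~ endmembers @ f with f >= 0, then normalize the
          # fractions to sum to one (abundance constraints of the V-I-S model).
          f, _ = nnls(endmembers, pixel)
          s = f.sum()
          return f / s if s > 0 else f

      # Hypothetical 4-band endmember spectra; columns: vegetation, impervious, soil
      E = np.array([[0.05, 0.20, 0.30],
                    [0.08, 0.22, 0.35],
                    [0.45, 0.25, 0.40],
                    [0.30, 0.28, 0.45]])
      pixel = 0.3 * E[:, 0] + 0.6 * E[:, 1] + 0.1 * E[:, 2]  # synthetic mixed pixel
      print(unmix(pixel, E))  # impervious fraction recovered near 0.6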

  8. Extracting conformational structure information of benzene molecules via laser-induced electron diffraction

    Directory of Open Access Journals (Sweden)

    Yuta Ito

    2016-05-01

    Full Text Available We have measured the angular distributions of high-energy photoelectrons of benzene molecules generated by intense infrared femtosecond laser pulses. These electrons arise from elastic collisions between benzene ions and previously tunnel-ionized electrons that have been driven back by the laser field. Theory shows that laser-free elastic differential cross sections (DCSs) can be extracted from these photoelectrons, and the DCS can be used to retrieve the bond lengths of gas-phase molecules, similar to the conventional electron diffraction method. From our experimental results, we have obtained the C-C and C-H bond lengths of benzene with a spatial resolution of about 10 pm. Our results demonstrate that laser-induced electron diffraction (LIED) experiments can already be carried out with present-day ultrafast intense lasers. Looking ahead, with aligned or oriented molecules, more complete spatial information on the molecule can be obtained from LIED, and by applying LIED to probe photo-excited molecules, a “molecular movie” of the dynamic system may be created with sub-Ångström spatial and few-ten-femtosecond temporal resolutions.

  9. The influence of Citrosept addition to drinking water and Scutellaria baicalensis root extract on the content of selected mineral elements in the blood plasma of turkey hens.

    Science.gov (United States)

    Rusinek-Prystupa, Elżbieta; Lechowski, Jerzy; Zukiewicz-Sobczak, Wioletta; Sobczak, Paweł; Zawiślak, Kazimierz

    2014-01-01

    The aim of this research was to determine the influence of Citrosept preparation and Scutellaria baicalensis root extract, administered per os to growing turkey hens in three different dosages, on the content of selected mineral elements in the blood plasma of slaughter turkey hens. An attempt was also made to identify the most effective dosage of the applied preparations, i.e. the one producing the greatest increase in the levels of the examined macro- and microelements in the birds' blood. The experiment was conducted on 315 turkey hens randomly divided into seven groups, each consisting of 45 birds. Group K constituted the control group, without addition of the above-mentioned preparations. For turkey hens in groups II-IV, Citrosept preparation was added to drinking water in the following dosages: Group II, 0.011 ml/kg of bm; Group III, 0.021 ml/kg of bm; Group IV, 0.042 ml/kg of bm. For birds in groups V-VII, Scutellaria baicalensis root extract was added to drinking water in the following dosages: Group V, 0.009 ml/kg of bm; Group VI, 0.018 ml/kg of bm; Group VII, 0.036 ml/kg of bm. The levels of Na, K, Ca, Mg, Cu, Zn, and Fe were determined in the examined plant extracts and in the blood plasma of the birds. The use of the examined extracts influenced the levels of all tested elements in the blood plasma of slaughter turkey hens: an upward tendency was recorded for calcium and magnesium, and a downward tendency for sodium, potassium, copper, zinc, and iron, relative to the results in the control group.

  10. Use of dehydrated waste grape skins as a natural additive for producing rosé wines: study of extraction conditions and evolution.

    Science.gov (United States)

    Pedroza, Miguel Angel; Carmona, Manuel; Salinas, Maria Rosario; Zalacain, Amaya

    2011-10-26

    Dehydrated waste grape skins from the juice industry were used as an additive to produce rosé wines. Maceration time, particle size, dosage, alcoholic content, and maceration temperature were first studied in model wine solutions using two different dehydrated waste grape skins. Full factorial experimental designs, together with Factor Analysis and Multifactor ANOVA, allowed for the evaluation of each parameter according to the composition of color and of phenolic and aroma compounds. Longer maceration times favored the extraction of anthocyanins, while phenolic compound release was influenced by dosage, independently of the other factors studied. Rosé wines were produced by direct addition of dehydrated waste grape skins, according to the selected parameters, in two different white wines, achieving characteristics equivalent to commercial rosé wines. After three months of storage, the rosé wine composition was stable.

  11. Feature Extraction and Selection Scheme for Intelligent Engine Fault Diagnosis Based on 2DNMF, Mutual Information, and NSGA-II

    Directory of Open Access Journals (Sweden)

    Peng-yuan Liu

    2016-01-01

    Full Text Available A novel feature extraction and selection scheme is presented for intelligent engine fault diagnosis by utilizing two-dimensional nonnegative matrix factorization (2DNMF), mutual information, and the nondominated sorting genetic algorithm II (NSGA-II). Experiments are conducted on an engine test rig, in which eight different engine operating conditions, including one normal condition and seven fault conditions, are simulated to evaluate the presented feature extraction and selection scheme. In the feature extraction phase, the S transform technique is first utilized to convert the engine vibration signals to the time-frequency domain, which can provide richer information on engine operating conditions. Then a novel feature extraction technique, two-dimensional nonnegative matrix factorization, is employed for characterizing the time-frequency representations. In the feature selection phase, a hybrid filter-and-wrapper scheme based on mutual information and NSGA-II is utilized to acquire a compact feature subset for engine fault diagnosis. Experimental results with three different classifiers demonstrate that the proposed feature extraction and selection scheme achieves a very satisfying classification performance with fewer features for engine fault diagnosis.
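
    The filter stage of such a hybrid scheme can be sketched with a mutual-information ranking of the extracted features; the wrapper stage (NSGA-II in this record) is omitted. Python with numpy/scikit-learn is assumed and the data are synthetic placeholders:

      import numpy as np
      from sklearn.feature_selection import mutual_info_classif

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 50))     # placeholder 2DNMF coefficient matrix
      y = rng.integers(0, 8, size=200)   # 8 operating conditions (1 normal + 7 faults)

      mi = mutual_info_classif(X, y, random_state=0)
      top = np.argsort(mi)[::-1][:10]    # filter stage: keep the most informative features
      print(top)
      # A wrapper stage (NSGA-II in the paper) would then search subsets of `top`,
      # trading classification error against subset size.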

  12. You had me at "Hello": Rapid extraction of dialect information from spoken words.

    Science.gov (United States)

    Scharinger, Mathias; Monahan, Philip J; Idsardi, William J

    2011-06-15

    Research on the neuronal underpinnings of speaker identity recognition has identified voice-selective areas in the human brain with evolutionary homologues in non-human primates who have comparable areas for processing species-specific calls. Most studies have focused on estimating the extent and location of these areas. In contrast, relatively few experiments have investigated the time-course of speaker identity, and in particular, dialect processing and identification by electro- or neuromagnetic means. We show here that dialect extraction occurs speaker-independently, pre-attentively and categorically. We used Standard American English and African-American English exemplars of 'Hello' in a magnetoencephalographic (MEG) Mismatch Negativity (MMN) experiment. The MMN as an automatic change detection response of the brain reflected dialect differences that were not entirely reducible to acoustic differences between the pronunciations of 'Hello'. Source analyses of the M100, an auditory evoked response to the vowels suggested additional processing in voice-selective areas whenever a dialect change was detected. These findings are not only relevant for the cognitive neuroscience of language, but also for the social sciences concerned with dialect and race perception.

  13. Extracting structural information from the polarization dependence of one- and two-dimensional sum frequency generation spectra.

    Science.gov (United States)

    Laaser, Jennifer E; Zanni, Martin T

    2013-07-25

    We present ways in which pulse sequences and polarizations can be used to extract structural information from one- and two-dimensional vibrational sum frequency generation (2D SFG) spectra. We derive analytic expressions for the polarization dependence of systems containing coupled vibrational modes, and we present simulated spectra to identify the features of different molecular geometries. We discuss several useful polarization combinations for suppressing strong diagonal peaks and emphasizing weaker cross-peaks. We investigate unique capabilities of 2D SFG spectra for obtaining structural information about SFG-inactive modes and for identifying coupled achiral chromophores. This work builds on techniques that have been developed for extracting structural information from 2D IR spectra. This paper discusses how to utilize these concepts in 2D SFG experiments to probe multioscillator systems at interfaces. The sample code for calculating the polarization dependence of 1D and 2D SFG spectra is provided in the Supporting Information.

  14. In-shore ship extraction from HR optical remote sensing image via salience structure and GIS information

    Science.gov (United States)

    Ren, Xiaoyuan; Jiang, Libing; Tang, Xiao-an

    2015-12-01

    In order to solve the problem of in-shore ship extraction from remote sensing images, a novel method for in-shore ship extraction from high-resolution (HR) optical remote sensing images is proposed based on salience structure features and GIS information. Firstly, the berth ROI is located in the image with the aid of prior GIS auxiliary information. Secondly, the salient corner features at the ship bow are precisely extracted from the berth ROI. Finally, a recursive algorithm based on the symmetric geometry of the ship target is conducted to separate multiple docked in-shore targets into single in-shore ships. The results of the experiments show that the proposed method can detect the majority of large- and medium-scale in-shore ships in optical remote sensing images, in both the single and the multiple adjacently docked in-shore ship cases.

  15. Adiponectin provides additional information to conventional cardiovascular risk factors for assessing the risk of atherosclerosis in both genders.

    Directory of Open Access Journals (Sweden)

    Jin-Ha Yoon

    Full Text Available BACKGROUND: This study evaluated the relation between adiponectin and atherosclerosis in both genders, and investigated whether adiponectin provides useful additional information for assessing the risk of atherosclerosis. METHODS: We measured serum adiponectin levels and other cardiovascular risk factors in 1033 subjects (454 men, 579 women) from the Korean Genomic Rural Cohort study. Carotid intima-media thickness (CIMT) was used as a measure of atherosclerosis. Odds ratios (ORs) with 95% confidence intervals (95% CI) were calculated using multiple logistic regression, and receiver operating characteristic curves (ROC), the category-free net reclassification improvement (NRI) and the integrated discrimination improvement (IDI) were calculated. RESULTS: After adjustment for conventional cardiovascular risk factors, such as age, waist circumference, smoking history, low-density and high-density lipoprotein cholesterol, triglycerides, systolic blood pressure and insulin resistance, the ORs (95% CI) of the third tertile adiponectin group were 0.42 (0.25-0.72) in men and 0.47 (0.29-0.75) in women. The area under the curve (AUC) on the ROC analysis increased significantly by 0.025 in men and 0.022 in women when adiponectin was added to the logistic model of conventional cardiovascular risk factors (AUC in men: 0.655 to 0.680, p = 0.038; AUC in women: 0.654 to 0.676, p = 0.041). The NRI was 0.32 (95% CI: 0.13-0.50, p<0.001), and the IDI was 0.03 (95% CI: 0.01-0.04, p<0.001) for men. For women, the category-free NRI was 0.18 (95% CI: 0.02-0.34, p = 0.031) and the IDI was 0.003 (95% CI: -0.002-0.008, p = 0.189). CONCLUSION: Adiponectin and atherosclerosis were significantly related in both genders, and these relationships were independent of conventional cardiovascular risk factors. Furthermore, adiponectin provided additional information to conventional cardiovascular risk factors regarding the risk of atherosclerosis.

  16. Urban vegetation cover extraction from hyperspectral imagery and geographic information system spatial analysis techniques: case of Athens, Greece

    Science.gov (United States)

    Petropoulos, George P.; Kalivas, Dionissios P.; Georgopoulou, Iro A.; Srivastava, Prashant K.

    2015-01-01

    The present study aimed at evaluating the performance of two different pixel-based classifiers [spectral angle mapper (SAM) and support vector machines (SVMs)] in discriminating different land-cover classes in a typical urban setting, focusing particularly on urban vegetation cover, by utilizing hyperspectral (EO-1 Hyperion) data. As a case study, the city of Athens, Greece, was used. Validation of urban vegetation predictions was based on error matrix statistics. Additionally, the final urban vegetation cover maps were compared at the municipality level against reference urban vegetation cover estimates derived from the digitization of very high-resolution imagery. To ensure consistency and comparability of the results, the same training and validation point datasets were used to compare the different classifiers. The results showed that SVMs outperformed SAM in terms of both classification and urban vegetation cover mapping, with an overall accuracy of 86.53% and a Kappa coefficient of 0.823, whereas for the SAM classification the accuracy statistics obtained were 75.13% and 0.673, respectively. Our results confirmed the ability of both techniques, when combined with Hyperion imagery, to extract urban vegetation cover for the case of a densely populated city with complex urban features, such as Athens. Our findings offer significant information at the local scale as regards the presence of open green spaces in the urban environment of Athens. Such information is vital for successful infrastructure development, urban landscape planning, and improvement of the urban environment. More widely, this study also contributes significantly toward an objective assessment of Hyperion in detecting and mapping urban vegetation cover.
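
    For reference, the spectral angle mapper at the core of this comparison can be written compactly; the following Python sketch (numpy assumed; refs would be class mean spectra taken from the training pixels) assigns each pixel to the class with the smallest spectral angle:

      import numpy as np

      def sam_classify(img, refs):
          # img: H x W x B hyperspectral cube; refs: C x B reference spectra.
          # Each pixel gets the class whose reference spectrum makes the
          # smallest spectral angle with the pixel spectrum.
          H, W, B = img.shape
          X = img.reshape(-1, B).astype(float)
          Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
          Rn = refs / (np.linalg.norm(refs, axis=1, keepdims=True) + 1e-12)
          angles = np.arccos(np.clip(Xn @ Rn.T, -1.0, 1.0))  # (H*W) x C
          return angles.argmin(axis=1).reshape(H, W)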

  17. 41 CFR 102-75.140 - In addition to the title report, and all necessary environmental information and certifications...

    Science.gov (United States)

    2010-07-01

    ... report, and all necessary environmental information and certifications, what information must an Executive agency transmit with the Report of Excess Real Property (Standard Form 118)? 102-75.140 Section...

  18. Extracting key information from historical data to quantify the transmission dynamics of smallpox

    Directory of Open Access Journals (Sweden)

    Brockmann Stefan O

    2008-08-01

    Full Text Available Abstract. Background: Quantification of the transmission dynamics of smallpox is crucial for optimizing intervention strategies in the event of a bioterrorist attack. This article reviews basic methods and findings in mathematical and statistical studies of smallpox which estimate key transmission parameters from historical data. Main findings: First, critically important aspects of extracting key information from historical data are briefly summarized. We mention different sources of heterogeneity and potential pitfalls in utilizing historical records. Second, we discuss how smallpox spreads in the absence of interventions and how the optimal timing of quarantine and isolation measures can be determined. Case studies demonstrate the following. (1) The upper confidence limit of the 99th percentile of the incubation period is 22.2 days, suggesting that quarantine should last 23 days. (2) The highest frequency (61.8%) of secondary transmissions occurs 3–5 days after onset of fever, so that infected individuals should be isolated before the appearance of rash. (3) The U-shaped age-specific case fatality implies a vulnerability of infants and the elderly among non-immune individuals. Estimates of the transmission potential are subsequently reviewed, followed by an assessment of vaccination effects and of the expected effectiveness of interventions. Conclusion: Current debates on bioterrorism preparedness indicate that public health decision making must account for the complex interplay and balance between vaccination strategies and other public health measures (e.g. case isolation and contact tracing), taking into account the frequency of adverse events to vaccination. In this review, we summarize what has already been clarified and point out the need to analyze previous smallpox outbreaks systematically.

  19. The implementation of high fermentative 2,3-butanediol production from xylose by simultaneous additions of yeast extract, Na2EDTA, and acetic acid.

    Science.gov (United States)

    Wang, Xiao-Xiong; Hu, Hong-Ying; Liu, De-Hua; Song, Yuan-Quan

    2016-01-25

    The effective use of xylose may significantly enhance the feasibility of using lignocellulosic hydrolysate to produce 2,3-butanediol (2,3-BD). Previous difficulties in 2,3-BD production include incomplete conversion of high-concentration xylose and slow fermentation rates. This study investigated the effects of yeast extract, ethylenediaminetetraacetic acid disodium salt (Na2EDTA), and acetic acid on 2,3-BD production from xylose. The central composite design approach was used to optimize the concentrations of these components. It was found that the simultaneous addition of yeast extract, Na2EDTA, and acetic acid could significantly improve 2,3-BD production. The optimal concentrations of yeast extract, Na2EDTA, and acetic acid were 35.2, 1.2, and 4.5 g/L, respectively. The 2,3-BD concentration in the optimized medium reached 39.7 g/L after 48 hours of shake-flask fermentation, the highest value ever reported in such a short period. The xylose utilization ratio and the 2,3-BD concentration increased to 99.0% and 42.7 g/L, respectively, after 48 hours of stirred batch fermentation. Furthermore, the 2,3-BD yield was 0.475 g/g, 95.0% of the theoretical maximum. As the major components of lignocellulosic hydrolysate are glucose, xylose, and acetic acid, these results indicate the possibility of directly using the hydrolysate to produce 2,3-BD effectively.

  20. Determination of Polymer Additives-Antioxidants, Ultraviolet Stabilizers, Plasticizers and Photoinitiators in Plastic Food Package by Accelerated Solvent Extraction Coupled with High-Performance Liquid Chromatography.

    Science.gov (United States)

    Li, Bo; Wang, Zhi-Wei; Lin, Qin-Bao; Hu, Chang-Ying; Su, Qi-Zhi; Wu, Yu-Mei

    2015-07-01

    An analytical method for the quantitative determination of 4 antioxidants, 9 ultraviolet (UV) stabilizers, 12 phthalate plasticizers and 2 photoinitiators in plastic food package using accelerated solvent extraction (ASE) coupled with high-performance liquid chromatography with photodiode array detection (HPLC-PDA) has been developed. Parameters affecting the efficiency of the process, such as extraction and chromatographic conditions, were studied in order to determine operating conditions. The ASE-HPLC method showed good linearity with good correlation coefficients (R ≥ 0.9833). The limits of detection and quantification were between 0.03 and 0.30 µg mL-1 and between 0.10 and 1.00 µg mL-1 for the 27 analytes. Average spiked recoveries for most analytes in samples were >70.4% at the 10, 20 and 40 µg g-1 spiked levels, except for UV-9 and Irganox 1010 (58.6 and 64.0%, respectively, spiked at 10 µg g-1); the relative standard deviations were in the range from 0.4 to 15.4%. The methodology is proposed for the analysis of 27 polymer additives in plastic food package.

  1. Disability assessment interview : the role of detailed information on functioning in addition to medical history-taking

    NARCIS (Netherlands)

    Spanjer, J.; Krol, B.; Popping, R.; Groothoff, J.W.; Brouwer, Sandra

    Objective: To investigate whether the provision of detailed information on participation and activity limitations, compared with medical information alone, influences the assessment of work limitations by physicians. Methods: Three groups each of 9 insurance physicians used written interview reports

  2. Effect of addition of olive leaves before fruits extraction process to some monovarietal Tunisian extra-virgin olive oils using chemometric analysis.

    Science.gov (United States)

    Sonda, Ammar; Akram, Zribi; Boutheina, Gargouri; Guido, Flamini; Mohamed, Bouaziz

    2014-01-08

    The analysis of the effect of cultivar and of olive leaf addition before extraction on the different analytical values revealed significant differences (p < 0.05). Twenty-three volatile compounds were characterized, representing 86.1-99.2% of the total volatiles. The Chétoui cultivar had the highest amount of (E)-2-hexenal, followed by the Chemlali cultivar, whereas (E)-2-hexen-1-ol was the major constituent of the Zalmati and crossbred Chemlali by Zalmati cultivars. Sensory analysis showed that Chemlali and Chétoui Zarzis oils possessed high fruity, bitter, and pungent notes, whereas the Zalmati and crossbred Chemlali by Zalmati oils had a 'green' taste among their attributes. Indeed, the taste panel found an improvement of the oil quality when an amount of olive leaves (3%) was added to the olive fruits.

  3. An analytical framework for extracting hydrological information from time series of small reservoirs in a semi-arid region

    Science.gov (United States)

    Annor, Frank; van de Giesen, Nick; Bogaard, Thom; Eilander, Dirk

    2013-04-01

    small reservoirs in the Upper East Region of Ghana. Reservoirs without obvious large seepage losses (field survey) were selected. To verify this, stable water isotope samples were collected from groundwater upstream and downstream of each reservoir. By looking at possible enrichment of downstream groundwater, a good estimate of seepage can be made, in addition to estimates of evaporation. We estimated the evaporative losses and compared them with field measurements using eddy correlation measurements. Lastly, we determined the cumulative surface runoff curves for the small reservoirs. We will present this analytical framework for extracting hydrological information from time series of small reservoirs and show the first results for our study region of northern Ghana.

  4. A COMPARATIVE ANALYSIS OF WEB INFORMATION EXTRACTION TECHNIQUES DEEP LEARNING vs. NAÏVE BAYES vs. BACK PROPAGATION NEURAL NETWORKS IN WEB DOCUMENT EXTRACTION

    Directory of Open Access Journals (Sweden)

    J. Sharmila

    2016-01-01

    Full Text Available Web mining research is becoming more important these days because a large amount of information is managed through the web, and web usage is expanding in an uncontrolled manner. A particular framework is required for handling such an extensive amount of data in the web space. Web mining is commonly divided into three major areas: web content mining, web usage mining and web structure mining. Tak-Lam Wong proposed a web content mining methodology with the aid of Bayesian Networks (BN), learning to separate web data and discover characteristics based on the Bayesian approach. Motivated by that investigation, we propose a web content mining methodology based on a Deep Learning algorithm. The Deep Learning algorithm offers an advantage over BN in that BN does not involve the kind of learning architecture planning found in the proposed system. The main objective of this investigation is web document extraction utilizing different classification algorithms and their analysis. This work extracts data from web URLs and compares three classification algorithms: a Deep Learning algorithm, a Naive Bayes algorithm and a BPNN algorithm. Deep Learning is a powerful set of techniques for learning in neural networks, applied in areas such as computer vision, speech recognition, natural language processing and biometrics; it is a simple classification technique and requires less time for classification. Naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong independence assumptions between the features. The BPNN algorithm is then used for classification. Initially, the training and testing datasets contain many URLs, from which the content is extracted.

  5. Network and Ensemble Enabled Entity Extraction in Informal Text (NEEEEIT) final report.

    Energy Technology Data Exchange (ETDEWEB)

    Kegelmeyer, W. Philip; Shead, Timothy M. [Sandia National Laboratories, Albuquerque, NM]; Dunlavy, Daniel M. [Sandia National Laboratories, Albuquerque, NM]

    2013-09-01

    This SAND report summarizes the activities and outcomes of the Network and Ensemble Enabled Entity Extraction in Informal Text (NEEEEIT) LDRD project, which addressed improving the accuracy of conditional random fields for named entity recognition through the use of ensemble methods. Conditional random fields (CRFs) are powerful, flexible probabilistic graphical models often used in supervised machine learning prediction tasks associated with sequence data. Specifically, they are currently the best known option for named entity recognition (NER) in text. NER is the process of labeling words in sentences with semantic identifiers such as "person", "date", or "organization". Ensembles are a powerful statistical inference meta-method that can make most supervised machine learning methods more accurate, faster, or both. Ensemble methods are normally best suited to "unstable" classification methods with high variance error. CRFs applied to NER are very stable classifiers, and as such, would initially seem to be resistant to the benefits of ensembles. The NEEEEIT project nonetheless worked out how to generalize ensemble methods to CRFs, demonstrated that accuracy can indeed be improved by proper use of ensemble techniques, and generated a new CRF code, "pyCrust", and a surrounding application environment, "NEEEEIT", which implement those improvements. The summary practical advice that results from this work, then, is: when making use of CRFs for label prediction tasks in machine learning, use the pyCrust CRF base classifier with NEEEEIT's bagging ensemble implementation. (If those codes are not available, then de-stabilize your CRF code via every means available, and generate the bagged training sets by hand.) If you have ample pre-processing computational time, do "forward feature selection" to find and remove counter-productive feature classes. Conversely
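
    pyCrust and NEEEEIT themselves are not publicly documented here, but the bagging strategy the report describes can be sketched with the third-party sklearn_crfsuite package standing in for the CRF base classifier; the function below trains each CRF on a bootstrap resample of the training sequences and majority-votes the per-token labels (a sketch under those assumptions, not the report's code):

      import random
      from collections import Counter
      import sklearn_crfsuite  # assumed available; pyCrust itself is not public

      def bagged_crf_predict(X_train, y_train, X_test, n_models=10, seed=0):
          # Train each CRF on a bootstrap resample of the training sequences,
          # then majority-vote the per-token label predictions.
          rng = random.Random(seed)
          votes = [[Counter() for _ in sent] for sent in X_test]
          for _ in range(n_models):
              idx = [rng.randrange(len(X_train)) for _ in X_train]  # bootstrap sample
              crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
              crf.fit([X_train[i] for i in idx], [y_train[i] for i in idx])
              for sent, pred in zip(votes, crf.predict(X_test)):
                  for counter, label in zip(sent, pred):
                      counter[label] += 1
          return [[c.most_common(1)[0][0] for c in sent] for sent in votes]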

  6. Application of dispersive solid-phase extraction and ultra-fast liquid chromatography-tandem quadrupole mass spectrometry in food additive residue analysis of red wine.

    Science.gov (United States)

    Chen, Xiao-Hong; Zhao, Yong-Gang; Shen, Hao-Yu; Jin, Mi-Cong

    2012-11-09

    A novel and effective dispersive solid-phase extraction (dSPE) procedure with rapid magnetic separation using ethylenediamine-functionalized magnetic polymer as an adsorbent was developed. The new procedure had excellent clean-up ability for the selective removal of the matrix in red wine. An accurate, simple, and rapid analytical method using ultra-fast liquid chromatography-tandem quadrupole mass spectrometry (UFLC-MS/MS) for the simultaneous determination of nine food additives (i.e., acesulfame, saccharin, sodium cyclamate, aspartame, benzoic acid, sorbic acid, stevioside, dehydroacetic acid, and neotame) in red wine was also used and validated. Recoveries ranging from 78.5% to 99.2% with relative standard deviations ranging from 0.46% to 6.3% were obtained using the new method. All target compounds showed good linearities in the tested range with correlation coefficients (r) higher than 0.9993. The limits of quantification for the nine food additives were between 0.10 μg/L and 50.0 μg/L. The proposed dSPE-UFLC-MS/MS method was successfully applied in the food-safety risk monitoring of real red wine in Zhejiang Province, China.

  7. Inexperienced clinicians can extract pathoanatomic information from MRI narrative reports with high reproducibility for use in research/quality assurance

    DEFF Research Database (Denmark)

    Kent, Peter; Briggs, Andrew M; Albert, Hanne Birgit;

    2011-01-01

    Background: Although reproducibility in reading MRI images amongst radiologists and clinicians has been studied previously, no studies have examined the reproducibility of inexperienced clinicians in extracting pathoanatomic information from magnetic resonance imaging (MRI) narrative reports… pathoanatomic information from radiologist-generated MRI narrative reports. Methods: Twenty MRI narrative reports were randomly extracted from an institutional database. A group of three physiotherapy students independently reviewed the reports and coded the presence of 14 common pathoanatomic findings using a categorical electronic coding matrix. Decision rules were developed after initial coding in an effort to resolve ambiguities in narrative reports. This process was repeated a further three times using separate samples of 20 MRI reports until no further ambiguities were identified (total n=80). Reproducibility

  8. Extracting Information from the Atom-Laser Wave Function Using Interferometric Measurement with a Laser Standing-Wave Grating

    Institute of Scientific and Technical Information of China (English)

    刘正东; 武强; 曾亮; 林宇; 朱诗尧

    2001-01-01

    The reconstruction of the atom-laser wave function is performed using an interferometric measurement with a standing-wave grating, and the results of this scheme are studied. The relations between the measurement data and the atomic wave function are also presented. This scheme is readily applicable and effectively avoids the initial random phase problem of methods that employ a running laser wave. The information encoded in the atom-laser wave is thereby extracted.

  9. GRAS Flavoring Substances 25. The 25th publication by the Expert Panel of the Flavor and Extract Manufacturers Association provides an update on recent progress in the consideration of flavoring ingredients generally recognized as safe under the Food Additives Amendment

    NARCIS (Netherlands)

    Smith, R.L.; Waddell, W.J.; Cohen, S.M.; Fukushima, S.; Gooderham, N.J.; Hecht, S.S.; Marnett, L.J.; Porthogese, P.S.; Rietjens, I.; Adams, T.B.; Gavin, C.L.; McGowen, M.M.; Taylor, S.V.

    2011-01-01

    The 25th publication by the Expert Panel of the Flavor and Extract Manufacturers Association provides an update on recent progress in the consideration of flavoring ingredients generally recognized as safe under the Food Additives Amendment.

  10. INFORMATIONAL COMMUNICATION BETWEEN THE PARTICIPANTS OF A CONSTRUCTION PROJECT AS AN ADDITIONAL FACTOR IN EVALUATING THE ORGANIZATIONAL AND TECHNOLOGICAL CAPACITY

    Directory of Open Access Journals (Sweden)

    Lapidus Azariy Abramovich

    2016-06-01

    Full Text Available Current trends in the dynamic implementation of new materials, equipment, and organizational and technological solutions in construction lead to an increase in information volume, yet a great share of the information flow is never fixed in the final version of the design documentation and never reaches the construction site as instructions. This problem is most pressing for major construction projects, and the main reason for such a loss of information is inefficient data management. The article discusses the influence of the interaction between the participants of a construction project on the effectiveness of the use of information flows within the project, and justifies accounting for this influence in the organizational and technological evaluation of a building project, which is formed on the basis of information flows. The basic components of the information flow and the conditions for its effective transfer to the final recipient are given, and the concept of the role of a construction project participant is introduced as the social component of information flow transfer.

  11. A New Paradigm for the Extraction of Information:Application to Enhancement of Visual Information in a Medical Application

    Institute of Scientific and Technical Information of China (English)

    V. Courboulay; A. Histace; M. Ménard; C.Cavaro-Menard

    2004-01-01

    The noninvasive evaluation of cardiac function is of great interest for the diagnosis of cardiovascular diseases. Tagged cardiac MRI allows the measurement of anatomical and functional myocardial parameters. This protocol generates a dark grid which is deformed with the myocardium displacement on both Short-Axis (SA) and Long-Axis (LA) frames in a time sequence. Visual evaluation of the grid deformation allows the estimation of the displacement inside the myocardium. The work described in this paper aims to make the visual enhancement of the grid tags on cardiac MRI sequences robust and reliable, thanks to an informational formalism based on Extreme Physical Information (EPI). This approach leads to the development of an original diffusion pre-processing step allowing us to improve the robustness of the visual detection and the following of the grid of tags.

  12. Road Extraction and Network Building from Synthetic Aperture Radar Images using A-Priori Information

    NARCIS (Netherlands)

    Dekker, R.J.

    2008-01-01

    This paper describes a method for the extraction of road networks from radar images. Three phases can be distinguished: (1) detection of road lines, (2) network building, and (3) network fusion. The method has been demonstrated on two radar images, one urban and one rural. Despite the differences,

  14. Comparison of Qinzhou bay wetland landscape information extraction by three methods

    Directory of Open Access Journals (Sweden)

    X. Chang

    2014-04-01

    and OO is 219 km2, 193.70 km2 and 217.40 km2 respectively. The result indicates that SC ranks first, followed by the OO approach, and then the DT method, when used to extract Qinzhou Bay coastal wetland.

  15. A defocus-information-free autostereoscopic three-dimensional (3D) digital reconstruction method using direct extraction of disparity information (DEDI)

    Science.gov (United States)

    Li, Da; Cheung, Chifai; Zhao, Xing; Ren, Mingjun; Zhang, Juan; Zhou, Liqiu

    2016-10-01

    Autostereoscopy-based three-dimensional (3D) digital reconstruction has been widely applied in the fields of medical science, entertainment, design, industrial manufacture, precision measurement and many other areas. The 3D digital model of a target can be reconstructed from the series of two-dimensional (2D) information acquired by the autostereoscopic system, which consists of multiple lenses and can provide information on the target from multiple angles. This paper presents a generalized and precise autostereoscopic three-dimensional (3D) digital reconstruction method based on Direct Extraction of Disparity Information (DEDI), which can be applied to any autostereoscopic system and provides accurate 3D reconstruction results through an error-elimination process based on statistical analysis. The feasibility of the DEDI method has been successfully verified through a series of optical 3D digital reconstruction experiments on different autostereoscopic systems; the method is highly efficient in performing direct full 3D digital model construction based on a tomography-like operation upon every depth plane, with the exclusion of defocused information. With the absolutely focused information processed by the DEDI method, the 3D digital model of the target can be directly and precisely formed along the axial direction with the depth information.

  16. Extracting Information about the Initial State from the Black Hole Radiation.

    Science.gov (United States)

    Lochan, Kinjalk; Padmanabhan, T

    2016-02-05

    The crux of the black hole information paradox is related to the fact that the complete information about the initial state of a quantum field in a collapsing spacetime is not available to future asymptotic observers, belying the expectations from a unitary quantum theory. We study the imprints of the initial quantum state contained in a specific class of distortions of the black hole radiation and identify the classes of in states that can be partially or fully reconstructed from the information contained within. Even for the general in state, we can uncover some specific information. These results suggest that a classical collapse scenario ignores this richness of information in the resulting spectrum and a consistent quantum treatment of the entire collapse process might allow us to retrieve much more information from the spectrum of the final radiation.

  17. Towards Evidence-based Precision Medicine: Extracting Population Information from Biomedical Text using Binary Classifiers and Syntactic Patterns

    Science.gov (United States)

    Raja, Kalpana; Dasot, Naman; Goyal, Pawan; Jonnalagadda, Siddhartha R

    2016-01-01

    Precision Medicine is an emerging approach for prevention and treatment of disease that considers individual variability in genes, environment, and lifestyle for each person. The dissemination of individualized evidence by automatically identifying population information in literature is key for evidence-based precision medicine at the point-of-care. We propose a hybrid approach using natural language processing techniques to automatically extract the population information from biomedical literature. Our approach first implements a binary classifier to classify sentences with or without population information. A rule-based system based on syntactic-tree regular expressions is then applied to sentences containing population information to extract the population named entities. The proposed two-stage approach achieved an F-score of 0.81 using a MaxEnt classifier and the rule-based system, and an F-score of 0.87 using a Naïve Bayes classifier and the rule-based system, and performed relatively well compared to many existing systems. The system and evaluation dataset are being released as open source. PMID:27570671
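
    A toy version of the two-stage pipeline (a Naïve Bayes sentence classifier followed by pattern extraction) can be sketched as follows; Python with scikit-learn is assumed, the training sentences are invented, and a flat surface regex stands in for the paper's syntactic-tree regular expressions:

      import re
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      # Stage 1: binary sentence classifier (invented toy training data)
      sents = ["We enrolled 120 patients with type 2 diabetes.",
               "The assay was repeated three times.",
               "A cohort of 45 children was followed for two years.",
               "Samples were stored at -80 C."]
      labels = [1, 0, 1, 0]  # 1 = sentence carries population information
      clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB()).fit(sents, labels)

      # Stage 2: a simplified surface pattern standing in for syntactic-tree regexes
      POP = re.compile(r"\b(\d+)\s+(patients|children|subjects|adults|women|men)\b", re.I)
      for s in sents:
          if clf.predict([s])[0] == 1:
              print(POP.findall(s))  # e.g. [('120', 'patients')]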

  18. Advanced image collection, information extraction, and change detection in support of NN-20 broad area search and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Petrie, G.M.; Perry, E.M.; Kirkham, R.R.; Slator, D.E. [and others]

    1997-09-01

    This report describes the work performed at the Pacific Northwest National Laboratory (PNNL) for the U.S. Department of Energy's Office of Nonproliferation and National Security, Office of Research and Development (NN-20). The work supports the NN-20 Broad Area Search and Analysis, a program initiated by NN-20 to improve the detection and classification of undeclared weapons facilities. Ongoing PNNL research activities are described in three main components: image collection, information processing, and change analysis. The Multispectral Airborne Imaging System, which was developed to collect georeferenced imagery in the visible through infrared regions of the spectrum and flown on a light aircraft platform, will supply current land use conditions. The image information extraction software (dynamic clustering and end-member extraction) uses imagery, like the multispectral data collected by the PNNL multispectral system, to efficiently generate landcover information. The advanced change detection uses a priori (benchmark) information, current landcover conditions, and user-supplied rules to rank suspect areas by probable risk of undeclared facilities or proliferation activities. These components, both separately and combined, provide important tools for improving the detection of undeclared facilities.

  19. Study on extraction of crop information using time-series MODIS data in the Chao Phraya Basin of Thailand

    Science.gov (United States)

    Tingting, Lv; Chuang, Liu

    2010-03-01

    In order to acquire crop-related information in the Chao Phraya Basin, time-series MODIS data were used in this paper. Although the spatial resolution of MODIS data is not very high, they are still useful for detecting very large-scale phenomena, such as changes in seasonal vegetation patterns. After data processing, a general crop-related LULC (land use and land cover) map, a cropping intensity map and a cropping pattern map were produced. Analysis of these maps showed that the main land use type in the study area was farmland, most of which was dominated by rice. Rice fields were mostly concentrated in the flood plains, and double or triple rice-cropping systems were commonly employed in this area. Maize, cassava, sugarcane and other upland crops were mainly distributed in the high alluvial terraces. Because these areas often have water shortage problems, particularly in the dry season, and can support only one crop a year, the cropping intensity there was very low. However, some upland areas can be cultivated twice a year with crops that have short growing seasons. The crop information extracted from the MODIS data sets was assessed against CBERS data, statistical data and other sources. It was shown that the MODIS-derived crop information coincided well with the statistical data at the provincial level. At the same time, the crop information extracted from the MODIS data sets and from CBERS were compared with each other, which also showed similar spatial patterns.
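
    Cropping intensity from time-series MODIS data is typically derived by counting vegetation peaks in each pixel's annual NDVI profile; a minimal Python sketch under that assumption (numpy/scipy; the thresholds and the synthetic profile are illustrative, not the paper's parameters) is:

      import numpy as np
      from scipy.signal import find_peaks

      def cropping_intensity(ndvi, min_peak=0.4, min_separation=4):
          # Count growing seasons in one year of NDVI composites
          # (23 values for 16-day MODIS products) as distinct vegetation peaks.
          peaks, _ = find_peaks(ndvi, height=min_peak, distance=min_separation)
          return len(peaks)

      # Synthetic double-cropping profile: two green-up cycles in one year
      t = np.arange(23)
      ndvi = (0.2 + 0.5 * np.exp(-0.5 * ((t - 6) / 2.0) ** 2)
                  + 0.5 * np.exp(-0.5 * ((t - 17) / 2.0) ** 2))
      print(cropping_intensity(ndvi))  # -> 2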

  20. Keyword Extraction from a Document using Word Co-occurrence Statistical Information

    Science.gov (United States)

    Matsuo, Yutaka; Ishizuka, Mitsuru

    We present a new keyword extraction algorithm that applies to a single document without using a large corpus. Frequent terms are extracted first; then a set of co-occurrences between each term and the frequent terms (i.e., occurrences in the same sentences) is generated. The distribution of co-occurrences shows the importance of a term in the document as follows: if the probability distribution of co-occurrence between term a and the frequent terms is biased toward a particular subset of the frequent terms, then term a is likely to be a keyword. The degree of bias of the distribution is measured by the χ²-measure. We show that our algorithm performs well for indexing technical papers.
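
    A compact Python sketch of this idea is given below; it is simplified in that frequent terms are used directly (without the clustering step of the full algorithm) and tokenization is naive:

      import re
      from collections import Counter
      from itertools import combinations

      def chi2_keywords(text, top_frequent=10, top_k=5):
          # Score each term by the chi-square bias of its sentence co-occurrence
          # distribution against the set of frequent terms G.
          sentences = [re.findall(r"[a-z]+", s.lower()) for s in re.split(r"[.!?]", text)]
          tf = Counter(w for s in sentences for w in set(s))
          G = [w for w, _ in tf.most_common(top_frequent)]
          co = Counter()
          for s in sentences:
              for a, b in combinations(sorted(set(s)), 2):
                  co[(a, b)] += 1
                  co[(b, a)] += 1
          total = {g: sum(co[(w, g)] for w in tf) for g in G}   # co-occurrence mass of each g
          grand = sum(total.values()) or 1
          scores = {}
          for a in tf:
              n_a = sum(co[(a, g)] for g in G)                  # total co-occurrence count of a
              if n_a == 0:
                  continue
              scores[a] = sum((co[(a, g)] - n_a * total[g] / grand) ** 2
                              / (n_a * total[g] / grand)
                              for g in G if total[g] > 0)
          return sorted(scores, key=scores.get, reverse=True)[:top_k]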

  1. Rapid Training of Information Extraction with Local and Global Data Views

    Science.gov (United States)

    2012-05-01

  2. Analysis of a Probabilistic Model of Redundancy in Unsupervised Information Extraction

    Science.gov (United States)

    2010-08-25

    Example A in Figure 4 has strong evidence for a functional relation: 66 out of 70 extractions for was born in(Mozart, PLACE) share the same value. A. was born in(Mozart, PLACE): Salzburg(66), Germany(3), Vienna(1). B. was born in(John Adams, PLACE): Braintree(12), Quincy(10), Worcester(8). C. lived in(Mozart, PLACE): Vienna(20), Prague(13), Salzburg(5). Figure 4: functional relations such as example A have a markedly different distribution of y values than non-functional ones.

  3. Point Cloud Classification of Tesserae from Terrestrial Laser Data Combined with Dense Image Matching for Archaeological Information Extraction

    Science.gov (United States)

    Poux, F.; Neuville, R.; Billen, R.

    2017-08-01

    Reasoning from information extraction given by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud from both terrestrial laser scanning and dense image matching. Using 18 features, including the sensor's biased data, each tessera in the high-density point cloud from the 3D-captured complex mosaics of Germigny-des-Prés (France) is segmented via a colour multi-scale abstraction-based feature extraction using connectivity. A 2D surface and outline polygon of each tessera is generated by RANSAC plane extraction and convex hull fitting, as sketched below. Knowledge is then used to classify every tessera based on its size, surface, shape, material properties and its neighbours' classes. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.
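
    A minimal sketch of the plane-and-outline step named above (RANSAC plane extraction followed by convex hull fitting), assuming numpy/scipy and illustrative thresholds rather than the study's parameters:

      import numpy as np
      from scipy.spatial import ConvexHull

      def ransac_plane(points, n_iter=200, threshold=0.002, seed=0):
          # Fit a plane to a 3D point cluster by repeatedly sampling 3 points.
          rng = np.random.default_rng(seed)
          best_inliers, best_model = None, None
          for _ in range(n_iter):
              sample = points[rng.choice(len(points), 3, replace=False)]
              normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
              norm = np.linalg.norm(normal)
              if norm < 1e-12:              # degenerate (collinear) sample
                  continue
              normal /= norm
              dist = np.abs((points - sample[0]) @ normal)
              inliers = dist < threshold
              if best_inliers is None or inliers.sum() > best_inliers.sum():
                  best_inliers, best_model = inliers, (normal, sample[0])
          normal, origin = best_model
          return normal, origin, points[best_inliers]

      def tessera_outline(points):
          # 2D outline polygon of one tessera: project plane inliers, take the hull.
          normal, origin, inliers = ransac_plane(points)
          u = np.cross(normal, [0.0, 0.0, 1.0])
          if np.linalg.norm(u) < 1e-6:      # plane nearly horizontal: pick another axis
              u = np.cross(normal, [0.0, 1.0, 0.0])
          u /= np.linalg.norm(u)
          v = np.cross(normal, u)
          uv = (inliers - origin) @ np.stack([u, v], axis=1)
          return uv[ConvexHull(uv).vertices]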

  4. An extraction method of mountainous area settlement place information from GF-1 high resolution optical remote sensing image under semantic constraints

    Science.gov (United States)

    Guo, H., II

    2016-12-01

    Spatial distribution information on mountainous-area settlements is of great significance to earthquake emergency work because most of the key earthquake-hazardous areas of China are located in mountainous terrain. Remote sensing has the advantages of large coverage and low cost, and it is an important way to obtain the spatial distribution of settlements in mountainous areas. Fully considering geometric, spectral and texture information, most studies to date have applied object-oriented methods to extract settlement information; in this article, semantic constraints are added on top of the object-oriented method. The experimental data are one scene from the domestic high-resolution satellite GF-1, with a resolution of 2 meters. The processing consists of 3 steps: the first is pre-treatment, including orthorectification and image fusion; the second is object-oriented information extraction, including image segmentation and information extraction; the last is removing erroneous elements under semantic constraints. To formulate these semantic constraints, the distribution characteristics of mountainous-area settlements must be analyzed and the spatial-logic relations between settlements and other objects must be considered, as sketched below. The accuracy assessment shows that the extraction accuracy of the object-oriented method alone is 49% and rises to 86% after the semantic constraints are applied. As these figures show, extraction under semantic constraints can effectively improve the accuracy of mountainous-area settlement information extraction. The results show that it is feasible to extract settlement information from GF-1 imagery, demonstrating the practicality of using domestic high-resolution optical remote sensing imagery in earthquake emergency preparedness.
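
    The semantic-constraint step lends itself to a simple rule-based post-filter. The sketch below is illustrative only: the rule set, attribute names and thresholds are hypothetical, not those formulated in the study.

      def apply_semantic_constraints(candidates, max_slope_deg=25.0):
          """Drop candidate settlement segments that violate semantic rules."""
          kept = []
          for obj in candidates:  # each obj: dict of per-segment attributes
              # Hypothetical rule 1: settlements rarely sit on very steep terrain.
              if obj["mean_slope_deg"] > max_slope_deg:
                  continue
              # Hypothetical rule 2: segments inside water bodies are misclassifications.
              if obj.get("overlaps_water", False):
                  continue
              kept.append(obj)
          return kept

      candidates = [
          {"id": 1, "mean_slope_deg": 8.0, "overlaps_water": False},
          {"id": 2, "mean_slope_deg": 40.0, "overlaps_water": False},  # rejected: slope
          {"id": 3, "mean_slope_deg": 5.0, "overlaps_water": True},    # rejected: water
      ]
      print([o["id"] for o in apply_semantic_constraints(candidates)])  # -> [1]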

  5. Extracting principles for information management adaptability during crisis response: A dynamic capability view

    NARCIS (Netherlands)

    Bharosa, N.; Janssen, M.F.W.H.A.

    2010-01-01

    During crises, relief agency commanders have to make decisions in a complex and uncertain environment, requiring them to continuously adapt to unforeseen environmental changes. In the process of adaptation, the commanders depend on information management systems for information. Yet there are still

  6. Automated Methods to Extract Patient New Information from Clinical Notes in Electronic Health Record Systems

    Science.gov (United States)

    Zhang, Rui

    2013-01-01

    The widespread adoption of Electronic Health Record (EHR) has resulted in rapid text proliferation within clinical care. Clinicians' use of copying and pasting functions in EHR systems further compounds this by creating a large amount of redundant clinical information in clinical documents. A mixture of redundant information (especially outdated…

  8. Extracting depth information of 3-dimensional structures from a single-view X-ray Fourier-transform hologram.

    Science.gov (United States)

    Geilhufe, J; Tieg, C; Pfau, B; Günther, C M; Guehrs, E; Schaffert, S; Eisebitt, S

    2014-10-20

    We demonstrate how information about the three-dimensional structure of an object can be extracted from a single Fourier-transform X-ray hologram. In contrast to lens-based 3D imaging approaches that provide depth information of a specimen by utilizing several images from different angles or by adjusting the focus to different depths, our method capitalizes on the holographically encoded phase and amplitude information of the object's wavefield. It enables single-shot measurements of 3D objects at coherent X-ray sources. As the ratio of longitudinal to transverse resolution scales in proportion to the diameter of the reference-beam aperture over the X-ray wavelength, we expect the approach to be particularly useful in the extreme ultraviolet and soft-X-ray regime.
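
    In symbols, the stated scaling reads as follows, writing δ∥ and δ⊥ for the longitudinal and transverse resolutions, D for the reference-beam aperture diameter and λ for the X-ray wavelength (notation assumed here for illustration):

      \[ \frac{\delta_{\parallel}}{\delta_{\perp}} \;\propto\; \frac{D}{\lambda} \]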

  9. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction

    CERN Document Server

    Jonnalagadda, Siddhartha

    2011-01-01

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. The tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested its impact on the task of protein-protein interaction (PPI) extraction: it improved the f-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.

  10. Bounds on the entropy generated when timing information is extracted from microscopic systems

    CERN Document Server

    Janzing, Dominik; Beth, Thomas

    2003-01-01

    We consider Hamiltonian quantum systems with energy bandwidth ΔE and show that each measurement that determines the time up to an error Δt generates at least the entropy (ħ/(Δt ΔE))²/2. Our result describes quantitatively to what extent all timing information is quantum information in systems with limited energy. It provides a lower bound on the dissipated energy when timing information of microscopic systems is converted to classical information. This is relevant for low-power computation since it shows the amount of heat generated whenever a band-limited signal controls a classical bit switch. Our result provides a general bound on the information-disturbance trade-off for von Neumann measurements that distinguish states on the orbits of continuous unitary one-parameter groups with bounded spectrum. In contrast, information gain without disturbance is possible for some completely positive semi-groups. This shows that readout of timing information can be possible without entropy ...
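
    Written out, the entropy bound quoted above is, with ħ the reduced Planck constant and units as in the abstract:

      \[ S \;\geq\; \frac{1}{2}\left(\frac{\hbar}{\Delta t\,\Delta E}\right)^{2} \]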

  11. Prepositioned Stocks: Additional Information and a Consistent Definition Would Make DOD’s Annual Report More Useful

    Science.gov (United States)

    2015-06-01

    In prior reports, GAO identified a number of long-term and ongoing challenges to DOD's prepositioned stocks related to strategic shortfalls, risks, and mitigation efforts, and the extent to which these shortfalls contribute to operational risk. GAO also compared DOD's definitions of prepositioning. (View GAO-15-570; for more information, contact Cary Russell.)

  12. Processing of recognition information and additional cues: A model-based analysis of choice, confidence, and response time

    Directory of Open Access Journals (Sweden)

    Andreas Glockner

    2011-02-01

    Full Text Available Research on the processing of recognition information has focused on testing the recognition heuristic (RH). On the aggregate, the noncompensatory use of recognition information postulated by the RH was rejected in several studies, while the RH could still account for a considerable proportion of choices. These results can be explained if either (a) a part of the subjects used the RH, or (b) nobody used it but its choice predictions were accidentally in line with the predictions of the strategy actually used. In the current study, which exemplifies a new approach to model testing, we determined individuals' decision strategies based on a maximum-likelihood classification method, taking into account choices, response times and confidence ratings simultaneously. Unlike most previous studies of the RH, our study tested the RH under conditions in which we provided information about cue values of unrecognized objects (which we argue is fairly common and thus of some interest). For 77.5% of the subjects, overall behavior was best explained by a compensatory parallel constraint satisfaction (PCS) strategy. The proportion of subjects using an enhanced RH heuristic (RHe) was negligible (up to 7.5%); 15% of the subjects seemed to use a take-the-best strategy (TTB). A more fine-grained analysis of the supplemental behavioral parameters conditional on strategy use supports PCS but calls into question process assumptions for apparent users of RH, RHe, and TTB within our experimental context. Our results are consistent with previous literature highlighting the importance of individual strategy classification as compared to aggregated analyses.

  13. A Target Sound Extraction via 2ch Microphone Array Using Phase Information

    Science.gov (United States)

    Suyama, Kenji; Takahashi, Kota

    In this paper, we propose a novel learning method of linear filters for target sound extraction in a non-stationary noisy environment via a microphone array with 2 elements. The method is based on the phase difference between the two microphones, which is detected from the outputs of a Hilbert transformer whose length corresponds to the fundamental period of the vowel parts of speech signals. A cue signal, which is correlated with the power envelope of the target sound, is generated using the mean square of the phase difference and applied to the learning, as sketched below. Several computer simulation results demonstrate the superior performance of the proposed method.
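
    A minimal sketch of the phase-difference cue on synthetic two-channel data, assuming scipy's Hilbert transform in place of the paper's fixed-length real-time transformer (the 200 Hz tone and 5-sample delay are illustrative):

      import numpy as np
      from scipy.signal import hilbert

      fs = 16000
      t = np.arange(0, 0.1, 1 / fs)
      delay = 5 / fs                      # assumed inter-microphone delay
      mic1 = np.sin(2 * np.pi * 200 * t)
      mic2 = np.sin(2 * np.pi * 200 * (t - delay)) + 0.05 * np.random.randn(len(t))

      # Instantaneous phase of each channel from the analytic signal.
      phase1 = np.angle(hilbert(mic1))
      phase2 = np.angle(hilbert(mic2))
      dphi = np.angle(np.exp(1j * (phase1 - phase2)))   # wrap to (-pi, pi]

      # Mean-square phase difference per short frame acts as the cue signal:
      # small values indicate a source aligned with the look direction.
      frame = fs // 100
      cue = np.array([np.mean(dphi[i:i + frame] ** 2)
                      for i in range(0, len(dphi) - frame, frame)])
      print(cue[:5])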

  14. Recent advancements in information extraction methodology and hardware for earth resources survey systems

    Science.gov (United States)

    Erickson, J. D.; Thomson, F. J.

    1974-01-01

    The present work discusses some recent developments in preprocessing and extractive processing techniques and hardware and in user applications model development for earth resources survey systems. The Multivariate Interactive Digital Analysis System (MIDAS) is currently being developed, and is an attempt to solve the problem of real time multispectral data processing in an operational system. The main features and design philosophy of this system are described. Examples of wetlands mapping and land resource inventory are presented. A user model developed for predicting the yearly production of mallard ducks from remote sensing and ancillary data is described.

  15. Extracting multiscale pattern information of fMRI based functional brain connectivity with application on classification of autism spectrum disorders.

    Directory of Open Access Journals (Sweden)

    Hui Wang

    Full Text Available We employed a multi-scale clustering methodology known as "data cloud geometry" to extract functional connectivity patterns derived from a functional magnetic resonance imaging (fMRI) protocol. The method was applied to correlation matrices of 106 regions of interest (ROIs) in 29 individuals with autism spectrum disorders (ASD), and 29 individuals with typical development (TD) while they completed a cognitive control task. Connectivity clustering geometry was examined at both "fine" and "coarse" scales. At the coarse scale, the connectivity clustering geometry produced 10 valid clusters with a coherent relationship to neural anatomy. A supervised learning algorithm employed fine scale information about clustering motif configurations and prevalence, and coarse scale information about intra- and inter-regional connectivity; the algorithm correctly classified ASD and TD participants with sensitivity of 82.8% and specificity of 82.8%. Most of the predictive power of the logistic regression model resided at the level of the fine-scale clustering geometry, suggesting that cellular versus systems level disturbances are more prominent in individuals with ASD. This article provides validation for this multi-scale geometric approach to extracting brain functional connectivity pattern information and for its use in classification of ASD.

  16. Extracting multiscale pattern information of fMRI based functional brain connectivity with application on classification of autism spectrum disorders.

    Science.gov (United States)

    Wang, Hui; Chen, Chen; Fushing, Hsieh

    2012-01-01

    We employed a multi-scale clustering methodology known as "data cloud geometry" to extract functional connectivity patterns derived from functional magnetic resonance imaging (fMRI) protocol. The method was applied to correlation matrices of 106 regions of interest (ROIs) in 29 individuals with autism spectrum disorders (ASD), and 29 individuals with typical development (TD) while they completed a cognitive control task. Connectivity clustering geometry was examined at both "fine" and "coarse" scales. At the coarse scale, the connectivity clustering geometry produced 10 valid clusters with a coherent relationship to neural anatomy. A supervised learning algorithm employed fine scale information about clustering motif configurations and prevalence, and coarse scale information about intra- and inter-regional connectivity; the algorithm correctly classified ASD and TD participants with sensitivity of 82.8% and specificity of 82.8%. Most of the predictive power of the logistic regression model resided at the level of the fine-scale clustering geometry, suggesting that cellular versus systems level disturbances are more prominent in individuals with ASD. This article provides validation for this multi-scale geometric approach to extracting brain functional connectivity pattern information and for its use in classification of ASD.

  17. Extraction of structural and chemical information from high angle annular dark-field image by an improved peaks finding method.

    Science.gov (United States)

    Yin, Wenhao; Huang, Rong; Qi, Ruijuan; Duan, Chungang

    2016-09-01

    With the development of spherical aberration (Cs) corrected scanning transmission electron microscopy (STEM), the high-angle annular dark-field (HAADF) imaging technique has been widely applied in the microstructure characterization of various advanced materials with atomic resolution. However, the current qualitative interpretation of HAADF images is not enough to extract all the useful information. Here a modified peaks-finding method is proposed to quantify the HAADF-STEM image and extract structural and chemical information. First, an automatic segmentation technique including numerical filters and a watershed algorithm is used to define the sub-areas for each atomic column. Then a 2D Gaussian fitting is carried out to determine the atomic column positions precisely, which provides geometric information at the unit-cell scale, as sketched below. Furthermore, a self-adaptive integration based on the column position and the covariance of the fitted Gaussian distribution is performed. The integrated intensities show very high sensitivity to the mean atomic number with an improved signal-to-noise (S/N) ratio. Consequently, the polarization map and strain distributions were rebuilt from a HAADF-STEM image of the rhombohedral and tetragonal BiFeO3 interface, and a MnO2 monolayer in a LaAlO3/SrMnO3/SrTiO3 heterostructure was discerned from its neighboring TiO2 layers. Microsc. Res. Tech. 79:820-826, 2016.
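
    A minimal sketch of the per-column quantification on a synthetic patch, using scipy's curve_fit for the 2D Gaussian and a ~3-sigma mask for the self-adaptive integration (patch size, noise level and starting values are illustrative):

      import numpy as np
      from scipy.optimize import curve_fit

      def gauss2d(xy, amp, x0, y0, sx, sy, offset):
          x, y = xy
          g = amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                             + (y - y0) ** 2 / (2 * sy ** 2))) + offset
          return g.ravel()

      # Synthetic 15x15 sub-area with one atomic column near (7.3, 6.8).
      yy, xx = np.mgrid[0:15, 0:15]
      patch = gauss2d((xx, yy), 100.0, 7.3, 6.8, 1.6, 1.6, 5.0).reshape(15, 15)
      patch += np.random.default_rng(1).normal(0.0, 1.0, patch.shape)

      p0 = (patch.max() - patch.min(), 7.0, 7.0, 2.0, 2.0, patch.min())
      (amp, x0, y0, sx, sy, offset), _ = curve_fit(gauss2d, (xx, yy), patch.ravel(), p0=p0)
      print(f"column position: ({x0:.2f}, {y0:.2f})")   # sub-pixel estimate

      # Integrate background-subtracted counts inside ~3 sigma of the column.
      mask = ((xx - x0) ** 2 / sx ** 2 + (yy - y0) ** 2 / sy ** 2) < 9.0
      print(f"integrated intensity: {(patch - offset)[mask].sum():.1f}")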

  18. Wavelet analysis of molecular dynamics: Efficient extraction of time-frequency information in ultrafast optical processes

    Energy Technology Data Exchange (ETDEWEB)

    Prior, Javier; Castro, Enrique [Departamento de Física Aplicada, Universidad Politécnica de Cartagena, Cartagena 30202 (Spain); Chin, Alex W. [Theory of Condensed Matter Group, University of Cambridge, J J Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Almeida, Javier; Huelga, Susana F.; Plenio, Martin B. [Institut für Theoretische Physik, Albert-Einstein-Allee 11, Universität Ulm, D-89069 Ulm (Germany)

    2013-12-14

    New experimental techniques based on nonlinear ultrafast spectroscopies have been developed over the last few years, and have been demonstrated to provide powerful probes of quantum dynamics in different types of molecular aggregates, including both natural and artificial light harvesting complexes. Fourier transform-based spectroscopies have been particularly successful, yet "complete" spectral information normally necessitates the loss of all information on the temporal sequence of events in a signal. This information, though, is particularly important in transient or multi-stage processes, in which the spectral decomposition of the data evolves in time. By going through several examples of ultrafast quantum dynamics, we demonstrate that the use of wavelets provides an efficient and accurate way to simultaneously acquire both temporal and frequency information about a signal, and argue that this greatly aids the elucidation and interpretation of the physical processes responsible for non-stationary spectroscopic features, such as those encountered in coherent excitonic energy transport.

  19. Wavelet analysis of molecular dynamics: efficient extraction of time-frequency information in ultrafast optical processes.

    Science.gov (United States)

    Prior, Javier; Castro, Enrique; Chin, Alex W; Almeida, Javier; Huelga, Susana F; Plenio, Martin B

    2013-12-14

    New experimental techniques based on nonlinear ultrafast spectroscopies have been developed over the last few years, and have been demonstrated to provide powerful probes of quantum dynamics in different types of molecular aggregates, including both natural and artificial light harvesting complexes. Fourier transform-based spectroscopies have been particularly successful, yet "complete" spectral information normally necessitates the loss of all information on the temporal sequence of events in a signal. This information, though, is particularly important in transient or multi-stage processes, in which the spectral decomposition of the data evolves in time. By going through several examples of ultrafast quantum dynamics, we demonstrate that the use of wavelets provides an efficient and accurate way to simultaneously acquire both temporal and frequency information about a signal, and argue that this greatly aids the elucidation and interpretation of the physical processes responsible for non-stationary spectroscopic features, such as those encountered in coherent excitonic energy transport.
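
    A minimal sketch of the idea with a continuous wavelet transform, assuming the PyWavelets package and a synthetic two-component signal (sampling rate, scales and the Morlet wavelet are illustrative choices):

      import numpy as np
      import pywt

      fs = 1000.0
      t = np.arange(0, 1, 1 / fs)
      # Non-stationary test signal: 50 Hz in the first half, 120 Hz in the second.
      signal = (np.sin(2 * np.pi * 50 * t) * (t < 0.5)
                + np.sin(2 * np.pi * 120 * t) * (t >= 0.5))

      scales = np.arange(1, 128)
      coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

      # |coeffs| is a (scale, time) map: its maxima show which frequency dominates
      # when, information a plain Fourier spectrum would average away.
      power = np.abs(coeffs)
      for idx, label in [(250, "t=0.25 s"), (750, "t=0.75 s")]:
          print(f"{label}: dominant ~{freqs[np.argmax(power[:, idx])]:.0f} Hz")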

  20. Investigation of the Impact of Extracting and Exchanging Health Information by Using Internet and Social Networks

    Science.gov (United States)

    Pistolis, John; Zimeras, Stelios; Chardalias, Kostas; Roupa, Zoe; Fildisis, George; Diomidous, Marianna

    2016-01-01

    Introduction: Social networks (1) have been embedded in our daily life for a long time. They constitute a powerful tool used nowadays for both searching and exchanging information on different issues by using Internet search engines (Google, Bing, etc.) and social networks (Facebook, Twitter, etc.). This paper presents the results of research on the frequency and the type of use of the Internet and social networks by the general public and health professionals. Objectives: The objectives of the research focused on investigating the frequency of seeking and meticulously searching for health information in social media by both individuals and health practitioners. Exchanging information is a procedure that involves issues of reliability and quality of information. Methods: In this research, advanced statistical techniques are used to investigate the participants' profiles in using social networks for searching and exchanging information on health issues. Results: Based on the answers, 93% of the people use the Internet to find information on health subjects. In the principal component analysis, the most important health subjects were nutrition (0.719), respiratory issues (0.79), cardiological issues (0.777), psychological issues (0.667) and total (73.8%). Conclusions: The research results, based on different statistical techniques, revealed that 61.2% of the males and 56.4% of the females intended to use the social networks for searching medical information. Based on the principal component analysis, the most important sources that the participants mentioned were the use of the Internet and social networks for exchanging information on health issues. These sources proved to be of paramount importance to the participants of the study. The same holds for nursing, medical and administrative staff in hospitals. PMID:27482135

  1. Extracting protein dynamics information from overlapped NMR signals using relaxation dispersion difference NMR spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Konuma, Tsuyoshi [Icahn School of Medicine at Mount Sinai, Department of Structural and Chemical Biology (United States); Harada, Erisa [Suntory Foundation for Life Sciences, Bioorganic Research Institute (Japan); Sugase, Kenji, E-mail: sugase@sunbor.or.jp, E-mail: sugase@moleng.kyoto-u.ac.jp [Kyoto University, Department of Molecular Engineering, Graduate School of Engineering (Japan)

    2015-12-15

    Protein dynamics plays important roles in many biological events, such as ligand binding and enzyme reactions. NMR is mostly used for investigating such protein dynamics in a site-specific manner. Recently, NMR has been actively applied to large proteins and intrinsically disordered proteins, which are attractive research targets. However, signal overlap, which is often observed for such proteins, hampers accurate analysis of NMR data. In this study, we have developed a new methodology called relaxation dispersion difference that can extract conformational exchange parameters from overlapped NMR signals measured using relaxation dispersion spectroscopy. In relaxation dispersion measurements, the signal intensities of fluctuating residues vary according to the Carr-Purcell-Meiboom-Gill pulsing interval, whereas those of non-fluctuating residues are constant. Therefore, subtracting each relaxation dispersion spectrum from the one with the highest signal intensities, measured at the shortest pulsing interval, leaves only the signals of the fluctuating residues; this is the principle of the relaxation dispersion difference method, illustrated below. This new method enabled us to extract exchange parameters from overlapped signals of heme oxygenase-1, which is a relatively large protein. The results indicate that the structural flexibility of a kink in the heme-binding site is important for efficient heme binding. Relaxation dispersion difference requires neither selectively labeled samples nor modification of pulse programs; thus it will have wide applications in protein dynamics analysis.
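
    The subtraction principle can be stated in a few lines of numpy; the peak intensities below are synthetic and the noise cutoff is an assumed value, purely for illustration:

      import numpy as np

      # Peak heights at the shortest CPMG pulsing interval (reference spectrum)...
      reference = np.array([100.0, 80.0, 60.0])
      # ...and at two longer pulsing intervals (one row per spectrum).
      longer = np.array([
          [99.5, 70.0, 59.8],    # peak 2 loses intensity -> fluctuating residue
          [100.2, 62.0, 60.1],
      ])

      difference = reference - longer            # near zero except for peak 2
      fluctuating = np.any(np.abs(difference) > 3.0, axis=0)  # 3.0 = assumed noise level
      print(fluctuating)                         # -> [False  True False]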

  2. Additional evidence that rosacea pathogenesis may involve demodex: new information from the topical efficacy of ivermectin and praziquantel.

    Science.gov (United States)

    Abokwidir, Manal; Fleischer, Alan B

    2015-09-17

    Additional evidence that Demodex folliculorum may contribute to the pathogenesis of papulopustular rosacea comes from new studies of two topical antiparasitic agents. Ivermectin and praziquantel have recently been shown to be effective in decreasing the severity of papulopustular rosacea. These two agents differ significantly in molecular structure but share similar antiparasitic mechanisms of action. Higher numbers of Demodex mites are found in the skin of patients with rosacea than in people with normal skin. If Demodex play a role in pathogenesis, then hypersensitivity to the mites, their flora, or their products could explain the observed efficacy of antidemodectic therapy.

  3. Extracting information about the initial state from the black hole radiation

    CERN Document Server

    Lochan, Kinjalk

    2015-01-01

    The crux of the black hole information paradox is related to the fact that the complete information about the initial state of a quantum field in a collapsing spacetime is not available to future asymptotic observers, belying the expectations from a unitary quantum theory. We study the imprints of the initial quantum state, contained in the distortions of the black hole radiation from the thermal spectrum, which can be detected by the asymptotic observers. We identify the class of in-states which can be fully reconstructed from the information contained in the distortions at the semiclassical level. Even for the general in-state, we can uncover a specific amount of information about the initial state. For a large class of initial states, some specific observables defined in the initial Hilbert space are completely determined from the resulting final spectrum. These results suggest that a classical collapse scenario ignores this richness of information in the resulting spectrum and a consistent quantu...

  4. Mini Electrodes on Ablation Catheters: Valuable Addition or Redundant Information?—Insights from a Computational Study

    Directory of Open Access Journals (Sweden)

    Stefan Pollnow

    2017-01-01

    Full Text Available Radiofrequency ablation has become a first-line approach for the curative therapy of many cardiac arrhythmias. Various existing catheter designs provide high spatial resolution to identify the best spot for performing ablation and to assess lesion formation. However, the creation of transmural and nonconducting ablation lesions requires the use of catheters with larger electrodes and improved thermal conductivity, leading to reduced spatial sensitivity. As a trade-off, an ablation catheter with integrated mini electrodes was introduced. The additional diagnostic benefit of this catheter is still not clear. In order to address this issue, we implemented a computational setup with different ablation scenarios. Our in silico results show that peak-to-peak amplitudes of unipolar electrograms from mini electrodes are more suitable for differentiating ablated and nonablated tissue than electrograms from the distal ablation electrode. However, in orthogonal mapping position, no significant difference was observed between distal electrode and mini electrode electrograms in the ablation scenarios. In conclusion, catheters with mini electrodes provide additional benefit for distinguishing ablated from nonablated tissue in parallel position with high spatial resolution. It is feasible to detect conduction gaps in linear lesions with this catheter by evaluating electrogram data from the mini electrodes.

  5. A Novel Approach for Text Categorization of Unorganized data based with Information Extraction

    Directory of Open Access Journals (Sweden)

    Suneetha Manne

    2011-07-01

    Full Text Available The Internet has made a profound change in the lives of many enthusiastic innovators and researchers. The information available on the web has knocked on the doors of Knowledge Discovery, leading to a new information era. Unfortunately, most search engines provide web content that is irrelevant to the information intended by the browser. Many text categorization techniques for web content have been developed to recognize a given document's category, but they have failed to produce trustworthy results. This paper primarily focuses on web content categorization based on a classic summarization technique, enabling classification at the word level. The web document is first preprocessed, which involves filtering the content with classical techniques, and is then converted into organized data. The organized data are then treated with a predefined hierarchical categorical set to identify the exact category.

  6. Improve Quality of Life - additional criteria for health and social care information technology acceptance in an ageing world.

    Science.gov (United States)

    Monteiro, Jorge

    2012-01-01

    Reversing the rising cost of health and social systems is needed in ageing developed and developing countries. A new model of ageing is advocated by the World Health Organization. This new model asks for more personal health accountability and a more integrated approach to care and preventive cure. Information systems and technologies can play an important role in supporting the changes needed in order to have better and more sustainable health and social care systems. Using value and results for patients as the criteria by which systems are accepted by users and by organizations can contribute to value-based competition in health and social care systems. The unified theory of acceptance and use of technology is presented, and the pertinence of adding an extension to the theory in order to capture Quality of Life improvement expectations is explored.

  7. Amplitude extraction in pseudoscalar-meson photoproduction: towards a situation of complete information

    CERN Document Server

    Nys, Jannes; Ryckebusch, Jan

    2015-01-01

    A complete set for pseudoscalar-meson photoproduction is a minimum set of observables from which one can determine the underlying reaction amplitudes unambiguously. The complete sets considered in this work involve single- and double-polarization observables. It is argued that for extracting amplitudes from data, the transversity representation of the reaction amplitudes offers advantages over alternate representations. It is shown that with the available single-polarization data for the p(γ,K⁺)Λ reaction, the energy and angular dependence of the moduli of the normalized transversity amplitudes in the resonance region can be determined to a fair accuracy. Determining the relative phases of the amplitudes from double-polarization observables is far less evident.

  8. Adaptive extraction of emotion-related EEG segments using multidimensional directed information in time-frequency domain.

    Science.gov (United States)

    Petrantonakis, Panagiotis C; Hadjileontiadis, Leontios J

    2010-01-01

    Emotion discrimination from the electroencephalogram (EEG) has gained attention in the last decade as a user-friendly and effective approach to EEG-based emotion recognition (EEG-ER) systems. Nevertheless, challenging issues regarding the emotion elicitation procedure, especially its effectiveness, arise. In this work, a novel method is proposed which not only evaluates the degree of emotion elicitation but also localizes the emotion information in the time-frequency domain. The latter incorporates multidimensional directed information at the time-frequency EEG representation, extracted using empirical mode decomposition, and introduces an asymmetry index for adaptive emotion-related EEG segment selection. Experimental results derived from 16 subjects visually stimulated with pictures from the valence/arousal space drawn from the International Affective Picture System database justify the effectiveness of the proposed approach and its potential contribution to the enhancement of EEG-ER systems.

  9. Extracting DC bus current information for optimal phase correction and current ripple in sensorless brushless DC motor drive

    Institute of Scientific and Technical Information of China (English)

    Zu-sheng HO; Chii-maw UANG; Ping-chieh WANG

    2014-01-01

    Brushless DC motor (BLDCM) sensorless driving technology is becoming increasingly established. However, optimal phase correction still relies on complex calculations or algorithms. In finding the correct commutation point, the problem of phase lag is introduced. In this paper, we extract DC bus current information for auto-calibrating the phase shift to obtain the correct commutation point and optimize the control of BLDC sensorless driving. As we capture only DC bus current information, the original shunt resistor is used in the BLDCM driver and there is no need to add further current sensor components. Software processing using only simple arithmetic operations successfully accomplishes the phase correction. Experimental results show that the proposed method can operate accurately and stably at low or high speed, with light or heavy load, and is suitable for practical applications. This approach will not increase cost but will achieve the best performance/cost ratio and meet market expectations.

  10. Analysis Methods for Extracting Knowledge from Large-Scale WiFi Monitoring to Inform Building Facility Planning

    DEFF Research Database (Denmark)

    Ruiz-Ruiz, Antonio; Blunck, Henrik; Prentow, Thor Siiger

    2014-01-01

    ... realistic data to inform facility planning. In this paper, we propose analysis methods to extract knowledge from large sets of network-collected WiFi traces to better inform facility management and planning in large building complexes. The analysis methods, which build on a rich set of temporal and spatial features, include methods for noise removal, e.g., labeling of beyond-building-perimeter devices, and methods for quantification of area densities and flows, e.g., building enter and exit events, and for classifying the behavior of people, e.g., into user roles such as visitor, hospitalized or employee. ... We furthermore present detailed statistics from our analysis regarding people's presence, movement and roles, and example types of visualizations that both highlight their potential as inspection tools for planners and provide interesting insights...

  11. The Promise of Information and Communication Technology in Healthcare: Extracting Value From the Chaos.

    Science.gov (United States)

    Mamlin, Burke W; Tierney, William M

    2016-01-01

    Healthcare is an information business with expanding use of information and communication technologies (ICTs). Current ICT tools are immature, but a brighter future looms. We examine 7 areas of ICT in healthcare: electronic health records (EHRs), health information exchange (HIE), patient portals, telemedicine, social media, mobile devices and wearable sensors and monitors, and privacy and security. In each of these areas, we examine the current status and future promise, highlighting how each might reach its promise. Steps to better EHRs include a universal programming interface, universal patient identifiers, improved documentation and improved data analysis. HIEs require federal subsidies for sustainability and support from EHR vendors, targeting seamless sharing of EHR data. Patient portals must bring patients into the EHR with better design and training, greater provider engagement and leveraging HIEs. Telemedicine needs sustainable payment models, clear rules of engagement, quality measures and monitoring. Social media needs consensus on rules of engagement for providers, better data mining tools and approaches to counter disinformation. Mobile and wearable devices benefit from a universal programming interface, improved infrastructure, more rigorous research and integration with EHRs and HIEs. Laws for privacy and security need updating to match current technologies, and data stewards should share information on breaches and standardize best practices. ICT tools are evolving quickly in healthcare and require a rational and well-funded national agenda for development, use and assessment.

  12. An Approach for Comparative Research Between Ontology Building & Learning Tools for Information Extraction & Retrieval

    Directory of Open Access Journals (Sweden)

    Suresh Jain; C. S. Bhatia; Dharmendra Gupta; Sumit Jain; Bharat Pahadiya

    2012-02-01

    Full Text Available Information available on the web is huge and covers diversified fields. Nowadays most search engines use essentially keyword-based search techniques: we simply specify a set of keywords as a query and receive in return a list of pages ranked by their similarity to the query. Web search currently faces the problem that the outcome is often unsatisfactory because much of the retrieved information is irrelevant. Searching for exact information in such a huge repository of unstructured web data is still a main area of research interest. One solution to achieve this is the Semantic Web. Ontology is an effective concept commonly used for the Semantic Web; an ontology is "an explicit specification of a conceptualization". There are two main pillars of the Semantic Web: one is Problem Solving Methods and the other is Ontology. Ontology building is a tedious and time-consuming task for the user, and the quality of an ontology plays an important role in information retrieval applications. This paper deals with the features of and familiarity with different ontology building and learning tools. After gaining preliminary knowledge of all the tools and software, we investigated the specific features and services provided by each and identified the tool that is optimal in all respects for our further research project.

  13. Named entity extraction and disambiguation for informal text: the missing link

    NARCIS (Netherlands)

    Badieh Habib Morgan, Mena

    2014-01-01

    Social media content represents a large portion of all textual content appearing on the Internet. These streams of user-generated content (UGC) provide an opportunity and a challenge for media analysts to analyze huge amounts of new data and use them to infer and reason with new information. An example

  14. Influence of ethanol concentration, addition of spices extract, and level of sweetness on physico-chemical characteristics and sensory quality of apple vermouth

    OpenAIRE

    2000-01-01

    The composition of apple base wine was found to be suitable for conversion into vermouth. The spices extract contained more TSS, tannins, esters and volatile acid, but less titratable acid, than the apple base wine. To optimize and develop the product, apple vermouth with different ethanol concentrations (12%, 15%, 18%), sugar contents (4%, 8%) and spices extract levels (2.5% and 5.0%) was prepared and evaluated. Significant differences in physico-chemical characteristics and sensory quality amongst the vermouths havin...

  15. Use of polar and nonpolar fractions as additional information sources for studying thermoxidized virgin olive oils by FTIR

    Directory of Open Access Journals (Sweden)

    Tena, N.

    2014-09-01

    Full Text Available Fourier transform infrared (FTIR) spectroscopy has been proposed to study the degradation of virgin olive oils (VOO) in samples undergoing thermoxidation. The polar and nonpolar fractions of oxidized oils have been analyzed by FTIR to provide further information on the minor spectral changes taking place during thermoxidation. This information assists in the interpretation of the spectra of the samples. For this purpose, the polar and nonpolar fractions of 47 VOO samples thermoxidized (190 °C) in a fryer were analyzed by FTIR. The time-course change of the band area assigned to single cis double bonds was explained by its correlation with the decrease in oleic acid (adjusted R² = 0.93). The bands assigned to the hydroxyl groups and to the first overtone of the ester groups were better studied in the spectra collected for the polar and nonpolar fractions, respectively. The bands assigned to peroxide, epoxy, tertiary alcohol and fatty acid groups were clearly observed in the spectra of the polar fraction, while they are not noticeable in the spectra of the whole oils.

  16. Maximizing Chromatographic Information from Environmental Extracts by GCxGC-ToF-MS

    NARCIS (Netherlands)

    Skoczynska, E.M.; Korytar, P.; Boer, de J.

    2008-01-01

    Comprehensive two-dimensional gas chromatography (GCxGC) coupled with a time-of-flight (ToF) detector allows the separation of many constituents of previously unresolved complex mixtures (UCM) of contaminants in sediment samples. In addition to the powerful chromatographic resolution, automated mass

  17. Randomised controlled feasibility trial of an evidence-informed behavioural intervention for obese adults with additional risk factors.

    Directory of Open Access Journals (Sweden)

    Falko F Sniehotta

    Full Text Available BACKGROUND: Interventions for dietary and physical activity changes in obese adults may be less effective for participants with additional obesity-related risk factors and co-morbidities than for otherwise healthy individuals. This study aimed to test the feasibility and acceptability of the recruitment, allocation, measurement, retention and intervention procedures of a randomised controlled trial of an intervention to improve physical activity and dietary practices amongst obese adults with additional obesity-related risk factors. METHOD: Pilot single-centre, open-labelled, outcome-assessor-blinded randomised controlled trial of obese (Body Mass Index (BMI) ≥ 30 kg/m2) adults (age ≥ 18 y) with obesity-related co-morbidities such as type 2 diabetes, impaired glucose tolerance or hypertension. Participants were randomly allocated to a manual-based group intervention or a leaflet control condition in accordance with a 2:1 allocation ratio. The primary outcome was acceptability and feasibility of trial procedures; secondary outcomes included measures of body composition, physical activity, food intake and psychological process measures. RESULTS: Out of 806 potentially eligible individuals identified through list searches in two primary care general medical practices, N = 81 participants (63% female; mean age = 56.56 (11.44); mean BMI = 36.73 (6.06)) with 2.35 (1.47) co-morbidities were randomised. The Scottish Index of Multiple Deprivation (SIMD) was the only significant predictor of providing consent to take part in the study (higher chances of consent for invitees with lower levels of deprivation). Participant flowcharts and qualitative and quantitative feedback suggested good acceptance and feasibility of intervention procedures, but 34.6% of randomised participants were lost to follow-up due to overly high measurement burden and sub-optimal retention procedures. Participants in the intervention group showed positive trends for most psychological, behavioural

  18. Extracting Feature Information and its Visualization Based on the Characteristic Defect Octave Frequencies in a Rolling Element Bearing

    Directory of Open Access Journals (Sweden)

    Jianyu Lei

    2007-10-01

    Full Text Available Monitoring the condition of rolling element bearings and diagnosing defects has received considerable attention for many years because the majority of problems in rotating machines are caused by defective bearings. In order to monitor conditions and diagnose defects in a rolling element bearing, a new approach is developed based on the characteristic defect octave frequencies. The characteristic defect frequencies make it possible to detect the presence of a defect and to diagnose in what part of the bearing the defect appears. However, because the characteristic defect frequencies vary with rotational speed, it is difficult to extract feature information from data at variable rotational speeds. In this paper, the characteristic defect octave frequencies, which do not vary with rotational speed, are introduced to replace the characteristic defect frequencies, so that feature information can be easily extracted, as sketched below. Moreover, based on the characteristic defect octave frequencies, an envelope spectrum array, which combines 3-D visualization technology with extremum envelope spectrum technology, is established. This method has great advantages in acquiring the characteristics and trends of the data and achieves a straightforward and credible result.
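
    A minimal sketch of an envelope spectrum with a speed-normalized frequency axis, assuming scipy and a synthetic bearing signal (the 3.58x shaft-rate defect ratio and the 3 kHz resonance are illustrative):

      import numpy as np
      from scipy.signal import hilbert

      fs = 20000.0
      shaft_hz = 25.0                     # measured rotational speed
      bpfo = 3.58 * shaft_hz              # assumed outer-race defect frequency
      t = np.arange(0, 1, 1 / fs)

      # Bearing resonance amplitude-modulated at the defect repetition rate.
      signal = (1 + 0.8 * np.sin(2 * np.pi * bpfo * t)) * np.sin(2 * np.pi * 3000 * t)

      envelope = np.abs(hilbert(signal))
      spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
      freqs = np.fft.rfftfreq(len(envelope), 1 / fs)

      # Dividing by shaft speed gives an order axis; log2 of it gives octaves,
      # so the defect signature stays put when the rotational speed changes.
      order = freqs / shaft_hz
      peak = order[np.argmax(spec)]
      print(f"defect at order {peak:.2f} = {np.log2(peak):.2f} octaves")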

  19. Evaluation of an Automated Information Extraction Tool for Imaging Data Elements to Populate a Breast Cancer Screening Registry.

    Science.gov (United States)

    Lacson, Ronilda; Harris, Kimberly; Brawarsky, Phyllis; Tosteson, Tor D; Onega, Tracy; Tosteson, Anna N A; Kaye, Abby; Gonzalez, Irina; Birdwell, Robyn; Haas, Jennifer S

    2015-10-01

    Breast cancer screening is central to early breast cancer detection. Identifying and monitoring process measures for screening is a focus of the National Cancer Institute's Population-based Research Optimizing Screening through Personalized Regimens (PROSPR) initiative, which requires participating centers to report structured data across the cancer screening continuum. We evaluate the accuracy of automated information extraction of imaging findings from radiology reports, which are available as unstructured text. We present prevalence estimates of imaging findings for breast imaging received by women who obtained care in a primary care network participating in PROSPR (n = 139,953 radiology reports) and compare automatically extracted data elements to a "gold standard" based on manual review for a validation sample of 941 randomly selected radiology reports, including mammograms, digital breast tomosynthesis, ultrasound, and magnetic resonance imaging (MRI). The prevalence of imaging findings varies by data element and modality (e.g., suspicious calcification is noted in 2.6% of screening mammograms, 12.1% of diagnostic mammograms, and 9.4% of tomosynthesis exams). In the validation sample, the accuracy of identifying imaging findings, including suspicious calcifications, masses, and architectural distortion (on mammogram and tomosynthesis); masses, cysts, non-mass enhancement, and enhancing foci (on MRI); and masses and cysts (on ultrasound), ranges from 0.8 to 1.0 for recall, precision, and F-measure. Information extraction tools can be used for accurate documentation of imaging findings as structured data elements from text reports for a variety of breast imaging modalities. These data can be used to populate screening registries to help elucidate more effective breast cancer screening processes.
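
    The reported accuracy figures rest on standard evaluation arithmetic against the manually reviewed gold standard; a small sketch with illustrative counts:

      def precision_recall_f(tp, fp, fn):
          # tp: findings extracted and confirmed; fp: spurious; fn: missed.
          precision = tp / (tp + fp) if tp + fp else 0.0
          recall = tp / (tp + fn) if tp + fn else 0.0
          f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
          return precision, recall, f

      p, r, f = precision_recall_f(tp=90, fp=8, fn=5)   # example counts
      print(f"precision={p:.2f} recall={r:.2f} F={f:.2f}")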

  20. Analysis of space-borne data for coastal zone information extraction of Goa Coast, India

    Digital Repository Service at National Institute of Oceanography (India)

    Kunte, P.D.; Wagle, B.G.

    Fig. 2. Photo-geomorphological map of the study area; mapped fluvial and coastal features include estuary islands, river terraces, tidal flats, mangrove vegetation and salt pans. These projects result in tidal flooding, further accelerating the erosion of river banks, which ultimately has adverse impacts on fish nurseries and salt pans. These revelations demonstrate that remote sensing with spatial, spectral, and temporal capabilities...

  1. Resource Conservation and Recovery Information System extract tape. Data tape documentation

    Energy Technology Data Exchange (ETDEWEB)

    1990-12-31

    Within the Environmental Protection Agency (EPA), the Office of Solid Waste and Emergency Response (OSWER) is responsible for the development and management of a national program to safely handle solid and hazardous waste. The national program, for the most part, is authorized by the Resource Conservation and Recovery Act (RCRA). The Hazardous Waste Data Management System (HWDMS) was developed to automatically track the status of permits, reports, inspections, enforcement activities, and financial data to assist EPA in managing the data generated by RCRA. As with many computer systems, HWDMS has outgrown its capabilities, so a new system is needed. The new system is called the Resource Conservation and Recovery Information System (RCRIS). The goal of the RCRIS system is to provide a more effective means of tracking hazardous waste handlers regulated under RCRA. RCRA Notification, Permitting, and Compliance Monitoring and Evaluation data are available through the National Technical Information Service (NTIS) on IBM-compatible tapes. From now until HWDMS is completely archived, there will be two data tapes from NTIS: one for HWDMS and a separate one for RCRIS. The HWDMS tape will include data from all States and Territories except Mississippi. The RCRIS tape will only contain the data from Mississippi and general enforcement data; sensitive information is not included.

  2. Effects on enteric methane production and bacterial and archaeal communities by the addition of cashew nut shell extract or glycerol-an in vitro evaluation.

    Science.gov (United States)

    Danielsson, Rebecca; Werner-Omazic, Anna; Ramin, Mohammad; Schnürer, Anna; Griinari, Mikko; Dicksved, Johan; Bertilsson, Jan

    2014-09-01

    The objective of the study was to evaluate the effect of cashew nut shell extract (CNSE) and glycerol (purity >99%) on enteric methane (CH4) production and microbial communities in an automated gas in vitro system. Microbial communities from the in vitro system were compared with samples from the donor cows, in vivo. Inoculated rumen fluid was mixed with a diet with a 60:40 forage:concentrate ratio and, in total, 5 different treatments were set up: 5 mg of CNSE (CNSE-L), 10 mg of CNSE (CNSE-H), 15 mmol of glycerol/L (glycerol-L), 30 mmol of glycerol/L (glycerol-H), and a control without feed additive. Gas samples were taken at 2, 4, 8, 24, 32, and 48 h of incubation, and the CH4 concentration was measured. Samples of rumen fluid were taken for volatile fatty acid analysis and for microbial sequence analyses after 8, 24, and 48 h of incubation. In vivo rumen samples from the cows were taken 2 h after the morning feeding on 3 consecutive days to compare the in vitro system with in vivo conditions. The gas data and data from microbial sequence analysis (454 sequencing) were analyzed using a mixed model and principal components analysis. These analyses illustrated that CH4 production was reduced with the CNSE treatment, by 8 and 18%, respectively, for the L and H concentrations. Glycerol instead increased CH4 production, by 8 and 12%, respectively, for the L and H concentrations. The inhibition with CNSE could be due to the observed shift in the bacterial population, possibly resulting in decreased production of hydrogen or formate, the methanogenic substrates. Alternatively, the response could be explained by a shift in the methanogenic community. In the glycerol treatments, no main differences in the bacterial or archaeal populations were detected compared with the in vivo control. Thus, the increase in CH4 production may be explained by the increase in substrate in the in vitro system. The reduced CH4 production in vitro with CNSE suggests that CNSE can be a promising inhibitor of

  3. Identifying and extracting patient smoking status information from clinical narrative texts in Spanish.

    Science.gov (United States)

    Figueroa, Rosa L; Soto, Diego A; Pino, Esteban J

    2014-01-01

    In this work we present a system to identify and extract patients' smoking status from clinical narrative text in Spanish. The clinical narrative text was processed using natural language processing techniques and annotated by four people with a biomedical background. The dataset used for classification had 2,465 documents, each one annotated with one of four smoking status categories. We used two feature representations: single word tokens and bigrams. The classification problem was divided into two levels: first, distinguishing smoker (S) from non-smoker (NS); second, distinguishing current smoker (CS) from past smoker (PS). For each feature representation and classification level, we used two classifiers: Support Vector Machines (SVM) and Bayesian Networks (BN). We split our dataset as follows: a training set containing 66% of the available documents, used to build the classifiers, and a test set containing the remaining 34% of the documents, used to test and evaluate the models. Our results show that the SVM together with the bigram representation performed better at both classification levels. For the S vs. NS level, performance measures were ACC=85%, Precision=85%, and Recall=90%. For the CS vs. PS level, performance measures were ACC=87%, Precision=91%, and Recall=94%.
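
    A minimal sketch of one configuration described above (bigram features, a linear SVM, 66/34 split), assuming scikit-learn; the four toy Spanish notes are illustrative stand-ins for the annotated corpus:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.metrics import accuracy_score
      from sklearn.model_selection import train_test_split
      from sklearn.svm import LinearSVC

      notes = [
          "paciente fuma 10 cigarrillos al dia",       # smoker
          "paciente niega consumo de tabaco",          # non-smoker
          "fumador activo desde hace 20 anos",         # smoker
          "no fuma, sin antecedentes de tabaquismo",   # non-smoker
      ] * 10                                           # repeat so both splits have data
      labels = ["S", "NS", "S", "NS"] * 10

      X_train, X_test, y_train, y_test = train_test_split(
          notes, labels, test_size=0.34, random_state=0)

      vectorizer = CountVectorizer(ngram_range=(2, 2))     # bigram representation
      clf = LinearSVC().fit(vectorizer.fit_transform(X_train), y_train)
      pred = clf.predict(vectorizer.transform(X_test))
      print(f"ACC={accuracy_score(y_test, pred):.2f}")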

  4. Inexperienced clinicians can extract pathoanatomic information from MRI narrative reports with high reproducibility for use in research/quality assurance

    Directory of Open Access Journals (Sweden)

    Kent Peter

    2011-07-01

    Full Text Available Background: Although reproducibility in reading MRI images amongst radiologists and clinicians has been studied previously, no studies have examined the reproducibility of inexperienced clinicians in extracting pathoanatomic information from magnetic resonance imaging (MRI) narrative reports and transforming that information into quantitative data. However, this process is frequently required in research and quality assurance contexts. The purpose of this study was to examine inter-rater reproducibility (agreement and reliability) among an inexperienced group of clinicians in extracting spinal pathoanatomic information from radiologist-generated MRI narrative reports. Methods: Twenty MRI narrative reports were randomly extracted from an institutional database. A group of three physiotherapy students independently reviewed the reports and coded the presence of 14 common pathoanatomic findings using a categorical electronic coding matrix. Decision rules were developed after initial coding in an effort to resolve ambiguities in the narrative reports. This process was repeated a further three times using separate samples of 20 MRI reports until no further ambiguities were identified (total n = 80). Reproducibility between trainee clinicians and two highly trained raters was examined in an arbitrary coding round, with agreement measured using percentage agreement and reliability measured using unweighted kappa (κ), as sketched below. Reproducibility was then examined in another group of three trainee clinicians who had not participated in the production of the decision rules, using another sample of 20 MRI reports. Results: The mean percentage agreement for paired comparisons between the initial trainee clinicians improved over the four coding rounds (97.9-99.4%), although the greatest improvement was observed after the first introduction of coding rules. High inter-rater reproducibility was observed between trainee clinicians across 14 pathoanatomic categories over the
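
    Unweighted kappa compares the observed agreement of two raters against the agreement expected by chance; a minimal sketch with illustrative codes:

      from collections import Counter

      def cohen_kappa(rater_a, rater_b):
          n = len(rater_a)
          observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
          ca, cb = Counter(rater_a), Counter(rater_b)
          expected = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n ** 2
          return (observed - expected) / (1 - expected)

      a = ["present", "absent", "absent", "present", "absent"]
      b = ["present", "absent", "present", "present", "absent"]
      print(f"kappa = {cohen_kappa(a, b):.2f}")   # -> 0.62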

  5. Case study on the extraction of land cover information from the SAR image of a coal mining area

    Institute of Scientific and Technical Information of China (English)

    HU Zhao-ling; LI Hai-quan; DU Pei-jun

    2009-01-01

    In this study, analyses are conducted on the information features of a construction site, a cornfield, and subsidence seeper land in a coal mining area with a synthetic aperture radar (SAR) image of medium resolution. Based on the land cover features of the coal mining area and on texture feature extraction and selection using a gray-level co-occurrence matrix (GLCM) of the SAR image, we propose that the optimum window size for computing the GLCM is an appropriately sized window that can effectively distinguish different types of land cover. Next, a band combination was carried out over the texture feature images and the band-filtered SAR image to obtain a new multi-band image. After transforming the new image with principal component analysis, a classification was conducted selectively on the three principal component bands carrying the most information. Finally, through training and experimentation with the samples, a three-layered BP neural network was established to classify the SAR image. The results show that, assisted by texture information, the neural network classification improved the accuracy of SAR image classification by 14.6%, compared with a classification by maximum likelihood estimation without texture information.
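
    The windowed GLCM texture extraction step can be sketched as follows, assuming scikit-image (whose GLCM helpers are named graycomatrix/graycoprops in recent releases); the SAR band here is simulated, and the window size is a placeholder for the empirically selected optimum:

```python
# Sliding-window GLCM texture map, one property per output band.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

sar = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in SAR band
win = 9  # candidate "optimum" window size, to be tuned per land-cover type

def texture_map(img, win, prop="contrast"):
    half = win // 2
    out = np.zeros_like(img, dtype=float)
    for i in range(half, img.shape[0] - half):
        for j in range(half, img.shape[1] - half):
            patch = img[i - half:i + half + 1, j - half:j + half + 1]
            glcm = graycomatrix(patch, distances=[1], angles=[0],
                                levels=256, symmetric=True, normed=True)
            out[i, j] = graycoprops(glcm, prop)[0, 0]
    return out

contrast = texture_map(sar, win)  # one texture band for the stacked image
```

    Texture bands produced this way would be stacked with the filtered SAR band, reduced by PCA, and fed to the BP network.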

  6. Classification of Informal Settlements Through the Integration of 2d and 3d Features Extracted from Uav Data

    Science.gov (United States)

    Gevaert, C. M.; Persello, C.; Sliuzas, R.; Vosselman, G.

    2016-06-01

    Unmanned Aerial Vehicles (UAVs) are capable of providing very high resolution and up-to-date information to support informal settlement upgrading projects. In order to provide accurate basemaps, urban scene understanding through the identification and classification of buildings and terrain is imperative. However, common characteristics of informal settlements such as small, irregular buildings with heterogeneous roof material and large presence of clutter challenge state-of-the-art algorithms. Especially the dense buildings and steeply sloped terrain cause difficulties in identifying elevated objects. This work investigates how 2D radiometric and textural features, 2.5D topographic features, and 3D geometric features obtained from UAV imagery can be integrated to obtain a high classification accuracy in challenging classification problems for the analysis of informal settlements. It compares the utility of pixel-based and segment-based features obtained from an orthomosaic and DSM with point-based and segment-based features extracted from the point cloud to classify an unplanned settlement in Kigali, Rwanda. Findings show that the integration of 2D and 3D features leads to higher classification accuracies.

  7. CLASSIFICATION OF INFORMAL SETTLEMENTS THROUGH THE INTEGRATION OF 2D AND 3D FEATURES EXTRACTED FROM UAV DATA

    Directory of Open Access Journals (Sweden)

    C. M. Gevaert

    2016-06-01

    Full Text Available Unmanned Aerial Vehicles (UAVs) are capable of providing very high resolution and up-to-date information to support informal settlement upgrading projects. In order to provide accurate basemaps, urban scene understanding through the identification and classification of buildings and terrain is imperative. However, common characteristics of informal settlements such as small, irregular buildings with heterogeneous roof material and large presence of clutter challenge state-of-the-art algorithms. Especially the dense buildings and steeply sloped terrain cause difficulties in identifying elevated objects. This work investigates how 2D radiometric and textural features, 2.5D topographic features, and 3D geometric features obtained from UAV imagery can be integrated to obtain a high classification accuracy in challenging classification problems for the analysis of informal settlements. It compares the utility of pixel-based and segment-based features obtained from an orthomosaic and DSM with point-based and segment-based features extracted from the point cloud to classify an unplanned settlement in Kigali, Rwanda. Findings show that the integration of 2D and 3D features leads to higher classification accuracies.
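
    The feature-integration idea in this pair of records reduces to stacking heterogeneous per-segment feature matrices before supervised classification. A hedged sketch follows: the arrays are hypothetical placeholders for the 2D, 2.5D, and 3D features, and the random-forest classifier is an illustrative choice, not the one used by the authors.

```python
# Feature-level integration of 2D, 2.5D, and 3D descriptors per segment.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

n = 500                          # number of image segments
f2d = np.random.rand(n, 8)       # e.g., radiometric and textural statistics
f25d = np.random.rand(n, 3)      # e.g., DSM-derived height above terrain
f3d = np.random.rand(n, 6)       # e.g., point-cloud planarity, normals
y = np.random.randint(0, 3, n)   # building / terrain / clutter labels

X = np.hstack([f2d, f25d, f3d])  # concatenate the feature groups
clf = RandomForestClassifier(n_estimators=200).fit(X, y)
```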

  8. Development of a New Binary Solvent System Using Ionic Liquids as Additives to Improve Rotenone Extraction Yield from Malaysia Derris sp.

    Directory of Open Access Journals (Sweden)

    Zetty Shafiqa Othman

    2015-01-01

    Full Text Available Rotenone is one of the prominent insecticidal isoflavonoid compounds which can be isolated from the extract of the Derris sp. plant. Despite being an effective compound for exterminating pests at minute concentrations, procuring a significant amount of rotenone in the extracts for commercial biopesticide purposes remains a challenge. Therefore, the objective of this study was to determine the ionic liquid (IL) which gives the highest yield of rotenone. The normal soaking extraction (NSE) method was carried out for 24 hrs using five different binary solvent systems, each comprising a combination of acetone with one of five ionic liquids (ILs): (1) [BMIM] Cl; (2) [BMIM] OAc; (3) [BMIM] NTf2; (4) [BMIM] OTf; and (5) [BMPy] Cl. Next, the yield of rotenone, % (w/w), and its concentration (mg/mL) in dried roots were quantitatively determined by means of RP-HPLC and TLC. The results showed that the binary solvent system of [BMIM] OTf + acetone was the best solvent system combination as compared to the other solvent systems (P < 0.05). It contributed the highest rotenone content of 2.69 ± 0.21% (w/w) (4.04 ± 0.34 mg/mL) at 14 hrs of exhaustive extraction time. In conclusion, a combination of the ILs with a selective organic solvent has been proven to increase significantly the amount of bioactive constituents recovered in the phytochemical extraction process.

  9. Toward a comprehensive drug ontology: extraction of drug-indication relations from diverse information sources.

    Science.gov (United States)

    Sharp, Mark E

    2017-01-10

    Drug ontologies could help pharmaceutical researchers overcome information overload and speed the pace of drug discovery, thus benefiting the industry and patients alike. Drug-disease relations, specifically drug-indication relations, are a prime candidate for representation in ontologies. There is a wealth of available drug-indication information, but structuring and integrating it is challenging. We created a drug-indication database (DID) of data from 12 openly available, commercially available, and proprietary information sources, integrated by terminological normalization to UMLS and other authorities. Across sources, there are 29,964 unique raw drug/chemical names, 10,938 unique raw indication "target" terms, and 192,008 unique raw drug-indication pairs. Drug/chemical name normalization to CAS numbers or UMLS concepts reduced the unique name count to 91% or 85% of the raw count, respectively (84% if combined). Indication "target" normalization to UMLS "phenotypic-type" concepts reduced the unique term count to 57% of the raw count. The 12 sources of raw data varied widely in coverage (numbers of unique drug/chemical and indication concepts and relations), generally consistent with the idiosyncrasies of each source, but had strikingly little overlap, suggesting that we successfully achieved source/raw data diversity. The DID is a database of structured drug-indication relations intended to facilitate building practical, comprehensive, integrated drug ontologies. The DID itself is not an ontology, but could be converted to one more easily than the contributing raw data. Our methodology could be adapted to the creation of other structured drug-disease databases, such as for contraindications, precautions, warnings, and side effects.
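
    The effect of terminological normalization on unique counts can be illustrated with a toy example; the mapping table and names below are hypothetical stand-ins, not UMLS or CAS data.

```python
# Collapsing raw drug-indication pairs via name normalization to a shared ID.
raw_pairs = [
    ("acetylsalicylic acid", "headache"),
    ("Aspirin", "headache"),
    ("ASA", "pain"),
]
# Hypothetical normalization table (real systems map to CAS/UMLS identifiers)
name_to_id = {
    "acetylsalicylic acid": "CAS:50-78-2",
    "aspirin": "CAS:50-78-2",
    "asa": "CAS:50-78-2",
}

normalized = {(name_to_id[drug.lower()], indication)
              for drug, indication in raw_pairs}
print(len(raw_pairs), "raw pairs ->", len(normalized), "normalized pairs")
```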

  10. Linking attentional processes and conceptual problem solving: Visual cues facilitate the automaticity of extracting relevant information from diagrams

    Directory of Open Access Journals (Sweden)

    Amy eRouinfar

    2014-09-01

    Full Text Available This study investigated links between lower-level visual attention processes and higher-level problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants’ attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants’ verbal responses were used to determine their accuracy. The study produced two major findings. First, short duration visual cues can improve problem solving performance on a variety of insight physics problems, including transfer problems not sharing the surface features of the training problems, but instead sharing the underlying solution path. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers’ attention necessarily embodying the solution to the problem. Instead, the cueing effects were caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, these short duration visual cues when administered repeatedly over multiple training problems resulted in participants becoming more efficient at extracting the relevant information on the transfer problem, showing that such cues can improve the automaticity with which solvers extract relevant information from a problem. Both of these results converge on the conclusion that lower-order visual processes driven by attentional cues can influence higher-order cognitive processes.

  11. A study on the effect of using mangrove leaf extracts as a feed additive in the progress of bacterial infections in marine ornamental fish

    Institute of Scientific and Technical Information of China (English)

    Thangavelu Balasubramanian; Kapila Tissera

    2013-01-01

    Objective: To ascertain the feasibility of using sustainable natural resources to maintain disease-free fish in such establishments. Methods: Infected marine ornamental fishes were collected from hatchery conditions, and the causative bacteria were identified by morphological and biochemical techniques. The antibacterial activity and disease-resistance capability of mangrove plant leaf extracts were investigated against these fish pathogens. Results: The methanol leaf extract of Avicennia marina showed inhibition activity at concentrations of 220, 200, 175 and 150 µg/mL against Pseudomonas fluorescens, Pseudomonas aeruginosa, Vibrio parahaemolyticus, and Vibrio anguillarum, respectively. The experimental trial revealed that feeding marine ornamental fish with feed incorporating the methanol leaf extract of Avicennia marina increased their survival and reduced their susceptibility to infection by the isolated bacteria. Based on the in vitro assay, the methanol extract of Avicennia marina exhibited good antibacterial activity. Conclusions: The mangrove leaves have potential to control the infections caused by Pseudomonas fluorescens, Pseudomonas aeruginosa, Vibrio parahaemolyticus and Vibrio anguillarum.

  12. SSIDs in the Wild: Extracting Semantic Information from WiFi SSIDs

    OpenAIRE

    Seneviratne, Suranga; Jiang, Fangzhou; Cunche, Mathieu; Seneviratne, Aruna

    2015-01-01

    WiFi networks are becoming increasingly ubiquitous. In addition to providing network connectivity, WiFi finds applications in areas such as indoor and outdoor localisation, home automation, and physical analytics. In this paper, we explore the semantics of one key attribute of a WiFi network, SSID name. Using a dataset of approximately 120,000 WiFi access points and their corresponding geo-locations, we use a set of similarity metrics to relate SSID names to known busi...
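
    The abstract does not specify the similarity metrics used, so as an assumption-laden illustration, here is one simple way to relate an SSID name to candidate business names: Jaccard similarity over character trigrams. All names are hypothetical.

```python
# Matching an SSID to business names via character-trigram Jaccard similarity.
def trigrams(s: str) -> set:
    s = s.lower()
    return {s[i:i + 3] for i in range(len(s) - 2)}

def jaccard(a: str, b: str) -> float:
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

ssid = "JoesCoffee_Guest"
businesses = ["Joe's Coffee", "City Gym", "Hotel Aurora"]
best = max(businesses, key=lambda b: jaccard(ssid, b))
print(best, round(jaccard(ssid, best), 2))
```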

  13. COSEBIs: Extracting the full E-/B-mode information from cosmic shear correlation functions

    CERN Document Server

    Schneider, Peter; Krause, Elisabeth

    2010-01-01

    Cosmic shear is considered one of the most powerful methods for studying the properties of Dark Energy in the Universe. As a standard method, the two-point correlation functions $\xi_\pm(\theta)$ of the cosmic shear field are used as statistical measures for the shear field. In order to separate the observed shear into E- and B-modes, the latter being most likely produced by remaining systematics in the data set and/or intrinsic alignment effects, several statistics have been defined before. Here we aim at a complete E-/B-mode decomposition of the cosmic shear information contained in the $\xi_\pm$ on a finite angular interval. We construct two sets of such E-/B-mode measures, namely Complete Orthogonal Sets of E-/B-mode Integrals (COSEBIs), characterized by weight functions between the $\xi_\pm$ and the COSEBIs which are polynomials in $\theta$ or polynomials in $\ln(\theta)$, respectively. Considering the likelihood in cosmological parameter space, constructed from the COSEBIs, we study their information contents....
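
    For reference, the construction described in the abstract takes the following general form; this is a sketch following the conventions of the COSEBIs literature, with the filter functions $T_{\pm n}$ being the polynomial weight functions mentioned above:

```latex
% E-/B-mode COSEBIs as weighted integrals of the shear two-point
% correlation functions over the finite angular range [theta_min, theta_max].
\begin{align}
E_n &= \frac{1}{2} \int_{\theta_{\min}}^{\theta_{\max}} \mathrm{d}\theta\,\theta
       \left[ T_{+n}(\theta)\,\xi_+(\theta) + T_{-n}(\theta)\,\xi_-(\theta) \right], \\
B_n &= \frac{1}{2} \int_{\theta_{\min}}^{\theta_{\max}} \mathrm{d}\theta\,\theta
       \left[ T_{+n}(\theta)\,\xi_+(\theta) - T_{-n}(\theta)\,\xi_-(\theta) \right].
\end{align}
```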

  14. Textpresso: an ontology-based information retrieval and extraction system for biological literature.

    Directory of Open Access Journals (Sweden)

    Hans-Michael Müller

    2004-11-01

    Full Text Available We have developed Textpresso, a new text-mining system for scientific literature whose capabilities go far beyond those of a simple keyword search engine. Textpresso's two major elements are a collection of the full text of scientific articles split into individual sentences, and the implementation of categories of terms for which a database of articles and individual sentences can be searched. The categories are classes of biological concepts (e.g., gene, allele, cell or cell group, phenotype, etc.) and classes that relate two objects (e.g., association, regulation, etc.) or describe one (e.g., biological process, etc.). Together they form a catalog of types of objects and concepts called an ontology. After this ontology is populated with terms, the whole corpus of articles and abstracts is marked up to identify terms of these categories. The current ontology comprises 33 categories of terms. A search engine enables the user to search for one or a combination of these tags and/or keywords within a sentence or document, and as the ontology allows word meaning to be queried, it is possible to formulate semantic queries. Full text access increases recall of biological data types from 45% to 95%. Extraction of particular biological facts, such as gene-gene interactions, can be accelerated significantly by ontologies, with Textpresso automatically performing nearly as well as expert curators to identify sentences; in searches for two uniquely named genes and an interaction term, the ontology confers a 3-fold increase of search efficiency. Textpresso currently focuses on Caenorhabditis elegans literature, with 3,800 full text articles and 16,000 abstracts. The lexicon of the ontology contains 14,500 entries, each of which includes all versions of a specific word or phrase, and it includes all categories of the Gene Ontology database. Textpresso is a useful curation tool, as well as search engine for researchers, and can readily be extended to other

  15. Textpresso: an ontology-based information retrieval and extraction system for biological literature.

    Science.gov (United States)

    Müller, Hans-Michael; Kenny, Eimear E; Sternberg, Paul W

    2004-11-01

    We have developed Textpresso, a new text-mining system for scientific literature whose capabilities go far beyond those of a simple keyword search engine. Textpresso's two major elements are a collection of the full text of scientific articles split into individual sentences, and the implementation of categories of terms for which a database of articles and individual sentences can be searched. The categories are classes of biological concepts (e.g., gene, allele, cell or cell group, phenotype, etc.) and classes that relate two objects (e.g., association, regulation, etc.) or describe one (e.g., biological process, etc.). Together they form a catalog of types of objects and concepts called an ontology. After this ontology is populated with terms, the whole corpus of articles and abstracts is marked up to identify terms of these categories. The current ontology comprises 33 categories of terms. A search engine enables the user to search for one or a combination of these tags and/or keywords within a sentence or document, and as the ontology allows word meaning to be queried, it is possible to formulate semantic queries. Full text access increases recall of biological data types from 45% to 95%. Extraction of particular biological facts, such as gene-gene interactions, can be accelerated significantly by ontologies, with Textpresso automatically performing nearly as well as expert curators to identify sentences; in searches for two uniquely named genes and an interaction term, the ontology confers a 3-fold increase of search efficiency. Textpresso currently focuses on Caenorhabditis elegans literature, with 3,800 full text articles and 16,000 abstracts. The lexicon of the ontology contains 14,500 entries, each of which includes all versions of a specific word or phrase, and it includes all categories of the Gene Ontology database. Textpresso is a useful curation tool, as well as search engine for researchers, and can readily be extended to other organism
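
    The core mechanism these two Textpresso records describe, marking sentences with ontology categories and then querying by category combination, can be sketched in a few lines. The lexicon entries below are hypothetical toy data, not Textpresso's actual ontology.

```python
# Ontology-assisted sentence search: tag sentences with category terms,
# then retrieve sentences matching a combination of categories.
lexicon = {
    "gene": {"lin-12", "glp-1"},
    "association": {"interacts with", "binds"},
}

def tags(sentence: str) -> set:
    found = set()
    for category, terms in lexicon.items():
        if any(t in sentence.lower() for t in terms):
            found.add(category)
    return found

sentences = [
    "We show that lin-12 interacts with glp-1 in the germline.",
    "The germline develops normally at 20 degrees.",
]
# Semantic query: sentences containing a gene AND an association term.
hits = [s for s in sentences if {"gene", "association"} <= tags(s)]
print(hits)
```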

  16. VLSI architecture of NEO spike detection with noise shaping filter and feature extraction using informative samples.

    Science.gov (United States)

    Hoang, Linh; Yang, Zhi; Liu, Wentai

    2009-01-01

    An emerging class of multi-channel neural recording systems aims to simultaneously monitor the activity of many neurons by miniaturizing and increasing the number of recording channels. The vast volume of data from these recording systems, however, presents a challenge for processing and wireless transmission. An on-chip neural signal processor is needed to filter out uninteresting recording samples and perform spike sorting. This paper presents a VLSI architecture of a neural signal processor that can reliably detect spikes via a nonlinear energy operator, enhance the spike signal-to-noise ratio with a noise shaping filter, and select meaningful recording samples for clustering by using informative samples. The architecture is implemented in a 90-nm CMOS process, occupies 0.2 mm², and consumes 0.5 mW of power.
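
    The nonlinear energy operator (NEO) at the heart of this detector is the standard discrete form ψ[n] = x[n]² − x[n−1]·x[n+1], which responds to signals that are both large in amplitude and high in frequency. A minimal software sketch follows; the signal and the threshold scaling factor are hypothetical.

```python
# NEO-based spike detection on a synthetic single-channel recording.
import numpy as np

def neo(x: np.ndarray) -> np.ndarray:
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]  # psi[n] = x[n]^2 - x[n-1]x[n+1]
    return psi

x = np.random.randn(1000)           # stand-in for one recording channel
x[500:505] += np.hanning(5) * 8.0   # injected spike-like transient
psi = neo(x)
threshold = 8.0 * np.abs(psi).mean()  # hypothetical scaling factor
spikes = np.flatnonzero(psi > threshold)
print(spikes)
```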

  17. Extracting change information of land-use and soil-erosion based on RS & GIS technology

    Institute of Scientific and Technical Information of China (English)

    LI Zhong-feng; LI You-cai

    2007-01-01

    Rapid land-use change has taken place in many arid regions of China, such as Yulin prefecture, over the last decade due to rehabilitation measures. Land-use change and soil erosion dynamics were investigated by the combined use of remote sensing and geographic information systems (GIS). The objectives were to determine land-use transition rates and soil erosion change in Yulin prefecture over the 15 years from 1986 to 2000. Significant changes in land-use and soil erosion occurred in the area over the study period. The results show a significant decrease in barren land, mainly due to conversion to grassland. Agricultural land increased, associated with conversions from grassland and barren land. The areas affected by water erosion and wind erosion declined. The study demonstrates that the integration of satellite remote sensing and GIS is an effective approach for analyzing the direction, rate, and spatial pattern of land-use and soil erosion change.

  18. Extracting Time-Resolved Information from Time-Integrated Laser-Induced Breakdown Spectra

    Directory of Open Access Journals (Sweden)

    Emanuela Grifoni

    2014-01-01

    Full Text Available Laser-induced breakdown spectroscopy (LIBS) data are characterized by a strong dependence on the acquisition time after the onset of the laser plasma. However, time-resolved broadband spectrometers are expensive and often not suitable for being used in portable LIBS instruments. In this paper we will show how the analysis of a series of LIBS spectra, taken at different delays after the laser pulse, allows the recovery of time-resolved spectral information. The comparison of such spectra is presented for the analysis of an aluminium alloy. The plasma parameters (electron temperature and number density) are evaluated, starting from the time-integrated and time-resolved spectra, respectively. The results are compared and discussed.
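
    One plausible reading of the recovery step, offered here as an assumption rather than the paper's stated method: if S(t_d) denotes a spectrum integrated from acquisition delay t_d onward, the emission within a window [t1, t2] can be approximated by the difference S(t1) − S(t2). The spectra below are synthetic placeholders.

```python
# Approximating a time-resolved spectrum from two time-integrated spectra.
import numpy as np

wavelengths = np.linspace(300.0, 800.0, 2048)   # nm, stand-in axis
delays = [0.5, 1.0, 2.0, 4.0]                   # acquisition delays, us
# Fake integrated spectra: one decaying line at 500 nm
spectra = {d: np.exp(-d) * np.exp(-((wavelengths - 500.0) / 5.0) ** 2)
           for d in delays}

def window_spectrum(t1: float, t2: float) -> np.ndarray:
    """Approximate emission within the delay window [t1, t2]."""
    return spectra[t1] - spectra[t2]

resolved = window_spectrum(1.0, 2.0)
```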

  19. Breast cancer and quality of life: medical information extraction from health forums.

    Science.gov (United States)

    Opitz, Thomas; Aze, Jérome; Bringay, Sandra; Joutard, Cyrille; Lavergne, Christian; Mollevi, Caroline

    2014-01-01

    Internet health forums are a rich textual resource with content generated through free exchanges among patients and, in certain cases, health professionals. We tackle the problem of retrieving clinically relevant information from such forums, with relevant topics being defined from clinical auto-questionnaires. Texts in forums are largely unstructured and noisy, calling for adapted preprocessing and query methods. We minimize the number of false negatives in queries by using a synonym tool to achieve query expansion of initial topic keywords. To avoid false positives, we propose a new measure based on a statistical comparison of frequent co-occurrences in a large reference corpus (Web) to keep only relevant expansions. Our work is motivated by a study of breast cancer patients' health-related quality of life (QoL). We consider topics defined from a breast-cancer specific QoL-questionnaire. We quantify and structure occurrences in posts of a specialized French forum and outline important future developments.
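
    The two-step query construction described above can be sketched as follows: expand the topic keywords with synonyms, then keep only expansions that co-occur frequently enough with the topic in a large reference corpus. Synonym lists, co-occurrence scores, and the threshold below are all hypothetical.

```python
# Query expansion with a co-occurrence filter to discard spurious synonyms.
topic = "fatigue"
synonyms = {"fatigue": ["tiredness", "exhaustion", "asthenia", "battery"]}

# Hypothetical co-occurrence statistics from a reference corpus (e.g., Web)
cooccurrence = {"tiredness": 0.62, "exhaustion": 0.55,
                "asthenia": 0.31, "battery": 0.02}

THRESHOLD = 0.2  # reject expansions that rarely co-occur with the topic
expanded = [topic] + [s for s in synonyms[topic]
                      if cooccurrence.get(s, 0.0) >= THRESHOLD]
print(expanded)  # "battery" is dropped as a spurious sense of "fatigue"
```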

  20. Extraction of Benthic Cover Information from Video Tows and Photographs Using Object-Based Image Analysis

    Science.gov (United States)

    Estomata, M. T. L.; Blanco, A. C.; Nadaoka, K.; Tomoling, E. C. M.

    2012-07-01

    Mapping benthic cover in deep waters accounts for a very small proportion of studies in the field. The majority of benthic cover mapping makes use of satellite images, and classification is usually carried out only for shallow waters. To map the seafloor in optically deep waters, underwater videos and photos are needed. Some researchers have applied this method to underwater photos, but made use of different classification methods, such as neural networks and rapid classification via down-sampling. In this study, we attempted to use accurate bathymetric data obtained with a multi-beam echo sounder (MBES) as complementary data with the underwater photographs. Due to the absence of a motion reference unit (MRU), which applies corrections to the data gathered by the MBES, the accuracy of the depth data was compromised. Nevertheless, even without accurate bathymetric data, object-based image analysis (OBIA), which used rule sets based on information such as shape, size, area, relative distance, and spectral information, was still applied. Compared to pixel-based classifications, OBIA was able to classify more specific benthic cover types beyond coral and sand, such as rubble and fish. Through the use of rule sets on area (less than or equal to 700 pixels for fish and between 700 and 10,000 pixels for rubble), as well as standard deviation values to distinguish texture, fish and rubble were identified. OBIA produced benthic cover maps with higher overall accuracy, 93.78 ± 0.85%, compared to pixel-based methods, which had an average accuracy of only 87.30 ± 6.11% (p-value = 0.0001, α = 0.05).
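
    The area rules quoted in this abstract translate directly into a toy rule-set classifier; only the area bounds come from the text, while the standard-deviation threshold standing in for the texture criterion is hypothetical.

```python
# Toy OBIA rule set: area bounds from the abstract, texture check assumed.
def classify_segment(area_px: int, std_intensity: float,
                     std_threshold: float = 12.0) -> str:
    if std_intensity > std_threshold:      # textured objects
        if area_px <= 700:
            return "fish"
        if 700 < area_px <= 10_000:
            return "rubble"
    return "coral_or_sand"                 # fall back to spectral rules

print(classify_segment(450, 20.0))    # -> fish
print(classify_segment(5000, 18.0))   # -> rubble
```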