WorldWideScience

Sample records for extract quantitative information

  1. A methodology for the extraction of quantitative information from electron microscopy images at the atomic level

    International Nuclear Information System (INIS)

    Galindo, P L; Pizarro, J; Guerrero, E; Guerrero-Lebrero, M P; Scavello, G; Yáñez, A; Sales, D L; Herrera, M; Molina, S I; Núñez-Moraleda, B M; Maestre, J M

    2014-01-01

    In this paper we describe a methodology developed at the University of Cadiz (Spain) in the past few years for the extraction of quantitative information from electron microscopy images at the atomic level. This work is based on a coordinated and synergistic activity of several research groups that have been working together over the last decade in two different and complementary fields: Materials Science and Computer Science. The aim of our joint research has been to develop innovative high-performance computing techniques and simulation methods in order to address computationally challenging problems in the analysis, modelling and simulation of materials at the atomic scale, providing significant advances with respect to existing techniques. The methodology involves several fundamental areas of research including the analysis of high resolution electron microscopy images, materials modelling, image simulation and 3D reconstruction using quantitative information from experimental images. These techniques for analysis, modelling and simulation make it possible to optimize the control and functionality of devices developed using the materials under study, and have been tested using data obtained from experimental samples.

  2. Oxygen octahedra picker: A software tool to extract quantitative information from STEM images

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Yi, E-mail: y.wang@fkf.mpg.de; Salzberger, Ute; Sigle, Wilfried; Eren Suyolcu, Y.; Aken, Peter A. van

    2016-09-15

    In perovskite oxide based materials and hetero-structures there are often strong correlations between oxygen octahedral distortions and functionality. Thus, atomistic understanding of the octahedral distortion, which requires accurate measurements of atomic column positions, will greatly help to engineer their properties. Here, we report the development of a software tool to extract quantitative information of the lattice and of BO₆ octahedral distortions from STEM images. Center-of-mass and 2D Gaussian fitting methods are implemented to locate positions of individual atom columns. The precision of atomic column distance measurements is evaluated on both simulated and experimental images. The application of the software tool is demonstrated using practical examples. - Highlights: • We report a software tool for mapping atomic positions from HAADF and ABF images. • It enables quantification of both crystal lattice and oxygen octahedral distortions. • We test the measurement accuracy and precision on simulated and experimental images. • It works well for different orientations of perovskite structures and interfaces.
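
    A minimal sketch (not the authors' tool) of the 2D Gaussian fitting step described above, locating one atomic column to sub-pixel precision in a synthetic image patch; all data and parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_2d(coords, amp, x0, y0, sigma, offset):
    """Isotropic 2D Gaussian, flattened for curve_fit."""
    x, y = coords
    return (amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2)) + offset).ravel()

# Synthetic 15x15 patch: one blurred atomic column plus noise.
x, y = np.meshgrid(np.arange(15), np.arange(15))
rng = np.random.default_rng(0)
patch = gaussian_2d((x, y), 1.0, 7.3, 6.8, 2.0, 0.1) + rng.normal(0, 0.02, 225)

# Initial guess from the brightest pixel; the fit refines it to sub-pixel accuracy.
i0 = np.argmax(patch)
p0 = [patch.max(), i0 % 15, i0 // 15, 2.0, patch.min()]
popt, _ = curve_fit(gaussian_2d, (x, y), patch, p0=p0)
print(f"fitted column position: ({popt[1]:.3f}, {popt[2]:.3f})")
```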

  3. A method to extract quantitative information in analyzer-based x-ray phase contrast imaging

    International Nuclear Information System (INIS)

    Pagot, E.; Cloetens, P.; Fiedler, S.; Bravin, A.; Coan, P.; Baruchel, J.; Haertwig, J.; Thomlinson, W.

    2003-01-01

    Analyzer-based imaging is a powerful phase-sensitive technique that generates improved contrast compared to standard absorption radiography. Combining numerically two images taken on either side at ±1/2 of the full width at half-maximum (FWHM) of the rocking curve provides images of 'pure refraction' and of 'apparent absorption'. In this study, a similar approach is made by combining symmetrical images with respect to the peak of the analyzer rocking curve but at general positions, ±α·FWHM. These two approaches do not consider the ultrasmall angle scattering produced by the object independently, which can lead to inconsistent results. An accurate way to separately retrieve the quantitative information intrinsic to the object is proposed. It is based on a statistical analysis of the local rocking curve, and allows one to overcome the problems encountered using the previous approaches.
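
    For context, the classic two-image combination that the symmetric ±α·FWHM scheme generalizes can be written in a few lines. The sketch below solves the standard pair of equations I_i = I_abs·[R(θ_i) + R'(θ_i)·Δθ] for apparent absorption and refraction angle; the Gaussian rocking curve and intensity values are assumptions for illustration:

```python
import numpy as np

fwhm = 10e-6                                    # assumed rocking-curve FWHM [rad]
sigma = fwhm / 2.355
R  = lambda th: np.exp(-th**2 / (2 * sigma**2))   # Gaussian rocking curve
dR = lambda th: -th / sigma**2 * R(th)            # its slope

alpha = 0.5                                     # the +/- alpha * FWHM working points
tL, tH = -alpha * fwhm, alpha * fwhm

I_L, I_H = 0.42, 0.58                           # toy pixel intensities at tL and tH

# Solve I_i = I_abs * (R(t_i) + dR(t_i) * d_theta) for the two unknowns.
I_abs   = (I_L * dR(tH) - I_H * dR(tL)) / (R(tL) * dR(tH) - R(tH) * dR(tL))
d_theta = (I_H * R(tL) - I_L * R(tH)) / (I_L * dR(tH) - I_H * dR(tL))
print(f"apparent absorption {I_abs:.3f}, refraction angle {d_theta:.2e} rad")
```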

  4. Information extraction system

    Science.gov (United States)

    Lemmond, Tracy D; Hanley, William G; Guensche, Joseph Wendell; Perry, Nathan C; Nitao, John J; Kidwell, Paul Brandon; Boakye, Kofi Agyeman; Glaser, Ron E; Prenger, Ryan James

    2014-05-13

    An information extraction system and methods of operating the system are provided. In particular, an information extraction system for performing meta-extraction of named entities of people, organizations, and locations as well as relationships and events from text documents is described herein.

  5. Multimedia Information Extraction

    CERN Document Server

    Maybury, Mark T

    2012-01-01

    The advent of increasingly large consumer collections of audio (e.g., iTunes), imagery (e.g., Flickr), and video (e.g., YouTube) is driving a need not only for multimedia retrieval but also information extraction from and across media. Furthermore, industrial and government collections fuel requirements for stock media access, media preservation, broadcast news retrieval, identity management, and video surveillance. While significant advances have been made in language processing for information extraction from unstructured multilingual text and extraction of objects from imagery and video…

  6. QUANTITATIVE EXTRACTION OF MEIOFAUNA: A COMPARISON ...

    African Journals Online (AJOL)

    … and A G de Wet, Department of Mathematical Statistics, University of Port Elizabeth. Accepted: May 1978. Abstract: Two methods for the quantitative extraction of meiofauna from natural sandy sediments were investigated and compared: Cobb's decanting and sieving technique and the Oostenbrink elutriator. Both…

  7. Challenges in Managing Information Extraction

    Science.gov (United States)

    Shen, Warren H.

    2009-01-01

    This dissertation studies information extraction (IE), the problem of extracting structured information from unstructured data. Example IE tasks include extracting person names from news articles, product information from e-commerce Web pages, street addresses from emails, and names of emerging music bands from blogs. IE is an increasingly…

  8. Quantitative information in medical imaging

    International Nuclear Information System (INIS)

    Deconinck, F.

    1985-01-01

    When developing new imaging or image processing techniques, one constantly has in mind that the new technique should provide a better, or more optimal, answer to medical tasks than existing techniques do. 'Better' or 'more optimal' imply some kind of standard by which one can measure imaging or image processing performance. The choice of a particular imaging modality to answer a diagnostic task, such as the detection of coronary artery stenosis, is also based on an implicit optimization of performance criteria. Performance is measured by the ability to provide information about an object (patient) to the person (referring doctor) who ordered a particular task. In medical imaging the task is generally to find quantitative information on bodily function (biochemistry, physiology) and structure (histology, anatomy). In medical imaging, a wide range of techniques is available. Each technique has its own characteristics. The techniques discussed in this paper are: nuclear magnetic resonance, X-ray fluorescence, scintigraphy, positron emission tomography, applied potential tomography, computerized tomography, and Compton tomography. This paper provides a framework for the comparison of imaging performance, based on the way the quantitative information flow is altered by the characteristics of the modality.

  9. Scenario Customization for Information Extraction

    National Research Council Canada - National Science Library

    Yangarber, Roman

    2001-01-01

    Information Extraction (IE) is an emerging NLP technology, whose function is to process unstructured, natural language text, to locate specific pieces of information, or facts, in the text, and to use these facts to fill a database...

  10. Extracting useful information from images

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    2011-01-01

    The paper presents an overview of methods for extracting useful information from digital images. It covers various approaches that utilized different properties of images, like intensity distribution, spatial frequencies content and several others. A few case studies including isotropic and heter…

  11. The Quantitative Theory of Information

    DEFF Research Database (Denmark)

    Topsøe, Flemming; Harremoës, Peter

    2008-01-01

    Information Theory as developed by Shannon and followers is becoming more and more important in a number of sciences. The concepts appear to be just the right ones with intuitively appealing operational interpretations. Furthermore, the information theoretical quantities are connected by powerful...

  12. Quantitative data extraction from transmission electron micrographs

    International Nuclear Information System (INIS)

    Sprague, J.A.

    1982-01-01

    The discussion will cover an overview of quantitative TEM, the digital image analysis process, coherent optical processing, and finally a summary of the author's views on potentially useful advances in TEM image processing

  13. Extracting information from multiplex networks

    Science.gov (United States)

    Iacovacci, Jacopo; Bianconi, Ginestra

    2016-06-01

    Multiplex networks are generalized network structures that are able to describe networks in which the same set of nodes are connected by links that have different connotations. Multiplex networks are ubiquitous since they describe social, financial, engineering, and biological networks as well. Extending our ability to analyze complex networks to multiplex network structures increases greatly the level of information that is possible to extract from big data. For these reasons, characterizing the centrality of nodes in multiplex networks and finding new ways to solve challenging inference problems defined on multiplex networks are fundamental questions of network science. In this paper, we discuss the relevance of the Multiplex PageRank algorithm for measuring the centrality of nodes in multilayer networks and we characterize the utility of the recently introduced indicator function Θ̃ˢ for describing their mesoscale organization and community structure. As working examples for studying these measures, we consider three multiplex network datasets coming from social science.
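
    A simplified sketch of the multiplex idea on toy data (a "multiplicative" variant, not necessarily the paper's exact Multiplex PageRank): centrality earned in one layer biases the teleportation of the random walk on the other layer.

```python
import numpy as np

def pagerank(A, teleport, d=0.85, iters=200):
    """Power iteration with a personalized teleport vector."""
    rowsum = A.sum(axis=1, keepdims=True)
    T = np.divide(A, rowsum, out=np.zeros_like(A), where=rowsum > 0)
    p = teleport / teleport.sum()
    x = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(iters):
        x = d * T.T @ x + (1 - d) * p
    return x

# Two layers over the same four nodes (e.g., two interaction platforms).
L1 = np.array([[0,1,1,0],[1,0,1,0],[0,1,0,1],[1,0,0,0]], float)
L2 = np.array([[0,0,1,1],[0,0,0,1],[1,0,0,0],[0,1,1,0]], float)

x1 = pagerank(L1, teleport=np.ones(4))
x2 = pagerank(L2, teleport=x1)      # layer-1 centrality biases the layer-2 walk
print(np.round(x2, 3))
```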

  14. Transductive Pattern Learning for Information Extraction

    National Research Council Canada - National Science Library

    McLernon, Brian; Kushmerick, Nicholas

    2006-01-01

    … We present TPLEX, a semi-supervised learning algorithm for information extraction that can acquire extraction patterns from a small amount of labelled text in conjunction with a large amount of unlabelled text…

  15. Optimized protein extraction for quantitative proteomics of yeasts.

    Directory of Open Access Journals (Sweden)

    Tobias von der Haar

    2007-10-01

    The absolute quantification of intracellular protein levels is technically demanding, but has recently become more prominent because novel approaches like systems biology and metabolic control analysis require knowledge of these parameters. Current protocols for the extraction of proteins from yeast cells are likely to introduce artifacts into quantification procedures because of incomplete or selective extraction. We have developed a novel procedure for protein extraction from S. cerevisiae based on chemical lysis and simultaneous solubilization in SDS and urea, which can extract the great majority of proteins to apparent completeness. The procedure can be used for different Saccharomyces yeast species and varying growth conditions, is suitable for high-throughput extraction in a 96-well format, and the resulting extracts can easily be post-processed for use in non-SDS compatible procedures like 2D gel electrophoresis. An improved method for quantitative protein extraction has been developed that removes some of the sources of artifacts in quantitative proteomics experiments, while at the same time allowing novel types of applications.

  16. Information Extraction for Social Media

    NARCIS (Netherlands)

    Habib, M. B.; Keulen, M. van

    2014-01-01

    The rapid growth in IT in the last two decades has led to a growth in the amount of information available online. A new style for sharing information is social media, a continuously and instantly updated source of information. In this position paper, we propose a framework for…

  17. Information Extraction From Chemical Patents

    Directory of Open Access Journals (Sweden)

    Sandra Bergmann

    2012-01-01

    The development of new chemicals or pharmaceuticals is preceded by an in-depth analysis of published patents in this field. This information retrieval is a costly and time-inefficient step when done by a human reader, yet it is mandatory for the potential success of an investment. The goal of the research project UIMA-HPC is to automate and hence speed up the process of knowledge mining about patents. Multi-threaded analysis engines, developed according to UIMA (Unstructured Information Management Architecture) standards, process texts and images in thousands of documents in parallel. UNICORE (UNiform Interface to COmputing Resources) workflow control structures make it possible to dynamically allocate resources for every given task to gain the best CPU-time/realtime ratios in an HPC environment.

  18. Quantitative immunoelectrophoretic analysis of extract from cow hair and dander

    Energy Technology Data Exchange (ETDEWEB)

    Prahl, P; Weeke, B; Loewenstein, H [Rigshospitalet, Copenhagen (Denmark)

    1978-01-01

    Quantitative immunoelectrophoresis used for the analysis of a dialysed, centrifuged and freeze-dried extract from cow hair and dander revealed 17 antigens. Five of these were identified as serum proteins. Partial identity to antigens of serum and extract from hair and dander of goat, sheep, swine, horse, dog, cat, and guinea pig, and to antigens of house dust was demonstrated. Sera from 36 patients with manifest allergy to cow hair and dander selected on the basis of case history, RAST, skin and provocation test, were examined in crossed radioimmunoelectrophoresis (CRIE); sera from five persons with high serum IgE, but without allergy to cow hair and dander, and sera from five normal individuals were controls. 31/36 of the sera contained IgE with specific affinity for two of the antigens of the extract. Further, two major and six minor allergens were identified. The control sera showed no specific IgE binding. A significant positive correlation was found between RAST and CRIE for the first group of patients. The approximate molecular weights of the four major allergens obtained by means of gel chromatography were: 2.4 × 10⁴, 2 × 10⁴, 2 × 10⁵ dalton, respectively. Using Con-A and Con-A Sepharose in crossed immunoaffinoelectrophoresis, eight of the antigens were revealed to contain groups with affinity for Con-A.

  19. Quantitative immunoelectrophoretic analysis of extract from cow hair and dander

    International Nuclear Information System (INIS)

    Prahl, P.; Weeke, B.; Loewenstein, H.

    1978-01-01

    Quantitative immunoelectrophoresis used for the analysis of a dialysed, centrifuged and freeze-dried extract from cow hair and dander revealed 17 antigens. Five of these were identified as serum proteins. Partial identity to antigens of serum and extract from hair and dander of goat, sheep, swine, horse, dog, cat, and guinea pig, and to antigens of house dust was demonstrated. Sera from 36 patients with manifest allergy to cow hair and dander selected on the basis of case history, RAST, skin and provocation test, were examined in crossed radioimmunoelectrophoresis (CRIE); sera from five persons with high serum IgE, but without allergy to cow hair and dander, and sera from five normal individuals were controls. 31/36 of the sera contained IgE with specific affinity for two of the antigens of the extract. Further, two major and six minor allergens were identified. The control sera showed no specific IgE binding. A significant positive correlation was found between RAST and CRIE for the first group of patients. The approximate molecular weights of the four major allergens obtained by means of gel chromatography were: 2.4 × 10⁴, 2 × 10⁴, 2 × 10⁵ dalton, respectively. Using Con-A and Con-A Sepharose in crossed immunoaffinoelectrophoresis, eight of the antigens were revealed to contain groups with affinity for Con-A. (author)

  20. Extracting Information from Multimedia Meeting Collections

    OpenAIRE

    Gatica-Perez, Daniel; Zhang, Dong; Bengio, Samy

    2005-01-01

    Multimedia meeting collections, composed of unedited audio and video streams, handwritten notes, slides, and electronic documents that jointly constitute a raw record of complex human interaction processes in the workplace, have attracted interest due to the increasing feasibility of recording them in large quantities, to the opportunities for information access and retrieval applications derived from the automatic extraction of relevant meeting information, and to the challenges that the extraction…

  1. DKIE: Open Source Information Extraction for Danish

    DEFF Research Database (Denmark)

    Derczynski, Leon; Field, Camilla Vilhelmsen; Bøgh, Kenneth Sejdenfaden

    2014-01-01

    Danish is a major Scandinavian language spoken daily by around six million people. However, it lacks a unified, open set of NLP tools. This demonstration will introduce DKIE, an extensible open-source toolkit for processing Danish text. We implement an information extraction architecture for Danish...

  2. Visible light scatter as quantitative information source on milk constituents

    DEFF Research Database (Denmark)

    Melentiyeva, Anastasiya; Kucheryavskiy, Sergey; Bogomolov, Andrey

    2012-01-01

    Fat and protein are two major milk nutrients that are routinely analyzed in the dairy industry. Growing food quality requirements promote the dissemination of spectroscopic analysis, enabling real… analysis. The main task here is to extract individual quantitative information on milk fat and total protein content from spectral data. This is a particularly challenging problem in the case of raw natural milk, where the fat globule sizes may essentially differ depending on source. A … designed set of raw milk samples with simultaneously varying fat, total protein and particle size distribution has been analyzed in the Vis spectral region. The feasibility of raw milk analysis by PLS regression on spectral data has been proved. The root mean-square errors below 0.10% and 0.04% for fat… (Fig. 1. Spots of light…)
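
    A hedged sketch of a PLS calibration of the kind described, using synthetic stand-in spectra (the component count, sample sizes, and data are illustrative assumptions):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_samples, n_wavelengths = 60, 200
fat = rng.uniform(1.0, 6.0, n_samples)                 # % fat, assumed range
base = np.exp(-((np.arange(n_wavelengths) - 100) / 40.0)**2)   # toy spectral shape
X = np.outer(fat, base) + rng.normal(0, 0.05, (n_samples, n_wavelengths))

pls = PLSRegression(n_components=3)
fat_hat = cross_val_predict(pls, X, fat, cv=10).ravel()
rmse = np.sqrt(np.mean((fat_hat - fat)**2))
print(f"cross-validated RMSE: {rmse:.3f} % fat")
```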

  3. Unsupervised information extraction by text segmentation

    CERN Document Server

    Cortez, Eli

    2013-01-01

    A new unsupervised approach to the problem of Information Extraction by Text Segmentation (IETS) is proposed, implemented and evaluated herein. The authors' approach relies on information available on pre-existing data to learn how to associate segments in the input string with attributes of a given domain relying on a very effective set of content-based features. The effectiveness of the content-based features is also exploited to directly learn from test data structure-based features, with no previous human-driven training, a feature unique to the presented approach. Based on the approach, a…

  4. Extracting the information backbone in online system.

    Science.gov (United States)

    Zhang, Qian-Ming; Zeng, An; Shang, Ming-Sheng

    2013-01-01

    Information overload is a serious problem in modern society and many solutions such as recommender systems have been proposed to filter out irrelevant information. In the literature, researchers have been mainly dedicated to improving the recommendation performance (accuracy and diversity) of the algorithms while they have overlooked the influence of the topology of the online user-object bipartite networks. In this paper, we find that some information provided by the bipartite networks is not only redundant but also misleading. With such "less can be more" feature, we design some algorithms to improve the recommendation performance by eliminating some links from the original networks. Moreover, we propose a hybrid method combining the time-aware and topology-aware link removal algorithms to extract the backbone which contains the essential information for the recommender systems. From the practical point of view, our method can improve the performance and reduce the computational time of the recommendation system, thus improving both their effectiveness and efficiency.
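
    A hedged sketch of the time-aware part of this idea on a toy bipartite edge list: keep only each user's most recent links as the "backbone". The data and retained fraction are assumptions, not the paper's parameters.

```python
from collections import defaultdict

links = [  # (user, object, timestamp)
    ("u1", "o1", 1), ("u1", "o2", 5), ("u1", "o3", 9),
    ("u2", "o2", 2), ("u2", "o3", 3), ("u2", "o4", 8),
    ("u3", "o1", 4), ("u3", "o4", 7),
]

def time_aware_backbone(links, keep_fraction=0.5):
    by_user = defaultdict(list)
    for user, obj, t in links:
        by_user[user].append((t, obj))
    backbone = []
    for user, items in by_user.items():
        items.sort(reverse=True)                 # newest links first
        k = max(1, int(len(items) * keep_fraction))
        backbone += [(user, obj, t) for t, obj in items[:k]]
    return backbone

print(time_aware_backbone(links))
```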

  5. Extraction of bioliquid and quantitative determination of saturates ...

    African Journals Online (AJOL)

    Bioliquid generated from the leaves of banana (Musa sapientum) through anaerobic fungal degradation was obtained by Soxhlet extraction using absolute methanol as solvent at 60 °C for 72 hours. The bioliquid extracted was recovered from the extracting solvent by evaporation using a rotary evaporator. The extract obtained ...

  6. Markovian Processes for Quantitative Information Leakage

    DEFF Research Database (Denmark)

    Biondi, Fabrizio

    Quantification of information leakage is a successful approach for evaluating the security of a system. It models the system to be analyzed as a channel with the secret as the input and an output observable by the attacker as the output, and applies information theory to quantify the amount of information transmitted through such channel, thus effectively quantifying how many bits of the secret can be inferred by the attacker by analyzing the system's output. Channels are usually encoded as matrices of conditional probabilities, known as channel matrices. Such matrices grow exponentially… We show how to model deterministic and randomized processes with Markovian models and to compute their information leakage for a very general model of attacker. We present the QUAIL tool that automates such analysis and is able to compute the information leakage of an imperative WHILE language. Finally, we show how to use QUAIL to analyze some…
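
    A worked sketch of the channel-matrix view described above: Shannon leakage as the mutual information I(S;O) between secret and observable, computed for a small assumed channel and a uniform prior.

```python
import numpy as np

def shannon_leakage(channel, prior):
    """channel[s, o] = P(o | s); returns I(S;O) in bits."""
    joint = prior[:, None] * channel          # P(s, o)
    p_o = joint.sum(axis=0)                   # P(o)
    ratio = np.where(joint > 0, joint / (prior[:, None] * p_o[None, :]), 1.0)
    return float((joint * np.log2(ratio)).sum())

# Toy 1-bit secret observed through a noisy channel (10% flip probability).
channel = np.array([[0.9, 0.1],
                    [0.1, 0.9]])
prior = np.array([0.5, 0.5])
print(f"leakage: {shannon_leakage(channel, prior):.3f} bits")  # about 0.531
```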

  7. Information extraction from muon radiography data

    International Nuclear Information System (INIS)

    Borozdin, K.N.; Asaki, T.J.; Chartrand, R.; Hengartner, N.W.; Hogan, G.E.; Morris, C.L.; Priedhorsky, W.C.; Schirato, R.C.; Schultz, L.J.; Sottile, M.J.; Vixie, K.R.; Wohlberg, B.E.; Blanpied, G.

    2004-01-01

    Scattering muon radiography was proposed recently as a technique for the detection and 3-d imaging of dense high-Z objects. High-energy cosmic ray muons are deflected in matter in the process of multiple Coulomb scattering. By measuring the deflection angles we are able to reconstruct the configuration of high-Z material in the object. We discuss the methods for information extraction from muon radiography data. Tomographic methods widely used in medical imaging have been applied to a specific muon radiography information source. An alternative simple technique based on counting highly scattered muons in the voxels seems to be efficient in many simulated scenes. SVM-based classifiers and clustering algorithms may allow detection of a compact high-Z object without full image reconstruction. The efficiency of muon radiography can be increased using additional information sources, such as momentum estimation, stopping power measurement, and detection of muonic atom emission.

  8. Toward 3D structural information from quantitative electron exit wave analysis

    International Nuclear Information System (INIS)

    Borisenko, Konstantin B; Moldovan, Grigore; Kirkland, Angus I; Wang, Amy; Van Dyck, Dirk; Chen, Fu-Rong

    2012-01-01

    Simulations show that using a new direct imaging detector and accurate exit wave restoration algorithms allows nearly quantitative restoration of electron exit wave phase, which can be regarded as only qualitative for conventional indirect imaging cameras. This opens up a possibility of extracting accurate information on 3D atomic structure of the sample even from a single projection.

  9. MR urography: Anatomical and quantitative information on ...

    African Journals Online (AJOL)

    Background and Aim: Magnetic resonance urography (MRU) is considered to be the next step in uroradiology. This technique combines superb anatomical images and functional information in a single test. In this article, we aim to present the topic of MRU in children and how it has been implemented in Northern Greece so ...

  10. Extracting the information backbone in online system.

    Directory of Open Access Journals (Sweden)

    Qian-Ming Zhang

    Information overload is a serious problem in modern society and many solutions such as recommender systems have been proposed to filter out irrelevant information. In the literature, researchers have been mainly dedicated to improving the recommendation performance (accuracy and diversity) of the algorithms while they have overlooked the influence of the topology of the online user-object bipartite networks. In this paper, we find that some information provided by the bipartite networks is not only redundant but also misleading. With such "less can be more" feature, we design some algorithms to improve the recommendation performance by eliminating some links from the original networks. Moreover, we propose a hybrid method combining the time-aware and topology-aware link removal algorithms to extract the backbone which contains the essential information for the recommender systems. From the practical point of view, our method can improve the performance and reduce the computational time of the recommendation system, thus improving both their effectiveness and efficiency.

  11. Extracting the Information Backbone in Online System

    Science.gov (United States)

    Zhang, Qian-Ming; Zeng, An; Shang, Ming-Sheng

    2013-01-01

    Information overload is a serious problem in modern society and many solutions such as recommender systems have been proposed to filter out irrelevant information. In the literature, researchers have been mainly dedicated to improving the recommendation performance (accuracy and diversity) of the algorithms while they have overlooked the influence of the topology of the online user-object bipartite networks. In this paper, we find that some information provided by the bipartite networks is not only redundant but also misleading. With such “less can be more” feature, we design some algorithms to improve the recommendation performance by eliminating some links from the original networks. Moreover, we propose a hybrid method combining the time-aware and topology-aware link removal algorithms to extract the backbone which contains the essential information for the recommender systems. From the practical point of view, our method can improve the performance and reduce the computational time of the recommendation system, thus improving both their effectiveness and efficiency. PMID:23690946

  12. Quantitative Information Flow as Safety and Liveness Hyperproperties

    Directory of Open Access Journals (Sweden)

    Hirotoshi Yasuoka

    2012-07-01

    We employ Clarkson and Schneider's "hyperproperties" to classify various verification problems of quantitative information flow. The results of this paper unify and extend the previous results on the hardness of checking and inferring quantitative information flow. In particular, we identify a subclass of liveness hyperproperties, which we call "k-observable hyperproperties", that can be checked relative to a reachability oracle via self composition.

  13. Chaotic spectra: How to extract dynamic information

    International Nuclear Information System (INIS)

    Taylor, H.S.; Gomez Llorente, J.M.; Zakrzewski, J.; Kulander, K.C.

    1988-10-01

    Nonlinear dynamics is applied to chaotic unassignable atomic and molecular spectra with the aim of extracting detailed information about regular dynamic motions that exist over short intervals of time. It is shown how this motion can be extracted from high resolution spectra by doing low resolution studies or by Fourier transforming limited regions of the spectrum. These motions mimic those of periodic orbits (PO) and are inserts into the dominant chaotic motion. Considering these inserts and the PO as a dynamically decoupled region of space, resonant scattering theory and stabilization methods enable us to compute ladders of resonant states which interact with the chaotic quasi-continuum computed in principle from basis sets placed off the PO. The interaction of the resonances with the quasicontinuum explains the low resolution spectra seen in such experiments. It also allows one to associate low resolution features with a particular PO. The motion on the PO thereby supplies the molecular movements whose quantization causes the low resolution spectra. Characteristic properties of the periodic orbit based resonances are discussed. The method is illustrated on the photoabsorption spectrum of the hydrogen atom in a strong magnetic field and on the photodissociation spectrum of H₃⁺. Other molecular systems which are currently under investigation using this formalism are also mentioned. 53 refs., 10 figs., 2 tabs
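
    A toy sketch of the "Fourier transform a limited region of the spectrum" step mentioned above: a regular level progression hidden among random lines produces a recurrence peak at the conjugate time 2π/ΔE. All spectra here are synthetic assumptions.

```python
import numpy as np

# Synthetic stick spectrum: a regular ladder with spacing dE (the "periodic
# orbit" progression) buried among random chaotic lines.
dE = 0.05
energies = np.concatenate([np.arange(0.0, 2.0, dE),
                           np.random.default_rng(2).uniform(0.0, 2.0, 100)])

# Fourier transform only a limited region of the spectrum.
lo, hi = 0.25, 1.75
E_win = energies[(energies >= lo) & (energies <= hi)]
t = np.linspace(0.0, 200.0, 4000)          # conjugate "time" axis
C = np.abs(np.exp(1j * np.outer(t, E_win)).sum(axis=1))

i0 = np.searchsorted(t, 20.0)              # skip the large peak near t = 0
t_peak = t[i0 + np.argmax(C[i0:])]
print(f"recurrence time ~ {t_peak:.1f}; expected 2*pi/dE = {2*np.pi/dE:.1f}")
```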

  14. Extraction of quantifiable information from complex systems

    CERN Document Server

    Dahmen, Wolfgang; Griebel, Michael; Hackbusch, Wolfgang; Ritter, Klaus; Schneider, Reinhold; Schwab, Christoph; Yserentant, Harry

    2014-01-01

    In April 2007, the Deutsche Forschungsgemeinschaft (DFG) approved the Priority Program 1324 “Mathematical Methods for Extracting Quantifiable Information from Complex Systems.” This volume presents a comprehensive overview of the most important results obtained over the course of the program. Mathematical models of complex systems provide the foundation for further technological developments in science, engineering and computational finance. Motivated by the trend toward steadily increasing computer power, ever more realistic models have been developed in recent years. These models have also become increasingly complex, and their numerical treatment poses serious challenges. Recent developments in mathematics suggest that, in the long run, much more powerful numerical solution strategies could be derived if the interconnections between the different fields of research were systematically exploited at a conceptual level. Accordingly, a deeper understanding of the mathematical foundations as w…

  15. Extraction of temporal information in functional MRI

    Science.gov (United States)

    Singh, M.; Sungkarat, W.; Jeong, Jeong-Won; Zhou, Yongxia

    2002-10-01

    The temporal resolution of functional MRI (fMRI) is limited by the shape of the haemodynamic response function (hrf) and the vascular architecture underlying the activated regions. Typically, the temporal resolution of fMRI is on the order of 1 s. We have developed a new data processing approach to extract temporal information on a pixel-by-pixel basis at the level of 100 ms from fMRI data. Instead of correlating or fitting the time-course of each pixel to a single reference function, which is the common practice in fMRI, we correlate each pixel's time-course to a series of reference functions that are shifted with respect to each other by 100 ms. The reference function yielding the highest correlation coefficient for a pixel is then used as a time marker for that pixel. A Monte Carlo simulation and experimental study of this approach were performed to estimate the temporal resolution as a function of signal-to-noise ratio (SNR) in the time-course of a pixel. Assuming a known and stationary hrf, the simulation and experimental studies suggest a lower limit in the temporal resolution of approximately 100 ms at an SNR of 3. The multireference function approach was also applied to extract timing information from an event-related motor movement study where the subjects flexed a finger on cue. The event was repeated 19 times with the event's presentation staggered to yield an approximately 100-ms temporal sampling of the haemodynamic response over the entire presentation cycle. The timing differences among different regions of the brain activated by the motor task were clearly visualized and quantified by this method. The results suggest that it is possible to achieve a temporal resolution of ~200 ms in practice with this approach.
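
    A minimal sketch of the multireference idea: correlate a pixel's time-course against a bank of reference functions shifted in 100 ms steps and take the best-matching shift as that pixel's time marker. The haemodynamic response model and timings below are illustrative assumptions.

```python
import numpy as np

TR = 0.1                                   # effective temporal sampling [s], assumed
t = np.arange(0.0, 30.0, TR)

def hrf(t, onset):                         # simple gamma-like response, assumed shape
    s = np.clip(t - onset, 0.0, None)
    return s**5 * np.exp(-s) / 120.0

rng = np.random.default_rng(3)
pixel = hrf(t, onset=0.7) + rng.normal(0, 0.05, t.size)   # true onset: 0.7 s

# Bank of references shifted by 100 ms; the best correlation marks the pixel.
shifts = np.arange(0.0, 2.0, 0.1)
corrs = [np.corrcoef(pixel, hrf(t, s))[0, 1] for s in shifts]
print(f"estimated onset: {shifts[np.argmax(corrs)]:.1f} s")
```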

  16. Optical Aperture Synthesis Object's Information Extracting Based on Wavelet Denoising

    International Nuclear Information System (INIS)

    Fan, W J; Lu, Y

    2006-01-01

    Wavelet denoising is studied to improve the extraction of Fourier information about OAS (optical aperture synthesis) objects. Translation-invariant wavelet denoising based on Donoho's wavelet soft-threshold denoising is investigated to remove pseudo-Gibbs artifacts from wavelet soft-threshold images. OAS object information extraction based on translation-invariant wavelet denoising is then studied. The study shows that wavelet threshold denoising can improve the precision and repeatability of object information extraction from interferograms, and that translation-invariant wavelet denoising extracts information better than plain soft-threshold wavelet denoising.
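
    A hedged 1D sketch of translation-invariant (cycle-spinning) soft-threshold denoising of the kind described, using PyWavelets; the wavelet choice, threshold, and test signal are assumptions.

```python
import numpy as np
import pywt

def ti_denoise(signal, wavelet="db4", thr=0.3, shifts=16):
    """Cycle spinning: denoise every circular shift, then average."""
    out = np.zeros_like(signal)
    for s in range(shifts):
        coeffs = pywt.wavedec(np.roll(signal, s), wavelet)
        coeffs = [coeffs[0]] + [pywt.threshold(c, thr, "soft") for c in coeffs[1:]]
        rec = pywt.waverec(coeffs, wavelet)[: signal.size]
        out += np.roll(rec, -s)
    return out / shifts        # averaging suppresses pseudo-Gibbs oscillations

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 512)
clean = np.sign(np.sin(2 * np.pi * 3 * t))         # blocky test signal
noisy = clean + rng.normal(0, 0.2, t.size)
print(f"residual std: {np.std(ti_denoise(noisy) - clean):.3f} (noise was 0.200)")
```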

  17. Amines as extracting agents for the quantitative determinations of actinides in biological samples

    International Nuclear Information System (INIS)

    Singh, N.P.

    1987-01-01

    The use of amines (primary, secondary and tertiary amines and quaternary ammonium salts) as extracting agents for the quantitative determination of actinides in biological samples is reviewed. Among the primary amines, only Primene JM-T is used to determine Pu in urine and bone. No one has investigated the possibility of using secondary amines to quantitatively extract actinides from biological samples. Among the tertiary amines, tri-n-octylamine, tri-iso-octylamine, tricaprylamine (Alamine) and trilaurylamine (tridodecylamine) are used extensively to extract and separate the actinides from biological samples. Only one quaternary ammonium salt, methyltricapryl ammonium chloride (Aliquat-336), is used to extract Pu from biological samples. (author) 28 refs.

  18. Extraction and quantitation of furanic compounds dissolved in oils

    International Nuclear Information System (INIS)

    Koreh, O.; Torkos, K.; Mahara, M.B.; Borossay, J.

    1998-01-01

    Furans are amongst the decomposition products generated by the degradation of cellulose in paper. Paper insulation is used in capacitors, cables and transformers. These furans dissolve in the impregnating mineral oil, and a method involving liquid/liquid extraction, solid phase extraction and high performance liquid chromatography has been developed to determine the concentration in oil of 2-furfural, the most stable of these compounds. The degradation of paper is being examined in order to find a correlation between the change in dielectric and mechanical properties and the increase in concentration of 2-furfural in the oil. (author)

  19. Extraction and Quantitative HPLC Analysis of Coumarin in Hydroalcoholic Extracts of Mikania glomerata Spreng. ("guaco") Leaves

    Directory of Open Access Journals (Sweden)

    Celeghini Renata M. S.

    2001-01-01

    Methods for the preparation of hydroalcoholic extracts of "guaco" (Mikania glomerata Spreng.) leaves were compared: maceration, maceration under sonication, infusion and supercritical fluid extraction. Evaluation of these methods showed that maceration under sonication gave the best results when considering the ratio of extraction yield to extraction time. A high performance liquid chromatography (HPLC) procedure for the determination of coumarin in these hydroalcoholic extracts of "guaco" leaves is described. The HPLC method is shown to be sensitive and reproducible.

  20. Quantitative Analysis of Tenofovir by Titrimetric, Extractive Ion-pair ...

    African Journals Online (AJOL)

    Methods: Tenofovir disoproxil forms a complex of 1:1 molar ratio with fumaric acid that was employed in its aqueous titration with sodium hydroxide. Non-aqueous titration was also employed for its determination. Extractive ion-pair spectrophotometric technique using methyl orange was similarly employed to evaluate ...

  1. Respiratory Information Extraction from Electrocardiogram Signals

    KAUST Repository

    Amin, Gamal El Din Fathy

    2010-12-01

    The Electrocardiogram (ECG) is a tool measuring the electrical activity of the heart, and it is extensively used for diagnosis and monitoring of heart diseases. The ECG signal reflects not only the heart activity but also many other physiological processes. The respiratory activity is a prominent process that affects the ECG signal due to the close proximity of the heart and the lungs. In this thesis, several methods for the extraction of respiratory process information from the ECG signal are presented. These methods allow an estimation of the lung volume and the lung pressure from the ECG signal. The potential benefit of this is to eliminate the corresponding sensors used to measure the respiration activity. A reduction of the number of sensors connected to patients will increase patients’ comfort and reduce the costs associated with healthcare. As a further result, the efficiency of diagnosing respiratory disorders will increase since the respiration activity can be monitored with a common, widely available method. The developed methods can also improve the detection of respiratory disorders that occur while patients are sleeping. Such disorders are commonly diagnosed in sleeping laboratories where the patients are connected to a number of different sensors. Any reduction of these sensors will result in a more natural sleeping environment for the patients and hence a higher sensitivity of the diagnosis.
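
    One common ECG-derived-respiration idea, sketched here as an illustration (not necessarily the thesis's method): respiration modulates R-peak amplitudes, so the beat-to-beat amplitude series traces the breathing cycle. All signals are synthetic.

```python
import numpy as np

fs = 250                                        # sampling rate [Hz], assumed
t = np.arange(0.0, 60.0, 1.0 / fs)
resp = 0.2 * np.sin(2 * np.pi * 0.25 * t)       # ground truth: 15 breaths/min

# Synthetic ECG: one R-peak per second, amplitude-modulated by respiration.
ecg = np.zeros_like(t)
beats = np.arange(0, t.size, fs)                # a real ECG needs a peak detector
ecg[beats] = 1.0 + resp[beats]

# "Extraction": read the beat amplitude series as a respiration surrogate.
edr = ecg[beats] - ecg[beats].mean()            # one sample per second
spec = np.abs(np.fft.rfft(edr))
f = np.fft.rfftfreq(edr.size, d=1.0)
print(f"estimated rate: {f[1 + np.argmax(spec[1:])] * 60:.0f} breaths/min")
```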

  2. Optimization and automation of quantitative NMR data extraction.

    Science.gov (United States)

    Bernstein, Michael A; Sýkora, Stan; Peng, Chen; Barba, Agustín; Cobas, Carlos

    2013-06-18

    NMR is routinely used to quantitate chemical species. The necessary experimental procedures to acquire quantitative data are well-known, but relatively little attention has been applied to data processing and analysis. We describe here a robust expert system that can be used to automatically choose the best signals in a sample for overall concentration determination and determine analyte concentration using all accepted methods. The algorithm is based on the complete deconvolution of the spectrum, which makes it tolerant of cases where signals are very close to one another, and includes robust methods for the automatic classification of NMR resonances and molecule-to-spectrum multiplet assignment. With the functionality in place and optimized, it is then a relatively simple matter to apply the same workflow to data in a fully automatic way. The procedure is desirable for both its inherent performance and applicability to NMR data acquired for very large sample sets.
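
    For reference, a worked sketch of the internal-standard relation that underlies qNMR concentration determination (the expert system above automates signal selection around it); all numbers are illustrative:

```python
# C_analyte = C_std * (I_analyte / N_analyte) / (I_std / N_std)
# where I = integrated peak area and N = number of nuclei behind the signal.
I_analyte, N_analyte = 2.70, 3        # e.g. a CH3 singlet; assumed values
I_std, N_std = 1.00, 1                # internal standard signal
C_std = 10.0                          # internal standard concentration [mM]

C_analyte = C_std * (I_analyte / N_analyte) / (I_std / N_std)
print(f"analyte concentration ~ {C_analyte:.1f} mM")   # 9.0 mM
```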

  3. The Limitations of Quantitative Social Science for Informing Public Policy

    Science.gov (United States)

    Jerrim, John; de Vries, Robert

    2017-01-01

    Quantitative social science (QSS) has the potential to make an important contribution to public policy. However it also has a number of limitations. The aim of this paper is to explain these limitations to a non-specialist audience and to identify a number of ways in which QSS research could be improved to better inform public policy.

  4. Sample-based XPath Ranking for Web Information Extraction

    NARCIS (Netherlands)

    Jundt, Oliver; van Keulen, Maurice

    Web information extraction typically relies on a wrapper, i.e., program code or a configuration that specifies how to extract some information from web pages at a specific website. Manually creating and maintaining wrappers is a cumbersome and error-prone task. It may even be prohibitive as some…

  5. A chemical profiling strategy for semi-quantitative analysis of flavonoids in Ginkgo extracts.

    Science.gov (United States)

    Yang, Jing; Wang, An-Qi; Li, Xue-Jing; Fan, Xue; Yin, Shan-Shan; Lan, Ke

    2016-05-10

    Flavonoid analysis in herbal products is challenged by their vast chemical diversity. This work aimed to develop a chemical profiling strategy for the semi-quantification of flavonoids, using extracts of Ginkgo biloba L. (EGB) as an example. The strategy was based on the principle that flavonoids in EGB have an almost equivalent molecular absorption coefficient at a fixed wavelength. As a result, the molar contents of flavonoids could be semi-quantitatively determined from the molecular-concentration calibration curves of common standards and recalculated as mass contents with the characterized molecular weight (MW). Twenty batches of EGB were subjected to HPLC-UV/DAD/MS fingerprinting analysis to test the feasibility and reliability of this strategy. The flavonoid peaks were distinguished from the other peaks with principal component analysis and Pearson correlation analysis of the normalized UV spectrometric dataset. Each flavonoid peak was subsequently tentatively identified by the MS data to ascertain its MW. It was highlighted that the flavonoid absorption at Band-II (240-280 nm) was more suitable for the semi-quantification purpose because it varies less than that at Band-I (300-380 nm). The semi-quantification was therefore conducted at 254 nm. Beyond the qualitative comparison results acquired by common chemical profiling techniques, the semi-quantitative approach presented detailed compositional information on the flavonoids in EGB and demonstrated how the adulteration of one batch was achieved. The developed strategy is believed to be useful for the advanced analysis of herbal extracts with a high flavonoid content without laborious identification and isolation of individual components. Copyright © 2016 Elsevier B.V. All rights reserved.
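
    A hedged sketch of the semi-quantification logic described above: a molar calibration curve of one common standard at 254 nm gives molar concentrations for every flavonoid peak, which are then converted to mass contents with each peak's MW from the MS data. Calibration values and peak data are illustrative assumptions.

```python
import numpy as np

# Molar calibration of a common standard (peak area vs. umol/mL) at 254 nm.
conc = np.array([0.05, 0.1, 0.2, 0.4])        # umol/mL
area = np.array([125., 252., 499., 1003.])    # mAU*s
slope, intercept = np.polyfit(conc, area, 1)

# Unknown flavonoid peaks: (area at 254 nm, MW assigned from MS data).
peaks = [(310.0, 756.7), (145.0, 610.5)]      # hypothetical glycoside peaks
for a, mw in peaks:
    c_molar = (a - intercept) / slope          # umol/mL
    c_mass = c_molar * mw                      # ug/mL (umol/mL x g/mol)
    print(f"peak MW {mw}: {c_molar:.3f} umol/mL ~ {c_mass:.1f} ug/mL")
```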

  6. The Agent of extracting Internet Information with Lead Order

    Science.gov (United States)

    Mo, Zan; Huang, Chuliang; Liu, Aijun

    In order to carry out e-commerce better, advanced technologies for accessing business information are urgently needed. An agent is described that deals with the problems of extracting internet information caused by the non-standard and inconsistent structure of Chinese websites. The agent comprises three modules, each responsible for one stage of the extraction process. A method based on an HTTP tree and a kind of Lead algorithm is proposed to generate a lead order, with which the required web pages can be retrieved easily. How to transform the extracted natural-language information into structured form is also discussed.

  7. A quantitative approach to modeling the information processing of NPP operators under input information overload

    International Nuclear Information System (INIS)

    Kim, Jong Hyun; Seong, Poong Hyun

    2002-01-01

    This paper proposes a quantitative approach to modeling the information processing of NPP operators. The aim of this work is to derive the amount of information processed during a certain control task under input information overload. We first develop a multi-stage information processing model that captures the information flow. The uncertainty of the information is then quantified using Conant's model, an information-theoretic approach. We also investigate the applicability of this approach to quantifying the information reduction of operators under input information overload.

  8. Quantitative approaches to information recovery from black holes

    Energy Technology Data Exchange (ETDEWEB)

    Balasubramanian, Vijay [David Rittenhouse Laboratory, University of Pennsylvania, 209 South 33rd Street, Philadelphia, PA 19104 (United States); Czech, Bartlomiej, E-mail: vijay@physics.upenn.edu, E-mail: czech@phas.ubc.ca [Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1 (Canada)

    2011-08-21

    The evaporation of black holes into apparently thermal radiation poses a serious conundrum for theoretical physics: at face value, it appears that in the presence of a black hole, quantum evolution is non-unitary and destroys information. This information loss paradox has its seed in the presence of a horizon causally separating the interior and asymptotic regions in a black hole spacetime. A quantitative resolution of the paradox could take several forms: (a) a precise argument that the underlying quantum theory is unitary, and that information loss must be an artifact of approximations in the derivation of black hole evaporation, (b) an explicit construction showing how information can be recovered by the asymptotic observer, (c) a demonstration that the causal disconnection of the black hole interior from infinity is an artifact of the semiclassical approximation. This review summarizes progress on all these fronts. (topical review)

  9. Extracting quantitative structural parameters for disordered polymers from neutron scattering data

    International Nuclear Information System (INIS)

    Rosi-Schwartz, B.; Mitchell, G.R.

    1995-01-01

    The organization of non-crystalline polymeric materials at a local level, namely on a spatial scale between a few and 100 Å, is still unclear in many respects. The determination of the local structure in terms of the configuration and conformation of the polymer chain and of the packing characteristics of the chain in the bulk material represents a challenging problem. Data from wide-angle diffraction experiments are very difficult to interpret due to the very large amount of information that they carry, that is the large number of correlations present in the diffraction patterns. We describe new approaches that permit a detailed analysis of the complex neutron diffraction patterns characterizing polymer melts and glasses. The coupling of different computer modelling strategies with neutron scattering data over a wide Q range allows the extraction of detailed quantitative information on the structural arrangements of the materials of interest. Proceeding from modelling routes as diverse as force field calculations, single-chain modelling and reverse Monte Carlo, we show the successes and pitfalls of each approach in describing model systems, which illustrate the need to attack the data analysis problem simultaneously from several fronts. ((orig.))
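
    A minimal sketch of one reverse Monte Carlo (RMC) move of the kind referenced above: perturb an atom, recompute the model curve, and accept or reject by the change in chi-squared against the scattering data. The "scattering" model and all parameters here are toy assumptions, not a physical structure refinement.

```python
import numpy as np

rng = np.random.default_rng(5)
q = np.linspace(0.5, 10, 50)
target = np.sin(q) / q                        # stand-in for a measured S(q)

positions = rng.uniform(0, 10, (20, 3))       # toy 20-atom configuration

def model_curve(pos):
    """Debye-like curve from pairwise distances (illustrative only)."""
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    d = d[np.triu_indices(len(pos), k=1)]
    return np.array([np.mean(np.sin(qi * d) / (qi * d)) for qi in q])

def chi2(pos):
    return np.sum((model_curve(pos) - target)**2)

c_old = chi2(positions)
for step in range(200):
    trial = positions.copy()
    trial[rng.integers(len(trial))] += rng.normal(0, 0.1, 3)  # move one atom
    c_new = chi2(trial)
    # Accept downhill moves always; uphill moves with Metropolis probability.
    if c_new < c_old or rng.random() < np.exp((c_old - c_new) / 0.01):
        positions, c_old = trial, c_new
print(f"final chi2: {c_old:.4f}")
```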

  10. Cause Information Extraction from Financial Articles Concerning Business Performance

    Science.gov (United States)

    Sakai, Hiroyuki; Masuyama, Shigeru

    We propose a method of extracting cause information from Japanese financial articles concerning business performance. Our method acquires cause information, e.g. "zidousya no uriage ga koutyou" ("Sales of cars were good"). Cause information is useful for investors in selecting companies in which to invest. Our method extracts cause information in the form of causal expressions by using statistical information and initial clue expressions automatically. Our method can extract causal expressions without predetermined patterns or complex hand-crafted rules, and is expected to be applicable to other tasks for acquiring phrases that have a particular meaning, not limited to cause information. We compared our method with our previous one, originally proposed for extracting phrases concerning traffic accident causes, and experimental results showed that our new method outperforms the previous one.

  11. [Quantitative variation of polysaccharides and alcohol-soluble extracts in F1 generation of Dendrobium officinale].

    Science.gov (United States)

    Zhang, Xiao-Ling; Liu, Jing-Jing; Wu, Ling-Shang; Si, Jin-Ping; Guo, Ying-Ying; Yu, Jie; Wang, Lin-Hua

    2013-11-01

    Using the phenol-sulfuric acid method and the hot-dip method for alcohol-soluble extracts, the contents of polysaccharides and alcohol-soluble extracts in 11 F1 generations of Dendrobium officinale were determined. The results showed that the polysaccharide contents in samples collected in May and February were 32.89%-43.07% and 25.77%-35.25%, respectively, while the extract contents were 2.81%-4.85% and 7.90%-17.40%, respectively. They were significantly different among families. The content of polysaccharides in offspring could be significantly improved by hybridization between parents with low and high polysaccharide contents, and the hybrid vigor was obvious. Cross breeding is therefore an effective way to breed new varieties with higher polysaccharide contents. Harvest time significantly affected the contents of polysaccharides and alcohol-soluble extracts. The contents of polysaccharides in families collected in May were higher than in families collected in February, but the extract contents showed the opposite variation. The extent of quantitative variation of polysaccharides and alcohol-soluble extracts differed among families, each family following its own pattern. Studying the quantitative accumulation patterns of polysaccharides and alcohol-soluble extracts in superior families (varieties) of D. officinale, in order to determine the best harvesting time, would be valuable for exploiting these varieties and increasing their effectiveness.

  12. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    Science.gov (United States)

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing special subject information depends on this extraction. On the basis of WorldView-2 high-resolution data and an optimal-segmentation-parameters method for object-oriented image segmentation and high-resolution image information extraction, the following processes were conducted in this study. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and the combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  13. Can we replace curation with information extraction software?

    Science.gov (United States)

    Karp, Peter D

    2016-01-01

    Can we use programs for automated or semi-automated information extraction from scientific texts as practical alternatives to professional curation? I show that error rates of current information extraction programs are too high to replace professional curation today. Furthermore, current IE programs extract single narrow slivers of information, such as individual protein interactions; they cannot extract the large breadth of information extracted by professional curators for databases such as EcoCyc. They also cannot arbitrate among conflicting statements in the literature as curators can. Therefore, funding agencies should not hobble the curation efforts of existing databases on the assumption that a problem that has stymied Artificial Intelligence researchers for more than 60 years will be solved tomorrow. Semi-automated extraction techniques appear to have significantly more potential based on a review of recent tools that enhance curator productivity. But a full cost-benefit analysis for these tools is lacking. Without such analysis it is possible to expend significant effort developing information-extraction tools that automate small parts of the overall curation workflow without achieving a significant decrease in curation costs. © The Author(s) 2016. Published by Oxford University Press.

  14. Mining knowledge from text repositories using information extraction ...

    Indian Academy of Sciences (India)

    Keywords: information extraction (IE); text mining; text repositories; knowledge discovery from text. … general-purpose English words. However … of precision and recall, as extensive experimentation is required due to the lack of public tagged corpora.

  15. Mars Target Encyclopedia: Information Extraction for Planetary Science

    Science.gov (United States)

    Wagstaff, K. L.; Francis, R.; Gowda, T.; Lu, Y.; Riloff, E.; Singh, K.

    2017-06-01

    Mars surface targets / and published compositions / Seek and ye will find. We used text mining methods to extract information from LPSC abstracts about the composition of Mars surface targets. Users can search by element, mineral, or target.

  16. Extracting quantitative three-dimensional unsteady flow direction from tuft flow visualizations

    Energy Technology Data Exchange (ETDEWEB)

    Omata, Noriyasu; Shirayama, Susumu, E-mail: omata@nakl.t.u-tokyo.ac.jp, E-mail: sirayama@sys.t.u-tokyo.ac.jp [Department of Systems Innovation, School of Engineering, The University of Tokyo, Hongo 7-3-1, Bunkyo-ku, Tokyo, 113-8656 (Japan)

    2017-10-15

    We focus on the qualitative but widely used method of tuft flow visualization, and propose a method for quantifying it using information technology. By applying stereo image processing and computer vision, the three-dimensional (3D) flow direction in a real environment can be obtained quantitatively. In addition, we show that the flow can be divided temporally by performing appropriate machine learning on the data. Acquisition of flow information in real environments is important for design development, but it is generally considered difficult to apply simulations or quantitative experiments to such environments. Hence, qualitative methods including the tuft method are still in use today. Although attempts have been made previously to quantify such methods, it has not been possible to acquire 3D information. Furthermore, even if quantitative data could be acquired, analysis was often performed empirically or qualitatively. In contrast, we show that our method can acquire 3D information and analyze the measured data quantitatively. (paper)
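
    A heavily idealized sketch of the pipeline described above: triangulate tuft base and tip from a rectified stereo pair, form 3D direction vectors, and let a simple clustering split the frames into flow states (standing in for the paper's machine learning step). Camera geometry and data are synthetic assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

f, B = 800.0, 0.1        # focal length [px] and stereo baseline [m], assumed

def triangulate(xl, xr, y):
    """Rectified stereo: disparity -> depth, then back-project to 3D."""
    Z = f * B / (xl - xr)
    return np.array([xl * Z / f, y * Z / f, Z])

rng = np.random.default_rng(6)
directions = []
for k in range(200):                      # 200 video frames
    # The tuft flips between two dominant states, plus pixel noise.
    tip_off = 30.0 if (k // 50) % 2 == 0 else -25.0
    base_l, base_r, base_y = 400.0, 380.0, 300.0
    tip_l = base_l + tip_off + rng.normal(0, 2)
    tip_r = base_r + tip_off + rng.normal(0, 2)
    p0 = triangulate(base_l, base_r, base_y)
    p1 = triangulate(tip_l, tip_r, base_y + 40.0)
    v = p1 - p0
    directions.append(v / np.linalg.norm(v))

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(directions)
print("frames per flow state:", np.bincount(labels))
```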

  17. Extracting quantitative three-dimensional unsteady flow direction from tuft flow visualizations

    International Nuclear Information System (INIS)

    Omata, Noriyasu; Shirayama, Susumu

    2017-01-01

    We focus on the qualitative but widely used method of tuft flow visualization, and propose a method for quantifying it using information technology. By applying stereo image processing and computer vision, the three-dimensional (3D) flow direction in a real environment can be obtained quantitatively. In addition, we show that the flow can be divided temporally by performing appropriate machine learning on the data. Acquisition of flow information in real environments is important for design development, but it is generally considered difficult to apply simulations or quantitative experiments to such environments. Hence, qualitative methods including the tuft method are still in use today. Although attempts have been made previously to quantify such methods, it has not been possible to acquire 3D information. Furthermore, even if quantitative data could be acquired, analysis was often performed empirically or qualitatively. In contrast, we show that our method can acquire 3D information and analyze the measured data quantitatively. (paper)

  18. Integrating Information Extraction Agents into a Tourism Recommender System

    Science.gov (United States)

    Esparcia, Sergio; Sánchez-Anguix, Víctor; Argente, Estefanía; García-Fornes, Ana; Julián, Vicente

    Recommender systems face several problems. On the one hand, their information needs to be kept up to date, which can be a costly task if it is not performed automatically. On the other hand, it may be worthwhile to include third-party services in the recommendations, since they improve recommendation quality. In this paper, we present an add-on for the Social-Net Tourism Recommender System that uses information extraction and natural language processing techniques in order to automatically extract and classify information from the Web. Its goal is to keep the system updated and to obtain information about third-party services that are not offered by service providers inside the system.

  19. Addressing Information Proliferation: Applications of Information Extraction and Text Mining

    Science.gov (United States)

    Li, Jingjing

    2013-01-01

    The advent of the Internet and the ever-increasing capacity of storage media have made it easy to store, deliver, and share enormous volumes of data, leading to a proliferation of information on the Web, in online libraries, on news wires, and almost everywhere in our daily lives. Since our ability to process and absorb this information remains…

  20. Extracting quantitative measures from EAP: a small clinical study using BFOR.

    Science.gov (United States)

    Hosseinbor, A Pasha; Chung, Moo K; Wu, Yu-Chien; Fleming, John O; Field, Aaron S; Alexander, Andrew L

    2012-01-01

    The ensemble average propagator (EAP) describes the 3D average diffusion process of water molecules, capturing both its radial and angular contents, and hence providing rich information about complex tissue microstructure properties. Bessel Fourier orientation reconstruction (BFOR) is one of several analytical, non-Cartesian EAP reconstruction schemes employing multiple shell acquisitions that have recently been proposed. Such modeling bases have not yet been fully exploited in the extraction of rotationally invariant q-space indices that describe the degree of diffusion anisotropy/restrictivity. Such quantitative measures include the zero-displacement probability (P(o)), mean squared displacement (MSD), q-space inverse variance (QIV), and generalized fractional anisotropy (GFA), and all are simply scalar features of the EAP. In this study, a general relationship between MSD and q-space diffusion signal is derived and an EAP-based definition of GFA is introduced. A significant part of the paper is dedicated to utilizing BFOR in a clinical dataset, comprised of 5 multiple sclerosis (MS) patients and 4 healthy controls, to estimate P(o), MSD, QIV, and GFA of corpus callosum, and specifically, to see if such indices can detect changes between normal appearing white matter (NAWM) and healthy white matter (WM). Although the sample size is small, this study is a proof of concept that can be extended to larger sample sizes in the future.
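    For orientation, two of the cited scalar indices have compact standard definitions; the following uses the conventional q-space notation, and the EAP-based GFA introduced in the paper is an analogue of Tuch's ODF-based formula shown here:

    $$P_o = P(\mathbf{0}), \qquad \mathrm{MSD} = \int_{\mathbb{R}^3} P(\mathbf{R})\,\lVert\mathbf{R}\rVert^{2}\, d\mathbf{R},$$

    $$\mathrm{GFA} = \frac{\operatorname{std}(\psi)}{\operatorname{rms}(\psi)} = \sqrt{\frac{n\sum_{i=1}^{n}\left(\psi_i-\bar{\psi}\right)^{2}}{(n-1)\sum_{i=1}^{n}\psi_i^{2}}},$$

    where $P(\mathbf{R})$ is the EAP and $\psi_1,\dots,\psi_n$ are samples of the orientation distribution function with mean $\bar{\psi}$.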

  1. Information extraction from multi-institutional radiology reports.

    Science.gov (United States)

    Hassanpour, Saeed; Langlotz, Curtis P

    2016-01-01

    The radiology report is the most important source of clinical imaging information. It documents critical information about the patient's health and the radiologist's interpretation of medical findings. It also communicates information to the referring physicians and records that information for future clinical and research use. Although efforts to structure some radiology report information through predefined templates are beginning to bear fruit, a large portion of radiology report information is entered in free text. The free text format is a major obstacle for rapid extraction and subsequent use of information by clinicians, researchers, and healthcare information systems. This difficulty is due to the ambiguity and subtlety of natural language, complexity of described images, and variations among different radiologists and healthcare organizations. As a result, radiology reports are used only once by the clinician who ordered the study and rarely are used again for research and data mining. In this work, machine learning techniques and a large multi-institutional radiology report repository are used to extract the semantics of the radiology report and overcome the barriers to the re-use of radiology report information in clinical research and other healthcare applications. We describe a machine learning system to annotate radiology reports and extract report contents according to an information model. This information model covers the majority of clinically significant contents in radiology reports and is applicable to a wide variety of radiology study types. Our automated approach uses discriminative sequence classifiers for named-entity recognition to extract and organize clinically significant terms and phrases consistent with the information model. We evaluated our information extraction system on 150 radiology reports from three major healthcare organizations and compared its results to a commonly used non-machine learning information extraction method.

  2. Fine-grained information extraction from German transthoracic echocardiography reports.

    Science.gov (United States)

    Toepfer, Martin; Corovic, Hamo; Fette, Georg; Klügl, Peter; Störk, Stefan; Puppe, Frank

    2015-11-12

    Information extraction techniques that get structured representations out of unstructured data make a large amount of clinically relevant information about patients accessible for semantic applications. These methods typically rely on standardized terminologies that guide this process. Many languages and clinical domains, however, lack appropriate resources and tools, as well as evaluations of their applications, especially if detailed conceptualizations of the domain are required. For instance, German transthoracic echocardiography reports have not been targeted sufficiently before, despite their importance for clinical trials. This work therefore aimed at the development and evaluation of an information extraction component with a fine-grained terminology that enables recognition of almost all relevant information stated in German transthoracic echocardiography reports at the University Hospital of Würzburg. A domain expert validated and iteratively refined an automatically inferred base terminology. The terminology was used by an ontology-driven information extraction system that outputs attribute-value pairs. The final component has been mapped to the central elements of a standardized terminology, and it has been evaluated on documents with different layouts. The final system achieved state-of-the-art precision (micro average .996) and recall (micro average .961) on 100 test documents that represent more than 90% of all reports. In particular, principal aspects as defined in a standardized external terminology were recognized with f1 = .989 (micro average) and f1 = .963 (macro average). As a result of keyword matching and restraint concept extraction, the system obtained high precision also on unstructured or exceptionally short documents, and on documents with uncommon layouts. The developed terminology and the proposed information extraction system allow the extraction of fine-grained information from German semi-structured transthoracic echocardiography reports.

  3. Extraction of Information of Audio-Visual Contents

    Directory of Open Access Journals (Sweden)

    Carlos Aguilar

    2011-10-01

    In this article we show how it is possible to use Channel Theory (Barwise and Seligman, 1997) for modeling the process of information extraction realized by audiences of audio-visual contents. To do this, we rely on the concepts proposed by Channel Theory and, especially, its treatment of representational systems. We then show how the information that an agent is capable of extracting from the content depends on the number of channels he is able to establish between the content and the set of classifications he is able to discriminate. The agent can attempt to extract information through these channels from the totality of the content; however, we discuss the advantages of extracting from its constituents in order to obtain a greater number of informational items that represent it. After showing how the extraction process proceeds for each channel, we propose a method for representing all the informative values an agent can obtain from a content, using a matrix constituted by the channels the agent is able to establish on the content (source classifications) and the ones he can understand as individual (destination classifications). We finally show how this representation allows reflecting the evolution of the informative items through the evolution of the audio-visual content.

  4. Simultaneous extraction and quantitation of several bioactive amines in cheese and chocolate.

    Science.gov (United States)

    Baker, G B; Wong, J T; Coutts, R T; Pasutto, F M

    1987-04-17

    A method is described for simultaneous extraction and quantitation of the amines 2-phenylethylamine, tele-methylhistamine, histamine, tryptamine, m- and p-tyramine, 3-methoxytyramine, 5-hydroxytryptamine, cadaverine, putrescine, spermidine and spermine. This method is based on extractive derivatization of the amines with a perfluoroacylating agent, pentafluorobenzoyl chloride, under basic aqueous conditions. Analysis was done on a gas chromatograph equipped with an electron-capture detector and a capillary column system. The procedure is relatively rapid and provides derivatives with good chromatographic properties. Its application to analysis of the above amines in cheese and chocolate products is described.

  5. Semantic Information Extraction of Lanes Based on Onboard Camera Videos

    Science.gov (United States)

    Tang, L.; Deng, T.; Ren, C.

    2018-04-01

    In the field of autonomous driving, semantic information of lanes is very important. This paper proposes a method of automatic detection of lanes and extraction of semantic information from onboard camera videos. The proposed method firstly detects the edges of lanes by the grayscale gradient direction, and improves the Probabilistic Hough transform to fit them; then, it uses the vanishing point principle to calculate the lane geometrical position, and uses lane characteristics to extract lane semantic information by the classification of decision trees. In the experiment, 216 road video images captured by a camera mounted onboard a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
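    As a rough illustration of the edge-detection and line-fitting steps, the sketch below uses OpenCV's Canny detector and probabilistic Hough transform; the input file name, thresholds, and the naive two-segment vanishing-point estimate are made-up stand-ins for the paper's tuned pipeline.

    ```python
    # Rough illustration of lane edge detection plus probabilistic Hough line
    # fitting; all file names and thresholds are hypothetical.
    import numpy as np
    import cv2

    frame = cv2.imread("road_frame.png")             # hypothetical onboard frame
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                 # lane edge candidates

    # Probabilistic Hough transform fits line segments to the edge map.
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                               minLineLength=60, maxLineGap=20)

    def intersect(s1, s2):
        """Intersection of two segments extended to infinite lines."""
        x1, y1, x2, y2 = s1
        x3, y3, x4, y4 = s2
        d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        if d == 0:
            return None                              # parallel lines
        px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
        py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
        return px, py

    if segments is not None and len(segments) >= 2:
        # The vanishing point is where the dominant lane borders meet.
        print("vanishing point ~", intersect(segments[0][0], segments[1][0]))
    ```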

  6. The development of quantitative determination method of organic acids in complex poly herbal extraction

    Directory of Open Access Journals (Sweden)

    I. L. Dyachok

    2016-08-01

    Aim. To develop a sensitive, economical and rapid method for the quantitative determination of organic acids, calculated as isovaleric acid, in a complex poly-herbal extract with the use of digital technologies. Materials and methods. A model complex poly-herbal extract of sedative action was chosen as the research object. The extract is composed of these medicinal plants: Valeriana officinalis L., Crataégus, Melissa officinalis L., Hypericum, Mentha piperita L., Húmulus lúpulus, Viburnum. Based on the chemical composition of the plant components, we consider that the main pharmacologically active compounds in the complex poly-herbal extract are: polyphenolic substances (flavonoids), which are contained in Crataégus, Viburnum, Hypericum, Mentha piperita L. and Húmulus lúpulus; organic acids, including isovaleric acid, which are contained in Valeriana officinalis L., Mentha piperita L., Melissa officinalis L. and Viburnum; and amino acids, which are contained in Valeriana officinalis L. For the determination of organic acids present at low concentration we applied an instrumental method of analysis, namely conductometric titration, which relates the conductivity of an aqueous solution of the complex poly-herbal extract to its organic acid content. Result. The analytical dependences obtained, which describe the tangent lines to the conductometric curve before and after the point of equivalence, allow the volume of solution expended on titration to be determined and the quantitative determination of organic acids to be carried out in digital mode. Conclusion. The proposed method enables the point of equivalence to be determined and the quantitative determination of organic acids, calculated as isovaleric acid, to be carried out with the use of digital technologies, which allows the method as a whole to be computerized.
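    A minimal numerical sketch of the tangent-line procedure described in the results: fit straight lines to the conductivity readings before and after the conductivity minimum and take their intersection as the equivalence volume. All titration data below are invented.

    ```python
    # Tangent-line determination of the equivalence point; data are invented.
    import numpy as np

    vol = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])            # mL titrant
    cond = np.array([1.80, 1.62, 1.45, 1.28, 1.12, 1.21, 1.33, 1.44, 1.56])  # mS/cm

    pre = vol < 2.2                                # branch before equivalence
    m1, b1 = np.polyfit(vol[pre], cond[pre], 1)    # tangent line before
    m2, b2 = np.polyfit(vol[~pre], cond[~pre], 1)  # tangent line after

    v_eq = (b2 - b1) / (m1 - m2)                   # intersection of the two lines
    print(f"equivalence point at {v_eq:.2f} mL of titrant")
    # v_eq, the titrant concentration, and the molar mass of isovaleric acid
    # then yield the organic acid content expressed as isovaleric acid.
    ```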

  7. Application of crown ethers to selective extraction and quantitative analysis of technetium 99, iodine 129 and cesium 135 in effluents

    International Nuclear Information System (INIS)

    Paviet, P.

    1992-01-01

    Properties of crown ethers are first recalled. Extraction of technetium 99 is then studied in actual radioactive effluents; quantitative analysis is carried out by liquid scintillation, with correction for tritium interference. Iodine 129 is extracted from radioactive effluents and determined by gamma spectrometry. Finally, cesium 135 is extracted and determined by thermal ionization mass spectrometry.

  8. Knowledge Dictionary for Information Extraction on the Arabic Text Data

    Directory of Open Access Journals (Sweden)

    Wahyu Jauharis Saputra

    2013-04-01

    Information extraction is an early stage of a process of textual data analysis. Information extraction is required to get information from textual data that can be used for subsequent analysis, such as classification and categorization. Textual data is strongly influenced by its language. Arabic is gaining significant attention in many studies because the Arabic language is very different from others and, in contrast to other languages, tools and research on Arabic are still lacking. The information extracted using a knowledge dictionary is a concept of expression. A knowledge dictionary is usually constructed manually by an expert, which takes a long time and is specific to one problem only. This paper proposes a method for automatically building a knowledge dictionary. The dictionary is formed by clustering sentences having the same concept, on the assumption that they will have a high similarity value. The concepts that have been extracted can be used as features for subsequent computational processes such as classification or categorization. The dataset used in this paper was an Arabic text dataset. The extraction results were tested by using a decision tree classification engine; the highest precision value obtained was 71.0% and the highest recall value was 75.0%.
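    The clustering idea (sentences sharing a concept are assumed to have high similarity) can be sketched as follows. TF-IDF cosine similarity and the fixed threshold are assumptions, since the paper's exact similarity measure is not given, and toy English sentences stand in for the Arabic dataset.

    ```python
    # Greedy clustering of sentences by similarity; each cluster becomes one
    # dictionary concept. Similarity measure and threshold are assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    sentences = [
        "the price of the product increased sharply",
        "product prices have increased this year",
        "the team won the final match",
    ]
    sim = cosine_similarity(TfidfVectorizer().fit_transform(sentences))

    threshold, clusters, assigned = 0.3, [], set()
    for i in range(len(sentences)):
        if i in assigned:
            continue
        group = [j for j in range(len(sentences))
                 if j not in assigned and sim[i, j] >= threshold]
        assigned.update(group)
        clusters.append(group)
    print(clusters)
    ```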

  9. Quantitative analysis of perfumes in talcum powder by using headspace sorptive extraction.

    Science.gov (United States)

    Ng, Khim Hui; Heng, Audrey; Osborne, Murray

    2012-03-01

    Quantitative analysis of perfume dosage in talcum powder has been a challenge due to interference of the matrix and has so far not been widely reported. In this study, headspace sorptive extraction (HSSE) was validated as a solventless sample preparation method for the extraction and enrichment of perfume raw materials from talcum powder. Sample enrichment is performed on a thick film of poly(dimethylsiloxane) (PDMS) coated onto a magnetic stir bar incorporated in a glass jacket. Sampling is done by placing the PDMS stir bar in the headspace vial by using a holder. The stir bar is then thermally desorbed online with capillary gas chromatography-mass spectrometry. The HSSE method is based on the same principles as headspace solid-phase microextraction (HS-SPME). Nevertheless, a relatively larger amount of extracting phase is coated on the stir bar as compared to SPME. Sample amount and extraction time were optimized in this study. The method has shown good repeatability (with relative standard deviation no higher than 12.5%) and excellent linearity with correlation coefficients above 0.99 for all analytes. The method was also successfully applied in the quantitative analysis of talcum powder spiked with perfume at different dosages.

  10. Ontology-Based Information Extraction for Business Intelligence

    Science.gov (United States)

    Saggion, Horacio; Funk, Adam; Maynard, Diana; Bontcheva, Kalina

    Business Intelligence (BI) requires the acquisition and aggregation of key pieces of knowledge from multiple sources in order to provide valuable information to customers or feed statistical BI models and tools. The massive amount of information available to business analysts makes information extraction and other natural language processing tools key enablers for the acquisition and use of that semantic information. We describe the application of ontology-based extraction and merging in the context of a practical e-business application for the EU MUSING Project where the goal is to gather international company intelligence and country/region information. The results of our experiments so far are very promising and we are now in the process of building a complete end-to-end solution.

  11. Quantitative measurement of cerebral oxygen extraction fraction using MRI in patients with MELAS.

    Directory of Open Access Journals (Sweden)

    Lei Yu

    OBJECTIVE: To quantify the cerebral OEF at different phases of stroke-like episodes in patients with mitochondrial myopathy, encephalopathy, lactic acidosis, and stroke-like episodes (MELAS) by using MRI. METHODS: We recruited 32 patients with MELAS confirmed by gene analysis. Conventional MRI scanning, as well as functional MRI including arterial spin labeling and oxygen extraction fraction imaging, was undertaken to obtain the pathological and metabolic information of the brains at different stages of stroke-like episodes in patients. A total of 16 MRI examinations at the acute and subacute phase and 19 examinations at the interictal phase were performed. In addition, 24 healthy volunteers were recruited as control subjects. Six regions of interest were placed in the anterior, middle, and posterior parts of the bilateral hemispheres to measure the OEF of the brain or the lesions. RESULTS: OEF was reduced significantly in brains of patients at both the acute and subacute phase (0.266 ± 0.026) and at the interictal phase (0.295 ± 0.009), compared with normal controls (0.316 ± 0.025). In the brains at the acute and subacute phase of the episode, 13 ROIs were prescribed on the stroke-like lesions, which showed decreased OEF compared with the contralateral spared brain regions. Increased blood flow was revealed in the stroke-like lesions at the acute and subacute phase, which was confined to the lesions. CONCLUSION: MRI can quantitatively show changes in OEF at different phases of stroke-like episodes. The utilization of oxygen in the brain seems to be reduced more severely after the onset of episodes in MELAS, especially for those brain tissues involved in the episodes.

  12. NAMED ENTITY RECOGNITION FROM BIOMEDICAL TEXT -AN INFORMATION EXTRACTION TASK

    Directory of Open Access Journals (Sweden)

    N. Kanya

    2016-07-01

    Biomedical text mining targets the extraction of significant information from biomedical archives. Bio-TM encompasses Information Retrieval (IR) and Information Extraction (IE). Information Retrieval retrieves the relevant biomedical literature documents from various repositories such as PubMed, MedLine etc., based on a search query. The IR process ends with the generation of a corpus of the relevant documents retrieved from the publication databases based on the query. The IE task includes preprocessing of the documents, Named Entity Recognition (NER) from the documents, and relationship extraction. This process draws on natural language processing, data mining techniques and machine learning algorithms. The preprocessing task includes tokenization, stop-word removal, shallow parsing, and part-of-speech tagging. The NER phase involves recognition of well-defined objects such as genes, proteins or cell lines. This leads to the next phase, the extraction of relationships. The work was based on the machine learning algorithm Conditional Random Fields (CRF).
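    A minimal BIO-scheme NER sketch follows, using the sklearn-crfsuite package as a stand-in CRF implementation (the record does not name a toolkit); the tokens, tags, and feature template are toy examples.

    ```python
    # Toy CRF-based NER in the BIO scheme; sklearn-crfsuite is an assumption,
    # not the authors' tool.
    import sklearn_crfsuite

    def token_features(sent, i):
        w = sent[i]
        return {
            "lower": w.lower(),
            "is_upper": w.isupper(),
            "is_title": w.istitle(),
            "suffix3": w[-3:],
            "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
            "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
        }

    train_sents = [["BRCA1", "regulates", "p53", "expression"]]
    train_tags = [["B-GENE", "O", "B-GENE", "O"]]

    X = [[token_features(s, i) for i in range(len(s))] for s in train_sents]
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
    crf.fit(X, train_tags)
    print(crf.predict(X))   # e.g. [['B-GENE', 'O', 'B-GENE', 'O']]
    ```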

  13. Phytochrome quantitation in crude extracts of Avena by enzyme-linked immunosorbent assay with monoclonal antibodies

    Energy Technology Data Exchange (ETDEWEB)

    Shimazaki, Y; Cordonnier, M M; Pratt, L H

    1983-01-01

    An enzyme-linked immunosorbent assay (ELISA), which uses both rabbit polyclonal and mouse monoclonal antibodies to phytochrome, has been adapted for quantitation of phytochrome in crude plant extracts. The assay has a detection limit of about 100 pg phytochrome and can be completed within 10 h. Quantitation of phytochrome in crude extracts of etiolated oat seedlings by ELISA gave values that agreed well with those obtained by spectrophotometric assay. When etiolated oat seedlings were irradiated continuously for 24 h, the amount of phytochrome detected by ELISA and by spectrophotometric assay decreased by more than 1000-fold and about 100-fold, respectively. This discrepancy indicates that phytochrome in light-treated plants may be antigenically distinct from that found in fully etiolated plants. When these light-grown oat seedlings were kept in darkness for 48 h, phytochrome content detected by ELISA increased by 50-fold in crude extracts of green oat shoots, but only about 12-fold in extracts of herbicide-treated oat shoots. Phytochrome reaccumulation in green oat shoots was initially more rapid in the more mature cells of the primary leaf tip than near the basal part of the shoot. The inhibitory effect of Norflurazon on phytochrome accumulation was much more evident near the leaf tip than the shoot base. A 5-min red irradiation of oat seedlings at the end of a 48-h dark period resulted in a subsequent, massive decrease in phytochrome content in crude extracts from both green and Norflurazon-bleached oat shoots. These observations eliminate the possibility that substantial accumulation of chromophore-free phytochrome was being detected and indicate that Norflurazon has a substantial effect on phytochrome accumulation during a prolonged dark period. 25 references, 9 figures, 3 tables.

  14. Extraction and quantitation of total cholesterol, dolichol and dolichyl phosphate from mammalian liver

    International Nuclear Information System (INIS)

    Crick, D.C.; Carroll, K.K.

    1987-01-01

    A procedure is described for the determination of total cholesterol, dolichol and dolichyl phosphate (Dol-P) in mammalian liver. It is based on extraction of these compounds into diethyl ether after alkaline saponification of the tissue. Extractability is affected by the length of saponification and the concentration of potassium hydroxide (KOH) in the saponification mixture. After extraction, total cholesterol and dolichol are quantitated directly by reverse-phase high pressure liquid chromatography (HPLC) on C18. Dol-P requires further purification before quantitation by HPLC; this is accomplished by chromatography on silicic acid. These methods gave recoveries of over 90% for cholesterol and dolichol and about 60% for Dol-P, using [4-14C]cholesterol, a polyprenol containing 15 isoprene units, and [1-14C]Dol-P as recovery standards. Concentrations of total cholesterol, dolichol and Dol-P in livers from one-month-old CBA mice were found to be 5.7 ± 0.7 mg/g, 66.3 ± 1.2 micrograms/g and 3.7 ± 0.3 micrograms/g, respectively.

  15. Semi-automatic building extraction in informal settlements from high-resolution satellite imagery

    Science.gov (United States)

    Mayunga, Selassie David

    The extraction of man-made features from digital remotely sensed images is considered an important step underpinning the management of human settlements in any country. Man-made features, and buildings in particular, are required for a variety of applications such as urban planning, creation of geographical information system (GIS) databases, and urban city models. Traditional man-made feature extraction methods are very expensive in terms of equipment, labour intensive, need well-trained personnel, and cannot cope with changing environments, particularly in dense urban settlement areas. This research presents an approach for extracting buildings in dense informal settlement areas using high-resolution satellite imagery. The proposed system uses a novel strategy of extracting a building by measuring a single point at the approximate centre of the building. The fine measurement of the building outline is then effected using a modified snake model. The original snake model on which this framework is based incorporates an external constraint energy term which is tailored to preserving the convergence properties of the snake model; its application to unstructured objects would negatively affect their actual shapes. The external constraint energy term was therefore removed from the original snake model formulation, giving the model the ability to cope with the high variability of building shapes in informal settlement areas. The proposed building extraction system was tested on two areas with different situations. The first area was Tungi in Dar es Salaam, Tanzania, where three sites were tested. This area is characterized by informal settlements, which are built illegally within the city boundaries. The second area was Oromocto in New Brunswick, Canada, where two sites were tested. The Oromocto area is mostly flat and the buildings are constructed using similar materials. Qualitative and quantitative measures were employed to evaluate the accuracy of the results as well as the performance
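    For reference, the classical snake functional of Kass, Witkin and Terzopoulos reads, in the usual notation,

    $$E_{\text{snake}}=\int_{0}^{1}\left[\tfrac{1}{2}\big(\alpha\,\lvert v_s(s)\rvert^{2}+\beta\,\lvert v_{ss}(s)\rvert^{2}\big)+E_{\text{img}}\big(v(s)\big)+E_{\text{con}}\big(v(s)\big)\right]ds,$$

    where $v(s)$ is the parametric contour, $v_s$ and $v_{ss}$ its first and second derivatives, and $\alpha$, $\beta$ the elasticity and rigidity weights. The modification described above amounts to dropping the external constraint term $E_{\text{con}}$, leaving the internal smoothness terms and the image energy free to adapt to highly variable building outlines.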

  16. A Two-Step Resume Information Extraction Algorithm

    Directory of Open Access Journals (Sweden)

    Jie Chen

    2018-01-01

    With the rapid growth of Internet-based recruiting, there are a great number of personal resumes among recruiting systems. To gain more attention from recruiters, most resumes are written in diverse formats, including varying font sizes, font colours, and table cells. However, this diversity of format is harmful to data mining tasks such as resume information extraction, automatic job matching, and candidate ranking. Supervised methods and rule-based methods have been proposed to extract facts from resumes, but they rely strongly on hierarchical structure information and large amounts of labelled data, which are hard to collect in reality. In this paper, we propose a two-step resume information extraction approach. In the first step, the raw text of a resume is segmented into different resume blocks. To achieve this, we design a novel feature, Writing Style, to model sentence syntax information; besides word and punctuation indices, word lexical attributes and the prediction results of classifiers are included in Writing Style. In the second step, multiple classifiers are employed to identify the different attributes of fact information in resumes. Experimental results on a real-world dataset show that the algorithm is feasible and effective.
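    A toy sketch of the two-step pipeline (block segmentation, then per-block attribute classification) follows. Plain TF-IDF features and scikit-learn models are stand-ins: the paper's Writing Style feature additionally encodes word index, punctuation index, lexical attributes, and upstream classifier predictions.

    ```python
    # Step 1: classify each raw-text line into a resume block type.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    lines = ["John Smith", "2010-2014 B.Sc. Computer Science", "Java, Python, SQL"]
    blocks = ["personal", "education", "skills"]
    block_clf = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(max_iter=1000),
    )
    block_clf.fit(lines, blocks)
    print(block_clf.predict(["2015-2019 M.Sc. Statistics"]))   # -> ['education']

    # Step 2 (same pattern, different labels): within a predicted block, further
    # classifiers tag fact attributes, e.g. degree vs. major inside "education".
    ```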

  17. Optimum detection for extracting maximum information from symmetric qubit sets

    International Nuclear Information System (INIS)

    Mizuno, Jun; Fujiwara, Mikio; Sasaki, Masahide; Akiba, Makoto; Kawanishi, Tetsuya; Barnett, Stephen M.

    2002-01-01

    We demonstrate a class of optimum detection strategies for extracting the maximum information from sets of equiprobable real symmetric qubit states of a single photon. These optimum strategies were predicted by Sasaki et al. [Phys. Rev. A 59, 3325 (1999)]. The peculiar aspect is that detection schemes with at least three outputs suffice for optimum extraction of information, regardless of the number of signal elements. The cases of ternary (or trine), quinary, and septenary polarization signals are studied, where a standard von Neumann detection (a projection onto a binary orthogonal basis) fails to access the maximum information. Our experiments demonstrate that it is possible with present technologies to attain about 96% of the theoretical limit.

  18. Extracting Semantic Information from Visual Data: A Survey

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2016-03-01

    The traditional environment maps built by mobile robots include both metric ones and topological ones. These maps are navigation-oriented and not adequate for service robots to interact with or serve human users, who normally rely on the conceptual knowledge or semantic contents of the environment. Therefore, the construction of semantic maps becomes necessary for building an effective human-robot interface for service robots. This paper reviews recent research and development in the field of visual-based semantic mapping. The main focus is placed on how to extract semantic information from visual data in terms of feature extraction, object/place recognition and semantic representation methods.

  19. Rapid automatic keyword extraction for information retrieval and analysis

    Science.gov (United States)

    Rose, Stuart J [Richland, WA]; Cowley, Wendy E [Richland, WA]; Crow, Vernon L [Richland, WA]; Cramer, Nicholas O [Richland, WA]

    2012-03-06

    Methods and systems for rapid automatic keyword extraction for information retrieval and analysis. Embodiments can include parsing words in an individual document by delimiters, stop words, or both in order to identify candidate keywords. Word scores for each word within the candidate keywords are then calculated based on a function of co-occurrence degree, co-occurrence frequency, or both. Based on a function of the word scores for words within the candidate keyword, a keyword score is calculated for each of the candidate keywords. A portion of the candidate keywords are then extracted as keywords based, at least in part, on the candidate keywords having the highest keyword scores.
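    The scoring scheme in this record is concrete enough to sketch directly: candidate keywords are maximal runs of content words, each word is scored over within-phrase co-occurrences (degree/frequency is one of the score functions the record allows), and a candidate's score is the sum of its word scores. The stop-word list below is a toy subset.

    ```python
    # Minimal keyword-extraction sketch following the scheme described above.
    import re
    from collections import defaultdict

    STOP = {"a", "an", "the", "of", "for", "and", "or", "is", "are", "in", "over"}

    def extract_keywords(text, top_k=3):
        words = re.findall(r"[a-z']+", text.lower())
        phrases, cur = [], []
        for w in words:                      # split candidates on stop words
            if w in STOP:
                if cur:
                    phrases.append(cur)
                cur = []
            else:
                cur.append(w)
        if cur:
            phrases.append(cur)

        freq, degree = defaultdict(int), defaultdict(int)
        for p in phrases:
            for w in p:
                freq[w] += 1
                degree[w] += len(p)          # co-occurrence degree within phrase
        word_score = {w: degree[w] / freq[w] for w in freq}

        scored = [(sum(word_score[w] for w in p), " ".join(p)) for p in phrases]
        return sorted(scored, reverse=True)[:top_k]

    print(extract_keywords("Compatibility of systems of linear constraints "
                           "over the set of natural numbers"))
    ```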

  20. Quantitative Structure-Relative Volatility Relationship Model for Extractive Distillation of Ethylbenzene/p-Xylene Mixtures: Application to Binary and Ternary Mixtures as Extractive Agents

    International Nuclear Information System (INIS)

    Kang, Young-Mook; Oh, Kyunghwan; You, Hwan; No, Kyoung Tai; Jeon, Yukwon; Shul, Yong-Gun; Hwang, Sung Bo; Shin, Hyun Kil; Kim, Min Sung; Kim, Namseok; Son, Hyoungjun; Chu, Young Hwan; Cho, Kwang-Hwi

    2016-01-01

    Ethylbenzene (EB) and p-xylene (PX) are important chemicals for the production of industrial materials; accordingly, their efficient separation is desired, even though the difference in their boiling points is very small. This paper describes the efforts toward the identification of high-performance extractive agents for EB and PX separation by distillation. Most high-performance extractive agents contain halogen atoms, which present health hazards and are corrosive to distillation plates. To avoid this disadvantage of extractive agents, we developed a quantitative structure-relative volatility relationship (QSRVR) model for designing safe extractive agents. We have previously developed and reported QSRVR models for single extractive agents. In this study, we introduce extended QSRVR models for binary and ternary extractive agents. The QSRVR models accurately predict the relative volatilities of binary and ternary extractive agents. The service to predict the relative volatility for binary and ternary extractive agents is freely available from the Internet at http://qsrvr.opengsi.org/.

  1. Quantitative Structure-Relative Volatility Relationship Model for Extractive Distillation of Ethylbenzene/p-Xylene Mixtures: Application to Binary and Ternary Mixtures as Extractive Agents

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Young-Mook; Oh, Kyunghwan; You, Hwan; No, Kyoung Tai [Bioinformatics and Molecular Design Research Center, Seoul (Korea, Republic of); Jeon, Yukwon; Shul, Yong-Gun; Hwang, Sung Bo; Shin, Hyun Kil; Kim, Min Sung; Kim, Namseok; Son, Hyoungjun [Yonsei University, Seoul (Korea, Republic of); Chu, Young Hwan [Sangji University, Wonju (Korea, Republic of); Cho, Kwang-Hwi [Soongsil University, Seoul (Korea, Republic of)

    2016-04-15

    Ethylbenzene (EB) and p-xylene (PX) are important chemicals for the production of industrial materials; accordingly, their efficient separation is desired, even though the difference in their boiling points is very small. This paper describes the efforts toward the identification of high-performance extractive agents for EB and PX separation by distillation. Most high-performance extractive agents contain halogen atoms, which present health hazards and are corrosive to distillation plates. To avoid this disadvantage of extractive agents, we developed a quantitative structure-relative volatility relationship (QSRVR) model for designing safe extractive agents. We have previously developed and reported QSRVR models for single extractive agents. In this study, we introduce extended QSRVR models for binary and ternary extractive agents. The QSRVR models accurately predict the relative volatilities of binary and ternary extractive agents. The service to predict the relative volatility for binary and ternary extractive agents is freely available from the Internet at http://qsrvr.opengsi.org/.

  2. EXTRACTION AND QUANTITATIVE ANALYSIS OF ELEMENTAL SULFUR FROM SULFIDE MINERAL SURFACES BY HIGH-PERFORMANCE LIQUID CHROMATOGRAPHY. (R826189)

    Science.gov (United States)

    A simple method for the quantitative determination of elemental sulfur on oxidized sulfide minerals is described. Extraction of elemental sulfur in perchloroethylene and subsequent analysis with high-performance liquid chromatography were used to ascertain the total elemental ...

  3. Quantitative Model for Economic Analyses of Information Security Investment in an Enterprise Information System

    Directory of Open Access Journals (Sweden)

    Bojanc Rok

    2012-11-01

    The paper presents a mathematical model for the optimal security-technology investment evaluation and decision-making processes based on the quantitative analysis of security risks and digital asset assessments in an enterprise. The model makes use of the quantitative analysis of different security measures that counteract individual risks by identifying the information system processes in an enterprise and the potential threats. The model comprises the target security levels for all identified business processes and the probability of a security accident together with the possible loss the enterprise may suffer. The selection of security technology is based on the efficiency of selected security measures. Economic metrics are applied for the efficiency assessment and comparative analysis of different protection technologies. Unlike the existing models for evaluation of the security investment, the proposed model allows direct comparison and quantitative assessment of different security measures. The model allows deep analyses and computations providing quantitative assessments of different options for investments, which translate into recommendations facilitating the selection of the best solution and the decision-making thereof. The model was tested using empirical examples with data from a real business environment.

  4. Robust Vehicle and Traffic Information Extraction for Highway Surveillance

    Directory of Open Access Journals (Sweden)

    Yeh Chia-Hung

    2005-01-01

    A robust vision-based traffic monitoring system for vehicle and traffic information extraction is developed in this research. It is challenging to maintain detection robustness at all times for a highway surveillance system. There are three major problems in detecting and tracking a vehicle: (1) the moving cast shadow effect, (2) the occlusion effect, and (3) nighttime detection. For moving cast shadow elimination, a 2D joint vehicle-shadow model is employed. For occlusion detection, a multiple-camera system is used to detect occlusion so as to extract the exact location of each vehicle. For vehicle nighttime detection, a rear-view monitoring technique is proposed to maintain tracking and detection accuracy. Furthermore, we propose a method to improve the accuracy of background extraction, which usually serves as the first step in any vehicle detection processing. Experimental results are given to demonstrate that the proposed techniques are effective and efficient for vision-based highway surveillance.
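    The background-extraction step mentioned last can be illustrated with OpenCV's stock MOG2 subtractor, whose built-in shadow detection also relates to the cast-shadow problem; this is a generic stand-in, not the paper's 2D joint vehicle-shadow model, and the video file name is hypothetical.

    ```python
    # Generic background subtraction with shadow marking; detectShadows=True
    # labels moving cast shadows as gray (127) so they can be excluded from
    # the vehicle mask.
    import cv2

    cap = cv2.VideoCapture("highway.avi")              # hypothetical clip
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                    detectShadows=True)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                 # 255 = fg, 127 = shadow
        vehicles = (mask == 255).astype("uint8") * 255  # drop shadows
        cv2.imshow("vehicles", vehicles)
        if cv2.waitKey(1) == 27:                       # Esc to quit
            break
    cap.release()
    cv2.destroyAllWindows()
    ```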

  5. Advanced applications of natural language processing for performing information extraction

    CERN Document Server

    Rodrigues, Mário

    2015-01-01

    This book explains how to create information extraction (IE) applications that are able to tap the vast amount of relevant information available in natural language sources: Internet pages, official documents such as laws and regulations, books and newspapers, and the social web. Readers are introduced to the problem of IE and its current challenges and limitations, supported with examples. The book discusses the need to fill the gap between documents, data, and people, and provides a broad overview of the technology supporting IE. The authors present a generic architecture for developing systems that are able to learn how to extract relevant information from natural language documents, and illustrate how to implement working systems using state-of-the-art and freely available software tools. The book also discusses concrete applications illustrating IE uses. Provides an overview of state-of-the-art technology in information extraction (IE), discussing achievements and limitations for t...

  6. Extraction of fish body oil from Sardinella longiceps by employing direct steaming method and its quantitative and qualitative assessment

    Directory of Open Access Journals (Sweden)

    Moorthy Pravinkumar

    2015-12-01

    Objective: To analyze the quantitative and qualitative properties of fish oil extracted from Sardinella longiceps (S. longiceps). Methods: Four size groups of S. longiceps were examined for the extraction of fish oil based on length. The size groups included Group I (size range of 7.1–10.0 cm), Group II (size range of 10.1–13.0 cm), Group III (size range of 13.1–16.0 cm) and Group IV (size range of 16.1–19.0 cm). Fish oil was extracted from the tissues of S. longiceps by the direct steaming method. The oil was then subjected to the determination of specific gravity, refractive index, moisture content, free fatty acids, iodine value, peroxide value, saponification value and observation of colour. Results: The four groups showed different yields of fish oil: Group IV recorded the highest values (165.00 ± 1.00 mL/kg), followed by Group III (145.66 ± 1.15 mL/kg) and Group II (129.33 ± 0.58 mL/kg), whereas Group I recorded the lowest values (78.33 ± 0.58 mL/kg) in the monsoon season, and the average yield was 180.0 ± 4.9 mL/kg fish tissues. These analytical values of the crude oil were well within the acceptable standard values for both fresh and stocked samples. Conclusions: The information generated in the present study pertaining to the quantitative and qualitative analysis of fish oil will serve as a reference baseline for entrepreneurs and industrialists in future for the successful commercial production of fish oil by employing oil sardines.

  7. Improving information extraction using a probability-based approach

    DEFF Research Database (Denmark)

    Kim, S.; Ahmed, Saeema; Wallace, K.

    2007-01-01

    Information plays a crucial role during the entire life-cycle of a product. It has been shown that engineers frequently consult colleagues to obtain the information they require to solve problems. However, the industrial world is now more transient and key personnel move to other companies or retire. It is becoming essential to retrieve vital information from archived product documents, if it is available. There is, therefore, great interest in ways of extracting relevant and sharable information from documents. A keyword-based search is commonly used, but studies have shown ... To improve the recall, while maintaining the high precision, a learning approach that makes identification decisions based on a probability model, rather than simply looking up the presence of the pre-defined variations, looks promising. This paper presents the results of developing such a probability-based entity

  8. DNA agarose gel electrophoresis for antioxidant analysis: Development of a quantitative approach for phenolic extracts.

    Science.gov (United States)

    Silva, Sara; Costa, Eduardo M; Vicente, Sandra; Veiga, Mariana; Calhau, Conceição; Morais, Rui M; Pintado, Manuela E

    2017-10-15

    Most of the fast in vitro assays proposed to determine the antioxidant capacity of a compound/extract lack either biological context or employ complex protocols. Therefore, the present work proposes the improvement of an agarose gel DNA electrophoresis assay in order to allow a quantitative estimation of the antioxidant capacity of pure phenolic compounds as well as of a phenolic-rich extract, while also considering their possible pro-oxidant effects. The results obtained demonstrated that the proposed method allowed for the evaluation of the protection against DNA oxidation [in the presence of hydrogen peroxide (H2O2) and H2O2/iron(III) chloride (FeCl3) systems] as well as for the observation of pro-oxidant activities, with the measurements registering interclass correlation coefficients above 0.9. Moreover, this method allowed for the characterization of the antioxidant capacity of a blueberry extract while demonstrating that it had no perceived pro-oxidant effect.

  9. Transliteration normalization for Information Extraction and Machine Translation

    Directory of Open Access Journals (Sweden)

    Yuval Marton

    2014-12-01

    Foreign name transliterations typically include multiple spelling variants. These variants cause data sparseness and inconsistency problems, increase the Out-of-Vocabulary (OOV) rate, and present challenges for Machine Translation, Information Extraction and other natural language processing (NLP) tasks. This work aims to identify and cluster name spelling variants using a Statistical Machine Translation method: word alignment. The variants are identified by being aligned to the same "pivot" name in another language (the source language in Machine Translation settings). Based on word-to-word translation and transliteration probabilities, as well as the string edit distance metric, names with similar spellings in the target language are clustered and then normalized to a canonical form. With this approach, tens of thousands of high-precision name transliteration spelling variants are extracted from sentence-aligned bilingual corpora in Arabic and English (in both languages). When these normalized name spelling variants are applied to Information Extraction tasks, improvements over strong baseline systems are observed. When applied to Machine Translation tasks, a large improvement potential is shown.
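    A small sketch of the clustering-and-normalization step: variants (here an invented list; in the paper they come from word alignment to a shared pivot) are grouped by string similarity, then each group is mapped to a canonical form. difflib's ratio is used as a stand-in for the paper's combination of alignment probabilities and edit distance.

    ```python
    # Greedy clustering of spelling variants by string similarity.
    from difflib import SequenceMatcher

    variants = ["Mohammed", "Muhammad", "Mohamed", "Mohammad", "Muhammed", "Moses"]

    def similar(a, b, threshold=0.75):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

    clusters = []
    for name in variants:
        for cluster in clusters:
            if similar(name, cluster[0]):
                cluster.append(name)
                break
        else:
            clusters.append([name])

    for cluster in clusters:
        canonical = cluster[0]   # in real data: the corpus-most-frequent spelling
        print(cluster, "->", canonical)
    ```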

  10. Knowledge discovery: Extracting usable information from large amounts of data

    International Nuclear Information System (INIS)

    Whiteson, R.

    1998-01-01

    The threat of nuclear weapons proliferation is a problem of worldwide concern. Safeguards are the key to nuclear nonproliferation, and data is the key to safeguards. The safeguards community has access to a huge and steadily growing volume of data. The advantages of this data-rich environment are obvious: there is a great deal of information which can be utilized. The challenge is to effectively apply proven and developing technologies to find and extract usable information from that data. That information must then be assessed and evaluated to produce the knowledge needed for crucial decision making. Efficient and effective analysis of safeguards data will depend on utilizing technologies to interpret the large, heterogeneous data sets that are available from diverse sources. With an order-of-magnitude increase in the amount of data from a wide variety of technical, textual, and historical sources, there is a vital need to apply advanced computer technologies to support all-source analysis. There are techniques of data warehousing, data mining, and data analysis that can provide analysts with tools that will expedite the extraction of usable information from the huge amounts of data to which they have access. Computerized tools can aid analysts by integrating heterogeneous data, evaluating diverse data streams, automating retrieval of database information, prioritizing inputs, reconciling conflicting data, doing preliminary interpretations, discovering patterns or trends in data, and automating some of the simpler prescreening tasks that are time consuming and tedious. Thus knowledge discovery technologies can provide a foundation of support for the analyst. Rather than spending time sifting through often irrelevant information, analysts could use their specialized skills in a focused, productive fashion. This would allow them to make their analytical judgments with more confidence and spend more of their time doing what they do best.

  11. Evolving spectral transformations for multitemporal information extraction using evolutionary computation

    Science.gov (United States)

    Momm, Henrique; Easson, Greg

    2011-01-01

    Remote sensing plays an important role in assessing temporal changes in land features. The challenge often resides in the conversion of large quantities of raw data into actionable information in a timely and cost-effective fashion. To address this issue, research was undertaken to develop an innovative methodology integrating biologically-inspired algorithms with standard image classification algorithms to improve information extraction from multitemporal imagery. Genetic programming was used as the optimization engine to evolve feature-specific candidate solutions in the form of nonlinear mathematical expressions of the image spectral channels (spectral indices). The temporal generalization capability of the proposed system was evaluated by addressing the task of building rooftop identification from a set of images acquired at different dates in a cross-validation approach. The proposed system generates robust solutions (kappa values > 0.75 for stage 1 and > 0.4 for stage 2) despite the statistical differences between the scenes caused by land use and land cover changes coupled with variable environmental conditions, and the lack of radiometric calibration between images. Based on our results, the use of nonlinear spectral indices enhanced the spectral differences between features improving the clustering capability of standard classifiers and providing an alternative solution for multitemporal information extraction.

  12. Recognition techniques for extracting information from semistructured documents

    Science.gov (United States)

    Della Ventura, Anna; Gagliardi, Isabella; Zonta, Bruna

    2000-12-01

    Archives of optical documents are being employed more and more widely, with demand driven also by new norms sanctioning the legal value of digital documents, provided they are stored on supports that are physically unalterable. On the supply side there is now a vast and technologically advanced market, where optical memories have solved the problem of the duration and permanence of data at costs comparable to those for magnetic memories. The remaining bottleneck in these systems is the indexing. The indexing of documents with a variable structure, while still not completely automated, can be machine-supported to a large degree, with evident advantages both in the organization of the work and in extracting information, providing data that is much more detailed and potentially significant for the user. We present here a system for the automatic registration of correspondence to and from a public office. The system is based on a general methodology for the extraction, indexing, archiving, and retrieval of significant information from semi-structured documents. This information, in our prototype application, is distributed among the database fields of sender, addressee, subject, date, and body of the document.

  13. Quantitative detection of nitric oxide in exhaled human breath by extractive electrospray ionization mass spectrometry

    Science.gov (United States)

    Pan, Susu; Tian, Yong; Li, Ming; Zhao, Jiuyan; Zhu, Lanlan; Zhang, Wei; Gu, Haiwei; Wang, Haidong; Shi, Jianbo; Fang, Xiang; Li, Penghui; Chen, Huanwen

    2015-03-01

    Exhaled nitric oxide (eNO) is a useful biomarker of various physiological conditions, including asthma and other pulmonary diseases. Herein a fast and sensitive analytical method has been developed for the quantitative detection of eNO based on extractive electrospray ionization mass spectrometry (EESI-MS). Exhaled NO molecules selectively reacted with the 2-phenyl-4,4,5,5-tetramethylimidazoline-1-oxyl-3-oxide (PTIO) reagent, and the eNO concentration was derived from the EESI-MS response of the 1-oxyl-2-phenyl-4,4,5,5-tetramethylimidazoline (PTI) product. The method allowed quantification of eNO below the ppb level (~0.02 ppbv) with a relative standard deviation (RSD) of 11.6%. In addition, the eNO levels of 20 volunteers were monitored by EESI-MS over a time period of 10 hrs. The long-term eNO response to smoking a cigarette was recorded, and the observed time-dependent profile is discussed. This work extends the application of EESI-MS to small molecules. Long-term quantitative profiling of eNO by EESI-MS opens new possibilities for research on human metabolism and clinical diagnosis.

  14. A highly sensitive quantitative cytosensor technique for the identification of receptor ligands in tissue extracts.

    Science.gov (United States)

    Lenkei, Z; Beaudet, A; Chartrel, N; De Mota, N; Irinopoulou, T; Braun, B; Vaudry, H; Llorens-Cortes, C

    2000-11-01

    Because G-protein-coupled receptors (GPCRs) constitute excellent putative therapeutic targets, functional characterization of orphan GPCRs through identification of their endogenous ligands has great potential for drug discovery. We propose here a novel single cell-based assay for identification of these ligands. This assay involves (a) fluorescent tagging of the GPCR, (b) expression of the tagged receptor in a heterologous expression system, (c) incubation of the transfected cells with fractions purified from tissue extracts, and (d) imaging of ligand-induced receptor internalization by confocal microscopy coupled to digital image quantification. We tested this approach in CHO cells stably expressing the NT1 neurotensin receptor fused to EGFP (enhanced green fluorescent protein), in which neurotensin promoted internalization of the NT1-EGFP receptor in a dose-dependent fashion (EC(50) = 0.98 nM). Similarly, four of 120 consecutive reversed-phase HPLC fractions of frog brain extracts promoted internalization of the NT1-EGFP receptor. The same four fractions selectively contained neurotensin, an endogenous ligand of the NT1 receptor, as detected by radioimmunoassay and inositol phosphate production. The present internalization assay provides a highly specific quantitative cytosensor technique with sensitivity in the nanomolar range that should prove useful for the identification of putative natural and synthetic ligands for GPCRs.

  15. Simultaneous HPLC quantitative analysis of mangostin derivatives in Tetragonula pagdeni propolis extracts

    Directory of Open Access Journals (Sweden)

    Sumet Kongkiatpaiboon

    2016-04-01

    Propolis has been used as indigenous medicine for curing numerous maladies. One of ethnopharmacological use is stingless bee propolis from Tetragonula pagdeni. A simultaneous high-performance liquid chromatography (HPLC) method was developed and validated to determine the contents of the bioactive compounds 3-isomangostin, gamma-mangostin, beta-mangostin, and alpha-mangostin. HPLC analysis was effectively performed using a Hypersil BDS C18 column, with gradient elution of methanol–0.2% formic acid and a flow rate of 1 ml/min, at 25 °C, with detection at 245 nm. Parameters for the validation included accuracy, precision, linearity, and limits of quantitation and detection. The developed HPLC technique was precise, with lower than 2% relative standard deviation. The recovery values of 3-isomangostin, gamma-mangostin, beta-mangostin, and alpha-mangostin in the extracts were 99.98%, 99.97%, 98.98% and 99.19%, respectively. The average contents of these compounds in the propolis extracts collected from different seasons were 0.127%, 1.008%, 0.323% and 2.703% (w/w), respectively. The developed HPLC technique is suitable and practical for the simultaneous analysis of these mangostin derivatives in T. pagdeni propolis and would provide valuable guidance for the standardization of its pharmaceutical products.

  16. An Improved DNA Extraction Method for Efficient and Quantitative Recovery of Phytoplankton Diversity in Natural Assemblages.

    Directory of Open Access Journals (Sweden)

    Jian Yuan

    Marine phytoplankton are highly diverse, with different species possessing different cell coverings, posing challenges for thoroughly breaking the cells in DNA extraction while preserving DNA integrity. While quantitative molecular techniques have been increasingly used in phytoplankton research, an effective and simple method broadly applicable to different lineages and natural assemblages is still lacking. In this study, we developed a bead-beating protocol based on our previous experience and tested it against 9 species of phytoplankton representing different lineages and different cell-covering rigidities. We found that the bead-beating method enhanced the final yield of DNA (up to 2-fold) in comparison with the non-bead-beating method, while also preserving DNA integrity. When our method was applied to a field sample collected at a subtropical bay located in Xiamen, China, the resulting ITS clone library revealed a highly diverse assemblage of phytoplankton and other micro-eukaryotes, including Archaea, Amoebozoa, Chlorophyta, Ciliophora, Bacillariophyta, Dinophyta, Fungi, Metazoa, etc. The appearance of thecate dinoflagellates, thin-walled phytoplankton and "naked" unicellular organisms indicates that our method can obtain intact DNA from organisms with different cell coverings. All the results demonstrate that our method is useful for DNA extraction of phytoplankton and for environmental surveys of their diversity and abundance.

  17. Comparison of methods of extracting information for meta-analysis of observational studies in nutritional epidemiology

    Directory of Open Access Journals (Sweden)

    Jong-Myon Bae

    2016-01-01

    OBJECTIVES: A common method for conducting a quantitative systematic review (QSR) for observational studies related to nutritional epidemiology is the "highest versus lowest intake" method (HLM), in which only the information concerning the effect size (ES) of the highest category of a food item is collected on the basis of its lowest category. However, in the interval collapsing method (ICM), a method suggested to enable maximum utilization of all available information, the ES information is collected by collapsing all categories into a single category. This study aimed to compare the ES and summary effect size (SES) between the HLM and the ICM. METHODS: A QSR evaluating citrus fruit intake and the risk of pancreatic cancer and calculating the SES by using the HLM was selected. The ES and SES were estimated by performing a meta-analysis using the fixed-effect model. The directionality and statistical significance of the ES and SES were used as criteria for determining the concordance between the HLM and ICM outcomes. RESULTS: No significant differences were observed in the directionality of the SES extracted by using the HLM or the ICM. The application of the ICM, which uses a broader information base, yielded more consistent ES and SES, and narrower confidence intervals, than the HLM. CONCLUSIONS: The ICM is advantageous over the HLM owing to its higher statistical accuracy in extracting information for QSRs on nutritional epidemiology. The application of the ICM should hence be recommended for future studies.
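    For concreteness, the fixed-effect model referred to above combines study-level effect sizes by inverse-variance weighting on the log scale, whichever way (HLM or ICM) the ES was extracted. A minimal sketch with invented relative risks follows.

    ```python
    # Fixed-effect (inverse-variance) meta-analysis; RRs and 95% CIs invented.
    import math

    studies = [(0.82, 0.65, 1.03), (0.74, 0.55, 0.99), (0.91, 0.70, 1.18)]

    num = den = 0.0
    for rr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE from the CI width
        w = 1.0 / se ** 2                                 # inverse-variance weight
        num += w * math.log(rr)
        den += w

    log_ses = num / den
    se_ses = math.sqrt(1.0 / den)
    print(f"SES = {math.exp(log_ses):.3f}, "
          f"95% CI = ({math.exp(log_ses - 1.96 * se_ses):.3f}, "
          f"{math.exp(log_ses + 1.96 * se_ses):.3f})")
    ```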

  18. Automated extraction of chemical structure information from digital raster images

    Directory of Open Access Journals (Sweden)

    Shedden Kerby A

    2009-02-01

    Background: To search for chemical structures in research articles, diagrams or text representing molecules need to be translated into a standard chemical file format compatible with cheminformatic search engines. However, chemical information in research articles is often presented as analog diagrams of chemical structures embedded in digital raster images. Several software systems have been developed to automate the analog-to-digital conversion of chemical structure diagrams in scientific research articles, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results: This paper provides critical reviews of these systems and reports our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams from research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing the lines and letters representing bonds and atoms in chemical structure diagrams can be run independently in sequence from a graphical user interface, and the algorithm parameters can be readily changed, to facilitate additional development tailored to a particular chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other systems on several sets of sample images from diverse sources, in terms of both the rate of correct outputs and the accuracy of extracted molecular substructure patterns. Conclusion: The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating chemical databases with links to the relevant published articles.

  19. Information Extraction, Data Integration, and Uncertain Data Management: The State of The Art

    NARCIS (Netherlands)

    Habib, Mena Badieh; van Keulen, Maurice

    2011-01-01

    Information extraction, data integration, and uncertain data management are different areas of research that have received considerable attention over the last two decades. Much research has tackled these areas individually. However, information extraction systems should be integrated with data integration and uncertain data management techniques.

  20. Melanoma screening: Informing public health policy with quantitative modelling.

    Directory of Open Access Journals (Sweden)

    Stephen Gilmore

    Australia and New Zealand share the highest incidence rates of melanoma worldwide. Despite a substantial increase in public and physician awareness of melanoma in Australia over the last 30 years, a result of the publicly funded mass-media campaigns introduced in the early 1980s, mortality has steadily increased during this period. This increased mortality has led investigators to question the relative merits of primary versus secondary prevention, that is, sensible sun-exposure practices versus early detection. Increased melanoma vigilance on the part of the public and physicians has produced large increases in public health expenditure, primarily from screening costs and increased rates of office surgery. Has this attempt at secondary prevention been effective? Unfortunately, epidemiologic studies addressing the causal relationship between the level of secondary prevention and mortality are prohibitively difficult to implement: it is currently unknown whether increased melanoma surveillance reduces mortality, and if so, whether such an approach is cost-effective. Here I address the issue of secondary prevention of melanoma with respect to incidence, mortality, and cost per life saved by developing a Markov model of melanoma epidemiology based on Australian incidence and mortality data. The advantages of a methodology that can determine constraint-based surveillance outcomes are twofold: first, it can address the issue of effectiveness; second, it can quantify the trade-off between cost and utilisation of medical resources on one hand, and reduced morbidity and lives saved on the other. With respect to melanoma, the model enables quantitative determination of the relative effectiveness and trade-offs associated with different levels of secondary and tertiary prevention, both retrospectively and prospectively. For example, I show that the surveillance enhancement that began in
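
    To make the modelling approach concrete, here is a minimal cohort-level Markov chain sketch in Python; the four states and the annual transition probabilities are illustrative placeholders, not the calibrated Australian rates used in the study:

        import numpy as np

        # States: 0 = healthy, 1 = undetected melanoma, 2 = detected/treated, 3 = dead.
        # Annual transition probabilities below are invented for illustration.
        P = np.array([
            [0.9985, 0.0010, 0.0000, 0.0005],  # healthy
            [0.0000, 0.8500, 0.1400, 0.0100],  # undetected -> detected via screening
            [0.0000, 0.0000, 0.9950, 0.0050],  # detected/treated
            [0.0000, 0.0000, 0.0000, 1.0000],  # dead (absorbing)
        ])

        state = np.array([1.0, 0.0, 0.0, 0.0])  # cohort starts healthy
        for year in range(30):
            state = state @ P                   # one-year state-occupancy update
        print("30-year state occupancy:", state.round(4))

    Raising the screening-driven detection probability (state 1 to state 2) and re-running the projection is how a model of this kind quantifies the trade-off between surveillance effort and lives saved.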

  1. Information Extraction for Clinical Data Mining: A Mammography Case Study.

    Science.gov (United States)

    Nassif, Houssam; Woods, Ryan; Burnside, Elizabeth; Ayvaci, Mehmet; Shavlik, Jude; Page, David

    2009-01-01

    Breast cancer is the leading cause of cancer mortality in women between the ages of 15 and 54. During mammography screening, radiologists use a strict lexicon (BI-RADS) to describe and report their findings. Mammography records are then stored in a well-defined database format (NMD). Lately, researchers have applied data mining and machine learning techniques to these databases. They successfully built breast cancer classifiers that can help in early detection of malignancy. However, the validity of these models depends on the quality of the underlying databases. Unfortunately, most databases suffer from inconsistencies, missing data, inter-observer variability and inappropriate term usage. In addition, many databases are not compliant with the NMD format and/or solely consist of text reports. BI-RADS feature extraction from free text and consistency checks between recorded predictive variables and text reports are crucial to addressing this problem. We describe a general scheme for concept information retrieval from free text given a lexicon, and present a BI-RADS features extraction algorithm for clinical data mining. It consists of a syntax analyzer, a concept finder and a negation detector. The syntax analyzer preprocesses the input into individual sentences. The concept finder uses a semantic grammar based on the BI-RADS lexicon and the experts' input. It parses sentences detecting BI-RADS concepts. Once a concept is located, a lexical scanner checks for negation. Our method can handle multiple latent concepts within the text, filtering out ultrasound concepts. On our dataset, our algorithm achieves 97.7% precision, 95.5% recall and an F1-score of 0.97. It outperforms manual feature extraction at the 5% statistical significance level.
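
    For reference, the reported precision, recall, and F1-score are related as follows; the counts below are hypothetical, chosen only to reproduce figures of the same order as the abstract's:

        def precision_recall_f1(tp, fp, fn):
            """Standard retrieval metrics from true/false positive and false negative counts."""
            precision = tp / (tp + fp)
            recall = tp / (tp + fn)
            f1 = 2 * precision * recall / (precision + recall)
            return precision, recall, f1

        # Hypothetical counts: 955 concepts found correctly, 22 spurious, 45 missed
        print(precision_recall_f1(tp=955, fp=22, fn=45))  # ~ (0.977, 0.955, 0.966)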

  2. Quantitative Detection of Trace Malachite Green in Aquiculture Water Samples by Extractive Electrospray Ionization Mass Spectrometry.

    Science.gov (United States)

    Fang, Xiaowei; Yang, Shuiping; Chingin, Konstantin; Zhu, Liang; Zhang, Xinglei; Zhou, Zhiquan; Zhao, Zhanfeng

    2016-08-11

    Exposure to malachite green (MG) may pose serious health risks to humans; thus, it is important to develop fast and robust methods to quantitatively screen for malachite green in water. Here, the application of extractive electrospray ionization mass spectrometry (EESI-MS) has been extended to the trace detection of MG in lake water and aquiculture water, given the intensive use of MG as a biocide in fisheries. The method obviates offline liquid-liquid extraction or tedious matrix separation prior to the measurement of malachite green in its native aqueous medium. The experimental results indicate an extrapolated detection limit for MG of ~3.8 μg/L (S/N = 3) in lake water samples and ~0.5 μg/L in ultrapure water under optimized conditions. The MG signal intensity showed good linearity over the concentration range of 10-1000 μg/L. Measurements of practical water samples fortified with MG at 0.01, 0.1 and 1.0 mg/L validated the established calibration curve. The average recoveries and relative standard deviations (RSD) of malachite green in lake water and Carassius carassius fish-farm effluent water were 115% (6.64% RSD), 85.4% (9.17% RSD) and 96.0% (7.44% RSD), respectively. Overall, the established EESI-MS/MS method is suitable for the sensitive and rapid detection of malachite green in various aqueous media, indicating its potential for online real-time monitoring of real-life samples.
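
    A sketch of the two calculations behind these figures, the linear calibration and the percent recovery; the signal values are invented for illustration, and only the 10-1000 μg/L range is taken from the abstract:

        import numpy as np

        # Calibration: signal intensity vs. MG concentration (μg/L), illustrative points
        conc = np.array([10, 50, 100, 500, 1000], dtype=float)
        signal = np.array([120, 610, 1190, 6050, 11900], dtype=float)
        slope, intercept = np.polyfit(conc, signal, 1)  # least-squares line

        def measured_conc(s):
            """Back-calculate concentration from a measured signal."""
            return (s - intercept) / slope

        # Recovery for a sample fortified at 100 μg/L (hypothetical signal reading)
        spiked, found = 100.0, measured_conc(1150.0)
        print(f"recovery = {100.0 * found / spiked:.1f}%")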

  3. INFORMATION EXTRACTION IN TOMB PIT USING HYPERSPECTRAL DATA

    Directory of Open Access Journals (Sweden)

    X. Yang

    2018-04-01

    Hyperspectral data are characterized by many continuous bands, large data volume, redundancy, and non-destructive acquisition. These characteristics make it possible to use hyperspectral data to study cultural relics. In this paper, hyperspectral imaging is adopted to recognize the bottom images of an ancient tomb located in Shanxi province. There are many black remains on the bottom surface of the tomb, suspected to be meaningful texts or paintings. First, the hyperspectral data are preprocessed to obtain the reflectance of the region of interest; for convenience of computation and storage, the original reflectance values are multiplied by 10000. Second, three methods are used to extract the symbols at the bottom of the ancient tomb. Finally, morphological operations are applied to connect the symbols, yielding fifteen reference images. The results show that information extraction based on hyperspectral data provides a better visual result, which benefits the study of ancient tombs by researchers and provides references for archaeological research findings.

  4. Information Extraction in Tomb Pit Using Hyperspectral Data

    Science.gov (United States)

    Yang, X.; Hou, M.; Lyu, S.; Ma, S.; Gao, Z.; Bai, S.; Gu, M.; Liu, Y.

    2018-04-01

    Hyperspectral data are characterized by many continuous bands, large data volume, redundancy, and non-destructive acquisition. These characteristics make it possible to use hyperspectral data to study cultural relics. In this paper, hyperspectral imaging is adopted to recognize the bottom images of an ancient tomb located in Shanxi province. There are many black remains on the bottom surface of the tomb, suspected to be meaningful texts or paintings. First, the hyperspectral data are preprocessed to obtain the reflectance of the region of interest; for convenience of computation and storage, the original reflectance values are multiplied by 10000. Second, three methods are used to extract the symbols at the bottom of the ancient tomb. Finally, morphological operations are applied to connect the symbols, yielding fifteen reference images. The results show that information extraction based on hyperspectral data provides a better visual result, which benefits the study of ancient tombs by researchers and provides references for archaeological research findings.

  5. Automated Extraction of Substance Use Information from Clinical Texts.

    Science.gov (United States)

    Wang, Yan; Chen, Elizabeth S; Pakhomov, Serguei; Arsoniadis, Elliot; Carter, Elizabeth W; Lindemann, Elizabeth; Sarkar, Indra Neil; Melton, Genevieve B

    2015-01-01

    Within clinical discourse, social history (SH) includes important information about substance use (alcohol, drug, and nicotine use) as key risk factors for disease, disability, and mortality. In this study, we developed and evaluated a natural language processing (NLP) system for automated detection of substance use statements and extraction of substance use attributes (e.g., temporal and status) based on Stanford Typed Dependencies. The NLP system leveraged linguistic resources and domain knowledge from a multi-site social history study, PropBank, and the MiPACQ corpus. The system attained F-scores of 89.8, 84.6, and 89.4, respectively, for alcohol, drug, and nicotine use statement detection, as well as average F-scores of 82.1, 90.3, 80.8, 88.7, 96.6, and 74.5, respectively, for extraction of the attributes. Our results suggest that NLP systems can achieve good performance on a wide breadth of substance use free-text clinical notes when augmented with linguistic resources and domain knowledge.

  6. Domain-independent information extraction in unstructured text

    Energy Technology Data Exchange (ETDEWEB)

    Irwin, N.H. [Sandia National Labs., Albuquerque, NM (United States). Software Surety Dept.

    1996-09-01

    Extracting information from unstructured text has become an important research area in recent years due to the large amount of text now electronically available. This status report describes the findings and work done during the second year of a two-year Laboratory Directed Research and Development project. Building on the first year's work of identifying important entities, this report details techniques used to group words into semantic categories and to output templates containing selected document content. Using word profiles and category clustering derived during a training run, the time-consuming knowledge-building task can be avoided. Though the output still lacks completeness when compared with systems that have domain-specific knowledge bases, the results look promising. The two approaches are compatible and could complement each other within the same system. Domain-independent approaches retain appeal because a system that adapts and learns will soon outpace a system with any amount of a priori knowledge.

  7. Extracting and Using Photon Polarization Information in Radiative B Decays

    Energy Technology Data Exchange (ETDEWEB)

    Grossman, Yuval

    2000-05-09

    The authors discuss the uses of conversion electron pairs for extracting photon polarization information in weak radiative B decays. Both cases of leptons produced through a virtual photon and through a real photon are considered. Measurements of the angular correlation between the (K pi) and (e+ e-) decay planes in B -> K*(-> K pi) gamma(*)(-> e+ e-) decays can be used to determine the helicity amplitudes in the radiative B -> K* gamma decays. A large right-handed helicity amplitude in anti-B decays is a signal of new physics. The time-dependent CP asymmetry in the B0 decay angular correlation is shown to measure sin(2 beta) and cos(2 beta) with little hadronic uncertainty.

  8. Extraction of neutron spectral information from Bonner-Sphere data

    CERN Document Server

    Haney, J H; Zaidins, C S

    1999-01-01

    We have extended a least-squares method for extracting neutron spectral information from Bonner-sphere data previously developed by Zaidins et al. (Med. Phys. 5 (1978) 42). A pulse-height analysis with background stripping is employed, which provides a more accurate count rate for each sphere. Newer response curves by Mares and Schraube (Nucl. Instr. and Meth. A 366 (1994) 461) were included for the moderating spheres and the bare detector which comprise the Bonner spectrometer system. Finally, the neutron energy spectrum of interest was divided, using the philosophy of fuzzy logic, into three trapezoidal regimes corresponding to slow, moderate, and fast neutrons. Spectral data were taken using a PuBe source in two different environments, and the analyzed data are presented for these cases as slow, moderate, and fast neutron fluences. (author)

  9. ONTOGRABBING: Extracting Information from Texts Using Generative Ontologies

    DEFF Research Database (Denmark)

    Nilsson, Jørgen Fischer; Szymczak, Bartlomiej Antoni; Jensen, P.A.

    2009-01-01

    We describe principles for extracting information from texts using a so-called generative ontology in combination with syntactic analysis. Generative ontologies are introduced as semantic domains for natural language phrases. Generative ontologies extend ordinary finite ontologies with rules for producing recursively shaped terms representing the ontological content (ontological semantics) of NL noun phrases and other phrases. We focus here on achieving a robust, often only partial, ontology-driven parsing of, and ascription of semantics to, a sentence in the text corpus. The aim of the ontological analysis is primarily to identify paraphrases, thereby achieving a search functionality beyond mere keyword search with synsets. We further envisage use of the generative ontology as a phrase-based rather than word-based browser into text corpora.

  10. Novel mathematic models for quantitative transitivity of quality-markers in extraction process of the Buyanghuanwu decoction.

    Science.gov (United States)

    Zhang, Yu-Tian; Xiao, Mei-Feng; Deng, Kai-Wen; Yang, Yan-Tao; Zhou, Yi-Qun; Zhou, Jin; He, Fu-Yuan; Liu, Wen-Long

    2018-06-01

    A major challenge in designing efficient extraction systems for Chinese herbal medicine is quality management, for which the transitivity of quality markers (Q-markers) in the quantitative analysis of traditional Chinese medicine (TCM) was recently proposed by Prof. Liu. To improve the quality of extraction from raw medicinal materials for clinical preparations, a series of integrated mathematical models for the transitivity of Q-markers in quantitative analysis of TCM was established. Buyanghuanwu decoction (BYHWD), a common TCM prescription used to prevent and treat ischemic heart and brain diseases, was selected as the experimental subject for studying quantitative transitivity. Based on Fick's law and the Noyes-Whitney equation, novel kinetic models were established for the extraction of active components. Kinetic equations were fitted to the extraction models, and the inherent parameters of the material pieces and the Q-marker quantitative transfer coefficients were calculated; these served as indexes to evaluate the transitivity of Q-markers in the extraction process of BYHWD. HPLC was applied to screen and analyze potential Q-markers in the extraction process. Kinetic parameters were fitted and calculated with the Statistical Program for Social Sciences 20.0 software. Transfer efficiency was described and evaluated along the potential Q-marker transfer trajectory via the transitivity availability AUC, the extraction ratio P, and the decomposition ratio D, and each Q-marker was characterized by its AUC, P, and D. Astragaloside IV, laetrile, paeoniflorin, and ferulic acid were studied as potential Q-markers from BYHWD. The relevant technological parameters were represented by the mathematical models, which adequately illustrate the inherent properties of the raw materials.
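
    For readers who want the kinetic form referred to above: the Noyes-Whitney equation describes dissolution as a first-order approach to the saturation concentration. A plausible reading of the extraction model, in LaTeX; the symbol definitions are the standard ones and are assumed here rather than quoted from the paper:

        % D: diffusion coefficient, A: particle surface area, h: diffusion-layer
        % thickness, V: solvent volume, C_s: saturation concentration of the component
        \frac{dC}{dt} = \frac{D A}{V h}\,(C_s - C) = k\,(C_s - C),
        \qquad
        C(t) = C_s \left(1 - e^{-k t}\right),
        \qquad
        \mathrm{AUC} = \int_0^{T} C(t)\,dt

    Under this reading, fitting k to measured concentration-time data gives the inherent extraction parameter, and the AUC of the fitted curve is the "transitivity availability" used to rank Q-markers.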

  11. In vitro prebiotic effects and quantitative analysis of Bulnesia sarmienti extract

    Directory of Open Access Journals (Sweden)

    Md Ahsanur Reza

    2016-10-01

    Prebiotics are used to influence the growth, colonization, survival, and activity of probiotics and to enhance innate immunity, thus improving the health status of the host. The survival, growth, and activity of probiotics are often interfered with by intrinsic factors and indigenous microbes in the gastrointestinal tract. In this study, Bulnesia sarmienti aqueous extract (BSAE) was evaluated for its growth-promoting activity on different strains of Lactobacillus acidophilus, and a simple, precise, cost-effective high-performance liquid chromatography (HPLC) method was developed and validated for the determination of the active prebiotic ingredients in the extract. Different strains of L. acidophilus (probiotic) were incubated in de Man, Rogosa, and Sharpe (MRS) medium supplemented with BSAE at final concentrations of 0.0%, 1.0%, and 3.0% (w/v) as the sole carbon source. Growth of the probiotics was determined by measuring pH changes and colony-forming units (CFU/ml) using the microdilution method over a period of 24 hours. The HPLC method was designed by optimizing the mobile-phase composition, flow rate, column temperature, and detection wavelength, and was validated according to the requirements for a new method, including accuracy, precision, linearity, limit of detection, limit of quantitation, and specificity. The major prebiotic active ingredients in BSAE were determined using the validated HPLC method. Rapid growth of the different strains of L. acidophilus was observed in growth media with BSAE, whereas the decline in culture pH varied among strains of probiotics depending on culture time. (+)-Catechin and (−)-epicatechin were identified on the basis of their retention times, absorbance spectra, and mass spectrometry fragmentation patterns. The developed method met the limits of all validation parameters. The prebiotic active components, (+)-catechin and (−)-epicatechin, were quantified as 1.27% and 0

  12. Information extraction and knowledge graph construction from geoscience literature

    Science.gov (United States)

    Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo; Chen, Jingwen

    2018-03-01

    Geoscience literature published online is an important part of open data, and brings both challenges and opportunities for data analysis. Compared with studies of numerical geoscience data, there has been limited work on information extraction and knowledge discovery from textual geoscience data. This paper presents a workflow and a few empirical case studies for that topic, with a focus on documents written in Chinese. First, we set up a hybrid corpus combining generic and geology terms from geology dictionaries to train the Chinese word segmentation rules of a Conditional Random Fields model. Second, we used the word segmentation rules to parse documents into individual words, and removed stop-words from the segmentation results to obtain a corpus of content-words. Third, we used a statistical method to analyze the semantic links between content-words, and selected chord and bigram graphs to visualize the content-words and their links as nodes and edges in a knowledge graph, respectively. The resulting graph presents a clear overview of the key information in an unstructured document. This study demonstrates the usefulness of the designed workflow and shows the potential of leveraging natural language processing and knowledge graph technologies for geoscience.

  13. Data Assimilation to Extract Soil Moisture Information from SMAP Observations

    Directory of Open Access Journals (Sweden)

    Jana Kolassa

    2017-11-01

    This study compares different methods of extracting soil moisture information through the assimilation of Soil Moisture Active Passive (SMAP) observations. Neural network (NN) and physically-based SMAP soil moisture retrievals were assimilated into the National Aeronautics and Space Administration (NASA) Catchment model over the contiguous United States for April 2015 to March 2017. By construction, the NN retrievals are consistent with the global climatology of the Catchment model soil moisture. Assimilating the NN retrievals without further bias correction improved the surface and root-zone correlations against in situ measurements from 14 SMAP core validation sites (CVS) by 0.12 and 0.16, respectively, over the model-only skill, and reduced the surface and root-zone unbiased root-mean-square error (ubRMSE) by 0.005 m3/m3 and 0.001 m3/m3, respectively. The assimilation reduced the average absolute surface bias against the CVS measurements by 0.009 m3/m3, but increased the root-zone bias by 0.014 m3/m3. Assimilating the NN retrievals after a localized bias correction yielded slightly lower surface correlation and ubRMSE improvements, but generally the skill differences were small. The assimilation of the physically-based SMAP Level-2 passive soil moisture retrievals using a global bias correction yielded similar skill improvements, as did the direct assimilation of locally bias-corrected SMAP brightness temperatures within the SMAP Level-4 soil moisture algorithm. The results show that global bias correction methods may be able to extract more independent information from SMAP observations compared to local bias correction methods, but without accurate quality control and observation error characterization they are also more vulnerable to adverse effects from retrieval errors related to uncertainties in the retrieval inputs and algorithm. Furthermore, the results show that using global bias correction approaches without a
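
    The bias corrections mentioned above are commonly implemented as CDF matching (quantile mapping) of the retrievals onto the model climatology. A generic sketch of the idea, assuming per-location climatologies are available; this is not the SMAP Level-4 implementation itself:

        import numpy as np

        def cdf_match(retrieval, model_clim, retr_clim):
            """Map a retrieval onto the model soil moisture climatology by
            matching cumulative distribution functions (quantile mapping)."""
            # Empirical quantile of the retrieval within its own climatology
            q = np.searchsorted(np.sort(retr_clim), retrieval) / len(retr_clim)
            q = np.clip(q, 0.0, 1.0)
            # Value at the same quantile in the model climatology
            return np.quantile(model_clim, q)

        rng = np.random.default_rng(0)
        model_clim = rng.beta(2, 5, 1000) * 0.5                 # model values, m3/m3
        retr_clim = model_clim + 0.05 + 0.02 * rng.standard_normal(1000)  # biased retrievals
        print(cdf_match(0.30, model_clim, retr_clim))

    A "global" variant would pool all locations into a single pair of climatologies; a "local" variant fits one mapping per grid cell, which is the distinction the abstract's skill comparison turns on.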

  14. Multi-Filter String Matching and Human-Centric Entity Matching for Information Extraction

    Science.gov (United States)

    Sun, Chong

    2012-01-01

    More and more information is being generated in text documents, such as Web pages, emails and blogs. To effectively manage this unstructured information, one broadly used approach includes locating relevant content in documents, extracting structured information and integrating the extracted information for querying, mining or further analysis. In…

  15. Noninvasive Assessment of Oxygen Extraction Fraction in Chronic Ischemia Using Quantitative Susceptibility Mapping at 7 Tesla.

    Science.gov (United States)

    Uwano, Ikuko; Kudo, Kohsuke; Sato, Ryota; Ogasawara, Kuniaki; Kameda, Hiroyuki; Nomura, Jun-Ichi; Mori, Futoshi; Yamashita, Fumio; Ito, Kenji; Yoshioka, Kunihiro; Sasaki, Makoto

    2017-08-01

    The oxygen extraction fraction (OEF) is an effective metric for evaluating metabolic reserve in chronic ischemia. However, OEF is considered to be accurately measured only by positron emission tomography (PET). We therefore investigated whether OEF maps generated by magnetic resonance quantitative susceptibility mapping (QSM) at 7 Tesla enable detection of OEF changes comparable to those obtained with PET. Forty-one patients with chronic stenosis/occlusion of the unilateral internal carotid artery or middle cerebral artery were examined using 7 Tesla MRI and PET scanners. QSM images were obtained from 3-dimensional T2*-weighted images using a multiple dipole-inversion algorithm. OEF maps were generated from susceptibility differences between venous structures and brain tissue on the QSM images. OEF ratios of the ipsilateral middle cerebral artery territory against the contralateral side were calculated on the QSM-OEF and PET-OEF images using an anatomic template. The OEF ratio in the middle cerebral artery territory showed a significant correlation between the QSM-OEF and PET-OEF maps (r = 0.69). A QSM-OEF ratio cutoff of 1.09, as determined by receiver operating characteristic analysis, showed a sensitivity and specificity of 0.82 and 0.86, respectively, for a substantial increase in the PET-OEF ratio. Absolute QSM-OEF values were significantly correlated with PET-OEF values in the patients with increased PET-OEF. OEF ratios on QSM-OEF images at 7 Tesla showed a good correlation with those on PET-OEF images in patients with unilateral steno-occlusive internal carotid artery/middle cerebral artery lesions, suggesting that noninvasive OEF measurement by MRI can be a substitute for PET.
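
    The decision rule reported above reduces to a simple ratio test; a minimal sketch, where the ROI means are invented for illustration and only the 1.09 cutoff comes from the abstract:

        def qsm_oef_ratio(oef_ipsilateral, oef_contralateral):
            """Ratio of mean OEF in the affected MCA territory to the contralateral side."""
            return oef_ipsilateral / oef_contralateral

        def increased_oef(ratio, cutoff=1.09):
            # 1.09 is the ROC-derived QSM-OEF ratio cutoff reported in the abstract
            return ratio > cutoff

        print(increased_oef(qsm_oef_ratio(0.52, 0.44)))  # True for these example ROI means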

  16. Proceedings of the International Symposium on quantitative description of metal extraction processes

    International Nuclear Information System (INIS)

    Themelis, N.J.

    1991-01-01

    This book contains the proceedings of the H.H. Kellogg International Symposium. Topics include: Extractive metallurgy; Thermochemical phenomena in metallurgy; Thermodynamic modeling of metallurgical processes; and Transport and rate phenomena in metallurgical extraction

  17. Validation of the Mass-Extraction-Window for Quantitative Methods Using Liquid Chromatography High Resolution Mass Spectrometry.

    Science.gov (United States)

    Glauser, Gaétan; Grund, Baptiste; Gassner, Anne-Laure; Menin, Laure; Henry, Hugues; Bromirski, Maciej; Schütz, Frédéric; McMullen, Justin; Rochat, Bertrand

    2016-03-15

    A paradigm shift is underway in the field of quantitative liquid chromatography-mass spectrometry (LC-MS) analysis thanks to the arrival of recent high-resolution mass spectrometers (HRMS). The capability of HRMS to perform sensitive and reliable quantifications of a large variety of analytes in HR-full scan mode is showing that it is now realistic to perform quantitative and qualitative analysis with the same instrument. Moreover, HR-full scan acquisition offers a global view of sample extracts and allows retrospective investigations as virtually all ionized compounds are detected with a high sensitivity. In time, the versatility of HRMS together with the increasing need for relative quantification of hundreds of endogenous metabolites should promote a shift from triple-quadrupole MS to HRMS. However, a current "pitfall" in quantitative LC-HRMS analysis is the lack of HRMS-specific guidance for validated quantitative analyses. Indeed, false positive and false negative HRMS detections are rare, albeit possible, if inadequate parameters are used. Here, we investigated two key parameters for the validation of LC-HRMS quantitative analyses: the mass accuracy (MA) and the mass-extraction-window (MEW) that is used to construct the extracted-ion chromatograms. We propose MA parameters, graphs, and equations to calculate a rational MEW width for the validation of quantitative LC-HRMS methods. MA measurements were performed on four different LC-HRMS platforms. Experimentally determined MEW values ranged between 5.6 and 16.5 ppm and depended on the HRMS platform, its working environment, the calibration procedure, and the analyte considered. The proposed procedure provides a fit-for-purpose MEW determination and prevents false detections.
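
    The MEW itself is just a symmetric m/z window, in parts per million, around the target mass; a short sketch, where the caffeine m/z is an illustrative example rather than a value from the paper:

        def extraction_window(mz, ppm):
            """m/z bounds of an extracted-ion chromatogram for a given ppm half-width."""
            delta = mz * ppm / 1e6
            return mz - delta, mz + delta

        # A 10 ppm window around m/z 195.0877 (protonated caffeine, illustrative)
        low, high = extraction_window(195.0877, 10.0)
        print(f"{low:.4f} - {high:.4f}")

    The paper's finding that adequate widths ranged from 5.6 to 16.5 ppm, depending on platform and analyte, is precisely what a chosen window width has to be validated against.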

  18. Earth Science Data Analytics: Preparing for Extracting Knowledge from Information

    Science.gov (United States)

    Kempler, Steven; Barbieri, Lindsay

    2016-01-01

    Data analytics is the process of examining large amounts of data of a variety of types to uncover hidden patterns, unknown correlations, and other useful information. Data analytics is a broad term that includes data analysis, as well as an understanding of the cognitive processes an analyst uses to understand problems and explore data in meaningful ways. Analytics also includes data extraction, transformation, and reduction, utilizing specific tools, techniques, and methods. Turning to data science, definitions of data science sound very similar to those of data analytics, which leads to much of the confusion between the two. But the skills needed for both (co-analyzing large amounts of heterogeneous data, understanding and utilizing relevant tools and techniques, and subject-matter expertise), although similar, serve different purposes. Data analytics takes a practitioner's approach, applying expertise and skills to solve issues and gain subject knowledge. Data science is more theoretical, research in itself, providing strategic actionable insights and new innovative methodologies. Earth Science Data Analytics (ESDA) is the process of examining, preparing, reducing, and analyzing large amounts of spatial (multi-dimensional), temporal, or spectral data using a variety of data types to uncover patterns, correlations, and other information, to better understand our Earth. The large variety of datasets (temporal and spatial differences, data types, formats, etc.) invites the need for data analytics skills that combine an understanding of the science domain with data preparation, reduction, and analysis techniques, from a practitioner's point of view. The application of these skills to ESDA is the focus of this presentation. The Earth Science Information Partners (ESIP) Federation Earth Science Data Analytics (ESDA) Cluster was created in recognition of the practical need to facilitate the co-analysis of large amounts of data and information for Earth science. Thus, from a to

  19. Puffed cereals with added chamomile – quantitative analysis of polyphenols and optimization of their extraction method

    Directory of Open Access Journals (Sweden)

    Tomasz Blicharski

    2017-05-01

    For most of the analyzed compounds, the highest yields were obtained by ultrasound-assisted extraction (UAE). The highest temperature during the ultrasonication process (60 °C) increased the efficiency of extraction without degradation of the polyphenols. UAE easily reaches extraction equilibrium and therefore permits shorter extraction times, reducing the energy input. Furthermore, UAE meets the requirements of 'Green Chemistry'.

  20. Testing the reliability of information extracted from ancient zircon

    Science.gov (United States)

    Kielman, Ross; Whitehouse, Martin; Nemchin, Alexander

    2015-04-01

    Studies combining zircon U-Pb chronology, trace element distributions, and O and Hf isotope systematics are a powerful way to understand the processes shaping Earth's evolution, especially in detrital populations where constraints from the original host are missing. Such studies of the Hadean detrital zircon population abundant in sedimentary rocks in Western Australia have involved analysis of an unusually large number of individual grains, but have also highlighted potential problems with the approach that only become apparent when multiple analyses are obtained from individual grains. A common feature of the Hadean and many early Archaean zircon populations is their apparent inhomogeneity, which reduces confidence in conclusions based on studies combining the chemistry and isotopic characteristics of zircon. In order to test the reliability of information extracted from early Earth zircon, we report results from one of the first in-depth multi-method studies of zircon from a relatively simple early Archaean magmatic rock, used as an analogue for ancient detrital zircon. The approach involves making multiple SIMS analyses in individual grains, comparable to the most advanced studies of detrital zircon populations. The investigated sample is a relatively undeformed, non-migmatitic ca. 3.8 Ga tonalite collected a few km south of the Isua Greenstone Belt, southwest Greenland. The extracted zircon grains fall into three groups based on the behavior of their U-Pb systems: (i) grains that show internally consistent and concordant ages and define an average age of 3805±15 Ma, taken to be the age of the rock; (ii) grains distributed close to the concordia line but with significant variability between multiple analyses, suggesting ancient Pb loss; and (iii) grains with multiple analyses distributed along a discordia pointing towards a zero intercept, indicating geologically recent Pb loss. This overall behavior has

  1. Zone analysis in biology articles as a basis for information extraction.

    Science.gov (United States)

    Mizuta, Yoko; Korhonen, Anna; Mullen, Tony; Collier, Nigel

    2006-06-01

    In the field of biomedicine, an overwhelming amount of experimental data has become available as a result of the high throughput of research in this domain. The amount of results reported has now grown beyond the limits of what can be managed by manual means. This makes it increasingly difficult for the researchers in this area to keep up with the latest developments. Information extraction (IE) in the biological domain aims to provide an effective automatic means to dynamically manage the information contained in archived journal articles and abstract collections and thus help researchers in their work. However, while considerable advances have been made in certain areas of IE, pinpointing and organizing factual information (such as experimental results) remains a challenge. In this paper we propose tackling this task by incorporating into IE information about rhetorical zones, i.e. classification of spans of text in terms of argumentation and intellectual attribution. As the first step towards this goal, we introduce a scheme for annotating biological texts for rhetorical zones and provide a qualitative and quantitative analysis of the data annotated according to this scheme. We also discuss our preliminary research on automatic zone analysis, and its incorporation into our IE framework.

  2. Extraction of CT dose information from DICOM metadata: automated Matlab-based approach.

    Science.gov (United States)

    Dave, Jaydev K; Gingold, Eric L

    2013-01-01

    The purpose of this study was to extract exposure parameters and dose-relevant indexes of CT examinations from information embedded in DICOM metadata. DICOM dose report files were identified and retrieved from a PACS. An automated software program was then used to extract, from the structured elements of the DICOM metadata in these files, the information relevant to exposure. Extracting information from DICOM metadata eliminated the potential errors inherent in techniques based on optical character recognition, yielding 100% accuracy.
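
    As one illustration of the general approach (not the authors' program), a DICOM radiation dose structured report can be walked recursively with pydicom to pull out named numeric values; the filename and the code meanings filtered for below are assumptions:

        import pydicom

        def walk_sr(item, out):
            """Recursively collect named numeric values from a DICOM SR content tree."""
            name = None
            if "ConceptNameCodeSequence" in item:
                name = item.ConceptNameCodeSequence[0].CodeMeaning
            if item.get("ValueType") == "NUM" and "MeasuredValueSequence" in item:
                out.append((name, float(item.MeasuredValueSequence[0].NumericValue)))
            for child in item.get("ContentSequence", []):
                walk_sr(child, out)

        ds = pydicom.dcmread("ct_dose_report.dcm")  # hypothetical dose report file
        values = []
        for item in ds.ContentSequence:
            walk_sr(item, values)
        for name, value in values:
            if name in ("CTDIvol", "DLP", "Mean CTDIvol"):  # assumed code meanings
                print(name, value)

    Because the values live in typed structured elements rather than in a rendered image, this kind of extraction avoids the misreads that optical character recognition is prone to.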

  3. Medicaid Analytic eXtract (MAX) General Information

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicaid Analytic eXtract (MAX) data is a set of person-level data files on Medicaid eligibility, service utilization, and payments. The MAX data are created to...

  4. Quantitative spatial analysis of the mouse brain lipidome by pressurized liquid extraction surface analysis

    DEFF Research Database (Denmark)

    Almeida, Reinaldo; Berzina, Zane; Christensen, Eva Arnspang

    2015-01-01

    extracted directly from tissue sections. PLESA uses a sealed and pressurized sampling probe that enables the use of chloroform-containing extraction solvents for efficient in situ lipid microextraction with a spatial resolution of 400 μm. Quantification of lipid species is achieved by the inclusion...

  5. Quantitative and qualitative proteome characteristics extracted from in-depth integrated genomics and proteomics analysis

    NARCIS (Netherlands)

    Low, T.Y.; van Heesch, S.; van den Toorn, H.; Giansanti, P.; Cristobal, A.; Toonen, P.; Schafer, S.; Hubner, N.; van Breukelen, B.; Mohammed, S.; Cuppen, E.; Heck, A.J.R.; Guryev, V.

    2013-01-01

    Quantitative and qualitative protein characteristics are regulated at genomic, transcriptomic, and posttranscriptional levels. Here, we integrated in-depth transcriptome and proteome analyses of liver tissues from two rat strains to unravel the interactions within and between these layers. We

  6. Augmenting Amyloid PET Interpretations With Quantitative Information Improves Consistency of Early Amyloid Detection.

    Science.gov (United States)

    Harn, Nicholas R; Hunt, Suzanne L; Hill, Jacqueline; Vidoni, Eric; Perry, Mark; Burns, Jeffrey M

    2017-08-01

    Establishing reliable methods for interpreting elevated cerebral amyloid-β plaque on PET scans is increasingly important for radiologists as the availability of PET imaging in clinical practice increases. We examined a 3-step method to detect plaque in cognitively normal older adults, focusing on the additive value of quantitative information during the PET scan interpretation process. Fifty-five 18F-florbetapir PET scans were evaluated by 3 experienced raters. Scans were first visually interpreted as having "elevated" or "nonelevated" plaque burden ("Visual Read"). Images were then processed using standardized quantitative analysis software (MIMneuro) to generate whole-brain and region-of-interest SUV ratios. This "Quantitative Read" was considered elevated if at least 2 of 6 regions of interest had an SUV ratio of more than 1.1. The final interpretation combined both visual and quantitative data ("VisQ Read"). Cohen kappa values were assessed as a measure of interpretation agreement. Plaque was elevated in 25.5% to 29.1% of the 165 total Visual Reads. Interrater agreement was strong (kappa = 0.73-0.82) and consistent with reported values. Quantitative Reads were elevated in 45.5% of participants. Final VisQ Reads changed from the initial Visual Reads in 16 interpretations (9.7%), most changing from "nonelevated" to "elevated." These changed interpretations showed lower plaque quantification than scans initially read as "elevated" that remained unchanged. Interrater variability improved for VisQ Reads with the addition of quantitative information (kappa = 0.88-0.96). Inclusion of quantitative information increases the consistency of PET scan interpretations for early detection of cerebral amyloid-β plaque accumulation.
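
    The Quantitative Read rule stated above is easy to express directly; a minimal sketch with invented SUV ratios:

        def quantitative_read(suv_ratios, threshold=1.1, min_regions=2):
            """Abstract's rule: elevated if at least 2 of 6 ROI SUV ratios exceed 1.1."""
            return sum(r > threshold for r in suv_ratios) >= min_regions

        roi_suv_ratios = [1.05, 1.14, 1.22, 0.98, 1.07, 1.12]  # illustrative values
        print("elevated" if quantitative_read(roi_suv_ratios) else "nonelevated")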

  7. Stability Test and Quantitative and Qualitative Analyses of the Amino Acids in Pharmacopuncture Extracted from Scolopendra subspinipes mutilans

    Science.gov (United States)

    Cho, GyeYoon; Han, KyuChul; Yoon, JinYoung

    2015-01-01

    Objectives: Scolopendra subspinipes mutilans (S. subspinipes mutilans) is known in traditional medicine and contains various amino acids, peptides, and proteins. The amino acids in the pharmacopuncture extracted from S. subspinipes mutilans were analyzed quantitatively and qualitatively by high-performance liquid chromatography (HPLC), using derivatization methods, over a 12-month period to confirm its stability. Methods: Amino acids of the pharmacopuncture extracted from S. subspinipes mutilans were derivatized with O-phthaldialdehyde (OPA) and 9-fluorenylmethoxycarbonyl chloride (FMOC) reagent and analyzed by HPLC. The amino acids were detected using a diode array detector (DAD) and a fluorescence detector (FLD), comparing a mixed amino acid standard (STD) to the pharmacopuncture from centipedes. Stability tests on the pharmacopuncture from centipedes were performed by HPLC under three conditions: a room-temperature test chamber, an acceleration test chamber, and a cold test chamber. Results: The pharmacopuncture from centipedes was prepared using the method of the Korean Pharmacopuncture Institute (KPI), and quantitative analyses showed it to contain 9 of the 16 amino acids in the mixed amino acid STD. The amounts of the amino acids in the pharmacopuncture from centipedes were 34.37 ppm of aspartate, 123.72 ppm of arginine, 170.63 ppm of alanine, 59.55 ppm of leucine and 57 ppm of lysine. The relative standard deviation (RSD %) results for the pharmacopuncture from centipedes had a maximum of 14.95% and a minimum of 1.795% across the room-temperature, acceleration, and cold test chamber stability tests. Conclusion: Stability tests and quantitative and qualitative analyses of the amino acids in the pharmacopuncture extracted from centipedes using derivatization methods were performed by HPLC. Through this research, we hope to determine the relationship between time and the

  8. Quantitative Study of Emotional Intelligence and Communication Levels in Information Technology Professionals

    Science.gov (United States)

    Hendon, Michalina

    2016-01-01

    This quantitative non-experimental correlational research analyzes the relationship between emotional intelligence and communication due to the lack of this research on information technology professionals in the U.S. One hundred and eleven (111) participants completed a survey that measures both the emotional intelligence and communication…

  9. Climate Change Education: Quantitatively Assessing the Impact of a Botanical Garden as an Informal Learning Environment

    Science.gov (United States)

    Sellmann, Daniela; Bogner, Franz X.

    2013-01-01

    Although informal learning environments have been studied extensively, ours is one of the first studies to quantitatively assess the impact of learning in botanical gardens on students' cognitive achievement. We observed a group of 10th graders participating in a one-day educational intervention on climate change implemented in a botanical garden.…

  10. Quantitative Analysis of Qualitative Information from Interviews: A Systematic Literature Review

    Science.gov (United States)

    Fakis, Apostolos; Hilliam, Rachel; Stoneley, Helen; Townend, Michael

    2014-01-01

    Background: A systematic literature review was conducted on mixed methods area. Objectives: The overall aim was to explore how qualitative information from interviews has been analyzed using quantitative methods. Methods: A contemporary review was undertaken and based on a predefined protocol. The references were identified using inclusion and…

  11. Quantitative and Qualitative Analysis of Nutrition and Food Safety Information in School Science Textbooks of India

    Science.gov (United States)

    Subba Rao, G. M.; Vijayapushapm, T.; Venkaiah, K.; Pavarala, V.

    2012-01-01

    Objective: To assess quantity and quality of nutrition and food safety information in science textbooks prescribed by the Central Board of Secondary Education (CBSE), India for grades I through X. Design: Content analysis. Methods: A coding scheme was developed for quantitative and qualitative analyses. Two investigators independently coded the…

  12. Strategy for the extraction of yeast DNA from artisan agave must for quantitative PCR analysis.

    Science.gov (United States)

    Kirchmayr, Manuel Reinhart; Segura-Garcia, Luis Eduardo; Flores-Berrios, Ericka Patricia; Gschaedler, Anne

    2011-11-01

    An efficient method for the direct extraction of yeast genomic DNA from agave must was developed. The optimized protocol, which was based on silica-adsorption of DNA on microcolumns, included an enzymatic cell wall degradation step followed by prolonged lysis with hot detergent. The resulting extracts were suitable templates for subsequent qPCR assays that quantified mixed yeast populations in artisan Mexican mezcal fermentations.

  13. Information Extraction with Character-level Neural Networks and Free Noisy Supervision

    OpenAIRE

    Meerkamp, Philipp; Zhou, Zhengyi

    2016-01-01

    We present an architecture for information extraction from text that augments an existing parser with a character-level neural network. The network is trained using a measure of consistency between extracted data and existing databases as a form of noisy supervision. Our architecture combines the ability of constraint-based information extraction systems to easily incorporate domain knowledge and constraints with the ability of deep neural networks to leverage large amounts of data to learn complex patterns.

  14. EXTRACTION AND QUANTITATIVE DETERMINATION OF ASCORBIC ACID FROM BANANA PEEL MUSA ACUMINATA ‘KEPOK’

    Directory of Open Access Journals (Sweden)

    Khairul Anwar Mohamad Said

    2016-04-01

    This paper discusses the extraction of an antioxidant compound, ascorbic acid (vitamin C), from banana peel using an ultrasound-assisted extraction (UAE) method. The banana used was Musa acuminata, known as "Pisang Kepok" in Malaysia. The investigation covers the effects of solvent-to-solid ratio (4.5, 5, and 10 ml/g), sonication time (15, 30, and 45 min), and temperature (30, 45, and 60 °C) on the extraction of ascorbic acid from the banana peel, to determine the optimum operating conditions. Of all the extract samples analyzed by redox titration with iodine solution, the highest yield was 0.04939 ± 0.00080 mg, obtained from extraction at 30 °C for 15 min with a 5 ml/g solvent-to-solid ratio. Keywords: Musa acuminata; ultrasound-assisted extraction; vitamin C; redox titration.
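
    For readers unfamiliar with the redox titration used here: iodine oxidizes ascorbic acid in a 1:1 molar ratio, so the vitamin C mass follows directly from the titre volume and the iodine molarity. A sketch, where the molarity and titre are assumptions chosen only to land in the reported order of magnitude:

        # 1:1 stoichiometry: C6H8O6 + I2 -> C6H6O6 + 2 HI
        M_ASCORBIC = 176.12  # molar mass of ascorbic acid, g/mol

        def ascorbic_mg(titre_ml, iodine_molarity):
            """Ascorbic acid (mg) equivalent to the iodine consumed at the endpoint."""
            moles_i2 = titre_ml / 1000.0 * iodine_molarity
            return moles_i2 * M_ASCORBIC * 1000.0

        # Illustrative: 0.56 ml of 0.0005 M iodine -> ~0.049 mg vitamin C,
        # the order of magnitude of the best extract reported above
        print(round(ascorbic_mg(0.56, 0.0005), 5))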

  15. Semantics-based information extraction for detecting economic events

    NARCIS (Netherlands)

    A.C. Hogenboom (Alexander); F. Frasincar (Flavius); K. Schouten (Kim); O. van der Meer

    2013-01-01

    As today's financial markets are sensitive to breaking news on economic events, accurate and timely automatic identification of events in news items is crucial. Unstructured news items originating from many heterogeneous sources have to be mined in order to extract knowledge useful for

  16. Tagline: Information Extraction for Semi-Structured Text Elements in Medical Progress Notes

    Science.gov (United States)

    Finch, Dezon Kile

    2012-01-01

    Text analysis has become an important research activity in the Department of Veterans Affairs (VA). Statistical text mining and natural language processing have been shown to be very effective for extracting useful information from medical documents. However, neither of these techniques is effective at extracting the information stored in…

  17. An Effective Approach to Biomedical Information Extraction with Limited Training Data

    Science.gov (United States)

    Jonnalagadda, Siddhartha

    2011-01-01

    In the current millennium, extensive use of computers and the internet caused an exponential increase in information. Few research areas are as important as information extraction, which primarily involves extracting concepts and the relations between them from free text. Limitations in the size of training data, lack of lexicons and lack of…

  18. Quantitative analysis of semivolatile organic compounds in selected fractions of air sample extracts by GC/MI-IR spectrometry

    International Nuclear Information System (INIS)

    Childers, J.W.; Wilson, N.K.; Barbour, R.K.

    1990-01-01

    The authors are currently investigating the capabilities of gas chromatography/matrix isolation infrared (GC/MI-IR) spectrometry for the determination of semivolatile organic compounds (SVOCs) in environmental air sample extracts. Their efforts are focused on the determination of SVOCs such as alkylbenzene positional isomers, which are difficult to separate chromatographically and to distinguish by conventional electron-impact ionization GC/mass spectrometry. They have performed a series of systematic experiments to identify sources of error in quantitative GC/MI-IR analyses. These experiments were designed to distinguish between errors due to instrument design or performance and errors that arise from some characteristic inherent to the GC/MI-IR technique, such as matrix effects. They have investigated repeatability as a function of several aspects of GC/MI-IR spectrometry, including sample injection, spectral acquisition, cryogenic disk movement, and matrix deposition. The precision, linearity, dynamic range, and detection limits of a commercial GC/MI-IR system for target SVOCs were determined and compared to those obtained with the system's flame ionization detector. The use of deuterated internal standards in the quantitative GC/MI-IR analysis of selected fractions of ambient air sample extracts will be demonstrated. They will also discuss the current limitations of the technique in quantitative analyses and suggest improvements for future consideration.

  19. Preliminary Phytochemical Screening, Quantitative Analysis of Alkaloids, and Antioxidant Activity of Crude Plant Extracts from Ephedra intermedia Indigenous to Balochistan.

    Science.gov (United States)

    Gul, Rahman; Jan, Syed Umer; Faridullah, Syed; Sherani, Samiullah; Jahan, Nusrat

    2017-01-01

    The aim of this study was to evaluate the antioxidant activity, screen the phytogenic chemical compounds, and assess the alkaloids present in E. intermedia, to substantiate its use in Pakistani folk medicine for the treatment of asthma and bronchitis. Antioxidant activity was analyzed using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) assay. Standard methods were used for the identification of cardiac glycosides, phenolic compounds, flavonoids, anthraquinones, and alkaloids. High-performance liquid chromatography (HPLC) was used for the quantitative determination of ephedrine alkaloids in E. intermedia. The separation was performed on a Shimadzu 10AVP (Shampack) column of 3.0 mm internal diameter (id) and 50 mm length, at a flow rate of 1 ml/min with detection at 210 nm. The methanolic extract showed antioxidant activity and powerful oxygen free-radical scavenging, and the IC50 for the E. intermedia extract was close to that of the reference standard, ascorbic acid. The HPLC method was used for the quantitative determination of ephedrine (E) and pseudoephedrine (PE) in 45 samples of one species collected from central habitats in three districts (Ziarat, Shairani, and Kalat) of Balochistan. Results showed that the average alkaloid contents in E. intermedia were as follows: PE (0.209%, 0.238%, and 0.22%) and E (0.0538%, 0.0666%, and 0.0514%).

  20. Optimisation of information influences on problems of consequences of Chernobyl accident and quantitative criteria for estimation of information actions

    International Nuclear Information System (INIS)

    Sobaleu, A.

    2004-01-01

    The consequences of the Chernobyl NPP accident are still very important for Belarus. About 2 million Belarusians live in districts polluted by Chernobyl radionuclides. Modern approaches to solving post-Chernobyl problems in Belarus assume more active use of information and educational actions to foster a new radiological culture. This would reduce internal radiation doses without spending large amounts of money and other resources. Experience of information work with the population affected by Chernobyl from 1986 to 2004 has shown that information and educational influences do not always reach their final aim: application of the received knowledge on radiation safety in practice and a change in lifestyle. Taking into account limited funds and facilities, information work should be optimized. The optimization can be achieved on the basis of quantitative estimates of the effectiveness of information actions. Two parameters can be used for these quantitative estimates: 1) the increase in knowledge of the population and experts on radiation safety, calculated by a new method based on applied information theory (the Mathematical Theory of Communication of Claude E. Shannon), and 2) the reduction of the internal radiation dose, calculated on the basis of measurements with a human irradiation counter (HIC) before and after an information or educational influence. (author)

  1. Quantitation of promethazine and metabolites in urine samples using on-line solid-phase extraction and column-switching

    Science.gov (United States)

    Song, Q.; Putcha, L.; Harm, D. L. (Principal Investigator)

    2001-01-01

    A chromatographic method for the quantitation of promethazine (PMZ) and its three metabolites in urine employing on-line solid-phase extraction and column-switching has been developed. The column-switching system described here uses an extraction column for the purification of PMZ and its metabolites from a urine matrix. The extraneous matrix interference was removed by flushing the extraction column with a gradient elution. The analytes of interest were then eluted onto an analytical column for further chromatographic separation using a mobile phase of greater solvent strength. This method is specific and sensitive, with a range of 3.75-1400 ng/ml for PMZ and 2.5-1400 ng/ml for the metabolites promethazine sulfoxide, monodesmethyl promethazine sulfoxide and monodesmethyl promethazine. The lower limits of quantitation (LLOQ) were 3.75 ng/ml with less than 6.2% C.V. for PMZ and 2.50 ng/ml with less than 11.5% C.V. for the metabolites, based on a signal-to-noise ratio of 10:1 or greater. The accuracy and precision were within ±11.8% in bias and not greater than 5.5% C.V. in intra- and inter-assay precision for PMZ and metabolites. Method robustness was investigated using a Plackett-Burman experimental design. The applicability of the analytical method for pharmacokinetic studies in humans is illustrated.

  2. A rapid extraction of landslide disaster information research based on GF-1 image

    Science.gov (United States)

    Wang, Sai; Xu, Suning; Peng, Ling; Wang, Zhiyi; Wang, Na

    2015-08-01

    In recent years, landslide disasters have occurred frequently because of seismic activity, bringing great harm to people's lives and drawing close attention from the state and society. In the field of geological disasters, landslide information extraction based on remote sensing has been controversial, but high-resolution remote sensing imagery, with its rich texture and geometric information, can effectively improve the accuracy of information extraction. It is therefore feasible to extract information on earthquake-triggered landslides with serious surface damage and large scale. Taking Wenchuan County as the study area, this paper uses a multi-scale segmentation method to extract landslide image objects from domestic GF-1 images and DEM data, employing the Estimation of Scale Parameter tool to determine the optimal segmentation scale. After comprehensively analyzing the characteristics of landslides in high-resolution imagery and selecting spectral, textural, geometric, and landform features, extraction rules were established to extract landslide disaster information. The extraction results show 20 landslides with a total area of 521279.31. Compared with visual interpretation results, the extraction accuracy is 72.22%. This study indicates that it is efficient and feasible to extract earthquake landslide disaster information based on high-resolution remote sensing, providing important technical support for post-disaster emergency investigation and disaster assessment.

  3. Perceived relevance and information needs regarding food topics and preferred information sources among Dutch adults: results of a quantitative consumer study

    NARCIS (Netherlands)

    Dillen, van S.M.E.; Hiddink, G.J.; Koelen, M.A.; Graaf, de C.; Woerkum, van C.M.J.

    2004-01-01

    Objective: For more effective nutrition communication, it is crucial to identify sources from which consumers seek information. Our purpose was to assess perceived relevance and information needs regarding food topics, and preferred information sources by means of quantitative consumer research.

  4. LC-MS/MS quantitative analysis of reducing carbohydrates in soil solutions extracted from crop rhizospheres.

    Science.gov (United States)

    McRae, G; Monreal, C M

    2011-06-01

    A simple, sensitive, and specific analytical method has been developed for the quantitative determination of 15 reducing carbohydrates in the soil solution of crop rhizosphere. Reducing carbohydrates were derivatized with 1-phenyl-3-methyl-5-pyrazolone, separated by reversed-phase high-performance liquid chromatography and detected by electrospray ionization tandem mass spectrometry. Lower limits of quantitation of 2 ng/mL were achieved for all carbohydrates. Quantitation was performed using peak area ratios (analyte/internal standard) and a calibration curve spiked in water with glucose-d(2) as the internal standard. Calibration curves showed excellent linearity over the range 2-100 ng/mL (10-1,000 ng/mL for glucose). The method has been tested with quality control samples spiked in water and soil solution samples obtained from the rhizosphere of wheat and canola and has been found to provide accurate and precise results.
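
    Quantitation against an internal standard reduces to a linear fit of the analyte/IS peak-area ratio versus concentration, then inverting the fit for unknowns. A minimal sketch with hypothetical calibration points:

    ```python
    import numpy as np

    # Hypothetical calibration: analyte/IS peak-area ratios vs concentration (ng/mL)
    conc = np.array([2.0, 5.0, 10.0, 25.0, 50.0, 100.0])
    area_ratio = np.array([0.041, 0.102, 0.205, 0.509, 1.021, 2.040])

    slope, intercept = np.polyfit(conc, area_ratio, 1)  # linear least squares

    def quantify(sample_ratio):
        """Back-calculate concentration from a sample's analyte/IS area ratio."""
        return (sample_ratio - intercept) / slope

    print(f"sample at ratio 0.30 -> {quantify(0.30):.1f} ng/mL")
    ```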

  5. Towards an information extraction and knowledge formation framework based on Shannon entropy

    Directory of Open Access Journals (Sweden)

    Iliescu Dragoș

    2017-01-01

    Full Text Available The subject of information quantity is approached in this paper, considering the specific domain of nonconforming product management as the information source. The work represents a case study: raw data were gathered from a heavy industrial works company, and information extraction and knowledge formation are considered herein. The method used for estimating information quantity is based on the Shannon entropy formula. The information and entropy spectra are decomposed and analysed for the extraction of specific information and the formation of knowledge-that. The results of the entropy analysis point out the information that the organisation involved needs to acquire, presented as a specific knowledge type.

  6. Information Management Processes for Extraction of Student Dropout Indicators in Courses in Distance Mode

    Directory of Open Access Journals (Sweden)

    Renata Maria Abrantes Baracho

    2016-04-01

    Full Text Available This research addresses the use of information management processes to extract student dropout indicators in distance mode courses. Distance education in Brazil aims to facilitate access to information. The MEC (Ministry of Education) announced, in the second semester of 2013, that the main obstacles faced by institutions offering courses in this mode were student dropout and the resistance of both educators and students to this mode. The research used a mixed methodology, qualitative and quantitative, to obtain student dropout indicators. The factors found and validated in this research were: lack of interest from students, insufficient training in the use of the virtual learning environment for students, structural problems in the schools chosen to offer the course, students without e-mail, incoherent answers to course activities, and lack of knowledge on the part of the student when using the computer tool. The scenario considered was a course offered in distance mode called Aluno Integrado (Integrated Student)

  7. Quantitative and Qualitative Proteome Characteristics Extracted from In-Depth Integrated Genomics and Proteomics Analysis

    NARCIS (Netherlands)

    Low, Teck Yew; van Heesch, Sebastiaan; van den Toorn, Henk; Giansanti, Piero; Cristobal, Alba; Toonen, Pim; Schafer, Sebastian; Huebner, Norbert; van Breukelen, Bas; Mohammed, Shabaz; Cuppen, Edwin; Heck, Albert J. R.; Guryev, Victor

    2013-01-01

    Quantitative and qualitative protein characteristics are regulated at genomic, transcriptomic, and post-transcriptional levels. Here, we integrated in-depth transcriptome and proteome analyses of liver tissues from two rat strains to unravel the interactions within and between these layers. We

  8. Extracting local information from crowds through betting markets

    Science.gov (United States)

    Weijs, Steven

    2015-04-01

    In this research, a set-up is considered in which users can bet against a forecasting agency to challenge their probabilistic forecasts. From an information theory standpoint, a reward structure is considered that either provides the forecasting agency with better information, paying the successful providers of information for their winning bets, or funds excellent forecasting agencies through users that think they know better. Especially for local forecasts, the approach may help to diagnose model biases and to identify local predictive information that can be incorporated in the models. The challenges and opportunities for implementing such a system in practice are also discussed.

  9. Qualitative and Quantitative Data on the Use of the Internet for Archaeological Information

    Directory of Open Access Journals (Sweden)

    Lorna-Jane Richardson

    2015-04-01

    Full Text Available These survey results are from an online survey of 577 UK-based archaeological volunteers, professional archaeologists and archaeological organisations. These data cover a variety of topics related to how and why people access the Internet for information about archaeology, including demographic information, activity relating to accessing information on archaeological topics, archaeological sharing and networking and the use of mobile phone apps and QR codes for public engagement. There is wide scope for further qualitative and quantitative analysis of these data.

  10. Membrane-based microchannel device for continuous quantitative extraction of dissolved free sulfide from water and from oil.

    Science.gov (United States)

    Toda, Kei; Ebisu, Yuki; Hirota, Kazutoshi; Ohira, Shin-Ichi

    2012-09-05

    Underground fluids are important natural sources of drinking water, geothermal energy, and oil-based fuels. To facilitate the surveying of such underground fluids, a novel microchannel extraction device was investigated for in-line continuous analysis and flow injection analysis of sulfide levels in water and in oil. Of the four designs investigated, the honeycomb-patterned microchannel extraction (HMCE) device was found to offer the most effective liquid-liquid extraction. In the HMCE device, a thin silicone membrane was sandwiched between two polydimethylsiloxane plates in which honeycomb-patterned microchannels had been fabricated. The identical patterns on the two plates were accurately aligned. The extracted sulfide was detected by monitoring the fluorescence quenching of fluorescein mercuric acetate (FMA). The sulfide extraction efficiencies from water and oil samples of the HMCE device and of three other designs (two annular and one rectangular channel) were examined theoretically and experimentally. The best performance was obtained with the HMCE device because of its thin sample layer (small diffusion distance) and large interface area. Quantitative extraction from both water and oil could be obtained using the HMCE device. The estimated limit of detection for continuous monitoring was 0.05 μM, and sulfide concentrations in the range of 0.15-10 μM could be determined when the acceptor was a 5 μM alkaline FMA solution. The method was applied to natural water analysis in flow injection mode, and the data agreed with those obtained using headspace gas chromatography-flame photometric detection. The analysis of hydrogen sulfide levels in prepared oil samples was also performed. The proposed device is expected to be used for real-time surveys of oil wells and groundwater wells. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Qualitative and quantitative analysis of Dibenzofuran, Alkyldibenzofurans, and Benzo[b]naphthofurans in crude oils and source rock extracts

    Science.gov (United States)

    Li, Meijun; Ellis, Geoffrey S.

    2015-01-01

    Dibenzofuran (DBF), its alkylated homologues, and benzo[b]naphthofurans (BNFs) are common oxygen-heterocyclic aromatic compounds in crude oils and source rock extracts. A series of positional isomers of alkyldibenzofuran and benzo[b]naphthofuran were identified in mass chromatograms by comparison with internal standards and standard retention indices. The response factors of dibenzofuran in relation to internal standards were obtained by gas chromatography-mass spectrometry analyses of a set of mixed solutions with different concentration ratios. Perdeuterated dibenzofuran and dibenzothiophene are optimal internal standards for quantitative analyses of furan compounds in crude oils and source rock extracts. The average concentration of the total DBFs in oils derived from siliciclastic lacustrine rock extracts from the Beibuwan Basin, South China Sea, was 518 μg/g, which is about 5 times that observed in the oils from carbonate source rocks in the Tarim Basin, Northwest China. The BNFs occur ubiquitously in source rock extracts and related oils of various origins. The results of this work suggest that the relative abundance of benzo[b]naphthofuran isomers, that is, the benzo[b]naphtho[2,1-d]furan/{benzo[b]naphtho[2,1-d]furan + benzo[b]naphtho[1,2-d]furan} ratio, may be a potential molecular geochemical parameter to indicate oil migration pathways and distances.

  12. Quantitative HPLC analysis of some marker compounds of hydroalcoholic extracts of Piper aduncum L

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Laura C.P.; Nunomura, Sergio M. [Instituto Nacional de Pesquisas da Amazonia (INPA), Manaus, AM (Brazil). Coordenacao de Pesquisas em Produtos Naturais]. E-mail: smnunomu@inpa.gov.br; Mause, Robert [Siema Eco Essencias da Amazonia Ltda., Manaus, AM (Brazil)

    2005-11-15

    High performance liquid chromatography is one of the major analytical techniques used in the quality control of phytotherapics. This work describes a HPLC method used to determine the major components present in different hydroalcoholic extracts of aerial parts of Piper aduncum. (author)

  13. Quantitative HPLC analysis of some marker compounds of hydroalcoholic extracts of Piper aduncum L

    International Nuclear Information System (INIS)

    Oliveira, Laura C.P.; Nunomura, Sergio M.

    2005-01-01

    High performance liquid chromatography is one of the major analytical techniques used in the quality control of phytotherapics. This work describes a HPLC method used to determine the major components present in different hydroalcoholic extracts of aerial parts of Piper aduncum. (author)

  14. Spoken Language Understanding Systems for Extracting Semantic Information from Speech

    CERN Document Server

    Tur, Gokhan

    2011-01-01

    Spoken language understanding (SLU) is an emerging field in between speech and language processing, investigating human/ machine and human/ human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances and its applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, usin

  15. Information Risk Management: Qualitative or Quantitative? Cross industry lessons from medical and financial fields

    Directory of Open Access Journals (Sweden)

    Upasna Saluja

    2012-06-01

    Full Text Available Enterprises across the world are taking a hard look at their risk management practices. A number of qualitative and quantitative models and approaches are employed by risk practitioners to keep risk under check. As a norm, most organizations end up choosing the more flexible, easier-to-deploy, and customizable qualitative models of risk assessment. In practice, such models often call upon practitioners to make qualitative judgments on a relative rating scale, which leaves considerable room for error, bias, and subjectivity. Under the quantitative risk analysis approach, on the other hand, the estimation of risk is connected with the application of numerical measures of some kind. Medical risk management models lend themselves as ideal candidates for deriving lessons for information security risk management. This considerably developed understanding of risk management from the medical field, especially survival analysis, can be used for handling the risks that information infrastructures face. Similarly, the financial risk management discipline prides itself on perhaps the most quantifiable of risk management models, for example those for market risk and credit risk. Information security risk management can make risk measurement more objective and quantitative by referring to the credit risk approach. During the recent financial crisis, many investors lost money and many financial institutions went bankrupt because they did not apply the basic principles of risk management. Learning from the financial crisis provides some valuable lessons for information risk management.

  16. Sifting Through Chaos: Extracting Information from Unstructured Legal Opinions.

    Science.gov (United States)

    Oliveira, Bruno Miguel; Guimarães, Rui Vasconcellos; Antunes, Luís; Rodrigues, Pedro Pereira

    2018-01-01

    Abiding by the law is, in some cases, a delicate balance between the rights of different players. Re-using health records is such a case. While the law grants reuse rights to public administration documents, which include health records produced in public health institutions, it also grants privacy to personal records. To safeguard correct usage of data, public hospitals in Portugal employ jurists who are responsible for granting or withholding access rights to health records. To help decision making, these jurists can consult the legal opinions issued by the national committee on public administration documents usage. While these legal opinions are of undeniable value, due to their doctrinal contribution, they are only available in a format best suited for printing, forcing individual consultation of each document, with no option whatsoever of clustered search, filtering, or indexing, which are standard operations nowadays in a document management system. When having to decide on tens of data requests a day, it becomes unfeasible to consult the hundreds of legal opinions already available. With the objective of creating a modern document management system, we devised an open, platform-agnostic system that extracts and compiles the legal opinions, extracts their contents, and produces metadata, allowing for fast searching and filtering of said legal opinions.

  17. Study of the Qualitative and Semi-quantitative Analysis of Grape Seed Extract by HPLC

    OpenAIRE

    Sorolla, Sílvia; Flores, Antònia; Canals Parelló, Trini; Cantero Gómez, María Rosa; Font Vallès, Joaquim; Ollé Otero, Lluís; Bacardit Dalmases, Anna

    2018-01-01

    The main aim of this study is to carry out a qualitative and semiquantitative analysis of tannin extracts as an alternative to the official analysis method ISO 14088 – IUC 32, so that a correlation between the two methods is established. From the point of view of the chemical composition, tannins are classified into two major groups: i) condensed tannins, also called flavanols or catechins, and ii) hydrolysable tannins, also called pyrogallic tannins. Today, the most widely used convent...

  18. A Validated Reverse Phase HPLC Analytical Method for Quantitation of Glycoalkaloids in Solanum lycocarpum and Its Extracts

    Directory of Open Access Journals (Sweden)

    Renata Fabiane Jorge Tiossi

    2012-01-01

    Full Text Available Solanum lycocarpum (Solanaceae) is native to the Brazilian Cerrado. Fruits of this species contain the glycoalkaloids solasonine (SN) and solamargine (SM), which display antiparasitic and anticancer properties. A method has been developed for the extraction and HPLC-UV analysis of SN and SM in different parts of S. lycocarpum, mainly comprising ripe and unripe fruits, leaf, and stem. This analytical method was validated and gave good detection response, with linearity over a dynamic range of 0.77–1000.00 μg mL⁻¹ and recovery in the range of 80.92–91.71%, allowing a reliable quantitation of the target compounds. Unripe fruits displayed higher concentrations of glycoalkaloids (1.04% ± 0.01 of SN and 0.69% ± 0.00 of SM) than the ripe fruits (0.83% ± 0.02 of SN and 0.60% ± 0.01 of SM). Quantitation of glycoalkaloids in the alkaloidic extract gave 45.09% ± 1.14 of SN and 44.37% ± 0.60 of SM.

  19. Information extraction from FN plots of tungsten microemitters

    Energy Technology Data Exchange (ETDEWEB)

    Mussa, Khalil O. [Department of Physics, Mu'tah University, Al-Karak (Jordan); Mousa, Marwan S., E-mail: mmousa@mutah.edu.jo [Department of Physics, Mu'tah University, Al-Karak (Jordan); Fischer, Andreas, E-mail: andreas.fischer@physik.tu-chemnitz.de [Institut für Physik, Technische Universität Chemnitz, Chemnitz (Germany)

    2013-09-15

    Tungsten-based microemitter tips have been prepared both clean and coated with dielectric materials. For clean tungsten tips, apex radii have been varied ranging from 25 to 500 nm. These tips were manufactured by electrochemically etching a 0.1 mm diameter high purity (99.95%) tungsten wire at the meniscus of a two molar NaOH solution. The composite micro-emitters considered here consist of a tungsten core coated with different dielectric materials, such as magnesium oxide (MgO), sodium hydroxide (NaOH), tetracyanoethylene (TCNE), and zinc oxide (ZnO). It is worth noting here that the rather unconventional NaOH coating has shown several interesting properties. Various properties of these emitters were measured, including current–voltage (IV) characteristics and the physical shape of the tips. A conventional field emission microscope (FEM) with a tip (cathode)–screen (anode) separation standardized at 10 mm was used to electrically characterize the electron emitters. The system was evacuated down to a base pressure of ∼10⁻⁸ mbar when baked at up to ∼180°C overnight. This allowed measurements of typical field electron emission (FE) characteristics, namely the IV characteristics and the emission images on a conductive phosphor screen (the anode). Mechanical characterization has been performed through a FEI scanning electron microscope (SEM). Within this work, the mentioned experimental results are connected to the theory for analyzing Fowler–Nordheim (FN) plots. We compared and evaluated the data extracted from clean tungsten tips of different radii and determined deviations between the results of the different extraction methods applied. In particular, we derived the apex radii of several clean and coated tungsten tips by both SEM imaging and analyzing FN plots. The aim of this analysis is to support the ongoing discussion on recently developed improvements of the theory for analyzing FN plots related to metal field electron emitters, which in
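
    An FN analysis replots the measured IV data as ln(I/V²) against 1/V, which elementary Fowler–Nordheim theory predicts to be nearly linear; the fitted slope then constrains emitter parameters such as the field conversion factor and, with an assumed work function, the apex radius. A minimal sketch on hypothetical IV data (not values from this work):

    ```python
    import numpy as np

    # Hypothetical current-voltage data from a field electron emitter
    V = np.array([800.0, 900.0, 1000.0, 1100.0, 1200.0])  # applied voltage (V)
    I = np.array([2e-9, 1.2e-8, 4.8e-8, 1.5e-7, 3.9e-7])  # emission current (A)

    # Fowler-Nordheim coordinates: ln(I/V^2) versus 1/V should be near-linear
    x = 1.0 / V
    y = np.log(I / V**2)
    slope, intercept = np.polyfit(x, y, 1)

    # In elementary FN theory the slope scales with phi^(3/2) over the field
    # conversion factor, so an assumed work function (e.g. ~4.5 eV for clean
    # tungsten) lets the slope constrain the conversion factor and apex radius.
    print(f"FN-plot slope = {slope:.3e} V, intercept = {intercept:.3f}")
    ```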

  20. Information extraction from FN plots of tungsten microemitters

    International Nuclear Information System (INIS)

    Mussa, Khalil O.; Mousa, Marwan S.; Fischer, Andreas

    2013-01-01

    Tungsten-based microemitter tips have been prepared both clean and coated with dielectric materials. For clean tungsten tips, apex radii have been varied ranging from 25 to 500 nm. These tips were manufactured by electrochemically etching a 0.1 mm diameter high purity (99.95%) tungsten wire at the meniscus of a two molar NaOH solution. The composite micro-emitters considered here consist of a tungsten core coated with different dielectric materials, such as magnesium oxide (MgO), sodium hydroxide (NaOH), tetracyanoethylene (TCNE), and zinc oxide (ZnO). It is worth noting here that the rather unconventional NaOH coating has shown several interesting properties. Various properties of these emitters were measured, including current–voltage (IV) characteristics and the physical shape of the tips. A conventional field emission microscope (FEM) with a tip (cathode)–screen (anode) separation standardized at 10 mm was used to electrically characterize the electron emitters. The system was evacuated down to a base pressure of ∼10⁻⁸ mbar when baked at up to ∼180°C overnight. This allowed measurements of typical field electron emission (FE) characteristics, namely the IV characteristics and the emission images on a conductive phosphor screen (the anode). Mechanical characterization has been performed through a FEI scanning electron microscope (SEM). Within this work, the mentioned experimental results are connected to the theory for analyzing Fowler–Nordheim (FN) plots. We compared and evaluated the data extracted from clean tungsten tips of different radii and determined deviations between the results of the different extraction methods applied. In particular, we derived the apex radii of several clean and coated tungsten tips by both SEM imaging and analyzing FN plots. The aim of this analysis is to support the ongoing discussion on recently developed improvements of the theory for analyzing FN plots related to metal field electron emitters, which in

  1. Identifying Contributors of DNA Mixtures by Means of Quantitative Information of STR Typing

    DEFF Research Database (Denmark)

    Tvedebrink, Torben; Eriksen, Poul Svante; Mogensen, Helle Smidt

    2012-01-01

    Estimating the weight of evidence in forensic genetics is often done in terms of a likelihood ratio, LR. The LR evaluates the probability of the observed evidence under competing hypotheses. Most often, the probabilities used in the LR only consider the evidence from the genomic variation identified using polymorphic genetic markers. However, modern typing techniques supply additional quantitative data, which contain very important information about the observed evidence. This is particularly true for cases of DNA mixtures, where more than one individual has contributed to the observed biological stain. This article presents a method for including the quantitative information of short tandem repeat (STR) DNA mixtures in the LR. Also, an efficient algorithmic method for finding the best matching combination of DNA mixture profiles is derived and implemented in an on-line tool for two

  2. High-resolution gas chromatography/mass spectrometry method for characterization and quantitative analysis of ginkgolic acids in Ginkgo biloba plants, extracts, and dietary supplements

    Science.gov (United States)

    A high-resolution GC/MS method with Selected Ion Monitoring (SIM), focusing on the characterization and quantitative analysis of ginkgolic acids (GAs) in Ginkgo biloba L. plant materials, extracts, and commercial products, was developed and validated. The method involved sample extraction with (1:1) meth

  3. Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction

    Science.gov (United States)

    Zang, Y.; Yang, B.

    2018-04-01

    3D laser technology is widely used to collect the surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. To solve this problem, most existing methods extract important points based on a fixed scale. However, the geometric features of a 3D object come from various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply the perceptual metric Just-Noticeable-Difference to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for the optimal information extraction of objects.

  4. OPTIMAL INFORMATION EXTRACTION OF LASER SCANNING DATASET BY SCALE-ADAPTIVE REDUCTION

    Directory of Open Access Journals (Sweden)

    Y. Zang

    2018-04-01

    Full Text Available 3D laser technology is widely used to collect the surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. To solve this problem, most existing methods extract important points based on a fixed scale. However, the geometric features of a 3D object come from various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply the perceptual metric Just-Noticeable-Difference to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for the optimal information extraction of objects.

  5. A convenient method for the quantitative determination of elemental sulfur in coal by HPLC analysis of perchloroethylene extracts

    Science.gov (United States)

    Buchanan, D.H.; Coombs, K.J.; Murphy, P.M.; Chaven, C.

    1993-01-01

    A convenient method for the quantitative determination of elemental sulfur in coal is described. Elemental sulfur is extracted from the coal with hot perchloroethylene (PCE) (tetrachloroethene, C2Cl4) and quantitatively determined by HPLC analysis on a C18 reverse-phase column using UV detection. Calibration solutions were prepared from sublimed sulfur. Results of quantitative HPLC analyses agreed with those of a chemical/spectroscopic analysis. The HPLC method was found to be linear over the concentration range of 6 × 10⁻⁴ to 2 × 10⁻² g/L. The lower detection limit was 4 × 10⁻⁴ g/L, which for a coal sample of 20 g is equivalent to 0.0006% by weight of coal. Since elemental sulfur is known to react slowly with hydrocarbons at the temperature of boiling PCE, standard solutions of sulfur in PCE were heated with coals from the Argonne Premium Coal Sample program. Pseudo-first-order uptake of sulfur by the coals was observed over several weeks of heating. For the Illinois No. 6 premium coal, the rate constant for sulfur uptake was 9.7 × 10⁻⁷ s⁻¹, too small for retrograde reactions between solubilized sulfur and coal to cause a significant loss in elemental sulfur isolated during the analytical extraction. No elemental sulfur was produced when the following pure compounds were heated to reflux in PCE for up to 1 week: benzyl sulfide, octyl sulfide, thiane, thiophene, benzothiophene, dibenzothiophene, sulfuric acid, or ferrous sulfate. A slurry of mineral pyrite in PCE contained elemental sulfur, which increased in concentration with heating time. © 1993 American Chemical Society.
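
    The pseudo-first-order rate constant is obtained by fitting ln(C/C0) = -kt to sulfur concentrations measured over time. A minimal sketch, with hypothetical concentrations constructed to reproduce a rate constant of the reported magnitude:

    ```python
    import numpy as np

    # Hypothetical sulfur concentrations (g/L) in PCE refluxed over coal,
    # constructed so the fit lands near the reported rate constant.
    t_days = np.array([0.0, 3.0, 7.0, 14.0, 21.0])
    C = np.array([1.00e-2, 7.78e-3, 5.56e-3, 3.09e-3, 1.72e-3])

    t = t_days * 86400.0                       # days -> seconds
    slope = np.polyfit(t, np.log(C / C[0]), 1)[0]
    k = -slope                                 # pseudo-first-order: ln(C/C0) = -k t
    print(f"k = {k:.2e} s^-1")                 # ~9.7e-7 s^-1
    ```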

  6. A method for normalizing pathology images to improve feature extraction for quantitative pathology

    International Nuclear Information System (INIS)

    Tam, Allison; Barker, Jocelyn; Rubin, Daniel

    2016-01-01

    Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. Their method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer-aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.
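
    A rough sketch of the two ICHE stages, intensity centering followed by CLAHE, using scikit-image's standard CLAHE in place of the authors' modified version; the centering rule and parameter values here are illustrative assumptions, not the published implementation:

    ```python
    import numpy as np
    from skimage import exposure

    def iche_like_normalize(img, target_centroid=0.5, clip_limit=0.01):
        """Sketch of intensity centering + CLAHE on a float image in [0, 1]."""
        # Intensity centering: scale so the histogram centroid moves to a
        # common point shared by all images in the dataset.
        centered = np.clip(img * (target_centroid / img.mean()), 0.0, 1.0)
        # Contrast-limited adaptive histogram equalization (standard CLAHE
        # here; the paper applies a modified version).
        return exposure.equalize_adapthist(centered, clip_limit=clip_limit)
    ```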

  7. A method for normalizing pathology images to improve feature extraction for quantitative pathology

    Energy Technology Data Exchange (ETDEWEB)

    Tam, Allison [Stanford Institutes of Medical Research Program, Stanford University School of Medicine, Stanford, California 94305 (United States); Barker, Jocelyn [Department of Radiology, Stanford University School of Medicine, Stanford, California 94305 (United States); Rubin, Daniel [Department of Radiology, Stanford University School of Medicine, Stanford, California 94305 and Department of Medicine (Biomedical Informatics Research), Stanford University School of Medicine, Stanford, California 94305 (United States)

    2016-01-15

    Purpose: With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. Methods: To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. Their method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. Results: The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as computer-aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. Conclusions: ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.

  8. Information extraction from FN plots of tungsten microemitters.

    Science.gov (United States)

    Mussa, Khalil O; Mousa, Marwan S; Fischer, Andreas

    2013-09-01

    Tungsten-based microemitter tips have been prepared both clean and coated with dielectric materials. For clean tungsten tips, apex radii have been varied ranging from 25 to 500 nm. These tips were manufactured by electrochemically etching a 0.1 mm diameter high purity (99.95%) tungsten wire at the meniscus of a two molar NaOH solution. The composite micro-emitters considered here consist of a tungsten core coated with different dielectric materials, such as magnesium oxide (MgO), sodium hydroxide (NaOH), tetracyanoethylene (TCNE), and zinc oxide (ZnO). It is worth noting here that the rather unconventional NaOH coating has shown several interesting properties. Various properties of these emitters were measured, including current-voltage (IV) characteristics and the physical shape of the tips. A conventional field emission microscope (FEM) with a tip (cathode)-screen (anode) separation standardized at 10 mm was used to electrically characterize the electron emitters. The system was evacuated down to a base pressure of ∼10⁻⁸ mbar when baked at up to ∼180 °C overnight. This allowed measurements of typical field electron emission (FE) characteristics, namely the IV characteristics and the emission images on a conductive phosphor screen (the anode). Mechanical characterization has been performed through a FEI scanning electron microscope (SEM). Within this work, the mentioned experimental results are connected to the theory for analyzing Fowler-Nordheim (FN) plots. We compared and evaluated the data extracted from clean tungsten tips of different radii and determined deviations between the results of the different extraction methods applied. In particular, we derived the apex radii of several clean and coated tungsten tips by both SEM imaging and analyzing FN plots. The aim of this analysis is to support the ongoing discussion on recently developed improvements of the theory for analyzing FN plots related to metal field electron emitters, which in particular

  9. Study on methods and techniques of aeroradiometric weak information extraction for sandstone-hosted uranium deposits based on GIS

    International Nuclear Information System (INIS)

    Han Shaoyang; Ke Dan; Hou Huiqun

    2005-01-01

    The extraction of weak information is one of the important research areas in current sandstone-type uranium prospecting in China. This paper introduces the concept of aeroradiometric weak information extraction, discusses the theories of aeroradiometric weak information formation, and establishes some effective mathematical models for weak information extraction. The models for weak information extraction are realized on a GIS software platform, and application tests of weak information extraction are completed in known uranium mineralized areas. Research results prove that the prospective areas of sandstone-type uranium deposits can be rapidly delineated by extracting aeroradiometric weak information. (authors)

  10. Extraction of Graph Information Based on Image Contents and the Use of Ontology

    Science.gov (United States)

    Kanjanawattana, Sarunya; Kimura, Masaomi

    2016-01-01

    A graph is an effective form of data representation used to summarize complex information. Explicit information such as the relationship between the X- and Y-axes can be easily extracted from a graph by applying human intelligence. However, implicit knowledge such as information obtained from other related concepts in an ontology also resides in…

  11. Extracting information of fixational eye movements through pupil tracking

    Science.gov (United States)

    Xiao, JiangWei; Qiu, Jian; Luo, Kaiqin; Peng, Li; Han, Peng

    2018-01-01

    Human eyes are never completely static, even when fixating on a stationary point. These irregular, small movements, which consist of micro-tremors, micro-saccades, and drifts, prevent the fading of the images that enter our eyes. The importance of researching fixational eye movements has been experimentally demonstrated recently. However, the characteristics of fixational eye movements and their roles in the visual process have not been explained clearly, because until now these signals could hardly be extracted completely. In this paper, we developed a new eye movement detection device with a high-speed camera. The device includes a beam splitter mirror, an infrared light source, and a high-speed digital video camera with a frame rate of 200 Hz. To avoid the influence of head shaking, we made the device wearable by fixing the camera on a safety helmet. Using this device, pupil tracking experiments were conducted. By localizing the pupil center and performing spectrum analysis, the envelope frequency spectra of micro-saccades, micro-tremors, and drifts are shown clearly. The experimental results show that the device is feasible and effective, so that it can be applied in further characteristic analysis.
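
    The spectrum-analysis step amounts to computing the frequency spectrum of the pupil-center trace sampled at the camera's 200 Hz frame rate. A minimal sketch on a synthetic trace (the drift and tremor parameters are invented; real traces come from the pupil localization step):

    ```python
    import numpy as np

    fs = 200.0                       # camera frame rate (Hz), as in the device above
    t = np.arange(0.0, 2.0, 1.0 / fs)
    rng = np.random.default_rng(0)

    # Synthetic horizontal pupil-center trace: slow drift plus an ~85 Hz tremor
    trace = 0.05 * t + 0.002 * np.sin(2 * np.pi * 85.0 * t)
    trace += rng.normal(0.0, 5e-4, t.size)

    # Amplitude spectrum of the detrended trace; the tremor appears near 85 Hz
    detrended = trace - np.polyval(np.polyfit(t, trace, 1), t)
    freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
    amp = np.abs(np.fft.rfft(detrended)) / t.size
    print(f"dominant component: {freqs[np.argmax(amp[1:]) + 1]:.1f} Hz")
    ```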

  12. Extracting Social Networks and Contact Information From Email and the Web

    National Research Council Canada - National Science Library

    Culotta, Aron; Bekkerman, Ron; McCallum, Andrew

    2005-01-01

    ...-suited for such information extraction tasks. By recursively calling itself on new people discovered on the Web, the system builds a social network with multiple degrees of separation from the user...

  13. Lithium NLP: A System for Rich Information Extraction from Noisy User Generated Text on Social Media

    OpenAIRE

    Bhargava, Preeti; Spasojevic, Nemanja; Hu, Guoning

    2017-01-01

    In this paper, we describe the Lithium Natural Language Processing (NLP) system - a resource-constrained, high- throughput and language-agnostic system for information extraction from noisy user generated text on social media. Lithium NLP extracts a rich set of information including entities, topics, hashtags and sentiment from text. We discuss several real world applications of the system currently incorporated in Lithium products. We also compare our system with existing commercial and acad...

  14. Quantitative methodology to extract regional magnetotelluric impedances and determine the dimension of the conductivity structure

    Energy Technology Data Exchange (ETDEWEB)

    Groom, R [PetRos EiKon Incorporated, Ontario (Canada); Kurtz, R; Jones, A; Boerner, D [Geological Survey of Canada, Ontario (Canada)

    1996-05-01

    This paper describes a systematic method for determining the appropriate dimensionality of magnetotelluric (MT) data from a site, and illustrates the application of this method to analyze both synthetic data and real data. Additionally, it describes the extraction of regional impedance responses from multiple sites. This method was examined extensively with synthetic data, and proven to be successful. It was demonstrated for two neighboring sites that the analysis methodology can be extremely useful in unraveling the bulk regional response when hidden by strong three-dimensional effects. Although there may still be some uncertainties remaining in the true levels for the regional responses for stations LIT000 and LITW02, the analysis has provided models which not only fit the data but are consistent for neighboring sites. It was suggested from these data that the stations are seeing significantly different structures. 12 refs.

  15. Overview of ImageCLEF 2017: information extraction from images

    OpenAIRE

    Ionescu, Bogdan; Müller, Henning; Villegas, Mauricio; Arenas, Helbert; Boato, Giulia; Dang Nguyen, Duc Tien; Dicente Cid, Yashin; Eickhoff, Carsten; Seco de Herrera, Alba G.; Gurrin, Cathal; Islam, Bayzidul; Kovalev, Vassili; Liauchuk, Vitali; Mothe, Josiane; Piras, Luca

    2017-01-01

    This paper presents an overview of the ImageCLEF 2017 evaluation campaign, an event that was organized as part of the CLEF (Conference and Labs of the Evaluation Forum) labs 2017. ImageCLEF is an ongoing initiative (started in 2003) that promotes the evaluation of technologies for annotation, indexing and retrieval for providing information access to collections of images in various usage scenarios and domains. In 2017, the 15th edition of ImageCLEF, three main tasks were proposed and one pil...

  16. Statistical techniques to extract information during SMAP soil moisture assimilation

    Science.gov (United States)

    Kolassa, J.; Reichle, R. H.; Liu, Q.; Alemohammad, S. H.; Gentine, P.

    2017-12-01

    Statistical techniques permit the retrieval of soil moisture estimates in a model climatology while retaining the spatial and temporal signatures of the satellite observations. As a consequence, the need for bias correction prior to an assimilation of these estimates is reduced, which could result in a more effective use of the independent information provided by the satellite observations. In this study, a statistical neural network (NN) retrieval algorithm is calibrated using SMAP brightness temperature observations and modeled soil moisture estimates (similar to those used to calibrate the SMAP Level 4 DA system). Daily values of surface soil moisture are estimated using the NN and then assimilated into the NASA Catchment model. The skill of the assimilation estimates is assessed based on a comprehensive comparison to in situ measurements from the SMAP core and sparse network sites as well as the International Soil Moisture Network. The NN retrieval assimilation is found to significantly improve the model skill, particularly in areas where the model does not represent processes related to agricultural practices. Additionally, the NN method is compared to assimilation experiments using traditional bias correction techniques. The NN retrieval assimilation is found to more effectively use the independent information provided by SMAP resulting in larger model skill improvements than assimilation experiments using traditional bias correction techniques.

  17. MetaFluxNet: the management of metabolic reaction information and quantitative metabolic flux analysis.

    Science.gov (United States)

    Lee, Dong-Yup; Yun, Hongsoek; Park, Sunwon; Lee, Sang Yup

    2003-11-01

    MetaFluxNet is a program package for managing information on metabolic reaction networks and for quantitatively analyzing metabolic fluxes in an interactive and customized way. It allows users to interpret and examine metabolic behavior in response to genetic and/or environmental modifications. As a result, quantitative in silico simulations of metabolic pathways can be carried out to understand the metabolic status and to design metabolic engineering strategies. The main features of the program include a well-developed model construction environment, a user-friendly interface for metabolic flux analysis (MFA), comparative MFA of strains having different genotypes under various environmental conditions, and automated pathway layout creation. http://mbel.kaist.ac.kr/ A manual for MetaFluxNet is available as a PDF file.
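
    At its core, metabolic flux analysis solves the steady-state mass balance S·v = 0 for unmeasured fluxes given measured ones. A toy sketch of that balance (the three-reaction network is invented for illustration and says nothing about MetaFluxNet's formats or interface):

    ```python
    import numpy as np

    # Toy network: A -> B (v1), B -> C (v2), B -> D (v3)
    # Steady-state balance on internal metabolite B: v1 - v2 - v3 = 0
    S = np.array([[1.0, -1.0, -1.0]])  # stoichiometric matrix (metabolites x fluxes)

    # Suppose v1 (substrate uptake) and v3 (byproduct secretion) are measured:
    v1, v3 = 10.0, 3.0

    # Solve the balance for the remaining flux
    v2 = v1 - v3
    v = np.array([v1, v2, v3])
    assert np.allclose(S @ v, 0.0)     # steady-state mass balance holds
    print("flux distribution:", v)
    ```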

  18. Research on Crowdsourcing Emergency Information Extraction of Based on Events' Frame

    Science.gov (United States)

    Yang, Bo; Wang, Jizhou; Ma, Weijun; Mao, Xi

    2018-01-01

    At present, common information extraction methods cannot accurately extract structured emergency event information, general information retrieval tools cannot completely identify emergency geographic information, and neither offers an accurate assessment of the extraction results. This paper therefore proposes an emergency information collection technology based on an event framework, aimed at solving the problem of emergency information extraction. It mainly includes an emergency information extraction model (EIEM), a complete address recognition method (CARM), and an accuracy evaluation model of emergency information (AEMEI). EIEM extracts structured emergency information and compensates for the lack of network data acquisition in emergency mapping. CARM uses a hierarchical model and the shortest-path algorithm and allows toponym pieces to be joined into a full address. AEMEI analyzes the results of the emergency event and summarizes the advantages and disadvantages of the event framework. Experiments show that event-framework technology can solve the problem of emergency information extraction and provides reference cases for other applications. When an emergency disaster is about to occur, the relevant departments can query data on emergencies that have occurred in the past and make arrangements ahead of schedule for defense and disaster reduction. The technology decreases the number of casualties and the extent of property damage in the country and the world, which is of great significance to the state and society.

  19. [Extraction of management information from the national quality assurance program].

    Science.gov (United States)

    Stausberg, Jürgen; Bartels, Claus; Bobrowski, Christoph

    2007-07-15

    Starting with clinically motivated projects, the national quality assurance program has established a legally obligatory framework. Annual feedback of results is an important means of quality control. The annual reports cover quality-related information with high granularity, but a synopsis for corporate management is missing. Therefore, the results of the University Clinics in Greifswald, Germany, were analyzed and aggregated to support hospital management. Strengths were identified by ranking the results within the state for each quality indicator, weaknesses by comparison with national reference values. The assessment was aggregated per clinical discipline and per category (indication, process, and outcome). A composite of quality indicators has been called for multiple times, but a coherent concept is still missing. The method presented establishes a plausible summary of the strengths and weaknesses of a hospital from the point of view of the national quality assurance program. Nevertheless, further adaptation of the program is needed to better assist corporate management.

  20. Extracting of implicit information in English advertising texts with phonetic and lexical-morphological means

    Directory of Open Access Journals (Sweden)

    Traikovskaya Natalya Petrovna

    2015-12-01

    Full Text Available The article deals with the phonetic and lexical-morphological language means participating in the process of extracting implicit information from English-language advertising texts for men and women. The functioning of the phonetic means of the English language is not the basis for the implication of information in advertising texts. Lexical and morphological means act as markers of relevant information, serving as activators of implicit information in advertising texts.

  1. Quantitative metrics for evaluating the phased roll-out of clinical information systems.

    Science.gov (United States)

    Wong, David; Wu, Nicolas; Watkinson, Peter

    2017-09-01

    We introduce a novel quantitative approach for evaluating the order of roll-out during the phased introduction of clinical information systems. Such roll-outs are associated with unavoidable risk due to patients transferring between clinical areas using both the old and the new systems. We proposed a simple graphical model of patient flow through a hospital. Using a simple instance of the model, we showed how a roll-out order can be generated by minimising the flow of patients from the new system to the old system. The model was applied to admission and discharge data acquired from 37,080 patient journeys at the Churchill Hospital, Oxford, between April 2013 and April 2014. The resulting order was evaluated empirically and was found acceptable. The development of data-driven approaches to clinical information system roll-out provides insights that may not necessarily be ascertained through clinical judgment alone. Such methods could make a significant contribution to the smooth running of an organisation during the roll-out of a potentially disruptive technology. Unlike previous approaches, which are based on clinical opinion, the approach described here quantitatively assesses the appropriateness of competing roll-out strategies. The data-driven approach was shown to produce strategies that matched clinical intuition and provides a flexible framework that may be used to plan and monitor clinical information system roll-out. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.
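
    One simple way to act on the minimise-new-to-old-flow objective is a greedy ordering over a ward-to-ward transfer matrix. This is an illustrative heuristic consistent with the stated objective, not the authors' published procedure; the wards and counts are hypothetical:

    ```python
    import numpy as np

    # Hypothetical patient-flow matrix: flow[i, j] = transfers from ward i to ward j
    flow = np.array([
        [0, 30, 5],
        [2, 0, 40],
        [1, 3, 0],
    ])
    wards = ["ED", "AcuteMed", "Discharge"]

    # Greedy roll-out: repeatedly convert the ward that adds the least
    # new-system -> old-system flow.
    new_set, order = set(), []
    while len(order) < len(wards):
        remaining = [w for w in range(len(wards)) if w not in new_set]

        def cost(w):
            converted = new_set | {w}
            old = [o for o in range(len(wards)) if o not in converted]
            return sum(flow[i, j] for i in converted for j in old)

        best = min(remaining, key=cost)
        new_set.add(best)
        order.append(wards[best])
    print("roll-out order:", order)
    ```

    On this toy matrix the greedy order converts the most-downstream ward first, so that patient transfers tend to run from old-system wards into new-system wards rather than the reverse.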

  2. Quantitative determination of 1,4-dioxane and tetrahydrofuran in groundwater by solid phase extraction GC/MS/MS.

    Science.gov (United States)

    Isaacson, Carl; Mohr, Thomas K G; Field, Jennifer A

    2006-12-01

    Groundwater contamination by the cyclic ethers 1,4-dioxane (dioxane), a probable human carcinogen, and tetrahydrofuran (THF), a co-contaminant at many chlorinated solvent release sites, is a growing concern. Cyclic ethers are readily transported in groundwater, yet little is known about their fate in environmental systems. High water solubility coupled with low Henry's law constants and octanol-water partition coefficients makes their removal from groundwater problematic for both remedial and analytical purposes. A solid-phase extraction (SPE) method based on activated carbon disks was developed for the quantitative determination of dioxane and THF. The method requires 80 mL samples and a total of 1.2 mL of solvent (acetone). The number of steps is minimized due to the "in-vial" elution of the disks. Average recoveries for dioxane and THF were 98% and 95%, respectively, with precision, as indicated by the relative standard deviation, of <2% to 6%. The method quantitation limits are 0.31 microg/L for dioxane and 3.1 microg/L for THF. The method was demonstrated by analyzing groundwater samples for dioxane and THF collected during a single sampling campaign at a TCA-impacted site. Dioxane concentrations and the areal extent of dioxane in groundwater were greater than those of either TCA or THF.

  3. Post-processing of Deep Web Information Extraction Based on Domain Ontology

    Directory of Open Access Journals (Sweden)

    PENG, T.

    2013-11-01

    Full Text Available Many methods are utilized to extract and process query results from the deep Web; they rely on the different structures of Web pages and the various design modes of databases. However, some semantic meanings and relations are ignored. In this paper, we therefore present an approach for post-processing deep Web query results based on a domain ontology that can utilize these semantic meanings and relations. A block identification model (BIM) based on node similarity is defined to extract data blocks that are relevant to a specific domain after reducing noisy nodes. A feature vector of domain books is obtained by a result set extraction model (RSEM) based on the vector space model (VSM). RSEM, in combination with BIM, builds the domain ontology on books, which not only removes the limit of Web page structures when extracting data information, but also makes use of the semantic meanings of the domain ontology. After extracting the basic information of Web pages, a ranking algorithm is adopted to offer an ordered list of data records to users. Experimental results show that BIM and RSEM extract data blocks and build the domain ontology accurately. In addition, relevant data records and basic information are extracted and ranked. The precision and recall performances show that our proposed method is feasible and efficient.
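
    Node similarity under a vector space model typically reduces to cosine similarity between term vectors. A minimal sketch using plain term frequencies (standing in for the paper's specific node-similarity measure):

    ```python
    import math
    from collections import Counter

    def cosine_similarity(text_a, text_b):
        """Cosine similarity of two texts under a simple term-frequency VSM."""
        va, vb = Counter(text_a.split()), Counter(text_b.split())
        dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
        norm = math.sqrt(sum(c * c for c in va.values())) * \
               math.sqrt(sum(c * c for c in vb.values()))
        return dot / norm if norm else 0.0

    print(cosine_similarity("deep web query result record",
                            "query result record extraction"))
    ```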

  4. Quantitative Image Feature Engine (QIFE): an Open-Source, Modular Engine for 3D Quantitative Feature Extraction from Volumetric Medical Images.

    Science.gov (United States)

    Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy

    2017-10-06

    The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to Github, and (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (hh:mm) using one core, and in 1:04 (hh:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.

  5. Validation of a quantitative NMR method for suspected counterfeit products exemplified on determination of benzethonium chloride in grapefruit seed extracts.

    Science.gov (United States)

    Bekiroglu, Somer; Myrberg, Olle; Ostman, Kristina; Ek, Marianne; Arvidsson, Torbjörn; Rundlöf, Torgny; Hakkarainen, Birgit

    2008-08-05

    A 1H-nuclear magnetic resonance (NMR) spectroscopy method for the quantitative determination of benzethonium chloride (BTC) as a constituent of grapefruit seed extract was developed. The method was validated, assessing its specificity, linearity, range, and precision, as well as accuracy, limit of quantification, and robustness. The method includes quantification using an internal reference standard, 1,3,5-trimethoxybenzene, and is regarded as simple, rapid, and easy to implement. A commercial grapefruit seed extract was studied, and the experiments were performed on spectrometers operating at two different fields, 300 and 600 MHz for proton frequencies, the former with a broadband (BB) probe and the latter equipped with both a BB probe and a CryoProbe. The average concentration for the product sample was 78.0, 77.8, and 78.4 mg/ml using the 300 MHz BB probe, the 600 MHz BB probe, and the CryoProbe, respectively. The standard deviations and relative standard deviations (R.S.D., in parentheses) for the average concentrations were 0.2 (0.3%), 0.3 (0.4%), and 0.3 mg/ml (0.4%), respectively.
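
    Internal-standard qNMR rests on a single relation: the molar ratio of analyte to standard equals the ratio of their proton-normalized integrals. A minimal sketch of that general relation; the integrals, proton counts, and standard concentration below are hypothetical, chosen only to land near the reported ~78 mg/ml:

    ```python
    def qnmr_mass_conc(I_a, I_s, N_a, N_s, M_a, c_s_molar):
        """Mass concentration (mg/ml) of analyte by internal-standard qNMR.

        I_a, I_s: integrated signal areas of analyte and internal standard
        N_a, N_s: number of protons contributing to each signal
        M_a: analyte molar mass (g/mol)
        c_s_molar: internal standard molar concentration (mol/l)
        """
        c_a_molar = (I_a / I_s) * (N_s / N_a) * c_s_molar
        return c_a_molar * M_a  # (g/mol) * (mol/l) = g/l = mg/ml

    # Hypothetical: a 2H BTC signal (M ~ 448.1 g/mol) against the 9H
    # methoxy singlet of 1,3,5-trimethoxybenzene at 0.025 mol/l.
    conc = qnmr_mass_conc(I_a=1.55, I_s=1.00, N_a=2, N_s=9,
                          M_a=448.1, c_s_molar=0.025)
    print(f"{conc:.1f} mg/ml")
    ```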

  6. a Statistical Texture Feature for Building Collapse Information Extraction of SAR Image

    Science.gov (United States)

    Li, L.; Yang, H.; Chen, Q.; Liu, X.

    2018-04-01

    Synthetic Aperture Radar (SAR) has become one of the most important ways to extract post-disaster collapsed building information, due to its extreme versatility and its almost all-weather, day-and-night working capability. In view of the fact that the inherent statistical distribution of speckle in SAR images is not used when extracting collapsed building information, this paper proposes a novel texture feature based on statistical models of SAR images to extract collapsed buildings. In the proposed feature, the texture parameter of the G0 distribution of SAR images is used to reflect the uniformity of the target for extracting collapsed buildings. This feature not only considers the statistical distribution of SAR images, providing a more accurate description of the object texture, but can also be applied to extract collapsed building information from single-, dual-, or full-polarization SAR data. RADARSAT-2 data of the Yushu earthquake, acquired on April 21, 2010, are used to present and analyze the performance of the proposed method. In addition, the applicability of this feature to SAR data with different polarizations is also analysed, which provides decision support for data selection in collapsed building information extraction.

  7. A method for automating the extraction of specialized information from the web

    NARCIS (Netherlands)

    Lin, L.; Liotta, A.; Hippisley, A.; Hao, Y.; Liu, J.; Wang, Y.; Cheung, Y-M.; Yin, H.; Jiao, L.; Ma, j.; Jiao, Y-C.

    2005-01-01

    The World Wide Web can be viewed as a gigantic distributed database including millions of interconnected hosts, some of which publish information via web servers or peer-to-peer systems. We present here a novel method for the extraction of semantically rich information from the web in a fully

  8. Information analysis of iris biometrics for the needs of cryptology key extraction

    Directory of Open Access Journals (Sweden)

    Adamović Saša

    2013-01-01

    The paper presents a rigorous analysis of iris biometric information for the synthesis of an optimized system for the extraction of a high-quality cryptographic key. Estimates of local entropy and mutual information were used to identify the segments of the iris most suitable for this purpose. The parameters of the corresponding wavelet transforms were then optimized to obtain the highest possible entropy and the lowest possible mutual information in the transform domain, which sets the framework for the synthesis of systems for the extraction of truly random sequences from iris biometrics without compromising authentication properties. [Project of the Ministry of Science of the Republic of Serbia, no. TR32054 and no. III44006]
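
    As an illustration of the local-entropy screening step described above (the block size and the use of raw greyscale histograms are assumptions, not the paper's settings):

```python
import numpy as np

def block_entropy(img, block=16, bins=256):
    """Shannon entropy (bits) of each non-overlapping block of a
    greyscale iris image; high-entropy blocks are candidate regions
    for key extraction."""
    h, w = img.shape
    out = np.zeros((h // block, w // block))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i*block:(i+1)*block, j*block:(j+1)*block]
            p, _ = np.histogram(patch, bins=bins, range=(0, 256))
            p = p / p.sum()
            p = p[p > 0]
            out[i, j] = -(p * np.log2(p)).sum()
    return out

iris = np.random.randint(0, 256, (128, 256), dtype=np.uint8)  # placeholder image
print(block_entropy(iris).round(2))
```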

  9. An integrated enhancement and reconstruction strategy for the quantitative extraction of actin stress fibers from fluorescence micrographs.

    Science.gov (United States)

    Zhang, Zhen; Xia, Shumin; Kanchanawong, Pakorn

    2017-05-22

    Stress fibers are prominent organizations of actin filaments that perform important functions in cellular processes such as migration, polarization, and traction force generation, and whose collective organization reflects the physiological and mechanical activities of the cells. Easily visualized by fluorescence microscopy, stress fibers are widely used as qualitative descriptors of cell phenotypes. However, due to the complexity of the stress fibers and the presence of other actin-containing cellular features, images of stress fibers are relatively challenging to analyze quantitatively using previously developed approaches, requiring significant user intervention. This poses a challenge for the automation of their detection, segmentation, and quantitative analysis. Here we describe an open-source software package, SFEX (Stress Fiber Extractor), which is geared for efficient enhancement, segmentation, and analysis of actin stress fibers in adherent tissue culture cells. Our method makes use of a carefully chosen image filtering technique to enhance filamentous structures, effectively facilitating the detection and segmentation of stress fibers by binary thresholding. We subdivide the skeletons of stress fiber traces into piecewise-linear fragments, and use a set of geometric criteria to reconstruct the stress fiber networks by pairing appropriate fiber fragments. Our strategy enables the trajectories of a majority of stress fibers within the cells to be comprehensively extracted. We also present a method for quantifying the dimensions of the stress fibers using an image gradient-based approach. We determine the optimal parameter space using sensitivity analysis, and demonstrate the utility of our approach by analyzing actin stress fibers in cells cultured on various micropattern substrates. We present an open-source graphically-interfaced computational tool for the extraction and quantification of stress fibers in adherent cells with minimal user input.
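
    A minimal sketch of the enhance-threshold-skeletonize half of such a pipeline, assuming the Frangi vesselness filter as the ridge enhancer (SFEX's actual filter choice and parameters may differ):

```python
import numpy as np
from skimage.filters import frangi, threshold_otsu
from skimage.morphology import skeletonize

def extract_fiber_skeleton(img):
    """A ridge (vesselness) filter enhances filamentous structures,
    Otsu thresholding binarizes them, and skeletonization yields
    traces that could then be split into piecewise-linear fragments
    and re-paired by geometric criteria."""
    enhanced = frangi(img.astype(float))
    binary = enhanced > threshold_otsu(enhanced)
    return skeletonize(binary)

cell = np.random.rand(256, 256)    # stand-in for a fluorescence micrograph
skeleton = extract_fiber_skeleton(cell)
```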

  10. Relating Maxwell’s demon and quantitative analysis of information leakage for practical imperative programs

    International Nuclear Information System (INIS)

    Anjaria, Kushal; Mishra, Arun

    2017-01-01

    Shannon drew on the relation between information entropy and the Maxwell's demon experiment when formulating his entropy formula. Since then, Shannon's entropy formula has been widely used to measure information leakage in imperative programs. In the present work, our aim is to go in the reverse direction and find possible Maxwell's demon experimental setups for contemporary practical imperative programs in which variations of Shannon's entropy formula have been applied to measure information leakage. To establish the relation between the second principle of thermodynamics and the quantitative analysis of information leakage, the present work models contemporary variations of imperative programs in terms of Maxwell's demon experimental setups. Five contemporary variations of imperative programs related to information quantification are identified: (i) information leakage in an imperative program, (ii) an imperative multithreaded program, (iii) point-to-point leakage in an imperative program, (iv) an imperative program with infinite observation, and (v) an imperative program in an SOA-based environment. For these variations, the minimal work required by an attacker to gain the secret is also calculated using the historical Maxwell's demon experiment. To model the experimental setup of Maxwell's demon, a non-interference security policy is used. In the present work, imperative programs with one bit of secret information have been considered to limit complexity. The findings of the present work from the history of physics can be utilized in many areas related to the information flow of physical computing, nano-computing, quantum computing, biological computing, energy dissipation in computing, and computing power analysis. (paper)
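
    The leakage measure alluded to here is conventionally the attacker's reduction in uncertainty about the secret S after observing the program output O; in Shannon terms:

```latex
H(S) = -\sum_{s} p(s)\,\log_2 p(s), \qquad
\mathcal{L} = I(S;O) = H(S) - H(S \mid O)
```

    For a uniformly distributed one-bit secret, H(S) = 1 bit, and Landauer's principle bounds the corresponding thermodynamic cost at k_B T ln 2 of work per bit erased, which is the bridge between the demon's minimal work and the leakage calculation.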

  11. Quantitative analysis of gender stereotypes and information aggregation in a national election.

    Directory of Open Access Journals (Sweden)

    Michele Tumminello

    By analyzing a database of questionnaire answers from a large majority of the candidates and elected representatives in a parliamentary election, we quantitatively verify that (i) female candidates on average present political profiles which are more compassionate and more concerned with social welfare issues than male candidates, and (ii) the voting procedure acts as a process of information aggregation. Our results show that information aggregation proceeds along at least two distinct paths. In the first case, candidates characterize themselves with a political profile aiming to describe the profile of the majority of voters. This is typically the case for candidates of political parties competing for the center of the various political dimensions. In the second case, candidates choose a political profile manifesting a clear difference from the opposite political profiles endorsed by candidates of a political party positioned at the opposite extreme of some political dimension.

  12. Results of Studying Astronomy Students’ Science Literacy, Quantitative Literacy, and Information Literacy

    Science.gov (United States)

    Buxner, Sanlyn; Impey, Chris David; Follette, Katherine B.; Dokter, Erin F.; McCarthy, Don; Vezino, Beau; Formanek, Martin; Romine, James M.; Brock, Laci; Neiberding, Megan; Prather, Edward E.

    2017-01-01

    Introductory astronomy courses often serve as terminal science courses for non-science majors and present an opportunity to assess future non-scientists' attitudes towards science, as well as basic scientific knowledge and scientific analysis skills that may remain unchanged after college. Through a series of studies, we have been able to evaluate students' basic science knowledge, attitudes towards science, quantitative literacy, and information literacy. In the Fall of 2015, we conducted a case study of a single class, administering all relevant surveys to an undergraduate class of 20 students. We will present our analysis of trends in each of these studies as well as the comparison case study. In general, we have found that students' basic scientific knowledge has remained stable over the past quarter century. In all of our studies, there is a strong relationship between student attitudes and their science and quantitative knowledge and skills. Additionally, students' information literacy is strongly connected to their attitudes and basic scientific knowledge. We are currently expanding these studies to include new audiences and will discuss the implications of our findings for instructors.

  13. Simplified and rapid method for extraction of ergosterol from natural samples and detection with quantitative and semi-quantitative methods using thin-layer chromatography

    OpenAIRE

    Larsen, Cand.scient Thomas; Ravn, Senior scientist Helle; Axelsen, Senior Scientist Jørgen

    2004-01-01

    A new and simplified method for the extraction of ergosterol (ergosta-5,7,22-trien-3β-ol) from fungi in soil and litter was developed using pre-soaking extraction and paraffin oil for recovery. Recoveries of ergosterol were in the range of 94-100% depending on the solvent-to-oil ratio. Extraction efficiencies equal to heat-assisted extraction treatments were obtained with pre-soaking extraction. Ergosterol was detected with thin-layer chromatography (TLC) using fluorodensitometry with a quan...

  14. MedTime: a temporal information extraction system for clinical narratives.

    Science.gov (United States)

    Lin, Yu-Kai; Chen, Hsinchun; Brown, Randall A

    2013-12-01

    Temporal information extraction from clinical narratives is of critical importance to many clinical applications. We participated in the EVENT/TIMEX3 track of the 2012 i2b2 clinical temporal relations challenge and present our temporal information extraction system, MedTime. MedTime comprises a cascade of rule-based and machine-learning pattern recognition procedures. It achieved a micro-averaged F-measure of 0.88 in the recognition of both clinical events and temporal expressions. We proposed and evaluated three time normalization strategies to normalize relative time expressions in clinical texts. The accuracy was 0.68 in normalizing temporal expressions of dates, times, durations, and frequencies. This study demonstrates and evaluates the integration of rule-based and machine-learning-based approaches for high-performance temporal information extraction from clinical narratives. Copyright © 2013 Elsevier Inc. All rights reserved.
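
    A toy illustration of one such rule-based normalization strategy: resolving relative time expressions against an anchor date such as the admission date (the patterns below are invented for illustration, not MedTime's actual rule set):

```python
import re
from datetime import date, timedelta

# Each rule pairs a pattern with a resolver applied to the anchor date.
RULES = [
    (re.compile(r"\byesterday\b"), lambda d: d - timedelta(days=1)),
    (re.compile(r"\btoday\b"), lambda d: d),
    (re.compile(r"\b(\d+) days? ago\b"),
     lambda d, m: d - timedelta(days=int(m.group(1)))),
]

def normalize(expr, anchor):
    """Return an absolute date for a relative expression, or None to
    hand the expression off to the next strategy in the cascade."""
    for pattern, resolve in RULES:
        m = pattern.search(expr.lower())
        if m:
            return resolve(anchor, m) if m.groups() else resolve(anchor)
    return None

print(normalize("3 days ago", date(2012, 6, 15)))  # -> 2012-06-12
```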

  15. Research of building information extraction and evaluation based on high-resolution remote-sensing imagery

    Science.gov (United States)

    Cao, Qiong; Gu, Lingjia; Ren, Ruizhi; Wang, Lang

    2016-09-01

    Building extraction is currently important in the application of high-resolution remote sensing imagery. At present, quite a few algorithms are available for detecting building information; however, most of them still have obvious disadvantages, such as ignoring spectral information or trading off extraction rate against extraction accuracy. The purpose of this research is to develop an effective method to detect building information in Chinese GF-1 data. Firstly, image preprocessing is used to normalize the image, and image enhancement is used to highlight the useful information in the image. Secondly, multi-spectral information is analyzed. Subsequently, an improved morphological building index (IMBI) is proposed to obtain candidate building objects. Furthermore, in order to refine the building objects and remove false objects, post-processing (e.g., shape features, the vegetation index and the water index) is employed. To validate the effectiveness of the proposed algorithm, the omission error (OE), commission error (CE), overall accuracy (OA) and Kappa are used. The proposed method can not only effectively use spectral information and other basic features, but also avoid extracting excessive interference details from high-resolution remote sensing images. Compared to the original MBI algorithm, the proposed method reduces the OE by 33.14%; at the same time, the Kappa increases by 16.09%. In experiments, IMBI achieved satisfactory results and outperformed other algorithms in terms of both accuracy and visual inspection
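
    The paper's IMBI is not reproduced here, but the family of morphological building indices it improves on is built from white top-hat responses over a range of structuring-element sizes (MBI proper uses directional linear elements; disks are used below for brevity). A hedged sketch:

```python
import numpy as np
from skimage.morphology import white_tophat, disk

def simple_building_index(brightness, radii=(3, 7, 11, 15)):
    """Simplified morphological building index: bright, compact,
    high-contrast structures respond strongly to white top-hats across
    structuring-element sizes; averaging the responses gives a
    per-pixel building likelihood."""
    responses = [white_tophat(brightness, disk(r)) for r in radii]
    return np.mean(responses, axis=0)

img = np.random.rand(128, 128)                 # stand-in for a brightness image
candidates = simple_building_index(img) > 0.2  # threshold is illustrative
```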

  16. Information Extraction of High-Resolution Remotely Sensed Image Based on Multiresolution Segmentation

    Directory of Open Access Journals (Sweden)

    Peng Shao

    2014-08-01

    The principle of multiresolution segmentation is described in detail in this study, and the Canny algorithm is applied for edge detection of a remotely sensed image based on this principle. The target image is divided into regions based on object-oriented multiresolution segmentation and edge detection. Furthermore, an object hierarchy is created, and a series of features (water bodies, vegetation, roads, residential areas, bare land and other information) is extracted using spectral and geometrical features. The results indicate that edge detection has a positive effect on multiresolution segmentation, and the overall accuracy of information extraction reaches 94.6% according to the confusion matrix.

  17. End-to-end information extraction without token-level supervision

    DEFF Research Database (Denmark)

    Palm, Rasmus Berg; Hovy, Dirk; Laws, Florian

    2017-01-01

    Most state-of-the-art information extraction approaches rely on token-level labels to find the areas of interest in text. Unfortunately, these labels are time-consuming and costly to create, and consequently, not available for many real-life IE tasks. To make matters worse, token-level labels...... and output text. We evaluate our model on the ATIS data set, MIT restaurant corpus and the MIT movie corpus and compare to neural baselines that do use token-level labels. We achieve competitive results, within a few percentage points of the baselines, showing the feasibility of E2E information extraction...

  18. Extracting additional risk managers information from a risk assessment of Listeria monocytogenes in deli meats

    NARCIS (Netherlands)

    Pérez-Rodríguez, F.; Asselt, van E.D.; García-Gimeno, R.M.; Zurera, G.; Zwietering, M.H.

    2007-01-01

    The risk assessment study of Listeria monocytogenes in ready-to-eat foods conducted by the U.S. Food and Drug Administration is an example of an extensive quantitative microbiological risk assessment that could be used by risk analysts and other scientists to obtain information and by managers and

  19. Extraction Method for Earthquake-Collapsed Building Information Based on High-Resolution Remote Sensing

    International Nuclear Information System (INIS)

    Chen, Peng; Wu, Jian; Liu, Yaolin; Wang, Jing

    2014-01-01

    At present, the extraction of earthquake disaster information from remote sensing data relies on visual interpretation. However, this technique cannot effectively and quickly obtain precise and efficient information for earthquake relief and emergency management. Collapsed buildings in the town of Zipingpu after the Wenchuan earthquake were used as a case study to validate two kinds of rapid extraction methods for earthquake-collapsed building information, based on pixel-oriented and object-oriented theories. The pixel-oriented method is based on multi-layer regional segments that embody the core layers and segments of the object-oriented method. The key idea is to mask, layer by layer, all image information, including that on the collapsed buildings. Compared with traditional techniques, the pixel-oriented method is innovative because it allows considerably faster computer processing. As for the object-oriented method, a multi-scale segmentation algorithm was applied to build a three-layer hierarchy. By analyzing the spectrum, texture, shape, location, and context of individual object classes in different layers, a fuzzy decision rule system was established for the extraction of earthquake-collapsed building information. We compared the two sets of results using three criteria: precision assessment, visual effect, and principle. Both methods can extract earthquake-collapsed building information quickly and accurately. The object-oriented method successfully overcomes the salt-and-pepper noise caused by the spectral diversity of high-resolution remote sensing data and solves the problems of "same object, different spectra" and "same spectrum, different objects". With an overall accuracy of 90.38%, the method achieves more scientific and accurate results than the pixel-oriented method (76.84%). The object-oriented image analysis method can be extensively applied in the extraction of earthquake disaster information based on high-resolution remote sensing

  20. Using text mining techniques to extract phenotypic information from the PhenoCHF corpus.

    Science.gov (United States)

    Alnazzawi, Noha; Thompson, Paul; Batista-Navarro, Riza; Ananiadou, Sophia

    2015-01-01

    Phenotypic information locked away in unstructured narrative text presents significant barriers to information accessibility, both for clinical practitioners and for computerised applications used for clinical research purposes. Text mining (TM) techniques have previously been applied successfully to extract different types of information from text in the biomedical domain. They have the potential to be extended to allow the extraction of information relating to phenotypes from free text. To stimulate the development of TM systems that are able to extract phenotypic information from text, we have created a new corpus (PhenoCHF) that is annotated by domain experts with several types of phenotypic information relating to congestive heart failure. To ensure that systems developed using the corpus are robust to multiple text types, it integrates text from heterogeneous sources, i.e., electronic health records (EHRs) and scientific articles from the literature. We have developed several different phenotype extraction methods to demonstrate the utility of the corpus, and tested these methods on a further corpus, i.e., ShARe/CLEF 2013. Evaluation of our automated methods showed that PhenoCHF can facilitate the training of reliable phenotype extraction systems, which are robust to variations in text type. These results were reinforced by evaluating our trained systems on the ShARe/CLEF corpus, which contains clinical records of various types. As in other studies within the biomedical domain, we found that solutions based on conditional random fields produced the best results when coupled with a rich feature set. PhenoCHF is the first annotated corpus aimed at encoding detailed phenotypic information. The unique heterogeneous composition of the corpus has been shown to be advantageous in the training of systems that can accurately extract phenotypic information from a range of different text types. Although the scope of our annotation is currently limited to a single
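
    A minimal sketch of a CRF sequence labeller with a rich feature set of the kind the evaluation favoured, using the sklearn-crfsuite package (the features and the BIO label scheme here are an illustrative subset, not the paper's exact configuration):

```python
import sklearn_crfsuite  # pip install sklearn-crfsuite

def token_features(sent, i):
    """Surface form, affixes, casing, and one token of context."""
    w = sent[i]
    feats = {
        "lower": w.lower(), "suffix3": w[-3:], "prefix3": w[:3],
        "is_upper": w.isupper(), "is_title": w.istitle(), "is_digit": w.isdigit(),
    }
    if i > 0:
        feats["prev_lower"] = sent[i - 1].lower()
    if i < len(sent) - 1:
        feats["next_lower"] = sent[i + 1].lower()
    return feats

# One toy training sequence with BIO labels for a phenotype mention.
sent = ["Patient", "has", "congestive", "heart", "failure", "."]
labels = ["O", "O", "B-PHEN", "I-PHEN", "I-PHEN", "O"]
X = [[token_features(sent, i) for i in range(len(sent))]]
y = [labels]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X))
```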

  1. Terrain Extraction by Integrating Terrestrial Laser Scanner Data and Spectral Information

    Science.gov (United States)

    Lau, C. L.; Halim, S.; Zulkepli, M.; Azwan, A. M.; Tang, W. L.; Chong, A. K.

    2015-10-01

    The extraction of true terrain points from unstructured laser point cloud data is an important process in producing an accurate digital terrain model (DTM). However, most spatial filtering methods utilize only geometrical data to discriminate terrain points from non-terrain points. Point cloud filtering can also be improved by using the spectral information available with some scanners. Therefore, the objective of this study is to investigate the effectiveness of using the three channels (red, green and blue) of the colour image captured by the built-in digital camera available in some terrestrial laser scanners (TLS) for terrain extraction. In this study, data acquisition was conducted at a mini replica landscape at Universiti Teknologi Malaysia (UTM), Skudai campus, using a Leica ScanStation C10. The spectral information of the coloured point clouds from selected sample classes is extracted for spectral analysis. Coloured points that fall within the corresponding preset spectral thresholds are identified as belonging to that specific feature class. This terrain extraction process is implemented in MATLAB. Results demonstrate that a passive image of higher spectral resolution is required in order to improve the output, because the low quality of the colour images captured by the sensor contributes to low separability in spectral reflectance. In conclusion, this study shows that spectral information can be used as a parameter for terrain extraction.
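
    The core of the spectral-threshold step reduces to a per-channel window test on the coloured point cloud; a minimal numpy sketch (the array layout and threshold values are assumptions):

```python
import numpy as np

def filter_by_spectral_threshold(points, rgb_low, rgb_high):
    """Keep coloured points whose RGB values fall inside a preset
    per-channel threshold window, as in the terrain-class selection
    described above. `points` is an (N, 6) array: x, y, z, r, g, b."""
    rgb = points[:, 3:6]
    mask = np.all((rgb >= rgb_low) & (rgb <= rgb_high), axis=1)
    return points[mask]

cloud = np.random.rand(1000, 6) * [10, 10, 2, 255, 255, 255]  # synthetic cloud
# Illustrative window for a brownish "bare ground" class:
terrain = filter_by_spectral_threshold(cloud, (80, 50, 20), (180, 140, 100))
```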

  2. Tool for the quantitative evaluation of a Facebook app-based informal training process

    Directory of Open Access Journals (Sweden)

    Adolfo Calle-Gómez

    2017-02-01

    The study of the impact of Facebook in academia has been based mainly on qualitative evaluation of the academic performance and motivation of students. This work takes as its starting point the use of the Facebook app Sigma at the Universidad Técnica de Ambato. Students of this university share educational resources through Sigma. This constitutes an informal learning process. We propose to construct Gamma, a tool for the generation of statistics and charts that illustrate the impact of the social network on the resulting learning process. This paper presents the results of a study of how Gamma is valued by those who like to do informal learning. It was verified that (1) Gamma gives feedback about the value of educational resources and social actions, and that (2) it allows quantitative measurement of the impact of using Facebook on the informal learning process. As an added value, Gamma supports communication between supporters and detractors of the use of Facebook in academia.

  3. Chemometric study of Andalusian extra virgin olive oils Raman spectra: Qualitative and quantitative information.

    Science.gov (United States)

    Sánchez-López, E; Sánchez-Rodríguez, M I; Marinas, A; Marinas, J M; Urbano, F J; Caridad, J M; Moalem, M

    2016-08-15

    Authentication of extra virgin olive oil (EVOO) is an important topic for the olive oil industry. Fraudulent practices in this sector are a major problem affecting both producers and consumers. This study analyzes the capability of FT-Raman spectroscopy combined with chemometric treatments to predict fatty acid content (quantitative information), using gas chromatography as the reference technique, and to classify diverse EVOOs as a function of harvest year, olive variety, geographical origin and Andalusian PDO (qualitative information). The optimal number of PLS components that summarizes the spectral information was introduced progressively. For the estimation of fatty acid composition, the lowest error (both in fitting and prediction) corresponded to MUFA, followed by SAFA and PUFA, though such errors were close to zero in all cases. As regards the qualitative variables, discriminant analysis allowed a correct classification of 94.3%, 84.0%, 89.0% and 86.6% of samples for harvest year, olive variety, geographical origin and PDO, respectively. Copyright © 2016 Elsevier B.V. All rights reserved.
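
    A minimal sketch of the PLS-regression step with cross-validation, using scikit-learn on synthetic stand-ins for the spectra and the GC reference values (matrix sizes and the number of components are assumptions):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Synthetic stand-ins: 60 Raman spectra (800 wavenumbers) and a fake
# fatty-acid content playing the role of the GC reference value.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 800))          # spectra (rows) x wavenumbers (cols)
y = X[:, 100] * 2.0 + rng.normal(scale=0.1, size=60)

pls = PLSRegression(n_components=8)     # component count would be tuned
y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV: {rmsecv:.3f}")
```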

  4. Strategy for Extracting DNA from Clay Soil and Detecting a Specific Target Sequence via Selective Enrichment and Real-Time (Quantitative) PCR Amplification

    Science.gov (United States)

    Yankson, Kweku K.; Steck, Todd R.

    2009-01-01

    We present a simple strategy for isolating and accurately enumerating target DNA from high-clay-content soils: desorption with buffers, an optional magnetic capture hybridization step, and quantitation via real-time PCR. With the developed technique, μg quantities of DNA were extracted from mg samples of pure kaolinite and a field clay soil. PMID:19633108

  5. QUANTITATIVE ION-PAIR EXTRACTION OF 4(5)-METHYLIMIDAZOLE FROM CARAMEL COLOR AND ITS DETERMINATION BY REVERSED-PHASE ION-PAIR LIQUID-CHROMATOGRAPHY

    DEFF Research Database (Denmark)

    Thomsen, Mohens; Willumsen, Dorthe

    1981-01-01

    A procedure for quantitative ion-pair extraction of 4(5)-methylimidazole from caramel colour using bis(2-ethylhexyl)phosphoric acid as ion-pairing agent has been developed. Furthermore, a reversed-phase ion-pair liquid chromatographic separation method has been established to analyse the content...

  6. Lab-on-capillary: a rapid, simple and quantitative genetic analysis platform integrating nucleic acid extraction, amplification and detection.

    Science.gov (United States)

    Fu, Yu; Zhou, Xiaoming; Xing, Da

    2017-12-05

    In this work, we describe for the first time a genetic diagnosis platform employing a polydiallyldimethylammonium chloride (PDDA)-modified capillary and a liquid-based thermalization system for rapid, simple and quantitative DNA analysis with minimal user interaction. Positively charged PDDA is modified on the inner surface of the silicon dioxide capillary by using an electrostatic self-assembly approach that allows the negatively charged DNA to be separated from the lysate in less than 20 seconds. The capillary loaded with the PCR mix is incorporated in the thermalization system, which can achieve on-site real-time PCR. This system is based on the circulation of pre-heated liquids in the chamber, allowing for high-speed thermalization of the capillary and fast amplification. Multiple targets can be simultaneously analysed with multiplex spatial melting. Starting with live Escherichia coli (E. coli) cells in milk, as a realistic sample, the current method can achieve DNA extraction, amplification, and detection within 40 min.

  7. Comparison of methods for miRNA extraction from plasma and quantitative recovery of RNA from plasma and cerebrospinal fluid

    Directory of Open Access Journals (Sweden)

    Melissa A McAlexander

    2013-05-01

    Interest in extracellular RNA has intensified as evidence accumulates that these molecules may be useful as indicators of a wide variety of biological conditions. To establish specific extracellular RNA molecules as clinically relevant biomarkers, reproducible recovery from biological samples and reliable measurements of the isolated RNA are paramount. Towards these ends, careful and rigorous comparisons of technical procedures are needed at all steps from sample handling to RNA isolation to RNA measurement protocols. In the investigations described in this methods paper, RT-qPCR was used to examine the apparent recovery of specific endogenous miRNAs and a spiked-in synthetic RNA from blood plasma samples. RNA was isolated using several widely used RNA isolation kits, with or without the addition of glycogen as a carrier. Kits examined included total RNA isolation systems that have been commercially available for several years and commonly adapted for extraction of biofluid RNA, as well as more recently introduced biofluids-specific RNA methods. Our conclusions include the following: some RNA isolation methods appear to be superior to others for the recovery of RNA from biological fluids; addition of a carrier molecule seems to be beneficial for some but not all isolation methods; and partially or fully quantitative recovery of RNA is observed from increasing volumes of plasma and cerebrospinal fluid.

  8. Information retrieval and terminology extraction in online resources for patients with diabetes.

    Science.gov (United States)

    Seljan, Sanja; Baretić, Maja; Kucis, Vlasta

    2014-06-01

    Terminology use, as a means of information retrieval or document indexing, plays an important role in health literacy. Specific types of users, i.e., patients with diabetes, need access to various online resources (in foreign and/or native languages) when searching for information on self-education in basic diabetic knowledge, on self-care activities regarding the importance of dietetic food, medications and physical exercise, and on self-management of insulin pumps. Automatic extraction of corpus-based terminology from online texts, manuals or professional papers can help in building terminology lists or lists of "browsing phrases" useful in information retrieval or document indexing. Specific terminology lists represent an intermediate step between free-text search and controlled vocabulary, between users' demands and existing online resources in native and foreign languages. The research, aimed at detecting the role of terminology in online resources, is conducted on English and Croatian manuals and Croatian online texts, and divided into three interrelated parts: (i) comparison of professional and popular terminology use; (ii) evaluation of automatic statistically-based terminology extraction on English and Croatian texts; (iii) comparison and evaluation of extracted terminology performed on an English manual using statistical and hybrid approaches. Extracted terminology candidates are evaluated by comparison with three types of reference lists: a list created by a medical professional, a list of highly professional vocabulary contained in MeSH, and a list created by non-medical persons, formed as the intersection of 15 lists. Results report on the use of popular and professional terminology in online diabetes resources, on the evaluation of automatically extracted terminology candidates in English and Croatian texts, and on the comparison of statistical and hybrid extraction methods in English text. Evaluation of automatic and semi-automatic terminology extraction methods is performed by recall
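
    As a crude illustration of statistically-based terminology extraction (frequency-filtered n-grams only; production systems add termhood measures such as C-value or TF-IDF and linguistic filters):

```python
import re
from collections import Counter

STOP = {"the", "a", "an", "of", "and", "or", "in", "to", "is", "for", "with", "on"}

def candidate_terms(text, n_max=3, min_freq=2):
    """Propose frequent stopword-free n-grams as term candidates."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            gram = tokens[i:i + n]
            if gram[0] in STOP or gram[-1] in STOP:
                continue  # terms rarely start or end with a stopword
            counts[" ".join(gram)] += 1
    return [(t, c) for t, c in counts.most_common() if c >= min_freq]

doc = ("Insulin pump therapy requires blood glucose monitoring. "
       "Blood glucose targets depend on insulin pump settings.")
print(candidate_terms(doc))  # e.g. ('blood glucose', 2), ('insulin pump', 2)
```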

  9. OpenCV-Based Nanomanipulation Information Extraction and the Probe Operation in SEM

    Directory of Open Access Journals (Sweden)

    Dongjie Li

    2015-02-01

    For an established telenanomanipulation system, methods for extracting location information and strategies for probe operation were studied in this paper. First, machine learning algorithms from OpenCV were used to extract location information from SEM images, so that nanowires and the probe in SEM images can be automatically tracked and the region of interest (ROI) can be marked quickly. The locations of the nanowire and the probe can then be extracted from the ROI. To study the probe operation strategy, the Van der Waals force between the probe and a nanowire was computed, from which the relevant operating parameters can be obtained. With these operating parameters, the nanowire can be pre-operated in a 3D virtual environment and an optimal path for the probe can be obtained. The actual probe then runs automatically under the telenanomanipulation system's control. Finally, experiments were carried out to verify the above methods, and the results show that the designed methods achieve the expected effect.
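
    The abstract does not specify which OpenCV components are used; as a simple stand-in for the localization step, normalized cross-correlation template matching can mark the ROI:

```python
import cv2
import numpy as np

def locate(frame_gray, template_gray):
    """Locate the probe (or a nanowire) in an SEM frame by normalized
    cross-correlation template matching; returns the top-left corner of
    the best match and its score."""
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    return top_left, score

frame = np.random.randint(0, 255, (480, 640), dtype=np.uint8)  # fake SEM frame
probe_tpl = frame[200:240, 300:340].copy()                     # fake template
(x, y), score = locate(frame, probe_tpl)
print(x, y, round(score, 3))  # the ROI can then be tracked frame to frame
```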

  10. Methods to extract information on the atomic and molecular states from scientific abstracts

    International Nuclear Information System (INIS)

    Sasaki, Akira; Ueshima, Yutaka; Yamagiwa, Mitsuru; Murata, Masaki; Kanamaru, Toshiyuki; Shirado, Tamotsu; Isahara, Hitoshi

    2005-01-01

    We propose a new application of information technology to recognize and extract expressions of atomic and molecular states from electronic versions of scientific abstracts. The present results will help scientists to understand atomic states as well as the physics discussed in the articles. Combined with internet search engines, it will make it possible to collect not only atomic and molecular data but also broader scientific information over a wide range of research fields. (author)

  11. System and method for extracting physiological information from remotely detected electromagnetic radiation

    NARCIS (Netherlands)

    2016-01-01

    The present invention relates to a device and a method for extracting physiological information indicative of at least one health symptom from remotely detected electromagnetic radiation. The device comprises an interface (20) for receiving a data stream comprising remotely detected image data

  12. System and method for extracting physiological information from remotely detected electromagnetic radiation

    NARCIS (Netherlands)

    2015-01-01

    The present invention relates to a device and a method for extracting physiological information indicative of at least one health symptom from remotely detected electromagnetic radiation. The device comprises an interface (20) for receiving a data stream comprising remotely detected image data

  13. Network and Ensemble Enabled Entity Extraction in Informal Text (NEEEEIT) final report

    Energy Technology Data Exchange (ETDEWEB)

    Kegelmeyer, Philip W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shead, Timothy M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dunlavy, Daniel M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    This SAND report summarizes the activities and outcomes of the Network and Ensemble Enabled Entity Extraction in Informal Text (NEEEEIT) LDRD project, which addressed improving the accuracy of conditional random fields for named entity recognition through the use of ensemble methods.

  14. A construction scheme of web page comment information extraction system based on frequent subtree mining

    Science.gov (United States)

    Zhang, Xiaowen; Chen, Bingfeng

    2017-08-01

    This paper proposes a construction scheme for a web page comment information extraction system based on the frequent subtree mining algorithm, referred to as the FSM system. The paper gives a brief introduction to the overall system architecture and its modules, then describes the core of the system in detail, and finally presents a system prototype.

  15. EXTRACT

    DEFF Research Database (Denmark)

    Pafilis, Evangelos; Buttigieg, Pier Luigi; Ferrell, Barbra

    2016-01-01

    The microbial and molecular ecology research communities have made substantial progress on developing standards for annotating samples with environment metadata. However, manual annotation of samples is a highly labor-intensive process and requires familiarity with the terminologies used. We have the......, organism, tissue and disease terms. The evaluators in the BioCreative V Interactive Annotation Task found the system to be intuitive, useful, well documented and sufficiently accurate to be helpful in spotting relevant text passages and extracting organism and environment terms. Comparison of fully manual and text-mining-assisted curation revealed that EXTRACT speeds up annotation by 15-25% and helps curators to detect terms that would otherwise have been missed. Database URL: https://extract.hcmr.gr/

  16. Quantitative determination, Metal analysis and Antiulcer evaluation of Methanol seeds extract of Citrullus lanatus Thunb (Cucurbitaceae in Rats

    Directory of Open Access Journals (Sweden)

    Okunrobo O. Lucky

    2012-10-01

    Objective: The use of herbs in the treatment of diseases is gradually becoming universally accepted, especially in non-industrialized societies. Citrullus lanatus Thunb (Cucurbitaceae), commonly called watermelon, is widely consumed in this part of the world as food and medicine. This work was conducted to investigate the phytochemical composition and the proximate and metal content of the seed of Citrullus lanatus, and to determine the antiulcer action of the methanol seed extract. Methods: Phytochemical screening and proximate and metal content analyses were done using standard procedures, and the antiulcer activity was evaluated against acetylsalicylic acid-induced ulcers. Results: The results revealed the presence of the following phytochemicals: flavonoids, saponins, tannins, alkaloids and glycosides. Proximate analysis indicated a high concentration of carbohydrate, protein and fat, while metal analysis showed the presence of sodium, calcium, zinc and magnesium at levels within the recommended dietary intake. The antiulcer potential of the extract against acetylsalicylic acid-induced ulceration of the gastric mucosa of Wistar rats was evaluated at three doses (200 mg/kg, 400 mg/kg, and 800 mg/kg). The ulcer parameters investigated included ulcer number, ulcer severity, ulcer index and percentage ulcer protection. The antiulcer activity was compared against ranitidine at 20 mg/kg. The extract exhibited a dose-related antiulcer activity with maximum activity at 800 mg/kg (P<0.001). Conclusions: Proximate and metal content analysis of the seeds indicates that consumption of the seeds of Citrullus lanatus is safe. This study also provides preliminary data, for the first time, that the seeds of Citrullus lanatus possess antiulcer activity in an animal model.

  17. Research in health sciences library and information science: a quantitative analysis.

    Science.gov (United States)

    Dimitroff, A

    1992-10-01

    A content analysis of research articles published between 1966 and 1990 in the Bulletin of the Medical Library Association was undertaken. Four specific questions were addressed: What subjects are of interest to health sciences librarians? Who is conducting this research? How do health sciences librarians conduct their research? Do health sciences librarians obtain funding for their research activities? Bibliometric characteristics of the research articles are described and compared to characteristics of research in library and information science as a whole in terms of subject and methodology. General findings were that most research in health sciences librarianship is conducted by librarians affiliated with academic health sciences libraries (51.8%); most deals with an applied (45.7%) or a theoretical (29.2%) topic; survey (41.0%) or observational (20.7%) research methodologies are used; descriptive quantitative analytical techniques are used (83.5%); and over 25% of research is funded. The average number of authors was 1.85, average article length was 7.25 pages, and average number of citations per article was 9.23. These findings are consistent with those reported in the general library and information science literature for the most part, although specific differences do exist in methodological and analytical areas.

  18. Deriving Quantitative Crystallographic Information from the Wavelength-Resolved Neutron Transmission Analysis Performed in Imaging Mode

    Directory of Open Access Journals (Sweden)

    Hirotaka Sato

    2017-12-01

    The current status of Bragg-edge/dip neutron transmission analysis/imaging methods is presented. The method can visualize real-space distributions of bulk crystallographic information in a crystalline material over a large area (~10 cm) with high spatial resolution (~100 μm). Furthermore, by using suitable spectrum analysis methods for wavelength-dependent neutron transmission data, quantitative visualization of the crystallographic information can be achieved. For example, crystallographic texture imaging, crystallite size imaging and crystalline phase imaging with texture/extinction corrections are carried out by the Rietveld-type (wide wavelength bandwidth) profile fitting analysis code RITS (Rietveld Imaging of Transmission Spectra). Using the single Bragg-edge analysis mode of RITS, evaluations of the crystal lattice plane spacing (d-spacing), relating to macro-strain, and of the FWHM (full width at half maximum) of the d-spacing distribution, relating to micro-strain, can be achieved. Macro-strain tomography is performed by a new conceptual CT (computed tomography) image reconstruction algorithm, the tensor CT method. Crystalline grains and their orientations are visualized by a fast determination method of grain orientation for Bragg-dip neutron transmission spectra. In this paper, these imaging examples with the spectrum analysis methods, and their reliabilities as evaluated by optical/electron microscopy and X-ray/neutron diffraction, are presented. In addition, the status at compact accelerator-driven pulsed neutron sources is also presented.
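
    The edge positions that such fits exploit follow directly from Bragg's law: for a given lattice plane family hkl, diffraction is possible only for wavelengths up to the back-scattering limit, so the transmission jumps at

```latex
\lambda = 2 d_{hkl} \sin\theta \;\le\; 2 d_{hkl}, \qquad
\lambda_{\text{edge}}^{hkl} = 2 d_{hkl}
```

    and a measured relative edge shift therefore gives the macro-strain directly, since Δλ/λ = Δd/d = ε.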

  19. RESEARCH ON REMOTE SENSING GEOLOGICAL INFORMATION EXTRACTION BASED ON OBJECT ORIENTED CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    H. Gao

    2018-04-01

    Northern Tibet belongs to the sub-cold arid climate zone of the plateau. It is rarely visited by people, and the conditions for geological fieldwork are very poor. However, stratum exposures are good and human interference is minimal. Therefore, research on the automatic classification and extraction of remote sensing geological information there has particular significance and good application prospects. Based on object-oriented classification of WorldView-2 high-resolution remote sensing data of northern Tibet, combined with tectonic information and image enhancement, the lithological spectral features, shape features, spatial locations and topological relations of various geological units were mined. By setting thresholds in a hierarchical classification, eight kinds of geological information were classified and extracted. Comparison with existing geological maps shows that the overall accuracy reached 87.8561%, indicating that the object-oriented method is effective and feasible for this study area and provides a new approach for the automatic extraction of remote sensing geological information.

  20. Protective role of Tinospora cordifolia extract against radiation-induced qualitative, quantitative and biochemical alterations in testes

    International Nuclear Information System (INIS)

    Sharma, Priyanka; Parmar, Jyoti; Sharma, Priyanka; Verma, Preeti; Goyal, P.K.

    2012-01-01

    restoring almost normal structure at the end of experimentation. Furthermore, TCE administration inhibited the radiation-induced elevation of lipid peroxidation (LPO) and the reduction of glutathione (GSH) and catalase (CAT) levels in testes. These observations signify that Tinospora cordifolia root extract can be used as an efficient radioprotector against radiation-mediated qualitative, quantitative and biochemical alterations in testes. (author)

  1. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories.

    Science.gov (United States)

    Yang, Wei; Ai, Tinghua; Lu, Wei

    2018-04-19

    Crowdsourcing trajectory data is an important approach for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively so as to ensure there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors using the area of the Voronoi cell and the length of the triangle edge. Then, a road boundary detection model is established by integrating the boundary descriptors and trajectory movement features (e.g., direction). Third, the boundary detection model is used to detect road boundaries from the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting the road boundary from low-frequency GPS traces, multi-type road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information proved to be of higher quality.

  2. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories

    Directory of Open Access Journals (Sweden)

    Wei Yang

    2018-04-01

    Crowdsourcing trajectory data is an important approach for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively so as to ensure there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors using the area of the Voronoi cell and the length of the triangle edge. Then, a road boundary detection model is established by integrating the boundary descriptors and trajectory movement features (e.g., direction). Third, the boundary detection model is used to detect road boundaries from the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the road boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting the road boundary from low-frequency GPS traces, multi-type road structures, and different time intervals. Compared with two existing methods, the automatically extracted boundary information proved to be of higher quality.
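
    A minimal sketch of computing the two boundary descriptors named above with SciPy (synthetic points; the detection model and the region growing are not reproduced):

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

def boundary_descriptors(points):
    """Delaunay edge lengths (long edges hint at gaps between roads)
    and Voronoi cell areas (large cells hint at boundary points).
    Simplified: unbounded Voronoi cells are skipped."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:                 # collect unique edges
        for a, b in ((0, 1), (1, 2), (0, 2)):
            edges.add(tuple(sorted((simplex[a], simplex[b]))))
    edge_len = {e: np.linalg.norm(points[e[0]] - points[e[1]]) for e in edges}

    vor = Voronoi(points)
    areas = {}
    for p, r in enumerate(vor.point_region):
        region = vor.regions[r]
        if -1 in region or not region:            # unbounded cell
            continue
        poly = vor.vertices[region]
        x, y = poly[:, 0], poly[:, 1]             # shoelace formula
        areas[p] = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    return edge_len, areas

pts = np.random.rand(200, 2) * 100                # synthetic tracking points
edge_len, cell_area = boundary_descriptors(pts)
```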

  3. YAdumper: extracting and translating large information volumes from relational databases to structured flat files.

    Science.gov (United States)

    Fernández, José M; Valencia, Alfonso

    2004-10-12

    Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodic dumping of information requires considerable CPU time, disk and memory resources. YAdumper has been developed as a purpose-specific tool to deal with the structured download of complete relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance.

  4. Understanding the information needs of people with haematological cancers. A meta-ethnography of quantitative and qualitative research.

    Science.gov (United States)

    Atherton, K; Young, B; Salmon, P

    2017-11-01

    Clinical practice in haematological oncology often involves difficult diagnostic and treatment decisions. In this context, understanding patients' information needs and the functions that information serves for them is particularly important. We systematically reviewed qualitative and quantitative evidence on haematological oncology patients' information needs to inform how these needs can best be addressed in clinical practice. PsycINFO, Medline and CINAHL Plus electronic databases were searched for relevant empirical papers published from January 2003 to July 2016. Synthesis of the findings drew on meta-ethnography and meta-study. Most quantitative studies used a survey design and indicated that patients are largely content with the information they receive from physicians, however much or little they actually receive, although a minority of patients are not content with information. Qualitative studies suggest that a sense of being in a caring relationship with a physician allows patients to feel content with the information they have been given, whereas patients who lack such a relationship want more information. The qualitative evidence can help explain the lack of association between the amount of information received and contentment with it in the quantitative research. Trusting relationships are integral to helping patients feel that their information needs have been met. © 2017 John Wiley & Sons Ltd.

  5. Quantitative Analysis of Bioactive Compounds from Aromatic Plants by Means of Dynamic Headspace Extraction and Multiple Headspace Extraction-Gas Chromatography-Mass Spectrometry

    NARCIS (Netherlands)

    Omar, Jone; Olivares, Maitane; Alonso, Ibone; Vallejo, Asier; Aizpurua-Olaizola, Oier; Etxebarria, Nestor

    2016-01-01

    Seven monoterpenes in 4 aromatic plants (sage, cardamom, lavender, and rosemary) were quantified in liquid extracts and directly in solid samples by means of dynamic headspace-gas chromatography-mass spectrometry (DHS-GC-MS) and multiple headspace extraction-gas chromatography-mass spectrometry

  6. Solvent Front Position Extraction procedure with thin-layer chromatography as a mode of multicomponent sample preparation for quantitative analysis by instrumental technique.

    Science.gov (United States)

    Klimek-Turek, A; Sikora, E; Dzido, T H

    2017-12-29

    A concept of using thin-layer chromatography for multicomponent sample preparation prior to the quantitative determination of solutes by an instrumental technique is presented. Thin-layer chromatography (TLC) is used to separate the chosen substances and their internal standard from the other components (matrix) and to concentrate them in a single spot/zone at the solvent front position. The location of the analytes and internal standard in the solvent front zone allows their easy extraction followed by quantitation by HPLC. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Investigation of UO2 as an accelerator for quantitative extraction of F- and Cl- in ThO2 and sintered ThO2

    International Nuclear Information System (INIS)

    Pandey, Ashish; Fulzele, Ajit; Das, D.K.; Prakash, Amrit; Behere, P.G.; Afzal, Mohd

    2013-01-01

    This paper presents UO2 as an effective accelerator for the quantitative extraction of F- and Cl- from ThO2 and sintered ThO2. Thoria requires a higher temperature to lose its structural integrity and release halides. Samples composed of UO2 and ThO2, or of UO2 and sintered ThO2, give quantitative yields of F- and Cl- even at lower temperatures. The accelerator amount and the pyrohydrolysis conditions were optimized. The pyrohydrolyzate was analyzed for F- and Cl- by ISE. The limit of detection was 1 μg/g in the samples, with good recovery (95%) and a relative standard deviation of less than 5%. (author)

  8. How predictive quantitative modelling of tissue organisation can inform liver disease pathogenesis.

    Science.gov (United States)

    Drasdo, Dirk; Hoehme, Stefan; Hengstler, Jan G

    2014-10-01

    Of the more than 100 liver diseases described, many of those with high incidence rates manifest themselves through histopathological changes, such as hepatitis, alcoholic liver disease, fatty liver disease, fibrosis and, in later stages, cirrhosis, hepatocellular carcinoma, primary biliary cirrhosis and other disorders. Studies of disease pathogenesis are largely based on integrating -omics data pooled from cells at different locations with spatial information from stained liver structures in animal models. Even though this has led to significant insights, the complexity of interactions as well as the involvement of processes at many different time and length scales constrains the possibility of condensing disease processes into illustrations, schemes and tables. The combination of modern imaging modalities with image processing and analysis, and mathematical models, opens up a promising new approach towards a quantitative understanding of pathologies and disease processes. This strategy is discussed for two examples: ammonia metabolism after drug-induced acute liver damage, and the recovery of liver mass as well as architecture during the subsequent regeneration process. This interdisciplinary approach permits the integration of biological mechanisms and models of processes contributing to disease progression at various scales into mathematical models. These can be used to perform in silico simulations to help unravel the relation between architecture and function, as illustrated below for liver regeneration, and to bridge from the in vitro situation and animal models to humans. In the near future, novel mechanisms will usually not be directly elucidated by modelling. However, models will falsify hypotheses and guide towards the most informative experimental design. Copyright © 2014 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.

  9. Extracting information from two-dimensional electrophoresis gels by partial least squares regression

    DEFF Research Database (Denmark)

    Jessen, Flemming; Lametsch, R.; Bendixen, E.

    2002-01-01

    Two-dimensional gel electrophoresis (2-DE) produces large amounts of data and extraction of relevant information from these data demands a cautious and time consuming process of spot pattern matching between gels. The classical approach of data analysis is to detect protein markers that appear or disappear depending on the experimental conditions. Such biomarkers are found by comparing the relative volumes of individual spots in the individual gels. Multivariate statistical analysis and modelling of 2-DE data for comparison and classification is an alternative approach utilising the combination of all proteins/spots in the gels. In the present study it is demonstrated how information can be extracted by multivariate data analysis. The strategy is based on partial least squares regression followed by variable selection to find proteins that individually or in combination with other proteins vary......

  10. From remote sensing data about information extraction for 3D geovisualization - Development of a workflow

    International Nuclear Information System (INIS)

    Tiede, D.

    2010-01-01

    With the increased availability of high (spatial) resolution remote sensing imagery since the late nineties, the need to develop operational workflows for the automated extraction, provision and communication of information from such data has grown. Monitoring requirements, aimed at the implementation of environmental or conservation targets, the management of (environmental) resources, and regional planning, as well as international initiatives, especially the joint initiative of the European Commission and ESA (European Space Agency) for Global Monitoring for Environment and Security (GMES), also play a major part. This thesis addresses the development of an integrated workflow for the automated provision of information derived from remote sensing data. Considering the data and fields of application involved, this work aims to design the workflow as generically as possible. The following research questions are discussed: What are the requirements of a workflow architecture that seamlessly links the individual workflow elements in a timely manner and effectively secures the accuracy of the extracted information? How can the workflow retain its efficiency if large amounts of data are processed? How can the workflow be improved with regard to automated object-based image analysis (OBIA)? Which recent developments could be of use? What are the limitations, or which workarounds could be applied, in order to generate relevant results? How can relevant information be prepared in a target-oriented way and communicated effectively? How can the more recently developed, freely available virtual globes be used to deliver conditioned information, with the third dimension considered as an additional, explicit carrier of information? Based on case studies comprising different data sets and fields of application, it is demonstrated how methods to extract and process information as well as to effectively communicate results can be improved and successfully combined within one workflow. It is shown that (1

  11. Addressing Risk Assessment for Patient Safety in Hospitals through Information Extraction in Medical Reports

    Science.gov (United States)

    Proux, Denys; Segond, Frédérique; Gerbier, Solweig; Metzger, Marie Hélène

    Hospital-acquired infections (HAI) are a real burden for doctors and risk surveillance experts. The impact on patients' health and the related healthcare cost is very significant and a major concern even for rich countries. Furthermore, the data required to evaluate the threat are generally not available to experts, which prevents fast reaction. However, recent advances in computational intelligence techniques, such as information extraction, risk pattern detection in documents and decision support systems, now make it possible to address this problem.

  12. From Specific Information Extraction to Inferences: A Hierarchical Framework of Graph Comprehension

    Science.gov (United States)

    2004-09-01

    The skill to interpret the information displayed in graphs is so important that the National Council of Teachers of Mathematics has created guidelines to ensure that students learn these skills (NCTM: Standards for Mathematics, 2003). These guidelines are based primarily on the extraction of...

  13. Extracting breathing rate information from a wearable reflectance pulse oximeter sensor.

    Science.gov (United States)

    Johnston, W S; Mendelson, Y

    2004-01-01

    The integration of multiple vital physiological measurements could help combat medics and field commanders better predict a soldier's health condition and enhance their ability to perform remote triage procedures. In this paper we demonstrate the feasibility of extracting accurate breathing rate information from a photoplethysmographic signal that was recorded by a reflectance pulse oximeter sensor mounted on the forehead and subsequently processed by simple time-domain filtering and frequency-domain Fourier analysis.
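
    As a rough illustration of the pipeline the abstract describes (time-domain filtering followed by frequency-domain Fourier analysis), the Python sketch below isolates the respiratory band of a PPG trace and reads the rate off the dominant spectral peak. The function name, the 0.1-0.5 Hz band and the filter order are illustrative assumptions, not the authors' published parameters.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def breathing_rate_from_ppg(ppg, fs):
            """Estimate breathing rate (breaths/min) from a PPG trace sampled at fs Hz."""
            # Band-pass around typical respiratory frequencies (assumed 0.1-0.5 Hz)
            b, a = butter(2, [0.1 / (fs / 2), 0.5 / (fs / 2)], btype="band")
            resp = filtfilt(b, a, ppg)
            # Dominant spectral peak of the windowed respiratory component
            spectrum = np.abs(np.fft.rfft(resp * np.hanning(len(resp))))
            freqs = np.fft.rfftfreq(len(resp), d=1.0 / fs)
            return freqs[np.argmax(spectrum)] * 60.0  # Hz -> breaths per minute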

  14. Towards a Quantitative Performance Measurement Framework to Assess the Impact of Geographic Information Standards

    Science.gov (United States)

    Vandenbroucke, D.; Van Orshoven, J.; Vancauwenberghe, G.

    2012-12-01

    Over the last decades, the use of Geographic Information (GI) has gained importance, in the public as well as the private sector. But even though many spatial data and related information exist, the data sets are scattered over many organizations and departments. In practice it remains difficult to find the spatial data sets needed, and to access, obtain and prepare them for use in applications. Spatial Data Infrastructures (SDI) have therefore been developed to enhance the access, use and sharing of GI. SDIs consist of a set of technological and non-technological components to reach this goal. Since the nineties, many SDI initiatives have been launched. Ultimately, all these initiatives aim to enhance the flow of spatial data between the organizations (users as well as producers) involved in intra- and inter-organizational and even cross-country business processes. However, the flow of information and its re-use in different business processes require technical and semantic interoperability: the first should guarantee that system components can interoperate and use the data, while the second should guarantee that data content is understood by all users in the same way. GI-standards within the SDI are necessary to make this happen. However, it is not known whether this is realized in practice. The objective of the research is therefore to develop a quantitative framework to assess the impact of GI-standards on the performance of business processes. For that purpose, indicators are defined and tested in several cases throughout Europe. The proposed research builds upon previous work carried out in the SPATIALIST project, which analyzed the impact of different technological and non-technological factors on the SDI-performance of business processes (Dessers et al., 2011). The current research aims to apply quantitative performance measurement techniques, which are frequently used to measure the performance of production processes (Anupindi et al., 2005). Key to reach the research objectives

  15. Extraction of land cover change information from ENVISAT-ASAR data in Chengdu Plain

    Science.gov (United States)

    Xu, Wenbo; Fan, Jinlong; Huang, Jianxi; Tian, Yichen; Zhang, Yong

    2006-10-01

    Land cover data are essential to most global change research objectives, including the assessment of current environmental conditions and the simulation of future environmental scenarios that ultimately lead to public policy development. The Chinese Academy of Sciences generated a nationwide land cover database in order to quantify and spatially characterize land use/cover changes (LUCC) in the 1990s. To keep the database reliable, it must be updated regularly, but it is difficult to obtain the remote sensing data needed to extract land cover change information at large scale. Since optical remote sensing data are hard to acquire over the Chengdu Plain, the objective of this research was to evaluate multitemporal ENVISAT advanced synthetic aperture radar (ASAR) data for extracting land cover change information. Based on fieldwork and the nationwide 1:100000 land cover database, the paper assesses several land cover changes in the Chengdu Plain, for example crop to buildings, forest to buildings, and forest to bare land. The results show that ENVISAT ASAR data have great potential for extracting land cover change information.
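
    The abstract does not spell out the change-detection algorithm used, so the sketch below shows a common baseline for multitemporal SAR instead: the log-ratio operator with a fixed threshold. Both the operator choice and the threshold value are assumptions for illustration, not the paper's method.

        import numpy as np

        def log_ratio_change_mask(sar_t1, sar_t2, threshold=1.0):
            """Boolean change mask between two co-registered SAR intensity images."""
            eps = 1e-6  # guard against zero-intensity pixels
            log_ratio = np.abs(np.log((sar_t2 + eps) / (sar_t1 + eps)))
            return log_ratio > threshold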

  16. KneeTex: an ontology-driven system for information extraction from MRI reports.

    Science.gov (United States)

    Spasić, Irena; Zhao, Bo; Jones, Christopher B; Button, Kate

    2015-01-01

    In the realm of knee pathology, magnetic resonance imaging (MRI) has the advantage of visualising all structures within the knee joint, which makes it a valuable tool for increasing diagnostic accuracy and planning surgical treatments. Therefore, clinical narratives found in MRI reports convey valuable diagnostic information. A range of studies have proven the feasibility of natural language processing for information extraction from clinical narratives. However, no study has focused specifically on MRI reports in relation to knee pathology, possibly due to the complexity of knee anatomy and the wide range of conditions that may be associated with different anatomical entities. In this paper we describe KneeTex, an information extraction system that operates in this domain. As an ontology-driven information extraction system, KneeTex makes active use of an ontology to strongly guide and constrain text analysis. We used automatic term recognition to facilitate the development of a domain-specific ontology with sufficient detail and coverage for text-mining applications. In combination with the ontology, the high regularity of the sublanguage used in knee MRI reports allowed us to model its processing by a set of sophisticated lexico-semantic rules with minimal syntactic analysis. The main processing steps involve named entity recognition combined with coordination, enumeration, ambiguity and co-reference resolution, followed by text segmentation. Ontology-based semantic typing is then used to drive the template-filling process. We adopted an existing ontology, TRAK (Taxonomy for RehAbilitation of Knee conditions), for use within KneeTex. The original TRAK ontology was expanded from 1,292 concepts, 1,720 synonyms and 518 relationship instances to 1,621 concepts, 2,550 synonyms and 560 relationship instances. This provided KneeTex with a very fine-grained lexico-semantic knowledge base, which is highly attuned to the given sublanguage. Information extraction results were evaluated

  17. A comparison of sorptive extraction techniques coupled to a new quantitative, sensitive, high throughput GC-MS/MS method for methoxypyrazine analysis in wine.

    Science.gov (United States)

    Hjelmeland, Anna K; Wylie, Philip L; Ebeler, Susan E

    2016-02-01

    Methoxypyrazines are volatile compounds found in plants, microbes, and insects that have potent vegetal and earthy aromas. With sensory detection thresholds in the low ng L(-1) range, modest concentrations of these compounds can profoundly impact the aroma quality of foods and beverages, and high levels can lead to consumer rejection. The wine industry routinely analyzes the most prevalent methoxypyrazine, 2-isobutyl-3-methoxypyrazine (IBMP), to aid in harvest decisions, since concentrations decrease during berry ripening. In addition to IBMP, three other methoxypyrazines, IPMP (2-isopropyl-3-methoxypyrazine), SBMP (2-sec-butyl-3-methoxypyrazine), and EMP (2-ethyl-3-methoxypyrazine), have been identified in grapes and/or wine and can impact aroma quality. Despite their routine analysis in the wine industry (mostly IBMP), accurate methoxypyrazine quantitation is hindered by two major challenges: sensitivity and resolution. With extremely low sensory detection thresholds (~8-15 ng L(-1) in wine for IBMP), highly sensitive analytical methods are necessary to quantify methoxypyrazines at trace levels. Here we were able to achieve resolution of IBMP, as well as of IPMP, EMP, and SBMP, from co-eluting compounds using one-dimensional chromatography coupled to positive chemical ionization tandem mass spectrometry. Three extraction techniques, HS-SPME (headspace solid-phase microextraction), SBSE (stir-bar sorptive extraction), and HSSE (headspace sorptive extraction), were validated and compared. A 30 min extraction time was used for the HS-SPME and SBSE techniques, while 120 min was necessary to achieve sufficient sensitivity for HSSE extractions. All extraction methods have limits of quantitation (LOQ) at or below 1 ng L(-1) for all four methoxypyrazines analyzed, i.e., LOQs at or below reported sensory detection limits in wine. The method is high-throughput, with resolution of all compounds possible within a relatively rapid 27 min GC oven program.

  18. SAR matrices: automated extraction of information-rich SAR tables from large compound data sets.

    Science.gov (United States)

    Wassermann, Anne Mai; Haebel, Peter; Weskamp, Nils; Bajorath, Jürgen

    2012-07-23

    We introduce the SAR matrix data structure that is designed to elucidate SAR patterns produced by groups of structurally related active compounds, which are extracted from large data sets. SAR matrices are systematically generated and sorted on the basis of SAR information content. Matrix generation is computationally efficient and enables processing of large compound sets. The matrix format is reminiscent of SAR tables, and SAR patterns revealed by different categories of matrices are easily interpretable. The structural organization underlying matrix formation is more flexible than standard R-group decomposition schemes. Hence, the resulting matrices capture SAR information in a comprehensive manner.

  19. Feature extraction and learning using context cue and Rényi entropy based mutual information

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    ...information. In particular, for feature extraction, we develop a new set of kernel descriptors, Context Kernel Descriptors (CKD), which enhance the original KDES by embedding the spatial context into the descriptors. Context cues contained in the context kernel enforce some degree of spatial consistency, thus improving the robustness of CKD. For feature learning and reduction, we propose a novel codebook learning method, based on a Rényi-quadratic-entropy-based mutual information measure called Cauchy-Schwarz Quadratic Mutual Information (CSQMI), to learn a compact and discriminative CKD codebook. Projecting ... as the information about the underlying labels of the CKD using CSQMI. Thus the resulting codebook and reduced CKD are discriminative. We verify the effectiveness of our method on several public image benchmark datasets such as YaleB, Caltech-101 and CIFAR-10, as well as a challenging chicken feet dataset of our own...

  20. Method of extracting significant trouble information of nuclear power plants using probabilistic analysis technique

    International Nuclear Information System (INIS)

    Shimada, Yoshio; Miyazaki, Takamasa

    2005-01-01

    In order to analyze and evaluate the large amount of trouble information on overseas nuclear power plants, it is necessary to select the information that is significant in terms of both safety and reliability. In this research, a method was developed for efficiently and simply classifying the degrees of importance of components, in terms of safety and reliability, while paying attention to the root-cause components appearing in the information. Regarding safety, the reactor core damage frequency (CDF), used in the probabilistic analysis of a reactor, was adopted; regarding reliability, the automatic plant trip probability (APTP), used in the probabilistic analysis of automatic reactor trips, was adopted. These two aspects were reflected in criteria for classifying the degrees of importance of components. By applying these criteria, a simple method of extracting significant trouble information on overseas nuclear power plants was developed. (author)

  1. [Studies on preparative technology and quantitative determination for extracts of total saponin in root of Panax japonicus].

    Science.gov (United States)

    He, Yu-min; Lu, Ke-ming; Yuan, Ding; Zhang, Chang-cheng

    2008-11-01

    To explore the optimum extraction and purification conditions for the total saponins in the root of Panax japonicus (RPJ), and to establish quality control methods for them, an L16(4^5) orthogonal test was designed with the extraction rate of total saponins as the index to determine a rational extraction process, and the techniques of water-saturated n-butanol extraction and acetone precipitation were applied to purify the alcohol extract of RPJ. Total saponins were determined by spectrophotometry, and the triterpenoid sapogenin oleanolic acid by HPLC. The optimum extraction conditions were as follows: the material was pulverized, soaked in 10 volumes of 60% aqueous ethanol, and refluxed 3 times for 3 h each. The purification process comprised three extractions with water-saturated n-butanol and precipitation with 4-5 volumes of acetone, which yielded quality products. The content of total saponins reached 83.48%, and that of oleanolic acid 38.30%. The optimized preparative technology is stable, convenient and practical, and gives a high and consistent extraction rate for RPJ, providing new evidence for industrialized production of the plant and the development of new drugs.

  2. Automated concept-level information extraction to reduce the need for custom software and rules development.

    Science.gov (United States)

    D'Avolio, Leonard W; Nguyen, Thien M; Goryachev, Sergey; Fiore, Louis D

    2011-01-01

    Despite at least 40 years of promising empirical performance, very few clinical natural language processing (NLP) or information extraction systems currently contribute to medical science or care. The authors address this gap by reducing the need for custom software and rules development with a graphical user interface-driven, highly generalizable approach to concept-level retrieval. A 'learn by example' approach combines features derived from open-source NLP pipelines with open-source machine learning classifiers to automatically and iteratively evaluate top-performing configurations. The Fourth i2b2/VA Shared Task Challenge's concept extraction task provided the data sets and metrics used to evaluate performance. Top F-measure scores for each of the tasks were medical problems (0.83), treatments (0.82), and tests (0.83). Recall lagged precision in all experiments. Precision was near or above 0.90 in all tasks. With no customization for the tasks and less than 5 min of end-user time to configure and launch each experiment, the average F-measure was 0.83, one point behind the mean F-measure of the 22 entrants in the competition. Strong precision scores indicate the potential of applying the approach for more specific clinical information extraction tasks. There was not one best configuration, supporting an iterative approach to model creation. Acceptable levels of performance can be achieved using fully automated and generalizable approaches to concept-level information extraction. The described implementation and related documentation is available for download.

  3. Quantitative Detection of Trace Level Cloxacillin in Food Samples Using Magnetic Molecularly Imprinted Polymer Extraction and Surface-Enhanced Raman Spectroscopy Nanopillars

    DEFF Research Database (Denmark)

    Ashley, Jon; Wu, Kaiyu; Hansen, Mikkel Fougt

    2017-01-01

    There is an increasing demand for rapid, sensitive, and low cost analytical methods to routinely screen antibiotic residues in food products. Conventional detection of antibiotics involves sample preparation by liquid-liquid or solid-phase extraction, followed by analysis using liquid ... with surface-enhanced Raman spectroscopy (SERS)-based detection for quantitative analysis of cloxacillin in pig serum. MMIP microspheres were synthesized using a core-shell technique. The large loading capacity and high selectivity of the MMIP microspheres enabled efficient extraction of cloxacillin, while ... using an internal standard. By coherently combining MMIP extraction and a silicon nanopillar-based SERS biosensor, good sensitivity toward cloxacillin was achieved. The detection limit was 7.8 pmol. Cloxacillin recoveries from spiked pig plasma samples were found to be more than 80% ...

  4. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

    In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS). The idea is based on the fact that texture images carry emotion-related information. The feature extraction is derived from the time-frequency representation of spectrogram images. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII) derived from the spectrogram image is extracted using Laws' masks to characterize the emotional state. In order to evaluate the effectiveness of the proposed emotion recognition in different languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, and one self-recorded database (KHUSC-EmoDB), to evaluate cross-corpus performance. The results of the proposed ESS system are presented using a support vector machine (SVM) as the classifier. Experimental results show that the proposed TII-based feature extraction, inspired by visual perception, can provide significant classification power for ESS systems. The two-dimensional (2-D) TII feature can discriminate between different emotions in visual expressions beyond what pitch and formant tracks convey. In addition, de-noising in 2-D images can be completed more easily than de-noising in 1-D speech.
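
    A minimal sketch of texture-energy extraction from a spectrogram image is given below: one Laws 5x5 mask (the outer product of the 1-D level and edge vectors) is convolved with a log-scaled spectrogram. The window length, and the log scaling standing in for the authors' cubic contrast curve, are simplifying assumptions.

        import numpy as np
        from scipy.signal import spectrogram, convolve2d

        # 1-D Laws vectors: L5 (level) and E5 (edge); outer products give 2-D masks
        L5 = np.array([1, 4, 6, 4, 1], dtype=float)
        E5 = np.array([-1, -2, 0, 2, 1], dtype=float)

        def texture_energy(speech, fs):
            """One Laws texture-energy statistic from a spectrogram image."""
            f, t, Sxx = spectrogram(speech, fs=fs, nperseg=256)
            img = np.log1p(Sxx)               # crude contrast compression
            filtered = convolve2d(img, np.outer(L5, E5), mode="same")
            return np.mean(np.abs(filtered))  # energy of the filtered image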

  5. An Accurate Integral Method for Vibration Signal Based on Feature Information Extraction

    Directory of Open Access Journals (Sweden)

    Yong Zhu

    2015-01-01

    After summarizing the advantages and disadvantages of current integral methods, a novel vibration-signal integral method based on feature information extraction is proposed. The method takes full advantage of the self-adaptive filtering and waveform-correction properties of ensemble empirical mode decomposition in dealing with nonlinear and nonstationary signals, and merges the strengths of kurtosis, mean square error, energy, and singular value decomposition for signal feature extraction. The values of these four indexes are combined into a feature vector; the connotative characteristic components in the vibration signal are then accurately extracted by Euclidean distance search, and the desired integral signals are precisely reconstructed. With this method, the interference from invalid signal components such as trend items and noise, which plagues traditional methods, is solved: the large cumulative error of traditional time-domain integration is effectively overcome, and the large low-frequency error of traditional frequency-domain integration is avoided. Compared with traditional integral methods, this method excels at removing noise while retaining useful feature information, and shows higher accuracy and superiority.

  6. A cascade of classifiers for extracting medication information from discharge summaries

    Directory of Open Access Journals (Sweden)

    Halgrim Scott

    2011-07-01

    Background: Extracting medication information from clinical records has many potential applications, and recently published research, systems, and competitions reflect an interest therein. Much of the early extraction work involved rules and lexicons, but more recently machine learning has been applied to the task. Methods: We present a hybrid system consisting of two parts. The first part, field detection, uses a cascade of statistical classifiers to identify medication-related named entities. The second part uses simple heuristics to link those entities into medication events. Results: The system achieved performance that is comparable to other approaches to the same task. This performance is further improved by adding features that reference external medication name lists. Conclusions: This study demonstrates that our hybrid approach outperforms purely statistical or rule-based systems. The study also shows that a cascade of classifiers works better than a single classifier in extracting medication information. The system is available as is upon request from the first author.

  7. Three-dimensional information extraction from GaoFen-1 satellite images for landslide monitoring

    Science.gov (United States)

    Wang, Shixin; Yang, Baolin; Zhou, Yi; Wang, Futao; Zhang, Rui; Zhao, Qing

    2018-05-01

    To use GaoFen-1 (GF-1) satellite images more efficiently for landslide emergency monitoring, a digital surface model (DSM) can be generated from GF-1 across-track stereo image pairs to build a terrain dataset. This study proposes a landslide 3D information extraction method based on the terrain changes of slope objects. The slope objects are mergers of segmented image objects that have similar aspects, and the terrain changes are calculated from the post-disaster digital elevation model (DEM) from GF-1 and the pre-disaster DEM from GDEM V2. A high-mountain landslide that occurred in Wenchuan County, Sichuan Province is used for a 3D information extraction test. The extracted total area of the landslide is 22.58 ha, the displaced earth volume is 652,100 m3, and the average sliding direction is 263.83°, with accuracies of 0.89, 0.87 and 0.95, respectively. The proposed method thus expands the application of GF-1 satellite images to landslide emergency monitoring.
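
    The area and volume figures above rest on differencing the pre- and post-event DEMs over the slope objects; a minimal sketch of that bookkeeping (assuming the two DEMs are already co-registered on the same grid, with object masking omitted) might look as follows.

        import numpy as np

        def terrain_change_volumes(dem_pre, dem_post, cell_size):
            """Depletion and accumulation volumes from two co-registered DEMs (m)."""
            dh = dem_post - dem_pre                  # elevation change per cell
            cell_area = cell_size ** 2               # m^2 per grid cell
            lost = -dh[dh < 0].sum() * cell_area     # volume removed (depletion zone)
            gained = dh[dh > 0].sum() * cell_area    # volume deposited (accumulation)
            return lost, gained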

  8. Freezing fecal samples prior to DNA extraction affects the Firmicutes to Bacteroidetes ratio determined by downstream quantitative PCR analysis

    DEFF Research Database (Denmark)

    Bahl, Martin Iain; Bergström, Anders; Licht, Tine Rask

    2012-01-01

    Freezing stool samples prior to DNA extraction and downstream analysis is widely used in metagenomic studies of the human microbiota but may affect the inferred community composition. In this study, DNA was extracted either directly or following freeze storage of three homogenized human fecal...

  10. Information Quality of a Nursing Information System depends on the nurses: A combined quantitative and qualitative evaluation

    NARCIS (Netherlands)

    Michel-Verkerke, M.B.

    2012-01-01

    Purpose Providing access to patient information is the key factor in nurses’ adoption of a Nursing Information System (NIS). In this study the requirements for information quality and the perceived quality of information are investigated. A teaching hospital in the Netherlands has developed a NIS as

  11. A Quantitative Study on the Relationship of Information Security Policy Awareness, Enforcement, and Maintenance to Information Security Program Effectiveness

    Science.gov (United States)

    Francois, Michael T.

    2016-01-01

    Today's organizations rely heavily on information technology to conduct their daily activities. Therefore, their information security systems are an area of heightened security concern. As a result, organizations implement information security programs to address and mitigate that concern. However, even with the emphasis on information security,…

  12. Identification and quantitative determination of carbohydrates in ethanolic extracts of two conifers using 13C NMR spectroscopy.

    Science.gov (United States)

    Duquesnoy, Emilie; Castola, Vincent; Casanova, Joseph

    2008-04-07

    We developed a method for the direct identification and quantification of carbohydrates in raw vegetable extracts using (13)C NMR spectroscopy without any preliminary step of precipitation or reduction of the components. This method has been validated (accuracy, precision and response linearity) using pure compounds and artificial mixtures before being applied to authentic ethanolic extracts of pine needles, pine wood and pine cones and fir twigs. We determined that carbohydrates represented from 15% to 35% of the crude extracts in which pinitol was the principal constituent accompanied by arabinitol, mannitol, glucose and fructose.

  13. A quantitative approach for risk-informed safety significance categorization in option-2

    International Nuclear Information System (INIS)

    Ha, Jun Su; Seong, Poong Hyun

    2004-01-01

    OPTION-2 recommends that structures, systems, or components (SSCs) of nuclear power plants (NPPs) be categorized into four groups according to their safety significance as well as whether they are safety-related or not. With changes to the scope of SSCs covered by 10 CFR 50, safety-related components categorized as of low safety significance (RISC-3 SSCs) can be exempted from existing conservative burdens (or requirements). As the OPTION-2 paradigm is applied, many SSCs may be categorized as RISC-3. Changes in the treatment of RISC-3 SSCs will be recommended and the recommended changes then evaluated. Consequently, before recommending changes in treatment, probable candidate SSCs for such changes need to be identified for efficient risk-informed regulation and application (RIRA). Hence, in this work, a validation focused on RISC-3 SSCs is proposed to identify probable candidate SSCs. The burden-to-importance ratio (BIR) is utilized as a quantitative measure for the validation: BIR represents the extent of resources or requirements imposed on an SSC with respect to the value of its importance measure, so SSCs with a high BIR can be considered probable candidates for changes in treatment. In addition, the final decision on whether RISC-3 SSCs can be considered probable candidates should be made by an expert panel. For effective decision making, a structured mathematical decision-making process is constructed based on Bayesian Belief Networks (BBN) to overcome the demerits of conventional group meetings based on unstructured discussion. To demonstrate the usefulness of the proposed approach, it is applied to 22 components selected from 512 In-Service Test (IST) components of Ulchin unit 3. The results of the application show that the proposed approach can identify probable candidate SSCs for changes in treatment. The identification of the
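
    The screening step can be pictured as below: compute the burden-to-importance ratio per SSC and rank the high-BIR items as candidates for the expert panel. The field names, units and cutoff are hypothetical placeholders, not values from the paper.

        def rank_bir_candidates(sscs, bir_cutoff=2.0):
            """Rank SSCs by burden-to-importance ratio (BIR); a high BIR suggests
            a candidate for relaxed treatment, pending the expert panel's decision."""
            for s in sscs:
                s["BIR"] = s["burden"] / s["importance"]
            picked = [s for s in sscs if s["BIR"] >= bir_cutoff]
            return sorted(picked, key=lambda s: s["BIR"], reverse=True)

        # Hypothetical inputs: burden in man-hours/year, importance as a
        # normalized risk-importance score derived from CDF/APTP analyses.
        candidates = rank_bir_candidates([
            {"id": "PV-101", "burden": 120.0, "importance": 0.004},
            {"id": "MOV-202", "burden": 15.0, "importance": 0.030},
        ])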

  14. THE EXTRACTION OF INDOOR BUILDING INFORMATION FROM BIM TO OGC INDOORGML

    Directory of Open Access Journals (Sweden)

    T.-A. Teo

    2017-07-01

    Indoor Spatial Data Infrastructure (indoor-SDI) is an important SDI for geospatial analysis and location-based services. A Building Information Model (BIM) has a high degree of detail in the geometric and semantic information for a building. This study proposes direct conversion schemes to extract indoor building information from BIM into OGC IndoorGML. The major steps of the research are (1) topological conversion from the building model into an indoor network model, and (2) generation of IndoorGML. The topological conversion is mainly a process of generating and mapping nodes and edges from IFC to IndoorGML. A node represents each space (e.g. IfcSpace) and object (e.g. IfcDoor) in the building, while an edge shows the relationships between nodes. According to the definition of IndoorGML, the topological model in the dual space is also represented as a set of nodes and edges; these definitions in IndoorGML are the same as in the indoor network. Therefore, we can extract the necessary data from the indoor network and easily convert them into IndoorGML based on the IndoorGML schema. The experiment used a real BIM model to examine the proposed method. The experimental results indicated that the 3D indoor model (i.e. the IndoorGML model) can be imported automatically from the IFC model by the proposed procedure. In addition, the geometry and attributes of building elements are completely and correctly converted from BIM to indoor-SDI.

  15. Methods from Information Extraction from LIDAR Intensity Data and Multispectral LIDAR Technology

    Science.gov (United States)

    Scaioni, M.; Höfle, B.; Baungarten Kersting, A. P.; Barazzetti, L.; Previtali, M.; Wujanz, D.

    2018-04-01

    LiDAR is a consolidated technology for topographic mapping and 3D reconstruction, which is implemented on several platforms. On the other hand, the exploitation of the geometric information has been complemented by the use of laser intensity, which may provide additional data for multiple purposes. This option has been emphasized by the availability of sensors working at different wavelengths, which are thus able to provide additional information for the classification of surfaces and objects. Several applications of monochromatic and multi-spectral LiDAR data have already been developed in different fields: geosciences, agriculture, forestry, building and cultural heritage. The use of intensity data to extract measures of point cloud quality has also been developed. The paper gives an overview of the state-of-the-art of these techniques and presents the modern technologies for the acquisition of multispectral LiDAR data. In addition, the ISPRS WG III/5 on 'Information Extraction from LiDAR Intensity Data' has collected and made available a few open data sets to support scholars doing research in this field. This service is presented, and the data sets delivered so far are described.

  16. The effect of informed consent on stress levels associated with extraction of impacted mandibular third molars.

    Science.gov (United States)

    Casap, Nardy; Alterman, Michael; Sharon, Guy; Samuni, Yuval

    2008-05-01

    To evaluate the effect of informed consent on stress levels associated with removal of impacted mandibular third molars, a total of 60 patients scheduled for extraction of impacted mandibular third molars participated in this study; the patients were unaware of the study's objectives. Data from 20 patients established the baseline levels of electrodermal activity (EDA). The remaining 40 patients were randomly assigned to 2 equal groups receiving either a detailed informed consent document, disclosing the possible risks involved with the surgery, or a simplified version. Pulse, blood pressure, and EDA were monitored before, during, and after completion of the consent document. Changes in EDA, but not in blood pressure, were measured on completion of either version of the consent document. A greater increase in EDA was associated with the detailed version (P = .004). A similar, although nonsignificant, concomitant increase in pulse values was monitored on completion of both versions. Completion of a fully disclosed informed consent document is thus associated with changes in physiological parameters. The results suggest that overly detailed listing and disclosure of risks before extraction of impacted mandibular third molars can increase patient stress.

  18. About increasing informativity of diagnostic system of asynchronous electric motor by extracting additional information from values of consumed current parameter

    Science.gov (United States)

    Zhukovskiy, Y.; Korolev, N.; Koteleva, N.

    2018-05-01

    This article is devoted to expanding the possibilities of assessing the technical state of asynchronous electric drives from their current consumption, and to increasing the information capacity of diagnostic methods under conditions of limited access to equipment and incomplete information. The method of spectral analysis of the drive current can be supplemented by an analysis of the components of the current Park's vector. The evolution of the hodograph at the moment of appearance and during the development of defects was investigated using the example of current asymmetry in the phases of an induction motor. The result of the study is a set of new diagnostic parameters for the asynchronous electric drive. The research proved that the proposed diagnostic parameters allow the type and level of a defect to be determined without stopping the equipment and taking it out of service for repair. Modern digital control and monitoring systems can use the proposed parameters, based on the stator current of the electrical machine, to improve the accuracy and reliability of diagnostic patterns and to predict their changes in order to improve equipment maintenance systems. This approach can also be used in systems and objects subject to significant parasitic vibrations and unsteady loads. The extraction of useful information can be carried out in electric drive systems whose structure includes a power electric converter.
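
    The Park's vector approach mentioned above maps the three stator phase currents onto a two-component vector whose hodograph is nearly circular for a healthy, symmetric machine and grows elliptical under phase asymmetry. A minimal sketch follows, using the standard Clarke/Park projection; the asymmetry index at the end is an illustrative choice, not one of the paper's proposed parameters.

        import numpy as np

        def park_vector(ia, ib, ic):
            """Park's vector components from the three stator phase currents."""
            i_d = np.sqrt(2.0 / 3.0) * ia - ib / np.sqrt(6.0) - ic / np.sqrt(6.0)
            i_q = (ib - ic) / np.sqrt(2.0)
            return i_d, i_q

        def asymmetry_index(i_d, i_q):
            """Ratio of max to min hodograph radius: ~1 for a healthy motor,
            increasing as the trajectory deforms from circle to ellipse."""
            r = np.hypot(i_d, i_q)
            return r.max() / r.min()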

  19. Quantitative Analysis of Total Petroleum Hydrocarbons in Soils: Comparison between Reflectance Spectroscopy and Solvent Extraction by 3 Certified Laboratories

    Directory of Open Access Journals (Sweden)

    Guy Schwartz

    2012-01-01

    The commonly used analytic method for assessing total petroleum hydrocarbons (TPH) in soil, EPA method 418.1, is usually based on extraction with 1,1,2-trichlorotrifluoroethane (Freon 113) and FTIR spectroscopy of the extracted solvent. This method is widely used for initial site investigation, due to the relatively low price per sample. It is known that the extraction efficiency varies depending on the extracting solvent and other sample properties. This study's main goal was to evaluate reflectance spectroscopy as a tool for TPH assessment, as compared with three commercial certified laboratories using traditional methods. Large variations were found between the results of the three commercial laboratories, both internally (average deviation up to 20%) and between laboratories (average deviation up to 103%). The reflectance spectroscopy method was found to be as good as the commercial laboratories in terms of accuracy, and could be a viable field-screening tool that is rapid, environmentally friendly, and cost effective.

  20. Multi-Paradigm and Multi-Lingual Information Extraction as Support for Medical Web Labelling Authorities

    Directory of Open Access Journals (Sweden)

    Martin Labsky

    2010-10-01

    Until recently, quality labelling of medical web content has been a predominantly manual activity. However, advances in automated text processing have opened the way to computerised support of this activity. The core enabling technology is information extraction (IE). However, the heterogeneity of websites offering medical content imposes particular requirements on the IE techniques to be applied. In the paper we discuss these requirements and describe a multi-paradigm approach to IE that addresses them. Experiments on multi-lingual data are reported. The research has been carried out within the EU MedIEQ project.

  1. Quantitative extraction of the bedrock exposure rate based on unmanned aerial vehicle data and Landsat-8 OLI image in a karst environment

    Science.gov (United States)

    Wang, Hongyan; Li, Qiangzi; Du, Xin; Zhao, Longcai

    2017-12-01

    In the karst regions of southwest China, rocky desertification is one of the most serious problems in land degradation. The bedrock exposure rate is an important index to assess the degree of rocky desertification in karst regions. Because of its inherent merits of macro-scale coverage, frequency, efficiency, and synthesis, remote sensing is a promising method to monitor and assess karst rocky desertification on a large scale. However, field measurement of the bedrock exposure rate is difficult, and existing remote-sensing methods cannot directly extract it owing to the high complexity and heterogeneity of karst environments. Therefore, using unmanned aerial vehicle (UAV) and Landsat-8 Operational Land Imager (OLI) data for Xingren County, Guizhou Province, a quantitative extraction of the bedrock exposure rate based on multi-scale remote-sensing data was developed. Firstly, we used an object-oriented method to carry out an accurate classification of the UAV images; from the results of rock extraction, the bedrock exposure rate was calculated at the 30 m grid scale. Part of the calculated samples were used as training data, and the remainder for model validation. Secondly, in each grid cell the band reflectances of the Landsat-8 OLI data were extracted and a variety of rock and vegetation indexes (e.g., NDVI and SAVI) were calculated. Finally, a network model was established to extract the bedrock exposure rate. The correlation coefficient of the network model was 0.855, that of the validation model was 0.677, and the root mean square error of the validation model was 0.073. This method is valuable for wide-scale estimation of the bedrock exposure rate in karst environments. Using the quantitative inversion model, a distribution map of the bedrock exposure rate in Xingren County was obtained.
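
    The abstract does not name the network architecture, so as a rough analogue the sketch below fits a small feed-forward regressor from per-cell spectral features (band reflectances plus indices such as NDVI/SAVI) to a UAV-derived exposure rate. All arrays here are synthetic placeholders, not the study's data.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.random((500, 8))       # placeholder: 8 spectral features per 30 m cell
        y = X @ rng.random(8) / 8.0    # placeholder exposure rate in [0, 1]

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        model.fit(X_tr, y_tr)
        print("correlation:", np.corrcoef(y_te, model.predict(X_te))[0, 1])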

  2. Quantitative X-ray mapping, scatter diagrams and the generation of correction maps to obtain more information about your material

    Science.gov (United States)

    Wuhrer, R.; Moran, K.

    2014-03-01

    Quantitative X-ray mapping with silicon drift detectors and multi-EDS detector systems have become an invaluable analysis technique and one of the most useful methods of X-ray microanalysis today. The time to perform an X-ray map has reduced considerably with the ability to map minor and trace elements very accurately due to the larger detector area and higher count rate detectors. Live X-ray imaging can now be performed with a significant amount of data collected in a matter of minutes. A great deal of information can be obtained from X-ray maps. This includes; elemental relationship or scatter diagram creation, elemental ratio mapping, chemical phase mapping (CPM) and quantitative X-ray maps. In obtaining quantitative x-ray maps, we are able to easily generate atomic number (Z), absorption (A), fluorescence (F), theoretical back scatter coefficient (η), and quantitative total maps from each pixel in the image. This allows us to generate an image corresponding to each factor (for each element present). These images allow the user to predict and verify where they are likely to have problems in our images, and are especially helpful to look at possible interface artefacts. The post-processing techniques to improve the quantitation of X-ray map data and the development of post processing techniques for improved characterisation are covered in this paper.
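
    Elemental ratio mapping of the kind described reduces, per pixel, to dividing two background-corrected intensity maps while masking statistically unreliable pixels. A minimal sketch follows; the count threshold is an illustrative assumption.

        import numpy as np

        def elemental_ratio_map(counts_a, counts_b, min_counts=10):
            """Per-pixel ratio of two X-ray intensity maps; low-count pixels -> NaN."""
            valid = (counts_a >= min_counts) & (counts_b >= min_counts)
            out = np.full(counts_a.shape, np.nan, dtype=float)
            out[valid] = counts_a[valid] / counts_b[valid]
            return out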

  4. Scholarly Information Extraction Is Going to Make a Quantum Leap with PubMed Central (PMC).

    Science.gov (United States)

    Matthies, Franz; Hahn, Udo

    2017-01-01

    With the increasing availability of complete full texts (journal articles), rather than their surrogates (titles, abstracts), as resources for text analytics, entirely new opportunities arise for information extraction and text mining from scholarly publications. Yet, we gathered evidence that a range of problems are encountered in full-text processing when biomedical text analytics simply reuses existing NLP pipelines that were developed on the basis of abstracts (rather than full texts). We conducted experiments with four different relation extraction engines, all of which were top performers in previous BioNLP Event Extraction Challenges. We found that abstract-trained engines lose up to 6.6% F-score points when run on full-text data. Hence, the reuse of existing abstract-based NLP software in a full-text scenario is considered harmful because of heavy performance losses. Given the current lack of annotated full-text resources to train on, our study quantifies the price paid for this shortcut.

  5. A study on the quantitative model of human response time using the amount and the similarity of information

    International Nuclear Information System (INIS)

    Lee, Sung Jin

    2006-02-01

    The mental capacity to retain or recall information, or memory, is related to human performance during the processing of information. Although a large number of studies have been carried out on human performance, little is known about the similarity effect. The purpose of this study was to propose and validate a quantitative, predictive model of human response time in the user interface, built on the basic concepts of information amount, similarity and degree of practice. Human performance is difficult to explain by similarity or information amount alone, and two difficulties had to be overcome: constructing a quantitative model of human response time, and validating the proposed model by experimental work. A quantitative model based on Hick's law, the law of practice and similarity theory was developed, and validated under various experimental conditions by measuring participants' response times in a computer-based display environment. Human performance improved with the degree of similarity and practice in the user interface. We also found that human performance was age-related, degrading with increasing age. The proposed model may be useful for training operators who will handle such interfaces and for predicting human performance under changed system designs.
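
    One way to picture how the three ingredients combine is the toy model below: a Hick's-law term growing with the information amount, a power-law practice factor, and an additive penalty for inter-item similarity. All coefficients are hypothetical; the study's fitted model is not reproduced here.

        import math

        def response_time(n_alternatives, trials, similarity,
                          a=0.2, b=0.15, alpha=0.4, c=0.1):
            """Toy response-time model (seconds); all coefficients hypothetical."""
            hick = a + b * math.log2(n_alternatives + 1)   # information amount
            practice = trials ** (-alpha)                  # law of practice
            return hick * practice + c * similarity        # similarity slows RT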

  6. Accurate facade feature extraction method for buildings from three-dimensional point cloud data considering structural information

    Science.gov (United States)

    Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia

    2018-05-01

    Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.

  7. Qualitative and Quantitative Information Flow Analysis for Multi-threaded Programs

    NARCIS (Netherlands)

    Ngo, Minh Tri

    2014-01-01

    In today’s information-based society, guaranteeing information security plays an important role in all aspects of life: governments, military, companies, financial information systems, web-based services etc. With the existence of Internet, Google, and shared-information networks, it is easier than

  8. 76 FR 9637 - Proposed Information Collection (Veteran Suicide Prevention Online Quantitative Surveys) Activity...

    Science.gov (United States)

    2011-02-18

    ... Collection (Veteran Suicide Prevention Online Quantitative Surveys) Activity: Comment Request AGENCY... prevention of suicide among Veterans and their families. DATES: Written comments and recommendations on the.... Abstract: VA's top priority is the prevention of Veterans suicide. It is imperative to reach these at-risk...

  9. 76 FR 27384 - Agency Information Collection Activity (Veteran Suicide Prevention Online Quantitative Surveys...

    Science.gov (United States)

    2011-05-11

    ... Collection Activity (Veteran Suicide Prevention Online Quantitative Surveys) Under OMB Review AGENCY.... Abstract: VA's top priority is the prevention of Veterans suicide. It is imperative to reach these at-risk... families' awareness of VA's suicide prevention and mental health support services. In addition, the surveys...

  10. Developing an Approach to Prioritize River Restoration using Data Extracted from Flood Risk Information System Databases.

    Science.gov (United States)

    Vimal, S.; Tarboton, D. G.; Band, L. E.; Duncan, J. M.; Lovette, J. P.; Corzo, G.; Miles, B.

    2015-12-01

    Prioritizing river restoration requires information on river geometry. In many US states, detailed river geometry has been collected for floodplain mapping and is available in Flood Risk Information Systems (FRIS). In particular, North Carolina has, for its 100 counties, developed a database of numerous HEC-RAS models, available through its Flood Risk Information System (FRIS). These models, which include over 260 variables, were developed and updated by numerous contractors; they contain detailed surveyed or LiDAR-derived cross-sections and modeled flood extents for different extreme-event return periods. In this work, data from over 4,700 HEC-RAS models were integrated and upscaled to utilize detailed cross-section information and 100-year modeled flood extent information, enabling river restoration prioritization for the entire state of North Carolina. We developed procedures to extract geomorphic properties such as entrenchment ratio and incision ratio from these models. The entrenchment ratio quantifies the vertical containment of rivers, and thereby their vulnerability to flooding, while the incision ratio quantifies depth per unit width. A map of entrenchment ratio for the whole state was derived by linking these model results to a geodatabase, and a ranking of highly entrenched counties was obtained, enabling prioritization for flood allowance and mitigation. The results were shared through HydroShare, with web maps developed for visualization using the Google Maps Engine API.
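
    Given a surveyed cross-section, the entrenchment ratio can be computed in the usual Rosgen manner as the flood-prone width (water-surface width at twice the maximum bankfull depth) over the bankfull width. The sketch below assumes station/elevation arrays of the kind extractable from a HEC-RAS geometry; it is an illustration, not the project's extraction code.

        import numpy as np

        def entrenchment_ratio(station, elevation, bankfull_elev):
            """Flood-prone width over bankfull width for one cross-section."""
            def width_at(level):
                wet = elevation <= level
                return station[wet].max() - station[wet].min() if wet.any() else 0.0
            thalweg = elevation.min()
            flood_prone_elev = thalweg + 2.0 * (bankfull_elev - thalweg)
            return width_at(flood_prone_elev) / width_at(bankfull_elev)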

  11. Extracting Low-Frequency Information from Time Attenuation in Elastic Waveform Inversion

    Science.gov (United States)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong

    2017-03-01

    Low-frequency information is crucial for recovering background velocity, but the lack of low-frequency information in field data makes inversion impractical without accurate initial models. Laplace-Fourier domain waveform inversion can recover a smooth model from real data without low-frequency information, and this model can be used as an ideal starting model for subsequent inversion. In general, it also starts with low frequencies and includes higher frequencies at later inversion stages; the difference is that its ultralow-frequency information comes from the Laplace-Fourier domain. Meanwhile, a direct implementation of the Laplace-transformed wavefield using frequency-domain inversion is also very convenient. However, because broad frequency bands are often used in pure time-domain waveform inversion, it is difficult to extract the wavefields dominated by low frequencies in this case. In this paper, low-frequency components are constructed by introducing time attenuation into the recorded residuals, and the rest of the method is identical to traditional time-domain inversion. Time windowing and frequency filtering are also applied to mitigate the ambiguity of the inverse problem. Therefore, we can start at low frequencies and move to higher frequencies. The experiment shows that the proposed method can achieve a good inversion result in the presence of a linear initial model and records without low-frequency information.
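
    The core trick, damping the recorded residuals in time so that low-frequency-like components dominate, amounts to multiplying each trace by a decaying exponential. A minimal sketch follows; the choice of the damping constant per inversion stage is left open here.

        import numpy as np

        def attenuate_residual(residual, dt, sigma):
            """Damp a residual trace by exp(-sigma * t); larger sigma emphasizes
            early times and synthesizes lower effective frequencies."""
            t = np.arange(residual.shape[-1]) * dt
            return residual * np.exp(-sigma * t)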

  12. Audio-Visual Speech Recognition Using Lip Information Extracted from Side-Face Images

    Directory of Open Access Journals (Sweden)

    Koji Iwano

    2007-03-01

    This paper proposes an audio-visual speech recognition method using lip information extracted from side-face images, as an attempt to increase noise robustness in mobile environments. Our proposed method assumes that lip images can be captured using a small camera installed in a handset. Two different kinds of lip features, lip-contour geometric features and lip-motion velocity features, are used individually or jointly, in combination with audio features. Phoneme HMMs modeling the audio and visual features are built based on the multistream HMM technique. Experiments conducted using Japanese connected-digit speech contaminated with white noise at various SNR conditions show the effectiveness of the proposed method: recognition accuracy is improved by using the visual information in all SNR conditions, and these visual features were confirmed to be effective even when the audio HMM was adapted to noise by the MLLR method.

  13. Approaching the largest ‘API’: extracting information from the Internet with Python

    Directory of Open Access Journals (Sweden)

    Jonathan E. Germann

    2018-02-01

    This article explores the need for libraries to algorithmically access and manipulate the world's largest API: the Internet. The billions of pages on the 'Internet API' (HTTP, HTML, CSS, XPath, DOM, etc.) are easily accessible and manipulable. Libraries can assist in creating meaning through the datafication of information on the world wide web. Because most information is created for human consumption, some programming is required for automated extraction. Python is an easy-to-learn programming language with extensive packages and community support for web page automation. Four packages (Urllib, Selenium, BeautifulSoup, Scrapy) in Python can automate almost any web page for projects of all sizes. An example warrant data project is explained to illustrate how well Python packages can manipulate web pages to create meaning through assembling custom datasets.
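
    As a flavour of the kind of extraction the article discusses, the sketch below fetches a static page with Urllib and flattens its table cells into rows with BeautifulSoup. The URL is a placeholder, not the warrant data source used in the article.

        import urllib.request
        from bs4 import BeautifulSoup

        url = "https://example.org/records"  # placeholder target page
        html = urllib.request.urlopen(url).read()
        soup = BeautifulSoup(html, "html.parser")

        # Flatten every table row into a list of cell strings
        rows = [[td.get_text(strip=True) for td in tr.find_all("td")]
                for tr in soup.find_all("tr")]
        print(rows)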

  14. DEVELOPMENT OF AUTOMATIC EXTRACTION METHOD FOR ROAD UPDATE INFORMATION BASED ON PUBLIC WORK ORDER OUTLOOK

    Science.gov (United States)

    Sekimoto, Yoshihide; Nakajo, Satoru; Minami, Yoshitaka; Yamaguchi, Syohei; Yamada, Harutoshi; Fuse, Takashi

    Recently, the disclosure of statistical data representing the financial effects of, or burden imposed by, public works through the web sites of national and local governments has enabled discussion of macroscopic financial trends. However, it is still difficult to grasp, nationwide, how each spot has been changed by public works. The purpose of this research is to reasonably collect the road update information that various road managers provide, in order to realize efficient updating of maps such as car navigation maps. In particular, we develop a system that automatically extracts the public works concerned and registers summaries, including position information, to a database from the public work order outlooks released by each local government, combining several web mining technologies. Finally, we collect and register several tens of thousands of records from web sites all over Japan, and confirm the feasibility of our method.

  15. The Analysis of Tree Species Distribution Information Extraction and Landscape Pattern Based on Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Yi Zeng

    2017-08-01

    The forest ecosystem is the largest land vegetation type and plays an irreplaceable role of unique value. At the landscape scale, research on forest landscape patterns has become a current hot spot, and within it the study of forest canopy structure is very important: canopy structure determines the processes and strength of forest energy flow, which influences the adjustment of the ecosystem to climate and species diversity to some extent. The extraction of the factors influencing canopy structure and the analysis of vegetation distribution patterns are therefore especially important. To address these problems, remote sensing technology, which is superior to other technical means because of its fine timeliness and large-scale monitoring, is applied in this study. Taking Lingkong Mountain as the study area, the paper uses remote sensing imagery to analyze the forest distribution pattern and obtain the spatial characteristics of the canopy structure distribution, with DEM data as the basic data for extracting the factors that influence canopy structure. The pattern of tree distribution is further analyzed using terrain parameters, spatial analysis tools and quantitative simulation of surface processes. The Hydrological Analysis tool is used to build a distributed hydrological model, and corresponding algorithms are applied to determine surface water flow paths, the river network and basin boundaries. Results show that the forest vegetation distribution of the dominant tree species presents patches at the landscape scale and has spatial heterogeneity closely related to terrain factors. Overlay analysis of aspect, slope and the forest distribution pattern yields the most suitable areas for stand growth and the better living conditions.

  16. Geopositioning with a quadcopter: Extracted feature locations and predicted accuracy without a priori sensor attitude information

    Science.gov (United States)

    Dolloff, John; Hottel, Bryant; Edwards, David; Theiss, Henry; Braun, Aaron

    2017-05-01

    This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to investigate algorithm performance and issues related to the registration of motion imagery and subsequent extraction of feature locations along with predicted accuracy. A case study is included corresponding to a video taken from a quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude (pointing) information. In particular, tie points are automatically measured between adjacent frames using standard optical flow matching techniques from computer vision, an a priori estimate of sensor attitude is then computed based on supplied GPS sensor positions contained in the video metadata and a photogrammetric/search-based structure from motion algorithm, and then a Weighted Least Squares adjustment of all a priori metadata across the frames is performed. Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used other than for evaluation of solution errors and corresponding accuracy.
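
    The tie-point measurement step (standard optical-flow matching between adjacent frames) can be sketched with OpenCV's Shi-Tomasi detector and pyramidal Lucas-Kanade tracker; the video filename and parameter values below are generic assumptions, not those of the FMV-GTB.

```python
# Sketch of tie-point measurement between adjacent video frames using
# Shi-Tomasi corners plus pyramidal Lucas-Kanade optical flow (OpenCV).
import cv2
import numpy as np

def measure_tie_points(frame_a: np.ndarray, frame_b: np.ndarray):
    """Return matched (x, y) point pairs between two grayscale frames."""
    pts_a = cv2.goodFeaturesToTrack(frame_a, maxCorners=500,
                                    qualityLevel=0.01, minDistance=10)
    pts_b, status, _err = cv2.calcOpticalFlowPyrLK(frame_a, frame_b,
                                                   pts_a, None)
    ok = status.ravel() == 1  # keep only successfully tracked points
    return pts_a[ok].reshape(-1, 2), pts_b[ok].reshape(-1, 2)

cap = cv2.VideoCapture("quadcopter.mp4")  # hypothetical input video
ret_a, a = cap.read()
ret_b, b = cap.read()
if ret_a and ret_b:
    gray_a = cv2.cvtColor(a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(b, cv2.COLOR_BGR2GRAY)
    src, dst = measure_tie_points(gray_a, gray_b)
    print(f"{len(src)} tie points between adjacent frames")
```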

  17. Inexperienced clinicians can extract pathoanatomic information from MRI narrative reports with high reproducibility for use in research/quality assurance

    DEFF Research Database (Denmark)

    Kent, Peter; Briggs, Andrew M; Albert, Hanne Birgit

    2011-01-01

    Background Although reproducibility in reading MRI images amongst radiologists and clinicians has been studied previously, no studies have examined the reproducibility of inexperienced clinicians in extracting pathoanatomic information from magnetic resonance imaging (MRI) narrative reports and t...

  18. [Extraction of buildings three-dimensional information from high-resolution satellite imagery based on Barista software].

    Science.gov (United States)

    Zhang, Pei-feng; Hu, Yuan-man; He, Hong-shi

    2010-05-01

    The demand for accurate and up-to-date spatial information on urban buildings is becoming more and more important for urban planning, environmental protection, and other applications. Today's commercial high-resolution satellite imagery offers the potential to extract the three-dimensional information of urban buildings. This paper extracted the three-dimensional information of urban buildings from QuickBird imagery, and validated the precision of the extraction based on Barista software. It was shown that the extraction of three-dimensional building information from high-resolution satellite imagery based on Barista software had the advantages of a low demand on professional expertise, broad applicability, simple operation, and high precision. Point positioning and height determination accuracy at the one-pixel level could be achieved if the digital elevation model (DEM) and the sensor orientation model were sufficiently precise and the off-nadir view angle was favorable.

  19. Overview of image processing tools to extract physical information from JET videos

    Science.gov (United States)

    Craciunescu, T.; Murari, A.; Gelfusa, M.; Tiseanu, I.; Zoita, V.; EFDA Contributors, JET

    2014-11-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the

  20. Overview of image processing tools to extract physical information from JET videos

    International Nuclear Information System (INIS)

    Craciunescu, T; Tiseanu, I; Zoita, V; Murari, A; Gelfusa, M

    2014-01-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the

  1. Extraction and Analysis of Information Related to Research & Development Declared Under an Additional Protocol

    International Nuclear Information System (INIS)

    Idinger, J.; Labella, R.; Rialhe, A.; Teller, N.

    2015-01-01

    The additional protocol (AP) provides important tools to strengthen and improve the effectiveness and efficiency of the safeguards system. Safeguards are designed to verify that States comply with their international commitments not to use nuclear material or to engage in nuclear-related activities for the purpose of developing nuclear weapons or other nuclear explosive devices. Under an AP based on INFCIRC/540, a State must provide to the IAEA additional information about, and inspector access to, all parts of its nuclear fuel cycle. In addition, the State has to supply information about its nuclear fuel cycle-related research and development (R&D) activities. The majority of States declare their R&D activities under the AP Articles 2.a.(i), 2.a.(x), and 2.b.(i) as part of initial declarations and their annual updates under the AP. In order to verify consistency and completeness of information provided under the AP by States, the Agency has started to analyze declared R&D information by identifying interrelationships between States in different R&D areas relevant to safeguards. The paper outlines the quality of R&D information provided by States to the Agency, describes how the extraction and analysis of relevant declarations are currently carried out at the Agency and specifies what kinds of difficulties arise during evaluation in respect to cross-linking international projects and finding gaps in reporting. In addition, the paper tries to elaborate how the reporting quality of AP information with reference to R&D activities and the assessment process of R&D information could be improved. (author)

  2. Quantitative assessment of information load on control room operator in emergency situations

    International Nuclear Information System (INIS)

    Filshtein, E.L.

    1986-01-01

    The information processing by the operator in the reading-from-display mode is addressed, with the following conclusions: 1) The information measure that is needed should be translatable into time requirements and, as such, should reflect the peculiarities associated with mental information processing abilities. 2) The Information Processing Unit (IPU) is introduced as a measure that reflects these peculiarities and is therefore better suited than the Information Entropy unit (H) for the problem under consideration. 3) All the messages that the operator might encounter are classified as belonging to one of three types, and the amount of processed information is quantified in Information Processing Units (IPU). 4) A pilot study has been conducted to verify the underlying assumptions and to evaluate the rate of information processing in the reading-from-display mode

  3. A Network Neuroscience of Human Learning: Potential to Inform Quantitative Theories of Brain and Behavior.

    Science.gov (United States)

    Bassett, Danielle S; Mattar, Marcelo G

    2017-04-01

    Humans adapt their behavior to their external environment in a process often facilitated by learning. Efforts to describe learning empirically can be complemented by quantitative theories that map changes in neurophysiology to changes in behavior. In this review we highlight recent advances in network science that offer a set of tools and a general perspective that may be particularly useful in understanding types of learning that are supported by distributed neural circuits. We describe recent applications of these tools to neuroimaging data that provide unique insights into adaptive neural processes, the attainment of knowledge, and the acquisition of new skills, forming a network neuroscience of human learning. While promising, the tools have yet to be linked to the well-formulated models of behavior that are commonly utilized in cognitive psychology. We argue that continued progress will require the explicit marriage of network approaches to neuroimaging data and quantitative models of behavior. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Information Risk Management: Qualitative or Quantitative? Cross industry lessons from medical and financial fields

    OpenAIRE

    Upasna Saluja; Norbik Bashah Idris

    2012-01-01

    Enterprises across the world are taking a hard look at their risk management practices. A number of qualitative and quantitative models and approaches are employed by risk practitioners to keep risk under check. As a norm most organizations end up choosing the more flexible, easier to deploy and customize qualitative models of risk assessment. In practice one sees that such models often call upon the practitioners to make qualitative judgments on a relative rating scale which brings in consid...

  5. Qualitative and quantitative information flow analysis for multi-thread programs

    NARCIS (Netherlands)

    Ngo, Minh Tri

    2014-01-01

    In today's information-based society, guaranteeing information security plays an important role in all aspects of life: communication between citizens and governments, military, companies, financial information systems, web-based services etc. With the increasing popularity of computer systems with

  6. Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

    Directory of Open Access Journals (Sweden)

    Li Yao

    2016-01-01

    Full Text Available Both static features and motion features have shown promising performance in the human activity recognition task. However, the information included in these features is insufficient for complex human activities. In this paper, we propose extracting the relational information of static features and motion features for human activity recognition. The videos are represented by a classical Bag-of-Words (BoW) model, which has proved useful in many works. To get a compact and discriminative codebook of small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. After that, to further capture strong relational information, we construct a bipartite graph to model the relationship between words from the different feature sets. We then use a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm’s projective function. We test our work on several datasets and obtain very promising results.
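
    The Bag-of-Words stage of this pipeline can be sketched with k-means quantization of local descriptors followed by histogram pooling per video; the KL-divergence divisive reconstruction and bipartite k-way partition described above would then operate on this codebook. The descriptor dimension and codebook size below are illustrative assumptions.

```python
# Sketch of the Bag-of-Words stage: quantize local descriptors with
# k-means to form a codebook, then pool each video into a word histogram.
# Descriptors are random placeholders for static/motion features.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
descriptors = rng.normal(size=(5000, 64))       # pooled training descriptors
codebook = KMeans(n_clusters=128, n_init=4, random_state=0).fit(descriptors)

def bow_histogram(video_descriptors: np.ndarray) -> np.ndarray:
    """Map a video's descriptors to codewords; return a normalized histogram."""
    words = codebook.predict(video_descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

video = rng.normal(size=(800, 64))              # one video's descriptors
print(bow_histogram(video)[:10])
```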

  7. Experimental study of suitable methods for measuring the radioactivity of 90Sr during its fast quantitation by extraction with dicarbollides

    International Nuclear Information System (INIS)

    Svoboda, K.; Kyrs, M.

    1994-04-01

    The measurement of the activity of 90Sr during its fast isolation by extraction with cobalt dicarbollide and Slovafol 909 in a nitrobenzene-carbon tetrachloride mixed solvent, followed by re-extraction with 0.15 M Chelaton IV at pH 10.2, was investigated. The use of a liquid scintillator allows the effect of the 90Y beta activity to be eliminated more efficiently than if the evaporation residue is measured with a solid scintillator. Traces of dicarbollide, nitrobenzene and CCl4 passed into the aqueous solution exert an unfavorable effect by shifting the spectral curve of the 90Sr + 90Y beta radiation towards lower energies, the shift being dependent on the concentration of the above interferents in the aqueous phase. This effect can be eliminated by extracting the aqueous phase with the same volume of octyl alcohol or amyl acetate; while removing the organic interferents, this extraction brings about no apparent loss of strontium. (author). 3 tabs., 7 figs., 5 refs

  8. The Readability of Electronic Cigarette Health Information and Advice: A Quantitative Analysis of Web-Based Information.

    Science.gov (United States)

    Park, Albert; Zhu, Shu-Hong; Conway, Mike

    2017-01-06

    The popularity and use of electronic cigarettes (e-cigarettes) have increased across all demographic groups in recent years. However, little is currently known about the readability of health information and advice aimed at the general public regarding the use of e-cigarettes. The objective of our study was to examine the readability of publicly available health information as well as advice on e-cigarettes. We compared information and advice available from US government agencies, nongovernment organizations, English-speaking government agencies outside the United States, and for-profit entities. A systematic search for health information and advice on e-cigarettes was conducted using search engines. We manually verified search results and converted to plain text for analysis. We then assessed readability of the collected documents using 4 readability metrics followed by pairwise comparisons of groups with adjustment for multiple comparisons. A total of 54 documents were collected for this study. All 4 readability metrics indicate that all information and advice on e-cigarette use is written at a level higher than that recommended for the general public by National Institutes of Health (NIH) communication guidelines. However, health information and advice written by for-profit entities, many of which were promoting e-cigarettes, were significantly easier to read. A substantial proportion of potential and current e-cigarette users are likely to have difficulty in fully comprehending Web-based health information regarding e-cigarettes, potentially hindering effective health-seeking behaviors. To comply with NIH communication guidelines, government entities and nongovernment organizations would benefit from improving the readability of e-cigarette information and advice. ©Albert Park, Shu-Hong Zhu, Mike Conway. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 06.01.2017.
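
    Readability scoring of the kind reported here can be reproduced with the textstat package; the four metrics below are common choices and are an assumption, since the record does not name the paper's exact metrics.

```python
# Sketch of multi-metric readability scoring for a health-information
# document, using the textstat package (metric choice is illustrative).
import textstat

document = (
    "Electronic cigarettes heat a liquid that usually contains nicotine. "
    "Users inhale the resulting aerosol instead of tobacco smoke. "
    "Health effects of long-term use are still being studied."
)

scores = {
    "Flesch Reading Ease": textstat.flesch_reading_ease(document),
    "Flesch-Kincaid Grade": textstat.flesch_kincaid_grade(document),
    "Gunning Fog": textstat.gunning_fog(document),
    "SMOG": textstat.smog_index(document),
}
for name, value in scores.items():
    print(f"{name}: {value:.1f}")
```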

  9. Extracting information on the spatial variability in erosion rate stored in detrital cooling age distributions in river sands

    Science.gov (United States)

    Braun, Jean; Gemignani, Lorenzo; van der Beek, Peter

    2018-03-01

    One of the main purposes of detrital thermochronology is to provide constraints on the regional-scale exhumation rate and its spatial variability in actively eroding mountain ranges. Procedures that use cooling age distributions coupled with hypsometry and thermal models have been developed in order to extract quantitative estimates of erosion rate and its spatial distribution, assuming steady state between tectonic uplift and erosion. This hypothesis precludes the use of these procedures to assess the likely transient response of mountain belts to changes in tectonic or climatic forcing. Other methods are based on an a priori knowledge of the in situ distribution of ages to interpret the detrital age distributions. In this paper, we describe a simple method that, using the observed detrital mineral age distributions collected along a river, allows us to extract information about the relative distribution of erosion rates in an eroding catchment without relying on a steady-state assumption, the value of thermal parameters or an a priori knowledge of in situ age distributions. The model is based on a relatively low number of parameters describing lithological variability among the various sub-catchments and their sizes and only uses the raw ages. The method we propose is tested against synthetic age distributions to demonstrate its accuracy and the optimum conditions for its use. In order to illustrate the method, we invert age distributions collected along the main trunk of the Tsangpo-Siang-Brahmaputra river system in the eastern Himalaya. From the inversion of the cooling age distributions we predict present-day erosion rates of the catchments along the Tsangpo-Siang-Brahmaputra river system, as well as some of its tributaries. We show that detrital age distributions contain dual information about present-day erosion rate, i.e., from the predicted distribution of surface ages within each catchment and from the relative contribution of any given catchment to the

  10. Extracting information on the spatial variability in erosion rate stored in detrital cooling age distributions in river sands

    Directory of Open Access Journals (Sweden)

    J. Braun

    2018-03-01

    Full Text Available One of the main purposes of detrital thermochronology is to provide constraints on the regional-scale exhumation rate and its spatial variability in actively eroding mountain ranges. Procedures that use cooling age distributions coupled with hypsometry and thermal models have been developed in order to extract quantitative estimates of erosion rate and its spatial distribution, assuming steady state between tectonic uplift and erosion. This hypothesis precludes the use of these procedures to assess the likely transient response of mountain belts to changes in tectonic or climatic forcing. Other methods are based on an a priori knowledge of the in situ distribution of ages to interpret the detrital age distributions. In this paper, we describe a simple method that, using the observed detrital mineral age distributions collected along a river, allows us to extract information about the relative distribution of erosion rates in an eroding catchment without relying on a steady-state assumption, the value of thermal parameters or an a priori knowledge of in situ age distributions. The model is based on a relatively low number of parameters describing lithological variability among the various sub-catchments and their sizes and only uses the raw ages. The method we propose is tested against synthetic age distributions to demonstrate its accuracy and the optimum conditions for its use. In order to illustrate the method, we invert age distributions collected along the main trunk of the Tsangpo–Siang–Brahmaputra river system in the eastern Himalaya. From the inversion of the cooling age distributions we predict present-day erosion rates of the catchments along the Tsangpo–Siang–Brahmaputra river system, as well as some of its tributaries. We show that detrital age distributions contain dual information about present-day erosion rate, i.e., from the predicted distribution of surface ages within each catchment and from the relative contribution of
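
    A toy sketch of the mixing idea behind the method: the detrital age distribution sampled at a river point is modelled as a sum of sub-catchment age distributions weighted by area times erosion rate, and the relative erosion rates are recovered by non-negative least squares. All numbers are synthetic; this is not the authors' inversion code.

```python
# Toy mixing-model inversion: recover relative erosion rates from a
# detrital age histogram built as an area*erosion weighted sum of
# synthetic sub-catchment age distributions.
import numpy as np
from scipy.optimize import nnls

bins = np.linspace(0.0, 20.0, 41)            # cooling-age bins (Myr)
centers = 0.5 * (bins[:-1] + bins[1:])

def catchment_pdf(mean, sigma=1.5):
    """Synthetic in-catchment age distribution, normalized over the bins."""
    pdf = np.exp(-0.5 * ((centers - mean) / sigma) ** 2)
    return pdf / pdf.sum()

areas = np.array([1.0, 2.0, 1.5])            # relative sub-catchment areas
pdfs = np.column_stack([catchment_pdf(m) for m in (4.0, 9.0, 15.0)])

true_erosion = np.array([2.0, 0.5, 1.0])     # relative erosion rates
observed = pdfs @ (areas * true_erosion)     # mixed detrital distribution

weights, _residual = nnls(pdfs, observed)    # solve for area*erosion >= 0
print("recovered relative erosion rates:", weights / areas)  # ~ [2.0, 0.5, 1.0]
```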

  11. Selection of Suitable DNA Extraction Methods for Genetically Modified Maize 3272, and Development and Evaluation of an Event-Specific Quantitative PCR Method for 3272.

    Science.gov (United States)

    Takabatake, Reona; Masubuchi, Tomoko; Futo, Satoshi; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Teshima, Reiko; Kurashima, Takeyo; Mano, Junichi; Kitta, Kazumi

    2016-01-01

    A novel real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) maize, 3272. We first attempted to obtain genomic DNA from this maize using a DNeasy Plant Maxi kit and a DNeasy Plant Mini kit, which have been widely utilized in our previous studies, but DNA extraction yields from 3272 were markedly lower than those from non-GM maize seeds. However, this lowering of DNA extraction yields was not observed with GM quicker or Genomic-tip 20/G. We chose GM quicker for evaluation of the quantitative method. We prepared a standard plasmid for 3272 quantification. The conversion factor (Cf), which is required to calculate the amount of a genetically modified organism (GMO), was experimentally determined for two real-time PCR instruments, the Applied Biosystems 7900HT (the ABI 7900) and the Applied Biosystems 7500 (the ABI 7500). The determined Cf values were 0.60 and 0.59 for the ABI 7900 and the ABI 7500, respectively. To evaluate the developed method, a blind test was conducted as part of an interlaboratory study. The trueness and precision were evaluated as the bias and the relative standard deviation of reproducibility (RSDr), respectively. The determined values were similar to those in our previous validation studies. The limit of quantitation for the method was estimated to be 0.5% or less, and we concluded that the developed method would be suitable and practical for the detection and quantification of 3272.

  12. A Quantitative and Qualitative Inquiry into Future Teachers' Use of Information and Communications Technology to Develop Students' Information Literacy Skills

    Science.gov (United States)

    Simard, Stéphanie; Karsenti, Thierry

    2016-01-01

    This study aims to understand how preservice programs prepare future teachers to use ICT to develop students' information literacy skills. A survey was conducted from January 2014 through May 2014 with 413 future teachers in four French Canadian universities. In the spring of 2015, qualitative data were also collected from 48 students in their…

  13. MedEx: a medication information extraction system for clinical narratives

    Science.gov (United States)

    Stenner, Shane P; Doan, Son; Johnson, Kevin B; Waitman, Lemuel R; Denny, Joshua C

    2010-01-01

    Medication information is one of the most important types of clinical data in electronic medical records. It is critical for healthcare safety and quality, as well as for clinical research that uses electronic medical record data. However, medication data are often recorded in clinical notes as free-text. As such, they are not accessible to other computerized applications that rely on coded data. We describe a new natural language processing system (MedEx), which extracts medication information from clinical notes. MedEx was initially developed using discharge summaries. An evaluation using a data set of 50 discharge summaries showed it performed well on identifying not only drug names (F-measure 93.2%), but also signature information, such as strength, route, and frequency, with F-measures of 94.5%, 93.9%, and 96.0% respectively. We then applied MedEx unchanged to outpatient clinic visit notes. It performed similarly with F-measures over 90% on a set of 25 clinic visit notes. PMID:20064797
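
    For reference, the F-measure quoted throughout this record is the harmonic mean of precision and recall; a minimal computation (the counts below are illustrative, not MedEx's actual confusion matrix):

```python
# Minimal F-measure computation as used to score extraction systems:
# the harmonic mean of precision and recall.
def f_measure(true_positives: int, false_positives: int,
              false_negatives: int) -> float:
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Illustrative counts only: these happen to yield roughly 0.932 (93.2%).
print(f"F-measure: {f_measure(932, 40, 96):.3f}")
```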

  14. Videomicroscopic extraction of specific information on cell proliferation and migration in vitro

    International Nuclear Information System (INIS)

    Debeir, Olivier; Megalizzi, Veronique; Warzee, Nadine; Kiss, Robert; Decaestecker, Christine

    2008-01-01

    In vitro cell imaging is a useful exploratory tool for cell behavior monitoring with a wide range of applications in cell biology and pharmacology. Combined with appropriate image analysis techniques, this approach has been shown to provide useful information on the detection and dynamic analysis of cell events. In this context, numerous efforts have been focused on cell migration analysis. In contrast, the cell division process has been the subject of fewer investigations. The present work focuses on this latter aspect and shows that, in complement to cell migration data, interesting information related to cell division can be extracted from phase-contrast time-lapse image series, in particular cell division duration, which is not provided by standard cell assays using endpoint analyses. We illustrate our approach by analyzing the effects induced by two sigma-1 receptor ligands (haloperidol and 4-IBP) on the behavior of two glioma cell lines using two in vitro cell models, i.e., the low-density individual cell model and the high-density scratch wound model. This illustration also shows that the data provided by our approach are suggestive as to the mechanism of action of compounds, and are thus capable of informing the appropriate selection of further time-consuming and more expensive biological evaluations required to elucidate a mechanism

  15. 5W1H Information Extraction with CNN-Bidirectional LSTM

    Science.gov (United States)

    Nurdin, A.; Maulidevi, N. U.

    2018-03-01

    In this work, information about who did what, when, where, why, and how in Indonesian news articles was extracted by combining a Convolutional Neural Network and a Bidirectional Long Short-Term Memory network. A Convolutional Neural Network can learn semantically meaningful representations of sentences. A Bidirectional LSTM can analyze the relations among words in the sequence. We also use word2vec word embeddings for word representation. By combining these algorithms, we obtained an F-measure of 0.808. Our experiments show that CNN-BLSTM outperforms other shallow methods, namely IBk, C4.5, and Naïve Bayes, with F-measures of 0.655, 0.645, and 0.595, respectively.
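
    A generic CNN plus bidirectional LSTM tagger of the kind combined here can be sketched in Keras; the vocabulary size, layer widths, and label set are placeholders rather than the authors' configuration.

```python
# Generic CNN + bidirectional LSTM sequence tagger in the spirit of this
# work; sizes and the label count are placeholders, not the paper's setup.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 100        # assumed sentence length in tokens
NUM_LABELS = 7       # who/what/when/where/why/how + outside (assumed)

model = tf.keras.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),                         # word2vec-style embeddings
    layers.Conv1D(64, 3, padding="same", activation="relu"),   # local n-gram features
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),  # context in both directions
    layers.TimeDistributed(layers.Dense(NUM_LABELS, activation="softmax")),  # per-token label
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```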

  16. Metaproteomics: extracting and mining proteome information to characterize metabolic activities in microbial communities.

    Science.gov (United States)

    Abraham, Paul E; Giannone, Richard J; Xiong, Weili; Hettich, Robert L

    2014-06-17

    Contemporary microbial ecology studies usually employ one or more "omics" approaches to investigate the structure and function of microbial communities. Among these, metaproteomics aims to characterize the metabolic activities of the microbial membership, providing a direct link between the genetic potential and functional metabolism. The successful deployment of metaproteomics research depends on the integration of high-quality experimental and bioinformatic techniques for uncovering the metabolic activities of a microbial community in a way that is complementary to other "meta-omic" approaches. The essential, quality-defining informatics steps in metaproteomics investigations are: (1) construction of the metagenome, (2) functional annotation of predicted protein-coding genes, (3) protein database searching, (4) protein inference, and (5) extraction of metabolic information. In this article, we provide an overview of current bioinformatic approaches and software implementations in metaproteome studies in order to highlight the key considerations needed for successful implementation of this powerful community-biology tool. Copyright © 2014 John Wiley & Sons, Inc.

  17. Developing a Process Model for the Forensic Extraction of Information from Desktop Search Applications

    Directory of Open Access Journals (Sweden)

    Timothy Pavlic

    2008-03-01

    Full Text Available Desktop search applications can contain cached copies of files that were deleted from the file system. Forensic investigators see this as a potential source of evidence, as documents deleted by suspects may still exist in the cache. Whilst there have been attempts at recovering data collected by desktop search applications, there is no methodology governing the process, nor discussion on the most appropriate means to do so. This article seeks to address this issue by developing a process model that can be applied when developing an information extraction application for desktop search applications, discussing preferred methods and the limitations of each. This work represents a more structured approach than other forms of current research.

  18. An innovative method for extracting isotopic information from low-resolution gamma spectra

    International Nuclear Information System (INIS)

    Miko, D.; Estep, R.J.; Rawool-Sullivan, M.W.

    1998-01-01

    A method is described for the extraction of isotopic information from attenuated gamma ray spectra using the gross-count material basis set (GC-MBS) model. This method solves for the isotopic composition of an unknown mixture of isotopes attenuated through an absorber of unknown material. For binary isotopic combinations the problem is nonlinear in only one variable and is easily solved using standard line optimization techniques. Results are presented for NaI spectrum analyses of various binary combinations of enriched uranium, depleted uranium, low burnup Pu, 137Cs, and 133Ba attenuated through a suite of absorbers ranging in Z from polyethylene through lead. The GC-MBS method results are compared to those computed using ordinary response function fitting and with a simple net peak area method. The GC-MBS method was found to be significantly more accurate than the other methods over the range of absorbers and isotopic blends studied
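
    For a binary blend the fit reduces to one unknown (the fraction of the first isotope), so a bounded scalar optimizer suffices; the sketch below uses synthetic stand-in basis spectra, not the GC-MBS response model itself.

```python
# One-variable fit for a binary isotopic blend: find the fraction f of
# isotope A (0 <= f <= 1) minimizing the misfit between the measured
# spectrum and a two-component prediction. Basis spectra are synthetic.
import numpy as np
from scipy.optimize import minimize_scalar

channels = np.arange(128)
basis_a = np.exp(-0.5 * ((channels - 40) / 6.0) ** 2)   # stand-in for isotope A
basis_b = np.exp(-0.5 * ((channels - 85) / 9.0) ** 2)   # stand-in for isotope B
measured = 0.7 * basis_a + 0.3 * basis_b                # pretend measurement

def misfit(f: float) -> float:
    predicted = f * basis_a + (1.0 - f) * basis_b
    return float(np.sum((measured - predicted) ** 2))

result = minimize_scalar(misfit, bounds=(0.0, 1.0), method="bounded")
print(f"estimated fraction of isotope A: {result.x:.3f}")  # ~0.700
```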

  19. EnvMine: A text-mining system for the automatic extraction of contextual information

    Directory of Open Access Journals (Sweden)

    de Lorenzo Victor

    2010-06-01

    Full Text Available Abstract Background For ecological studies, it is crucial to count on adequate descriptions of the environments and samples being studied. Such a description must be made in terms of their physicochemical characteristics, allowing a direct comparison between different environments that would otherwise be difficult. The characterization must also include the precise geographical location, to make possible the study of geographical distributions and biogeographical patterns. Currently, there is no schema for annotating these environmental features, and these data have to be extracted from textual sources (published articles). So far, this had to be performed by manual inspection of the corresponding documents. To facilitate this task, we have developed EnvMine, a set of text-mining tools devoted to retrieving contextual information (physicochemical variables and geographical locations) from textual sources of any kind. Results EnvMine is capable of retrieving the physicochemical variables cited in the text by means of the accurate identification of their associated units of measurement. In this task, the system achieves a recall (percentage of items retrieved) of 92% with less than 1% error. A Bayesian classifier was also tested for distinguishing parts of the text describing environmental characteristics from others dealing with, for instance, experimental settings. Regarding the identification of geographical locations, the system takes advantage of existing databases such as GeoNames to achieve 86% recall with 92% precision. The identification of a location also includes the determination of its exact coordinates (latitude and longitude), thus allowing the calculation of distances between the individual locations. Conclusion EnvMine is a very efficient method for extracting contextual information from different text sources, like published articles or web pages. This tool can help in determining the precise location and physicochemical
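
    The unit-anchored retrieval of physicochemical variables can be approximated with a regular expression that captures a number followed by a known unit of measurement; the unit list below is a tiny illustrative subset, not EnvMine's actual lexicon.

```python
# Sketch of unit-anchored extraction of physicochemical variables from
# free text, in the spirit of EnvMine; the unit list is a tiny subset.
import re

UNITS = r"(?:°C|mg/L|g/L|µM|mM|ppm|m)"  # "m" last so "mg/L" matches first
PATTERN = re.compile(rf"(\d+(?:\.\d+)?)\s*({UNITS})\b")

text = ("Samples were taken at 35.2 °C and a salinity of 12 g/L, "
        "at a depth of 450 m.")
for value, unit in PATTERN.findall(text):
    print(f"value={value} unit={unit}")
```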

  20. A quantitative approach to measure road network information based on edge diversity

    Science.gov (United States)

    Wu, Xun; Zhang, Hong; Lan, Tian; Cao, Weiwei; He, Jing

    2015-12-01

    The measurement of map information has been one of the key issues in assessing cartographic quality and map generalization algorithms. It is also important for developing efficient approaches to transferring geospatial information. The road network is the most common linear object in the real world. An approximate description of road network information will benefit road map generalization, navigation map production and urban planning. Most current approaches focus on node diversities and suppose that all edges are the same, which is inconsistent with real-life conditions and thus shows limitations in measuring network information. As real-life traffic flows are directed and of different quantities, the original undirected vector road map is first converted to a directed topographic connectivity map. Then, in consideration of preferential attachment in complex network studies and the rich-club phenomenon in social networks, from and to weights are assigned to each edge. The from weight of a given edge is defined as the ratio of the connectivity of its end node to the sum of the connectivities of all the neighbors of the edge's from node. After obtaining the from and to weights of each edge, the edge information, node information and whole-network structure information entropies can be computed based on information theory. The approach has been applied to several one-square-mile road network samples. Results show that information entropies based on edge diversities can successfully describe the structural differences of road networks. This approach is complementary to current map information measurements, and can be extended to measure other kinds of geographical objects.
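
    The final entropy step of this approach can be sketched directly: normalize the edge weights into a probability distribution and apply Shannon's formula. The weights below are illustrative.

```python
# Sketch of the final entropy step: treat normalized edge weights as a
# probability distribution and compute the Shannon information entropy.
import math

def entropy(weights: list[float]) -> float:
    """Shannon entropy H = -sum(p * log2 p) of normalized weights, in bits."""
    total = sum(weights)
    return -sum((w / total) * math.log2(w / total)
                for w in weights if w > 0)

edge_weights = [0.40, 0.25, 0.20, 0.10, 0.05]  # illustrative from-weights
print(f"network structure entropy: {entropy(edge_weights):.3f} bits")
```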

  1. Quantitative feature extraction from the Chinese hamster ovary bioprocess bibliome using a novel meta-analysis workflow

    DEFF Research Database (Denmark)

    Golabgir, Aydin; Gutierrez, Jahir M.; Hefzi, Hooman

    2016-01-01

    compilation covers all published CHO cell studies from 1995 to 2015, and each study is classified by the types of phenotypic and bioprocess data contained therein. Using data from selected studies, we also present a quantitative meta-analysis of bioprocess characteristics across diverse culture conditions...... practices can limit research re-use in this field, we show that the statistical analysis of diverse legacy bioprocess data can provide insight into bioprocessing capabilities of CHO cell lines used in industry. The CHO bibliome can be accessed at http://lewislab.ucsd.edu/cho-bibliome/....

  2. Comparing success levels of different neural network structures in extracting discriminative information from the response patterns of a temperature-modulated resistive gas sensor

    Science.gov (United States)

    Hosseini-Golgoo, S. M.; Bozorgi, H.; Saberkari, A.

    2015-06-01

    Performances of three neural networks, consisting of a multi-layer perceptron, a radial basis function, and a neuro-fuzzy network with local linear model tree training algorithm, in modeling and extracting discriminative features from the response patterns of a temperature-modulated resistive gas sensor are quantitatively compared. For response pattern recording, a voltage staircase containing five steps each with a 20 s plateau is applied to the micro-heater of the sensor, when 12 different target gases, each at 11 concentration levels, are present. In each test, the hidden layer neuron weights are taken as the discriminatory feature vector of the target gas. These vectors are then mapped to a 3D feature space using linear discriminant analysis. The discriminative information content of the feature vectors are determined by the calculation of the Fisher’s discriminant ratio, affording quantitative comparison among the success rates achieved by the different neural network structures. The results demonstrate a superior discrimination ratio for features extracted from local linear neuro-fuzzy and radial-basis-function networks with recognition rates of 96.27% and 90.74%, respectively.

  3. Comparing success levels of different neural network structures in extracting discriminative information from the response patterns of a temperature-modulated resistive gas sensor

    International Nuclear Information System (INIS)

    Hosseini-Golgoo, S M; Bozorgi, H; Saberkari, A

    2015-01-01

    Performances of three neural networks, consisting of a multi-layer perceptron, a radial basis function, and a neuro-fuzzy network with local linear model tree training algorithm, in modeling and extracting discriminative features from the response patterns of a temperature-modulated resistive gas sensor are quantitatively compared. For response pattern recording, a voltage staircase containing five steps each with a 20 s plateau is applied to the micro-heater of the sensor, when 12 different target gases, each at 11 concentration levels, are present. In each test, the hidden layer neuron weights are taken as the discriminatory feature vector of the target gas. These vectors are then mapped to a 3D feature space using linear discriminant analysis. The discriminative information content of the feature vectors are determined by the calculation of the Fisher’s discriminant ratio, affording quantitative comparison among the success rates achieved by the different neural network structures. The results demonstrate a superior discrimination ratio for features extracted from local linear neuro-fuzzy and radial-basis-function networks with recognition rates of 96.27% and 90.74%, respectively. (paper)
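
    The evaluation pipeline these two records describe (project the feature vectors to a 3D space with linear discriminant analysis, then score separability with Fisher's discriminant ratio) can be sketched with scikit-learn; the data below are random placeholders for the sensor-derived features.

```python
# Sketch of the evaluation pipeline: LDA projection to 3-D followed by a
# simple Fisher discriminant ratio (between- over within-class variance).
# Data are random placeholders for the network-derived feature vectors.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_gases, n_samples, n_features = 12, 11, 40
X = rng.normal(size=(n_gases * n_samples, n_features))
X += np.repeat(rng.normal(scale=3.0, size=(n_gases, n_features)),
               n_samples, axis=0)                # class-dependent offsets
y = np.repeat(np.arange(n_gases), n_samples)

Z = LinearDiscriminantAnalysis(n_components=3).fit_transform(X, y)

def fisher_ratio(Z, y):
    """Unweighted between-class over within-class scatter (equal class sizes)."""
    grand = Z.mean(axis=0)
    means = np.array([Z[y == c].mean(axis=0) for c in np.unique(y)])
    between = ((means - grand) ** 2).sum()
    within = sum(((Z[y == c] - m) ** 2).sum()
                 for c, m in zip(np.unique(y), means))
    return between / within

print(f"Fisher discriminant ratio in 3-D space: {fisher_ratio(Z, y):.2f}")
```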

  4. A quantitative assessment of changing trends in internet usage for cancer information.

    LENUS (Irish Health Repository)

    McHugh, Seamus M

    2012-02-01

    BACKGROUND: The internet is an important source of healthcare information. To date, assessment of its use as a source of oncologic information has been restricted to retrospective surveys. METHODS: The cancer-related searches of approximately 361,916,185 people in the United States and the United Kingdom were examined. Data were collected from two separate 100-day periods in 2008 and 2010. RESULTS: In 2008, there were 97,531 searches. The majority of searches related to basic cancer information (18,700, 19%), followed by treatment (8404, 9%) and diagnosis (6460, 7%). This compares with 179,025 searches in 2010, an increase to 183% of the 2008 figure. In 2008 breast cancer accounted for 21,102 (21%) individual searches, increasing to 85,825 searches in 2010. In 2010 a total of 0.2% (321) of searches focused on litigation, with those searching for breast cancer information most likely to research this topic (P=0.000). CONCLUSION: Use of the internet as a source of oncological information is increasing rapidly. These searches represent the most sensitive information relating to cancer, including prognosis and litigation. It is imperative now that efforts are made to ensure the reliability and comprehensiveness of this information.

  5. Extraction of prospecting information of uranium deposit based on high spatial resolution satellite data. Taking bashibulake region as an example

    International Nuclear Information System (INIS)

    Yang Xu; Liu Dechang; Zhang Jielin

    2008-01-01

    In this study, the significance and content of prospecting information for uranium deposits are expounded. QuickBird high spatial resolution satellite data are used to extract the prospecting information for uranium deposits in the Bashibulake area in the north of the Tarim Basin. By using pertinent image processing methods, information on the ore-bearing bed, ore-controlling structures and mineralized alteration has been extracted. The results show high consistency with the field survey. The aim of this study is to explore the practicability of high spatial resolution satellite data for mineral prospecting, and to broaden thinking about prospecting in similar areas. (authors)

  6. Providing Quantitative Information and a Nudge to Undergo Stool Testing in a Colorectal Cancer Screening Decision Aid: A Randomized Clinical Trial.

    Science.gov (United States)

    Schwartz, Peter H; Perkins, Susan M; Schmidt, Karen K; Muriello, Paul F; Althouse, Sandra; Rawl, Susan M

    2017-08-01

    Guidelines recommend that patient decision aids should provide quantitative information about probabilities of potential outcomes, but the impact of this information is unknown. Behavioral economics suggests that patients confused by quantitative information could benefit from a "nudge" towards one option. We conducted a pilot randomized trial to estimate the effect sizes of presenting quantitative information and a nudge. Primary care patients (n = 213) eligible for colorectal cancer screening viewed basic screening information and were randomized to view (a) quantitative information (quantitative module), (b) a nudge towards stool testing with the fecal immunochemical test (FIT) (nudge module), (c) neither a nor b, or (d) both a and b. Outcome measures were perceived colorectal cancer risk, screening intent, preferred test, and decision conflict, measured before and after viewing the decision aid, and screening behavior at 6 months. Patients viewing the quantitative module were more likely to be screened than those who did not (P = 0.012). Patients viewing the nudge module had a greater increase in perceived colorectal cancer risk than those who did not (P = 0.041). Those viewing the quantitative module had a smaller increase in perceived risk than those who did not (P = 0.046), and the effect was moderated by numeracy. Among patients with high numeracy who did not view the nudge module, those who viewed the quantitative module had a greater increase in intent to undergo FIT (P = 0.028) than did those who did not. The limitations of this study were the limited sample size and single healthcare system. Adding quantitative information to a decision aid increased uptake of colorectal cancer screening, while adding a nudge to undergo FIT did not increase uptake. Further research on quantitative information in decision aids is warranted.

  7. Modelling the Kampungkota: A quantitative approach in defining Indonesian informal settlements

    Science.gov (United States)

    Anindito, D. B.; Maula, F. K.; Akbar, R.

    2018-02-01

    Bandung City is home to 2.5 million inhabitants, some of whom live in slums and squatter settlements. However, these terms are not adequate to describe the Indonesian housing type known as kampungkota. Several studies have suggested, qualitatively, various variables constituting kampungkota. This study seeks to define kampungkota in a quantitative manner, using the characteristics of slums and squatter settlements. The samples for this study are 151 villages (kelurahan) in Bandung City. Ordinary Least Squares, Geographically Weighted Regression, and Spatial Cluster and Outlier Analysis are employed. The results suggest that the variables distinguishing kampungkota may differ according to location. As a kampungkota may be smaller than the administrative area of a kelurahan, it can also develop beyond the jurisdiction of a kelurahan, as indicated by the clustering pattern of kampungkota.

  8. In vitro studies reveal antiurolithic effect of Terminalia arjuna using quantitative morphological information from computerized microscopy

    Directory of Open Access Journals (Sweden)

    A. Mittal

    2015-10-01

    Full Text Available ABSTRACT Purpose: In most cases, urolithiasis is a condition in which excessive oxalate is present in the urine. Many reports have documented free radical generation followed by hyperoxaluria, as a consequence of which calcium oxalate (CaOx) deposition occurs in the kidney tissue. The present study aims to examine the antilithiatic potency of the aqueous extract (AE) of Terminalia arjuna (T. arjuna). Materials and Methods: The antilithiatic activity of Terminalia arjuna was investigated through in vitro nucleation, aggregation and growth assays of CaOx crystals, as well as by assessing the morphology of CaOx crystals using the built-in software ‘Image-Pro Plus 7.0’ of an Olympus upright microscope (BX53). The antioxidant activity of the AE of Terminalia arjuna bark was also determined in vitro. Results: Terminalia arjuna extract exhibited a concentration-dependent inhibition of nucleation and aggregation of CaOx crystals. The AE of Terminalia arjuna bark also inhibited the growth of CaOx crystals. At the same time, the AE modified the morphology of CaOx crystals from hexagonal to spherical with increasing concentrations of AE, and reduced dimensions such as the area, perimeter, length and width of CaOx crystals in a dose-dependent manner. The Terminalia arjuna AE also scavenged DPPH (2,2-diphenyl-1-picrylhydrazyl) radicals with an IC50 of 13.1 µg/mL. Conclusions: The study suggests that Terminalia arjuna bark has the potential to scavenge DPPH radicals and inhibit CaOx crystallization in vitro. In light of these studies, Terminalia arjuna can be regarded as a promising natural plant source of antilithiatic and antioxidant activity with high value.

  9. Extracting chemical information from high-resolution Kβ X-ray emission spectroscopy

    Science.gov (United States)

    Limandri, S.; Robledo, J.; Tirao, G.

    2018-06-01

    High-resolution X-ray emission spectroscopy allows studying the chemical environment of a wide variety of materials. Chemical information can be obtained by fitting the X-ray spectra and observing the behavior of certain spectral features. Spectral changes can also be quantified by means of statistical parameters calculated by considering the spectrum as a probability distribution. Another possibility is to perform statistical multivariate analysis, such as principal component analysis. In this work the performance of these procedures for extracting chemical information from Kβ X-ray emission spectra of mixtures of Mn2+ and Mn4+ oxides is studied. A detailed analysis of the parameters obtained, as well as of the associated uncertainties, is presented. The methodologies are also applied to Mn oxidation state characterization of the double perovskite oxides Ba1+xLa1-xMnSbO6 (with 0 ≤ x ≤ 0.7). The results show that statistical parameters and multivariate analysis are the most suitable for the analysis of this kind of spectra.
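
    The multivariate branch of this analysis can be sketched with scikit-learn's PCA applied to a matrix of emission spectra (rows are spectra, columns are energy channels); the synthetic spectra below mix two Gaussian stand-ins for the Mn2+ and Mn4+ profiles.

```python
# Sketch of PCA applied to a set of emission spectra (rows = spectra,
# columns = energy channels). The synthetic spectra mix two Gaussian
# "oxidation-state" profiles in varying ratios.
import numpy as np
from sklearn.decomposition import PCA

energy = np.linspace(6480.0, 6500.0, 200)            # eV, illustrative Kβ range

def profile(center, width=2.0):
    return np.exp(-0.5 * ((energy - center) / width) ** 2)

mn2, mn4 = profile(6490.5), profile(6488.5)          # stand-in Mn2+/Mn4+ shapes
fractions = np.linspace(0.0, 1.0, 25)
spectra = np.array([f * mn2 + (1 - f) * mn4 for f in fractions])
spectra += np.random.default_rng(1).normal(scale=0.01, size=spectra.shape)

pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)
print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 3))
print("first PC score tracks the mixing fraction:", np.round(scores[:5, 0], 3))
```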

  10. Information Extraction of Tourist Geological Resources Based on 3D Visualization Remote Sensing Image

    Science.gov (United States)

    Wang, X.

    2018-04-01

    Tourism geological resources are of high value for aesthetic appreciation, scientific research and public education, and they need to be protected and rationally utilized. In the past, most remote sensing investigations of tourism geological resources used two-dimensional remote sensing interpretation methods, which made some geological heritages difficult to interpret and led to the omission of some information. The aim of this paper is to assess the value of a method that uses three-dimensional visual remote sensing imagery to extract information on geological heritages. The Skyline software system is applied to fuse the 0.36 m aerial images and a 5 m interval DEM to establish a digital earth model. Based on three-dimensional shape, color tone, shadow, texture and other image features, the distribution of tourism geological resources in Shandong Province and the locations of geological heritage sites were obtained, such as geological structures, DaiGu landforms, granite landforms, volcanic landforms, sandy landforms, waterscapes, etc. The results show that the features interpreted with this method are highly recognizable, making the interpretation more accurate and comprehensive.

  11. Measuring nuclear reaction cross sections to extract information on neutrinoless double beta decay

    Science.gov (United States)

    Cavallaro, M.; Cappuzzello, F.; Agodi, C.; Acosta, L.; Auerbach, N.; Bellone, J.; Bijker, R.; Bonanno, D.; Bongiovanni, D.; Borello-Lewin, T.; Boztosun, I.; Branchina, V.; Bussa, M. P.; Calabrese, S.; Calabretta, L.; Calanna, A.; Calvo, D.; Carbone, D.; Chávez Lomelí, E. R.; Coban, A.; Colonna, M.; D'Agostino, G.; De Geronimo, G.; Delaunay, F.; Deshmukh, N.; de Faria, P. N.; Ferraresi, C.; Ferreira, J. L.; Finocchiaro, P.; Fisichella, M.; Foti, A.; Gallo, G.; Garcia, U.; Giraudo, G.; Greco, V.; Hacisalihoglu, A.; Kotila, J.; Iazzi, F.; Introzzi, R.; Lanzalone, G.; Lavagno, A.; La Via, F.; Lay, J. A.; Lenske, H.; Linares, R.; Litrico, G.; Longhitano, F.; Lo Presti, D.; Lubian, J.; Medina, N.; Mendes, D. R.; Muoio, A.; Oliveira, J. R. B.; Pakou, A.; Pandola, L.; Petrascu, H.; Pinna, F.; Reito, S.; Rifuggiato, D.; Rodrigues, M. R. D.; Russo, A. D.; Russo, G.; Santagati, G.; Santopinto, E.; Sgouros, O.; Solakci, S. O.; Souliotis, G.; Soukeras, V.; Spatafora, A.; Torresi, D.; Tudisco, S.; Vsevolodovna, R. I. M.; Wheadon, R. J.; Yildirin, A.; Zagatto, V. A. B.

    2018-02-01

    Neutrinoless double beta decay (0vββ) is considered the best potential resource to access the absolute neutrino mass scale. Moreover, if observed, it will signal that neutrinos are their own anti-particles (Majorana particles). Presently, this physics case is one of the most important research topics “beyond the Standard Model” and might guide the way towards a Grand Unified Theory of fundamental interactions. Since the 0vββ decay process involves nuclei, its analysis necessarily implies nuclear structure issues. In the NURE project, supported by a Starting Grant of the European Research Council (ERC), nuclear reactions of double charge-exchange (DCE) are used as a tool to extract information on the 0vββ Nuclear Matrix Elements. Indeed, in DCE reactions and ββ decay the initial and final nuclear states are the same and the transition operators have a similar structure. Thus the measurement of DCE absolute cross-sections can give crucial information on ββ matrix elements. In a wider view, the NUMEN international collaboration plans a major upgrade of the INFN-LNS facilities in the next years in order to increase the experimental production of nuclei by at least two orders of magnitude, thus making feasible a systematic study of all the cases of interest as candidates for 0vββ.

  12. Unsupervised Symbolization of Signal Time Series for Extraction of the Embedded Information

    Directory of Open Access Journals (Sweden)

    Yue Li

    2017-03-01

    Full Text Available This paper formulates an unsupervised algorithm for the symbolization of signal time series to capture the embedded dynamic behavior. The key idea is to convert a time series of the digital signal into a string of (spatially discrete) symbols from which the embedded dynamic information can be extracted in an unsupervised manner (i.e., with no requirement for labeling of the time series). The main challenges here are: (1) definition of the symbol assignment for the time series; (2) identification of the partitioning segment locations in the signal space of the time series; and (3) construction of probabilistic finite-state automata (PFSA) from the symbol strings that contain temporal patterns. The reported work addresses these challenges by maximizing the mutual information measures between symbol strings and PFSA states. The proposed symbolization method has been validated by numerical simulation as well as by experimentation in a laboratory environment. Performance of the proposed algorithm has been compared to that of two commonly used algorithms of time series partitioning.
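
    The partitioning challenge can be sketched with a quantile (maximum-entropy) partition of the signal range, followed by counting symbol-to-symbol transitions as a crude PFSA estimate; this is a simplified stand-in for the mutual-information maximization the paper actually performs.

```python
# Sketch of unsupervised symbolization: partition the signal range at
# quantiles (a maximum-entropy partition), map samples to symbols, then
# count symbol-to-symbol transitions as a crude PFSA estimate.
import numpy as np

def symbolize(signal: np.ndarray, alphabet_size: int) -> np.ndarray:
    """Assign each sample a symbol 0..alphabet_size-1 by quantile bin."""
    edges = np.quantile(signal, np.linspace(0, 1, alphabet_size + 1)[1:-1])
    return np.digitize(signal, edges)

def transition_matrix(symbols: np.ndarray, alphabet_size: int) -> np.ndarray:
    counts = np.zeros((alphabet_size, alphabet_size))
    for a, b in zip(symbols[:-1], symbols[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)  # row-stochastic

t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t) + 0.1 * np.random.default_rng(2).normal(size=t.size)
symbols = symbolize(signal, alphabet_size=4)
print(np.round(transition_matrix(symbols, 4), 2))
```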

  13. QUANTITATIVE ANALYSIS OF DIURON AND ITS MAJOR METABOLITES IN SURFACE AND GROUND WATER BY SOLID PHASE EXTRACTION (R821195)

    Science.gov (United States)

    The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...

  14. Dual-wavelength phase-shifting digital holography selectively extracting wavelength information from wavelength-multiplexed holograms.

    Science.gov (United States)

    Tahara, Tatsuki; Mori, Ryota; Kikunaga, Shuhei; Arai, Yasuhiko; Takaki, Yasuhiro

    2015-06-15

    Dual-wavelength phase-shifting digital holography that selectively extracts wavelength information from five wavelength-multiplexed holograms is presented. Specific phase shifts for respective wavelengths are introduced to remove the crosstalk components and extract only the object wave at the desired wavelength from the holograms. Object waves in multiple wavelengths are selectively extracted by utilizing 2π ambiguity and the subtraction procedures based on phase-shifting interferometry. Numerical results show the validity of the proposed technique. The proposed technique is also experimentally demonstrated.

  15. Composition of Essential Oils and Ethanol Extracts of the Leaves of Lippia Species: Identification, Quantitation and Antioxidant Capacity

    Directory of Open Access Journals (Sweden)

    Maria T.S. Trevisan

    2016-01-01

    Full Text Available The principal components of essential oils obtained by steam hydrodistillation from the fresh leaves of five species of the genus Lippia, namely Lippia gracilis AV, Lippia sidoides Mart., Lippia alba carvoneifera, Lippia alba citralifera and Lippia alba myrceneifera, and of the corresponding ethanol extracts, were evaluated. The strongest antioxidant capacity was found at IC50 = 980 µg/mL (p < 0.05), while the weakest exceeded 2000 µg/mL. Major compounds in the oils were thymol (Lippia sidoides Mart., 13.1 g/kg), carvone (Lippia alba carvoneifera, 9.6 g/kg), geranial (Lippia alba citralifera, 4.61 g/kg; Lippia alba myrceneifera, 1.6 g/kg), neral (Lippia alba citralifera, 3.4 g/kg; Lippia alba myrceneifera, 1.16 g/kg), carvacrol (Lippia gracilis AV, 5.36 g/kg) and limonene (Lippia alba carvoneifera, 5.14 g/kg). Of these, carvone (IC50 = 330 µM) along with thymol (1.88 units) and carvacrol (1.74 units) were highly active in the hypoxanthine/xanthine oxidase and ORAC assays, respectively. The following compounds were identified in the ethanol extracts: I: calceolarioside E; II: acteoside; III: isoacteoside; IV: luteolin; V: 5,7,3´,4´-tetrahydroxy-3,6-dimethoxyflavone (spinacetin); VI: naringenin; VII: apigenin; VIII: 6-methoxyapigenin (hispidulin); IX: 5,7,3´-trihydroxy-3,6,4´-trimethoxyflavone; X: 5,7,4´-trihydroxy-3,6-dimethoxyflavone; XI: naringenin-4´-methyl ether; XII: 5,7-dihydroxy-3,6,4´-trimethoxyflavone (santin); XIII: 5,7-dihydroxy-6,4´-dimethoxyflavone (pectolinaringenin); and XIV: 5-hydroxy-3,7,4´-trimethoxyflavone. Of the leaf ethanol extracts, strong antioxidant capacity was only evident in that of Lippia alba carvoneifera (IC50 = 1.23 mg/mL). Overall, the data indicate that the use of the leaves of Lippia species in food preparations should be beneficial to health.

  16. Information Extraction and Dependency on Open Government Data (ogd) for Environmental Monitoring

    Science.gov (United States)

    Abdulmuttalib, Hussein

    2016-06-01

    Environmental monitoring practices support decision makers in government and private institutions, as well as environmentalists and planners, among others. This support helps them act towards the sustainability of our environment and take efficient measures to protect human beings, but it is difficult to extract useful information from 'OGD' and to assure its quality for this purpose. Monitoring itself comprises detecting changes as they happen, or within the mitigation period range, which means that any data source used for monitoring should reflect the information related to the period of environmental monitoring; otherwise it is of little use beyond historical record. In this paper, the extraction and structuring of information from Open Government Data 'OGD' that can be useful for environmental monitoring is assessed, looking into availability, usefulness for environmental monitoring of a certain type, repetition period and dependencies. The assessment is performed on a small sample selected from OGD, bearing in mind the type of environmental change monitored, such as the increase and concentration of built-up areas and the reduction of green areas, or the change of temperature in a specific area. The World Bank mentioned in its blog that data is open if it satisfies both conditions of being technically open and legally open. The use of Open Data is thus regulated by published terms of use, or an agreement that implies some conditions without violating the two conditions above. Within the scope of the paper I wish to share the experience of using some OGD to support an environmental monitoring work performed to mitigate the production of carbon dioxide, by regulating energy consumption and by properly designing the test area's landscapes using Geodesign tactics, and to add to the results achieved by many

  17. Quantitative Analysis of First-Pass Contrast-Enhanced Myocardial Perfusion Multidetector CT Using a Patlak Plot Method and Extraction Fraction Correction During Adenosine Stress

    Science.gov (United States)

    Ichihara, Takashi; George, Richard T.; Silva, Caterina; Lima, Joao A. C.; Lardo, Albert C.

    2011-02-01

    The purpose of this study was to develop a quantitative method for myocardial blood flow (MBF) measurement that can be used to derive accurate myocardial perfusion measurements from dynamic multidetector computed tomography (MDCT) images by using a compartment model for calculating the first-order transfer constant (K1) with correction for the capillary transit extraction fraction (E). Six canine models of left anterior descending (LAD) artery stenosis were prepared and underwent first-pass contrast-enhanced MDCT perfusion imaging during adenosine infusion (0.14-0.21 mg/kg/min). K1, which is the first-order transfer constant from left ventricular (LV) blood to myocardium, was measured using the Patlak plot method applied to time-attenuation curve data of the LV blood pool and myocardium. The results were compared against microsphere MBF measurements, and the extraction fraction of the contrast agent was calculated. K1 is related to the regional MBF as K1 = E·F, with E = 1 − exp(−PS/F), where PS is the permeability-surface area product and F is myocardial flow. Based on this relationship, a look-up table from K1 to MBF can be generated, and Patlak plot-derived K1 values can be converted to calculated MBF. The calculated MBF and microsphere MBF showed a strong linear association. The extraction fraction in dogs as a function of flow (F) was E = 1 − exp(−(0.2532·F + 0.7871)/F). Regional MBF can be measured accurately using the Patlak plot method based on a compartment model and a look-up table with extraction fraction correction from K1 to MBF.
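
    The conversion step described above is straightforward to reproduce: build the forward table K1 = E(F)·F from the reported dog extraction fraction and invert it by interpolation. A minimal sketch (units assumed to be mL/min/g; the flow range is illustrative):

    ```python
    import numpy as np

    def k1_of_flow(F):
        """Forward model from the abstract: K1 = E(F)*F with the dog-specific
        extraction fraction E(F) = 1 - exp(-(0.2532*F + 0.7871)/F)."""
        return F * (1.0 - np.exp(-(0.2532 * F + 0.7871) / F))

    # Look-up table over an assumed physiological flow range (mL/min/g),
    # inverted by interpolation: Patlak-derived K1 -> extraction-corrected MBF.
    F_grid = np.linspace(0.1, 6.0, 2000)
    K1_grid = k1_of_flow(F_grid)            # monotonically increasing in F

    def mbf_from_k1(k1):
        return np.interp(k1, K1_grid, F_grid)

    print(mbf_from_k1(0.8))                 # corrected MBF for K1 = 0.8
    ```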

  18. Quantitative Analysis of Non-Financial Motivators and Job Satisfaction of Information Technology Professionals

    Science.gov (United States)

    Mieszczak, Gina L.

    2013-01-01

    Organizations depend extensively on Information Technology professionals to drive and deliver technology solutions quickly, efficiently, and effectively to achieve business goals and profitability. It has been demonstrated that professionals with experience specific to the company are valuable assets, and their departure puts technology projects…

  19. Success Rates by Software Development Methodology in Information Technology Project Management: A Quantitative Analysis

    Science.gov (United States)

    Wright, Gerald P.

    2013-01-01

    Despite over half a century of Project Management research, project success rates are still too low. Organizations spend a tremendous amount of valuable resources on Information Technology projects and seek to maximize the utility gained from their efforts. The author investigated the impact of software development methodology choice on ten…

  20. QUANTITATIVE CHARACTERISTICS OF COMPLEMENTARY INTEGRATED HEALTH CARE SYSTEM AND INTEGRATED MEDICATION MANAGEMENT INFORMATION SYSTEM

    Directory of Open Access Journals (Sweden)

    L. Yu. Babintseva

    2015-05-01

    important elements of state regulation of the pharmaceutical sector of health. For the first time, the creation of two information systems, an integrated medication management information system and an integrated health care system, operating in an integrated medical information area based on the principle of complementarity, was justified. Global and technological coefficients of these systems' functioning were introduced.

  1. Presenting quantitative information about decision outcomes: a risk communication primer for patient decision aid developers

    NARCIS (Netherlands)

    Trevena, L.J.; Zikmund-Fisher, B.J.; Edwards, A.; Gaissmaier, W.; Galesic, M.; Han, P.K.J.; King, J.; Lawson, M.L.; Linder, S.K.; Lipkus, I.; Ozanne, E.; Peters, E.; Timmermans, D.R.M.; Woloshin, S.

    2013-01-01

    Background: Making evidence-based decisions often requires comparison of two or more options. Research-based evidence may exist which quantifies how likely the outcomes are for each option. Understanding these numeric estimates improves patients' risk perception and leads to better informed decision

  2. A quantitative study of seven historically informed performances of Bach's BWV1007 Prelude

    NARCIS (Netherlands)

    Vaquero, C.

    2015-01-01

    In the field of early music, the urge to realize historically informed interpretations has led to new perspectives about our musical legacy from scholars and performers alike. Consequently, different schools of early music performance practice have been developed through the 20th and 21st centuries.

  3. Qualitative and Quantitative Measures of Second Language Writing: Potential Outcomes of Informal Target Language Learning Abroad

    Science.gov (United States)

    Brown, N. Anthony; Solovieva, Raissa V.; Eggett, Dennis L.

    2011-01-01

    This research describes a method applied at a U.S. university in a third-year Russian language course designed to facilitate Advanced and Superior second language writing proficiency through the forum of argumentation and debate. Participants had extensive informal language experience living in a Russian-speaking country but comparatively little…

  4. Quantitative assessment of drivers of recent global temperature variability: an information theoretic approach

    Science.gov (United States)

    Bhaskar, Ankush; Ramesh, Durbha Sai; Vichare, Geeta; Koganti, Triven; Gurubaran, S.

    2017-12-01

    Identification and quantification of possible drivers of recent global temperature variability remains a challenging task. This important issue is addressed adopting a non-parametric information theory technique, the Transfer Entropy and its normalized variant. It distinctly quantifies actual information exchanged along with the directional flow of information between any two variables with no bearing on their common history or inputs, unlike correlation, mutual information etc. Measurements of greenhouse gases: CO2, CH4 and N2O; volcanic aerosols; solar activity: UV radiation, total solar irradiance (TSI) and cosmic ray flux (CR); El Niño Southern Oscillation (ENSO) and Global Mean Temperature Anomaly (GMTA) made during 1984-2005 are utilized to distinguish driving and responding signals of global temperature variability. Estimates of their relative contributions reveal that CO2 (~24%), CH4 (~19%) and volcanic aerosols (~23%) are the primary contributors to the observed variations in GMTA, while UV (~9%) and ENSO (~12%) act as secondary drivers; the remaining variables play a marginal role in the observed recent global temperature variability. Interestingly, ENSO and GMTA mutually drive each other at varied time lags. This study assists future modelling efforts in climate science.
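
    For readers unfamiliar with the measure, a minimal histogram-based estimator of transfer entropy can be written in a few lines. This is an illustration of the definition, not the authors' normalized estimator; the bin count and toy data are arbitrary choices.

    ```python
    import numpy as np

    def transfer_entropy(x, y, n_bins=8):
        """Histogram estimate of transfer entropy TE(X -> Y), in bits:
        TE = sum over (y1, y0, x0) of p(y1, y0, x0) *
             log2[ p(y1 | y0, x0) / p(y1 | y0) ],
        where y1 = y_{t+1}, y0 = y_t, x0 = x_t."""
        xd = np.digitize(x, np.histogram_bin_edges(x, n_bins)[1:-1])
        yd = np.digitize(y, np.histogram_bin_edges(y, n_bins)[1:-1])
        triples = np.stack([yd[1:], yd[:-1], xd[:-1]], axis=1)
        edges = [np.arange(n_bins + 1) - 0.5] * 3   # one bin per integer symbol
        p3, _ = np.histogramdd(triples, bins=edges)
        p3 /= p3.sum()
        p_y1y0 = p3.sum(axis=2)      # p(y1, y0)
        p_y0x0 = p3.sum(axis=0)      # p(y0, x0)
        p_y0 = p_y1y0.sum(axis=0)    # p(y0)
        te = 0.0
        for i, j, k in zip(*np.nonzero(p3)):
            te += p3[i, j, k] * np.log2(
                (p3[i, j, k] / p_y0x0[j, k]) / (p_y1y0[i, j] / p_y0[j]))
        return te

    # Toy check: y lags x, so TE(x -> y) should clearly exceed TE(y -> x).
    rng = np.random.default_rng(0)
    N = 5000
    x = rng.normal(size=N)
    y = np.zeros(N)
    y[1:] = 0.8 * x[:-1] + 0.2 * rng.normal(size=N - 1)
    print(transfer_entropy(x, y), transfer_entropy(y, x))
    ```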

  5. Quantitative analysis of access strategies to remote information in network services

    DEFF Research Database (Denmark)

    Olsen, Rasmus Løvenstein; Schwefel, Hans-Peter; Hansen, Martin Bøgsted

    2006-01-01

    Remote access to dynamically changing information elements is a required functionality for various network services, including routing and instances of context-sensitive networking. Three fundamentally different strategies for such access are investigated in this paper: (1) a reactive approach in...

  6. Quantitative Modeling of Human Performance in Information Systems. Technical Research Note 232.

    Science.gov (United States)

    Baker, James D.

    1974-01-01

    A general information system model was developed which focuses on man and considers the computer only as a tool. The ultimate objective is to produce a simulator which will yield measures of system performance under different mixes of equipment, personnel, and procedures. The model is structured around three basic dimensions: (1) data flow and…

  7. The Quantitative Evaluation of Functional Neuroimaging Experiments: Mutual Information Learning Curves

    DEFF Research Database (Denmark)

    Kjems, Ulrik; Hansen, Lars Kai; Anderson, Jon

    2002-01-01

    Learning curves are presented as an unbiased means for evaluating the performance of models for neuroimaging data analysis. The learning curve measures the predictive performance in terms of the generalization or prediction error as a function of the number of independent examples (e.g., subjects) used to determine the parameters in the model. Cross-validation resampling is used to obtain unbiased estimates of a generic multivariate Gaussian classifier, for training set sizes from 2 to 16 subjects. We apply the framework to four different activation experiments, in this case [15O]water data sets, although the framework is equally valid for multisubject fMRI studies. We demonstrate how the prediction error can be expressed as the mutual information between the scan and the scan label, measured in units of bits. The mutual information learning curve can be used...

  8. Quantitative measurements in laser-induced plasmas using optical probing. Final report

    International Nuclear Information System (INIS)

    Sweeney, D.W.

    1981-01-01

    Optical probing of laser-induced plasmas can be used to quantitatively reconstruct electron number densities and magnetic fields. Numerical techniques for extracting quantitative information from the experimental data are described. A computer simulation of optical probing is used to determine the quantitative information that can reasonably be extracted from real experimental interferometric systems to reconstruct electron number density distributions. An example of a reconstructed interferogram shows a steepened electron distribution due to radiation pressure effects
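
    One standard quantitative step behind such reconstructions converts a measured fringe shift into line-integrated electron density, valid for densities well below the critical density; recovering the radial density profile from these line integrals then requires an Abel inversion, which is omitted here. A minimal sketch (not the report's code):

    ```python
    import numpy as np
    from scipy.constants import c, e, epsilon_0, m_e

    def line_integrated_density(fringe_shift, wavelength):
        """Line-integrated electron density (m^-2) from a fringe shift,
        assuming n_e << n_c, for which the plasma phase shift is
        delta_phi = e**2 * wavelength / (4*pi*c**2*epsilon_0*m_e)
        times the line integral of n_e."""
        delta_phi = 2.0 * np.pi * fringe_shift
        return delta_phi * 4.0 * np.pi * c**2 * epsilon_0 * m_e / (e**2 * wavelength)

    # Example: a 0.5-fringe shift probed at 532 nm -> on the order of 1e21 m^-2.
    print(line_integrated_density(0.5, 532e-9))
    ```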

  9. Linking quantitative microbial risk assessment and epidemiological data: informing safe drinking water trials in developing countries.

    Science.gov (United States)

    Enger, Kyle S; Nelson, Kara L; Clasen, Thomas; Rose, Joan B; Eisenberg, Joseph N S

    2012-05-01

    Intervention trials are used extensively to assess household water treatment (HWT) device efficacy against diarrheal disease in developing countries. Using these data for policy, however, requires addressing issues of generalizability (relevance of one trial in other contexts) and systematic bias associated with design and conduct of a study. To illustrate how quantitative microbial risk assessment (QMRA) can address water safety and health issues, we analyzed a published randomized controlled trial (RCT) of the LifeStraw Family Filter in the Congo. The model accounted for bias due to (1) incomplete compliance with filtration, (2) unexpected antimicrobial activity by the placebo device, and (3) incomplete recall of diarrheal disease. Effectiveness was measured using the longitudinal prevalence ratio (LPR) of reported diarrhea. The Congo RCT observed an LPR of 0.84 (95% CI: 0.61, 1.14). Our model predicted LPRs, assuming a perfect placebo, ranging from 0.50 (2.5-97.5 percentile: 0.33, 0.77) to 0.86 (2.5-97.5 percentile: 0.68, 1.09) for high (but not perfect) and low (but not zero) compliance, respectively. The calibration step provided estimates of the concentrations of three pathogen types (modeled as diarrheagenic E. coli, Giardia, and rotavirus) in drinking water, consistent with the longitudinal prevalence of reported diarrhea measured in the trial, and constrained by epidemiological data from the trial. Use of a QMRA model demonstrated the importance of compliance in HWT efficacy, the need for pathogen data from source waters, the effect of quantifying biases associated with epidemiological data, and the usefulness of generalizing the effectiveness of HWT trials to other contexts.

  10. ONLINE HEALTH INFORMATION SEEKING DURING ADOLESCENCE: A QUANTITATIVE STUDY REGARDING ROMANIAN TEENAGERS

    Directory of Open Access Journals (Sweden)

    Alina Catalina Duduciuc

    2015-12-01

    Full Text Available How the Internet is used by individuals from different age groups to keep their health in check has become a major issue for both academic researchers and policy makers. Interest in the topic derives mainly from 2000-2014 data, which converge towards a pattern of accessing the Internet as a source of information regarding health. Previous studies showed that teenagers are the main consumers of the Internet and that they often start surfing for online health concerns on social media (Facebook, Twitter) and popular search engines (Google, Yahoo). The current paper describes how Romanian teenagers (N = 161, aged 14-19) browse for online topics to keep their health in check. Based on a questionnaire, the data revealed that the Internet is used for health topics to a certain extent by more than a third of the respondents, and over half of them consider that the health-related information helped them keep themselves in good shape. Overall, the research outcomes showed that the adolescents seem less interested in using the Internet for health information and sometimes challenge the credibility of online health content.

  11. Combining Sequential Extractions and X-ray Absorption Spectroscopy for Quantitative and Qualitative Zinc Speciation in Soil

    Science.gov (United States)

    Bauer, Tatiana; Minkina, Tatiana; Batukaev, Abdulmalik; Nevidomskaya, Dina; Burachevskaya, Marina; Tsitsuashvili, Viktoriya; Urazgildieva, Kamilya

    2017-04-01

    The combined use of X-ray absorption spectroscopy and extractive fractionation is an effective approach for studying the interaction of metal ions with soil compounds and for identifying the phases that carry metals in soil and fix them stably. These studies were carried out using X-ray absorption spectroscopy and chemical extractive fractionation. In a model experiment, samples of Calcic Chernozem were artificially contaminated with a high dose of Zn(NO3)2 (2000 mg/kg). The metal was incubated in the soil samples for 2 years. Samples of soil mineral and organic phases (calcite, kaolinite, bentonite, humic acids) were saturated with Zn2+ from a solution of the metal nitrate salt. The total content of Zn in soil and in the various soil phases was determined using the X-ray fluorescence method. Extended X-ray absorption fine structure (EXAFS) spectra of Zn were measured at the Structural Materials Science beamline of the Kurchatov Center for Synchrotron Radiation. Sequential fractionation of Zn in soil was conducted by the Tessier method (Tessier et al., 1979), which determines five fractions of metals in soil: exchangeable, bound to Fe-Mn oxides, bound to carbonates, bound to organic matter, and bound to silicates (residual). This methodology has so far more than 4000 citations (Web of Science), which demonstrates the popularity of the approach. Most Zn compounds in uncontaminated soils are contained in stable primary and secondary silicates inherited from the parent rocks (67% of the total concentration across all fractions), which is a regional trait of soils in the fore-Caucasian plain. Extractive fractionation of metal compounds in soil samples artificially contaminated with Zn salts indicates the preferential retention of Zn2+ ions by silicates, carbonates and Fe-Mn oxides. The Zn content significantly increases in the exchangeable fraction. The atomic structure of the various soil phases saturated with Zn2+ ions was studied by using (XANES) X-ray absorption spectroscopy

  12. Particle-size distribution (PSD) of pulverized hair: A quantitative approach of milling efficiency and its correlation with drug extraction efficiency.

    Science.gov (United States)

    Chagas, Aline Garcia da Rosa; Spinelli, Eliani; Fiaux, Sorele Batista; Barreto, Adriana da Silva; Rodrigues, Silvana Vianna

    2017-08-01

    Different types of hair were submitted to different milling procedures and the resulting powders were analyzed by scanning electron microscopy (SEM) and laser diffraction (LD). SEM results were qualitative, whereas LD results were quantitative and accurately characterized the hair powders through their particle size distribution (PSD). Different types of hair submitted to the optimized milling conditions showed quite similar PSDs. A good correlation was obtained between PSD results and the ketamine concentration in a hair sample analyzed by LC-MS/MS. Hair samples were frozen in liquid nitrogen for 5 min and pulverized at 25 Hz for 10 min, resulting in 61% of particles sample extracted after pulverization comparing with the same sample cut in 1 mm fragments. When milling time was extended to 25 min, >90% of particles were sample retesting and quality control procedures.

  13. Quantitative gas chromatography-olfactometry carried out at different dilutions of an extract. Key differences in the odor profiles of four high-quality Spanish aged red wines.

    Science.gov (United States)

    Ferreira, V; Aznar, M; López, R; Cacho, J

    2001-10-01

    Four Spanish aged red wines made in different wine-making areas have been extracted, and the extracts and their 1:5, 1:50, and 1:500 dilutions have been analyzed by a gas chromatography-olfactometry (GC-O) approach in which three judges evaluated odor intensity on a four-point scale. Sixty-nine different odor regions were detected in the GC-O profiles of the wines, 63 of which could be identified. GC-O data have been processed to calculate averaged flavor dilution factors (FD). Different ANOVA strategies have been further applied on FD and on intensity data to check for significant differences among wines and to assess the effects of dilution and the judge. Data show that FD and the average intensity of the odorants are strongly correlated (r² = 0.892). However, the measurement of intensity represents a quantitative advantage in terms of detecting differences. For some odorants, dilution exerts a critical role in the detection of differences. Significant differences among wines have been found in 30 of the 69 odorants detected in the experiment. Most of these differences are introduced by grape compounds such as methyl benzoate and terpenols, by compounds released by the wood, such as furfural, (Z)-whiskey lactone, Furaneol, 4-propylguaiacol, eugenol, 4-ethylphenol, 2,6-dimethoxyphenol, isoeugenol, and ethyl vanillate, by compounds formed by lactic acid bacteria, such as 2,3-butanedione and acetoin, or by compounds formed during the oxidative storage of wines, such as methional, sotolon, o-aminoacetophenone, and phenylacetic acid. The most important differences from a quantitative point of view are due to 2-methyl-3-mercaptofuran, 4-propylguaiacol, 2,6-dimethoxyphenol, and isoeugenol.

  14. Machine learning classification of surgical pathology reports and chunk recognition for information extraction noise reduction.

    Science.gov (United States)

    Napolitano, Giulio; Marshall, Adele; Hamilton, Peter; Gavin, Anna T

    2016-06-01

    Machine learning techniques for the text mining of cancer-related clinical documents have not been sufficiently explored. Here some techniques are presented for the pre-processing of free-text breast cancer pathology reports, with the aim of facilitating the extraction of information relevant to cancer staging. The first technique was implemented using the freely available software RapidMiner to classify the reports according to their general layout: 'semi-structured' and 'unstructured'. The second technique was developed using the open source language engineering framework GATE and aimed at the prediction of chunks of the report text containing information pertaining to the cancer morphology, the tumour size, its hormone receptor status and the number of positive nodes. The classifiers were trained and tested respectively on sets of 635 and 163 manually classified or annotated reports, from the Northern Ireland Cancer Registry. The best result of 99.4% accuracy, which included only one semi-structured report predicted as unstructured, was produced by the layout classifier with the k-nearest-neighbours algorithm, using the binary term occurrence word vector type with stopword filter and pruning. For chunk recognition, the best results were found using the PAUM algorithm with the same parameters for all cases, except for the prediction of chunks containing cancer morphology. For semi-structured reports the performance ranged from 0.97 to 0.94 and from 0.92 to 0.83 in precision and recall, while for unstructured reports performance ranged from 0.91 to 0.64 and from 0.68 to 0.41 in precision and recall. Poor results were found when the classifier was trained on semi-structured reports but tested on unstructured. These results show that it is possible and beneficial to predict the layout of reports and that the accuracy of prediction of which segments of a report may contain certain information is sensitive to the report layout and the type of information sought.
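
    The layout-classification step lends itself to a compact illustration. The sketch below substitutes scikit-learn for RapidMiner but mirrors the reported configuration: binary term-occurrence vectors with a stopword filter feeding a k-nearest-neighbours classifier. The example reports and labels are invented placeholders.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    # Invented placeholder reports, not registry data.
    reports = [
        "Tumour size: 22 mm. ER: positive. Nodes positive: 2.",
        "The specimen shows an invasive carcinoma measuring about two centimetres.",
    ]
    labels = ["semi-structured", "unstructured"]

    clf = make_pipeline(
        CountVectorizer(binary=True, stop_words="english"),  # binary term occurrence
        KNeighborsClassifier(n_neighbors=1),                 # k-NN layout classifier
    )
    clf.fit(reports, labels)
    print(clf.predict(["Nodes positive: 0. PR: negative. Tumour size: 9 mm."]))
    ```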

  15. An information theory based approach for quantitative evaluation of man-machine interface complexity

    International Nuclear Information System (INIS)

    Kang, Hyun Gook

    1999-02-01

    In complex and high-risk work conditions, especially such as in nuclear power plants, human understanding of the plant is highly cognitive and thus largely dependent on the effectiveness of the man-machine interface system. In order to provide more effective and reliable operating conditions for future nuclear power plants, developing more credible and easy-to-use evaluation methods will afford great help in designing interface systems in a more efficient manner. In this study, in order to analyze the human-machine interactions, I propose the Human-processor Communication (HPC) model, which is based on the information flow concept. It identifies the information flow around a human-processor. Information flow has two aspects: appearance and content. Based on the HPC model, I propose two kinds of measures for evaluating a user interface from the viewpoint of these two aspects of information flow. They measure the communicative complexity of each aspect. In this study, for the evaluation of the aspect of appearance, I propose three complexity measures: Operation Complexity, Transition Complexity, and Screen Complexity. Each one of these measures has its own physical meaning. Two experiments carried out in this work support the utility of these measures. The result of the quiz game experiment shows that as the complexity of task context increases, the usage of the interface system becomes more complex. The experimental results of the three example systems (digital view, LDP style view and hierarchy view) show the utility of the proposed complexity measures. In this study, for the evaluation of the aspect of content, I propose the degree of informational coincidence, R(K, P), as a measure for the usefulness of an alarm-processing system. It is designed to perform user-oriented evaluation based on the informational entropy concept. It will be especially useful in early design phase because designers can estimate the usefulness of an alarm system by short calculations instead

  16. An information theory based approach for quantitative evaluation of man-machine interface complexity

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Hyun Gook

    1999-02-15

    In complex and high-risk work conditions, especially such as in nuclear power plants, human understanding of the plant is highly cognitive and thus largely dependent on the effectiveness of the man-machine interface system. In order to provide more effective and reliable operating conditions for future nuclear power plants, developing more credible and easy-to-use evaluation methods will afford great help in designing interface systems in a more efficient manner. In this study, in order to analyze the human-machine interactions, I propose the Human-processor Communication (HPC) model, which is based on the information flow concept. It identifies the information flow around a human-processor. Information flow has two aspects: appearance and content. Based on the HPC model, I propose two kinds of measures for evaluating a user interface from the viewpoint of these two aspects of information flow. They measure the communicative complexity of each aspect. In this study, for the evaluation of the aspect of appearance, I propose three complexity measures: Operation Complexity, Transition Complexity, and Screen Complexity. Each one of these measures has its own physical meaning. Two experiments carried out in this work support the utility of these measures. The result of the quiz game experiment shows that as the complexity of task context increases, the usage of the interface system becomes more complex. The experimental results of the three example systems (digital view, LDP style view and hierarchy view) show the utility of the proposed complexity measures. In this study, for the evaluation of the aspect of content, I propose the degree of informational coincidence, R(K, P), as a measure for the usefulness of an alarm-processing system. It is designed to perform user-oriented evaluation based on the informational entropy concept. It will be especially useful in early design phase because designers can estimate the usefulness of an alarm system by short calculations instead

  17. Study of time-frequency characteristics of single snores: extracting new information for sleep apnea diagnosis

    Energy Technology Data Exchange (ETDEWEB)

    Castillo Escario, Y.; Blanco Almazan, D.; Camara Vazquez, M.A.; Jane Campos, R.

    2016-07-01

    Obstructive sleep apnea (OSA) is a highly prevalent chronic disease, especially in the elderly and obese population. Despite constituting a huge health and economic problem, most patients remain undiagnosed due to limitations in current strategies. Therefore, it is essential to find cost-effective diagnostic alternatives. One of these novel approaches is the analysis of acoustic snoring signals. Snoring is an early symptom of OSA which carries pathophysiological information of high diagnostic value. For this reason, the main objective of this work is to study the characteristics of single snores of different types, from healthy and OSA subjects. To do that, we analyzed snoring signals from previous databases and developed an experimental protocol to record simulated OSA-related sounds and characterize the response of two commercial tracheal microphones. Automatic programs for filtering, downsampling, event detection and time-frequency analysis were built in MATLAB. We found that time-frequency maps and spectral parameters (central, mean and peak frequency and energy in the 100-500 Hz band) allow distinguishing regular snores of healthy subjects from non-regular snores and snores of OSA subjects. Regarding the two commercial microphones, we found that one of them was a suitable snoring sensor, while the other had a too restricted frequency response. Future work shall include a higher number of episodes and subjects, but our study has contributed to showing how important the differences between regular and non-regular snores can be for OSA diagnosis, and how much clinically relevant information can be extracted from time-frequency maps and spectral parameters of single snores.

  18. Quantitative proteomic analysis of extracellular matrix extracted from mono- and dual-species biofilms of Fusobacterium nucleatum and Porphyromonas gingivalis.

    Science.gov (United States)

    Mohammed, Marwan Mansoor Ali; Pettersen, Veronika Kuchařová; Nerland, Audun H; Wiker, Harald G; Bakken, Vidar

    2017-04-01

    The Gram-negative bacteria Fusobacterium nucleatum and Porphyromonas gingivalis are members of a complex dental biofilm associated with periodontal disease. In this study, we cultured F. nucleatum and P. gingivalis as mono- and dual-species biofilms, and analyzed the protein composition of the biofilms' extracellular polymeric matrix (EPM) by high-resolution liquid chromatography-tandem mass spectrometry. Label-free quantitative proteomic analysis was used for identification of proteins, and sequence-based functional characterization for their classification and prediction of possible roles in the EPM. We identified 542, 93 and 280 proteins in the matrix of F. nucleatum, P. gingivalis, and the dual-species biofilm, respectively. Nearly 70% of all EPM proteins in the dual-species biofilm originated from F. nucleatum, and a majority of these were cytoplasmic proteins, suggesting an enhanced lysis of F. nucleatum cells. The proteomic analysis also indicated an interaction between the two species: 22 F. nucleatum proteins showed differential levels between the mono- and dual-species EPMs, and 11 proteins (8 and 3 from F. nucleatum and P. gingivalis, respectively) were exclusively detected in the dual-species EPM. Oxidoreductases and chaperones were among the most abundant proteins identified in all three EPMs. The biofilm matrices in addition contained several known and hypothetical virulence proteins, which can mediate adhesion to the host cells and disintegration of the periodontal tissues. This study demonstrated that the biofilm matrix of two important periodontal pathogens consists of a multitude of proteins whose amounts and functionalities vary largely. Relatively high levels of several of the detected proteins might facilitate their potential use as targets for the inhibition of biofilm development.

  19. Quantum measurement information as a key to energy extraction from local vacuums

    International Nuclear Information System (INIS)

    Hotta, Masahiro

    2008-01-01

    In this paper, a protocol is proposed in which energy extraction from local vacuum states is possible by using quantum measurement information for the vacuum state of quantum fields. In the protocol, Alice, who stays at a spatial point, excites the ground state of the fields by a local measurement. Consequently, wave packets generated by Alice's measurement propagate through the vacuum to spatial infinity. Let us assume that Bob stays away from Alice and fails to catch the excitation energy when the wave packets pass in front of him. Next, Alice announces her local measurement result to Bob by classical communication. Bob performs a local unitary operation depending on the measurement result. In this process, positive energy is released from the fields to Bob's apparatus performing the unitary operation. In the field systems, wave packets with negative energy are generated around Bob's location. Soon afterwards, the negative-energy wave packets begin to chase after the positive-energy wave packets generated by Alice and form loosely bound states.

  20. Note on difference spectra for fast extraction of global image information.

    CSIR Research Space (South Africa)

    Van Wyk, BJ

    2007-06-01

    Full Text Available NOTE ON DIFFERENCE SPECTRA FOR FAST EXTRACTION OF GLOBAL IMAGE INFORMATION. B.J. van Wyk*, M.A. van Wyk* and F. van den Bergh**. * French South African Technical Institute in Electronics (F'SATIE) at the Tshwane University of Technology, Private Bag X680, Pretoria 0001. ** Remote Sensing Research Group, Meraka Institute...

  1. Evaluation of RNA extraction methods and identification of putative reference genes for real-time quantitative polymerase chain reaction expression studies on olive (Olea europaea L.) fruits.

    Science.gov (United States)

    Nonis, Alberto; Vezzaro, Alice; Ruperti, Benedetto

    2012-07-11

    Genome-wide transcriptomic surveys together with targeted molecular studies are uncovering an ever increasing number of differentially expressed genes in relation to agriculturally relevant processes in olive (Olea europaea L.). These data need to be supported by quantitative approaches enabling the precise estimation of transcript abundance. Since qPCR is the most widely adopted technique for mRNA quantification, preliminary work needs to be done to set up robust methods for the extraction of fully functional RNA and for the identification of the best reference genes, so as to obtain reliable quantification of transcripts. In this work, we have assessed different methods for their suitability for RNA extraction from olive fruits and leaves, and we have evaluated thirteen potential candidate reference genes on 21 RNA samples belonging to fruit developmental/ripening series and to leaves subjected to wounding. By using two different algorithms, GAPDH2 and PP2A1 were identified as the best reference genes for olive fruit development and ripening, and their effectiveness for normalization of the expression of two ripening marker genes was demonstrated.
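
    The abstract does not name the two algorithms used, but geNorm-style pairwise stability is a common choice for this task and illustrates the idea: a gene is stable if its log-ratio to every other candidate varies little across samples. A minimal sketch with hypothetical expression data:

    ```python
    import numpy as np

    def genorm_m(expr):
        """geNorm-style stability measure M for candidate reference genes.
        expr: (n_samples, n_genes) relative expression quantities. For gene j,
        M_j is the mean over genes k != j of the standard deviation across
        samples of log2(expr_j / expr_k); lower M means more stable."""
        logs = np.log2(expr)
        n = logs.shape[1]
        M = np.empty(n)
        for j in range(n):
            sds = [np.std(logs[:, j] - logs[:, k], ddof=1)
                   for k in range(n) if k != j]
            M[j] = np.mean(sds)
        return M

    # Hypothetical data: 21 samples x 4 candidates (e.g. GAPDH2, PP2A1, ...).
    rng = np.random.default_rng(0)
    expr = 2.0 ** rng.normal(0.0, [0.2, 0.25, 0.6, 0.8], size=(21, 4))
    print(genorm_m(expr))   # the lowest-M genes would be kept as references
    ```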

  2. Improved quantitation and reproducibility in multi-PET/CT lung studies by combining CT information.

    Science.gov (United States)

    Holman, Beverley F; Cuplov, Vesna; Millner, Lynn; Endozo, Raymond; Maher, Toby M; Groves, Ashley M; Hutton, Brian F; Thielemans, Kris

    2018-06-05

    Matched attenuation maps are vital for obtaining accurate and reproducible kinetic and static parameter estimates from PET data. With increased interest in PET/CT imaging of diffuse lung diseases for assessing disease progression and treatment effectiveness, understanding the extent of the effect of respiratory motion and establishing methods for correction are becoming more important. In a previous study, we have shown that using the wrong attenuation map leads to large errors due to density mismatches in the lung, especially in dynamic PET scans. Here, we extend this work to the case where the study is sub-divided into several scans, e.g. for patient comfort, each with its own CT (cine-CT and 'snap shot' CT). A method to combine multi-CT information into a combined-CT has then been developed, which averages the CT information from each study section to produce composite CT images with a lung density more representative of that in the PET data. This combined-CT was applied to nine patients with idiopathic pulmonary fibrosis, imaged with dynamic 18F-FDG PET/CT, to determine the improvement in the precision of the parameter estimates. Using XCAT simulations, errors in the influx rate constant were found to be as high as 60% in multi-PET/CT studies. Analysis of patient data identified displacements between study sections in the time activity curves, which led to an average standard error in the estimates of the influx rate constant of 53% with conventional methods. This reduced to within 5% after the use of combined-CTs for attenuation correction of the study sections. Use of combined-CTs to reconstruct the sections of a multi-PET/CT study, as opposed to using the individually acquired CTs at each study stage, produces more precise parameter estimates and may improve discrimination between diseased and normal lung.
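
    The combination step itself is conceptually simple, as the sketch below illustrates: given co-registered CT volumes from each study section, voxel-wise averaging yields an attenuation map whose lung density is closer to the time-averaged density seen by the PET data. This is a schematic reading of the method, not the authors' implementation.

    ```python
    import numpy as np

    def combined_ct(ct_volumes):
        """Voxel-wise average of co-registered CT volumes (in HU), one per
        study section, producing a composite attenuation map for all sections."""
        return np.stack(ct_volumes, axis=0).mean(axis=0)

    # Hypothetical use: three study sections, volumes already aligned.
    cts = [np.random.normal(-700.0, 50.0, (64, 128, 128)) for _ in range(3)]
    mu_map_ct = combined_ct(cts)   # input to CT-based attenuation correction
    ```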

  3. A quantitative method to analyze the quality of EIA information in wind energy development and avian/bat assessments

    International Nuclear Information System (INIS)

    Chang, Tony; Nielsen, Erik; Auberle, William; Solop, Frederic I.

    2013-01-01

    The environmental impact assessment (EIA) has been a tool for decision makers since the enactment of the National Environmental Policy Act (NEPA). Since that time, few analyses have been performed to verify the quality of information and content within EIAs. High quality information within assessments is vital in order for decision makers, stake holders, and the public to understand the potential impact of proposed actions on the ecosystem and wildlife species. Low quality information has been a major cause for litigation and economic loss. Since 1999, wind energy development has seen an exponential growth with unknown levels of impact on wildlife species, in particular bird and bat species. The purpose of this article is to: (1) develop, validate, and apply a quantitative index to review avian/bat assessment quality for wind energy EIAs; and (2) assess the trends and status of avian/bat assessment quality in a sample of wind energy EIAs. This research presents the development and testing of the Avian and Bat Assessment Quality Index (ABAQI), a new approach to quantify information quality of ecological assessments within wind energy development EIAs in relation to avian and bat species based on review areas and factors derived from 23 state wind/wildlife siting guidance documents. The ABAQI was tested through a review of 49 publicly available EIA documents and validated by identifying high variation in avian and bat assessments quality for wind energy developments. Of all the reviewed EIAs, 66% failed to provide high levels of preconstruction avian and bat survey information, compared to recommended factors from state guidelines. This suggests the need for greater consistency from recommended guidelines by state, and mandatory compliance by EIA preparers to avoid possible habitat and species loss, wind energy development shut down, and future lawsuits. - Highlights: ► We developed, validated, and applied a quantitative index to review avian/bat assessment quality

  4. A quantitative method to analyze the quality of EIA information in wind energy development and avian/bat assessments

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Tony, E-mail: tc282@nau.edu [Environmental Science and Policy Program, School of Earth Science and Environmental Sustainability, Northern Arizona University, 602 S Humphreys P.O. Box 5694, Flagstaff, AZ, 86011 (United States); Nielsen, Erik, E-mail: erik.nielsen@nau.edu [Environmental Science and Policy Program, School of Earth Science and Environmental Sustainability, Northern Arizona University, 602 S Humphreys P.O. Box 5694, Flagstaff, AZ, 86011 (United States); Auberle, William, E-mail: william.auberle@nau.edu [Civil and Environmental Engineering Program, Department of Civil and Environmental Engineering, Northern Arizona University, 2112 S Huffer Ln P.O. Box 15600, Flagstaff, AZ, 860011 (United States); Solop, Frederic I., E-mail: fred.solop@nau.edu [Political Science Program, Department of Politics and International Affairs, Northern Arizona University, P.O. Box 15036, Flagstaff, AZ 86001 (United States)

    2013-01-15

    The environmental impact assessment (EIA) has been a tool for decision makers since the enactment of the National Environmental Policy Act (NEPA). Since that time, few analyses have been performed to verify the quality of information and content within EIAs. High quality information within assessments is vital in order for decision makers, stake holders, and the public to understand the potential impact of proposed actions on the ecosystem and wildlife species. Low quality information has been a major cause for litigation and economic loss. Since 1999, wind energy development has seen an exponential growth with unknown levels of impact on wildlife species, in particular bird and bat species. The purpose of this article is to: (1) develop, validate, and apply a quantitative index to review avian/bat assessment quality for wind energy EIAs; and (2) assess the trends and status of avian/bat assessment quality in a sample of wind energy EIAs. This research presents the development and testing of the Avian and Bat Assessment Quality Index (ABAQI), a new approach to quantify information quality of ecological assessments within wind energy development EIAs in relation to avian and bat species based on review areas and factors derived from 23 state wind/wildlife siting guidance documents. The ABAQI was tested through a review of 49 publicly available EIA documents and validated by identifying high variation in avian and bat assessments quality for wind energy developments. Of all the reviewed EIAs, 66% failed to provide high levels of preconstruction avian and bat survey information, compared to recommended factors from state guidelines. This suggests the need for greater consistency from recommended guidelines by state, and mandatory compliance by EIA preparers to avoid possible habitat and species loss, wind energy development shut down, and future lawsuits. - Highlights: ► We developed, validated, and applied a quantitative index to review

  5. Ammonium chloride salting out extraction/cleanup for trace-level quantitative analysis in food and biological matrices by flow injection tandem mass spectrometry.

    Science.gov (United States)

    Nanita, Sergio C; Padivitage, Nilusha L T

    2013-03-20

    A sample extraction and purification procedure that uses ammonium-salt-induced acetonitrile/water phase separation was developed and demonstrated to be compatible with the recently reported method for pesticide residue analysis based on fast extraction and dilution flow injection mass spectrometry (FED-FI-MS). The ammonium salts evaluated were chloride, acetate, formate, carbonate, and sulfate. A mixture of NaCl and MgSO4, salts used in the well-known QuEChERS method, was also tested for comparison. The low thermal decomposition/evaporation temperature of the ammonium salts resulted in negligible ion source residue under typical electrospray conditions, leading to consistent method performance and less instrument cleaning. Although all ammonium salts tested induced acetonitrile/water phase separation, NH4Cl yielded the best performance and was thus the preferred salting out agent. The NH4Cl salting out method was successfully coupled with FI/MS/MS and tested for fourteen pesticide active ingredients: chlorantraniliprole, cyantraniliprole, chlorimuron ethyl, oxamyl, methomyl, sulfometuron methyl, chlorsulfuron, triflusulfuron methyl, azimsulfuron, flupyrsulfuron methyl, aminocyclopyrachlor, aminocyclopyrachlor methyl, diuron and hexazinone. A validation study was conducted with nine complex matrices: sorghum, rice, grapefruit, canola, milk, eggs, beef, urine and blood plasma. The method is applicable to all analytes except aminocyclopyrachlor. The method was deemed appropriate for quantitative analysis in 114 out of 126 analyte/matrix cases tested (applicability rate = 0.90). The NH4Cl salting out extraction/cleanup allowed expansion of FI/MS/MS for analysis in food of plant and animal origin, and in body fluids, with increased ruggedness and sensitivity, while maintaining high throughput (run time = 30 s/sample). Limits of quantitation (LOQs) of 0.01 mg kg(-1) (ppm), the 'well-accepted standard' in pesticide residue analysis, were achieved in >80% of cases tested, while limits of detection

  6. Analysis Methods for Extracting Knowledge from Large-Scale WiFi Monitoring to Inform Building Facility Planning

    DEFF Research Database (Denmark)

    Ruiz-Ruiz, Antonio; Blunck, Henrik; Prentow, Thor Siiger

    2014-01-01

    The optimization of logistics in large building complexes with many resources, such as hospitals, requires realistic facility management and planning. Current planning practices rely foremost on manual observations or coarse unverified assumptions and therefore do not properly scale or provide realistic data to inform facility planning. In this paper, we propose analysis methods to extract knowledge from large sets of network-collected WiFi traces to better inform facility management and planning in large building complexes. The analysis methods build on a rich set of temporal and spatial... Spatio-temporal visualization tools built on top of these methods enable planners to inspect and explore the extracted information to inform facility-planning activities. To evaluate the methods, we present results for a large hospital complex covering more than 10 hectares. The evaluation is based on Wi...
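
    As one concrete example of the kind of knowledge extraction meant here, distinct-device counts per access point and time bin give a simple occupancy proxy from WiFi association logs. The sketch below assumes a hypothetical log layout with columns (timestamp, device_id, access_point); it is an illustration, not the authors' pipeline.

    ```python
    import pandas as pd

    # Invented placeholder log; real traces would come from the network side.
    logs = pd.DataFrame({
        "timestamp": pd.to_datetime(["2014-03-01 08:01", "2014-03-01 08:07",
                                     "2014-03-01 08:12", "2014-03-01 08:31"]),
        "device_id": ["a", "b", "a", "c"],
        "access_point": ["ward-1", "ward-1", "lobby", "ward-1"],
    })

    # Distinct devices per access point per 15-minute bin: an occupancy proxy.
    occupancy = (
        logs.groupby(["access_point", pd.Grouper(key="timestamp", freq="15min")])
            ["device_id"].nunique()
            .rename("distinct_devices")
    )
    print(occupancy)
    ```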

  7. Left atrial appendage segmentation and quantitative assisted diagnosis of atrial fibrillation based on fusion of temporal-spatial information.

    Science.gov (United States)

    Jin, Cheng; Feng, Jianjiang; Wang, Lei; Yu, Heng; Liu, Jiang; Lu, Jiwen; Zhou, Jie

    2018-05-01

    In this paper, we present an approach for left atrial appendage (LAA) multi-phase fast segmentation and quantitative assisted diagnosis of atrial fibrillation (AF) based on 4D-CT data. We take full advantage of the temporal dimension information to segment the living, flailed LAA based on a parametric max-flow method and graph-cut approach to build a 3-D model of each phase. To assist the diagnosis of AF, we calculate the volumes of the 3-D models and then generate a "volume-phase" curve to calculate the important dynamic metrics: ejection fraction, filling flux, and emptying flux of the LAA's blood by volume. This approach demonstrates more precise results than the conventional approaches that calculate metrics by area, and allows for the quick analysis of LAA-volume pattern changes in a cardiac cycle. It may also provide insight into the individual differences in the lesions of the LAA. Furthermore, we apply support vector machines (SVMs) to achieve a quantitative auto-diagnosis of AF by exploiting seven features from volume change ratios of the LAA, and perform multivariate logistic regression analysis for the risk of LAA thrombosis. The 100 cases utilized in this research were taken from the Philips 256-iCT. The experimental results demonstrate that our approach can construct the 3-D LAA geometries robustly compared to manual annotations, and reasonably infer that the LAA undergoes filling, emptying and re-filling, re-emptying in a cardiac cycle. This research provides a potential for exploring various physiological functions of the LAA and quantitatively estimating the risk of stroke in patients with AF.
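
    The dynamic metrics are direct functions of the volume-phase curve. The sketch below shows one plausible reading of the definitions (ejection fraction by volume, and filling/emptying fluxes as the summed volume gains and losses over the cycle); the paper's exact definitions may differ.

    ```python
    import numpy as np

    def laa_dynamic_metrics(volumes):
        """Dynamic metrics from an LAA 'volume-phase' curve (one volume per
        cardiac phase, in mL): ejection fraction by volume, plus total filling
        and emptying fluxes summed over the cycle."""
        v = np.asarray(volumes, dtype=float)
        ef = (v.max() - v.min()) / v.max()          # ejection fraction by volume
        dv = np.diff(v)
        filling = dv[dv > 0].sum()                  # total volume gained
        emptying = -dv[dv < 0].sum()                # total volume lost
        return ef, filling, emptying

    # Hypothetical 10-phase curve over one cardiac cycle (mL).
    print(laa_dynamic_metrics([8.1, 9.0, 9.6, 9.2, 8.0, 6.5, 5.9, 6.4, 7.2, 7.9]))
    ```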

  8. Extraction as a source of additional information when concentrations in multicomponent systems are simultaneously determined

    International Nuclear Information System (INIS)

    Perkov, I.G.

    1988-01-01

    Using the photometric determination of Nd and Sm in each other's presence as an example, the possibility of using the influence of extraction to increase the analytical signal is considered. It is shown that interligand exchange in extracts, in combination with simultaneous determination of concentrations, can be used as a simple means of increasing the accuracy of determination. 5 refs.; 2 figs.; 3 tabs

  9. An Informed Approach to Improving Quantitative Literacy and Mitigating Math Anxiety in Undergraduates Through Introductory Science Courses

    Science.gov (United States)

    Follette, K.; McCarthy, D.

    2012-08-01

    Current trends in the teaching of high school and college science avoid numerical engagement because nearly all students lack basic arithmetic skills and experience anxiety when encountering numbers. Nevertheless, such skills are essential to science and vital to becoming savvy consumers, citizens capable of recognizing pseudoscience, and discerning interpreters of statistics in ever-present polls, studies, and surveys in which our society is awash. Can a general-education collegiate course motivate students to value numeracy and to improve their quantitative skills in what may well be their final opportunity in formal education? We present a tool to assess whether skills in numeracy/quantitative literacy can be fostered and improved in college students through the vehicle of non-major introductory courses in astronomy. Initial classroom applications define the magnitude of this problem and indicate that significant improvements are possible. Based on these initial results we offer this tool online and hope to collaborate with other educators, both formal and informal, to develop effective mechanisms for encouraging all students to value and improve their skills in basic numeracy.

  10. Presenting quantitative information about decision outcomes: a risk communication primer for patient decision aid developers

    Science.gov (United States)

    2013-01-01

    Background Making evidence-based decisions often requires comparison of two or more options. Research-based evidence may exist which quantifies how likely the outcomes are for each option. Understanding these numeric estimates improves patients’ risk perception and leads to better informed decision making. This paper summarises current “best practices” in communication of evidence-based numeric outcomes for developers of patient decision aids (PtDAs) and other health communication tools. Method An expert consensus group of fourteen researchers from North America, Europe, and Australasia identified eleven main issues in risk communication. Two experts for each issue wrote a “state of the art” summary of best evidence, drawing on the PtDA, health, psychological, and broader scientific literature. In addition, commonly used terms were defined and a set of guiding principles and key messages derived from the results. Results The eleven key components of risk communication were: 1) Presenting the chance an event will occur; 2) Presenting changes in numeric outcomes; 3) Outcome estimates for test and screening decisions; 4) Numeric estimates in context and with evaluative labels; 5) Conveying uncertainty; 6) Visual formats; 7) Tailoring estimates; 8) Formats for understanding outcomes over time; 9) Narrative methods for conveying the chance of an event; 10) Important skills for understanding numerical estimates; and 11) Interactive web-based formats. Guiding principles from the evidence summaries advise that risk communication formats should reflect the task required of the user, should always define a relevant reference class (i.e., denominator) over time, should aim to use a consistent format throughout documents, should avoid “1 in x” formats and variable denominators, consider the magnitude of numbers used and the possibility of format bias, and should take into account the numeracy and graph literacy of the audience. Conclusion A substantial and

  11. Validation and extraction of molecular-geometry information from small-molecule databases.

    Science.gov (United States)

    Long, Fei; Nicholls, Robert A; Emsley, Paul; Gražulis, Saulius; Merkys, Andrius; Vaitkus, Antanas; Murshudov, Garib N

    2017-02-01

    A freely available small-molecule structure database, the Crystallography Open Database (COD), is used for the extraction of molecular-geometry information on small-molecule compounds. The results are used for the generation of new ligand descriptions, which are subsequently used by macromolecular model-building and structure-refinement software. To increase the reliability of the derived data, and therefore the new ligand descriptions, the entries from this database were subjected to very strict validation. The selection criteria made sure that the crystal structures used to derive atom types, bond and angle classes are of sufficiently high quality. Any suspicious entries at a crystal or molecular level were removed from further consideration. The selection criteria included (i) the resolution of the data used for refinement (entries solved at 0.84 Å resolution or higher) and (ii) the structure-solution method (structures must be from a single-crystal experiment and all atoms of generated molecules must have full occupancies), as well as basic sanity checks such as (iii) consistency between the valences and the number of connections between atoms, (iv) acceptable bond-length deviations from the expected values and (v) detection of atomic collisions. The derived atom types and bond classes were then validated using high-order moment-based statistical techniques. The results of the statistical analyses were fed back to fine-tune the atom typing. The developed procedure was repeated four times, resulting in fine-grained atom typing, bond and angle classes. The procedure will be repeated in the future as and when new entries are deposited in the COD. The whole procedure can also be applied to any source of small-molecule structures, including the Cambridge Structural Database and the ZINC database.
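
    Criterion (iv) above, acceptable bond-length deviations from expected values, can be illustrated with a few lines of Python; the expected values and tolerance below are illustrative placeholders, not the values used for the COD.

    ```python
    from math import dist

    # Illustrative expected single-bond lengths in angstroms (placeholders).
    EXPECTED = {("C", "C"): 1.53, ("C", "N"): 1.47, ("C", "O"): 1.43}

    def check_bonds(atoms, bonds, tol=0.15):
        """atoms: {name: (element, (x, y, z))}; bonds: pairs of atom names.
        Returns bonds whose length deviates from the class expectation by > tol."""
        bad = []
        for a, b in bonds:
            (ea, pa), (eb, pb) = atoms[a], atoms[b]
            expected = EXPECTED.get(tuple(sorted((ea, eb))))
            if expected is not None and abs(dist(pa, pb) - expected) > tol:
                bad.append((a, b, dist(pa, pb)))
        return bad

    atoms = {"C1": ("C", (0, 0, 0)), "C2": ("C", (1.9, 0, 0))}  # too long
    print(check_bonds(atoms, [("C1", "C2")]))  # -> [('C1', 'C2', 1.9)]
    ```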

  12. Extracting respiratory information from seismocardiogram signals acquired on the chest using a miniature accelerometer

    International Nuclear Information System (INIS)

    Pandia, Keya; Inan, Omer T; Kovacs, Gregory T A; Giovangrandi, Laurent

    2012-01-01

    Seismocardiography (SCG) is a non-invasive measurement of the vibrations of the chest caused by the heartbeat. SCG signals can be measured using a miniature accelerometer attached to the chest, and are thus well-suited for unobtrusive and long-term patient monitoring. Additionally, SCG contains information relating to both cardiovascular and respiratory systems. In this work, algorithms were developed for extracting three respiration-dependent features of the SCG signal: intensity modulation, timing interval changes within each heartbeat, and timing interval changes between successive heartbeats. Simultaneously with a reference respiration belt, SCG signals were measured from 20 healthy subjects and a respiration rate was estimated using each of the three SCG features and the reference signal. The agreement between each of the three accelerometer-derived respiration rate measurements was computed with respect to the respiration rate derived from the reference respiration belt. The respiration rate obtained from the intensity modulation in the SCG signal was found to be in closest agreement with the respiration rate obtained from the reference respiration belt: the bias was found to be 0.06 breaths per minute with a 95% confidence interval of −0.99 to 1.11 breaths per minute. The limits of agreement between the respiration rates estimated using SCG (intensity modulation) and the reference were within the clinically relevant ranges given in existing literature, demonstrating that SCG could be used for both cardiovascular and respiratory monitoring. Furthermore, phases of each of the three SCG parameters were investigated at four instances of a respiration cycle (start inspiration, peak inspiration, start expiration, and peak expiration) and during breath hold (apnea). The phases of the three SCG parameters observed during the respiration cycle were congruent with existing literature and physiologically expected trends.
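
    As an illustration of the first feature, intensity modulation, the respiration rate can be estimated by resampling the per-beat SCG amplitudes to a uniform grid and locating the dominant spectral peak in a plausible respiratory band. The sketch below is consistent with this idea but is not the authors' implementation.

    ```python
    import numpy as np

    def resp_rate_from_beat_amplitudes(beat_times, beat_amps, fs=4.0):
        """Respiration rate (breaths/min) from the respiratory intensity
        modulation of per-beat SCG amplitudes: resample to a uniform grid,
        remove the mean, and take the dominant FFT frequency in 0.1-0.7 Hz."""
        t = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
        a = np.interp(t, beat_times, beat_amps) - np.mean(beat_amps)
        spec = np.abs(np.fft.rfft(a))
        freqs = np.fft.rfftfreq(len(a), 1.0 / fs)
        band = (freqs >= 0.1) & (freqs <= 0.7)
        return 60.0 * freqs[band][np.argmax(spec[band])]

    # Toy data: 60 s of beats at ~1 Hz, amplitude modulated at 0.25 Hz.
    beats = np.arange(0, 60, 1.0)
    amps = 1.0 + 0.2 * np.sin(2 * np.pi * 0.25 * beats)
    print(resp_rate_from_beat_amplitudes(beats, amps))  # ~15 breaths/min
    ```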

  13. Extracting key information from historical data to quantify the transmission dynamics of smallpox

    Directory of Open Access Journals (Sweden)

    Brockmann Stefan O

    2008-08-01

    Background: Quantification of the transmission dynamics of smallpox is crucial for optimizing intervention strategies in the event of a bioterrorist attack. This article reviews basic methods and findings in mathematical and statistical studies of smallpox which estimate key transmission parameters from historical data. Main findings: First, critically important aspects in extracting key information from historical data are briefly summarized. We mention different sources of heterogeneity and potential pitfalls in utilizing historical records. Second, we discuss how smallpox spreads in the absence of interventions and how the optimal timing of quarantine and isolation measures can be determined. Case studies demonstrate the following. (1) The upper confidence limit of the 99th percentile of the incubation period is 22.2 days, suggesting that quarantine should last 23 days. (2) The highest frequency (61.8%) of secondary transmissions occurs 3-5 days after onset of fever, so infected individuals should be isolated before the appearance of rash. (3) The U-shaped age-specific case fatality implies a vulnerability of infants and the elderly among non-immune individuals. Estimates of the transmission potential are subsequently reviewed, followed by an assessment of vaccination effects and of the expected effectiveness of interventions. Conclusion: Current debates on bioterrorism preparedness indicate that public health decision making must account for the complex interplay and balance between vaccination strategies and other public health measures (e.g. case isolation and contact tracing), taking into account the frequency of adverse events to vaccination. In this review, we summarize what has already been clarified and point out the need to analyze previous smallpox outbreaks systematically.
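
    The percentile calculation behind the 23-day quarantine recommendation can be pictured with a simple bootstrap. The data and method below are placeholders for illustration, not the reviewed studies' actual fitting procedure.

        import numpy as np

        rng = np.random.default_rng(1)
        # Synthetic stand-in for historical incubation periods (days)
        incubation = rng.lognormal(np.log(12.0), 0.25, size=200)

        # Bootstrap upper 95% confidence limit of the 99th percentile
        boot = [np.percentile(rng.choice(incubation, incubation.size), 99)
                for _ in range(10_000)]
        print(f"upper limit: {np.percentile(boot, 97.5):.1f} days")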

  14. Combining qualitative and quantitative operational research methods to inform quality improvement in pathways that span multiple settings.

    Science.gov (United States)

    Crowe, Sonya; Brown, Katherine; Tregay, Jenifer; Wray, Jo; Knowles, Rachel; Ridout, Deborah A; Bull, Catherine; Utley, Martin

    2017-08-01

    Improving integration and continuity of care across sectors within resource constraints is a priority in many health systems. Qualitative operational research methods of problem structuring have been used to address quality improvement in services involving multiple sectors but not in combination with quantitative operational research methods that enable targeting of interventions according to patient risk. We aimed to combine these methods to augment and inform an improvement initiative concerning infants with congenital heart disease (CHD) whose complex care pathway spans multiple sectors. Soft systems methodology was used to consider systematically changes to services from the perspectives of community, primary, secondary and tertiary care professionals and a patient group, incorporating relevant evidence. Classification and regression tree (CART) analysis of national audit datasets was conducted along with data visualisation designed to inform service improvement within the context of limited resources. A 'Rich Picture' was developed capturing the main features of services for infants with CHD pertinent to service improvement. This was used, along with a graphical summary of the CART analysis, to guide discussions about targeting interventions at specific patient risk groups. Agreement was reached across representatives of relevant health professions and patients on a coherent set of targeted recommendations for quality improvement. These fed into national decisions about service provision and commissioning. When tackling complex problems in service provision across multiple settings, it is important to acknowledge and work with multiple perspectives systematically and to consider targeting service improvements in response to confined resources. Our research demonstrates that applying a combination of qualitative and quantitative operational research methods is one approach to doing so that warrants further consideration. Published by the BMJ Publishing Group
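
    The CART step can be sketched with a standard library; the file name and column names below are hypothetical, and a real analysis would tune the depth and leaf size against the audit data.

        import pandas as pd
        from sklearn.tree import DecisionTreeClassifier, export_text

        df = pd.read_csv("chd_audit_extract.csv")        # assumed audit extract
        features = ["age_at_surgery_days", "weight_z_score", "diagnosis_complexity"]
        X, y = df[features], df["adverse_outcome"]

        # A shallow tree keeps the risk groups few enough to discuss with stakeholders
        tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, y)
        print(export_text(tree, feature_names=features))  # textual summary of the splits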

  15. Combining qualitative and quantitative operational research methods to inform quality improvement in pathways that span multiple settings

    Science.gov (United States)

    Crowe, Sonya; Brown, Katherine; Tregay, Jenifer; Wray, Jo; Knowles, Rachel; Ridout, Deborah A; Bull, Catherine; Utley, Martin

    2017-01-01

    Background: Improving integration and continuity of care across sectors within resource constraints is a priority in many health systems. Qualitative operational research methods of problem structuring have been used to address quality improvement in services involving multiple sectors but not in combination with quantitative operational research methods that enable targeting of interventions according to patient risk. We aimed to combine these methods to augment and inform an improvement initiative concerning infants with congenital heart disease (CHD) whose complex care pathway spans multiple sectors. Methods: Soft systems methodology was used to consider systematically changes to services from the perspectives of community, primary, secondary and tertiary care professionals and a patient group, incorporating relevant evidence. Classification and regression tree (CART) analysis of national audit datasets was conducted along with data visualisation designed to inform service improvement within the context of limited resources. Results: A ‘Rich Picture’ was developed capturing the main features of services for infants with CHD pertinent to service improvement. This was used, along with a graphical summary of the CART analysis, to guide discussions about targeting interventions at specific patient risk groups. Agreement was reached across representatives of relevant health professions and patients on a coherent set of targeted recommendations for quality improvement. These fed into national decisions about service provision and commissioning. Conclusions: When tackling complex problems in service provision across multiple settings, it is important to acknowledge and work with multiple perspectives systematically and to consider targeting service improvements in response to confined resources. Our research demonstrates that applying a combination of qualitative and quantitative operational research methods is one approach to doing so that warrants further consideration.

  16. Evaluation of needle trap micro-extraction and solid-phase micro-extraction: Obtaining comprehensive information on volatile emissions from in vitro cultures.

    Science.gov (United States)

    Oertel, Peter; Bergmann, Andreas; Fischer, Sina; Trefz, Phillip; Küntzel, Anne; Reinhold, Petra; Köhler, Heike; Schubert, Jochen K; Miekisch, Wolfram

    2018-05-14

    Volatile organic compounds (VOCs) emitted from in vitro cultures may reveal information on species and metabolism. Owing to low nmol L(-1) concentration ranges, pre-concentration techniques are required for gas chromatography-mass spectrometry (GC-MS) based analyses. This study was intended to compare the efficiency of established micro-extraction techniques - solid-phase micro-extraction (SPME) and needle-trap micro-extraction (NTME) - for the analysis of complex VOC patterns. For SPME, a 75 μm Carboxen®/polydimethylsiloxane fiber was used. The NTME needle was packed with divinylbenzene, Carbopack X and Carboxen 1000. The headspace was sampled bi-directionally. Seventy-two VOCs were calibrated by reference standard mixtures in the range of 0.041-62.24 nmol L(-1) by means of GC-MS. Both pre-concentration methods were applied to profile VOCs from cultures of Mycobacterium avium ssp. paratuberculosis. Limits of detection ranged from 0.004 to 3.93 nmol L(-1) (median = 0.030 nmol L(-1)) for NTME and from 0.001 to 5.684 nmol L(-1) (median = 0.043 nmol L(-1)) for SPME. NTME showed advantages in assessing polar compounds such as alcohols. SPME showed advantages in reproducibility but disadvantages in sensitivity for N-containing compounds. Micro-extraction techniques such as SPME and NTME are well suited for trace VOC profiling over cultures if the limitations of each technique are taken into account. Copyright © 2018 John Wiley & Sons, Ltd.

  17. A COMPARATIVE ANALYSIS OF WEB INFORMATION EXTRACTION TECHNIQUES DEEP LEARNING vs. NAÏVE BAYES vs. BACK PROPAGATION NEURAL NETWORKS IN WEB DOCUMENT EXTRACTION

    Directory of Open Access Journals (Sweden)

    J. Sharmila

    2016-01-01

    Web mining research is becoming increasingly important because a large amount of information is managed through the web, and web usage is expanding in an uncontrolled way. A dedicated framework is required for handling such large volumes of information in the web space. Web mining is commonly divided into three major areas: web content mining, web usage mining and web structure mining. Tak-Lam Wong proposed a web content mining methodology based on Bayesian Networks (BN), learning to extract web data and discover attributes using a Bayesian approach. Motivated by that investigation, we propose a web content mining methodology based on a deep learning algorithm. Deep learning offers an advantage over BN in that BN does not incorporate a learning architecture of the kind used in the proposed system. The main objective of this investigation is web document extraction using different classification algorithms and their comparative analysis. This work extracts content from web URLs and compares three classification algorithms: a deep learning algorithm, a naïve Bayes algorithm and a back-propagation neural network (BPNN) algorithm. Deep learning is a powerful set of techniques for learning in neural networks, applied in areas such as computer vision, speech recognition, natural language processing and biometric systems; it also requires comparatively little time for classification. Naïve Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong independence assumptions between the features. The BPNN algorithm is then used for classification. The training and testing datasets contain many URLs, from which the content is extracted.
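
    For the naïve Bayes baseline, a conventional bag-of-words pipeline suffices; the toy corpus and labels below are illustrative, not the study's dataset.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        docs = ["cheap flights and hotel deals", "deep learning model for vision",
                "stock market update and analysis", "convolutional networks for speech"]
        labels = ["travel", "tech", "finance", "tech"]

        clf = make_pipeline(TfidfVectorizer(), MultinomialNB())  # features + classifier
        clf.fit(docs, labels)
        print(clf.predict(["neural network speech recognition"]))  # -> ['tech']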

  18. An empirical Bayes method for updating inferences in analysis of quantitative trait loci using information from related genome scans.

    Science.gov (United States)

    Zhang, Kui; Wiener, Howard; Beasley, Mark; George, Varghese; Amos, Christopher I; Allison, David B

    2006-08-01

    Individual genome scans for quantitative trait loci (QTL) mapping often suffer from low statistical power and imprecise estimates of QTL location and effect. This lack of precision yields large confidence intervals for QTL location, which are problematic for subsequent fine mapping and positional cloning. In prioritizing areas for follow-up after an initial genome scan and in evaluating the credibility of apparent linkage signals, investigators typically examine the results of other genome scans of the same phenotype and informally update their beliefs about which linkage signals in their scan most merit confidence and follow-up via a subjective-intuitive integration approach. A method that acknowledges the wisdom of this general paradigm but formally borrows information from other scans to increase confidence in objectivity would be a benefit. We developed an empirical Bayes analytic method to integrate information from multiple genome scans. The linkage statistic obtained from a single genome scan study is updated by incorporating statistics from other genome scans as prior information. This technique does not require that all studies have an identical marker map or a common estimated QTL effect. The updated linkage statistic can then be used for the estimation of QTL location and effect. We evaluate the performance of our method by using extensive simulations based on actual marker spacing and allele frequencies from available data. Results indicate that the empirical Bayes method can account for between-study heterogeneity, estimate the QTL location and effect more precisely, and provide narrower confidence intervals than results from any single individual study. We also compared the empirical Bayes method with a method originally developed for meta-analysis (a closely related but distinct purpose). In the face of marked heterogeneity among studies, the empirical Bayes method outperforms the comparator.

  19. Assessment of the ion-trap mass spectrometer for routine qualitative and quantitative analysis of drugs of abuse extracted from urine.

    Science.gov (United States)

    Vorce, S P; Sklerov, J H; Kalasinsky, K S

    2000-10-01

    The ion-trap mass spectrometer (MS) has been available as a detector for gas chromatography (GC) for nearly two decades. However, it still occupies a minor role in forensic toxicology drug-testing laboratories. Quadrupole MS instruments make up the majority of GC detectors used in drug confirmation. This work addresses the use of these two MS detectors, comparing the ion ratio precision and quantitative accuracy for the analysis of different classes of abused drugs extracted from urine. Urine specimens were prepared at five concentrations each for amphetamine (AMP), methamphetamine (METH), benzoylecgonine (BZE), delta9-carboxy-tetrahydrocannabinol (delta9-THCCOOH), phencyclidine (PCP), morphine (MOR), codeine (COD), and 6-acetylmorphine (6-AM). Concentration ranges for AMP, METH, BZE, delta9-THCCOOH, PCP, MOR, COD, and 6-AM were 50-2500, 50-5000, 15-800, 1.5-65, 1-250, 500-32000, 250-21000, and 1.5-118 ng/mL, respectively. Sample extracts were injected into a GC-quadrupole MS operating in selected ion monitoring (SIM) mode and a GC-ion-trap MS operating in either selected ion storage (SIS) or full scan (FS) mode. Precision was assessed by the evaluation of five ion ratios for n = 15 injections at each concentration using a single-point calibration. Precision measurements for SIM ion ratios provided coefficients of variation (CV) between 2.6 and 9.8% for all drugs. By comparison, the SIS and FS data yielded CV ranges of 4.0-12.8% and 4.0-11.2%, respectively. The total ion ratio failure rates were 0.2% (SIM), 0.7% (SIS), and 1.2% (FS) for the eight drugs analyzed. Overall, the SIS mode produced stable, comparable mean ratios over the concentration ranges examined, but had greater variance within batch runs. Examination of postmortem and quality-control samples produced forensically accurate quantitation by SIS when compared to SIM. Furthermore, sensitivity of FS was equivalent to SIM for all compounds examined except for 6-AM.

  20. One-step extraction and quantitation of toxic alcohols and ethylene glycol in plasma by capillary gas chromatography (GC) with flame ionization detection (FID).

    Science.gov (United States)

    Orton, Dennis J; Boyd, Jessica M; Affleck, Darlene; Duce, Donna; Walsh, Warren; Seiden-Long, Isolde

    2016-01-01

    Clinical analysis of volatile alcohols (i.e. methanol, ethanol, isopropanol, and the metabolite acetone) and ethylene glycol (EG) generally employs separate gas chromatography (GC) methods for analysis. Here, a method for combined analysis of volatile alcohols and EG is described. Volatile alcohols and EG were extracted with 2:1 (v:v) acetonitrile containing the internal standards (IS) 1,2-butanediol (for EG) and n-propanol (for alcohols). Samples were analyzed on an Agilent 6890 GC-FID. The method was evaluated for precision, accuracy, reproducibility, linearity, selectivity and limit of quantitation (LOQ), followed by correlation to existing GC methods using patient samples, Bio-Rad QC, and in-house prepared QC material. Inter-day precision was from 6.5-11.3% CV, and linearity was verified from 0.6 mmol/L up to 150 mmol/L for each analyte. The method showed good recovery (~100%) and the LOQ was calculated to be between 0.25 and 0.44 mmol/L. Patient correlation against current GC methods showed good agreement (slopes from 1.03-1.12, and y-intercepts from 0 to 0.85 mmol/L; R(2)>0.98; N=35). Carryover was negligible for volatile alcohols in the measuring range, and of the potential interferences tested, only toluene and 1,3-propanediol interfered. The method was able to resolve 2,3-butanediol, diethylene glycol, and propylene glycol in addition to the peaks quantified. Here we describe a simple procedure for simultaneous analysis of EG and volatile alcohols that comes at low cost and with a simple liquid-liquid extraction requiring no derivatization to obtain adequate sensitivity for clinical specimens. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
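
    Quantitation against an internal standard reduces to a linear fit of peak-area ratios versus calibrator concentrations; the numbers below are made up for illustration.

        import numpy as np

        cal_conc = np.array([0.6, 5.0, 25.0, 75.0, 150.0])     # mmol/L calibrators
        cal_ratio = np.array([0.05, 0.41, 2.02, 6.10, 12.30])  # area(analyte)/area(IS)
        slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)

        def quantify(area_analyte, area_is):
            """Peak-area ratio -> concentration via the calibration line."""
            return (area_analyte / area_is - intercept) / slope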

  1. Information extraction from dynamic PS-InSAR time series using machine learning

    Science.gov (United States)

    van de Kerkhof, B.; Pankratius, V.; Chang, L.; van Swol, R.; Hanssen, R. F.

    2017-12-01

    Due to the increasing number of SAR satellites, with shorter repeat intervals and higher resolutions, SAR data volumes are exploding. Time series analyses of SAR data, i.e. Persistent Scatterer (PS) InSAR, enable the deformation monitoring of the built environment at an unprecedented scale, with hundreds of scatterers per km2, updated weekly. Potential hazards, e.g. due to failure of aging infrastructure, can be detected at an early stage. Yet, this requires the operational data processing of billions of measurement points, over hundreds of epochs, updating this data set dynamically as new data come in, and testing whether points (start to) behave in an anomalous way. Moreover, the quality of PS-InSAR measurements is ambiguous and heterogeneous, which will yield false positives and false negatives. Such analyses are numerically challenging. Here we extract relevant information from PS-InSAR time series using machine learning algorithms. We cluster (group together) time series with similar behaviour, even though they may not be spatially close, such that the results can be used for further analysis. First we reduce the dimensionality of the dataset in order to be able to cluster the data, since applying clustering techniques on high-dimensional datasets often yields unsatisfactory results. Our approach is to apply t-distributed Stochastic Neighbor Embedding (t-SNE), a machine learning algorithm for dimensionality reduction of high-dimensional data to a 2D or 3D map, and cluster this result using Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The results show that we are able to detect and cluster time series with similar behaviour, which is the starting point for more extensive analysis into the underlying driving mechanisms. The results of the methods are compared to conventional hypothesis testing as well as a Self-Organising Map (SOM) approach. Hypothesis testing is robust and takes the stochastic nature of the observations into account.
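
    A minimal version of the described pipeline (t-SNE for dimensionality reduction, then DBSCAN for clustering) could look as follows; the input file and the perplexity/eps settings are assumptions.

        import numpy as np
        from sklearn.manifold import TSNE
        from sklearn.cluster import DBSCAN

        ts = np.load("ps_insar_series.npy")   # assumed array, shape (n_points, n_epochs)

        # Embed each displacement time series into 2-D before clustering
        emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(ts)

        # Density-based clustering groups series with similar temporal behaviour
        labels = DBSCAN(eps=2.0, min_samples=10).fit_predict(emb)
        print(f"{labels.max() + 1} clusters, {(labels == -1).sum()} noise points")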

  2. Synthesis of High-Frequency Ground Motion Using Information Extracted from Low-Frequency Ground Motion

    Science.gov (United States)

    Iwaki, A.; Fujiwara, H.

    2012-12-01

    Broadband ground motion computations of scenario earthquakes are often based on hybrid methods that are the combinations of deterministic approach in lower frequency band and stochastic approach in higher frequency band. Typical computation methods for low-frequency and high-frequency (LF and HF, respectively) ground motions are the numerical simulations, such as finite-difference and finite-element methods based on three-dimensional velocity structure model, and the stochastic Green's function method, respectively. In such hybrid methods, LF and HF wave fields are generated through two different methods that are completely independent of each other, and are combined at the matching frequency. However, LF and HF wave fields are essentially not independent as long as they are from the same event. In this study, we focus on the relation among acceleration envelopes at different frequency bands, and attempt to synthesize HF ground motion using the information extracted from LF ground motion, aiming to propose a new method for broadband strong motion prediction. Our study area is Kanto area, Japan. We use the K-NET and KiK-net surface acceleration data and compute RMS envelopes in five frequency bands: 0.5-1.0 Hz, 1.0-2.0 Hz, 2.0-4.0 Hz, 4.0-8.0 Hz, and 8.0-16.0 Hz. Taking the ratio of the envelopes of adjacent bands, we find that the envelope ratios have stable shapes at each site. The empirical envelope-ratio characteristics are combined with the low-frequency envelope of the target earthquake to synthesize HF ground motion. We have applied the method to M5-class earthquakes and an M7 target earthquake that occurred in the vicinity of Kanto area, and successfully reproduced the observed HF ground motion of the target earthquake. The method can be applied to a broadband ground motion simulation for a scenario earthquake by combining numerically-computed low-frequency (~1 Hz) ground motion with the empirical envelope-ratio characteristics to generate broadband ground motion.
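
    The band-wise RMS envelopes at the core of the method can be computed with standard filtering; the filter order and smoothing window below are assumptions, not the authors' settings.

        import numpy as np
        from scipy.signal import butter, filtfilt

        def band_rms_envelope(acc, fs, f_lo, f_hi, win_s=1.0):
            """RMS envelope of an acceleration record in one frequency band."""
            b, a = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs)
            x = filtfilt(b, a, acc)
            n = int(win_s * fs)                               # smoothing window
            return np.sqrt(np.convolve(x**2, np.ones(n) / n, mode="same"))

        # Ratio of adjacent bands, e.g. (1-2 Hz)/(0.5-1 Hz), characterises a site:
        # ratio = band_rms_envelope(acc, fs, 1, 2) / band_rms_envelope(acc, fs, 0.5, 1)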

  3. Quantitative determination of sotolon, maltol and free furaneol in wine by solid-phase extraction and gas chromatography-ion-trap mass spectrometry.

    Science.gov (United States)

    Ferreira, Vicente; Jarauta, Idoia; López, Ricardo; Cacho, Juan

    2003-08-22

    A method for the analytical determination of sotolon [4,5-dimethyl-3-hydroxy-2(5H)-furanone], maltol [3-hydroxy-2-methyl-4H-pyran-4-one] and free furaneol [2,5-dimethyl-4-hydroxy-3(2H)-furanone] in wine has been developed. The analytes are extracted from 50 ml of wine in a solid-phase extraction cartridge filled with 800 mg of LiChrolut EN resins. Interferences are removed with 15 ml of a pentane-dichloromethane (20:1) solution, and analytes are recovered with 6 ml of dichloromethane. The extract is concentrated up to 0.1 ml and analyzed by GC-ion-trap MS. Maltol and sotolon were determined by selected ion storage of ions in the m/z ranges 120-153 and 79-95, using the ions m/z 126 and 83 for quantitation, respectively. Furaneol was determined by non-resonant fragmentation of the m/z 128 mother ion and subsequent analysis of the m/z 81 ion. The detection limits of the method are in all cases between 0.5 and 1 microg l(-1), well below the olfactory thresholds of the compounds. The precision of the method is in the 4-5% range for levels in wine around 20 microg l(-1). Linearity holds at least up to 400 microg l(-1), and is satisfactory in all cases. The recoveries of maltol and sotolon are constant (70 and 64%, respectively) and do not depend on the type of wine. In contrast, in the case of furaneol, red wines show constant and high recoveries (97%), while the recoveries on white wines range between 30 and 80%. Different experiments showed that this behavior is probably due to the existence of complexes formed between furaneol and sulphur dioxide or catechols. Sensory experiments confirmed that the complexed forms found in white wines are not perceived by orthonasal olfaction, and that the furaneol determined by the method can be considered as the free and odor-active fraction.

  4. TempoWordNet: a lexical resource for temporal information extraction

    OpenAIRE

    Hasanuzzaman , Mohammed

    2016-01-01

    The ability to capture the time information conveyed in natural language, where that information is expressed either explicitly, implicitly, or connotatively, is essential to many natural language processing applications such as information retrieval, question answering, automatic summarization, targeted marketing, loan repayment forecasting, and understanding economic patterns. Associating word senses with temporal orientation to grasp the temporal information in language is relatively stra...

  5. Chemical characterization of the aroma of Grenache rosé wines: aroma extract dilution analysis, quantitative determination, and sensory reconstitution studies.

    Science.gov (United States)

    Ferreira, Vicente; Ortín, Natalia; Escudero, Ana; López, Ricardo; Cacho, Juan

    2002-07-03

    The aroma of a Grenache rosé wine from Calatayud (Zaragoza, Spain) has been elucidated following a strategy consisting of an aroma extract dilution analysis (AEDA), followed by the quantitative analysis of the main odorants and the determination of odor activity values (OAVs) and, finally, by a series of reconstitution and omission tests with synthetic aroma models. Thirty-eight aroma compounds were found in the AEDA study, 35 of which were identified. Twenty-one compounds were at concentrations higher than their corresponding odor thresholds. An aroma model prepared by mixing the 24 compounds with OAV > 0.5 in a synthetic wine showed a high qualitative similarity with the aroma of the rosé wine. The aroma of a model containing only the compounds with OAV > 10 was, however, very different from that of the wine. Omission tests revealed that the most important odorant of this Grenache rosé wine was 3-mercapto-1-hexanol, with a deep impact on the wine fruity and citric notes. The synergic action of Furaneol and homofuraneol also had an important impact on wine aroma, particularly in its fruity and caramel notes. The omission of beta-damascenone, which had the second highest OAV, caused only a slight decrease on the intensity of the aroma model. Still weaker was the sensory effect caused by the omission of 10 other compounds, such as fatty acids and their ethyl esters, isoamyl acetate, and higher alcohols.
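
    The OAV computation itself is a one-liner: concentration divided by odor threshold. The values below are illustrative, not the paper's measurements (thresholds are matrix-dependent).

        odorants = {  # name: (concentration ug/L, odor threshold ug/L) -- made-up values
            "3-mercapto-1-hexanol": (0.9, 0.05),
            "beta-damascenone": (2.1, 0.05),
            "isoamyl acetate": (450.0, 30.0),
        }
        for name, (conc, thr) in odorants.items():
            oav = conc / thr
            print(f"{name}: OAV = {oav:.1f} ({'active' if oav > 1 else 'below threshold'})")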

  6. Characterization and comparison of key aroma compounds in raw and dry porcini mushroom (Boletus edulis) by aroma extract dilution analysis, quantitation and aroma recombination experiments.

    Science.gov (United States)

    Zhang, Huiying; Pu, Dandan; Sun, Baoguo; Ren, Fazheng; Zhang, Yuyu; Chen, Haitao

    2018-08-30

    A study was carried out to determine and compare the key aroma compounds in raw and dry porcini mushroom (Boletus edulis). The volatile fractions were prepared by solvent-assisted flavor evaporation (SAFE), and aroma extract dilution analysis (AEDA) combined with gas chromatography-mass spectrometry (GC-MS) was employed to identify the odorants. Selected aroma compounds were quantitated and odor activity values (OAVs) were calculated, revealing OAVs ≥ 1 for 12 compounds in raw porcini, among which 1-octen-3-one showed the highest OAV. In addition to compounds with eight carbon atoms, 3-methylbutanal, (E,E)-2,4-decadienal and (E,E)-2,4-nonadienal were also responsible for the unique aroma profile. In the dry mushroom, OAVs ≥ 1 were obtained for 20 odorants. Among them, 3-(methylthio)propanal, 1-octen-3-one and pyrazines were determined as predominant odorants. Overall, drying increased the complexity of the volatile compounds, thus significantly changing the aroma profile of porcini, providing more desirable roasted and seasoning-like flavor and less grass-like and earthy notes. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. A novel strategy with standardized reference extract qualification and single compound quantitative evaluation for quality control of Panax notoginseng used as a functional food.

    Science.gov (United States)

    Li, S P; Qiao, C F; Chen, Y W; Zhao, J; Cui, X M; Zhang, Q W; Liu, X M; Hu, D J

    2013-10-25

    Root of Panax notoginseng (Burk.) F.H. Chen (Sanqi in Chinese) is a traditional Chinese medicine (TCM)-based functional food. Saponins are the major bioactive components. The shortage of reference compounds or chemical standards is one of the main bottlenecks for quality control of TCMs. A novel strategy, i.e. qualification by a standardized reference extract combined with direct quantitative estimation of multiple analytes via a single calibrated component, was proposed to easily and effectively control the quality of natural functional foods such as Sanqi. The feasibility and credibility of this methodology were also assessed with a newly developed fast HPLC method. Five saponins, including ginsenoside Rg1, Re, Rb1, Rd and notoginsenoside R1, were rapidly separated using a conventional HPLC in 20 min. The quantification method was also compared with the individual calibration curve method. The strategy is feasible and credible, and is easily and effectively adapted for improving the quality control of natural functional foods such as Sanqi. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Quantitative measurements in laser induced plasmas using optical probing. Progress report, October 1, 1977--April 30, 1978

    International Nuclear Information System (INIS)

    Sweeney, D.W.

    1978-06-01

    Optical probing of laser-induced plasmas can be used to quantitatively reconstruct electron number densities and magnetic fields. Numerical techniques for extracting quantitative information from the experimental data are described, and four Abel inversion codes are provided. A computer simulation of optical probing is used to determine the quantitative information that can be reasonably extracted from real experimental systems. Examples of reconstructed electron number densities from interferograms of laser plasmas show steepened electron distributions.
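
    Abel inversion recovers a radial profile f(r) from a line-integrated projection F(y) via f(r) = -(1/π) ∫ F'(y)/√(y² − r²) dy, integrated from y = r outward. A crude discretization is sketched below; production codes (like the four mentioned) need careful smoothing of the derivative.

        import numpy as np

        def abel_invert(F, y):
            """Naive Abel inversion of F(y) on an increasing grid y >= 0 (sketch)."""
            dFdy = np.gradient(F, y)
            f = np.zeros_like(F, dtype=float)
            for i, r in enumerate(y[:-1]):
                yy, dd = y[i + 1:], dFdy[i + 1:]   # integrate outward from y > r
                f[i] = -np.trapz(dd / np.sqrt(yy**2 - r**2), yy) / np.pi
            return f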

  9. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction

    OpenAIRE

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2011-01-01

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. This tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed.

  10. The BEL information extraction workflow (BELIEF): evaluation in the BioCreative V BEL and IAT track

    OpenAIRE

    Madan, Sumit; Hodapp, Sven; Senger, Philipp; Ansari, Sam; Szostak, Justyna; Hoeng, Julia; Peitsch, Manuel; Fluck, Juliane

    2016-01-01

    Network-based approaches have become extremely important in systems biology to achieve a better understanding of biological mechanisms. For network representation, the Biological Expression Language (BEL) is well designed to collate findings from the scientific literature into biological network models. To facilitate encoding and biocuration of such findings in BEL, a BEL Information Extraction Workflow (BELIEF) was developed. BELIEF provides a web-based curation interface, the BELIEF Dashboa...

  11. An Investigation of the Relationship Between Automated Machine Translation Evaluation Metrics and User Performance on an Information Extraction Task

    Science.gov (United States)

    2007-01-01

    This report examines the relationship between automated machine translation (MT) evaluation metrics, such as BLEU and METEOR (the latter developed at Carnegie Mellon and argued by its authors to be more reliable than BLEU and easier to understand in terms familiar to NLP researchers), and user performance on an information extraction task. In the experiment, users extracted essential elements of information from output generated by three types of Arabic-English MT engines. The information extraction experiment was one of three tasks studied; it was designed by reviewing the task hierarchy and examining the MT output of several engines, following a small prior pilot experiment to evaluate Arabic-English MT engines for the task.

  12. Comparison of Qinzhou bay wetland landscape information extraction by three methods

    Directory of Open Access Journals (Sweden)

    X. Chang

    2014-04-01

    and OO is 219 km2, 193.70 km2 and 217.40 km2, respectively. The results indicate that SC performs best, followed by the OO approach and then the DT method, when used to extract Qinzhou Bay coastal wetland.

  13. Extracting topographic structure from digital elevation data for geographic information-system analysis

    Science.gov (United States)

    Jenson, Susan K.; Domingue, Julia O.

    1988-01-01

    Software tools have been developed at the U.S. Geological Survey's EROS Data Center to extract topographic structure and to delineate watersheds and overland flow paths from digital elevation models. The tools are special-purpose FORTRAN programs interfaced with general-purpose raster and vector spatial analysis and relational database management packages.
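
    A core step in such tools is assigning each DEM cell a flow direction toward its steepest-descent neighbour (the D8 scheme). The sketch below is a generic illustration in Python rather than the original FORTRAN.

        import numpy as np

        OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]

        def d8_flow_direction(dem):
            """Index into OFFSETS of the steepest downhill neighbour; -1 = pit/edge."""
            out = np.full(dem.shape, -1)
            for r in range(1, dem.shape[0] - 1):
                for c in range(1, dem.shape[1] - 1):
                    drops = [(dem[r, c] - dem[r + dr, c + dc]) / np.hypot(dr, dc)
                             for dr, dc in OFFSETS]
                    k = int(np.argmax(drops))
                    if drops[k] > 0:           # only assign if some neighbour is lower
                        out[r, c] = k
            return out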

  14. Leadership training in a family medicine residency program: Cross-sectional quantitative survey to inform curriculum development.

    Science.gov (United States)

    Gallagher, Erin; Moore, Ainsley; Schabort, Inge

    2017-03-01

    To assess the current status of leadership training as perceived by family medicine residents to inform the development of a formal leadership curriculum. Cross-sectional quantitative survey. Department of Family Medicine at McMaster University in Hamilton, Ont, in December 2013. A total of 152 first- and second-year family medicine residents. Family medicine residents' attitudes toward leadership, perceived level of training in various leadership domains, and identified opportunities for leadership training. Overall, 80% (152 of 190) of residents completed the survey. On a Likert scale (1 = strongly disagree, 4 = neutral, 7 = strongly agree), residents rated the importance of physician leadership in the clinical setting as high (6.23 of 7), whereas agreement with the statement "I am a leader" received the lowest rating (5.28 of 7). At least 50% of residents desired more training in the leadership domains of personal mastery, mentorship and coaching, conflict resolution, teaching, effective teamwork, administration, ideals of a healthy workplace, coalitions, and system transformation. At least 50% of residents identified behavioural sciences seminars, a lecture and workshop series, and a retreat as opportunities to expand leadership training. The concept of family physicians as leaders resonated highly with residents. Residents desired more personal and system-level leadership training. They also identified ways that leadership training could be expanded in the current curriculum and developed in other areas. The information gained from this survey might facilitate leadership development among residents through application of its results in a formal leadership curriculum. Copyright© the College of Family Physicians of Canada.

  15. Systematically extracting metal- and solvent-related occupational information from free-text responses to lifetime occupational history questionnaires.

    Science.gov (United States)

    Friesen, Melissa C; Locke, Sarah J; Tornow, Carina; Chen, Yu-Cheng; Koh, Dong-Hee; Stewart, Patricia A; Purdue, Mark; Colt, Joanne S

    2014-06-01

    Lifetime occupational history (OH) questionnaires often use open-ended questions to capture detailed information about study participants' jobs. Exposure assessors use this information, along with responses to job- and industry-specific questionnaires, to assign exposure estimates on a job-by-job basis. An alternative approach is to use information from the OH responses and the job- and industry-specific questionnaires to develop programmable decision rules for assigning exposures. As a first step in this process, we developed a systematic approach to extract the free-text OH responses and convert them into standardized variables that represented exposure scenarios. Our study population comprised 2408 subjects, reporting 11,991 jobs, from a case-control study of renal cell carcinoma. Each subject completed a lifetime OH questionnaire that included verbatim responses, for each job, to open-ended questions including job title, main tasks and activities (task), tools and equipment used (tools), and chemicals and materials handled (chemicals). Based on a review of the literature, we identified exposure scenarios (occupations, industries, tasks/tools/chemicals) expected to involve possible exposure to chlorinated solvents, trichloroethylene (TCE) in particular, lead, and cadmium. We then used a SAS macro to review the information reported by study participants to identify jobs associated with each exposure scenario; this was done using previously coded standardized occupation and industry classification codes, and a priori lists of associated key words and phrases related to possibly exposed tasks, tools, and chemicals. Exposure variables representing the occupation, industry, and task/tool/chemicals exposure scenarios were added to the work history records of the study respondents. Our identification of possibly TCE-exposed scenarios in the OH responses was compared to an expert's independently assigned probability ratings to evaluate whether we missed identifying possibly exposed jobs.
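
    The keyword-matching step can be pictured as a small rule engine over the free-text fields; the keyword list and field names below are illustrative, not the study's curated lists or its SAS implementation.

        import re

        TCE_PATTERNS = [r"\btrichloroethylene\b", r"\btce\b", r"\bdegreas\w*"]

        def flag_possible_tce(job):
            """True if any free-text field of a job record matches a keyword."""
            text = " ".join(job.get(f, "") for f in
                            ("title", "task", "tools", "chemicals")).lower()
            return any(re.search(p, text) for p in TCE_PATTERNS)

        job = {"title": "machinist", "task": "degreasing metal parts", "chemicals": "TCE"}
        print(flag_possible_tce(job))  # True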

  16. A Qualitative and Quantitative Comparative Analysis of Commercial and Independent Online Information for Hip Surgery: A Bias in Online Information Targeting Patients?

    Science.gov (United States)

    Kelly, Martin J; Feeley, Iain H; O'Byrne, John M

    2016-10-01

    Direct to consumer (DTC) advertising, targeting the public over the physician, is an increasingly pervasive presence in medical clinics. It is trending toward a format of online interaction rather than that of traditional print and television advertising. We analyze patient-focused Web pages from the top 5 companies supplying prostheses for total hip arthroplasties, comparing them to the top 10 independent medical websites. Quantitative comparison is performed using the Journal of the American Medical Association (JAMA) benchmark and DISCERN criteria, and for comparative readability, we use the Flesch-Kincaid grade level, the Flesch reading ease, and the Gunning fog index. Content is analyzed for information on type of surgery and surgical approach. There is a statistically significant difference between the independent and DTC websites in both the mean DISCERN score (independent 74.6, standard deviation [SD] = 4.77; DTC 32.2, SD = 10.28; P = .0022) and the mean JAMA score (independent 3.45, SD = 0.49; DTC 1.9, SD = 0.74; P = .004). The difference between the readability scores is not statistically significant. The commercial content is found to be heavily biased in favor of the direct anterior approach and minimally invasive surgical techniques. We demonstrate that the quality of information on commercial websites is inferior to that of the independent sites. The advocacy of surgical approaches by industry to the patient group is a concern. This study underlines the importance of future regulation of commercial patient education Web pages. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. MIDAS. An algorithm for the extraction of modal information from experimentally determined transfer functions

    International Nuclear Information System (INIS)

    Durrans, R.F.

    1978-12-01

    In order to design reactor structures to withstand the large flow and acoustic forces present, it is necessary to know something of their dynamic properties. In many cases these properties cannot be predicted theoretically, and it is necessary to determine them experimentally. The algorithm MIDAS (Modal Identification for the Dynamic Analysis of Structures), which has been developed at B.N.L. for extracting these structural properties from experimental data, is described. (author)

  18. Extracting Information about the Initial State from the Black Hole Radiation.

    Science.gov (United States)

    Lochan, Kinjalk; Padmanabhan, T

    2016-02-05

    The crux of the black hole information paradox is related to the fact that the complete information about the initial state of a quantum field in a collapsing spacetime is not available to future asymptotic observers, belying the expectations from a unitary quantum theory. We study the imprints of the initial quantum state contained in a specific class of distortions of the black hole radiation and identify the classes of in states that can be partially or fully reconstructed from the information contained within. Even for the general in state, we can uncover some specific information. These results suggest that a classical collapse scenario ignores this richness of information in the resulting spectrum and a consistent quantum treatment of the entire collapse process might allow us to retrieve much more information from the spectrum of the final radiation.

  19. Point Cloud Classification of Tesserae from Terrestrial Laser Data Combined with Dense Image Matching for Archaeological Information Extraction

    Science.gov (United States)

    Poux, F.; Neuville, R.; Billen, R.

    2017-08-01

    Reasoning from information extraction given by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud from both terrestrial laser scanning and dense image matching. Using 18 features, including sensor-bias data, each tessera in the high-density point cloud from the 3D-captured complex mosaics of Germigny-des-Prés (France) is segmented via a colour-based multi-scale abstraction that extracts connectivity. A 2D surface and outline polygon of each tessera are generated by RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify every tessera based on its size, surface, shape, material properties and its neighbours' classes. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.
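
    The per-tessera geometry step (RANSAC plane extraction) can be sketched as follows; the tolerance and iteration count are assumptions, and a convex hull of the returned inliers would then give the 2D outline.

        import numpy as np

        def ransac_plane(points, n_iter=500, tol=0.002, seed=0):
            """Boolean inlier mask of the dominant plane in an (N, 3) point array."""
            rng = np.random.default_rng(seed)
            best = np.zeros(len(points), dtype=bool)
            for _ in range(n_iter):
                p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(p1 - p0, p2 - p0)
                if np.linalg.norm(n) < 1e-12:      # degenerate (collinear) sample
                    continue
                n = n / np.linalg.norm(n)
                inliers = np.abs((points - p0) @ n) < tol
                if inliers.sum() > best.sum():
                    best = inliers
            return best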

  20. Sensitive UHPLC-MS/MS quantitation and pharmacokinetic comparisons of multiple alkaloids from Fuzi-Beimu and single-herb aqueous extracts following oral delivery in rats.

    Science.gov (United States)

    Xu, Yanyan; Li, Yamei; Zhang, Pengjie; Yang, Bin; Wu, Huanyu; Guo, Xuejun; Li, Yubo; Zhang, Yanjun

    2017-07-15

    Aconiti Lateralis Radix Praeparata-Fritillariae Thunbergii Bulbus, namely Fuzi-Beimu in Chinese, is a classic herb pair whose combined administration was prohibited according to the rule of "Eighteen antagonisms". However, the incompatibility of Fuzi and Beimu has become controversial because of the application supported by many recorded ancient prescriptions and increasing modern research and clinical practice. The present study aimed to investigate the pharmacokinetic differences of multiple alkaloids from Fuzi-Beimu and the single-herb aqueous extracts following oral delivery in rats. Twelve alkaloids, including aconitine, mesaconitine, hypaconitine, benzoylaconitine, benzoylmesaconitine, benzoylhypaconitine, neoline, fuziline, talatisamine, chasmanine, peimine and peimisine, in rat plasma were simultaneously quantitated by using sensitive ultra-high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS), with the method developed and fully validated. Plasma concentrations of the twelve alkaloids after administration were determined and pharmacokinetic parameters were compared. Significant differences were observed for all alkaloids except aconitine, mesaconitine and benzoylaconitine for the Fuzi-Beimu group in comparison with the single-herb group. AUC(0-t) and T(1/2) of hypaconitine were increased significantly. AUC(0-t) and C(max) were increased and T(max) decreased significantly for benzoylmesaconitine and benzoylhypaconitine. Fuziline showed significantly increased AUC(0-t), C(max) and T(max). T(1/2) of neoline was notably increased. T(1/2) and T(max) were significantly elevated for talatisamine while C(max) decreased. T(max) of chasmanine was significantly increased and C(max) decreased. An extremely significant increase of T(max) was found for peimisine, and a significant increase of T(1/2) for peimine. Results revealed that combined use of Fuzi and Beimu significantly influenced the systemic exposure and pharmacokinetic behaviors of multiple alkaloids from both herbs.

  1. Lung region extraction based on the model information and the inversed MIP method by using chest CT images

    International Nuclear Information System (INIS)

    Tomita, Toshihiro; Miguchi, Ryosuke; Okumura, Toshiaki; Yamamoto, Shinji; Matsumoto, Mitsuomi; Tateno, Yukio; Iinuma, Takeshi; Matsumoto, Toru.

    1997-01-01

    We developed a lung region extraction method based on model information and the inversed MIP method in the Lung Cancer Screening CT (LSCT). The original model is composed of typical 3-D lung contour lines, a body axis, an apical point, and a convex hull. First, the body axis, the apical point, and the convex hull are automatically extracted from the input image. Next, the model is properly transformed to fit those of the input image by an affine transformation. Using the same affine transformation coefficients, the typical lung contour lines are also transferred, and correspond to rough contour lines of the input image. Experimental results for 68 samples showed this method to be quite promising. (author)

  2. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction.

    Science.gov (United States)

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2010-11-13

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. This tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested our tool on its impact to the task of PPI extraction and it improved the f-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.

  3. Unsupervised improvement of named entity extraction in short informal context using disambiguation clues

    NARCIS (Netherlands)

    Habib, Mena Badieh; van Keulen, Maurice

    2012-01-01

    Short context messages (like tweets and SMS's) are a potentially rich source of continuously and instantly updated information. Shortness and informality of such messages are challenges for natural language processing tasks. Most efforts in this direction rely on machine learning techniques.

  4. Automated Methods to Extract Patient New Information from Clinical Notes in Electronic Health Record Systems

    Science.gov (United States)

    Zhang, Rui

    2013-01-01

    The widespread adoption of Electronic Health Record (EHR) has resulted in rapid text proliferation within clinical care. Clinicians' use of copying and pasting functions in EHR systems further compounds this by creating a large amount of redundant clinical information in clinical documents. A mixture of redundant information (especially outdated…

  5. Extracting principles for information management adaptability during crisis response : A dynamic capability view

    NARCIS (Netherlands)

    Bharosa, N.; Janssen, M.F.W.H.A.

    2010-01-01

    During crises, relief agency commanders have to make decisions in a complex and uncertain environment, requiring them to continuously adapt to unforeseen environmental changes. In the process of adaptation, the commanders depend on information management systems for information. Yet there are still

  6. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Iran, Islamic Republic of)

    2012-05-15

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI based variable selection algorithms.

  7. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2012-01-01

    Selection of the most informative molecular descriptors from the original data set is a key step for development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-life of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI based variable selection algorithms.
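
    A simplified stand-in for the MIMRCV idea: rank descriptors by mutual information with the property, then skip candidates that are nearly collinear with descriptors already chosen. The thresholds below are assumptions, not the published algorithm.

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        def select_mi_noncollinear(X, y, k=10, r_max=0.9):
            """Indices of up to k descriptors from X (n_samples, n_features)."""
            mi = mutual_info_regression(X, y, random_state=0)
            corr = np.corrcoef(X, rowvar=False)
            chosen = []
            for j in np.argsort(mi)[::-1]:               # by decreasing MI with y
                if all(abs(corr[j, c]) < r_max for c in chosen):
                    chosen.append(j)
                if len(chosen) == k:
                    break
            return chosen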

  8. Extracting protein dynamics information from overlapped NMR signals using relaxation dispersion difference NMR spectroscopy.

    Science.gov (United States)

    Konuma, Tsuyoshi; Harada, Erisa; Sugase, Kenji

    2015-12-01

    Protein dynamics plays important roles in many biological events, such as ligand binding and enzyme reactions. NMR is mostly used for investigating such protein dynamics in a site-specific manner. Recently, NMR has been actively applied to large proteins and intrinsically disordered proteins, which are attractive research targets. However, signal overlap, which is often observed for such proteins, hampers accurate analysis of NMR data. In this study, we have developed a new methodology called relaxation dispersion difference that can extract conformational exchange parameters from overlapped NMR signals measured using relaxation dispersion spectroscopy. In relaxation dispersion measurements, the signal intensities of fluctuating residues vary according to the Carr-Purcell-Meiboom-Gill pulsing interval, whereas those of non-fluctuating residues are constant. Therefore, subtraction of each relaxation dispersion spectrum from that with the highest signal intensities, measured at the shortest pulsing interval, leaves only the signals of the fluctuating residues. This is the principle of the relaxation dispersion difference method. This new method enabled us to extract exchange parameters from overlapped signals of heme oxygenase-1, which is a relatively large protein. The results indicate that the structural flexibility of a kink in the heme-binding site is important for efficient heme binding. Relaxation dispersion difference requires neither selectively labeled samples nor modification of pulse programs; thus it will have wide applications in protein dynamics analysis.

  9. Extracting protein dynamics information from overlapped NMR signals using relaxation dispersion difference NMR spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Konuma, Tsuyoshi [Icahn School of Medicine at Mount Sinai, Department of Structural and Chemical Biology (United States); Harada, Erisa [Suntory Foundation for Life Sciences, Bioorganic Research Institute (Japan); Sugase, Kenji, E-mail: sugase@sunbor.or.jp, E-mail: sugase@moleng.kyoto-u.ac.jp [Kyoto University, Department of Molecular Engineering, Graduate School of Engineering (Japan)

    2015-12-15

    Protein dynamics plays important roles in many biological events, such as ligand binding and enzyme reactions. NMR is mostly used for investigating such protein dynamics in a site-specific manner. Recently, NMR has been actively applied to large proteins and intrinsically disordered proteins, which are attractive research targets. However, signal overlap, which is often observed for such proteins, hampers accurate analysis of NMR data. In this study, we have developed a new methodology called relaxation dispersion difference that can extract conformational exchange parameters from overlapped NMR signals measured using relaxation dispersion spectroscopy. In relaxation dispersion measurements, the signal intensities of fluctuating residues vary according to the Carr-Purcell-Meiboom-Gill pulsing interval, whereas those of non-fluctuating residues are constant. Therefore, subtraction of each relaxation dispersion spectrum from that with the highest signal intensities, measured at the shortest pulsing interval, leaves only the signals of the fluctuating residues. This is the principle of the relaxation dispersion difference method. This new method enabled us to extract exchange parameters from overlapped signals of heme oxygenase-1, which is a relatively large protein. The results indicate that the structural flexibility of a kink in the heme-binding site is important for efficient heme binding. Relaxation dispersion difference requires neither selectively labeled samples nor modification of pulse programs; thus it will have wide applications in protein dynamics analysis.
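
    The subtraction at the heart of the method is simple array arithmetic once the spectra are on a common grid; the array shapes and the rate convention below are assumptions for illustration.

        import numpy as np

        def dispersion_difference(spectra, cpmg_freqs):
            """Difference spectra relative to the highest-nu_CPMG reference.

            spectra: array (n_freqs, n1, n2) of 2D spectra on a common grid.
            Signals of non-fluctuating residues cancel in the differences,
            leaving only residues undergoing conformational exchange."""
            ref = spectra[np.argmax(cpmg_freqs)]   # shortest pulsing interval
            return ref[None, :, :] - spectra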

  10. Wavelet analysis of molecular dynamics: Efficient extraction of time-frequency information in ultrafast optical processes

    International Nuclear Information System (INIS)

    Prior, Javier; Castro, Enrique; Chin, Alex W.; Almeida, Javier; Huelga, Susana F.; Plenio, Martin B.

    2013-01-01

    New experimental techniques based on nonlinear ultrafast spectroscopies have been developed over the last few years, and have been demonstrated to provide powerful probes of quantum dynamics in different types of molecular aggregates, including both natural and artificial light harvesting complexes. Fourier transform-based spectroscopies have been particularly successful, yet “complete” spectral information normally necessitates the loss of all information on the temporal sequence of events in a signal. This information though is particularly important in transient or multi-stage processes, in which the spectral decomposition of the data evolves in time. By going through several examples of ultrafast quantum dynamics, we demonstrate that the use of wavelets provides an efficient and accurate way to simultaneously acquire both temporal and frequency information about a signal, and argue that this greatly aids the elucidation and interpretation of physical processes responsible for non-stationary spectroscopic features, such as those encountered in coherent excitonic energy transport
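
    A continuous wavelet transform illustrates the time-frequency trade-off being described: unlike a plain Fourier transform, transient changes in spectral content stay localized in time. The sketch assumes the PyWavelets package is available.

        import numpy as np
        import pywt  # PyWavelets, assumed available

        fs = 1000.0
        t = np.arange(0, 1, 1 / fs)
        # A two-stage signal: 50 Hz burst followed by a 120 Hz burst
        sig = np.where(t < 0.5, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 120 * t))

        coeffs, freqs = pywt.cwt(sig, np.arange(1, 128), "morl", sampling_period=1 / fs)
        # |coeffs| is a time-frequency map: each row a frequency, each column a time
        print(coeffs.shape, freqs[0], freqs[-1])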

  11. Extracting information from an ensemble of GCMs to reliably assess future global runoff change

    NARCIS (Netherlands)

    Sperna Weiland, F.C.; Beek, L.P.H. van; Weerts, A.H.; Bierkens, M.F.P.

    2011-01-01

    Future runoff projections derived from different global climate models (GCMs) show large differences. Therefore, within this study, information from multiple GCMs has been combined to better assess hydrological changes. For projections of precipitation and temperature, the Reliability Ensemble Averaging (REA) method has been applied.

  12. Investigation of the Impact of Extracting and Exchanging Health Information by Using Internet and Social Networks.

    Science.gov (United States)

    Pistolis, John; Zimeras, Stelios; Chardalias, Kostas; Roupa, Zoe; Fildisis, George; Diomidous, Marianna

    2016-06-01

    Social networks (1) have been embedded in our daily life for a long time. They constitute a powerful tool used nowadays for both searching and exchanging information on different issues by using Internet search engines (Google, Bing, etc.) and social networks (Facebook, Twitter, etc.). In this paper, the results of a study on the frequency and type of usage of the Internet and social networks by the general public and health professionals are presented. The objectives of the research were focused on investigating the frequency of seeking and meticulously searching for health information in the social media by both individuals and health practitioners. The exchange of information is a procedure that involves the issues of reliability and quality of information. In this research, an effort is made, using advanced statistical techniques, to investigate the participants' profile in using social networks for searching and exchanging information on health issues. Based on the answers, 93% of the people use the Internet to find information on health subjects. Considering principal component analysis, the most important health subjects were nutrition (0.719%), respiratory issues (0.79%), cardiological issues (0.777%), psychological issues (0.667%) and total (73.8%). The research results, based on different statistical techniques, revealed that 61.2% of the males and 56.4% of the females intended to use the social networks for searching medical information. Based on the principal components analysis, the most important sources that the participants mentioned were the use of the Internet and social networks for exchanging information on health issues. These sources proved to be of paramount importance to the participants of the study. The same holds for nursing, medical and administrative staff in hospitals.

  13. Amplitude extraction in pseudoscalar-meson photoproduction: towards a situation of complete information

    International Nuclear Information System (INIS)

    Nys, Jannes; Vrancx, Tom; Ryckebusch, Jan

    2015-01-01

    A complete set for pseudoscalar-meson photoproduction is a minimum set of observables from which one can determine the underlying reaction amplitudes unambiguously. The complete sets considered in this work involve single- and double-polarization observables. It is argued that for extracting amplitudes from data, the transversity representation of the reaction amplitudes offers advantages over alternate representations. It is shown that with the available single-polarization data for the p(γ,K + )Λ reaction, the energy and angular dependence of the moduli of the normalized transversity amplitudes in the resonance region can be determined to a fair accuracy. Determining the relative phases of the amplitudes from double-polarization observables is far less evident. (paper)

  14. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    Science.gov (United States)

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short duration visual cues which draw attention to solution-relevant information, and aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver to an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.

  15. The Application of Chinese High-Spatial Remote Sensing Satellite Image in Land Law Enforcement Information Extraction

    Science.gov (United States)

    Wang, N.; Yang, R.

    2018-04-01

    Chinese high-resolution (HR) remote sensing satellites have made a huge leap in the past decade. Commercial satellite datasets such as GF-1, GF-2 and ZY-3 images, with panchromatic (PAN) resolutions of 2 m, 1 m and 2.1 m and multispectral (MS) resolutions of 8 m, 4 m and 5.8 m respectively, have emerged in recent years. Chinese HR satellite imagery can be downloaded free of charge for public welfare purposes. Local governments have begun to employ more professional technicians to improve traditional land management technology. This paper focuses on analysing the actual requirements of applications in government land law enforcement in Guangxi Autonomous Region. 66 counties in Guangxi Autonomous Region were selected for illegal land utilization spot extraction with fused Chinese HR images. The procedure contains: A. Defining illegal land utilization spot types. B. Data collection: GF-1, GF-2 and ZY-3 datasets were acquired in the first half of 2016 and other auxiliary data were collected in 2015. C. Batch processing: HR images were preprocessed in batches through the ENVI/IDL tool. D. Illegal land utilization spot extraction by visual interpretation. E. Obtaining attribute data with the ArcGIS Geoprocessor (GP) model. F. Thematic mapping and surveying. Through analysis of the results for 42 counties, law enforcement officials found 1092 illegal land use spots and 16 suspicious illegal mining spots. The results show that Chinese HR satellite images have great potential for feature information extraction and that the processing procedure is robust.

  16. Implementation of generalized quantum measurements: Superadditive quantum coding, accessible information extraction, and classical capacity limit

    International Nuclear Information System (INIS)

    Takeoka, Masahiro; Fujiwara, Mikio; Mizuno, Jun; Sasaki, Masahide

    2004-01-01

    Quantum-information theory predicts that when the transmission resource is doubled in quantum channels, the amount of information transmitted can be increased more than twice by quantum-channel coding techniques, whereas the increase is at most twice in classical information theory. This remarkable feature, the superadditive quantum-coding gain, can be implemented by appropriate choices of code words and corresponding quantum decoding which requires a collective quantum measurement. Recently, an experimental demonstration was reported [M. Fujiwara et al., Phys. Rev. Lett. 90, 167906 (2003)]. The purpose of this paper is to describe our experiment in detail. Particularly, a design strategy of quantum-collective decoding in physical quantum circuits is emphasized. We also address the practical implication of the gain on communication performance by introducing the quantum-classical hybrid coding scheme. We show how the superadditive quantum-coding gain, even in a small code length, can boost the communication performance of conventional coding techniques.

  17. Comparative evaluation of three automated systems for DNA extraction in conjunction with three commercially available real-time PCR assays for quantitation of plasma Cytomegalovirus DNAemia in allogeneic stem cell transplant recipients.

    Science.gov (United States)

    Bravo, Dayana; Clari, María Ángeles; Costa, Elisa; Muñoz-Cobo, Beatriz; Solano, Carlos; José Remigia, María; Navarro, David

    2011-08-01

    Limited data are available on the performance of different automated extraction platforms and commercially available quantitative real-time PCR (QRT-PCR) methods for the quantitation of cytomegalovirus (CMV) DNA in plasma. We compared the performance characteristics of the Abbott mSample preparation system DNA kit on the m24 SP instrument (Abbott), the High Pure viral nucleic acid kit on the COBAS AmpliPrep system (Roche), and the EZ1 Virus 2.0 kit on the BioRobot EZ1 extraction platform (Qiagen) coupled with the Abbott CMV PCR kit, the LightCycler CMV Quant kit (Roche), and the Q-CMV complete kit (Nanogen), for both plasma specimens from allogeneic stem cell transplant (Allo-SCT) recipients (n = 42) and the OptiQuant CMV DNA panel (AcroMetrix). The EZ1 system displayed the highest extraction efficiency over a wide range of CMV plasma DNA loads, followed by the m24 and the AmpliPrep methods. The Nanogen PCR assay yielded higher mean CMV plasma DNA values than the Abbott and the Roche PCR assays, regardless of the platform used for DNA extraction. Overall, the effects of the extraction method and the QRT-PCR used on CMV plasma DNA load measurements were less pronounced for specimens with high CMV DNA content (>10,000 copies/ml). The performance characteristics of the extraction methods and QRT-PCR assays evaluated herein for clinical samples were extensible to cell-based standards from AcroMetrix. In conclusion, different automated systems are not equally efficient for CMV DNA extraction from plasma specimens, and the plasma CMV DNA loads measured by commercially available QRT-PCRs can differ significantly. The above findings should be taken into consideration for the establishment of cutoff values for the initiation or cessation of preemptive antiviral therapies and for the interpretation of data from clinical studies in the Allo-SCT setting.

  18. Extraction of basic roadway information for non-state roads in Florida : [summary].

    Science.gov (United States)

    2015-07-01

    The Florida Department of Transportation (FDOT) maintains a map of all the roads in Florida, : containing over one and a half million road links. For planning purposes, a wide variety : of information, such as stop lights, signage, lane number, and s...

  19. Synthetic aperture radar ship discrimination, generation and latent variable extraction using information maximizing generative adversarial networks

    CSIR Research Space (South Africa)

    Schwegmann, Colin P

    2017-07-01

    Full Text Available such as Synthetic Aperture Radar imagery. To aid in the creation of improved machine learning-based ship detection and discrimination methods, this paper applies a type of neural network known as an Information Maximizing Generative Adversarial Network. Generative...

  20. High-sensitivity quantitation of a biopharmaceutical Nanobody® in plasma by single-cartridge multi-dimensional solid-phase extraction and UPLC-MS/MS

    NARCIS (Netherlands)

    Bronsema, Kees; Bischoff, Rainer; Bouche, Marie-Paule; Mortier, Kjell; van de Merbel, Nico C.

    2015-01-01

    Background: A major challenge in protein quantitation based on enzymatic digestion of complex biological samples and subsequent LC-MS/MS analysis of a signature peptide is dealing with the high complexity of the matrix after digestion, which can reduce sensitivity considerably. For the quantitation

  1. Determination of the Antibiotic Oxytetracycline in Commercial Milk by Solid-Phase Extraction: A High-Performance Liquid Chromatography (HPLC) Experiment for Quantitative Instrumental Analysis

    Science.gov (United States)

    Mei-Ratliff, Yuan

    2012-01-01

    Trace levels of oxytetracycline spiked into commercial milk samples are extracted, cleaned up, and preconcentrated using a C[subscript 18] solid-phase extraction column. The extract is then analyzed by a high-performance liquid chromatography (HPLC) instrument equipped with a UV detector and a C[subscript 18] column (150 mm x 4.6 mm x 3.5 [mu]m).…

  2. Extract of Acanthospermum hispidum

    African Journals Online (AJOL)

    Administrator

    quantitatively. Acute toxicity study of the extract was conducted, and diabetic rats induced using alloxan (80 mg/kg ... Type 2 diabetes is one of the leading causes of mortality and ..... (2011): Phytochemical screening and extraction - A review.

  3. You had me at "Hello": Rapid extraction of dialect information from spoken words.

    Science.gov (United States)

    Scharinger, Mathias; Monahan, Philip J; Idsardi, William J

    2011-06-15

    Research on the neuronal underpinnings of speaker identity recognition has identified voice-selective areas in the human brain with evolutionary homologues in non-human primates who have comparable areas for processing species-specific calls. Most studies have focused on estimating the extent and location of these areas. In contrast, relatively few experiments have investigated the time-course of speaker identity, and in particular, dialect processing and identification by electro- or neuromagnetic means. We show here that dialect extraction occurs speaker-independently, pre-attentively and categorically. We used Standard American English and African-American English exemplars of 'Hello' in a magnetoencephalographic (MEG) Mismatch Negativity (MMN) experiment. The MMN as an automatic change detection response of the brain reflected dialect differences that were not entirely reducible to acoustic differences between the pronunciations of 'Hello'. Source analyses of the M100, an auditory evoked response to the vowels suggested additional processing in voice-selective areas whenever a dialect change was detected. These findings are not only relevant for the cognitive neuroscience of language, but also for the social sciences concerned with dialect and race perception. Copyright © 2011 Elsevier Inc. All rights reserved.

  4. Extraction of indirectly captured information for use in a comparison of offline pH measurement technologies.

    Science.gov (United States)

    Ritchie, Elspeth K; Martin, Elaine B; Racher, Andy; Jaques, Colin

    2017-06-10

    Understanding the causes of discrepancies in pH readings of a sample can allow more robust pH control strategies to be implemented. It was found that 59.4% of differences between two offline pH measurement technologies for an historical dataset lay outside an expected instrument error range of ±0.02 pH. A new variable, OsmoRes, was created using multiple linear regression (MLR) to extract information indirectly captured in the recorded measurements for osmolality. Principal component analysis and time series analysis were used to validate the expansion of the historical dataset with the new variable OsmoRes. MLR was used to identify variables strongly correlated (p<0.05) with differences in pH readings by the two offline pH measurement technologies. These included concentrations of specific chemicals (e.g. glucose) and OsmoRes, indicating culture medium and bolus feed additions as possible causes of discrepancies between the offline pH measurement technologies. Temperature was also identified as statistically significant. It is suggested that this was a result of differences in pH-temperature compensations employed by the pH measurement technologies. In summary, a method for extracting indirectly captured information has been demonstrated, and it has been shown that competing pH measurement technologies were not necessarily interchangeable at the desired level of control (±0.02 pH). Copyright © 2017 Elsevier B.V. All rights reserved.
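
    The abstract leaves the exact construction of OsmoRes open; one plausible reading, sketched below, is that the new variable is the residual of a multiple linear regression of recorded osmolality on its known contributors (all variable names and numbers are invented for illustration, not taken from the study):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical bioprocess dataset: recorded osmolality plus variables
# assumed to contribute to it (illustrative stand-ins only).
rng = np.random.default_rng(0)
glucose = rng.uniform(2, 8, 200)         # g/L
feed_volume = rng.uniform(0, 5, 200)     # mL of bolus feed
osmolality = 280 + 4.0 * glucose + 6.0 * feed_volume + rng.normal(0, 2, 200)

X = np.column_stack([glucose, feed_volume])

# Regress osmolality on its known contributors; the residual captures
# whatever information the measurement carries beyond them. This residual
# plays the role of the derived variable (OsmoRes) in this reading.
model = LinearRegression().fit(X, osmolality)
osmo_res = osmolality - model.predict(X)

# osmo_res can then be appended to the dataset and tested by MLR for
# correlation with the discrepancy between the two offline pH readings.
print(osmo_res[:5])
```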

  5. Optimization of Extraction Process for Antidiabetic and Antioxidant Activities of Kursi Wufarikun Ziyabit Using Response Surface Methodology and Quantitative Analysis of Main Components.

    Science.gov (United States)

    Edirs, Salamet; Turak, Ablajan; Numonov, Sodik; Xin, Xuelei; Aisa, Haji Akber

    2017-01-01

    By using extraction yield, total polyphenolic content, antidiabetic activities (PTP-1B and α-glycosidase), and antioxidant activity (ABTS and DPPH) as indicator markers, the extraction conditions of the prescription Kursi Wufarikun Ziyabit (KWZ) were optimized by response surface methodology (RSM). Independent variables were ethanol concentration, extraction temperature, solid-to-solvent ratio, and extraction time. The result of the RSM analysis showed that the four variables investigated have a significant effect (p < 0.05). The effective part of KWZ was characterized via the UPLC method: 12 main components were identified by standard compounds, all of them showed good regression within the test ranges, and their total content was 11.18%.

  6. Evaluation of ultrasound-assisted extraction as sample pre-treatment for quantitative determination of rare earth elements in marine biological tissues by inductively coupled plasma-mass spectrometry

    International Nuclear Information System (INIS)

    Costas, M.; Lavilla, I.; Gil, S.; Pena, F.; Calle, I.; Cabaleiro, N. de la; Bendicho, C.

    2010-01-01

    In this work, the determination of rare earth elements (REEs), i.e. Y, La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb and Lu in marine biological tissues by inductively coupled plasma-mass spectrometry (ICP-MS) after a sample preparation method based on ultrasound-assisted extraction (UAE) is described. The suitability of the extracts for ICP-MS measurements was evaluated. For that, studies were focused on the following issues: (i) use of clean up of extracts with a C18 cartridge for non-polar solid phase extraction; (ii) use of different internal standards; (iii) signal drift caused by changes in the nebulization efficiency and salt deposition on the cones during the analysis. The signal drift produced by direct introduction of biological extracts in the instrument was evaluated using a calibration verification standard for bracketing (standard-sample bracketing, SSB) and cumulative sum (CUSUM) control charts. Parameters influencing extraction such as extractant composition, mass-to-volume ratio, particle size, sonication time and sonication amplitude were optimized. Diluted single acids (HNO3 and HCl) and mixtures (HNO3 + HCl) were evaluated for improving the extraction efficiency. Quantitative recoveries for REEs were achieved using 5 mL of 3% (v/v) HNO3 + 2% (v/v) HCl, particle size <200 μm, 3 min of sonication time and 50% of sonication amplitude. Precision, expressed as relative standard deviation from three independent extractions, ranged from 0.1 to 8%. In general, LODs were improved by a factor of 5 in comparison with those obtained after microwave-assisted digestion (MAD). The accuracy of the method was evaluated using the CRM BCR-668 (mussel tissue). Different seafood samples of common consumption were analyzed by ICP-MS after UAE and MAD.

  7. Extracting 3d Semantic Information from Video Surveillance System Using Deep Learning

    Science.gov (United States)

    Zhang, J. S.; Cao, J.; Mao, B.; Shen, D. Q.

    2018-04-01

    At present, intelligent video analysis technology has been widely used in various fields. Object tracking is one of the important parts of intelligent video surveillance, but traditional target tracking technology based on the pixel coordinate system of images still has some unavoidable problems. Pixel-based target tracking cannot reflect the real position information of targets, and it is difficult to track objects across scenes. Based on an analysis of Zhengyou Zhang's camera calibration method, this paper presents a method of target tracking in the target's space coordinate system, obtained by converting the 2-D coordinates of the target into 3-D coordinates. The experimental results show that our method can restore the real position change information of targets well and can also accurately recover the trajectory of the target in space.
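
    A minimal sketch of the 2-D-to-3-D conversion described above, assuming a Zhang-style calibration has already yielded the intrinsic matrix K and extrinsics (R, t), and that tracked targets move on the world ground plane Z = 0 (all numbers are illustrative, not from the paper):

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])          # intrinsic matrix (illustrative)
R = np.eye(3)                            # rotation, world -> camera
t = np.array([0.0, 0.0, 5.0])            # translation, world -> camera

def pixel_to_ground(u, v):
    """Intersect the viewing ray of pixel (u, v) with the plane Z = 0."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray in camera frame
    ray_world = R.T @ ray_cam                            # rotate into world frame
    cam_center = -R.T @ t                                # camera center in world
    s = -cam_center[2] / ray_world[2]                    # scale so that Z = 0
    return cam_center + s * ray_world

# A tracked target at pixel (400, 300) is mapped to a world position, so
# trajectories can be expressed in metres and compared across scenes.
print(pixel_to_ground(400, 300))
```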

  8. A Virtual Emergency Telemedicine Serious Game in Medical Training: A Quantitative, Professional Feedback-Informed Evaluation Study.

    Science.gov (United States)

    Nicolaidou, Iolie; Antoniades, Athos; Constantinou, Riana; Marangos, Charis; Kyriacou, Efthyvoulos; Bamidis, Panagiotis; Dafli, Eleni; Pattichis, Constantinos S

    2015-06-17

    Serious games involving virtual patients in medical education can provide a controlled setting within which players can learn in an engaging way, while avoiding the risks associated with real patients. Moreover, serious games align with medical students' preferred learning styles. The Virtual Emergency TeleMedicine (VETM) game is a simulation-based game that was developed in collaboration with the mEducator Best Practice network in response to calls to integrate serious games in medical education and training. The VETM game makes use of data from an electrocardiogram to train practicing doctors, nurses, or medical students for problem-solving in real-life clinical scenarios through a telemedicine system and virtual patients. The study responds to two gaps: the limited number of games in emergency cardiology and the lack of evaluations by professionals. The objective of this study is a quantitative, professional feedback-informed evaluation of one scenario of VETM, involving cardiovascular complications. The study has the following research question: "What are professionals' perceptions of the potential of the Virtual Emergency Telemedicine game for training people involved in the assessment and management of emergency cases?" The evaluation of the VETM game was conducted with 90 professional ambulance crew nursing personnel specializing in the assessment and management of emergency cases. After collaboratively trying out one VETM scenario, participants individually completed an evaluation of the game (36 questions on a 5-point Likert scale) and provided written and verbal comments. The instrument assessed six dimensions of the game: (1) user interface, (2) difficulty level, (3) feedback, (4) educational value, (5) user engagement, and (6) terminology. Data sources of the study were 90 questionnaires, including written comments from 51 participants, 24 interviews with 55 participants, and 379 log files of their interaction with the game. Overall, the results were

  9. Information extracting and processing with diffraction enhanced imaging of X-ray

    International Nuclear Information System (INIS)

    Chen Bo; Chinese Academy of Science, Beijing; Chen Chunchong; Jiang Fan; Chen Jie; Ming Hai; Shu Hang; Zhu Peiping; Wang Junyue; Yuan Qingxi; Wu Ziyu

    2006-01-01

    X-ray imaging at high energies has been used for many years in many fields. Conventional X-ray imaging is based on differences in absorption within a sample. It is difficult to distinguish different tissues of a biological sample because of their small differences in absorption. The authors used the diffraction enhanced imaging (DEI) method and took images of absorption, extinction, scattering and refractivity. In the end, they presented high-resolution pictures with all this information combined. (authors)

  10. Architecture and data processing alternatives for the TSE computer. Volume 2: Extraction of topological information from an image by the Tse computer

    Science.gov (United States)

    Jones, J. R.; Bodenheimer, R. E.

    1976-01-01

    A simple programmable Tse processor organization and arithmetic operations necessary for extraction of the desired topological information are described. Hardware additions to this organization are discussed along with trade-offs peculiar to the Tse computing concept. An improved organization is presented along with the complementary software for the various arithmetic operations. The performance of the two organizations is compared in terms of speed, power, and cost. Software routines developed to extract the desired information from an image are included.

  11. Comparative Analysis of Chemical Composition, Antioxidant Activity and Quantitative Characterization of Some Phenolic Compounds in Selected Herbs and Spices in Different Solvent Extraction Systems.

    Science.gov (United States)

    Sepahpour, Shabnam; Selamat, Jinap; Abdul Manap, Mohd Yazid; Khatib, Alfi; Abdull Razis, Ahmad Faizal

    2018-02-13

    This study evaluated the efficacy of various organic solvents (80% acetone, 80% ethanol, 80% methanol) and distilled water for extracting antioxidant phenolic compounds from turmeric, curry leaf, torch ginger and lemon grass extracts. They were analyzed regarding the total phenol and flavonoid contents, antioxidant activity and concentration of some phenolic compounds. Antioxidant activity was determined by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical scavenging assay and the ferric reducing antioxidant power (FRAP) assay. Quantification of phenolic compounds was carried out using high-performance liquid chromatography (HPLC). All the extracts possessed antioxidant activity, however, the different solvents showed different efficiencies in the extraction of phenolic compounds. Turmeric showed the highest DPPH values (67.83-13.78%) and FRAP (84.9-2.3 mg quercetin/g freeze-dried crude extract), followed by curry leaf, torch ginger and lemon grass. While 80% acetone was shown to be the most efficient solvent for the extraction of total phenolic compounds from turmeric, torch ginger and lemon grass (221.68, 98.10 and 28.19 mg GA/g freeze dried crude extract, respectively), for the recovery of phenolic compounds from curry leaf (92.23 mg GA/g freeze-dried crude extract), 80% ethanol was the most appropriate solvent. Results of HPLC revealed that the amount of phenolic compounds varied depending on the types of solvents used.

  12. Comparative Analysis of Chemical Composition, Antioxidant Activity and Quantitative Characterization of Some Phenolic Compounds in Selected Herbs and Spices in Different Solvent Extraction Systems

    Directory of Open Access Journals (Sweden)

    Shabnam Sepahpour

    2018-02-01

    Full Text Available This study evaluated the efficacy of various organic solvents (80% acetone, 80% ethanol, 80% methanol) and distilled water for extracting antioxidant phenolic compounds from turmeric, curry leaf, torch ginger and lemon grass extracts. They were analyzed regarding the total phenol and flavonoid contents, antioxidant activity and concentration of some phenolic compounds. Antioxidant activity was determined by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical scavenging assay and the ferric reducing antioxidant power (FRAP) assay. Quantification of phenolic compounds was carried out using high-performance liquid chromatography (HPLC). All the extracts possessed antioxidant activity, however, the different solvents showed different efficiencies in the extraction of phenolic compounds. Turmeric showed the highest DPPH values (67.83–13.78%) and FRAP (84.9–2.3 mg quercetin/g freeze-dried crude extract), followed by curry leaf, torch ginger and lemon grass. While 80% acetone was shown to be the most efficient solvent for the extraction of total phenolic compounds from turmeric, torch ginger and lemon grass (221.68, 98.10 and 28.19 mg GA/g freeze dried crude extract, respectively), for the recovery of phenolic compounds from curry leaf (92.23 mg GA/g freeze-dried crude extract), 80% ethanol was the most appropriate solvent. Results of HPLC revealed that the amount of phenolic compounds varied depending on the types of solvents used.

  13. What do professional forecasters' stock market expectations tell us about herding, information extraction and beauty contests?

    DEFF Research Database (Denmark)

    Rangvid, Jesper; Schmeling, M.; Schrimpf, A.

    2013-01-01

    We study how professional forecasters form equity market expectations based on a new micro-level dataset which includes rich cross-sectional information about individual characteristics. We focus on testing whether agents rely on the beliefs of others, i.e., consensus expectations, when forming their own forecast. We find strong evidence that the average of all forecasters' beliefs influences an individual's own forecast. This effect is stronger for young and less experienced forecasters as well as forecasters whose pay depends more on performance relative to a benchmark. Further tests indicate...

  14. Examination of Information Technology (IT) Certification and the Human Resources (HR) Professional Perception of Job Performance: A Quantitative Study

    Science.gov (United States)

    O'Horo, Neal O.

    2013-01-01

    The purpose of this quantitative survey study was to test the Leontief input/output theory relating the input of IT certification to the output of the English-speaking U.S. human resource professional perceived IT professional job performance. Participants (N = 104) rated their perceptions of IT certified vs. non-IT certified professionals' job…

  15. Quantitative predictions from competition theory with incomplete information on model parameters tested against experiments across diverse taxa

    OpenAIRE

    Fort, Hugo

    2017-01-01

    We derive an analytical approximation for making quantitative predictions for ecological communities as a function of the mean intensity of the inter-specific competition and the species richness. This method, with only a fraction of the model parameters (carrying capacities and competition coefficients), is able to accurately predict empirical measurements covering a wide variety of taxa (algae, plants, protozoa).

  16. CLASSIFICATION OF INFORMAL SETTLEMENTS THROUGH THE INTEGRATION OF 2D AND 3D FEATURES EXTRACTED FROM UAV DATA

    Directory of Open Access Journals (Sweden)

    C. M. Gevaert

    2016-06-01

    Full Text Available Unmanned Aerial Vehicles (UAVs) are capable of providing very high resolution and up-to-date information to support informal settlement upgrading projects. In order to provide accurate basemaps, urban scene understanding through the identification and classification of buildings and terrain is imperative. However, common characteristics of informal settlements such as small, irregular buildings with heterogeneous roof material and large presence of clutter challenge state-of-the-art algorithms. Especially the dense buildings and steeply sloped terrain cause difficulties in identifying elevated objects. This work investigates how 2D radiometric and textural features, 2.5D topographic features, and 3D geometric features obtained from UAV imagery can be integrated to obtain a high classification accuracy in challenging classification problems for the analysis of informal settlements. It compares the utility of pixel-based and segment-based features obtained from an orthomosaic and DSM with point-based and segment-based features extracted from the point cloud to classify an unplanned settlement in Kigali, Rwanda. Findings show that the integration of 2D and 3D features leads to higher classification accuracies.

  17. Extraction of compositional and hydration information of sulfates from laser-induced plasma spectra recorded under Mars atmospheric conditions - Implications for ChemCam investigations on Curiosity rover

    Energy Technology Data Exchange (ETDEWEB)

    Sobron, Pablo, E-mail: pablo.sobron@asc-csa.gc.ca [Department of Earth and Planetary Sciences and McDonnell Center for the Space Sciences, Washington University, St. Louis, MO 63130 (United States); Wang, Alian [Department of Earth and Planetary Sciences and McDonnell Center for the Space Sciences, Washington University, St. Louis, MO 63130 (United States); Sobron, Francisco [Unidad Asociada UVa-CSIC a traves del Centro de Astrobiologia, Parque Tecnologico de Boecillo, Parcela 203, Boecillo (Valladolid), 47151 (Spain)

    2012-02-15

    Given the volume of spectral data required for providing accurate compositional information and thereby insight into mineralogy and petrology from laser-induced breakdown spectroscopy (LIBS) measurements, fast data processing tools are a must. This is particularly true during the tactical operations of rover-based planetary exploration missions such as the Mars Science Laboratory rover, Curiosity, which will carry a remote LIBS spectrometer in its science payload. We have developed: an automated fast pre-processing sequence of algorithms for converting a series of LIBS spectra (typically 125) recorded from a single target into a reliable SNR-enhanced spectrum; a dedicated routine to quantify its spectral features; and a set of calibration curves using standard hydrous and multi-cation sulfates. These calibration curves allow deriving the elemental compositions and the degrees of hydration of various hydrous sulfates, one of the two major types of secondary minerals found on Mars. Our quantitative tools are built upon calibration-curve modeling, through the correlation of the elemental concentrations and the peak areas of the atomic emission lines observed in the LIBS spectra of standard samples. At present, we can derive the elemental concentrations of K, Na, Ca, Mg, Fe, Al, S, O, and H in sulfates, as well as the hydration degrees of Ca- and Mg-sulfates, from LIBS spectra obtained in both Earth atmosphere and Mars atmospheric conditions in a Planetary Environment and Analysis Chamber (PEACh). In addition, structural information can be potentially obtained for various Fe-sulfates. - Highlights: • Routines for fast automated processing of LIBS spectral data. • Identification of elements and determination of the elemental composition. • Calibration curves for sulfate samples in Earth and Mars atmospheric conditions. • Fe curves probably related to the crystalline
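
    In its simplest form, the calibration-curve modeling described above amounts to fitting elemental concentration against emission-line peak area for a set of standards and inverting the fit for unknown spectra. A sketch with invented numbers (not the paper's calibration data):

```python
import numpy as np

# Illustrative standards: known Mg concentrations and the integrated
# areas of one Mg emission line measured from their LIBS spectra.
conc_standards = np.array([1.0, 2.5, 5.0, 7.5, 10.0])            # wt% Mg
peak_areas = np.array([210.0, 520.0, 1015.0, 1540.0, 2050.0])    # counts

# Linear calibration: area = m * concentration + b
m, b = np.polyfit(conc_standards, peak_areas, 1)

def concentration_from_area(area):
    """Invert the calibration curve for an unknown spectrum."""
    return (area - b) / m

print(f"peak area 1200 -> {concentration_from_area(1200.0):.2f} wt% Mg")
```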

  18. Breast cancer and quality of life: medical information extraction from health forums.

    Science.gov (United States)

    Opitz, Thomas; Aze, Jérome; Bringay, Sandra; Joutard, Cyrille; Lavergne, Christian; Mollevi, Caroline

    2014-01-01

    Internet health forums are a rich textual resource with content generated through free exchanges among patients and, in certain cases, health professionals. We tackle the problem of retrieving clinically relevant information from such forums, with relevant topics being defined from clinical auto-questionnaires. Texts in forums are largely unstructured and noisy, calling for adapted preprocessing and query methods. We minimize the number of false negatives in queries by using a synonym tool to achieve query expansion of initial topic keywords. To avoid false positives, we propose a new measure based on a statistical comparison of frequent co-occurrences in a large reference corpus (Web) to keep only relevant expansions. Our work is motivated by a study of breast cancer patients' health-related quality of life (QoL). We consider topics defined from a breast-cancer specific QoL-questionnaire. We quantify and structure occurrences in posts of a specialized French forum and outline important future developments.
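
    The co-occurrence measure is described only at a high level; a pointwise mutual information (PMI) score is one standard way such a statistical comparison against a large reference corpus can be realized. The sketch below uses invented corpus counts to show how a spurious expansion is filtered out:

```python
import math

N = 1_000_000                 # documents in the reference corpus (invented)

def pmi(n_topic, n_cand, n_both):
    """Pointwise mutual information of a topic word and a candidate."""
    p_topic, p_cand, p_both = n_topic / N, n_cand / N, n_both / N
    return math.log(p_both / (p_topic * p_cand))

# Candidate expansions for the topic keyword "fatigue":
# (topic count, candidate count, co-occurrence count), all invented.
candidates = {"tiredness": (12_000, 9_000, 4_200),
              "exhaustion": (12_000, 7_500, 3_100),
              "metal": (12_000, 30_000, 40)}     # spurious synonym sense

for word, counts in candidates.items():
    score = pmi(*counts)
    kept = score > 0          # illustrative threshold
    print(f"{word:12s} PMI = {score:+.2f} -> {'keep' if kept else 'drop'}")
# "metal" (as in metal fatigue) scores far below the medical senses and
# is dropped, reducing false positives in the forum queries.
```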

  19. EXTRACTION OF BENTHIC COVER INFORMATION FROM VIDEO TOWS AND PHOTOGRAPHS USING OBJECT-BASED IMAGE ANALYSIS

    Directory of Open Access Journals (Sweden)

    M. T. L. Estomata

    2012-07-01

    Full Text Available Mapping benthic cover in deep waters comprises a very small proportion of studies in this field of research. The majority of benthic cover mapping makes use of satellite images and, usually, classification is carried out only for shallow waters. To map the seafloor in optically deep waters, underwater videos and photos are needed. Some researchers have applied this method to underwater photos, making use of different classification methods such as neural networks and rapid classification via down-sampling. In this study, an attempt was made to use accurate bathymetric data obtained using a multi-beam echo sounder (MBES) as complementary data with the underwater photographs. Due to the absence of a motion reference unit (MRU), which applies corrections to the data gathered by the MBES, the accuracy of the depth data was compromised. Nevertheless, even without accurate bathymetric data, object-based image analysis (OBIA), which used rule sets based on information such as shape, size, area, relative distance, and spectral information, was still applied. Compared to pixel-based classifications, OBIA was able to classify more specific benthic cover types other than coral and sand, such as rubble and fish. Through the use of rule sets on area, less than or equal to 700 pixels for fish and between 700 and 10,000 pixels for rubble, as well as standard deviation values to distinguish texture, fish and rubble were identified. OBIA produced benthic cover maps that had a higher overall accuracy, 93.78±0.85%, compared to pixel-based methods, which had an average accuracy of only 87.30±6.11% (p-value = 0.0001, α = 0.05).
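
    The area rules quoted above translate directly into an executable rule set. The sketch below uses the abstract's pixel-area thresholds, with an assumed (purely illustrative) standard-deviation cutoff standing in for the texture rule:

```python
# Minimal sketch of the OBIA rule-set logic; segment attributes are
# illustrative stand-ins for what OBIA software computes per image object.
def classify_segment(area_px, intensity_std):
    """Assign a benthic cover class to one image segment."""
    if area_px <= 700 and intensity_std > 15.0:        # texture cutoff assumed
        return "fish"
    if 700 < area_px <= 10_000 and intensity_std > 15.0:
        return "rubble"
    return "other (coral/sand)"

segments = [(450, 22.0), (5_200, 18.5), (80_000, 4.2)]  # invented segments
for area, std in segments:
    print(area, std, "->", classify_segment(area, std))
```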

  20. Information Extraction and Interpretation Analysis of Mineral Potential Targets Based on ETM+ Data and GIS technology: A Case Study of Copper and Gold Mineralization in Burma

    International Nuclear Information System (INIS)

    Wenhui, Du; Yongqing, Chen; Nana, Guo; Yinglong, Hao; Pengfei, Zhao; Gongwen, Wang

    2014-01-01

    Mineralization-alteration and structure information extraction plays important roles in mineral resource prospecting and assessment using remote sensing data and Geographical Information System (GIS) technology. Choosing copper and gold mines in Burma as an example, the authors adopt band ratios, threshold segmentation and principal component analysis (PCA) to extract the hydroxyl alteration information from ETM+ remote sensing images. A digital elevation model (DEM) (30 m spatial resolution) and ETM+ data were used to extract linear and circular faults that are associated with copper and gold mineralization. Combining geological data with the above information, the weights of evidence method and the C-A fractal model were used to integrate the evidence and identify ore-forming favourable zones in this area. Research results show that the high-grade potential targets coincide with the known copper and gold deposits, and the integrated information can be used in the next stage of exploration and mineral resource decision-making.
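
    A minimal sketch of the band-ratio, threshold-segmentation and PCA chain named above, run on a synthetic ETM+-like band stack (the specific band pair and the two-sigma threshold are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
bands = rng.uniform(50, 200, size=(6, 100, 100))     # fake 6-band reflectance

# 1) Band ratio: ETM+ band 5 / band 7 (indices 4 and 5 in a 6-band stack
#    without the thermal band) is a classic hydroxyl-alteration indicator.
ratio_57 = bands[4] / bands[5]

# 2) Threshold segmentation: flag pixels in the upper tail of the ratio.
threshold = ratio_57.mean() + 2 * ratio_57.std()
hydroxyl_mask = ratio_57 > threshold

# 3) PCA across bands: the component whose loadings contrast the
#    OH-sensitive bands concentrates the alteration signal.
flat = bands.reshape(6, -1).T                        # pixels x bands
flat = flat - flat.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(flat, rowvar=False))
pc_scores = flat @ eigvecs[:, ::-1]                  # strongest component first

print(hydroxyl_mask.sum(), "candidate alteration pixels")
```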

  1. Rate phenomena in uranium extraction by amines

    International Nuclear Information System (INIS)

    Coleman, C.F.; McDowell, W.J.

    1979-01-01

    Kinetics studies and other rate measurements are reviewed in the amine extraction of uranium and of some other related and associated metal ions. Equilibration is relatively fast in the uranium sulfate systems most important to uranium hydrometallurgy. Significantly slow equilibration has been encountered in some other systems. Most of the recorded rate information, both qualitative and quantitative, has come from exploratory and process-development work, while some kinetics studies have been directed specifically toward elucidation of extraction mechanisms. 71 references

  2. The BEL information extraction workflow (BELIEF): evaluation in the BioCreative V BEL and IAT track.

    Science.gov (United States)

    Madan, Sumit; Hodapp, Sven; Senger, Philipp; Ansari, Sam; Szostak, Justyna; Hoeng, Julia; Peitsch, Manuel; Fluck, Juliane

    2016-01-01

    Network-based approaches have become extremely important in systems biology to achieve a better understanding of biological mechanisms. For network representation, the Biological Expression Language (BEL) is well designed to collate findings from the scientific literature into biological network models. To facilitate encoding and biocuration of such findings in BEL, a BEL Information Extraction Workflow (BELIEF) was developed. BELIEF provides a web-based curation interface, the BELIEF Dashboard, that incorporates text mining techniques to support the biocurator in the generation of BEL networks. The underlying UIMA-based text mining pipeline (BELIEF Pipeline) uses several named entity recognition processes and relationship extraction methods to detect concepts and BEL relationships in literature. The BELIEF Dashboard allows easy curation of the automatically generated BEL statements and their context annotations. Resulting BEL statements and their context annotations can be syntactically and semantically verified to ensure consistency in the BEL network. In summary, the workflow supports experts in different stages of systems biology network building. Based on the BioCreative V BEL track evaluation, we show that the BELIEF Pipeline automatically extracts relationships with an F-score of 36.4% and fully correct statements can be obtained with an F-score of 30.8%. Participation in the BioCreative V Interactive Task (IAT) track with BELIEF revealed a system usability scale (SUS) score of 67. Considering the complexity of the task for new users (learning BEL, working with a completely new interface, and performing complex curation), a score so close to the overall SUS average highlights the usability of BELIEF. Database URL: BELIEF is available at http://www.scaiview.com/belief/. © The Author(s) 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Quantitative characterization of pyrimidine dimer excision from UV-irradiated DNA (excision capacity) by cell-free extracts of the yeast Saccharomyces cerevisiae

    International Nuclear Information System (INIS)

    Bekker, M.L.; Kaboev, O.K.; Akhmedov, A.T.; Luchkina, L.A.

    1984-01-01

    Cell-free extracts from wild-type yeast (RAD+) and from rad mutants belonging to the RAD3 epistatic group (rad1-1, rad2-1, rad3-1, rad4-1) contain activities catalyzing the excision of pyrimidine dimers (PD) from purified ultraviolet-irradiated DNA which was not pre-treated with exogenous UV-endonuclease. The level of these activities in cell-free extracts from rad mutants did not differ from that in wild-type extract and was close to the in vivo excision capacity of the latter calculated from the LD37 (about 10^4 PD per haploid genome). (Auth.)

  4. Art or Science? An Evidence-Based Approach to Human Facial Beauty a Quantitative Analysis Towards an Informed Clinical Aesthetic Practice.

    Science.gov (United States)

    Harrar, Harpal; Myers, Simon; Ghanem, Ali M

    2018-02-01

    Patients often seek guidance from aesthetic practitioners regarding treatments to enhance their 'beauty'. Is there a science behind the art of assessment and, if so, is it measurable? Through the centuries, this question has challenged scholars, artists and surgeons. This study aims to undertake a review of the evidence behind quantitative facial measurements in assessing beauty, to help the practitioner in everyday aesthetic practice. A Medline and Embase search for beauty, facial features and quantitative analysis was undertaken. Inclusion criteria were studies on adults, and exclusions included studies undertaken for dental, cleft lip, oncology, burns or reconstructive surgeries. The abstracts and papers were appraised, and further studies excluded that were considered inappropriate. The data were extracted using a standardised table. The final dataset was appraised in accordance with the PRISMA checklist and Holland and Rees' critique tools. Of the 1253 studies screened, 1139 were excluded from abstracts and a further 70 excluded from full-text articles. The remaining 44 were assessed qualitatively and quantitatively. It became evident that the datasets were not comparable. Nevertheless, common themes were obvious, and these were summarised. Despite measures of the beauty both of individual components and of the sum of all the parts, such as symmetry and the golden ratio, we are still far from establishing what truly constitutes quantitative beauty. Perhaps beauty is truly in the 'eyes of the beholder' (and perhaps in the eyes of the subject too). This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.

  5. Citizen-Centric Urban Planning through Extracting Emotion Information from Twitter in an Interdisciplinary Space-Time-Linguistics Algorithm

    Directory of Open Access Journals (Sweden)

    Bernd Resch

    2016-07-01

    Full Text Available Traditional urban planning processes typically happen in offices and behind desks. Modern types of civic participation can enhance those processes by acquiring citizens’ ideas and feedback in participatory sensing approaches like “People as Sensors”. As such, citizen-centric planning can be achieved by analysing Volunteered Geographic Information (VGI) data such as Twitter tweets and posts from other social media channels. These user-generated data comprise several information dimensions, such as spatial and temporal information, and textual content. However, in previous research, these dimensions were generally examined separately in single-disciplinary approaches, which does not allow for holistic conclusions in urban planning. This paper introduces TwEmLab, an interdisciplinary approach towards extracting citizens’ emotions in different locations within a city. More concretely, we analyse tweets in three dimensions (space, time, and linguistics), based on similarities between each pair of tweets as defined by a specific set of functional relationships in each dimension. We use a graph-based semi-supervised learning algorithm to classify the data into discrete emotions (happiness, sadness, fear, anger/disgust, none). Our proposed solution allows tweets to be classified into emotion classes in a multi-parametric approach. Additionally, we created a manually annotated gold standard that can be used to evaluate TwEmLab’s performance. Our experimental results show that we are able to identify tweets carrying emotions and that our approach bears extensive potential to reveal new insights into citizens’ perceptions of the city.
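
    As a rough illustration of the graph-based semi-supervised step, the sketch below applies scikit-learn's LabelPropagation (assumed here as a stand-in for TwEmLab's own algorithm) to toy two-dimensional features representing the combined space-time-linguistic similarities:

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

# Toy feature vectors for six tweets; in TwEmLab these would come from
# the space/time/linguistic similarity functions, not raw coordinates.
X = np.array([[0.10, 0.20], [0.15, 0.25], [0.90, 0.80],
              [0.85, 0.90], [0.12, 0.22], [0.88, 0.85]])

# 0 = happiness, 1 = sadness, -1 = unlabelled (to be inferred).
y = np.array([0, -1, 1, -1, -1, -1])

# Labels propagate from the few annotated tweets through the similarity
# graph to the unlabelled ones.
model = LabelPropagation(kernel='rbf', gamma=20)
model.fit(X, y)

print(model.transduction_)   # inferred labels for all six tweets
```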

  6. A simple method for normalization of DNA extraction to improve the quantitative detection of soil-borne plant pathogenic oomycetes by real-time PCR.

    Science.gov (United States)

    Li, M; Ishiguro, Y; Kageyama, K; Zhu, Z

    2015-08-01

    Most of the current research into the quantification of soil-borne pathogenic oomycetes lacks determination of DNA extraction efficiency, probably leading to an incorrect estimation of DNA quantity. In this study, we developed a convenient method by using a 100 bp artificially synthesized DNA sequence derived from the mitochondrion NADH dehydrogenase subunit 2 gene of Thunnus thynnus as a control to determine the DNA extraction efficiency. The control DNA was added to soils and then co-extracted along with soil genomic DNA. DNA extraction efficiency was determined by the control DNA. Two different DNA extraction methods were compared and evaluated using different types of soils, and the commercial kit was proved to give more consistent results. We used the control DNA combined with real-time PCR to quantify the oomycete DNAs from 12 naturally infested soils. Detectable target DNA concentrations were three to five times higher after normalization. Our tests also showed that the extraction efficiencies varied on a sample-to-sample basis, and that the method is simple and useful for the accurate quantification of soil-borne pathogenic oomycetes. Oomycetes include many important plant pathogens. Accurate quantification of these pathogens is essential in the management of diseases. This study reports an easy method utilizing an external DNA control for the normalization of DNA extraction by real-time PCR. By combining two different efficient soil DNA extraction methods, the developed quantification method dramatically improved the results. This study also proves that the developed normalization method is necessary and useful for the accurate quantification of soil-borne plant pathogenic oomycetes. © 2015 The Society for Applied Microbiology.
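
    The normalization itself is simple arithmetic: the recovery fraction of the spiked control gives the per-sample extraction efficiency, which then rescales the real-time PCR result. A sketch with invented numbers:

```python
# Per-sample normalization of a qPCR result by spiked-control recovery.
control_added = 1.0e5        # copies of synthetic control spiked into soil
control_recovered = 2.5e4    # copies measured after co-extraction

efficiency = control_recovered / control_added          # 0.25 here

measured_target = 8.0e3      # oomycete DNA copies measured by qPCR
normalized_target = measured_target / efficiency        # 3.2e4 copies

print(f"extraction efficiency: {efficiency:.0%}")
print(f"normalized target DNA: {normalized_target:.2e} copies")
# Consistent with the abstract's observation that detectable target
# concentrations were several-fold higher after normalization.
```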

  7. Eodataservice.org: Big Data Platform to Enable Multi-disciplinary Information Extraction from Geospatial Data

    Science.gov (United States)

    Natali, S.; Mantovani, S.; Barboni, D.; Hogan, P.

    2017-12-01

    In 1999, US Vice-President Al Gore outlined the concept of 'Digital Earth' as a multi-resolution, three-dimensional representation of the planet to find, visualise and make sense of vast amounts of geo-referenced information on physical and social environments, allowing users to navigate through space and time, accessing historical and forecast data to support scientists, policy-makers, and any other user. The eodataservice platform (http://eodataservice.org/) implements the Digital Earth concept: eodataservice is a cross-domain platform that makes available a large set of multi-year global environmental collections allowing data discovery, visualization, combination, processing and download. It implements a "virtual datacube" approach where data stored on distributed data centers are made available via standardized OGC-compliant interfaces. Dedicated web-based graphical user interfaces (based on the ESA-NASA WebWorldWind technology) as well as web-based notebooks (e.g. Jupyter notebooks), desktop GIS tools and command-line interfaces can be used to access and manipulate the data. The platform can be fully customized to users' needs. So far eodataservice has been used for the following thematic applications: high-resolution satellite data distribution; land surface monitoring using SAR surface deformation data; atmosphere, ocean and climate applications; climate-health applications; urban environment monitoring; safeguarding of cultural heritage sites; and support to farmers and (re)insurances in the agricultural field. In the current work, the EO Data Service concept is presented as a key enabling technology; furthermore, various examples are provided to demonstrate the high level of interdisciplinarity of the platform.

  8. A METHOD OF EXTRACTING SHORELINE BASED ON SEMANTIC INFORMATION USING DUAL-LENGTH LiDAR DATA

    Directory of Open Access Journals (Sweden)

    C. Yao

    2017-09-01

    Full Text Available Shoreline is a spatially varying separation between water and land. By utilizing dual-wavelength LiDAR point data together with the semantic information that a shoreline often appears beyond the water surface profile and is observable on the beach, the paper generates the shoreline as follows: (1) Gain the water surface profile: first we obtain the water surface by roughly selecting water points based on several features of the water body, then apply the least squares fitting method to get the whole water trend surface. We then get the ground surface connecting to the under-water surface by both the TIN progressive filtering method and a surface interpolation method. After that, the two fitted surfaces are intersected to get the water surface profile of the island. (2) Gain the sandy beach: we grid all points and select the water-surface-profile grid points as seeds, then extract sandy beach points based on an eight-neighborhood method and several features, yielding all sandy beaches. (3) Get the island shoreline: first we get the sandy beach shoreline based on intensity information, using a threshold value to distinguish wet and dry areas, and thereby obtain the shoreline of several sandy beaches. To some extent, the shoreline has the same height values within a small area; by using all the sandy shoreline points to fit a plane P, the intersection line of the ground surface and the shoreline plane P can be regarded as the island shoreline. Comparison with the surveyed shoreline shows that the proposed method can successfully extract the shoreline.
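
    The plane fit in step (3) is a standard least-squares problem. A sketch on synthetic sandy-shoreline points (all values illustrative):

```python
import numpy as np

# Fit a plane z = a*x + b*y + c to sandy-beach shoreline points; its
# intersection with the ground surface is then taken as the shoreline.
rng = np.random.default_rng(2)
x = rng.uniform(0, 100, 500)
y = rng.uniform(0, 100, 500)
z = 1.8 + 0.001 * x - 0.002 * y + rng.normal(0, 0.03, 500)  # near-constant height

A = np.column_stack([x, y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

print(f"shoreline plane: z = {a:.4f}*x + {b:.4f}*y + {c:.3f}")
# Because shoreline height varies little within a small area, a and b
# come out near zero and c approximates the local water-line elevation.
```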

  9. Quantitative and qualitative effects of phosphorus on extracts and exudates of sudangrass roots in relation to vesicular-arbuscular mycorrhiza formation.

    Science.gov (United States)

    Schwab, S M; Menge, J A; Leonard, R T

    1983-11-01

    A comparison was made of water-soluble root exudates and extracts of Sorghum vulgare Pers. grown under two levels of P nutrition. An increase in P nutrition significantly decreased the concentration of carbohydrates, carboxylic acids, and amino acids in exudates, and decreased the concentration of carboxylic acids in extracts. Higher P did not affect the relative proportions of specific carboxylic acids and had little effect on proportions of specific amino acids in both extracts and exudates. Phosphorus amendment resulted in an increase in the relative proportion of arabinose and a decrease in the proportion of fructose in exudates, but did not have a large effect on the proportion of individual sugars in extracts. The proportions of specific carbohydrates, carboxylic acids, and amino acids varied between exudates and extracts. Therefore, the quantity and composition of root extracts may not be a reliable predictor of the availability of substrate for symbiotic vesicular-arbuscular mycorrhizal fungi. Comparisons of the rate of leakage of compounds from roots with the growth rate of vesicular-arbuscular mycorrhizal fungi suggest that the fungus must either be capable of using a variety of organic substrates for growth, or be capable of inducing a much higher rate of movement of specific organic compounds across root cell membranes than occurs through passive exudation as measured in this study.

  10. A Quantitative Study on Japanese Internet User's Awareness to Information Security: Necessity and Importance of Education and Policy

    OpenAIRE

    Toshihiko Takemura; Atsushi Umino

    2009-01-01

    In this paper, the authors examine whether or not there are differences in Japanese Internet users' awareness of information security based on individual attributes, by using analysis of variance based on a non-parametric method. As a result, generally speaking, it is found that Japanese Internet users' awareness of information security does differ by individual attributes. Especially, the authors verify that the users who received the in...

  11. Laser heat stimulation of tiny skin areas adds valuable information to quantitative sensory testing in postherpetic neuralgia.

    Science.gov (United States)

    Franz, Marcel; Spohn, Dorothee; Ritter, Alexander; Rolke, Roman; Miltner, Wolfgang H R; Weiss, Thomas

    2012-08-01

    Patients suffering from postherpetic neuralgia often complain about hypo- or hypersensation in the affected dermatome. The loss of thermal sensitivity has been demonstrated by quantitative sensory testing as being associated with small-fiber (Aδ- and C-fiber) deafferentation. We aimed to compare laser stimulation (radiant heat) to thermode stimulation (contact heat) with regard to their sensitivity and specificity to detect thermal sensory deficits related to small-fiber dysfunction in postherpetic neuralgia. We contrasted detection rate of laser stimuli with 5 thermal parameters (thresholds of cold/warm detection, cold/heat pain, and sensory limen) of quantitative sensory testing. Sixteen patients diagnosed with unilateral postherpetic neuralgia and 16 age- and gender-matched healthy control subjects were tested. Quantitative sensory testing and laser stimulation of tiny skin areas were performed in the neuralgia-affected skin and in the contralateral homologue of the neuralgia-free body side. Across the 5 thermal parameters of thermode stimulation, only one parameter (warm detection threshold) revealed sensory abnormalities (thermal hypoesthesia to warm stimuli) in the neuralgia-affected skin area of patients but not in the contralateral area, as compared to the control group. In contrast, patients perceived significantly less laser stimuli both in the affected skin and in the contralateral skin compared to controls. Overall, laser stimulation proved more sensitive and specific in detecting thermal sensory abnormalities in the neuralgia-affected skin, as well as in the control skin, than any single thermal parameter of thermode stimulation. Thus, laser stimulation of tiny skin areas might be a useful diagnostic tool for small-fiber dysfunction. Copyright © 2012 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  12. Intelligent technology of information analysis and quantitative evaluation of condition with preliminary and inaccurate data of current observations

    Directory of Open Access Journals (Sweden)

    Yu. Savva

    1999-08-01

    Full Text Available The article presents an approach to the design of data processing systems for ecological monitoring based on modern information technologies: data mining, genetic algorithms and geoinformatics.

  13. Quantitative determination of plant phenolics in Urtica dioica extracts by high-performance liquid chromatography coupled with tandem mass spectrometric detection.

    Science.gov (United States)

    Orčić, Dejan; Francišković, Marina; Bekvalac, Kristina; Svirčev, Emilija; Beara, Ivana; Lesjak, Marija; Mimica-Dukić, Neda

    2014-01-15

    A method for the quantification of 45 plant phenolics (including benzoic acids, cinnamic acids, flavonoid aglycones, C- and O-glycosides, coumarins, and lignans) in plant extracts was developed, based on reversed-phase HPLC separation of extract components followed by tandem mass spectrometric detection. The phenolic profile of 80% MeOH extracts of stinging nettle (Urtica dioica L.) herb, root, stem, leaf and inflorescence was obtained using this method. Twenty-one of the investigated compounds were present at levels above the reliable quantification limit, with 5-O-caffeoylquinic acid, rutin and isoquercitrin being the most abundant. The inflorescence extracts were by far the richest in phenolics, with the investigated compounds amounting to 2.5-5.1% by weight. In contrast, the root extracts were poor in phenolics, with only several acids and derivatives present in significant amounts. The results obtained by the developed method represent the most detailed U. dioica chemical profile so far. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. In-line monitoring of extraction process of scutellarein from Erigeron breviscapus (vant.) Hand-Mazz based on qualitative and quantitative uses of near-infrared spectroscopy.

    Science.gov (United States)

    Wu, Yongjiang; Jin, Ye; Ding, Haiying; Luan, Lianjun; Chen, Yong; Liu, Xuesong

    2011-09-01

    The application of near-infrared (NIR) spectroscopy to the in-line monitoring of the extraction of scutellarein from Erigeron breviscapus (vant.) Hand-Mazz was investigated. For the NIR measurements, two fiber optic probes designed to transmit NIR radiation through a 2 mm pathlength flow cell were used to collect spectra in real time. High performance liquid chromatography (HPLC) was used as a reference method to determine scutellarein in the extract solution. A partial least squares regression (PLSR) calibration model built on Savitzky-Golay smoothed NIR spectra in the 5450-10,000 cm(-1) region gave satisfactory predictive results for scutellarein: the correlation coefficients of calibration and cross validation were 0.9967 and 0.9811, and the root mean square errors of calibration and cross validation were 0.044 and 0.105, respectively. Furthermore, both the moving block standard deviation (MBSD) method and a conformity test were used to identify the end point of the extraction process, providing real-time data and instant feedback about the extraction course. The results indicate that NIR spectroscopy provides an efficient and environmentally friendly approach for fast determination of scutellarein and end-point control of the extraction process. Copyright © 2011 Elsevier B.V. All rights reserved.
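
    A minimal sketch of the calibration arithmetic this record describes (Savitzky-Golay smoothing, a PLSR model, and calibration versus cross-validation statistics), using scipy and scikit-learn. The spectra, window length and number of latent variables below are assumptions for illustration, not the authors' settings.

```python
# Sketch: PLSR calibration on Savitzky-Golay smoothed NIR spectra.
# All data are synthetic; window length and component count are assumptions.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))        # 60 spectra x 500 wavenumber points
y = rng.uniform(0.1, 2.0, size=60)    # HPLC reference values for scutellarein

X_smooth = savgol_filter(X, window_length=11, polyorder=2, axis=1)

pls = PLSRegression(n_components=5)
pls.fit(X_smooth, y)
y_cal = pls.predict(X_smooth).ravel()
y_cv = cross_val_predict(pls, X_smooth, y, cv=10).ravel()

rmsec = np.sqrt(np.mean((y - y_cal) ** 2))    # RMSE of calibration
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))    # RMSE of cross validation
print(f"r(cal) = {np.corrcoef(y, y_cal)[0, 1]:.4f}, RMSEC = {rmsec:.3f}")
print(f"r(cv)  = {np.corrcoef(y, y_cv)[0, 1]:.4f}, RMSECV = {rmsecv:.3f}")
```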

  15. Acceptance Factors Influencing Adoption of National Institute of Standards and Technology Information Security Standards: A Quantitative Study

    Science.gov (United States)

    Kiriakou, Charles M.

    2012-01-01

    Adoption of a comprehensive information security governance model and security controls is the best option organizations may have to protect their information assets and comply with regulatory requirements. Understanding acceptance factors of the National Institute of Standards and Technology (NIST) Risk Management Framework (RMF) comprehensive…

  16. An audit of the reliability of influenza vaccination and medical information extracted from eHealth records in general practice.

    Science.gov (United States)

    Regan, Annette K; Gibbs, Robyn A; Effler, Paul V

    2018-05-31

    To evaluate the reliability of information in general practice (GP) electronic health records (EHRs), 2100 adult patients were randomly selected for interview regarding the presence of specific medical conditions and recent influenza vaccination. Agreement between self-report and data extracted from EHRs was compared using Cohen's kappa coefficient (k) and interpreted in accordance with Altman's Kappa Benchmarking criteria; 377 (18%) patients declined participation, and 608 (29%) could not be contacted. Of 1115 (53%) remaining, 856 (77%) were active patients (≥3 visits to the GP practice in the last two years) who provided complete information for analysis. Although a higher proportion of patients self-reported being vaccinated or having a medical condition compared to the EHR (50.7% vs 36.9%, and 39.4% vs 30.3%, respectively), there was "good" agreement between self-report and EHR for both vaccination status (κ = 0.67) and medical conditions (κ = 0.66). These findings suggest EHR may be useful for public health surveillance. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.
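
    The agreement statistic reported above can be reproduced in a few lines; the sketch below uses invented self-report and EHR vectors (scikit-learn's cohen_kappa_score implements the same Cohen's kappa).

```python
# Sketch: Cohen's kappa between self-report and EHR, as in the record above.
# The two vectors are invented; 1 = vaccinated, 0 = not vaccinated.
from sklearn.metrics import cohen_kappa_score

self_report = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]   # interview answers
ehr_record  = [1, 0, 0, 1, 0, 0, 1, 1, 0, 0]   # extracted from the EHR

kappa = cohen_kappa_score(self_report, ehr_record)
print(f"kappa = {kappa:.2f}")   # ~0.61-0.80 counts as "good" agreement
```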

  17. Basicity determination for neutral organophosphorus extractants by the 31P NMR method in two-phase systems, and quantitative interrelations of acid-base extractive properties

    International Nuclear Information System (INIS)

    Laskorin, B.N.; Yakshin, V.V.; Meshcheryakov, N.M.; Yagodin, V.G.

    1988-01-01

    Consideration is given to a method for determining the basicity of neutral organophosphorus compounds of the XGZP=O type (X, G, Z = C4H9, C8H17, C6H5). The method is based on the change in the chemical shift of phosphorus-31 nuclei in a two-phase extraction system as a function of the acidity functions H0, HA and HPO. It is shown that the method can be used to evaluate and forecast the extractive ability of phosphine oxides in the solvent extraction of UO2SO4 from aqueous solutions of sulfuric acid.

  18. QUANTITATIVE DETERMINATION OF CHIRAL DICHLORPROP AND MECOPROP ENANTIOMERS IN DRINKING AND SURFACE WATERS BY SOLID-PHASE EXTRACTION AND CAPILLARY ELECTROPHORESIS

    Czech Academy of Sciences Publication Activity Database

    Tříska, Jan; Vrchotová, Naděžda

    2002-01-01

    Vol. 11, No. 7 (2002), pp. 332-336 ISSN 1018-4619 Institutional research plan: CEZ:AV0Z6087904 Keywords: capillary electrophoresis * solid-phase extraction * chiral herbicides Subject RIV: CB - Analytical Chemistry, Separation Impact factor: 0.309, year: 2002

  19. Liquid chromatography-tandem mass spectrometric assay for the quantitative determination of the tyrosine kinase inhibitor quizartinib in mouse plasma using salting-out liquid-liquid extraction

    NARCIS (Netherlands)

    Retmana, Irene A; Wang, Jing; Schinkel, Alfred H; Schellens, Jan H M; Beijnen, Jos H; Sparidans, Rolf W

    2017-01-01

    A bioanalytical assay for quizartinib (a potent and selective FLT3 tyrosine kinase inhibitor) in mouse plasma was developed and validated. Salting-out assisted liquid-liquid extraction (SALLE), using acetonitrile and magnesium sulfate, was selected as sample pretreatment, with deuterated quizartinib

  20. A quantum-mechanical approach to the construction of quantitative assessments of some documentary information properties (on the example of nuclear knowledge)

    International Nuclear Information System (INIS)

    Lebedev, A A; Maksimov, N V; Smirnova, E V

    2017-01-01

    The paper presents a model of information interactions based on a probabilistic concept of meanings. The proposed hypothesis about the wave nature of information, together with the mathematical apparatus of quantum mechanics, allows one to consider the phenomena of interference and diffraction with respect to linguistic variables, and to quantify the dynamics of terms in subject areas. The retrospective INIS IAEA database was used as the experimental base. (paper)

  1. Quantitative Indicators for Behaviour Drift Detection from Home Automation Data.

    Science.gov (United States)

    Veronese, Fabio; Masciadri, Andrea; Comai, Sara; Matteucci, Matteo; Salice, Fabio

    2017-01-01

    The diffusion of Smart Homes provides an opportunity to implement elderly monitoring, extending seniors' independence and avoiding unnecessary assistance costs. Information concerning the inhabitant's behaviour is contained in home automation data and can be extracted by means of quantitative indicators. The application of such an approach proves that it can evidence behaviour changes.

  2. Quantitative radiography

    International Nuclear Information System (INIS)

    Brase, J.M.; Martz, H.E.; Waltjen, K.E.; Hurd, R.L.; Wieting, M.G.

    1986-01-01

    Radiographic techniques have been used in nondestructive evaluation primarily to develop qualitative information (i.e., defect detection). This project applies and extends the techniques developed in medical x-ray imaging, particularly computed tomography (CT), to develop quantitative information (both spatial dimensions and material quantities) on the three-dimensional (3D) structure of solids. Accomplishments in FY 86 include (1) improvements in experimental equipment - an improved microfocus system that will give 20-μm resolution and has potential for increased imaging speed, and (2) development of a simple new technique for displaying 3D images so as to clearly show the structure of the object. Image reconstruction and data analysis for a series of synchrotron CT experiments conducted by LLNL's Chemistry Department has begun

  3. Quantitative aspects of directly coupled supercritical fluid extraction-capillary gas chromatography with a conventional split/splitless injector as interface

    OpenAIRE

    Lou, X.W.; Janssen, J.G.M.; Cramers, C.A.

    1993-01-01

    The quantitative aspects of online supercritical fluid extraction-capillary gas chromatography (SFE-GC) with a split/splitless injector as interface were studied. Special attention was paid to the discrimination behavior and the reproducibility of the split/splitless interface. A simple experimental set-up is proposed that allows accurate quantitation in online SFE-split GC. The results obtained in online SFE-GC compare favorably with those from conventional GC with split injection. Discrimination is absent when wor...

  4. Correlation Feature Selection and Mutual Information Theory Based Quantitative Research on Meteorological Impact Factors of Module Temperature for Solar Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Yujing Sun

    2016-12-01

    Full Text Available The module temperature is the most important parameter influencing the output power of solar photovoltaic (PV) systems, aside from solar irradiance. In this paper, we focus on interdisciplinary research that combines correlation analysis, mutual information (MI) and heat transfer theory, aiming to figure out the correlative relations between different meteorological impact factors (MIFs) and PV module temperature from both qualitative and quantitative aspects. The identification and confirmation of the primary MIFs of PV module temperature are investigated as the first step of this research, from the perspective of physical meaning and mathematical analysis of the electrical performance and thermal characteristics of PV modules based on the PV effect and heat transfer theory. Furthermore, the quantitative influence of the MIFs on PV module temperature is mathematically formulated as several indexes using correlation-based feature selection (CFS) and MI theory, to explore the specific impact degrees under four different typical weather statuses named general weather classes (GWCs). Case studies for the proposed methods were conducted using actual measurement data from a 500 kW grid-connected solar PV plant in China. The results not only verified the knowledge about the main MIFs of PV module temperature, but more importantly provide the specific quantitative impact degrees of these three MIFs, through CFS- and MI-based measures, under the four GWCs.
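
    A hedged sketch of the mutual-information scoring idea (not the paper's exact CFS/MI pipeline): rank synthetic meteorological factors against a toy module-temperature model with scikit-learn's mutual_info_regression. The factor names and the steady-state relation are assumptions.

```python
# Sketch: mutual information between meteorological factors and module
# temperature. Data come from a toy steady-state model, not a real PV plant.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)
n = 1000
irradiance = rng.uniform(0, 1000, n)    # W/m^2
ambient_t = rng.uniform(-5, 35, n)      # deg C
wind_speed = rng.uniform(0, 12, n)      # m/s

# Assumed toy relation: irradiance heats the module, wind cools it
module_t = ambient_t + 0.03 * irradiance - 1.5 * wind_speed + rng.normal(0, 1, n)

X = np.column_stack([irradiance, ambient_t, wind_speed])
mi = mutual_info_regression(X, module_t, random_state=0)
for name, score in zip(["irradiance", "ambient_t", "wind_speed"], mi):
    print(f"{name:10s}  MI = {score:.3f}")
```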

  5. Screening and quantitative determination of twelve acidic and neutral pharmaceuticals in whole blood by liquid-liquid extraction and liquid chromatography-tandem mass spectrometry

    DEFF Research Database (Denmark)

    Simonsen, Kirsten Wiese; Steentoft, Anni; Buck, Maike

    2010-01-01

    We describe a multi-method for simultaneous identification and quantification of 12 acidic and neutral compounds in whole blood. The method involves a simple liquid-liquid extraction, and the identification and quantification are performed using liquid chromatography-tandem mass spectrometry. The method was fully validated for salicylic acid, paracetamol, phenobarbital, carisoprodol, meprobamate, topiramate, etodolac, chlorzoxazone, furosemide, ibuprofen, warfarin, and salicylamide. The method also tentatively includes thiopental, theophylline, piroxicam, naproxen, diclophenac, and modafinil.

  6. Quantitative Analysis of Bioactive Compounds In Extract and Fraction of Star Fruit (Averrhoa carambola L.) Leaves Using High Performance Liquid Chromatography

    Directory of Open Access Journals (Sweden)

    Nanang Yunarto

    2017-05-01

    Full Text Available Star fruit (Averrhoa carambola L.) has potential as a raw material for medicine and is native to tropical areas, including Indonesia. According to other studies, star fruit leaves contain the flavonoids apigenin and quercetin, which are potential anti-inflammatory and anticancer agents. Raw materials for drugs in Indonesia are mostly obtained through imports from other countries. In order to support the independence of traditional medicine raw materials, it is important to standardize their quality, in this case of star fruit leaves, by a High Performance Liquid Chromatography (HPLC) method. The samples used were star fruit leaf extract obtained by maceration in 70% ethanol, and the water, ethyl acetate and hexane fractions obtained by fractionation of the ethanolic extract. The physical parameters analyzed include appearance, color, odor, taste, extract yield, water content, loss on drying, total ash content, and residual solvent. The chemical parameters analyzed include apigenin and quercetin contents. The results show that the star fruit leaves used in this study meet the standards of the Indonesian Herbal Pharmacopoeia, with the highest apigenin and quercetin contents found in the ethyl acetate fraction.

  7. A Quantitative Examination of Perceived Promotability of Information Security Professionals with Vendor-Specific Certifications versus Vendor-Neutral Certifications

    Science.gov (United States)

    Gleghorn, Gregory D.

    2011-01-01

    Human capital theory suggests the knowledge, skills, and abilities one obtains through experience, on-the-job training, or education enhances one's productivity. This research was based on human capital theory and promotability (i.e., upward mobility). The research offered in this dissertation shows what effect obtaining information security…

  8. A Quantitative Study of Factors Contributing to Perceived Job Satisfaction of Information Technology Professionals Working in California Community Colleges

    Science.gov (United States)

    Temple, James Christian

    2013-01-01

    Purpose: The purpose of this replication study was to understand job satisfaction factors (work, pay, supervision, people, opportunities for promotion, and job in general) as measured by the abridged Job Descriptive Index (aJDI) and the abridged Job in General (aJIG) scale for information technology (IT) professionals working in California…

  9. Second line drug susceptibility testing to inform the treatment of rifampin-resistant tuberculosis: a quantitative perspective

    Directory of Open Access Journals (Sweden)

    Emily A. Kendall

    2017-03-01

    Full Text Available Treatment failure and resistance amplification are common among patients with rifampin-resistant tuberculosis (TB). Drug susceptibility testing (DST) for second-line drugs is recommended for these patients, but logistical difficulties have impeded widespread implementation of second-line DST in many settings. To provide a quantitative perspective on the decision to scale up second-line DST, we synthesize literature on the prevalence of second-line drug resistance, the expected clinical and epidemiologic benefits of using second-line DST to ensure that patients with rifampin-resistant TB receive effective regimens, and the costs of implementing (or not implementing) second-line DST for all individuals diagnosed with rifampin-resistant TB. We conclude that, in most settings, second-line DST could substantially improve treatment outcomes for patients with rifampin-resistant TB, reduce transmission of drug-resistant TB, prevent amplification of drug resistance, and be affordable or even cost-saving. Given the large investment made in each patient treated for rifampin-resistant TB, these payoffs would come at relatively small incremental cost. These anticipated benefits likely justify addressing the real challenges faced in implementing second-line DST in most high-burden settings.

  10. Quantitative Measurement of Eyestrain on 3D Stereoscopic Display Considering the Eye Foveation Model and Edge Information

    Directory of Open Access Journals (Sweden)

    Hwan Heo

    2014-05-01

    Full Text Available We propose a new method for measuring the degree of eyestrain on 3D stereoscopic displays using a glasses-type eye tracking device. Our study is novel in the following four ways: first, the circular area where a user's gaze position exists is defined based on the calculated gaze position and gaze estimation error. Within this circular area, the position where edge strength is maximized can be detected, and we determine this position as the gaze position that has a higher probability of being the correct one. Based on this gaze point, the eye foveation model is defined. Second, we quantitatively evaluate the correlation between the degree of eyestrain and the causal factors of visual fatigue, such as the degree of change of stereoscopic disparity (CSD), stereoscopic disparity (SD), frame cancellation effect (FCE), and edge component (EC) of the 3D stereoscopic display, using the eye foveation model. Third, by comparing the eyestrain in conventional 3D video and experimental 3D sample video, we analyze the characteristics of eyestrain according to various factors and types of 3D video. Fourth, by comparing the eyestrain with and without compensation for saccadic eye movements in 3D video, we analyze the characteristics of eyestrain according to the types of eye movements in 3D video. Experimental results show that the degree of CSD causes more eyestrain than the other factors.

  11. Geochemical Analysis of Organic Matter Extracted from Sedimentary Rocks. IV. Extraction from Small Amounts of Rock

    Directory of Open Access Journals (Sweden)

    Monin J. C.

    2006-11-01

    Full Text Available A Soxhlet extractor cannot be used when rock samples are too small. In the course of developing an extraction procedure for such cases, the influence of a number of operating conditions on extraction yield is examined: temperature, duration, nature and amount of solvent, presence of light, presence of air, and extraction process. For hydrocarbons, both saturated and aromatic, the essential factor is the stirring of the extraction medium; the nature of the solvent is not critical, provided that a very poor solvent for hydrocarbons is not chosen: extractability depends more on the desorbing power with respect to the rock than on the solvent power itself. For resins and asphaltenes, interpretation of the results is delicate, because there is no sharp boundary between simply dissolved products, solvolysis products, and products newly formed by solvent-organic matter-mineral matter interactions. There is therefore no extraction procedure that can be recommended in absolute terms; everything depends on the analytical, and also practical, requirements of the laboratory. At the Institut Français du Pétrole (IFP), the adopted procedure is extraction in a beaker with magnetic stirring for 20 min in chloroform at approximately 50 °C. The procedure for evaporating the solvent and recovering the extract is also given; it must be designed carefully given the small quantities involved.

  12. Perception versus reality: Bridging the gap between quantitative and qualitative information relating to the risks of uranium mining

    International Nuclear Information System (INIS)

    Needham, S.

    2002-01-01

    Environmental impact of uranium mining in Australia is frequently raised as an issue of public concern. However, the level of concern both in terms of public agitation and political response has diminished over the last decade, largely as a consequence of many years of demonstrated high levels of environmental protection achieved at Australian uranium mines. Another reason is because of improved information now accessible to the public on mine environmental management systems, monitoring results, and audit outcomes. This paper describes some communication methods developed for the uranium mines of the Alligator Rivers Region of the Northern Territory. These methods have improved the effectiveness of dialogue between stakeholders, and better inform the public about the levels of environmental protection achieved and the level of risk to the environment and the community. A simple approach is described which has been developed to help build a mutual understanding between technocrats and the lay person on perceptions of risk and actual environmental impact. (author)

  13. Correlates and predictors of loneliness in older-adults: a review of quantitative results informed by qualitative insights.

    Science.gov (United States)

    Cohen-Mansfield, Jiska; Hazan, Haim; Lerman, Yaffa; Shalom, Vera

    2016-04-01

    Older persons are particularly vulnerable to loneliness because of common age-related changes and losses. This paper reviews predictors of loneliness in the older population as described in the current literature and a small qualitative study. Peer-reviewed journal articles were identified from psycINFO, MEDLINE, and Google Scholar from 2000-2012. Overall, 38 articles were reviewed. Two focus groups were conducted asking older participants about the causes of loneliness. Variables significantly associated with loneliness in older adults were: female gender, non-married status, older age, poor income, lower educational level, living alone, low quality of social relationships, poor self-reported health, and poor functional status. Psychological attributes associated with loneliness included poor mental health, low self-efficacy beliefs, negative life events, and cognitive deficits. These associations were mainly studied in cross-sectional studies. In the focus groups, participants mentioned environmental barriers, unsafe neighborhoods, migration patterns, inaccessible housing, and inadequate resources for socializing. Other issues raised in the focus groups were the relationship between loneliness and boredom and inactivity, the role of recent losses of family and friends, as well as mental health issues, such as shame and fear. Future quantitative studies are needed to examine the impact of physical and social environments on loneliness in this population. It is important to better map the multiple factors and ways by which they impact loneliness to develop better solutions for public policy, city, and environmental planning, and individually based interventions. This effort should be viewed as a public health priority.

  14. Rapid qualitative and quantitative analysis of opiates in extract of poppy head via FTIR and chemometrics: towards in-field sensors.

    Science.gov (United States)

    Turner, Nicholas W; Cauchi, Michael; Piletska, Elena V; Preston, Christopher; Piletsky, Sergey A

    2009-07-15

    Identification and quantification of the opiates morphine and thebaine was achieved in three commercial poppy cultivars using FTIR-ATR spectroscopy, from a simple and rapid methanolic extraction suitable for field analysis. The limits of detection were 0.13 mg/ml (0.013%, w/v) and 0.3 mg/ml (0.03%, w/v), respectively. The concentrations of opiates present were verified by HPLC-MS. Chemometrics was used to identify specific "signature" peaks in the poppy IR spectra, characterising each cultivar by its unique fingerprint and offering a potential forensic application in opiate crop analysis.
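
    The record reports limits of detection without stating the convention used; the sketch below illustrates the common ICH-style estimate from a linear calibration curve (LOD = 3.3 s/m, LOQ = 10 s/m), with invented calibration data.

```python
# Sketch: ICH-style detection and quantification limits from a linear
# calibration curve (LOD = 3.3*s/m, LOQ = 10*s/m). Calibration data invented.
import numpy as np

conc = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 4.0])          # standards, mg/ml
signal = np.array([0.02, 0.11, 0.21, 0.40, 0.83, 1.61])   # instrument response

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
s = residuals.std(ddof=2)          # SD of the regression residuals

print(f"LOD = {3.3 * s / slope:.3f} mg/ml, LOQ = {10.0 * s / slope:.3f} mg/ml")
```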

  15. A COMPARATIVE ANALYSIS OF WEB INFORMATION EXTRACTION TECHNIQUES DEEP LEARNING vs. NAÏVE BAYES vs. BACK PROPAGATION NEURAL NETWORKS IN WEB DOCUMENT EXTRACTION

    OpenAIRE

    J. Sharmila; A. Subramani

    2016-01-01

    Research into web mining is becoming more essential these days because a large amount of information is managed through the web, and web usage is expanding in an uncontrolled way. A dedicated framework is required for managing such a large amount of information in the web space. Web mining is categorized into three major divisions: web content mining, web usage mining and web structure mining. Tak-Lam Wong has proposed a web content mining methodolog...

  16. An analytical framework for extracting hydrological information from time series of small reservoirs in a semi-arid region

    Science.gov (United States)

    Annor, Frank; van de Giesen, Nick; Bogaard, Thom; Eilander, Dirk

    2013-04-01

    small reservoirs in the Upper East Region of Ghana. Reservoirs without obvious large seepage losses (field survey) were selected. To verify this, stable water isotopic samples are collected from groundwater upstream and downstream from the reservoir. By looking at possible enrichment of downstream groundwater, a good estimate of seepage can be made in addition to estimates on evaporation. We estimated the evaporative losses and compared those with field measurements using eddy correlation measurements. Lastly, we determined the cumulative surface runoff curves for the small reservoirs. We will present this analytical framework for extracting hydrological information from time series of small reservoirs and show the first results for our study region of northern Ghana.

  17. Quantitative Retrieval of Organic Soil Properties from Visible Near-Infrared Shortwave Infrared (Vis-NIR-SWIR) Spectroscopy Using Fractal-Based Feature Extraction

    Directory of Open Access Journals (Sweden)

    Lanfa Liu

    2016-12-01

    Full Text Available Visible and near-infrared diffuse reflectance spectroscopy has been demonstrated to be a fast and cheap tool for estimating a large number of chemical and physical soil properties, and effective features extracted from spectra are crucial to correlating with these properties. We adopt a novel methodology for feature extraction in soil spectroscopy based on fractal geometry. The spectrum can be divided into multiple segments with different step-window pairs. For each segmented spectral curve, the fractal dimension value was calculated using variation estimators with power indices 0.5, 1.0 and 2.0. The fractal feature can then be generated by multiplying the fractal dimension value with the spectral energy. To assess and compare the performance of the newly generated features, we used organic soil samples from the large-scale European Land Use/Land Cover Area Frame Survey (LUCAS). Gradient-boosting regression models built using the XGBoost library with the soil spectral library were developed to estimate N, pH and soil organic carbon (SOC) contents. Features generated by the variogram estimator performed better than the two other estimators and principal component analysis (PCA). The estimation results for SOC were coefficient of determination (R2) = 0.85, root mean square error (RMSE) = 56.7 g/kg, ratio of percent deviation (RPD) = 2.59; for pH: R2 = 0.82, RMSE = 0.49, RPD = 2.31; and for N: R2 = 0.77, RMSE = 3.01 g/kg, RPD = 2.09. Even better results could be achieved when fractal features were combined with PCA components. Fractal features generated by the proposed method can improve estimation accuracies of soil properties while maintaining the original spectral curve shape.
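
    A rough sketch of the variation (variogram) estimator idea described above, for a single spectral segment. The lag range, the scaling relation assumed for a self-affine profile, and the synthetic segment are illustrative assumptions rather than the paper's exact implementation.

```python
# Sketch: a variogram-style fractal feature for one spectral segment.
# The lag range and the self-affine scaling relation are assumptions.
import numpy as np

def fractal_feature(y, p=2.0, max_lag=10):
    """Fractal dimension from a variation estimator with power index p,
    multiplied by the segment's spectral energy."""
    lags = np.arange(1, max_lag + 1)
    v = np.array([np.mean(np.abs(y[h:] - y[:-h]) ** p) for h in lags])
    slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
    d = 2.0 - slope / p            # dimension of a self-affine profile
    return d * float(np.sum(y ** 2))

rng = np.random.default_rng(2)
segment = np.cumsum(rng.normal(size=200))   # synthetic reflectance segment
print(f"fractal feature = {fractal_feature(segment, p=2.0):.1f}")
```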

  18. Use of qualitative and quantitative information in neural networks for assessing agricultural chemical contamination of domestic wells

    Science.gov (United States)

    Mishra, A.; Ray, C.; Kolpin, D.W.

    2004-01-01

    A neural network analysis of agrichemical occurrence in groundwater was conducted using data from a pilot study of 192 small-diameter drilled and driven wells and 115 dug and bored wells in Illinois, a regional reconnaissance network of 303 wells across 12 Midwestern states, and a study of 687 domestic wells across Iowa. Potential factors contributing to well contamination (e.g., depth to aquifer material, well depth, and distance to cropland) were investigated. These contributing factors were available in either numeric (actual or categorical) or descriptive (yes or no) format. A method was devised to use the numeric and descriptive values simultaneously. Training of the network was conducted using a standard backpropagation algorithm. Approximately 15% of the data was used for testing. Analysis indicated that training error was quite low for most data. Testing results indicated that it was possible to predict the contamination potential of a well with pesticides. However, predicting the actual level of contamination was more difficult. For pesticide occurrence in drilled and driven wells, the network predictions were good. The performance of the network was poorer for predicting nitrate occurrence in dug and bored wells. Although the data set for Iowa was large, the prediction ability of the trained network was poor, due to descriptive or categorical input parameters, compared with smaller data sets such as that for Illinois, which contained more numeric information.
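
    A toy illustration of the mixed-input idea described above: numeric attributes and yes/no descriptive attributes encoded together and fed to one backpropagation network. Feature names, the contamination rule and the 15% hold-out split are invented, and scikit-learn's MLPClassifier stands in for the study's network.

```python
# Sketch: numeric and yes/no descriptive attributes fed to one network.
# Feature names, the toy contamination rule and the split are invented.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 300
well_depth = rng.uniform(5, 100, n)          # numeric, metres
dist_to_cropland = rng.uniform(0, 500, n)    # numeric, metres
is_dug_or_bored = rng.integers(0, 2, n)      # descriptive yes/no -> 0/1

# Toy rule: shallow wells or wells close to cropland tend to be contaminated
contaminated = ((well_depth < 30) | (dist_to_cropland < 100)).astype(int)

X = StandardScaler().fit_transform(
    np.column_stack([well_depth, dist_to_cropland, is_dug_or_bored]))

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X[:255], contaminated[:255])         # ~15% of the data held out
print("test accuracy:", clf.score(X[255:], contaminated[255:]))
```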

  19. Comparison of mentha extracts obtained by different extraction methods

    Directory of Open Access Journals (Sweden)

    Milić Slavica

    2006-01-01

    Full Text Available Different methods of mentha extraction, such as steam distillation, extraction by methylene chloride (Soxhlet extraction) and supercritical fluid extraction (SFE) by carbon dioxide (CO2), were investigated. SFE by CO2 was performed at a pressure of 100 bar and a temperature of 40°C. The extraction yield, as well as the qualitative and quantitative composition of the obtained extracts, determined by GC-MS, were compared.

  20. A Quantitative Documentation of the Composition of Two Powdered Herbal Formulations (Antimalarial and Haematinic) Using Ethnomedicinal Information from Ogbomoso, Nigeria

    Directory of Open Access Journals (Sweden)

    Adepoju Tunde Joseph Ogunkunle

    2014-01-01

    Full Text Available The safety of many African traditional herbal remedies is doubtful due to lack of standardization. This study therefore attempted to standardize two polyherbal formulations from Ogbomoso, Oyo State, Nigeria, with respect to the relative proportions (weight-for-weight) of their botanical constituents. Information supplied by 41 local herbal practitioners was statistically screened for consistency and then used to quantify the composition of antimalarial (Maloff-HB) and haematinic (Haematol-B) powdered herbal formulations with nine and ten herbs, respectively. Maloff-HB contained the stem bark of Enantia chlorantha Oliv. (30.0), Alstonia boonei De Wild (20.0), Mangifera indica L. (10.0), Okoubaka aubrevillei Phelleg & Nomand (8.0), Pterocarpus osun Craib (4.0), root bark of Calliandra haematocephala Hassk (10.0), Sarcocephalus latifolius (J. E. Smith) E. A. Bruce (8.0), Parquetina nigrescens (Afz.) Bullock (6.0), and the vines of Cassytha filiformis L. (4.0), while Haematol-B was composed of the leaf sheath of Sorghum bicolor Moench (30.0), fruit calyx of Hibiscus sabdariffa L. (20.0), stem bark of Theobroma cacao L. (10.0), Khaya senegalensis (Desr.) A. Juss (5.5), Mangifera indica (5.5), root of Aristolochia ringens Vahl. (7.0), root bark of Sarcocephalus latifolius (5.5), Uvaria chamae P. Beauv. (5.5), Zanthoxylum zanthoxyloides (Lam.) Zepern & Timler (5.5), and seed of Garcinia kola Heckel (5.5). In pursuance of their general acceptability, the two herbal formulations are recommended for their pharmaceutical, phytochemical, and microbial qualities.

  1. Methodology development for quantitative optimization of security enhancement in medical information systems -Case study in a PACS and a multi-institutional radiotherapy database-.

    Science.gov (United States)

    Haneda, Kiyofumi; Umeda, Tokuo; Koyama, Tadashi; Harauchi, Hajime; Inamura, Kiyonari

    2002-01-01

    The target of our study is to establish a methodology for analyzing the level of security requirements, for searching for suitable security measures, and for optimizing the distribution of security over every portion of medical practice. Quantitative expression is introduced wherever possible, to permit easy follow-up of security procedures and easy evaluation of security outcomes. Results of system analysis by fault tree analysis (FTA) clarified that subdividing system elements in detail contributes to much more accurate analysis; such subdivided composition factors depended very much on the behavior of staff, interactive terminal devices, kinds of service, and routes of network. In conclusion, we found methods to analyze the level of security requirements for each medical information system by employing FTA, basic events for each composition factor, and combinations of basic events. Methods for searching for suitable security measures were also found, namely risk factors for each basic event, the number of elements for each composition factor, and candidate security measure elements. Finally, a method to optimize the security measures for each medical information system was proposed: the optimum distribution of risk factors in terms of basic events was figured out, and comparison between medical information systems became possible.
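
    A minimal sketch of the FTA-style quantification mentioned above: the top-event probability of a small fault tree computed from basic-event probabilities, assuming independent events. The gate structure and the probability values are invented for illustration.

```python
# Sketch: top-event probability of a small fault tree from basic-event
# probabilities, assuming independence. Gates and numbers are invented.
def or_gate(*p):
    """P(at least one of several independent events)."""
    q = 1.0
    for pi in p:
        q *= 1.0 - pi
    return 1.0 - q

def and_gate(*p):
    """P(all of several independent events)."""
    q = 1.0
    for pi in p:
        q *= pi
    return q

p_password_leak = 0.05       # staff behaviour
p_terminal_unlocked = 0.10   # interactive terminal devices
p_network_sniffed = 0.02     # network route

# Top event: unauthorized access to a medical record
p_top = or_gate(and_gate(p_password_leak, p_terminal_unlocked),
                p_network_sniffed)
print(f"P(top event) = {p_top:.4f}")
```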

  2. Rapid extraction and quantitative detection of the herbicide diuron in surface water by a hapten-functionalized carbon nanotubes based electrochemical analyzer.

    Science.gov (United States)

    Sharma, Priyanka; Bhalla, Vijayender; Tuteja, Satish; Kukkar, Manil; Suri, C Raman

    2012-05-21

    A solid phase extraction micro-cartridge containing a non-polar polystyrene absorbent matrix was coupled with an electrochemical immunoassay analyzer (EIA) and used for the ultra-sensitive detection of the phenyl urea herbicide diuron in real samples. The EIA was fabricated by using carboxylated carbon nanotubes (CNTs) functionalized with a hapten molecule (an amine functionalized diuron derivative). Screen printed electrodes (SPE) were modified with these haptenized CNTs, and specific in-house generated anti-diuron antibodies were used for bio-interface development. The immunodetection was realized in a competitive electrochemical immunoassay format using alkaline phosphatase labeled secondary anti-IgG antibody. The addition of 1-naphthyl phosphate substrate resulted in the production of an electrochemically active product, 1-naphthol, which was monitored by using differential pulse voltammetry (DPV). The assay exhibited excellent sensitivity and specificity, having a dynamic response range of 0.01 pg mL(-1) to 10 μg mL(-1) for diuron with a limit of detection of around 0.1 pg mL(-1) (n = 3) in standard water samples. The micro-cartridge coupled hapten-CNTs modified SPE provided an effective and efficient electrochemical immunoassay for the real-time monitoring of pesticide samples with a very high degree of sensitivity.

  3. Quantitative deviating effects of maple syrup extract supplementation on the hepatic gene expression of mice fed a high-fat diet.

    Science.gov (United States)

    Kamei, Asuka; Watanabe, Yuki; Shinozaki, Fumika; Yasuoka, Akihito; Shimada, Kousuke; Kondo, Kaori; Ishijima, Tomoko; Toyoda, Tsudoi; Arai, Soichi; Kondo, Takashi; Abe, Keiko

    2017-02-01

    Maple syrup contains various polyphenols, and we investigated the effects of a polyphenol-rich maple syrup extract (MSXH) on the physiology of mice fed a high-fat diet (HFD). The mice were fed a low-fat diet (LFD), an HFD, or an HFD supplemented with 0.02% (002MSXH) or 0.05% (005MSXH) MSXH for 4 weeks. Global gene expression analysis of the liver was performed, and the differentially expressed genes were classified into three expression patterns; pattern A (LFD 002MSXH = 005MSXH, LFD > HFD 005MSXH, LFD > HFD = 002MSXH 002MSXH HFD 005MSXH). Pattern A was enriched in glycolysis, fatty acid metabolism, and folate metabolism. Pattern B was enriched in the tricarboxylic acid cycle, while pattern C was enriched in gluconeogenesis, cholesterol metabolism, amino acid metabolism, and endoplasmic reticulum stress-related events. Our study suggested that the effects of MSXH ingestion showed (i) a dose-dependent pattern involving energy metabolism and (ii) a reverse pattern involving stress responses. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Quantitative determination of juvenile hormone III and 20-hydroxyecdysone in queen larvae and drone pupae of Apis mellifera by ultrasonic-assisted extraction and liquid chromatography with electrospray ionization tandem mass spectrometry.

    Science.gov (United States)

    Zhou, Jinhui; Qi, Yitao; Hou, Yali; Zhao, Jing; Li, Yi; Xue, Xiaofeng; Wu, Liming; Zhang, Jinzhen; Chen, Fang

    2011-09-01

    In this paper, a method for the rapid and sensitive analysis of juvenile hormone III (JH III) and 20-hydroxyecdysone (20E) in queen larvae and drone pupae samples was presented. Ultrasound-assisted extraction provided a significant shortening of the leaching time for the extraction of JH III and 20E and satisfactory sensitivity as compared to the conventional shake extraction procedure. After extraction, determination was carried out by liquid chromatography-tandem mass spectrometry (LC-MS/MS) operating in electrospray ionization positive ion mode via multiple reaction monitoring (MRM) without any clean-up step prior to analysis. A linear gradient consisting of (A) water containing 0.1% formic acid and (B) acetonitrile containing 0.1% formic acid, and a ZORBAX SB-Aq column (100 mm × 2.1 mm, 3.5 μm) were employed to obtain the best resolution of the target analytes. The method was validated for linearity, limit of quantification, recovery, matrix effects, precision and stability. Drone pupae samples were found to contain 20E at concentrations of 18.0 ± 0.1 ng/g (mean ± SD) and JH III was detected at concentrations of 0.20 ± 0.06 ng/g (mean ± SD) in queen larvae samples. This validated method provided some practical information for the actual content of JH III and 20E in queen larvae and drone pupae samples. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Extraction of wind and temperature information from hybrid 4D-Var assimilation of stratospheric ozone using NAVGEM

    Science.gov (United States)

    Allen, Douglas R.; Hoppel, Karl W.; Kuhl, David D.

    2018-03-01

    Extraction of wind and temperature information from stratospheric ozone assimilation is examined within the context of the Navy Global Environmental Model (NAVGEM) hybrid 4-D variational assimilation (4D-Var) data assimilation (DA) system. Ozone can improve the wind and temperature through two different DA mechanisms: (1) through the flow-of-the-day ensemble background error covariance that is blended together with the static background error covariance and (2) via the ozone continuity equation in the tangent linear model and adjoint used for minimizing the cost function. All experiments assimilate actual conventional data in order to maintain a similar realistic troposphere. In the stratosphere, the experiments assimilate simulated ozone and/or radiance observations in various combinations. The simulated observations are constructed for a case study based on a 16-day cycling truth experiment (TE), which is an analysis with no stratospheric observations. The impact of ozone on the analysis is evaluated by comparing the experiments to the TE for the last 6 days, allowing for a 10-day spin-up. Ozone assimilation benefits the wind and temperature when data are of sufficient quality and frequency. For example, assimilation of perfect (no applied error) global hourly ozone data constrains the stratospheric wind and temperature to within ˜ 2 m s-1 and ˜ 1 K. This demonstrates that there is dynamical information in the ozone distribution that can potentially be used to improve the stratosphere. This is particularly important for the tropics, where radiance observations have difficulty constraining wind due to breakdown of geostrophic balance. Global ozone assimilation provides the largest benefit when the hybrid blending coefficient is an intermediate value (0.5 was used in this study), rather than 0.0 (no ensemble background error covariance) or 1.0 (no static background error covariance), which is consistent with other hybrid DA studies. When perfect global ozone is
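
    The hybrid blending referred to above has a standard form, B = (1 - beta) * B_static + beta * B_ens, with beta = 0.5 in this study. A tiny numpy sketch with synthetic stand-in matrices:

```python
# Sketch: hybrid background error covariance as a weighted sum of a static
# and an ensemble-derived matrix; the 5x5 matrices here are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n_state = 5
B_static = np.eye(n_state)              # stand-in for the static covariance

ens = rng.normal(size=(20, n_state))    # 20 ensemble perturbations
B_ens = np.cov(ens, rowvar=False)       # flow-of-the-day covariance

beta = 0.5                              # hybrid blending coefficient
B_hybrid = (1.0 - beta) * B_static + beta * B_ens
print(B_hybrid.round(2))
```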

  7. Quantitative analysis of flavanones from citrus fruits by using mesoporous molecular sieve-based miniaturized solid phase extraction coupled to ultrahigh-performance liquid chromatography and quadrupole time-of-flight mass spectrometry.

    Science.gov (United States)

    Cao, Wan; Ye, Li-Hong; Cao, Jun; Xu, Jing-Jing; Peng, Li-Qing; Zhu, Qiong-Yao; Zhang, Qian-Yun; Hu, Shuai-Shuai

    2015-08-07

    An analytical procedure based on miniaturized solid phase extraction (SPE) and ultrahigh-performance liquid chromatography coupled with quadrupole time-of-flight tandem mass spectrometry was developed and validated for the determination of six flavanones in Citrus fruits. The mesoporous molecular sieve SBA-15 used as solid sorbent was characterised by Fourier transform-infrared spectroscopy and scanning electron microscopy. Additionally, compared with reported extraction techniques, the mesoporous SBA-15 based SPE method offered shorter analysis time and higher sensitivity. Furthermore, considering the different nature of the tested compounds, all of the parameters, including the SBA-15 amount, solution pH, elution solvent, and sorbent type, were investigated in detail. Under the optimum conditions, the instrumental detection and quantitation limits were less than 4.26 and 14.29 ng mL(-1), respectively. The recoveries obtained for all the analytes ranged from 89.22% to 103.46%. The experimental results suggested that SBA-15 is a promising material for the purification and enrichment of target flavanones from complex citrus fruit samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Extraction and analysis of reducing alteration information of oil-gas in Bashibulake uranium ore district based on ASTER remote sensing data

    International Nuclear Information System (INIS)

    Ye Fawang; Liu Dechang; Zhao Yingjun; Yang Xu

    2008-01-01

    Starting from an analysis of the spectral characteristics of sandstone affected by oil-gas reducing alteration in the Bashibulake ore district, a technique for extracting reducing-alteration information from ASTER data is presented. Several remote sensing anomaly zones with reducing-alteration information similar to that in the uranium deposit are interpreted in the study area. On this basis, the alteration anomalies are further classified by taking advantage of the multi-band SWIR coverage of ASTER data, and the geological significance of each class of anomaly is discussed. As a result, alteration anomalies favourable for uranium prospecting are selected, providing important information for uranium exploration in the surroundings of the Bashibulake uranium ore area. (authors)

  9. Payoff Information Biases a Fast Guess Process in Perceptual Decision Making under Deadline Pressure: Evidence from Behavior, Evoked Potentials, and Quantitative Model Comparison.

    Science.gov (United States)

    Noorbaloochi, Sharareh; Sharon, Dahlia; McClelland, James L

    2015-08-05

    We used electroencephalography (EEG) and behavior to examine the role of payoff bias in a difficult two-alternative perceptual decision under deadline pressure in humans. The findings suggest that a fast guess process, biased by payoff and triggered by stimulus onset, occurred on a subset of trials and raced with an evidence accumulation process informed by stimulus information. On each trial, the participant judged whether a rectangle was shifted to the right or left and responded by squeezing a right- or left-hand dynamometer. The payoff for each alternative (which could be biased or unbiased) was signaled 1.5 s before stimulus onset. The choice response was assigned to the first hand reaching a squeeze force criterion and reaction time was defined as time to criterion. Consistent with a fast guess account, fast responses were strongly biased toward the higher-paying alternative and the EEG exhibited an abrupt rise in the lateralized readiness potential (LRP) on a subset of biased payoff trials contralateral to the higher-paying alternative ∼ 150 ms after stimulus onset and 50 ms before stimulus information influenced the LRP. This rise was associated with poststimulus dynamometer activity favoring the higher-paying alternative and predicted choice and response time. Quantitative modeling supported the fast guess account over accounts of payoff effects supported in other studies. Our findings, taken with previous studies, support the idea that payoff and prior probability manipulations produce flexible adaptations to task structure and do not reflect a fixed policy for the integration of payoff and stimulus information. Humans and other animals often face situations in which they must make choices based on uncertain sensory information together with information about expected outcomes (gains or losses) about each choice. We investigated how differences in payoffs between available alternatives affect neural activity, overt choice, and the timing of choice
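
    A toy simulation of the race described above, in which a payoff-biased fast guess races a drift-diffusion accumulator. All parameter values are invented; the sketch only shows the qualitative prediction that fast responses are payoff-biased while slow responses are stimulus-driven.

```python
# Toy race: a payoff-biased fast guess versus a drift-diffusion accumulator.
# All parameter values are invented; choice 1 = higher-paying alternative.
import numpy as np

rng = np.random.default_rng(5)

def trial(p_guess=0.3, bias=0.8, drift=0.12, thresh=3.0):
    if rng.random() < p_guess:                     # fast guess fires
        choice = 1 if rng.random() < bias else 0   # biased toward high payoff
        return choice, rng.normal(250.0, 30.0)     # fast response time (ms)
    x, t = 0.0, 0                                  # evidence accumulation
    while abs(x) < thresh:
        x += drift + rng.normal()
        t += 1
    return (1 if x > 0 else 0), 300.0 + 10.0 * t   # slower, stimulus-driven

results = [trial() for _ in range(5000)]
fast = [c for c, rt in results if rt < 300]
slow = [c for c, rt in results if rt >= 300]
print("P(high payoff | fast RT):", round(sum(fast) / len(fast), 3))
print("P(high payoff | slow RT):", round(sum(slow) / len(slow), 3))
```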

  10. Quality control in quantitative computed tomography

    International Nuclear Information System (INIS)

    Jessen, K.A.; Joergensen, J.

    1989-01-01

    Computed tomography (CT) has for several years been an indispensable tool in diagnostic radiology, but it is only recently that extraction of quantitative information from CT images has been of practical clinical value. Only careful control of the scan parameters, and especially the scan geometry, allows useful information to be obtained; and it can be demonstrated by simple phantom measurements how sensitive a CT system can be to variations in size, shape and position of the phantom in the gantry aperture. Significant differences exist between systems that are not manifested in normal control of image quality and general performance tests. Therefore an actual system has to be analysed for its suitability for quantitative use of the images before critical clinical applications are justified. (author)

  11. A simple method to extract information on anisotropy of particle fluxes from spin-modulated counting rates of cosmic ray telescopes

    International Nuclear Information System (INIS)

    Hsieh, K.C.; Lin, Y.C.; Sullivan, J.D.

    1975-01-01

    A simple method to extract information on anisotropy of particle fluxes from data collected by cosmic ray telescopes on spinning spacecraft but without sectored accumulators is presented. Application of this method to specific satellite data demonstrates that it requires no prior assumption on the form of angular distribution of the fluxes; furthermore, self-consistency ensures the validity of the results thus obtained. The examples show perfect agreement with the corresponding magnetic field directions
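
    As a generic illustration (a textbook harmonic fit, not necessarily the paper's specific self-consistent method), a first-harmonic anisotropy can be recovered from counting rates sampled over spin phase by least squares; the amplitude and phase below are invented.

```python
# Generic harmonic fit: recover a first-harmonic anisotropy from counting
# rates sampled over spin phase. Amplitude and phase below are invented.
import numpy as np

rng = np.random.default_rng(8)
phi = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)   # spin-phase bins
true_rate = 100.0 * (1.0 + 0.15 * np.cos(phi - 0.7))      # 15% anisotropy
counts = rng.poisson(true_rate).astype(float)

# Least-squares fit of R(phi) = A + B*cos(phi) + C*sin(phi)
M = np.column_stack([np.ones_like(phi), np.cos(phi), np.sin(phi)])
a, b, c = np.linalg.lstsq(M, counts, rcond=None)[0]
print(f"anisotropy = {np.hypot(b, c) / a:.3f}, "
      f"direction = {np.arctan2(c, b):.2f} rad")
```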

  12. SUPPLEMENTARY INFORMATION Quantitative analysis of ...

    Indian Academy of Sciences (India)

    Daya

  13. Computerized extraction of information on the quality of diabetes care from free text in electronic patient records of general practitioners

    NARCIS (Netherlands)

    Voorham, Jaco; Denig, Petra

    2007-01-01

    Objective: This study evaluated a computerized method for extracting numeric clinical measurements related to diabetes care from free text in electronic patient records (EPR) of general practitioners. Design and Measurements: Accuracy of this number-oriented approach was compared to manual chart

  14. On the quantitativeness of EDS STEM

    Energy Technology Data Exchange (ETDEWEB)

    Lugg, N.R. [Institute of Engineering Innovation, The University of Tokyo, 2-11-16, Yayoi, Bunkyo-ku, Tokyo 113-8656 (Japan); Kothleitner, G. [Institute for Electron Microscopy and Nanoanalysis, Graz University of Technology, Steyrergasse 17, 8010 Graz (Austria); Centre for Electron Microscopy, Steyrergasse 17, 8010 Graz (Austria); Shibata, N.; Ikuhara, Y. [Institute of Engineering Innovation, The University of Tokyo, 2-11-16, Yayoi, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2015-04-15

    Chemical mapping using energy dispersive X-ray spectroscopy (EDS) in scanning transmission electron microscopy (STEM) has recently shown to be a powerful technique in analyzing the elemental identity and location of atomic columns in materials at atomic resolution. However, most applications of EDS STEM have been used only to qualitatively map whether elements are present at specific sites. Obtaining calibrated EDS STEM maps so that they are on an absolute scale is a difficult task and even if one achieves this, extracting quantitative information about the specimen – such as the number or density of atoms under the probe – adds yet another layer of complexity to the analysis due to the multiple elastic and inelastic scattering of the electron probe. Quantitative information may be obtained by comparing calibrated EDS STEM with theoretical simulations, but in this case a model of the structure must be assumed a priori. Here we first theoretically explore how exactly elastic and thermal scattering of the probe confounds the quantitative information one is able to extract about the specimen from an EDS STEM map. We then show using simulation how tilting the specimen (or incident probe) can reduce the effects of scattering and how it can provide quantitative information about the specimen. We then discuss drawbacks of this method – such as the loss of atomic resolution along the tilt direction – but follow this with a possible remedy: precession averaged EDS STEM mapping. - Highlights: • Signal obtained in EDS STEM maps (of STO) compared to non-channelling signal. • Deviation from non-channelling signal occurs in on-axis experiments. • Tilting specimen: signal close to non-channelling case but atomic resolution is lost. • Tilt-precession series: non-channelling signal and atomic-resolution features obtained. • Associated issues are discussed.

  15. An image-processing strategy to extract important information suitable for a low-size stimulus pattern in a retinal prosthesis.

    Science.gov (United States)

    Chen, Yili; Fu, Jixiang; Chu, Dawei; Li, Rongmao; Xie, Yaoqin

    2017-11-27

    A retinal prosthesis is designed to help the blind obtain some sight. It consists of an external part and an internal part. The external part is made up of a camera, an image processor and an RF transmitter; the internal part is made up of an RF receiver, an implant chip and a microelectrode array. Currently, the number of microelectrodes is in the hundreds, and the mechanism by which an electrode stimulates the optic nerve is not fully known. A simple hypothesis is that the pixels in an image correspond to the electrodes, so the images captured by the camera should be processed so that they correspond to the stimulation delivered by the electrodes. The question is therefore how to obtain the important information from the captured image. Here, we use a region of interest (ROI) algorithm to retain the important information and to remove the redundant information. This paper explains the principles and functions of the ROI in detail. Because we are investigating a real-time system, we need fast ROI processing; we therefore simplified the ROI algorithm and used it in the external image-processing digital signal processing (DSP) system of the retinal prosthesis. The results show that our image-processing strategies are suitable for a real-time retinal prosthesis and can eliminate redundant information while providing useful information for expression in a low-resolution image.
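
    A simplified sketch of the ROI-then-downsample mapping from a camera frame to a hundreds-of-electrodes pattern. The paper's actual DSP algorithm is more elaborate; the central-crop ROI, its size, and the grid shape here are assumptions for illustration.

```python
# Sketch: central ROI extraction and block-averaging a camera frame down to
# a low-resolution electrode grid. ROI size and grid shape are assumptions.
import numpy as np

def roi_to_electrodes(img, roi_size=120, grid=(10, 10)):
    h, w = img.shape
    top, left = (h - roi_size) // 2, (w - roi_size) // 2
    roi = img[top:top + roi_size, left:left + roi_size]   # central ROI
    gh, gw = grid
    blocks = roi.reshape(gh, roi_size // gh, gw, roi_size // gw)
    return blocks.mean(axis=(1, 3))                       # one value/electrode

frame = np.random.default_rng(6).integers(0, 256, (240, 320)).astype(float)
stim = roi_to_electrodes(frame)     # 10 x 10 stimulation pattern
print(stim.shape)
```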

  16. Quantitative Study and Structure Visualization of Scientific Publications in the Field of Information Management in Web of Science Database during 1988-2009

    Directory of Open Access Journals (Sweden)

    Afshin Hamdipour

    2012-12-01

    Full Text Available The present study analyzes the scientific publications indexed in the Web of Science database as information management records and visualizes the structure of science in this field during 1988-2009. The research method was scientometrics. During the study period, 1120 records in the field of information management were published. These records were extracted as plain text files, stored on a PC, and analyzed by the ISI.exe and HistCite software. The authors' collaboration coefficient (CC) grew from zero in 1988 to 0.33 in 2009; the average collaboration coefficient was 0.22, which confirms low collaboration among authors in this area. The records were published in 63 languages, among which English, with 93.8%, had the highest proportion. City University London and the University of Sheffield in England had the most publications in the information management field. Based on the number of published records, T.D. Wilson, with 13 records and 13 citations, ranked first. The average number of global citations to 112 documents was 8.78. Despite the participation of different countries, more than 28.9% of the records were produced in the United States, and 10 countries published more than 72.4% of the records. Fifteen journals published 564 records (50.4% of the total production). Finally, by means of the HistCite mapping software, clusters of authors, articles and four influential specific subjects were identified.

  17. Biologically active extracts with kidney affections applications

    Science.gov (United States)

    Pascu (Neagu), Mihaela; Pascu, Daniela-Elena; Cozea, Andreea; Bunaciu, Andrei A.; Miron, Alexandra Raluca; Nechifor, Cristina Aurelia

    2015-12-01

    This paper aims to select plant materials rich in bioflavonoid compounds from herbs known for their performance in the prevention and therapy of renal diseases, namely kidney stones and urinary infections (renal lithiasis, nephritis, urethritis, cystitis, etc.). It presents a comparative study of the composition of medicinal plant extracts belonging to the Ericaceae family: cranberry (fruit and leaves), Vaccinium vitis-idaea L., and bilberry (fruit), Vaccinium myrtillus L. The concentrated extracts obtained from these medicinal plants were analyzed from structural, morphological and compositional points of view using different techniques: chromatographic methods (HPLC), scanning electron microscopy, infrared and UV spectrophotometry, as well as kinetic modelling. Liquid chromatography identified arbutoside, a compound specific to the Ericaceae family, in all three extracts, as well as components specific to each species, mostly from the class of polyphenols. The identification and quantitative determination of the active ingredients in these extracts can provide information related to their therapeutic effects.

  18. Multineuronal vectorization is more efficient than time-segmental vectorization for information extraction from neuronal activities in the inferior temporal cortex.

    Science.gov (United States)

    Kaneko, Hidekazu; Tamura, Hiroshi; Tate, Shunta; Kawashima, Takahiro; Suzuki, Shinya S; Fujita, Ichiro

    2010-08-01

    In order for patients with disabilities to control assistive devices with their own neural activity, multineuronal spike trains must be efficiently decoded, because only limited computational resources can be used to generate prosthetic control signals in portable real-time applications. In this study, we compare the abilities of two vectorizing procedures (multineuronal and time-segmental) to extract information from spike trains for the same total number of neuron-seconds. In the multineuronal vectorizing procedure, we defined a response vector whose components represented the spike counts of one to five neurons. In the time-segmental vectorizing procedure, a response vector consisted of components representing a neuron's spike counts in one to five time segment(s) of a 1-s response period. Spike trains were recorded from neurons in the inferior temporal cortex of monkeys presented with visual stimuli. We examined whether the amount of information about the visual stimuli carried by these neurons differed between the two vectorizing procedures. The amount of information calculated with the multineuronal vectorizing procedure, but not the time-segmental vectorizing procedure, increased significantly with the dimension of the response vector. We conclude that the multineuronal vectorizing procedure is superior to the time-segmental vectorizing procedure in efficiently extracting information from neuronal signals. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
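
    To make the two schemes concrete, the toy sketch below builds both kinds of response vector from simulated spike times; the five-neuron and five-segment dimensions mirror the description above, while the spike statistics are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        # toy spike times (s) for 5 neurons over a 1-s response period
        spike_trains = [np.sort(rng.uniform(0, 1, int(rng.integers(5, 20))))
                        for _ in range(5)]

        def multineuronal_vector(trains, n_neurons):
            """One component per neuron: spike count over the full 1-s period."""
            return np.array([len(t) for t in trains[:n_neurons]])

        def time_segmental_vector(train, n_segments):
            """One component per time segment of a single neuron's response."""
            counts, _ = np.histogram(train, bins=n_segments, range=(0.0, 1.0))
            return counts

        print(multineuronal_vector(spike_trains, 5))      # 5-D across neurons
        print(time_segmental_vector(spike_trains[0], 5))  # 5-D across time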

  19. European Organisation for Research and Treatment of Cancer (EORTC) Pathobiology Group standard operating procedure for the preparation of human tumour tissue extracts suited for the quantitative analysis of tissue-associated biomarkers.

    Science.gov (United States)

    Schmitt, Manfred; Mengele, Karin; Schueren, Elisabeth; Sweep, Fred C G J; Foekens, John A; Brünner, Nils; Laabs, Juliane; Malik, Abha; Harbeck, Nadia

    2007-03-01

    With the new concept of 'individualized treatment and targeted therapies', tumour tissue-associated biomarkers have been given a new role in the selection of cancer patients for treatment and in cancer patient management. Tumour biomarkers can support cancer patient stratification and risk assessment, treatment response identification, or the identification of those patients who are expected to respond to certain anticancer drugs. As the field of tumour-associated biomarkers has expanded rapidly over recent years, it has become increasingly apparent that there is a strong need for guidelines on how to easily disintegrate tumour tissue for assessment of the presence of tumour tissue-associated biomarkers. Several mechanical tissue (cell) disruption techniques exist, ranging from bead mill homogenisation and freeze-fracturing through blade- or pestle-type homogenisation to grinding and ultrasonics. Still, only a few directives have been given on how fresh-frozen tumour tissues should be processed for the extraction and determination of tumour biomarkers. The PathoBiology Group of the European Organisation for Research and Treatment of Cancer has therefore devised a standard operating procedure for the standardised preparation of human tumour tissue extracts, designed for the quantitative analysis of tumour tissue-associated biomarkers. The easy-to-follow technical steps require 50-300 mg of deep-frozen cancer tissue placed into small (1.2 ml) cryogenic tubes. These are placed into the shaking flask of a Mikro-Dismembrator S machine (bead mill) to pulverise the tumour tissue in the capped tubes in the deep-frozen state by use of a stainless steel ball, all within 30 s of exposure. RNA is isolated from the pulverised tissue following standard procedures. Proteins are extracted from the still-frozen pulverised tissue by addition of Tris-buffered saline to obtain the cytosol fraction of the tumour or by the Tris buffer supplemented with

  20. Qualitative and quantitative radiation protection analysis of the mucosa of ICR strain mice using selected herbal extracts such as GC-2112 from garlic (Allium sativum) and GX-2137 from ginseng (Panax sp.)

    International Nuclear Information System (INIS)

    Bunagan, J.B.

    2005-01-01

    Full text: Earlier reports showed that ginseng has a significant radioprotective and stimulatory effect on the recovery of lymphocytes and leukocytes. Graded absorbed doses of radiation (1.5, 5, 20 and 50 Gy) were applied to ICR strain male white mice injected with GX-2137 from ginseng (Panax sp.) and GC-2112 from garlic (Allium sativum) in order to test their radioprotective efficacy. The herbal extracts were injected intraperitoneally, and the experimental mice were sacrificed 2 and 48 hrs post-irradiation. The radiation protection assay in the duodenum of ICR strain mice was quantified by analyzing the kinetics of critical tissue parameters (length of villi, number of crypt and villi cells, and cell density) and by determining Relative Protection Efficiencies (RPE) using quantitative histopathological techniques. Results showed that GC-2112 and GX-2137 protected the villi structures. Two hours post-irradiation, tissue degeneration was evident. The RPE values demonstrated significant radioprotection of the crypts at the absorbed doses tested, and some villi cells remained viable even at the non-physiologic dose of 50 Gy. (author)

  1. On the capability of Swarm for surface mass variation monitoring: Quantitative assessment based on orbit information from CHAMP, GRACE and GOCE

    Science.gov (United States)

    Baur, Oliver; Weigelt, Matthias; Zehentner, Norbert; Mayer-Gürr, Torsten; Jäggi, Adrian

    2014-05-01

    In the last decade, temporal variations of the gravity field from GRACE observations have become one of the most ubiquitous and valuable sources of information for geophysical and environmental studies. In the context of global climate change, the mass balance of the Arctic and Antarctic ice sheets has gained particular attention. Because GRACE has already outlived its predicted lifetime by several years, it is very likely that a gap will occur between GRACE and its successor GRACE follow-on (supposed to be launched in 2017, at the earliest). The Swarm mission - launched on November 22, 2013 - is the most promising candidate to bridge this potential gap, i.e., to directly acquire large-scale mass variation information on the Earth's surface in case of a gap between the present GRACE and the upcoming GRACE follow-on projects. Although the magnetometry mission Swarm has not been designed for gravity field purposes, its three satellites have the characteristics for such an endeavor: (i) low, near-circular and near-polar orbits, (ii) precise positioning with high-quality GNSS receivers, (iii) on-board accelerometers to measure the influence of non-gravitational forces. Hence, from an orbit analysis point of view the Swarm satellites are comparable to the CHAMP, GRACE and GOCE spacecraft. Indeed, as data analysis from CHAMP has shown, the detection of annual signals and trends from orbit analysis is possible for long-wavelength features of the gravity field, although the accuracy associated with the inter-satellite GRACE measurements cannot be reached. We assess the capability of the (non-dedicated) mission Swarm for mass variation detection in a real-case environment (as opposed to simulation studies). For this purpose, we "approximate" the Swarm scenario by the GRACE+CHAMP and GRACE+GOCE constellations. In a first step, kinematic orbits of the individual satellites are derived from GNSS observations. From these orbits, we compute monthly combined GRACE+CHAMP and GRACE

  2. Extraction of Pluvial Flood Relevant Volunteered Geographic Information (VGI) by Deep Learning from User Generated Texts and Photos

    Directory of Open Access Journals (Sweden)

    Yu Feng

    2018-01-01

    Full Text Available In recent years, pluvial floods caused by extreme rainfall events have occurred frequently. Especially in urban areas, they cause serious damage and endanger citizens' safety. Therefore, real-time information about such events is desirable. With the increasing popularity of social media platforms, such as Twitter or Instagram, information provided by voluntary users becomes a valuable source for emergency response. Many applications have been built for disaster detection and flood mapping using crowdsourcing. Most applications so far have merely used keyword filtering or classical language-processing methods to identify disaster-relevant documents based on user-generated texts. As the reliability of social media information is often criticised, the precision of information retrieval plays a significant role for further analyses. Thus, in this paper, high-quality eyewitness reports of rainfall and flooding events are retrieved from social media by applying deep learning approaches to user-generated texts and photos. Subsequently, events are detected through spatiotemporal clustering and visualized together with these high-quality eyewitness reports in a web map application. Analyses and case studies are conducted for flooding events in Paris, London and Berlin.
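
    The event-detection step can be pictured as density-based clustering over space and time. The sketch below uses DBSCAN with an assumed 1 km ~ 1 h space-time equivalence; the paper's actual clustering algorithm and scaling are not specified in this record.

        import numpy as np
        from sklearn.cluster import DBSCAN

        # toy geotagged, timestamped posts already classified as flood-relevant:
        # columns = [lon_deg, lat_deg, time_h]
        posts = np.array([
            [2.35, 48.85, 0.2], [2.36, 48.86, 0.5], [2.34, 48.85, 0.9],  # burst 1
            [13.40, 52.52, 5.0], [13.41, 52.52, 5.3],                    # burst 2
            [0.00, 51.50, 40.0],                                         # isolated
        ])

        km_per_deg = 111.0            # crude conversion, ignores latitude scaling
        xy = posts[:, :2] * km_per_deg
        t = posts[:, 2:3]             # 1 h treated as 1 km (a modelling choice)
        labels = DBSCAN(eps=2.0, min_samples=2).fit_predict(np.hstack([xy, t]))
        print(labels)                 # [0 0 0 1 1 -1]: two events plus one noise point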

  3. Extracting Information about the Electronic Quality of Organic Solar-Cell Absorbers from Fill Factor and Thickness

    Science.gov (United States)

    Kaienburg, Pascal; Rau, Uwe; Kirchartz, Thomas

    2016-08-01

    Understanding the fill factor in organic solar cells remains challenging due to its complex dependence on a multitude of parameters. By means of drift-diffusion simulations, we thoroughly analyze the fill factor of such low-mobility systems and demonstrate its dependence on a collection coefficient defined in this work. We systematically discuss the effect of different recombination mechanisms, space-charge regions, and contact properties. Based on these findings, we are able to interpret the thickness dependence of the fill factor for different experimental studies from the literature. The presented model provides a facile method to extract the photoactive layer's electronic quality which is of particular importance for the fill factor. We illustrate that over the past 15 years, the electronic quality has not been continuously improved, although organic solar-cell efficiencies increased steadily over the same period of time. Only recent reports show the synthesis of polymers for semiconducting films of high electronic quality that are able to produce new efficiency records.

  4. Investigating the feasibility of using partial least squares as a method of extracting salient information for the evaluation of digital breast tomosynthesis

    Science.gov (United States)

    Zhang, George Z.; Myers, Kyle J.; Park, Subok

    2013-03-01

    Digital breast tomosynthesis (DBT) has shown promise for improving the detection of breast cancer, but it has not yet been fully optimized due to a large space of system parameters to explore. A task-based statistical approach [1] is a rigorous method for evaluating and optimizing this promising imaging technique with the use of optimal observers such as the Hotelling observer (HO). However, the high data dimensionality found in DBT has been the bottleneck for the use of a task-based approach in DBT evaluation. To reduce data dimensionality while extracting salient information for performing a given task, efficient channels have to be used for the HO. In the past few years, 2D Laguerre-Gauss (LG) channels, which are a complete basis for stationary backgrounds and rotationally symmetric signals, have been utilized for DBT evaluation [2,3]. But since background and signal statistics from DBT data are neither stationary nor rotationally symmetric, LG channels may not be efficient in providing reliable performance trends as a function of system parameters. Recently, partial least squares (PLS) has been shown to generate efficient channels for the Hotelling observer in detection tasks involving random backgrounds and signals [4]. In this study, we investigate the use of PLS as a method for extracting salient information from DBT in order to better evaluate such systems.
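
    As a toy illustration of the channelization idea (not the study's DBT pipeline), the sketch below fits PLS components against signal-present/absent labels on synthetic images and builds a channelized Hotelling observer in the reduced space; the image size, channel count and Gaussian signal are all assumptions.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(1)
        n, side, n_ch = 400, 16, 8
        # toy correlated backgrounds plus a faint Gaussian signal
        bg = rng.normal(0, 1, (n, side * side)) + 0.5 * rng.normal(0, 1, (n, 1))
        yy, xx = np.mgrid[0:side, 0:side]
        signal = 0.8 * np.exp(-((xx - 8) ** 2 + (yy - 8) ** 2) / 8.0).ravel()
        labels = rng.integers(0, 2, n)
        images = bg + labels[:, None] * signal

        # PLS channels: directions in image space most predictive of the label
        pls = PLSRegression(n_components=n_ch).fit(images, labels.astype(float))
        v = images @ pls.x_weights_               # channelized data, n x n_ch

        # channelized Hotelling observer in the low-dimensional channel space
        m1, m0 = v[labels == 1].mean(0), v[labels == 0].mean(0)
        S = 0.5 * (np.cov(v[labels == 1].T) + np.cov(v[labels == 0].T))
        t = v @ np.linalg.solve(S, m1 - m0)
        dprime = (t[labels == 1].mean() - t[labels == 0].mean()) / np.sqrt(
            0.5 * (t[labels == 1].var() + t[labels == 0].var()))
        print(f"channelized HO detectability d' = {dprime:.2f}")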

  5. Quantitative research.

    Science.gov (United States)

    Watson, Roger

    2015-04-01

    This article describes the basic tenets of quantitative research. The concepts of dependent and independent variables are addressed and the concept of measurement and its associated issues, such as error, reliability and validity, are explored. Experiments and surveys – the principal research designs in quantitative research – are described and key features explained. The importance of the double-blind randomised controlled trial is emphasised, alongside the importance of longitudinal surveys, as opposed to cross-sectional surveys. Essential features of data storage are covered, with an emphasis on safe, anonymous storage. Finally, the article explores the analysis of quantitative data, considering what may be analysed and the main uses of statistics in analysis.

  6. Quantitative analysis of rutin, quercetin, naringenin, and gallic acid by validated RP- and NP-HPTLC methods for quality control of anti-HBV active extract of Guiera senegalensis.

    Science.gov (United States)

    Alam, Perwez; Parvez, Mohammad K; Arbab, Ahmed H; Al-Dosari, Mohammed S

    2017-12-01

    Guiera senegalensis J.F. Gmel (Combretaceae) is a folk medicinal plant used against various metabolic and infectious diseases. In addition to its antiviral activities against herpes and fowlpox, its anti-HBV efficacy was reported very recently. The aim was to develop and validate simple, sensitive RP-/NP-HPTLC methods for the quantitative determination of the biomarkers rutin, quercetin, naringenin, and gallic acid in the anti-HBV active G. senegalensis leaf ethanol extract. RP-HPTLC (rutin and quercetin; mobile phase acetonitrile:water, 4:6) and NP-HPTLC (naringenin and gallic acid; mobile phase toluene:ethyl acetate:formic acid, 6:4:0.8) were performed on glass-backed silica gel plates 60F254-RP18 and 60F254, respectively. The methods were validated according to the ICH guidelines. Well-separated and compact spots (Rf) of rutin (0.52 ± 0.006), quercetin (0.23 ± 0.005), naringenin (0.56 ± 0.009) and gallic acid (0.28 ± 0.006) were detected. The regression equations (Y) were 12.434x + 443.49, 10.08x + 216.85, 11.253x + 973.52 and 11.082x + 446.41, and the correlation coefficients (r²) were 0.997 ± 0.0004, 0.9982 ± 0.0001, 0.9974 ± 0.0004 and 0.9981 ± 0.0001, respectively. The linearity ranges (ng/spot) were 200-1400 (RP-HPTLC) and 100-1200 (NP-HPTLC). The LOD/LOQ (ng/band) were 33.03/100.1 (rutin), 9.67/29.31 (quercetin), 35.574/107.8 (naringenin), and 12.32/37.35 (gallic acid). Gallic acid (7.01 μg/mg) was the most abundant biomarker in the extract, compared with rutin (2.42 μg/mg), quercetin (1.53 μg/mg) and naringenin (0.14 μg/mg). The validated NP-/RP-HPTLC methods were simple, accurate, and sensitive for separating and quantifying antiviral biomarkers in G. senegalensis, and support its anti-HBV activity. The developed methods could be further employed in the standardization and quality control of herbal formulations.
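
    The validation arithmetic follows the usual ICH Q2(R1) pattern. As a sketch, the rutin regression equation quoted above can be used to synthesize a toy calibration set and recover the slope, LOD and LOQ; the noise level is invented.

        import numpy as np

        # toy calibration data: amount per band (ng) vs peak area (a.u.)
        x = np.array([200, 400, 600, 800, 1000, 1200, 1400], float)
        area = 12.434 * x + 443.49 + np.random.default_rng(2).normal(0, 40, x.size)

        slope, intercept = np.polyfit(x, area, 1)
        sigma = (area - (slope * x + intercept)).std(ddof=2)  # residual SD

        # ICH Q2(R1): LOD = 3.3*sigma/S, LOQ = 10*sigma/S, S = calibration slope
        lod, loq = 3.3 * sigma / slope, 10 * sigma / slope
        print(f"slope={slope:.2f}, LOD={lod:.1f} ng, LOQ={loq:.1f} ng")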

  7. Extraction of information on macromolecular interactions from fluorescence micro-spectroscopy measurements in the presence and absence of FRET

    Science.gov (United States)

    Raicu, Valerică

    2018-06-01

    Investigations of static or dynamic interactions between proteins or other biological macromolecules in living cells often rely on the use of fluorescent tags with two different colors in conjunction with adequate theoretical descriptions of Förster Resonance Energy Transfer (FRET) and molecular-level micro-spectroscopic technology. One such method based on these general principles is FRET spectrometry, which allows determination of the quaternary structure of biomolecules from cell-level images of the distributions, or spectra of occurrence frequency of FRET efficiencies. Subsequent refinements allowed combining FRET frequency spectra with molecular concentration information, thereby providing the proportion of molecular complexes with various quaternary structures as well as their binding/dissociation energies. In this paper, we build on the mathematical principles underlying FRET spectrometry to propose two new spectrometric methods, which have distinct advantages compared to other methods. One of these methods relies on statistical analysis of color mixing in subpopulations of fluorescently tagged molecules to probe molecular association stoichiometry, while the other exploits the color shift induced by FRET to also derive geometric information in addition to stoichiometry. The appeal of the first method stems from its sheer simplicity, while the strength of the second consists in its ability to provide structural information.
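
    As a cartoon of how a FRET-efficiency frequency spectrum is assembled, the sketch below applies the standard donor-quenching estimator E = 1 - F_DA/F_D to invented intensities from a single dimer-like population; the actual pixel-level analysis in FRET spectrometry is considerably more involved.

        import numpy as np

        rng = np.random.default_rng(3)
        f_d = rng.normal(1000, 50, 5000)                 # donor intensity, no acceptor
        f_da = f_d * (1 - rng.normal(0.35, 0.05, 5000))  # donor quenched by acceptor

        eff = 1.0 - f_da / f_d                           # apparent FRET efficiency
        hist, edges = np.histogram(eff, bins=50, range=(0, 1))
        peak = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
        print(f"FRET frequency spectrum peaks near E = {peak:.2f}")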

  8. Quantitative habitability.

    Science.gov (United States)

    Shock, Everett L; Holland, Melanie E

    2007-12-01

    A framework is proposed for a quantitative approach to studying habitability. Considerations of environmental supply and organismal demand of energy lead to the conclusions that power units are most appropriate and that the units for habitability become watts per organism. Extreme and plush environments are revealed to be on a habitability continuum, and extreme environments can be quantified as those where power supply only barely exceeds demand. Strategies for laboratory and field experiments are outlined that would quantify power supplies, power demands, and habitability. An example involving a comparison of various metabolisms pursued by halophiles is shown to be well on the way to a quantitative habitability analysis.

  9. Information

    International Nuclear Information System (INIS)

    Boyard, Pierre.

    1981-01-01

    The fear of nuclear energy, and more particularly of radioactive waste, is analyzed in its sociological context. Everybody agrees on the need for information; information is available, but its dissemination is problematic. The reactions of the public are analyzed, and journalists, scientists and teachers all have a role to play [fr]

  10. Learning from Past Classification Errors: Exploring Methods for Improving the Performance of a Deep Learning-based Building Extraction Model through Quantitative Analysis of Commission Errors for Optimal Sample Selection

    Science.gov (United States)

    Swan, B.; Laverdiere, M.; Yang, L.

    2017-12-01

    In the past five years, deep Convolutional Neural Networks (CNN) have been increasingly favored for computer vision applications due to their high accuracy and ability to generalize well in very complex problems; however, details of how they function, and in turn how they may be optimized, are still imperfectly understood. In particular, their complex and highly nonlinear network architecture, including many hidden layers and self-learned parameters, as well as their mathematical implications, presents open questions about how to effectively select training data. Without knowledge of the exact ways the model processes and transforms its inputs, intuition alone may fail as a guide to selecting highly relevant training samples. Working in the context of improving a CNN-based building extraction model used for the LandScan USA gridded population dataset, we have approached this problem by developing a semi-supervised, highly scalable approach to select training samples from a dataset of identified commission errors. Due to the large scope of this project, tens of thousands of potential samples could be derived from identified commission errors. To efficiently trim those samples down to a manageable and effective set for creating additional training samples, we statistically summarized the spectral characteristics of areas with high rates of commission errors at the image-tile level and grouped these tiles using affinity propagation, as sketched below. Highly representative members of each commission-error cluster were then used to select sites for training sample creation. The model will be incrementally re-trained with the new training data to allow for an assessment of how the addition of different types of samples affects model performance, such as precision and recall rates. By using quantitative analysis and data clustering techniques to select highly relevant training samples, we hope to improve model performance in a manner that is resource efficient, both in terms of training process
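
    A minimal sketch of the tile-clustering step, assuming hypothetical per-tile summary features (band means plus a commission-error rate); affinity propagation chooses exemplar tiles without a preset cluster count, and those exemplars would then seed new training-sample creation.

        import numpy as np
        from sklearn.cluster import AffinityPropagation

        rng = np.random.default_rng(4)
        # toy per-tile summaries: [mean_R, mean_G, mean_B, commission_rate]
        tiles = np.vstack([
            rng.normal([0.3, 0.4, 0.2, 0.15], 0.02, (40, 4)),  # vegetated tiles
            rng.normal([0.6, 0.6, 0.6, 0.40], 0.02, (40, 4)),  # paved/bright tiles
            rng.normal([0.2, 0.2, 0.3, 0.05], 0.02, (40, 4)),  # water/shadow tiles
        ])

        ap = AffinityPropagation(random_state=0).fit(tiles)
        print(f"{len(ap.cluster_centers_indices_)} clusters; "
              f"exemplar tiles: {ap.cluster_centers_indices_}")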

  11. Gobi information extraction based on a decision tree classification method

    Institute of Scientific and Technical Information of China (English)

    冯益明; 智长贵; 姚爱冬

    2013-01-01

    Gobi is one of the main landscape types of the Earth's surface in the arid region of northwestern China, with a total area of 458,000-757,000 km², accounting for 4.8%-7.9% of China's total land area. The gobi holds abundant natural resources such as minerals, wind energy and solar power. Meanwhile, many modern cities and towns and some important traffic routes have been constructed in the gobi region, which plays an important role in the development of the economy of western China. It is therefore important to pursue gobi research under current social and economic conditions, and accurately revealing the distribution and area of gobi is the basis and premise of such research. At present, fieldwork is difficult due to the harsh natural conditions and sparse population of the gobi region, which has led to a scarcity of research documents on the situation, distribution, type classification, transformation and utilization of gobi. The study region of this paper, located in Ejina County, Inner Mongolia, China, is a typical gobi region; its climate is characterized by scarce rainfall, high evaporation, abundant sunshine, large temperature differences and frequent wind-blown sand. Landsat TM5 and TM7 remote sensing imagery from the plant growing seasons of 2005-2010, a DEM with 30 m spatial resolution, administrative maps, current land-use maps, field investigation data and related documents were used as the basic data sources. Firstly, the non-gobi regions were masked out in GIS software by analyzing the DEM. Then, based on an analysis of the spectral characteristics of typical ground objects, a knowledge-based decision tree information extraction model was constructed to classify the remote sensing imagery, and eroded gobi and cumulated gobi were separated relatively accurately. The overall accuracy of the extracted gobi information reached 91.57%. There were few materials in China on using
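
    A toy version of the knowledge-based decision tree step: a classifier is trained on hypothetical per-pixel features (three TM band reflectances plus DEM-derived slope) for eroded gobi, cumulated gobi and non-gobi classes. The spectral means are invented and do not reproduce the paper's rules.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(5)
        def sample(mean, n):            # toy pixels scattered around a class mean
            return rng.normal(mean, 0.03, (n, 4))

        # features: [TM band 3, TM band 4, TM band 5, slope]
        X = np.vstack([sample([0.25, 0.30, 0.35, 0.01], 200),   # eroded gobi
                       sample([0.35, 0.38, 0.45, 0.01], 200),   # cumulated gobi
                       sample([0.10, 0.40, 0.20, 0.02], 200)])  # non-gobi
        y = np.repeat(["eroded_gobi", "cumulated_gobi", "non_gobi"], 200)

        clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
        print(export_text(clf, feature_names=["b3", "b4", "b5", "slope"]))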

  12. Large-Scale Survey Findings Inform Patients’ Experiences in Using Secure Messaging to Engage in Patient-Provider Communication and Self-Care Management: A Quantitative Assessment

    Science.gov (United States)

    Patel, Nitin R; Lind, Jason D; Antinori, Nicole

    2015-01-01

    Background Secure email messaging is part of a national transformation initiative in the United States to promote new models of care that support enhanced patient-provider communication. To date, only a limited number of large-scale studies have evaluated users' experiences in using secure email messaging. Objective To quantitatively assess veteran patients' experiences in using secure email messaging in a large patient sample. Methods A cross-sectional mail-delivered paper-and-pencil survey study was conducted with a sample of respondents who were registered for the Veterans Health Administration's Web-based patient portal (My HealtheVet) and had opted to use secure messaging. The survey collected demographic data and assessed computer and health literacy and secure messaging use. Analyses conducted on survey data included frequencies and proportions, chi-square tests, and one-way analysis of variance. Results The majority of respondents (N=819) reported using secure messaging for 6 months or longer (n=499, 60.9%). They reported secure messaging to be helpful for completing medication refills (n=546, 66.7%), managing appointments (n=343, 41.9%), looking up test results (n=350, 42.7%), and asking health-related questions (n=340, 41.5%). Notably, some respondents reported using secure messaging to address sensitive health topics (n=67, 8.2%). Survey responses indicated that younger age (P=.039) and higher levels of education (P=.025) and income (P=.003) were associated with more frequent use of secure messaging. Females were more likely to report using secure messaging more often, compared with their male counterparts (P=.098). Minorities were more likely to report using secure messaging more often, at least once a month, compared with nonminorities (P=.086). Individuals with higher levels of health literacy reported more frequent use of secure messaging (P=.007), greater satisfaction (P=.002), and indicated that secure messaging is a useful (P=.002) and easy

  13. Large-Scale Survey Findings Inform Patients' Experiences in Using Secure Messaging to Engage in Patient-Provider Communication and Self-Care Management: A Quantitative Assessment.

    Science.gov (United States)

    Haun, Jolie N; Patel, Nitin R; Lind, Jason D; Antinori, Nicole

    2015-12-21

    Secure email messaging is part of a national transformation initiative in the United States to promote new models of care that support enhanced patient-provider communication. To date, only a limited number of large-scale studies have evaluated users' experiences in using secure email messaging. The objective was to quantitatively assess veteran patients' experiences in using secure email messaging in a large patient sample. A cross-sectional mail-delivered paper-and-pencil survey study was conducted with a sample of respondents who were registered for the Veterans Health Administration's Web-based patient portal (My HealtheVet) and had opted to use secure messaging. The survey collected demographic data and assessed computer and health literacy and secure messaging use. Analyses conducted on survey data included frequencies and proportions, chi-square tests, and one-way analysis of variance. The majority of respondents (N=819) reported using secure messaging for 6 months or longer (n=499, 60.9%). They reported secure messaging to be helpful for completing medication refills (n=546, 66.7%), managing appointments (n=343, 41.9%), looking up test results (n=350, 42.7%), and asking health-related questions (n=340, 41.5%). Notably, some respondents reported using secure messaging to address sensitive health topics (n=67, 8.2%). Survey responses indicated that younger age (P=.039) and higher levels of education (P=.025) and income (P=.003) were associated with more frequent use of secure messaging. Females were more likely to report using secure messaging more often, compared with their male counterparts (P=.098). Minorities were more likely to report using secure messaging more often, at least once a month, compared with nonminorities (P=.086). Individuals with higher levels of health literacy reported more frequent use of secure messaging (P=.007), greater satisfaction (P=.002), and indicated that secure messaging is a useful (P=.002) and easy-to-use (P≤.001) communication tool, compared

  14. Extraction of compositional and hydration information of sulfates from laser-induced plasma spectra recorded under Mars atmospheric conditions — Implications for ChemCam investigations on Curiosity rover

    International Nuclear Information System (INIS)

    Sobron, Pablo; Wang, Alian; Sobron, Francisco

    2012-01-01

    Given the volume of spectral data required to provide accurate compositional information, and thereby insight into mineralogy and petrology, from laser-induced breakdown spectroscopy (LIBS) measurements, fast data processing tools are a must. This is particularly true during the tactical operations of rover-based planetary exploration missions such as the Mars Science Laboratory rover, Curiosity, which will carry a remote LIBS spectrometer in its science payload. We have developed: an automated, fast pre-processing sequence of algorithms for converting a series of LIBS spectra (typically 125) recorded from a single target into a reliable SNR-enhanced spectrum; a dedicated routine to quantify its spectral features; and a set of calibration curves using standard hydrous and multi-cation sulfates. These calibration curves allow deriving the elemental compositions and the degrees of hydration of various hydrous sulfates, one of the two major types of secondary minerals found on Mars. Our quantitative tools are built upon calibration-curve modeling, through the correlation of the elemental concentrations and the peak areas of the atomic emission lines observed in the LIBS spectra of standard samples. At present, we can derive the elemental concentrations of K, Na, Ca, Mg, Fe, Al, S, O, and H in sulfates, as well as the hydration degrees of Ca- and Mg-sulfates, from LIBS spectra obtained both in Earth atmosphere and under Mars atmospheric conditions in a Planetary Environment and Analysis Chamber (PEACh). In addition, structural information can potentially be obtained for various Fe-sulfates. - Highlights: ► Routines for fast, automated processing of LIBS spectral data. ► Identification of elements and determination of elemental composition. ► Calibration curves for sulfate samples under Earth and Mars atmospheric conditions. ► Fe curves probably related to the crystalline structure of Fe-sulfates. ► Extraction of the degree of hydration in hydrous Mg-, Ca-, and Fe-sulfates.
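
    A schematic version of the pre-processing sequence described above, assuming 125 single-shot spectra per target; the running-minimum baseline and the rectangle-rule peak integration are simplifications of the authors' routines.

        import numpy as np

        def preprocess_libs(shots, baseline_window=51):
            """Average single-shot LIBS spectra (shots: n_shots x n_pixels)
            and subtract a crude running-minimum baseline."""
            mean = shots.mean(axis=0)           # SNR grows ~ sqrt(n_shots)
            pad = baseline_window // 2
            padded = np.pad(mean, pad, mode="edge")
            baseline = np.array([padded[i:i + baseline_window].min()
                                 for i in range(mean.size)])
            return mean - baseline

        def peak_area(spectrum, wavelengths, lo, hi):
            """Integrate an emission line between lo and hi (assumes a
            uniformly spaced wavelength axis)."""
            sel = (wavelengths >= lo) & (wavelengths <= hi)
            return float(spectrum[sel].sum() * (wavelengths[1] - wavelengths[0]))

        # usage sketch:
        # wl = np.linspace(240, 850, 6144)           # spectrometer axis (nm)
        # spec = preprocess_libs(shots)              # shots: 125 x 6144 array
        # a = peak_area(spec, wl, 279.3, 280.5)      # e.g. the Mg II doublet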

  15. A writer's guide to education scholarship: Quantitative methodologies for medical education research (part 1).

    Science.gov (United States)

    Thoma, Brent; Camorlinga, Paola; Chan, Teresa M; Hall, Andrew Koch; Murnaghan, Aleisha; Sherbino, Jonathan

    2018-01-01

    Quantitative research is one of the many research methods used to help educators advance their understanding of questions in medical education. However, little research has been done on how to succeed in publishing in this area. We conducted a scoping review to identify key recommendations and reporting guidelines for quantitative educational research and scholarship. Medline, ERIC, and Google Scholar were searched for English-language articles published between 2006 and January 2016 using the search terms "research design," "quantitative," "quantitative methods," and "medical education." A hand search was completed for additional references during the full-text review. Titles/abstracts were reviewed by two authors (BT, PC) and included if they focused on quantitative research in medical education and outlined reporting guidelines or provided recommendations on conducting quantitative research. One hundred articles were reviewed in parallel, with the first 30 used for calibration and the subsequent 70 used to calculate Cohen's kappa coefficient. Two reviewers (BT, PC) conducted a full-text review and extracted recommendations and reporting guidelines. A simple thematic analysis summarized the extracted recommendations. Sixty-one articles were reviewed in full, and 157 recommendations were extracted. The thematic analysis identified 86 items, 14 categories, and 3 themes. Fourteen quality evaluation tools and reporting guidelines were found. This paper provides guidance for junior researchers in the form of key quality markers and reporting guidelines. We hope that quantitative researchers in medical education will be informed by the results and that further work will be done to refine the list of recommendations.
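
    Inter-rater agreement of the kind reported above is the standard Cohen's kappa; a minimal sketch with invented include/exclude decisions for the two reviewers:

        from sklearn.metrics import cohen_kappa_score

        # toy include(1)/exclude(0) decisions on ten titles/abstracts
        reviewer_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
        reviewer_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
        print(f"kappa = {cohen_kappa_score(reviewer_a, reviewer_b):.2f}")  # 0.58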

  16. Nuclear expert web mining system: monitoring and analysis of nuclear acceptance by information retrieval and opinion extraction on the Internet

    Energy Technology Data Exchange (ETDEWEB)

    Reis, Thiago; Barroso, Antonio C.O.; Imakuma, Kengo, E-mail: thiagoreis@usp.b, E-mail: barroso@ipen.b, E-mail: kimakuma@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    This paper presents a research initiative that aims to collect nuclear-related information and to analyze opinionated texts by mining the hypertextual data environment and social network web sites on the Internet. Different from previous approaches that employed traditional statistical techniques, a novel Web Mining approach, built on the concept of Expert Systems, is proposed for massive and autonomous data collection and analysis. The initial step has been accomplished, resulting in a framework design that is able to gradually encompass a set of evolving techniques, methods, and theories, in such a way that this work will build a platform upon which new research can be performed more easily by substituting modules or plugging in new ones. Upon completion, it is expected that this research will contribute to the understanding of the population's views on nuclear technology and its acceptance. (author)

  17. Nuclear expert web mining system: monitoring and analysis of nuclear acceptance by information retrieval and opinion extraction on the Internet

    International Nuclear Information System (INIS)

    Reis, Thiago; Barroso, Antonio C.O.; Imakuma, Kengo

    2011-01-01

    This paper presents a research initiative that aims to collect nuclear-related information and to analyze opinionated texts by mining the hypertextual data environment and social network web sites on the Internet. Different from previous approaches that employed traditional statistical techniques, a novel Web Mining approach, built on the concept of Expert Systems, is proposed for massive and autonomous data collection and analysis. The initial step has been accomplished, resulting in a framework design that is able to gradually encompass a set of evolving techniques, methods, and theories, in such a way that this work will build a platform upon which new research can be performed more easily by substituting modules or plugging in new ones. Upon completion, it is expected that this research will contribute to the understanding of the population's views on nuclear technology and its acceptance. (author)

  18. Machine-learned solutions for three stages of clinical information extraction: the state of the art at i2b2 2010.

    Science.gov (United States)

    de Bruijn, Berry; Cherry, Colin; Kiritchenko, Svetlana; Martin, Joel; Zhu, Xiaodan

    2011-01-01

    As clinical text mining continues to mature, its potential as an enabling technology for innovations in patient care and clinical research is becoming a reality. A critical part of that process is rigorous benchmark testing of natural language processing methods on realistic clinical narrative. In this paper, the authors describe the design and performance of three state-of-the-art text-mining applications from the National Research Council of Canada on evaluations within the 2010 i2b2 challenge. The three systems perform three key steps in clinical information extraction: (1) extraction of medical problems, tests, and treatments from discharge summaries and progress notes; (2) classification of assertions made on the medical problems; (3) classification of relations between medical concepts. Machine learning systems performed these tasks using large-dimensional bags of features, as derived from both the text itself and from external sources: UMLS, cTAKES, and Medline. Performance was measured per subtask, using micro-averaged F-scores, as calculated by comparing system annotations with ground-truth annotations on a test set. The systems ranked high among all submitted systems in the competition, with the following F-scores: concept extraction 0.8523 (ranked first); assertion detection 0.9362 (ranked first); relationship detection 0.7313 (ranked second). For all tasks, we found that the introduction of a wide range of features was crucial to success. Importantly, our choice of machine learning algorithms allowed us to be versatile in our feature design, and to introduce a large number of features without overfitting and without encountering computing-resource bottlenecks.
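
    The micro-averaged F-scores reported above pool true positives, false positives and false negatives across all entity types before computing precision and recall; a minimal sketch with invented counts:

        def micro_f1(tp, fp, fn):
            """Micro-averaged F-score from pooled counts."""
            p = tp / (tp + fp)   # pooled precision
            r = tp / (tp + fn)   # pooled recall
            return 2 * p * r / (p + r)

        # toy counts pooled over problems, tests and treatments
        print(f"micro-F1 = {micro_f1(tp=8300, fp=1400, fn=1500):.4f}")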

  19. Quantitative Finance

    Science.gov (United States)

    James, Jessica

    2017-01-01

    Quantitative finance is a field that has risen to prominence over the last few decades. It encompasses the complex models and calculations that value financial contracts, particularly those which reference events in the future, and apply probabilities to these events. While adding greatly to the flexibility of the market available to corporations and investors, it has also been blamed for worsening the impact of financial crises. But what exactly does quantitative finance encompass, and where did these ideas and models originate? We show that the mathematics behind finance and behind games of chance have tracked each other closely over the centuries and that many well-known physicists and mathematicians have contributed to the field.

  20. Extracting drug mechanism and pharmacodynamic information from clinical electroencephalographic data using generalised semi-linear canonical correlation analysis

    International Nuclear Information System (INIS)

    Brain, P; Strimenopoulou, F; Ivarsson, M; Wilson, F J; Diukova, A; Wise, R G; Berry, E; Jolly, A; Hall, J E

    2014-01-01

    Conventional analysis of clinical resting electroencephalography (EEG) recordings typically involves assessment of spectral power in pre-defined frequency bands at specific electrodes. EEG is a potentially useful technique in drug development for measuring the pharmacodynamic (PD) effects of a centrally acting compound, and hence for assessing the likelihood of success of a novel drug based on pharmacokinetic–pharmacodynamic (PK–PD) principles. However, the need to define the electrodes and spectral bands to be analysed a priori is limiting where the nature of the drug-induced EEG effects is initially not known. We describe the extension to human EEG data of a generalised semi-linear canonical correlation analysis (GSLCCA), developed for small-animal data. GSLCCA uses data from the whole spectrum, the entire recording duration and multiple electrodes. It provides interpretable information on the mechanism of drug action and a PD measure suitable for use in PK–PD modelling. Data from a study with low (analgesic) doses of the μ-opioid agonist remifentanil in 12 healthy subjects were analysed using conventional spectral edge analysis and GSLCCA. At this low dose, the conventional analysis was unsuccessful, but plausible results consistent with previous observations were obtained using GSLCCA, confirming that GSLCCA can be successfully applied to clinical EEG data. (paper)
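
    As a toy analogue of the correlation step (plain linear CCA rather than the generalised semi-linear variant described above), the sketch below correlates invented band-power features with a simulated PK-driven time course; the canonical weights then indicate which EEG features carry the drug signature.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(6)
        n = 120                                        # EEG epochs in a session
        latent = np.sin(np.linspace(0, 3 * np.pi, n))  # stand-in PD time course
        X = rng.normal(0, 1, (n, 20))                  # 20 band-power features
        X[:, 3] += 2 * latent                          # one feature tracks the drug
        Y = (latent + rng.normal(0, 0.3, n)).reshape(-1, 1)  # predicted concentration

        cca = CCA(n_components=1).fit(X, Y)
        u, v = cca.transform(X, Y)
        r = np.corrcoef(u[:, 0], v[:, 0])[0, 1]
        print(f"first canonical correlation = {r:.2f}")
        # cca.x_weights_ shows which EEG features carry the drug signature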