WorldWideScience

Sample records for improving information extraction

  1. Optical Aperture Synthesis Object's Information Extracting Based on Wavelet Denoising

    International Nuclear Information System (INIS)

    Fan, W J; Lu, Y

    2006-01-01

Wavelet denoising is studied to improve the extraction of an OAS (optical aperture synthesis) object's Fourier information. Translation-invariant wavelet denoising, based on Donoho's wavelet soft-threshold denoising, is investigated to remove pseudo-Gibbs artifacts from the soft-thresholded image. OAS object information extraction based on translation-invariant wavelet denoising is then studied. The study shows that wavelet threshold denoising can improve the precision and repeatability of extracting object information from an interferogram, and that information extraction with translation-invariant wavelet denoising outperforms extraction with plain soft-threshold wavelet denoising.
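The two ingredients of this record — Donoho's soft-threshold rule and translation-invariant (cycle-spinning) averaging — can be sketched as follows. This is a minimal illustration, not the authors' code: the pointwise threshold stands in for a full wavelet transform, so the function names and structure are assumptions.

```python
def soft_threshold(coeffs, t):
    # Donoho soft thresholding: shrink each coefficient toward zero by t
    return [max(abs(c) - t, 0.0) * (1.0 if c >= 0 else -1.0) for c in coeffs]

def cycle_spin_denoise(signal, t, denoise):
    # Translation-invariant denoising: denoise every circular shift of the
    # signal, undo each shift, and average the results to suppress
    # pseudo-Gibbs oscillations near discontinuities.
    n = len(signal)
    acc = [0.0] * n
    for s in range(n):
        shifted = signal[s:] + signal[:s]
        den = denoise(shifted, t)
        unshifted = den[-s:] + den[:-s] if s else den
        acc = [a + u for a, u in zip(acc, unshifted)]
    return [a / n for a in acc]
```

In practice `denoise` would wrap a forward wavelet transform, soft thresholding of the detail coefficients, and the inverse transform; the pointwise threshold above merely keeps the sketch self-contained.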

  2. Improving information extraction using a probability-based approach

    DEFF Research Database (Denmark)

    Kim, S.; Ahmed, Saeema; Wallace, K.

    2007-01-01

Information plays a crucial role during the entire life-cycle of a product. It has been shown that engineers frequently consult colleagues to obtain the information they require to solve problems. However, the industrial world is now more transient and key personnel move to other companies ... or retire. It is becoming essential to retrieve vital information from archived product documents, if it is available. There is, therefore, great interest in ways of extracting relevant and sharable information from documents. A keyword-based search is commonly used, but studies have shown ... the recall, while maintaining the high precision, a learning approach that makes identification decisions based on a probability model, rather than simply looking up the presence of the pre-defined variations, looks promising. This paper presents the results of developing such a probability-based entity...

  3. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction.

    Science.gov (United States)

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2010-11-13

BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. The tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested the tool's impact on the task of protein-protein interaction (PPI) extraction: it improved the f-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.
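The "shot-gun" combination of constituent variants can be illustrated with a minimal sketch. The split into constituents and the variant lists are assumed inputs here, not BioSimplify's actual representation:

```python
from itertools import product

def shotgun_variants(constituents):
    # constituents: for each sentence element, a list of alternative
    # (usually simpler) phrasings.  Emit every combination of one
    # phrasing per element as a candidate simplified sentence.
    return [" ".join(parts) for parts in product(*constituents)]
```

A downstream extractor can then be run over all candidates, keeping any relation found in at least one of them — which is how such a shot-gun strategy trades precision checks for higher recall.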

  4. Information extraction system

    Science.gov (United States)

    Lemmond, Tracy D; Hanley, William G; Guensche, Joseph Wendell; Perry, Nathan C; Nitao, John J; Kidwell, Paul Brandon; Boakye, Kofi Agyeman; Glaser, Ron E; Prenger, Ryan James

    2014-05-13

An information extraction system and methods of operating the system are provided. In particular, an information extraction system for performing meta-extraction of named entities of people, organizations, and locations, as well as relationships and events, from text documents is described herein.

  5. Object learning improves feature extraction but does not improve feature selection.

    Directory of Open Access Journals (Sweden)

    Linus Holm

Full Text Available A single glance at your crowded desk is enough to locate your favorite cup. But finding an unfamiliar object requires more effort. This superiority in recognition performance for learned objects has at least two possible sources. For familiar objects observers might: (1) select more informative image locations upon which to fixate their eyes, or (2) extract more information from a given eye fixation. To test these possibilities, we had observers localize fragmented objects embedded in dense displays of random contour fragments. Eight participants searched for objects in 600 images while their eye movements were recorded in three daily sessions. Performance improved as subjects trained with the objects: the number of fixations required to find an object decreased by 64% across the 3 sessions. An ideal observer model that included measures of fragment confusability was used to calculate the information available from a single fixation. Comparing human performance to the model suggested that across sessions information extraction at each eye fixation increased markedly, by an amount roughly equal to the extra information that would be extracted following a 100% increase in functional field of view. Selection of fixation locations, on the other hand, did not improve with practice.

  6. Multimedia Information Extraction

    CERN Document Server

    Maybury, Mark T

    2012-01-01

The advent of increasingly large consumer collections of audio (e.g., iTunes), imagery (e.g., Flickr), and video (e.g., YouTube) is driving a need not only for multimedia retrieval but also information extraction from and across media. Furthermore, industrial and government collections fuel requirements for stock media access, media preservation, broadcast news retrieval, identity management, and video surveillance. While significant advances have been made in language processing for information extraction from unstructured multilingual text and extraction of objects from imagery and video...

  7. BioSimplify: an open source sentence simplification engine to improve recall in automatic biomedical information extraction

    OpenAIRE

    Jonnalagadda, Siddhartha; Gonzalez, Graciela

    2011-01-01

    BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. This tool is optimized for processing biomedical scien...

  8. Challenges in Managing Information Extraction

    Science.gov (United States)

    Shen, Warren H.

    2009-01-01

This dissertation studies information extraction (IE), the problem of extracting structured information from unstructured data. Example IE tasks include extracting person names from news articles, product information from e-commerce Web pages, street addresses from emails, and names of emerging music bands from blogs. IE is an increasingly…

  9. Extracting the information backbone in online system.

    Directory of Open Access Journals (Sweden)

    Qian-Ming Zhang

Full Text Available Information overload is a serious problem in modern society and many solutions such as recommender systems have been proposed to filter out irrelevant information. In the literature, researchers have mainly been dedicated to improving the recommendation performance (accuracy and diversity) of the algorithms, while they have overlooked the influence of the topology of the online user-object bipartite networks. In this paper, we find that some information provided by the bipartite networks is not only redundant but also misleading. With this "less can be more" feature, we design algorithms to improve the recommendation performance by eliminating some links from the original networks. Moreover, we propose a hybrid method combining the time-aware and topology-aware link removal algorithms to extract the backbone which contains the essential information for the recommender systems. From a practical point of view, our method can improve the performance and reduce the computational time of the recommendation system, thus improving both its effectiveness and efficiency.
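The idea of pruning redundant links from the user-object bipartite network before running a recommender can be sketched as below. The specific heuristic shown (dropping a fraction of the links attached to the most popular objects, whose presence carries little extra information) is an illustrative assumption, not the authors' exact time-aware/topology-aware algorithms:

```python
from collections import Counter

def prune_popular_links(edges, fraction):
    # edges: list of (user, item) pairs in the bipartite network.
    # Links to very popular items are largely redundant for a recommender,
    # so drop the given fraction of links, taking those whose item has the
    # highest degree first.
    pop = Counter(item for _, item in edges)
    ranked = sorted(edges, key=lambda e: pop[e[1]], reverse=True)
    k = int(len(edges) * fraction)
    return ranked[k:]
```

The surviving edge list is the "backbone" handed to the recommender; a time-aware variant would rank links by age instead of item degree.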

  10. Extracting the information backbone in online system.

    Science.gov (United States)

    Zhang, Qian-Ming; Zeng, An; Shang, Ming-Sheng

    2013-01-01

Information overload is a serious problem in modern society and many solutions such as recommender systems have been proposed to filter out irrelevant information. In the literature, researchers have mainly been dedicated to improving the recommendation performance (accuracy and diversity) of the algorithms, while they have overlooked the influence of the topology of the online user-object bipartite networks. In this paper, we find that some information provided by the bipartite networks is not only redundant but also misleading. With this "less can be more" feature, we design algorithms to improve the recommendation performance by eliminating some links from the original networks. Moreover, we propose a hybrid method combining the time-aware and topology-aware link removal algorithms to extract the backbone which contains the essential information for the recommender systems. From a practical point of view, our method can improve the performance and reduce the computational time of the recommendation system, thus improving both its effectiveness and efficiency.

  11. Extracting the Information Backbone in Online System

    Science.gov (United States)

    Zhang, Qian-Ming; Zeng, An; Shang, Ming-Sheng

    2013-01-01

Information overload is a serious problem in modern society and many solutions such as recommender systems have been proposed to filter out irrelevant information. In the literature, researchers have mainly been dedicated to improving the recommendation performance (accuracy and diversity) of the algorithms, while they have overlooked the influence of the topology of the online user-object bipartite networks. In this paper, we find that some information provided by the bipartite networks is not only redundant but also misleading. With this “less can be more” feature, we design algorithms to improve the recommendation performance by eliminating some links from the original networks. Moreover, we propose a hybrid method combining the time-aware and topology-aware link removal algorithms to extract the backbone which contains the essential information for the recommender systems. From a practical point of view, our method can improve the performance and reduce the computational time of the recommendation system, thus improving both its effectiveness and efficiency. PMID:23690946

  12. Integrating Information Extraction Agents into a Tourism Recommender System

    Science.gov (United States)

    Esparcia, Sergio; Sánchez-Anguix, Víctor; Argente, Estefanía; García-Fornes, Ana; Julián, Vicente

Recommender systems face some problems. On the one hand, information needs to be kept up to date, which can be a costly task if it is not performed automatically. On the other hand, it may be interesting to include third-party services in the recommendation, since they improve its quality. In this paper, we present an add-on for the Social-Net Tourism Recommender System that uses information extraction and natural language processing techniques to automatically extract and classify information from the Web. Its goal is to keep the system updated and to obtain information about third-party services that are not offered by service providers inside the system.

  13. Transliteration normalization for Information Extraction and Machine Translation

    Directory of Open Access Journals (Sweden)

    Yuval Marton

    2014-12-01

Full Text Available Foreign name transliterations typically include multiple spelling variants. These variants cause data sparseness and inconsistency problems, increase the Out-of-Vocabulary (OOV) rate, and present challenges for Machine Translation, Information Extraction and other natural language processing (NLP) tasks. This work aims to identify and cluster name spelling variants using a Statistical Machine Translation method: word alignment. The variants are identified by being aligned to the same “pivot” name in another language (the source language in Machine Translation settings). Based on word-to-word translation and transliteration probabilities, as well as the string edit distance metric, names with similar spellings in the target language are clustered and then normalized to a canonical form. With this approach, tens of thousands of high-precision name transliteration spelling variants are extracted from sentence-aligned bilingual corpora in Arabic and English (in both languages). When these normalized name spelling variants are applied to Information Extraction tasks, improvements over strong baseline systems are observed. When applied to Machine Translation tasks, a large improvement potential is shown.
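The normalization step — collapsing spellings that share a pivot onto a canonical form within an edit-distance bound — can be sketched as follows. The pivot grouping is taken as given (in the paper it comes from word alignment), and the names and threshold are illustrative assumptions:

```python
def edit_distance(a, b):
    # Classic dynamic-programming Levenshtein distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def normalize_variants(pivot_groups, max_dist=2):
    # pivot_groups: pivot name (source language) -> list of target-language
    # spellings aligned to it.  The most frequent spelling becomes the
    # canonical form; variants within max_dist edits are mapped onto it.
    canon = {}
    for pivot, variants in pivot_groups.items():
        best = max(set(variants), key=variants.count)
        canon[pivot] = {v: best for v in set(variants)
                        if edit_distance(v, best) <= max_dist}
    return canon
```

A real system would also weight candidates by the translation and transliteration probabilities mentioned above rather than by raw frequency alone.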

  14. Semantic Information Extraction of Lanes Based on Onboard Camera Videos

    Science.gov (United States)

    Tang, L.; Deng, T.; Ren, C.

    2018-04-01

    In the field of autonomous driving, semantic information of lanes is very important. This paper proposes a method of automatic detection of lanes and extraction of semantic information from onboard camera videos. The proposed method firstly detects the edges of lanes by the grayscale gradient direction, and improves the Probabilistic Hough transform to fit them; then, it uses the vanishing point principle to calculate the lane geometrical position, and uses lane characteristics to extract lane semantic information by the classification of decision trees. In the experiment, 216 road video images captured by a camera mounted onboard a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.

  15. AGS slow extraction improvements

    International Nuclear Information System (INIS)

    Glenn, J.W.; Smith, G.A.; Sandberg, J.N.; Repeta, L.; Weisberg, H.

    1979-01-01

Improvement of the straightness of the F5 copper septum increased the AGS slow extraction efficiency from approx. 80% to approx. 90%. Installation of an electrostatic septum at H2O, 24 betatron wavelengths upstream of F5, further improved the extraction efficiency to approx. 97%.

  16. Information extraction from multi-institutional radiology reports.

    Science.gov (United States)

    Hassanpour, Saeed; Langlotz, Curtis P

    2016-01-01

    The radiology report is the most important source of clinical imaging information. It documents critical information about the patient's health and the radiologist's interpretation of medical findings. It also communicates information to the referring physicians and records that information for future clinical and research use. Although efforts to structure some radiology report information through predefined templates are beginning to bear fruit, a large portion of radiology report information is entered in free text. The free text format is a major obstacle for rapid extraction and subsequent use of information by clinicians, researchers, and healthcare information systems. This difficulty is due to the ambiguity and subtlety of natural language, complexity of described images, and variations among different radiologists and healthcare organizations. As a result, radiology reports are used only once by the clinician who ordered the study and rarely are used again for research and data mining. In this work, machine learning techniques and a large multi-institutional radiology report repository are used to extract the semantics of the radiology report and overcome the barriers to the re-use of radiology report information in clinical research and other healthcare applications. We describe a machine learning system to annotate radiology reports and extract report contents according to an information model. This information model covers the majority of clinically significant contents in radiology reports and is applicable to a wide variety of radiology study types. Our automated approach uses discriminative sequence classifiers for named-entity recognition to extract and organize clinically significant terms and phrases consistent with the information model. 
We evaluated our information extraction system on 150 radiology reports from three major healthcare organizations and compared its results to a commonly used non-machine learning information extraction method. We ...

  17. A rapid extraction of landslide disaster information research based on GF-1 image

    Science.gov (United States)

    Wang, Sai; Xu, Suning; Peng, Ling; Wang, Zhiyi; Wang, Na

    2015-08-01

In recent years, landslide disasters have occurred frequently because of seismic activity, bringing great harm to people's lives and attracting high attention from the state and extensive concern from society. In the field of geological disasters, landslide information extraction based on remote sensing has been controversial, but high-resolution remote sensing imagery, with its rich texture and geometry information, can effectively improve the accuracy of information extraction. It is therefore feasible to extract information on earthquake-triggered landslides with serious surface damage and large scale. Taking Wenchuan county as the study area, this paper uses a multi-scale segmentation method to extract landslide image objects from domestic GF-1 images and DEM data, using the estimation of scale parameter tool to determine the optimal segmentation scale. After comprehensively analyzing the characteristics of landslides in high-resolution imagery and selecting spectral, texture, geometric and landform features of the image, extraction rules are established to extract landslide disaster information. The extraction results show 20 landslides whose total area is 521279.31. Compared with visual interpretation results, the extraction accuracy is 72.22%. This study indicates that it is efficient and feasible to extract earthquake landslide disaster information based on high-resolution remote sensing, and it provides important technical support for post-disaster emergency investigation and disaster assessment.

  18. Terrain Extraction by Integrating Terrestrial Laser Scanner Data and Spectral Information

    Science.gov (United States)

    Lau, C. L.; Halim, S.; Zulkepli, M.; Azwan, A. M.; Tang, W. L.; Chong, A. K.

    2015-10-01

The extraction of true terrain points from unstructured laser point cloud data is an important process for producing an accurate digital terrain model (DTM). However, most spatial filtering methods utilize only geometrical data to discriminate terrain points from non-terrain points. Point cloud filtering can also be improved by using the spectral information available with some scanners. Therefore, the objective of this study is to investigate the effectiveness of using the three channels (red, green and blue) of the colour image, captured by the built-in digital camera available in some Terrestrial Laser Scanners (TLS), for terrain extraction. In this study, data acquisition was conducted at a mini replica landscape in Universiti Teknologi Malaysia (UTM), Skudai campus, using a Leica ScanStation C10. The spectral information of the coloured point clouds from selected sample classes was extracted for spectral analysis. The coloured points which fall within the corresponding preset spectral threshold are identified as points of that specific feature in the dataset. This terrain extraction process is carried out using MATLAB code developed for the study. Results demonstrate that a passive image of higher spectral resolution is required to improve the output, because the low quality of the colour images captured by the sensor contributes to low separability in spectral reflectance. In conclusion, this study shows that spectral information can be used as a parameter for terrain extraction.
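Filtering coloured TLS points against a preset per-channel spectral window, as described above, can be sketched as follows. The point-tuple layout and the window values are illustrative assumptions, not the study's MATLAB implementation:

```python
def filter_by_spectrum(points, rgb_min, rgb_max):
    # points: iterable of (x, y, z, r, g, b) coloured laser points.
    # Keep the coordinates of points whose colour falls inside the
    # per-channel [min, max] window preset for the target class (terrain).
    kept = []
    for x, y, z, r, g, b in points:
        if all(lo <= c <= hi
               for c, lo, hi in zip((r, g, b), rgb_min, rgb_max)):
            kept.append((x, y, z))
    return kept
```

In practice the window would be derived from the sample-class spectral analysis the record mentions, and the surviving points would then feed a geometric DTM filter.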

  19. Improvements in solvent extraction columns

    International Nuclear Information System (INIS)

    Aughwane, K.R.

    1987-01-01

Solvent extraction columns are used in the reprocessing of irradiated nuclear fuel. For an effective reprocessing operation, a solvent extraction column is required which is capable of distributing the feed over most of the column. The patent describes improvements in solvent extraction columns which allow the feed to be distributed over a greater length of the column than was previously possible. (U.K.)

  20. Extracting useful information from images

    DEFF Research Database (Denmark)

    Kucheryavskiy, Sergey

    2011-01-01

The paper presents an overview of methods for extracting useful information from digital images. It covers various approaches that utilize different properties of images, like intensity distribution, spatial frequencies content and several others. A few case studies including isotropic and heter...

  1. An Improved AAM Method for Extracting Human Facial Features

    Directory of Open Access Journals (Sweden)

    Tao Zhou

    2012-01-01

Full Text Available The active appearance model (AAM) is a statistical parametric model, widely used to extract human facial features for recognition. However, the intensity values used in the original AAM cannot provide enough information about image texture, which leads to larger errors or fitting failures. In order to overcome these defects and improve the fitting performance of the AAM, an improved texture representation is proposed in this paper. Firstly, a translation-invariant wavelet transform is performed on face images, and the image structure is then represented using a measure obtained by fusing the low-frequency coefficients with edge intensity. Experimental results show that the improved algorithm can increase the accuracy of AAM fitting and express more information about edge and texture structures.

  2. Extraction of Information of Audio-Visual Contents

    Directory of Open Access Journals (Sweden)

    Carlos Aguilar

    2011-10-01

Full Text Available In this article we show how it is possible to use Channel Theory (Barwise and Seligman, 1997) for modeling the process of information extraction realized by audiences of audio-visual contents. To do this, we rely on the concepts proposed by Channel Theory and, especially, its treatment of representational systems. We then show how the information that an agent is capable of extracting from the content depends on the number of channels he is able to establish between the content and the set of classifications he is able to discriminate. The agent can attempt to extract information through these channels from the totality of the content; however, we discuss the advantages of extracting from its constituents in order to obtain a greater number of informational items that represent it. After showing how the extraction process proceeds for each channel, we propose a method for representing all the informative values an agent can obtain from a content, using a matrix constituted by the channels the agent is able to establish on the content (source classifications) and the ones he can understand as individual (destination classifications). We finally show how this representation reflects the evolution of the informative items through the evolution of the audio-visual content.

  3. Ultrasound pretreatment as an alternative to improve essential oils extraction

    Directory of Open Access Journals (Sweden)

    Flávia Michelon Dalla Nora

Full Text Available ABSTRACT: Essential oils are substances originating from plants in general. These compounds are well known for high biological activity, especially antioxidant and antimicrobial activity. Several extraction techniques are employed to obtain these substances; however, the majority of these techniques require a long extraction time. In this sense, innovative and alternative extraction techniques, such as ultrasound, have recently been the target of studies. In view of the small number of publications on ultrasonic pretreatment, this review aimed to bring together current relevant information on ultrasound-assisted extraction of essential oils. Theoretical aspects, such as the main factors that influence the performance of this technique, as well as the advantages and disadvantages of ultrasound as an environmentally friendly alternative for improving essential oil extraction compared to traditional methods, are presented. In the available studies on essential oil extraction using ultrasonic pretreatment, low frequencies from 20 to 50 kHz and times from 20 to 40 min were used. The use of ultrasonic pretreatment reduces extraction time by nearly 70% relative to conventional hydrodistillation. These conditions also increased the extraction of bioactive compounds, consequently improving the antioxidant and antimicrobial activities of the essential oils.

  4. Scenario Customization for Information Extraction

    National Research Council Canada - National Science Library

    Yangarber, Roman

    2001-01-01

    Information Extraction (IE) is an emerging NLP technology, whose function is to process unstructured, natural language text, to locate specific pieces of information, or facts, in the text, and to use these facts to fill a database...

  5. Can we replace curation with information extraction software?

    Science.gov (United States)

    Karp, Peter D

    2016-01-01

Can we use programs for automated or semi-automated information extraction from scientific texts as practical alternatives to professional curation? I show that error rates of current information extraction programs are too high to replace professional curation today. Furthermore, current IEP programs extract single narrow slivers of information, such as individual protein interactions; they cannot extract the large breadth of information extracted by professional curators for databases such as EcoCyc. They also cannot arbitrate among conflicting statements in the literature as curators can. Therefore, funding agencies should not hobble the curation efforts of existing databases on the assumption that a problem that has stymied Artificial Intelligence researchers for more than 60 years will be solved tomorrow. Semi-automated extraction techniques appear to have significantly more potential based on a review of recent tools that enhance curator productivity. But a full cost-benefit analysis for these tools is lacking. Without such analysis it is possible to expend significant effort developing information-extraction tools that automate small parts of the overall curation workflow without achieving a significant decrease in curation costs. © The Author(s) 2016. Published by Oxford University Press.

  6. Transductive Pattern Learning for Information Extraction

    National Research Council Canada - National Science Library

    McLernon, Brian; Kushmerick, Nicholas

    2006-01-01

    .... We present TPLEX, a semi-supervised learning algorithm for information extraction that can acquire extraction patterns from a small amount of labelled text in conjunction with a large amount of unlabelled text...

  7. An improved active contour model for glacial lake extraction

    Science.gov (United States)

    Zhao, H.; Chen, F.; Zhang, M.

    2017-12-01

The active contour model is a widely used method in visual tracking and image segmentation. Driven by an objective function, the initial curve defined in the active contour model evolves to a stable condition - the desired result in a given image. As a typical region-based active contour model, the C-V model performs well on weak boundary detection and is robust to noise, which shows great potential for glacial lake extraction. Glacial lakes are a sensitive indicator of global climate change, so accurately delineating glacial lake boundaries is essential to evaluating the hydrologic and living environment. However, the current methods for glacial lake extraction, mainly water-index and classification-based methods, are difficult to apply directly to large-scale glacial lake extraction due to the diversity of glacial lakes and the many confounding factors in the image, such as image noise, shadows, snow and ice, etc. Given the abovementioned advantages of the C-V model and the difficulties of glacial lake extraction, we introduce the signed pressure force function to improve the C-V model and adapt it to glacial lake extraction. To inspect the glacial lake extraction results, three typical glacial lake development sites were selected, including the Altai mountains, the central Himalayas and south-eastern Tibet; Landsat8 OLI imagery was used as the experimental data source, with Google Earth imagery as reference data for verifying the results. The experimental results suggest that the improved active contour model we propose can effectively discriminate glacial lakes from a complex background with a higher Kappa coefficient - 0.895 - especially for some small glacial lakes which constitute weak information in the image. Our findings provide a new approach to improved accuracy given a large proportion of small glacial lakes, and the possibility of automated glacial lake mapping over large-scale areas.
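A signed pressure force (SPF) function of the kind used to improve the C-V model can be sketched from the two region means. This follows a common formulation (force positive inside the object, negative outside, normalised by its largest magnitude); the exact variant the authors use is an assumption here:

```python
def signed_pressure_force(intensities, c1, c2):
    # c1, c2: mean intensities inside and outside the evolving contour,
    # as in the C-V model.  The SPF is positive where a pixel is brighter
    # than the midpoint of the two means and negative where it is darker,
    # so it pushes the contour to expand inside the object and shrink
    # outside it.
    mid = (c1 + c2) / 2.0
    vals = [i - mid for i in intensities]
    m = max(abs(v) for v in vals) or 1.0  # avoid division by zero
    return [v / m for v in vals]
```

In the full segmentation loop this term replaces the edge-stopping function in the level-set evolution, with `c1` and `c2` re-estimated at every iteration.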

  8. Improving extraction efficiency of the third integer resonant extraction using higher order multipoles

    Energy Technology Data Exchange (ETDEWEB)

    Brown, K. A. [Brookhaven National Lab. (BNL), Upton, NY (United States); Schoefer, V. [Brookhaven National Lab. (BNL), Upton, NY (United States); Tomizawa, M. [High Energy Accelerator Research Organization (KEK), Tsukuba (Japan)

    2017-03-09

The new accelerator complex at J-PARC will operate with both high-energy and very high-intensity proton beams. With a design slow extraction efficiency of greater than 99%, this facility will still deposit significant beam power onto accelerator components [2]. Achieving even higher efficiencies requires some new ideas, since the design of the extraction system and the accelerator lattice structure leaves little room for improvement using conventional techniques. In this report we present one method for improving the slow extraction efficiency at J-PARC by adding duodecapoles or octupoles to the slow extraction system. We review the theory of resonant extraction, describe simulation methods, and present the results of detailed simulations. From our investigations we find that we can improve extraction efficiency and thereby reduce the level of residual activation in the accelerator components and surrounding shielding.

  9. Evolving spectral transformations for multitemporal information extraction using evolutionary computation

    Science.gov (United States)

    Momm, Henrique; Easson, Greg

    2011-01-01

    Remote sensing plays an important role in assessing temporal changes in land features. The challenge often resides in the conversion of large quantities of raw data into actionable information in a timely and cost-effective fashion. To address this issue, research was undertaken to develop an innovative methodology integrating biologically-inspired algorithms with standard image classification algorithms to improve information extraction from multitemporal imagery. Genetic programming was used as the optimization engine to evolve feature-specific candidate solutions in the form of nonlinear mathematical expressions of the image spectral channels (spectral indices). The temporal generalization capability of the proposed system was evaluated by addressing the task of building rooftop identification from a set of images acquired at different dates in a cross-validation approach. The proposed system generates robust solutions (kappa values > 0.75 for stage 1 and > 0.4 for stage 2) despite the statistical differences between the scenes caused by land use and land cover changes coupled with variable environmental conditions, and the lack of radiometric calibration between images. Based on our results, the use of nonlinear spectral indices enhanced the spectral differences between features improving the clustering capability of standard classifiers and providing an alternative solution for multitemporal information extraction.

  10. Network and Ensemble Enabled Entity Extraction in Informal Text (NEEEEIT) final report

    Energy Technology Data Exchange (ETDEWEB)

    Kegelmeyer, Philip W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Shead, Timothy M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dunlavy, Daniel M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    This SAND report summarizes the activities and outcomes of the Network and Ensemble Enabled Entity Extraction in Informal Text (NEEEEIT) LDRD project, which addressed improving the accuracy of conditional random fields for named entity recognition through the use of ensemble methods.

  11. Improving the Accuracy of Attribute Extraction using the Relatedness between Attribute Values

    Science.gov (United States)

    Bollegala, Danushka; Tani, Naoki; Ishizuka, Mitsuru

    Extracting attribute-values related to entities from web texts is an important step in numerous web-related tasks such as information retrieval, information extraction, and entity disambiguation (namesake disambiguation). For example, for a search query that contains a personal name, we can not only return documents that contain that personal name, but, if we have attribute-values such as the organization for which that person works, we can also suggest documents that contain information related to that organization, thereby improving the user's search experience. Despite numerous potential applications of attribute extraction, it remains a challenging task due to the inherent noise in web data: often a single web page contains multiple entities and attributes. We propose a graph-based approach to select the correct attribute-values from a set of candidate attribute-values extracted for a particular entity. First, we build an undirected weighted graph in which attribute-values are represented by nodes, and the edge that connects two nodes represents the degree of relatedness between the corresponding attribute-values. Next, we find the maximum spanning tree of this graph that connects exactly one attribute-value for each attribute-type. The proposed method outperforms previously proposed attribute extraction methods on a dataset that contains 5000 web pages.
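    The maximum-spanning-tree step lends itself to a short sketch: running Kruskal's algorithm on edges sorted by *descending* weight yields a maximum rather than minimum spanning tree, so the most strongly related attribute-values stay connected. The nodes and relatedness weights below are hypothetical, and the paper's additional constraint of retaining exactly one value per attribute-type is omitted for brevity.

```python
def maximum_spanning_tree(nodes, edges):
    """Kruskal's algorithm on descending weights -> maximum spanning tree."""
    parent = {n: n for n in nodes}  # union-find forest

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n

    tree = []
    for w, u, v in sorted(edges, reverse=True):  # heaviest edges first
        ru, rv = find(u), find(v)
        if ru != rv:  # keep the edge only if it joins two components
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

# Hypothetical candidate attribute-values for one person entity;
# weights approximate pairwise relatedness scores.
nodes = ["Google", "Microsoft", "software engineer", "Mountain View"]
edges = [
    (0.9, "Google", "software engineer"),
    (0.8, "Google", "Mountain View"),
    (0.2, "Microsoft", "Mountain View"),
    (0.3, "Microsoft", "software engineer"),
]
print(maximum_spanning_tree(nodes, edges))
```

    The weakly related candidate ("Microsoft") is only attached through a low-weight edge, which is how the tree structure separates correct values from noise.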

  12. Improving IUE High Dispersion Extraction

    Science.gov (United States)

    Lawton, Patricia J.; VanSteenberg, M. E.; Massa, D.

    2007-01-01

    We present a different method to extract high dispersion International Ultraviolet Explorer (IUE) spectra from the New Spectral Image Processing System (NEWSIPS) geometrically and photometrically corrected (SIHI) images of the echellogram. The new algorithm corrects many of the deficiencies that exist in the NEWSIPS high dispersion (SIHI) spectra. Specifically, it does a much better job of accounting for the overlap of the higher echelle orders, it eliminates a significant time dependency in the extracted spectra (which can be traced to the background model used in the NEWSIPS extractions), and it can extract spectra from echellogram images that are more highly distorted than the NEWSIPS extraction routines can handle. Together, these improvements yield a set of IUE high dispersion spectra whose scientific integrity is significantly better than the NEWSIPS products. This work has been supported by NASA ADP grants.

  13. A cascade of classifiers for extracting medication information from discharge summaries

    Directory of Open Access Journals (Sweden)

    Halgrim Scott

    2011-07-01

    Background: Extracting medication information from clinical records has many potential applications, and recently published research, systems, and competitions reflect an interest therein. Much of the early extraction work involved rules and lexicons, but more recently machine learning has been applied to the task. Methods: We present a hybrid system consisting of two parts. The first part, field detection, uses a cascade of statistical classifiers to identify medication-related named entities. The second part uses simple heuristics to link those entities into medication events. Results: The system achieved performance that is comparable to other approaches to the same task. This performance is further improved by adding features that reference external medication name lists. Conclusions: This study demonstrates that our hybrid approach outperforms purely statistical or rule-based systems. The study also shows that a cascade of classifiers works better than a single classifier in extracting medication information. The system is available as is upon request from the first author.

  14. Robust Vehicle and Traffic Information Extraction for Highway Surveillance

    Directory of Open Access Journals (Sweden)

    Yeh Chia-Hung

    2005-01-01

    A robust vision-based traffic monitoring system for vehicle and traffic information extraction is developed in this research. It is challenging to maintain detection robustness at all times for a highway surveillance system. There are three major problems in detecting and tracking a vehicle: (1) the moving cast shadow effect, (2) the occlusion effect, and (3) nighttime detection. For moving cast shadow elimination, a 2D joint vehicle-shadow model is employed. For occlusion detection, a multiple-camera system is used to detect occlusion so as to extract the exact location of each vehicle. For vehicle nighttime detection, a rear-view monitoring technique is proposed to maintain tracking and detection accuracy. Furthermore, we propose a method to improve the accuracy of background extraction, which usually serves as the first step in any vehicle detection processing. Experimental results are given to demonstrate that the proposed techniques are effective and efficient for vision-based highway surveillance.

  15. Cause Information Extraction from Financial Articles Concerning Business Performance

    Science.gov (United States)

    Sakai, Hiroyuki; Masuyama, Shigeru

    We propose a method of extracting cause information from Japanese financial articles concerning business performance. Our method acquires cause information such as "zidousya no uriage ga koutyou" (sales of cars were good). Cause information is useful for investors in selecting companies in which to invest. Our method extracts cause information in the form of causal expressions by using statistical information and initial clue expressions automatically. Our method can extract causal expressions without predetermined patterns or complex hand-written rules, and is expected to be applicable to other tasks for acquiring phrases that have a particular meaning, not limited to cause information. We compared our method with our previous one, originally proposed for extracting phrases concerning traffic accident causes, and experimental results showed that our new method outperforms our previous one.

  16. Information Extraction of High Resolution Remote Sensing Images Based on the Calculation of Optimal Segmentation Parameters

    Science.gov (United States)

    Zhu, Hongchun; Cai, Lijie; Liu, Haiying; Huang, Wei

    2016-01-01

    Multi-scale image segmentation and the selection of optimal segmentation parameters are the key processes in the object-oriented information extraction of high-resolution remote sensing images. The accuracy of remote sensing thematic information depends on this extraction. On the basis of WorldView-2 high-resolution data, the following processes were conducted in this study to determine the optimal segmentation parameters for object-oriented image segmentation and high-resolution image information extraction. Firstly, the best combination of bands and weights was determined for the information extraction of the high-resolution remote sensing image. An improved weighted mean-variance method was proposed and used to calculate the optimal segmentation scale. Thereafter, the best shape factor and compactness factor parameters were computed with the use of control variables and the combination of heterogeneity and homogeneity indexes. Different types of image segmentation parameters were obtained according to the surface features. The high-resolution remote sensing images were multi-scale segmented with the optimal segmentation parameters. A hierarchical network structure was established by setting information extraction rules to achieve object-oriented information extraction. This study presents an effective and practical method that can explain expert input judgment by reproducible quantitative measurements. Furthermore, the results of this procedure may be incorporated into a classification scheme. PMID:27362762

  17. Sample-based XPath Ranking for Web Information Extraction

    NARCIS (Netherlands)

    Jundt, Oliver; van Keulen, Maurice

    Web information extraction typically relies on a wrapper, i.e., program code or a configuration that specifies how to extract some information from web pages at a specific website. Manually creating and maintaining wrappers is a cumbersome and error-prone task. It may even be prohibitive as some

  18. Ontology-Based Information Extraction for Business Intelligence

    Science.gov (United States)

    Saggion, Horacio; Funk, Adam; Maynard, Diana; Bontcheva, Kalina

    Business Intelligence (BI) requires the acquisition and aggregation of key pieces of knowledge from multiple sources in order to provide valuable information to customers or feed statistical BI models and tools. The massive amount of information available to business analysts makes information extraction and other natural language processing tools key enablers for the acquisition and use of that semantic information. We describe the application of ontology-based extraction and merging in the context of a practical e-business application for the EU MUSING Project where the goal is to gather international company intelligence and country/region information. The results of our experiments so far are very promising and we are now in the process of building a complete end-to-end solution.

  19. Improving the extraction of crisis information in the context of flood, fire, and landslide rapid mapping using SAR and optical remote sensing data

    Science.gov (United States)

    Martinis, Sandro; Clandillon, Stephen; Twele, André; Huber, Claire; Plank, Simon; Maxant, Jérôme; Cao, Wenxi; Caspard, Mathilde; May, Stéphane

    2016-04-01

    Optical and radar satellite remote sensing have proven to provide essential crisis information in case of natural disasters, humanitarian relief activities and civil security issues in a growing number of cases through mechanisms such as the Copernicus Emergency Management Service (EMS) of the European Commission or the International Charter 'Space and Major Disasters'. The aforementioned programs and initiatives make use of satellite-based rapid mapping services aimed at delivering reliable and accurate crisis information after natural hazards. Although these services are increasingly operational, they need to be continuously updated and improved through research and development (R&D) activities. The principal objective of ASAPTERRA (Advancing SAR and Optical Methods for Rapid Mapping), the ESA-funded R&D project being described here, is to improve, automate and, hence, speed-up geo-information extraction procedures in the context of natural hazards response. This is performed through the development, implementation, testing and validation of novel image processing methods using optical and Synthetic Aperture Radar (SAR) data. The methods are mainly developed based on data of the German radar satellites TerraSAR-X and TanDEM-X, the French satellite missions Pléiades-1A/1B as well as the ESA missions Sentinel-1/2 with the aim to better characterize the potential and limitations of these sensors and their synergy. The resulting algorithms and techniques are evaluated in real case applications during rapid mapping activities. The project is focussed on three types of natural hazards: floods, landslides and fires. Within this presentation an overview of the main methodological developments in each topic is given and demonstrated in selected test areas. 
The following developments are presented in the context of flood mapping: a fully automated Sentinel-1 based processing chain for detecting open flood surfaces, a method for the improved detection of flooded vegetation

  20. A Two-Step Resume Information Extraction Algorithm

    Directory of Open Access Journals (Sweden)

    Jie Chen

    2018-01-01

    With the rapid growth of Internet-based recruiting, there are a great number of personal resumes among recruiting systems. To gain more attention from recruiters, most resumes are written in diverse formats, including varying font size, font colour, and table cells. However, this diversity of format is harmful to data mining tasks such as resume information extraction, automatic job matching, and candidate ranking. Supervised methods and rule-based methods have been proposed to extract facts from resumes, but they rely strongly on hierarchical structure information and large amounts of labelled data, which are hard to collect in reality. In this paper, we propose a two-step resume information extraction approach. In the first step, the raw text of a resume is segmented into different resume blocks. To achieve this goal, we design a novel feature, Writing Style, to model sentence syntax information. Besides word index and punctuation index, word lexical attributes and the prediction results of classifiers are included in Writing Style. In the second step, multiple classifiers are employed to identify different attributes of fact information in resumes. Experimental results on a real-world dataset show that the algorithm is feasible and effective.

  1. The Agent of extracting Internet Information with Lead Order

    Science.gov (United States)

    Mo, Zan; Huang, Chuliang; Liu, Aijun

    In order to carry out e-commerce better, advanced technologies for accessing business information are urgently needed. An agent is described to deal with the problems of extracting internet information that are caused by the non-standard and inconsistent structure of Chinese websites. The agent comprises three modules, each corresponding to one stage of the extraction process. A method based on an HTTP tree and a Lead algorithm is proposed to generate a lead order, with which the required web page can be retrieved easily. How to structure the extracted natural-language information is also discussed.

  2. Improved extraction procedure for carotenoids from human milk.

    Science.gov (United States)

    Schweigert, F J; Hurtienne, A; Bathe, K

    2000-05-01

    An improved method for the extraction of the major carotenoids from human milk is described. Carotenoids were extracted from milk first with ethanol and n-hexane. Then, polar xanthophylls were extracted from n-hexane into ethanol/water. The remaining n-hexane was evaporated, the residue combined with the ethanolic milk fraction and the mixture briefly saponified. Carotenoids were extracted from the hydrolysate with n-hexane, combined with the polar xanthophylls from the non-saponified ethanol/water-extract and separated by HPLC. Using this method we were able to significantly improve the recovery of xanthophylls such as lutein and zeaxanthin from human milk. The recovery rate of all carotenoids was > 90%. This method might not only be of value for milk but should be especially useful in the extraction of carotenoids from human tissues such as the adipose tissue.

  3. Fine-grained information extraction from German transthoracic echocardiography reports.

    Science.gov (United States)

    Toepfer, Martin; Corovic, Hamo; Fette, Georg; Klügl, Peter; Störk, Stefan; Puppe, Frank

    2015-11-12

    Information extraction techniques that get structured representations out of unstructured data make a large amount of clinically relevant information about patients accessible for semantic applications. These methods typically rely on standardized terminologies that guide this process. Many languages and clinical domains, however, lack appropriate resources and tools, as well as evaluations of their applications, especially if detailed conceptualizations of the domain are required. For instance, German transthoracic echocardiography reports have not been targeted sufficiently before, despite their importance for clinical trials. This work therefore aimed at the development and evaluation of an information extraction component with a fine-grained terminology that enables the recognition of almost all relevant information stated in German transthoracic echocardiography reports at the University Hospital of Würzburg. A domain expert validated and iteratively refined an automatically inferred base terminology. The terminology was used by an ontology-driven information extraction system that outputs attribute-value pairs. The final component has been mapped to the central elements of a standardized terminology, and it has been evaluated on documents with different layouts. The final system achieved state-of-the-art precision (micro average .996) and recall (micro average .961) on 100 test documents that represent more than 90% of all reports. In particular, principal aspects as defined in a standardized external terminology were recognized with F1 = .989 (micro average) and F1 = .963 (macro average). As a result of keyword matching and restraint concept extraction, the system obtained high precision also on unstructured or exceptionally short documents, and documents with uncommon layout. The developed terminology and the proposed information extraction system allow the extraction of fine-grained information from German semi-structured transthoracic echocardiography reports.

  4. Extraction of land cover change information from ENVISAT-ASAR data in Chengdu Plain

    Science.gov (United States)

    Xu, Wenbo; Fan, Jinlong; Huang, Jianxi; Tian, Yichen; Zhang, Yong

    2006-10-01

    Land cover data are essential to most global change research objectives, including the assessment of current environmental conditions and the simulation of future environmental scenarios that ultimately lead to public policy development. The Chinese Academy of Sciences generated a nationwide land cover database in order to carry out the quantification and spatial characterization of land use/cover changes (LUCC) in the 1990s. In order to maintain the reliability of the database, it has to be updated regularly. However, it is difficult to obtain remote sensing data for extracting land cover change information at large scale. Since optical remote sensing data are hard to acquire over the Chengdu plain, the objective of this research was to evaluate multitemporal ENVISAT advanced synthetic aperture radar (ASAR) data for extracting land cover change information. Based on fieldwork and the nationwide 1:100000 land cover database, the paper assesses several land cover changes in the Chengdu plain, for example: crop to buildings, forest to buildings, and forest to bare land. The results show that ENVISAT ASAR data have great potential for applications of extracting land cover change information.

  5. [An improved algorithm for electrohysterogram envelope extraction].

    Science.gov (United States)

    Lu, Yaosheng; Pan, Jie; Chen, Zhaoxia; Chen, Zhaoxia

    2017-02-01

    Extracting the uterine contraction signal from the abdominal uterine electromyogram (EMG) signal is considered the most promising method to replace the traditional tocodynamometer (TOCO) for detecting uterine contraction activity. The traditional root mean square (RMS) algorithm has only limited value in canceling impulsive noise. In our study, an improved algorithm for uterine EMG envelope extraction was proposed to overcome this problem. Firstly, in our experiment, a zero-crossing detection method was used to separate the bursts of uterine electrical activity from the raw uterine EMG signal. After processing the separated signals with two filtering windows of different widths, we used the traditional RMS algorithm to extract the uterine EMG envelope. To assess the performance of the algorithm, the improved algorithm was compared with two existing intensity of uterine electromyogram (IEMG) extraction algorithms. The results showed that the improved algorithm was better than the traditional ones in eliminating impulsive noise present in the uterine EMG signal. The measurement sensitivity and positive predictive value (PPV) of the improved algorithm were 0.952 and 0.922, respectively, which were not only significantly higher than the corresponding values (0.859 and 0.847) of the first comparison algorithm, but also higher than the values (0.928 and 0.877) of the second comparison algorithm. Thus the new method is reliable and effective.
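    The two-window idea can be illustrated with a minimal sketch: compute a sliding-window RMS envelope at two widths and combine them so that an isolated impulse, which only inflates the narrow window strongly, is suppressed. The combination rule (pointwise minimum) and the window widths below are illustrative assumptions, not the authors' algorithm.

```python
import math

def rms_envelope(signal, window):
    """Sliding-window RMS envelope of a (uterine EMG) signal."""
    half = window // 2
    env = []
    for i in range(len(signal)):
        seg = signal[max(0, i - half): i + half + 1]
        env.append(math.sqrt(sum(x * x for x in seg) / len(seg)))
    return env

def robust_envelope(signal, narrow=5, wide=21):
    """Simplified two-window scheme: the pointwise minimum of a narrow- and a
    wide-window RMS attenuates isolated spikes (impulsive noise) while a
    sustained burst of activity survives in both windows."""
    e1 = rms_envelope(signal, narrow)
    e2 = rms_envelope(signal, wide)
    return [min(a, b) for a, b in zip(e1, e2)]

# Sustained burst of activity plus one impulsive noise spike
sig = [0.1] * 30 + [1.0] * 40 + [0.1] * 30
sig[15] = 8.0  # impulse
env = robust_envelope(sig)
print(max(env[:30]), max(env[30:70]))  # spike region vs. burst region
```

    The spike's contribution is diluted in the wide window, so the minimum stays far below the raw impulse amplitude while the genuine burst keeps its full envelope level.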

  6. Support the Design of Improved IUE NEWSIPS High Dispersion Extraction Algorithms: Improved IUE High Dispersion Extraction Algorithms

    Science.gov (United States)

    Lawton, Pat

    2004-01-01

    The objective of this work was to support the design of improved IUE NEWSIPS high dispersion extraction algorithms. The purpose of this work was to evaluate the use of the Linearized Image (LIHI) file versus the Re-Sampled Image (SIHI) file, evaluate various extraction algorithms, and design algorithms for the evaluation of IUE high dispersion spectra. It was concluded that the use of the Re-Sampled Image (SIHI) file was acceptable. Since the Gaussian profile worked well for the core and the Lorentzian profile worked well for the wings, the Voigt profile was chosen for use in the extraction algorithm. It was found that the gamma and sigma parameters varied significantly across the detector, so gamma and sigma masks for the SWP detector were developed. Extraction code was written.
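    The Voigt profile, a convolution of a Gaussian core and Lorentzian wings, has no elementary closed form, so a common stand-in is the pseudo-Voigt approximation: an eta-weighted sum of the two components. The sketch below uses this approximation with purely illustrative sigma, gamma, and eta values; the report's actual per-detector gamma and sigma masks are not reproduced here.

```python
import math

def gaussian(x, sigma):
    """Normalized Gaussian: models the profile core."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def lorentzian(x, gamma):
    """Normalized Lorentzian: models the extended wings."""
    return gamma / (math.pi * (x * x + gamma * gamma))

def pseudo_voigt(x, sigma, gamma, eta):
    """Pseudo-Voigt: a weighted mix of the two components, a standard
    closed-form approximation to the true Gaussian-Lorentzian convolution."""
    return eta * lorentzian(x, gamma) + (1 - eta) * gaussian(x, sigma)

# Cross-dispersion profile of one echelle order (illustrative parameters only)
profile = [pseudo_voigt(x * 0.1, sigma=1.0, gamma=0.8, eta=0.3)
           for x in range(-50, 51)]
print(profile[50])  # peak value at x = 0
```

    In practice sigma and gamma would be looked up from the detector-position masks the report describes, rather than held constant as here.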

  7. Research of building information extraction and evaluation based on high-resolution remote-sensing imagery

    Science.gov (United States)

    Cao, Qiong; Gu, Lingjia; Ren, Ruizhi; Wang, Lang

    2016-09-01

    Building extraction is currently important in the application of high-resolution remote sensing imagery. At present, quite a few algorithms are available for detecting building information; however, most of them still have some obvious disadvantages, such as the ignorance of spectral information and the contradiction between extraction rate and extraction accuracy. The purpose of this research is to develop an effective method to detect building information from Chinese GF-1 data. Firstly, an image preprocessing technique is used to normalize the image, and image enhancement is used to highlight the useful information in the image. Secondly, multi-spectral information is analyzed. Subsequently, an improved morphological building index (IMBI) based on remote sensing imagery is proposed to get the candidate building objects. Furthermore, in order to refine building objects and further remove false objects, post-processing (e.g., the shape features, the vegetation index, and the water index) is employed. To validate the effectiveness of the proposed algorithm, the omission errors (OE), commission errors (CE), the overall accuracy (OA) and Kappa are used. The proposed method can not only effectively use spectral information and other basic features, but also avoid extracting excessive interference details from high-resolution remote sensing images. Compared to the original MBI algorithm, the proposed method reduces the OE by 33.14%; at the same time, Kappa increases by 16.09%. In experiments, IMBI achieved satisfactory results and outperformed other algorithms in terms of both accuracy and visual inspection.

  8. From remote sensing data about information extraction for 3D geovisualization - Development of a workflow

    International Nuclear Information System (INIS)

    Tiede, D.

    2010-01-01

    With an increased availability of high (spatial) resolution remote sensing imagery since the late nineties, the need to develop operative workflows for the automated extraction, provision and communication of information from such data has grown. Monitoring requirements, aimed at the implementation of environmental or conservation targets, management of (environmental-) resources, and regional planning as well as international initiatives, especially the joint initiative of the European Commission and ESA (European Space Agency) for Global Monitoring for Environment and Security (GMES) play also a major part. This thesis addresses the development of an integrated workflow for the automated provision of information derived from remote sensing data. Considering applied data and fields of application, this work aims to design the workflow as generic as possible. Following research questions are discussed: What are the requirements of a workflow architecture that seamlessly links the individual workflow elements in a timely manner and secures accuracy of the extracted information effectively? How can the workflow retain its efficiency if mounds of data are processed? How can the workflow be improved with regards to automated object-based image analysis (OBIA)? Which recent developments could be of use? What are the limitations or which workarounds could be applied in order to generate relevant results? How can relevant information be prepared target-oriented and communicated effectively? How can the more recently developed freely available virtual globes be used for the delivery of conditioned information under consideration of the third dimension as an additional, explicit carrier of information? Based on case studies comprising different data sets and fields of application it is demonstrated how methods to extract and process information as well as to effectively communicate results can be improved and successfully combined within one workflow. It is shown that (1

  9. Improvement of infrastructure for risk-informed regulation

    International Nuclear Information System (INIS)

    2012-01-01

    Improvement of the infrastructure of probabilistic safety assessment (PSA) is essential to the risk-informed regulation of nuclear power plants. JNES conducted an update of initiating event frequencies and improved the method for uncertainty analysis to enhance the technology bases of PSA in 2011. Furthermore, JNES improved the human reliability analysis (HRA) method and the reliability analysis method for digital reactor protection systems. JNES estimated initiating event frequencies for both power and shutdown operation based on recent operating experience in Nuclear Power Plants (NPPs) of Japan using the hierarchical Bayesian method. As for improvement of the uncertainty analysis method, JNES conducted trial analyses using the SOKC (State-Of-Knowledge Correlation) for a representative PWR plant and BWR plant of Japan. A study on an advanced HRA method with an operator cognitive action model was conducted to improve the quality of HRA. A study on analyses of 'defense in depth' and 'diversity' for introducing digital instrumentation and control (I and C) systems was conducted. In order to ensure the quality of PSA, JNES had a representative Japanese BWR plant PSA peer-reviewed by professional PSA engineers from the U.S. in order to identify areas for improving PSA quality, and made an effort to develop procedures for internal fire PSA. JNES participated in the OECD/NEA PRISME and FIRE projects to obtain the latest information and data to validate and improve the fire propagation analysis codes and the parameters for fire PSA as well. Furthermore, JNES studied schemes for the endorsement and application in risk-informed regulation of PSA standards established by the Atomic Energy Society of Japan. (author)
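    The hierarchical Bayesian estimation mentioned above builds on the simple conjugate Gamma-Poisson update, which can be sketched in a few lines: a Gamma prior on an initiating-event frequency, combined with an observed event count over accumulated reactor-years, yields a Gamma posterior in closed form. The prior and observation numbers below are illustrative only, not values from the report.

```python
def update_frequency(prior_alpha, prior_beta, events, reactor_years):
    """Conjugate Gamma-Poisson update: with a Gamma(alpha, beta) prior on an
    initiating-event frequency (per reactor-year), observing `events` events
    over `reactor_years` gives a Gamma(alpha + events, beta + reactor_years)
    posterior. Hierarchical Bayesian methods generalize this by additionally
    learning the prior itself across a population of plants."""
    alpha = prior_alpha + events
    beta = prior_beta + reactor_years
    posterior_mean = alpha / beta
    return alpha, beta, posterior_mean

# Illustrative generic prior (mean 0.5 / 50 = 0.01 per reactor-year),
# updated with 2 events observed over 400 reactor-years of experience
alpha, beta, mean = update_frequency(0.5, 50.0, events=2, reactor_years=400.0)
print(round(mean, 5))  # posterior mean frequency per reactor-year
```

    Sparse operating experience thus pulls the estimate only partway from the prior toward the raw observed rate, which is the practical appeal of Bayesian updating for rare initiating events.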

  10. Extracting Information from Multimedia Meeting Collections

    OpenAIRE

    Gatica-Perez, Daniel; Zhang, Dong; Bengio, Samy

    2005-01-01

    Multimedia meeting collections, composed of unedited audio and video streams, handwritten notes, slides, and electronic documents that jointly constitute a raw record of complex human interaction processes in the workplace, have attracted interest due to the increasing feasibility of recording them in large quantities, the opportunities for information access and retrieval applications derived from the automatic extraction of relevant meeting information, and the challenges that the ext...

  11. NAMED ENTITY RECOGNITION FROM BIOMEDICAL TEXT -AN INFORMATION EXTRACTION TASK

    Directory of Open Access Journals (Sweden)

    N. Kanya

    2016-07-01

    Biomedical text mining targets the extraction of significant information from biomedical archives. BioTM encompasses Information Retrieval (IR) and Information Extraction (IE). Information retrieval retrieves the relevant biomedical literature documents from various repositories such as PubMed, MedLine, etc., based on a search query. The IR process ends with the generation of a corpus of relevant documents retrieved from the publication databases based on the query. The IE task includes preprocessing of the documents, Named Entity Recognition (NER) from the documents, and relationship extraction. This process involves natural language processing, data mining techniques, and machine learning algorithms. The preprocessing task includes tokenization, stop word removal, shallow parsing, and parts-of-speech tagging. The NER phase involves recognition of well-defined objects such as genes, proteins, or cell lines. This process leads to the next phase, the extraction of relationships (IE). The work was based on the machine learning algorithm Conditional Random Fields (CRF).

  12. YAdumper: extracting and translating large information volumes from relational databases to structured flat files.

    Science.gov (United States)

    Fernández, José M; Valencia, Alfonso

    2004-10-12

    Downloading the information stored in relational databases into XML and other flat formats is a common task in bioinformatics. This periodical dumping of information requires considerable CPU time, disk and memory resources. YAdumper has been developed as a purpose-specific tool to deal with the integral structured information download of relational databases. YAdumper is a Java application that organizes database extraction following an XML template based on an external Document Type Declaration. Compared with other non-native alternatives, YAdumper substantially reduces memory requirements and considerably improves writing performance.

  13. Applying Improved Multiscale Fuzzy Entropy for Feature Extraction of MI-EEG

    Directory of Open Access Journals (Sweden)

    Ming-ai Li

    2017-01-01

    Electroencephalography (EEG) is considered the output of the brain and it is a bioelectrical signal with multiscale and nonlinear properties. Motor Imagery EEG (MI-EEG) not only has a close correlation with human imagination and movement intention but also contains a large amount of physiological or disease information. As a result, it has been fully studied in the field of rehabilitation. To correctly interpret and accurately extract the features of MI-EEG signals, many nonlinear dynamic methods based on entropy, such as Approximate Entropy (ApEn), Sample Entropy (SampEn), Fuzzy Entropy (FE), and Permutation Entropy (PE), have been proposed and exploited continuously in recent years. However, these entropy-based methods can only measure the complexity of MI-EEG at a single scale and therefore fail to account for the multiscale property inherent in MI-EEG. To solve this problem, Multiscale Sample Entropy (MSE), Multiscale Permutation Entropy (MPE), and Multiscale Fuzzy Entropy (MFE) have been developed by introducing a scale factor. However, MFE has not been widely used in the analysis of MI-EEG, and the same parameter values are employed when the MFE method is used to calculate the fuzzy entropy values on multiple scales. Actually, each coarse-grained MI-EEG carries the characteristic information of the original signal at different scale factors. It is necessary to optimize the MFE parameters to discover more feature information. In this paper, the parameters of MFE are optimized independently for each scale factor, and the improved MFE (IMFE) is applied to the feature extraction of MI-EEG. Based on the event-related desynchronization (ERD)/event-related synchronization (ERS) phenomenon, IMFE features from multiple channels are fused organically to construct the feature vector. Experiments are conducted on a public dataset using a Support Vector Machine (SVM) as the classifier. The experimental results of 10-fold cross-validation show that the proposed method yields
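    The coarse-graining and fuzzy entropy machinery can be sketched compactly: the signal is averaged over non-overlapping windows for each scale factor, and fuzzy entropy with an exponential membership function is computed on each coarse-grained series. Following the IMFE idea, the (m, r, n) parameters below differ per scale, but their actual values, like the test signal, are placeholders rather than the optimized settings from the paper.

```python
import math
import random

def coarse_grain(signal, scale):
    """Average consecutive, non-overlapping windows of length `scale`."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def fuzzy_entropy(signal, m=2, r=0.2, n=2):
    """Fuzzy entropy with exponential membership exp(-(d/r)^n)."""
    def phi(k):
        vecs = []
        for i in range(len(signal) - k + 1):
            v = signal[i:i + k]
            mu = sum(v) / k
            vecs.append([x - mu for x in v])  # remove local baseline
        total, count = 0.0, 0
        for i in range(len(vecs)):
            for j in range(i + 1, len(vecs)):
                d = max(abs(a - b) for a, b in zip(vecs[i], vecs[j]))
                total += math.exp(-((d / r) ** n))
                count += 1
        return total / count
    return math.log(phi(m) / phi(m + 1))

def multiscale_fuzzy_entropy(signal, scales, params_per_scale):
    """IMFE-style sketch: (m, r, n) may differ per scale factor, rather than
    being held fixed across scales as in plain MFE."""
    return [fuzzy_entropy(coarse_grain(signal, s), *params_per_scale[s])
            for s in scales]

random.seed(0)
sig = [random.gauss(0, 1) for _ in range(300)]  # stand-in for one MI-EEG channel
params = {1: (2, 0.2, 2), 2: (2, 0.25, 2), 3: (2, 0.3, 2)}
print(multiscale_fuzzy_entropy(sig, [1, 2, 3], params))
```

    The resulting per-scale entropy values would then be concatenated across channels to form the feature vector fed to the SVM classifier.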

  14. Analysis of Technique to Extract Data from the Web for Improved Performance

    Science.gov (United States)

    Gupta, Neena; Singh, Manish

    2010-11-01

    The World Wide Web has rapidly turned into an amazing electronic world, where everyone can publish anything in electronic form and extract almost any information. Extraction of information from semi-structured or unstructured documents, such as web pages, is a useful yet complex task. Data extraction, which is important for many applications, pulls the records from HTML files automatically. Ontologies can achieve a high degree of accuracy in data extraction. We analyze a method for data extraction, OBDE (Ontology-Based Data Extraction), which automatically extracts the query result records from the web with the help of agents. OBDE first constructs an ontology for a domain according to information matching between the query interfaces and query result pages from different web sites within the same domain. Then, the constructed domain ontology is used during data extraction to identify the query result section in a query result page and to align and label the data values in the extracted records. The ontology-assisted data extraction method is fully automatic and overcomes many of the deficiencies of current automatic data extraction methods.
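
    The align-and-label step that OBDE performs with a learned domain ontology can be illustrated with a toy, hand-written ontology; the attribute names and value patterns below are hypothetical stand-ins for what OBDE would learn from query interfaces and result pages:

```python
import re

# Hypothetical mini-ontology for a "book" domain: attribute -> value pattern.
ONTOLOGY = {
    "isbn":  re.compile(r"^\d{3}-\d{10}$"),
    "price": re.compile(r"^\$\d+(\.\d{2})?$"),
    "year":  re.compile(r"^(19|20)\d{2}$"),
}

def label_record(values):
    """Align unlabeled data values from an extracted record with ontology attributes."""
    labeled = {}
    for v in values:
        for attr, pattern in ONTOLOGY.items():
            if pattern.match(v):
                labeled[attr] = v
                break
        else:
            labeled.setdefault("title", v)  # fallback bucket for unmatched values
    return labeled
```

    For example, `label_record(["Dive into Python", "$39.95", "2004"])` labels the price and year by pattern and falls back to `title` for the free-text value.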

  15. Optimal Information Extraction of Laser Scanning Dataset by Scale-Adaptive Reduction

    Science.gov (United States)

    Zang, Y.; Yang, B.

    2018-04-01

    3D laser scanning technology is widely used to collect the surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. Most existing methods extract important points at a fixed scale, yet the geometric features of a 3D object arise at various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply a perceptual metric, Just-Noticeable-Difference, to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for optimal information extraction from objects.

  16. OPTIMAL INFORMATION EXTRACTION OF LASER SCANNING DATASET BY SCALE-ADAPTIVE REDUCTION

    Directory of Open Access Journals (Sweden)

    Y. Zang

    2018-04-01

    Full Text Available 3D laser scanning technology is widely used to collect the surface information of objects. For various applications, we need to extract a point cloud of good perceptual quality from the scanned points. Most existing methods extract important points at a fixed scale, yet the geometric features of a 3D object arise at various geometric scales. We propose a multi-scale construction method based on radial basis functions. For each scale, important points are extracted from the point cloud based on their importance. We apply a perceptual metric, Just-Noticeable-Difference, to measure the degradation of each geometric scale. Finally, scale-adaptive optimal information extraction is realized. Experiments are undertaken to evaluate the effectiveness of the proposed method, suggesting a reliable solution for optimal information extraction from objects.

  17. Knowledge Dictionary for Information Extraction on the Arabic Text Data

    Directory of Open Access Journals (Sweden)

    Wahyu Jauharis Saputra

    2013-04-01

    Full Text Available Information extraction is an early stage of textual data analysis. It is required to obtain information from textual data that can be used in analysis processes such as classification and categorization. Textual data are strongly influenced by language. Arabic is gaining significant attention in many studies because the Arabic language is very different from others and, in contrast to other languages, tools and research on Arabic are still lacking. The information extracted using a knowledge dictionary is a concept of expression. A knowledge dictionary is usually constructed manually by an expert; this takes a long time and is specific to a single problem. This paper proposes a method for automatically building a knowledge dictionary. The dictionary is formed by classifying sentences having the same concept, assuming that they will have a high similarity value. The extracted concepts can be used as features for subsequent computational processes such as classification or categorization. The dataset used in this paper was an Arabic text dataset. The extraction results were tested using a decision tree classification engine; the highest precision value obtained was 71.0% and the highest recall value was 75.0%.
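
    The dictionary-building idea, grouping sentences that share a concept by their similarity, can be sketched with a bag-of-words cosine measure. The threshold and the greedy single-pass grouping are illustrative assumptions, and real Arabic text would first need language-specific tokenization and stemming:

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group_sentences(sentences, threshold=0.5):
    """Greedy grouping: a sentence joins the first concept group whose
    representative it resembles, otherwise it starts a new group."""
    groups = []  # list of (representative Counter, [sentences])
    for s in sentences:
        bow = Counter(s.lower().split())
        for rep, members in groups:
            if cosine(rep, bow) >= threshold:
                members.append(s)
                rep.update(bow)  # merge vocabulary into the representative
                break
        else:
            groups.append((bow, [s]))
    return [members for _, members in groups]
```

    Each resulting group approximates one dictionary concept; its shared vocabulary would then serve as a feature for the downstream classifier.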

  18. Fault feature extraction method based on local mean decomposition Shannon entropy and improved kernel principal component analysis model

    Directory of Open Access Journals (Sweden)

    Jinlu Sheng

    2016-07-01

    Full Text Available To effectively extract the typical features of a bearing, a new method is proposed that combines local mean decomposition Shannon entropy with an improved kernel principal component analysis model. First, features are extracted with a time–frequency domain method, local mean decomposition, and Shannon entropy is used to process the separated product functions to obtain the original features. Because the extracted features still contain superfluous information, a nonlinear multi-feature fusion technique, kernel principal component analysis, is introduced to fuse them; the kernel principal component analysis is improved with a weight factor. The extracted characteristic features are then input into a Morlet wavelet kernel support vector machine to obtain a bearing running-state classification model, and the bearing running state is thereby identified. Both test cases and real-world cases were analyzed.

  19. Multi-Filter String Matching and Human-Centric Entity Matching for Information Extraction

    Science.gov (United States)

    Sun, Chong

    2012-01-01

    More and more information is being generated in text documents, such as Web pages, emails and blogs. To effectively manage this unstructured information, one broadly used approach includes locating relevant content in documents, extracting structured information and integrating the extracted information for querying, mining or further analysis. In…

  20. PSR extraction kicker system improvements

    International Nuclear Information System (INIS)

    Hardek, T.W.

    1991-01-01

    A program to improve the reliability of hardware required to operate the Los Alamos Proton Storage Ring (PSR) has been under way for the past three years. The extraction kicker system of the PSR was identified as one candidate for improvement. Pulse modulators produce 50-kV pulses 360 ns in length at up to a 24-Hz pulse repetition rate and drive two 4-meter-long stripline electrodes. Sources of difficulty with this system included short switch-tube lifetime, drive-cable electrical breakdown, high-voltage connector failure, and occasional electrode breakdown. This paper discusses modifications completed on this system to correct these difficulties. 2 refs., 3 figs

  1. Improving photometric redshift estimation using GPZ: size information, post processing, and improved photometry

    Science.gov (United States)

    Gomes, Zahra; Jarvis, Matt J.; Almosallam, Ibrahim A.; Roberts, Stephen J.

    2018-03-01

    The next generation of large-scale imaging surveys (such as those conducted with the Large Synoptic Survey Telescope and Euclid) will require accurate photometric redshifts in order to optimally extract cosmological information. Gaussian Process for photometric redshift estimation (GPZ) is a promising new method that has been proven to provide efficient, accurate photometric redshift estimations with reliable variance predictions. In this paper, we investigate a number of methods for improving the photometric redshift estimations obtained using GPZ (but which are also applicable to others). We use spectroscopy from the Galaxy and Mass Assembly Data Release 2 with a limiting magnitude of r Program Data Release 1 and find that it produces significant improvements in accuracy, similar to the effect of including additional features.

  2. Research on Crowdsourcing Emergency Information Extraction Based on an Event Frame

    Science.gov (United States)

    Yang, Bo; Wang, Jizhou; Ma, Weijun; Mao, Xi

    2018-01-01

    At present, common information extraction methods cannot accurately extract structured emergency event information, general information retrieval tools cannot completely identify emergency geographic information, and neither provides an accurate assessment of the extraction results. This paper therefore proposes an emergency information collection technique based on an event framework, aimed at solving the problem of emergency information extraction. It mainly includes an emergency information extraction model (EIEM), a complete address recognition method (CARM), and an accuracy evaluation model of emergency information (AEMEI). EIEM extracts emergency information in structured form and compensates for the lack of network data acquisition in emergency mapping. CARM uses a hierarchical model and a shortest-path algorithm to join toponym pieces into a full address. AEMEI analyzes the results for an emergency event and summarizes the advantages and disadvantages of the event framework. Experiments show that the event-frame technique can solve the problem of emergency information extraction and provides reference cases for other applications. When an emergency disaster is about to occur, the relevant departments can query data on emergencies that occurred in the past and make arrangements in advance for defense and disaster reduction. The technique can reduce casualties and property damage in the country and the world, which is of great significance to the state and society.
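
    The address-joining idea behind CARM can be illustrated with a toy gazetteer. The hierarchy below is hypothetical, and walking parent pointers stands in for the shortest-path search that CARM runs over the full toponym graph:

```python
# Toy toponym hierarchy (child -> parent). A real system would build this
# from a gazetteer and resolve ambiguous pieces via shortest-path search.
HIERARCHY = {
    "Chaoyang District": "Beijing",
    "Beijing": "China",
    "Wuhou District": "Chengdu",
    "Chengdu": "Sichuan",
    "Sichuan": "China",
}

def full_address(piece):
    """Walk up the hierarchy to join a toponym piece into a full address."""
    parts = [piece]
    while parts[-1] in HIERARCHY:
        parts.append(HIERARCHY[parts[-1]])
    return ", ".join(reversed(parts))
```

    A fragment such as "Wuhou District" found in a crowdsourced report would thus be completed to its country-to-district chain before mapping.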

  3. Information Extraction with Character-level Neural Networks and Free Noisy Supervision

    OpenAIRE

    Meerkamp, Philipp; Zhou, Zhengyi

    2016-01-01

    We present an architecture for information extraction from text that augments an existing parser with a character-level neural network. The network is trained using a measure of consistency of extracted data with existing databases as a form of noisy supervision. Our architecture combines the ability of constraint-based information extraction systems to easily incorporate domain knowledge and constraints with the ability of deep neural networks to leverage large amounts of data to learn compl...

  4. Unsupervised information extraction by text segmentation

    CERN Document Server

    Cortez, Eli

    2013-01-01

    A new unsupervised approach to the problem of Information Extraction by Text Segmentation (IETS) is proposed, implemented and evaluated herein. The authors' approach relies on information available on pre-existing data to learn how to associate segments in the input string with attributes of a given domain relying on a very effective set of content-based features. The effectiveness of the content-based features is also exploited to directly learn from test data structure-based features, with no previous human-driven training, a feature unique to the presented approach. Based on the approach, a

  5. Extraction of CT dose information from DICOM metadata: automated Matlab-based approach.

    Science.gov (United States)

    Dave, Jaydev K; Gingold, Eric L

    2013-01-01

    The purpose of this study was to extract exposure parameters and dose-relevant indexes of CT examinations from information embedded in DICOM metadata. DICOM dose report files were identified and retrieved from a PACS. An automated software program was used to extract, from the structured elements of the DICOM metadata in these files, the information relevant to exposure. Extracting information from DICOM metadata eliminated potential errors inherent in techniques based on optical character recognition, yielding 100% accuracy.
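
    The extraction step can be sketched in Python. The study used a Matlab program, and real code would read the files with a DICOM library such as pydicom; here a dose report's structured content is modeled as nested dicts purely for illustration, with hypothetical concept names:

```python
def find_values(node, concept, out=None):
    """Recursively collect every value stored under a given concept name
    in a structured-report-like tree of nested dicts."""
    if out is None:
        out = []
    if isinstance(node, dict):
        if node.get("concept") == concept and "value" in node:
            out.append(node["value"])
        for child in node.get("content", []):
            find_values(child, concept, out)
    return out

# Illustrative stand-in for a parsed DICOM radiation dose structured report.
report = {
    "concept": "X-Ray Radiation Dose Report",
    "content": [
        {"concept": "CT Acquisition", "content": [
            {"concept": "Mean CTDIvol", "value": 12.4},
            {"concept": "DLP", "value": 431.0},
        ]},
        {"concept": "CT Acquisition", "content": [
            {"concept": "Mean CTDIvol", "value": 8.1},
            {"concept": "DLP", "value": 267.5},
        ]},
    ],
}
```

    Because the values are read from structured elements rather than from a rendered dose screen, no optical character recognition is involved, which is what removes the error source the abstract mentions.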

  6. Advanced applications of natural language processing for performing information extraction

    CERN Document Server

    Rodrigues, Mário

    2015-01-01

    This book explains how to create information extraction (IE) applications that are able to tap the vast amount of relevant information available in natural language sources: Internet pages, official documents such as laws and regulations, books and newspapers, and the social web. Readers are introduced to the problem of IE and its current challenges and limitations, supported with examples. The book discusses the need to fill the gap between documents, data, and people, and provides a broad overview of the technology supporting IE. The authors present a generic architecture for developing systems that are able to learn how to extract relevant information from natural language documents, and illustrate how to implement working systems using state-of-the-art and freely available software tools. The book also discusses concrete applications illustrating IE uses.   ·         Provides an overview of state-of-the-art technology in information extraction (IE), discussing achievements and limitations for t...

  7. Post-processing of Deep Web Information Extraction Based on Domain Ontology

    Directory of Open Access Journals (Sweden)

    PENG, T.

    2013-11-01

    Full Text Available Many methods are utilized to extract and process query results in the deep Web, relying on the different structures of Web pages and the various design modes of databases. However, some semantic meanings and relations are ignored. In this paper, we present an approach for post-processing deep Web query results based on a domain ontology that can utilize these semantic meanings and relations. A block identification model (BIM) based on node similarity is defined to extract data blocks relevant to a specific domain after reducing noisy nodes. The feature vector of domain books is obtained by a result set extraction model (RSEM) based on the vector space model (VSM). RSEM, in combination with BIM, builds a domain ontology on books which not only removes the limitations imposed by Web page structures when extracting data information, but also makes use of the semantic meanings of the domain ontology. After extracting the basic information of Web pages, a ranking algorithm is adopted to offer an ordered list of data records to users. Experimental results show that BIM and RSEM extract data blocks and build the domain ontology accurately. In addition, relevant data records and basic information are extracted and ranked. The precision and recall results show that our proposed method is feasible and efficient.

  8. Improving meat quality through cattle feed enriched with mate extract

    DEFF Research Database (Denmark)

    Zawadzki, Andressa

    The use of plant extracts in animal feeding trials has been considered a potential alternative to improve the redox stability of meat. Bioactive compounds from plant extracts can provide the antioxidative mechanisms required to improve animal health and welfare and to protect meat against oxidation. Pharmacological properties and antioxidant effects have been associated with the extract of hops and with the extracts of yerba mate. However, the effects of hops and yerba mate as dietary supplements in animal feeding on the metabolic profile and the redox stability of meat have not been reported yet. Addition of mate extract to a standard maize/soy feed at a level of 0.5, 1.0 or 1.5% of the diet of feedlot cattle resulted in an increased level of inosine monophosphate, creatine, carnosine and of conjugated linoleic acid in the fresh meat. The tendency to radical formation in meat

  9. Data Assimilation to Extract Soil Moisture Information from SMAP Observations

    Directory of Open Access Journals (Sweden)

    Jana Kolassa

    2017-11-01

    Full Text Available This study compares different methods to extract soil moisture information through the assimilation of Soil Moisture Active Passive (SMAP) observations. Neural network (NN) and physically-based SMAP soil moisture retrievals were assimilated into the National Aeronautics and Space Administration (NASA) Catchment model over the contiguous United States for April 2015 to March 2017. By construction, the NN retrievals are consistent with the global climatology of the Catchment model soil moisture. Assimilating the NN retrievals without further bias correction improved the surface and root zone correlations against in situ measurements from 14 SMAP core validation sites (CVS) by 0.12 and 0.16, respectively, over the model-only skill, and reduced the surface and root zone unbiased root-mean-square error (ubRMSE) by 0.005 m³ m⁻³ and 0.001 m³ m⁻³, respectively. The assimilation reduced the average absolute surface bias against the CVS measurements by 0.009 m³ m⁻³, but increased the root zone bias by 0.014 m³ m⁻³. Assimilating the NN retrievals after a localized bias correction yielded slightly lower surface correlation and ubRMSE improvements, but generally the skill differences were small. The assimilation of the physically-based SMAP Level-2 passive soil moisture retrievals using a global bias correction yielded similar skill improvements, as did the direct assimilation of locally bias-corrected SMAP brightness temperatures within the SMAP Level-4 soil moisture algorithm. The results show that global bias correction methods may be able to extract more independent information from SMAP observations compared to local bias correction methods, but without accurate quality control and observation error characterization they are also more vulnerable to adverse effects from retrieval errors related to uncertainties in the retrieval inputs and algorithm. Furthermore, the results show that using global bias correction approaches without a
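
    The global-versus-local bias correction contrast can be illustrated with the simplest rescaling variant: matching the mean and variance of the observations to the model climatology (a stand-in for the CDF matching such assimilation systems typically use). Applied to all data at once it is a "global" correction; applied separately per grid cell it is "local":

```python
import statistics

def scale_to_model(obs, model):
    """Rescale observations so their mean and variance match the model
    climatology before assimilation (mean/variance matching sketch)."""
    mo, so = statistics.mean(obs), statistics.pstdev(obs)
    mm, sm = statistics.mean(model), statistics.pstdev(model)
    return [mm + (x - mo) * (sm / so) for x in obs]
```

    A local correction fits `mo, so, mm, sm` from the small sample at each grid cell, which removes local biases but is noisier; a global fit uses all cells at once, preserving more of the observations' independent spatial signal, as the abstract argues.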

  10. An Effective Approach to Biomedical Information Extraction with Limited Training Data

    Science.gov (United States)

    Jonnalagadda, Siddhartha

    2011-01-01

    In the current millennium, extensive use of computers and the internet caused an exponential increase in information. Few research areas are as important as information extraction, which primarily involves extracting concepts and the relations between them from free text. Limitations in the size of training data, lack of lexicons and lack of…

  11. MedTime: a temporal information extraction system for clinical narratives.

    Science.gov (United States)

    Lin, Yu-Kai; Chen, Hsinchun; Brown, Randall A

    2013-12-01

    Temporal information extraction from clinical narratives is of critical importance to many clinical applications. We participated in the EVENT/TIMEX3 track of the 2012 i2b2 clinical temporal relations challenge, and presented our temporal information extraction system, MedTime. MedTime comprises a cascade of rule-based and machine-learning pattern recognition procedures. It achieved a micro-averaged f-measure of 0.88 in both the recognitions of clinical events and temporal expressions. We proposed and evaluated three time normalization strategies to normalize relative time expressions in clinical texts. The accuracy was 0.68 in normalizing temporal expressions of dates, times, durations, and frequencies. This study demonstrates and evaluates the integration of rule-based and machine-learning-based approaches for high performance temporal information extraction from clinical narratives. Copyright © 2013 Elsevier Inc. All rights reserved.
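
    The rule-based side of the time normalization that MedTime performs can be sketched as follows: anchor a relative expression to a reference date. The patterns are illustrative, not MedTime's actual rule set:

```python
import re
from datetime import timedelta

# Toy rule set for relative time expressions such as "3 days ago".
_UNITS = {"day": 1, "week": 7}
_PATTERN = re.compile(r"(\d+)\s+(day|week)s?\s+(ago|later)")

def normalize(expr, anchor):
    """Resolve a relative time expression against an anchor date,
    returning None when no rule matches."""
    m = _PATTERN.search(expr)
    if not m:
        return None
    n, unit, direction = int(m.group(1)), m.group(2), m.group(3)
    delta = timedelta(days=n * _UNITS[unit])
    return anchor - delta if direction == "ago" else anchor + delta
```

    In a clinical narrative the anchor would typically be the admission or discharge date recognized earlier in the cascade.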

  12. a Statistical Texture Feature for Building Collapse Information Extraction of SAR Image

    Science.gov (United States)

    Li, L.; Yang, H.; Chen, Q.; Liu, X.

    2018-04-01

    Synthetic Aperture Radar (SAR) has become one of the most important ways to extract post-disaster collapsed-building information, due to its extreme versatility and almost all-weather, day-and-night working capability. In view of the fact that the inherent statistical distribution of speckle in SAR images has not been used to extract collapsed-building information, this paper proposes a novel texture feature based on statistical models of SAR images to extract collapsed buildings. In the proposed feature, the texture parameter of the G0 distribution of SAR images is used to reflect the uniformity of the target and thereby extract collapsed buildings. This feature not only considers the statistical distribution of SAR images, providing a more accurate description of the object texture, but is also applicable to extracting collapsed-building information from single-, dual- or full-polarization SAR data. The RADARSAT-2 data of the Yushu earthquake, acquired on April 21, 2010, are used to present and analyze the performance of the proposed method. In addition, the applicability of this feature to SAR data with different polarizations is analysed, which provides decision support for data selection in collapsed-building information extraction.

  13. DKIE: Open Source Information Extraction for Danish

    DEFF Research Database (Denmark)

    Derczynski, Leon; Field, Camilla Vilhelmsen; Bøgh, Kenneth Sejdenfaden

    2014-01-01

    Danish is a major Scandinavian language spoken daily by around six million people. However, it lacks a unified, open set of NLP tools. This demonstration will introduce DKIE, an extensible open-source toolkit for processing Danish text. We implement an information extraction architecture for Danish...

  14. Improvement in light-extraction efficiency of light emitting diode ...

    Indian Academy of Sciences (India)

    The effect of various microlens parameters, such as diameter and area fraction, on light-extraction efficiency was systematically studied. An improvement of 4% in extraction efficiency was obtained by employing the microlenses on a white light emitting diode. The area fraction of microlenses was increased up to 0.34 by reducing the spin speed.

  15. Extraspectral Imaging for Improving the Perceived Information Presented in Retinal Prosthesis

    Directory of Open Access Journals (Sweden)

    Walid Al-Atabany

    2018-01-01

    Full Text Available Retinal prosthesis is steadily improving as a clinical treatment for blindness caused by retinitis pigmentosa. However, despite the continued exciting progress, the level of visual return is still very poor. It is also unlikely that those utilising these devices will stop being legally blind in the near future. Therefore, it is important to develop methods to maximise the transfer of useful information extracted from the visual scene. Such an approach can be achieved by digitally suppressing less important visual features and textures within the scene. The result can be interpreted as a cartoon-like image of the scene. Furthermore, utilising extravisual wavelengths such as infrared can be useful in the decision process to determine the optimal information to present. In this paper, we, therefore, present a processing methodology that utilises information extracted from the infrared spectrum to assist in the preprocessing of the visual image prior to conversion to retinal information. We demonstrate how this allows for enhanced recognition and how it could be implemented for optogenetic forms of retinal prosthesis. The new approach has been quantitatively evaluated on volunteers showing 112% enhancement in recognizing objects over normal approaches.

  16. End-to-end information extraction without token-level supervision

    DEFF Research Database (Denmark)

    Palm, Rasmus Berg; Hovy, Dirk; Laws, Florian

    2017-01-01

    Most state-of-the-art information extraction approaches rely on token-level labels to find the areas of interest in text. Unfortunately, these labels are time-consuming and costly to create, and consequently, not available for many real-life IE tasks. To make matters worse, token-level labels...... and output text. We evaluate our model on the ATIS data set, MIT restaurant corpus and the MIT movie corpus and compare to neural baselines that do use token-level labels. We achieve competitive results, within a few percentage points of the baselines, showing the feasibility of E2E information extraction...

  17. Improving extraction technology of level seams. Sovershenstvovanie tekhnologii razrabotki pologikh plastov

    Energy Technology Data Exchange (ETDEWEB)

    Shetser, M G; Spitsyn, Yu G

    1985-01-01

    This report deals with conditions and prospects for intensifying extraction of level and inclined seams and improving extraction technology. Reviews mechanization of excavation of stables with automatic cutter-loaders (KA80 in conjunction with KD80); coal extraction using two cutter-loaders in seams 0.9 - 1.9 m thick and up to 20 degrees inclination (pillar mining); reciprocating method of coal cutting; one-sided method of coal extraction (KMK97 cutter-loaders). Discusses strengthening of junctions of faces with gate roads (KSU and KSU3M props); improved types of props (hydraulic props SUG-30, SUG-V and GVD); roof control methods (induced caving, advance torpedoing or using KM87UMP and KMT power supports). Deals in detail with introduction of new extraction technology and strengthening of unstable rock by injecting polyurethane compounds, extraction of seams with wide-web cutter-loaders (Kirovets, IK101) and plowing. (3 refs.)

  18. Mining knowledge from text repositories using information extraction ...

    Indian Academy of Sciences (India)

    Information extraction (IE); text mining; text repositories; knowledge discovery from text. Evaluation in terms of precision and recall requires extensive experimentation, due to the lack of public tagged corpora.

  19. Towards an information extraction and knowledge formation framework based on Shannon entropy

    Directory of Open Access Journals (Sweden)

    Iliescu Dragoș

    2017-01-01

    Full Text Available The subject of information quantity is approached in this paper, with the specific domain of nonconforming-product management as the information source. The work is a case study: raw data were gathered from a heavy industrial works company, and information extraction and knowledge formation are considered. The method used for estimating information quantity is based on the Shannon entropy formula. The information and entropy spectra are decomposed and analysed to extract specific information and to form knowledge. The result of the entropy analysis points out the information that the organisation involved needs to acquire, presented as a specific knowledge type.
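
    The estimation this record relies on is the standard Shannon formula, H = -Σ p_i log₂ p_i. A minimal sketch over a sequence of observed events (here, hypothetical nonconformity categories):

```python
import math
from collections import Counter

def shannon_entropy(events):
    """Shannon entropy of an event sequence, in bits."""
    counts = Counter(events)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

    A uniform mix of categories maximizes the entropy (maximum information to be acquired), while a single repeated category yields zero.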

  20. An improved feature extraction algorithm based on KAZE for multi-spectral image

    Science.gov (United States)

    Yang, Jianping; Li, Jun

    2018-02-01

    Multi-spectral images contain abundant spectral information and are widely used in fields such as resource exploration, meteorological observation and modern military applications. Image preprocessing, such as image feature extraction and matching, is indispensable when dealing with multi-spectral remote sensing images. Although feature matching algorithms based on a linear scale space, such as SIFT and SURF, are strongly robust, their local accuracy cannot be guaranteed. Therefore, this paper proposes an improved KAZE algorithm, based on a nonlinear scale space, to raise the number of features and to enhance the matching rate by using an adjusted-cosine vector. The experiment results show that the number of features and the matching rate of the improved KAZE are remarkably higher than those of the original KAZE algorithm.
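
    The adjusted-cosine matching idea can be sketched as follows. The exact formulation in the paper may differ; mean-centering each dimension over the joint descriptor set, and the greedy nearest-neighbour matching, are assumptions made for illustration:

```python
import math

def adjusted_cosine(a, b, means):
    """Adjusted-cosine similarity: subtract the per-dimension mean (taken
    over the whole descriptor set) before computing the cosine, so matching
    is driven by deviation from the average descriptor."""
    da = [x - m for x, m in zip(a, means)]
    db = [x - m for x, m in zip(b, means)]
    dot = sum(x * y for x, y in zip(da, db))
    na = math.sqrt(sum(x * x for x in da))
    nb = math.sqrt(sum(x * x for x in db))
    return dot / (na * nb) if na and nb else 0.0

def match(desc1, desc2, threshold=0.9):
    """Greedy nearest-neighbour matching between two descriptor lists."""
    means = [sum(col) / len(col) for col in zip(*(desc1 + desc2))]
    pairs = []
    for i, a in enumerate(desc1):
        j, best = max(
            ((j, adjusted_cosine(a, b, means)) for j, b in enumerate(desc2)),
            key=lambda t: t[1],
        )
        if best >= threshold:
            pairs.append((i, j))
    return pairs
```

    In the full algorithm the descriptors would come from the nonlinear (KAZE) scale space rather than the toy vectors used here.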

  1. Architecture and data processing alternatives for the TSE computer. Volume 2: Extraction of topological information from an image by the Tse computer

    Science.gov (United States)

    Jones, J. R.; Bodenheimer, R. E.

    1976-01-01

    A simple programmable Tse processor organization and arithmetic operations necessary for extraction of the desired topological information are described. Hardware additions to this organization are discussed along with trade-offs peculiar to the tse computing concept. An improved organization is presented along with the complementary software for the various arithmetic operations. The performance of the two organizations is compared in terms of speed, power, and cost. Software routines developed to extract the desired information from an image are included.

  2. Tagline: Information Extraction for Semi-Structured Text Elements in Medical Progress Notes

    Science.gov (United States)

    Finch, Dezon Kile

    2012-01-01

    Text analysis has become an important research activity in the Department of Veterans Affairs (VA). Statistical text mining and natural language processing have been shown to be very effective for extracting useful information from medical documents. However, neither of these techniques is effective at extracting the information stored in…

  3. Mars Target Encyclopedia: Information Extraction for Planetary Science

    Science.gov (United States)

    Wagstaff, K. L.; Francis, R.; Gowda, T.; Lu, Y.; Riloff, E.; Singh, K.

    2017-06-01

    Mars surface targets / and published compositions / Seek and ye will find. We used text mining methods to extract information from LPSC abstracts about the composition of Mars surface targets. Users can search by element, mineral, or target.

  4. Optimum detection for extracting maximum information from symmetric qubit sets

    International Nuclear Information System (INIS)

    Mizuno, Jun; Fujiwara, Mikio; Sasaki, Masahide; Akiba, Makoto; Kawanishi, Tetsuya; Barnett, Stephen M.

    2002-01-01

    We demonstrate a class of optimum detection strategies for extracting the maximum information from sets of equiprobable real symmetric qubit states of a single photon. These optimum strategies have been predicted by Sasaki et al. [Phys. Rev. A 59, 3325 (1999)]. The peculiar aspect is that the detections with at least three outputs suffice for optimum extraction of information regardless of the number of signal elements. The cases of ternary (or trine), quinary, and septenary polarization signals are studied where a standard von Neumann detection (a projection onto a binary orthogonal basis) fails to access the maximum information. Our experiments demonstrate that it is possible with present technologies to attain about 96% of the theoretical limit

  5. Improving PERSIANN-CCS rain estimation using probabilistic approach and multi-sensors information

    Science.gov (United States)

    Karbalaee, N.; Hsu, K. L.; Sorooshian, S.; Kirstetter, P.; Hong, Y.

    2016-12-01

    This presentation discusses recently implemented approaches to improve rainfall estimation from the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Network-Cloud Classification System (PERSIANN-CCS). PERSIANN-CCS is an infrared (IR) based algorithm being integrated into IMERG (Integrated Multi-satellitE Retrievals for the Global Precipitation Measurement (GPM) mission) to create a precipitation product at 0.1 x 0.1 degree resolution over the domain 50N to 50S every 30 minutes. Although PERSIANN-CCS has high spatial and temporal resolution, it overestimates or underestimates rainfall due to some limitations. PERSIANN-CCS estimates rainfall based on information extracted from IR channels at three temperature threshold levels (220, 235, and 253 K). Because the algorithm relies only on infrared data to estimate rainfall indirectly, it misses rainfall from warm clouds and produces false estimates for non-precipitating cold clouds. This research investigates the effectiveness of using other channels of the GOES satellites, such as the visible and water vapor channels. With multiple sensors, precipitation can be estimated based on information extracted from multiple channels. Also, instead of using an exponential function to estimate rainfall from cloud-top temperature, a probabilistic method is used. Using probability distributions of precipitation rates instead of deterministic values has improved rainfall estimation for different types of clouds.

  6. Study on methods and techniques of aeroradiometric weak information extraction for sandstone-hosted uranium deposits based on GIS

    International Nuclear Information System (INIS)

    Han Shaoyang; Ke Dan; Hou Huiqun

    2005-01-01

    The weak information extraction is one of the important research contents in current sandstone-type uranium prospecting in China. This paper introduces the connotation of aeroradiometric weak information extraction, discusses the formation theories of aeroradiometric weak information, and establishes some effective mathematical models for weak information extraction. The models are implemented on a GIS software platform, and application tests of weak information extraction are completed in known uranium mineralized areas. Research results prove that the prospective areas of sandstone-type uranium deposits can be rapidly delineated by extracting aeroradiometric weak information. (authors)

  7. Improvement of Soybean Oil Solvent Extraction through Enzymatic Pretreatment

    Directory of Open Access Journals (Sweden)

    F. V. Grasso

    2012-01-01

    The purpose of this study is to evaluate multienzyme hydrolysis as a pretreatment option to improve soybean oil solvent extraction and its eventual adaptation to conventional processes. Enzymatic action degrades the cell structures that contain oil, so improvements in extraction yield and extraction rate are expected. Soybean flakes and collets were used as materials and hexane was used as solvent. Temperature, pH, and incubation time were optimized and diffusion coefficients were estimated for each solid. Extractions were carried out in a column, oil content was determined as a function of time, and a mathematical model was developed to describe the system. The optimum conditions obtained were pH 5.4, 38°C, and 9.7 h of treatment for flakes, and pH 5.8, 44°C, and 5.8 h for collets. Diffusion coefficients were estimated between 10^-11 and 10^-10. The highest diffusion coefficient was obtained for hydrolyzed collets. In the column, 0.73 g oil/mL and 0.70 g oil/mL were obtained at 240 s for collets and flakes, respectively. Hydrolyzed solids exhibited a higher yield, and the enzymatic incubation accelerates the extraction rate. The proposed model proved to be appropriate.

  8. Using text mining techniques to extract phenotypic information from the PhenoCHF corpus.

    Science.gov (United States)

    Alnazzawi, Noha; Thompson, Paul; Batista-Navarro, Riza; Ananiadou, Sophia

    2015-01-01

    Phenotypic information locked away in unstructured narrative text presents significant barriers to information accessibility, both for clinical practitioners and for computerised applications used for clinical research purposes. Text mining (TM) techniques have previously been applied successfully to extract different types of information from text in the biomedical domain. They have the potential to be extended to allow the extraction of information relating to phenotypes from free text. To stimulate the development of TM systems that are able to extract phenotypic information from text, we have created a new corpus (PhenoCHF) that is annotated by domain experts with several types of phenotypic information relating to congestive heart failure. To ensure that systems developed using the corpus are robust to multiple text types, it integrates text from heterogeneous sources, i.e., electronic health records (EHRs) and scientific articles from the literature. We have developed several different phenotype extraction methods to demonstrate the utility of the corpus, and tested these methods on a further corpus, i.e., ShARe/CLEF 2013. Evaluation of our automated methods showed that PhenoCHF can facilitate the training of reliable phenotype extraction systems, which are robust to variations in text type. These results have been reinforced by evaluating our trained systems on the ShARe/CLEF corpus, which contains clinical records of various types. Like other studies within the biomedical domain, we found that solutions based on conditional random fields produced the best results, when coupled with a rich feature set. PhenoCHF is the first annotated corpus aimed at encoding detailed phenotypic information. The unique heterogeneous composition of the corpus has been shown to be advantageous in the training of systems that can accurately extract phenotypic information from a range of different text types. 
Although the scope of our annotation is currently limited to a single

  9. An improved discriminative filter bank selection approach for motor imagery EEG signal classification using mutual information.

    Science.gov (United States)

    Kumar, Shiu; Sharma, Alok; Tsunoda, Tatsuhiko

    2017-12-28

    Common spatial pattern (CSP) has been an effective technique for feature extraction in electroencephalography (EEG) based brain computer interfaces (BCIs). However, motor imagery EEG signal feature extraction using CSP depends to a great extent on the selection of frequency bands. In this study, we propose a mutual information based frequency band selection approach. The idea of the proposed method is to utilize information from all available channels to effectively select the most discriminative filter banks. CSP features are extracted from multiple overlapping sub-bands. An additional sub-band is introduced that covers the wide frequency band (7-30 Hz), and two different types of features are extracted from it using CSP and common spatio-spectral pattern techniques, respectively. Mutual information is then computed from the extracted features of each of these bands and the top filter banks are selected for further processing. Linear discriminant analysis is applied to the features extracted from each of the filter banks. The scores are fused together, and classification is done using a support vector machine. The proposed method is evaluated using BCI Competition III dataset IVa, BCI Competition IV dataset I, and BCI Competition IV dataset IIb, and it outperformed all other competing methods, achieving the lowest misclassification rate and the highest kappa coefficient on all three datasets. By introducing a wide sub-band and using mutual information to select the most discriminative sub-bands, the proposed method improves motor imagery EEG signal classification.
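A minimal sketch of the band-scoring idea, using a simple histogram estimate of mutual information on synthetic CSP-style features; the band layout, the feature generation, and the injected class effect in band 2 are all assumptions for illustration, not the paper's pipeline:

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Histogram estimate of I(X;Y) in nats between feature x and binary labels y."""
    edges = np.histogram_bin_edges(x, bins)
    xb = np.clip(np.digitize(x, edges), 1, bins) - 1
    joint = np.zeros((bins, 2))
    for xi, yi in zip(xb, y):
        joint[xi, int(yi)] += 1
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(42)
n_trials, n_bands, n_csp = 120, 6, 4
y = rng.integers(0, 2, n_trials)
# Synthetic stand-ins for per-band CSP features; band 2 is made informative.
features = [rng.normal(size=(n_trials, n_csp)) for _ in range(n_bands)]
features[2][:, 0] += 2.0 * y            # class-dependent shift (assumption)

# Score each sub-band by the mean MI of its features with the labels.
scores = [np.mean([mutual_info(f[:, j], y) for j in range(n_csp)])
          for f in features]
top_bands = np.argsort(scores)[::-1][:3]   # keep the most discriminative bands
```

The class-informative band receives the highest score and survives the selection, which is the behaviour the band-selection step relies on.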

  10. Improving ELM-Based Service Quality Prediction by Concise Feature Extraction

    Directory of Open Access Journals (Sweden)

    Yuhai Zhao

    2015-01-01

    Web services often run in highly dynamic and changing environments, which generate huge volumes of data. It is thus impractical to monitor the change of every QoS parameter in order to trigger timely precautions, due to the high computational costs involved. To address this problem, this paper proposes an active service quality prediction method based on the extreme learning machine (ELM). First, we extract web service trace logs and QoS information from the service log and convert them into feature vectors. Second, the proposed EC rules enable QoS precautions to be triggered as early as possible with high confidence. An efficient prefix-tree based mining algorithm, together with some effective pruning rules, is developed to mine such rules. Finally, we study how to extract a set of diversified features as the representative of all mined results. The problem is proved to be NP-hard, and a greedy algorithm is presented to approximate the optimal solution. Experimental results show that an ELM trained on the selected feature subsets can efficiently improve the reliability and the earliness of service quality prediction.
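The greedy approximation for the NP-hard diversified-feature selection step can be sketched as a standard maximum-coverage greedy loop; the feature names and the rule sets each feature "covers" are hypothetical toy data, not the paper's mined results:

```python
def greedy_representatives(candidates, k):
    """Pick k features greedily, each maximizing the newly covered rules."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(candidates, key=lambda f: len(candidates[f] - covered))
        chosen.append(best)
        covered |= candidates[best]
    return chosen, covered

# Each feature "covers" a set of mined EC rules (hypothetical toy data).
candidates = {
    "resp_time": {1, 2, 3, 4},
    "throughput": {3, 4, 5},
    "error_rate": {6, 7},
    "cpu_load": {1, 2},
}
chosen, covered = greedy_representatives(candidates, k=2)
```

Greedy selection of this form carries the classic (1 - 1/e) approximation guarantee for submodular coverage objectives, which is why it is a standard substitute for the exact NP-hard optimum.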

  11. Information Extraction, Data Integration, and Uncertain Data Management: The State of The Art

    NARCIS (Netherlands)

    Habib, Mena Badieh; van Keulen, Maurice

    2011-01-01

    Information extraction, data integration, and uncertain data management are different areas of research that have received considerable attention over the last two decades. Many researchers have tackled these areas individually. However, information extraction systems should be integrated with data integration

  12. Information extraction from muon radiography data

    International Nuclear Information System (INIS)

    Borozdin, K.N.; Asaki, T.J.; Chartrand, R.; Hengartner, N.W.; Hogan, G.E.; Morris, C.L.; Priedhorsky, W.C.; Schirato, R.C.; Schultz, L.J.; Sottile, M.J.; Vixie, K.R.; Wohlberg, B.E.; Blanpied, G.

    2004-01-01

    Scattering muon radiography was proposed recently as a technique for detection and 3-d imaging of dense high-Z objects. High-energy cosmic ray muons are deflected in matter by multiple Coulomb scattering; by measuring the deflection angles we are able to reconstruct the configuration of high-Z material in the object. We discuss methods for information extraction from muon radiography data. Tomographic methods widely used in medical imaging have been applied to this specific muon radiography information source. An alternative simple technique, based on counting highly scattered muons in voxels, appears efficient in many simulated scenes. SVM-based classifiers and clustering algorithms may allow detection of compact high-Z objects without full image reconstruction. The efficiency of muon radiography can be increased using additional information sources, such as momentum estimation, stopping-power measurement, and detection of muonic atom emission.
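The simple voxel-counting technique can be sketched as follows, with each muon reduced to a representative scattering point and angle; the unit-cube geometry, the angle threshold, and the injected "dense object" are toy assumptions:

```python
import numpy as np

def highz_voxel_map(points, angles, grid=4, threshold=0.05):
    """Count muons whose scattering angle (rad) exceeds threshold, per voxel."""
    counts = np.zeros((grid, grid, grid), dtype=int)
    for (x, y, z), a in zip(points, angles):
        if a > threshold:
            i, j, k = (int(np.clip(c * grid, 0, grid - 1)) for c in (x, y, z))
            counts[i, j, k] += 1
    return counts

rng = np.random.default_rng(1)
pts = rng.random((500, 3))                   # scattering points in a unit cube
ang = rng.exponential(0.01, 500)             # mostly small-angle background
pts[:50] = 0.55                              # dense object near the centre
ang[:50] = 0.2                               # large scattering angles there
counts = highz_voxel_map(pts, ang)
hot = np.unravel_index(counts.argmax(), counts.shape)   # flagged voxel
```

Voxels accumulating many large-angle muons flag candidate high-Z material without any tomographic reconstruction, which is the appeal of the counting approach.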

  13. Feature extraction and learning using context cue and Rényi entropy based mutual information

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    information. In particular, for feature extraction, we develop a new set of kernel descriptors, Context Kernel Descriptors (CKD), which enhance the original KDES by embedding the spatial context into the descriptors. Context cues contained in the context kernel enforce some degree of spatial consistency, thus improving the robustness of CKD. For feature learning and reduction, we propose a novel codebook learning method, based on a Rényi quadratic entropy based mutual information measure called Cauchy-Schwarz Quadratic Mutual Information (CSQMI), to learn a compact and discriminative CKD codebook. Projecting ... as the information about the underlying labels of the CKD using CSQMI. Thus the resulting codebook and reduced CKD are discriminative. We verify the effectiveness of our method on several public image benchmark datasets such as YaleB, Caltech-101 and CIFAR-10, as well as a challenging chicken feet dataset of our own...

  14. Recognition techniques for extracting information from semistructured documents

    Science.gov (United States)

    Della Ventura, Anna; Gagliardi, Isabella; Zonta, Bruna

    2000-12-01

    Archives of optical documents are more and more massively employed, with demand driven also by the new norms sanctioning the legal value of digital documents, provided they are stored on physically unalterable supports. On the supply side there is now a vast and technologically advanced market, where optical memories have solved the problem of the duration and permanence of data at costs comparable to those of magnetic memories. The remaining bottleneck in these systems is indexing. The indexing of documents with a variable structure, while still not completely automated, can be machine-supported to a large degree, with evident advantages both in the organization of the work and in extracting information, providing data that are much more detailed and potentially significant for the user. We present here a system for the automatic registration of correspondence to and from a public office. The system is based on a general methodology for the extraction, indexing, archiving, and retrieval of significant information from semi-structured documents. In our prototype application, this information is distributed among the database fields of sender, addressee, subject, date, and body of the document.

  15. Optimized ultra-high-pressure-assisted extraction of procyanidins from lychee pericarp improves the antioxidant activity of extracts.

    Science.gov (United States)

    Zhang, Ruifen; Su, Dongxiao; Hou, Fangli; Liu, Lei; Huang, Fei; Dong, Lihong; Deng, Yuanyuan; Zhang, Yan; Wei, Zhencheng; Zhang, Mingwei

    2017-08-01

    To establish optimal ultra-high-pressure (UHP)-assisted extraction conditions for procyanidins from lychee pericarp, a response surface analysis method with four factors and three levels was adopted. The optimum conditions were as follows: 295 MPa pressure, 13 min pressure holding time, 16.0 mL/g liquid-to-solid ratio, and 70% ethanol concentration. Compared with conventional ethanol extraction and ultrasonic-assisted extraction methods, the yields of the total procyanidins, flavonoids, and phenolics extracted using the UHP process were significantly increased; consequently, the oxygen radical absorbance capacity and cellular antioxidant activity of UHP-assisted lychee pericarp extracts were substantially enhanced. LC-MS/MS and high-performance liquid chromatography quantification results for individual phenolic compounds revealed that the yield of procyanidin compounds, including epicatechin, procyanidin A2, and procyanidin B2, from lychee pericarp could be significantly improved by the UHP-assisted extraction process. This UHP-assisted extraction process is thus a practical method for the extraction of procyanidins from lychee pericarp.
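The response-surface step, fitting a quadratic model over coded factor levels and locating its stationary point, can be sketched for a single factor; the yield numbers below are hypothetical, not the paper's measurements:

```python
import numpy as np

x = np.array([-1.0, 0.0, 1.0])    # coded factor levels of a three-level design
y = np.array([8.2, 9.6, 9.0])     # hypothetical procyanidin yields at each level
a, b, c = np.polyfit(x, y, 2)     # quadratic response surface y = a*x^2 + b*x + c
x_opt = -b / (2.0 * a)            # stationary point; a < 0 indicates a maximum
```

In the full four-factor design, the same idea extends to a quadratic surface with interaction terms, and the optimum (e.g. 295 MPa, 13 min) is the stationary point of that surface within the experimental region.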

  16. Improving KPCA Online Extraction by Orthonormalization in the Feature Space.

    Science.gov (United States)

    Souza Filho, Joao B O; Diniz, Paulo S R

    2018-04-01

    Recently, some online kernel principal component analysis (KPCA) techniques based on the generalized Hebbian algorithm (GHA) were proposed for use in large data sets, defining kernel components using concise dictionaries automatically extracted from data. This brief proposes two new online KPCA extraction algorithms, exploiting orthogonalized versions of the GHA rule. In both the cases, the orthogonalization of kernel components is achieved by the inclusion of some low complexity additional steps to the kernel Hebbian algorithm, thus not substantially affecting the computational cost of the algorithm. Results show improved convergence speed and accuracy of components extracted by the proposed methods, as compared with the state-of-the-art online KPCA extraction algorithms.
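As a simplified stand-in for the kernelized case, the generalized Hebbian algorithm (GHA) rule that these methods orthogonalize can be illustrated on linear PCA; the data, learning rate, and single-component setup are assumptions, and the paper's kernel dictionary and orthonormalization steps are omitted:

```python
import numpy as np

def gha_step(W, x, lr):
    """One GHA update: y = W x;  W += lr * (y x^T - tril(y y^T) W)."""
    y = W @ x
    return W + lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

rng = np.random.default_rng(0)
# Anisotropic 2-D data: leading principal axis e1 with variance 9 vs. 1.
X = rng.normal(size=(3000, 2)) * np.array([3.0, 1.0])
W = rng.normal(scale=0.1, size=(1, 2))       # one component, random start
for x in X:
    W = gha_step(W, x, lr=0.005)
comp = W[0] / np.linalg.norm(W[0])           # estimated leading eigenvector
```

The lower-triangular term deflates earlier components from later ones; the online KPCA variants in the paper replace this implicit deflation with explicit low-cost orthogonalization in the feature space.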

  17. Respiratory Information Extraction from Electrocardiogram Signals

    KAUST Repository

    Amin, Gamal El Din Fathy

    2010-12-01

    The Electrocardiogram (ECG) is a tool measuring the electrical activity of the heart, and it is extensively used for diagnosis and monitoring of heart diseases. The ECG signal reflects not only the heart activity but also many other physiological processes. The respiratory activity is a prominent process that affects the ECG signal due to the close proximity of the heart and the lungs. In this thesis, several methods for the extraction of respiratory process information from the ECG signal are presented. These methods allow an estimation of the lung volume and the lung pressure from the ECG signal. The potential benefit of this is to eliminate the corresponding sensors used to measure the respiration activity. A reduction of the number of sensors connected to patients will increase patients’ comfort and reduce the costs associated with healthcare. As a further result, the efficiency of diagnosing respirational disorders will increase since the respiration activity can be monitored with a common, widely available method. The developed methods can also improve the detection of respirational disorders that occur while patients are sleeping. Such disorders are commonly diagnosed in sleeping laboratories where the patients are connected to a number of different sensors. Any reduction of these sensors will result in a more natural sleeping environment for the patients and hence a higher sensitivity of the diagnosis.
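One common route to ECG-derived respiration, tracking the respiration-induced modulation of R-peak amplitudes, can be sketched on synthetic beats; the heart rate, breathing rate, and modulation depth are assumptions, and the thesis's own estimation methods may differ:

```python
import numpy as np

heart_hz, resp_hz = 1.2, 0.25                # 72 bpm, 15 breaths/min (assumed)
beat_t = np.arange(0, 60, 1 / heart_hz)      # one R peak per beat, 60 s record
# Respiration modulates R-peak amplitude by ~30% (assumed depth).
r_amp = 1.0 + 0.3 * np.sin(2 * np.pi * resp_hz * beat_t)

# The beat-amplitude series is a respiration signal sampled at the heart rate;
# its spectral peak gives the breathing frequency.
spec = np.abs(np.fft.rfft(r_amp - r_amp.mean()))
freqs = np.fft.rfftfreq(beat_t.size, d=1 / heart_hz)
est_breathing_hz = freqs[spec.argmax()]
```

Because the series is sampled once per beat, the recoverable breathing frequencies are limited to half the heart rate, which is ample for normal respiration.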

  18. Improving protein fold recognition by extracting fold-specific features from predicted residue-residue contacts.

    Science.gov (United States)

    Zhu, Jianwei; Zhang, Haicang; Li, Shuai Cheng; Wang, Chao; Kong, Lupeng; Sun, Shiwei; Zheng, Wei-Mou; Bu, Dongbo

    2017-12-01

    Accurate recognition of protein fold types is a key step for template-based prediction of protein structures. Existing approaches to fold recognition mainly exploit features derived from alignments of the query protein against templates. These approaches have been shown to be successful for fold recognition at the family level, but usually fail at the superfamily/fold levels. To overcome this limitation, one of the key points is to explore more structurally informative features of proteins. Although residue-residue contacts carry abundant structural information, how to thoroughly exploit this information for fold recognition still remains a challenge. In this study, we present an approach (called DeepFR) to improve fold recognition at the superfamily/fold levels. The basic idea of our approach is to extract fold-specific features from predicted residue-residue contacts of proteins using the deep convolutional neural network (DCNN) technique. Based on these fold-specific features, we calculated the similarity between the query protein and templates, and then assigned the query protein the fold type of the most similar template. DCNNs have shown excellent performance in image feature extraction and image recognition; the rationale underlying the application of DCNNs to fold recognition is that contact likelihood maps are essentially analogous to images, as both display compositional hierarchy. Experimental results on the LINDAHL dataset suggest that, even using the extracted fold-specific features alone, our approach achieved a success rate comparable to the state-of-the-art approaches. When these features were further combined with traditional alignment-related features, the success rate of our approach increased to 92.3%, 82.5% and 78.8% at the family, superfamily and fold levels, respectively, which is about 18% higher than the state-of-the-art approach at the fold level, 6% higher at the superfamily level and 1% higher at the family level.
An independent assessment on SCOP_TEST dataset showed consistent

  19. Improved Cole parameter extraction based on the least absolute deviation method

    International Nuclear Information System (INIS)

    Yang, Yuxiang; Ni, Wenwen; Sun, Qiang; Wen, He; Teng, Zhaosheng

    2013-01-01

    The Cole function is widely used in bioimpedance spectroscopy (BIS) applications. Fitting the measured BIS data to the model and then extracting the Cole parameters (R0, R∞, α and τ) is a common practice. Accurate extraction of the Cole parameters from measured BIS data has great significance for evaluating the physiological or pathological status of biological tissue. The traditional least-squares (LS)-based curve fitting method for Cole parameter extraction is often sensitive to noise or outliers and becomes non-robust. This paper proposes an improved Cole parameter extraction based on the least absolute deviation (LAD) method. Comprehensive simulation experiments are carried out and the performance of the LAD method is compared with that of the LS method under conditions of outliers, random noise, and both disturbances. The proposed LAD method exhibits much better robustness under all circumstances, which demonstrates that the LAD method deserves consideration as an improved alternative to the LS method for Cole parameter extraction, owing to its robustness to outliers and noise. (paper)
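A hedged sketch of LAD fitting of the Cole impedance model Z(ω) = R∞ + (R0 − R∞)/(1 + (jωτ)^α) on synthetic data with an outlier; the frequencies, true parameters, noise level, and optimizer settings are illustrative assumptions, not the paper's simulation setup:

```python
import numpy as np
from scipy.optimize import minimize

def cole(w, r0, rinf, alpha, tau):
    """Cole impedance model Z(w) = R_inf + (R0 - R_inf)/(1 + (j*w*tau)^alpha)."""
    return rinf + (r0 - rinf) / (1.0 + (1j * w * tau) ** alpha)

w = np.logspace(1, 5, 40)                    # angular frequencies, rad/s
true = (100.0, 20.0, 0.8, 1e-3)              # R0, R_inf, alpha, tau (assumed)
rng = np.random.default_rng(7)
z = cole(w, *true) + rng.normal(0.0, 0.2, w.size)
z[5] += 30.0                                 # one gross outlier in the data

def lad_cost(p):
    """L1 (least absolute deviation) misfit over real and imaginary parts."""
    r = cole(w, *p) - z
    return np.abs(r.real).sum() + np.abs(r.imag).sum()

x0 = np.array([90.0, 25.0, 0.7, 2e-3])       # rough initial guess
fit = minimize(lad_cost, x0, method="Nelder-Mead",
               options={"maxiter": 4000, "fatol": 1e-9, "xatol": 1e-9})
```

Swapping the summed absolute residuals for summed squared residuals in `lad_cost` gives the conventional LS fit; the L1 objective down-weights the outlier at `z[5]` instead of letting it dominate the fit.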

  20. RESEARCH ON REMOTE SENSING GEOLOGICAL INFORMATION EXTRACTION BASED ON OBJECT ORIENTED CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    H. Gao

    2018-04-01

    Northern Tibet lies in the sub-cold arid climate zone of the plateau. It is rarely visited and its geological working conditions are very poor; however, the stratum exposures are good and human interference is minimal. Research on the automatic classification and extraction of remote sensing geological information there therefore has typical significance and good application prospects. Based on object-oriented classification in northern Tibet, using Worldview2 high-resolution remote sensing data combined with tectonic information and image enhancement, the lithological spectral features, shape features, spatial locations and topological relations of various kinds of geological information were mined. By setting thresholds within a hierarchical classification, eight kinds of geological information were classified and extracted. Accuracy analysis against existing geological maps shows that the overall accuracy reached 87.8561 %, indicating that the object-oriented classification method is effective and feasible for this study area and provides a new idea for the automatic extraction of remote sensing geological information.

  1. Extracting Semantic Information from Visual Data: A Survey

    Directory of Open Access Journals (Sweden)

    Qiang Liu

    2016-03-01

    The traditional environment maps built by mobile robots include both metric ones and topological ones. These maps are navigation-oriented and not adequate for service robots to interact with or serve human users who normally rely on the conceptual knowledge or semantic contents of the environment. Therefore, the construction of semantic maps becomes necessary for building an effective human-robot interface for service robots. This paper reviews recent research and development in the field of visual-based semantic mapping. The main focus is placed on how to extract semantic information from visual data in terms of feature extraction, object/place recognition and semantic representation methods.

  2. Some techniques to improve time structure of slow extracted beam

    International Nuclear Information System (INIS)

    Shoji, Y.; Sato, H.; Toyama, T.; Marutsuka, K.; Sueno, T.; Mikawa, K.; Ninomiya, S.; Yoshii, M.

    1992-01-01

    In order to improve the time structure of the slow extracted beam spill for the KEK 12 GeV PS, the spill control system has been upgraded by adding a feed-forward signal to the feedback signal. Furthermore, the wake field in the RF cavity has been cancelled using the beam bunch signal, to reduce the re-bunching effect during the extraction period. (author)

  3. An improved protocol and a new grinding device for extraction of genomic DNA from microorganisms by a two-step extraction procedure.

    Science.gov (United States)

    Zhang, S S; Chen, D; Lu, Q

    2012-05-21

    Current protocols to extract genomic DNA from microorganisms are still laborious, tedious and costly, especially for species with thick cell walls. In order to improve the effectiveness of extracting DNA from microbial samples, a novel protocol, termed the two-step extraction method, along with an improved tissue-grinding device, was developed. The protocol comprises two steps: disruption of microbial cells or spores by grinding the sample together with silica sand in the new device, and extraction of DNA with an effective buffer containing cell lysis chemicals. The device was prepared from a commercial electric mini-grinder, adapted with a grinding stone, and a sample cup machined from a polytetrafluoroethylene rod. We tested the method on vegetative cells of four microbial species and two kinds of microbial spores that have thick cell walls and are therefore hard to process: Escherichia coli JM109, Bacillus subtilis WB600, Saccharomyces cerevisiae INVSc1, and Trichoderma viride AS3.3711, plus the spores of S. cerevisiae and T. viride, representing Gram-negative bacteria, Gram-positive bacteria, yeast, and filamentous fungi, respectively. We found that the new method and device extracted usable quantities of genomic DNA from the samples, with the extracted fragments exceeding 23 kb. Target sequences up to about 5 kb were successfully and exclusively amplified by PCR using the extracted DNA as the template. In addition, the DNA extraction was completed within 1.5 h. We therefore conclude that the two-step extraction method is an effective, improved protocol for the extraction of genomic DNA from microbial samples.

  4. KneeTex: an ontology-driven system for information extraction from MRI reports.

    Science.gov (United States)

    Spasić, Irena; Zhao, Bo; Jones, Christopher B; Button, Kate

    2015-01-01

    In the realm of knee pathology, magnetic resonance imaging (MRI) has the advantage of visualising all structures within the knee joint, which makes it a valuable tool for increasing diagnostic accuracy and planning surgical treatments. Therefore, clinical narratives found in MRI reports convey valuable diagnostic information. A range of studies have proven the feasibility of natural language processing for information extraction from clinical narratives. However, no study focused specifically on MRI reports in relation to knee pathology, possibly due to the complexity of knee anatomy and a wide range of conditions that may be associated with different anatomical entities. In this paper we describe KneeTex, an information extraction system that operates in this domain. As an ontology-driven information extraction system, KneeTex makes active use of an ontology to strongly guide and constrain text analysis. We used automatic term recognition to facilitate the development of a domain-specific ontology with sufficient detail and coverage for text mining applications. In combination with the ontology, high regularity of the sublanguage used in knee MRI reports allowed us to model its processing by a set of sophisticated lexico-semantic rules with minimal syntactic analysis. The main processing steps involve named entity recognition combined with coordination, enumeration, ambiguity and co-reference resolution, followed by text segmentation. Ontology-based semantic typing is then used to drive the template filling process. We adopted an existing ontology, TRAK (Taxonomy for RehAbilitation of Knee conditions), for use within KneeTex. The original TRAK ontology expanded from 1,292 concepts, 1,720 synonyms and 518 relationship instances to 1,621 concepts, 2,550 synonyms and 560 relationship instances. This provided KneeTex with a very fine-grained lexico-semantic knowledge base, which is highly attuned to the given sublanguage. 
Information extraction results were evaluated

  5. Audio-Visual Speech Recognition Using Lip Information Extracted from Side-Face Images

    Directory of Open Access Journals (Sweden)

    Koji Iwano

    2007-03-01

    This paper proposes an audio-visual speech recognition method using lip information extracted from side-face images as an attempt to increase noise robustness in mobile environments. Our proposed method assumes that lip images can be captured using a small camera installed in a handset. Two different kinds of lip features, lip-contour geometric features and lip-motion velocity features, are used individually or jointly, in combination with audio features. Phoneme HMMs modeling the audio and visual features are built based on the multistream HMM technique. Experiments conducted using Japanese connected digit speech contaminated with white noise in various SNR conditions show effectiveness of the proposed method. Recognition accuracy is improved by using the visual information in all SNR conditions. These visual features were confirmed to be effective even when the audio HMM was adapted to noise by the MLLR method.

  6. Improving readability through extractive summarization for learners with reading difficulties

    Directory of Open Access Journals (Sweden)

    K. Nandhini

    2013-11-01

    In this paper, we describe the design and evaluation of an extractive summarization approach to assist learners with reading difficulties. While existing summarization approaches inherently assign more weight to important sentences, our approach predicts, with good accuracy, summary sentences that are both important and readable to the target audience. We used a supervised machine learning technique for summary extraction from science and social-subject texts in educational material. Various independent features from the existing literature for predicting important sentences, together with proposed learner-dependent features for predicting readable sentences, are extracted from the texts and used for automatic classification. We performed both intrinsic and extrinsic evaluation of this approach: the intrinsic evaluation used F-measure and readability analysis, while the extrinsic evaluation comprised learner feedback on a Likert scale and an ANOVA of the effect of the assistive summary on readability for learners with reading difficulties. The results show a significant improvement in readability for the target audience using the assistive summary.
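The core idea, scoring sentences on importance and readability features and extracting the top scorers, can be sketched as follows; the two features and their weighting are simplified assumptions standing in for the paper's trained classifier:

```python
import re

def sentence_features(sent, doc_words):
    words = re.findall(r"[a-z']+", sent.lower())
    # Importance proxy: average document frequency of the sentence's words.
    tf = sum(doc_words.count(w) for w in words) / max(len(words), 1)
    # Readability proxy: shorter words are easier for the target learners.
    avg_word_len = sum(map(len, words)) / max(len(words), 1)
    return tf - 0.5 * avg_word_len

def summarize(text, k=1):
    """Return the k sentences scoring highest on importance + readability."""
    sents = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    doc_words = re.findall(r"[a-z']+", text.lower())
    ranked = sorted(sents, key=lambda s: sentence_features(s, doc_words),
                    reverse=True)
    return ranked[:k]

text = ("The cat sat. The cat sat on the cat mat. "
        "Extraordinary circumstances notwithstanding.")
summary = summarize(text, k=1)
```

In the paper's setting, the hand-set weighting is replaced by a supervised classifier trained on both importance features and learner-dependent readability features.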

  7. Extraction Method for Earthquake-Collapsed Building Information Based on High-Resolution Remote Sensing

    International Nuclear Information System (INIS)

    Chen, Peng; Wu, Jian; Liu, Yaolin; Wang, Jing

    2014-01-01

    At present, the extraction of earthquake disaster information from remote sensing data relies on visual interpretation. However, this technique cannot effectively and quickly obtain precise and efficient information for earthquake relief and emergency management. Collapsed buildings in the town of Zipingpu after the Wenchuan earthquake were used as a case study to validate two rapid extraction methods for earthquake-collapsed building information, based on pixel-oriented and object-oriented theories. The pixel-oriented method is based on multi-layer regional segments that embody the core layers and segments of the object-oriented method. The key idea is to mask, layer by layer, all image information, including that on collapsed buildings. Compared with traditional techniques, the pixel-oriented method is innovative because it allows considerably faster computer processing. As for the object-oriented method, a multi-scale segmentation algorithm was applied to build a three-layer hierarchy. By analyzing the spectrum, texture, shape, location, and context of individual object classes in different layers, a fuzzy rule system was established for the extraction of earthquake-collapsed building information. We compared the two sets of results using three criteria: precision assessment, visual effect, and principle. Both methods can extract earthquake-collapsed building information quickly and accurately. The object-oriented method successfully overcomes the salt-and-pepper noise caused by the spectral diversity of high-resolution remote sensing data and solves the problems of "same object, different spectra" and "same spectrum, different objects". With an overall accuracy of 90.38%, the object-oriented method achieves more scientific and accurate results than the pixel-oriented method (76.84%). The object-oriented image analysis method can be extensively applied in the extraction of earthquake disaster information based on high-resolution remote sensing

  8. Semi-automatic building extraction in informal settlements from high-resolution satellite imagery

    Science.gov (United States)

    Mayunga, Selassie David

    The extraction of man-made features from digital remotely sensed images is considered an important step underpinning management of human settlements in any country. Man-made features, and buildings in particular, are required for a variety of applications such as urban planning, creation of geographic information system (GIS) databases, and urban city models. Traditional man-made feature extraction methods are expensive in terms of equipment, labour-intensive, need well-trained personnel, and cannot cope with changing environments, particularly in dense urban settlement areas. This research presents an approach for extracting buildings in dense informal settlement areas using high-resolution satellite imagery. The proposed system uses a novel strategy of extracting a building by measuring a single point at its approximate centre. The fine measurement of the building outline is then effected using a modified snake model. The original snake model on which this framework is based incorporates an external constraint energy term tailored to preserving the convergence properties of the snake model; applying it to unstructured objects would negatively affect their actual shapes. The external constraint energy term was therefore removed from the original snake model formulation, giving the model the ability to cope with the high variability of building shapes in informal settlement areas. The proposed building extraction system was tested on two areas with different situations. The first area was Tungi in Dar es Salaam, Tanzania, where three sites were tested. This area is characterized by informal settlements formed illegally within the city boundaries. The second area was Oromocto in New Brunswick, Canada, where two sites were tested. The Oromocto area is mostly flat and the buildings are constructed using similar materials.
Qualitative and quantitative measures were employed to evaluate the accuracy of the results as well as the performance

  9. Accurate facade feature extraction method for buildings from three-dimensional point cloud data considering structural information

    Science.gov (United States)

    Wang, Yongzhi; Ma, Yuqing; Zhu, A.-xing; Zhao, Hui; Liao, Lixia

    2018-05-01

    Facade features represent segmentations of building surfaces and can serve as a building framework. Extracting facade features from three-dimensional (3D) point cloud data (3D PCD) is an efficient method for 3D building modeling. By combining the advantages of 3D PCD and two-dimensional optical images, this study describes the creation of a highly accurate building facade feature extraction method from 3D PCD with a focus on structural information. The new extraction method involves three major steps: image feature extraction, exploration of the mapping method between the image features and 3D PCD, and optimization of the initial 3D PCD facade features considering structural information. Results show that the new method can extract the 3D PCD facade features of buildings more accurately and continuously. The new method is validated using a case study. In addition, the effectiveness of the new method is demonstrated by comparing it with the range image-extraction method and the optical image-extraction method in the absence of structural information. The 3D PCD facade features extracted by the new method can be applied in many fields, such as 3D building modeling and building information modeling.

  10. Information analysis of iris biometrics for the needs of cryptology key extraction

    Directory of Open Access Journals (Sweden)

    Adamović Saša

    2013-01-01

The paper presents a rigorous analysis of iris biometric information for the synthesis of an optimized system for the extraction of a high-quality cryptographic key. Estimates of local entropy and mutual information identified the segments of the iris most suitable for this purpose. To optimize parameters, the corresponding wavelet transforms were examined in order to obtain the highest possible entropy and mutual information in the transform domain, setting the framework for the synthesis of systems that extract truly random sequences from iris biometrics without compromising authentication properties. [Project of the Ministry of Science of the Republic of Serbia, nos. TR32054 and III44006]
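
The entropy estimation step described above can be illustrated with a toy computation: the Shannon entropy of bit blocks from a synthetic binary iris code, where high-entropy (balanced) blocks are the ones worth harvesting for key material. This is a minimal sketch; the bit patterns are invented, and a real system would operate on Gabor-filtered iris codes rather than hand-made lists.

```python
# Sketch: per-block Shannon entropy of a binary code (bit patterns invented).
import math

def shannon_entropy(bits):
    """Entropy in bits per symbol of a 0/1 sequence."""
    if not bits:
        return 0.0
    p1 = sum(bits) / len(bits)
    h = 0.0
    for p in (p1, 1 - p1):
        if p > 0:
            h -= p * math.log2(p)
    return h

# A biased block (mostly 0s) versus a balanced one.
biased = [0] * 28 + [1] * 4
balanced = [0, 1] * 16
print(shannon_entropy(biased), shannon_entropy(balanced))
```

A key-extraction pipeline would keep only blocks whose entropy exceeds a threshold, which is the selection criterion the abstract alludes to.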

  11. Information retrieval and terminology extraction in online resources for patients with diabetes.

    Science.gov (United States)

    Seljan, Sanja; Baretić, Maja; Kucis, Vlasta

    2014-06-01

Terminology use, as a means of information retrieval or document indexing, plays an important role in health literacy. Specific types of users, e.g. patients with diabetes, need access to various online resources (in foreign and/or native languages) when searching for information on self-education in basic diabetic knowledge, on self-care activities regarding the importance of dietetic food, medications and physical exercise, and on self-management of insulin pumps. Automatic extraction of corpus-based terminology from online texts, manuals or professional papers can help in building terminology lists or lists of "browsing phrases" useful in information retrieval or document indexing. Specific terminology lists represent an intermediate step between free-text search and a controlled vocabulary, between users' demands and existing online resources in native and foreign languages. The research, aiming to detect the role of terminology in online resources, is conducted on English and Croatian manuals and Croatian online texts, and divided into three interrelated parts: i) comparison of professional and popular terminology use; ii) evaluation of automatic statistically-based terminology extraction on English and Croatian texts; iii) comparison and evaluation of extracted terminology performed on an English manual using statistical and hybrid approaches. Extracted terminology candidates are evaluated by comparison with three types of reference lists: a list created by a medical professional, a list of highly professional vocabulary contained in MeSH, and a list created by non-medical persons, made as the intersection of 15 lists. Results report on the use of popular and professional terminology in online diabetes resources, on the evaluation of automatically extracted terminology candidates in English and Croatian texts, and on the comparison of statistical and hybrid extraction methods on the English text. Evaluation of the automatic and semi-automatic terminology extraction methods is performed by recall
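
The statistically-based extraction evaluated in this record can be sketched in a few lines: frequent n-grams that neither start nor end with a stop word become term candidates, which are then compared against a reference list to measure recall. The stop-word list, example text, and toy reference list below are all invented for illustration.

```python
# Sketch: frequency-based term-candidate extraction with recall against a
# reference list (stop list, text, and reference terms are illustrative).
import re
from collections import Counter

STOP = {"the", "a", "an", "of", "and", "to", "with", "for", "is", "in", "on"}

def term_candidates(text, max_len=2, min_freq=2):
    tokens = re.findall(r"[a-z]+", text.lower())
    grams = Counter()
    for n in range(1, max_len + 1):
        for i in range(len(tokens) - n + 1):
            gram = tuple(tokens[i:i + n])
            # Candidate terms must not begin or end with a stop word.
            if gram[0] not in STOP and gram[-1] not in STOP:
                grams[gram] += 1
    return {" ".join(g) for g, c in grams.items() if c >= min_freq}

text = ("Patients with diabetes use an insulin pump. The insulin pump dose "
        "depends on blood glucose. Blood glucose is measured daily.")
candidates = term_candidates(text)
reference = {"insulin pump", "blood glucose", "diabetes"}
recall = len(candidates & reference) / len(reference)
print(sorted(candidates), recall)
```

Hybrid approaches mentioned in the abstract additionally filter candidates by part-of-speech patterns, which this frequency-only sketch omits.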

  12. [Extraction of buildings three-dimensional information from high-resolution satellite imagery based on Barista software].

    Science.gov (United States)

    Zhang, Pei-feng; Hu, Yuan-man; He, Hong-shi

    2010-05-01

The demand for accurate and up-to-date spatial information on urban buildings is becoming more and more important for urban planning, environmental protection, and other fields. Today's commercial high-resolution satellite imagery offers the potential to extract the three-dimensional information of urban buildings. This paper extracted the three-dimensional information of urban buildings from QuickBird imagery and validated the precision of the extraction, based on Barista software. It was shown that the extraction of three-dimensional building information from high-resolution satellite imagery based on Barista software has the advantages of a low demand on professional skill, broad applicability, simple operation, and high precision. Point positioning and height determination accuracy at the one-pixel level could be achieved if the digital elevation model (DEM) and sensor orientation model were of high precision and the off-nadir view angle was favorable.

  13. Extraction and Analysis of Information Related to Research & Development Declared Under an Additional Protocol

    International Nuclear Information System (INIS)

    Idinger, J.; Labella, R.; Rialhe, A.; Teller, N.

    2015-01-01

The additional protocol (AP) provides important tools to strengthen and improve the effectiveness and efficiency of the safeguards system. Safeguards are designed to verify that States comply with their international commitments not to use nuclear material or to engage in nuclear-related activities for the purpose of developing nuclear weapons or other nuclear explosive devices. Under an AP based on INFCIRC/540, a State must provide to the IAEA additional information about, and inspector access to, all parts of its nuclear fuel cycle. In addition, the State has to supply information about its nuclear fuel cycle-related research and development (R&D) activities. The majority of States declare their R&D activities under AP Articles 2.a.(i), 2.a.(x), and 2.b.(i) as part of their initial declarations and annual updates under the AP. In order to verify the consistency and completeness of information provided under the AP by States, the Agency has started to analyze declared R&D information by identifying interrelationships between States in different R&D areas relevant to safeguards. The paper outlines the quality of R&D information provided by States to the Agency, describes how the extraction and analysis of relevant declarations are currently carried out at the Agency, and specifies the difficulties that arise during evaluation with respect to cross-linking international projects and finding gaps in reporting. In addition, the paper discusses how the reporting quality of AP information on R&D activities and the assessment process for R&D information could be improved. (author)

  14. Improvement of protein extraction from sunflower meal by hydrolysis with alcalase

    Directory of Open Access Journals (Sweden)

    Vioque, J.

    2003-12-01

Extraction of proteins from defatted sunflower meal has been improved by addition of the protease alcalase during alkaline extraction. This method offers several advantages over traditional alkaline extraction without alcalase, which is usually carried out after a sedimentation/flotation step to remove the lignocellulosic fraction. Compared to extraction without alcalase, addition of 0.1% (v/v) alcalase improved the yield of protein extraction from 57.5% to 87.4%, providing an extract that is 22% hydrolyzed. In addition, an increase of up to 4.5 times in protein solubility at low pH values is achieved, which correlates with the degree of hydrolysis. The extracts obtained in the presence of alcalase had a higher proline and glycine content, suggesting that the protease improves extraction of proline-rich and glycine-rich cell wall proteins that are part of the lignocellulosic fraction. These protein extracts can be dried directly without generating wastewater, and the resulting fiber-rich material could be used for animal feed.

  15. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories.

    Science.gov (United States)

    Yang, Wei; Ai, Tinghua; Lu, Wei

    2018-04-19

Crowdsourced trajectory data is an important resource for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively so that there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors from the area of each Voronoi cell and the length of each triangle edge. A road boundary detection model is then established that integrates the boundary descriptors and trajectory movement features (e.g., direction) via the DT. Third, this detection model is used to detect the road boundary in the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting road boundaries from low-frequency GPS traces, multiple types of road structure, and different time intervals. Compared with two existing methods, the automatically extracted boundary information proved to be of higher quality.
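
The first step of the pipeline above — filtering abnormal trace segments and densifying the survivors — can be sketched as follows. The speed and gap thresholds are assumptions for illustration; the paper's adaptive interpolation is replaced here by simple linear subdivision.

```python
# Sketch of GPS trace pre-processing: drop segments whose implied speed is
# implausible for a vehicle, then subdivide long segments so consecutive
# points are never too far apart. Thresholds are invented for this example.
def clean_and_densify(points, max_speed=40.0, max_gap=10.0):
    """points: list of (t, x, y) in seconds/metres; returns a densified track."""
    # 1) Filter: keep a point only if the segment from the last kept point
    #    implies a speed at or below max_speed (m/s).
    kept = [points[0]]
    for t, x, y in points[1:]:
        t0, x0, y0 = kept[-1]
        dt = t - t0
        dist = ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5
        if dt > 0 and dist / dt <= max_speed:
            kept.append((t, x, y))
    # 2) Interpolate: linearly subdivide so spatial gaps stay under max_gap.
    dense = [kept[0]]
    for (t0, x0, y0), (t1, x1, y1) in zip(kept, kept[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        steps = max(1, int(dist // max_gap) + 1)
        for k in range(1, steps + 1):
            f = k / steps
            dense.append((t0 + f * (t1 - t0),
                          x0 + f * (x1 - x0),
                          y0 + f * (y1 - y0)))
    return dense

# A 4-point trace with one GPS glitch (the 900 m jump in one second).
track = [(0, 0, 0), (10, 50, 0), (11, 900, 0), (20, 100, 0)]
dense = clean_and_densify(track)
print(dense)
```

The densified track is what would then be handed to the Delaunay triangulation stage, where short triangle edges and small Voronoi cells indicate the road interior.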

  16. A Method for Extracting Road Boundary Information from Crowdsourcing Vehicle GPS Trajectories

    Directory of Open Access Journals (Sweden)

    Wei Yang

    2018-04-01

Crowdsourced trajectory data is an important resource for accessing and updating road information. In this paper, we present a novel approach for extracting road boundary information from crowdsourced vehicle traces based on Delaunay triangulation (DT). First, an optimization and interpolation method is proposed to filter abnormal trace segments from raw global positioning system (GPS) traces and to interpolate the optimized segments adaptively so that there are enough tracking points. Second, the DT and the Voronoi diagram are constructed within the interpolated tracking lines to calculate road boundary descriptors from the area of each Voronoi cell and the length of each triangle edge. A road boundary detection model is then established that integrates the boundary descriptors and trajectory movement features (e.g., direction) via the DT. Third, this detection model is used to detect the road boundary in the DT constructed from the trajectory lines, and a region-growing method based on seed polygons is proposed to extract the boundary. Experiments were conducted using the GPS traces of taxis in Beijing, China, and the results show that the proposed method is suitable for extracting road boundaries from low-frequency GPS traces, multiple types of road structure, and different time intervals. Compared with two existing methods, the automatically extracted boundary information proved to be of higher quality.

  17. Rapid automatic keyword extraction for information retrieval and analysis

    Science.gov (United States)

Rose, Stuart J [Richland, WA]; Cowley, Wendy E [Richland, WA]; Crow, Vernon L [Richland, WA]; Cramer, Nicholas O [Richland, WA]

    2012-03-06

    Methods and systems for rapid automatic keyword extraction for information retrieval and analysis. Embodiments can include parsing words in an individual document by delimiters, stop words, or both in order to identify candidate keywords. Word scores for each word within the candidate keywords are then calculated based on a function of co-occurrence degree, co-occurrence frequency, or both. Based on a function of the word scores for words within the candidate keyword, a keyword score is calculated for each of the candidate keywords. A portion of the candidate keywords are then extracted as keywords based, at least in part, on the candidate keywords having the highest keyword scores.
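
The scoring scheme described in this record — word scores from co-occurrence degree and frequency, summed into keyword scores — can be sketched directly. This is a minimal RAKE-style implementation; the stop-word list is illustrative, not the one from the patent, and real RAKE also splits candidates at punctuation.

```python
# Sketch of RAKE-style keyword extraction: split on stop words, score words
# by degree/frequency, score candidates by summing member word scores.
import re

STOP_WORDS = {"of", "the", "a", "an", "and", "or", "in", "on", "for",
              "to", "is", "are", "by", "with", "that", "as"}

def rake(text):
    # Candidate keywords are maximal runs of non-stop words.
    words = re.findall(r"[a-zA-Z]+", text.lower())
    candidates, current = [], []
    for w in words:
        if w in STOP_WORDS:
            if current:
                candidates.append(tuple(current))
                current = []
        else:
            current.append(w)
    if current:
        candidates.append(tuple(current))

    # Word score = degree / frequency; degree counts how many words each
    # word co-occurs with inside candidates (including itself).
    freq, degree = {}, {}
    for cand in candidates:
        for w in cand:
            freq[w] = freq.get(w, 0) + 1
            degree[w] = degree.get(w, 0) + len(cand)

    # Keyword score = sum of its member word scores; highest first.
    scored = {cand: sum(degree[w] / freq[w] for w in cand)
              for cand in candidates}
    return sorted(scored.items(), key=lambda kv: -kv[1])

keywords = rake("Rapid automatic keyword extraction supports "
                "information retrieval and analysis of documents")
print(keywords[0])
```

Note how the degree term favors longer multi-word candidates, which is the behavior the abstract's "co-occurrence degree" refers to.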

  18. Contribution to the improvement of seed extraction in larch

    Energy Technology Data Exchange (ETDEWEB)

    Philippe, G. [Cemagref - Forest nurseries and Plant genetics division, Nogent-sur-Vernisson (France)

    1995-12-31

Industrial techniques in a seed plant resulted in the extraction of a small percentage of the seed potential (18% in hybrid larch and 40% in European larch). It was determined that the seeds remaining within the cones were as viable as the seeds extracted. Thus, the quantity of marketable seeds was directly proportional to the quantity of seeds extracted, and an important gain in seed yield could be achieved by improving extraction techniques. Among the five techniques tested in a second experiment, the two cone-grinding treatments, done in a hammer mill, gave the best extraction rate (up to 80% of the seed potential) but resulted in a decrease in seed viability. However, cone grinding applied to large volumes of cones produced highly viable seeds; the difference was attributed to the number of cones treated. With regard to the other techniques, cone drying followed by cone threshing was more efficient when applied to cones previously cut in two or left outside for a couple of months. All these techniques gave better results when cones were collected in spring rather than in autumn, whereas cone-grinding efficiency did not depend on the cone collection date. 8 refs, 1 fig, 2 tabs

  19. Contribution to the improvement of seed extraction in larch

    Energy Technology Data Exchange (ETDEWEB)

    Philippe, G [Cemagref - Forest nurseries and Plant genetics division, Nogent-sur-Vernisson (France)

    1996-12-31

Industrial techniques in a seed plant resulted in the extraction of a small percentage of the seed potential (18% in hybrid larch and 40% in European larch). It was determined that the seeds remaining within the cones were as viable as the seeds extracted. Thus, the quantity of marketable seeds was directly proportional to the quantity of seeds extracted, and an important gain in seed yield could be achieved by improving extraction techniques. Among the five techniques tested in a second experiment, the two cone-grinding treatments, done in a hammer mill, gave the best extraction rate (up to 80% of the seed potential) but resulted in a decrease in seed viability. However, cone grinding applied to large volumes of cones produced highly viable seeds; the difference was attributed to the number of cones treated. With regard to the other techniques, cone drying followed by cone threshing was more efficient when applied to cones previously cut in two or left outside for a couple of months. All these techniques gave better results when cones were collected in spring rather than in autumn, whereas cone-grinding efficiency did not depend on the cone collection date. 8 refs, 1 fig, 2 tabs

  20. Color improvement by irradiation of Curcuma aromatica extract for industrial application

    International Nuclear Information System (INIS)

    Kim, Jae Kyung; Jo, Cheorun; Hwang, Han Joon; Park, Hyun Jin; Kim, Young Ji; Byun, Myung Woo

    2006-01-01

Curcuma species are medicinal herbs with various pharmacological activities. They have a characteristic yellow color and contain curcuminoids which are natural antioxidants. In this study, Curcuma aromatica (CA) and Curcuma longa (CL) extracts were gamma-irradiated for improving the color, and the irradiation effects on the curcuminoids contents in CA and CL extracts were determined in order to evaluate if CA can replace CL on the market, where the price of CA is 70% lower than the price of CL. The Hunter color L*-values were increased significantly in all the samples with increasing dose, while the a*-values and b*-values decreased, which implies that the color of the CA and CL extracts changed from dark yellow to brighter yellow. Curcuminoids contents of all the samples were evaluated, and CA contains more curcuminoids than CL. These results indicated that irradiation improved the properties of CA for possible industrial use in manufacturing food and cosmetic industrial products.

  1. Color improvement by irradiation of Curcuma aromatica extract for industrial application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Kyung [Radiation Food Science and Biotechnology Team, Advanced Radiation Technology Institute, Korea Atomic Energy Research Institute, 1266 Sinjeong-dong, Jeongeup 580-185 (Korea, Republic of); Jo, Cheorun [Radiation Food Science and Biotechnology Team, Advanced Radiation Technology Institute, Korea Atomic Energy Research Institute, 1266 Sinjeong-dong, Jeongeup 580-185 (Korea, Republic of); Hwang, Han Joon [Graduate School of Biotechnology, Korea University, 5-1, Anam-dong, Sungbuk-Gu, Seoul 136-701 (Korea, Republic of); Park, Hyun Jin [Graduate School of Biotechnology, Korea University, 5-1, Anam-dong, Sungbuk-Gu, Seoul 136-701 (Korea, Republic of); Kim, Young Ji [Division of Food Beverage and Culinary Arts, Younganm College of Science and Technology, Daegu 705-703 (Korea, Republic of); Byun, Myung Woo [Radiation Food Science and Biotechnology Team, Advanced Radiation Technology Institute, Korea Atomic Energy Research Institute, 1266 Sinjeong-dong, Jeongeup 580-185 (Korea, Republic of)]. E-mail: mwbyun@kaeri.re.kr

    2006-03-15

Curcuma species are medicinal herbs with various pharmacological activities. They have a characteristic yellow color and contain curcuminoids which are natural antioxidants. In this study, Curcuma aromatica (CA) and Curcuma longa (CL) extracts were gamma-irradiated for improving the color, and the irradiation effects on the curcuminoids contents in CA and CL extracts were determined in order to evaluate if CA can replace CL on the market, where the price of CA is 70% lower than the price of CL. The Hunter color L*-values were increased significantly in all the samples with increasing dose, while the a*-values and b*-values decreased, which implies that the color of the CA and CL extracts changed from dark yellow to brighter yellow. Curcuminoids contents of all the samples were evaluated, and CA contains more curcuminoids than CL. These results indicated that irradiation improved the properties of CA for possible industrial use in manufacturing food and cosmetic industrial products.

  2. Color improvement by irradiation of Curcuma aromatica extract for industrial application

    Science.gov (United States)

    Kim, Jae Kyung; Jo, Cheorun; Hwang, Han Joon; Park, Hyun Jin; Kim, Young Ji; Byun, Myung Woo

    2006-03-01

    Curcuma species are medicinal herbs with various pharmacological activities. They have a characteristic yellow color and contain curcuminoids which are natural antioxidants. In this study, Curcuma aromatica (CA) and Curcuma longa (CL) extracts were gamma-irradiated for improving the color, and the irradiation effects on the curcuminoids contents in CA and CL extracts were determined in order to evaluate if CA can replace CL on the market, where the price of CA is 70% lower than the price of CL. The Hunter color L*-values were increased significantly in all the samples with increasing dose, while the a*-values and b*-values decreased, which implies that the color of the CA and CL extracts changed from dark yellow to brighter yellow. Curcuminoids contents of all the samples were evaluated, and CA contains more curcuminoids than CL. These results indicated that irradiation improved the properties of CA for possible industrial use in manufacturing food and cosmetic industrial products.

  3. Improving carotenoid extraction from tomato waste by pulsed electric fields.

    Directory of Open Access Journals (Sweden)

Elisa Luengo

    2014-08-01

In this investigation, the influence of the application of pulsed electric fields (PEF) of different intensities (3-7 kV/cm, 0-300 μs) on carotenoid extraction from tomato peel and pulp in a hexane:acetone:ethanol mixture was studied, with the aim of increasing extraction yield or reducing the proportion of the less green solvents in the extraction medium. According to the cellular disintegration index, the optimum treatment time for permeabilization of tomato peel and pulp at the different electric field strengths was 90 μs. PEF permeabilization of tomato pulp did not significantly increase carotenoid extraction. However, a PEF treatment at 5 kV/cm improved carotenoid extraction from tomato peel by 39% compared with the control in a hexane:ethanol:acetone (50:25:25) mixture. Further increases of the electric field from 5 to 7 kV/cm did not significantly increase the extraction of carotenoids. The presence of acetone in the solvent mixture did not positively affect carotenoid extraction when the tomato peels were PEF-treated. Response surface methodology was used to determine the potential of PEF for reducing the percentage of hexane in a hexane:ethanol mixture. Application of a PEF treatment allowed the hexane percentage to be reduced from 45% to 30% without affecting the carotenoid extraction yield. The antioxidant capacity of the extracts obtained from tomato peel was correlated with the carotenoid concentration and was not affected by the PEF treatment.

  4. Utilizing a Value of Information Framework to Improve Ore Collection and Classification Procedures

    National Research Council Canada - National Science Library

    Phillips, Julia A

    2006-01-01

    .... We use a value of information framework (VOI) to consider the economic feasibility of a mine purchasing additional information on extracted ore type to reduce the uncertainty of extracted ore grade quality...

  5. Improving the information environment for analysts

    DEFF Research Database (Denmark)

    Farooq, Omar; Nielsen, Christian

    2014-01-01

they have more information. Our results also show that intellectual capital disclosure related to employees and strategic statements are the most important disclosures for analysts. Research limitations/implications: More relevant methods, such as surveys or interviews with management, may be used to improve the information content of intellectual capital disclosure. Analysts probably deduce the intellectual capital of a firm from interaction with management rather than from financial statements. Practical implications: Firms in the biotechnology sector can improve their information environment by disclosing more information

  6. Extracting Social Networks and Contact Information From Email and the Web

    National Research Council Canada - National Science Library

    Culotta, Aron; Bekkerman, Ron; McCallum, Andrew

    2005-01-01

    ...-suited for such information extraction tasks. By recursively calling itself on new people discovered on the Web, the system builds a social network with multiple degrees of separation from the user...

  7. Information Extraction of High-Resolution Remotely Sensed Image Based on Multiresolution Segmentation

    Directory of Open Access Journals (Sweden)

    Peng Shao

    2014-08-01

The principle of multiresolution segmentation is presented in detail in this study, and the Canny algorithm is applied for edge detection of a remotely sensed image based on this principle. The target image was divided into regions based on object-oriented multiresolution segmentation and edge detection. Furthermore, an object hierarchy was created, and a series of features (water bodies, vegetation, roads, residential areas, bare land) and other information were extracted using spectral and geometrical features. The results indicate that edge detection has a positive effect on multiresolution segmentation, and the overall accuracy of information extraction reaches 94.6% according to the confusion matrix.
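
The 94.6% figure quoted above is an overall accuracy computed from a confusion matrix: the share of correctly classified samples among all samples. A minimal reminder of that computation (the matrix values below are invented, not from the study):

```python
# Sketch: overall accuracy from a remote-sensing confusion matrix.
def overall_accuracy(confusion):
    """Rows = reference classes, columns = classified; diagonal = correct."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# e.g. three classes: water, vegetation, roads (counts invented)
cm = [[50, 2, 1],
      [3, 60, 2],
      [1, 1, 30]]
print(overall_accuracy(cm))
```

Per-class producer's and user's accuracies come from the same matrix by normalizing rows and columns respectively.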

  8. About increasing informativity of diagnostic system of asynchronous electric motor by extracting additional information from values of consumed current parameter

    Science.gov (United States)

    Zhukovskiy, Y.; Korolev, N.; Koteleva, N.

    2018-05-01

This article is devoted to expanding the possibilities of assessing the technical state of asynchronous electric drives from their current consumption, and to increasing the information capacity of diagnostic methods under conditions of limited access to equipment and incomplete information. Spectral analysis of the drive current can be supplemented by an analysis of the components of the Park's vector of the current. The evolution of the hodograph at the moment of appearance and during the development of defects was studied using the example of current asymmetry in the phases of an induction motor. The result of the study is a set of new diagnostic parameters for the asynchronous electric drive. The research proved that the proposed diagnostic parameters allow the type and level of a defect to be determined without stopping the equipment and taking it out of service for repair. Modern digital control and monitoring systems can use the proposed parameters, based on the stator current of an electrical machine, to improve the accuracy and reliability of obtaining diagnostic patterns and predicting their changes, in order to improve equipment maintenance systems. This approach can also be used in systems and objects with significant parasitic vibrations and unsteady loads. The extraction of useful information can be carried out in electric drive systems whose structure includes a power electric converter.
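
The Park's vector idea mentioned above can be sketched numerically: for a healthy, balanced machine the (i_d, i_q) hodograph is a circle, while phase-current asymmetry (a developing defect) distorts it into an ellipse. The amplitudes below are invented, and this is only an illustration of the geometric signature, not the paper's full diagnostic procedure.

```python
# Sketch: Park's vector hodograph of three-phase stator currents.
import math

def park_vector(ia, ib, ic):
    """Park/Clarke components of the stator current space vector."""
    i_d = math.sqrt(2 / 3) * ia - ib / math.sqrt(6) - ic / math.sqrt(6)
    i_q = (ib - ic) / math.sqrt(2)
    return i_d, i_q

def hodograph_radii(amp_a, amp_b, amp_c, steps=360):
    """Min and max radius of the hodograph over one electrical period."""
    radii = []
    for k in range(steps):
        th = 2 * math.pi * k / steps
        ia = amp_a * math.sin(th)
        ib = amp_b * math.sin(th - 2 * math.pi / 3)
        ic = amp_c * math.sin(th + 2 * math.pi / 3)
        i_d, i_q = park_vector(ia, ib, ic)
        radii.append(math.hypot(i_d, i_q))
    return min(radii), max(radii)

healthy = hodograph_radii(1.0, 1.0, 1.0)   # balanced phases: circle
faulty = hodograph_radii(1.0, 0.7, 1.0)    # phase-B asymmetry: ellipse
print(healthy, faulty)
```

The spread between the minimum and maximum radius is one of the simplest scalar indicators that can be derived from the hodograph, in the spirit of the diagnostic parameters the abstract describes.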

  9. Extracting information from multiplex networks

    Science.gov (United States)

    Iacovacci, Jacopo; Bianconi, Ginestra

    2016-06-01

Multiplex networks are generalized network structures that are able to describe networks in which the same set of nodes are connected by links that have different connotations. Multiplex networks are ubiquitous since they describe social, financial, engineering, and biological networks as well. Extending our ability to analyze complex networks to multiplex network structures greatly increases the level of information that it is possible to extract from big data. For these reasons, characterizing the centrality of nodes in multiplex networks and finding new ways to solve challenging inference problems defined on multiplex networks are fundamental questions of network science. In this paper, we discuss the relevance of the Multiplex PageRank algorithm for measuring the centrality of nodes in multilayer networks and we characterize the utility of the recently introduced indicator function Θ̃_S for describing their mesoscale organization and community structure. As working examples for studying these measures, we consider three multiplex network datasets coming from social science.
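
The core idea behind Multiplex PageRank — centrality earned in one layer biasing the random walk on another layer — can be sketched with plain power iteration. This is a simplified two-layer variant for illustration, not the exact algorithm analyzed in the paper; the toy graphs are invented.

```python
# Sketch: two-layer PageRank where layer-A scores bias the walk on layer B.
def pagerank(adj, n, damping=0.85, personalization=None, iters=100):
    """Power iteration on an adjacency dict {node: [neighbours]}."""
    if personalization is None:
        personalization = [1.0 / n] * n
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - damping) * personalization[i] for i in range(n)]
        for u in range(n):
            targets = adj.get(u, [])
            if targets:
                share = damping * rank[u] / len(targets)
                for v in targets:
                    new[v] += share
            else:  # dangling node: redistribute along the personalization
                for v in range(n):
                    new[v] += damping * rank[u] * personalization[v]
        rank = new
    return rank

# Two layers over the same four nodes (e.g. two social platforms).
layer_a = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
layer_b = {0: [3], 1: [0], 2: [1], 3: [1, 2]}

rank_a = pagerank(layer_a, 4)
total = sum(rank_a)
bias = [r / total for r in rank_a]          # layer-A centrality as bias
rank_multiplex = pagerank(layer_b, 4, personalization=bias)
print(rank_multiplex)
```

Coupling the layers through the personalization vector is one of several coupling choices; the published Multiplex PageRank family also considers biasing link weights directly.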

  10. OpenCV-Based Nanomanipulation Information Extraction and the Probe Operation in SEM

    Directory of Open Access Journals (Sweden)

    Dongjie Li

    2015-02-01

For the established telenanomanipulation system, a method of extracting location information and strategies for probe operation were studied in this paper. First, machine learning algorithms from OpenCV were used to extract location information from SEM images, so that nanowires and the probe in SEM images can be tracked automatically and the region of interest (ROI) can be marked quickly; the locations of the nanowire and probe are then extracted from the ROI. To study the probe operation strategy, the van der Waals force between the probe and a nanowire was computed to obtain the relevant operating parameters. With these parameters, the nanowire can be pre-operated in a 3D virtual environment and an optimal path for the probe obtained. The actual probe then runs automatically under the telenanomanipulation system's control. Finally, experiments were carried out to verify the above methods, and the results show that the designed methods achieved the expected effect.
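
Locating a known pattern (probe tip or nanowire) inside an image, as described above, reduces in its simplest form to template matching. The paper uses OpenCV's machine learning tooling; the pure-Python sum-of-squared-differences search below is only a self-contained illustration of the underlying idea (OpenCV's `matchTemplate` does the same job far faster), and the tiny image and template are invented.

```python
# Sketch: naive SSD template search to locate a pattern in a 2D intensity grid.
def locate(image, template):
    """Return (row, col) of the best-matching placement of template."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_pos = None, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            # Sum of squared differences over the template footprint.
            ssd = sum((image[r + i][c + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

image = [[0, 0, 0, 0, 0],
         [0, 0, 9, 8, 0],
         [0, 0, 7, 9, 0],
         [0, 0, 0, 0, 0]]
template = [[9, 8],
            [7, 9]]
print(locate(image, template))  # best match at row 1, col 2
```

In the real system the returned position would seed the ROI from which probe and nanowire coordinates are tracked frame to frame.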

  11. A method for automating the extraction of specialized information from the web

    NARCIS (Netherlands)

    Lin, L.; Liotta, A.; Hippisley, A.; Hao, Y.; Liu, J.; Wang, Y.; Cheung, Y-M.; Yin, H.; Jiao, L.; Ma, j.; Jiao, Y-C.

    2005-01-01

    The World Wide Web can be viewed as a gigantic distributed database including millions of interconnected hosts some of which publish information via web servers or peer-to-peer systems. We present here a novel method for the extraction of semantically rich information from the web in a fully

  12. Improving patient safety through a clinical audit spiral: prevention of wrong tooth extraction in orthodontics.

    Science.gov (United States)

    Anwar, H; Waring, D

    2017-07-07

    Introduction With an increasing demand to improve patient safety within the NHS, it is important to ensure that measures are undertaken to continually improve patient care. Wrong site surgery has been defined as a 'never event'. This article highlights the importance of preventing wrong tooth extraction within orthodontics through an audit spiral over five years investigating the accuracy and clarity of orthodontic extraction letters at the University Dental Hospital of Manchester.Aims To examine compliance with the standards for accuracy and clarity of extraction letters and the incidence of wrong tooth extractions, and to increase awareness of the errors that can occur with extraction letters and of the current guidelines.Method A retrospective audit was conducted examining extraction letters sent to clinicians outside the department.Results It can be seen there has been no occurrence of a wrong site tooth extraction. The initial audit highlighted issues in conformity, with it falling below expected standards. Cycle two generally demonstrated a further reduction in compliance. Cycle three appeared to result in an increase in levels of compliance. Cycles 4 and 5 have demonstrated gradual improvements. However, it is noteworthy that in all cycles the audit standards were still not achieved, with the exception of no incidences of the incorrect tooth being extracted.Conclusion This audit spiral demonstrates the importance of long term re-audit to aim to achieve excellence in clinical care. There has been a gradual increase in standards through each audit.

  13. Corn silk extract improves benign prostatic hyperplasia in experimental rat model.

    Science.gov (United States)

    Kim, So Ra; Ha, Ae Wha; Choi, Hyun Ji; Kim, Sun Lim; Kang, Hyeon Jung; Kim, Myung Hwan; Kim, Woo Kyoung

    2017-10-01

This study was conducted to investigate the effect of a corn silk extract on improving benign prostatic hyperplasia (BPH). The experimental animals, 6-week-old male Wistar rats, were divided into sham-operated control (Sham) and experimental groups. The experimental group, which underwent orchiectomy and received subcutaneous injection of 10 mg/kg of testosterone propionate to induce BPH, was divided into a Testo Only group that received only testosterone, a Testo+Fina group that received testosterone and 5 mg/kg finasteride, a Testo+CSE10 group that received testosterone and 10 mg/kg of corn silk extract, and a Testo+CSE100 group that received testosterone and 100 mg/kg of corn silk extract. Prostate weight and concentrations of dihydrotestosterone (DHT), 5α-reductase 2 (5α-R2), and prostate specific antigen (PSA) in serum or prostate tissue were determined. The mRNA expressions of 5α-R2 and proliferating cell nuclear antigen (PCNA) in prostate tissue were also measured. Compared to the Sham group, prostate weight was significantly higher in the Testo Only group and decreased significantly in the Testo+Fina, Testo+CSE10, and Testo+CSE100 groups (P < 0.05). Corn silk extract treatment improved BPH symptoms by inhibiting the mRNA expression of 5α-R2 and decreasing the amount of 5α-R2, DHT, and PSA in serum and prostate tissue.

  14. Automated Fault Interpretation and Extraction using Improved Supplementary Seismic Datasets

    Science.gov (United States)

    Bollmann, T. A.; Shank, R.

    2017-12-01

    During the interpretation of seismic volumes, it is necessary to interpret faults along with horizons of interest. With the improvement of technology, the interpretation of faults can be expedited with the aid of different algorithms that create supplementary seismic attributes, such as semblance and coherency. These products highlight discontinuities, but still need a large amount of human interaction to interpret faults and are plagued by noise and stratigraphic discontinuities. Hale (2013) presents a method to improve on these datasets by creating what is referred to as a Fault Likelihood volume. In general, these volumes contain less noise and do not emphasize stratigraphic features. Instead, planar features within a specified strike and dip range are highlighted. Once a satisfactory Fault Likelihood Volume is created, extraction of fault surfaces is much easier. The extracted fault surfaces are then exported to interpretation software for QC. Numerous software packages have implemented this methodology with varying results. After investigating these platforms, we developed a preferred Automated Fault Interpretation workflow.

  15. Solutions for improving data extraction from virtual data warehouses

    Directory of Open Access Journals (Sweden)

    Adela BARA

    2010-09-01

The data warehousing project's team is always confronted with low performance in data extraction. In a Business Intelligence environment this problem can be critical, because data that arrive too late are no longer useful for decision making, so the project can be compromised. In this case there are several techniques that can be applied to reduce query execution time and to improve the performance of BI analyses and reports. Some of the techniques that can be applied to reduce the cost of execution and improve query performance in BI systems are presented in this paper.

  16. Development of Ingredients of the Feed-stuff for Improving Immune system using Centipede grass Extracts

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Hyoungwoo; Chung, Byungyeoup; Lee, Seungsik; Lee, Sungbeom

    2013-09-15

The purpose of this project is to provide new application areas for naturally occurring flavonoids, centipedegrass extracts, to improve the immune system and serve as ingredients for feed-stuff. To demonstrate the immune-improving effects of centipedegrass, cell and animal experiments were carried out. The research scope included determining the effect of centipedegrass extracts on immune functions using LPS-induced RAW cells, where it was found that the cytokines IL-6 and IL-10, which were induced by LPS, were reduced by inhibiting phosphorylation of STAT-3. To determine the immune-stimulating activity of centipedegrass in animals, centipedegrass extracts were administered once a day for 2 weeks. After treatment with LPS, an immune suppressor, cytokines were down-regulated; however, the cytokines in the group pretreated with centipedegrass extracts were not down-regulated as much as in the non-treated group. The overall mechanism of the immune-stimulating effect of centipedegrass extracts was that STAT-3 phosphorylation was inhibited by the extracts.

  17. Improving Griffith's protocol for co-extraction of microbial DNA and RNA in adsorptive soils

    DEFF Research Database (Denmark)

    Paulin, Mélanie Marie; Nicolaisen, Mette Haubjerg; Jacobsen, Carsten Suhr

    2013-01-01

Quantification of microbial gene expression is increasingly being used to study key functions in soil microbial communities, yet major limitations still exist for efficient extraction of nucleic acids, especially RNA for transcript analysis, from this complex matrix. We present an improved ... real-time PCR on both the RNA (after conversion to cDNA) and the DNA fraction of the extracts. Non-adsorptive soils were characterized by low clay content and/or high phosphate content, whereas adsorptive soils had clay contents above 20% and/or a strong presence of divalent Ca in combination with high pH. Modifications to the co-extraction protocol improved nucleic acid extraction efficiency from all adsorptive soils and were successfully validated by DGGE analysis of the indigenous community based on 16S rRNA gene and transcripts in soils representing low biomass and/or high clay content. This new approach...

  18. Improved External Base Resistance Extraction for Submicrometer InP/InGaAs DHBT Models

    DEFF Research Database (Denmark)

    Johansen, Tom Keinicke; Krozer, Viktor; Nodjiadjim, Virginie

    2011-01-01

An improved direct parameter extraction method is proposed for III–V heterojunction bipolar transistor (HBT) external base resistance $R_{\rm bx}$ extraction from forward active $S$-parameters. The method is formulated taking into account the current dependence of the intrinsic base–collector cap... factor given as the ratio of the emitter to the collector area. The determination of the parameters $I_{p}$ and $X_{0}$ from experimental $S$-parameters is described. The method is applied to high-speed submicrometer InP/InGaAs DHBT devices and leads to small-signal equivalent circuit models, which...

  19. Extraction of Graph Information Based on Image Contents and the Use of Ontology

    Science.gov (United States)

    Kanjanawattana, Sarunya; Kimura, Masaomi

    2016-01-01

    A graph is an effective form of data representation used to summarize complex information. Explicit information such as the relationship between the X- and Y-axes can be easily extracted from a graph by applying human intelligence. However, implicit knowledge such as information obtained from other related concepts in an ontology also resides in…

  20. Improved detection limits for phthalates by selective solid-phase micro-extraction

    KAUST Repository

Zia, Asif I.; Afsarimanesh, Nasrin; Xie, Li; Nag, Anindya; Al-Bahadly, I. H.; Yu, P. L.; Kosel, Jürgen

    2016-01-01

The presented research reports on an improved method and enhanced limits of detection for phthalates, a class of hazardous additives used in the production of plastics, by a solid-phase micro-extraction (SPME) polymer in comparison to molecularly imprinted solid

  1. The algorithm of fast image stitching based on multi-feature extraction

    Science.gov (United States)

    Yang, Chunde; Wu, Ge; Shi, Jing

    2018-05-01

This paper proposes an improved image registration method combining Hu-moment-based invariant contour information with feature point detection, aiming to solve the problems of traditional image stitching algorithms, such as a time-consuming feature point extraction process, an overload of redundant invalid information, and general inefficiency. First, the neighborhood of pixels is used to extract contour information, employing the Hu invariant moments as the similarity measure, and SIFT feature points are extracted in the similar regions. Then the Euclidean distance is replaced with the Hellinger kernel function to improve the initial matching efficiency and reduce mismatched points, and the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm is used to fuse the mosaic images and realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the proposed method.
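The Hellinger-kernel substitution for Euclidean distance described above can be sketched in a few lines. This is a toy illustration in which random vectors stand in for SIFT descriptors; the function name and dimensions are assumptions, not the paper's code:

```python
import numpy as np

def hellinger_similarity(d1, d2):
    # L1-normalise both descriptors, then take the Bhattacharyya
    # coefficient sum(sqrt(p * q)); identical descriptors score 1.0.
    p = d1 / (np.abs(d1).sum() + 1e-12)
    q = d2 / (np.abs(d2).sum() + 1e-12)
    return float(np.sqrt(p * q).sum())

rng = np.random.default_rng(0)
a = rng.random(128)             # stand-in for a 128-dim SIFT descriptor
b = a + 0.01 * rng.random(128)  # near-duplicate of a
c = rng.random(128)             # unrelated descriptor
```

Because the kernel is bounded in [0, 1], a near-duplicate descriptor scores close to 1 while unrelated descriptors score lower, which is what makes it usable as a drop-in match score.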

  2. An Improved Method for Extraction and Separation of Photosynthetic Pigments

    Science.gov (United States)

    Katayama, Nobuyasu; Kanaizuka, Yasuhiro; Sudarmi, Rini; Yokohama, Yasutsugu

    2003-01-01

    The method for extracting and separating hydrophobic photosynthetic pigments proposed by Katayama "et al." ("Japanese Journal of Phycology," 42, 71-77, 1994) has been improved to introduce it to student laboratories at the senior high school level. Silica gel powder was used for removing water from fresh materials prior to…

  3. An Accurate Integral Method for Vibration Signal Based on Feature Information Extraction

    Directory of Open Access Journals (Sweden)

    Yong Zhu

    2015-01-01

After summarizing the advantages and disadvantages of current integral methods, a novel vibration signal integral method based on feature information extraction is proposed. This method takes full advantage of the self-adaptive filter characteristic and waveform correction feature of ensemble empirical mode decomposition in dealing with nonlinear and nonstationary signals. The approach merges the strengths of kurtosis, mean square error, energy, and singular value decomposition for signal feature extraction, and the values of the four indexes are combined into a feature vector. The connotative characteristic components in the vibration signal are then accurately extracted by a Euclidean distance search, and the desired integral signals are precisely reconstructed. With this method, the interference from invalid signal content such as trend items and noise, which plagues traditional methods, is effectively removed. The large cumulative error of traditional time-domain integration is overcome, and the large low-frequency error of traditional frequency-domain integration is avoided. Compared with traditional integral methods, this method excels at removing noise while retaining useful feature information, and shows higher accuracy.
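The feature-vector step above (kurtosis, mean square error, energy, and a singular value combined, then matched by Euclidean distance) can be sketched as follows. The index definitions here are simple textbook versions assumed for illustration, not the paper's exact formulas:

```python
import numpy as np

def feature_vector(x):
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    kurt = ((x - mu) ** 4).mean() / (sigma ** 4 + 1e-12)  # kurtosis
    mse = ((x - mu) ** 2).mean()                          # mean square error
    energy = (x ** 2).sum()                               # signal energy
    # leading singular value of a sliding-window (Hankel-style) embedding
    H = np.lib.stride_tricks.sliding_window_view(x, 8)
    sv = np.linalg.svd(H, compute_uv=False)[0]
    return np.array([kurt, mse, energy, sv])

def nearest_component(components, reference):
    # Euclidean search: pick the component whose feature vector
    # lies closest to that of the reference signal.
    ref = feature_vector(reference)
    d = [np.linalg.norm(feature_vector(c) - ref) for c in components]
    return int(np.argmin(d))

t = np.linspace(0.0, 1.0, 256)
tone = np.sin(2 * np.pi * 10 * t)                            # informative component
noise = 0.3 * np.random.default_rng(2).standard_normal(256)  # trend/noise stand-in
```

In the full method the candidate components would come from ensemble empirical mode decomposition; plain arrays stand in for the IMFs here.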

  4. Knowledge discovery: Extracting usable information from large amounts of data

    International Nuclear Information System (INIS)

    Whiteson, R.

    1998-01-01

The threat of nuclear weapons proliferation is a problem of world wide concern. Safeguards are the key to nuclear nonproliferation and data is the key to safeguards. The safeguards community has access to a huge and steadily growing volume of data. The advantages of this data rich environment are obvious: there is a great deal of information which can be utilized. The challenge is to effectively apply proven and developing technologies to find and extract usable information from that data. That information must then be assessed and evaluated to produce the knowledge needed for crucial decision making. Efficient and effective analysis of safeguards data will depend on utilizing technologies to interpret the large, heterogeneous data sets that are available from diverse sources. With an order-of-magnitude increase in the amount of data from a wide variety of technical, textual, and historical sources there is a vital need to apply advanced computer technologies to support all-source analysis. There are techniques of data warehousing, data mining, and data analysis that can provide analysts with tools that will expedite the extraction of usable information from the huge amounts of data to which they have access. Computerized tools can aid analysts by integrating heterogeneous data, evaluating diverse data streams, automating retrieval of database information, prioritizing inputs, reconciling conflicting data, doing preliminary interpretations, discovering patterns or trends in data, and automating some of the simpler prescreening tasks that are time consuming and tedious. Thus knowledge discovery technologies can provide a foundation of support for the analyst. Rather than spending time sifting through often irrelevant information, analysts could use their specialized skills in a focused, productive fashion. This would allow them to make their analytical judgments with more confidence and spend more of their time doing what they do best.

  5. Extraction of temporal information in functional MRI

    Science.gov (United States)

    Singh, M.; Sungkarat, W.; Jeong, Jeong-Won; Zhou, Yongxia

    2002-10-01

The temporal resolution of functional MRI (fMRI) is limited by the shape of the haemodynamic response function (hrf) and the vascular architecture underlying the activated regions. Typically, the temporal resolution of fMRI is on the order of 1 s. We have developed a new data processing approach to extract temporal information on a pixel-by-pixel basis at the level of 100 ms from fMRI data. Instead of correlating or fitting the time-course of each pixel to a single reference function, which is the common practice in fMRI, we correlate each pixel's time-course to a series of reference functions that are shifted with respect to each other by 100 ms. The reference function yielding the highest correlation coefficient for a pixel is then used as a time marker for that pixel. A Monte Carlo simulation and experimental study of this approach were performed to estimate the temporal resolution as a function of signal-to-noise ratio (SNR) in the time-course of a pixel. Assuming a known and stationary hrf, the simulation and experimental studies suggest a lower limit in the temporal resolution of approximately 100 ms at an SNR of 3. The multireference function approach was also applied to extract timing information from an event-related motor movement study where the subjects flexed a finger on cue. The event was repeated 19 times with the event's presentation staggered to yield an approximately 100-ms temporal sampling of the haemodynamic response over the entire presentation cycle. The timing differences among different regions of the brain activated by the motor task were clearly visualized and quantified by this method. The results suggest that it is possible to achieve a temporal resolution of ~200 ms in practice with this approach.
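The multireference idea reduces to an argmax over shifted copies of the reference function. A toy sketch follows; the gamma-shaped bump and all parameter values are assumptions for illustration, not the hrf used in the paper:

```python
import numpy as np

t = np.arange(0.0, 20.0, 0.1)  # 100-ms sampling grid, in seconds

def reference(shift):
    # gamma-like haemodynamic bump delayed by `shift` seconds
    tt = np.clip(t - shift, 0.0, None)
    return tt ** 2 * np.exp(-tt / 1.5)

def timing_marker(timecourse, shifts):
    # correlate the pixel time-course with each shifted reference and
    # return the shift giving the highest correlation coefficient
    rs = [np.corrcoef(timecourse, reference(s))[0, 1] for s in shifts]
    return shifts[int(np.argmax(rs))]

true_onset = 0.7
pixel = reference(true_onset) + 0.01 * np.random.default_rng(1).standard_normal(t.size)
est = timing_marker(pixel, np.arange(0.0, 2.01, 0.1))
```

With adequate SNR the estimate lands within one 100-ms grid step of the true onset, which is the resolution regime the abstract describes.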

  6. Removal of Water-Soluble Extractives Improves the Enzymatic Digestibility of Steam-Pretreated Softwood Barks.

    Science.gov (United States)

    Frankó, Balázs; Carlqvist, Karin; Galbe, Mats; Lidén, Gunnar; Wallberg, Ola

    2018-02-01

Softwood bark contains large amounts of extractives, i.e., soluble lipophilic components (such as resin acids) and hydrophilic components (phenolic compounds, stilbenes). The effects of the partial removal of water-soluble extractives before acid-catalyzed steam pretreatment on enzymatic digestibility were assessed for two softwood barks: Norway spruce and Scots pine. A simple hot water extraction step removed more than half of the water-soluble extractives from the barks, which improved the enzymatic digestibility of both steam-pretreated materials. This effect was more pronounced for the spruce than the pine bark, as evidenced by the 30% and 11% glucose yield improvements, respectively, in the enzymatic digestibility. Furthermore, analysis of the chemical composition showed that the acid-insoluble lignin content of the pretreated materials decreased when water-soluble extractives were removed prior to steam pretreatment. This can be explained by a decreased formation of water-insoluble "pseudo-lignin" from water-soluble bark phenolics during the acid-catalyzed pretreatment, which otherwise distorts lignin analysis and may also contribute to the impaired enzymatic digestibility of the barks. Thus, this study advocates the removal of extractives as the first step in the processing of bark or bark-rich materials in a sugar platform biorefinery.

  7. Improving the two-step remediation process for CCA-treated wood. Part I, Evaluating oxalic acid extraction

    Science.gov (United States)

    Carol Clausen

    2004-01-01

    In this study, three possible improvements to a remediation process for chromated-copper-arsenate (CCA) treated wood were evaluated. The process involves two steps: oxalic acid extraction of wood fiber followed by bacterial culture with Bacillus licheniformis CC01. The three potential improvements to the oxalic acid extraction step were (1) reusing oxalic acid for...

  8. Improved discrete swarm intelligence algorithms for endmember extraction from hyperspectral remote sensing images

    Science.gov (United States)

    Su, Yuanchao; Sun, Xu; Gao, Lianru; Li, Jun; Zhang, Bing

    2016-10-01

Endmember extraction is a key step in hyperspectral unmixing. A new framework is proposed for hyperspectral endmember extraction, based on swarm intelligence (SI) algorithms, where the SI algorithm is discretized because pixels in a hyperspectral image are naturally defined within a discrete space. Moreover, a "distance" factor is introduced into the objective function to limit the number of endmembers, which is generally small in real scenarios, whereas traditional SI algorithms tend to produce superabundant spectral signatures that generally belong to the same classes. Three endmember extraction methods are proposed, based on the artificial bee colony, ant colony optimization, and particle swarm optimization algorithms. Experiments with both simulated and real hyperspectral images indicate that the proposed framework can improve the accuracy of endmember extraction.
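The "distance"-penalised objective described above can be sketched as follows. Unconstrained least-squares unmixing stands in for the real reconstruction term, and the penalty weight and separation threshold are assumed values, not the paper's:

```python
import numpy as np

def objective(X, idx, lam=10.0, min_sep=0.5):
    # X: (n_pixels, n_bands) image; idx: candidate endmember pixel indices
    E = X[list(idx)]                      # candidate endmember spectra
    # unconstrained least-squares unmixing as a toy reconstruction error
    A, *_ = np.linalg.lstsq(E.T, X.T, rcond=None)
    err = np.linalg.norm(X - A.T @ E)
    # "distance" factor: penalise spectrally close endmembers so the
    # search does not return duplicated signatures of one class
    pen = sum(max(0.0, min_sep - np.linalg.norm(E[i] - E[j]))
              for i in range(len(idx)) for j in range(i + 1, len(idx)))
    return err + lam * pen

# three well-separated pure spectra plus random mixtures of them
rng = np.random.default_rng(3)
pure = np.eye(3) * 5.0
mix = rng.dirichlet(np.ones(3), size=20) @ pure
X = np.vstack([pure, mix])
```

A discrete swarm optimizer would search over index sets to minimise this objective; here the well-separated pure pixels score strictly better than a set containing a duplicated signature.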

  9. INFORMATION EXTRACTION IN TOMB PIT USING HYPERSPECTRAL DATA

    Directory of Open Access Journals (Sweden)

    X. Yang

    2018-04-01

Hyperspectral data are characterized by many continuous bands, a large data volume, redundancy, and non-destructive acquisition. These characteristics make it possible to use hyperspectral data to study cultural relics. In this paper, hyperspectral imaging technology is adopted to recognize the bottom images of an ancient tomb located in Shanxi province. There are many black remains on the bottom surface of the tomb, which are suspected to be meaningful texts or paintings. Firstly, the hyperspectral data are preprocessed to obtain the reflectance of the region of interest. For convenience of computation and storage, the original reflectance value is multiplied by 10000. Secondly, three methods are used to extract the symbols at the bottom of the ancient tomb. Finally, morphology is used to connect the symbols, and fifteen reference images are given. The results show that information extraction based on hyperspectral data can provide a better visual experience, which is beneficial to the study of ancient tombs by researchers, and provides some references for archaeological research findings.

  10. Information Extraction in Tomb Pit Using Hyperspectral Data

    Science.gov (United States)

    Yang, X.; Hou, M.; Lyu, S.; Ma, S.; Gao, Z.; Bai, S.; Gu, M.; Liu, Y.

    2018-04-01

Hyperspectral data are characterized by many continuous bands, a large data volume, redundancy, and non-destructive acquisition. These characteristics make it possible to use hyperspectral data to study cultural relics. In this paper, hyperspectral imaging technology is adopted to recognize the bottom images of an ancient tomb located in Shanxi province. There are many black remains on the bottom surface of the tomb, which are suspected to be meaningful texts or paintings. Firstly, the hyperspectral data are preprocessed to obtain the reflectance of the region of interest. For convenience of computation and storage, the original reflectance value is multiplied by 10000. Secondly, three methods are used to extract the symbols at the bottom of the ancient tomb. Finally, morphology is used to connect the symbols, and fifteen reference images are given. The results show that information extraction based on hyperspectral data can provide a better visual experience, which is beneficial to the study of ancient tombs by researchers, and provides some references for archaeological research findings.
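The morphological "connect the symbols" step corresponds to a binary closing (dilation followed by erosion). A minimal pure-NumPy sketch with a 3×3 structuring element follows; the paper does not specify its actual element or parameters, so these are assumptions:

```python
import numpy as np

def shift_stack(mask, pad_value):
    # all nine 3x3-neighbourhood views of the padded mask
    p = np.pad(mask, 1, constant_values=pad_value)
    h, w = mask.shape
    return np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])

def closing(mask):
    # dilation (neighbourhood max) then erosion (neighbourhood min)
    dilated = shift_stack(mask, 0).max(axis=0)
    return shift_stack(dilated, 0).min(axis=0)

# a broken stroke: two segments with a one-pixel gap at column 4;
# upstream, such a mask would come from thresholding the extracted
# symbol band (e.g. after the x10000 integer scaling of reflectance)
stroke = np.zeros((7, 9), dtype=int)
stroke[3, 1:4] = 1
stroke[3, 5:8] = 1
connected = closing(stroke)
```

Closing bridges one-pixel breaks in a stroke without thickening it overall, which is why it helps join fragmented symbols before visual inspection.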

  11. Electronic simulation of the supported liquid membrane in electromembrane extraction systems: Improvement of the extraction by precise periodical reversing of the field polarity

    International Nuclear Information System (INIS)

    Moazami, Hamid Reza; Nojavan, Saeed; Zahedi, Pegah; Davarani, Saied Saeed Hosseiny

    2014-01-01

Highlights: • A simple equivalent circuit has been proposed for a supported liquid membrane. • A dual charge transfer mechanism was proposed for electromembrane extraction. • An improvement was observed by precise periodical reversing of the field polarity. - Abstract: In order to better understand the limitations of the electromembrane extraction procedure, a simple equivalent circuit has been proposed for a supported liquid membrane, consisting of a resistor and a low-leakage capacitor in series. To verify the equivalent circuit, it was subjected to a simulated periodical polarity-changing potential and the resulting time variation of the current was compared with that of a real electromembrane extraction system. The results showed good agreement between the simulated current patterns and the real ones. In order to investigate the impact of various limiting factors, the corresponding values of the equivalent circuit were estimated for a real electromembrane extraction system and attributed to the physical parameters of the extraction system. A dual charge transfer mechanism was proposed for electromembrane extraction by combining the general migration equation with fundamental aspects derived from the simulation. The dual mechanism comprises a current-dependent contribution of the analyte to the total current and supports the possibility of improving the performance of an electromembrane extraction by applying an asymmetric polarity-changing potential. Optimization of the frequency and duty cycle of the asymmetric polarity-changing potential resulted in a higher recovery (2.17 times greater) in comparison with conventional electromembrane extraction. The simulation also provided more quantitative approaches toward investigating the mechanism of extraction and the contribution of different limiting factors in electromembrane extraction. Results showed that the buildup of the double layer is the main limiting factor and the Joule heating has
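A minimal numerical sketch of the series-RC equivalent circuit under periodic polarity reversal follows. The component values, source voltage, frequency, and duty cycle are illustrative assumptions, not values from the paper:

```python
import numpy as np

def mean_abs_current(duty, R=1e5, C=1e-6, V0=50.0, freq=10.0,
                     dt=1e-4, T=0.5):
    # forward-Euler integration of the series resistor/capacitor model:
    #   i = (V_src - V_c) / R,   dV_c/dt = i / C
    t = np.arange(0.0, T, dt)
    v_src = np.where((t * freq) % 1.0 < duty, V0, -V0)  # asymmetric square wave
    vc = 0.0                       # capacitor (double-layer) voltage
    current = np.empty_like(t)
    for k, v in enumerate(v_src):
        i = (v - vc) / R
        vc += i * dt / C
        current[k] = i
    return np.abs(current).mean()

dc = mean_abs_current(duty=1.0)    # constant polarity: current decays to zero
rev = mean_abs_current(duty=0.75)  # asymmetric polarity reversal
```

Reversing the polarity periodically keeps discharging the double-layer capacitance, so the time-averaged current stays well above the DC case, consistent with the improvement reported in the abstract.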

  12. Helichrysum and grapefruit extracts inhibit carbohydrate digestion and absorption, improving postprandial glucose levels and hyperinsulinemia in rats.

    Science.gov (United States)

    de la Garza, Ana Laura; Etxeberria, Usune; Lostao, María Pilar; San Román, Belén; Barrenetxe, Jaione; Martínez, J Alfredo; Milagro, Fermín I

    2013-12-11

Several plant extracts rich in flavonoids have been reported to improve hyperglycemia by inhibiting digestive enzyme activities and SGLT1-mediated glucose uptake. In this study, helichrysum (Helichrysum italicum) and grapefruit (Citrus × paradisi) extracts inhibited in vitro enzyme activities. The helichrysum extract showed higher inhibitory activity of α-glucosidase (IC50 = 0.19 mg/mL) than α-amylase (IC50 = 0.83 mg/mL), whereas the grapefruit extract presented similar α-amylase and α-glucosidase inhibitory activities (IC50 = 0.42 mg/mL and IC50 = 0.41 mg/mL, respectively). Both extracts reduced maltose digestion in noneverted intestinal sacs (57% with helichrysum and 46% with grapefruit). Likewise, both extracts inhibited SGLT1-mediated methylglucoside uptake in Caco-2 cells in the presence of Na(+) (56% of inhibition with helichrysum and 54% with grapefruit). In vivo studies demonstrated that helichrysum decreased blood glucose levels after an oral maltose tolerance test (OMTT), and both extracts reduced postprandial glucose levels after the oral starch tolerance test (OSTT). Finally, both extracts improved hyperinsulinemia (31% with helichrysum and 50% with grapefruit) and HOMA index (47% with helichrysum and 54% with grapefruit) in a dietary model of insulin resistance in rats. In summary, helichrysum and grapefruit extracts improve postprandial glycemic control in rats, possibly by inhibiting α-glucosidase and α-amylase enzyme activities and decreasing SGLT1-mediated glucose uptake.

  13. Assessing the efficacy of PEF treatments for improving polyphenol extraction during red wine vinifications.

    Science.gov (United States)

    Saldaña, Guillermo; Cebrián, Guillermo; Abenoza, María; Sánchez-Gimeno, Cristina; Álvarez, Ignacio; Raso, Javier

    2017-02-01

The influence of electric field intensity and pulse width on the improvement of the total polyphenol index (TPI) and colour intensity (CI) during extraction in an ethanolic solution (30%) and during fermentation-maceration has been investigated in different grape varieties: Grenache from two harvesting times, Syrah, and Tempranillo. The aim of this study was to develop a procedure to establish the PEF treatment conditions that cause enough permeabilization in the skin cells of different grape varieties to obtain a significant improvement in the vinification process, in terms of an increment in polyphenol content or a reduction of maceration time. Results obtained in this investigation indicate that extraction of polyphenols in a solution of ethanol (30%) for 2 h could be a suitable procedure to determine whether PEF technology is effective for improving extraction of polyphenols from the grapes during vinification and to determine the most suitable PEF treatment conditions for this objective. Improvement in the extraction during vinification was only observed with those grapes and under those treatment conditions in which the improvement of polyphenol extraction was higher than 40%. Another interesting observation from this research is the higher efficacy of PEF when treatments of the same duration are applied using longer pulses. Therefore, in a continuous process, where the flow processed is determined by the frequency applied by the PEF generator, it is possible to increase the processing capacity of the PEF installation. Benefits from PEF treatment of the grapes before the maceration step in the vinification process have been demonstrated. Nevertheless, the characteristics of the grapes may change in different vintages and grape varieties. Therefore, it is highly important to be able to determine the optimum PEF conditions in order to obtain the desired benefit during vinification. The rapid method developed permits determination of PEF process parameters before

  14. Leveraging management information in improving call centre productivity

    Directory of Open Access Journals (Sweden)

    Manthisana Mosese

    2016-04-01

Objectives: This research explored the use of management information and its impact on two fundamental functions, namely improving productivity without compromising the quality of service, in the call centre of a well-known South African fashion retailer, Edcon. Following the implementation of the call centre technology project, the research set out to determine how Edcon can transform their call centre to improve productivity and customer service through effective utilisation of their management information. Method: Internal documents and reports were analysed to provide the basis of evaluation between the measures of productivity prior to and post the implementation of a technology project at Edcon’s call centre. Semi-structured in-depth and group interviews were conducted to establish the importance and use of management information in improving productivity and customer service. Results: The results indicated that the availability of management information has indeed contributed to improved efficiency at the Edcon call centre. Although literature claims that there is a correlation between a call centre technology upgrade and improvement in performance, evident in the return on investment being realised within a year or two of implementation, it fell beyond the scope of this study to investigate the return on investment for Edcon’s call centre. Conclusion: Although Edcon has begun realising benefits in improved productivity in their call centre from the available management information, information will continue to play a crucial role in supporting management with informed decisions that will improve call centre operations.

  15. Dual-wavelength phase-shifting digital holography selectively extracting wavelength information from wavelength-multiplexed holograms.

    Science.gov (United States)

    Tahara, Tatsuki; Mori, Ryota; Kikunaga, Shuhei; Arai, Yasuhiko; Takaki, Yasuhiro

    2015-06-15

    Dual-wavelength phase-shifting digital holography that selectively extracts wavelength information from five wavelength-multiplexed holograms is presented. Specific phase shifts for respective wavelengths are introduced to remove the crosstalk components and extract only the object wave at the desired wavelength from the holograms. Object waves in multiple wavelengths are selectively extracted by utilizing 2π ambiguity and the subtraction procedures based on phase-shifting interferometry. Numerical results show the validity of the proposed technique. The proposed technique is also experimentally demonstrated.
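Behind the five-hologram scheme sits ordinary phase-shifting interferometry. A single-wavelength four-step sketch shows the subtraction-based extraction of the object wave; this is the textbook case, not the paper's dual-wavelength five-hologram algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)
# synthetic complex object wave (random amplitude and phase per pixel)
obj = rng.random((32, 32)) * np.exp(1j * 2 * np.pi * rng.random((32, 32)))
r = 1.0  # reference wave amplitude, phase 0 (assumed)

def hologram(delta):
    # intensity recorded with the reference phase shifted by delta
    return np.abs(obj + r * np.exp(1j * delta)) ** 2

I0, I1, I2, I3 = (hologram(k * np.pi / 2) for k in range(4))
# four-step subtraction removes the zero-order and twin-image terms:
# I0 - I2 = 4 r |O| cos(phi),  I1 - I3 = 4 r |O| sin(phi)
recovered = ((I0 - I2) + 1j * (I1 - I3)) / (4 * r)
```

The paper's method additionally multiplexes wavelengths in each recorded intensity and exploits 2π ambiguity in the phase shifts so that the analogous subtractions isolate the object wave at one chosen wavelength.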

  16. Extracting information from two-dimensional electrophoresis gels by partial least squares regression

    DEFF Research Database (Denmark)

    Jessen, Flemming; Lametsch, R.; Bendixen, E.

    2002-01-01

Two-dimensional gel electrophoresis (2-DE) produces large amounts of data and extraction of relevant information from these data demands a cautious and time consuming process of spot pattern matching between gels. The classical approach of data analysis is to detect protein markers that appear or disappear depending on the experimental conditions. Such biomarkers are found by comparing the relative volumes of individual spots in the individual gels. Multivariate statistical analysis and modelling of 2-DE data for comparison and classification is an alternative approach utilising the combination of all proteins/spots in the gels. In the present study it is demonstrated how information can be extracted by multivariate data analysis. The strategy is based on partial least squares regression followed by variable selection to find proteins that individually or in combination with other proteins vary...
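The partial-least-squares-plus-variable-selection strategy can be sketched with a bare NIPALS PLS1 implementation; ranking spots by the magnitude of their regression coefficients is a simple stand-in for the variable selection procedure, and the data sizes are assumed:

```python
import numpy as np

def pls1_coefficients(X, y, n_components=2):
    # NIPALS PLS1: X (gels x spot volumes), y (n,) condition response
    X = X - X.mean(axis=0)
    y = y - y.mean()
    Xk, yk = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk                 # weight vector for this component
        w /= np.linalg.norm(w)
        t = Xk @ w                    # scores
        p = Xk.T @ t / (t @ t)        # X loadings
        W.append(w); P.append(p); q.append(yk @ t / (t @ t))
        Xk = Xk - np.outer(t, p)      # deflate X and y
        yk = yk - q[-1] * t
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)   # regression coefficient per spot

rng = np.random.default_rng(5)
X = rng.standard_normal((50, 10))                    # 50 gels, 10 spot volumes
y = 2.0 * X[:, 3] + 0.01 * rng.standard_normal(50)   # spot 3 drives the response
beta = pls1_coefficients(X, y)
important_spot = int(np.argmax(np.abs(beta)))
```

The spot whose volume actually co-varies with the experimental condition receives the dominant coefficient, which is the basis for selecting candidate proteins.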

  17. Double-Grating Displacement Structure for Improving the Light Extraction Efficiency of LEDs

    Directory of Open Access Journals (Sweden)

    Zhibin Wang

    2012-01-01

Full Text Available To improve the light extraction efficiency of light-emitting diodes (LEDs), grating patterns were etched on GaN and silver film surfaces. The grating-patterned surface etching enabled the establishment of an LED model with a double-grating displacement structure based on the surface plasmon resonance principle. A numerical simulation was conducted using the finite difference time domain method. The influence of different GaN surface grating periods and silver film thicknesses on light extraction efficiency was analyzed. The light extraction efficiency of LEDs was highest when the grating period satisfied the grating coupling conditions. The wavelength at which this maximum occurred was also close to the light wavelength in the medium. The plasmon resonance frequencies on both sides of the silver film were affected by the film thickness. With increasing film thickness, the plasmon resonance frequencies tended toward the same value and light extraction efficiency reached its maximum. When the grating period of the GaN surface was 365 nm and the silver film thickness was 390 nm, light extraction efficiency reached a maximum of 55%.

  18. Academic Activities Transaction Extraction Based on Deep Belief Network

    Directory of Open Access Journals (Sweden)

    Xiangqian Wang

    2017-01-01

Full Text Available Extracting information about academic activity transactions from unstructured documents is a key problem in the analysis of academic behaviors of researchers. An academic activity transaction includes five elements: person, activities, objects, attributes, and time phrases. The traditional method of information extraction is to extract shallow text features and then to recognize advanced features from the text with supervision. Since the information processing at different levels is completed in steps, the errors generated at each step accumulate and affect the accuracy of the final results. Because the Deep Belief Network (DBN) model can learn advanced features from shallow text features automatically and without supervision, it is employed here to extract academic activity transactions. In addition, character-based features are used to describe the raw features of the named entities of academic activities, so as to improve the accuracy of named entity recognition. In this paper, the accuracy of academic activity extraction is compared using character-based and word-based feature vectors to express the text features, and against traditional text information extraction based on Conditional Random Fields. The results show that the DBN model is more effective for the extraction of academic activity transaction information.
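What a "character-based feature" for a named entity might look like can be sketched with a hashed character n-gram count vector; this is an illustration of the general idea only, not the paper's exact feature set (the boundary markers, n-gram size, and vector dimension are all assumptions):

```python
def char_ngram_features(token, n=3, dim=64):
    """Hash character n-grams of a token into a fixed-size count vector.

    Boundary markers '^' and '$' let the model distinguish prefixes and
    suffixes, which often signal entity types (e.g. honorifics, suffixes).
    """
    padded = "^" + token.lower() + "$"
    vec = [0] * dim
    for i in range(max(1, len(padded) - n + 1)):
        gram = padded[i:i + n]
        vec[hash(gram) % dim] += 1   # hashing trick: fixed dimensionality
    return vec

features = char_ngram_features("Workshop")
```

Character-level vectors like this capture sub-word regularities that word-level one-hot features miss, which is the motivation the abstract gives for character-based named entity features.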

  19. An Effective Fault Feature Extraction Method for Gas Turbine Generator System Diagnosis

    Directory of Open Access Journals (Sweden)

    Jian-Hua Zhong

    2016-01-01

Full Text Available Fault diagnosis is very important to maintain the operation of a gas turbine generator system (GTGS) in power plants, where any abnormal situation will interrupt the electricity supply. Fault diagnosis of the GTGS faces the main challenge that the acquired data, vibration or sound signals, contain a great deal of redundant information, which extends the fault identification time and degrades the diagnostic accuracy. To improve the diagnostic performance of the GTGS, an effective fault feature extraction framework is proposed to solve the problem of signal disorder and redundant information in the acquired signal. The proposed framework combines feature extraction with a general machine learning method, the support vector machine (SVM), to implement intelligent fault diagnosis. The feature extraction method adopts wavelet packet transform and time-domain statistical features to extract the features of faults from the vibration signal. To further reduce the redundant information in the extracted features, kernel principal component analysis is applied in this study. Experimental results indicate that the proposed feature extraction technique is an effective method to extract the useful features of faults, resulting in improved fault diagnosis performance for the GTGS.
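The time-domain statistical features mentioned above are typically quantities such as RMS, crest factor, and kurtosis computed over a vibration window. A minimal sketch (the specific feature set is chosen for illustration; the paper does not list its exact features here):

```python
import math

def time_domain_features(signal):
    """A few time-domain statistics commonly used as vibration fault features."""
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    peak = max(abs(x) for x in signal)
    var = sum((x - mean) ** 2 for x in signal) / n
    kurtosis = (sum((x - mean) ** 4 for x in signal) / n) / (var ** 2)
    return {
        "mean": mean,
        "rms": rms,
        "peak": peak,
        "crest_factor": peak / rms,   # impulsiveness indicator
        "kurtosis": kurtosis,         # ~3 for Gaussian noise, higher for impacts
    }

# Demo on a unit-amplitude square-wave window
feats = time_domain_features([1.0, -1.0, 1.0, -1.0])
```

In a pipeline like the one described, such statistics (per wavelet-packet sub-band) would be concatenated into a feature vector, compressed with kernel PCA, and classified by an SVM.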

  20. Hydro-alcoholic Extract of Commiphora mukul Gum Resin May Improve Cognitive Impairments in Diabetic Rats

    Directory of Open Access Journals (Sweden)

    Salehi

    2015-02-01

Full Text Available Background Diabetes causes cognitive impairment. Medicinal plants, through different mechanisms such as antioxidant activities, may improve diabetes and relieve its symptoms. Commiphora mukul (Burseraceae) has significant antioxidant activity. Objectives This study aimed to examine the effect of a hydro-alcoholic extract of C. mukul on passive-avoidance learning and memory in streptozotocin (STZ)-induced diabetic male rats. Materials and Methods Thirty-two adult male Wistar rats were randomly allocated to four groups: normal, diabetic, normal + extract of C. mukul and diabetic + extract of C. mukul groups, with free access to a regular rat diet. Diabetes was induced in male rats by a single intraperitoneal injection of 60 mg/kg STZ. After the confirmation of diabetes, 300 mg/kg C. mukul extract was orally administered to the extract-treated groups. Control groups received normal saline at the same time. Passive-avoidance memory was tested eight weeks after the STZ treatment, and blood glucose and body weight were measured in all groups at the beginning and end of the experiment. Results In the present study, diabetes decreased learning and memory. Although the administration of C. mukul extract did not affect the step-through latency (STLa) and the number of trials of the diabetic groups during the first acquisition trial, a significant decrease was observed in STLr and also a significant increase in time spent in the dark compartment (TDC) and number of crossings (NOC) in the retention test (after 24 and 48 hours). Although no significant difference was observed in body weight of the diabetic + extract of C. mukul (DE) and diabetic control (DC) groups, the plasma glucose of the DE group was significantly lower in comparison to the DC group. Conclusions Commiphora mukul extract can improve passive-avoidance learning and memory impairments in STZ-induced diabetic rats. This improvement may be due to the antioxidant, acetylcholinesterase inhibitory activity, anti

  1. Spearmint Extract Improves Working Memory in Men and Women with Age-Associated Memory Impairment.

    Science.gov (United States)

    Herrlinger, Kelli A; Nieman, Kristin M; Sanoshy, Kristen D; Fonseca, Brenda A; Lasrado, Joanne A; Schild, Arianne L; Maki, Kevin C; Wesnes, Keith A; Ceddia, Michael A

    2018-01-01

The purpose of this study was to investigate the effects of supplementation with a spearmint (Mentha spicata L.) extract, high in polyphenols including rosmarinic acid, on cognitive performance, sleep, and mood in individuals with age-associated memory impairment (AAMI). Subjects with AAMI (N = 90; 67% female; age = 59.4 ± 0.6 years) were randomly assigned (n = 30/group) to consume 900, 600, or 0 mg/day (two capsules, once daily) spearmint extract for 90 days, in this double-blind, placebo-controlled trial. Assessments were completed for cognition (days 0, 45, and 90), sleep (days 0 and 90), and mood (days 0 and 90) by using the Cognitive Drug Research (CDR) System™, Leeds Sleep Evaluation Questionnaire (LSEQ), and Profile of Mood States (POMS™), respectively. Quality of working memory and spatial working memory accuracy improved after supplementation with 900 mg/day spearmint extract by 15% (p = 0.0469) and 9% (p = 0.0456), respectively, versus placebo. Subjects consuming 900 mg/day spearmint extract reported improvement in their ability to fall asleep, relative to subjects consuming placebo (p = 0.0046). Overall treatment effects were evident for vigor-activity (p = 0.0399), total mood disturbance (p = 0.0374), and alertness and behavior following wakefulness (p = 0.0415), with trends observed for improvements after spearmint supplementation relative to placebo. These results suggest that the distinct spearmint extract may be a beneficial nutritional intervention for cognitive health in older subjects with AAMI.

  2. Chaotic spectra: How to extract dynamic information

    International Nuclear Information System (INIS)

    Taylor, H.S.; Gomez Llorente, J.M.; Zakrzewski, J.; Kulander, K.C.

    1988-10-01

Nonlinear dynamics is applied to chaotic unassignable atomic and molecular spectra with the aim of extracting detailed information about regular dynamic motions that exist over short intervals of time. It is shown how this motion can be extracted from high resolution spectra by doing low resolution studies or by Fourier transforming limited regions of the spectrum. These motions mimic those of periodic orbits (PO) and are inserts into the dominant chaotic motion. Considering these inserts and the PO as a dynamically decoupled region of space, resonant scattering theory and stabilization methods enable us to compute ladders of resonant states which interact with the chaotic quasi-continuum computed in principle from basis sets placed off the PO. The interaction of the resonances with the quasicontinuum explains the low resolution spectra seen in such experiments. It also allows one to associate low resolution features with a particular PO. The motion on the PO thereby supplies the molecular movements whose quantization causes the low resolution spectra. Characteristic properties of the periodic orbit based resonances are discussed. The method is illustrated on the photoabsorption spectrum of the hydrogen atom in a strong magnetic field and on the photodissociation spectrum of H3+. Other molecular systems which are currently under investigation using this formalism are also mentioned. 53 refs., 10 figs., 2 tabs

  3. Automated Extraction of Substance Use Information from Clinical Texts.

    Science.gov (United States)

    Wang, Yan; Chen, Elizabeth S; Pakhomov, Serguei; Arsoniadis, Elliot; Carter, Elizabeth W; Lindemann, Elizabeth; Sarkar, Indra Neil; Melton, Genevieve B

    2015-01-01

    Within clinical discourse, social history (SH) includes important information about substance use (alcohol, drug, and nicotine use) as key risk factors for disease, disability, and mortality. In this study, we developed and evaluated a natural language processing (NLP) system for automated detection of substance use statements and extraction of substance use attributes (e.g., temporal and status) based on Stanford Typed Dependencies. The developed NLP system leveraged linguistic resources and domain knowledge from a multi-site social history study, Propbank and the MiPACQ corpus. The system attained F-scores of 89.8, 84.6 and 89.4 respectively for alcohol, drug, and nicotine use statement detection, as well as average F-scores of 82.1, 90.3, 80.8, 88.7, 96.6, and 74.5 respectively for extraction of attributes. Our results suggest that NLP systems can achieve good performance when augmented with linguistic resources and domain knowledge when applied to a wide breadth of substance use free text clinical notes.
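The F-scores reported above are the harmonic mean of precision and recall. As a reminder of how such a score is derived from detection counts (the counts in the demo are made up, not taken from the study):

```python
def f_score(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall."""
    precision = tp / (tp + fp)   # fraction of detections that are correct
    recall = tp / (tp + fn)      # fraction of true statements detected
    return 2 * precision * recall / (precision + recall)

# e.g. 90 correct detections, 10 false alarms, 11 missed statements
score = f_score(90, 10, 11)
```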

  4. Improvement of Aluminum-Air Battery Performances by the Application of Flax Straw Extract.

    Science.gov (United States)

    Grishina, Ekaterina; Gelman, Danny; Belopukhov, Sergey; Starosvetsky, David; Groysman, Alec; Ein-Eli, Yair

    2016-08-23

The effect of a flax straw extract on Al corrosion inhibition in a strong alkaline solution was studied by using electrochemical measurements, weight-loss analysis, SEM, and FTIR spectroscopy. Flax straw extract was added (3 vol %) to the 5 m KOH solution to act as a mixed-type Al corrosion inhibitor. The electrochemistry of Al in the presence of the flax straw extract in the alkaline solution, the effect of the extract on the Al morphology and the surface films formed, and the corrosion inhibition mechanism are discussed. Finally, the Al-air battery discharge capacity recorded from a cell that used the flax straw extract in the alkaline electrolyte is substantially higher than that with only a pure alkaline electrolyte. This improved sustainability of the Al anode is attributed to Al corrosion inhibition and, consequently, to hydrogen evolution suppression. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Extractive Summarisation of Medical Documents

    Directory of Open Access Journals (Sweden)

    Abeed Sarker

    2012-09-01

Full Text Available Background Evidence Based Medicine (EBM) practice requires practitioners to extract evidence from published medical research when answering clinical queries. Due to the time-consuming nature of this practice, there is a strong motivation for systems that can automatically summarise medical documents and help practitioners find relevant information. Aim The aim of this work is to propose an automatic query-focused, extractive summarisation approach that selects informative sentences from medical documents. Method We use a corpus that is specifically designed for summarisation in the EBM domain. We use approximately half the corpus for deriving important statistics associated with the best possible extractive summaries. We take into account factors such as sentence position, length, sentence content, and the type of the query posed. Using the statistics from the first set, we evaluate our approach on a separate set. Evaluation of the quality of the generated summaries is performed automatically using ROUGE, which is a popular tool for evaluating automatic summaries. Results Our summarisation approach outperforms all baselines (best baseline score: 0.1594; our score: 0.1653). Further improvements are achieved when query types are taken into account. Conclusion The quality of extractive summarisation in the medical domain can be significantly improved by incorporating domain knowledge and statistics derived from a specialised corpus. Such techniques can therefore be applied for content selection in end-to-end summarisation systems.
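A toy version of query-focused sentence scoring, in the spirit of the position and content factors listed above (the weights and scoring formula are invented for illustration, not the paper's statistics):

```python
def score_sentence(index, words, query_terms):
    """Toy query-focused score: query-term overlap plus a position prior."""
    overlap = sum(1 for w in words if w.lower().strip(".,") in query_terms)
    position_prior = 1.0 / (index + 1)   # earlier sentences slightly favored
    return overlap + position_prior

def extractive_summary(sentences, query, k=1):
    """Pick the k highest-scoring sentences, returned in document order."""
    query_terms = set(query.lower().split())
    ranked = sorted(range(len(sentences)),
                    key=lambda i: -score_sentence(i, sentences[i].split(),
                                                  query_terms))
    return [sentences[i] for i in sorted(ranked[:k])]

doc = ["Aspirin is widely used.",
       "A randomized trial found aspirin reduced stroke risk in adults.",
       "Funding sources are listed in the appendix."]
summary = extractive_summary(doc, "does aspirin reduce stroke risk", k=1)
```

A real EBM system would replace the hand-set priors with statistics learned from the derivation half of the corpus and evaluate the selections with ROUGE.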

  6. Three-dimensional information extraction from GaoFen-1 satellite images for landslide monitoring

    Science.gov (United States)

    Wang, Shixin; Yang, Baolin; Zhou, Yi; Wang, Futao; Zhang, Rui; Zhao, Qing

    2018-05-01

To more efficiently use GaoFen-1 (GF-1) satellite images for landslide emergency monitoring, a Digital Surface Model (DSM) can be generated from GF-1 across-track stereo image pairs to build a terrain dataset. This study proposes a landslide 3D information extraction method based on the terrain changes of slope objects. The slope objects are mergences of segmented image objects which have similar aspects; and the terrain changes are calculated from the post-disaster Digital Elevation Model (DEM) from GF-1 and the pre-disaster DEM from GDEM V2. A high mountain landslide that occurred in Wenchuan County, Sichuan Province is used to conduct a 3D information extraction test. The extracted total area of the landslide is 22.58 ha; the displaced earth volume is 652,100 m3; and the average sliding direction is 263.83°. Their accuracies are 0.89, 0.87, and 0.95, respectively. Thus, the proposed method expands the application of GF-1 satellite images to the field of landslide emergency monitoring.
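The displaced-volume computation implied here is a pre/post DEM difference summed over the cells of a slope object. A minimal grid-based sketch on a toy 2x2 grid (cell size, mask, and elevations are all invented for the demo):

```python
def displaced_volume(pre_dem, post_dem, mask, cell_area):
    """Sum elevation change times cell area over a slope object.

    Returns (gained, lost) volumes, e.g. deposition vs. evacuated material.
    """
    gained, lost = 0.0, 0.0
    for i, row in enumerate(mask):
        for j, inside in enumerate(row):
            if inside:
                dz = post_dem[i][j] - pre_dem[i][j]
                if dz > 0:
                    gained += dz * cell_area
                else:
                    lost += -dz * cell_area
    return gained, lost

# 10 m cells (100 m^2 each): one cell loses 2 m of material, one gains 1 m
pre  = [[10.0, 10.0], [10.0, 10.0]]
post = [[ 8.0, 10.0], [11.0, 10.0]]
mask = [[True, True], [True, True]]
gained, lost = displaced_volume(pre, post, mask, 100.0)
```

In the study the mask would come from merging segmented image objects with similar aspect, and the two DEMs from GF-1 stereo processing and GDEM V2 respectively.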

  7. Improving mental task classification by adding high frequency band information.

    Science.gov (United States)

    Zhang, Li; He, Wei; He, Chuanhong; Wang, Ping

    2010-02-01

Features extracted from the delta, theta, alpha, beta and gamma bands spanning the low frequency range are commonly used to classify scalp-recorded electroencephalogram (EEG) signals for designing brain-computer interfaces (BCIs), and higher frequencies are often neglected as noise. In this paper, we implemented an experimental validation to demonstrate that high frequency components can provide helpful information for improving the performance of the mental task based BCI. Electromyography (EMG) and electrooculography (EOG) artifacts were removed by using blind source separation (BSS) techniques. Frequency band powers and asymmetry ratios from the high frequency band (40-100 Hz), together with those from the lower frequency bands, were used to represent EEG features. Finally, Fisher discriminant analysis (FDA) combined with Mahalanobis distance was used as the classifier. In this study, four types of classifications were performed using EEG signals recorded from four subjects during five mental tasks. We obtained significantly higher classification accuracy by adding the high frequency band features compared to using the low frequency bands alone, which demonstrates that the information in high frequency components of scalp-recorded EEG is valuable for the mental task based BCI.
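Band-power and asymmetry-ratio features of the kind described can be sketched with a naive DFT; this is illustrative only (real pipelines use FFTs, windowing, and the artifact removal described above, and the test signal here is a synthetic 10 Hz sine):

```python
import cmath
import math

def band_power(signal, fs, f_lo, f_hi):
    """Sum of DFT power over bins whose frequency lies in [f_lo, f_hi] Hz."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2 + 1):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            X = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
            power += abs(X) ** 2
    return power

def asymmetry_ratio(p_left, p_right):
    """Normalized left/right difference of band power between paired channels."""
    return (p_left - p_right) / (p_left + p_right)

# A pure 10 Hz oscillation concentrates power in the alpha band (8-13 Hz)
fs, n = 100.0, 100
wave = [math.sin(2 * math.pi * 10.0 * t / fs) for t in range(n)]
alpha = band_power(wave, fs, 8.0, 13.0)
gamma = band_power(wave, fs, 40.0, 50.0)
```

The paper's point is that the analogous features computed over 40-100 Hz, usually discarded, still carry task-discriminative information.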

  8. Lithium NLP: A System for Rich Information Extraction from Noisy User Generated Text on Social Media

    OpenAIRE

    Bhargava, Preeti; Spasojevic, Nemanja; Hu, Guoning

    2017-01-01

In this paper, we describe the Lithium Natural Language Processing (NLP) system - a resource-constrained, high-throughput and language-agnostic system for information extraction from noisy user generated text on social media. Lithium NLP extracts a rich set of information including entities, topics, hashtags and sentiment from text. We discuss several real world applications of the system currently incorporated in Lithium products. We also compare our system with existing commercial and acad...

  9. Protocole of a controlled before-after evaluation of a national health information technology-based program to improve healthcare coordination and access to information.

    Science.gov (United States)

    Saillour-Glénisson, Florence; Duhamel, Sylvie; Fourneyron, Emmanuelle; Huiart, Laetitia; Joseph, Jean Philippe; Langlois, Emmanuel; Pincemail, Stephane; Ramel, Viviane; Renaud, Thomas; Roberts, Tamara; Sibé, Matthieu; Thiessard, Frantz; Wittwer, Jerome; Salmi, Louis Rachid

    2017-04-21

    Improvement of coordination of all health and social care actors in the patient pathways is an important issue in many countries. Health Information (HI) technology has been considered as a potentially effective answer to this issue. The French Health Ministry first funded the development of five TSN ("Territoire de Soins Numérique"/Digital health territories) projects, aiming at improving healthcare coordination and access to information for healthcare providers, patients and the population, and at improving healthcare professionals work organization. The French Health Ministry then launched a call for grant to fund one research project consisting in evaluating the TSN projects implementation and impact and in developing a model for HI technology evaluation. EvaTSN is mainly based on a controlled before-after study design. Data collection covers three periods: before TSN program implementation, during early TSN program implementation and at late TSN program implementation, in the five TSN projects' territories and in five comparison territories. Three populations will be considered: "TSN-targeted people" (healthcare system users and people having characteristics targeted by the TSN projects), "TSN patient users" (people included in TSN experimentations or using particular services) and "TSN professional users" (healthcare professionals involved in TSN projects). Several samples will be made in each population depending on the objective, axis and stage of the study. Four types of data sources are considered: 1) extractions from the French National Heath Insurance Database (SNIIRAM) and the French Autonomy Personalized Allowance database, 2) Ad hoc surveys collecting information on knowledge of TSN projects, TSN program use, ease of use, satisfaction and understanding, TSN pathway experience and appropriateness of hospital admissions, 3) qualitative analyses using semi-directive interviews and focus groups and document analyses and 4) extractions of TSN

  10. System and method for extracting physiological information from remotely detected electromagnetic radiation

    NARCIS (Netherlands)

    2016-01-01

    The present invention relates to a device and a method for extracting physiological information indicative of at least one health symptom from remotely detected electromagnetic radiation. The device comprises an interface (20) for receiving a data stream comprising remotely detected image data

  11. System and method for extracting physiological information from remotely detected electromagnetic radiation

    NARCIS (Netherlands)

    2015-01-01

    The present invention relates to a device and a method for extracting physiological information indicative of at least one health symptom from remotely detected electromagnetic radiation. The device comprises an interface (20) for receiving a data stream comprising remotely detected image data

  12. Policosanol extraction from beeswax and improvement of the purity

    Directory of Open Access Journals (Sweden)

    Srisaipet Anakhaorn

    2017-01-01

Full Text Available Policosanol is a mixture of high molecular weight aliphatic long chain alcohols (20-36 carbon atoms). It has been used in pharmaceutical compositions and food supplements. This research aimed to isolate and improve the purity of policosanol extracted from beeswax. Triglycerides and other impurities were eliminated from beeswax by refluxing with hexane followed by isopropanol. The purified beeswax was hydrolyzed by refluxing with 1 M ethanolic NaOH for 2 hours. Purification of policosanol was performed by extracting the hydrolyzed product with acetone at 50-60 °C for 3 hours, and it was stored at 4 °C for precipitation. The precipitate was refluxed with heptanes followed by washing with hot water. The heptanes layer was kept for policosanol precipitation at 4 °C. The purity of policosanol was confirmed by TLC and high performance liquid chromatography (HPLC). The yield of purified policosanol was 13.23-13.89 %.

  13. Cinnamon extract improves insulin sensitivity in the brain and lowers liver fat in mouse models of obesity.

    Science.gov (United States)

    Sartorius, Tina; Peter, Andreas; Schulz, Nadja; Drescher, Andrea; Bergheim, Ina; Machann, Jürgen; Schick, Fritz; Siegel-Axel, Dorothea; Schürmann, Annette; Weigert, Cora; Häring, Hans-Ulrich; Hennige, Anita M

    2014-01-01

Treatment of diabetic subjects with cinnamon demonstrated an improvement in blood glucose concentrations and insulin sensitivity, but the underlying mechanisms remained unclear. This work intends to elucidate the impact of cinnamon effects on the brain by using isolated astrocytes, and an obese and diabetic mouse model. Cinnamon components (eugenol, cinnamaldehyde) were added to astrocytes and liver cells to measure insulin signaling and glycogen synthesis. Ob/ob mice were supplemented with extract from Cinnamomum zeylanicum for 6 weeks and cortical brain activity, locomotion and energy expenditure were evaluated. Insulin action was determined in brain and liver tissues. Treatment of primary astrocytes with eugenol promoted glycogen synthesis, whereas the effect of cinnamaldehyde was attenuated. In terms of brain function in vivo, cinnamon extract improved insulin sensitivity and brain activity in ob/ob mice, and the insulin-stimulated locomotor activity was improved. In addition, fasting blood glucose levels and glucose tolerance were greatly improved in ob/ob mice due to cinnamon extracts, while insulin secretion was unaltered. This corresponded with lower triglyceride and increased liver glycogen content and improved insulin action in liver tissues. In vitro, Fao cells exposed to cinnamon exhibited no change in insulin action. Together, cinnamon extract improved insulin action in the brain as well as brain activity and locomotion. This specific effect may represent an important central feature of cinnamon in improving insulin action in the brain, and mediates metabolic alterations in the periphery to decrease liver fat and improve glucose homeostasis.

  14. Off-flavors removal and storage improvement of mackerel viscera by supercritical carbon dioxide extraction.

    Science.gov (United States)

    Lee, Min Kyung; Uddin, M Salim; Chun, Byung Soo

    2008-07-01

The oil in mackerel viscera was extracted by supercritical carbon dioxide (SCO2) in a semi-batch flow extraction process, and the fatty acid composition of the oil was identified. The removal of off-flavors from mackerel viscera and the improvement of oil storage stability were also examined. With increasing pressure and temperature, the extracted quantity increased. The maximum yield of oil obtained from mackerel viscera by SCO2 extraction was 118 mg g(-1) (based on dry weight of the freeze-dried raw anchovy) at 50 degrees C and 350 bar, and the extracted oil contained high concentrations of EPA and DHA. It was also found that autoxidation of the oils obtained by SCO2 extraction occurred very slowly compared to oils obtained by organic solvent extraction. The off-flavors in the powder after SCO2 extraction were significantly removed; in particular, trimethylamine, a compound with a negative influence on the products, was completely removed. Other significant off-flavors, such as aldehydes, sulfur-containing compounds, ketones, acids and alcohols, were also removed by the extraction.

  15. Competitiveness Improvement Project Informational Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Sinclair, Karin C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Preus, Robert W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dana, Scott [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Van Dam, Jeroen J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Damiani, Rick R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Jackson, Kyndall R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Baring-Gould, Edward I [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Jain, Anant [Intertek

    2018-02-27

    This presentation was given at the Competitiveness Improvement Project (CIP) Informational Workshop on December 6, 2017. Topics covered during the workshop include an overview of the CIP, past projects, scoring criteria, technical support opportunities, certification body requirements, standards applicable to distributed wind generators, information on the National Electric Code, certification testing requirements, test site requirements, National Environmental Policy Act, design review, levelized cost of energy, procurement/contracting, project management/deliverables, and outreach materials.

  16. Improved Extraction and Quality Characterization of Water-Soluble Polysaccharide from Gamma-Irradiated Lentinus edodes.

    Science.gov (United States)

    Akram, Kashif; Shahbaz, Hafiz Muhammad; Kim, Gui-Ran; Farooq, Umar; Kwon, Joong-Ho

    2017-02-01

Gamma irradiation was applied to improve the extraction of water-soluble polysaccharides (WSPs) from dried Lentinus edodes. Irradiation provided a dose-dependent increase in the extraction yield (0 kGy, 2.01%; 7.5 kGy, 4.03%; 15 kGy, 7.17%) and purity (0 kGy, 78.8%; 7.5 kGy, 83.1%; 15 kGy, 85.6%) of the WSPs from hot-water extraction. The effect of irradiation was evident in the degraded microstructures and reduced molecular weights of the WSPs. However, nuclear magnetic resonance, Fourier-transform infrared, and X-ray diffraction spectroscopic analyses showed comparable structures of WSPs from nonirradiated and irradiated samples. UV-visible spectra showed a dose-dependent decline in intensity, but an improvement in the thermal properties of the WSPs from the irradiated mushroom samples was observed. © 2017 Institute of Food Technologists®.

  17. Extracting of implicit information in English advertising texts with phonetic and lexical-morphological means

    Directory of Open Access Journals (Sweden)

    Traikovskaya Natalya Petrovna

    2015-12-01

Full Text Available The article deals with the phonetic and lexical-morphological language means involved in extracting implicit information from English-language advertising texts aimed at men and women. Phonetic means of the English language do not by themselves form a basis for implying information in advertising texts. Lexical and morphological means act as markers of relevant information and as activators of implicit information in advertising texts.

  18. Web multimedia information retrieval using improved Bayesian algorithm.

    Science.gov (United States)

    Yu, Yi-Jun; Chen, Chun; Yu, Yi-Min; Lin, Huai-Zhong

    2003-01-01

The main thrust of this paper is the application of a novel data mining approach to the log of users' feedback to improve web multimedia information retrieval performance. A user space model was constructed based on data mining and then integrated into the original information space model to improve the accuracy of the new information space model. It can remove clutter and irrelevant text information and help to eliminate the mismatch between the page author's expression and the user's understanding and expectation. The user space model was also utilized to discover the relationship between high-level and low-level features for assigning weights. The authors propose an improved Bayesian algorithm for data mining. Experiments showed that the proposed algorithm is efficient.
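One simple way to build a "user space model" from a feedback log is a unigram naive Bayes relevance score over terms the user marked relevant or irrelevant. The sketch below is an assumption-laden illustration of that general idea, not the authors' improved algorithm (the log entries and smoothing scheme are made up):

```python
import math
from collections import Counter

def train_from_feedback(feedback_log):
    """Count term occurrences in documents marked relevant vs. irrelevant."""
    rel, irr = Counter(), Counter()
    for text, relevant in feedback_log:
        (rel if relevant else irr).update(text.lower().split())
    return rel, irr

def relevance_score(text, rel, irr):
    """Log-odds of relevance under a unigram model with add-one smoothing."""
    rel_total = sum(rel.values()) + len(rel) + 1
    irr_total = sum(irr.values()) + len(irr) + 1
    score = 0.0
    for w in text.lower().split():
        score += math.log((rel[w] + 1) / rel_total)
        score -= math.log((irr[w] + 1) / irr_total)
    return score

# Toy feedback log: the user wants ecology pages, not diving-tour pages
log = [("tiger habitat conservation", True),
       ("tiger shark diving tours", False)]
rel, irr = train_from_feedback(log)
s_eco = relevance_score("tiger conservation", rel, irr)
s_dive = relevance_score("shark diving", rel, irr)
```

Scores like these can then re-weight the original information space model so that pages matching the user's demonstrated interest rank higher.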

  19. Natural Arctium lappa fruit extract improves the clinical signs of aging skin.

    Science.gov (United States)

    Knott, Anja; Reuschlein, Katja; Mielke, Heiko; Wensorra, Ursula; Mummert, Christopher; Koop, Urte; Kausch, Martina; Kolbe, Ludger; Peters, Nils; Stäb, Franz; Wenck, Horst; Gallinat, Stefan

    2008-12-01

    Subclinical, chronic tissue inflammation involving the generation of cytokines (e.g., interleukin-6 and tumor necrosis factor-alpha) might contribute to the cutaneous aging process. This study aims to screen for an active ingredient with anti-inflammatory (i.e., reduction of interleukin-6 and tumor necrosis factor-alpha) and matrix-stimulating efficacy which improves the clinical signs of skin aging in vivo. In vitro studies with pure Arctiin were performed investigating the inhibition of cytokine induction and stimulation of collagen neo-synthesis. In vivo home-in-use studies using an Arctium lappa fruit extract-containing formulation were carried out to determine procollagen and hyaluronan synthesis, hyaluronan synthase-2 gene expression, and reduction of wrinkle volume after treatment. In vitro studies on human dermal fibroblasts and monocyte-derived dendritic cells supplemented with pure Arctiin showed relative to untreated control cells a stimulation of collagen synthesis and a decrease in interleukin-6 and tumor necrosis factor-alpha concentration, respectively. In addition, topical in vivo application of an A. lappa fruit extract-containing formulation for 12 weeks significantly stimulated procollagen synthesis and increased hyaluronan synthase-2 expression as well as hyaluronan levels compared to vehicle-treated control areas. Similarly, after a 4-week treatment with an A. lappa fruit extract-containing formulation, wrinkle volume in the crow's feet area was significantly reduced as compared to treatment with the vehicle. Our data show that topical treatment with a natural A. lappa fruit extract significantly improves the metabolism of the dermal extracellular matrix and leads to a visible wrinkle reduction in vivo. In conclusion, A. lappa fruit extract represents a targeted means to regenerate dermal structures and, thus, offers an effective treatment option for mature skin.

  20. Methods from Information Extraction from LIDAR Intensity Data and Multispectral LIDAR Technology

    Science.gov (United States)

    Scaioni, M.; Höfle, B.; Baungarten Kersting, A. P.; Barazzetti, L.; Previtali, M.; Wujanz, D.

    2018-04-01

    LiDAR is a consolidated technology for topographic mapping and 3D reconstruction, and is implemented on several platforms. On the other hand, the exploitation of the geometric information has been complemented by the use of laser intensity, which may provide additional data for multiple purposes. This option has been reinforced by the availability of sensors working at different wavelengths, which are thus able to provide additional information for the classification of surfaces and objects. Several applications of monochromatic and multi-spectral LiDAR data have already been developed in different fields: geosciences, agriculture, forestry, building and cultural heritage. The use of intensity data to extract measures of point-cloud quality has also been developed. The paper gives an overview of the state of the art of these techniques and presents modern technologies for the acquisition of multispectral LiDAR data. In addition, the ISPRS WG III/5 on `Information Extraction from LiDAR Intensity Data' has collected and made available a few open data sets to support scholars doing research in this field. This service is presented, and the data sets delivered so far are described.

  1. METHODS FROM INFORMATION EXTRACTION FROM LIDAR INTENSITY DATA AND MULTISPECTRAL LIDAR TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    M. Scaioni

    2018-04-01

    Full Text Available LiDAR is a consolidated technology for topographic mapping and 3D reconstruction, and is implemented on several platforms. On the other hand, the exploitation of the geometric information has been complemented by the use of laser intensity, which may provide additional data for multiple purposes. This option has been reinforced by the availability of sensors working at different wavelengths, which are thus able to provide additional information for the classification of surfaces and objects. Several applications of monochromatic and multi-spectral LiDAR data have already been developed in different fields: geosciences, agriculture, forestry, building and cultural heritage. The use of intensity data to extract measures of point-cloud quality has also been developed. The paper gives an overview of the state of the art of these techniques and presents modern technologies for the acquisition of multispectral LiDAR data. In addition, the ISPRS WG III/5 on ‘Information Extraction from LiDAR Intensity Data’ has collected and made available a few open data sets to support scholars doing research in this field. This service is presented, and the data sets delivered so far are described.

  2. Possibilities for using plant extracts added to ruminant feed aimed at improving production results

    Directory of Open Access Journals (Sweden)

    Grdović Svetlana

    2010-01-01

    Full Text Available The use of plant extracts with the objective of improving production results and the quality of food articles of animal origin is an area of increasing scientific importance. Numerous investigations carried out so far on ruminants and other species of domestic animals have examined specific bioactive plant substances. The results of these investigations have demonstrated a positive influence on production results. A large number of data indicate that plant extracts added to animal feed contribute to increased overall productivity. Furthermore, plant extracts as feed additives also have a positive effect on the health condition of the animals. A large number of plants have characteristics which potentially improve feed consumption, digestibility and conversion, as well as growth. The effects of different plant extracts have been examined on feed consumption, wool growth, growth and composition of the trunk, milk production, reproductive parameters, agents for wool shearing, bloat prevention, methane production, and the influence of plants on curbing nematode infestations of ruminants. This work presents a review of scientific investigations of different plant species and their effects on the production characteristics of ruminants.

  3. Extraction of ultrashort DNA molecules from herbarium specimens.

    Science.gov (United States)

    Gutaker, Rafal M; Reiter, Ella; Furtwängler, Anja; Schuenemann, Verena J; Burbano, Hernán A

    2017-02-01

    DNA extracted from herbarium specimens is highly fragmented; therefore, it is crucial to use extraction protocols that retrieve short DNA molecules. Improvements in extraction and DNA library preparation protocols for animal remains have allowed efficient retrieval of molecules shorter than 50 bp. Here, we applied these improvements to DNA extraction protocols for herbarium specimens and evaluated extraction performance by shotgun sequencing, which allows an accurate estimation of the distribution of DNA fragment lengths. Extraction with N-phenacylthiazolium bromide (PTB) buffer decreased median fragment length by 35% when compared with cetyl-trimethyl ammonium bromide (CTAB); modifying the binding conditions of DNA to silica allowed for an additional decrease of 10%. We did not observe a further decrease in length for single-stranded DNA (ssDNA) versus double-stranded DNA (dsDNA) library preparation methods. Our protocol enables the retrieval of ultrashort molecules from herbarium specimens, which will help to unlock the genetic information stored in herbaria.

  4. Automated extraction of chemical structure information from digital raster images

    Directory of Open Access Journals (Sweden)

    Shedden Kerby A

    2009-02-01

    Full Text Available Abstract Background To search for chemical structures in research articles, diagrams or text representing molecules need to be translated into a standard chemical file format compatible with cheminformatic search engines. Nevertheless, chemical information contained in research articles is often referenced as analog diagrams of chemical structures embedded in digital raster images. To automate analog-to-digital conversion of chemical structure diagrams in scientific research articles, several software systems have been developed, but their algorithmic performance and utility in cheminformatic research have not been investigated. Results This paper aims to provide critical reviews of these systems and also reports our recent development of ChemReader, a fully automated tool for extracting chemical structure diagrams from research articles and converting them into standard, searchable chemical file formats. Basic algorithms for recognizing lines and letters representing bonds and atoms in chemical structure diagrams can be independently run in sequence from a graphical user interface, and the algorithm parameters can be readily changed, to facilitate additional development specifically tailored to a chemical database annotation scheme. Compared with existing software programs such as OSRA, Kekule, and CLiDE, our results indicate that ChemReader outperforms other software systems on several sets of sample images from diverse sources in terms of the rate of correct outputs and the accuracy of extracting molecular substructure patterns. Conclusion The availability of ChemReader as a cheminformatic tool for extracting chemical structure information from digital raster images allows research and development groups to enrich their chemical structure databases by annotating the entries with published research articles. Based on its stable performance and high accuracy, ChemReader may be sufficiently accurate for annotating the chemical database with links

  5. Improved hybrid information filtering based on limited time window

    Science.gov (United States)

    Song, Wen-Jun; Guo, Qiang; Liu, Jian-Guo

    2014-12-01

    Adopting users' entire collected information, the hybrid information filtering of heat conduction and mass diffusion (HHM) (Zhou et al., 2010) was proposed to resolve the apparent diversity-accuracy dilemma. Since recent behaviors are more effective at capturing users' potential interests, we present an improved hybrid information filtering approach that adopts only partial, recent information. We expand the time window to generate a series of training sets, each of which is treated as known information to predict the future links verified by the testing set. The experimental results on the benchmark dataset Netflix indicate that by using only approximately 31% of the recent rating records, the accuracy could be improved by an average of 4.22% and the diversity by 13.74%. In addition, the performance on the MovieLens dataset could be preserved by considering approximately 60% of the recent records. Furthermore, we find that the improved algorithm is effective for the cold-start problem. This work could improve information filtering performance and shorten the computational time.
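
The HHM method interpolates between two elementary diffusion processes on the user-item bipartite graph. Below is a minimal NumPy sketch of the hybrid scoring in matrix form (following Zhou et al., 2010; the function and variable names are ours, not the paper's). Restricting the rating matrix to a recent time window, as the abstract describes, amounts to slicing out the recent rows before calling it.

```python
import numpy as np

def hybrid_scores(A, lam=0.5):
    """Hybrid heat-conduction / mass-diffusion item scores.

    A: binary user-item matrix (n_users x n_items).
    lam interpolates between heat conduction (lam=0) and
    mass diffusion (lam=1)."""
    k_item = A.sum(axis=0).astype(float)          # item degrees
    k_user = A.sum(axis=1).astype(float)          # user degrees
    k_item = np.where(k_item == 0, 1.0, k_item)   # guard isolated items
    k_user = np.where(k_user == 0, 1.0, k_user)   # guard isolated users
    # overlap[a, b] = sum over users u of A[u, a] * A[u, b] / k_user[u]
    overlap = (A / k_user[:, None]).T @ A
    # hybrid normalisation: k_a^(1-lam) * k_b^lam in the denominator
    W = overlap / (k_item[:, None] ** (1 - lam) * k_item[None, :] ** lam)
    # score of item a for user u: sum over b of W[a, b] * A[u, b]
    return A @ W.T

# toy example: 3 users, 4 items
A = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [1., 0., 1., 1.]])
scores = hybrid_scores(A, lam=0.5)   # shape (3, 4); rank unrated items by score
```

Items a user has not yet collected are then ranked by their scores to form the recommendation list; sweeping `lam` trades accuracy against diversity.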

  6. Efficacy and Safety of Ashwagandha (Withania somnifera (L.) Dunal) Root Extract in Improving Memory and Cognitive Functions.

    Science.gov (United States)

    Choudhary, Dnyanraj; Bhattacharyya, Sauvik; Bose, Sekhar

    2017-11-02

    Cognitive decline is often associated with the aging process. Ashwagandha (Withania somnifera (L.) Dunal) has long been used in the traditional Ayurvedic system of medicine to enhance memory and improve cognition. This pilot study was designed to evaluate the efficacy and safety of ashwagandha (Withania somnifera (L.) Dunal) in improving memory and cognitive functioning in adults with mild cognitive impairment (MCI). A prospective, randomized, double-blind, placebo-controlled study was conducted in 50 adults. Subjects were treated with either ashwagandha-root extract (300 mg twice daily) or placebo for eight weeks. After eight weeks of study, the ashwagandha treatment group demonstrated significant improvements compared with the placebo group in both immediate and general memory, as evidenced by Wechsler Memory Scale III subtest scores for logical memory I (p = 0.007), verbal paired associates I (p = 0.042), faces I (p = 0.020), family pictures I (p = 0.006), logical memory II (p = 0.006), verbal paired associates II (p = 0.031), faces II (p = 0.014), and family pictures II (p = 0.006). The treatment group also demonstrated significantly greater improvement in executive function, sustained attention, and information-processing speed as indicated by scores on the Eriksen Flanker task (p = 0.002), Wisconsin Card Sort test (p = 0.014), Trail-Making test part A (p = 0.006), and the Mackworth Clock test (p = 0.009). Ashwagandha may be effective in enhancing both immediate and general memory in people with MCI as well as improving executive function, attention, and information processing speed.

  7. Improved Light Extraction Efficiency by Photonic Crystal Arrays on Transparent Contact Layer Using Focused Ion Beams

    International Nuclear Information System (INIS)

    Wu, G.M.; Tsai, B.H.; Kung, S.F.; Wu, C.F.

    2011-01-01

    Nitride-based thin-film materials have become increasingly important for high-brightness light-emitting diode applications, where improvements in light extraction and lower power consumption are highly desired. Although the internal quantum efficiency of GaN-based LEDs has become relatively high, only a small fraction of the light can be extracted. In this study, a new design of two-dimensional photonic crystal array has been prepared by focused ion beam on the top transparent contact layer of indium-tin oxide film to improve the light extraction efficiency. The acceleration voltage of the Ga dual-beam nanotechnology system SMI 3050 was 30 kV and the ion beam current was 100 pA. The cylindrical air holes had a diameter of 150 nm and a depth of 100 nm. Micro-photoluminescence analysis showed that the light output intensity could reach 1.5 times that of the non-patterned control sample. In addition, the structural damage from focused ion beam drilling of the GaN step could be eliminated. The excellent I-V characteristics were maintained, while the external light extraction efficiency was still improved for the LED devices. (author)

  8. Improved light extraction from white organic light-emitting devices using a binary random phase array

    International Nuclear Information System (INIS)

    Inada, Yasuhisa; Nishiwaki, Seiji; Hirasawa, Taku; Nakamura, Yoshitaka; Hashiya, Akira; Wakabayashi, Shin-ichi; Suzuki, Masa-aki; Matsuzaki, Jumpei

    2014-01-01

    We have developed a binary random phase array (BRPA) to improve the light extraction performance of white organic light-emitting devices (WOLEDs). We demonstrated that the scattering of incoming light can be controlled by employing diffraction optics to modify the structural parameters of the BRPA. Applying a BRPA to the substrate of the WOLED leads to enhanced extraction efficiency and suppression of angle-dependent color changes. Our systematic study clarifies the effect of scattering on the light extraction of WOLEDs

  9. Improved light extraction from white organic light-emitting devices using a binary random phase array

    Energy Technology Data Exchange (ETDEWEB)

    Inada, Yasuhisa, E-mail: inada.yasuhisa@jp.panasonic.com; Nishiwaki, Seiji; Hirasawa, Taku; Nakamura, Yoshitaka; Hashiya, Akira; Wakabayashi, Shin-ichi; Suzuki, Masa-aki [R and D Division, Panasonic Corporation, 1006 Kadoma, Kadoma City, Osaka 571-8501 (Japan); Matsuzaki, Jumpei [Device Development Center, Eco Solutions Company, Panasonic Corporation, 1048 Kadoma, Osaka 571-8686 Japan (Japan)

    2014-02-10

    We have developed a binary random phase array (BRPA) to improve the light extraction performance of white organic light-emitting devices (WOLEDs). We demonstrated that the scattering of incoming light can be controlled by employing diffraction optics to modify the structural parameters of the BRPA. Applying a BRPA to the substrate of the WOLED leads to enhanced extraction efficiency and suppression of angle-dependent color changes. Our systematic study clarifies the effect of scattering on the light extraction of WOLEDs.

  10. Understanding and using quality information for quality improvement : The effect of information presentation

    NARCIS (Netherlands)

    Zwijnenberg, N.C.; Hendriks, M.; Delnoij, D.; De Veer, A.J.; Spreeuwenberg, P.; Wagner, C.

    2016-01-01

    Objective To examine how information presentation affects the understanding and use of information for quality improvement. Design An experimental design, testing 22 formats, and showing information on patient safety culture. Formats differed in visualization, outcomes and benchmark information.

  11. An Algorithm of Building Extraction in Urban Area Based on Improved Top-hat Transformations and LBP Elevation Texture

    Directory of Open Access Journals (Sweden)

    HE Manyun

    2017-09-01

    Full Text Available Classification of buildings and vegetation is difficult using LiDAR data alone, and vegetation in shadows cannot be eliminated using aerial images alone. Improved top-hat transformations and local binary pattern (LBP) elevation-texture analysis for building extraction are proposed based on the fusion of aerial images and LiDAR data. Firstly, the LiDAR data are reorganized into grid cells, and the algorithm removes ground points through a top-hat transform. Then, vegetation points are extracted by the normalized difference vegetation index (NDVI). Thirdly, according to the elevation information of the LiDAR points, an LBP elevation texture is calculated, achieving precise elimination of vegetation in shadows or surrounding the buildings. At last, morphological operations are used to fill the holes in building roofs, and region growing is used to complete building edges. The experiment is based on the complex urban area in the Vaihingen benchmark provided by ISPRS; the results show that the algorithm affords higher classification accuracy.
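
The LBP elevation texture assigns each grid cell a code describing how its neighbours' heights compare with its own: planar building roofs yield uniform codes, while irregular vegetation canopies do not. A minimal sketch of a standard 8-neighbour LBP over an elevation grid follows (an assumed baseline formulation, not necessarily the paper's exact variant):

```python
import numpy as np

def lbp_elevation(z):
    """8-neighbour LBP codes for an elevation grid z.

    Bit k of a cell's code is set when the k-th neighbour (clockwise
    from the upper-left) is at least as high as the cell itself.
    Borders wrap around via np.roll, which is acceptable for a sketch."""
    codes = np.zeros(z.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = np.roll(np.roll(z, -dy, axis=0), -dx, axis=1)
        codes |= (neighbour >= z).astype(np.uint8) << bit
    return codes

flat_roof = np.ones((5, 5))
print(lbp_elevation(flat_roof)[2, 2])   # a flat surface yields code 255
```

The local histogram of such codes is then a texture descriptor: near-constant codes over a region suggest a planar roof, a spread-out histogram suggests vegetation.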

  12. Unsupervised improvement of named entity extraction in short informal context using disambiguation clues

    NARCIS (Netherlands)

    Habib, Mena Badieh; van Keulen, Maurice

    2012-01-01

    Short context messages (like tweets and SMS’s) are a potentially rich source of continuously and instantly updated information. Shortness and informality of such messages are challenges for Natural Language Processing tasks. Most efforts done in this direction rely on machine learning techniques

  13. Improving Strategies for Low-Income Family Children's Information Literacy

    Science.gov (United States)

    Zhang, Haiyan; Washington, Rodney; Yin, Jianjun

    2014-01-01

    This article discussed the significance of improving low-income family children's information literacy, which could improve educational quality, enhance children's self-esteem, adapt children to the future competitive world market, as well as the problems in improving low-income family children's information literacy, such as no home computer and…

  14. Improved detection limits for phthalates by selective solid-phase micro-extraction

    KAUST Repository

    Zia, Asif I.

    2016-03-30

    The presented research reports an improved method and enhanced limits of detection for phthalates, hazardous additives used in the production of plastics, by a solid-phase micro-extraction (SPME) polymer in comparison to a molecularly imprinted solid-phase extraction (MISPE) polymer. The polymers were functionalized on an interdigital capacitive sensor for selective binding of phthalate molecules from a complex mixture of chemicals. Both polymers possessed predetermined selectivity through the formation of molecular recognition sites for Bis(2-ethylhexyl) phthalate (DEHP). The polymers were immobilized on a planar electrochemical sensor fabricated on a single-crystal silicon substrate with 500 nm sputtered gold electrodes using MEMS fabrication techniques. Impedance spectra were obtained using electrochemical impedance spectroscopy (EIS) to determine sample conductance for evaluating the phthalate concentration in spiked sample solutions with various phthalate concentrations. Experimental results revealed that the ability of the SPME polymer to adsorb target molecules on the sensing surface is better than that of the MISPE polymer for phthalates in this sensing system. Testing the extracted samples using high-performance liquid chromatography with photodiode array detectors validated the results.

  15. Fully Convolutional Network Based Shadow Extraction from GF-2 Imagery

    Science.gov (United States)

    Li, Z.; Cai, G.; Ren, H.

    2018-04-01

    There are many shadows on high-spatial-resolution satellite images, especially in urban areas. Although shadows on imagery severely affect the information extraction of land cover or land use, they provide auxiliary information for building extraction, which is hard to achieve with satisfactory accuracy through image classification alone. This paper focuses on a method of building-shadow extraction based on a fully convolutional network, with training samples collected from GF-2 satellite imagery in the urban region of Changchun city. By means of spatial filtering and calculation of the adjacency relationship along the sunlight direction, small patches from vegetation or bridges have been eliminated from the preliminarily extracted shadows. Finally, the building shadows were separated. The building shadow information extracted by the proposed method was compared with the results from traditional object-oriented supervised classification algorithms. The comparison shows that the deep learning network approach can improve the accuracy to a large extent.
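
The adjacency test along the sunlight direction can be sketched as a mask shift: a shadow pixel is kept only if stepping it one pixel toward the sun lands on a building candidate. This is a simplified illustration of the idea; the one-pixel step vector and the toy masks are assumptions, not the paper's implementation.

```python
import numpy as np

def keep_building_shadows(shadow, candidates, step_dy, step_dx):
    """Keep shadow pixels whose one-pixel step toward the sun hits a
    building candidate; isolated (e.g. vegetation) shadow pixels drop out.
    shadow, candidates: same-shape 0/1 integer masks."""
    shifted = np.roll(np.roll(shadow, step_dy, axis=0), step_dx, axis=1)
    touching = shifted & candidates            # hits, in shifted coordinates
    # roll the hits back to the original shadow positions
    back = np.roll(np.roll(touching, -step_dy, axis=0), -step_dx, axis=1)
    return shadow & back

candidates = np.array([[0, 1, 0, 0]])   # building at column 1
shadow     = np.array([[0, 0, 1, 1]])   # shadow cast to the east
kept = keep_building_shadows(shadow, candidates, 0, -1)   # sun to the west
```

In practice one would keep the entire connected shadow component that contains any kept pixel, rather than only the touching pixels as this per-pixel sketch does.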

  16. Improvements of the beam timing structure during a slow extraction from the 70 GeV IFVE accelerator

    International Nuclear Information System (INIS)

    Vorob'ev, V.K.; Levin, A.V.; Mojzhes, L.L.; Myznikov, K.P.; Tatarenko, V.M.; Fedotov, Yu.S.

    1977-01-01

    To improve the density uniformity of the extracted beam in the slow extraction system of the IFVE accelerator, a correlation analysis of the timing structure of the proton beam was developed. The passive filter of the power supply system of the annular electromagnet was reconstructed by introducing a double-loop circuit to reduce pulsations at the 600 Hz main frequency and higher harmonics. To suppress subharmonic field pulsations of the accelerator from 50 to 300 Hz, an active filter incorporating high-Q band filters was introduced. Using the above methods of pulsation suppression makes it possible to improve the density uniformity of the extracted beam.

  17. Automated concept-level information extraction to reduce the need for custom software and rules development.

    Science.gov (United States)

    D'Avolio, Leonard W; Nguyen, Thien M; Goryachev, Sergey; Fiore, Louis D

    2011-01-01

    Despite at least 40 years of promising empirical performance, very few clinical natural language processing (NLP) or information extraction systems currently contribute to medical science or care. The authors address this gap by reducing the need for custom software and rules development with a graphical user interface-driven, highly generalizable approach to concept-level retrieval. A 'learn by example' approach combines features derived from open-source NLP pipelines with open-source machine learning classifiers to automatically and iteratively evaluate top-performing configurations. The Fourth i2b2/VA Shared Task Challenge's concept extraction task provided the data sets and metrics used to evaluate performance. Top F-measure scores for each of the tasks were medical problems (0.83), treatments (0.82), and tests (0.83). Recall lagged precision in all experiments. Precision was near or above 0.90 in all tasks. With no customization for the tasks and less than 5 min of end-user time to configure and launch each experiment, the average F-measure was 0.83, one point behind the mean F-measure of the 22 entrants in the competition. Strong precision scores indicate the potential of applying the approach to more specific clinical information extraction tasks. There was no single best configuration, supporting an iterative approach to model creation. Acceptable levels of performance can be achieved using fully automated and generalizable approaches to concept-level information extraction. The described implementation and related documentation are available for download.
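
The 'learn by example' pattern, deriving per-token features from an NLP pipeline and feeding them to an off-the-shelf classifier, can be sketched with scikit-learn. The feature set and the toy BIO-labelled sentences below are illustrative assumptions, not the authors' actual pipeline or data.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def token_features(tokens, i):
    """A tiny, hypothetical feature extractor for token i of a sentence."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_capitalized": tok[0].isupper(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

# toy training examples with BIO concept labels
sentences = [
    (["Patient", "denies", "chest", "pain", "."],
     ["O", "O", "B-problem", "I-problem", "O"]),
    (["Started", "aspirin", "for", "chest", "pain", "."],
     ["O", "B-treatment", "O", "B-problem", "I-problem", "O"]),
]

X = [token_features(toks, i) for toks, labs in sentences for i in range(len(toks))]
y = [lab for _, labs in sentences for lab in labs]

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, y)
pred = model.predict([token_features(["chest", "pain"], 1)])
```

Iterating over classifier and feature configurations, as the abstract describes, then reduces to re-fitting this pipeline with different components and keeping the top scorer.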

  18. Extraction of quantifiable information from complex systems

    CERN Document Server

    Dahmen, Wolfgang; Griebel, Michael; Hackbusch, Wolfgang; Ritter, Klaus; Schneider, Reinhold; Schwab, Christoph; Yserentant, Harry

    2014-01-01

    In April 2007, the Deutsche Forschungsgemeinschaft (DFG) approved the Priority Program 1324 “Mathematical Methods for Extracting Quantifiable Information from Complex Systems.” This volume presents a comprehensive overview of the most important results obtained over the course of the program. Mathematical models of complex systems provide the foundation for further technological developments in science, engineering and computational finance. Motivated by the trend toward steadily increasing computer power, ever more realistic models have been developed in recent years. These models have also become increasingly complex, and their numerical treatment poses serious challenges. Recent developments in mathematics suggest that, in the long run, much more powerful numerical solution strategies could be derived if the interconnections between the different fields of research were systematically exploited at a conceptual level. Accordingly, a deeper understanding of the mathematical foundations as w...

  19. THE EXTRACTION OF INDOOR BUILDING INFORMATION FROM BIM TO OGC INDOORGML

    Directory of Open Access Journals (Sweden)

    T.-A. Teo

    2017-07-01

    Full Text Available Indoor Spatial Data Infrastructure (indoor-SDI) is an important SDI for geospatial analysis and location-based services. Building Information Models (BIM) have a high degree of detail in the geometric and semantic information of buildings. This study proposes direct conversion schemes to extract indoor building information from BIM into OGC IndoorGML. The major steps of the research include (1) topological conversion from the building model into an indoor network model; and (2) generation of IndoorGML. The topological conversion is the major process of generating and mapping nodes and edges from IFC to IndoorGML. A node represents every space (e.g. IfcSpace) and object (e.g. IfcDoor) in the building, while an edge shows the relationships between nodes. According to the definition of IndoorGML, the topological model in the dual space is also represented as a set of nodes and edges. These definitions of IndoorGML are the same as in the indoor network. Therefore, we can extract the necessary data from the indoor network and easily convert them into IndoorGML based on the IndoorGML schema. The experiment utilized a real BIM model to examine the proposed method. The experimental results indicated that the 3D indoor model (i.e. the IndoorGML model) can be automatically converted from the IFC model by the proposed procedure. In addition, the geometry and attributes of building elements are completely and correctly converted from BIM to indoor-SDI.
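
The node-and-edge mapping can be sketched with plain dictionaries standing in for IFC entities; the record layout below is hypothetical, and real IFC parsing would use a library such as IfcOpenShell.

```python
# simplified stand-ins for IfcSpace and IfcDoor records
spaces = [{"id": "room101"}, {"id": "room102"}, {"id": "corridor"}]
doors = [
    {"id": "door1", "connects": ("room101", "corridor")},
    {"id": "door2", "connects": ("room102", "corridor")},
]

# dual-space topology: every space and door becomes a node (an IndoorGML
# State); each door yields edges (Transitions) to the two spaces it bounds
nodes = [s["id"] for s in spaces] + [d["id"] for d in doors]
edges = []
for door in doors:
    a, b = door["connects"]
    edges += [(a, door["id"]), (door["id"], b)]

print(len(nodes), len(edges))   # 5 nodes, 4 edges
```

Serializing this graph then reduces to emitting IndoorGML State elements for the nodes and Transition elements for the edges according to the IndoorGML schema.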

  20. Engineering spray-dried rosemary extracts with improved physicomechanical properties: a design of experiments issue

    Directory of Open Access Journals (Sweden)

    Luiza T. Chaul

    Full Text Available ABSTRACT A 3³ Box–Behnken design and Response Surface Methodology were used to evaluate the influence of extract feed rate, drying-air inlet temperature and spray-nozzle airflow rate on the process yield, stability parameters (moisture content and water activity) and several physicomechanical properties of spray-dried rosemary extracts. Powder yield ranged from 17.1 to 74.96%. The spray-dried rosemary extracts showed moisture content and water activity below 5% and 0.5, respectively, which indicates their chemical and microbiological stability. Even without using drying aids, some sets of experimental conditions rendered dried products with flowability and compressibility characteristics suitable for the direct preparation of solid dosage forms. Analysis of variance and Response Surface Methodology proved that the studied factors significantly affected most of the spray-dried rosemary extract quality indicators at different levels. The main processing parameter affecting the spray-dried rosemary extract characteristics was the inlet temperature. The best combination of parameters to obtain a reasonable yield of stable dry rosemary extracts with adequate technological properties for pharmaceutical purposes involves an extract feed rate of 2 ml/min, an inlet temperature of 80 °C and a spray-nozzle airflow rate of 40 l/min. The design-of-experiments approach is an interesting strategy for engineering spray-dried rosemary extracts with improved characteristics for pharmaceutical industrial purposes.
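
A three-factor Box–Behnken design in coded units pairs two factors at ±1 while holding the third at its centre level, plus replicate centre points. A minimal generator follows (three centre points are a common textbook choice and an assumption here, not necessarily the study's count):

```python
from itertools import combinations
import numpy as np

def box_behnken(k, center=3):
    """Coded design matrix for a k-factor Box-Behnken design."""
    runs = []
    # for each pair of factors, run all four +/-1 combinations
    # with the remaining factors held at their centre level (0)
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * k] * center          # replicate centre points
    return np.array(runs)

D = box_behnken(3)   # 12 edge-midpoint runs + 3 centre points = 15 runs
```

Mapping the coded levels back to physical units (e.g. feed rate, inlet temperature, airflow) gives the run sheet; a quadratic response-surface model is then fitted to the measured responses.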

  1. ONTOGRABBING: Extracting Information from Texts Using Generative Ontologies

    DEFF Research Database (Denmark)

    Nilsson, Jørgen Fischer; Szymczak, Bartlomiej Antoni; Jensen, P.A.

    2009-01-01

    We describe principles for extracting information from texts using a so-called generative ontology in combination with syntactic analysis. Generative ontologies are introduced as semantic domains for natural language phrases. Generative ontologies extend ordinary finite ontologies with rules...... for producing recursively shaped terms representing the ontological content (ontological semantics) of NL noun phrases and other phrases. We focus here on achieving a robust, often only partial, ontology-driven parsing of and ascription of semantics to a sentence in the text corpus. The aim of the ontological...... analysis is primarily to identify paraphrases, thereby achieving a search functionality beyond mere keyword search with synsets. We further envisage use of the generative ontology as a phrase-based rather than word-based browser into text corpora....

  2. MedEx: a medication information extraction system for clinical narratives

    Science.gov (United States)

    Stenner, Shane P; Doan, Son; Johnson, Kevin B; Waitman, Lemuel R; Denny, Joshua C

    2010-01-01

    Medication information is one of the most important types of clinical data in electronic medical records. It is critical for healthcare safety and quality, as well as for clinical research that uses electronic medical record data. However, medication data are often recorded in clinical notes as free-text. As such, they are not accessible to other computerized applications that rely on coded data. We describe a new natural language processing system (MedEx), which extracts medication information from clinical notes. MedEx was initially developed using discharge summaries. An evaluation using a data set of 50 discharge summaries showed it performed well on identifying not only drug names (F-measure 93.2%), but also signature information, such as strength, route, and frequency, with F-measures of 94.5%, 93.9%, and 96.0% respectively. We then applied MedEx unchanged to outpatient clinic visit notes. It performed similarly with F-measures over 90% on a set of 25 clinic visit notes. PMID:20064797
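
The signature fields MedEx extracts (drug name, strength, route, frequency) can be illustrated with a toy regular expression. MedEx itself uses a far richer semantic tagger and grammar, so the pattern and its tiny vocabulary below are only a schematic assumption.

```python
import re

# toy medication-signature pattern: drug name, then optional
# strength, route and frequency (vocabulary deliberately tiny)
SIG = re.compile(
    r"(?P<drug>[A-Za-z]+)\s*"
    r"(?P<strength>\d+\s?mg)?\s*"
    r"(?P<route>po|iv|im)?\s*"
    r"(?P<freq>bid|tid|qd|prn)?",
    re.IGNORECASE,
)

m = SIG.match("aspirin 81 mg po qd")
print(m.group("drug"), m.group("strength"), m.group("route"), m.group("freq"))
# → aspirin 81 mg po qd
```

Each named group corresponds to one signature slot; a real system additionally normalizes variants (e.g. "twice daily" vs "bid") and resolves ambiguity with context.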

  3. Methods to extract information on the atomic and molecular states from scientific abstracts

    International Nuclear Information System (INIS)

    Sasaki, Akira; Ueshima, Yutaka; Yamagiwa, Mitsuru; Murata, Masaki; Kanamaru, Toshiyuki; Shirado, Tamotsu; Isahara, Hitoshi

    2005-01-01

    We propose a new application of information technology to recognize and extract expressions of atomic and molecular states from electronic forms of scientific abstracts. The present results will help scientists to understand the atomic states as well as the physics discussed in the articles. Combined with internet search engines, this will make it possible to collect not only atomic and molecular data but also broader scientific information over a wide range of research fields. (author)

  4. Information Architecture of Web-Based Interventions to Improve Health Outcomes: Systematic Review.

    Science.gov (United States)

    Pugatch, Jillian; Grenen, Emily; Surla, Stacy; Schwarz, Mary; Cole-Lewis, Heather

    2018-03-21

    The rise in usage of and access to new technologies in recent years has led to a growth in digital health behavior change interventions. As the shift to digital platforms continues to grow, it is increasingly important to consider how the field of information architecture (IA) can inform the development of digital health interventions. IA is the way in which digital content is organized and displayed, which strongly impacts users' ability to find and use content. While many information architecture best practices exist, there is a lack of empirical evidence on the role it plays in influencing behavior change and health outcomes. Our aim was to conduct a systematic review synthesizing the existing literature on website information architecture and its effect on health outcomes, behavioral outcomes, and website engagement. To identify all existing information architecture and health behavior literature, we searched articles published in English in the following databases (no date restrictions imposed): ACM Digital Library, CINAHL, Cochrane Library, Google Scholar, Ebsco, and PubMed. The search terms used included information terms (eg, information architecture, interaction design, persuasive design), behavior terms (eg, health behavior, behavioral intervention, ehealth), and health terms (eg, smoking, physical activity, diabetes). The search results were reviewed to determine if they met the inclusion and exclusion criteria created to identify empirical research that studied the effect of IA on health outcomes, behavioral outcomes, or website engagement. Articles that met inclusion criteria were assessed for study quality. Then, data from the articles were extracted using a priori categories established by 3 reviewers. However, the limited health outcome data gathered from the studies precluded a meta-analysis. The initial literature search yielded 685 results, which was narrowed down to three publications that examined the effect of information architecture on

  5. An improved facile method for extraction and determination of steroidal saponins in Tribulus terrestris by focused microwave-assisted extraction coupled with GC-MS.

    Science.gov (United States)

    Li, Tianlin; Zhang, Zhuomin; Zhang, Lan; Huang, Xinjian; Lin, Junwei; Chen, Guonan

    2009-12-01

    An improved fast method for extraction of steroidal saponins in Tribulus terrestris based on focused microwave-assisted extraction (FMAE) is proposed. Under optimized conditions, four steroidal saponins were extracted from Tribulus terrestris and identified by GC-MS: Tigogenin (TG), Gitogenin (GG), Hecogenin (HG) and Neohecogenin (NG). TG, one of the most important steroidal saponins, was then quantified. The recovery of TG was in the range of 86.7-91.9% with low RSD values, and the method was applied to Tribulus terrestris from different areas of occurrence. The difference in chromatographic characteristics of steroidal saponins was proved to be related to the different areas of occurrence. The results showed that FMAE-GC-MS is a simple, rapid, solvent-saving method for the extraction and determination of steroidal saponins in Tribulus terrestris.

  6. Improving the extraction-and-loading process in the open mining operations

    Directory of Open Access Journals (Sweden)

    Cheban A. Yu.

    2017-09-01

    Full Text Available Blasting is the main way to prepare solid rocks for excavation, and it produces a rock mass of uneven granulometric composition, which makes it impossible to use conveyor quarry transport without preliminary coarse crushing of the blasted rock mass. A way to achieve the greatest technical and economic effect is the full conveyorization of quarry transport, which ensures sequenced-flow transport operations, automation of management and high labor productivity. The extraction-and-loading machines are the determining factor in the performance of mining and transport machines in the technological flow of the quarry. When extracting a blasted rock mass with single-bucket excavators or loaders working in combination with bottom-hole conveyors, self-propelled crushing and reloading units of various designs are used to grind large fragments to fractions of conditioning size. The presence of a crushing and reloading unit in the pit-face along with the excavator requires additional space for its placement, complicates the maneuvering of equipment in the pit-face, and increases the number of personnel and the cost of maintaining the extraction-and-reloading operations. The article proposes an improved method for carrying out the extraction-and-loading process, as well as the design of an extraction-and-grinding unit based on a quarry hydraulic excavator. The design of the proposed unit makes it possible to convert the cyclic process of scooping the rock mass into a continuous process of loading it on the bottom-hole conveyor. Using the extraction-and-grinding unit allows one to combine the processes of excavation, preliminary crushing and loading of the rock mass, which ensures an increase in the efficiency of mining operations.

  7. Understanding and using quality information for quality improvement: the effect of information presentation.

    NARCIS (Netherlands)

    Zwijnenberg, N.C.; Hendriks, M.; Delnoij, D.M.J.; Veer, A.J.E. de; Spreeuwenberg, P.; Wagner, C.

    2016-01-01

    Objective: To examine how information presentation affects the understanding and use of information for quality improvement. Design: An experimental design, testing 22 formats, and showing information on patient safety culture. Formats differed in visualization, outcomes and benchmark

  8. Sophia: An Expedient UMLS Concept Extraction Annotator.

    Science.gov (United States)

    Divita, Guy; Zeng, Qing T; Gundlapalli, Adi V; Duvall, Scott; Nebeker, Jonathan; Samore, Matthew H

    2014-01-01

    An opportunity exists for meaningful concept extraction and indexing from large corpora of clinical notes in the Veterans Affairs (VA) electronic medical record. Currently available tools such as MetaMap, cTAKES and HITex do not scale up to address this big-data need. Sophia, a rapid UMLS concept extraction annotator, was developed to fulfill a mandate and address extraction where high throughput is needed while preserving performance. We report on the development, testing and benchmarking of Sophia against MetaMap and cTAKES. Sophia demonstrated improved recall compared to cTAKES and MetaMap (0.71 vs 0.66 and 0.38). The overall f-score was similar to that of cTAKES and an improvement over MetaMap (0.53 vs 0.57 and 0.43). With regard to speed of processing records, Sophia was several-fold faster than cTAKES and the scaled-out MetaMap service. Sophia offers a viable alternative for high-throughput information extraction tasks.

  9. Analysis Methods for Extracting Knowledge from Large-Scale WiFi Monitoring to Inform Building Facility Planning

    DEFF Research Database (Denmark)

    Ruiz-Ruiz, Antonio; Blunck, Henrik; Prentow, Thor Siiger

    2014-01-01

    The optimization of logistics in large building complexes with many resources, such as hospitals, requires realistic facility management and planning. Current planning practices rely foremost on manual observations or coarse unverified assumptions and therefore do not properly scale or provide realistic data to inform facility planning. In this paper, we propose analysis methods to extract knowledge from large sets of network-collected WiFi traces to better inform facility management and planning in large building complexes. The analysis methods build on a rich set of temporal and spatial features. Spatio-temporal visualization tools built on top of these methods enable planners to inspect and explore extracted information to inform facility-planning activities. To evaluate the methods, we present results for a large hospital complex covering more than 10 hectares. The evaluation is based on WiFi traces.

  10. Artemisia Extract Improves Insulin Sensitivity in Women With Gestational Diabetes Mellitus by Up-Regulating Adiponectin.

    Science.gov (United States)

    Sun, Xia; Sun, Hong; Zhang, Jing; Ji, Xianghong

    2016-12-01

    Gestational diabetes mellitus (GDM) has affected a great number of pregnant women worldwide. Artemisia extracts have been found to exhibit a potent antidiabetic effect in the treatment of type 2 diabetes mellitus. We aimed to examine the effects of Artemisia extract on insulin resistance and lipid profiles in pregnant GDM patients. Patients in their second trimester were randomly assigned to the Artemisia extract group (AE) or to a placebo group (PO). They were instructed to consume either AE or PO daily for a period of 10 weeks. Glucose and insulin profiles and adiponectin level were assessed at baseline (week 0) and after the treatment (week 10). Compared to the PO group, fasting plasma glucose, serum insulin levels, homeostasis model of assessment of insulin resistance (HOMA-IR), and β-cell function (HOMA-B) were significantly reduced in the AE group participants. Moreover, levels of circulating adiponectin were also significantly up-regulated in the AE group, which also positively contributed to improved insulin sensitivity. Daily administration of Artemisia extract improves insulin sensitivity by up-regulating adiponectin in women with gestational diabetes mellitus. © 2016, The American College of Clinical Pharmacology.
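
The HOMA indices reported in this abstract follow standard published formulas (fasting glucose in mmol/L, fasting insulin in µU/mL); a quick reference implementation:

```python
def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """Homeostasis model assessment of insulin resistance (HOMA-IR)."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def homa_b(glucose_mmol_l, insulin_uU_ml):
    """HOMA estimate of beta-cell function (HOMA-B, in %)."""
    return 20.0 * insulin_uU_ml / (glucose_mmol_l - 3.5)

# e.g. homa_ir(5.0, 9.0) -> 2.0 and homa_b(5.0, 9.0) -> 120.0
```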

  11. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    Science.gov (United States)

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

    A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using acidic extraction, as in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions, while antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. Antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.

  12. Domain-independent information extraction in unstructured text

    Energy Technology Data Exchange (ETDEWEB)

    Irwin, N.H. [Sandia National Labs., Albuquerque, NM (United States). Software Surety Dept.

    1996-09-01

    Extracting information from unstructured text has become an important research area in recent years due to the large amount of text now electronically available. This status report describes the findings and work done during the second year of a two-year Laboratory Directed Research and Development project. Building on the first year's work of identifying important entities, this report details techniques used to group words into semantic categories and to output templates containing selective document content. Using word profiles and category clustering derived during a training run, the time-consuming knowledge-building task can be avoided. Though the output still lacks completeness when compared to systems with domain-specific knowledge bases, the results do look promising. The two approaches are compatible and could complement each other within the same system. Domain-independent approaches retain appeal, as a system that adapts and learns will soon outpace a system with any amount of a priori knowledge.

  13. Ginseng Berry Extract Supplementation Improves Age-Related Decline of Insulin Signaling in Mice

    Directory of Open Access Journals (Sweden)

    Eunhui Seo

    2015-04-01

    Full Text Available The aim of this study was to evaluate the effects of ginseng berry extract on insulin sensitivity and associated molecular mechanisms in aged mice. C57BL/6 mice (15 months old) were maintained on a regular diet (CON) or a regular diet supplemented with 0.05% ginseng berry extract (GBD) for 24 or 32 weeks. GBD-fed mice showed significantly lower serum insulin levels (p = 0.016) and insulin resistance scores (HOMA-IR) (p = 0.012), suggesting that GBD improved insulin sensitivity. Pancreatic islet hypertrophy was also ameliorated in GBD-fed mice (p = 0.007). Protein levels of tyrosine-phosphorylated insulin receptor substrate (IRS)-1 (p = 0.047) and protein kinase B (AKT) (p = 0.037) were up-regulated in the muscle of insulin-injected GBD-fed mice compared with CON-fed mice. The expressions of forkhead box protein O1 (FOXO1) (p = 0.036) and peroxisome proliferator-activated receptor gamma (PPARγ) (p = 0.032), which are known as aging- and insulin resistance-related genes, were also increased in the muscle of GBD-fed mice. We conclude that ginseng berry extract consumption might increase activation of IRS-1 and AKT, contributing to the improvement of insulin sensitivity in aged mice.

  14. Extracting breathing rate information from a wearable reflectance pulse oximeter sensor.

    Science.gov (United States)

    Johnston, W S; Mendelson, Y

    2004-01-01

    The integration of multiple vital physiological measurements could help combat medics and field commanders better predict a soldier's health condition and enhance their ability to perform remote triage procedures. In this paper we demonstrate the feasibility of extracting accurate breathing rate information from a photoplethysmographic signal that was recorded by a reflectance pulse oximeter sensor mounted on the forehead and subsequently processed by simple time-domain filtering and frequency-domain Fourier analysis.
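
The frequency-domain step described above can be sketched as follows. The sampling rate, respiratory band limits, and synthetic signal are illustrative assumptions, not values from the paper:

```python
import numpy as np

def breathing_rate_bpm(ppg, fs, f_lo=0.1, f_hi=0.5):
    """Estimate breathing rate (breaths/min) from a PPG segment by
    locating the dominant spectral peak inside an assumed respiratory
    band of f_lo..f_hi Hz."""
    x = ppg - np.mean(ppg)                       # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# synthetic PPG: 1.2 Hz cardiac pulses riding on 0.25 Hz respiratory baseline wander
fs = 50.0
t = np.arange(0, 60, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 0.25 * t)
```

On this synthetic signal the estimator recovers the 0.25 Hz (15 breaths/min) component because the cardiac peak lies outside the respiratory band.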

  15. Using information from the electronic health record to improve measurement of unemployment in service members and veterans with mTBI and post-deployment stress.

    Directory of Open Access Journals (Sweden)

    Christina Dillahunt-Aspillaga

    Full Text Available The purpose of this pilot study is (1) to develop an annotation schema and a training set of annotated notes to support the future development of a natural language processing (NLP) system to automatically extract employment information, and (2) to determine if information about employment status, goals and work-related challenges reported by service members and Veterans with mild traumatic brain injury (mTBI) and post-deployment stress can be identified in the Electronic Health Record (EHR). Retrospective cohort study using data from selected progress notes stored in the EHR. Post-deployment Rehabilitation and Evaluation Program (PREP), an in-patient rehabilitation program for Veterans with TBI at the James A. Haley Veterans' Hospital in Tampa, Florida. Service members and Veterans with TBI who participated in the PREP program (N = 60). Documentation of employment status, goals, and work-related challenges reported by service members and recorded in the EHR. Two hundred notes were examined and unique vocational information was found indicating a variety of self-reported employment challenges. Current employment status and future vocational goals, along with information about cognitive, physical, and behavioral symptoms that may affect return-to-work, were extracted from the EHR. The annotation schema developed for this study provides an excellent tool upon which NLP studies can be developed. Information related to employment status and vocational history is stored in text notes in the EHR system. Information stored in text does not lend itself to easy extraction or summarization for research and rehabilitation planning purposes. Development of NLP systems to automatically extract text-based employment information provides data that may improve the understanding and measurement of employment in this important cohort.

  16. Support patient search on pathology reports with interactive online learning based data extraction.

    Science.gov (United States)

    Zheng, Shuai; Lu, James J; Appin, Christina; Brat, Daniel; Wang, Fusheng

    2015-01-01

    Structural reporting enables semantic understanding and prompt retrieval of clinical findings about patients. While synoptic pathology reporting provides templates for data entries, information in pathology reports remains primarily in narrative free text form. Extracting data of interest from narrative pathology reports could significantly improve the representation of the information and enable complex structured queries. However, manual extraction is tedious and error-prone, and automated tools are often constructed with a fixed training dataset and not easily adaptable. Our goal is to extract data from pathology reports to support advanced patient search with a highly adaptable semi-automated data extraction system, which can adjust and self-improve by learning from a user's interaction with minimal human effort. We have developed an online machine learning based information extraction system called IDEAL-X. With its graphical user interface, the system's data extraction engine automatically annotates values for users to review upon loading each report text. The system analyzes users' corrections regarding these annotations with online machine learning, and incrementally enhances and refines the learning model as reports are processed. The system also takes advantage of customized controlled vocabularies, which can be adaptively refined during the online learning process to further assist the data extraction. As the accuracy of automatic annotation improves overtime, the effort of human annotation is gradually reduced. After all reports are processed, a built-in query engine can be applied to conveniently define queries based on extracted structured data. We have evaluated the system with a dataset of anatomic pathology reports from 50 patients. Extracted data elements include demographical data, diagnosis, genetic marker, and procedure. The system achieves F-1 scores of around 95% for the majority of tests. 
Extracting data from pathology reports in this way could enable complex structured queries to support advanced patient search.

  17. Improving information filtering via network manipulation

    Science.gov (United States)

    Zhang, Fuguo; Zeng, An

    2012-12-01

    The recommender system is a very promising way to address the problem of overabundant information for online users. Although the information filtering for the online commercial systems has received much attention recently, almost all of the previous works are dedicated to design new algorithms and consider the user-item bipartite networks as given and constant information. However, many problems for recommender systems such as the cold-start problem (i.e., low recommendation accuracy for the small-degree items) are actually due to the limitation of the underlying user-item bipartite networks. In this letter, we propose a strategy to enhance the performance of the already existing recommendation algorithms by directly manipulating the user-item bipartite networks, namely adding some virtual connections to the networks. Numerical analyses on two benchmark data sets, MovieLens and Netflix, show that our method can remarkably improves the recommendation performance. Specifically, it not only improves the recommendations accuracy (especially for the small-degree items), but also helps the recommender systems generate more diverse and novel recommendations.
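
As a minimal sketch of the setting, the following implements a generic two-step mass-diffusion recommender on a toy user-item bipartite matrix; the letter's network manipulation would then amount to flipping extra entries of `A` to 1 (virtual connections) before scoring. The algorithm and data here are generic stand-ins, not the authors' exact method:

```python
import numpy as np

def diffusion_scores(A, u):
    """Score items for user u with two-step mass diffusion (ProbS-style)
    on the user-item adjacency matrix A (users x items)."""
    item_deg = np.maximum(A.sum(axis=0), 1.0)        # avoid division by zero
    user_deg = np.maximum(A.sum(axis=1), 1.0)
    spread_to_users = A @ (A[u] / item_deg)          # items -> their users
    scores = A.T @ (spread_to_users / user_deg)      # users -> their items
    scores[A[u] > 0] = -np.inf                       # hide already-collected items
    return scores

# toy bipartite network: 3 users x 4 items; item 3 is a cold, small-degree item
A = np.array([[1., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 1., 1.]])
# a virtual connection in the spirit of the letter could be added with A[1, 3] = 1
```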

  18. Sparsity-based shrinkage approach for practicability improvement of H-LBP-based edge extraction

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Chenyi [School of Physics, Northeast Normal University, Changchun 130024 (China); Qiao, Shuang, E-mail: qiaos810@nenu.edu.cn [School of Physics, Northeast Normal University, Changchun 130024 (China); Sun, Jianing, E-mail: sunjn118@nenu.edu.cn [School of Mathematics and Statistics, Northeast Normal University, Changchun 130024 (China); Zhao, Ruikun; Wu, Wei [Jilin Cancer Hospital, Changchun 130021 (China)

    2016-07-21

    The local binary pattern with H function (H-LBP) technique enables fast and efficient edge extraction in digital radiography. In this paper, we reformulate the H-LBP model and propose a novel sparsity-based shrinkage approach, in which the threshold adapts to the data sparsity. Using this model, we upgrade the fast H-LBP framework and apply it to real digital radiography. The experiments show that the method improved by the new shrinkage approach avoids elaborate manual tuning of parameters and possesses greater robustness in edge extraction compared with other current methods, without increasing processing time. - Highlights: • A novel sparsity-based shrinkage approach for edge extraction in digital radiography is proposed. • The threshold of SS-LBP can adapt to the data sparsity. • SS-LBP is a development of AH-LBP and H-LBP. • Without increasing processing time or losing processing efficiency, SS-LBP avoids elaborate manual tuning of parameters. • SS-LBP is more robust in edge extraction compared with existing methods.
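
A data-adaptive soft-shrinkage rule of the general kind described above can be sketched as follows, assuming a Donoho-style universal threshold with a median-absolute-deviation noise estimate (a generic stand-in, not the paper's exact SS-LBP rule):

```python
import numpy as np

def soft_shrink(coeffs, scale=1.0):
    """Soft shrinkage with a data-driven threshold: sigma is estimated
    from the median absolute deviation, and the universal threshold
    sigma * sqrt(2 ln n) grows or shrinks with how sparse the data are."""
    sigma = np.median(np.abs(coeffs)) / 0.6745
    t = scale * sigma * np.sqrt(2.0 * np.log(len(coeffs)))
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

Small coefficients (noise) are zeroed while large, edge-like coefficients survive with reduced magnitude, with no manually tuned threshold.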

  19. Can information technology improve my ambulatory practice ...

    African Journals Online (AJOL)

    eHealth is the use of information and communication technologies for health. mHealth is the use of mobile technology in health. As with all information technology (IT), advances in development are rapidly taking place. The application of such technology to individual ambulatory anaesthesia practice should improve the ...

  20. Improvement of stability and carotenoids fraction of virgin olive oils by addition of microalgae Scenedesmus almeriensis extracts.

    Science.gov (United States)

    Limón, Piedad; Malheiro, Ricardo; Casal, Susana; Acién-Fernández, F Gabriel; Fernández-Sevilla, José M; Rodrigues, Nuno; Cruz, Rebeca; Bermejo, Ruperto; Pereira, José Alberto

    2015-05-15

    Humans are not capable of synthesizing carotenoids de novo and thus, their presence in human tissues is entirely of dietary origin. Consumption of essential carotenoids is reduced due to the lower intake of fruits and vegetables. Microalgae are a good source of carotenoids that can be exploited. In the present work, carotenoid-rich extracts from Scenedesmus almeriensis were added to extra-virgin olive oils at different concentrations (0.1 and 0.21 mg/mL) in order to enhance the consumption of these bioactives. The extracts changed the olive oils' color, turning them orange-reddish. Quality of the olive oils was improved, since peroxidation was inhibited. Olive oil fatty acids and tocopherols were not affected. β-carotene and lutein contents increased considerably, as did oxidative stability, improving the olive oils' shelf-life and nutritional value. Inclusion of S. almeriensis extracts is a good strategy to improve and enhance the consumption of carotenoids, since olive oil consumption is increasing. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Systematically extracting metal- and solvent-related occupational information from free-text responses to lifetime occupational history questionnaires.

    Science.gov (United States)

    Friesen, Melissa C; Locke, Sarah J; Tornow, Carina; Chen, Yu-Cheng; Koh, Dong-Hee; Stewart, Patricia A; Purdue, Mark; Colt, Joanne S

    2014-06-01

    Lifetime occupational history (OH) questionnaires often use open-ended questions to capture detailed information about study participants' jobs. Exposure assessors use this information, along with responses to job- and industry-specific questionnaires, to assign exposure estimates on a job-by-job basis. An alternative approach is to use information from the OH responses and the job- and industry-specific questionnaires to develop programmable decision rules for assigning exposures. As a first step in this process, we developed a systematic approach to extract the free-text OH responses and convert them into standardized variables that represented exposure scenarios. Our study population comprised 2408 subjects, reporting 11991 jobs, from a case-control study of renal cell carcinoma. Each subject completed a lifetime OH questionnaire that included verbatim responses, for each job, to open-ended questions including job title, main tasks and activities (task), tools and equipment used (tools), and chemicals and materials handled (chemicals). Based on a review of the literature, we identified exposure scenarios (occupations, industries, tasks/tools/chemicals) expected to involve possible exposure to chlorinated solvents, trichloroethylene (TCE) in particular, lead, and cadmium. We then used a SAS macro to review the information reported by study participants to identify jobs associated with each exposure scenario; this was done using previously coded standardized occupation and industry classification codes, and a priori lists of associated key words and phrases related to possibly exposed tasks, tools, and chemicals. Exposure variables representing the occupation, industry, and task/tool/chemicals exposure scenarios were added to the work history records of the study respondents. Our identification of possibly TCE-exposed scenarios in the OH responses was compared to an expert's independently assigned probability ratings to evaluate whether we missed identifying
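
The key-word review step applied to the free-text task, tool, and chemical responses can be sketched in Python; the keyword list below is hypothetical, standing in for the study's a priori lists:

```python
# hypothetical key-word list standing in for the study's a priori
# TCE-related task/tool/chemical terms (not the actual study list)
TCE_KEYWORDS = ("trichloroethylene", "tce", "degreas", "metal cleaning")

def flag_tce_scenario(job_text, keywords=TCE_KEYWORDS):
    """Flag a free-text job description as a possibly TCE-exposed
    scenario via case-insensitive substring matching, mirroring the
    macro's review of task, tool and chemical responses."""
    low = job_text.lower()
    return any(k in low for k in keywords)
```

In the study this logic was combined with standardized occupation and industry codes so that a job is flagged only when both the coded scenario and the key words agree.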

  2. The Feature Extraction Based on Texture Image Information for Emotion Sensing in Speech

    Directory of Open Access Journals (Sweden)

    Kun-Ching Wang

    2014-09-01

    Full Text Available In this paper, we present a novel texture image feature for Emotion Sensing in Speech (ESS). This idea is based on the fact that texture images carry emotion-related information. The feature extraction is derived from the time-frequency representation of spectrogram images. First, we transform the spectrogram into a recognizable image. Next, we use a cubic curve to enhance the image contrast. Then, the texture image information (TII) derived from the spectrogram image can be extracted by using Laws' masks to characterize emotional state. In order to evaluate the effectiveness of the proposed emotion recognition in different languages, we use two open emotional databases, the Berlin Emotional Speech Database (EMO-DB) and the eNTERFACE corpus, and one self-recorded database (KHUSC-EmoDB) to evaluate the cross-corpus performance. The results of the proposed ESS system are presented using a support vector machine (SVM) as a classifier. Experimental results show that the proposed TII-based feature extraction inspired by visual perception can provide significant classification for ESS systems. The two-dimensional (2-D) TII feature can provide discrimination between different emotions in visual expressions beyond what the pitch and formant tracks convey. In addition, de-noising in 2-D images can be more easily completed than de-noising in 1-D speech.
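
Laws' texture-energy measures of the kind used here are built from outer products of short 1-D kernels convolved over the (spectrogram) image; a minimal numpy sketch, with kernels from the standard Laws set and a hand-rolled valid-mode convolution:

```python
import numpy as np

# Laws' 1-D kernels: L5 (level) and E5 (edge); 2-D masks are outer products
L5 = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
E5 = np.array([-1.0, -2.0, 0.0, 2.0, 1.0])

def conv2_valid(img, k):
    """Valid-mode 2-D correlation (sign is irrelevant for energy)."""
    kh, kw = k.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def laws_energy(img, a, b):
    """Mean absolute response to one Laws' mask (outer product of two
    1-D kernels) after removing the image mean."""
    resp = conv2_valid(img - img.mean(), np.outer(a, b))
    return float(np.mean(np.abs(resp)))
```

A textured patch (e.g. vertical stripes) yields a nonzero L5E5 energy, while a flat patch yields zero, which is what makes these responses usable as discriminative features.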

  3. Feature Fusion Based Road Extraction for HJ-1-C SAR Image

    Directory of Open Access Journals (Sweden)

    Lu Ping-ping

    2014-06-01

    Full Text Available Road network extraction from SAR images is one of the key tasks for military and civilian applications. To address road extraction from HJ-1-C SAR images, a road extraction algorithm is proposed based on the integration of ratio and directional information. Owing to the characteristically narrow dynamic range and low signal-to-noise ratio of HJ-1-C SAR images, a nonlinear quantization and an image filtering method based on a multi-scale autoregressive model are proposed here. A road extraction algorithm based on information fusion, which considers ratio and direction information, is also proposed. By applying the Radon transform, main road directions can be extracted. Cross interferences can be suppressed, and road continuity can then be improved by the main direction alignment and secondary road extraction. An HJ-1-C SAR image acquired over Wuhan, China was used to evaluate the proposed method. The experimental results show good performance with correctness (80.5%) and quality (70.1%) when applied to a SAR image with complex content.

  4. Solvent extraction

    Energy Technology Data Exchange (ETDEWEB)

    Coombs, D.M.; Latimer, E.G.

    1988-01-05

    It is an object of this invention to provide for the demetallization and general upgrading of heavy oil via a solvent extraction process, and to improve the efficiency of solvent extraction operations. The yield and demetallization of product oil from heavy high-metal-content oil is maximized by solvent extractions which employ any or all of the following techniques: premixing of a minor amount of the solvent with feed and using countercurrent flow for the remaining solvent; use of certain solvent/feed ratios; use of segmental baffle tray extraction column internals and the proper extraction column residence time. The solvent premix/countercurrent flow feature of the invention substantially improves extractions where temperatures and pressures above the critical point of the solvent are used. By using this technique, a greater yield of extract oil can be obtained at the same metals content, or a lower metals-containing extract oil product can be obtained at the same yield. Furthermore, premixing part of the solvent with the feed before countercurrent extraction gives high extract oil yields and high-quality demetallization. The solvent/feed ratio features of the invention substantially lower the capital and operating costs for such processes while not suffering a loss in selectivity for metals rejection. The column internals and residence time features of the invention further improve the extractor metals rejection at a constant yield, or allow for an increase in extract oil yield at a constant extract oil metals content. 13 figs., 3 tabs.

  5. Figure text extraction in biomedical literature.

    Directory of Open Access Journals (Sweden)

    Daehyun Kim

    2011-01-01

    Full Text Available Figures are ubiquitous in biomedical full-text articles, and they represent important biomedical knowledge. However, the sheer volume of biomedical publications has made it necessary to develop computational approaches for accessing figures. Therefore, we are developing the Biomedical Figure Search engine (http://figuresearch.askHERMES.org) to allow bioscientists to access figures efficiently. Since text frequently appears in figures, automatically extracting such text may assist the task of mining information from figures. Little research, however, has been conducted exploring text extraction from biomedical figures. We first evaluated an off-the-shelf Optical Character Recognition (OCR) tool on its ability to extract text from figures appearing in biomedical full-text articles. We then developed a Figure Text Extraction Tool (FigTExT) to improve the performance of the OCR tool for figure text extraction through the use of three innovative components: image preprocessing, character recognition, and text correction. We first developed image preprocessing to enhance image quality and to improve text localization. Then we adapted the off-the-shelf OCR tool on the improved text localization for character recognition. Finally, we developed and evaluated a novel text correction framework by taking advantage of figure-specific lexicons. The evaluation on 382 figures (9,643 figure texts in total) randomly selected from PubMed Central full-text articles shows that FigTExT performed with 84% precision, 98% recall, and 90% F1-score for text localization and with 62.5% precision, 51.0% recall and 56.2% F1-score for figure text extraction. When limiting figure texts to those judged by domain experts to be important content, FigTExT performed with 87.3% precision, 68.8% recall, and 77% F1-score.
FigTExT significantly improved the performance of the off-the-shelf OCR tool we used, which on its own performed with 36.6% precision, 19.3% recall, and 25.3% F1-score for figure text extraction.
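
The lexicon-based text-correction idea can be sketched with a simple closest-match rule; the lexicon and similarity cutoff below are illustrative assumptions, not FigTExT's actual implementation:

```python
from difflib import get_close_matches

# hypothetical figure-specific lexicon for a molecular-biology figure
LEXICON = ["p53", "apoptosis", "mitochondria", "caspase"]

def correct_token(tok, lexicon=LEXICON, cutoff=0.6):
    """Snap an OCR token to its closest lexicon entry; tokens with no
    sufficiently similar entry are left unchanged."""
    match = get_close_matches(tok.lower(), lexicon, n=1, cutoff=cutoff)
    return match[0] if match else tok
```

For example, an OCR misread like "apoptosls" is corrected to "apoptosis", while an out-of-lexicon token passes through untouched.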

  6. Semantic Location Extraction from Crowdsourced Data

    Science.gov (United States)

    Koswatte, S.; Mcdougall, K.; Liu, X.

    2016-06-01

    Crowdsourced Data (CSD) has recently received increased attention in many application areas including disaster management. Convenience of production and use, data currency and abundance are some of the key reasons for attracting this high interest. Conversely, quality issues like incompleteness, credibility and relevancy prevent the direct use of such data in important applications like disaster management. Moreover, location information availability of CSD is problematic as it remains very low in many crowdsourced platforms such as Twitter. Also, this recorded location is mostly related to the mobile device or user location and often does not represent the event location. In CSD, event location is discussed descriptively in the comments in addition to the recorded location (which is generated by means of the mobile device's GPS or mobile communication network). This study attempts to semantically extract the CSD location information with the help of an ontological Gazetteer and other available resources. 2011 Queensland flood tweets and Ushahidi Crowd Map data were semantically analysed to extract the location information with the support of the Queensland Gazetteer, which is converted to an ontological gazetteer, and a global gazetteer. Some preliminary results show that the use of ontologies and semantics can improve the accuracy of place name identification of CSD and the process of location information extraction.
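
The gazetteer-lookup step at the heart of such a pipeline can be sketched as follows; the place names below are a toy stand-in for the ontological Queensland gazetteer:

```python
# toy gazetteer standing in for the ontological Queensland gazetteer
GAZETTEER = {"brisbane", "ipswich", "toowoomba", "lockyer valley"}

def extract_locations(text):
    """Return gazetteer place names mentioned in a tweet, matched
    case-insensitively and longest-first so multi-word names win."""
    low = text.lower()
    return [p for p in sorted(GAZETTEER, key=len, reverse=True) if p in low]
```

A real system would add the semantic layer described in the abstract, e.g. disambiguating matches and relating them through the ontology rather than plain substring lookup.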

  7. SEMANTIC LOCATION EXTRACTION FROM CROWDSOURCED DATA

    Directory of Open Access Journals (Sweden)

    S. Koswatte

    2016-06-01

    Full Text Available Crowdsourced Data (CSD) has recently received increased attention in many application areas including disaster management. Convenience of production and use, data currency and abundance are some of the key reasons for attracting this high interest. Conversely, quality issues like incompleteness, credibility and relevancy prevent the direct use of such data in important applications like disaster management. Moreover, location information availability of CSD is problematic as it remains very low in many crowdsourced platforms such as Twitter. Also, this recorded location is mostly related to the mobile device or user location and often does not represent the event location. In CSD, event location is discussed descriptively in the comments in addition to the recorded location (which is generated by means of the mobile device's GPS or mobile communication network). This study attempts to semantically extract the CSD location information with the help of an ontological Gazetteer and other available resources. 2011 Queensland flood tweets and Ushahidi Crowd Map data were semantically analysed to extract the location information with the support of the Queensland Gazetteer, which is converted to an ontological gazetteer, and a global gazetteer. Some preliminary results show that the use of ontologies and semantics can improve the accuracy of place name identification of CSD and the process of location information extraction.
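
    The descriptive-location idea in this record can be illustrated with a deliberately minimal sketch. All place names, coordinates and the GAZETTEER structure below are hypothetical toy data, not the Queensland ontological gazetteer: an event location is matched in the message text first, and the recorded device location is used only as a fallback.

    ```python
    # Toy gazetteer lookup: names and coordinates are illustrative only.
    GAZETTEER = {
        "ipswich": (-27.6146, 152.7609),
        "brisbane": (-27.4698, 153.0251),
        "toowoomba": (-27.5606, 151.9539),
    }

    def extract_event_location(text, device_location=None):
        """Return (place_name, coords) found in the text; otherwise fall back
        to the recorded device location, which may not be the event location."""
        lowered = text.lower()
        for name, coords in GAZETTEER.items():
            if name in lowered:
                return name, coords
        return None, device_location

    place, coords = extract_event_location(
        "Major flooding reported near Ipswich, roads cut",
        device_location=(-27.47, 153.03),
    )
    ```

    A production system would add fuzzy matching, disambiguation between identically named places, and the ontological relations (part-of, near) that the study uses to improve place-name identification.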

  8. Report: EPA Improved Its National Security Information Program, but Some Improvements Still Needed

    Science.gov (United States)

    Report #16-P-0196, June 2, 2016. The EPA will continue to improve its national security information program by completing information classification guides that can be used uniformly and consistently throughout the agency.

  9. High-speed web attack detection through extracting exemplars from HTTP traffic

    KAUST Repository

    Wang, Wei

    2011-01-01

    In this work, we propose an effective method for high-speed web attack detection by extracting exemplars from HTTP traffic before the detection model is built. The smaller set of exemplars keeps valuable information of the original traffic while it significantly reduces the size of the traffic, so that the detection remains effective and the detection efficiency improves. Affinity Propagation (AP) is employed to extract the exemplars from the HTTP traffic. K-Nearest Neighbor (K-NN) and one-class Support Vector Machine (SVM) are used for anomaly detection. To facilitate comparison, we also employ information gain to select key attributes (a.k.a. features) from the HTTP traffic for web attack detection. Two large sets of real HTTP traffic are used to validate our methods. The extensive test results show that AP-based exemplar extraction significantly improves the real-time performance of the detection compared to using all the HTTP traffic, and achieves a more robust detection performance than information gain based attribute selection for web attack detection. © 2011 ACM.
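
    The exemplar-then-detect pipeline can be sketched with toy data. Note that this sketch substitutes a greedy "leader" selection for Affinity Propagation (a simplification, not the authors' method) and scores anomalies with a plain k-NN distance:

    ```python
    import numpy as np

    def extract_exemplars(X, radius):
        """Greedy 'leader' selection: keep a point as an exemplar only if no
        existing exemplar lies within `radius`. A simplified stand-in for
        Affinity Propagation that likewise shrinks the traffic to a small
        representative set."""
        exemplars = []
        for x in X:
            if all(np.linalg.norm(x - e) > radius for e in exemplars):
                exemplars.append(x)
        return np.asarray(exemplars)

    def knn_anomaly_score(x, exemplars, k=3):
        """Mean distance to the k nearest exemplars; larger = more anomalous."""
        d = np.sort(np.linalg.norm(exemplars - x, axis=1))
        return d[:k].mean()

    rng = np.random.default_rng(0)
    normal = rng.normal(0.0, 0.1, size=(200, 4))        # dense 'normal' requests
    exemplars = extract_exemplars(normal, radius=0.15)  # far fewer than 200
    score_normal = knn_anomaly_score(normal[0], exemplars)
    score_attack = knn_anomaly_score(np.full(4, 3.0), exemplars)  # outlier
    ```

    Detection then runs against the exemplar set instead of the full traffic, which is what yields the real-time speedup the abstract reports.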

  10. The Application of Chinese High-Spatial Remote Sensing Satellite Image in Land Law Enforcement Information Extraction

    Science.gov (United States)

    Wang, N.; Yang, R.

    2018-04-01

    Chinese high-resolution (HR) remote sensing satellites have made a huge leap in the past decade. Commercial satellite datasets such as GF-1, GF-2 and ZY-3 have emerged in recent years; their panchromatic (PAN) image resolutions are 2 m, 1 m and 2.1 m, and their multispectral (MS) image resolutions are 8 m, 4 m and 5.8 m, respectively. Chinese HR satellite imagery can be downloaded free of charge for public welfare purposes, and local governments have begun to employ more professional technicians to improve traditional land management technology. This paper focused on analysing the actual requirements of the applications in government land law enforcement in Guangxi Autonomous Region. 66 counties in Guangxi Autonomous Region were selected for illegal land utilization spot extraction with fused Chinese HR images. The procedure contains: A. Define illegal land utilization spot types. B. Data collection: GF-1, GF-2, and ZY-3 datasets were acquired in the first half of 2016 and other auxiliary data were collected in 2015. C. Batch processing: HR images were preprocessed in batch through an ENVI/IDL tool. D. Illegal land utilization spot extraction by visual interpretation. E. Obtaining attribute data with an ArcGIS Geoprocessor (GP) model. F. Thematic mapping and surveying. Through analysis of the results from 42 counties, law enforcement officials found 1092 illegal land use spots and 16 suspected illegal mining spots. The results show that Chinese HR satellite images have great potential for feature information extraction and that the processing procedure is robust.

  11. Improvement of infrastructure for risk-informed regulation

    International Nuclear Information System (INIS)

    Muta, Hitoshi; Tanji, Junichi; Kondo, Keisuke; Uchida, Tsuyoshi; Ito, Tomomichi

    2011-01-01

    Improvement of the infrastructure of probabilistic safety assessment (PSA) is essential to the risk-informed regulation for nuclear power plants. JNES conducted an update of initiating event frequencies and improvement of the method for uncertainty analysis to enhance the technology bases of PSA in 2010. Furthermore, JNES improved the human reliability assessment method and the reliability assessment method for digital reactor protection systems. JNES estimated initiating event frequencies both for power and shutdown operation based on the recent operating experiences in NPPs of Japan using a hierarchical Bayesian method. As for improvement of the uncertainty analysis method, JNES conducted trial analysis using SOKC (State-Of-Knowledge Correlation) for representative PWR and BWR plants of Japan. The study on the advanced HRA method with an operator cognitive action model was conducted. The study on the reliability analysis method for digital reactor protection systems using the Bayesian Network Method was conducted. In order to ensure the quality of PSA, JNES studied requirements and methods for PSA peer review via the preparation of a peer review for the PSA of a representative Japanese BWR plant conducted by JNES. As an effort to develop the procedures of internal fire PSA and internal flooding PSA, trial analyses were conducted to grasp the risk level caused by fire and flooding in nuclear power plants. JNES participated in the OECD/NEA PRISME and FIRE projects to obtain the latest information and data to validate and improve the fire propagation analysis codes and the parameters for fire PSA. Furthermore, JNES studied schemes for endorsement and application in risk-informed regulation of PSA standards established by the Atomic Energy Society of Japan. (author)

  12. From Specific Information Extraction to Inferences: A Hierarchical Framework of Graph Comprehension

    Science.gov (United States)

    2004-09-01

    The skill to interpret information displayed in graphs is so important that the National Council of Teachers of Mathematics has created guidelines to ensure that students learn these skills (NCTM: Standards for Mathematics, 2003). These guidelines are based primarily on the extraction of specific information from graphs.

  13. The custodian administered research extract server: "improving the pipeline" in linked data delivery systems.

    Science.gov (United States)

    Eitelhuber, Tom; Davis, Geoff

    2014-01-01

    At Western Australia's Data Linkage Branch (DLB) the extraction of linked data has become increasingly complex over the past decade and classical methods of data delivery are unsuited to the larger extractions which have become the norm. The Custodian Administered Research Extract Server (CARES) is a fast, accurate and predictable approach to linked data extraction. The Data Linkage Branch (DLB) creates linkage keys within and between datasets. To comply with the separation principle, these keys are sent to applicable data collection agencies for extraction. Routing requests through multiple channels is inefficient and makes it hard to monitor work and predict delivery times. CARES was developed to address these shortcomings and involved ongoing consultation with the Custodians and staff of collections, plus challenges of hardware, programming, governance and security. The introduction of CARES has reduced the workload burden of linked data extractions, while improving the efficiency, stability and predictability of turnaround times. As the scope of a linkage system broadens, challenges in data delivery are inevitable. CARES overcomes multiple obstacles with no sacrifice to the integrity, confidentiality or security of data. CARES is a valuable component of linkage infrastructure that is operable at any scale and adaptable to many data environments.

  14. Improving Library and Information Services: Use of Appropriate ...

    African Journals Online (AJOL)

    Information Communication Technology in Nigerian Libraries. Murtala Aliyu ... The focus of this paper is on how to improve services in libraries and information centers in. Nigeria by ..... games, video and educational materials. Database on ...

  15. Improving life sciences information retrieval using semantic web technology.

    Science.gov (United States)

    Quan, Dennis

    2007-05-01

    The ability to retrieve relevant information is at the heart of every aspect of research and development in the life sciences industry. Information is often distributed across multiple systems and recorded in a way that makes it difficult to piece together the complete picture. Differences in data formats, naming schemes and network protocols amongst information sources, both public and private, must be overcome, and user interfaces not only need to be able to tap into these diverse information sources but must also assist users in filtering out extraneous information and highlighting the key relationships hidden within an aggregated set of information. The Semantic Web community has made great strides in proposing solutions to these problems, and many efforts are underway to apply Semantic Web techniques to the problem of information retrieval in the life sciences space. This article gives an overview of the principles underlying a Semantic Web-enabled information retrieval system: creating a unified abstraction for knowledge using the RDF semantic network model; designing semantic lenses that extract contextually relevant subsets of information; and assembling semantic lenses into powerful information displays. Furthermore, concrete examples of how these principles can be applied to life science problems including a scenario involving a drug discovery dashboard prototype called BioDash are provided.

  16. Nigella sativa EXTRACT IMPROVES SEMINIFEROUS TUBULE EPITHELIAL THICKNESS IN LEAD ACETATE-EXPOSED BALB/C MICE

    OpenAIRE

    Diana, Alis Nur; I’tishom, Reny; Sudjarwo, Sri Agus

    2017-01-01

    Lead that enters the body may increase the production of ROS (Reactive Oxygen Species), which can affect the reproductive system. Black cumin (Nigella sativa) extract contains the potent antioxidant thymoquinone, which may be used to suppress oxidative stress induced by lead in animal experiments. This study aimed to prove that black cumin (Nigella sativa) extract improves the thickness of the seminiferous tubule epithelium in Balb/c mice exposed to lead (Pb) acetate. This study used a post-test only cont...

  17. Libraries and E-Commerce: Improving Information Services and Beyond.

    Science.gov (United States)

    Harris, Lesley Ellen

    2000-01-01

    Explains e-commerce and discusses how it can be used by special libraries. Highlights include library goals; examples of successful uses of e-commerce; how e-commerce can improve information services, including access to information, new information resources, delivery of information, and broadening information markets; and developing an…

  18. Asan medical information system for healthcare quality improvement.

    Science.gov (United States)

    Ryu, Hyeon Jeong; Kim, Woo Sung; Lee, Jae Ho; Min, Sung Woo; Kim, Sun Ja; Lee, Yong Su; Lee, Young Ha; Nam, Sang Woo; Eo, Gi Seung; Seo, Sook Gyoung; Nam, Mi Hyun

    2010-09-01

    The purpose of this paper is to introduce the status of the Asan Medical Center (AMC) medical information system with respect to healthcare quality improvement. The Asan Medical Information System (AMIS) is projected to make AMC a completely electronic and digital information hospital. AMIS has played a role in improving health care quality based on the following measures: safety, effectiveness, patient-centeredness, timeliness, efficiency, privacy, and security. AMIS consisted of several distinctive systems: order communication system, electronic medical record, picture archiving communication system, clinical research information system, data warehouse, enterprise resource planning, IT service management system, and disaster recovery system. The most distinctive features of AMIS were the high alert-medication recognition & management system, the integrated and severity stratified alert system, the integrated patient monitoring system, the perioperative diabetic care monitoring and support system, and the clinical indicator management system. AMIS provides IT services for AMC, 7 affiliated hospitals and over 5,000 partner clinics, and was developed to improve healthcare services. The current challenge of AMIS is standards and interoperability. A global health IT strategy is needed to get through the current challenges and to provide new services as needed.

  19. Support patient search on pathology reports with interactive online learning based data extraction

    Directory of Open Access Journals (Sweden)

    Shuai Zheng

    2015-01-01

    Full Text Available Background: Structural reporting enables semantic understanding and prompt retrieval of clinical findings about patients. While synoptic pathology reporting provides templates for data entries, information in pathology reports remains primarily in narrative free text form. Extracting data of interest from narrative pathology reports could significantly improve the representation of the information and enable complex structured queries. However, manual extraction is tedious and error-prone, and automated tools are often constructed with a fixed training dataset and not easily adaptable. Our goal is to extract data from pathology reports to support advanced patient search with a highly adaptable semi-automated data extraction system, which can adjust and self-improve by learning from a user's interaction with minimal human effort. Methods: We have developed an online machine learning based information extraction system called IDEAL-X. With its graphical user interface, the system's data extraction engine automatically annotates values for users to review upon loading each report text. The system analyzes users' corrections regarding these annotations with online machine learning, and incrementally enhances and refines the learning model as reports are processed. The system also takes advantage of customized controlled vocabularies, which can be adaptively refined during the online learning process to further assist the data extraction. As the accuracy of automatic annotation improves over time, the effort of human annotation is gradually reduced. After all reports are processed, a built-in query engine can be applied to conveniently define queries based on extracted structured data. Results: We have evaluated the system with a dataset of anatomic pathology reports from 50 patients. Extracted data elements include demographic data, diagnosis, genetic marker, and procedure. The system achieves F-1 scores of around 95% for the majority of data elements.
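
    The propose-correct-relearn loop described here can be illustrated with a deliberately minimal sketch; the class, field names and vocabulary-based matching below are hypothetical stand-ins for IDEAL-X's online machine learning model:

    ```python
    import re
    from collections import defaultdict

    class OnlineExtractor:
        """Toy review-and-learn extractor: proposes values from a learned
        vocabulary and absorbs user corrections immediately."""

        def __init__(self):
            # learned mapping: field -> set of accepted values
            self.vocab = defaultdict(set)

        def annotate(self, field, report_text):
            """Propose a value using the vocabulary learned so far."""
            for value in self.vocab[field]:
                if re.search(re.escape(value), report_text, re.IGNORECASE):
                    return value
            return None  # nothing learned yet: the user must annotate

        def correct(self, field, value):
            """Incorporate a user correction into the model."""
            self.vocab[field].add(value)

    ex = OnlineExtractor()
    # First report: the model knows nothing, so the user annotates.
    assert ex.annotate("diagnosis", "Findings: melanoma in situ") is None
    ex.correct("diagnosis", "melanoma")
    # Later reports benefit from the correction automatically.
    found = ex.annotate("diagnosis", "consistent with MELANOMA, margins clear")
    ```

    As in the abstract, each correction shrinks the remaining annotation effort; the real system generalizes beyond exact vocabulary matches via its learning model.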

  20. An improved automated procedure for informal and temporary dwellings detection and enumeration, using mathematical morphology operators on VHR satellite data

    Science.gov (United States)

    Jenerowicz, Małgorzata; Kemper, Thomas

    2016-10-01

    Every year thousands of people are displaced by conflicts or natural disasters and often gather in large camps. Knowing how many people have gathered is crucial for an efficient relief operation. However, it is often difficult to collect exact information on the total population. This paper presents an improved morphological methodology for the estimation of dwelling structures located in several Internally Displaced Persons (IDP) Camps, based on Very High Resolution (VHR) multispectral satellite imagery with pixel sizes of 1 meter or less, including GeoEye-1, WorldView-2, QuickBird-2, Ikonos-2, Pléiades-A and Pléiades-B. The main topic of this paper is the enhancement of the approach through selection of the feature extraction algorithm, and the improvement and automation of pre-processing and results verification. For the purpose of informal and temporary dwelling extraction, high data quality has to be ensured. The pre-processing has been extended by including input data hierarchy level assignment and data fusion method selection and evaluation. The feature extraction algorithm follows the procedure presented in Jenerowicz, M., Kemper, T., 2011. Optical data are analysed in a cyclic approach comprising image segmentation and geometrical, textural and spectral class modeling aiming at camp area identification. The successive steps of morphological processing have been combined in one stand-alone application for automatic dwelling detection and enumeration. Actively implemented, these approaches can provide reliable and consistent results, independent of the imaging satellite type and different study site locations, providing decision support in emergency response for the humanitarian community such as the United Nations, the European Union and Non-Governmental relief organizations.
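
    A core building block of such morphological processing is the binary opening (erosion followed by dilation), which suppresses structures smaller than the structuring element. The following pure-NumPy sketch (toy binary image, square structuring element) illustrates the idea only, not the paper's full operator chain:

    ```python
    import numpy as np

    def erode(img, k=3):
        """Binary erosion with a k x k square structuring element."""
        pad = k // 2
        padded = np.pad(img, pad, constant_values=False)
        out = np.ones_like(img)
        for dy in range(-pad, pad + 1):
            for dx in range(-pad, pad + 1):
                out &= padded[pad + dy:pad + dy + img.shape[0],
                              pad + dx:pad + dx + img.shape[1]]
        return out

    def dilate(img, k=3):
        """Binary dilation with a k x k square structuring element."""
        pad = k // 2
        padded = np.pad(img, pad, constant_values=False)
        out = np.zeros_like(img)
        for dy in range(-pad, pad + 1):
            for dx in range(-pad, pad + 1):
                out |= padded[pad + dy:pad + dy + img.shape[0],
                              pad + dx:pad + dx + img.shape[1]]
        return out

    def opening(img, k=3):
        """Opening = erosion then dilation: removes structures smaller than k."""
        return dilate(erode(img, k), k)

    img = np.zeros((10, 10), dtype=bool)
    img[2:6, 2:6] = True   # a dwelling-sized bright structure
    img[8, 8] = True       # isolated single-pixel noise
    opened = opening(img)  # noise pixel removed, 4x4 structure preserved
    ```

    Dwelling enumeration would then label the connected components that survive the opening; the study additionally combines geometrical, textural and spectral class modeling.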

  1. In situ product removal in fermentation systems: improved process performance and rational extractant selection.

    Science.gov (United States)

    Dafoe, Julian T; Daugulis, Andrew J

    2014-03-01

    The separation of inhibitory compounds as they are produced in biotransformation and fermentation systems is termed in situ product removal (ISPR). This review examines recent ISPR strategies employing several classes of extractants including liquids, solids, gases, and combined extraction systems. Improvements achieved through the simple application of an auxiliary phase are tabulated and summarized to indicate the breadth of recent ISPR activities. Studies within the past 5 years that have highlighted and discussed "second phase" properties affecting fermentation performance are a particular focus of this review. ISPR, as a demonstrably effective processing strategy, continues to be widely adopted as more applications are explored; however, focus on the properties of extractants and their rational selection based on first-principle considerations will likely be key to successfully applying ISPR to more challenging target molecules.

  2. 5W1H Information Extraction with CNN-Bidirectional LSTM

    Science.gov (United States)

    Nurdin, A.; Maulidevi, N. U.

    2018-03-01

    In this work, information about who, did what, when, where, why, and how in Indonesian news articles was extracted by combining a Convolutional Neural Network and a Bidirectional Long Short-Term Memory network. A Convolutional Neural Network can learn semantically meaningful representations of sentences. A Bidirectional LSTM can analyze the relations among words in the sequence. We also use word2vec word embeddings for word representation. By combining these algorithms, we obtained an F-measure of 0.808. Our experiments show that CNN-BLSTM outperforms other shallow methods, namely IBk, C4.5, and Naïve Bayes, with F-measures of 0.655, 0.645, and 0.595, respectively.

  3. Information and technology: Improving food security in Uganda ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2014-06-23

    Jun 23, 2014 ... Information and technology: Improving food security in Uganda ... knowledge to make decisions about planting, harvesting, and managing livestock, but ... to be effective for minimizing risks and increasing agricultural productivity. ... In time, this network of information – made possible by digital technology ...

  4. Information extraction and knowledge graph construction from geoscience literature

    Science.gov (United States)

    Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo; Chen, Jingwen

    2018-03-01

    Geoscience literature published online is an important part of open data, and brings both challenges and opportunities for data analysis. Compared with studies of numerical geoscience data, there are limited works on information extraction and knowledge discovery from textual geoscience data. This paper presents a workflow and a few empirical case studies for that topic, with a focus on documents written in Chinese. First, we set up a hybrid corpus combining the generic and geology terms from geology dictionaries to train Chinese word segmentation rules of the Conditional Random Fields model. Second, we used the word segmentation rules to parse documents into individual words, and removed the stop-words from the segmentation results to get a corpus constituted of content-words. Third, we used a statistical method to analyze the semantic links between content-words, and we selected the chord and bigram graphs to visualize the content-words and their links as nodes and edges in a knowledge graph, respectively. The resulting graph presents a clear overview of key information in an unstructured document. This study proves the usefulness of the designed workflow, and shows the potential of leveraging natural language processing and knowledge graph technologies for geoscience.
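
    The bigram step of this workflow can be illustrated with a minimal sketch (toy English corpus and stop-word list; the study works on CRF-segmented Chinese text): adjacent content-word pairs become weighted edges of the knowledge graph.

    ```python
    from collections import Counter

    STOP_WORDS = {"the", "of", "in", "is", "and", "a", "are"}

    def bigram_edges(sentences):
        """Return a Counter of (word_i, word_{i+1}) edges over content words,
        i.e. weighted edges of a bigram knowledge graph."""
        edges = Counter()
        for sentence in sentences:
            words = [w for w in sentence.lower().split() if w not in STOP_WORDS]
            edges.update(zip(words, words[1:]))
        return edges

    corpus = [  # hypothetical toy documents
        "porphyry copper deposits form in magmatic arcs",
        "copper deposits in magmatic arcs are common",
    ]
    edges = bigram_edges(corpus)
    ```

    Visualizing `edges` as nodes and weighted links yields the chord/bigram graphs the paper uses to give an overview of an unstructured document.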

  5. Corn silk extract improves cholesterol metabolism in C57BL/6J mouse fed high-fat diets.

    Science.gov (United States)

    Cha, Jae Hoon; Kim, Sun Rim; Kang, Hyun Joong; Kim, Myung Hwan; Ha, Ae Wha; Kim, Woo Kyoung

    2016-10-01

    Corn silk (CS) extract contains large amounts of maysin, which is a major flavonoid in CS. However, studies regarding the effect of CS extract on cholesterol metabolism are limited. Therefore, the purpose of this study was to determine the effect of CS extract on cholesterol metabolism in C57BL/6J mice fed high-fat diets. The normal-fat group was fed a 7% fat diet, the high-fat (HF) group a 25% fat diet, and the high-fat with corn silk (HFCS) group was orally administered CS extract (100 mg/kg body weight) daily in addition to the high-fat diet. Serum and hepatic levels of total lipids, triglycerides, and total cholesterol as well as serum free fatty acid, glucose, and insulin levels were determined. The mRNA expression levels of acyl-CoA:cholesterol acyltransferase (ACAT), cholesterol 7-alpha hydroxylase (CYP7A1), farnesoid X receptor (FXR), lecithin cholesterol acyltransferase (LCAT), low-density lipoprotein receptor, 3-hydroxy-3-methylglutaryl-coenzyme A reductase (HMG-CoA reductase), adiponectin, leptin, and tumor necrosis factor α were determined. Oral administration of CS extract with HF improved serum glucose and insulin levels as well as attenuated HF-induced fatty liver. CS extract significantly elevated mRNA expression levels of adipocytokines and reduced mRNA expression levels of HMG-CoA reductase, ACAT, and FXR. The mRNA expression levels of CYP7A1 and LCAT between the HF group and HFCS group were not statistically different. CS extract supplementation with a high-fat diet improves levels of adipocytokine secretion and glucose homeostasis. CS extract is also effective in decreasing the regulatory pool of hepatic cholesterol, in line with decreased blood and hepatic levels of cholesterol through modulation of mRNA expression levels of HMG-CoA reductase, ACAT, and FXR.

  6. Skin tumor area extraction using an improved dynamic programming approach.

    Science.gov (United States)

    Abbas, Qaisar; Celebi, M E; Fondón García, Irene

    2012-05-01

    Border (B) description of melanoma and other pigmented skin lesions is one of the most important tasks for the clinical diagnosis of dermoscopy images using the ABCD rule. For an accurate description of the border, there must be an effective skin tumor area extraction (STAE) method. However, this task is complicated due to uneven illumination, artifacts present in the lesions and smooth areas or fuzzy borders of the desired regions. In this paper, a novel STAE algorithm based on improved dynamic programming (IDP) is presented. The STAE technique consists of the following four steps: color space transform, pre-processing, rough tumor area detection and refinement of the segmented area. The procedure is performed in the CIE L*a*b* color space, which is approximately uniform and is therefore related to the dermatologist's perception. After pre-processing the skin lesions to reduce artifacts, the DP algorithm is improved by introducing a local cost function, which is based on color and texture weights. The STAE method is tested on a total of 100 dermoscopic images. In order to compare the performance of STAE with other state-of-the-art algorithms, various statistical measures based on dermatologist-drawn borders are utilized as a ground truth. The proposed method outperforms the others with a sensitivity of 96.64%, a specificity of 98.14% and an error probability of 5.23%. The results demonstrate that this STAE method by IDP is an effective solution when compared with other state-of-the-art segmentation techniques. The proposed method can accurately extract tumor borders in dermoscopy images. © 2011 John Wiley & Sons A/S.
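
    The dynamic-programming core of border tracing can be sketched as a minimum-cost path problem. This toy example uses a plain seam-style DP on a synthetic cost image, not the paper's color-and-texture local cost function:

    ```python
    import numpy as np

    def min_cost_path(cost):
        """Cheapest top-to-bottom path; each step moves to the same, left, or
        right column (classic seam-style dynamic programming)."""
        h, w = cost.shape
        acc = cost.astype(float).copy()
        for i in range(1, h):
            left = np.r_[np.inf, acc[i - 1, :-1]]
            right = np.r_[acc[i - 1, 1:], np.inf]
            acc[i] += np.minimum(acc[i - 1], np.minimum(left, right))
        path = [int(np.argmin(acc[-1]))]          # backtrack from the bottom
        for i in range(h - 2, -1, -1):
            j = path[-1]
            lo, hi = max(0, j - 1), min(w, j + 2)
            path.append(lo + int(np.argmin(acc[i, lo:hi])))
        return path[::-1]

    cost = np.ones((4, 5))
    cost[:, 2] = 0.0               # a cheap 'lesion border' down column 2
    path = min_cost_path(cost)     # the path follows the low-cost column
    ```

    In the paper, the local cost at each pixel would combine color and texture weights instead of this synthetic uniform cost.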

  7. Extracting the Textual and Temporal Structure of Supercomputing Logs

    Energy Technology Data Exchange (ETDEWEB)

    Jain, S; Singh, I; Chandra, A; Zhang, Z; Bronevetsky, G

    2009-05-26

    Supercomputers are prone to frequent faults that adversely affect their performance, reliability and functionality. System logs collected on these systems are a valuable resource of information about their operational status and health. However, their massive size, complexity, and lack of standard format makes it difficult to automatically extract information that can be used to improve system management. In this work we propose a novel method to succinctly represent the contents of supercomputing logs, by using textual clustering to automatically find the syntactic structures of log messages. This information is used to automatically classify messages into semantic groups via an online clustering algorithm. Further, we describe a methodology for using the temporal proximity between groups of log messages to identify correlated events in the system. We apply our proposed methods to two large, publicly available supercomputing logs and show that our technique features nearly perfect accuracy for online log-classification and extracts meaningful structural and temporal message patterns that can be used to improve the accuracy of other log analysis techniques.
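
    The first step, recovering the syntactic structure of log messages, can be approximated by masking variable fields and grouping messages by the resulting template. This is a minimal sketch with toy log lines and simple regex masks, standing in for the paper's textual clustering:

    ```python
    import re
    from collections import defaultdict

    def template(message):
        """Mask variable fields (hex ids, numbers) to recover the message's
        syntactic structure."""
        msg = re.sub(r"0x[0-9a-fA-F]+", "<HEX>", message)
        return re.sub(r"\d+", "<NUM>", msg)

    def cluster_by_template(messages):
        """Group messages sharing the same syntactic template."""
        groups = defaultdict(list)
        for m in messages:
            groups[template(m)].append(m)
        return groups

    logs = [  # hypothetical log lines
        "node 17: ECC error at 0xdeadbeef",
        "node 342: ECC error at 0x1f2e",
        "job 9 finished in 120 s",
    ]
    groups = cluster_by_template(logs)
    ```

    The semantic grouping and temporal-proximity correlation described in the abstract would then operate on these template groups rather than on raw messages.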

  8. Extraction of neutron spectral information from Bonner-Sphere data

    CERN Document Server

    Haney, J H; Zaidins, C S

    1999-01-01

    We have extended a least-squares method of extracting neutron spectral information from Bonner-Sphere data which was previously developed by Zaidins et al. (Med. Phys. 5 (1978) 42). A pulse-height analysis with background stripping is employed which provided a more accurate count rate for each sphere. Newer response curves by Mares and Schraube (Nucl. Instr. and Meth. A 366 (1994) 461) were included for the moderating spheres and the bare detector which comprise the Bonner spectrometer system. Finally, the neutron energy spectrum of interest was divided using the philosophy of fuzzy logic into three trapezoidal regimes corresponding to slow, moderate, and fast neutrons. Spectral data was taken using a PuBe source in two different environments and the analyzed data is presented for these cases as slow, moderate, and fast neutron fluences. (author)
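
    The least-squares unfolding idea can be sketched with a toy example: given a response matrix relating group fluences to sphere count rates, solve for the fluences that best reproduce the measurements. All numbers below are illustrative, not real Bonner-sphere response data:

    ```python
    import numpy as np

    # Toy response matrix R[i, j]: count rate of sphere i per unit fluence in
    # energy group j (slow, moderate, fast). Illustrative values only; the
    # real method uses measured response curves.
    R = np.array([
        [0.9, 0.4, 0.1],   # bare detector: most sensitive to slow neutrons
        [0.3, 0.8, 0.4],   # small moderating sphere
        [0.1, 0.3, 0.9],   # large moderating sphere: fast neutrons
        [0.5, 0.6, 0.2],   # a fourth sphere makes the system overdetermined
    ])

    true_fluence = np.array([2.0, 1.0, 3.0])   # slow, moderate, fast
    counts = R @ true_fluence                  # simulated noise-free count rates

    # Least-squares unfolding: the fluence vector minimising ||R f - counts||^2
    fluence, *_ = np.linalg.lstsq(R, counts, rcond=None)
    ```

    With real, noisy count rates the fit no longer reproduces the data exactly, which is why the authors combine pulse-height background stripping with the trapezoidal (fuzzy) energy regimes described above.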

  9. Agricultural Library Information Retrieval Based on Improved Semantic Algorithm

    OpenAIRE

    Meiling , Xie

    2014-01-01

    International audience; To support users to quickly access information they need from the agricultural library’s vast information and to improve the low intelligence query service, a model for intelligent library information retrieval was constructed. The semantic web mode was introduced and the information retrieval framework was designed. The model structure consisted of three parts: Information data integration, user interface and information retrieval match. The key method supporting retr...

  10. Three-dimensional binding sites volume assessment during cardiac pacing lead extraction

    Directory of Open Access Journals (Sweden)

    Bich Lien Nguyen

    2015-07-01

    Conclusions: Real-time 3D binding sites assessment is feasible and improves transvenous lead extraction outcomes. Its role as complementary information requires extensive validation, and it might be beneficial for a tailored strategy.

  11. Information Extraction for Clinical Data Mining: A Mammography Case Study.

    Science.gov (United States)

    Nassif, Houssam; Woods, Ryan; Burnside, Elizabeth; Ayvaci, Mehmet; Shavlik, Jude; Page, David

    2009-01-01

    Breast cancer is the leading cause of cancer mortality in women between the ages of 15 and 54. During mammography screening, radiologists use a strict lexicon (BI-RADS) to describe and report their findings. Mammography records are then stored in a well-defined database format (NMD). Lately, researchers have applied data mining and machine learning techniques to these databases. They successfully built breast cancer classifiers that can help in early detection of malignancy. However, the validity of these models depends on the quality of the underlying databases. Unfortunately, most databases suffer from inconsistencies, missing data, inter-observer variability and inappropriate term usage. In addition, many databases are not compliant with the NMD format and/or solely consist of text reports. BI-RADS feature extraction from free text and consistency checks between recorded predictive variables and text reports are crucial to addressing this problem. We describe a general scheme for concept information retrieval from free text given a lexicon, and present a BI-RADS features extraction algorithm for clinical data mining. It consists of a syntax analyzer, a concept finder and a negation detector. The syntax analyzer preprocesses the input into individual sentences. The concept finder uses a semantic grammar based on the BI-RADS lexicon and the experts' input. It parses sentences detecting BI-RADS concepts. Once a concept is located, a lexical scanner checks for negation. Our method can handle multiple latent concepts within the text, filtering out ultrasound concepts. On our dataset, our algorithm achieves 97.7% precision, 95.5% recall and an F1-score of 0.97. It outperforms manual feature extraction at the 5% statistical significance level.
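
    The syntax analyzer / concept finder / negation detector pipeline can be sketched as follows; the lexicon, negation cues and report text are toy stand-ins for the BI-RADS lexicon and the paper's expert-derived semantic grammar:

    ```python
    import re

    # Toy lexicon standing in for BI-RADS terms; the real lexicon is far larger.
    LEXICON = {"mass", "calcification", "asymmetry"}
    NEGATIONS = {"no", "without", "absent"}

    def extract_concepts(report):
        """Split into sentences (syntax analyzer), find lexicon concepts
        (concept finder), and flag a concept as negated if a negation cue
        appears earlier in the same sentence (negation detector)."""
        findings = []
        for sentence in re.split(r"[.;]", report.lower()):
            negated = False
            for w in sentence.split():
                if w in NEGATIONS:
                    negated = True
                elif w.strip(",") in LEXICON:
                    findings.append((w.strip(","), negated))
        return findings

    report = "There is a spiculated mass; no suspicious calcification."
    found = extract_concepts(report)
    ```

    The published algorithm additionally scopes negation more carefully and filters out ultrasound concepts, which this sketch does not attempt.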

  12. Cross document ontology based information for multimedia retrieval

    NARCIS (Netherlands)

    Reidsma, Dennis; Kuper, Jan; Declerck, T.; Saggion, H.; Cunningham, H.; Ganter, B.; de Moor, A.

    2003-01-01

    This paper describes the MUMIS project, which applies ontology based Information Extraction to improve the results of Information Retrieval in multimedia archives. It makes use of a domain specific ontology, multilingual lexicons and reasoning algorithms to automatically create a semantic annotation

  13. Tribulus terrestris Extract Improves Human Sperm Parameters In Vitro

    Science.gov (United States)

    Khaleghi, Sara; Bakhtiari, Mitra; Asadmobini, Atefeh; Esmaeili, Farzane

    2016-01-01

    Objective. The objective of the present study was to investigate the effects of direct addition of Tribulus terrestris extract on human sperm parameters. Design. Semen specimens from 40 healthy male volunteers were divided into 4 groups: one group received no treatment (control group) while the others were incubated with 20, 40, and 50 µg/mL of T terrestris extract (experimental groups). Motility, viability, and DNA fragmentation were assessed in all groups. Results. The incubation of human semen with 40 and 50 μg/mL of T terrestris extract significantly enhanced total sperm motility, number of progressive motile spermatozoa, and curvilinear velocity over 60 to 120 minutes’ holding time (P terrestris extract (P terrestris extract to human sperm could affect male fertility capacity. PMID:27694560

  14. Tribulus terrestris Extract Improves Human Sperm Parameters In Vitro.

    Science.gov (United States)

    Khaleghi, Sara; Bakhtiari, Mitra; Asadmobini, Atefeh; Esmaeili, Farzane

    2016-09-30

    The objective of the present study was to investigate the effects of direct addition of Tribulus terrestris extract on human sperm parameters. Semen specimens from 40 healthy male volunteers were divided into 4 groups: one group received no treatment (control group) while the others were incubated with 20, 40, and 50 µg/mL of T terrestris extract (experimental groups). Motility, viability, and DNA fragmentation were assessed in all groups. The incubation of human semen with 40 and 50 μg/mL of T terrestris extract significantly enhanced total sperm motility, number of progressive motile spermatozoa, and curvilinear velocity over 60 to 120 minutes' holding time (P terrestris extract (P terrestris extract to human sperm could affect male fertility capacity. © The Author(s) 2016.

  15. Overview of image processing tools to extract physical information from JET videos

    Science.gov (United States)

    Craciunescu, T.; Murari, A.; Gelfusa, M.; Tiseanu, I.; Zoita, V.; EFDA Contributors, JET

    2014-11-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the
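
    The optical-flow methodology mentioned above can be illustrated with a minimal global Lucas-Kanade estimate, a standard technique; the JET tools are far more elaborate, and the synthetic blob below is a stand-in, not plasma imagery:

```python
import numpy as np

def lucas_kanade_flow(frame1, frame2):
    """Global least-squares solution of Ix*u + Iy*v + It = 0 over the frame."""
    Iy, Ix = np.gradient(frame1.astype(float))   # gradients along rows (y), cols (x)
    It = frame2.astype(float) - frame1.astype(float)
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic test: a smooth blob shifted one pixel to the right between frames.
x, y = np.meshgrid(np.arange(64.0), np.arange(64.0))
frame1 = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 50.0)
frame2 = np.exp(-((x - 33) ** 2 + (y - 32) ** 2) / 50.0)
u, v = lucas_kanade_flow(frame1, frame2)
print(u, v)   # u close to 1 (one-pixel rightward shift), v close to 0
```

    Real trackers solve this per window and add pyramids for large motions; the compressed-domain variant in the paper reuses MPEG motion vectors instead of recomputing gradients.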

  16. Overview of image processing tools to extract physical information from JET videos

    International Nuclear Information System (INIS)

    Craciunescu, T; Tiseanu, I; Zoita, V; Murari, A; Gelfusa, M

    2014-01-01

    In magnetic confinement nuclear fusion devices such as JET, the last few years have witnessed a significant increase in the use of digital imagery, not only for the surveying and control of experiments, but also for the physical interpretation of results. More than 25 cameras are routinely used for imaging on JET in the infrared (IR) and visible spectral regions. These cameras can produce up to tens of Gbytes per shot and their information content can be very different, depending on the experimental conditions. However, the relevant information about the underlying physical processes is generally of much reduced dimensionality compared to the recorded data. The extraction of this information, which allows full exploitation of these diagnostics, is a challenging task. The image analysis consists, in most cases, of inverse problems which are typically ill-posed mathematically. The typology of objects to be analysed is very wide, and usually the images are affected by noise, low levels of contrast, low grey-level in-depth resolution, reshaping of moving objects, etc. Moreover, the plasma events have time constants of ms or tens of ms, which imposes tough conditions for real-time applications. On JET, in the last few years new tools and methods have been developed for physical information retrieval. The methodology of optical flow has allowed, under certain assumptions, the derivation of information about the dynamics of video objects associated with different physical phenomena, such as instabilities, pellets and filaments. The approach has been extended in order to approximate the optical flow within the MPEG compressed domain, allowing the manipulation of the large JET video databases and, in specific cases, even real-time data processing. The fast visible camera may provide new information that is potentially useful for disruption prediction. A set of methods, based on the extraction of structural information from the visual scene, have been developed for the

  17. Extraction of prospecting information of uranium deposit based on high spatial resolution satellite data. Taking the Bashibulake region as an example

    International Nuclear Information System (INIS)

    Yang Xu; Liu Dechang; Zhang Jielin

    2008-01-01

    In this study, the significance and content of uranium-deposit prospecting information are described. Quickbird high spatial resolution satellite data are used to extract prospecting information for the uranium deposit in the Bashibulake area in the northern Tarim Basin. Using appropriate image processing methods, information on the ore-bearing bed, ore-controlling structures and mineralized alteration has been extracted. The results show high consistency with the field survey. The aim of this study is to explore the practicability of high spatial resolution satellite data for mineral prospecting, and to broaden approaches to prospecting in similar areas. (authors)

  18. Improving Access to Transit Through Crowdsourced Information

    Science.gov (United States)

    2017-11-01

    The purpose of this research was to facilitate the ongoing collection of information from the public about potential areas of multimodal service and infrastructure improvements and easily share these problems with transit agencies, departments of tra...

  19. Sulforaphane-rich broccoli sprout extract improves hepatic abnormalities in male subjects

    Science.gov (United States)

    Kikuchi, Masahiro; Ushida, Yusuke; Shiozawa, Hirokazu; Umeda, Rumiko; Tsuruya, Kota; Aoki, Yudai; Suganuma, Hiroyuki; Nishizaki, Yasuhiro

    2015-01-01

    AIM: To evaluate the effects of dietary supplementation of sulforaphane (SF)-rich broccoli sprout (BS) extract on hepatic abnormalities in Japanese male participants. METHODS: In a randomized, placebo-controlled, double-blind trial, male participants with fatty liver received either BS capsules containing glucoraphanin [GR; a precursor of SF (n = 24)] or placebo (n = 28) for 2 mo. Liver function markers, serum levels of aspartate and alanine aminotransferases (AST and ALT, respectively) and γ-glutamyl transpeptidase (γ-GTP) and an oxidative stress marker, urinary levels of 8-hydroxydeoxyguanosine (8-OHdG), were measured and compared in participants before and after the trial period. In an animal model, chronic liver failure was induced in Sprague-Dawley rats by successive intraperitoneal injection with N-nitrosodimethylamine (NDMA) for 4 wk. Concomitantly, rats received AIN-76 diets supplemented with or without BS extract. Thereafter, rats were sacrificed, and their sera and livers were collected to measure serum liver function markers and hepatic thiobarbituric acid reactive substances (TBARS) levels and hepatic glutathione S-transferase (GST) activity, a prototypical phase 2 antioxidant enzyme. RESULTS: Dietary supplementation with BS extract containing SF precursor GR for 2 mo significantly decreased serum levels of liver function markers, ALT [median (interquartile range), before: 54.0 (34.5-79.0) vs after supplementation: 48.5 (33.3-65.3) IU/L, P NDMA-induced chronic liver failure in rats, which was attributable to the suppression of the increase in TBARS through induction of hepatic phase 2 antioxidant enzymes including hepatic GST (86.6 ± 95.2 vs 107.8 ± 7.7 IU/g, P < 0.01). CONCLUSION: Dietary supplementation with BS extract containing the SF precursor GR is likely to be highly effective in improving liver function through reduction of oxidative stress. PMID:26604653

  20. Extracting Low-Frequency Information from Time Attenuation in Elastic Waveform Inversion

    Science.gov (United States)

    Guo, Xuebao; Liu, Hong; Shi, Ying; Wang, Weihong

    2017-03-01

    Low-frequency information is crucial for recovering background velocity, but the lack of low-frequency information in field data makes inversion impractical without accurate initial models. Laplace-Fourier domain waveform inversion can recover a smooth model from real data without low-frequency information, which can be used for subsequent inversion as an ideal starting model. In general, it also starts with low frequencies and includes higher frequencies at later inversion stages, while the difference is that its ultralow frequency information comes from the Laplace-Fourier domain. Meanwhile, a direct implementation of the Laplace-transformed wavefield using frequency domain inversion is also very convenient. However, because broad frequency bands are often used in the pure time domain waveform inversion, it is difficult to extract the wavefields dominated by low frequencies in this case. In this paper, low-frequency components are constructed by introducing time attenuation into the recorded residuals, and the rest of the method is identical to the traditional time domain inversion. Time windowing and frequency filtering are also applied to mitigate the ambiguity of the inverse problem. Therefore, we can start at low frequencies and move to higher frequencies. The experiment shows that the proposed method can achieve a good inversion result in the presence of a linear initial model and records without low-frequency information.
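
    The core trick, damping the residual in time to synthesize low-frequency content, can be demonstrated on a toy trace. The 30 Hz test signal and the damping constant below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

# Damping a residual r(t) with exp(-alpha*t) injects low-frequency content
# that the raw, band-limited data lack (the Laplace-Fourier damping idea).
dt = 0.004
t = np.arange(0.0, 2.0, dt)
residual = np.sin(2 * np.pi * 30.0 * t)   # band-limited residual: no energy < 5 Hz

alpha = 10.0                               # time-attenuation constant, 1/s
damped = residual * np.exp(-alpha * t)

freqs = np.fft.rfftfreq(t.size, dt)

def low_freq_fraction(signal, cutoff=5.0):
    """Fraction of spectral energy below the cutoff frequency."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    return float(spec[freqs < cutoff].sum() / spec.sum())

print(low_freq_fraction(residual), low_freq_fraction(damped))
```

    The damped trace carries a measurable fraction of its energy below 5 Hz, whereas the raw sinusoid carries essentially none; an inversion driven by the damped residual can therefore begin at frequencies absent from the recording.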

  1. PKDE4J: Entity and relation extraction for public knowledge discovery.

    Science.gov (United States)

    Song, Min; Kim, Won Chul; Lee, Dahee; Heo, Go Eun; Kang, Keun Young

    2015-10-01

    Due to an enormous number of scientific publications that cannot be handled manually, there is a rising interest in text-mining techniques for automated information extraction, especially in the biomedical field. Such techniques provide effective means of information search, knowledge discovery, and hypothesis generation. Most previous studies have primarily focused on the design and performance improvement of either named entity recognition or relation extraction. In this paper, we present PKDE4J, a comprehensive text-mining system that integrates dictionary-based entity extraction and rule-based relation extraction in a highly flexible and extensible framework. Starting with the Stanford CoreNLP, we developed the system to cope with multiple types of entities and relations. The system also has fairly good performance in terms of accuracy as well as the ability to configure text-processing components. We demonstrate its competitive performance by evaluating it on many corpora and found that it surpasses existing systems with average F-measures of 85% for entity extraction and 81% for relation extraction. Copyright © 2015 Elsevier Inc. All rights reserved.
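
    The combination of dictionary-based entity extraction and rule-based relation extraction can be sketched as follows. The dictionary and the single "activates" rule are invented for illustration and are far simpler than PKDE4J's configurable components:

```python
import re

# Toy dictionary NER + one hand-written relation rule (both invented).
ENTITY_DICT = {"TP53": "Gene", "MDM2": "Gene", "apoptosis": "Process"}

def extract_entities(sentence):
    """Dictionary lookup: every known term found in the sentence."""
    return [(term, typ) for term, typ in ENTITY_DICT.items() if term in sentence]

def extract_relations(sentence):
    """Rule-based pass: emit (A, activates, B) when the pattern matches."""
    relations = []
    entities = [e for e, _ in extract_entities(sentence)]
    for a in entities:
        for b in entities:
            if a != b and re.search(rf"{a}\s+activates\s+{b}", sentence):
                relations.append((a, "activates", b))
    return relations

s = "MDM2 activates TP53 degradation, suppressing apoptosis."
print(extract_entities(s))
print(extract_relations(s))
```

    Production systems normalize surface forms, handle overlapping matches, and run rules over parse trees rather than raw strings, but the two-stage entity-then-relation design is the same.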

  2. Implementing and Sustaining School Improvement. The Informed Educator Series

    Science.gov (United States)

    Protheroe, Nancy

    2011-01-01

    This "Informed Educator" examines research-proven strategies for implementing and sustaining school improvement by looking at the key elements of the process, enabling conditions for improvement, issues of school culture, and implementation. It also looks at school turnarounds and how to sustain school improvement once reforms are implemented.

  3. Comparison of methods of extracting information for meta-analysis of observational studies in nutritional epidemiology

    Directory of Open Access Journals (Sweden)

    Jong-Myon Bae

    2016-01-01

    OBJECTIVES: A common method for conducting a quantitative systematic review (QSR) of observational studies in nutritional epidemiology is the “highest versus lowest intake” method (HLM), in which only the information concerning the effect size (ES) of the highest category of a food item is collected, on the basis of its lowest category. However, in the interval collapsing method (ICM), a method suggested to enable maximum utilization of all available information, the ES information is collected by collapsing all categories into a single category. This study aimed to compare the ES and summary effect size (SES) between the HLM and ICM. METHODS: A QSR evaluating citrus fruit intake and risk of pancreatic cancer and calculating the SES by using the HLM was selected. The ES and SES were estimated by performing a meta-analysis using the fixed-effect model. The directionality and statistical significance of the ES and SES were used as criteria for determining the concordance between the HLM and ICM outcomes. RESULTS: No significant differences were observed in the directionality of SES extracted by using the HLM or ICM. The application of the ICM, which uses a broader information base, yielded more consistent ES and SES, and narrower confidence intervals, than the HLM. CONCLUSIONS: The ICM is advantageous over the HLM owing to its higher statistical accuracy in extracting information for QSR in nutritional epidemiology. The application of the ICM should hence be recommended for future studies.
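
    The fixed-effect model referred to above pools effect sizes by inverse-variance weighting. A minimal sketch follows, using invented per-study log relative risks rather than the citrus-fruit data:

```python
import math

# Fixed-effect (inverse-variance) pooling: each study is weighted by 1/SE^2.
def fixed_effect_pool(effects, ses):
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

log_rr = [-0.22, -0.11, -0.36]     # per-study log relative risks (illustrative)
se = [0.10, 0.08, 0.15]
pooled, pooled_se = fixed_effect_pool(log_rr, se)
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(pooled, ci)
```

    The HLM-versus-ICM question in the abstract is about which ES values feed this pooling step: one extreme contrast per study, or a contrast obtained by collapsing all intake categories.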

  4. Improved side information generation for distributed video coding

    DEFF Research Database (Denmark)

    Huang, Xin; Forchhammer, Søren

    2008-01-01

    As a new coding paradigm, distributed video coding (DVC) deals with lossy source coding using side information to exploit the statistics at the decoder to reduce computational demands at the encoder. The performance of DVC highly depends on the quality of side information. With a better side...... information generation method, fewer bits will be requested from the encoder and more reliable decoded frames will be obtained. In this paper, a side information generation method is introduced to further improve the rate-distortion (RD) performance of transform domain distributed video coding. This algorithm...

  5. A COMPARATIVE ANALYSIS OF WEB INFORMATION EXTRACTION TECHNIQUES DEEP LEARNING vs. NAÏVE BAYES vs. BACK PROPAGATION NEURAL NETWORKS IN WEB DOCUMENT EXTRACTION

    Directory of Open Access Journals (Sweden)

    J. Sharmila

    2016-01-01

    Web mining research is becoming more important these days because a large amount of information is managed through the web, and web usage is expanding in an uncontrolled way; a dedicated framework is required for handling such a large amount of information in the web space. Web mining is divided into three major areas: web content mining, web usage mining and web structure mining. Tak-Lam Wong proposed a web content mining methodology based on Bayesian Networks (BN), learning to separate web data and discover characteristics using a Bayesian approach. Inspired by that work, we propose a web content mining methodology based on a Deep Learning algorithm. Deep Learning is of interest over BN because BN does not involve the kind of learning architecture design considered in the proposed system. The main objective of this investigation is web document extraction and analysis using different classification algorithms. This work extracts data from web URLs and compares three classification algorithms: a Deep Learning algorithm, a Bayesian algorithm and a BPNN algorithm. Deep Learning is a powerful set of techniques for learning in neural networks, applied in areas such as computer vision, speech recognition, natural language processing and biometrics; it is a straightforward classification technique that can be applied to subsets of large fields and requires less time for classification. Naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong independence assumptions between the features. The BPNN algorithm is then used for classification. Initially, the training and testing dataset contains many URLs, from which we extract the content.
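
    The Naive Bayes baseline mentioned above can be sketched as a tiny multinomial bag-of-words classifier with Laplace smoothing; the toy training documents are invented, not the study's web corpus:

```python
from collections import Counter, defaultdict
import math

def train_nb(docs):
    """docs: list of (word_list, label). Returns counts needed for prediction."""
    class_counts, word_counts, vocab = Counter(), defaultdict(Counter), set()
    for words, label in docs:
        class_counts[label] += 1
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab

def predict_nb(model, words):
    """Pick the class maximizing log prior + sum of smoothed log likelihoods."""
    class_counts, word_counts, vocab = model
    total_docs = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label in class_counts:
        lp = math.log(class_counts[label] / total_docs)
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in words:
            lp += math.log((word_counts[label][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [(["cheap", "pills", "buy"], "spam"), (["meeting", "agenda"], "ham"),
        (["buy", "now"], "spam"), (["project", "meeting"], "ham")]
model = train_nb(docs)
print(predict_nb(model, ["buy", "pills"]))   # -> spam
```

    The independence assumption the abstract mentions is exactly the per-word factorization in the inner loop; the deep-learning and BPNN alternatives replace it with learned representations.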

  6. Zone analysis in biology articles as a basis for information extraction.

    Science.gov (United States)

    Mizuta, Yoko; Korhonen, Anna; Mullen, Tony; Collier, Nigel

    2006-06-01

    In the field of biomedicine, an overwhelming amount of experimental data has become available as a result of the high throughput of research in this domain. The amount of results reported has now grown beyond the limits of what can be managed by manual means. This makes it increasingly difficult for the researchers in this area to keep up with the latest developments. Information extraction (IE) in the biological domain aims to provide an effective automatic means to dynamically manage the information contained in archived journal articles and abstract collections and thus help researchers in their work. However, while considerable advances have been made in certain areas of IE, pinpointing and organizing factual information (such as experimental results) remains a challenge. In this paper we propose tackling this task by incorporating into IE information about rhetorical zones, i.e. classification of spans of text in terms of argumentation and intellectual attribution. As the first step towards this goal, we introduce a scheme for annotating biological texts for rhetorical zones and provide a qualitative and quantitative analysis of the data annotated according to this scheme. We also discuss our preliminary research on automatic zone analysis, and its incorporation into our IE framework.

  7. Aqueous extract of lavender (Lavandula angustifolia) improves the spatial performance of a rat model of Alzheimer's disease.

    Science.gov (United States)

    Kashani, Masoud Soheili; Tavirani, Mostafa Rezaei; Talaei, Sayyed Alireza; Salami, Mahmoud

    2011-04-01

    Alzheimer's disease (AD) is one of the most important neurodegenerative disorders. It is characterized by dementia including deficits in learning and memory. The present study aimed to evaluate the effects of aqueous extract of lavender (Lavandula angustifolia) on spatial performance of AD rats. Male Wistar rats were first divided into control and AD groups. Rat model of AD was established by intracerebroventricular injection of 10 μg Aβ1-42 20 d prior to administration of the lavender extract. Rats in both groups were then introduced to 2 stages of task learning (with an interval of 20 d) in Morris water maze, each followed by one probe test. After the first stage of spatial learning, control and AD animals received different doses (50, 100 and 200 mg/kg) of the lavender extract. In the first stage of experiment, the latency to locate the hidden platform in AD group was significantly higher than that in control group. However, in the second stage of experiment, control and AD rats that received distilled water (vehicle) showed similar performance, indicating that the maze navigation itself could improve the spatial learning of AD animals. Besides, in the second stage of experiment, control and AD rats that received lavender extract administration at different doses (50, 100, and 200 mg/ kg) spent less time locating the platform (except for the AD rats with 50 mg/kg extract treatment), as compared with their counterparts with vehicle treatment, respectively. In addition, lavender extract significantly improved the performance of control and AD rats in the probe test, only at the dose of 200 mg/kg, as compared with their counterparts with vehicle treatment. The lavender extract can effectively reverse spatial learning deficits in AD rats.

  8. EXTRACTION OF BUILDING BOUNDARY LINES FROM AIRBORNE LIDAR POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    Y.-H. Tseng

    2016-10-01

    Building boundary lines are important spatial features that characterize topographic maps and three-dimensional (3D) city models. Airborne LiDAR point clouds provide adequate 3D spatial information for building boundary mapping. However, information on boundary features contained in point clouds is implicit. This study focuses on developing an automatic algorithm for building boundary line extraction from airborne LiDAR data. In an airborne LiDAR dataset, top surfaces of buildings, such as roofs, tend to have densely distributed points, but vertical surfaces, such as walls, usually have sparsely distributed points or even no points. The intersection lines of roof and wall planes are, therefore, not clearly defined in point clouds. This paper proposes a novel method to extract those boundary lines of building edges. The extracted line features can be used as fundamental data to generate topographic maps or 3D city models for an urban area. The proposed method includes two major process steps. The first step is to extract building boundary points from point clouds. The second step then forms building boundary line features based on the extracted boundary points. In this step, a line fitting algorithm is developed to improve the edge extraction from LiDAR data. Eight test objects, including 4 simple low buildings and 4 complicated tall buildings, were selected from the buildings on the NCKU campus. The test results demonstrate the feasibility of the proposed method in extracting complicated building boundary lines. Some results which are not as good as expected suggest the need for further improvement of the method.
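
    The line-fitting step can be illustrated with a total-least-squares fit (the principal-component direction), which, unlike y-on-x regression, handles the near-vertical edges common in building outlines. The points below are synthetic, not LiDAR returns, and this is a generic sketch rather than the paper's algorithm:

```python
import numpy as np

def fit_line_tls(points):
    """Total-least-squares 2-D line fit: returns (centroid, unit direction)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # Principal direction = top right-singular vector of the centered points.
    _, _, vt = np.linalg.svd(pts - centroid)
    return centroid, vt[0]

# Noisy candidate boundary points along a (nearly vertical) building edge.
rng = np.random.default_rng(1)
ys = np.linspace(0.0, 10.0, 50)
pts = np.column_stack([5.0 + 0.01 * rng.standard_normal(50), ys])
centroid, direction = fit_line_tls(pts)
print(np.round(np.abs(direction), 2))   # direction ≈ [0, 1]: a vertical edge
```

    In practice such a fit is wrapped in an outlier-robust loop (e.g. RANSAC) before intersecting adjacent edge lines to recover building corners.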

  9. Mechanism by Sambucus nigra Extract Improves Bone Mineral Density in Experimental Diabetes

    Directory of Open Access Journals (Sweden)

    Laurentiu Badescu

    2012-01-01

    The effects of polyphenols extracted from Sambucus nigra fruit were studied in streptozotocin (STZ)-induced hyperglycemic rats to evaluate possible antioxidant, anti-inflammatory, antiglycosylation, and antiosteoporosis effects in diabetes. DEXA bone mineral density tests were performed in order to determine bone mineral density (BMD), bone mineral content (BMC), and fat (%Fat) in control and diabetic animals, before and after polyphenol delivery. As compared to the normoglycemic group, the rats treated with STZ (60 mg/kg body weight) revealed a significant malondialdehyde (MDA) increase, as an index of the lipid peroxidation level, by 69%, while the total antioxidant activity (TAS) dropped by 36%, with a consistently significant decrease (p<0.05) in the activity of superoxide dismutase (SOD) and glutathione peroxidase (GPX). Also, the treatment of rats with STZ revealed a significant increase of IL-6, glycosylated haemoglobin (HbA1c), and osteopenia detected by DEXA bone mineral density tests. The recorded results highlight a significant improvement (p<0.001) in the antioxidative capacity of the serum in diabetic rats treated with natural polyphenols, bringing back to normal the concentration of reduced glutathione (GSH), as well as an important decrease in the serum concentration of MDA, with improved osteoporosis status. Knowing the effects of polyphenols could lead to the use of the polyphenolic extract of Sambucus nigra as a dietary supplement in diabetic osteoporosis.

  10. Information extraction from FN plots of tungsten microemitters

    Energy Technology Data Exchange (ETDEWEB)

    Mussa, Khalil O. [Department of Physics, Mu' tah University, Al-Karak (Jordan); Mousa, Marwan S., E-mail: mmousa@mutah.edu.jo [Department of Physics, Mu' tah University, Al-Karak (Jordan); Fischer, Andreas, E-mail: andreas.fischer@physik.tu-chemnitz.de [Institut für Physik, Technische Universität Chemnitz, Chemnitz (Germany)

    2013-09-15

    Tungsten-based microemitter tips have been prepared both clean and coated with dielectric materials. For clean tungsten tips, apex radii have been varied ranging from 25 to 500 nm. These tips were manufactured by electrochemically etching a 0.1 mm diameter high purity (99.95%) tungsten wire at the meniscus of a two molar NaOH solution. Composite micro-emitters considered here consist of a tungsten core coated with different dielectric materials—such as magnesium oxide (MgO), sodium hydroxide (NaOH), tetracyanoethylene (TCNE), and zinc oxide (ZnO). It is worth noting here that the rather unconventional NaOH coating has shown several interesting properties. Various properties of these emitters were measured including current–voltage (IV) characteristics and the physical shape of the tips. A conventional field emission microscope (FEM) with a tip (cathode)–screen (anode) separation standardized at 10 mm was used to electrically characterize the electron emitters. The system was evacuated down to a base pressure of ∼10⁻⁸ mbar when baked at up to ∼180°C overnight. This allowed measurements of typical field electron emission (FE) characteristics, namely the IV characteristics and the emission images on a conductive phosphor screen (the anode). Mechanical characterization has been performed through a FEI scanning electron microscope (SEM). Within this work, the mentioned experimental results are connected to the theory for analyzing Fowler–Nordheim (FN) plots. We compared and evaluated the data extracted from clean tungsten tips of different radii and determined deviations between the results of different extraction methods applied. In particular, we derived the apex radii of several clean and coated tungsten tips by both SEM imaging and analyzing FN plots. The aim of this analysis is to support the ongoing discussion on recently developed improvements of the theory for analyzing FN plots related to metal field electron emitters, which in
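
    Elementary FN-plot analysis starts from the form I = C·V²·exp(−B/V), so plotting ln(I/V²) against 1/V yields a straight line whose slope encodes the emitter's work function and field enhancement. A sketch on synthetic data follows (C and B are arbitrary illustrative constants; extracting apex radii, as in the paper, requires considerably more theory):

```python
import numpy as np

# Synthetic field-emission I-V curve obeying the elementary FN form.
C, B = 1e-9, 4.0e4                     # illustrative prefactor and slope constant
V = np.linspace(1500.0, 3000.0, 40)    # applied voltages, V
I = C * V ** 2 * np.exp(-B / V)

x = 1.0 / V                            # FN-plot coordinates
y = np.log(I / V ** 2)
slope, intercept = np.polyfit(x, y, 1)
print(slope)                           # slope ≈ -B
```

    With measured data the recovered slope feeds the standard FN expressions to estimate barrier and geometry parameters; the paper's point is that different extraction methods applied to the same plot can disagree.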

  11. Information extraction from FN plots of tungsten microemitters

    International Nuclear Information System (INIS)

    Mussa, Khalil O.; Mousa, Marwan S.; Fischer, Andreas

    2013-01-01

    Tungsten-based microemitter tips have been prepared both clean and coated with dielectric materials. For clean tungsten tips, apex radii have been varied ranging from 25 to 500 nm. These tips were manufactured by electrochemically etching a 0.1 mm diameter high purity (99.95%) tungsten wire at the meniscus of a two molar NaOH solution. Composite micro-emitters considered here consist of a tungsten core coated with different dielectric materials—such as magnesium oxide (MgO), sodium hydroxide (NaOH), tetracyanoethylene (TCNE), and zinc oxide (ZnO). It is worth noting here that the rather unconventional NaOH coating has shown several interesting properties. Various properties of these emitters were measured including current–voltage (IV) characteristics and the physical shape of the tips. A conventional field emission microscope (FEM) with a tip (cathode)–screen (anode) separation standardized at 10 mm was used to electrically characterize the electron emitters. The system was evacuated down to a base pressure of ∼10⁻⁸ mbar when baked at up to ∼180°C overnight. This allowed measurements of typical field electron emission (FE) characteristics, namely the IV characteristics and the emission images on a conductive phosphor screen (the anode). Mechanical characterization has been performed through a FEI scanning electron microscope (SEM). Within this work, the mentioned experimental results are connected to the theory for analyzing Fowler–Nordheim (FN) plots. We compared and evaluated the data extracted from clean tungsten tips of different radii and determined deviations between the results of different extraction methods applied. In particular, we derived the apex radii of several clean and coated tungsten tips by both SEM imaging and analyzing FN plots. The aim of this analysis is to support the ongoing discussion on recently developed improvements of the theory for analyzing FN plots related to metal field electron emitters, which in

  12. Evaluation of needle trap micro-extraction and solid-phase micro-extraction: Obtaining comprehensive information on volatile emissions from in vitro cultures.

    Science.gov (United States)

    Oertel, Peter; Bergmann, Andreas; Fischer, Sina; Trefz, Phillip; Küntzel, Anne; Reinhold, Petra; Köhler, Heike; Schubert, Jochen K; Miekisch, Wolfram

    2018-05-14

    Volatile organic compounds (VOCs) emitted from in vitro cultures may reveal information on species and metabolism. Owing to low nmol L⁻¹ concentration ranges, pre-concentration techniques are required for gas chromatography-mass spectrometry (GC-MS) based analyses. This study was intended to compare the efficiency of established micro-extraction techniques - solid-phase micro-extraction (SPME) and needle-trap micro-extraction (NTME) - for the analysis of complex VOC patterns. For SPME, a 75 μm Carboxen®/polydimethylsiloxane fiber was used. The NTME needle was packed with divinylbenzene, Carbopack X and Carboxen 1000. The headspace was sampled bi-directionally. Seventy-two VOCs were calibrated by reference standard mixtures in the range of 0.041-62.24 nmol L⁻¹ by means of GC-MS. Both pre-concentration methods were applied to profile VOCs from cultures of Mycobacterium avium ssp. paratuberculosis. Limits of detection ranged from 0.004 to 3.93 nmol L⁻¹ (median = 0.030 nmol L⁻¹) for NTME and from 0.001 to 5.684 nmol L⁻¹ (median = 0.043 nmol L⁻¹) for SPME. NTME showed advantages in assessing polar compounds such as alcohols. SPME showed advantages in reproducibility but disadvantages in sensitivity for N-containing compounds. Micro-extraction techniques such as SPME and NTME are well suited for trace VOC profiling over cultures if the limitations of each technique are taken into account. Copyright © 2018 John Wiley & Sons, Ltd.

  13. Application and improvement of reciprocating-sieve plate extraction column in natural uranium extraction and purification process

    International Nuclear Information System (INIS)

    Wang Xuejun; Li Linyan; Liu Jing; Liu Xin; Yang Lifeng; Xiao Shaohua; Liu Hao

    2013-01-01

    Reciprocating-sieve plate extraction columns are commonly used in the extraction process. Optimization and application were conducted successfully through production practice in some chemical and pharmaceutical plants, and good results were obtained when the column was applied in the natural uranium extraction and purification process. The key component of the reciprocating-sieve plate extraction column is the gear-drive equipment, in which the drive motor serves as the core; hence, it is important to select an appropriate mode of speed regulation. In this paper, the principle and performance of several modes of speed regulation are compared. Both electromagnetic-slip and frequency speed regulation can be applied in general industrial processes, but frequency speed regulation, with its low energy cost, can be used over a wider operating range. The application of the frequency speed-regulation mode in the reciprocating-sieve plate extraction column will increase the convenience and stability of the natural uranium extraction and purification process. (authors)

  14. Nanoemulsion for improving solubility and permeability of Vitex agnus-castus extract: formulation and in vitro evaluation using PAMPA and Caco-2 approaches.

    Science.gov (United States)

    Piazzini, Vieri; Monteforte, Elena; Luceri, Cristina; Bigagli, Elisabetta; Bilia, Anna Rita; Bergonzi, Maria Camilla

    2017-11-01

    The purpose of this study was to develop a new formulation for improved oral delivery of Vitex agnus-castus (VAC) extract. After optimization and validation of the analytical method for quali-quantitative characterization of the extract, a nanoemulsion (NE) was selected as the lipid-based nanocarrier. The extract-loaded NE was composed of triacetin as oil phase, labrasol as surfactant, cremophor EL as co-surfactant and water. The NE contains up to 60 mg/mL of extract. It was characterized by DLS and TEM analyses; its droplets appear dark, with an average diameter of 11.82 ± 0.125 nm and a polydispersity index (PdI) of 0.117 ± 0.019. The aqueous solubility of the extract was improved about 10-fold: the extract is completely soluble in the NE at a concentration of 60 mg/mL, while its solubility in water is less than 6 mg/mL. Passive intestinal permeation was tested using the parallel artificial membrane permeation assay (PAMPA), and permeation across Caco-2 cells was also evaluated after preliminary cytotoxicity studies. The NE shows a good solubilizing effect on the constituents of the extract compared with aqueous solution. The total amount of constituents permeated from the NE to the acceptor compartment is greater than that permeated from a saturated aqueous solution. The Caco-2 test confirmed the PAMPA results, and both revealed that the NE was successful in increasing the permeation of VAC extract. This formulation could improve the oral bioavailability of the extract due to the enhanced solubility and permeability of the phytocomplex.

  15. Machinery running state identification based on discriminant semi-supervised local tangent space alignment for feature fusion and extraction

    International Nuclear Information System (INIS)

    Su, Zuqiang; Xiao, Hong; Zhang, Yi; Tang, Baoping; Jiang, Yonghua

    2017-01-01

    Extraction of sensitive features is a challenging but key task in data-driven machinery running state identification. Aimed at solving this problem, a method for machinery running state identification that applies discriminant semi-supervised local tangent space alignment (DSS-LTSA) for feature fusion and extraction is proposed. Firstly, in order to extract more distinct features, the vibration signals are decomposed by wavelet packet decomposition (WPD), and a mixed-domain feature set consisting of statistical features, autoregressive (AR) model coefficients, instantaneous amplitude Shannon entropy and the WPD energy spectrum is extracted to comprehensively characterize the properties of the machinery running states. Then, the mixed-domain feature set is input into DSS-LTSA for feature fusion and extraction to eliminate redundant information and interference noise. The proposed DSS-LTSA can extract intrinsic structure information of both labeled and unlabeled state samples, and as a result the over-fitting problem of supervised manifold learning and the blindness problem of unsupervised manifold learning are overcome. Simultaneously, class discrimination information is integrated within the dimension reduction process in a semi-supervised manner to improve the sensitivity of the extracted fusion features. Lastly, the extracted fusion features are input into a pattern recognition algorithm to achieve running state identification. The effectiveness of the proposed method is verified by a running state identification case in a gearbox, and the results confirm the improved accuracy of the running state identification. (paper)
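
    The mixed-domain feature extraction step can be sketched as follows; the feature subset and the least-squares AR fit are simplified stand-ins for the fuller set used in the paper, and the test signal is synthetic.

```python
import numpy as np

def statistical_features(x):
    """A small subset of common time-domain statistics for a vibration frame."""
    x = np.asarray(x, dtype=float)
    mean, std = x.mean(), x.std()
    rms = np.sqrt(np.mean(x**2))
    skewness = np.mean((x - mean) ** 3) / std**3
    kurtosis = np.mean((x - mean) ** 4) / std**4
    crest = np.max(np.abs(x)) / rms
    return np.array([mean, std, rms, skewness, kurtosis, crest])

def ar_coefficients(x, order=4):
    """AR model fit by least squares: x[n] ~ sum_k a[k] * x[n-1-k]."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    X = np.column_stack([x[order - k : n - k] for k in range(1, order + 1)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

# Synthetic AR(2) "vibration" signal with known dynamics, driven by noise.
rng = np.random.default_rng(0)
sig = np.zeros(2000)
for i in range(2, len(sig)):
    sig[i] = 0.5 * sig[i - 1] - 0.3 * sig[i - 2] + rng.normal()

# Fuse both domains into one mixed-domain feature vector.
features = np.concatenate([statistical_features(sig), ar_coefficients(sig, 2)])
```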

  16. Approaching the largest ‘API’: extracting information from the Internet with Python

    Directory of Open Access Journals (Sweden)

    Jonathan E. Germann

    2018-02-01

    Full Text Available This article explores the need for libraries to algorithmically access and manipulate the world’s largest API: the Internet. The billions of pages on the ‘Internet API’ (HTTP, HTML, CSS, XPath, DOM, etc.) are easily accessible and manipulable. Libraries can assist in creating meaning through the datafication of information on the world wide web. Because most information is created for human consumption, some programming is required for automated extraction. Python is an easy-to-learn programming language with extensive packages and community support for web page automation. Four packages (Urllib, Selenium, BeautifulSoup, Scrapy) in Python can automate almost any web page for projects of all sizes. An example warrant data project is explained to illustrate how well Python packages can manipulate web pages to create meaning through assembling custom datasets.
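
    The article names Urllib, Selenium, BeautifulSoup and Scrapy; as a dependency-free illustration of the same idea, Python's standard-library html.parser can already pull structured records out of a page. The page content and the warrant links below are invented for the example.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect (href, text) pairs from the anchor tags of an HTML page."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:       # only collect text inside an <a>
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

# A stand-in for a fetched page (in practice: urllib.request.urlopen(...)).
page = """<html><body>
<a href="/case/123">Warrant 123</a>
<a href="/case/456">Warrant 456</a>
</body></html>"""

parser = LinkExtractor()
parser.feed(page)
```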

  17. A research of road centerline extraction algorithm from high resolution remote sensing images

    Science.gov (United States)

    Zhang, Yushan; Xu, Tingfa

    2017-09-01

    Satellite remote sensing technology has become one of the most effective methods for land surface monitoring in recent years, due to its advantages such as short revisit period, large coverage and rich information. Meanwhile, road extraction is an important field in the applications of high resolution remote sensing images. An intelligent and automatic road extraction algorithm with high precision is of great significance for transportation, road network updating and urban planning. Fuzzy c-means (FCM) clustering segmentation algorithms have been used in road extraction, but the traditional algorithms do not consider spatial information. An improved fuzzy c-means clustering algorithm combined with spatial information (SFCM) is proposed in this paper, which is shown to be effective for noisy image segmentation. Firstly, the image is segmented using the SFCM. Secondly, the segmentation result is processed by mathematical morphology to remove the joint regions. Thirdly, the road centerlines are extracted by morphological thinning and burr trimming. The average completeness of the centerline extraction algorithm is 97.98%, the average accuracy is 95.36% and the average quality is 93.59%. Experimental results show that the proposed method is effective for road centerline extraction.
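
    A bare-bones fuzzy c-means on pixel intensities shows the clustering core of the approach; the spatial-information term of SFCM (e.g., averaging memberships over a pixel neighborhood) and the morphological post-processing are deliberately omitted from this sketch, and the pixel data are synthetic.

```python
import numpy as np

def fcm(values, n_clusters=2, m=2.0, n_iter=50):
    """Plain fuzzy c-means on 1-D intensities (no spatial term).

    Memberships: u_ik proportional to d_ik^(-2/(m-1)), normalized per pixel;
    centers:     c_i = sum_k u_ik^m x_k / sum_k u_ik^m.
    """
    x = np.asarray(values, dtype=float)
    centers = np.linspace(x.min(), x.max(), n_clusters)   # deterministic init
    for _ in range(n_iter):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12   # (clusters, pixels)
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=0)                           # fuzzy memberships
        um = u ** m
        centers = (um * x[None, :]).sum(axis=1) / um.sum(axis=1)
    return centers, u

# Two intensity populations standing in for "road" and "background" pixels.
pixels = np.concatenate([np.full(100, 0.1), np.full(100, 0.9)])
centers, u = fcm(pixels)
```

    The SFCM variant would regularize u with each pixel's neighborhood before updating the centers, which is what suppresses salt-and-pepper noise in the segmentation.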

  18. Extracting and Using Photon Polarization Information in Radiative B Decays

    Energy Technology Data Exchange (ETDEWEB)

    Grossman, Yuval

    2000-05-09

    The authors discuss the uses of conversion electron pairs for extracting photon polarization information in weak radiative B decays. Both cases of leptons produced through a virtual and a real photon are considered. Measurements of the angular correlation between the (Kπ) and (e⁺e⁻) decay planes in B → K*(→ Kπ)γ(*)(→ e⁺e⁻) decays can be used to determine the helicity amplitudes in the radiative B → K*γ decays. A large right-handed helicity amplitude in B̄ decays is a signal of new physics. The time-dependent CP asymmetry in the B⁰ decay angular correlation is shown to measure sin 2β and cos 2β with little hadronic uncertainty.

  19. Enhancing biomedical text summarization using semantic relation extraction.

    Directory of Open Access Journals (Sweden)

    Yue Shang

    Full Text Available Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) We extract semantic relations in each sentence using the semantic knowledge representation tool SemRep. 2) We develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation. 3) For relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate a text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance, and our results are better than those of the MEAD system, a well-known tool for text summarization.

  20. Enhancing biomedical text summarization using semantic relation extraction.

    Science.gov (United States)

    Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao

    2011-01-01

    Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) We extract semantic relations in each sentence using the semantic knowledge representation tool SemRep. 2) We develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation. 3) For relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate a text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance, and our results are better than those of the MEAD system, a well-known tool for text summarization.
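
    Stage 3's retrieval of informative sentences can be approximated with a plain TF-IDF/cosine ranking; this is a stand-in for the paper's information retrieval based method, and the SemRep relation extraction itself is not reproduced here. Sentences and query are invented for the example.

```python
import math
from collections import Counter

def tfidf_vectors(sentences):
    """TF-IDF vectors over whitespace tokens (lowercased); idf = ln(N / df)."""
    toks = [s.lower().split() for s in sentences]
    df = Counter(t for ts in toks for t in set(ts))
    n = len(sentences)
    vecs = [{t: c * math.log(n / df[t]) for t, c in Counter(ts).items()}
            for ts in toks]
    return vecs, df

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank(query, sentences):
    """Return sentence indices sorted by relevance to the query concept."""
    vecs, df = tfidf_vectors(sentences)
    n = len(sentences)
    q = {t: c * math.log(n / df[t])
         for t, c in Counter(query.lower().split()).items() if t in df}
    return sorted(range(n), key=lambda i: cosine(q, vecs[i]), reverse=True)

sentences = [
    "h1n1 infects the human respiratory tract",
    "the weather report was uneventful",
    "vaccination reduces h1n1 transmission in humans",
]
order = rank("h1n1 transmission", sentences)
```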

  1. How social information can improve estimation accuracy in human groups.

    Science.gov (United States)

    Jayles, Bertrand; Kim, Hye-Rin; Escobedo, Ramón; Cezera, Stéphane; Blanchet, Adrien; Kameda, Tatsuya; Sire, Clément; Theraulaz, Guy

    2017-11-21

    In our digital and connected societies, the development of social networks, online shopping, and reputation systems raises the questions of how individuals use social information and how it affects their decisions. We report experiments performed in France and Japan, in which subjects could update their estimates after having received information from other subjects. We measure and model the impact of this social information at individual and collective scales. We observe and justify that, when individuals have little prior knowledge about a quantity, the distribution of the logarithm of their estimates is close to a Cauchy distribution. We find that social influence helps the group improve its properly defined collective accuracy. We quantify the improvement of the group estimation when additional controlled and reliable information is provided, unbeknownst to the subjects. We show that subjects' sensitivity to social influence permits us to define five robust behavioral traits and increases with the difference between personal and group estimates. We then use our data to build and calibrate a model of collective estimation to analyze the impact on the group performance of the quantity and quality of information received by individuals. The model quantitatively reproduces the distributions of estimates and the improvement of collective performance and accuracy observed in our experiments. Finally, our model predicts that providing a moderate amount of incorrect information to individuals can counterbalance the human cognitive bias to systematically underestimate quantities and thereby improve collective performance. Copyright © 2017 the Author(s). Published by PNAS.
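
    The observation that log-estimates are roughly Cauchy-distributed has a practical consequence worth illustrating: the arithmetic mean of such estimates is unstable under the heavy tails, while the median of the logs gives a robust collective estimate. The numbers below are simulated, not the experiment's data.

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 5000.0                 # quantity the group is asked to estimate
true_log = np.log10(true_value)

# Assumption taken from the study: individual log10-estimates scatter around
# the truth with a Cauchy-like distribution (heavy tails, extreme outliers).
log_estimates = true_log + 0.5 * rng.standard_cauchy(1000)

# The median of the logs is a robust collective estimate; a plain arithmetic
# mean of the raw estimates can be dominated by a single extreme answer.
collective = 10.0 ** np.median(log_estimates)
```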

  2. SAR matrices: automated extraction of information-rich SAR tables from large compound data sets.

    Science.gov (United States)

    Wassermann, Anne Mai; Haebel, Peter; Weskamp, Nils; Bajorath, Jürgen

    2012-07-23

    We introduce the SAR matrix data structure that is designed to elucidate SAR patterns produced by groups of structurally related active compounds, which are extracted from large data sets. SAR matrices are systematically generated and sorted on the basis of SAR information content. Matrix generation is computationally efficient and enables processing of large compound sets. The matrix format is reminiscent of SAR tables, and SAR patterns revealed by different categories of matrices are easily interpretable. The structural organization underlying matrix formation is more flexible than standard R-group decomposition schemes. Hence, the resulting matrices capture SAR information in a comprehensive manner.

  3. Extracting chemical information from high-resolution Kβ X-ray emission spectroscopy

    Science.gov (United States)

    Limandri, S.; Robledo, J.; Tirao, G.

    2018-06-01

    High-resolution X-ray emission spectroscopy allows studying the chemical environment of a wide variety of materials. Chemical information can be obtained by fitting the X-ray spectra and observing the behavior of certain spectral features. Spectral changes can also be quantified by means of statistical parameters calculated by considering the spectrum as a probability distribution. Another possibility is to perform statistical multivariate analysis, such as principal component analysis. In this work, the performance of these procedures for extracting chemical information from X-ray emission spectra is studied for mixtures of Mn2+ and Mn4+ oxides. A detailed analysis of the parameters obtained, as well as the associated uncertainties, is shown. The methodologies are also applied to Mn oxidation state characterization of the double perovskite oxides Ba1+xLa1-xMnSbO6 (with 0 ≤ x ≤ 0.7). The results show that statistical parameters and multivariate analysis are the most suitable for the analysis of this kind of spectra.
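
    The multivariate route can be sketched with a two-component toy problem: synthetic mixtures of two Gaussian "pure" spectra (illustrative stand-ins for the Mn2+ and Mn4+ references, not real emission profiles) vary along a single direction, so the first principal component should capture nearly all of the variance.

```python
import numpy as np

rng = np.random.default_rng(0)
energy = np.linspace(0.0, 1.0, 200)

def peak(center, width):
    """Gaussian emission profile used as a synthetic 'pure' spectrum."""
    return np.exp(-((energy - center) / width) ** 2)

# Illustrative stand-ins for the Mn2+ and Mn4+ reference spectra.
comp_a = peak(0.40, 0.05)
comp_b = peak(0.55, 0.05)

# Mixture spectra at varying oxidation-state fractions, plus mild noise.
fractions = np.linspace(0.0, 1.0, 25)
spectra = np.array([f * comp_a + (1.0 - f) * comp_b for f in fractions])
spectra += 0.001 * rng.normal(size=spectra.shape)

# PCA via SVD of the mean-centered data matrix: a two-component linear
# mixture varies along one direction, so PC1 should dominate.
centered = spectra - spectra.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
```

    Projecting each measured spectrum onto PC1 then yields a scalar that tracks the oxidation-state fraction, which is the kind of chemical information the record describes extracting.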

  4. Information Technology: Opportunities for Improving Acquisitions and Operations

    Science.gov (United States)

    2017-04-01

    GAO, Federal Chief Information Officers: Opportunities Exist to Improve Role in Information Technology Management, GAO-11-634...approach and a collaborative relationship among agency executives (e.g., Chief Financial Officer and agency component leadership) had stopped 45...executives, including Chief Financial Officers and executives of major bureaus and component agencies for whom the technology is serving, to ensure that

  5. Improving the lipid stability and sensory characteristics of irradiated minced beef by using natural herbal extracts

    International Nuclear Information System (INIS)

    Mansour, H. A.; Mohamed, H.M.; El-Niely, H.F.G.

    2007-01-01

    The objective of the present work was to use natural herbal extracts to minimize lipid oxidation and improve the sensory characteristics of irradiated minced beef. Beef longissimus dorsi were minced, mixed with herbal extracts as appropriate and packed in polyethylene bags (50 g each). There were four treatment groups: (1) untreated controls, (2) irradiated with cobalt-60 gamma-rays to either 2 or 4.5 kGy, (3) addition of an extract of one of marjoram, rosemary or sage to a final concentration of 0.04% (v/w), (4) combination treatment with either 2 or 4.5 kGy irradiation plus herbal extract at 0.04% (v/w) added pre-irradiation. Aerobically packaged samples were then placed into storage at 5 °C. At specified time intervals samples were withdrawn to be analyzed for thiobarbituric acid reactive substances (TBARS), sensory characteristics and psychrotrophic bacterial counts. Results demonstrated a significant benefit of the addition of herbal extracts to the minced beef prior to irradiation. All three extracts generally lowered the TBARS values in both control and irradiated samples, with marjoram being the most effective, followed by sage and rosemary in that order of efficacy. As regards radiation-induced off-odour, all three extracts generally lowered the off-odour score, with marjoram and sage being most effective, and rosemary being somewhat less so. All three extracts protected against radiation-induced colour loss. Addition of herbal extracts prior to irradiation resulted in a significant increase (p < 0.05) in the acceptability scores for all irradiated samples in the post-irradiation period, with rosemary being somewhat less effective than sage and marjoram. Addition of herbal extracts alone to the minced meat did not affect the psychrotrophic bacterial counts of treated samples. The combination treatment with herbal extracts plus

  6. Instruction in Information Structuring Improves Bayesian Judgment in Intelligence Analysts

    Directory of Open Access Journals (Sweden)

    David R. Mandel

    2015-04-01

    Full Text Available An experiment was conducted to test the effectiveness of brief instruction in information structuring (i.e., representing and integrating information) for improving the coherence of probability judgments and binary choices among intelligence analysts. Forty-three analysts were presented with comparable sets of Bayesian judgment problems before and immediately after instruction. After instruction, analysts’ probability judgments were more coherent (i.e., more additive and compliant with Bayes’ theorem). Instruction also improved the coherence of binary choices regarding category membership: after instruction, subjects were more likely to invariably choose the category to which they assigned the higher probability of a target’s membership. The research provides a rare example of evidence-based validation of effectiveness in instruction to improve the statistical assessment skills of intelligence analysts. Such instruction could also be used to improve the assessment quality of other types of experts who are required to integrate statistical information or make probabilistic assessments.
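
    The information structuring taught here (representing base rates and likelihoods explicitly before integrating them) corresponds to a direct application of Bayes' theorem; the numbers below are illustrative, not taken from the experiment's problems.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem for a binary hypothesis H given evidence E."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1.0 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Illustrative problem: base rate P(H) = 0.10, P(E|H) = 0.80, P(E|~H) = 0.20.
p_h = posterior(0.10, 0.80, 0.20)

# The complementary hypothesis, computed independently; a coherent judge's
# two posteriors must be additive (sum to 1), which is exactly the kind of
# coherence the instruction targets.
p_not_h = posterior(0.90, 0.20, 0.80)
```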

  7. A New Method for Weak Fault Feature Extraction Based on Improved MED

    Directory of Open Access Journals (Sweden)

    Junlin Li

    2018-01-01

    Full Text Available Because of their weak-signal and strong-noise characteristics, low-speed vibration signals make fault feature extraction a hot and difficult problem in the field of equipment fault diagnosis. The traditional minimum entropy deconvolution (MED) method has been shown to detect such fault signals. MED uses an objective function method to design the filter coefficients, and an appropriate threshold value should be set in the calculation process to achieve the optimal iteration effect. It should be pointed out that an improper setting of the threshold will cause the objective function to be recalculated, and the resulting error eventually distorts the objective function against a background of strong noise. This paper presents an improved MED based method of fault feature extraction from rolling bearing vibration signals that originate in high-noise environments. The method uses the shuffled frog leaping algorithm (SFLA) to find the set of optimal filter coefficients, and thus avoids the artificial error introduced by selecting the threshold parameter. Faulty bearings at the two rotating speeds of 60 rpm and 70 rpm were selected for verification, with a typical low-speed faulty bearing as the research object; the results show that SFLA-MED extracts more obvious fault features and has a higher signal-to-noise ratio than the prior MED method.
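
    The core idea, choosing filter coefficients by maximizing an impulsiveness criterion (kurtosis) with a population-based search instead of threshold-tuned iteration, can be sketched with a plain random search standing in for SFLA; the signal is synthetic and the search is a toy, not the paper's algorithm.

```python
import numpy as np

def kurtosis(x):
    """Normalized fourth moment; large values indicate impulsive content."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2

# Synthetic low-speed bearing signal: weak periodic fault impulses in noise.
rng = np.random.default_rng(1)
signal = rng.normal(size=2048)
signal[::256] += 8.0                    # fault impulses every 256 samples

# Population-based search over FIR filter coefficients (a toy stand-in for
# the shuffled frog leaping algorithm): fitness = kurtosis of the filtered
# signal. The identity filter is seeded so the search can only improve on
# the unfiltered signal, with no threshold parameter to tune.
identity = np.zeros(16)
identity[0] = 1.0
candidates = [identity] + [rng.normal(size=16) for _ in range(300)]

best = max(candidates,
           key=lambda f: kurtosis(np.convolve(signal, f, mode="valid")))
best_kurtosis = kurtosis(np.convolve(signal, best, mode="valid"))
```

    A full SFLA would evolve the candidate population (memeplex shuffling, local leaps toward the best frog) rather than sampling it once, but the fitness function is the same.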

  8. Remote Sensing Information Sciences Research Group, Santa Barbara Information Sciences Research Group, year 3

    Science.gov (United States)

    Estes, J. E.; Smith, T.; Star, J. L.

    1986-01-01

    Research continues to focus on improving the type, quantity, and quality of information which can be derived from remotely sensed data. The focus is on remote sensing and applications for the Earth Observing System (Eos) and Space Station, including associated polar and co-orbiting platforms. The remote sensing research activities are being expanded, integrated, and extended into the areas of global science, georeferenced information systems, machine-assisted information extraction from image data, and artificial intelligence. The accomplishments in these areas are examined.

  9. Chloroform-assisted phenol extraction improving proteome profiling of maize embryos through selective depletion of high-abundance storage proteins.

    Directory of Open Access Journals (Sweden)

    Erhui Xiong

    Full Text Available The presence of abundant storage proteins in plant embryos greatly impedes seed proteomics analysis. Vicilin (or globulin-1) is the most abundant storage protein in maize embryo. There is a need to deplete the vicilins from maize embryo extracts for enhanced proteomics analysis. We here report a chloroform-assisted phenol extraction (CAPE) method for vicilin depletion. By CAPE, maize embryo proteins were first extracted in an aqueous buffer, denatured by chloroform and then subjected to phenol extraction. We found that CAPE can effectively deplete the vicilins from maize embryo extract, allowing the detection of low-abundance proteins that were masked by vicilins in 2-DE gels. The novelty of CAPE is that it selectively depletes abundant storage proteins from embryo extracts of both monocot (maize) and dicot (soybean and pea) seeds, whereas other embryo proteins are not depleted. CAPE can significantly improve proteome profiling of embryos and extends the application of chloroform and phenol extraction in plant proteomics. In addition, the rationale behind CAPE depletion of abundant storage proteins is explored.

  10. Remote Sensing Information Sciences Research Group: Santa Barbara Information Sciences Research Group, year 4

    Science.gov (United States)

    Estes, John E.; Smith, Terence; Star, Jeffrey L.

    1987-01-01

    Information Sciences Research Group (ISRG) research continues to focus on improving the type, quantity, and quality of information which can be derived from remotely sensed data. Particular focus is on the needs of the remote sensing research and application science community which will be served by the Earth Observing System (EOS) and Space Station, including associated polar and co-orbiting platforms. The areas of georeferenced information systems, machine-assisted information extraction from image data, artificial intelligence, and both natural and cultural vegetation analysis and modeling research will be expanded.

  11. Improvement of the cloud point extraction of uranyl ions by the addition of ionic liquids.

    Science.gov (United States)

    Gao, Song; Sun, Taoxiang; Chen, Qingde; Shen, Xinghai

    2013-12-15

    The cloud point extraction (CPE) of uranyl ions by different kinds of extractants in Triton X-114 (TX-114) micellar solution was investigated upon the addition of ionic liquids (ILs) with various anions, i.e., bromide (Br(-)), tetrafluoroborate (BF4(-)), hexafluorophosphate (PF6(-)) and bis[(trifluoromethyl)sulfonyl]imide (NTf2(-)). A significant increase in the extraction efficiency was found on the addition of NTf2(-) based ILs when using the neutral extractant tri-octylphosphine oxide (TOPO), and the extraction efficiency remained high at both nearly neutral and high acidity. However, the CPE with acidic extractants, e.g., bis(2-ethylhexyl) phosphoric acid (HDEHP) and 8-hydroxyquinoline (8-HQ), which are only effective under nearly neutral conditions, was not improved by ILs. The results of zeta potential and (19)F NMR measurements indicated that the anion NTf2(-) penetrated into the TX-114 micelles and was enriched in the surfactant-rich phase during the CPE process. Meanwhile, NTf2(-) may act as a counterion in the CPE of UO2(2+) by TOPO. Furthermore, the addition of IL increased the separation factor of UO2(2+) and La(3+), which implied that in the micelle TOPO, NTf2(-) and NO3(-) established a soft template for UO2(2+). Therefore, the combination of CPE and IL provided supramolecular recognition to concentrate UO2(2+) efficiently and selectively. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Improving Einstein–Podolsky–Rosen steering inequalities with state information

    International Nuclear Information System (INIS)

    Schneeloch, James; Broadbent, Curtis J.; Howell, John C.

    2014-01-01

    We discuss the relationship between entropic Einstein–Podolsky–Rosen (EPR)-steering inequalities and their underlying uncertainty relations along with the hypothesis that improved uncertainty relations lead to tighter EPR-steering inequalities. In particular, we discuss how using information about the state of a quantum system affects one's ability to witness EPR-steering. As an example, we consider the recent improvement to the entropic uncertainty relation between pairs of discrete observables (Berta et al., 2010 [10]). By considering the assumptions that enter into the development of a steering inequality, we derive correct steering inequalities from these improved uncertainty relations and find that they are identical to ones already developed (Schneeloch et al., 2013 [9]). In addition, we consider how one can use state information to improve our ability to witness EPR-steering, and develop a new continuous variable symmetric EPR-steering inequality as a result.

  13. Measuring covariation in RNA alignments: Physical realism improves information measures

    DEFF Research Database (Denmark)

    Lindgreen, Stinus; Gardner, Paul Phillip; Krogh, Anders

    2006-01-01

    Motivation: The importance of non-coding RNAs is becoming increasingly evident, and often the function of these molecules depends on the structure. It is common to use alignments of related RNA sequences to deduce the consensus secondary structure by detecting patterns of co-evolution. A central part of such an analysis is to measure covariation between two positions in an alignment. Here, we rank various measures ranging from simple mutual information to more advanced covariation measures. Results: Mutual information is still used for secondary structure prediction, but the results of this study indicate which measures are useful. Incorporating more structural information by considering e.g. indels and stacking improves accuracy, suggesting that physically realistic measures yield improved predictions. This can be used to improve both current and future programs for secondary structure...
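
    The baseline measure the study ranks, mutual information between two alignment columns, is compact enough to state directly: M(i,j) = Σ f(x,y) log2[f(x,y) / (f(x) f(y))]. A toy alignment with perfectly compensatory G-C changes illustrates it; the sequences are invented for the example.

```python
import math
from collections import Counter

def mutual_information(col_i, col_j):
    """M(i,j) = sum over symbol pairs of f_xy * log2(f_xy / (f_x * f_y))."""
    n = len(col_i)
    fi, fj = Counter(col_i), Counter(col_j)
    fij = Counter(zip(col_i, col_j))
    return sum((c / n) * math.log2(c * n / (fi[a] * fj[b]))
               for (a, b), c in fij.items())

# Two columns that co-vary perfectly (compensatory G-C <-> A-U changes):
mi_covarying = mutual_information("GGGGAAAA", "CCCCUUUU")

# Two columns that vary independently of each other:
mi_independent = mutual_information("GAGAGAGA", "CCUUCCUU")
```

    Perfect covariation of two equiprobable symbols gives 1 bit, independence gives 0; the covariation measures ranked in the study add structural realism (e.g., indels, stacking) on top of this baseline.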

  14. Approach to improve construction management utilizing information technology

    International Nuclear Information System (INIS)

    Lee, Woo Bang; Moon, Jin Yeong

    2003-01-01

    Korea Hydro and Nuclear Power (KHNP) has managed Nuclear Power Plant (NPP) construction projects, including basic project planning, design, procurement, construction and start-up, for nearly 30 years, and has taken the leading role in the self-reliance program for constructing NPPs. To maintain its leading position in construction technology and export it to other countries, it is increasingly necessary to build a strong and competitive business management system to improve internal business efficiency and transparency, and also to respond to changes in external business circumstances such as the opening of the electricity market. KHNP is implementing an Enterprise Resource Planning (ERP) system as a business innovation tool to improve business efficiency, which changes the way work is performed from an organization-oriented to a process-oriented system in order to optimize company resources to achieve the above goals. This change should be made based on the results of Business Process Re-engineering (BPR) to maximize overall business efficiency. For the construction project management area, the establishment of an Integrated Construction Information Sharing System based on Information Technology (IT) is the most important part. It makes it possible to build a collaboration system with a 'win-win strategy' between the project owner and all related entities, and contributes to securing transparency and reducing project costs. In this paper we introduce our NPP Construction Project Management System, the infrastructure for the information system, the information sharing system among construction-related entities, and implementation practices, and also include suggestions on customary practices and subjects that should be improved

  15. Improvements to information management systems simulator

    Science.gov (United States)

    Bilek, R. W.

    1972-01-01

    The performance of personnel in augmenting and improving the interactive IMSIM information management simulation model is summarized. With this augmented model, NASA now has even greater capabilities for the simulation of computer system configurations, the data processing loads imposed on these configurations, and the executive software that controls system operations. Through these simulations, NASA has an extremely cost-effective capability for the design and analysis of computer-based data management systems.

  16. Scholarly Information Extraction Is Going to Make a Quantum Leap with PubMed Central (PMC).

    Science.gov (United States)

    Matthies, Franz; Hahn, Udo

    2017-01-01

    With the increasing availability of complete full texts (journal articles), rather than their surrogates (titles, abstracts), as resources for text analytics, entirely new opportunities arise for information extraction and text mining from scholarly publications. Yet, we gathered evidence that a range of problems is encountered for full-text processing when biomedical text analytics simply reuses existing NLP pipelines which were developed on the basis of abstracts (rather than full texts). We conducted experiments with four different relation extraction engines, all of which were top performers in previous BioNLP Event Extraction Challenges. We found that abstract-trained engines lose up to 6.6% F-score points when run on full-text data. Hence, the reuse of existing abstract-based NLP software in a full-text scenario is considered harmful because of heavy performance losses. Given the current lack of annotated full-text resources to train on, our study quantifies the price paid for this shortcut.

  17. Leveraging information technology to drive improvement in patient satisfaction.

    Science.gov (United States)

    Nash, Mary; Pestrue, Justin; Geier, Peter; Sharp, Karen; Helder, Amy; McAlearney, Ann Scheck

    2010-01-01

    A healthcare organization's commitment to quality and the patient experience requires senior leader involvement in improvement strategies, and accountability for goals. Further, improvement strategies are most effective when driven by data, and in the world of patient satisfaction, evidence is growing that nurse leader rounding and discharge calls are strategic tactics that can improve patient satisfaction. This article describes how The Ohio State University Medical Center (OSUMC) leveraged health information technology (IT) to apply a data-driven strategy execution to improve the patient experience. Specifically, two IT-driven approaches were used: (1) business intelligence reporting tools were used to create a meaningful reporting system including dashboards, scorecards, and tracking reports and (2) an improvement plan was implemented that focused on two high-impact tactics and data to hardwire accountability. Targeted information from the IT systems enabled clinicians and administrators to execute these strategic tactics, and senior leaders to monitor achievement of strategic goals. As a result, OSUMC's inpatient satisfaction scores on the Hospital Consumer Assessment of Healthcare Providers and Systems survey improved from 56% nines and tens in 2006 to 71% in 2009. © 2010 National Association for Healthcare Quality.

  18. Usability of consumer-related information sources for design improvement

    NARCIS (Netherlands)

    Thiruvenkadam, G.; Brombacher, A.C.; Lu, Y.; Ouden, den P.H.

    2008-01-01

    In this paper we report the findings of a study intended to assess the usability of consumer related information sources in order to improve the design processes of innovative electronic products. Specifically, an evaluation is done of the quality and content of information that would help product

  19. The effect of informed consent on stress levels associated with extraction of impacted mandibular third molars.

    Science.gov (United States)

    Casap, Nardy; Alterman, Michael; Sharon, Guy; Samuni, Yuval

    2008-05-01

    To evaluate the effect of informed consent on stress levels associated with removal of impacted mandibular third molars. A total of 60 patients scheduled for extraction of impacted mandibular third molars participated in this study. The patients were unaware of the study's objectives. Data from 20 patients established the baseline levels of electrodermal activity (EDA). The remaining 40 patients were randomly assigned into 2 equal groups receiving either a detailed document of informed consent, disclosing the possible risks involved with the surgery, or a simplified version. Pulse, blood pressure, and EDA were monitored before, during, and after completion of the consent document. Changes in EDA, but not in blood pressure, were measured on completion of either version of the consent document. A greater increase in EDA was associated with the detailed version of the consent document (P = .004). A similar concomitant increase (although nonsignificant) in pulse values was monitored on completion of both versions. Completion of an overdisclosed document of informed consent is associated with changes in physiological parameters. The results suggest that overdetailed listing and disclosure before extraction of impacted mandibular third molars can increase patient stress.

  20. Extracting Various Classes of Data From Biological Text Using the Concept of Existence Dependency.

    Science.gov (United States)

    Taha, Kamal

    2015-11-01

    One of the key goals of biological natural language processing (NLP) is the automatic information extraction from biomedical publications. Most current constituency and dependency parsers overlook the semantic relationships between the constituents comprising a sentence and may not be well suited for capturing complex long-distance dependencies. We propose in this paper a hybrid constituency-dependency parser for biological NLP information extraction called EDCC. EDCC aims at enhancing the state of the art of biological text mining by applying novel linguistic computational techniques that overcome the limitations of current constituency and dependency parsers, as follows: 1) it determines the semantic relationship between each pair of constituents in a sentence using novel semantic rules; and 2) it applies a semantic relationship extraction model that extracts information from different structural forms of constituents in sentences. EDCC can be used to extract different types of data from biological texts for purposes such as protein function prediction, genetic network construction, and protein-protein interaction detection. We evaluated the quality of EDCC by comparing it experimentally with six systems. Results showed marked improvement.

  1. Method of extracting significant trouble information of nuclear power plants using probabilistic analysis technique

    International Nuclear Information System (INIS)

    Shimada, Yoshio; Miyazaki, Takamasa

    2005-01-01

    In order to analyze and evaluate large amounts of trouble information of overseas nuclear power plants, it is necessary to select information that is significant in terms of both safety and reliability. In this research, a method of efficiently and simply classifying degrees of importance of components in terms of safety and reliability while paying attention to root-cause components appearing in the information was developed. Regarding safety, the reactor core damage frequency (CDF), which is used in the probabilistic analysis of a reactor, was used. Regarding reliability, the automatic plant trip probability (APTP), which is used in the probabilistic analysis of automatic reactor trips, was used. These two aspects were reflected in the development of criteria for classifying degrees of importance of components. By applying these criteria, a simple method of extracting significant trouble information of overseas nuclear power plants was developed. (author)
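The two-axis screening idea in this record, cross-classifying each root-cause component by its safety measure (contribution to core damage frequency, CDF) and its reliability measure (contribution to automatic plant trip probability, APTP), can be sketched as below. The component names and numeric cut-offs are hypothetical illustrations, not values from the study:

```python
# Sketch of CDF/APTP-based screening of trouble reports. The thresholds
# (cdf_cut, aptp_cut) and the example components are invented for illustration.

def importance_class(cdf_contrib, aptp_contrib,
                     cdf_cut=1e-6, aptp_cut=1e-3):
    """Classify a root-cause component by safety and reliability importance."""
    safety_high = cdf_contrib >= cdf_cut
    reliability_high = aptp_contrib >= aptp_cut
    if safety_high and reliability_high:
        return "extract: significant for safety and reliability"
    if safety_high:
        return "extract: significant for safety"
    if reliability_high:
        return "extract: significant for reliability"
    return "screen out"

# hypothetical (CDF contribution, APTP contribution) per component
reports = {
    "emergency diesel generator": (5e-6, 2e-4),
    "main feedwater pump": (3e-8, 4e-3),
    "instrument air dryer": (1e-9, 1e-5),
}
triage = {component: importance_class(*v) for component, v in reports.items()}
```

Only reports whose root-cause component clears at least one threshold would be extracted as significant; the rest are screened out.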

  2. Clustering-based urbanisation to improve enterprise information systems agility

    Science.gov (United States)

    Imache, Rabah; Izza, Said; Ahmed-Nacer, Mohamed

    2015-11-01

    Enterprises face daily pressures to demonstrate their ability to adapt quickly to the unpredictable changes of their dynamic environment in terms of technology, society, legislation, competitiveness and globalisation. Thus, to secure its place in this hard context, an enterprise must always be agile and must ensure its sustainability by a continuous improvement of its information system (IS). Therefore, the agility of enterprise information systems (EISs) can be considered today as a primary objective of any enterprise. One way of achieving this objective is the urbanisation of the EIS in the context of continuous improvement, to make it a real asset serving enterprise strategy. This paper investigates the benefits of EIS urbanisation based on clustering techniques as a driver for agility production and/or improvement, to help managers and IT management departments continuously improve the performance of the enterprise and make appropriate decisions in the scope of the enterprise objectives and strategy. This approach is applied to the urbanisation of a tour operator EIS.

  3. A Study on Improving Information Processing Abilities Based on PBL

    Science.gov (United States)

    Kim, Du Gyu; Lee, JaeMu

    2014-01-01

    This study examined an instruction method for the improvement of information processing abilities in elementary school students. Current elementary students are required to develop information processing abilities to create new knowledge for this digital age. There is, however, a shortage of instruction strategies for these information processing…

  4. Improvements to the extraction of an AlGaN/GaN HEMT small-signal model

    International Nuclear Information System (INIS)

    Pu Yan; Pang Lei; Wang Liang; Chen Xiaojuan; Li Chengzhan; Liu Xinyu

    2009-01-01

    The accurate extraction of AlGaN/GaN HEMT small-signal models, which is an important step in large-signal modeling, can exactly reflect the microwave performance of the physical structure of the device. A new method of extracting the parasitic elements is presented, and an open dummy structure is introduced to obtain the parasitic capacitances. With a Schottky resistor in the gate, a new method is developed to extract Rg. In order to characterize the changes of the depletion region under various drain voltages, the drain delay factor is involved in the output conductance of the device. Compared to the traditional method, the fitting of S11 and S22 is improved, and fT and fmax can be better predicted. The validity of the proposed method is verified with excellent correlation between the measured and simulated S-parameters in the range of 0.1 to 26.1 GHz. (semiconductor devices)

  5. Point Cloud Classification of Tesserae from Terrestrial Laser Data Combined with Dense Image Matching for Archaeological Information Extraction

    Science.gov (United States)

    Poux, F.; Neuville, R.; Billen, R.

    2017-08-01

    Reasoning from information extraction given by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud from both terrestrial laser scanning and dense image matching. Using 18 features, including the sensor's biased data, each tessera in the high-density point cloud from the 3D captured complex mosaics of Germigny-des-prés (France) is segmented via a colour-based multi-scale abstraction that extracts connectivity. A 2D surface and outline polygon of each tessera is generated by RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify every tessera based on its size, surface, shape, material properties and its neighbours' classes. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.
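The RANSAC plane-extraction step mentioned in this record can be sketched as a simple hypothesize-and-verify loop: fit a plane to three random points, count inliers within a distance tolerance, and keep the best model. The synthetic points, iteration count and tolerance below are illustrative assumptions, not the paper's mosaic data:

```python
# Minimal RANSAC plane fit on synthetic 3D points (pure Python sketch).
import random

def plane_from_points(p1, p2, p3):
    """Plane n . x = d through three points; None if they are collinear."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:
        return None
    n = [c / norm for c in n]
    return n, sum(n[i] * p1[i] for i in range(3))

def ransac_plane(points, iters=200, tol=0.05, seed=1):
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        model = plane_from_points(*rng.sample(points, 3))
        if model is None:
            continue
        n, d = model
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) - d) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = model, inliers
    return best, best_inliers

# 30 points on the z = 0 plane plus two gross outliers
pts = [(random.Random(i).uniform(0, 1), random.Random(i + 100).uniform(0, 1), 0.0)
       for i in range(30)] + [(0.5, 0.5, 3.0), (0.1, 0.9, -2.0)]
plane, inliers = ransac_plane(pts)
```

The winning model recovers the dominant plane and rejects the outliers; a convex hull of the inliers would then give the tessera outline polygon.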

  6. Investigating the feasibility of using partial least squares as a method of extracting salient information for the evaluation of digital breast tomosynthesis

    Science.gov (United States)

    Zhang, George Z.; Myers, Kyle J.; Park, Subok

    2013-03-01

    Digital breast tomosynthesis (DBT) has shown promise for improving the detection of breast cancer, but it has not yet been fully optimized due to a large space of system parameters to explore. A task-based statistical approach is a rigorous method for evaluating and optimizing this promising imaging technique with the use of optimal observers such as the Hotelling observer (HO). However, the high data dimensionality found in DBT has been the bottleneck for the use of a task-based approach in DBT evaluation. To reduce data dimensionality while extracting salient information for performing a given task, efficient channels have to be used for the HO. In the past few years, 2D Laguerre-Gauss (LG) channels, which are a complete basis for stationary backgrounds and rotationally symmetric signals, have been utilized for DBT evaluation. But since background and signal statistics from DBT data are neither stationary nor rotationally symmetric, LG channels may not be efficient in providing reliable performance trends as a function of system parameters. Recently, partial least squares (PLS) has been shown to generate efficient channels for the Hotelling observer in detection tasks involving random backgrounds and signals. In this study, we investigate the use of PLS as a method for extracting salient information from DBT in order to better evaluate such systems.
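A minimal sketch of how PLS can yield observer channels: each channel is a weight vector maximizing covariance between the channel outputs Xw and the signal-present/absent label, extracted by NIPALS-style PLS1 with deflation. The 5-pixel toy "images" and the channel count are made up for illustration; this is not the paper's DBT data or exact procedure:

```python
# NIPALS-style PLS1 channel extraction on toy detection data (pure Python).
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pls_channels(X, y, n_channels=2):
    """Return n_channels unit weight vectors (the 'channels') for data X, labels y."""
    X = [row[:] for row in X]
    y = y[:]
    channels = []
    for _ in range(n_channels):
        # weight vector proportional to X^T y (direction of maximal covariance)
        w = [dot([row[j] for row in X], y) for j in range(len(X[0]))]
        norm = sum(c * c for c in w) ** 0.5
        w = [c / norm for c in w]
        t = [dot(row, w) for row in X]          # channel outputs (scores)
        tt = dot(t, t)
        p = [dot([row[j] for row in X], t) / tt for j in range(len(X[0]))]
        # deflate X and y so the next channel captures the remaining covariance
        X = [[row[j] - t[i] * p[j] for j in range(len(row))]
             for i, row in enumerate(X)]
        c = dot(t, y) / tt
        y = [y[i] - c * t[i] for i in range(len(y))]
        channels.append(w)
    return channels

rng = random.Random(0)
X, y = [], []
for _ in range(100):
    label = rng.random() < 0.5
    img = [rng.gauss(0, 1) for _ in range(5)]   # 5-pixel toy "image"
    if label:
        img[1] += 2.0                           # signal lives on pixels 1 and 3
        img[3] += 2.0
    X.append(img)
    y.append(1.0 if label else -1.0)
chans = pls_channels(X, y)
```

Because the channels are driven by the label, the first weight vector concentrates on the signal-bearing pixels, which is what makes PLS channels "efficient" for the detection task compared to fixed bases like Laguerre-Gauss.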

  7. Improving face image extraction by using deep learning technique

    Science.gov (United States)

    Xue, Zhiyun; Antani, Sameer; Long, L. R.; Demner-Fushman, Dina; Thoma, George R.

    2016-03-01

    The National Library of Medicine (NLM) has made a collection of over 1.2 million research articles containing 3.2 million figure images searchable using the Open-i multimodal (text+image) search engine. Many images are visible light photographs, some of which are images containing faces ("face images"). Some of these face images are acquired in unconstrained settings, while others are studio photos. To extract the face regions in the images, we first applied one of the most widely used face detectors, a pre-trained Viola-Jones detector implemented in Matlab and OpenCV. The Viola-Jones detector was trained for unconstrained face image detection, but the results for the NLM database included many false positives, which resulted in a very low precision. To improve this performance, we applied a deep learning technique, which reduced the number of false positives and, as a result, the detection precision was improved significantly. (For example, the classification accuracy for identifying whether the face regions output by the Viola-Jones detector are true positives or not in a test set is about 96%.) By combining these two techniques (Viola-Jones and deep learning) we were able to increase the system precision considerably, while avoiding the need to manually construct a large training set by manual delineation of the face regions.

  8. Proposal for Efficiency Improvement of Beam Extraction from the AIC-144 Beam Formation During Its Acceleration

    International Nuclear Information System (INIS)

    Schwabe, J.; Godunowa, H.

    1998-10-01

    The computer simulations of the beam dynamics both in the radial and vertical phase planes for the AIC-144 cyclotron are presented. The calculation results show how it is possible to improve the beam extraction efficiency

  9. Automatic Mapping Extraction from Multiecho T2-Star Weighted Magnetic Resonance Images for Improving Morphological Evaluations in Human Brain

    Directory of Open Access Journals (Sweden)

    Shaode Yu

    2013-01-01

    Mapping extraction is useful in medical image analysis. Similarity coefficient mapping (SCM) replaced the signal response to the time course in tissue similarity mapping with the signal response to TE changes in multiecho T2-star weighted magnetic resonance imaging without contrast agent. Since different tissues have different sensitivities to reference signals, a new algorithm is proposed by adding a sensitivity index to SCM. It generates two mappings: one measures relative signal strength (SSM) and the other depicts fluctuation magnitude (FMM). Meanwhile, the new method adaptively generates a proper reference signal by maximizing the sum of the contrast index (CI) from SSM and FMM without manual delineation. Based on four groups of images from multiecho T2-star weighted magnetic resonance imaging, the capacity of SSM and FMM in enhancing image contrast and morphological evaluation is validated. The average contrast improvement index (CII) of SSM is 1.57, 1.38, 1.34, and 1.41. The average CII of FMM is 2.42, 2.30, 2.24, and 2.35. Visual analysis of regions of interest demonstrates that SSM and FMM show better morphological structures than the original images, T2-star mapping and SCM. These extracted mappings can be further applied in information fusion, signal investigation, and tissue segmentation.
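The core similarity-mapping computation, correlating each pixel's multi-echo signal (one sample per TE) against a reference decay curve, can be sketched as below. The 2-pixel toy "image" and the reference values are fabricated for illustration, not MR data:

```python
# Sketch of similarity coefficient mapping: per-pixel Pearson correlation
# between the pixel's signal-vs-TE curve and a reference decay curve.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def scm(image_echoes, reference):
    """image_echoes: dict pixel -> list of signal values over the echo times."""
    return {px: pearson(sig, reference) for px, sig in image_echoes.items()}

reference = [100, 70, 49, 34]           # roughly exponential T2*-like decay
pixels = {
    "tissue_like": [80, 56, 39, 27],    # decays like the reference -> high SCM
    "noise_like": [50, 60, 40, 70],     # uncorrelated fluctuation -> low SCM
}
mapping = scm(pixels, reference)
```

The record's extension replaces the fixed reference with one chosen adaptively to maximize contrast, and splits the result into strength (SSM) and fluctuation (FMM) maps.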

  10. WEB STRUCTURE MINING USING PAGERANK, IMPROVED PAGERANK – AN OVERVIEW

    Directory of Open Access Journals (Sweden)

    V. Lakshmi Praba

    2011-03-01

    Web mining is the extraction of interesting and potentially useful patterns and information from the Web. It includes Web documents, hyperlinks between documents, and usage logs of web sites. The significant tasks for web mining can be listed as information retrieval, information selection/extraction, generalization and analysis. Web information retrieval tools consider only the text on pages and ignore information in the links. The goal of Web structure mining is to explore structural summary about the web. Web structure mining, focusing on link information, is an important aspect of web data. This paper presents an overview of PageRank and Improved PageRank and their working functionality in web structure mining.
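The basic PageRank iteration the paper surveys can be sketched in a few lines of power iteration; the four-page link graph, damping factor and iteration count below are illustrative choices:

```python
# Minimal PageRank by power iteration on an adjacency list (hypothetical graph).

def pagerank(links, d=0.85, iters=50):
    """links: dict page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1.0 - d) / n for p in pages}   # teleportation term
        for p, outs in links.items():
            if not outs:                          # dangling page: spread evenly
                for q in pages:
                    new[q] += d * rank[p] / n
            else:
                share = d * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
        rank = new
    return rank

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
scores = pagerank(web)
```

Pages that accumulate the most incoming link weight ("C" here, linked by A, B and D) end up with the highest rank, while a page with no in-links ("D") receives only the teleportation mass.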

  11. Information Extraction for Social Media

    NARCIS (Netherlands)

    Habib, M. B.; Keulen, M. van

    2014-01-01

    The rapid growth in IT in the last two decades has led to a growth in the amount of information available online. A new style for sharing information is social media. Social media is a continuously instantly updated source of information. In this position paper, we propose a framework for

  12. Improving Treatment Response for Paediatric Anxiety Disorders: An Information-Processing Perspective.

    Science.gov (United States)

    Ege, Sarah; Reinholdt-Dunne, Marie Louise

    2016-12-01

    Cognitive behavioural therapy (CBT) is considered the treatment of choice for paediatric anxiety disorders, yet there remains substantial room for improvement in treatment outcomes. This paper examines whether theory and research into the role of information-processing in the underlying psychopathology of paediatric anxiety disorders indicate possibilities for improving treatment response. Using a critical review of recent theoretical, empirical and academic literature, the paper examines the role of information-processing biases in paediatric anxiety disorders, the extent to which CBT targets information-processing biases, and possibilities for improving treatment response. The literature reviewed indicates a role for attentional and interpretational biases in anxious psychopathology. While there is theoretical grounding and limited empirical evidence to indicate that CBT ameliorates interpretational biases, evidence regarding the effects of CBT on attentional biases is mixed. Novel treatment methods including attention bias modification training, attention feedback awareness and control training, and mindfulness-based therapy may hold potential in targeting attentional biases, and thereby in improving treatment response. The integration of novel interventions into an existing evidence-based protocol is a complex issue and faces important challenges with regard to determining the optimal treatment package. Novel interventions targeting information-processing biases may hold potential in improving response to CBT for paediatric anxiety disorders. Many important questions remain to be answered.

  13. A construction scheme of web page comment information extraction system based on frequent subtree mining

    Science.gov (United States)

    Zhang, Xiaowen; Chen, Bingfeng

    2017-08-01

    Based on a frequent subtree mining algorithm, this paper proposes a construction scheme for a web page comment information extraction system, referred to as the FSM system. The paper briefly introduces the overall system architecture and its modules, then describes the core of the system in detail, and finally presents the system prototype.

  14. Ensuring and Improving Information Quality for Earth Science Data and Products: Role of the ESIP Information Quality Cluster

    Science.gov (United States)

    Ramapriyan, Hampapuram; Peng, Ge; Moroni, David; Shie, Chung-Lin

    2016-01-01

    Quality of products is always of concern to users regardless of the type of products. The focus of this paper is on the quality of Earth science data products. There are four different aspects of quality - scientific, product, stewardship and service. All these aspects taken together constitute Information Quality. With increasing requirements for ensuring and improving information quality, there has been considerable work related to information quality during the last several years. Given this rich background of prior work, the Information Quality Cluster (IQC), established within the Federation of Earth Science Information Partners (ESIP), has been active with membership from multiple organizations. Its objectives and activities, aimed at ensuring and improving information quality for Earth science data and products, are discussed briefly.

  16. Guava leaf extracts promote glucose metabolism in SHRSP.Z-Leprfa/Izm rats by improving insulin resistance in skeletal muscle.

    Science.gov (United States)

    Guo, Xiangyu; Yoshitomi, Hisae; Gao, Ming; Qin, Lingling; Duan, Ying; Sun, Wen; Xu, Tunhai; Xie, Peifeng; Zhou, Jingxin; Huang, Liansha; Liu, Tonghua

    2013-03-01

    Metabolic syndrome (MS) and type 2 diabetes mellitus (T2DM) have been associated with insulin resistance; however, effective therapies for improving insulin sensitivity are limited. This study is aimed at investigating the effect of Guava Leaf (GL) extracts on glucose tolerance and insulin resistance in SHRSP.Z-Leprfa/Izm rats (SHRSP/ZF), a model of spontaneous metabolic syndrome. Male rats at 7 weeks of age were administered vehicle water or treated by gavage with 2 g/kg GL extracts daily for six weeks, and their body weights, water and food consumption, glucose tolerance, and insulin resistance were measured. Compared with the controls, treatment with GL extracts did not modulate the amounts of water and food consumption, but significantly reduced the body weights at six weeks post treatment. Treatment with GL extracts did not alter the levels of fasting plasma glucose and insulin, but significantly reduced the levels of plasma glucose at 60 and 120 min post glucose challenge, and also reduced the values of the AUC and quantitative insulin sensitivity check index (QUICKI) at 42 days post treatment. Furthermore, treatment with GL extracts promoted IRS-1, AKT, and PI3K p85 expression, and IRS-1, AMPK, and AKT308, but not AKT473, phosphorylation, accompanied by increasing the ratios of membrane to total Glut4 expression and adiponectin receptor 1 transcription in the skeletal muscles. These data indicated that GL extracts improved glucose metabolism and insulin sensitivity in the skeletal muscles of rats by modulating insulin-related signaling.

  17. Stability and solubility improvement of Sompoi (Acacia concinna Linn.) pod extract by topical microemulsion

    Directory of Open Access Journals (Sweden)

    Worrapan Poomanee

    2017-07-01

    The aim of this study was to enhance the solubility and stability of Acacia concinna extract by loading it in a microemulsion for topical application. Both the physical appearance and biological activities of the extract-loaded microemulsion were determined in comparison with the extract solution. Pseudoternary phase diagrams of three oil types, including tea seed oil, grape seed oil, and sesame oil, together with polysorbate 85 or the mixture of polysorbate 85 and sorbitan oleate as surfactants, and absolute ethanol as a co-surfactant, were constructed to optimize the microemulsion area. The selected microemulsion was then characterized for droplet size, polydispersity index, and viscosity. Tea seed oil exhibited the largest microemulsion area in the phase diagram because it had the highest unsaturated fatty acid content. The microemulsion composed of tea seed oil (5%), polysorbate 85 (40%), ethanol (20%), and water (35%) exhibited Newtonian flow behavior with a droplet size and polydispersity index of 68.03 ± 1.09 nm and 0.44 ± 0.04, respectively. After 4% w/w of the extract was incorporated into the microemulsion, a larger droplet size was observed (239.77 ± 12.69 nm) with a lower polydispersity index (0.37 ± 0.02). After storage in various conditions, both the physical appearance and the stability of the biological activity of the extract-loaded microemulsion were improved compared to the solution. Therefore, the A. concinna loaded microemulsion may be a promising carrier for further development into a topical formulation, and clinical trials for pharmaceutical and cosmeceutical applications are also suggested.

  18. Information Retrieval Using Hadoop Big Data Analysis

    Science.gov (United States)

    Motwani, Deepak; Madan, Madan Lal

    This paper concerns big data analysis, the cognitive operation of probing huge amounts of information in an attempt to uncover unseen patterns. Through big data analytics, applications in both public and private organization sectors have made a strategic determination to turn big data into a competitive benefit. The primary task of extracting value from big data gives rise to a process applied to pull information from multiple different sources; this process is known as extract, transform and load (ETL). The approach in this paper extracts information from log files and research papers, reducing the effort needed for pattern finding and summarization of documents from several sources. The work helps to better understand basic Hadoop concepts and improves the user experience for research. In this paper, we propose an approach for analyzing log files to find concise, useful information in a time-saving way by using Hadoop. Our proposed approach will be applied to different research papers in a specific domain and used to obtain summarized content for further improvement.
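The log-analysis idea described here, a map phase emitting key/value pairs, a shuffle/sort grouping them by key, and a reduce phase aggregating each group, can be simulated in plain Python as below; a real deployment would run the same two phases as a Hadoop MapReduce (or Streaming) job. The log lines are invented for illustration:

```python
# Word-count-style map/reduce over log lines, simulated in plain Python.
from itertools import groupby

def mapper(line):
    """Emit (severity, 1) for each log record; field layout is an assumption."""
    parts = line.split()
    if len(parts) >= 2:
        yield parts[1], 1

def reducer(key, values):
    return key, sum(values)

logs = [
    "2018-01-01T10:00:02 ERROR disk full on /data",
    "2018-01-01T10:00:03 INFO checkpoint written",
    "2018-01-01T10:00:04 ERROR disk full on /data",
    "2018-01-01T10:00:05 WARN slow datanode",
]

# shuffle/sort phase: group mapper output by key, then reduce each group
pairs = sorted(kv for line in logs for kv in mapper(line))
summary = dict(reducer(k, [v for _, v in group])
               for k, group in groupby(pairs, key=lambda kv: kv[0]))
```

On a cluster, the `sorted`/`groupby` step is what Hadoop's shuffle performs between the distributed map and reduce tasks, so the same mapper/reducer pair scales to files far larger than memory.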

  19. Medicaid Analytic eXtract (MAX) General Information

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Medicaid Analytic eXtract (MAX) data is a set of person-level data files on Medicaid eligibility, service utilization, and payments. The MAX data are created to...

  20. Improved penetration of wild ginseng extracts into the skin using low-temperature atmospheric pressure plasma

    Science.gov (United States)

    Nam, Seoul Hee; Hae Choi, Jeong; Song, Yeon Suk; Lee, Hae-June; Hong, Jin-Woo; Kim, Gyoo Cheon

    2018-04-01

    Wild ginseng (WG) is a well-known traditional medicinal plant that grows in natural environments in deep mountains. WG has been thought to exert potent physiological and medicinal effects, and, recently, its use in skin care has attracted much interest. This study investigated the efficient penetration of WG extracts into the skin by means of low-temperature atmospheric pressure plasma (LTAPP), and its effects on the skin at the cellular and tissue levels. NIH3T3 mouse embryonic fibroblasts and HRM-2 hairless mice were used to confirm the improved absorption of WG extracts into the skin using LTAPP. The gene expression levels in NIH3T3 cells and morphological changes in skin tissues after WG treatment were monitored using both in vitro and in vivo experiments. Although WG extracts did not show any significant effects on proliferative activity or cytotoxicity, at a concentration of 1:800 they significantly increased the expression of fibronectin and vascular endothelial growth factor. In the in vivo study, the combinational treatment of LTAPP and WG markedly induced the expression of fibronectin and integrin α6, and thickened the skin tissue. Our results showed that LTAPP treatment safely and effectively accelerated the penetration of the WG extracts into the skin, thereby increasing the effects of WG on the skin.

  1. Effect of montmorillonite on carboxylated styrene butadiene rubber/hindered phenol damping material with improved extraction resistance

    International Nuclear Information System (INIS)

    Gao, Yuan; Wang, Xiaoping; Liu, Meijun; Xi, Xue; Zhang, Xin; Jia, Demin

    2014-01-01

    Highlights: • MMT and XSBR display a synergic effect in protecting HP1098 from being extracted. • A new hindered phenol, HP1098, was used to prepare the damping material. • Effects of three preparation methods on the material properties were studied. Abstract: Three methods of blending, including direct blending, melt blending and latex blending, were introduced to disperse sodium-based montmorillonite (Na-MMT) and N,N′-hexane-1,6-diylbis{3-(5-di-tert-butyl-4-hydroxyphenyl-propionamide)} (HP1098) into the carboxylated styrene butadiene rubber (XSBR) matrix. Small-angle X-ray diffraction testing indicated that melting Na-MMT with HP1098 enlarged the d-spacing of Na-MMT, which was further enlarged by mechanical blending with XSBR; this led to homogeneous dispersion of Na-MMT and HP1098, as indicated by transmission electron microscopy. Latex blending was found most advantageous in dispersing HP1098, which was essential for improved damping performance. Dynamic mechanical analysis was utilized to characterize damping properties, and the enhanced static mechanical properties presumably originated from molecular chains being intercalated into the enlarged galleries of Na-MMT by mechanical blending. Formation of hydrogen bonds was observed by Fourier transform infrared spectroscopy and is believed to be responsible for the exceptional damping performance at elevated temperature. Extraction measurement of the XSBR/Na-MMT/HP1098 composite indicated that XSBR and Na-MMT showed a synergic effect in protecting HP1098 molecules from being extracted, which is a promising route to rubber/hindered phenol damping materials with improved extraction resistance, thereby increasing the performance stability and lifespan of the composite materials. An additional advantage of this type of material is better processability and a shortened vulcanization process.

  2. Comparison of the Effect of New Spice Freon Extracts Towards Ground Spices and Antioxidants for Improving the Quality of Bulgarian-Type Dry-Cured Sausage

    Directory of Open Access Journals (Sweden)

    Balev Dessislav Kostadinov

    2017-03-01

    Ground spices are a source of hazards for dry-fermented meat products. Since dry-cured sausages are not subjected to heat treatment, there is a high risk of microbial cross-contamination and physical impurities. The aim of this study was to determine the effects of the replacement of 3 g/kg of ground black pepper (Piper nigrum L.) and cumin (Cuminum cyminum) with their aliquots of new freon extracts, and compare them with the effect of 0.2 g/kg antioxidant addition (taxifolin extract from Siberian larch (Larix sibirica Ledeb.), rosemary (Rosmarinus officinalis L.) extract, and butylated hydroxytoluene) on the sensory properties, color stability, proximate composition, free amino nitrogen and pH of Bulgarian-type dry-cured "Sudjuk" sausages. The replacement of natural ground spices with aliquots of their extracts improved sensory properties and stabilized the color characteristics of the final product during 30 days of storage at 0-4°C. The addition of 0.2 g/kg rosemary extract was as effective as the addition of freon extracts on the overall assessment up to the 14th day of the experiment. It was determined that the addition of antioxidants or spice extracts had no significant effect on the proximate composition, pH, and free amino nitrogen accumulation of the "Sudjuk". The addition of 0.2 g/kg taxifolin or rosemary extracts and butylated hydroxytoluene was not as efficient in improving the sensory properties and color stabilization as the new freon spice extracts. The examined spice extracts can be successfully used to improve the quality of "Sudjuk" sausages.

  3. Dicranostiga leptopodu (Maxim.) Fedde extracts attenuated CCl4-induced acute liver damage in mice through increasing anti-oxidative enzyme activity to improve mitochondrial function.

    Science.gov (United States)

    Tang, Deping; Wang, Fang; Tang, Jinzhou; Mao, Aihong; Liao, Shiqi; Wang, Qin

    2017-01-01

    Dicranostiga leptopodu (Maxim.) Fedde (DLF), a poppy plant, has been reported to have many benefits and medicinal properties, including free radical scavenging and detoxifying. However, the protective effect of DLF extracts against carbon tetrachloride (CCl4)-induced damage in mice liver has not been elucidated. Here, we demonstrated that DLF extracts attenuated CCl4-induced liver damage in mice through increasing anti-oxidative enzyme activity to improve mitochondrial function. In this study, the mice liver damage evoked by CCl4 was marked by morphology changes and a significant rise in lipid peroxidation, as well as alterations of mitochondrial respiratory function. Interestingly, pretreatment with DLF extracts attenuated CCl4-induced morphological damage and the increase of lipid peroxidation in mice liver. Additionally, DLF extracts improved mitochondrial function by preventing the disruption of the respiratory chain and the suppression of mitochondrial Na+/K+-ATPase and Ca2+-ATPase activity. Furthermore, administration of DLF extracts elevated superoxide dismutase (SOD), catalase (CAT) and glutathione peroxidase (GPx) levels and maintained the balance of redox status. These results showed that the protective effect of DLF extracts on mice liver is mediated by improving mitochondrial respiratory function and keeping the balance of redox status, suggesting that DLF extracts could be used as a potential protective agent for the liver against hepatotoxic agents. Copyright © 2016. Published by Elsevier Masson SAS.

  4. Unsupervised Symbolization of Signal Time Series for Extraction of the Embedded Information

    Directory of Open Access Journals (Sweden)

    Yue Li

    2017-03-01

    Full Text Available This paper formulates an unsupervised algorithm for symbolization of signal time series to capture the embedded dynamic behavior. The key idea is to convert time series of the digital signal into a string of (spatially discrete) symbols from which the embedded dynamic information can be extracted in an unsupervised manner (i.e., no requirement for labeling of time series). The main challenges here are: (1) definition of the symbol assignment for the time series; (2) identification of the partitioning segment locations in the signal space of time series; and (3) construction of probabilistic finite-state automata (PFSA) from the symbol strings that contain temporal patterns. The reported work addresses these challenges by maximizing the mutual information measures between symbol strings and PFSA states. The proposed symbolization method has been validated by numerical simulation as well as by experimentation in a laboratory environment. Performance of the proposed algorithm has been compared to that of two commonly used algorithms of time series partitioning.
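    The two-stage pipeline this abstract describes (partition the signal range into symbols, then estimate a probabilistic finite-state automaton from the symbol string) can be sketched as follows. This is a minimal illustration with a uniform partition and single-symbol PFSA states; the paper instead selects the partition by maximizing mutual information between symbol strings and PFSA states, which is not reproduced here.

    ```python
    import numpy as np

    def symbolize(signal, n_symbols):
        """Map each sample to a symbol index via equal-width bins over the
        signal range (a simplified stand-in for the paper's optimized
        partitioning)."""
        edges = np.linspace(signal.min(), signal.max(), n_symbols + 1)[1:-1]
        return np.digitize(signal, edges)

    def transition_matrix(symbols, n_symbols):
        """Estimate PFSA state-transition probabilities from the symbol
        string, with states taken as single symbols (depth-1 Markov model)."""
        counts = np.zeros((n_symbols, n_symbols))
        for a, b in zip(symbols[:-1], symbols[1:]):
            counts[a, b] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        row_sums[row_sums == 0] = 1.0  # avoid division by zero for unseen states
        return counts / row_sums

    t = np.linspace(0, 4 * np.pi, 400)
    sig = np.sin(t)
    syms = symbolize(sig, 4)
    P = transition_matrix(syms, 4)
    ```

    The resulting row-stochastic matrix `P` is the object from which temporal patterns are read off; the quality of the symbolization is what the paper's mutual-information criterion optimizes.
    
    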

  5. Hibiscus sabdariffa extract lowers blood pressure and improves endothelial function.

    Science.gov (United States)

    Joven, Jorge; March, Isabel; Espinel, Eugenia; Fernández-Arroyo, Salvador; Rodríguez-Gallego, Esther; Aragonès, Gerard; Beltrán-Debón, Raúl; Alonso-Villaverde, Carlos; Rios, Lidia; Martin-Paredero, Vicente; Menendez, Javier A; Micol, Vicente; Segura-Carretero, Antonio; Camps, Jordi

    2014-06-01

    Polyphenols from Hibiscus sabdariffa calices were administered to patients with metabolic syndrome (125 mg/kg/day for 4 wk, n = 31) and spontaneously hypertensive rats (125 or 60 mg/kg in a single dose or daily for 1 wk, n = 8 for each experimental group). The H. sabdariffa extract improved metabolism, displayed potent anti-inflammatory and antioxidant activities, and significantly reduced blood pressure in both humans and rats. Diuresis and inhibition of the angiotensin I-converting enzyme were found to be less important mechanisms than those related to the antioxidant, anti-inflammatory, and endothelium-dependent effects to explain the beneficial actions. Notably, polyphenols induced a favorable endothelial response that should be considered in the management of metabolic cardiovascular risks. © 2014 The Authors. Molecular Nutrition & Food Research published by Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  6. Polyphenol-rich strawberry extract protects human dermal fibroblasts against hydrogen peroxide oxidative damage and improves mitochondrial functionality.

    Science.gov (United States)

    Giampieri, Francesca; Alvarez-Suarez, José M; Mazzoni, Luca; Forbes-Hernandez, Tamara Y; Gasparrini, Massimiliano; Gonzàlez-Paramàs, Ana M; Santos-Buelga, Celestino; Quiles, José L; Bompadre, Stefano; Mezzetti, Bruno; Battino, Maurizio

    2014-06-11

    Strawberry bioactive compounds are widely known to be powerful antioxidants. In this study, the antioxidant and anti-aging activities of a polyphenol-rich strawberry extract were evaluated using human dermal fibroblasts exposed to H2O2. Firstly, the phenol and flavonoid contents of strawberry extract were studied, as well as the antioxidant capacity. HPLC-DAD analysis was performed to determine the vitamin C and β-carotene concentration, while HPLC-DAD/ESI-MS analysis was used for anthocyanin identification. Strawberry extract presented a high antioxidant capacity, and a relevant concentration of vitamins and phenolics. Pelargonidin- and cyanidin-glycosides were the most representative anthocyanin components of the fruits. Fibroblasts incubated with strawberry extract and stressed with H2O2 showed an increase in cell viability, a smaller intracellular amount of ROS, and a reduction of membrane lipid peroxidation and DNA damage. Strawberry extract was also able to improve mitochondrial functionality, increasing the basal respiration of mitochondria and to promote a regenerative capacity of cells after exposure to pro-oxidant stimuli. These findings confirm that strawberries possess antioxidant properties and provide new insights into the beneficial role of strawberry bioactive compounds on protecting skin from oxidative stress and aging.

  7. Algorithm based on regional separation for automatic grain boundary extraction using improved mean shift method

    Science.gov (United States)

    Zhenying, Xu; Jiandong, Zhu; Qi, Zhang; Yamba, Philip

    2018-06-01

    Metallographic microscopy shows that the vast majority of metal materials are composed of many small grains; the grain size of a metal is important for determining the tensile strength, toughness, plasticity, and other mechanical properties. In order to quantitatively evaluate grain size in metals, grain boundaries must be identified in metallographic images. Based on the phenomenon of grain boundary blurring or disconnection in metallographic images, this study develops an algorithm based on regional separation for automatically extracting grain boundaries by an improved mean shift method. Experimental observation shows that the grain boundaries obtained by the proposed algorithm are highly complete and accurate. This research has practical value because the proposed algorithm is suitable for grain boundary extraction from most metallographic images.
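    For reference, the plain mean-shift mode seeking that the abstract's improved variant builds on can be sketched in one dimension as below. The flat-kernel window and 1-D setting are simplifications for illustration, not the paper's implementation, which operates on metallographic image features with regional separation.

    ```python
    import numpy as np

    def mean_shift_1d(points, x0, bandwidth, n_iter=100):
        """Repeatedly move x to the mean of the points inside the window
        |p - x| <= bandwidth until the shift vanishes, converging on a
        local density mode."""
        x = float(x0)
        for _ in range(n_iter):
            window = points[np.abs(points - x) <= bandwidth]
            if window.size == 0:
                break
            new_x = window.mean()
            if abs(new_x - x) < 1e-9:
                break
            x = new_x
        return x

    pts = np.array([-0.1, 0.0, 0.1, 4.9, 5.0, 5.1])
    mode = mean_shift_1d(pts, 0.4, 1.0)
    ```

    Starting near either cluster, the iteration settles on that cluster's center; in the grain-boundary setting the analogous procedure groups pixels into homogeneous grain regions whose boundaries are then extracted.
    
    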

  8. Mate extract as feed additive for improvement of beef quality

    DEFF Research Database (Denmark)

    de Zawadzki, Andressa; Arrivetti, Leandro de O.R.; Vidal, Marília P.

    2017-01-01

    Mate (Ilex paraguariensis A.St.-Hil.) is generally recognized as safe (GRAS status) and has a high content of alkaloids, saponins, and phenolic acids. Addition of mate extract to broiler feed has been shown to increase the oxidative stability of chicken meat; however, its effect on beef quality...... from animals supplemented with mate extract has not been investigated so far. Addition of extract of mate to a standard maize/soy feed at a level of 0.5, 1.0 or 1.5% w/w to the diet of feedlot cattle resulted in increased levels of inosine monophosphate, creatine and carnosine in the fresh meat....... The content of total conjugated linoleic acid increased in the meat as mate extract concentration was increased in the feed. The tendency to radical formation in meat slurries as quantified by EPR spin-trapping decreased with increasing mate extract addition to the feed, especially after storage of the meat...

  9. Research on Techniques of Multifeatures Extraction for Tongue Image and Its Application in Retrieval

    Directory of Open Access Journals (Sweden)

    Liyan Chen

    2017-01-01

    Full Text Available Tongue diagnosis is one of the important methods in traditional Chinese medicine. Doctors can judge the disease's situation by observing the patient's tongue color and texture. This paper presents a novel approach to extract color and texture features of tongue images. First, we use an improved GLA (Generalized Lloyd Algorithm) to extract the main color of the tongue image. Considering that the color feature cannot fully express tongue image information, the paper analyzes the tongue edge's texture features and proposes an algorithm to extract them. Then, we integrate the two features in retrieval with different weights. Experimental results show that the proposed method can improve the detection rate of lesions in tongue images relative to single-feature retrieval.
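    As a toy illustration of the weighted integration step, the sketch below fuses a color distance and a texture distance into a single retrieval score and ranks candidate images by it. The weights and the tuple-based candidate list are hypothetical; the paper tunes its weights empirically and computes the distances from the GLA color feature and the edge-texture feature.

    ```python
    def fused_distance(color_d, texture_d, w_color=0.6, w_texture=0.4):
        # Weighted sum of per-feature distances; smaller means more similar.
        return w_color * color_d + w_texture * texture_d

    def retrieve(candidates, w_color=0.6, w_texture=0.4):
        # candidates: list of (image_id, color_distance, texture_distance)
        # precomputed against the query image.  Returns ids ranked best-first.
        scored = [(img_id, fused_distance(cd, td, w_color, w_texture))
                  for img_id, cd, td in candidates]
        return [img_id for img_id, _ in sorted(scored, key=lambda s: s[1])]

    ranking = retrieve([("a", 0.2, 0.9), ("b", 0.5, 0.1), ("c", 0.9, 0.8)])
    # → ["b", "a", "c"]
    ```

    Shifting the weights toward `w_texture` would let edge texture dominate the ranking, which is the lever the paper uses to balance the two features.
    
    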

  10. Addressing Risk Assessment for Patient Safety in Hospitals through Information Extraction in Medical Reports

    Science.gov (United States)

    Proux, Denys; Segond, Frédérique; Gerbier, Solweig; Metzger, Marie Hélène

    Hospital Acquired Infections (HAI) are a real burden for doctors and risk surveillance experts. The impact on patients' health and related healthcare cost is very significant and a major concern even for rich countries. Furthermore, the data required to evaluate the threat are generally not available to experts, which prevents fast reaction. However, recent advances in Computational Intelligence Techniques such as Information Extraction, Risk Pattern Detection in documents, and Decision Support Systems now make it possible to address this problem.

  11. Information Distribution in Complex Systems to Improve Team Performance

    National Research Council Canada - National Science Library

    Sperling, Brian K; Pritchett, Amy; Estrada, Arthur; Adam, Gina E

    2006-01-01

    .... Specifically, this study hypothesizes that providing task specific information to individual team members will improve coordination and decision-making, and therefore team performance, at time-critical tasks...

  12. [Improvement of symptoms in mild hyperthyroidism with an extract of Lycopus europaeus (Thyreogutt® mono)].

    Science.gov (United States)

    Eiling, Rudolf; Wieland, Veronika; Niestroj, Michael

    2013-02-01

    Extracts of Lycopus europaeus are used clinically for the control of vegetative and irritative symptoms in mild hyperthyroidism. This study assessed the effects and safety of an extract of Lycopus europaeus (Thyreogutt® mono tablets or drops) in a general practice setting. The study was conducted as an open post-marketing surveillance study consisting of three cohorts, i.e. a prolective assessment in patients receiving Thyreogutt® mono for 4 weeks, a retrolective documentation of data from patients who had received at least one course (4 weeks) of Thyreogutt® mono therapy during the previous 2 years, and a control cohort receiving no drug treatment. Assessments comprised symptoms of mild hyperthyroidism, laboratory tests of thyroid function and adverse events surveillance. Response was defined as normal thyroid hormone values at the end of therapy or a reduction of at least 20% in the number of symptoms after treatment. Responder rates were calculated. Four hundred and three patients with mild symptomatic hyperthyroidism were observed. The prolective assessment included 146 patients, the retrolective assessment 171 patients, and the control cohort 86 untreated patients. The responder rate was 72.6% in the prolective assessment and 96.5% in the retrolective assessment whereas the responder rate in the untreated control cohort amounted to 41.2%. No adverse events were reported. The extract of Lycopus europaeus was well tolerated and associated with a statistically significant and clinically relevant improvement of the symptoms in mild hyperthyroidism. The improvement was markedly better in both Thyreogutt® mono cohorts than in the control cohort.

  13. Synergistic improvement of gas sensing performance by micro-gravimetrically extracted kinetic/thermodynamic parameters

    International Nuclear Information System (INIS)

    Guo, Shuanbao; Xu, Pengcheng; Yu, Haitao; Cheng, Zhenxing; Li, Xinxin

    2015-01-01

    Highlights: • Sensing material can be comprehensively optimized by using a gravimetric cantilever. • Kinetic-thermodynamic model parameters are quantitatively extracted by experiment. • Sensing-material performance is synergistically optimized by extracted parameters. - Abstract: A novel method is explored for comprehensive design/optimization of organophosphorus sensing material, which is loaded on a mass-type microcantilever sensor. Conventionally, by directly observing the gas sensing response, it is difficult to build a quantitative relationship with the intrinsic structure of the material. To break through this difficulty, a resonant cantilever is employed as a gravimetric tool to implement molecule adsorption experiments. Based on the sensing data, key kinetic/thermodynamic parameters of the material to the molecule, including adsorption heat −ΔH°, adsorption/desorption rate constants Ka and Kd, active-site number per unit mass N′ and surface coverage θ, can be quantitatively extracted according to physical-chemistry theories. With gaseous DMMP (simulant of organophosphorus agents) as the sensing target, the optimization route for three sensing materials is successfully demonstrated. Firstly, a hyper-branched polymer is evaluated. Though suffering low sensitivity due to insufficient N′, the bis(4-hydroxyphenyl)-hexafluoropropane (BHPF) sensing-group exhibits satisfactory reproducibility due to appropriate −ΔH°. To achieve more sensing sites, KIT-5 mesoporous silica with higher surface area is assessed, resulting in good sensitivity but too high −ΔH°, which brings poor repeatability. After comprehensive consideration, the confirmed BHPF sensing-group is grafted on the KIT-5 carrier to form an optimized DMMP sensing nanomaterial. Experimental results indicate that, featuring appropriate kinetic/thermodynamic parameters of −ΔH°, Ka, Kd, N′ and θ, the BHPF-functionalized KIT-5 mesoporous silica exhibits synergistic improvement among
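    The link between the extracted parameters and the cantilever's gravimetric signal can be illustrated with a textbook Langmuir adsorption model. This is consistent with the Ka/Kd and θ quantities named above, but it is our assumption for illustration, not necessarily the exact model used in the paper.

    ```python
    def coverage(k_a, k_d, pressure):
        # Equilibrium fractional surface coverage, theta = K*p / (1 + K*p),
        # with the adsorption equilibrium constant K = k_a / k_d.
        K = k_a / k_d
        return K * pressure / (1.0 + K * pressure)

    def gravimetric_shift(theta, sites_per_mass, molecule_mass):
        # Adsorbed mass per unit sensing-material mass, which the resonant
        # cantilever reads out as a frequency shift.
        return theta * sites_per_mass * molecule_mass
    ```

    In this picture, sensitivity grows with the active-site number (the paper's N′) while reproducibility depends on the desorption rate, i.e. on −ΔH° being moderate enough that adsorption is reversible.
    
    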

  14. Information improves lives in the Philippines | IDRC - International ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2011-04-14

    Apr 14, 2011 ... Information improves lives in the Philippines ... “Without a reliable source of data, local planners opted to shoot an arrow and hit few or none of all eligible targets. ...

  15. Strategies for the extraction and analysis of non-extractable polyphenols from plants.

    Science.gov (United States)

    Domínguez-Rodríguez, Gloria; Marina, María Luisa; Plaza, Merichel

    2017-09-08

    The majority of studies on phenolic compounds from plants focus on the extractable fraction derived from an aqueous or aqueous-organic extraction. However, an important fraction of polyphenols is ignored because it remains retained in the extraction residue. These are the so-called non-extractable polyphenols (NEPs), which are high molecular weight polymeric polyphenols or individual low molecular weight phenolics associated with macromolecules. The scarce information available about NEPs shows that these compounds possess interesting biological activities, which is why interest in the study of these compounds has been increasing in recent years. Furthermore, the extraction and characterization of NEPs are considered a challenge because the available analytical methodologies present some limitations. Thus, the present literature review summarizes current knowledge of NEPs and the different methodologies for the extraction of these compounds, with a particular focus on hydrolysis treatments. Besides, this review provides information on the most recent developments in the purification, separation, identification and quantification of NEPs from plants. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Application of gamma radiation and physicochemical treatment to improve the bioactive properties of chitosan extracted from shrimp shell

    Directory of Open Access Journals (Sweden)

    Aktar Jesmin

    2017-12-01

    Full Text Available The aim of this study is to exploit a suitable chitosan extraction method from the chitin of indigenous shrimp shells by employing different physicochemical treatments and to improve different bioactive properties of this extracted chitosan (CS) by applying gamma radiation. Chitin was prepared from shrimp shell by pretreatment (deproteination, demineralization and oxidation). Chitosan was extracted from chitin by eight different methods varying different physicochemical parameters (reagent concentration, temperature and time) and assessed with respect to the degree of deacetylation, requirement of time and reagents. The method where chitin was repeatedly treated at 121°C for 30 min with 20 M NaOH produced the highest degree of deacetylation (DD) value (92%), as measured by potentiometric titration, with the least consumption of time and chemicals, and was thus selected as the most suitable extraction method. For further quality improvement, the chitosan with the highest DD value was irradiated with different doses (i.e., 5, 10, 15, 20 and 50 kGy) of gamma radiation from a cobalt-60 gamma irradiator. As the radiation dose was increased, the molecular weight of the wet irradiated chitosan, as measured by the viscosimetric method, decreased from 1.16 × 10⁵ to 1.786 × 10³, 1.518 × 10³, 1.134 × 10³, 1.046 × 10³ and 8.23 × 10² dalton, respectively. The radiation treatment of chitosan samples increased the antimicrobial activity significantly in a concentration-dependent manner on both gram-positive (Staphylococcus aureus) and gram-negative (Escherichia coli) bacteria, as determined by the well-diffusion method. Four to five percent wet chitosan treated with a radiation dose range of 5.0–10.0 kGy rendered the highest antimicrobial activity with the least energy and time consumption. Solubility, water binding capacity (WBC) and fat binding capacity (FBC) also improved due to irradiation of chitosan.
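    Viscosimetric molecular weights like those quoted above are conventionally obtained from the Mark–Houwink relation [η] = K·Mᵃ. A minimal sketch of the inversion follows; the K and a constants below are typical literature values for chitosan, not the ones used in this study.

    ```python
    def mark_houwink_mw(intrinsic_viscosity, K=1.81e-3, a=0.93):
        # Invert [eta] = K * M**a to get the viscosity-average molecular
        # weight M from a measured intrinsic viscosity [eta].
        return (intrinsic_viscosity / K) ** (1.0 / a)

    # Round-trip check: a polymer of M = 1e5 should be recovered exactly.
    eta = 1.81e-3 * (1e5) ** 0.93
    mw = mark_houwink_mw(eta)
    ```

    The strong sensitivity of M to [η] (exponent 1/a ≈ 1.08 here) is why radiation-induced chain scission shows up so clearly in the viscosity measurement.
    
    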

  17. Energy extraction from atmospheric turbulence to improve flight vehicle performance

    Science.gov (United States)

    Patel, Chinmay Karsandas

    Small 'bird-sized' Unmanned Aerial Vehicles (UAVs) have now become practical due to technological advances in embedded electronics, miniature sensors and actuators, and propulsion systems. Birds are known to take advantage of wind currents to conserve energy and fly long distances without flapping their wings. This dissertation explores the possibility of improving the performance of small UAVs by extracting the energy available in atmospheric turbulence. An aircraft can gain energy from vertical gusts by increasing its lift in regions of updraft and reducing its lift in downdrafts - a concept that has been known for decades. Starting with a simple model of a glider flying through a sinusoidal gust, a parametric optimization approach is used to compute the minimum gust amplitude and optimal control input required for the glider to sustain flight without losing energy. For small UAVs using optimal control inputs, sinusoidal gusts with amplitude of 10--15% of the cruise speed are sufficient to keep the aircraft aloft. The method is then modified and extended to include random gusts that are representative of natural turbulence. A procedure to design optimal control laws for energy extraction from realistic gust profiles is developed using a Genetic Algorithm (GA). A feedback control law is designed to perform well over a variety of random gusts, and not be tailored for one particular gust. A small UAV flying in vertical turbulence is shown to obtain average energy savings of 35--40% with the use of a simple control law. The design procedure is also extended to determine optimal control laws for sinusoidal as well as turbulent lateral gusts. The theoretical work is complemented by experimental validation using a small autonomous UAV. The development of a lightweight autopilot and UAV platform is presented. 
Flight test results show that active control of the lift of an autonomous glider resulted in approximately 46% average energy savings compared to glides with fixed

  18. A participatory model for improving occupational health and safety: improving informal sector working conditions in Thailand.

    Science.gov (United States)

    Manothum, Aniruth; Rukijkanpanich, Jittra; Thawesaengskulthai, Damrong; Thampitakkul, Boonwa; Chaikittiporn, Chalermchai; Arphorn, Sara

    2009-01-01

    The purpose of this study was to evaluate the implementation of an Occupational Health and Safety Management Model for informal sector workers in Thailand. The studied model was characterized by participatory approaches to preliminary assessment, observation of informal business practices, group discussion and participation, and the use of environmental measurements and samples. This model consisted of four processes: capacity building, risk analysis, problem solving, and monitoring and control. The participants consisted of four local labor groups from different regions, including wood carving, hand-weaving, artificial flower making, and batik processing workers. The results demonstrated that, as a result of applying the model, the working conditions of the informal sector workers had improved to meet necessary standards. This model encouraged the use of local networks, which led to cooperation within the groups to create appropriate technologies to solve their problems. The authors suggest that this model could effectively be applied elsewhere to improve informal sector working conditions on a broader scale.

  19. 42 CFR 422.153 - Use of quality improvement organization review information.

    Science.gov (United States)

    2010-10-01

    ... 42 Public Health 3 2010-10-01 2010-10-01 false Use of quality improvement organization review... HEALTH AND HUMAN SERVICES (CONTINUED) MEDICARE PROGRAM MEDICARE ADVANTAGE PROGRAM Quality Improvement § 422.153 Use of quality improvement organization review information. CMS will acquire from quality...

  20. Mate tea (Ilex paraguariensis) improves bone formation in the alveolar socket healing after tooth extraction in rats.

    Science.gov (United States)

    Brasilino, Matheus da Silva; Stringhetta-Garcia, Camila Tami; Pereira, Camila Scacco; Pereira, Ariana Aparecida Ferreira; Stringhetta, Karina; Leopoldino, Andréia Machado; Crivelini, Marcelo Macedo; Ervolino, Edilson; Dornelles, Rita Cássia Menegati; de Melo Stevanato Nakamune, Ana Cláudia; Chaves-Neto, Antonio Hernandes

    2018-04-01

    The objective of this study was to investigate the effects of mate tea (MT) [Ilex paraguariensis] on alveolar socket healing after tooth extraction. Sixteen male rats were divided into MT and control groups. MT was administered by intragastric gavage at a dose of 20 mg/kg/day for 28 days before and 28 days after right maxillary incisor extraction. The control group received an equal volume of water. Histopathological and histometric analysis of the neoformed bone area and osteocyte density were performed, as well as immunohistochemical analysis of osteocalcin (OCN), receptor activator of nuclear factor kappa-B ligand (RANKL), osteoprotegerin (OPG), tartrate-resistant acid phosphatase (TRAP), and manganese superoxide dismutase (MnSOD) in the alveolar socket. Calcium, phosphorus, alkaline phosphatase (ALP) activity, total antioxidant capacity (TAC), and malondialdehyde (MDA) were measured in plasma, whereas TRAP activity was determined in serum. Histometry evidenced a significant increase in bone area in alveolar socket healing on day 28 after tooth extraction. Regular MT ingestion improves the antioxidant defenses and bone formation, which is beneficial for alveolar socket bone healing after tooth extraction.

  1. Improvements in or relating to information display arrangements

    International Nuclear Information System (INIS)

    Blay, A.G.

    1975-01-01

    An information display arrangement for use in biomedical radiography with either X- or γ-radiation or ultrasonic waves is described. This invention overcomes the drawbacks of conventional three-dimensional displays, where certain features may be partially or completely obscured from view by matter which is not of interest. This improvement is achieved by interactive means, using a light pen on a cathode ray tube display to effectively draw boundaries between relevant and irrelevant features. The computerised display is then updated from the information store containing the transmission data. (U.K.)

  2. Sleep promotes the extraction of grammatical rules.

    Directory of Open Access Journals (Sweden)

    Ingrid L C Nieuwenhuis

    Full Text Available Grammar acquisition is a high level cognitive function that requires the extraction of complex rules. While it has been proposed that offline time might benefit this type of rule extraction, this remains to be tested. Here, we addressed this question using an artificial grammar learning paradigm. During a short-term memory cover task, eighty-one human participants were exposed to letter sequences generated according to an unknown artificial grammar. Following a time delay of 15 min, 12 h (wake or sleep) or 24 h, participants classified novel test sequences as Grammatical or Non-Grammatical. Previous behavioral and functional neuroimaging work has shown that classification can be guided by two distinct underlying processes: (1) the holistic abstraction of the underlying grammar rules and (2) the detection of sequence chunks that appear at varying frequencies during exposure. Here, we show that classification performance improved after sleep. Moreover, this improvement was due to an enhancement of rule abstraction, while the effect of chunk frequency was unaltered by sleep. These findings suggest that sleep plays a critical role in extracting complex structure from separate but related items during integrative memory processing. Our findings stress the importance of alternating periods of learning with sleep in settings in which complex information must be acquired.

  3. Lavandula angustifolia extract improves deteriorated synaptic plasticity in an animal model of Alzheimer's disease.

    Science.gov (United States)

    Soheili, Masoud; Tavirani, Mostafa Rezaei; Salami, Mahmoud

    2015-11-01

    Neurodegenerative Alzheimer's disease (AD) is associated with profound deficits in synaptic transmission and synaptic plasticity. Long-term potentiation (LTP), an experimental form of synaptic plasticity, is intensively examined in hippocampus. In this study we evaluated the effect of aqueous extract of lavender (Lavandula angustifolia) on induction of LTP in the CA1 area of hippocampus. In response to stimulation of the Schaffer collaterals the baseline or tetanized field extracellular postsynaptic potentials (fEPSPs) were recorded in the CA1 area. The electrophysiological recordings were carried out in four groups of rats; two control groups including the vehicle (CON) and lavender (CE) treated rats and two Alzheimeric groups including the vehicle (ALZ) and lavender (AE) treated animals. The extract inefficiently affected the baseline responses in the four testing groups. While the fEPSPs displayed a considerable LTP in the CON animals, no potentiation was evident in the tetanized responses in the ALZ rats. The herbal medicine effectively restored LTP in the AE group and further potentiated fEPSPs in the CE group. The positive effect of the lavender extract on the plasticity of synaptic transmission supports its previously reported behavioral effects on improvement of impaired spatial memory in the Alzheimeric animals.

  4. Information extraction from FN plots of tungsten microemitters.

    Science.gov (United States)

    Mussa, Khalil O; Mousa, Marwan S; Fischer, Andreas

    2013-09-01

    Tungsten based microemitter tips have been prepared both clean and coated with dielectric materials. For clean tungsten tips, apex radii have been varied ranging from 25 to 500 nm. These tips were manufactured by electrochemically etching a 0.1 mm diameter high purity (99.95%) tungsten wire at the meniscus of a two molar NaOH solution. The composite micro-emitters considered here consist of a tungsten core coated with different dielectric materials, such as magnesium oxide (MgO), sodium hydroxide (NaOH), tetracyanoethylene (TCNE), and zinc oxide (ZnO). It is worth noting here that the rather unconventional NaOH coating has shown several interesting properties. Various properties of these emitters were measured, including current-voltage (IV) characteristics and the physical shape of the tips. A conventional field emission microscope (FEM) with a tip (cathode)-screen (anode) separation standardized at 10 mm was used to electrically characterize the electron emitters. The system was evacuated down to a base pressure of ∼10⁻⁸ mbar when baked at up to ∼180 °C overnight. This allowed measurements of typical field electron emission (FE) characteristics, namely the IV characteristics and the emission images on a conductive phosphorus screen (the anode). Mechanical characterization has been performed with an FEI scanning electron microscope (SEM). Within this work, the mentioned experimental results are connected to the theory for analyzing Fowler-Nordheim (FN) plots. We compared and evaluated the data extracted from clean tungsten tips of different radii and determined deviations between the results of different extraction methods applied. In particular, we derived the apex radii of several clean and coated tungsten tips by both SEM imaging and analyzing FN plots. The aim of this analysis is to support the ongoing discussion on recently developed improvements of the theory for analyzing FN plots related to metal field electron emitters, which in particular

  5. Improved adsorption-desorption extraction applied to the partial characterization of the antilisterial bacteriocin produced by Carnobacterium maltaromaticum C2

    Directory of Open Access Journals (Sweden)

    F. L Tulini

    2010-06-01

    Full Text Available Bacteriocins are ribosomally produced peptides useful for food biopreservation. An improved adsorption-desorption process is proposed for the partial purification of the bacteriocin produced by the fish isolate Carnobacterium maltaromaticum C2. Analysis of the extract by SDS-PAGE indicated that this method may offer an alternative to improve the yield of purification of bacteriocins.

  6. Automated Text Markup for Information Retrieval from an Electronic Textbook of Infectious Disease

    Science.gov (United States)

    Berrios, Daniel C.; Kehler, Andrew; Kim, David K.; Yu, Victor L.; Fagan, Lawrence M.

    1998-01-01

    The information needs of practicing clinicians frequently require textbook or journal searches. Making these sources available in electronic form improves the speed of these searches, but precision (i.e., the fraction of relevant to total documents retrieved) remains low. Improving the traditional keyword search by transforming search terms into canonical concepts does not improve search precision greatly. Kim et al. have designed and built a prototype system (MYCIN II) for computer-based information retrieval from a forthcoming electronic textbook of infectious disease. The system requires manual indexing by experts in the form of complex text markup. However, this mark-up process is time consuming (about 3 person-hours to generate, review, and transcribe the index for each of 218 chapters). We have designed and implemented a system to semiautomate the markup process. The system, information extraction for semiautomated indexing of documents (ISAID), uses query models and existing information-extraction tools to provide support for any user, including the author of the source material, to mark up tertiary information sources quickly and accurately.

  7. Videomicroscopic extraction of specific information on cell proliferation and migration in vitro

    International Nuclear Information System (INIS)

    Debeir, Olivier; Megalizzi, Veronique; Warzee, Nadine; Kiss, Robert; Decaestecker, Christine

    2008-01-01

    In vitro cell imaging is a useful exploratory tool for cell behavior monitoring with a wide range of applications in cell biology and pharmacology. Combined with appropriate image analysis techniques, this approach has been shown to provide useful information on the detection and dynamic analysis of cell events. In this context, numerous efforts have been focused on cell migration analysis. In contrast, the cell division process has been the subject of fewer investigations. The present work focuses on this latter aspect and shows that, in complement to cell migration data, interesting information related to cell division can be extracted from phase-contrast time-lapse image series, in particular cell division duration, which is not provided by standard cell assays using endpoint analyses. We illustrate our approach by analyzing the effects induced by two sigma-1 receptor ligands (haloperidol and 4-IBP) on the behavior of two glioma cell lines using two in vitro cell models, i.e., the low-density individual cell model and the high-density scratch wound model. This illustration also shows that the data provided by our approach are suggestive as to the mechanism of action of compounds, and are thus capable of informing the appropriate selection of further time-consuming and more expensive biological evaluations required to elucidate a mechanism

  8. Polyphenol-Rich Strawberry Extract Protects Human Dermal Fibroblasts against Hydrogen Peroxide Oxidative Damage and Improves Mitochondrial Functionality

    Directory of Open Access Journals (Sweden)

    Francesca Giampieri

    2014-06-01

Strawberry bioactive compounds are widely known to be powerful antioxidants. In this study, the antioxidant and anti-aging activities of a polyphenol-rich strawberry extract were evaluated using human dermal fibroblasts exposed to H2O2. Firstly, the phenol and flavonoid contents of the strawberry extract were studied, as well as its antioxidant capacity. HPLC-DAD analysis was performed to determine the vitamin C and β-carotene concentrations, while HPLC-DAD/ESI-MS analysis was used for anthocyanin identification. The strawberry extract presented a high antioxidant capacity and a relevant concentration of vitamins and phenolics. Pelargonidin- and cyanidin-glycosides were the most representative anthocyanin components of the fruits. Fibroblasts incubated with strawberry extract and stressed with H2O2 showed an increase in cell viability, a smaller intracellular amount of ROS, and a reduction of membrane lipid peroxidation and DNA damage. The strawberry extract was also able to improve mitochondrial functionality, increasing the basal respiration of mitochondria, and to promote the regenerative capacity of cells after exposure to pro-oxidant stimuli. These findings confirm that strawberries possess antioxidant properties and provide new insights into the beneficial role of strawberry bioactive compounds in protecting skin from oxidative stress and aging.

  9. Memorandum on the use of information technology to improve medication safety.

    Science.gov (United States)

    Ammenwerth, E; Aly, A-F; Bürkle, T; Christ, P; Dormann, H; Friesdorf, W; Haas, C; Haefeli, W E; Jeske, M; Kaltschmidt, J; Menges, K; Möller, H; Neubert, A; Rascher, W; Reichert, H; Schuler, J; Schreier, G; Schulz, S; Seidling, H M; Stühlinger, W; Criegee-Rieck, M

    2014-01-01

    Information technology in health care has a clear potential to improve the quality and efficiency of health care, especially in the area of medication processes. On the other hand, existing studies show possible adverse effects on patient safety when IT for medication-related processes is developed, introduced or used inappropriately. To summarize definitions and observations on IT usage in pharmacotherapy and to derive recommendations and future research priorities for decision makers and domain experts. This memorandum was developed in a consensus-based iterative process that included workshops and e-mail discussions among 21 experts coordinated by the Drug Information Systems Working Group of the German Society for Medical Informatics, Biometry and Epidemiology (GMDS). The recommendations address, among other things, a stepwise and comprehensive strategy for IT usage in medication processes, the integration of contextual information for alert generation, the involvement of patients, the semantic integration of information resources, usability and adaptability of IT solutions, and the need for their continuous evaluation. Information technology can help to improve medication safety. However, challenges remain regarding access to information, quality of information, and measurable benefits.

  10. When Information Improves Information Security

    Science.gov (United States)

    Grossklags, Jens; Johnson, Benjamin; Christin, Nicolas

    This paper presents a formal, quantitative evaluation of the impact of bounded-rational security decision-making subject to limited information and externalities. We investigate a mixed economy of an individual rational expert and several naïve near-sighted agents. We further model three canonical types of negative externalities (weakest-link, best shot and total effort), and study the impact of two information regimes on the threat level agents are facing.
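The three canonical externalities named above have standard formal counterparts: overall protection is an aggregate of the agents' individual efforts, and the aggregation rule is what distinguishes the regimes. A minimal sketch (the effort values are made up for illustration, not taken from the paper):

```python
# Canonical security externalities: overall protection aggregates the
# agents' individual efforts, and the aggregation rule is what differs.

def weakest_link(efforts):
    """Protection set by the least-protected agent (e.g. a shared perimeter)."""
    return min(efforts)

def best_shot(efforts):
    """Protection set by the best single contribution (e.g. one good backup)."""
    return max(efforts)

def total_effort(efforts):
    """Protection set by the average contribution across agents."""
    return sum(efforts) / len(efforts)

# One rational expert (0.9) among naive, near-sighted agents
efforts = [0.9, 0.2, 0.3, 0.1]
print(weakest_link(efforts))            # 0.1 -> the expert's effort is wasted
print(best_shot(efforts))               # 0.9 -> the expert alone suffices
print(round(total_effort(efforts), 3))  # 0.375
```

The example makes the paper's setting concrete: under weakest-link the naive agents drag protection down regardless of the expert, while under best-shot the expert's effort carries the group.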

  11. Ultraviolet light assisted extraction of flavonoids and allantoin from aqueous and alcoholic extracts of Symphytum officinale.

    Science.gov (United States)

    Al-Nimer, Marwan S M; Wahbee, Zainab

    2017-01-01

Symphytum officinale (comfrey) is a medicinal plant commonly used in decoctions and to treat ailments. It protects the skin against ultraviolet (UV) irradiation. UV irradiation may induce variable effects on the constituents of herbal extracts and may thereby limit or improve the advantages of using these extracts as medicinal supplements. This study aimed to assess the effect of UV radiation, including UV-A, UV-B, and UV-C, on the constituents of S. officinale aqueous and alcoholic extracts. Comfrey extracts (1% w/v) were prepared using distilled water, ethanol, and methanol. They were exposed to UV-A, UV-B, and UV-C wavelengths for 10 min. The principal peak on UV-spectroscopy scanning, the flavonoids, the reducing power, and the allantoin levels were determined before and after irradiation. UV irradiation reduces the magnitude of the principal peak at the 355 nm wavelength of the aqueous infusion and methanol extracts. It improves the levels of flavonoids and the reducing power of the aqueous extracts and increases the levels of allantoin in the aqueous and methanol extracts. UV radiation enhances the yield of active ingredients of comfrey extracted with methanol, whereas it improves the flavonoids, reducing power, and allantoin levels of comfrey extracted by the aqueous infusion method. UV radiation reduces the levels of flavonoids, reducing power, and allantoin when comfrey is extracted with alcohols.

  12. Optimization of pressurized liquid extraction (PLE) of dioxin-furans and dioxin-like PCBs from environmental samples.

    Science.gov (United States)

    Antunes, Pedro; Viana, Paula; Vinhas, Tereza; Capelo, J L; Rivera, J; Gaspar, Elvira M S M

    2008-05-30

Pressurized liquid extraction (PLE), applying three extraction cycles, temperature, and pressure, improved the efficiency of solvent extraction when compared with classical Soxhlet extraction. Polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzofurans (PCDFs) and dioxin-like PCBs (coplanar polychlorinated biphenyls (Co-PCBs)) in two Certified Reference Materials [DX-1 (sediment) and BCR 529 (soil)] and in two contaminated environmental samples (sediment and soil) were extracted by ASE and Soxhlet methods. Unlike data previously reported by other authors, results demonstrated that ASE using n-hexane as solvent with three extraction cycles, 12.4 MPa (1800 psi) and 150 degrees C achieves recovery results similar to classical Soxhlet extraction for PCDFs and Co-PCBs, and better recovery results for PCDDs. ASE extraction, performed in less time and with less solvent, proved to be, under optimized conditions, an excellent extraction technique for the simultaneous analysis of PCDD/PCDFs and Co-PCBs from environmental samples. Such fast analytical methodology, having the best cost-efficiency ratio, will improve control and provide more information about the occurrence of dioxins and levels of toxicity, and will thereby contribute to the protection of human health.

  13. Using the DOM Tree for Content Extraction

    Directory of Open Access Journals (Sweden)

    David Insa

    2012-10-01

The main information of a webpage is usually mixed in with menus, advertisements, panels, and other not necessarily related information, and it is often difficult to isolate this information automatically. This is precisely the objective of content extraction, a research area of wide interest due to its many applications. Content extraction is useful not only for the final human user, but it is also frequently used as a preprocessing stage of different systems that need to extract the main content in a web document to avoid the treatment and processing of other useless information. Another interesting application where content extraction is particularly useful is displaying webpages on small screens such as mobile phones or PDAs. In this work we present a new technique for content extraction that uses the DOM tree of the webpage to analyze the hierarchical relations of the elements in the webpage. Thanks to this information, the technique achieves considerable recall and precision. Using the DOM structure for content extraction gives us the benefits of other approaches based on the syntax of the webpage (such as characters, words, and tags), but it also gives us very precise information regarding the related components in a block, thus producing very cohesive blocks.
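As a concrete illustration of the idea, here is a minimal sketch (not the authors' algorithm) that builds a DOM tree with Python's standard html.parser and descends toward the subtree holding most of the page's text, a crude stand-in for the hierarchical block analysis described above. The 0.8 dominance threshold and the sample page are arbitrary choices for the example:

```python
from html.parser import HTMLParser

class Node:
    """A minimal DOM node: tag, parent, children, and its own text."""
    def __init__(self, tag, parent=None):
        self.tag, self.parent = tag, parent
        self.children, self.text = [], ""

class TreeBuilder(HTMLParser):
    """Builds a simple DOM tree from HTML using only the standard library."""
    def __init__(self):
        super().__init__()
        self.root = Node("root")
        self.cur = self.root

    def handle_starttag(self, tag, attrs):
        node = Node(tag, self.cur)
        self.cur.children.append(node)
        self.cur = node

    def handle_endtag(self, tag):
        if self.cur.parent is not None:
            self.cur = self.cur.parent

    def handle_data(self, data):
        self.cur.text += data.strip()

def text_length(node):
    """Total text carried by a subtree."""
    return len(node.text) + sum(text_length(c) for c in node.children)

def main_block(node):
    """Descend while one child dominates the subtree's text (threshold 0.8)."""
    best = max(node.children, key=text_length, default=None)
    if best is not None and text_length(best) > 0.8 * text_length(node):
        return main_block(best)
    return node

page = """<html><body>
<div id="menu"><a>Home</a><a>About</a></div>
<div id="content"><p>This long paragraph carries the main content
of the page, which a reader actually wants to keep.</p></div>
<div id="footer">(c) 2012</div>
</body></html>"""

builder = TreeBuilder()
builder.feed(page)
print(main_block(builder.root).tag)  # prints "p"
```

The menus and footer contribute little text, so the descent stops at the content paragraph, which is the intuition behind using hierarchical (rather than purely syntactic) information.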

  14. ROAD AND ROADSIDE FEATURE EXTRACTION USING IMAGERY AND LIDAR DATA FOR TRANSPORTATION OPERATION

    Directory of Open Access Journals (Sweden)

    S. Ural

    2015-03-01

Transportation agencies require up-to-date, reliable, and feasibly acquired information on road geometry and features within proximity to the roads as input for evaluating and prioritizing new or improvement road projects. The information needed for a robust evaluation of road projects includes road centerline, width, and extent, together with the average grade, cross-sections, and obstructions near the travelled way. Remote sensing offers a large collection of data and well-established tools for acquiring this information and extracting the aforementioned road features at various levels and scopes. Even with many remote sensing data sources and methods available for road extraction, transportation operation requires more than the centerlines. Acquiring information that is spatially coherent at the operational level for the entire road system is challenging and needs multiple data sources to be integrated. In the presented study, we established a framework that used data from multiple sources, including one-foot resolution color infrared orthophotos, airborne LiDAR point clouds, and existing spatially non-accurate ancillary road networks. We were able to extract 90.25% of a total of 23.6 miles of road networks together with estimated road width, average grade along the road, and cross sections at specified intervals. We also extracted buildings and vegetation within a predetermined proximity to the extracted road extent; 90.6% of 107 existing buildings were correctly identified, with a 31% false detection rate.

  15. Comparative exergy analyses of Jatropha curcas oil extraction methods: Solvent and mechanical extraction processes

    International Nuclear Information System (INIS)

    Ofori-Boateng, Cynthia; Keat Teong, Lee; JitKang, Lim

    2012-01-01

Highlights: ► Exergy analysis detects locations of resource degradation within a process. ► Solvent extraction is six times more exergetically destructive than mechanical extraction. ► Mechanical extraction of jatropha oil is 95.93% exergetically efficient. ► Solvent extraction of jatropha oil is 79.35% exergetically efficient. ► Exergy analysis of oil extraction processes allows room for improvements. - Abstract: Vegetable oil extraction processes are found to be energy intensive. Thermodynamically, any energy intensive process is considered to degrade the most useful part of energy that is available to produce work. This study uses literature values to compare the efficiencies and the degradation of the useful energy within Jatropha curcas oil during oil extraction, considering both solvent and mechanical extraction methods. According to this study, J. curcas oil is upgraded when the seeds are processed by mechanical extraction but degraded by solvent extraction. For mechanical extraction, the total internal exergy destroyed is 3006 MJ, which is about six times less than that for solvent extraction (18,072 MJ) per ton of J. curcas oil produced. The pretreatment processes of the J. curcas seeds recorded a total internal exergy destruction of 5768 MJ, accounting for 24% of the total internal exergy destroyed for the solvent extraction process and 66% for the mechanical one. The exergetic efficiencies recorded are 79.35% and 95.93% for solvent and mechanical extraction of J. curcas oil, respectively. Hence, mechanical oil extraction processes are more exergetically efficient than solvent extraction processes. Possible improvement methods are also elaborated in this study.
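The percentages quoted are internally consistent with the absolute figures, which implies the 3006 and 18,072 MJ totals refer to the extraction step alone, with the 5768 MJ seed pretreatment shared by both routes. A quick arithmetic check of the abstract's numbers:

```python
# Figures reported in the abstract, in MJ per ton of J. curcas oil
solvent = 18072.0      # internal exergy destroyed, solvent extraction step
mechanical = 3006.0    # internal exergy destroyed, mechanical extraction step
pretreatment = 5768.0  # seed pretreatment, common to both routes

print(round(solvent / mechanical))                              # 6  ("about six times")
print(round(100 * pretreatment / (solvent + pretreatment)))     # 24 (% of solvent total)
print(round(100 * pretreatment / (mechanical + pretreatment)))  # 66 (% of mechanical total)
```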

  16. Capture and exploration of sample quality data to inform and improve the management of a screening collection.

    Science.gov (United States)

    Charles, Isabel; Sinclair, Ian; Addison, Daniel H

    2014-04-01

    A new approach to the storage, processing, and interrogation of the quality data for screening samples has improved analytical throughput and confidence and enhanced the opportunities for learning from the accumulating records. The approach has entailed the design, development, and implementation of a database-oriented system, capturing information from the liquid chromatography-mass spectrometry capabilities used for assessing the integrity of samples in AstraZeneca's screening collection. A Web application has been developed to enable the visualization and interactive annotation of the analytical data, monitor the current sample queue, and report the throughput rate. Sample purity and identity are certified automatically on the chromatographic peaks of interest if predetermined thresholds are reached on key parameters. Using information extracted in parallel from the compound registration and container inventory databases, the chromatographic and spectroscopic profiles for each vessel are linked to the sample structures and storage histories. A search engine facilitates the direct comparison of results for multiple vessels of the same or similar compounds, for single vessels analyzed at different time points, or for vessels related by their origin or process flow. Access to this network of information has provided a deeper understanding of the multiple factors contributing to sample quality assurance.

  17. Nanoemulsion as a carrier to improve the topical anti-inflammatory activity of stem bark extract of Rapanea ferruginea

    Directory of Open Access Journals (Sweden)

    Dal Mas J

    2016-09-01

Juarana Dal Mas,1 Tailyn Zermiani,1 Liliani C Thiesen,1 Joana LM Silveira,2 Kathryn ABS da Silva,1 Márcia M de Souza,1 Angela Malheiros,1 Tania MB Bresolin,1 Ruth M Lucinda-Silva1 1NIQFAR, Graduate Program in Pharmaceutical Sciences, University of Vale do Itajaí, Itajaí, Santa Catarina, Brazil; 2Department of Biochemistry and Molecular Biology, Federal University of Paraná, Curitiba, Paraná, Brazil Abstract: The aim of this study was to develop a nanoemulsion containing soft extract of the stem bark of Rapanea ferruginea to improve its topical delivery and anti-inflammatory activity. The extract of R. ferruginea stem bark was incorporated into the oily phase of the nanoemulsion by the method of phase inversion at low energy. The developed nanoemulsion had an average droplet size of 47.88±8.20 nm and a polydispersity index of 0.228. Uniformity of size, spherical droplet shape, and absence of clusters were confirmed by transmission electron microscopy. The zeta potential was -34.7±1.15 mV. The nanoemulsion showed a moderate degree of skin irritation in the agarose overlay assay in vitro. The content of the extract markers, myrsinoic acids A and B, was 54.10±0.08 and 53.03 µg/g in the formulation, respectively. The formulation demonstrated pseudoplastic and thixotropic rheological behavior. In vitro release of the chemical markers was controlled by a diffusion mechanism. The extract-loaded nanoemulsion showed topical anti-inflammatory activity in a croton oil-induced ear edema model, with a decrease in tumor necrosis factor release and myeloperoxidase activity. The nanoemulsion was 160% more efficient than a conventional cream containing 0.13% of the extract. The nanoemulsion showed suitable properties as a carrier for topical use of R. ferruginea extract and as an approach for improving its topical anti-inflammatory activity. Keywords: nanotechnology, nanoemulsion, Rapanea ferruginea, anti-inflammatory, phytomedicine

  18. Ludwigia octovalvis extract improves glycemic control and memory performance in diabetic mice.

    Science.gov (United States)

    Lin, Wei-Sheng; Lo, Jung-Hsin; Yang, Jo-Hsuan; Wang, Hao-Wei; Fan, Shou-Zen; Yen, Jui-Hung; Wang, Pei-Yu

    2017-07-31

    Ludwigia octovalvis (Jacq.) P.H. Raven (Onagraceae) extracts have historically been consumed as a healthful drink for treating various conditions, including edema, nephritis, hypotension and diabetes. We have previously shown that Ludwigia octovalvis extract (LOE) can significantly extend lifespan and improve age-related memory deficits in Drosophila melanogaster through activating AMP-activated protein kinase (AMPK). Since AMPK has become a critical target for treating diabetes, we herein investigate the anti-hyperglycemic potential of LOE. Differentiated C2C12 muscle cells, HepG2 hepatocellular cells, streptozotocin (STZ)-induced diabetic mice and high fat diet (HFD)-induced diabetic mice were used to investigate the anti-hyperglycemic potential of LOE. The open field test and novel object recognition test were used to evaluate spontaneous motor activity and memory performance of HFD-induced diabetic mice. In differentiated C2C12 muscle cells and HepG2 hepatocellular cells, treatments with LOE and its active component (β-sitosterol) induced significant AMPK phosphorylation. LOE also enhanced uptake of a fluorescent glucose derivative (2-NBDG) and inhibited glucose production in these cells. The beneficial effects of LOE were completely abolished when an AMPK inhibitor, dorsomorphin, was added to the culture system, suggesting that LOE requires AMPK activation for its action in vitro. In streptozotocin (STZ)-induced diabetic mice, we found that both LOE and β-sitosterol induced an anti-hyperglycemic effect comparable to that of metformin, a drug that is commonly prescribed to treat diabetes. Moreover, LOE also improved glycemic control and memory performance of mice fed a HFD. These results indicate that LOE is a potent anti-diabetic intervention that may have potential for future clinical applications. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  19. Improvement of spatial memory of male parkinsonian rats after treatment with adipose stem cells and rosemary leaf extract

    Directory of Open Access Journals (Sweden)

    Mahdieh Ramezanihossienabadi

    2018-01-01

Background: Due to the neuroprotective effect of rosemary extract, this study examined the effect of co-treatment with adipose stem cell transplantation and the extract on memory disability in parkinsonian rats. Materials and Methods: In this experimental study, male parkinsonian rats were prepared by bilateral injection of 6-OHDA. The sham group received normal saline injections into the substantia nigra. The extract+medium group was gavaged with the extract from 14 days before until 8 weeks after the injury, and the medium was injected intravenously. The extract+cell group was orally gavaged with the extract and the cells were injected. Morris water maze training was conducted one week before and after the lesion, and a retrieval test was performed 4 and 8 weeks after the lesion. Results: There was no significant difference in distance moved and escape latency on training days, before the injury, between the groups. However, a week after the injury, learning ability in lesioned animals was significantly decreased as compared to the sham group (P<0.05). Results of the retention tests at four and eight weeks were similar. Escape latency and time spent in the target quadrant of lesioned rats were significantly increased and decreased, respectively, as compared to the sham group (P<0.05). The extract+medium and extract+cell groups showed a significant decrease in escape latency and increase in time spent in the target quadrant as compared to the lesioned group (P<0.05). Conclusion: Cell therapy accompanied by oral administration of the rosemary extract can improve memory deficit in Parkinson's disease.

  20. Inexperienced clinicians can extract pathoanatomic information from MRI narrative reports with high reproducibility for use in research/quality assurance

    DEFF Research Database (Denmark)

    Kent, Peter; Briggs, Andrew M; Albert, Hanne Birgit

    2011-01-01

    Background Although reproducibility in reading MRI images amongst radiologists and clinicians has been studied previously, no studies have examined the reproducibility of inexperienced clinicians in extracting pathoanatomic information from magnetic resonance imaging (MRI) narrative reports and t...

  1. Information Extraction and Interpretation Analysis of Mineral Potential Targets Based on ETM+ Data and GIS technology: A Case Study of Copper and Gold Mineralization in Burma

    International Nuclear Information System (INIS)

    Wenhui, Du; Yongqing, Chen; Nana, Guo; Yinglong, Hao; Pengfei, Zhao; Gongwen, Wang

    2014-01-01

Mineralization-alteration and structure information extraction plays an important role in mineral resource prospecting and assessment using remote sensing data and Geographical Information System (GIS) technology. Taking copper and gold mines in Burma as an example, the authors adopt band ratios, threshold segmentation, and principal component analysis (PCA) to extract hydroxyl alteration information from ETM+ remote sensing images. A digital elevation model (DEM) (30 m spatial resolution) and ETM+ data were used to extract linear and circular faults that are associated with copper and gold mineralization. Combining geological data with the above information, the weights of evidence method and the C-A fractal model were used to integrate the evidence and identify ore-forming favourable zones in this area. Research results show that the high-grade potential targets coincide with the known copper and gold deposits, and that the integrated information can be used in subsequent exploration and mineral resource decision-making
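The band-ratio and threshold-segmentation steps can be sketched in a few lines. This is an illustrative toy (tiny hand-made 3×3 "images" and an arbitrary threshold), not the authors' processing chain; the ETM+ band 5/band 7 ratio is a common proxy for hydroxyl-bearing alteration minerals, since OH absorption depresses band 7 reflectance:

```python
# Toy 3x3 digital numbers for ETM+ bands 5 and 7 (made up for illustration)
band5 = [[120, 80, 95], [60, 140, 70], [90, 85, 150]]
band7 = [[40, 70, 80], [55, 50, 65], [75, 78, 50]]

def band_ratio(a, b):
    """Pixel-wise ratio of two bands."""
    return [[x / y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def threshold(img, t):
    """Binary mask: pixels whose ratio exceeds t are flagged as anomalies."""
    return [[1 if v > t else 0 for v in row] for row in img]

ratio = band_ratio(band5, band7)
mask = threshold(ratio, 2.0)
print(mask)  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

In a real workflow the same thresholding idea is applied to PCA component images as well, and the resulting masks become evidence layers in the GIS integration.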

  2. Real time information management for improving productivity in metallurgical complexes

    International Nuclear Information System (INIS)

    Bascur, O.A.; Kennedy, J.P.

    1999-01-01

Applying the latest information technologies in industrial plants has become a serious challenge for management and technical teams. The availability of real-time and historical operations information, used to identify the most critical parts of the processing system in terms of mechanical integrity, is a must for global plant optimization. Expanded use of plant information on the desktop is a standard tool for revenue improvement, cost reduction, and adherence to production constraints. The industrial component desktop supports access to information for process troubleshooting, continuous improvement, and innovation by plant and staff personnel. Collaboration between groups enables the implementation of an overall process effectiveness index based on losses due to equipment availability, production, and product quality. The key to designing such technology is to use the Internet-based technologies created by Microsoft for its marketplace: office automation and the Web. Time-derived variables are used for process analysis, troubleshooting, and performance assessment. Connectivity between metallurgical complexes, research centers, and their business systems has become a reality. Two case studies of large integrated mining/metallurgical complexes are highlighted. (author)

  3. Usage of information safety requirements in improving tube bending process

    Science.gov (United States)

    Livshitz, I. I.; Kunakov, E.; Lontsikh, P. A.

    2018-05-01

This article is devoted to improving technological process analysis through the implementation of information security requirements. The aim of this research is to analyze how the competitiveness of aircraft industry enterprises can be increased through information technology implementation, taking the tube bending technological process as an example. The article analyzes the kinds of tube bending and current techniques. In addition, a potential risk analysis of the tube bending technological process is carried out in terms of information security.

  4. Statistical techniques to extract information during SMAP soil moisture assimilation

    Science.gov (United States)

    Kolassa, J.; Reichle, R. H.; Liu, Q.; Alemohammad, S. H.; Gentine, P.

    2017-12-01

    Statistical techniques permit the retrieval of soil moisture estimates in a model climatology while retaining the spatial and temporal signatures of the satellite observations. As a consequence, the need for bias correction prior to an assimilation of these estimates is reduced, which could result in a more effective use of the independent information provided by the satellite observations. In this study, a statistical neural network (NN) retrieval algorithm is calibrated using SMAP brightness temperature observations and modeled soil moisture estimates (similar to those used to calibrate the SMAP Level 4 DA system). Daily values of surface soil moisture are estimated using the NN and then assimilated into the NASA Catchment model. The skill of the assimilation estimates is assessed based on a comprehensive comparison to in situ measurements from the SMAP core and sparse network sites as well as the International Soil Moisture Network. The NN retrieval assimilation is found to significantly improve the model skill, particularly in areas where the model does not represent processes related to agricultural practices. Additionally, the NN method is compared to assimilation experiments using traditional bias correction techniques. The NN retrieval assimilation is found to more effectively use the independent information provided by SMAP resulting in larger model skill improvements than assimilation experiments using traditional bias correction techniques.
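A neural-network retrieval operator of the kind described reduces, at its core, to a trained feed-forward mapping from brightness temperatures to soil moisture in the model's climatology. The sketch below shows only that shape; the weights, unit count, input choice (H- and V-polarized brightness temperatures), and clamping range are placeholders, not the calibrated SMAP NN:

```python
import math

def nn_retrieval(tb_h, tb_v, w1, b1, w2, b2):
    """One-hidden-layer NN: brightness temperatures (K) in, soil moisture out."""
    # Hidden layer with tanh activation
    hidden = [math.tanh(w[0] * tb_h + w[1] * tb_v + b) for w, b in zip(w1, b1)]
    # Linear output, clamped to a plausible physical range (m3/m3)
    sm = sum(w * h for w, h in zip(w2, hidden)) + b2
    return min(max(sm, 0.02), 0.5)

# Placeholder parameters (two hidden units); a real system fits these by
# regressing SMAP observations against modeled soil moisture
w1, b1 = [(-0.01, 0.004), (0.006, -0.009)], [1.2, 0.4]
w2, b2 = [0.15, 0.1], 0.25

print(nn_retrieval(260.0, 270.0, w1, b1, w2, b2))
```

Because the training target is the model's own soil moisture climatology, the retrieved values land in that climatology by construction, which is why the subsequent assimilation needs little or no bias correction.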

  5. How could health information be improved? Recommended actions from the Victorian Consultation on Health Literacy.

    Science.gov (United States)

    Hill, Sophie J; Sofra, Tanya A

    2017-03-07

Objective Health literacy is on the policy agenda. Accessible, high-quality health information is a major component of health literacy. Health information materials include print, electronic or other media-based information enabling people to understand health and make health-related decisions. The aim of the present study was to present the findings and recommended actions as they relate to health information of the Victorian Consultation on Health Literacy. Methods Notes from the 2014 Victorian Consultation workshops and written submissions were analysed thematically and a report prepared with input from an advisory committee. Results Health information needs to improve, and the recommendations are grouped into two overarching themes. First, the quality of information needs to be increased; this can be done by developing a principle-based framework to inform updated guidance for information production, formulating standards to raise quality and improving the systems for delivering information to people. Second, there needs to be a focus on users of health information. Recommended actions were for information that promoted active participation in health encounters, resources to encourage critical users of health information and increased availability of information tailored to population diversity. Conclusion A framework to improve health information would underpin efforts to meet literacy needs in a more consistent way, improving standards and ultimately increasing participation by consumers and carers in health decision making and self-management. What is known about the topic? Health information is a critical component of the concept of health literacy. Poorer health literacy is associated with poorer health outcomes across a range of measures. Improving access to and the use of quality sources of health information is an important strategy for meeting the health literacy needs of the population. In recent years, health services and

  6. Supercritical fluid extraction of 2-alkylcyclobutanones formed from triglycerides by irradiation

    International Nuclear Information System (INIS)

    Horvatovich, P.; Farkas, J.; Hasselmann, C.; Marchioni, E.

    1998-01-01

Complete text of publication follows. Radiation processing is employed to improve the microbiological safety of foodstuffs while suiting the 'minimal processing' principle. However, adequate information for consumers, enabling their free choice, requires specific detection methods for irradiation processing. For this purpose one of the most suitable methods is the detection of 2-alkylcyclobutanones, which are formed - according to present knowledge - only by irradiation, from the fatty acid part of triglycerides. For the detection of these compounds a European Norm (EN 1785) has been established. The method consists of Soxhlet extraction of fatty acids from the food sample, separation of 2-alkylcyclobutanones from other fatty components by liquid chromatography on Florisil™, and GC-MS analysis of the appropriate fraction with single ion monitoring (SIM) of the ions m/z 98 and 112. However, this method has a relatively high detection limit (∼1 kGy); it is time consuming and needs costly and sophisticated apparatus. To improve the detection of 2-alkylcyclobutanones we replaced the Soxhlet extraction step with supercritical fluid extraction and optimised the trapping and extraction parameters. Supercritical fluid extraction was found to be more selective than the Soxhlet extraction used in the standard protocol: the extract obtained by supercritical fluid extraction contains a smaller quantity and number of detection-disturbing components. This work is the first step towards decreasing the detection limit; the next will be derivatization of 2-alkylcyclobutanones with a halogen-containing reagent and detection of the derivatives with an electron-capture detector (ECD)

  7. Meteorological information in GPS-RO reflected signals

    Directory of Open Access Journals (Sweden)

    K. Boniface

    2011-07-01

Vertical profiles of the atmosphere can be obtained globally with the radio-occultation technique. However, the lowest layers of the atmosphere are extracted less accurately. A good description of these layers is important for the good performance of Numerical Weather Prediction (NWP) systems, and an improvement of the observational data available for the low troposphere would thus be of great interest for data assimilation. We outline here how supplemental meteorological information close to the surface can be extracted whenever reflected signals are available. We separate the reflected signal through a radioholographic filter, and we interpret it with a ray tracing procedure, analyzing the trajectories of the electromagnetic waves over a 3-D field of refractive index. A perturbation approach is then used to perform an inversion, identifying the relevant contribution of the lowest layers of the atmosphere to the properties of the reflected signal, and extracting some information supplemental to the solution of the inversion of the direct propagation signals. It is found that there is a significant amount of useful information in the reflected signal, sufficient to extract a stand-alone profile of the low atmosphere with a precision of approximately 0.1 %. The methodology is applied to one reflection case.

  8. Using Local Grammar for Entity Extraction from Clinical Reports

    Directory of Open Access Journals (Sweden)

    Aicha Ghoulam

    2015-06-01

Information Extraction (IE) is a natural language processing (NLP) task whose aim is to analyze texts written in natural language to extract structured and useful information such as named entities and the semantic relations linking these entities. Information extraction is an important task for many applications such as bio-medical literature mining, customer care, community websites, and personal information management. The increasing information available in patient clinical reports is difficult to access. As it is often in an unstructured text form, doctors need tools that enable them to access this information and the ability to search it. Hence, a system for extracting this information in a structured form can benefit healthcare professionals. The work presented in this paper uses a local grammar approach to extract medical named entities from French patient clinical reports. Experimental results show that the proposed approach achieved an F-measure of 90.06%.
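A local grammar can be approximated, for illustration, by small context-anchored patterns. The sketch below is not the paper's grammar (and works on English rather than French clinical text); it shows the shape of the approach with two hypothetical entity types expressed as Python regular expressions:

```python
import re

# Illustrative context-anchored patterns for two entity types.
# A real local grammar would be a richer finite-state description.
PATTERNS = {
    "DOSE": re.compile(r"\b(\d+(?:\.\d+)?)\s*(mg|g|ml)\b", re.IGNORECASE),
    "AGE":  re.compile(r"\b(?:aged?)\s+(\d{1,3})\b", re.IGNORECASE),
}

def extract_entities(text):
    """Return (label, surface form) pairs for every pattern match."""
    found = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            found.append((label, match.group(0)))
    return found

report = "Patient aged 63, started on amoxicillin 500 mg three times daily."
print(extract_entities(report))
```

The key property the example preserves is that recognition is driven by local context (the word "aged", a unit following a number) rather than by a global document model.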

  9. CRYPTO-STEG: A Hybrid Cryptology - Steganography Approach for Improved Data Security

    Directory of Open Access Journals (Sweden)

    Atif Bin Mansoor

    2012-04-01

    Full Text Available The internet is a widely used medium for transfer of information due to its reach and ease of availability. However, the internet is an insecure medium and any information might be easily intercepted and viewed during its transfer. Different mechanisms like cryptology and steganography are adopted to secure data communication over an inherently insecure medium like the internet. Cryptology scrambles the information in a manner that an unintended recipient cannot easily extract the information, while steganography hides the information in a cover object so that it is transferred unnoticed in the cover. Encrypted data may not be extracted easily but causes direct suspicion to any observer, while data hidden using steganographic techniques goes unnoticed. Cryptanalysis is the process of attacking the encrypted text to extract the information, while steganalysis is the process of detecting the disguised messages. In the literature, cryptology and steganography are treated separately. In this paper, we present our research on an improved data security paradigm, where data is first encrypted using the AES (Advanced Encryption Standard) and DES (Data Encryption Standard) cryptology algorithms. Both plain and encrypted data are hidden in images using the Model Based and F5 steganographic techniques. Features are extracted in the DWT (Discrete Wavelet Transform) and DCT (Discrete Cosine Transform) domains using higher order statistics for steganalysis, and subsequently used to train an FLD (Fisher Linear Discriminant) classifier which is employed to categorize a separate set of images as clean or stego (containing hidden messages). Experimental results demonstrate improved data security using the proposed CRYPTO-STEG approach compared to plain text steganography. Results also demonstrate that Model Based steganography is more secure than F5 steganography.
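A minimal sketch of the encrypt-then-hide pipeline described above, assuming a grayscale image represented as a flat list of pixel values. Python's standard library has no AES, so a SHA-256-based XOR keystream stands in for the AES/DES step, and plain LSB embedding stands in for the Model Based and F5 techniques; all names and data are illustrative:

```python
import hashlib

def keystream(key, n):
    # Toy XOR keystream as a stand-in for AES (not cryptographically vetted).
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key, data):
    # XOR with the keystream; applying it twice recovers the input.
    return bytes(p ^ k for p, k in zip(data, keystream(key, len(data))))

def embed_lsb(pixels, payload):
    """Hide payload bits in the least-significant bit of successive pixels."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract_lsb(pixels, n_bytes):
    bits = [p & 1 for p in pixels[:n_bytes * 8]]
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8))

key = b"shared secret"
message = b"meet at dawn"
cover = list(range(256)) * 2              # stand-in for grayscale image pixels
cipher = encrypt(key, message)
stego = embed_lsb(cover, cipher)
recovered = encrypt(key, extract_lsb(stego, len(cipher)))
print(recovered)                          # → b'meet at dawn'
```

The stego pixels differ from the cover only in their least-significant bits, which is exactly the statistical footprint that the DWT/DCT steganalysis features mentioned above are designed to detect.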

  10. Improving readability of informed consents for research at an academic medical institution.

    Science.gov (United States)

    Hadden, Kristie B; Prince, Latrina Y; Moore, Tina D; James, Laura P; Holland, Jennifer R; Trudeau, Christopher R

    2017-12-01

    The final rule for the protection of human subjects requires that informed consent be "in language understandable to the subject" and mandates that "the informed consent must be organized in such a way that facilitates comprehension." This study assessed the readability of Institutional Review Board-approved informed consent forms at our institution, implemented an intervention to improve the readability of consent forms, and measured the first year impact of the intervention. Readability assessment was conducted on a sample of 217 Institutional Review Board-approved informed consents from 2013 to 2015. A plain language informed consent template was developed and implemented and readability was assessed again after 1 year. The mean readability of the baseline sample was 10th grade. The mean readability of the post-intervention sample (n=82) was seventh grade. Providing investigators with a plain language informed consent template and training can promote improved readability of informed consents for research.
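Grade-level readability of the kind reported in this study can be estimated mechanically. The sketch below computes the Flesch-Kincaid grade level, one common formula (not necessarily the instrument used in the study), with a crude vowel-group syllable counter; the sample sentences are invented:

```python
import re

def count_syllables(word):
    # Crude vowel-group heuristic; real readability tools use dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

plain = "You can stop at any time. Just tell the study team."
dense = ("Participation may be discontinued contingent upon "
         "notification of institutional representatives.")
print(round(fk_grade(plain), 1), round(fk_grade(dense), 1))
```

Short sentences with short words score around early grade school, while the legalese phrasing typical of consent forms scores far beyond the study's seventh-grade target.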

  11. An automatic extraction algorithm of three dimensional shape of brain parenchyma from MR images

    International Nuclear Information System (INIS)

    Matozaki, Takeshi

    2000-01-01

    For the simulation of surgical operations, the extraction of the selected region using MR images is useful. However, this segmentation requires a high level of skill and experience from the technicians. We have developed a unique automatic extraction algorithm for extracting three dimensional brain parenchyma using MR head images, named the ''three dimensional gray scale clumsy painter method''. In this method, a template having the shape of a pseudo-circle, the so-called clumsy painter (CP), moves along the contour of the selected region and extracts the region surrounded by the contour. This method has advantages compared with morphological filtering and the region growing method. Previously, this method was applied to binary images, but the results of the extractions varied with the value of the threshold level. We introduced gray level information of the images to decide the threshold, depending on the change of image density between the brain parenchyma and CSF. We decided the threshold level by the vector of a map of templates, and changed the map according to the change of image density. As a result, the over-extraction ratio was improved by 36%, and the under-extraction ratio was improved by 20%. (author)

  12. A hybrid approach for robust multilingual toponym extraction and disambiguation

    NARCIS (Netherlands)

    Habib, Mena Badieh; van Keulen, Maurice

    Toponym extraction and disambiguation are key topics recently addressed by the fields of Information Extraction and Geographical Information Retrieval. Toponym extraction and disambiguation are highly dependent processes. Not only does toponym extraction effectiveness affect disambiguation, but also

  13. IMPROVING THE QUALITY OF MAINTENANCE PROCESSES USING INFORMATION TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Zora Arsovski

    2008-06-01

    Full Text Available In essence, the process of maintaining equipment is a support process, because it indirectly contributes to the operational ability of the production process necessary for the supply chain of the new value. Taking into account increased levels of automatization and quality, this process becomes more and more significant and, for some branches of industry, even crucial. Due to the fact that the quality of the entire process is more and more dependent on the maintenance process, these processes must be carefully designed and effectively implemented. There are various techniques and approaches at our disposal, such as technical, logistical and intensive application of the information-communication technologies. This last approach is presented in this work. It begins with organizational goals, especially quality objectives. Then, maintenance processes and integrated information system structures are defined. Maintenance process quality and improvement processes are defined using a set of performances, with a special emphasis placed on effectiveness and quality economics. At the end of the work, an information system for improving maintenance economics is structured. Besides theoretical analysis, the work also presents results the authors obtained analyzing the food industry, metal processing industry and building materials industry.

  14. Multisensor multiresolution data fusion for improvement in classification

    Science.gov (United States)

    Rubeena, V.; Tiwari, K. C.

    2016-04-01

    The rapid advancements in technology have facilitated easy availability of multisensor and multiresolution remote sensing data. Multisensor, multiresolution data contain complementary information, and fusion of such data may result in application-dependent significant information which may otherwise remain trapped within. The present work aims at improving classification by fusing features of coarse resolution hyperspectral (1 m) LWIR and fine resolution (20 cm) RGB data. The classification map comprises eight classes: Road, Trees, Red Roof, Grey Roof, Concrete Roof, Vegetation, Bare Soil and Unclassified. The processing methodology for the hyperspectral LWIR data comprises dimensionality reduction, resampling of the data by an interpolation technique for registering the two images at the same spatial resolution, and extraction of spatial features to improve classification accuracy. In the case of the fine resolution RGB data, the vegetation index is computed for classifying the vegetation class and the morphological building index is calculated for buildings. In order to extract the textural features, occurrence and co-occurrence statistics are considered and the features are extracted from all three bands of the RGB data. After extracting the features, Support Vector Machines (SVMs) have been used for training and classification. To increase the classification accuracy, post-processing steps like removal of any spurious noise such as salt and pepper noise are carried out, followed by a filtering process with majority voting within the objects for better object classification.
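The final majority-voting clean-up can be sketched as a 3x3 window filter over the class map. This is a simplified pixel-wise version (the paper votes within objects), and the class names and map are invented for illustration:

```python
from collections import Counter

def majority_filter(labels, rows, cols):
    """Replace each cell's class with the majority class of its 3x3 neighborhood."""
    out = list(labels)
    for r in range(rows):
        for c in range(cols):
            votes = Counter()
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        votes[labels[rr * cols + cc]] += 1
            out[r * cols + c] = votes.most_common(1)[0][0]
    return out

# 4x4 map of class "road" with two isolated "tree" mislabels (salt-and-pepper).
noisy = ["road"] * 16
noisy[5] = noisy[10] = "tree"
print(majority_filter(noisy, 4, 4))   # both isolated "tree" cells revert to "road"
```

Isolated mislabels are outvoted by their neighborhood, while any genuine region larger than the window survives the filter.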

  15. Rosmarinus officinalis L. leaf extract improves memory impairment and affects acetylcholinesterase and butyrylcholinesterase activities in rat brain.

    Science.gov (United States)

    Ozarowski, Marcin; Mikolajczak, Przemyslaw L; Bogacz, Anna; Gryszczynska, Agnieszka; Kujawska, Malgorzata; Jodynis-Liebert, Jadwiga; Piasecka, Anna; Napieczynska, Hanna; Szulc, Michał; Kujawski, Radoslaw; Bartkowiak-Wieczorek, Joanna; Cichocka, Joanna; Bobkiewicz-Kozlowska, Teresa; Czerny, Boguslaw; Mrozikiewicz, Przemyslaw M

    2013-12-01

    Rosmarinus officinalis L. leaf as part of a diet and medication can be a valuable proposal for the prevention and treatment of dementia. The aim of the study was to assess the effects of subchronic (28-fold) administration of a plant extract (RE) (200 mg/kg, p.o.) on behavioral and cognitive responses of rats linked with acetylcholinesterase (AChE) and butyrylcholinesterase (BuChE) activity and their mRNA expression level in the hippocampus and frontal cortex. The passive avoidance test results showed that RE improved long-term memory in scopolamine-induced rats. The extract inhibited the AChE activity and showed a stimulatory effect on BuChE in both parts of rat brain. Moreover, RE produced a lower mRNA BuChE expression in the cortex and simultaneously an increase in the hippocampus. The study suggests that RE led to improved long-term memory in rats, which can be partially explained by its inhibition of AChE activity in rat brain. © 2013. Published by Elsevier B.V. All rights reserved.

  16. Improving Water Resources System Operation by Direct Use of Hydroclimatic Information

    Science.gov (United States)

    Castelletti, A.; Pianosi, F.

    2011-12-01

    It is generally agreed that more information translates into better decisions. For instance, the availability of inflow predictions can improve reservoir operation; soil moisture data can be exploited to increase irrigation efficiency; etc. However, beyond this general statement, many theoretical and practical questions remain open. Provided that not all information sources are equally relevant, how does their value depend on the physical features of the water system and on the purposes of the system operation? What is the minimum lead time needed for anticipatory management to be effective? How does uncertainty in the information propagate through the modelling chain, from hydroclimatic data through descriptive and decision models, and finally affect the decision? Is the data-predictions-decision paradigm truly effective, or would it be better to directly use hydroclimatic data to take optimal decisions, skipping the intermediate step of hydrological forecasting? In this work we investigate these issues by application to the management of a complex water system in Northern Vietnam, characterized by multiple, conflicting objectives including hydropower production, flood control and water supply. First, we quantify the value of hydroclimatic information as the improvement in the system performances that could be attained under the (ideal) assumption of perfect knowledge of all future meteorological and hydrological input. Then, we assess and compare the relevance of different candidate information (meteorological or hydrological observations; ground or remote data; etc.) for the purpose of system operation by novel Input Variable Selection techniques. Finally, we evaluate the performance improvement made possible by the use of such information in re-designing the system operation.

  17. The BEL information extraction workflow (BELIEF): evaluation in the BioCreative V BEL and IAT track

    OpenAIRE

    Madan, Sumit; Hodapp, Sven; Senger, Philipp; Ansari, Sam; Szostak, Justyna; Hoeng, Julia; Peitsch, Manuel; Fluck, Juliane

    2016-01-01

    Network-based approaches have become extremely important in systems biology to achieve a better understanding of biological mechanisms. For network representation, the Biological Expression Language (BEL) is well designed to collate findings from the scientific literature into biological network models. To facilitate encoding and biocuration of such findings in BEL, a BEL Information Extraction Workflow (BELIEF) was developed. BELIEF provides a web-based curation interface, the BELIEF Dashboa...

  18. Extraction and fusion of spectral parameters for face recognition

    Science.gov (United States)

    Boisier, B.; Billiot, B.; Abdessalem, Z.; Gouton, P.; Hardeberg, J. Y.

    2011-03-01

    Many methods have been developed in image processing for face recognition, especially in recent years with the increase of biometric technologies. However, most of these techniques are used on grayscale images acquired in the visible range of the electromagnetic spectrum. The aims of our study are to improve existing tools and to develop new methods for face recognition. The techniques used take advantage of the different spectral ranges, the visible, optical infrared and thermal infrared, by either combining them or analyzing them separately in order to extract the most appropriate information for face recognition. We also verify the consistency of several keypoints extraction techniques in the Near Infrared (NIR) and in the Visible Spectrum.

  19. An Improved Method for High Quality Metagenomics DNA Extraction from Human and Environmental Samples

    DEFF Research Database (Denmark)

    Bag, Satyabrata; Saha, Bipasa; Mehta, Ojasvi

    2016-01-01

    and human origin samples. We introduced a combination of physical, chemical and mechanical lysis methods for proper lysis of microbial inhabitants. The community microbial DNA was precipitated by using salt and organic solvent. Both the quality and quantity of isolated DNA was compared with the existing...... methodologies and the supremacy of our method was confirmed. Maximum recovery of genomic DNA in the absence of substantial amount of impurities made the method convenient for nucleic acid extraction. The nucleic acids obtained using this method are suitable for different downstream applications. This improved...

  20. Improved annular centrifugal contactor for solvent extraction reprocessing of nuclear reactor fuel

    International Nuclear Information System (INIS)

    Bernstein, G.J.; Leonard, R.A.; Ziegler, A.A.; Steindler, M.J.

    1978-01-01

    An improved annular centrifugal contactor has been developed for solvent extraction reprocessing of spent nuclear reactor fuel. The design is an extension of a contactor developed several years ago at Argonne National Laboratory. Its distinguishing features are high throughput, high stage efficiency and the ability to handle a broad range of aqueous-to-organic phase flow ratios and density ratios. Direct coupling of the mixing and separating rotor to a motorized spindle simplifies the design and makes the contactor particularly suitable for remote maintenance. A unit that is critically safe by geometry is under test and a larger unit is being fabricated. Multi-stage miniature contactors operating on the annular mixing principle are being used for laboratory flow sheet studies. 8 figures

  1. Lavandula angustifolia extract improves deteriorated synaptic plasticity in an animal model of Alzheimer’s disease

    Science.gov (United States)

    Soheili, Masoud; Tavirani, Mostafa Rezaei; Salami, Mahmoud

    2015-01-01

    Objective(s): Neurodegenerative Alzheimer’s disease (AD) is associated with profound deficits in synaptic transmission and synaptic plasticity. Long-term potentiation (LTP), an experimental form of synaptic plasticity, is intensively examined in hippocampus. In this study we evaluated the effect of aqueous extract of lavender (Lavandula angustifolia) on induction of LTP in the CA1 area of hippocampus. In response to stimulation of the Schaffer collaterals the baseline or tetanized field extracellular postsynaptic potentials (fEPSPs) were recorded in the CA1 area. Materials and Methods: The electrophysiological recordings were carried out in four groups of rats; two control groups including the vehicle (CON) and lavender (CE) treated rats and two Alzheimeric groups including the vehicle (ALZ) and lavender (AE) treated animals. Results: The extract inefficiently affected the baseline responses in the four testing groups. While the fEPSPs displayed a considerable LTP in the CON animals, no potentiation was evident in the tetanized responses in the ALZ rats. The herbal medicine effectively restored LTP in the AE group and further potentiated fEPSPs in the CE group. Conclusion: The positive effect of the lavender extract on the plasticity of synaptic transmission supports its previously reported behavioral effects on improvement of impaired spatial memory in the Alzheimeric animals. PMID:26949505

  2. Lavandula angustifolia extract improves deteriorated synaptic plasticity in an animal model of Alzheimer’s disease

    Directory of Open Access Journals (Sweden)

    Masoud Soheili

    2015-11-01

    Full Text Available Objective(s): Neurodegenerative Alzheimer’s disease (AD) is associated with profound deficits in synaptic transmission and synaptic plasticity. Long-term potentiation (LTP), an experimental form of synaptic plasticity, is intensively examined in hippocampus. In this study we evaluated the effect of aqueous extract of lavender (Lavandula angustifolia) on induction of LTP in the CA1 area of hippocampus. In response to stimulation of the Schaffer collaterals the baseline or tetanized field extracellular postsynaptic potentials (fEPSPs) were recorded in the CA1 area. Materials and Methods: The electrophysiological recordings were carried out in four groups of rats; two control groups including the vehicle (CON) and lavender (CE) treated rats and two Alzheimeric groups including the vehicle (ALZ) and lavender (AE) treated animals. Results: The extract inefficiently affected the baseline responses in the four testing groups. While the fEPSPs displayed a considerable LTP in the CON animals, no potentiation was evident in the tetanized responses in the ALZ rats. The herbal medicine effectively restored LTP in the AE group and further potentiated fEPSPs in the CE group. Conclusion: The positive effect of the lavender extract on the plasticity of synaptic transmission supports its previously reported behavioral effects on improvement of impaired spatial memory in the Alzheimeric animals.

  3. Improving the extraction of complex regulatory events from scientific text by using ontology-based inference.

    Science.gov (United States)

    Kim, Jung-Jae; Rebholz-Schuhmann, Dietrich

    2011-10-06

    The extraction of complex events from biomedical text is a challenging task and requires in-depth semantic analysis. Previous approaches associate lexical and syntactic resources with ontologies for the semantic analysis, but fall short in testing the benefits from the use of domain knowledge. We developed a system that deduces implicit events from explicitly expressed events by using inference rules that encode domain knowledge. We evaluated the system with the inference module on three tasks: First, when tested against a corpus with manually annotated events, the inference module of our system contributes 53.2% of correct extractions, but does not cause any incorrect results. Second, the system overall reproduces 33.1% of the transcription regulatory events contained in RegulonDB (up to 85.0% precision) and the inference module is required for 93.8% of the reproduced events. Third, we applied the system with minimum adaptations to the identification of cell activity regulation events, confirming that the inference improves the performance of the system also on this task. Our research shows that the inference based on domain knowledge plays a significant role in extracting complex events from text. This approach has great potential in recognizing the complex concepts of such biomedical ontologies as Gene Ontology in the literature.
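A toy version of the inference idea: domain knowledge encoded as rules that forward-chain over explicitly extracted event triples to deduce implicit ones. The rule, event names, and gene names below are invented for illustration and are not the system's actual rule set:

```python
# Facts are (event, agent, target) triples extracted from text.
# One hypothetical rule: if A transcribes a regulator B, and B's product
# represses C, then A indirectly regulates C.
RULES = [
    lambda facts: {
        ("regulates", a, c)
        for (e1, a, b) in facts if e1 == "transcribes"
        for (e2, b2, c) in facts if e2 == "represses" and b2 == b
    },
]

def forward_chain(facts):
    """Apply all rules repeatedly until no new facts can be deduced."""
    facts = set(facts)
    while True:
        new = set().union(*(rule(facts) for rule in RULES)) - facts
        if not new:
            return facts
        facts |= new

explicit = {("transcribes", "sigmaA", "lexA"), ("represses", "lexA", "recA")}
inferred = forward_chain(explicit)
print(("regulates", "sigmaA", "recA") in inferred)   # → True
```

The deduced triple never appears verbatim in the text, which mirrors how the paper's inference module adds correct extractions a surface-level extractor would miss.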

  4. Improving the extraction of complex regulatory events from scientific text by using ontology-based inference

    Directory of Open Access Journals (Sweden)

    Kim Jung-jae

    2011-10-01

    Full Text Available Abstract Background The extraction of complex events from biomedical text is a challenging task and requires in-depth semantic analysis. Previous approaches associate lexical and syntactic resources with ontologies for the semantic analysis, but fall short in testing the benefits from the use of domain knowledge. Results We developed a system that deduces implicit events from explicitly expressed events by using inference rules that encode domain knowledge. We evaluated the system with the inference module on three tasks: First, when tested against a corpus with manually annotated events, the inference module of our system contributes 53.2% of correct extractions, but does not cause any incorrect results. Second, the system overall reproduces 33.1% of the transcription regulatory events contained in RegulonDB (up to 85.0% precision) and the inference module is required for 93.8% of the reproduced events. Third, we applied the system with minimum adaptations to the identification of cell activity regulation events, confirming that the inference improves the performance of the system also on this task. Conclusions Our research shows that the inference based on domain knowledge plays a significant role in extracting complex events from text. This approach has great potential in recognizing the complex concepts of such biomedical ontologies as Gene Ontology in the literature.

  5. Improving protein fold recognition and structural class prediction accuracies using physicochemical properties of amino acids.

    Science.gov (United States)

    Raicar, Gaurav; Saini, Harsh; Dehzangi, Abdollah; Lal, Sunil; Sharma, Alok

    2016-08-07

    Predicting the three-dimensional (3-D) structure of a protein is an important task in the field of bioinformatics and biological sciences. However, directly predicting the 3-D structure from the primary structure is hard to achieve. Therefore, predicting the fold or structural class of a protein sequence is generally used as an intermediate step in determining the protein's 3-D structure. For protein fold recognition (PFR) and structural class prediction (SCP), two steps are required: a feature extraction step and a classification step. Feature extraction techniques generally utilize syntactical-based information, evolutionary-based information and physicochemical-based information to extract features. In this study, we explore the importance of utilizing the physicochemical properties of amino acids for improving PFR and SCP accuracies. For this, we propose a Forward Consecutive Search (FCS) scheme which aims to strategically select physicochemical attributes that will supplement the existing feature extraction techniques for PFR and SCP. An exhaustive search is conducted on all the existing 544 physicochemical attributes using the proposed FCS scheme and a subset of physicochemical attributes is identified. Features extracted from these selected attributes are then combined with existing syntactical-based and evolutionary-based features, to show an improvement in the recognition and prediction performance on benchmark datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
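A greedy forward-selection loop of the kind such attribute-search schemes build on can be sketched as follows. The scoring table standing in for cross-validated PFR accuracy and the attribute names are hypothetical, and the paper's actual FCS strategy differs in its details:

```python
def forward_select(attributes, score, max_attrs=None):
    """Greedy forward search: repeatedly add the attribute that most improves
    the score; stop when no remaining attribute helps."""
    selected, best = [], score([])
    attributes = list(attributes)
    while attributes and (max_attrs is None or len(selected) < max_attrs):
        top_score, top_attr = max((score(selected + [a]), a) for a in attributes)
        if top_score <= best:
            break                      # no attribute improves the score
        selected.append(top_attr)
        attributes.remove(top_attr)
        best = top_score
    return selected, best

# Toy lookup table standing in for cross-validated fold-recognition accuracy.
table = {(): 0.60, ("hydrophobicity",): 0.70, ("polarity",): 0.65,
         ("hydrophobicity", "polarity"): 0.72}
score = lambda subset: table.get(tuple(subset), 0.0)
print(forward_select(["hydrophobicity", "polarity", "charge"], score))
# → (['hydrophobicity', 'polarity'], 0.72)
```

The search stops once "charge" fails to improve accuracy, which is the same early-stopping behaviour that keeps an attribute subset small relative to the full set of 544 physicochemical attributes.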

  6. Educational Information Quantization for Improving Content Quality in Learning Management Systems

    Science.gov (United States)

    Rybanov, Alexander Aleksandrovich

    2014-01-01

    The article offers an educational information quantization method for improving content quality in Learning Management Systems. The paper considers questions concerning analysis of the quality of quantized presentation of educational information, based on quantitative text parameters: average frequencies of the parts of speech used in the text; formal…

  7. Mangifera indica Fruit Extract Improves Memory Impairment, Cholinergic Dysfunction, and Oxidative Stress Damage in Animal Model of Mild Cognitive Impairment

    Science.gov (United States)

    Wattanathorn, Jintanaporn; Muchimapura, Supaporn; Thukham-Mee, Wipawee; Ingkaninan, Kornkanok; Wittaya-Areekul, Sakchai

    2014-01-01

    To date, the effective preventive paradigm against mild cognitive impairment (MCI) is required. Therefore, we aimed to determine whether Mangifera indica fruit extract, a substance possessing antioxidant and cognitive enhancing effects, could improve memory impairment, cholinergic dysfunction, and oxidative stress damage in animal model of mild cognitive impairment. Male Wistar rats, weighing 180–200 g, were orally given the extract at doses of 12.5, 50, and 200 mg·kg−1 BW for 2 weeks before and 1 week after the bilateral injection of AF64A (icv). At the end of study, spatial memory, cholinergic neurons density, MDA level, and the activities of SOD, CAT, and GSH-Px enzymes in hippocampus were determined. The results showed that all doses of extract could improve memory together with the decreased MDA level and the increased SOD and GSH-Px enzymes activities. The increased cholinergic neurons density in CA1 and CA3 of hippocampus was also observed in rats treated with the extract at doses of 50 and 200 mg·kg−1 BW. Therefore, our results suggested that M. indica, the potential protective agent against MCI, increased cholinergic function and the decreased oxidative stress which in turn enhanced memory. However, further researches are essential to elucidate the possible active ingredients and detail mechanism. PMID:24672632

  8. Mangifera indica Fruit Extract Improves Memory Impairment, Cholinergic Dysfunction, and Oxidative Stress Damage in Animal Model of Mild Cognitive Impairment

    Directory of Open Access Journals (Sweden)

    Jintanaporn Wattanathorn

    2014-01-01

    Full Text Available To date, the effective preventive paradigm against mild cognitive impairment (MCI) is required. Therefore, we aimed to determine whether Mangifera indica fruit extract, a substance possessing antioxidant and cognitive enhancing effects, could improve memory impairment, cholinergic dysfunction, and oxidative stress damage in animal model of mild cognitive impairment. Male Wistar rats, weighing 180–200 g, were orally given the extract at doses of 12.5, 50, and 200 mg·kg−1 BW for 2 weeks before and 1 week after the bilateral injection of AF64A (icv). At the end of study, spatial memory, cholinergic neurons density, MDA level, and the activities of SOD, CAT, and GSH-Px enzymes in hippocampus were determined. The results showed that all doses of extract could improve memory together with the decreased MDA level and the increased SOD and GSH-Px enzymes activities. The increased cholinergic neurons density in CA1 and CA3 of hippocampus was also observed in rats treated with the extract at doses of 50 and 200 mg·kg−1 BW. Therefore, our results suggested that M. indica, the potential protective agent against MCI, increased cholinergic function and the decreased oxidative stress which in turn enhanced memory. However, further researches are essential to elucidate the possible active ingredients and detail mechanism.

  9. Information Literacy and technology to improve learning and education

    NARCIS (Netherlands)

    Mooij, Ton; Smeets, Ed

    2011-01-01

    Mooij, T., & Smeets, E. (2011, 13-16 September). Information Literacy and technology to improve learning and education. Presentation and discussion in a cross-network symposium of networks 16 and 12 at the ‘European Conference on Educational Research’ of the “European Educational Research

  10. Geopositioning with a quadcopter: Extracted feature locations and predicted accuracy without a priori sensor attitude information

    Science.gov (United States)

    Dolloff, John; Hottel, Bryant; Edwards, David; Theiss, Henry; Braun, Aaron

    2017-05-01

    This paper presents an overview of the Full Motion Video-Geopositioning Test Bed (FMV-GTB) developed to investigate algorithm performance and issues related to the registration of motion imagery and subsequent extraction of feature locations along with predicted accuracy. A case study is included corresponding to a video taken from a quadcopter. Registration of the corresponding video frames is performed without the benefit of a priori sensor attitude (pointing) information. In particular, tie points are automatically measured between adjacent frames using standard optical flow matching techniques from computer vision, an a priori estimate of sensor attitude is then computed based on supplied GPS sensor positions contained in the video metadata and a photogrammetric/search-based structure from motion algorithm, and then a Weighted Least Squares adjustment of all a priori metadata across the frames is performed. Extraction of absolute 3D feature locations, including their predicted accuracy based on the principles of rigorous error propagation, is then performed using a subset of the registered frames. Results are compared to known locations (check points) over a test site. Throughout this entire process, no external control information (e.g. surveyed points) is used other than for evaluation of solution errors and corresponding accuracy.

  11. Expected value information improves financial risk taking across the adult life span.

    Science.gov (United States)

    Samanez-Larkin, Gregory R; Wagner, Anthony D; Knutson, Brian

    2011-04-01

    When making decisions, individuals must often compensate for cognitive limitations, particularly in the face of advanced age. Recent findings suggest that age-related variability in striatal activity may increase financial risk-taking mistakes in older adults. In two studies, we sought to further characterize neural contributions to optimal financial risk taking and to determine whether decision aids could improve financial risk taking. In Study 1, neuroimaging analyses revealed that individuals whose mesolimbic activation correlated with the expected value estimates of a rational actor made more optimal financial decisions. In Study 2, presentation of expected value information improved decision making in both younger and older adults, but the addition of a distracting secondary task had little impact on decision quality. Remarkably, provision of expected value information improved the performance of older adults to match that of younger adults at baseline. These findings are consistent with the notion that mesolimbic circuits play a critical role in optimal choice, and imply that providing simplified information about expected value may improve financial risk taking across the adult life span.

  12. Soy Pulp Extract Inhibits Angiotensin I-Converting Enzyme (ACE) Activity In Vitro: Evidence for Its Potential Hypertension-Improving Action.

    Science.gov (United States)

    Nishibori, Naoyoshi; Kishibuchi, Reina; Morita, Kyoji

    2017-05-04

    Soy pulp, called "okara" in Japanese, is known as a by-product of the production of bean curd (tofu), and is expected to contain a variety of biologically active substances derived from soybean. However, the biological activities of okara ingredients have not yet been fully understood, and the effectiveness of okara as a functional food needs to be evaluated further. The effect of okara extract on angiotensin I-converting enzyme (ACE) activity was therefore examined in vitro, and the extract was shown to cause inhibition of ACE activity in a manner depending on its concentration. Kinetic analysis indicated that this enzyme inhibition was accompanied by an increase in the Km value without any change in Vmax. Further studies suggested that the putative inhibitory substances contained in the extract might be heat stable and dialyzable, and recovered mostly in the peptide fraction obtained by a spin-column separation and a high performance liquid chromatography (HPLC) fractionation. Therefore, the extract was speculated to contain small-size peptides responsible for the inhibitory effect of okara extract on ACE activity, and could be expected to improve hypertensive conditions by reducing the production of hypertensive peptide.
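The reported kinetic signature (higher Km, unchanged Vmax) is the classic pattern of competitive inhibition, which can be checked numerically with the Michaelis-Menten equation. The Vmax, Km, [I] and Ki values below are illustrative, not the study's measurements:

```python
def michaelis_menten(s, vmax, km):
    # v = Vmax * [S] / (Km + [S])
    return vmax * s / (km + s)

def competitive_km(km, inhibitor, ki):
    # A competitive inhibitor raises the apparent Km by (1 + [I]/Ki);
    # Vmax is unchanged because high [S] still outcompetes the inhibitor.
    return km * (1 + inhibitor / ki)

vmax, km = 100.0, 2.0                               # illustrative units
km_app = competitive_km(km, inhibitor=4.0, ki=2.0)  # hypothetical [I] and Ki
print(km_app)                                       # → 6.0, a threefold rise
for s in (1.0, 10.0, 1000.0):
    print(round(michaelis_menten(s, vmax, km), 1),
          round(michaelis_menten(s, vmax, km_app), 1))
```

At low substrate concentrations the inhibited rate lags well behind, but both curves converge toward the same Vmax as [S] grows, reproducing the "Km up, Vmax unchanged" observation.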

  13. FacetGist: Collective Extraction of Document Facets in Large Technical Corpora.

    Science.gov (United States)

    Siddiqui, Tarique; Ren, Xiang; Parameswaran, Aditya; Han, Jiawei

    2016-10-01

    Given the large volume of technical documents available, it is crucial to automatically organize and categorize these documents to be able to understand and extract value from them. Towards this end, we introduce a new research problem called Facet Extraction. Given a collection of technical documents, the goal of Facet Extraction is to automatically label each document with a set of concepts for the key facets (e.g., application, technique, evaluation metrics, and dataset) that people may be interested in. Facet Extraction has numerous applications, including document summarization, literature search, patent search and business intelligence. The major challenge in performing Facet Extraction arises from multiple sources: concept extraction, concept-to-facet matching, and facet disambiguation. To tackle these challenges, we develop FacetGist, a framework for facet extraction. FacetGist constructs a graph-based heterogeneous network to capture information available across multiple local sentence-level features, as well as global context features. We then formulate a joint optimization problem, and propose an efficient algorithm for graph-based label propagation to estimate the facet of each concept mention. Experimental results on technical corpora from two domains demonstrate that FacetGist can lead to an improvement of over 25% in both precision and recall over competing schemes.
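    The label propagation step can be sketched in a few lines. The routine below is a generic graph-based label propagation scheme, not the authors' specific algorithm; the tiny mention graph, facet labels and damping factor are illustrative assumptions:

```python
# A minimal sketch of graph-based label propagation: seed nodes carry
# known facet labels, and scores diffuse over the graph until unlabeled
# nodes inherit the facet of their best-connected seeds.
# The graph, seeds and alpha are illustrative, not from FacetGist.

import numpy as np

def propagate(W, Y, alpha=0.5, iters=100):
    """Propagate label scores Y over adjacency matrix W."""
    S = W / W.sum(axis=1, keepdims=True)   # row-normalised transition matrix
    F = Y.astype(float).copy()
    for _ in range(iters):
        # Each node blends its neighbours' scores with its own seed label.
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F

# 4 concept mentions, 2 facets ("technique", "dataset"); 0 and 3 are seeds.
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.array([[1, 0],    # seed: technique
              [0, 0],
              [0, 0],
              [0, 1]])   # seed: dataset
F = propagate(W, Y)
labels = F.argmax(axis=1)   # node 1, tied to the technique seed, follows it
```

Because the update is a contraction for alpha < 1, the scores converge to a fixed point regardless of initialisation.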

  14. Multi-Paradigm and Multi-Lingual Information Extraction as Support for Medical Web Labelling Authorities

    Directory of Open Access Journals (Sweden)

    Martin Labsky

    2010-10-01

    Full Text Available Until recently, quality labelling of medical web content has been a predominantly manual activity. However, advances in automated text processing have opened the way to computerised support of this activity. The core enabling technology is information extraction (IE). However, the heterogeneity of websites offering medical content imposes particular requirements on the IE techniques to be applied. In this paper we discuss these requirements and describe a multi-paradigm approach to IE that addresses them. Experiments on multi-lingual data are reported. The research was carried out within the EU MedIEQ project.

  15. Information Technology Management: Social Security Administration Practices Can Be Improved

    National Research Council Canada - National Science Library

    Shaw, Clay

    2001-01-01

    To improve SSA's IT management practices, we recommend that the Acting Commissioner of Social Security direct the Chief Information Officer and the Deputy Commissioner for Systems to complete the following actions...

  16. A semi-supervised learning framework for biomedical event extraction based on hidden topics.

    Science.gov (United States)

    Zhou, Deyu; Zhong, Dayou

    2015-05-01

    Scientists have devoted decades of effort to understanding the interactions between proteins or RNA production. This information might enrich current knowledge of drug reactions or the development of certain diseases. However, the lack of explicit structure in the life science literature, one of the most important sources of this information, prevents computer-based systems from accessing it. Therefore, biomedical event extraction, which automatically acquires knowledge of molecular events from research articles, has recently attracted community-wide efforts. Most approaches are based on statistical models and require large-scale annotated corpora to precisely estimate model parameters; however, such corpora are usually difficult to obtain in practice. Employing un-annotated data through semi-supervised learning is therefore a feasible solution for biomedical event extraction, and it has attracted growing interest. In this paper, a semi-supervised learning framework based on hidden topics for biomedical event extraction is presented. In this framework, sentences in the un-annotated corpus are elaborately and automatically assigned event annotations based on their distances to sentences in the annotated corpus. More specifically, not only the structures of the sentences, but also the hidden topics embedded in the sentences, are used to describe the distance. The sentences and newly assigned event annotations, together with the annotated corpus, are employed for training. Experiments were conducted on the multi-level event extraction corpus, a gold standard corpus. Experimental results show that the proposed framework achieves an improvement of more than 2.2% in F-score on biomedical event extraction when compared to the state-of-the-art approach. The results suggest that by incorporating un-annotated data, the proposed framework indeed improves the performance of the state-of-the-art event extraction system and the similarity between sentences might be precisely
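    One simple way to realise a topic-based distance between sentences, sketched here with made-up topic vectors rather than distributions inferred by an actual topic model, is cosine similarity over per-sentence topic distributions:

```python
# A minimal sketch of topic-based sentence matching: represent each
# sentence as a topic-distribution vector and assign an unlabeled
# sentence the annotation of its most similar annotated neighbour.
# The topic vectors and sentence labels below are illustrative only.

import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

annotated = {
    "phosphorylation": [0.8, 0.1, 0.1],   # topic mix of an annotated sentence
    "gene expression": [0.1, 0.8, 0.1],
}
unlabeled = [0.7, 0.2, 0.1]               # topic mix of an unlabeled sentence

# Nearest-neighbour assignment over topic space.
nearest = max(annotated, key=lambda k: cosine(annotated[k], unlabeled))
```

A full framework would combine this topic distance with structural features of the sentences, as the abstract describes.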

  17. Leveraging Health Information Technology to Improve Quality in Federal Healthcare.

    Science.gov (United States)

    Weigel, Fred K; Switaj, Timothy L; Hamilton, Jessica

    2015-01-01

    Healthcare delivery in America is extremely complex because it comprises a fragmented and nonsystematic mix of stakeholders, components, and processes. Within the US healthcare structure, the federal healthcare system is poised to lead American medicine in leveraging health information technology to improve the quality of healthcare. We posit that through developing, adopting, and refining health information technology, the federal healthcare system has the potential to transform federal healthcare quality by managing the complexities associated with healthcare delivery. Although federal mandates have spurred the widespread use of electronic health records, other beneficial technologies have yet to be adopted in federal healthcare settings. The use of health information technology is fundamental in providing the highest quality, safest healthcare possible. In addition, health information technology is valuable in achieving the Agency for Healthcare Research and Quality's implementation goals. We conducted a comprehensive literature search using the Google Scholar, PubMed, and Cochrane databases to identify an initial list of articles. Through a thorough review of the titles and abstracts, we identified 42 articles as having relevance to health information technology and quality. Through our exclusion criteria of currency of the article, citation frequency, applicability to the federal health system, and quality of research supporting conclusions, we refined the list to 11 references from which we performed our analysis. The literature shows that the use of computerized physician order entry has significantly increased accurate medication dosage and decreased medication errors. The use of clinical decision support systems has significantly increased physician adherence to guidelines, although there is little evidence of any significant correlation with patient outcomes. 
Research shows that interoperability and usability are continuing challenges for

  18. Improved Scheduling Mechanisms for Synchronous Information and Energy Transmission.

    Science.gov (United States)

    Qin, Danyang; Yang, Songxiang; Zhang, Yan; Ma, Jingya; Ding, Qun

    2017-06-09

    Wireless energy collecting technology can effectively reduce network time overhead and prolong the wireless sensor network (WSN) lifetime. However, traditional energy collecting technology cannot achieve a balance between ergodic channel capacity and average collected energy. To address the problems of network transmission efficiency and the limited energy of wireless devices, three improved scheduling mechanisms are proposed for different channel conditions to improve overall network performance: an improved signal-to-noise ratio (SNR) scheduling mechanism (IS2M), an improved N-SNR scheduling mechanism (INS2M) and an improved Equal Throughput scheduling mechanism (IETSM). Meanwhile, the average collected energy of single users and the ergodic channel capacity of the three scheduling mechanisms can be obtained through order statistics theory in Rayleigh, Ricean, Nakagami-m and Weibull fading channels. It is concluded that the proposed scheduling mechanisms can achieve a better balance between energy collection and data transmission, providing a new solution for realizing synchronous information and energy transmission in WSNs.

  19. Liquid and solid self-microemulsifying drug delivery systems for improving the oral bioavailability of andrographolide from a crude extract of Andrographis paniculata.

    Science.gov (United States)

    Sermkaew, Namfa; Ketjinda, Wichan; Boonme, Prapaporn; Phadoongsombut, Narubodee; Wiwattanapatapee, Ruedeekorn

    2013-11-20

    The purpose of this study was to develop self-microemulsifying formulations of an Andrographis paniculata extract in liquid and pellet forms for an improved oral delivery of andrographolide. The optimized liquid self-microemulsifying drug delivery system (SMEDDS) was composed of A. paniculata extract (11.1%), Capryol 90 (40%), Cremophor RH 40 (40%) and Labrasol (8.9%). This liquid SMEDDS was further adsorbed onto colloidal silicon dioxide and microcrystalline cellulose, and converted to SMEDDS pellets by the extrusion/spheronization technique. The microemulsion droplet sizes of the liquid and pellet formulations after dilution with water were in the range of 23.4 to 30.3 nm. The in vitro release of andrographolide from the liquid SMEDDS and SMEDDS pellets was 97.64% (SD 1.97%) and 97.74% (SD 3.36%) within 15 min, respectively, while the release from the initial extract was only 10%. The oral absorption of andrographolide was determined in rabbits. The C(max) value of andrographolide from the A. paniculata extract liquid SMEDDS and SMEDDS pellet formulations (equivalent to 17.5 mg/kg of andrographolide) was 6-fold and 5-fold greater, respectively, than the value from the initial extract in aqueous suspension (equivalent to 35 mg/kg of andrographolide). In addition, the AUC(0-12h) was increased 15-fold by the liquid SMEDDS and 13-fold by the SMEDDS pellets compared to the extract in aqueous suspension. The results clearly indicated that the liquid and solid SMEDDS could be effectively used to improve the dissolution and oral bioavailability, which would also enable a reduction in the dose of the poorly water-soluble A. paniculata extract. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Aqueous and hydroalcoholic extracts of Black Maca (Lepidium meyenii) improve scopolamine-induced memory impairment in mice.

    Science.gov (United States)

    Rubio, Julio; Dang, Haixia; Gong, Mengjuan; Liu, Xinmin; Chen, Shi-Lin; Gonzales, Gustavo F

    2007-10-01

    Lepidium meyenii Walp. (Brassicaceae), known as Maca, is a Peruvian hypocotyl growing exclusively between 4,000 and 4,500 m altitude in the central Peruvian Andes, particularly on the Junin plateau. Previously, the Black variety of Maca was shown to be more beneficial than other varieties of Maca for learning and memory in ovariectomized mice in the water finding test. The present study aimed to test two different doses of aqueous (0.50 and 2.00 g/kg) and hydroalcoholic (0.25 and 1.00 g/kg) extracts of Black Maca, administered for 35 days, on memory impairment induced by scopolamine (1 mg/kg body weight i.p.) in male mice. Memory and learning were evaluated using the Morris water maze and the step-down avoidance test. Brain acetylcholinesterase (AChE) and monoamine oxidase (MAO) activities were also determined. Both extracts of Black Maca significantly ameliorated the scopolamine-induced memory impairment as measured in both the Morris water maze and the step-down avoidance tests. Black Maca extracts inhibited AChE activity, whereas MAO activity was not affected. These results indicate that Black Maca improves scopolamine-induced memory deficits.

  1. Disparity, motion, and color information improve gloss constancy performance.

    Science.gov (United States)

    Wendt, Gunnar; Faul, Franz; Ekroll, Vebjørn; Mausfeld, Rainer

    2010-09-01

    S. Nishida and M. Shinya (1998) found that observers have only a limited ability to recover surface-reflectance properties under changes in surface shape. Our aim in the present study was to investigate how the degree of surface-reflectance constancy depends on the availability of information that may help to infer the reflectance and shape properties of surfaces. To this end, we manipulated the availability of (i) motion-induced information (static vs. dynamic presentation), (ii) disparity information (with the levels "monocular," "surface disparity," and "surface + highlight disparity"), and (iii) color information (grayscale stimuli vs. hue differences between diffuse and specular reflections). The task of the subjects was to match the perceived lightness and glossiness between two surfaces with different spatial frequency and amplitude by manipulating the diffuse component and the exponent of the Phong lighting model in one of the surfaces. Our results indicate that all three types of information improve the constancy of glossiness matches--both in isolation and in combination. The lightness matching data only revealed an influence of motion and color information. Our results indicate, somewhat counterintuitively, that motion information has a detrimental effect on lightness constancy.

  2. An image-processing strategy to extract important information suitable for a low-size stimulus pattern in a retinal prosthesis.

    Science.gov (United States)

    Chen, Yili; Fu, Jixiang; Chu, Dawei; Li, Rongmao; Xie, Yaoqin

    2017-11-27

    A retinal prosthesis is designed to help the blind obtain some sight. It consists of an external part and an internal part. The external part is made up of a camera, an image processor and an RF transmitter. The internal part is made up of an RF receiver, an implant chip and microelectrodes. Currently, the number of microelectrodes is in the hundreds, and the mechanism by which an electrode stimulates the optic nerve is not fully known. A simple hypothesis is that the pixels in an image correspond to the electrodes. The images captured by the camera should be processed by suitable strategies to drive stimulation from the electrodes. The question is thus how to obtain the important information from the captured image. Here, we use a region of interest (ROI) extraction algorithm to retain the important information and to remove the redundant information. This paper explains the details of the principles and functions of the ROI approach. Because we are investigating a real-time system, we need a fast ROI extraction algorithm. Thus, we simplified the ROI algorithm and used it in the external image-processing digital signal processing (DSP) system of the retinal prosthesis. The results show that our image-processing strategies are suitable for a real-time retinal prosthesis and can eliminate redundant information while providing useful information for expression in a low-size image.
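    A minimal sketch of this kind of pipeline: keep a thresholded bright region of a grayscale frame and block-average it down to a small electrode grid. The threshold and the 4x4 grid size are illustrative assumptions, not the authors' parameters:

```python
# A minimal ROI sketch for a prosthesis-style pipeline: crop the image to
# the bounding box of "important" (bright) pixels, then downsample the
# crop to an n x n stimulus pattern by block averaging.
# Threshold and grid size are illustrative assumptions.

import numpy as np

def extract_roi(img, thresh=0.5):
    """Bounding-box crop of all pixels above thresh."""
    ys, xs = np.nonzero(img > thresh)
    return img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def to_grid(roi, n=4):
    """Downsample the ROI to an n x n pattern by averaging blocks."""
    h, w = roi.shape
    ys = np.linspace(0, h, n + 1).astype(int)
    xs = np.linspace(0, w, n + 1).astype(int)
    return np.array([[roi[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                      for j in range(n)] for i in range(n)])

# Dark 32x32 frame with a bright 8x8 square: the ROI shrinks to the square.
img = np.zeros((32, 32))
img[8:16, 8:16] = 1.0
roi = extract_roi(img)     # 8x8 crop of the bright region
grid = to_grid(roi, n=4)   # 4x4 stimulus pattern for the electrode array
```

Cropping before downsampling is what preserves the "important" detail at the very low electrode resolution.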

  3. Using task analysis to improve the requirements elicitation in health information system.

    Science.gov (United States)

    Teixeira, Leonor; Ferreira, Carlos; Santos, Beatriz Sousa

    2007-01-01

    This paper describes the application of task analysis within the design process of a Web-based information system for managing clinical information in hemophilia care, in order to improve the requirements elicitation and, consequently, to validate the domain model obtained in a previous phase of the design process (system analysis). The use of task analysis in this case proved to be a practical and efficient way to improve the requirements engineering process by involving users in the design process.

  4. Extractive Summarisation of Medical Documents

    OpenAIRE

    Abeed Sarker; Diego Molla; Cecile Paris

    2012-01-01

    Background Evidence Based Medicine (EBM) practice requires practitioners to extract evidence from published medical research when answering clinical queries. Due to the time-consuming nature of this practice, there is a strong motivation for systems that can automatically summarise medical documents and help practitioners find relevant information. Aim The aim of this work is to propose an automatic query-focused, extractive summarisation approach that selects informative sentences from medic...

  5. The Design of Case Products’ Shape Form Information Database Based on NURBS Surface

    Science.gov (United States)

    Liu, Xing; Liu, Guo-zhong; Xu, Nuo-qi; Zhang, Wei-she

    2017-07-01

    To improve computer-aided product shape design, applying Non-Uniform Rational B-Spline (NURBS) curves and surfaces to the representation of product shape helps designers design products effectively. On the basis of typical product image contour extraction, and using Pro/Engineer (Pro/E) to extract the geometric features of a scanned mold, in order to structure an information database of value points, control points and knot vector parameters, this paper puts forward a unified method of using NURBS curves and surfaces to describe products' geometric shape, and of using MATLAB to simulate, when products have the same or similar function. A case study of an electric vehicle's front cover illustrates the access process for the geometric shape information of a case product. This method can not only greatly reduce the volume of the information database, but also improve the effectiveness of computer-aided geometric innovation modeling.
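    The stored parameters (control points, weights, knot vector) fully determine a NURBS curve via the Cox-de Boor recursion. A minimal sketch evaluating a point on an illustrative clamped quadratic curve, which is not a curve from the paper:

```python
# A minimal NURBS evaluation sketch: Cox-de Boor recursion for the
# B-spline basis, then a rational (weighted) combination of control
# points. Control points, weights and knots below are illustrative.

def basis(i, p, u, knots):
    """Cox-de Boor recursion for the B-spline basis N_{i,p}(u)."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, u, knots)
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
                * basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl, weights, knots, p=2):
    """Point on a NURBS curve for u in [0, 1) (half-open basis convention)."""
    ns = [basis(i, p, u, knots) for i in range(len(ctrl))]
    denom = sum(n * w for n, w in zip(ns, weights))
    x = sum(n * w * c[0] for n, w, c in zip(ns, weights, ctrl)) / denom
    y = sum(n * w * c[1] for n, w, c in zip(ns, weights, ctrl)) / denom
    return x, y

ctrl = [(0, 0), (1, 2), (2, 0)]   # control polygon
weights = [1.0, 1.0, 1.0]         # uniform weights -> plain B-spline curve
knots = [0, 0, 0, 1, 1, 1]        # clamped quadratic knot vector
mid = nurbs_point(0.5, ctrl, weights, knots)   # midpoint of the arc
```

With non-uniform weights the same machinery represents exact conics, which is why NURBS is the standard exchange form for case-product shape data.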

  6. UNLABELED SELECTED SAMPLES IN FEATURE EXTRACTION FOR CLASSIFICATION OF HYPERSPECTRAL IMAGES WITH LIMITED TRAINING SAMPLES

    Directory of Open Access Journals (Sweden)

    A. Kianisarkaleh

    2015-12-01

    Full Text Available Feature extraction plays a key role in hyperspectral image classification. By using unlabeled samples, which are often available in practically unlimited numbers, unsupervised and semi-supervised feature extraction methods show better performance when only a limited number of training samples exists. This paper illustrates the importance of selecting appropriate unlabeled samples for use in feature extraction methods, and proposes a new method for unlabeled sample selection using spectral and spatial information. The proposed method has four parts: PCA, prior classification, posterior classification and sample selection. As a hyperspectral image passes through these parts, the selected unlabeled samples can be used in arbitrary feature extraction methods. The effectiveness of the proposed unlabeled sample selection in unsupervised and semi-supervised feature extraction is demonstrated using two real hyperspectral datasets. Results show that, by selecting appropriate unlabeled samples, the proposed method can improve the performance of feature extraction methods and increase classification accuracy.

  7. Improvements mineral dressing and extraction processes of gold-silver ores from San Pedro Frio Mining District, Colombia

    International Nuclear Information System (INIS)

    Yanez Traslavina, J. J.; Vargas Avila, M. A.; Garcia Paez, I. H.; Pedraza Rosas, J. E.

    2005-01-01

    The San Pedro Frio mining district, Colombia, is a rich region producing gold-silver ores. Nowadays, the extraction processes used are amalgamation, percolation cyanidation and precipitation with zinc dust. Owing to ignorance of the ore characteristics, the gold and silver treatment processes are inadequate and inefficient. In addition, the inappropriate use of mercury and cyanide causes environmental contamination. In this research the ore characterization was carried out, obtaining fundamental parameters for the technical selection of more efficient gold and silver extraction processes. Experimental work addressed the study of two processes: agitation cyanidation and adsorption on activated carbon in pulp. As a final result, a flowsheet is proposed to improve the precious metals recovery and reduce environmental contamination. (Author)

  8. Improvement of intestinal absorption of forsythoside A in weeping forsythia extract by various absorption enhancers based on tight junctions.

    Science.gov (United States)

    Zhou, Wei; Qin, Kun Ming; Shan, Jin Jun; Ju, Wen Zheng; Liu, Shi Jia; Cai, Bao Chang; Di, Liu Qing

    2012-12-15

    Forsythoside A (FTA), one of the main active ingredients in weeping forsythia extract, possesses strong antibacterial, antioxidant and antiviral effects. Its content is about 8% of the total, much higher than that of the other ingredients, but its absolute oral bioavailability is approximately 0.5%, which is significantly low and limits the clinical efficacy of its oral preparations. In the present study, in vitro Caco-2 cell, in situ single-pass intestinal perfusion and in vivo pharmacokinetic studies were performed to investigate the effects of two absorption enhancers acting on tight junctions, sodium caprate and water-soluble chitosan, on the intestinal absorption of FTA, and the eventual mucosal epithelial damage resulting from the absorption enhancers was evaluated by the MTT test, measurement of the total amount of protein and of LDH activity, and morphological observation, respectively. Pharmacological effects such as the improvement in antioxidant activity by absorption enhancers were verified by the PC12 cell damage inhibition rate after H₂O₂ insults. The observations from in vitro Caco-2 cells showed that the absorption of FTA in weeping forsythia extract could be improved by absorption enhancers. Meanwhile, the absorption-enhancing effect of water-soluble chitosan may be almost saturated at 0.0032% (w/v); sodium caprate at concentrations up to 0.64 mg/ml was safe for Caco-2 cells, and water-soluble chitosan was safe for these cells at all concentrations tested. The observations from the in situ single-pass intestinal perfusion model showed that the duodenum, jejunum, ileum and colon exhibited significant concentration-dependent increases in P(eff) values, that the P(eff) values in the ileum and colon groups, where sodium caprate was added, were higher than those of the duodenum and jejunum groups, and that the P(eff) value in the jejunum group was higher than those of the duodenum, ileum and colon groups where water-soluble chitosan was added. Intestinal mucosal toxicity studies showed no

  9. Stability, structure and scale: improvements in multi-modal vessel extraction for SEEG trajectory planning.

    Science.gov (United States)

    Zuluaga, Maria A; Rodionov, Roman; Nowell, Mark; Achhala, Sufyan; Zombori, Gergely; Mendelson, Alex F; Cardoso, M Jorge; Miserocchi, Anna; McEvoy, Andrew W; Duncan, John S; Ourselin, Sébastien

    2015-08-01

    Brain vessels are among the most critical landmarks that need to be assessed for mitigating surgical risks in stereo-electroencephalography (SEEG) implantation. Intracranial haemorrhage is the most common complication associated with implantation, carrying significant associated morbidity. SEEG planning is done pre-operatively to identify avascular trajectories for the electrodes. In current practice, neurosurgeons have no assistance in the planning of electrode trajectories. There is great interest in developing computer-assisted planning systems that can optimise the safety profile of electrode trajectories, maximising the distance to critical structures. This paper presents a method that integrates the concepts of scale, neighbourhood structure and feature stability with the aim of improving robustness and accuracy of vessel extraction within a SEEG planning system. The developed method accounts for scale and vicinity of a voxel by formulating the problem within a multi-scale tensor voting framework. Feature stability is achieved through a similarity measure that evaluates the multi-modal consistency in vesselness responses. The proposed measure allows the combination of multiple image modalities into a single image that is used within the planning system to visualise critical vessels. Twelve paired data sets from two image modalities available within the planning system were used for evaluation. The mean Dice similarity coefficient was 0.89 ± 0.04, representing a statistically significant improvement when compared to a semi-automated, single human rater, single-modality segmentation protocol used in clinical practice (0.80 ± 0.03). Multi-modal vessel extraction is superior to semi-automated single-modality segmentation, indicating the possibility of safer SEEG planning, with reduced patient morbidity.
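    The reported evaluation metric, the Dice similarity coefficient, can be computed directly from two binary segmentation masks; the toy masks below are illustrative, not clinical data:

```python
# A minimal sketch of the Dice similarity coefficient used to score
# segmentation overlap: Dice = 2|A ∩ B| / (|A| + |B|).
# The two tiny masks below are illustrative only.

import numpy as np

def dice(a, b):
    """Dice similarity between two boolean masks (1.0 if both empty)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto   = np.array([[1, 1, 0],
                   [1, 0, 0]])   # e.g. an automatic vessel mask
manual = np.array([[1, 1, 0],
                   [0, 0, 1]])   # e.g. a manual rater's mask
score = dice(auto, manual)       # 2*2 / (3 + 3) = 0.666...
```

A Dice of 0.89, as reported, means the automatic and reference masks share the large majority of their voxels.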

  10. Information Literacy and technology to improve learning and education

    OpenAIRE

    Mooij, Ton; Smeets, Ed

    2011-01-01

    Mooij, T., & Smeets, E. (2011, 13-16 September). Information Literacy and technology to improve learning and education. Presentation and discussion in a cross-network symposium of networks 16 and 12 at the ‘European Conference on Educational Research’ of the “European Educational Research Association” (EERA), Berlin, Germany.

  11. [Evaluation and improvement of the management of informed consent in the emergency department].

    Science.gov (United States)

    del Pozo, P; García, J A; Escribano, M; Soria, V; Campillo-Soto, A; Aguayo-Albasini, J L

    2009-01-01

    To assess the preoperative management in our emergency surgical service and to improve the quality of the care provided to patients. In order to find the causes of non-compliance, the Ishikawa fishbone diagram was used and eight assessment criteria were chosen. The first assessment includes 120 patients operated on from January to April 2007. Corrective measures were implemented, which consisted of meetings and conferences with doctors and nurses, insisting on the importance of the informed consent as a legal document which must be signed by patients, and the obligation to give a copy to patients or relatives. The second assessment includes the period from July to October 2007 (n=120). We observed high non-compliance for C1, signing of the surgical consent (criterion 1: all patients or relatives have to sign the surgical informed consent for the operation to be performed [27.5%]), C2, giving a copy of the surgical consent (criterion 2: all patients or relatives must have received a copy of the surgical informed consent for the operation to be performed [72.5%]), and C4, the anaesthetic consent copy (criterion 4: all patients or relatives must have received a copy of the anaesthesia informed consent corresponding to the operation performed [90%]). After implementing the corrective measures, a significant improvement was observed in the compliance with C2 and C4. In C1 there was an improvement without statistical significance. Carrying out an improvement cycle enabled the main objective of this paper to be achieved: to improve the management of informed consent and the quality of the care and information provided to our patients.

  12. Semantics-based information extraction for detecting economic events

    NARCIS (Netherlands)

    A.C. Hogenboom (Alexander); F. Frasincar (Flavius); K. Schouten (Kim); O. van der Meer

    2013-01-01

    textabstractAs today's financial markets are sensitive to breaking news on economic events, accurate and timely automatic identification of events in news items is crucial. Unstructured news items originating from many heterogeneous sources have to be mined in order to extract knowledge useful for

  13. Active learning for ontological event extraction incorporating named entity recognition and unknown word handling.

    Science.gov (United States)

    Han, Xu; Kim, Jung-jae; Kwoh, Chee Keong

    2016-01-01

    Biomedical text mining may target various kinds of valuable information embedded in the literature, but a critical obstacle to the extension of the mining targets is the cost of manually constructing labeled data, which are required by state-of-the-art supervised learning systems. Active learning chooses the most informative documents for supervised learning in order to reduce the amount of manual annotation required. Previous work on active learning, however, focused on the tasks of entity recognition and protein-protein interactions, not on event extraction tasks for multiple event types. It also did not consider the evidence of event participants, which might be a clue for the presence of events in unlabeled documents. Moreover, the confidence scores of events produced by event extraction systems are not reliable for ranking documents in terms of informativity for supervised learning. We here propose a novel committee-based active learning method that supports multi-event extraction tasks and employs a new statistical method for informativity estimation instead of using the confidence scores from event extraction systems. Our method is based on a committee of two systems, as follows: we first employ an event extraction system to filter potential false negatives among unlabeled documents, i.e. documents from which the system does not extract any event. We then develop a statistical method to rank the potential false negatives among the unlabeled documents 1) by using a language model that measures the probabilities of the expression of multiple events in documents and 2) by using a named entity recognition system that locates the named entities that can be event arguments (e.g. proteins). The proposed method further deals with unknown words in test data by using word similarity measures. We also apply our active learning method to the task of named entity recognition. 
We evaluate the proposed method against the BioNLP Shared Tasks datasets, and show that our method

  14. Lung region extraction based on the model information and the inversed MIP method by using chest CT images

    International Nuclear Information System (INIS)

    Tomita, Toshihiro; Miguchi, Ryosuke; Okumura, Toshiaki; Yamamoto, Shinji; Matsumoto, Mitsuomi; Tateno, Yukio; Iinuma, Takeshi; Matsumoto, Toru.

    1997-01-01

    We developed a lung region extraction method based on model information and the inversed MIP method for the Lung Cancer Screening CT (LSCT). The original model is composed of typical 3-D lung contour lines, a body axis, an apical point, and a convex hull. First, the body axis, the apical point, and the convex hull are automatically extracted from the input image. Next, the model is properly transformed to fit those of the input image by an affine transformation. Using the same affine transformation coefficients, the typical lung contour lines are also transferred, and these correspond to rough contour lines of the input image. Experimental results for 68 samples showed this method to be quite promising. (author)

  15. Improving understanding in the research informed consent process: a systematic review of 54 interventions tested in randomized control trials.

    Science.gov (United States)

    Nishimura, Adam; Carey, Jantey; Erwin, Patricia J; Tilburt, Jon C; Murad, M Hassan; McCormick, Jennifer B

    2013-07-23

    Obtaining informed consent is a cornerstone of biomedical research, yet participants' comprehension of the presented information is often low. The most effective interventions to improve understanding rates have not been identified. The objective was to systematically analyze the randomized controlled trials testing interventions to the research informed consent process. The primary outcome of interest was the quantitative rate of participant understanding; secondary outcomes were rates of information retention, satisfaction, and accrual. Interventional categories included multimedia, enhanced consent documents, extended discussions, test/feedback quizzes, and miscellaneous methods. The search spanned from database inception through September 2010. It was run on Ovid MEDLINE, Ovid EMBASE, Ovid CINAHL, Ovid PsycInfo and Cochrane CENTRAL, ISI Web of Science and Scopus. Five reviewers working independently and in duplicate screened the full abstract text to determine eligibility. We included only RCTs. 39 out of 1523 articles fulfilled the review criteria (2.6%), with a total of 54 interventions. A data extraction form was created in Distiller, an online reference management system, through an iterative process. One author collected data on study design, population, demographics, intervention, and analytical technique. Meta-analysis was possible on 22 interventions in the multimedia, enhanced form, and extended discussion categories; all 54 interventions were assessed by review. In the meta-analysis, multimedia approaches were associated with a non-significant increase in understanding scores (SMD 0.30, 95% CI, -0.23 to 0.84); enhanced consent forms, with a significant increase (SMD 1.73, 95% CI, 0.99 to 2.47); and extended discussion, with a significant increase (SMD 0.53, 95% CI, 0.21 to 0.84). By review, 31% of multimedia interventions showed significant improvement in understanding; 41% for enhanced consent form; 50% for extended discussion; 33% for test/feedback; and 29% for miscellaneous. Multiple sources of variation
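    The pooled effect sizes above are standardized mean differences. A minimal sketch of computing an SMD (Cohen's d with a pooled standard deviation) for one hypothetical trial; the numbers are illustrative, not taken from the review:

```python
# A minimal sketch of a standardized mean difference (Cohen's d):
#   d = (mean_treatment - mean_control) / pooled standard deviation
# All group statistics below are illustrative assumptions.

import math

def smd(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d using the pooled standard deviation of the two groups."""
    pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                       / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled

# Hypothetical understanding scores: intervention arm 78 (SD 10, n=50)
# versus control arm 70 (SD 10, n=50).
d = smd(78, 10, 50, 70, 10, 50)   # -> 0.8, conventionally a large effect
```

Expressing each trial on this unitless scale is what lets the review pool trials that measured understanding with different instruments.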

  16. Effectiveness of training intervention to improve medical student's information literacy skills.

    Science.gov (United States)

    Abdekhoda, Mohammadhiwa; Dehnad, Afsaneh; Yousefi, Mahmood

    2016-12-01

    This study aimed to assess the efficiency of delivering a 4-month course on "effective literature search" among medical postgraduate students for improving information literacy skills. This was a cross-sectional study in which 90 postgraduate students were randomly selected and participated in 12 training sessions. Effective search strategies were presented, and the students' attitude and competency concerning online searching were measured by pre- and post-questionnaires and skill tests. Data were analyzed with SPSS version 16 using the t-test. There was a significant improvement (p=0.00) in students' attitude: the mean (standard deviation [SD]) was 2.9 (0.8) before the intervention versus 3.9 (0.7) after the intervention. Students' familiarity with medical resources and databases improved significantly. The data also showed a significant increase (p=0.03) in students' competency scores concerning search strategy design and conducting a search: the mean (SD) was 2.04 (0.7) before the intervention versus 3.07 (0.8) after. Students' ability in applying search and meta-search engines likewise improved significantly. This study clearly acknowledges that the training intervention provides considerable opportunity to improve medical students' information literacy skills.
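Pre/post comparisons of this kind are typically tested with a paired t-test on the per-student score differences. The sketch below uses only the standard library; the score lists are hypothetical illustrations, not the study's raw data.

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t-test statistic for matched pre/post scores.
    Returns (t statistic, degrees of freedom)."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)           # sample SD of differences
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1

# Hypothetical attitude scores on a 5-point scale for six students.
pre  = [2.8, 3.0, 2.7, 3.1, 2.9, 3.0]
post = [3.8, 4.0, 3.7, 4.1, 3.6, 4.2]
t, df = paired_t(pre, post)
print(f"t({df}) = {t:.2f}")
```

In practice one would look up the two-sided p-value from the t distribution with `df` degrees of freedom (e.g. `scipy.stats.ttest_rel` does both steps at once).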

  17. The reprocessing of irradiated fuels: improvement and extension of the solvent extraction process

    International Nuclear Information System (INIS)

    Faugeras, P.; Chesne, A.

    1964-01-01

    Improvements made in the conventional tri-butyl-phosphate process are described, in particular: the concentration and purification of plutonium by one extraction cycle using tri-butyl-phosphate with reflux; and the use of an apparatus working continuously for precipitating plutonium oxalate, for calcining the oxalate, and for fluorinating the oxide. The modifications proposed for the treatment of irradiated uranium-molybdenum alloys are described, in particular the dissolution of the fuel and the concentration of the fission product solutions. The solvent extraction treatment is also used for the plutonium fuels utilized in the fast breeder reactor (Rapsodie). An outline of the process is presented and discussed, as well as the first experimental results and the plans for a pilot plant having a capacity of 1 kg/day. The possible use of tri-lauryl-amine in the plutonium purification cycle is now under consideration for the processing plant at La Hague. The flowsheet for this process and its performance are presented. The possibility of vitrification is considered for the final treatment of the concentrated radioactive wastes from the Marcoule (irradiated uranium) and La Hague (irradiated uranium-molybdenum) Centers. Three possible processes are described and discussed, as well as the results obtained from the operation of the corresponding experimental units using tracers. (authors) [fr]

  18. Improving public information with an interactive lecture approach

    International Nuclear Information System (INIS)

    Tkavc, M.

    2003-01-01

    Providing public information is one of the main activities of the Nuclear Training Centre (ICJT) at the Jozef Stefan Institute. Our primary target is students of primary and secondary schools. The lecture they listen to during their visit to our centre was old-fashioned, since we used a classic overhead projector. We have modernized it with an LCD projector and a computer-based interactive presentation in order to improve students' comprehension. (author)

  19. The Role of Information Security Management Systems in Supply Chain Performance Improvement

    Directory of Open Access Journals (Sweden)

    Mohammad Reza Taghva

    2012-02-01

    Full Text Available In recent years, researchers have emphasized the positive effect of information systems on supply chain performance, such as organizational process integration, information sharing, and information technology. On the other hand, the information security management system (ISMS) is one of the subjects whose effects researchers have considered on increasing the accuracy and effectiveness of information exchange, access to accurate and timely information, and reduction of information system errors. Since no research had been done on this ground (the importance of the ISMS for supply chain performance), it was felt that a study of these approaches in the supply chain should be carried out. In this respect, the current research investigated how the ISMS impacts supply chain performance in the automotive industry, which is the innovative aspect of this paper. First of all, after a review of the information security management system literature, supply chain performance was considered through the balanced scorecard approach, and then the most important factors of these two subjects were extracted by correlation analysis. In this way, correlation analysis was used to examine how the ISMS impacts supply chain performance. The results showed that different dimensions of the ISMS (information uniformity, prevention of human and machine mistakes, information accuracy, and rectitude and instruction for users) had an impact on four dimensions of supply chain performance (customers, financial, internal processes, and learning and growth) at three levels (strategic, technical, and operational) in the supply chain. In the end, it was shown that the ISMS lays the ground for increased supply chain performance.

  20. Multineuronal vectorization is more efficient than time-segmental vectorization for information extraction from neuronal activities in the inferior temporal cortex.

    Science.gov (United States)

    Kaneko, Hidekazu; Tamura, Hiroshi; Tate, Shunta; Kawashima, Takahiro; Suzuki, Shinya S; Fujita, Ichiro

    2010-08-01

    In order for patients with disabilities to control assistive devices with their own neural activity, multineuronal spike trains must be efficiently decoded because only limited computational resources can be used to generate prosthetic control signals in portable real-time applications. In this study, we compare the abilities of two vectorizing procedures (multineuronal and time-segmental) to extract information from spike trains during the same total neuron-seconds. In the multineuronal vectorizing procedure, we defined a response vector whose components represented the spike counts of one to five neurons. In the time-segmental vectorizing procedure, a response vector consisted of components representing a neuron's spike counts for one to five time-segment(s) of a response period of 1 s. Spike trains were recorded from neurons in the inferior temporal cortex of monkeys presented with visual stimuli. We examined whether the amount of information of the visual stimuli carried by these neurons differed between the two vectorizing procedures. The amount of information calculated with the multineuronal vectorizing procedure, but not the time-segmental vectorizing procedure, significantly increased with the dimensions of the response vector. We conclude that the multineuronal vectorizing procedure is superior to the time-segmental vectorizing procedure in efficiently extracting information from neuronal signals. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
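The two vectorizing procedures compared above can be sketched directly from their definitions: one response vector holds the whole-period spike counts of k neurons, the other holds one neuron's spike counts in k equal time segments. The spike-time data and function names below are illustrative assumptions, not the authors' code.

```python
def multineuronal_vector(trains, k):
    """Spike counts of the first k neurons over the whole response period."""
    return [len(train) for train in trains[:k]]

def time_segmental_vector(train, k, period=1.0):
    """One neuron's spike counts in k equal segments of the period."""
    seg = period / k
    return [sum(seg * i <= t < seg * (i + 1) for t in train)
            for i in range(k)]

# Hypothetical spike times (seconds) for five neurons over a 1-s response.
trains = [
    [0.05, 0.32, 0.70],
    [0.11, 0.55],
    [0.21, 0.40, 0.81, 0.95],
    [0.62],
    [0.15, 0.48, 0.77],
]
print(multineuronal_vector(trains, 3))      # 3-dimensional multineuronal vector
print(time_segmental_vector(trains[0], 4))  # 4-segment vector for neuron 0
```

Both procedures yield a k-dimensional response vector from the same total neuron-seconds, which is what makes the information comparison in the study fair.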

  1. Bioactivity Improvement of Olea europaea Leaf Extract Biotransformed by Wickerhamomyces anomalus Enzymes.

    Science.gov (United States)

    Palmeri, Rosa; Restuccia, Cristina; Monteleone, Julieta Ines; Sperlinga, Elisa; Siracusa, Laura; Serafini, Mauro; Finamore, Alberto; Spagna, Giovanni

    2017-06-01

    Olive leaves represent a quantitatively significant by-product of agroindustry. They are rich in phenols, mainly oleuropein, which can be hydrolyzed into several bioactive compounds, including hydroxytyrosol. In this study, a water extract from 'Biancolilla' olive leaves was analyzed for polyphenol profile, DPPH (2,2-diphenyl-1-picrylhydrazyl) radical scavenging activity and protective effect on differentiated Caco-2 cells. The efficacy of two enzymatic treatments in promoting the release of bioactive phenols was investigated: a) an enzymatic extract from Wickerhamomyces anomalus, characterized by β-glucosidase and esterase activities; b) a commercial β-glucosidase. The composition and bioactivity of the resulting extracts were compared. The results showed that the yeast-treated extract presented hydroxytyrosol content and DPPH radical scavenging activity comparable to those obtained using the commercial β-glucosidase; however, it additionally showed the presence of hydroxycinnamic acids. In experiments on Caco-2 cells, the leaf extracts promoted the recovery of the cell membrane barrier at different minimum effective concentrations. The high specificity of the W. anomalus enzymatic extract may represent an effective tool for the release of bioactive phenols from olive by-products.

  2. Improving CANDU plant operation and maintenance through retrofit information technology systems

    International Nuclear Information System (INIS)

    Lupton, L.R.; Judd, R.A.; MacBeth, M.J.

    1998-01-01

    CANDU plant owners are facing an increasingly competitive environment for the generation of electricity. To meet this challenge, all owners have identified that information technology offers opportunities for significant improvements in CANDU operation, maintenance and administration (OM and A) costs. Targeted information technology application areas include instrumentation and control, engineering, construction, operations and plant information management. These opportunities also pose challenges and issues that must be addressed if the full benefits of the advances in information technology are to be achieved. Key among these are system hardware and software maintenance, and obsolescence protection. AECL has been supporting CANDU stations with the initial development and evaluation of systems to improve plant performance and cost. Key initiatives that have been implemented, or are in the process of being implemented, in some CANDU plants to achieve operational benefits include: a critical safety parameter monitor system; an advanced computerized annunciation system; a plant historical data system; and a plant display system. Each system is described in terms of its role in enhancing current CANDU plant performance and how it will contribute to future CANDU plant performance.

  3. Aqueous leaf extract of Rothmannia longiflora improves basal

    African Journals Online (AJOL)

    Daniel Owu

    E-mail: ikpidanielewa@yahoo.com. Summary: This study evaluated the action of aqueous leaf extract of Rothmannia longiflora on basal metabolic .... Animals and Induction of Diabetes. Fifteen male rats of Wistar strain weighing .... lipids have a higher concentration of energy than do carbohydrates. Therefore in their ...

  4. Information Management Processes for Extraction of Student Dropout Indicators in Courses in Distance Mode

    Directory of Open Access Journals (Sweden)

    Renata Maria Abrantes Baracho

    2016-04-01

    Full Text Available This research addresses the use of information management processes to extract student dropout indicators in distance mode courses. Distance education in Brazil aims to facilitate access to information. The MEC (Ministry of Education) announced, in the second semester of 2013, that the main obstacles faced by institutions offering courses in this mode were students dropping out and the resistance of both educators and students to this mode. The research used a mixed methodology, qualitative and quantitative, to obtain student dropout indicators. The factors found and validated in this research were: lack of interest from students, insufficient training of students in the use of the virtual learning environment, structural problems in the schools that were chosen to offer the course, students without e-mail, incoherent answers to course activities, and lack of knowledge on the part of the student when using the computer tool. The scenario considered was a course offered in distance mode called Aluno Integrado (Integrated Student).

  5. Improved Methods of Carnivore Faecal Sample Preservation, DNA Extraction and Quantification for Accurate Genotyping of Wild Tigers

    Science.gov (United States)

    Harika, Katakam; Mahla, Ranjeet Singh; Shivaji, Sisinthy

    2012-01-01

    Background: Non-invasively collected samples allow a variety of genetic studies on endangered and elusive species. However, due to low amplification success and high genotyping error rates, fewer samples can be identified up to the individual level, and the number of PCRs needed to obtain reliable genotypes also noticeably increases. Methods: We developed a quantitative PCR assay to measure and grade amplifiable nuclear DNA in feline faecal extracts. We determined DNA degradation in experimentally aged faecal samples and tested a suite of pre-PCR protocols to considerably improve DNA retrieval. Results: Average DNA concentrations of Grade I, II and III extracts were 982 pg/µl, 9.5 pg/µl and 0.4 pg/µl respectively. Nearly 10% of extracts had no amplifiable DNA. Microsatellite PCR success and allelic dropout rates were 92% and 1.5% in Grade I, 79% and 5% in Grade II, and 54% and 16% in Grade III respectively. Our results on experimentally aged faecal samples showed that ageing has a significant effect on the quantity and quality of amplifiable DNA (p<0.05); DNA degradation occurs within 3 days of exposure to direct sunlight. DNA concentrations of Day 1 samples stored by ethanol and silica methods for a month varied significantly from fresh Day 1 extracts (p<0.05). DNA concentrations of fresh tiger and leopard faecal extracts without addition of carrier RNA were 816.5 pg/µl (±115.5) and 690.1 pg/µl (±207.1), while concentrations with addition of carrier RNA were 49414.5 pg/µl (±9370.6) and 20982.7 pg/µl (±6835.8) respectively. Conclusions: Our results indicate that carnivore faecal samples should be collected as fresh as possible, are better preserved by the two-step method and should be extracted with the addition of carrier RNA. We recommend quantification of template DNA as this facilitates several downstream protocols. PMID:23071624
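The grading scheme above reduces to a simple lookup once the qPCR concentration is measured. In the sketch below the concentration cutoffs are hypothetical (the abstract reports per-grade means, not boundaries), while the per-grade PCR success rates are the ones reported above.

```python
def grade_extract(conc_pg_per_ul):
    """Grade a faecal DNA extract by its qPCR-measured amplifiable DNA.
    Cutoffs are hypothetical, chosen only to separate the reported
    grade means (982, 9.5 and 0.4 pg/ul)."""
    if conc_pg_per_ul >= 100:
        return "I"
    if conc_pg_per_ul >= 1:
        return "II"
    if conc_pg_per_ul > 0:
        return "III"
    return "no amplifiable DNA"

# Per-grade microsatellite PCR success rates reported in the abstract.
pcr_success = {"I": 0.92, "II": 0.79, "III": 0.54}

grade = grade_extract(9.5)          # a Grade II-like concentration
print(grade, pcr_success[grade])
```

Pre-screening extracts this way lets the expensive genotyping effort be concentrated on samples with a realistic chance of amplifying.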

  6. Hexane extracts of Polygonum multiflorum improve tissue and functional outcome following focal cerebral ischemia in mice.

    Science.gov (United States)

    Lee, Soo Vin; Choi, Kyung Ha; Choi, Young Whan; Hong, Jin Woo; Baek, Jin Ung; Choi, Byung Tae; Shin, Hwa Kyoung

    2014-04-01

    Polygonum multiflorum is a traditional Korean medicine that has been utilized widely in East Asian countries as a longevity agent. Clinical studies have demonstrated that Polygonum multiflorum improves hypercholesterolemia, coronary heart disease, neurosis and other diseases commonly associated with aging. However, scientific evidence defining the protective effects and mechanisms of Polygonum multiflorum against ischemic stroke is incomplete. In the present study, we investigated the cerebrovascular protective effects of Polygonum multiflorum against ischemic brain injury using an in vivo photothrombotic mouse model. To examine the underlying mechanism of action, we utilized an in vitro human brain microvascular endothelial cell (HBMEC) culture system. Hexane extracts (HEPM), ethyl acetate extracts (EAEPM) and methanol extracts (MEPM) of Polygonum multiflorum (100 mg/kg) were administered intraperitoneally 30 min prior to ischemic insult. Focal cerebral ischemia was induced in C57BL/6J mice and endothelial nitric oxide synthase knockout (eNOS KO) mice by photothrombotic cortical occlusion. We evaluated the infarct volume, as well as neurological and motor function, 24 h after ischemic brain injury. Following ischemic insult, HEPM induced a significant reduction in infarct volume and subsequent neurological deficits, compared with EAEPM and MEPM. HEPM significantly decreased infarct size and improved neurological and motor function; this was not observed in eNOS KO mice, suggesting that the cerebroprotective effect is primarily eNOS-dependent. In vitro, HEPM effectively promoted NO production; however, these effects were inhibited by the NOS inhibitor L-NAME and the PI3K/Akt inhibitor LY-294002. Furthermore, HEPM treatment resulted in increased phosphorylation-dependent activation of Akt and eNOS in HBMEC, suggesting that HEPM increased NO production via phosphorylation-dependent activation of Akt and eNOS. In conclusion, HEPM prevents cerebral

  7. Consent for third molar tooth extractions in Australia and New Zealand: a review of current practice.

    Science.gov (United States)

    Badenoch-Jones, E K; Lynham, A J; Loessner, D

    2016-06-01

    Informed consent is the legal requirement to educate a patient about a proposed medical treatment or procedure so that he or she can make informed decisions. The purpose of the study was to examine the current practice for obtaining informed consent for third molar tooth extractions (wisdom teeth) by oral and maxillofacial surgeons in Australia and New Zealand. An online survey was sent to 180 consultant oral and maxillofacial surgeons in Australia and New Zealand. Surgeons were asked to answer (yes/no) whether they routinely warned of a specific risk of third molar tooth extraction in their written consent. Seventy-one replies were received (39%). The only risks that surgeons agreed should be routinely included in written consent were a general warning of infection (not alveolar osteitis), inferior alveolar nerve damage (temporary and permanent) and lingual nerve damage (temporary and permanent). There is significant variability among Australian and New Zealand oral and maxillofacial surgeons regarding risk disclosure for third molar tooth extractions. We aim to improve consistency in consent for third molar extractions by developing an evidence-based consent form. © 2016 Australian Dental Association.

  8. Informed consent recall and comprehension in orthodontics: traditional vs improved readability and processability methods.

    Science.gov (United States)

    Kang, Edith Y; Fields, Henry W; Kiyak, Asuman; Beck, F Michael; Firestone, Allen R

    2009-10-01

    Low general and health literacy in the United States means informed consent documents are not well understood by most adults. Methods to improve recall and comprehension of informed consent have not been tested in orthodontics. The purposes of this study were to evaluate (1) recall and comprehension among patients and parents by using the American Association of Orthodontists' (AAO) informed consent form and new forms incorporating improved readability and processability; (2) the association between reading ability, anxiety, and sociodemographic variables and recall and comprehension; and (3) how various domains (treatment, risk, and responsibility) of information are affected by the forms. Three treatment groups (30 patient-parent pairs in each) received an orthodontic case presentation and either the AAO form, an improved readability form (MIC), or an improved readability and processability (pairing audio and visual cues) form (MIC + SS). Structured interviews were transcribed and coded to evaluate recall and comprehension. Significant relationships among patient-related variables and recall and comprehension explained little of the variance. The MIC + SS form significantly improved patient recall and parent recall and comprehension. Recall was better than comprehension, and parents performed better than patients. The MIC + SS form significantly improved patient treatment comprehension and risk recall and parent treatment recall and comprehension. Patients and parents both overestimated their understanding of the materials. Improving the readability of consent materials made little difference, but combining improved readability and processability benefited both patients' recall and parents' recall and comprehension compared with the AAO form.

  9. Liquid–liquid extraction combined with differential isotope dimethylaminophenacyl labeling for improved metabolomic profiling of organic acids

    International Nuclear Information System (INIS)

    Peng, Jun; Li, Liang

    2013-01-01

    Graphical abstract: -- Highlights: •An improved method for profiling the carboxylic acid sub-metabolome is reported. •Liquid–liquid extraction was used for separating the organic acids from the amines. •¹²C/¹³C-p-dimethylaminophenacyl (DmPA) labeling of the organic acids was carried out on the extract. •Detection interference by amines and labeling efficiency reduction by water were reduced. •About 2500 ¹²C/¹³C-peak pairs or putative metabolites could be detected from 20 μL of human urine. -- Abstract: A large fraction of the known human metabolome belongs to organic acids. However, comprehensive profiling of the organic acid sub-metabolome is a major analytical challenge. In this work, we report an improved method for detecting organic acid metabolites. This method is based on the use of liquid–liquid extraction (LLE) to selectively extract the organic acids, followed by differential isotope p-dimethylaminophenacyl (DmPA) labeling of the acid metabolites. The ¹²C-/¹³C-labeled samples are analyzed by liquid chromatography Fourier-transform ion cyclotron resonance mass spectrometry (LC–FTICR–MS). It is shown that this LLE DmPA labeling method offers superior performance over the method of direct DmPA labeling of biofluids such as human urine. LLE of organic acids reduces the interference of amine-containing metabolites that may also react with DmPA. It can also remove water in a biofluid that can reduce the labeling efficiency. Using human urine as an example, it is demonstrated that about 2500 peak pairs or putative metabolites could be detected in a 30-min gradient LC–MS run, which is about 3 times more than that detected in a sample prepared using direct DmPA labeling. About 95% of the 1000 or so metabolites matched to the Human Metabolome Database (HMDB) are organic acids. It is further shown that this method can be used to handle as little as 10 μL of urine. We believe that this method opens the possibility of generating a

  10. Liquid–liquid extraction combined with differential isotope dimethylaminophenacyl labeling for improved metabolomic profiling of organic acids

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Jun; Li, Liang, E-mail: Liang.Li@ualberta.ca

    2013-11-25

    Graphical abstract: -- Highlights: •An improved method for profiling the carboxylic acid sub-metabolome is reported. •Liquid–liquid extraction was used for separating the organic acids from the amines. •¹²C/¹³C-p-dimethylaminophenacyl (DmPA) labeling of the organic acids was carried out on the extract. •Detection interference by amines and labeling efficiency reduction by water were reduced. •About 2500 ¹²C/¹³C-peak pairs or putative metabolites could be detected from 20 μL of human urine. -- Abstract: A large fraction of the known human metabolome belongs to organic acids. However, comprehensive profiling of the organic acid sub-metabolome is a major analytical challenge. In this work, we report an improved method for detecting organic acid metabolites. This method is based on the use of liquid–liquid extraction (LLE) to selectively extract the organic acids, followed by differential isotope p-dimethylaminophenacyl (DmPA) labeling of the acid metabolites. The ¹²C-/¹³C-labeled samples are analyzed by liquid chromatography Fourier-transform ion cyclotron resonance mass spectrometry (LC–FTICR–MS). It is shown that this LLE DmPA labeling method offers superior performance over the method of direct DmPA labeling of biofluids such as human urine. LLE of organic acids reduces the interference of amine-containing metabolites that may also react with DmPA. It can also remove water in a biofluid that can reduce the labeling efficiency. Using human urine as an example, it is demonstrated that about 2500 peak pairs or putative metabolites could be detected in a 30-min gradient LC–MS run, which is about 3 times more than that detected in a sample prepared using direct DmPA labeling. About 95% of the 1000 or so metabolites matched to the Human Metabolome Database (HMDB) are organic acids. It is further shown that this method can be used to handle as little as 10 μL of urine. We believe that this method opens the

  11. DEVELOPMENT OF AUTOMATIC EXTRACTION METHOD FOR ROAD UPDATE INFORMATION BASED ON PUBLIC WORK ORDER OUTLOOK

    Science.gov (United States)

    Sekimoto, Yoshihide; Nakajo, Satoru; Minami, Yoshitaka; Yamaguchi, Syohei; Yamada, Harutoshi; Fuse, Takashi

    Recently, the disclosure of statistical data representing the financial effects or burden of public work, through the web sites of national and local governments, has enabled discussion of macroscopic financial trends. However, it is still difficult to grasp nationwide how each location was changed by public work. In this research, our purpose is to reasonably collect the road update information that various road managers provide, in order to realize efficient updating of various maps such as car navigation maps. In particular, we develop a system that extracts the public works concerned and automatically registers summaries, including position information, in a database from the public work order outlooks released by each local government, combining several web mining technologies. Finally, we collect and register several tens of thousands of records from web sites all over Japan, and confirm the feasibility of our method.

  12. Passive Polarimetric Information Processing for Target Classification

    Science.gov (United States)

    Sadjadi, Firooz; Sadjadi, Farzad

    Polarimetric sensing is an area of active research in a variety of applications. In particular, the use of polarization diversity has been shown to improve performance in automatic target detection and recognition. Within the diverse scope of polarimetric sensing, the field of passive polarimetric sensing is of particular interest. This chapter presents several new methods for gathering information using such passive techniques. One method extracts three-dimensional (3D) information and surface properties using one or more sensors. Another method extracts scene-specific algebraic expressions that remain unchanged under polarization transformations (such as along the transmission path to the sensor).

  13. Extract the Relational Information of Static Features and Motion Features for Human Activities Recognition in Videos

    Directory of Open Access Journals (Sweden)

    Li Yao

    2016-01-01

    Full Text Available Both static features and motion features have shown promising performance in human activity recognition tasks. However, the information included in these features is insufficient for complex human activities. In this paper, we propose extracting the relational information between static features and motion features for human activity recognition. The videos are represented by a classical Bag-of-Words (BoW) model which is useful in many works. To get a compact and discriminative codebook with small dimension, we employ a divisive algorithm based on KL-divergence to reconstruct the codebook. After that, to further capture strong relational information, we construct a bipartite graph to model the relationship between words of the different feature sets. Then we use a k-way partition to create a new codebook in which similar words are grouped together. With this new codebook, videos can be represented by a new BoW vector with strong relational information. Moreover, we propose a method to compute new clusters from the divisive algorithm's projective function. We test our work on several datasets and obtain very promising results.
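KL-divergence-based codebook reconstruction of the kind described above groups visual words whose class-conditional distributions are close. A toy sketch of the core similarity measure is shown below; the word names and distributions over activity classes are hypothetical, and the full divisive algorithm and bipartite k-way partition are not reproduced here.

```python
import math

def sym_kl(p, q, eps=1e-12):
    """Symmetrized KL divergence between two discrete distributions."""
    def kl(a, b):
        return sum(x * math.log((x + eps) / (y + eps)) for x, y in zip(a, b))
    return kl(p, q) + kl(q, p)

# Hypothetical P(activity class | visual word) for four codebook words.
words = {
    "w1": [0.70, 0.20, 0.10],
    "w2": [0.65, 0.25, 0.10],
    "w3": [0.10, 0.10, 0.80],
    "w4": [0.15, 0.05, 0.80],
}

# The most similar pair of words are candidates to merge into one codeword.
pairs = [(sym_kl(p, q), a, b)
         for a, p in words.items() for b, q in words.items() if a < b]
dist, a, b = min(pairs)
print(a, b)
```

Repeatedly merging (or, in the divisive variant, splitting) on this criterion yields a smaller codebook whose words remain discriminative for the activity classes.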

  14. Automatic River Network Extraction from LIDAR Data

    Science.gov (United States)

    Maderal, E. N.; Valcarcel, N.; Delgado, J.; Sevilla, C.; Ojeda, J. C.

    2016-06-01

    The National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) within the hydrography theme. The goal is to get an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: hydrological terrain model generation with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network); finally, production was launched. The key points of this work have been managing a big data environment of more than 160,000 LiDAR data files, together with the infrastructure to store (up to 40 Tb between results and intermediate files) and process the data, using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months. Also important were the software stability (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and, finally, the management of human resources. The result of this production has been an accurate automatic river network extraction for the whole country with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.
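The hydrological criterion mentioned above, flow accumulation, routes each DEM cell to its steepest-descent neighbour and counts how many cells drain through it; cells above a threshold become the river network. A minimal D8 sketch on a toy DEM follows; the function name and the tiny grid are illustrative assumptions, not IGN-ES's production code.

```python
def d8_flow_accumulation(dem):
    """Tiny D8 flow-accumulation sketch: each cell drains to its
    steepest-descent 8-neighbour; accumulation counts upstream cells."""
    rows, cols = len(dem), len(dem[0])
    acc = [[1] * cols for _ in range(rows)]   # each cell contributes itself
    # Process cells from highest to lowest so upstream cells finish first.
    order = sorted(((dem[r][c], r, c)
                    for r in range(rows) for c in range(cols)), reverse=True)
    for _, r, c in order:
        best_drop, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    drop = dem[r][c] - dem[nr][nc]
                    if drop > best_drop:
                        best_drop, target = drop, (nr, nc)
        if target:                             # pass accumulated flow downhill
            tr, tc = target
            acc[tr][tc] += acc[r][c]
    return acc

dem = [[9, 8, 7],
       [8, 6, 5],
       [7, 5, 3]]
print(d8_flow_accumulation(dem))  # the outlet corner collects all 9 cells
```

Production systems (e.g. on 2 m national grids) additionally fill sinks and resolve flat areas before routing, and thresholding `acc` then yields the flow accumulation river network.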

  15. AUTOMATIC RIVER NETWORK EXTRACTION FROM LIDAR DATA

    Directory of Open Access Journals (Sweden)

    E. N. Maderal

    2016-06-01

    Full Text Available The National Geographic Institute of Spain (IGN-ES) has launched a new production system for automatic river network extraction for the Geospatial Reference Information (GRI) within the hydrography theme. The goal is to get an accurate and updated river network, extracted as automatically as possible. For this, IGN-ES has full LiDAR coverage of the whole Spanish territory with a density of 0.5 points per square meter. To implement this work, the technical feasibility was validated and a methodology was developed to automate each production phase: hydrological terrain model generation with a 2 meter grid size, and river network extraction combining hydrographic criteria (topographic network) and hydrological criteria (flow accumulation river network); finally, production was launched. The key points of this work have been managing a big data environment of more than 160,000 LiDAR data files, together with the infrastructure to store (up to 40 Tb between results and intermediate files) and process the data, using local virtualization and Amazon Web Services (AWS), which allowed this automatic production to be completed within 6 months. Also important were the software stability (TerraScan-TerraSolid, GlobalMapper-Blue Marble, FME-Safe, ArcGIS-Esri) and, finally, the management of human resources. The result of this production has been an accurate automatic river network extraction for the whole country with a significant improvement in the altimetric component of the 3D linear vector. This article presents the technical feasibility, the production methodology, the automatic river network extraction production and its advantages over traditional vector extraction systems.

  16. ZnO nanorod array polydimethylsiloxane composite solid phase micro-extraction fiber coating: fabrication and extraction capability.

    Science.gov (United States)

    Wang, Dan; Wang, Qingtang; Zhang, Zhuomin; Chen, Guonan

    2012-01-21

ZnO nanorod array coating is a novel kind of solid-phase microextraction (SPME) fiber coating which shows good extraction capability due to its nanostructure, and preparing a composite coating is a good way to improve that capability further. In this paper, a ZnO nanorod array polydimethylsiloxane (PDMS) composite SPME fiber coating has been prepared and its extraction capability for volatile organic compounds (VOCs) has been studied by headspace sampling of a typical volatile mixed standard solution of benzene, toluene, ethylbenzene and xylene (BTEX). An improved detection limit and good linear ranges have been achieved for this composite SPME fiber coating. It is also found that the composite SPME fiber coating shows good extraction selectivity toward VOCs with alkane radicals.

  17. Brief biopsychosocially informed education can improve insurance workers' back pain beliefs: Implications for improving claims management behaviours

    OpenAIRE

    Beales, Darren; Mitchell, Tim; Pole, Naomi; Weir, James

    2016-01-01

    BACKGROUND: Biopsychosocially informed education is associated with improved back pain beliefs and positive changes in health care practitioners' practice behaviours. OBJECTIVE: Assess the effect of this type of education for insurance workers who are important non-clinical stakeholders in the rehabilitation of injured workers. METHODS: Insurance workers operating in the Western Australian workers' compensation system underwent two, 1.5 hour sessions of biopsychosocially informed education fo...

  18. Computer-based information management system for interventional radiology

    International Nuclear Information System (INIS)

    Forman, B.H.; Silverman, S.G.; Mueller, P.R.; Hahn, P.F.; Papanicolaou, N.; Tung, G.A.; Brink, J.A.; Ferrucci, J.T.

    1989-01-01

    The authors developed and implemented a computer-based information management system (CBIMS) for the integrated analysis of data from a variety of abdominal nonvascular interventional procedures. The CBIMS improved on their initial handwritten-card system (which listed only patient name, hospital number, and type of procedure) by capturing relevant patient data in an organized fashion and integrating the information for meaningful analysis. Advantages of CBIMS include enhanced compilation of the monthly census, easy access to a patient's interventional history, and a flexible querying capability that allows easy extraction of subsets of information from the patient database.
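The flexible querying credited to CBIMS can be illustrated with a toy relational sketch (the schema, names, and rows are invented for illustration; the record does not describe the original system's design):

```python
import sqlite3

# Toy stand-in for the CBIMS procedure log (all names are illustrative)
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE procedures (
    patient TEXT, hospital_no TEXT, procedure TEXT, performed TEXT)""")
con.executemany("INSERT INTO procedures VALUES (?, ?, ?, ?)", [
    ("Doe, J",   "1001", "abscess drainage", "1989-01-12"),
    ("Doe, J",   "1001", "biliary drainage", "1989-02-03"),
    ("Smith, A", "1002", "nephrostomy",      "1989-02-17"),
])

# A patient's interventional history ...
history = con.execute(
    "SELECT procedure, performed FROM procedures "
    "WHERE hospital_no = ? ORDER BY performed", ("1001",)).fetchall()

# ... and a monthly census, neither of which a card file can answer quickly
census = con.execute(
    "SELECT strftime('%Y-%m', performed) AS month, COUNT(*) "
    "FROM procedures GROUP BY month ORDER BY month").fetchall()
print(history)
print(census)
```

Both queries extract a subset of the patient database in one statement, which is the kind of capability the abstract contrasts with the handwritten-card system.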

  19. Improved Antitumoral Activity of Extracts Derived from Cultured ...

    African Journals Online (AJOL)

    Antiproliferative activity was assayed in four cancer cell lines (Hep-2, HeLa, SiHa, and KB) while cytotoxic activity was evaluated on a normal cell line (MDCK). Results: The 10-day cultivation organic extract exhibited increased antiproliferative activity compared with the control on human nasopharyngeal carcinoma (KB) and ...

  20. Information Technology (IT) Improvement at PT. Sumber Alfaria Trijaya

    Directory of Open Access Journals (Sweden)

    Marisa Karsen

    2013-06-01

    Full Text Available The retail industry is the second largest industry after agriculture in terms of employment absorption in Indonesia. The situation of this quite dynamic industry is marked by the development of modern retail trade and its impact on traditional markets and suppliers. PT Sumber Alfaria Trijaya, known as Alfamart, is one of the best retail companies in Indonesia. It already uses Supply Chain Management and B2B to support its operations. Alfamart also has its own website which provides information about products, outlets, services, and promotions. This research discusses IT improvement. The purpose of this paper is to improve Alfamart's IT performance and to innovate on IT to increase customer satisfaction. The methodology used is to define the problem, measure, analyze the problem, identify the improvements required by Alfamart, and control to monitor the implementation. Problems are identified using SWOT analysis, problem clarification, and the business model canvas. After analyzing the problems, solution hypotheses and IT improvements are recommended for Alfamart.

  1. Cyanidin-3-O-galactoside and blueberry extracts supplementation improves spatial memory and regulates hippocampal ERK expression in senescence-accelerated mice.

    Science.gov (United States)

    Tan, Long; Yang, Hong Peng; Pang, Wei; Lu, Hao; Hu, Yan Dan; Li, Jing; Lu, Shi Jun; Zhang, Wan Qi; Jiang, Yu Gang

    2014-03-01

    To investigate whether antioxidation and regulation of the Extracellular Regulated Protein Kinase (ERK) signaling pathway are involved in the protective effects of blueberry on the central nervous system, 30 senescence-accelerated mouse prone 8 (SAMP8) mice were divided into three groups and treated with a normal diet, blueberry extracts (200 mg/kg•bw/day) or cyanidin-3-O-galactoside (Cy-3-GAL) (50 mg/kg•bw/day) from blueberry for 8 weeks. 10 SAMR1 mice were set as the control group. The capacity for spatial memory was assessed by a passive avoidance task and the Morris water maze. Histological analyses of the hippocampus were completed. Malondialdehyde (MDA) levels, Superoxide Dismutase (SOD) activity and the expression of ERK were detected. Both Cy-3-GAL and blueberry extracts were shown to effectively relieve cellular injury, improve hippocampal neuron survival and inhibit damage to the pyramidal cell layer. Cy-3-GAL and blueberry extracts also increased SOD activity and reduced MDA content in brain tissues and plasma, and increased hippocampal phosphorylated ERK (p-ERK) expression in SAMP8 mice. Furthermore, the passive avoidance task showed that both the latency time and the number of errors were improved by Cy-3-GAL treatment, and the Morris water maze showed significant decreases in latency with Cy-3-GAL and blueberry extract treatment on day 4. Blueberry extracts may reverse the declines of cognitive and behavioral function in the ageing process through several pathways, including enhancing antioxidant capacity and altering stress signaling. Cy-3-GAL may be an important active ingredient for these biological effects. Copyright © 2014 The Editorial Board of Biomedical and Environmental Sciences. Published by China CDC. All rights reserved.

  2. An improved contour symmetry axes extraction algorithm and its application in the location of picking points of apples

    Energy Technology Data Exchange (ETDEWEB)

    Wang, D.; Song, H.; Yu, X.; Zhang, W.; Qu, W.; Xu, Y.

    2015-07-01

    The key problem for picking robots is to locate the picking points of fruit. A method based on the moment of inertia and symmetry of apples is proposed in this paper to locate the picking points of apples. Image pre-processing procedures, which are crucial to improving the accuracy of the location, were carried out to remove noise and smooth the edges of apples. The moment of inertia method has the disadvantage of high computational complexity, so the convex hull was used to alleviate this problem. To verify the validity of this algorithm, a test was conducted using four types of apple images containing 107 apple targets. These images were single and unblocked apple images, single and blocked apple images, images containing adjacent apples, and apples in panoramas. The root mean square error values of these four types of apple images were 6.3, 15.0, 21.6 and 18.4, respectively, and the average location errors were 4.9°, 10.2°, 16.3° and 13.8°, respectively. Furthermore, the improved algorithm was effective in terms of average runtime, with 3.7 ms and 9.2 ms for single and unblocked and single and blocked apple images, respectively. For the other two types of apple images, the runtime was determined by the number of apples and blocked apples contained in the images. The results showed that the improved algorithm could extract symmetry axes and locate the picking points of apples more efficiently. In conclusion, the improved algorithm is feasible for extracting symmetry axes and locating the picking points of apples. (Author)
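The orientation computed by a moment-of-inertia method can be sketched from second-order image moments of a binary region (a generic illustration; the paper's convex-hull speed-up and apple-specific processing are not reproduced here):

```python
import numpy as np

def principal_axis(mask):
    """Orientation (radians) of the axis of least second moment
    of a binary region, from its central image moments."""
    ys, xs = np.nonzero(mask)
    x0, y0 = xs.mean(), ys.mean()
    mu20 = ((xs - x0) ** 2).mean()
    mu02 = ((ys - y0) ** 2).mean()
    mu11 = ((xs - x0) * (ys - y0)).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

# Elongated synthetic blob tilted roughly 45 degrees
mask = np.zeros((32, 32), dtype=bool)
for i in range(4, 28):
    mask[i, i] = True
    mask[i, i + 1] = True
theta = principal_axis(mask)
print(round(np.degrees(theta), 1))
```

For a symmetric fruit silhouette this principal axis approximates the symmetry axis along which a picking point can then be located.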

  3. Pressurized Hot Water Extraction of anthocyanins from red onion: A study on extraction and degradation rates

    Energy Technology Data Exchange (ETDEWEB)

    Petersson, Erik V.; Liu Jiayin; Sjoeberg, Per J.R.; Danielsson, Rolf [Uppsala University, Department of Physical and Analytical Chemistry, P.O. Box 599, SE-751 24, Uppsala (Sweden); Turner, Charlotta, E-mail: Charlotta.Turner@kemi.uu.se [Uppsala University, Department of Physical and Analytical Chemistry, P.O. Box 599, SE-751 24, Uppsala (Sweden)

    2010-03-17

    Pressurized Hot Water Extraction (PHWE) is a quick, efficient and environmentally friendly technique for extractions. However, when using PHWE to extract thermally unstable analytes, extraction and degradation effects occur at the same time, and thereby compete. At first, the extraction effect dominates, but degradation effects soon take over. In this paper, extraction and degradation rates of anthocyanins from red onion were studied with experiments in a static batch reactor at 110 °C. A total extraction curve was calculated with data from the actual extraction and degradation curves, showing that more anthocyanins (21-36%, depending on the species) could be extracted if no degradation occurred, but then longer extraction times would be required than those needed to reach the peak level in the apparent extraction curves. The results give information about the different kinetic processes competing during an extraction procedure.
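The competition the authors describe can be illustrated with two consecutive first-order steps, matrix → solvent → degraded (the rate constants and amounts below are invented for illustration, not the paper's fitted values):

```python
import numpy as np

# Consecutive first-order kinetics: matrix --k_e--> solvent --k_d--> degraded.
# The amount in solvent (the apparent extraction curve) follows the classic
# consecutive-reaction expression; the cumulative curve ignores degradation.
k_e, k_d, a0 = 0.12, 0.05, 100.0           # per-minute rates, initial amount
t = np.linspace(0, 60, 601)
apparent = a0 * k_e / (k_d - k_e) * (np.exp(-k_e * t) - np.exp(-k_d * t))
total = a0 * (1 - np.exp(-k_e * t))        # recovery if nothing degraded

# Peak of the apparent curve; analytically ln(k_e/k_d)/(k_e - k_d) ~ 12.5 min
t_peak = t[np.argmax(apparent)]
print(round(t_peak, 1), round(apparent.max(), 1), round(total[np.argmax(apparent)], 1))
```

The apparent curve peaks well below the no-degradation total, mirroring the paper's finding that more anthocyanins could be recovered if degradation did not occur, but only at longer extraction times.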

  4. Improving information density in ferroelectric polymer films by using nanoimprinted gratings

    Science.gov (United States)

    Martínez-Tong, Daniel E.; Soccio, Michela; Rueda, Daniel R.; Nogales, Aurora; García-Gutiérrez, Mari Cruz; Ezquerra, Tiberio A.

    2015-03-01

    The development of polymer non-volatile memories depends on the effective fabrication of devices with a high density of information. Well-defined low aspect ratio nanogratings on thin films of poly(vinylidene fluoride-trifluoroethylene) copolymers can be fabricated by using Nanoimprint Lithography (NIL). By using these nanogratings, improved management of writing and reading information can be achieved, as revealed by Piezoresponse Force Microscopy (PFM). Structural investigation by means of Grazing Incidence X-ray (GIX) scattering techniques indicates that the physical confinement generated by nanoimprint promotes the development of smaller and edge-on oriented crystals. Our results show that one-dimensional nanostructuring can be a straightforward approach to improve the control of the polarization in ferroelectric polymer thin films.

  5. Handling Internet-Based Health Information: Improving Health Information Web Site Literacy Among Undergraduate Nursing Students.

    Science.gov (United States)

    Wang, Weiwen; Sun, Ran; Mulvehill, Alice M; Gilson, Courtney C; Huang, Linda L

    2017-02-01

    Patient care problems arise when health care consumers and professionals find health information on the Internet because that information is often inaccurate. To mitigate this problem, nurses can develop Web literacy and share that skill with health care consumers. This study evaluated a Web-literacy intervention for undergraduate nursing students to find reliable Web-based health information. A pre- and postsurvey queried undergraduate nursing students in an informatics course; the intervention comprised lecture, in-class practice, and assignments about health Web site evaluation tools. Data were analyzed using Wilcoxon signed-rank tests and ANOVA. Pre-intervention, 75.9% of participants reported using Web sites to obtain health information. Postintervention, 87.9% displayed confidence in using an evaluation tool. Both the ability to critique health Web sites (p = .005) and confidence in finding reliable Internet-based health information (p = .058) increased. Web-literacy education guides nursing students to find, evaluate, and use reliable Web sites, which improves their ability to deliver safer patient care. [J Nurs Educ. 2017;56(2):110-114.]. Copyright 2017, SLACK Incorporated.
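The paired pre/post comparison reported here rests on the Wilcoxon signed-rank statistic, which can be computed directly (the confidence scores below are invented for illustration, not the study's data):

```python
def wilcoxon_w(pre, post):
    """Wilcoxon signed-rank statistic for paired samples: rank the non-zero
    |differences|, then W = min(rank sum of positive diffs, of negative diffs)."""
    diffs = [b - a for a, b in zip(pre, post) if b != a]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):                 # average ranks over tied |diffs|
        j = i
        while j < len(ranked) and abs(diffs[ranked[j]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j + 1) / 2              # mean of ranks i+1 .. j
        for k in ranked[i:j]:
            ranks[k] = avg
        i = j
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)

# Hypothetical confidence scores (1-5) before and after the session
pre  = [2, 3, 2, 4, 3, 2, 3]
post = [4, 4, 3, 4, 4, 3, 5]
print(wilcoxon_w(pre, post))   # → 0.0 (every non-tied pair improved)
```

A statistic near zero, as here, indicates that nearly all changes point in the same direction, which is what a significant pre/post improvement looks like in this test.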

  6. Hyaluronic Acid Improves Bone Formation in Extraction Sockets With Chronic Pathology: A Pilot Study in Dogs.

    Science.gov (United States)

    Kim, Jung-Ju; Song, Hyun Young; Ben Amara, Heithem; Kyung-Rim, Kang; Koo, Ki-Tae

    2016-07-01

    Previous studies of ridge preservation in fresh extraction sockets using graft materials have reported a delay in the tissue modeling and remodeling phases. The objective of this study is to evaluate the effect of hyaluronic acid (HA) on the healing of infected sockets. Six beagle dogs were used in this study. Both mandibular third premolars were hemisected, and the distal roots were extracted. Subsequently, periodontal and endodontic lesions were induced at the remaining mesial root. After communication of the periodontal lesion, an endodontic periapical lesion was observed at 4 months, and the mesial roots of both the right and left sides were extracted. HA was applied into the socket of the test group, and no treatment was administered to the other group (control group). Three months after extraction of the mesial roots, the dogs were sacrificed, and histologic evaluations were performed. The sockets were filled by mineralized bone (47.80% ± 6.60%) and bone marrow (50.47% ± 6.38%) in the control group, whereas the corresponding values were 63.29% ± 9.78% and 34.73% ± 8.97% for the test group, respectively. There was a statistically significant difference between the groups. Reversal lines and a copious lineup of osteoblasts were observed in the middle and apical parts of the sockets in the test group. An infected socket shows delayed healing of the socket wound, and HA, because of its osteoinductive, bacteriostatic, and anti-inflammatory properties, may improve bone formation and accelerate wound healing in infected sockets.

  7. A Probabilistic Approach for Breast Boundary Extraction in Mammograms

    Directory of Open Access Journals (Sweden)

    Hamed Habibi Aghdam

    2013-01-01

    Full Text Available The extraction of the breast boundary is crucial for further analysis of mammograms. Methods to extract the breast boundary can be classified into two categories: methods based on image processing techniques and those based on models. The former use image transformation techniques such as thresholding, morphological operations, and region growing. In the second category, the boundary is extracted using more advanced techniques, such as the active contour model. The problem with thresholding methods is that it is hard to automatically find the optimal threshold value using histogram information. On the other hand, active contour models require defining a starting point close to the actual boundary to be able to successfully extract the boundary. In this paper, we propose a probabilistic approach to address the aforementioned problems. In our approach we use local binary patterns to describe the texture around each pixel. In addition, the smoothness of the boundary is handled by using a new probability model. Experimental results show that the proposed method achieves 38% and 50% improvements with respect to the results obtained by the active contour model and threshold-based methods respectively, and it increases the stability of the boundary extraction process up to 86%.
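The texture descriptor named in the approach is the local binary pattern; a minimal 8-neighbour version can be sketched as follows (this is the generic LBP operator, not the authors' full probability model):

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbour local binary pattern: each interior pixel gets a
    byte whose bits record which neighbours are >= the centre value
    (clockwise from the top-left neighbour)."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    out = np.zeros_like(img, dtype=np.uint8)
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                if img[r + dr, c + dc] >= img[r, c]:
                    code |= 1 << bit
            out[r, c] = code
    return out

# Bright region above-right, dark below-left: bits 0-3 set, bits 4-7 clear
img = np.array([[9, 9, 9],
                [1, 5, 9],
                [1, 1, 1]])
print(lbp_image(img)[1, 1])   # → 15
```

The resulting codes summarize the local texture around each pixel, which is the feature the proposed method feeds into its boundary probability model.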

  8. [Improvement of Stress Resistance and Quality of Life of Adults with Nervous Restlessness after Treatment with a Passion Flower Dry Extract].

    Science.gov (United States)

    Gibbert, Judith; Kreimendahl, Fabian; Lebert, Jennifer; Rychlik, Reinhard; Trompetter, Inga

    The passion flower dried ethanolic extract investigated in this non-interventional study has well-documented calmative effects and good tolerability. We investigated the effects of this extract on the stress resistance (resilience) and quality of life (QoL) of patients suffering from nervous restlessness. The addiction potential of the drug and the course of symptoms were also evaluated. Adult patients aged ≤ 95 years with the diagnosis 'nervous restlessness' were treated for 12 weeks with a dried ethanolic extract of passion flower (Passiflora incarnata L.). Standardized questionnaires were used to evaluate resilience (RS-13), QoL (EQ-5D including EQ-VAS), and addiction potential (BDEPQ). After 12 weeks of treatment, significant improvements were observed. The passion flower extract investigated in the present study appears to be effective in improving resilience and QoL in patients suffering from nervous restlessness and is well tolerated. © 2017 S. Karger GmbH, Freiburg.

  9. High-Resolution Remote Sensing Image Building Extraction Based on Markov Model

    Science.gov (United States)

    Zhao, W.; Yan, L.; Chang, Y.; Gong, L.

    2018-04-01

    With the increase of resolution, remote sensing images have the characteristics of increased information load, increased noise, and more complex feature geometry and texture information, which makes the extraction of building information more difficult. To solve this problem, this paper designs a high-resolution remote sensing image building extraction method based on a Markov model. This method introduces Contourlet domain map clustering and a Markov model, captures and enhances the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" in the building area. Through multi-scale segmentation and extraction of image features, fine extraction from the building area down to the individual building is realized. Experiments show that this method can suppress the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture information, and remove shadows, vegetation and other pseudo-building information; compared with traditional pixel-level image information extraction, it achieves better building extraction precision, accuracy and completeness.

  10. Linking attentional processes and conceptual problem solving: visual cues facilitate the automaticity of extracting relevant information from diagrams.

    Science.gov (United States)

    Rouinfar, Amy; Agra, Elise; Larson, Adam M; Rebello, N Sanjay; Loschky, Lester C

    2014-01-01

    This study investigated links between visual attention processes and conceptual problem solving. This was done by overlaying visual cues on conceptual physics problem diagrams to direct participants' attention to relevant areas to facilitate problem solving. Participants (N = 80) individually worked through four problem sets, each containing a diagram, while their eye movements were recorded. Each diagram contained regions that were relevant to solving the problem correctly and separate regions related to common incorrect responses. Problem sets contained an initial problem, six isomorphic training problems, and a transfer problem. The cued condition saw visual cues overlaid on the training problems. Participants' verbal responses were used to determine their accuracy. This study produced two major findings. First, short-duration visual cues which draw attention to solution-relevant information, and aid in organizing and integrating it, facilitate both immediate problem solving and generalization of that ability to new problems. Thus, visual cues can facilitate re-representing a problem and overcoming impasse, enabling a correct solution. Importantly, these cueing effects on problem solving did not involve the solvers' attention necessarily embodying the solution to the problem, but were instead caused by solvers attending to and integrating relevant information in the problems into a solution path. Second, this study demonstrates that when such cues are used across multiple problems, solvers can automatize the extraction of problem-relevant information. These results suggest that low-level attentional selection processes provide a necessary gateway for relevant information to be used in problem solving, but are generally not sufficient for correct problem solving. Instead, factors that lead a solver out of an impasse and to organize and integrate problem information also greatly facilitate arriving at correct solutions.

  11. A METHOD OF EXTRACTING SHORELINE BASED ON SEMANTIC INFORMATION USING DUAL-WAVELENGTH LiDAR DATA

    Directory of Open Access Journals (Sweden)

    C. Yao

    2017-09-01

    Full Text Available A shoreline is a spatially varying separation between water and land. By utilizing dual-wavelength LiDAR point data together with the semantic information that a shoreline often appears beyond the water surface profile and is observable on the beach, the paper generates the shoreline. The details are as follows: (1) Obtain the water surface profile: first we obtain the water surface by roughly selecting water points based on several features of water bodies, then apply a least-squares fitting method to get the whole water trend surface. We then get the ground surface connected to the underwater surface by both the TIN progressive filtering method and a surface interpolation method. After that, the two fitted surfaces are intersected to get the water surface profile of the island. (2) Obtain the sandy beach: we grid all points and select the water-surface-profile grid points as seeds, then extract sandy beach points based on an eight-neighborhood method and point features, which yields all sandy beaches. (3) Get the island shoreline: first we get the sandy beach shoreline based on intensity information, using a threshold value to distinguish wet and dry areas; this gives the shoreline of several sandy beaches. Since, to some extent, the shoreline has the same height values within a small area, all the sandy shoreline points are used to fit a plane P, and the intersection line of the ground surface and the shoreline plane P can be regarded as the island shoreline. Comparison with a surveyed shoreline shows that the proposed method can successfully extract the shoreline.
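The plane P in step (3) is a least-squares fit through the sandy shoreline points; a minimal sketch (with synthetic points and invented coefficients, not the paper's survey data) is:

```python
import numpy as np

# Least-squares plane z = ax + by + c through sampled shoreline points,
# mirroring the step where beach shoreline points define plane P.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(200, 2))            # planimetric coordinates (m)
z_true = 0.002 * xy[:, 0] - 0.001 * xy[:, 1] + 1.2 # near-flat tidal plane
z = z_true + rng.normal(0, 0.01, 200)              # LiDAR height noise

A = np.column_stack([xy, np.ones(len(xy))])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
print(round(c, 2))
```

Intersecting this fitted plane with the interpolated ground surface then traces the shoreline contour, as the abstract describes.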

  12. Efficacy of standardized extract of Hibiscus sabdariffa L. (Malvaceae) in improving iron status of adults in malaria endemic area: A randomized controlled trial.

    Science.gov (United States)

    Peter, Emanuel L; Rumisha, Susan F; Mashoto, Kijakazi O; Minzi, Omary Ms; Mfinanga, Sayoki

    2017-09-14

    The indigenous community of Mkuranga district has been using an aqueous extract of H. sabdariffa L. for treating anemia. However, there have been neither safety nor efficacy studies to validate this medicinal product in anemia. The purpose of this study was to establish the efficacy and safety of a standardized aqueous extract of H. sabdariffa L. in anemic adults. This was a randomized controlled clinical trial in which 130 adult men and women aged 18-50 years were involved after meeting the inclusion criteria. Initially, a standardized aqueous extract of H. sabdariffa L. was prepared using optimized extraction parameters. Stratified randomization was used to randomize participants into four fixed-dose groups. The first group received an oral dose of 1000 ml while the second group was randomized to receive 1500 ml orally. The last two groups were given a dose of 2000 ml of extract and a 200 mg ferrous sulphate tablet, respectively. The primary endpoint was the actual change in iron status indicators at the end of the 30-day follow-up period as compared to those recorded at baseline. Adverse effects were assessed at every 10th-day scheduled visit. In all arms, Hb and hematopoietic parameters were measured using a HemoCue hemoglobinometer® (HemoCue, Ängelholm, Sweden) and a hematology analyzer®, respectively, at the trial site. Follow-up was done for 30 days. A total of 82 participants were included in the analysis. The standardized aqueous extract of H. sabdariffa L. did not improve iron status in anemic adults in a malaria endemic region (P>0.005). However, there was evidence to support the safety of the extract for human consumption as a herbal supplement. The iron and organic acid contents of H. sabdariffa L. extract showed the potential of improving hematopoietic parameters. Studies with a bigger sample size are therefore needed to establish the efficacy of the extract when concurrently used with malaria chemoprophylaxis in malaria endemic areas. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.

  13. Recent solvent extraction experience at Savannah River

    International Nuclear Information System (INIS)

    Gray, L.W.; Burney, G.A.; Gray, J.H.; Hodges, M.E.; Holt, D.L.; Macafee, I.M.; Reif, D.J.; Shook, H.E.

    1986-01-01

    Tributyl phosphate-based solvent extraction processes have been used at Savannah River for more than 30 years to separate and purify thorium, uranium, neptunium, plutonium, americium, and curium isotopes. This report summarizes the advancement of solvent extraction technology at Savannah River during the 1980s. Topics that are discussed include equipment improvements, solvent treatment, waste reduction, and an improved understanding of the various chemistries in the process streams entering, within, and leaving the solvent extraction processes.

  14. Strawberry (cv. Romina) Methanolic Extract and Anthocyanin-Enriched Fraction Improve Lipid Profile and Antioxidant Status in HepG2 Cells

    Directory of Open Access Journals (Sweden)

    Tamara Y. Forbes-Hernández

    2017-05-01

    Full Text Available Dyslipidemia and oxidation of low density lipoproteins (LDL are recognized as critical factors in the development of atherosclerosis. Healthy dietary patterns, with abundant fruit and vegetable consumption, may prevent the onset of these risk factors due to the presence of phytochemical compounds. Strawberries are known for their high content of polyphenols; among them, flavonoids are the major constituents, and it is presumed that they are responsible for the biological activity of the fruit. Nevertheless, there are only a few studies that actually evaluate the effects of different fractions isolated from strawberries. In order to assess the effects of two different strawberry extracts (whole methanolic extract/anthocyanin-enriched fraction on the lipid profile and antioxidant status in human hepatocellular carcinoma (HepG2 cells, the triglycerides and LDL-cholesterol content, lipid peroxidation, intracellular reactive oxygen species (ROS content and antioxidant enzymes’ activity on cell lysates were determined. Results demonstrated that both strawberry extracts not only improved the lipid metabolism by decreasing triglycerides and LDL-cholesterol contents, but also improved the redox state of HepG2 cells by modulating thiobarbituric acid-reactive substances production, antioxidant enzyme activity and ROS generation. The observed effects were more pronounced for the anthocyanin-enriched fraction.

  15. Improving the quality of cancer care in America through health information technology.

    Science.gov (United States)

    Feeley, Thomas W; Sledge, George W; Levit, Laura; Ganz, Patricia A

    2014-01-01

    A recent report from the Institute of Medicine titled Delivering High-Quality Cancer Care: Charting a New Course for a System in Crisis, identifies improvement in information technology (IT) as essential to improving the quality of cancer care in America. The report calls for implementation of a learning healthcare IT system: a system that supports patient-clinician interactions by providing patients and clinicians with the information and tools necessary to make well informed medical decisions and to support quality measurement and improvement. While some elements needed for a learning healthcare system are already in place for cancer, they are incompletely implemented, have functional deficiencies, and are not integrated in a way that creates a true learning healthcare system. To achieve the goal of a learning cancer care delivery system, clinicians, professional organizations, government, and the IT industry will have to partner, develop, and incentivize participation. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  16. Information Extraction From Chemical Patents

    Directory of Open Access Journals (Sweden)

    Sandra Bergmann

    2012-01-01

    Full Text Available The development of new chemicals or pharmaceuticals is preceded by an in-depth analysis of published patents in this field. This information retrieval is a costly and time-inefficient step when done by a human reader, yet it is mandatory for the potential success of an investment. The goal of the research project UIMA-HPC is to automate, and hence speed up, the process of knowledge mining about patents. Multi-threaded analysis engines, developed according to UIMA (Unstructured Information Management Architecture) standards, process texts and images in thousands of documents in parallel. UNICORE (UNiform Interface to COmputing Resources) workflow control structures make it possible to dynamically allocate resources for every given task to gain the best CPU-time/realtime ratios in an HPC environment.
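The parallel per-document analysis described above can be sketched generically with a thread pool (this does not model the UIMA or UNICORE APIs; the document IDs and the toy annotator are invented):

```python
from concurrent.futures import ThreadPoolExecutor

# Generic stand-in for a multi-threaded analysis engine: each worker runs a
# small annotation step over one "document" in parallel.
PATENTS = {
    "EP100": "a catalyst comprising platinum on alumina",
    "EP101": "aqueous extraction of anthocyanin pigments",
    "EP102": "platinum electrode for fuel cells",
}

def annotate(item):
    doc_id, text = item
    # Toy entity-spotting step: flag documents mentioning a target compound
    return doc_id, "platinum" in text

with ThreadPoolExecutor(max_workers=4) as pool:
    hits = sorted(doc_id for doc_id, found in pool.map(annotate, PATENTS.items()) if found)
print(hits)   # → ['EP100', 'EP102']
```

A real pipeline would replace the toy annotator with chained analysis engines and scale the worker count to the HPC resources a workflow system allocates.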

  17. Information technology as a tool to improve the quality of American Indian health care.

    Science.gov (United States)

    Sequist, Thomas D; Cullen, Theresa; Ayanian, John Z

    2005-12-01

    The American Indian/Alaska Native population experiences a disproportionate burden of disease across a spectrum of conditions. While the recent National Healthcare Disparities Report highlighted differences in quality of care among racial and ethnic groups, there was only very limited information available for American Indians. The Indian Health Service (IHS) is currently enhancing its information systems to improve the measurement of health care quality as well as to support quality improvement initiatives. We summarize current knowledge regarding health care quality for American Indians, highlighting the variation in reported measures in the existing literature. We then discuss how the IHS is using information systems to produce standardized performance measures and present future directions for improving American Indian health care quality.

  18. Improving CANDU plant operation and maintenance through retrofit information technology systems

    International Nuclear Information System (INIS)

    Lupton, L. R.; Judd, R. A.

    1998-01-01

    CANDU plant owners are facing an increasingly competitive environment for the generation of electricity. To meet this challenge, all owners have identified that information technology offers opportunities for significant improvements in CANDU operation, maintenance and administration (OM and A) costs. Targeted information technology application areas include instrumentation and control, engineering, construction, operations and plant information management. These opportunities also pose challenges and issues that must be addressed if the full benefits of the advances in information technology are to be achieved. Key among these are system hardware and software maintenance, and obsolescence protection. AECL has been supporting CANDU stations with the initial development and evaluation of systems to improve plant performance and cost. Five key initiatives that have been implemented, or are in the process of being implemented, in some CANDU plants to achieve operational benefits include: critical safety parameter monitor system; advanced computerized annunciation system; plant historical data system; plant display system; and digital protection system. Each system will be described in terms of its role in enhancing current CANDU plant performance and how it will contribute to future CANDU plant performance. (author). 8 refs., 3 figs

  19. An innovative method for extracting isotopic information from low-resolution gamma spectra

    International Nuclear Information System (INIS)

    Miko, D.; Estep, R.J.; Rawool-Sullivan, M.W.

    1998-01-01

    A method is described for the extraction of isotopic information from attenuated gamma ray spectra using the gross-count material basis set (GC-MBS) model. This method solves for the isotopic composition of an unknown mixture of isotopes attenuated through an absorber of unknown material. For binary isotopic combinations the problem is nonlinear in only one variable and is easily solved using standard line optimization techniques. Results are presented for NaI spectrum analyses of various binary combinations of enriched uranium, depleted uranium, low burnup Pu, 137Cs, and 133Ba attenuated through a suite of absorbers ranging in Z from polyethylene through lead. The GC-MBS method results are compared to those computed using ordinary response function fitting and with a simple net peak area method. The GC-MBS method was found to be significantly more accurate than the other methods over the range of absorbers and isotopic blends studied.

  20. Radiological information systems: improvements in service, economy, and quality control?

    International Nuclear Information System (INIS)

    Gross-Fengels, W.; Weber, M.

    1997-01-01

    By means of complete service control and standardized accounting processes, radiological information systems clearly contribute to improved results. They provide the prerequisites for the establishment of expanded networks and allow comparisons with comparable institutions. The quality of patient care can be improved since, for example, the production time from referral to finished result becomes shorter. Direct access to patient and findings data from several positions is possible. Preliminary results can be viewed immediately. The patient's history is accessible to authorized users at all times. The exact reproducibility and assignment of services leads to more clarity. By means of the information available from the RIS, rapid adaptive processes can be undertaken. The system assists users in fulfilling the requirements of health regulations. The above-mentioned relationships demonstrate that the EDP systems are well accepted by physicians, medical assistants, and administrators and represent an indispensable aid for solving problems. (orig.) [de]

  1. Extractive text summarization system to aid data extraction from full text in systematic review development.

    Science.gov (United States)

    Bui, Duy Duc An; Del Fiol, Guilherme; Hurdle, John F; Jonnalagadda, Siddhartha

    2016-12-01

    Extracting data from publication reports is a standard process in systematic review (SR) development. However, the data extraction process still relies heavily on manual effort, which is slow, costly, and subject to human error. In this study, we developed a text summarization system aimed at enhancing productivity and reducing errors in the traditional data extraction process. We developed a computer system that used machine learning and natural language processing approaches to automatically generate summaries of full-text scientific publications. The summaries at the sentence and fragment levels were evaluated in finding common clinical SR data elements such as sample size, group size, and PICO values. We compared the computer-generated summaries with human-written summaries (title and abstract) in terms of the presence of necessary information for the data extraction as presented in the Cochrane review's study characteristics tables. At the sentence level, the computer-generated summaries covered more of the information needed for systematic reviews than the human-written summaries (recall 91.2% vs. 83.8%, p<0.001). They also had a better density of relevant sentences (precision 59% vs. 39%, p<0.001). At the fragment level, the ensemble approach combining rule-based, concept mapping, and dictionary-based methods performed better than individual methods alone, achieving an 84.7% F-measure. Computer-generated summaries are potential alternative information sources for data extraction in systematic review development. Machine learning and natural language processing are promising approaches to the development of such an extractive summarization system. Copyright © 2016 Elsevier Inc. All rights reserved.
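
    The recall, precision, and F-measure figures above follow the standard set-based definitions. A minimal sketch (the sentence IDs below are made up for illustration, not the study's data):

```python
# Set-based precision/recall/F1 as used to score extracted sentences
# against a gold standard. The IDs are hypothetical examples.

def precision_recall_f1(retrieved, relevant):
    """Compute precision, recall and F-measure for a set of extracted items."""
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)          # correctly extracted items
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical example: sentences 1-10 are relevant; the system returns 8.
system = {1, 2, 3, 4, 5, 6, 7, 11}
gold = set(range(1, 11))
p, r, f = precision_recall_f1(system, gold)
print(f"precision={p:.3f} recall={r:.3f} F1={f:.3f}")
```

Note that a system can trade the two quantities off: returning more sentences raises recall but usually lowers precision, which is why the F-measure is reported alongside both.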

  2. Efforts in improvement of nuclear knowledge and information management in Croatia

    International Nuclear Information System (INIS)

    Pleslic, S.; Novosel, N.

    2005-01-01

    The IAEA was authorised to exchange technical and scientific information on peaceful uses of atomic energy and established INIS in 1970 as an international bibliographic database covering the nuclear field and nuclear-related areas. Countries at different levels of technological development can derive benefits from INIS output products. The use of nuclear technology relies on the accumulation of knowledge in nuclear science and technology, including both technical information in documents and databases, and knowledge in human resources. Nuclear knowledge and information exchange are important for the process of decision-making. Through systematic knowledge preservation and information exchange, the IAEA supports all Members who want to transfer their practical experience to the younger generation and to archive important information. Croatia has been involved in knowledge and information management activities since 1994, when it joined INIS. Thanks to the development and application of new information technologies within the INIS information management framework, Members improve the collection, production and dissemination of nuclear knowledge and information. (author)

  3. Assessment of clinical effects and safety of an oral supplement based on marine protein, vitamin C, grape seed extract, zinc, and tomato extract in the improvement of visible signs of skin aging in men

    Directory of Open Access Journals (Sweden)

    Costa A

    2015-06-01

    Full Text Available Adilson Costa,1,2 Elisangela Samartin Pegas Pereira,1 Elvira Cancio Assumpção,1 Felipe Borba Calixto dos Santos,1 Fernanda Sayuri Ota,1 Margareth de Oliveira Pereira,1 Maria Carolina Fidelis,1 Raquel Fávaro,1 Stephanie Selma Barros Langen,1 Lúcia Helena Favaro de Arruda,1 Eva Nydal Abildgaard3 1Department of Dermatology, Pontifícia Universidade Católica de Campinas, Campinas, São Paulo, Brazil; 2KOLderma Clinical Trials Institute, Campinas, São Paulo, Brazil; 3Pfizer Consumer Healthcare, Nutritional Sciences, Copenhagen, Denmark Background: Skin aging is a natural process that may be aggravated by environmental factors. Topical products are the conventional means to combat aging; however, the use of oral supplements is on the rise to assist in the management of aged skin. Objective: The purpose of this study was to assess the effects and safety of an oral supplement containing (per tablet) marine protein (105 mg), vitamin C (27 mg), grape seed extract (13.75 mg), zinc (2 mg), and tomato extract (14.38 mg) in the improvement of skin aging in men. Methods: This single-center, open-label, quasi-experimental clinical study enrolled 47 male subjects, aged 30–45 years, with phototypes I–IV on the Fitzpatrick scale. Subjects received two tablets of the oral supplement for 180 consecutive days. Each subject served as their own control. Clinical assessments were made by medical personnel and by the subjects, respectively. Objective assessments were carried out through pH measurements, sebumetry, corneometry, ultrasound scanning, skin biopsies, and photographic images. Results: Forty-one subjects (87%) completed the study. Clinical improvements on both investigator- and subject-rated outcomes were found for the following parameters: erythema, hydration, radiance, and overall appearance (P<0.05). The objective measurements in the facial skin showed significant improvements from baseline in skin hydration (P<0.05), dermal ultrasound density (P<0.001), and

  4. EnvMine: A text-mining system for the automatic extraction of contextual information

    Directory of Open Access Journals (Sweden)

    de Lorenzo Victor

    2010-06-01

    Full Text Available Abstract Background For ecological studies, it is crucial to have adequate descriptions of the environments and samples being studied. Such a description must be done in terms of their physicochemical characteristics, allowing a direct comparison between different environments that would be difficult to do otherwise. The characterization must also include the precise geographical location, to make possible the study of geographical distributions and biogeographical patterns. Currently, there is no schema for annotating these environmental features, and these data have to be extracted from textual sources (published articles). So far, this had to be performed by manual inspection of the corresponding documents. To facilitate this task, we have developed EnvMine, a set of text-mining tools devoted to retrieving contextual information (physicochemical variables and geographical locations) from textual sources of any kind. Results EnvMine is capable of retrieving the physicochemical variables cited in the text, by means of the accurate identification of their associated units of measurement. In this task, the system achieves a recall (percentage of items retrieved) of 92% with less than 1% error. A Bayesian classifier was also tested for distinguishing parts of the text describing environmental characteristics from others dealing with, for instance, experimental settings. Regarding the identification of geographical locations, the system takes advantage of existing databases such as GeoNames to achieve 86% recall with 92% precision. The identification of a location also includes the determination of its exact coordinates (latitude and longitude), thus allowing the calculation of distances between the individual locations. Conclusion EnvMine is a very efficient method for extracting contextual information from different text sources, like published articles or web pages. This tool can help in determining the precise location and physicochemical
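
    A toy sketch of the kind of unit-anchored retrieval EnvMine performs: numeric values are recognized by the unit of measurement that follows them. The unit list and regular expression here are illustrative assumptions, not EnvMine's actual patterns:

```python
import re

# Find numeric values followed by a known unit of measurement.
# UNITS is a deliberately tiny, hypothetical unit inventory.
UNITS = r"(?:°C|pH|mM|µM|g/L|mg/L|m|km|%)"
PATTERN = re.compile(rf"(\d+(?:\.\d+)?)\s*({UNITS})")

def extract_variables(text):
    """Return (value, unit) pairs for candidate physicochemical mentions."""
    return [(float(v), u) for v, u in PATTERN.findall(text)]

sample = ("Samples were taken at 35 m depth; temperature was 12.5 °C "
          "and salinity 3.2 %.")
print(extract_variables(sample))  # → [(35.0, 'm'), (12.5, '°C'), (3.2, '%')]
```

A production system would add normalization of unit variants, disambiguation of bare symbols like "m", and sentence-level classification, which is where the reported recall/error figures come from.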

  5. Earth Science Data Analytics: Preparing for Extracting Knowledge from Information

    Science.gov (United States)

    Kempler, Steven; Barbieri, Lindsay

    2016-01-01

    Data analytics is the process of examining large amounts of data of a variety of types to uncover hidden patterns, unknown correlations and other useful information. Data analytics is a broad term that includes data analysis, as well as an understanding of the cognitive processes an analyst uses to understand problems and explore data in meaningful ways. Analytics also includes data extraction, transformation, and reduction, utilizing specific tools, techniques, and methods. Turning to data science, definitions of data science sound very similar to those of data analytics (which leads to much of the confusion between the two). But the skills needed for both, co-analyzing large amounts of heterogeneous data, understanding and utilizing relevant tools and techniques, and subject matter expertise, although similar, serve different purposes. Data analytics takes a practitioner's approach, applying expertise and skills to solve issues and gain subject knowledge. Data science is more theoretical (research in itself) in nature, providing strategic actionable insights and new innovative methodologies. Earth Science Data Analytics (ESDA) is the process of examining, preparing, reducing, and analyzing large amounts of spatial (multi-dimensional), temporal, or spectral data using a variety of data types to uncover patterns, correlations and other information, to better understand our Earth. The large variety of datasets (temporal and spatial differences, data types, formats, etc.) invites the need for data analytics skills that cover the science domain as well as data preparation, reduction, and analysis techniques, from a practitioner's point of view. The application of these skills to ESDA is the focus of this presentation. The Earth Science Information Partners (ESIP) Federation Earth Science Data Analytics (ESDA) Cluster was created in recognition of the practical need to facilitate the co-analysis of large amounts of data and information for Earth science. 
Thus, from a to

  6. Text mining facilitates database curation - extraction of mutation-disease associations from Bio-medical literature.

    Science.gov (United States)

    Ravikumar, Komandur Elayavilli; Wagholikar, Kavishwar B; Li, Dingcheng; Kocher, Jean-Pierre; Liu, Hongfang

    2015-06-06

    Advances in next-generation sequencing technology have accelerated the pace of individualized medicine (IM), which aims to incorporate genetic/genomic information into medicine. One immediate need in interpreting sequencing data is the assembly of information about genetic variants and their corresponding associations with other entities (e.g., diseases or medications). Even with dedicated effort to capture such information in biological databases, much of it remains 'locked' in the unstructured text of biomedical publications. There is a substantial lag between publication and the subsequent abstraction of such information into databases. Multiple text mining systems have been developed, but most of them focus on sentence-level association extraction, with performance evaluation based on gold standard text annotations specifically prepared for text mining systems. We developed and evaluated a text mining system, MutD, which extracts protein mutation-disease associations from MEDLINE abstracts by incorporating discourse-level analysis, using a benchmark data set extracted from curated database records. MutD achieves an F-measure of 64.3% for reconstructing protein mutation-disease associations in curated database records. The discourse-level analysis component of MutD contributed a gain of more than 10% in F-measure when compared against sentence-level association extraction. Our error analysis indicates that 23 of the 64 precision errors are true associations that were not captured by database curators, and 68 of the 113 recall errors are caused by the absence of associated disease entities in the abstract. After adjusting for the defects in the curated database, the revised F-measure of MutD in association detection reaches 81.5%. Our quantitative analysis reveals that MutD can effectively extract protein mutation-disease associations when benchmarking based on curated database records. 
The analysis also demonstrates that incorporating

  7. E-Business, the impact of regional growth on the improvement of Information and Communication Development

    Science.gov (United States)

    Setiawan, MI; Hasyim, C.; Kurniasih, N.; Abdullah, D.; Napitupulu, D.; Rahim, R.; Sukoco, A.; Dhaniarti, I.; Suyono, J.; Sudapet, IN; Nasihien, RD; Wulandari, DAR; Reswanda; Mudjanarko, SW; Sugeng; Wajdi, MBN

    2018-04-01

    ICT is a key element in improving industrial infrastructure efficiency and sustainable economic productivity. This study aims to analyze the impact of regional growth on information and communication development in Indonesia. This research is a correlational study. The population comprised 151 regions in Indonesia; by total sampling, all 151 regions were included. The results show a strong impact of regional growth on increasing the Gross Regional Domestic Product (GRDP) of information and communication: nearly all regional-growth sub-variables have a high correlation with information and communication GRDP in Indonesia. Only two sub-variables have a low correlation with information and communication GRDP, namely GRDP of Agriculture, Forestry and Fishing (0.01) and GRDP of Mining and Quarrying (0.04). The correlation coefficient (R) is 0.981, meaning that information and communication GRDP has a very strong correlation with regional growth. The adjusted R-squared is 95.8%, meaning that regional growth variables account for 95.8% of the increase in information and communication GRDP, while the remaining 4.2% is influenced by other factors aside from regional growth.
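
    The correlation coefficient and adjusted R-squared reported above are standard computations; a minimal sketch with synthetic data (not the study's 151-region dataset):

```python
# Pearson correlation and adjusted R-squared, computed from scratch.
# The regional-growth/GRDP values below are synthetic illustrations.

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def adjusted_r2(r2, n, k):
    """Adjust R^2 for n observations and k predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Synthetic regional-growth vs. ICT-GRDP data (not the study's values).
growth = [1.0, 2.1, 2.9, 4.2, 5.1, 5.9, 7.2, 8.1]
grdp_ict = [2.0, 4.0, 6.1, 8.4, 10.0, 12.2, 14.1, 16.3]
r = pearson_r(growth, grdp_ict)
print(round(r, 3), round(adjusted_r2(r * r, n=len(growth), k=1), 3))
```

The adjustment matters because plain R-squared never decreases when predictors are added; dividing by n - k - 1 penalizes model size.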

  8. Ultraviolet light assisted extraction of flavonoids and allantoin from aqueous and alcoholic extracts of Symphytum officinale

    Directory of Open Access Journals (Sweden)

    Marwan S.M. Al-Nimer

    2017-09-01

    Conclusions: UV radiation enhances the yields of active ingredients of comfrey extracted with methanol, and improves the flavonoid content, reducing power, and allantoin levels of comfrey extracted by the aqueous infusion method. UV radiation reduces the levels of flavonoids, reducing power, and allantoin when comfrey is extracted with alcohols. [J Complement Med Res 2017; 6(3): 280-283]

  9. Mining of the social network extraction

    Science.gov (United States)

    Nasution, M. K. M.; Hardi, M.; Syah, R.

    2017-01-01

    The use of the Web as social media is steadily gaining ground in the study of social actor behaviour. However, information on the Web can only be interpreted to the extent a method allows, as with superficial methods for extracting social networks. Each method has its features and drawbacks: a superficial method cannot reveal the behaviour of social actors directly, yet it carries hidden information about them. Therefore, this paper aims to reveal such information through social network mining. Social behaviour can be expressed through a set of words extracted from the list of snippets.

  10. Dissemination of performance information and continuous improvement: A narrative systematic review.

    Science.gov (United States)

    Lemire, Marc; Demers-Payette, Olivier; Jefferson-Falardeau, Justin

    2013-01-01

    Developing a performance measure and reporting the results to support decision making at an individual level has yielded poor results in many health systems. The purpose of this paper is to highlight the factors associated with the dissemination of performance information that generate and support continuous improvement in health organizations. A systematic data collection strategy that includes empirical and theoretical research published from 1980 to 2010, both qualitative and quantitative, was performed on Web of Science, Current Contents, EMBASE and MEDLINE. A narrative synthesis method was used to iteratively detail explicative processes that underlie the intervention. A classification and synthesis framework was developed, drawing on knowledge transfer and exchange (KTE) literature. The sample consisted of 114 articles, including seven systematic or exhaustive reviews. Results showed that dissemination in itself is not enough to produce improvement initiatives. Successful dissemination depends on various factors which influence the way collective actors react to performance information, such as the clarity of objectives, the relationships between stakeholders, the system's governance and the available incentives. This review was limited to the process of knowledge dissemination in health systems and its utilization by users at the health organization level. Issues related to improvement initiatives deserve more attention. Knowledge dissemination goes beyond better communication and should be considered as carefully as the measurement of performance. Choices pertaining to intervention should be continuously prompted by the concern to support organizational action. While considerable attention has been paid to the public reporting of performance information, this review sheds some light on a more promising avenue for changes and improvements, notably in public health systems.

  11. The Aqueous Extract of Gynura divaricata (L.) DC. Improves Glucose and Lipid Metabolism and Ameliorates Type 2 Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    Jinnan Li

    2018-01-01

    Full Text Available Type 2 diabetes mellitus (T2DM) is a chronic disease characterized by hyperglycemia and dyslipidemia caused by impaired insulin secretion and resistance of the peripheral tissues. A major pathogenesis of T2DM is obesity-associated insulin resistance. Gynura divaricata (L.) DC. (GD) is a natural plant and has been reported to have numerous health-promoting effects on both animals and humans. In this study, we aimed to elucidate the regulatory mechanism by which GD improves glucose and lipid metabolism in an obesity animal model induced by a high-fat and high-sugar diet in combination with a low dose of streptozocin, and in an insulin-resistant HepG2 cell model induced by dexamethasone. The study showed that the water extract of GD (GD extract A) could significantly reduce fasting serum glucose, reverse dyslipidemia and pancreatic damage, and regulate the body weight of mice. We also found that GD extract A had low toxicity in vivo and in vitro. Furthermore, GD extract A may increase glucose consumption in insulin-resistant HepG2 cells, markedly inhibit NF-κB activation, and decrease the impairment in signaling molecules of the insulin pathway, such as IRS-1, AKT, and GLUT1. Overall, the results indicate that GD extract A is a promising candidate for the prevention and treatment of T2DM.

  12. Regional Studies Program. Extraction of North Dakota lignite: environmental and reclamation issues

    Energy Technology Data Exchange (ETDEWEB)

    LaFevers, J.R.; Johnson, D.O.; Dvorak, A.J.

    1976-12-01

    This study, sponsored by the U.S. Energy Research and Development Administration, addresses the environmental implications of extraction of coal in North Dakota. These implications are supported by details of the geologic and historical background of the area of focus, the lignite resources in the Fort Union coalfield portion. The particular concentration is on the four-county area of Mercer, Dunn, McLean, and Oliver where substantial coal reserves exist and a potential gasification plant site has been identified. The purposes of this extensive study are to identify the land use and environmental problems and issues associated with extraction; to provide a base of information for assessing the impacts of various levels of extraction; to examine the economics and feasibility of reclamation; and to identify research that needs to be undertaken to evaluate and to improve reclamation practices. The study also includes a description of the physical and chemical soil characteristics and hydrological and climatic factors entailed in extraction, revegetation, and reclamation procedures.

  13. Full-fledged temporal processing: bridging the gap between deep linguistic processing and temporal extraction

    Directory of Open Access Journals (Sweden)

    Francisco Costa

    2013-07-01

    Full Text Available The full-fledged processing of temporal information presents specific challenges. These difficulties largely stem from the fact that the temporal meaning conveyed by grammatical means interacts with many extra-linguistic factors (world knowledge, causality, calendar systems, reasoning). This article proposes a novel approach to this problem, based on a hybrid strategy that explores the complementarity of symbolic and probabilistic methods. A specialized temporal extraction system is combined with a deep linguistic processing grammar. The temporal extraction system extracts eventualities, times and dates mentioned in text, and also temporal relations between them, in line with the tasks of the recent TempEval challenges; and uses machine learning techniques to draw from different sources of information (grammatical and extra-grammatical), even if it is not explicitly known how these combine to produce the final temporal meaning being expressed. In turn, the deep computational grammar delivers richer truth-conditional meaning representations of input sentences, which include a principled representation of temporal information, on which higher level tasks, including reasoning, can be based. These deep semantic representations are extended and improved according to the output of the aforementioned temporal extraction module. The prototype implemented shows performance results that increase the quality of the temporal meaning representations and are better than the performance of each of the two components in isolation.

  14. Developing a Process Model for the Forensic Extraction of Information from Desktop Search Applications

    Directory of Open Access Journals (Sweden)

    Timothy Pavlic

    2008-03-01

    Full Text Available Desktop search applications can contain cached copies of files that were deleted from the file system. Forensic investigators see this as a potential source of evidence, as documents deleted by suspects may still exist in the cache. Whilst there have been attempts at recovering data collected by desktop search applications, there is no methodology governing the process, nor discussion on the most appropriate means to do so. This article seeks to address this issue by developing a process model that can be applied when developing an information extraction application for desktop search applications, discussing preferred methods and the limitations of each. This work represents a more structured approach than other forms of current research.

  15. An Improved Information Value Model Based on Gray Clustering for Landslide Susceptibility Mapping

    Directory of Open Access Journals (Sweden)

    Qianqian Ba

    2017-01-01

    Full Text Available Landslides, as geological hazards, cause significant casualties and economic losses. Therefore, it is necessary to identify areas prone to landslides for prevention work. This paper proposes an improved information value model based on gray clustering (IVM-GC) for landslide susceptibility mapping. This method uses the information value derived from an information value model to achieve susceptibility classification and weight determination of landslide predisposing factors and, hence, obtains the landslide susceptibility of each study unit based on the clustering analysis. Using a landslide inventory of Chongqing, China, which contains 8435 landslides, three landslide susceptibility maps were generated based on the common information value model (IVM), an information value model improved by an analytic hierarchy process (IVM-AHP) and our new improved model. Approximately 70% (5905) of the inventory landslides were used to generate the susceptibility maps, while the remaining 30% (2530) were used to validate the results. The training accuracies of the IVM, IVM-AHP and IVM-GC were 81.8%, 78.7% and 85.2%, respectively, and the prediction accuracies were 82.0%, 78.7% and 85.4%, respectively. The results demonstrate that all three methods perform well in evaluating landslide susceptibility. Among them, IVM-GC has the best performance.
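
    The basic information value model (IVM) that the improved variants build on assigns each class of a predisposing factor the log ratio of its landslide density to the overall density; positive values mark landslide-prone classes. A sketch with made-up class counts (not the Chongqing inventory):

```python
import math

# Basic information value of a factor class:
#   IV = ln( (landslides_in_class / cells_in_class) / (total_landslides / total_cells) )

def information_value(landslides_in_class, cells_in_class,
                      total_landslides, total_cells):
    class_density = landslides_in_class / cells_in_class
    overall_density = total_landslides / total_cells
    return math.log(class_density / overall_density)

# Hypothetical slope-angle classes: (landslide cells, total cells).
classes = {"0-10°": (50, 4000), "10-25°": (300, 3000), ">25°": (650, 3000)}
N = sum(l for l, _ in classes.values())   # total landslide cells
S = sum(c for _, c in classes.values())   # total cells
for name, (l, c) in classes.items():
    print(name, round(information_value(l, c, N, S), 3))
```

Summing the information values of a unit's classes across all factors gives its susceptibility score; the IVM-AHP and IVM-GC variants differ in how the factor weights are then determined.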

  16. Smart Extraction and Analysis System for Clinical Research.

    Science.gov (United States)

    Afzal, Muhammad; Hussain, Maqbool; Khan, Wajahat Ali; Ali, Taqdir; Jamshed, Arif; Lee, Sungyoung

    2017-05-01

    With the increasing use of electronic health records (EHRs), there is a growing need to expand the utilization of EHR data to support clinical research. The key challenge in achieving this goal is the unavailability of smart systems and methods to overcome the issue of data preparation, structuring, and sharing for smooth clinical research. We developed a robust analysis system called the smart extraction and analysis system (SEAS) that consists of two subsystems: (1) the information extraction system (IES), for extracting information from clinical documents, and (2) the survival analysis system (SAS), for a descriptive and predictive analysis to compile the survival statistics and predict the future chance of survivability. The IES subsystem is based on a novel permutation-based pattern recognition method that extracts information from unstructured clinical documents. Similarly, the SAS subsystem is based on a classification and regression tree (CART)-based prediction model for survival analysis. SEAS is evaluated and validated on a real-world case study of head and neck cancer. The overall information extraction accuracy of the system for semistructured text is recorded at 99%, while that for unstructured text is 97%. Furthermore, the automated, unstructured information extraction has reduced the average time spent on manual data entry by 75%, without compromising the accuracy of the system. Moreover, around 88% of patients are found in a terminal or dead state for the highest clinical stage of disease (level IV). Similarly, there is an ∼36% probability of a patient being alive if at least one of the lifestyle risk factors was positive. We presented our work on the development of SEAS to replace costly and time-consuming manual methods with smart automatic extraction of information and survival prediction methods. SEAS has reduced the time and energy of human resources spent unnecessarily on manual tasks.

  17. Sequence complexity and work extraction

    International Nuclear Information System (INIS)

    Merhav, Neri

    2015-01-01

    We consider a simplified version of a solvable model by Mandal and Jarzynski, which constructively demonstrates the interplay between work extraction and the increase of the Shannon entropy of an information reservoir which is in contact with a physical system. We extend Mandal and Jarzynski’s main findings in several directions: first, we allow sequences of correlated bits rather than just independent bits. Secondly, at least for the case of binary information, we show that, in fact, the Shannon entropy is only one measure of complexity of the information that must increase in order for work to be extracted. The extracted work can also be upper bounded in terms of the increase in other quantities that measure complexity, like the predictability of future bits from past ones. Third, we provide an extension to the case of non-binary information (i.e. a larger alphabet), and finally, we extend the scope to the case where the incoming bits (before the interaction) form an individual sequence, rather than a random one. In this case, the entropy before the interaction can be replaced by the Lempel–Ziv (LZ) complexity of the incoming sequence, a fact that gives rise to an entropic meaning of the LZ complexity, not only in information theory, but also in physics. (paper)
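
    The point about correlated bits can be illustrated numerically: for a correlated sequence, the per-symbol Shannon entropy overstates the randomness, while the conditional entropy of the next bit given the previous one is lower. A sketch with an illustrative bit string (not from the paper):

```python
import math
from collections import Counter

def entropy(probabilities):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

def marginal_and_conditional_entropy(bits):
    """Return H(X) and H(X_{t+1} | X_t) estimated from one sequence."""
    n = len(bits)
    p1 = [c / n for c in Counter(bits).values()]
    pairs = Counter(zip(bits, bits[1:]))
    p2 = [c / (n - 1) for c in pairs.values()]
    h1 = entropy(p1)                  # H(X)
    h2 = entropy(p2)                  # H(X_t, X_{t+1})
    return h1, h2 - h1                # chain rule: H(Y|X) = H(X,Y) - H(X)

# A strongly correlated bit string: long runs of repeated symbols.
bits = "0000111100001111" * 50
h, h_cond = marginal_and_conditional_entropy(bits)
print(round(h, 3), round(h_cond, 3))
```

Here the marginal entropy is a full 1 bit per symbol, but the conditional entropy is noticeably smaller, which is the sense in which predictability of future bits from past ones tightens the bound on extractable work.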

  18. Feedback from incident reporting: information and action to improve patient safety.

    Science.gov (United States)

    Benn, J; Koutantji, M; Wallace, L; Spurgeon, P; Rejman, M; Healey, A; Vincent, C

    2009-02-01

    Effective feedback from incident reporting systems in healthcare is essential if organisations are to learn from failures in the delivery of care. Despite the wide-scale development and implementation of incident reporting in healthcare, studies in the UK suggest that information concerning system vulnerabilities could be better applied to improve operational safety within organisations. In this article, the findings and implications of research to identify forms of effective feedback from incident reporting are discussed, to promote best practices in this area. The research comprised a mixed methods review to investigate mechanisms of effective feedback for healthcare, drawing upon experience within established reporting programmes in high-risk industry and transport domains. Systematic searches of published literature were undertaken, and 23 case studies describing incident reporting programmes with feedback were identified for analysis from the international healthcare literature. Semistructured interviews were undertaken with 19 subject matter experts across a range of domains, including: civil aviation, maritime, energy, rail, offshore production and healthcare. In analysis, qualitative information from several sources was synthesised into practical requirements for developing effective feedback in healthcare. Both action and information feedback mechanisms were identified, serving safety awareness, improvement and motivational functions. The provision of actionable feedback that visibly improved systems was highlighted as important in promoting future reporting. Fifteen requirements for the design of effective feedback systems were identified, concerning: the role of leadership, the credibility and content of information, effective dissemination channels, the capacity for rapid action and the need for feedback at all levels of the organisation, among others. Above all, the safety-feedback cycle must be closed by ensuring that reporting, analysis and

  19. Fixed kernel regression for voltammogram feature extraction

    International Nuclear Information System (INIS)

    Acevedo Rodriguez, F J; López-Sastre, R J; Gil-Jiménez, P; Maldonado Bascón, S; Ruiz-Reyes, N

    2009-01-01

    Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in the voltammograms obtained with this technique is difficult. In this paper, we propose fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals.
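
    The core idea — representing an entire voltammogram by the least-squares coefficients of a small set of fixed kernels — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the Gaussian kernel shape, the centers, the width and the small ridge term are all assumptions.

```python
import math

def gaussian_design(xs, centers, width):
    """Design matrix of fixed Gaussian kernels evaluated at the sample points."""
    return [[math.exp(-((x - c) / width) ** 2) for c in centers] for x in xs]

def solve(A, b):
    """Solve the square system A x = b by Gauss-Jordan elimination."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[col][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b_ for a, b_ in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def kernel_features(xs, ys, centers, width, ridge=1e-6):
    """Compress a signal into the coefficients of fixed kernels (the features)."""
    Phi = gaussian_design(xs, centers, width)
    n = len(centers)
    # Regularized normal equations: (Phi^T Phi + ridge*I) c = Phi^T y
    G = [[sum(Phi[k][i] * Phi[k][j] for k in range(len(xs))) + (ridge if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    t = [sum(Phi[k][i] * ys[k] for k in range(len(xs))) for i in range(n)]
    return solve(G, t)

# Toy "voltammogram": a single peak sampled at 50 potentials.
xs = [i / 49 for i in range(50)]
ys = [math.exp(-((x - 0.4) / 0.1) ** 2) for x in xs]
coeffs = kernel_features(xs, ys, centers=[0.1, 0.3, 0.5, 0.7, 0.9], width=0.15)
```

    A classifier would then be trained on the five coefficients instead of the 50 raw samples, which is how the reduction "to a few coefficients" supports the wine-classification step.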

  20. Influence of Tannin Extract and Yeast Extract on Color Preservation and Anthocyanin Content of Mulberry Wine.

    Science.gov (United States)

    You, Yilin; Li, Na; Han, Xue; Guo, Jielong; Liu, Guojie; Huang, Weidong; Zhan, Jicheng

    2018-04-01

    The color of mulberry wine is extremely unstable during processing and aging. This paper investigates the effects of tannin extract and yeast extract on the color and color-preserving characteristics of mulberry wine made from the Dashi cultivar. The results showed that the maximum absorption wavelength in both the tannin extract and yeast extract groups changed, generating a red-shift effect. The color of the tannin extract group maintained a good gloss for the first 4 months, while the yeast extract group showed remarkable color preservation for the first 3 months. The total anthocyanin and cyanidin-3-rutinoside contents in both experimental groups were significantly higher than those of the control group, proving that tannin extract and yeast extract both exert a remarkably positive effect on preserving the color of mulberry wine during its aging. Moreover, sensory analysis indicated that the quality of mulberry wine treated with tannin extract was significantly higher than that of the control. The distinct color of mulberry wine is one of the foremost qualities that imprints on consumers' senses, but it is extremely unstable during processing and aging. However, the color protection of mulberry wine had not been studied previously. In this study, we found that tannin extract and yeast extract both exert a remarkably positive effect on preserving the color of mulberry wine during aging. The study is of great significance as a guide to improving the color stability of mulberry wine, thereby also improving and promoting the development of the mulberry deep-processing industry. © 2018 Institute of Food Technologists®.

  1. Extraction of indirectly captured information for use in a comparison of offline pH measurement technologies.

    Science.gov (United States)

    Ritchie, Elspeth K; Martin, Elaine B; Racher, Andy; Jaques, Colin

    2017-06-10

    Understanding the causes of discrepancies in pH readings of a sample can allow more robust pH control strategies to be implemented. It was found that 59.4% of differences between two offline pH measurement technologies for an historical dataset lay outside an expected instrument error range of ±0.02 pH. A new variable, Osmo Res, was created using multiple linear regression (MLR) to extract information indirectly captured in the recorded measurements for osmolality. Principal component analysis and time series analysis were used to validate the expansion of the historical dataset with the new variable Osmo Res. MLR was used to identify variables strongly correlated (p<0.05) with differences in pH readings by the two offline pH measurement technologies. These included concentrations of specific chemicals (e.g. glucose) and Osmo Res, indicating culture medium and bolus feed additions as possible causes of discrepancies between the offline pH measurement technologies. Temperature was also identified as statistically significant. It is suggested that this was a result of differences in pH-temperature compensations employed by the pH measurement technologies. In summary, a method for extracting indirectly captured information has been demonstrated, and it has been shown that competing pH measurement technologies were not necessarily interchangeable at the desired level of control (±0.02 pH). Copyright © 2017 Elsevier B.V. All rights reserved.
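
    The residual-as-new-variable step can be sketched in miniature: regress the recorded measurement on the variables that explain it, and keep the residual as the new variable. The paper uses multiple linear regression over several recorded variables; the single-predictor form and the glucose/osmolality numbers below are invented for illustration.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = b0 + b1 * x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b1 * mx, b1

# Hypothetical data: osmolality readings largely explained by glucose concentration.
glucose = [2.0, 3.5, 5.0, 6.5, 8.0]
osmolality = [290.1, 295.2, 299.8, 305.3, 309.9]  # illustrative values

b0, b1 = fit_line(glucose, osmolality)
# The residual is the part of the reading NOT explained by the recorded
# predictors -- the "indirectly captured" information (Osmo Res in the paper).
osmo_res = [y - (b0 + b1 * x) for x, y in zip(glucose, osmolality)]
```

    The residual vector can then enter a second regression as a candidate explanatory variable for the pH discrepancies, alongside the directly recorded variables.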

  2. An Alginate/Cyclodextrin Spray Drying Matrix to Improve Shelf Life and Antioxidant Efficiency of a Blood Orange By-Product Extract Rich in Polyphenols: MMPs Inhibition and Antiglycation Activity in Dysmetabolic Diseases

    Directory of Open Access Journals (Sweden)

    Maria Rosaria Lauro

    2017-01-01

    Full Text Available Alginate and β-cyclodextrin were used to produce easily dosable, spray-dried microsystems of a dried blood orange extract with antidysmetabolic properties, obtained from a by-product fluid extract. The applied spray-drying conditions yielded a concentrated dried extract without loss of AOA and with TPC and TMA values 35–40% higher than those of the starting material. They were also effective in producing microparticles with 80–100% encapsulation efficiency. The 2% sodium alginate was capable of improving the extract shelf life, while the β-cyclodextrin (1 : 1 molar ratio with the dried extract) prolonged the extract's antioxidant efficiency by 6 hours. The good inhibitory effect of the dried extract on AGE formation and on MMP-2 and MMP-9 activity is presumably due to a synergic effect exerted by both the anthocyanin and bioflavonoid extract compounds, and was improved by the use of alginate and cyclodextrin.

  3. An Alginate/Cyclodextrin Spray Drying Matrix to Improve Shelf Life and Antioxidant Efficiency of a Blood Orange By-Product Extract Rich in Polyphenols: MMPs Inhibition and Antiglycation Activity in Dysmetabolic Diseases.

    Science.gov (United States)

    Lauro, Maria Rosaria; Crascì, Lucia; Giannone, Virgilio; Ballistreri, Gabriele; Fabroni, Simona; Sansone, Francesca; Rapisarda, Paolo; Panico, Anna Maria; Puglisi, Giovanni

    2017-01-01

    Alginate and β-cyclodextrin were used to produce easily dosable, spray-dried microsystems of a dried blood orange extract with antidysmetabolic properties, obtained from a by-product fluid extract. The applied spray-drying conditions yielded a concentrated dried extract without loss of AOA and with TPC and TMA values 35-40% higher than those of the starting material. They were also effective in producing microparticles with 80-100% encapsulation efficiency. The 2% sodium alginate was capable of improving the extract shelf life, while the β-cyclodextrin (1 : 1 molar ratio with the dried extract) prolonged the extract's antioxidant efficiency by 6 hours. The good inhibitory effect of the dried extract on AGE formation and on MMP-2 and MMP-9 activity is presumably due to a synergic effect exerted by both the anthocyanin and bioflavonoid extract compounds, and was improved by the use of alginate and cyclodextrin.

  4. Extraction and analysis of reducing alteration information of oil-gas in Bashibulake uranium ore district based on ASTER remote sensing data

    International Nuclear Information System (INIS)

    Ye Fawang; Liu Dechang; Zhao Yingjun; Yang Xu

    2008-01-01

    Beginning with an analysis of the spectral characteristics of sandstone with oil-gas reducing alteration in the Bashibulake ore district, an extraction technique for reducing alteration information based on ASTER data is presented. Several remote sensing anomaly zones of reducing alteration information, similar to that in the uranium deposit, are interpreted in the study area. On this basis, the alteration anomaly information is further classified by exploiting the multi-band advantage of ASTER data in the SWIR region, and the geological significance of each type of alteration anomaly is discussed. As a result, alteration anomalies favourable for uranium prospecting are selected, providing important information for uranium exploration in the periphery of the Bashibulake uranium ore area. (authors)

  5. 75 FR 7459 - Office of Elementary and Secondary Education; Overview Information; Improving Literacy Through...

    Science.gov (United States)

    2010-02-19

    ... information literacy, information retrieval, and critical-thinking skills of students; facilitating Internet... DEPARTMENT OF EDUCATION Office of Elementary and Secondary Education; Overview Information; Improving Literacy Through School Libraries Program Notice Inviting Applications for New Awards for Fiscal...

  6. Executive Information Systems' Multidimensional Models

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Executive Information Systems are designed to improve the quality of strategic-level management in an organization through a new type of technology and several techniques for extracting, transforming, processing, integrating and presenting data in such a way that the organizational knowledge filters can easily associate with this data and turn it into information for the organization. These technologies are known as Business Intelligence tools. In order to build analytic reports for Executive Information Systems (EIS) in an organization, we need to design a multidimensional model based on the organization's business model. This paper presents some multidimensional models that can be used in EIS development and proposes a new model that is suitable for strategic business requests.
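
    At its core, the multidimensional model behind such analytic reports is a fact table whose measure is aggregated along chosen dimensions. A minimal sketch — the dimensions, the sales measure and the rows are hypothetical, not taken from the paper:

```python
# Fact table rows: (region, product, year, sales).
facts = [
    ("EU", "A", 2006, 10), ("EU", "B", 2006, 5),
    ("US", "A", 2006, 7), ("US", "A", 2007, 12),
]

def rollup(facts, dims):
    """Aggregate the sales measure over the chosen dimensions (one cube view)."""
    out = {}
    for region, product, year, sales in facts:
        row = {"region": region, "product": product, "year": year}
        key = tuple(row[d] for d in dims)
        out[key] = out.get(key, 0) + sales
    return out

by_region = rollup(facts, ["region"])
by_region_year = rollup(facts, ["region", "year"])
```

    Rolling the same facts up along different dimension subsets is what lets a single multidimensional model answer many strategic queries.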

  7. Improvement of extraction method of coagulation active components from Moringa oleifera seed

    OpenAIRE

    Okuda, Tetsuji; Baes, Aloysius U.; Nishijima, Wataru; Okada, Mitsumasa

    1999-01-01

    A new method for extracting the active coagulation component from Moringa oleifera seeds was developed and compared with the ordinary water extraction method (MOC–DW). In the new method, a 1.0 mol l-1 solution of sodium chloride (MOC–SC) and other salts was used for extraction of the active coagulation component. Batch coagulation experiments were conducted using 500 ml of low-turbidity water (50 NTU). Coagulation efficiencies were evaluated based on the dosage required to remove kaolinite...

  8. Consideration on information work under new situation consider the situation, integrate the resources, develop the innovate, improve the information work level

    International Nuclear Information System (INIS)

    Wu Erni

    2010-01-01

    Under the new socioeconomic development situation in China, nuclear document and information work shall focus on the nuclear fuel cycle industry, accelerate the informatization of the information network, and establish an information network platform, so as to improve management efficiency and benefit and to reduce resource consumption and management cost through publicity and information acquisition. The difference in understanding and application between specialized technical personnel and informatization personnel shall be narrowed during the informatization management process, and structural readjustment shall be accelerated through information resource sharing and a networked information system. In transforming nuclear information work and nuclear resources into effective productive power, management and technology innovation and resource integration shall be conducted according to the development strategy of the enterprise, so as to promote enthusiasm for informatization as well as invention, creation and technological innovation in the enterprise and the whole country. In general, nuclear information work shall serve the nuclear manufacturing enterprises, and we shall change our ideology and work style under the new situation so as to improve the overall capability and level of our scientific and technological information system by means of advanced science and technology. (author)

  9. Information Extraction of Tourist Geological Resources Based on 3d Visualization Remote Sensing Image

    Science.gov (United States)

    Wang, X.

    2018-04-01

    Tourism geological resources are of high value for admiration, scientific research and universal education, and need to be protected and rationally utilized. In the past, most remote sensing investigations of tourism geological resources used two-dimensional remote sensing interpretation, which made some geological heritages difficult to interpret and led to the omission of some information. The aim of this paper is to assess the value of a method that uses three-dimensional visual remote sensing images to extract information on geological heritages. The Skyline software system is applied to fuse 0.36 m aerial images with a 5 m interval DEM to establish a digital earth model. Based on three-dimensional shape, color tone, shadow, texture and other image features, the distribution of tourism geological resources in Shandong Province and the locations of geological heritage sites were obtained, covering geological structures, DaiGu landforms, granite landforms, volcanic landforms, sandy landforms, waterscapes, etc. The results show that remote sensing interpretation using this method is highly recognizable, making the interpretation more accurate and comprehensive.

  10. Ekstrak Daun Pepaya dan Kangkung untuk Meningkatkan Daya Tetas Telur dan Kelangsungan Hidup Larva Lele (EXTRACTS OF CARICA PAPAYA AND IPOMOEA AQUATICA FOR IMPROVING EGG HATCHABILITY AND LARVAL VIABILITY OF CATFISH

    Directory of Open Access Journals (Sweden)

    Gina Saptiani

    2016-07-01

    Full Text Available This research aimed to investigate the potential of leaf extracts of Carica papaya and Ipomoea aquatica to improve the egg hatchability and larval viability of catfish. Dried leaves of Carica papaya and Ipomoea aquatica were macerated and extracted in water and ethanol. Eggs and larvae were tested in 10 L aquaria with a diameter of 28 cm. The extracts, at concentrations of 600, 800 and 1,000 ppm, were tested on the egg hatchability of catfish by the immersion method, with challenge by Aeromonas hydrophila, Pseudomonas sp., and Saprolegnia spp. The extracts at concentrations of 800 and 1,000 ppm were tested on larval viability by the immersion method, with challenge by the same pathogens. Water or ethanol extracts of Carica papaya and Ipomoea aquatica improved egg hatchability (67±8% up to 90±6%) and larval viability of catfish (77±0.5% up to 90±9%). The 800 ppm ethanol extract of Carica papaya gave the best egg hatchability, and at 1,000 ppm it improved the larval viability of catfish.

  11. Biological network extraction from scientific literature: state of the art and challenges.

    Science.gov (United States)

    Li, Chen; Liakata, Maria; Rebholz-Schuhmann, Dietrich

    2014-09-01

    Networks of molecular interactions explain complex biological processes, and all known information on molecular events is contained in a number of public repositories including the scientific literature. Metabolic and signalling pathways are often viewed separately, even though both types are composed of interactions involving proteins and other chemical entities. It is necessary to be able to combine data from all available resources to judge the functionality, complexity and completeness of any given network overall, but especially the full integration of relevant information from the scientific literature is still an ongoing and complex task. Currently, the text-mining research community is steadily moving towards processing the full body of the scientific literature by making use of rich linguistic features such as full text parsing, to extract biological interactions. The next step will be to combine these with information from scientific databases to support hypothesis generation for the discovery of new knowledge and the extension of biological networks. The generation of comprehensive networks requires technologies such as entity grounding, coordination resolution and co-reference resolution, which are not fully solved and are required to further improve the quality of results. Here, we analyse the state of the art for the extraction of network information from the scientific literature and the evaluation of extraction methods against reference corpora, discuss challenges involved and identify directions for future research. © The Author 2013. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  12. Active surface model improvement by energy function optimization for 3D segmentation.

    Science.gov (United States)

    Azimifar, Zohreh; Mohaddesi, Mahsa

    2015-04-01

    This paper proposes an optimized and efficient active surface model obtained by improving the energy functions, the searching method, the neighborhood definition and the resampling criterion. Extracting an accurate surface of a desired object from a number of 3D images using active surface and deformable models plays an important role in computer vision, especially medical image processing. Different powerful segmentation algorithms have been suggested to address the limitations associated with model initialization, poor convergence to surface concavities and slow convergence rate. This paper proposes a method to improve one of the strongest recent segmentation algorithms, namely the Decoupled Active Surface (DAS) method. We consider the gradient of a wavelet-edge-extracted image and local phase coherence as external energy, to extract more information from images, and use a curvature integral as internal energy, to focus on extracting high-curvature regions. Similarly, we use resampling of points and a line search for point selection to improve the accuracy of the algorithm. We further employ an estimate of the desired object as the initialization for the active surface model. A number of tests and experiments have been carried out, and the results show improvements in extracted surface accuracy and computational time compared with the best recent active surface models. Copyright © 2015 Elsevier Ltd. All rights reserved.
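
    The energy-minimization-with-local-search idea can be illustrated with the classic greedy snake in 2D, a much simpler relative of the Decoupled Active Surface: each contour point moves to the neighbouring position minimising an internal (smoothness) term plus an external (image) term. The energy terms below are the textbook ones, not the wavelet-edge and phase-coherence energies the authors propose.

```python
def greedy_snake_step(contour, external, alpha=1.0):
    """One greedy update of a closed 2D contour: each point moves to the
    neighbouring pixel minimising smoothness energy + external energy."""
    n = len(contour)
    new = []
    for i, (x, y) in enumerate(contour):
        px, py = contour[i - 1]            # previous point (wraps around)
        qx, qy = contour[(i + 1) % n]      # next point
        best, best_e = (x, y), float("inf")
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                cx, cy = x + dx, y + dy
                # Internal energy: deviation from the neighbours' midpoint.
                internal = (cx - (px + qx) / 2) ** 2 + (cy - (py + qy) / 2) ** 2
                e = alpha * internal + external.get((cx, cy), 0.0)
                if e < best_e:
                    best, best_e = (cx, cy), e
        new.append(best)
    return new
```

    With no external energy, each point moves toward its neighbours' midpoint and the contour shrinks — which is why a well-designed external term (in the paper, wavelet-edge gradients and local phase coherence) is essential to anchor the model on the object boundary.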

  13. Method of Improving Personal Name Search in Academic Information Service

    Directory of Open Access Journals (Sweden)

    Heejun Han

    2012-12-01

    Full Text Available All academic information on the web or elsewhere has its creator, that is, a subject who has created the information. The subject can be an individual, a group, or an institution, and can even be a nation, depending on the nature of the relevant information. Most information is composed of a title, an author, and contents. An essay, which falls under the academic information category, has metadata including a title, an author, keywords, an abstract, data about publication, place of publication, ISSN, and the like. A patent has metadata including the title, an applicant, an inventor, an attorney, IPC, the application number, and the claims of the invention. Most web-based academic information services enable users to search the information by processing this meta-information. An important element is searching information by using the author field, which corresponds to a personal name. This study suggests a method of efficient indexing and of using an adjacency-operation result-ranking algorithm to which phrase-search-based boosting elements are applied, thus improving the accuracy of search results for personal names. It also describes a method for providing the results of searching co-authors and related researchers when searching personal names. This method can be effectively applied to provide accurate and additional search results in academic information services.
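
    The phrase-boosting idea — reward author fields where the query tokens occur adjacently, not merely somewhere in the field — can be sketched as follows. The scoring scheme and the boost factor are illustrative assumptions, not the service's actual ranking formula.

```python
def name_score(author_field, query_name, phrase_boost=2.0):
    """Score an author field: each matching token counts 1; an exact
    adjacent (phrase) match of the full query multiplies the score."""
    doc_tokens = author_field.lower().split()
    q_tokens = query_name.lower().split()
    base = sum(1.0 for t in q_tokens if t in doc_tokens)
    # Adjacency check: the query tokens appear as a contiguous run, in order.
    phrase = any(doc_tokens[i:i + len(q_tokens)] == q_tokens
                 for i in range(len(doc_tokens) - len(q_tokens) + 1))
    return base * (phrase_boost if phrase else 1.0)

records = ["Kim Lee", "Lee Park Kim", "Han Heejun", "Heejun Han"]
ranked = sorted(records, key=lambda r: -name_score(r, "Heejun Han"))
```

    The exact-order phrase match outranks the same tokens in reversed order, which is the behaviour a personal-name search needs when surname/given-name order matters.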

  14. Black ginseng-enriched Chong-Myung-Tang extracts improve spatial learning behavior in rats and elicit anti-inflammatory effects in vitro

    Directory of Open Access Journals (Sweden)

    Evelyn Saba

    2017-04-01

    Conclusion: Our research provides, for the first time, scientific evidence that consumption of black ginseng-enriched CMT extract as a brain tonic ameliorates memory impairment. Thus, our results can serve as a reference for future neurobehavioral studies.

  15. Addressing Information Proliferation: Applications of Information Extraction and Text Mining

    Science.gov (United States)

    Li, Jingjing

    2013-01-01

    The advent of the Internet and the ever-increasing capacity of storage media have made it easy to store, deliver, and share enormous volumes of data, leading to a proliferation of information on the Web, in online libraries, on news wires, and almost everywhere in our daily lives. Since our ability to process and absorb this information remains…

  16. Improving information retrieval with multiple health terminologies in a quality-controlled gateway.

    Science.gov (United States)

    Soualmia, Lina F; Sakji, Saoussen; Letord, Catherine; Rollin, Laetitia; Massari, Philippe; Darmoni, Stéfan J

    2013-01-01

    The Catalog and Index of French-language Health Internet resources (CISMeF) is a quality-controlled health gateway, primarily for Web resources in French (n=89,751). Recently, we achieved a major improvement in the structure of the catalogue by setting up multiple terminologies, based on twelve health terminologies available in French, to overcome the potential weakness of the MeSH thesaurus, which has been the main and pivotal terminology we use for indexing and retrieval since 1995. The main aim of this study was to estimate the added value of exploiting several terminologies and their semantic relationships to improve Web resource indexing and retrieval in CISMeF, in order to provide additional health resources that meet users' expectations. Twelve terminologies were integrated into the CISMeF information system to set up multiple-terminology indexing and retrieval. The same set of thirty queries was run: (i) by exploiting the hierarchical structure of the MeSH, and (ii) by exploiting the additional twelve terminologies and their semantic links. The two search modes were evaluated and compared. The overall coverage of the multiple-terminology search mode was improved compared with that of the MeSH alone (16,283 vs. 14,159; +15%). These additional findings were estimated at 56.6% relevant results, 24.7% intermediate results and 18.7% irrelevant results. The multiple-terminology approach improved information retrieval. These results suggest that integrating additional health terminologies was able to improve recall. Since performing the study, 21 other terminologies have been added, which should enable us to carry out broader studies of multiple-terminology information retrieval.
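
    The recall gain from multiple terminologies comes down to expanding the query across synonym links before matching the index. A toy sketch — the index entries and the synonym table are invented for illustration, not CISMeF's actual data:

```python
# Toy index: resource id -> indexing terms (possibly from different terminologies).
index = {
    1: {"myocardial infarction"},   # MeSH descriptor
    2: {"heart attack"},            # lay synonym from another terminology
    3: {"crise cardiaque"},         # French-language entry term
    4: {"asthma"},
}
# Cross-terminology synonym table (hypothetical entries).
synonyms = {"myocardial infarction": {"heart attack", "crise cardiaque"}}

def search(query, expand=False):
    """Return ids of resources matching the query, optionally synonym-expanded."""
    terms = {query} | (synonyms.get(query, set()) if expand else set())
    return {rid for rid, idx_terms in index.items() if idx_terms & terms}

mesh_only = search("myocardial infarction", expand=False)
multi = search("myocardial infarction", expand=True)
```

    The expanded search returns a superset of the MeSH-only results, which is exactly the coverage improvement the study measures; precision then depends on how reliable the cross-terminology links are.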

  17. Quantum measurement information as a key to energy extraction from local vacuums

    International Nuclear Information System (INIS)

    Hotta, Masahiro

    2008-01-01

    In this paper, a protocol is proposed in which energy extraction from local vacuum states is possible by using quantum measurement information about the vacuum state of quantum fields. In the protocol, Alice, who stays at a spatial point, excites the ground state of the fields by a local measurement. Consequently, wave packets generated by Alice's measurement propagate through the vacuum to spatial infinity. Let us assume that Bob stays away from Alice and fails to catch the excitation energy when the wave packets pass in front of him. Next, Alice announces her local measurement result to Bob by classical communication. Bob performs a local unitary operation depending on the measurement result. In this process, positive energy is released from the fields to Bob's apparatus performing the unitary operation. In the field systems, wave packets with negative energy are generated around Bob's location. Soon afterwards, the negative-energy wave packets begin to chase after the positive-energy wave packets generated by Alice and form loosely bound states.

  18. Acquiring Information from Wider Scope to Improve Event Extraction

    Science.gov (United States)

    2012-05-01

    2.3.2 Argument Constraint: even if the scenario is well detected, there is no guarantee of identifying the event correctly. […] The corpus was drawn from 2003 newswire, with the same genre and time period as ACE 2005 data, to avoid possible influences of variations in the genre or time period.

  19. An integrated conceptual framework for evaluating and improving 'understanding' in informed consent.

    Science.gov (United States)

    Bossert, Sabine; Strech, Daniel

    2017-10-17

    The development of understandable informed consent (IC) documents has proven to be one of the most important challenges in research with humans as well as in healthcare settings. Therefore, evaluating and improving understanding has been of increasing interest for empirical research on IC. However, several conceptual and practical challenges for the development of understandable IC documents remain unresolved. In this paper, we will outline and systematize some of these challenges. On the basis of our own experiences in empirical user testing of IC documents as well as the relevant literature on understanding in IC, we propose an integrated conceptual model for the development of understandable IC documents. The proposed conceptual model integrates different methods for the participatory improvement of written information, including IC, as well as quantitative methods for measuring understanding in IC. In most IC processes, understandable written information is an important prerequisite for valid IC. To improve the quality of IC documents, a conceptual model for participatory procedures of testing, revising, and retesting can be applied. However, the model presented in this paper needs further theoretical and empirical elaboration and clarification of several conceptual and practical challenges.

  20. The decision to extract: part II. Analysis of clinicians' stated reasons for extraction.

    Science.gov (United States)

    Baumrind, S; Korn, E L; Boyd, R L; Maxwell, R

    1996-04-01

    In a recently reported study, the pretreatment records of each subject in a randomized clinical trial of 148 patients with Class I and Class II malocclusions presenting for orthodontic treatment were evaluated independently by five experienced clinicians (drawn from a panel of 14). The clinicians displayed a higher incidence of agreement with each other than had been expected with respect to the decision as to whether extraction was indicated in each specific case. To improve our understanding of how clinicians made their decisions on whether to extract or not, the records of a subset of 72 subjects randomly selected from the full sample of 148, have now been examined in greater detail. In 21 of these cases, all five clinicians decided to treat without extraction. Among the remaining 51 cases, there were 202 decisions to extract (31 unanimous decision cases and 20 split decision cases). The clinicians cited a total of 469 reasons to support these decisions. Crowding was cited as the first reason in 49% of decisions to extract, followed by incisor protrusion (14%), need for profile correction (8%), Class II severity (5%), and achievement of a stable result (5%). When all the reasons for extraction in each clinician's decision were considered as a group, crowding was cited in 73% of decisions, incisor protrusion in 35%, need for profile correction in 27%, Class II severity in 15% and posttreatment stability in 9%. Tooth size anomalies, midline deviations, reduced growth potential, severity of overjet, maintenance of existing profile, desire to close the bite, periodontal problems, and anticipation of poor cooperation accounted collectively for 12% of the first reasons and were mentioned in 54% of the decisions, implying that these considerations play a consequential, if secondary, role in the decision-making process. All other reasons taken together were mentioned in fewer than 20% of cases. In this sample at least, clinicians focused heavily on appearance