WorldWideScience

Sample records for knowledge discovery approaches

  1. A collaborative filtering-based approach to biomedical knowledge discovery.

    Science.gov (United States)

    Lever, Jake; Gakkhar, Sitanshu; Gottlieb, Michael; Rashnavadi, Tahereh; Lin, Santina; Siu, Celia; Smith, Maia; Jones, Martin R; Krzywinski, Martin; Jones, Steven J M; Wren, Jonathan

    2018-02-15

    The increase in publication rates makes it challenging for an individual researcher to stay abreast of all relevant research in order to find novel research hypotheses. Literature-based discovery methods make use of knowledge graphs built using text mining and can infer future associations between biomedical concepts that will likely occur in new publications. These predictions are a valuable resource for researchers to explore a research topic. Current methods for prediction are based on the local structure of the knowledge graph. A method that uses global knowledge from across the knowledge graph needs to be developed in order to make knowledge discovery a frequently used tool by researchers. We propose an approach based on the singular value decomposition (SVD) that is able to combine data from across the knowledge graph through a reduced representation. Using co-occurrence data extracted from published literature, we show that SVD performs better than the leading methods for scoring discoveries. We also show the diminishing predictive power of knowledge discovery as we compare our predictions with real associations that appear further into the future. Finally, we examine the strengths and weaknesses of the SVD approach against another well-performing system using several predicted associations. All code and results files for this analysis can be accessed at https://github.com/jakelever/knowledgediscovery. Contact: sjones@bcgsc.ca. Supplementary data are available at Bioinformatics online.
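
    As a rough illustration of the idea (not the authors' pipeline; their code is at the GitHub link above), a truncated SVD of a small, hypothetical concept co-occurrence matrix can be used to score concept pairs that have never co-occurred:

    ```python
    import numpy as np

    # Hypothetical concept-by-concept co-occurrence counts (concept names are invented).
    concepts = ["geneA", "diseaseB", "drugC", "pathwayD"]
    cooc = np.array([
        [0., 12., 0., 5.],
        [12., 0., 3., 7.],
        [0., 3., 0., 9.],
        [5., 7., 9., 0.],
    ])

    # Truncated SVD: keep the k largest singular components as a reduced representation.
    k = 2
    U, s, Vt = np.linalg.svd(cooc)
    scores = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # low-rank reconstruction

    # Pairs with a high reconstructed score but no observed co-occurrence are
    # candidate future associations (here, geneA-drugC).
    for i in range(len(concepts)):
        for j in range(i + 1, len(concepts)):
            if cooc[i, j] == 0:
                print(concepts[i], concepts[j], round(scores[i, j], 2))
    ```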

  2. Semantic Approaches for Knowledge Discovery and Retrieval in Biomedicine

    DEFF Research Database (Denmark)

    Wilkowski, Bartlomiej

    This thesis discusses potential applications of semantics to recent literature-based informatics systems to facilitate knowledge discovery, hypothesis generation, and literature retrieval in the domain of biomedicine. The approaches presented herein make use of semantic information extracted...

  3. Network-based approaches to climate knowledge discovery

    Science.gov (United States)

    Budich, Reinhard; Nyberg, Per; Weigel, Tobias

    2011-11-01

    Climate Knowledge Discovery Workshop; Hamburg, Germany, 30 March to 1 April 2011 Do complex networks combined with semantic Web technologies offer the next generation of solutions in climate science? To address this question, a first Climate Knowledge Discovery (CKD) Workshop, hosted by the German Climate Computing Center (Deutsches Klimarechenzentrum (DKRZ)), brought together climate and computer scientists from major American and European laboratories, data centers, and universities, as well as representatives from industry, the broader academic community, and the semantic Web communities. The participants, representing six countries, were concerned with large-scale Earth system modeling and computational data analysis. The motivation for the meeting was the growing problem that climate scientists generate data faster than it can be interpreted and the need to prepare for further exponential data increases. Current analysis approaches are focused primarily on traditional methods, which are best suited for large-scale phenomena and coarse-resolution data sets. The workshop focused on the open discussion of ideas and technologies to provide the next generation of solutions to cope with the increasing data volumes in climate science.

  4. Rule Induction-Based Knowledge Discovery for Energy Efficiency

    OpenAIRE

    Chen, Qipeng; Fan, Zhong; Kaleshi, Dritan; Armour, Simon M D

    2015-01-01

    Rule induction is a practical approach to knowledge discovery. Provided that a problem is properly formulated, rule induction is able to return the knowledge that addresses the goal of this problem in the form of if-then rules. The primary goals of knowledge discovery are prediction and description. Rule-format knowledge representation is easily understandable, enabling users to make decisions. This paper presents the potential of rule induction for energy efficiency. In particular, three rule induct...

  5. Working with Data: Discovering Knowledge through Mining and Analysis; Systematic Knowledge Management and Knowledge Discovery; Text Mining; Methodological Approach in Discovering User Search Patterns through Web Log Analysis; Knowledge Discovery in Databases Using Formal Concept Analysis; Knowledge Discovery with a Little Perspective.

    Science.gov (United States)

    Qin, Jian; Jurisica, Igor; Liddy, Elizabeth D.; Jansen, Bernard J; Spink, Amanda; Priss, Uta; Norton, Melanie J.

    2000-01-01

    These six articles discuss knowledge discovery in databases (KDD). Topics include data mining; knowledge management systems; applications of knowledge discovery; text and Web mining; text mining and information retrieval; user search patterns through Web log analysis; concept analysis; data collection; and data structure inconsistency. (LRW)

  6. Knowledge-Based Topic Model for Unsupervised Object Discovery and Localization.

    Science.gov (United States)

    Niu, Zhenxing; Hua, Gang; Wang, Le; Gao, Xinbo

    Unsupervised object discovery and localization aims to discover dominant object classes and localize all object instances in a given image collection without any supervision. Previous work has attempted to tackle this problem with vanilla topic models, such as latent Dirichlet allocation (LDA). However, in those methods no prior knowledge about the given image collection is exploited to facilitate object discovery. On the other hand, the topic models used in those methods suffer from the topic coherence issue: some inferred topics have no clear meaning, which limits the final performance of object discovery. In this paper, prior knowledge in the form of so-called must-links is exploited from Web images on the Internet. Furthermore, a novel knowledge-based topic model, called LDA with mixture of Dirichlet trees, is proposed to incorporate the must-links into topic modeling for object discovery. In particular, to better deal with the polysemy of visual words, the must-link is redefined so that one must-link constrains only one or some topics instead of all topics, which leads to significantly improved topic coherence. Moreover, the must-links are built and grouped with respect to specific object classes; thus the must-links in our approach are semantic-specific, which allows discriminative prior knowledge from Web images to be exploited more efficiently. Extensive experiments on several data sets validated the efficiency of our proposed approach. It is shown that our method significantly improves topic coherence and outperforms unsupervised methods for object discovery and localization. In addition, compared with discriminative methods, the naturally existing object classes in the given image collection can be subtly discovered, which makes our approach well suited for realistic applications of unsupervised object discovery.

  7. A knowledge discovery approach to urban analysis: Beyoglu Preservation Area as a data mine

    Directory of Open Access Journals (Sweden)

    Ahu Sokmenoglu Sohtorik

    2017-11-01

    Enhancing our knowledge of the complexities of cities in order to empower ourselves to make more informed decisions has always been a challenge for urban research. Recent developments in large-scale computing, together with new techniques and automated tools for data collection and analysis, are opening up promising opportunities for addressing this problem. The main motivation driving this research is how these developments may contribute to urban data analysis. On this basis, the thesis focuses on urban data analysis in order to search for findings that can enhance our knowledge of urban environments, using the generic process of knowledge discovery with data mining. A knowledge discovery process based on data mining is a fully automated or semi-automated process which involves the application of computational tools and techniques to explore the “previously unknown, and potentially useful information” (Witten & Frank, 2005) hidden in large and often complex and multi-dimensional databases. This information can be obtained in the form of correlations amongst variables, data groupings (classes and clusters) or more complex hypotheses (probabilistic rules of co-occurrence, performance vectors of prediction models, etc.). This research targets researchers and practitioners working in the field of urban studies who are interested in quantitative/computational approaches to urban data analysis and specifically aims to engage the interest of architects, urban designers and planners who do not have a background in statistics or in using data mining methods in their work. Accordingly, the overall aim of the thesis is the development of a knowledge discovery approach to urban analysis; a domain-specific adaptation of the generic process of knowledge discovery using data mining enabling the analyst to discover ‘relational urban knowledge’. ‘Relational urban knowledge’ is a term employed in this thesis to refer

  8. Knowledge discovery with classification rules in a cardiovascular dataset.

    Science.gov (United States)

    Podgorelec, Vili; Kokol, Peter; Stiglic, Milojka Molan; Hericko, Marjan; Rozman, Ivan

    2005-12-01

    In this paper we study an evolutionary machine learning approach to data mining and knowledge discovery based on the induction of classification rules. A method for automatic rule induction, called AREX, using evolutionary induction of decision trees and automatic programming is introduced. The proposed algorithm is applied to a cardiovascular dataset consisting of different groups of attributes which should possibly reveal the presence of some specific cardiovascular problems in young patients. A case study is presented that shows the use of AREX for the classification of patients and for discovering possible new medical knowledge from the dataset. The defined knowledge discovery loop comprises a medical expert's assessment of induced rules to drive the evolution of rule sets towards more appropriate solutions. The final result is the discovery of possible new medical knowledge in the field of pediatric cardiology.

  9. Concept Formation in Scientific Knowledge Discovery from a Constructivist View

    Science.gov (United States)

    Peng, Wei; Gero, John S.

    The central goal of scientific knowledge discovery is to learn cause-effect relationships among natural phenomena presented as variables and the consequences of their interactions. Scientific knowledge is normally expressed as scientific taxonomies and qualitative and quantitative laws [1]. This type of knowledge represents intrinsic regularities of the observed phenomena that can be used to explain and predict behaviors of the phenomena. It is a generalization that is abstracted and externalized from a set of contexts and applicable to a broader scope. Scientific knowledge is a type of third-person knowledge, i.e., knowledge that is independent of a specific enquirer. Artificial intelligence approaches, particularly data mining algorithms that are used to identify meaningful patterns from large data sets, are approaches that aim to facilitate the knowledge discovery process [2]. A broad spectrum of algorithms has been developed for addressing classification, associative learning, and clustering problems. However, their linkages to the people who use them have not been adequately explored. Issues relating to supporting the interpretation of the patterns, applying prior knowledge to the data mining process and addressing user interactions remain challenges for building knowledge discovery tools [3]. As a consequence, scientists rely on their experience to formulate problems, evaluate hypotheses, reason about untraceable factors and derive new problems. This type of knowledge, which they have developed during their careers, is called "first-person" knowledge. The formation of scientific knowledge (third-person knowledge) is highly influenced by the enquirer's first-person knowledge construct, which is a result of his or her interactions with the environment. There have been attempts to craft automatic knowledge discovery tools, but these systems are limited in their capabilities to handle the dynamics of personal experience. There are now trends in developing

  10. Knowledge Discovery and Data Mining in Iran's Climatic Researches

    Science.gov (United States)

    Karimi, Mostafa

    2013-04-01

    With advances in measurement technology and data collection, databases keep getting larger, and large databases require powerful tools for data analysis. The iterative process of acquiring knowledge from information obtained through data processing takes various forms in all scientific fields. However, when data volumes are large, traditional methods cannot cope with many of the problems. In recent years, the use of databases in various scientific fields, and especially of atmospheric databases in climatology, has expanded. In addition, the increasing amount of data generated by climate models is a challenge for analyses that aim to extract hidden patterns and knowledge. The approach to this problem developed in recent years uses the process of knowledge discovery and data mining techniques, drawing on concepts from machine learning, artificial intelligence and expert systems. Data mining is an analytical process for mining data of massive volume; its ultimate goal is access to information and, finally, knowledge. Climatology is a science that uses varied data of massive volume, and the goal of climate data mining is to obtain information from varied and massive atmospheric and non-atmospheric data. In fact, knowledge discovery performs these activities in a logical, predetermined and almost automatic process. The goal of this research is to study the use of knowledge discovery and data mining techniques in Iranian climate research. To achieve this goal, a content (descriptive) analysis was carried out, classified by method and issue. The results show that in Iranian climate research clustering, k-means and Ward's method are applied most often, and that in terms of issues, precipitation and atmospheric circulation patterns are introduced most often. Although several studies of geographic and climate issues have been carried out with statistical techniques such as clustering and pattern extraction, due to the nature of statistics and data mining, one cannot say for
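
    The clustering techniques highlighted in this review (k-means partitioning and Ward's hierarchical method) can be illustrated with a minimal sketch on synthetic station precipitation profiles; the data, cluster counts and library choices below are illustrative assumptions, not taken from the reviewed studies:

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from sklearn.cluster import KMeans

    # Hypothetical monthly precipitation profiles (30 stations x 12 months), in mm.
    rng = np.random.default_rng(0)
    profiles = rng.gamma(shape=2.0, scale=20.0, size=(30, 12))

    # k-means partitions the stations into precipitation regimes.
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(profiles)

    # Ward's hierarchical clustering on the same profiles, cut into 4 groups.
    ward_labels = fcluster(linkage(profiles, method="ward"), t=4, criterion="maxclust")

    print("k-means labels:", km.labels_)
    print("Ward labels:   ", ward_labels)
    ```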

  11. ESIP's Earth Science Knowledge Graph (ESKG) Testbed Project: An Automatic Approach to Building Interdisciplinary Earth Science Knowledge Graphs to Improve Data Discovery

    Science.gov (United States)

    McGibbney, L. J.; Jiang, Y.; Burgess, A. B.

    2017-12-01

    Big Earth observation data have been produced, archived and made available online, but discovering the right data in a manner that precisely and efficiently satisfies user needs presents a significant challenge to the Earth Science (ES) community. An emerging trend in the information retrieval community is to utilize knowledge graphs to assist users in quickly finding desired information across knowledge sources; this is particularly prevalent within the fields of social media and complex multimodal information processing, to name but a few. However, building a domain-specific knowledge graph is labour-intensive and hard to keep up-to-date. In this work, we update our progress on the Earth Science Knowledge Graph (ESKG) project, an ESIP-funded testbed project which provides an automatic approach to building a dynamic knowledge graph for ES to improve interdisciplinary data discovery by leveraging implicit, latent existing knowledge present across several U.S. Federal Agencies, e.g. NASA, NOAA and USGS. ESKG strengthens ties between observations and user communities by: 1) developing a knowledge graph derived from various sources, e.g. Web pages, Web services, etc., via natural language processing and knowledge extraction techniques; 2) allowing users to traverse, explore, query, reason over and navigate ES data via knowledge graph interaction. ESKG has the potential to revolutionize the way in which ES communities interact with ES data in the open world through the entity, spatial and temporal linkages and characteristics that make it up. This project advances the ESIP collaboration areas of Discovery and Semantic Technologies by putting graph information right at our fingertips in an interactive, modern manner and reducing the effort of constructing ontologies. To demonstrate the ESKG concept, we will demonstrate use of our framework across NASA JPL's PO.DAAC, NOAA's Earth Observation Requirements Evaluation System (EORES) and various USGS
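
    As a toy illustration of the knowledge-graph idea (the triples and predicate names below are invented, not output of the actual ESKG extraction pipeline), extracted subject-predicate-object statements can be loaded into a directed graph and traversed to connect datasets, instruments and agencies:

    ```python
    import networkx as nx

    # Hypothetical triples, as might be extracted from agency web pages and service
    # descriptions by an NLP/knowledge-extraction step.
    triples = [
        ("PO.DAAC", "hosts", "Sea Surface Temperature dataset"),
        ("Sea Surface Temperature dataset", "observedBy", "MODIS"),
        ("MODIS", "operatedBy", "NASA"),
        ("EORES", "maintainedBy", "NOAA"),
        ("Sea Surface Temperature dataset", "relevantTo", "El Nino studies"),
    ]

    g = nx.DiGraph()
    for subj, pred, obj in triples:
        g.add_edge(subj, obj, predicate=pred)

    # A user query can then be answered by traversing the graph, e.g. listing every
    # entity reachable from a dataset node.
    print(sorted(nx.descendants(g, "Sea Surface Temperature dataset")))
    ```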

  12. Integrating Genomic Data Sets for Knowledge Discovery: An Informed Approach to Management of Captive Endangered Species

    Directory of Open Access Journals (Sweden)

    Kristopher J. L. Irizarry

    2016-01-01

    Many endangered captive populations exhibit reduced genetic diversity resulting in health issues that impact reproductive fitness and quality of life. Numerous cost-effective genomic sequencing and genotyping technologies provide unparalleled opportunity for incorporating genomics knowledge in management of endangered species. Genomic data, such as sequence data, transcriptome data, and genotyping data, provide critical information about a captive population that, when leveraged correctly, can be utilized to maximize population genetic variation while simultaneously reducing unintended introduction or propagation of undesirable phenotypes. Current approaches aimed at managing endangered captive populations utilize species survival plans (SSPs) that rely upon mean kinship estimates to maximize genetic diversity while simultaneously avoiding artificial selection in the breeding program. However, as genomic resources increase for each endangered species, the potential knowledge available for management also increases. Unlike model organisms in which considerable scientific resources are used to experimentally validate genotype-phenotype relationships, endangered species typically lack the necessary sample sizes and economic resources required for such studies. Even so, in the absence of experimentally verified genetic discoveries, genomics data still provides value. In fact, bioinformatics and comparative genomics approaches offer mechanisms for translating these raw genomics data sets into integrated knowledge that enables an informed approach to endangered species management.

  13. Integrating Genomic Data Sets for Knowledge Discovery: An Informed Approach to Management of Captive Endangered Species.

    Science.gov (United States)

    Irizarry, Kristopher J L; Bryant, Doug; Kalish, Jordan; Eng, Curtis; Schmidt, Peggy L; Barrett, Gini; Barr, Margaret C

    2016-01-01

    Many endangered captive populations exhibit reduced genetic diversity resulting in health issues that impact reproductive fitness and quality of life. Numerous cost-effective genomic sequencing and genotyping technologies provide unparalleled opportunity for incorporating genomics knowledge in management of endangered species. Genomic data, such as sequence data, transcriptome data, and genotyping data, provide critical information about a captive population that, when leveraged correctly, can be utilized to maximize population genetic variation while simultaneously reducing unintended introduction or propagation of undesirable phenotypes. Current approaches aimed at managing endangered captive populations utilize species survival plans (SSPs) that rely upon mean kinship estimates to maximize genetic diversity while simultaneously avoiding artificial selection in the breeding program. However, as genomic resources increase for each endangered species, the potential knowledge available for management also increases. Unlike model organisms in which considerable scientific resources are used to experimentally validate genotype-phenotype relationships, endangered species typically lack the necessary sample sizes and economic resources required for such studies. Even so, in the absence of experimentally verified genetic discoveries, genomics data still provides value. In fact, bioinformatics and comparative genomics approaches offer mechanisms for translating these raw genomics data sets into integrated knowledge that enables an informed approach to endangered species management.

  14. An Ontology-supported Approach for Automatic Chaining of Web Services in Geospatial Knowledge Discovery

    Science.gov (United States)

    di, L.; Yue, P.; Yang, W.; Yu, G.

    2006-12-01

    Recent developments in the geospatial semantic Web have shown promise for automatic discovery, access, and use of geospatial Web services to quickly and efficiently solve particular application problems. With semantic Web technology, it is highly feasible to construct intelligent geospatial knowledge systems that can provide answers to many geospatial application questions. A key challenge in constructing such intelligent knowledge systems is to automate the creation of a chain or process workflow that involves multiple services and highly diversified data and can generate the answer to a user's specific question. This presentation discusses an approach for automating the composition of geospatial Web service chains by employing geospatial semantics described by geospatial ontologies. It shows how ontology-based geospatial semantics are used to enable the automatic discovery, mediation, and chaining of geospatial Web services. OWL-S is used to represent the geospatial semantics of each individual Web service, the type of service it belongs to, and the types of data it can handle. The hierarchy and classification of service types are described in the service ontology; the hierarchy and classification of data types are presented in the data ontology. To answer users' geospatial questions, an Artificial Intelligence (AI) planning algorithm is used to construct the service chain from the service and data logic expressed in the ontologies. The chain can be expressed as a graph with nodes representing services and connection weights representing degrees of semantic matching between nodes. The graph is a visual representation of the logical geo-processing path for answering the user's question, and it can be instantiated into a physical service workflow that is executed to generate the answer. A prototype system, which includes real-world geospatial applications, is implemented to demonstrate the concept and approach.
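
    The planning step can be sketched with a toy forward-chaining planner over hypothetical service descriptions (service and data-type names are invented); a real system would reason over OWL-S descriptions and degrees of semantic matching rather than exact type names:

    ```python
    # Each hypothetical service is described by the data types it consumes and produces,
    # loosely mirroring what an OWL-S description would encode.
    services = {
        "ReprojectDEM":   {"in": {"RawDEM"},               "out": {"ProjectedDEM"}},
        "ComputeSlope":   {"in": {"ProjectedDEM"},         "out": {"SlopeMap"}},
        "FloodRiskModel": {"in": {"SlopeMap", "Rainfall"}, "out": {"FloodRiskMap"}},
    }

    def plan(goal, have):
        """Greedy forward chaining: repeatedly apply any service whose inputs are
        satisfied until the goal data type is produced (or no progress is made).
        A production planner would also prune services the goal does not need."""
        chain, have = [], set(have)
        while goal not in have:
            progress = False
            for name, svc in services.items():
                if name not in chain and svc["in"] <= have:
                    chain.append(name)
                    have |= svc["out"]
                    progress = True
            if not progress:
                return None  # goal unreachable from the available data
        return chain

    print(plan("FloodRiskMap", {"RawDEM", "Rainfall"}))
    # -> ['ReprojectDEM', 'ComputeSlope', 'FloodRiskModel']
    ```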

  15. RHSEG and Subdue: Background and Preliminary Approach for Combining these Technologies for Enhanced Image Data Analysis, Mining and Knowledge Discovery

    Science.gov (United States)

    Tilton, James C.; Cook, Diane J.

    2008-01-01

    Under a project recently selected for funding by NASA's Science Mission Directorate under the Applied Information Systems Research (AISR) program, Tilton and Cook will design and implement the integration of the Subdue graph based knowledge discovery system, developed at the University of Texas Arlington and Washington State University, with image segmentation hierarchies produced by the RHSEG software, developed at NASA GSFC, and perform pilot demonstration studies of data analysis, mining and knowledge discovery on NASA data. Subdue represents a method for discovering substructures in structural databases. Subdue is devised for general-purpose automated discovery, concept learning, and hierarchical clustering, with or without domain knowledge. Subdue was developed by Cook and her colleague, Lawrence B. Holder. For Subdue to be effective in finding patterns in imagery data, the data must be abstracted up from the pixel domain. An appropriate abstraction of imagery data is a segmentation hierarchy: a set of several segmentations of the same image at different levels of detail in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. The RHSEG program, a recursive approximation to a Hierarchical Segmentation approach (HSEG), can produce segmentation hierarchies quickly and effectively for a wide variety of images. RHSEG and HSEG were developed at NASA GSFC by Tilton. In this presentation we provide background on the RHSEG and Subdue technologies and present a preliminary analysis on how RHSEG and Subdue may be combined to enhance image data analysis, mining and knowledge discovery.

  16. Rough – Granular Computing knowledge discovery models

    Directory of Open Access Journals (Sweden)

    Mohammed M. Eissa

    2016-11-01

    The medical domain has become one of the most important areas of research owing to the richness of huge amounts of medical information about the symptoms of diseases and how to distinguish between them in order to diagnose them correctly. Knowledge discovery models play a vital role in the refinement and mining of medical indicators to help medical experts settle on treatment decisions. This paper introduces four hybrid Rough-Granular Computing knowledge discovery models based on Rough Set Theory, Artificial Neural Networks, Genetic Algorithms and Rough Mereology Theory. A comparative analysis of various knowledge discovery models that use different knowledge discovery techniques for data pre-processing, reduction, and data mining supports medical experts in extracting the main medical indicators, reducing misdiagnosis rates and improving decision-making for medical diagnosis and treatment. The proposed models utilized two medical datasets: a Coronary Heart Disease dataset and a Hepatitis C Virus dataset. The main purpose of this paper was to explore and evaluate the proposed models, based on the Granular Computing methodology, for knowledge extraction according to different evaluation criteria for the classification of medical datasets. Another purpose is to enhance the framework of KDD processes for supervised learning using the Granular Computing methodology.
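
    As a minimal sketch of one rough-set building block used in such models (the decision table below is invented, not the paper's datasets), reducts, i.e. minimal attribute subsets that preserve the classification, can be found by brute force:

    ```python
    from itertools import combinations

    # Hypothetical decision table: each row is (fever, fatigue, jaundice, diagnosis),
    # with the last value as the decision attribute.
    table = [
        (1, 1, 0, "CHD"),
        (1, 0, 1, "HCV"),
        (0, 1, 1, "HCV"),
        (0, 0, 0, "healthy"),
    ]
    attrs = [0, 1, 2]

    def consistent(subset):
        """True if rows identical on `subset` always share the same decision."""
        seen = {}
        for row in table:
            key = tuple(row[a] for a in subset)
            if seen.setdefault(key, row[-1]) != row[-1]:
                return False
        return True

    # A reduct is a minimal attribute subset that keeps the table consistent.
    reducts = []
    for k in range(1, len(attrs) + 1):
        for subset in combinations(attrs, k):
            if consistent(subset) and not any(set(r) <= set(subset) for r in reducts):
                reducts.append(subset)
    print("reducts:", reducts)
    ```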

  17. A knowledge discovery in databases approach for industrial microgrid planning

    DEFF Research Database (Denmark)

    Gamarra, Carlos; Guerrero, Josep M.; Montero, Eduardo

    2016-01-01

    The progressive application of Information and Communication Technologies to industrial processes has increased the amount of data gathered by manufacturing companies during last decades. Nowadays some standardized management systems, such as ISO 50.001 and ISO 14.001, exploit these data in order...... sustainable and proactive microgrid which allows identifying, designing and developing energy efficiency strategies at supply, management and energy use levels. In this context, the expansion of Internet of Things and Knowledge Discovery in Databases techniques will drive changes in current microgrid planning...

  18. Mandolin: A Knowledge Discovery Framework for the Web of Data

    OpenAIRE

    Soru, Tommaso; Esteves, Diego; Marx, Edgard; Ngomo, Axel-Cyrille Ngonga

    2017-01-01

    Markov Logic Networks join probabilistic modeling with first-order logic and have been shown to integrate well with the Semantic Web foundations. While several approaches have been devised to tackle the subproblems of rule mining, grounding, and inference, no comprehensive workflow has been proposed so far. In this paper, we fill this gap by introducing a framework called Mandolin, which implements a workflow for knowledge discovery specifically on RDF datasets. Our framework imports knowledg...

  19. Discovery simulations and the assessment of intuitive knowledge

    NARCIS (Netherlands)

    Swaak, Janine; de Jong, Anthonius J.M.

    2001-01-01

    The objective of the present work is to have a closer look at the relations between the features of discovery simulations, the learning processes elicited, the knowledge that results, and the methods used to measure this acquired knowledge. It is argued that discovery simulations are ‘rich’, have a

  20. A Hybrid Information Mining Approach for Knowledge Discovery in Cardiovascular Disease (CVD

    Directory of Open Access Journals (Sweden)

    Stefania Pasanisi

    2018-04-01

    The healthcare domain is usually perceived as “information rich” yet “knowledge poor”. Nowadays, an unprecedented effort is underway to increase the use of business intelligence techniques to solve this problem. Heart disease (HD) is a major cause of mortality in modern society. This paper analyzes the risk factors that have been identified in cardiovascular disease (CVD) surveillance systems. The Heart Care study identifies attributes related to CVD risk (gender, age, smoking habit, etc.) and other dependent variables that include a specific form of CVD (diabetes, hypertension, cardiac disease, etc.). In this paper, we combine Clustering, Association Rules, and Neural Networks for the assessment of heart-event-related risk factors, targeting the reduction of CVD risk. With the use of the K-means algorithm, significant groups of patients are found. Then, the Apriori algorithm is applied in order to understand the kinds of relations between the attributes within the dataset, first looking at the whole dataset and then refining the results through the subsets defined by the clusters. Finally, both results allow us to better define patients’ characteristics in order to make predictions about CVD risk with a Multilayer Perceptron Neural Network. The results obtained with the hybrid information mining approach indicate that it is an effective strategy for knowledge discovery concerning chronic diseases, particularly for CVD risk.
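
    A compact sketch of such a hybrid pipeline, on synthetic patient records and with illustrative parameter choices (this is not the Heart Care study code), might combine scikit-learn and mlxtend as follows:

    ```python
    import numpy as np
    import pandas as pd
    from mlxtend.frequent_patterns import apriori, association_rules
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPClassifier

    # Synthetic binary risk factors plus a CVD outcome label (illustrative only).
    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "smoker":       rng.integers(0, 2, 200),
        "hypertension": rng.integers(0, 2, 200),
        "diabetes":     rng.integers(0, 2, 200),
        "age_over_60":  rng.integers(0, 2, 200),
    })
    df["cvd"] = ((df["smoker"] & df["hypertension"]) | df["diabetes"]).astype(int)
    risk = df.drop(columns="cvd")

    # 1) K-means groups patients into coarse risk profiles.
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(risk)

    # 2) Apriori mines co-occurring risk factors, here within one cluster.
    itemsets = apriori(risk[clusters == 0].astype(bool), min_support=0.15, use_colnames=True)
    rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)

    # 3) A multilayer perceptron predicts CVD risk from the raw attributes.
    mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    mlp.fit(risk, df["cvd"])

    print(rules[["antecedents", "consequents", "confidence"]].head())
    print("training accuracy: %.2f" % mlp.score(risk, df["cvd"]))
    ```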

  1. Development of Scientific Approach Based on Discovery Learning Module

    Science.gov (United States)

    Ellizar, E.; Hardeli, H.; Beltris, S.; Suharni, R.

    2018-04-01

    The scientific approach is a learning process designed to make students actively construct their own knowledge through the stages of the scientific method. The scientific approach in the learning process can be implemented using learning modules, and one suitable learning model is discovery-based learning. Discovery learning is a learning model in which valuable things are learned through various activities, such as observation, experience, and reasoning. In practice, students' activity in constructing their own knowledge was not optimal, because the available learning modules were not in line with the scientific approach. The purpose of this study was to develop a scientific-approach, discovery-based learning module on acids and bases as well as on electrolyte and non-electrolyte solutions. The development of these chemistry modules used the Plomp model with three main stages: preliminary research, the prototyping stage, and the assessment stage. The subjects of this research were 10th- and 11th-grade students of a senior high school (SMAN 2 Padang). Validity was assessed by expert chemistry lecturers and teachers. The practicality of the modules was tested through a questionnaire. Effectiveness was tested through an experimental procedure comparing student achievement between experiment and control groups. Based on the findings, it can be concluded that the developed scientific-approach, discovery-based learning module significantly improves students' learning of acid-base and electrolyte-solution topics. The results of the data analysis indicated that the chemistry module was valid in content, construct, and presentation. The chemistry module also has a good level of practicality and is in accordance with the available time. The chemistry module was also effective, because it helps students understand the content of the learning material, as shown by the students' learning results. Based on these results, it can be concluded that the chemistry module based on

  2. Scientific Knowledge Discovery in Complex Semantic Networks of Geophysical Systems

    Science.gov (United States)

    Fox, P.

    2012-04-01

    The vast majority of explorations of the Earth's systems are limited in their ability to effectively explore the most important (often most difficult) problems because they are forced to interconnect at the data-element, or syntactic, level rather than at a higher scientific, or semantic, level. Recent successes in the application of complex network theory and algorithms to climate data raise expectations that more general graph-based approaches offer the opportunity for new discoveries. Over the past ~5 years in the natural sciences there has been substantial progress in providing both specialists and non-specialists the ability to describe, in machine-readable form, geophysical quantities and the relations among them in meaningful and natural ways, effectively breaking the prior syntax barrier. The corresponding open-world semantics and reasoning provide higher-level interconnections: semantics provided around the data structures, semantically equipped tools, and semantically aware interfaces between science application components allow for discovery at the knowledge level. More recently, formal semantic approaches to continuous and aggregate physical processes are beginning to show promise and are soon likely to be ready to apply to geoscientific systems. To illustrate these opportunities, this presentation gives two application examples featuring domain vocabulary (ontology) and property relations (named and typed edges in the graphs). The first is a climate knowledge discovery pilot encoding and exploring CMIP5 catalog information, with the eventual goal of encoding and exploring CMIP5 data. The second is a multi-stakeholder knowledge network for integrated assessments in marine ecosystems, where the data are highly interdisciplinary.

  3. Drive Cost Reduction, Increase Innovation and Mitigate Risk with Advanced Knowledge Discovery Tools Designed to Unlock and Leverage Prior Knowledge

    International Nuclear Information System (INIS)

    Mitchell, I.

    2016-01-01

    The nuclear industry is knowledge-intensive and includes a diverse number of stakeholders. Much of this knowledge is at risk as engineers, technicians and project professionals retire, leaving a widening skills and information gap. This knowledge is critical in an increasingly complex environment, with information from past projects often buried in decades-old, non-integrated enterprise systems. Engineers can spend 40% or more of their time searching for answers across the enterprise instead of solving problems. The inability to access trusted industry knowledge results in increased risk and expense. Advanced knowledge discovery technologies slash research times by as much as 75% and accelerate innovation and problem solving by giving technical professionals access to the information they need, in the context of the problems they are trying to solve. Unlike traditional knowledge management approaches, knowledge discovery tools powered by semantic search technologies are adept at uncovering answers in unstructured data and require no tagging, organization or moving of data, meaning a smaller IT footprint and faster time-to-knowledge. This session will highlight best-in-class knowledge discovery technologies, content, and strategies to give nuclear industry organizations the ability to leverage the corpus of enterprise knowledge into the future. (author)

  4. Data Mining and Knowledge Discovery via Logic-Based Methods

    CERN Document Server

    Triantaphyllou, Evangelos

    2010-01-01

    There are many approaches to data mining and knowledge discovery (DM&KD), including neural networks, closest neighbor methods, and various statistical methods. This monograph, however, focuses on the development and use of a novel approach, based on mathematical logic, that the author and his research associates have worked on over the last 20 years. The methods presented in the book deal with key DM&KD issues in an intuitive manner and in a natural sequence. Compared to other DM&KD methods, those based on mathematical logic offer a direct and often intuitive approach for extracting easily int

  5. Coupling Visualization and Data Analysis for Knowledge Discovery from Multi-dimensional Scientific Data

    International Nuclear Information System (INIS)

    Rubel, Oliver; Ahern, Sean; Bethel, E. Wes; Biggin, Mark D.; Childs, Hank; Cormier-Michel, Estelle; DePace, Angela; Eisen, Michael B.; Fowlkes, Charless C.; Geddes, Cameron G.R.; Hagen, Hans; Hamann, Bernd; Huang, Min-Yu; Keranen, Soile V.E.; Knowles, David W.; Hendriks, Chris L. Luengo; Malik, Jitendra; Meredith, Jeremy; Messmer, Peter; Prabhat; Ushizima, Daniela; Weber, Gunther H.; Wu, Kesheng

    2010-01-01

    Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management, supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach.

  6. Knowledge-based analysis of microarrays for the discovery of transcriptional regulation relationships.

    Science.gov (United States)

    Seok, Junhee; Kaushal, Amit; Davis, Ronald W; Xiao, Wenzhong

    2010-01-18

    The large amount of high-throughput genomic data has facilitated the discovery of the regulatory relationships between transcription factors and their target genes. While early methods for discovery of transcriptional regulation relationships from microarray data often focused on the high-throughput experimental data alone, more recent approaches have explored the integration of external knowledge bases of gene interactions. In this work, we develop an algorithm that provides improved performance in the prediction of transcriptional regulatory relationships by supplementing the analysis of microarray data with a new method of integrating information from an existing knowledge base. Using a well-known dataset of yeast microarrays and the Yeast Proteome Database, a comprehensive collection of known information of yeast genes, we show that knowledge-based predictions demonstrate better sensitivity and specificity in inferring new transcriptional interactions than predictions from microarray data alone. We also show that comprehensive, direct and high-quality knowledge bases provide better prediction performance. Comparison of our results with ChIP-chip data and growth fitness data suggests that our predicted genome-wide regulatory pairs in yeast are reasonable candidates for follow-up biological verification. High quality, comprehensive, and direct knowledge bases, when combined with appropriate bioinformatic algorithms, can significantly improve the discovery of gene regulatory relationships from high throughput gene expression data.

  7. Developing integrated crop knowledge networks to advance candidate gene discovery.

    Science.gov (United States)

    Hassani-Pak, Keywan; Castellote, Martin; Esch, Maria; Hindle, Matthew; Lysenko, Artem; Taubert, Jan; Rawlings, Christopher

    2016-12-01

    The chances of raising crop productivity to enhance global food security would be greatly improved if we had a complete understanding of all the biological mechanisms that underpinned traits such as crop yield, disease resistance or nutrient and water use efficiency. With more crop genomes emerging all the time, we are nearer having the basic information, at the gene-level, to begin assembling crop gene catalogues and using data from other plant species to understand how the genes function and how their interactions govern crop development and physiology. Unfortunately, the task of creating such a complete knowledge base of gene functions, interaction networks and trait biology is technically challenging because the relevant data are dispersed in myriad databases in a variety of data formats with variable quality and coverage. In this paper we present a general approach for building genome-scale knowledge networks that provide a unified representation of heterogeneous but interconnected datasets to enable effective knowledge mining and gene discovery. We describe the datasets and outline the methods, workflows and tools that we have developed for creating and visualising these networks for the major crop species, wheat and barley. We present the global characteristics of such knowledge networks and with an example linking a seed size phenotype to a barley WRKY transcription factor orthologous to TTG2 from Arabidopsis, we illustrate the value of integrated data in biological knowledge discovery. The software we have developed (www.ondex.org) and the knowledge resources (http://knetminer.rothamsted.ac.uk) we have created are all open-source and provide a first step towards systematic and evidence-based gene discovery in order to facilitate crop improvement.

  8. Semi-automated knowledge discovery: identifying and profiling human trafficking

    Science.gov (United States)

    Poelmans, Jonas; Elzinga, Paul; Ignatov, Dmitry I.; Kuznetsov, Sergei O.

    2012-11-01

    We propose an iterative and human-centred knowledge discovery methodology based on formal concept analysis. The proposed approach recognizes the important role of the domain expert in mining real-world enterprise applications and makes use of specific domain knowledge, including human intelligence and domain-specific constraints. Our approach was empirically validated at the Amsterdam-Amstelland police to identify suspects and victims of human trafficking in 266,157 suspicious activity reports. Based on guidelines of the Attorney Generals of the Netherlands, we first defined multiple early warning indicators that were used to index the police reports. Using concept lattices, we revealed numerous unknown human trafficking and loverboy suspects. In-depth investigation by the police resulted in a confirmation of their involvement in illegal activities, with actual arrests being made. Our human-centred approach was embedded into operational policing practice and is now successfully used on a daily basis to cope with the vastly growing amount of unstructured information.
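
    As a small, self-contained illustration of the formal concept analysis step (the report contents and indicator names below are invented, not the actual police data or Dutch guidelines), formal concepts can be enumerated by closing attribute subsets:

    ```python
    from itertools import combinations

    # Hypothetical police reports indexed by early-warning indicators.
    reports = {
        "r1": {"multiple_passports", "cash_payments"},
        "r2": {"multiple_passports", "escorted_travel"},
        "r3": {"cash_payments", "escorted_travel", "multiple_passports"},
        "r4": {"escorted_travel"},
    }
    all_attrs = set().union(*reports.values())

    def extent(attrs):
        """Reports containing every indicator in `attrs`."""
        return {r for r, a in reports.items() if attrs <= a}

    def intent(objs):
        """Indicators shared by every report in `objs`."""
        return set(all_attrs) if not objs else set.intersection(*(reports[r] for r in objs))

    # Enumerate formal concepts (closed report/indicator pairs), the nodes of a concept lattice.
    concepts = set()
    for k in range(len(all_attrs) + 1):
        for combo in combinations(sorted(all_attrs), k):
            e = frozenset(extent(set(combo)))
            concepts.add((e, frozenset(intent(e))))

    for e, i in sorted(concepts, key=lambda c: -len(c[0])):
        print(sorted(e), "share", sorted(i))
    ```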

  9. Knowledge discovery in the prediction of bankruptcy

    NARCIS (Netherlands)

    Almeida, R.J.; Vieira, S.M.; Milea, D.V.; Kaymak, U.; Costa Sousa, da J.M.; Carvalho, J.P.; Dubois, D.; Kaymak, U.

    2009-01-01

    Knowledge discovery in databases (KDD) is the process of discovering interesting knowledge from large amounts of data. However, real-world datasets have problems such as incompleteness, redundancy, inconsistency, noise, etc. All these problems affect the performance of data mining algorithms. Thus,

  10. Knowledge discovery from data streams

    CERN Document Server

    Gama, Joao

    2010-01-01

    Since the beginning of the Internet age and the increased use of ubiquitous computing devices, the large volume and continuous flow of distributed data have imposed new constraints on the design of learning algorithms. Exploring how to extract knowledge structures from evolving and time-changing data, Knowledge Discovery from Data Streams presents a coherent overview of state-of-the-art research in learning from data streams.The book covers the fundamentals that are imperative to understanding data streams and describes important applications, such as TCP/IP traffic, GPS data, sensor networks,

  11. Knowledge management and Discovery for advanced Enterprise Knowledge Engineering

    OpenAIRE

    Novi, Daniele

    2014-01-01

    The research work addresses mainly issues related to the adoption of models, methodologies and knowledge management tools that implement a pervasive use of the latest technologies in the area of Semantic Web for the improvement of business processes and Enterprise 2.0 applications. The first phase of the research has focused on the study and analysis of the state of the art and the problems of Knowledge Discovery database, paying more attention to the data mining systems. Th...

  12. KNODWAT: a scientific framework application for testing knowledge discovery methods for the biomedical domain.

    Science.gov (United States)

    Holzinger, Andreas; Zupan, Mario

    2013-06-13

    Professionals in the biomedical domain are confronted with an increasing mass of data. Developing methods to assist professional end users in the field of knowledge discovery to identify, extract, visualize and understand useful information from these huge amounts of data is a major challenge. However, there are so many diverse methods and methodologies available that, for biomedical researchers who are inexperienced in the use of even relatively popular knowledge discovery methods, it can be very difficult to select the most appropriate method for their particular research problem. A web application called KNODWAT (KNOwledge Discovery With Advanced Techniques) has been developed using Java on the Spring Framework 3.1, following a user-centered approach. The software runs on Java 1.6 and above and requires a web server such as Apache Tomcat and a database server such as MySQL Server. For frontend functionality and styling, Twitter Bootstrap was used, as well as jQuery for interactive user interface operations. The framework presented is user-centric, highly extensible and flexible. Since it enables methods to be tested on existing data to assess their suitability and performance, it is especially suitable for inexperienced biomedical researchers who are new to the field of knowledge discovery and data mining. For testing purposes, two algorithms, CART and C4.5, were implemented using the WEKA data mining framework.
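
    KNODWAT itself wraps WEKA's implementations; a rough, hypothetical analogue of the "test a method on existing data first" idea, using a CART-style decision tree in scikit-learn on a public dataset, might look like this:

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    # Estimate cross-validated accuracy of a CART-style tree on an existing biomedical
    # dataset before committing to the method for a new research problem.
    X, y = load_breast_cancer(return_X_y=True)
    tree = DecisionTreeClassifier(criterion="gini", max_depth=4, random_state=0)
    scores = cross_val_score(tree, X, y, cv=5)
    print("mean CV accuracy: %.3f" % scores.mean())
    ```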

  13. 08471 Report -- Geographic Privacy-Aware Knowledge Discovery and Delivery

    OpenAIRE

    Kuijpers, Bart; Pedreschi, Dino; Saygin, Yucel; Spaccapietra, Stefano

    2009-01-01

    The Dagstuhl-Seminar on Geographic Privacy-Aware Knowledge Discovery and Delivery was held during 16 - 21 November, 2008, with 37 participants registered from various countries from Europe, as well as other parts of the world such as United States, Canada, Argentina, and Brazil. Issues in the newly emerging area of geographic knowledge discovery with a privacy perspective were discussed in a week to consolidate some of the research questions. The Dagstuhl program included...

  14. The relation between prior knowledge and students' collaborative discovery learning processes.

    NARCIS (Netherlands)

    Gijlers, Aaltje H.; de Jong, Anthonius J.M.

    2005-01-01

    In this study we investigate how prior knowledge influences knowledge development during collaborative discovery learning. Fifteen dyads of students (pre-university education, 15-16 years old) worked on a discovery learning task in the physics field of kinematics. The (face-to-face) communication

  15. Knowledge discovery in variant databases using inductive logic programming.

    Science.gov (United States)

    Nguyen, Hoan; Luu, Tien-Dao; Poch, Olivier; Thompson, Julie D

    2013-01-01

    Understanding the effects of genetic variation on the phenotype of an individual is a major goal of biomedical research, especially for the development of diagnostics and effective therapeutic solutions. In this work, we describe the use of a recent knowledge discovery from database (KDD) approach using inductive logic programming (ILP) to automatically extract knowledge about human monogenic diseases. We extracted background knowledge from MSV3d, a database of all human missense variants mapped to 3D protein structure. In this study, we identified 8,117 mutations in 805 proteins with known three-dimensional structures that were known to be involved in human monogenic disease. Our results help to improve our understanding of the relationships between structural, functional or evolutionary features and deleterious mutations. Our inferred rules can also be applied to predict the impact of any single amino acid replacement on the function of a protein. The interpretable rules are available at http://decrypthon.igbmc.fr/kd4v/.

  16. Engineering Application Way of Faults Knowledge Discovery Based on Rough Set Theory

    International Nuclear Information System (INIS)

    Zhao Rongzhen; Deng Linfeng; Li Chao

    2011-01-01

    To address the knowledge acquisition problem of intelligent decision-making technology in the mechanical industry, the use of Rough Set Theory (RST) as a tool for solving the problem was researched, and a way to realize knowledge discovery in engineering applications is explored. A case study extracting knowledge rules from a concise data table reveals some important information: knowledge discovery for mechanical fault diagnosis is a complicated systems engineering project. The first and most important task is to preserve the fault knowledge in a table in data form; the data must be derived from the plant site and should be as concise as possible. Only on the basis of fault knowledge data obtained in this way can the methods and algorithms of RST be used to process the data and extract knowledge rules from them. The conclusion is that fault knowledge discovery along this route is a bottom-up process, but developing advanced fault diagnosis technology in this way is a large-scale, long-term knowledge engineering project, in which every step should be designed carefully according to the tool's requirements. This is the basic guarantee that the knowledge rules obtained have engineering application value and that the studies have scientific significance. Accordingly, a general framework is designed for engineering applications along the route of developing fault knowledge discovery technology.

  17. Reconstructing Sessions from Data Discovery and Access Logs to Build a Semantic Knowledge Base for Improving Data Discovery

    Directory of Open Access Journals (Sweden)

    Yongyao Jiang

    2016-04-01

    Big geospatial data are archived and made available through online web discovery and access. However, finding the right data for scientific research and application development is still a challenge. This paper aims to improve data discovery by mining user knowledge from log files. Specifically, this paper focuses on user web session reconstruction as a critical step for extracting usage patterns. However, reconstructing user sessions from raw web logs has always been difficult, as a session identifier tends to be missing in most data portals. To address this problem, we propose two session identification methods: a time-clustering-based and a time-referrer-based method. We also present the workflow of session reconstruction and discuss the approach of selecting appropriate thresholds for the relevant steps in the workflow. The proposed session identification methods and workflow are shown to be able to extract data access patterns for further analyses of user behavior and to improve data discovery through more relevant data ranking, suggestion, and navigation.
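
    A minimal sketch of time-based session identification, with an assumed 30-minute timeout and invented log entries (the time-referrer variant adds a referrer check, as noted in the comment):

    ```python
    from datetime import datetime, timedelta

    # Hypothetical web-log entries: (client IP, timestamp, requested URL, referrer).
    log = [
        ("10.0.0.5", "2016-04-01 10:00:01", "/search?q=sst", "-"),
        ("10.0.0.5", "2016-04-01 10:05:30", "/dataset/42", "/search?q=sst"),
        ("10.0.0.5", "2016-04-01 11:40:00", "/search?q=wind", "-"),
        ("10.0.0.5", "2016-04-01 11:41:10", "/dataset/99", "/search?q=wind"),
    ]

    TIMEOUT = timedelta(minutes=30)  # assumed inactivity threshold

    def reconstruct_sessions(entries):
        """Time-based session identification: a gap longer than TIMEOUT starts a new session.
        A real implementation would first group entries by IP/user agent; the
        time-referrer variant also starts a new session when the referrer does not
        point back into the current session."""
        sessions, current, last_time = [], [], None
        for ip, ts, url, referrer in entries:
            t = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")
            if last_time is not None and t - last_time > TIMEOUT:
                sessions.append(current)
                current = []
            current.append(url)
            last_time = t
        if current:
            sessions.append(current)
        return sessions

    print(reconstruct_sessions(log))
    # -> [['/search?q=sst', '/dataset/42'], ['/search?q=wind', '/dataset/99']]
    ```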

  18. 4th International conference on Knowledge Discovery and Data Mining

    CERN Document Server

    Knowledge Discovery and Data Mining

    2012-01-01

    The volume includes a set of selected papers extended and revised from the 4th International Conference on Knowledge Discovery and Data Mining, March 1-2, 2011, Macau, China. The volume provides a forum for researchers, educators, engineers, and government officials involved in the general areas of knowledge discovery, data mining and learning to disseminate their latest research results and exchange views on the future research directions of these fields. 108 high-quality papers are included in the volume.

  19. SemaTyP: a knowledge graph based literature mining method for drug discovery.

    Science.gov (United States)

    Sang, Shengtian; Yang, Zhihao; Wang, Lei; Liu, Xiaoxia; Lin, Hongfei; Wang, Jian

    2018-05-30

    Drug discovery is the process through which potential new medicines are identified. High-throughput screening and computer-aided drug discovery/design are currently the two main drug discovery methods, and they have successfully discovered a series of drugs. However, the development of new drugs is still an extremely time-consuming and expensive process. Biomedical literature contains important clues for the identification of potential treatments and could support experts in biomedicine on their way towards new discoveries. Here, we propose a biomedical knowledge graph-based drug discovery method called SemaTyP, which discovers candidate drugs for diseases by mining published biomedical literature. We first construct a biomedical knowledge graph from the relations extracted from biomedical abstracts; then a logistic regression model is trained by learning the semantic types of the paths of known drug therapies existing in the biomedical knowledge graph; finally, the learned model is used to discover drug therapies for new diseases. The experimental results show that our method can not only effectively discover new drug therapies for new diseases, but can also provide the potential mechanism of action of the candidate drugs. In this paper we propose a novel knowledge graph-based literature mining method for drug discovery. It could be a supplementary method to current drug discovery methods.
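
    The core SemaTyP idea of scoring paths by their semantic types can be sketched as follows, with invented semantic-type features and toy labels standing in for the paths mined from the biomedical knowledge graph:

    ```python
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical drug-disease paths, summarised by counts of the semantic types of
    # the relations along each path; label = 1 when the path links a known therapy.
    paths = [
        ({"drug-INHIBITS-protein": 1, "protein-ASSOCIATED_WITH-disease": 1}, 1),
        ({"drug-INTERACTS_WITH-gene": 1, "gene-PART_OF-pathway": 1}, 0),
        ({"drug-TREATS-symptom": 1, "symptom-MANIFESTATION_OF-disease": 1}, 1),
        ({"drug-COEXISTS_WITH-chemical": 1, "chemical-LOCATION_OF-cell": 1}, 0),
    ]

    vec = DictVectorizer()
    X = vec.fit_transform([features for features, _ in paths])
    y = [label for _, label in paths]
    model = LogisticRegression().fit(X, y)

    # Score an unseen path between a candidate drug and a new disease.
    candidate = vec.transform([{"drug-INHIBITS-protein": 1,
                                "protein-ASSOCIATED_WITH-disease": 1}])
    print("therapy probability: %.2f" % model.predict_proba(candidate)[0, 1])
    ```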

  20. Knowledge discovery about quality of life changes of spinal cord injury patients: clustering based on rules by states.

    Science.gov (United States)

    Gibert, Karina; García-Rudolph, Alejandro; Curcoll, Lluïsa; Soler, Dolors; Pla, Laura; Tormos, José María

    2009-01-01

    In this paper, an integral Knowledge Discovery Methodology, named Clustering based on rules by States, which incorporates artificial intelligence (AI) and statistical methods as well as interpretation-oriented tools, is used for extracting knowledge patterns about the evolution over time of the Quality of Life (QoL) of patients with Spinal Cord Injury. The methodology incorporates the interaction with experts as a crucial element with the clustering methodology to guarantee usefulness of the results. Four typical patterns are discovered by taking into account prior expert knowledge. Several hypotheses are elaborated about the reasons for psychological distress or decreases in QoL of patients over time. The knowledge discovery from data (KDD) approach turns out, once again, to be a suitable formal framework for handling multidimensional complexity of the health domains.

  1. Bioenergy Knowledge Discovery Framework Fact Sheet

    Energy Technology Data Exchange (ETDEWEB)

    None

    2017-07-01

    The Bioenergy Knowledge Discovery Framework (KDF) supports the development of a sustainable bioenergy industry by providing access to a variety of data sets, publications, and collaboration and mapping tools that support bioenergy research, analysis, and decision making. In the KDF, users can search for information, contribute data, and use the tools and map interface to synthesize, analyze, and visualize information in a spatially integrated manner.

  2. Discovery learning with SAVI approach in geometry learning

    Science.gov (United States)

    Sahara, R.; Mardiyana; Saputro, D. R. S.

    2018-05-01

    Geometry is one branch of mathematics that plays an important role in school mathematics learning. This research aims to investigate the effect of Discovery Learning with the SAVI approach on geometry learning achievement. The research was conducted at a junior high school in the city of Surakarta. Research data were obtained through tests and a questionnaire, and the data were analyzed using two-way ANOVA. The results showed that Discovery Learning with the SAVI approach has a positive influence on mathematics learning achievement: it provides better mathematics learning outcomes than direct learning. In addition, students in the high self-efficacy category have better mathematics learning achievement than those in the moderate and low self-efficacy categories, while students in the moderate self-efficacy category achieve better than students in the low self-efficacy category. There is an interaction between Discovery Learning with the SAVI approach and self-efficacy with respect to students' mathematics learning achievement. Therefore, Discovery Learning with the SAVI approach can improve mathematics learning achievement.

  3. Proteomic and metabolomic approaches to biomarker discovery

    CERN Document Server

    Issaq, Haleem J

    2013-01-01

    Proteomic and Metabolomic Approaches to Biomarker Discovery demonstrates how to leverage biomarkers to improve accuracy and reduce errors in research. Disease biomarker discovery is one of the most vibrant and important areas of research today, as the identification of reliable biomarkers has an enormous impact on disease diagnosis, selection of treatment regimens, and therapeutic monitoring. Various techniques are used in the biomarker discovery process, including techniques used in proteomics, the study of the proteins that make up an organism, and metabolomics, the study of chemical fingerprints created from cellular processes. Proteomic and Metabolomic Approaches to Biomarker Discovery is the only publication that covers techniques from both proteomics and metabolomics and includes all steps involved in biomarker discovery, from study design to study execution.  The book describes methods, and presents a standard operating procedure for sample selection, preparation, and storage, as well as data analysis...

  4. Enhancing Big Data Value Using Knowledge Discovery Techniques

    OpenAIRE

    Mai Abdrabo; Mohammed Elmogy; Ghada Eltaweel; Sherif Barakat

    2016-01-01

    Technological development has flooded the world with data, and the term Big Data has emerged to describe this enormous volume; diverse kinds of fast-arriving data are doubling every second. We need to profit from this surge of data by converting it into knowledge. Knowledge Discovery (KDD) can enhance the detection of value in Big Data based on techniques and technologies such as Hadoop, MapReduce, and NoSQL. The use of Big D...

  5. Knowledge Discovery in Data in Construction Projects

    Directory of Open Access Journals (Sweden)

    Szelka J.

    2016-06-01

    Full Text Available Decision-making processes, including those related to ill-structured problems, are of considerable significance in construction projects. Computer-aided inference under such conditions requires specific (non-algorithmic) methods and tools, the best recognized and most successfully used in practice being expert systems. The knowledge such systems need to perform inference is most frequently acquired directly from experts (through a dialogue between a domain expert and a knowledge engineer) and from various source documents. Little is known, however, about the possibility of automating knowledge acquisition in this area, and as a result it is scarcely ever used in practice. It should be noted that in numerous areas of management, more and more attention is being paid to acquiring knowledge from available data, and a range of methods and tools for this is already known and successfully employed to aid decision-making. The paper attempts to select methods for knowledge discovery in data and presents possible ways of representing the acquired knowledge, as well as sample tools (including programming tools) that allow this knowledge to be used in the area under consideration.

  6. Data-Centric Knowledge Discovery Strategy for a Safety-Critical Sensor Application

    Directory of Open Access Journals (Sweden)

    Nilamadhab Mishra

    2014-01-01

    Full Text Available In an indoor safety-critical application, sensors and actuators are clustered together to accomplish critical actions within a limited time constraint. The cluster may be controlled by a dedicated, programmed autonomous microcontroller device that performs in-network time-critical functions such as data collection, data processing, and knowledge production. In a data-centric sensor network, approximately 3–60% of the sensor data are faulty, and the data collected from the sensor environment are highly unstructured and ambiguous. Therefore, for safety-critical sensor applications, actuators must function intelligently within a hard time frame and have proper knowledge to perform their logical actions. This paper proposes a knowledge discovery strategy and an exploration algorithm for indoor safety-critical industrial applications. The application evidence and discussion validate that the proposed strategy and algorithm can be implemented for knowledge discovery within the operational framework.

  7. Advances in knowledge discovery in databases

    CERN Document Server

    Adhikari, Animesh

    2015-01-01

    This book presents recent advances in Knowledge discovery in databases (KDD) with a focus on the areas of market basket database, time-stamped databases and multiple related databases. Various interesting and intelligent algorithms are reported on data mining tasks. A large number of association measures are presented, which play significant roles in decision support applications. This book presents, discusses and contrasts new developments in mining time-stamped data, time-based data analyses, the identification of temporal patterns, the mining of multiple related databases, as well as local patterns analysis.  

  8. The Knowledge-Integrated Network Biomarkers Discovery for Major Adverse Cardiac Events

    Science.gov (United States)

    Jin, Guangxu; Zhou, Xiaobo; Wang, Honghui; Zhao, Hong; Cui, Kemi; Zhang, Xiang-Sun; Chen, Luonan; Hazen, Stanley L.; Li, King; Wong, Stephen T. C.

    2010-01-01

    The mass spectrometry (MS) technology used in clinical proteomics is very promising for the discovery of new biomarkers for disease management. To overcome the obstacle of noise in MS analysis, we proposed a new approach to knowledge-integrated biomarker discovery using data from patients with Major Adverse Cardiac Events (MACE). We first built a cardiovascular-related network based on protein information from protein annotations in UniProt, protein–protein interactions (PPI), and a signal transduction database. In contrast to previous machine learning methods for MS data processing, we then used statistical methods to discover biomarkers within this cardiovascular-related network. By trading off known protein information against the noise in the mass spectrometry data, we could confidently identify high-confidence biomarkers. Most importantly, aided by the protein–protein interaction network, that is, the cardiovascular-related network, we proposed a new type of biomarker, the network biomarker, composed of a set of proteins and the interactions among them. The candidate network biomarkers classify the two groups of patients more accurately than current single-protein biomarkers, which do not take biological molecular interactions into account. PMID:18665624

  9. Asymmetric threat data mining and knowledge discovery

    Science.gov (United States)

    Gilmore, John F.; Pagels, Michael A.; Palk, Justin

    2001-03-01

    Asymmetric threats differ from the conventional force-on-force military encounters that the Defense Department has historically been trained to engage. Terrorism is by its nature an operational activity that is neither easily detected nor countered, as its very existence depends on small covert attacks exploiting the element of surprise. But terrorism does have defined forms, motivations, tactics and organizational structure. Exploiting a terrorism taxonomy provides the opportunity to discover and assess knowledge of terrorist operations. This paper describes the Asymmetric Threat Terrorist Assessment, Countering, and Knowledge (ATTACK) system. ATTACK has been developed to (a) data mine open source intelligence (OSINT) information from web-based newspaper sources, video news web casts, and actual terrorist web sites, (b) evaluate this information against a terrorism taxonomy, (c) exploit country/region-specific social, economic, political, and religious knowledge, and (d) discover and predict potential terrorist activities and association links. Details of the asymmetric threat structure and the ATTACK system architecture are presented, with results of an actual terrorist data mining and knowledge discovery test case shown.

  10. Service-oriented discovery of knowledge : foundations, implementations and applications

    NARCIS (Netherlands)

    Bruin, Jeroen Sebastiaan de

    2010-01-01

    In this thesis we investigate how a popular new way of distributed computing called service orientation can be used within the field of knowledge discovery. We critically investigate its principles and present models for developing within this paradigm. We then apply this model to create a web

  11. Energy-Water Nexus Knowledge Discovery Framework

    Science.gov (United States)

    Bhaduri, B. L.; Foster, I.; Chandola, V.; Chen, B.; Sanyal, J.; Allen, M.; McManamay, R.

    2017-12-01

    As demand for energy grows, the energy sector is experiencing increasing competition for water. With increasing population and changing environmental, socioeconomic scenarios, new technology and investment decisions must be made for optimized and sustainable energy-water resource management. This requires novel scientific insights into the complex interdependencies of energy-water infrastructures across multiple space and time scales. An integrated data driven modeling, analysis, and visualization capability is needed to understand, design, and develop efficient local and regional practices for the energy-water infrastructure components that can be guided with strategic (federal) policy decisions to ensure national energy resilience. To meet this need of the energy-water nexus (EWN) community, an Energy-Water Knowledge Discovery Framework (EWN-KDF) is being proposed to accomplish two objectives: Development of a robust data management and geovisual analytics platform that provides access to disparate and distributed physiographic, critical infrastructure, and socioeconomic data, along with emergent ad-hoc sensor data to provide a powerful toolkit of analysis algorithms and compute resources to empower user-guided data analysis and inquiries; and Demonstration of knowledge generation with selected illustrative use cases for the implications of climate variability for coupled land-water-energy systems through the application of state-of-the art data integration, analysis, and synthesis. Oak Ridge National Laboratory (ORNL), in partnership with Argonne National Laboratory (ANL) and researchers affiliated with the Center for International Earth Science Information Partnership (CIESIN) at Columbia University and State University of New York-Buffalo (SUNY), propose to develop this Energy-Water Knowledge Discovery Framework to generate new, critical insights regarding the complex dynamics of the EWN and its interactions with climate variability and change. An overarching

  12. Big data analytics in immunology: a knowledge-based approach.

    Science.gov (United States)

    Zhang, Guang Lan; Sun, Jing; Chitkushev, Lou; Brusic, Vladimir

    2014-01-01

    With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow.

  13. Big Data Analytics in Immunology: A Knowledge-Based Approach

    Directory of Open Access Journals (Sweden)

    Guang Lan Zhang

    2014-01-01

    Full Text Available With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow.

  14. Hierarchical virtual screening approaches in small molecule drug discovery.

    Science.gov (United States)

    Kumar, Ashutosh; Zhang, Kam Y J

    2015-01-01

    Virtual screening has played a significant role in the discovery of small molecule inhibitors of therapeutic targets over the last two decades. Various ligand- and structure-based virtual screening approaches are employed to identify small molecule ligands for proteins of interest. These approaches are often combined in either a hierarchical or a parallel manner to take advantage of their strengths and avoid the limitations associated with individual methods. The hierarchical combination of ligand- and structure-based virtual screening approaches has achieved noteworthy success in numerous drug discovery campaigns. In hierarchical virtual screening, several filters using ligand- and structure-based approaches are applied sequentially to reduce a large screening library to a number small enough for experimental testing. In this review, we focus on different hierarchical virtual screening strategies and their application in the discovery of small molecule modulators of important drug targets. Several virtual screening studies are discussed to demonstrate the successful application of hierarchical virtual screening in small molecule drug discovery. Copyright © 2014 Elsevier Inc. All rights reserved.
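
    The sequential-filter idea behind hierarchical virtual screening can be sketched as a small cascade in which each stage ranks the surviving compounds with a progressively more expensive scoring function and keeps only a fraction of them. The sketch below is purely illustrative: the scoring functions, data fields and keep fractions are assumptions, not a method from any study reviewed here.

```python
# Minimal sketch of a hierarchical virtual screening cascade (illustrative only):
# cheap ligand-based filters run first and expensive structure-based ones last.
# The scoring functions, thresholds and compound data are hypothetical.

def similarity_score(compound):      # e.g. 2D fingerprint similarity to known actives
    return compound["fp_sim"]

def pharmacophore_score(compound):   # e.g. pharmacophore match score
    return compound["pharm"]

def docking_score(compound):         # e.g. negated docking energy (higher = better)
    return -compound["dock_energy"]

def hierarchical_screen(library, stages):
    """Apply (score_fn, keep_fraction) stages in sequence to shrink the library."""
    survivors = list(library)
    for score_fn, keep_fraction in stages:
        survivors.sort(key=score_fn, reverse=True)
        survivors = survivors[: max(1, int(len(survivors) * keep_fraction))]
    return survivors

library = [
    {"id": f"cmpd{i}", "fp_sim": i / 100, "pharm": (i % 7) / 7, "dock_energy": -float(i % 11)}
    for i in range(100)
]
hits = hierarchical_screen(
    library,
    stages=[(similarity_score, 0.5), (pharmacophore_score, 0.2), (docking_score, 0.1)],
)
print([c["id"] for c in hits])  # small shortlist for experimental testing
```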

  15. Knowledge Discovery from Vibration Measurements

    Directory of Open Access Journals (Sweden)

    Jun Deng

    2014-01-01

    Full Text Available The framework, as well as the particular algorithms, of the pattern recognition process is widely adopted in structural health monitoring (SHM). However, as part of the overall process of knowledge discovery in databases (KDD), the results of pattern recognition are only changes, and patterns of change, in data features. In this paper, based on the similarity between KDD and SHM and considering the particularities of SHM problems, a four-step SHM framework is proposed that extends the final goal of SHM from detecting damage to extracting knowledge that facilitates decision making. The purposes and appropriate methods of each step of this framework are discussed. To demonstrate the proposed SHM framework, a specific SHM method composed of second-order structural parameter identification, statistical control chart analysis, and system reliability analysis is then presented. To examine the performance of this SHM method, real sensor data measured from a lab-size steel bridge model structure are used. The developed four-step framework of SHM has the potential to clarify the process of SHM and to facilitate the further development of SHM techniques.
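
    The statistical control chart step can be illustrated with a minimal sketch on synthetic data (this is only one of the three components of the presented method, and not the authors' implementation): identified parameters from a healthy baseline set the control limits, and later identifications outside those limits raise an alarm.

```python
# Minimal sketch of the statistical control chart step (illustrative only):
# identified structural parameters from a healthy baseline define control
# limits; later identifications outside mean +/- 3*std are flagged as
# potential damage. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
baseline_stiffness = rng.normal(loc=1.00, scale=0.02, size=50)   # healthy-state identifications
monitored_stiffness = np.concatenate([
    rng.normal(1.00, 0.02, 30),        # still healthy
    rng.normal(0.90, 0.02, 10),        # simulated stiffness loss
])

center = baseline_stiffness.mean()
sigma = baseline_stiffness.std(ddof=1)
upper, lower = center + 3 * sigma, center - 3 * sigma

alarms = np.where((monitored_stiffness > upper) | (monitored_stiffness < lower))[0]
print(f"control limits: [{lower:.3f}, {upper:.3f}]")
print("out-of-control samples:", alarms)
```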

  16. A Cognitive Adopted Framework for IoT Big-Data Management and Knowledge Discovery Prospective

    OpenAIRE

    Mishra, Nilamadhab; Lin, Chung-Chih; Chang, Hsien-Tsung

    2015-01-01

    In future IoT big-data management and knowledge discovery for large-scale industrial automation applications, the importance of the industrial internet is increasing day by day. Several diverse technologies such as the IoT (Internet of Things), computational intelligence, machine-type communication, big data, and sensor technology can be combined to improve the data management and knowledge discovery efficiency of large-scale automation applications. So in this work, we need to propos...

  17. Analysis student self efficacy in terms of using Discovery Learning model with SAVI approach

    Science.gov (United States)

    Sahara, Rifki; Mardiyana, S., Dewi Retno Sari

    2017-12-01

    Students are often unable to demonstrate their academic achievement optimally according to their abilities. One reason is that they often feel unsure that they are capable of completing the tasks assigned to them. For students, such beliefs are necessary; this kind of belief is called self-efficacy. Self-efficacy is not something brought about by birth or a permanent quality of an individual, but the result of cognitive processes, meaning that one's self-efficacy can be stimulated through learning activities. Self-efficacy can be developed and enhanced by a learning model that stimulates students to build confidence in their capabilities. One such model is Discovery Learning with the SAVI approach, a learning model that involves the active participation of students in exploring and discovering their own knowledge and using it in problem solving, utilizing all the sensory modalities they have. This naturalistic qualitative research aims to analyze student self-efficacy under the Discovery Learning model with the SAVI approach. The subjects of this study are 30 students, with a focus on eight students who have high, medium, and low self-efficacy, obtained through a purposive sampling technique. The data analysis proceeded in three stages: data reduction, data display, and drawing conclusions. Based on the results of the data analysis, it was concluded that the dimension of self-efficacy that appears most prominently in learning with the Discovery Learning model and SAVI approach is the magnitude dimension.

  18. Data Mining in Education : A Review on the Knowledge Discovery Perspective

    OpenAIRE

    Pratiyush Guleria; Manu Sood

    2014-01-01

    Knowledge Discovery in Databases is the process of finding knowledge in massive amounts of data, where data mining is the core of this process. Data mining can be used to mine understandable, meaningful patterns from large databases, and these patterns may then be converted into knowledge. Data mining is the process of extracting the information and patterns derived by the KDD process, which helps in crucial decision-making. Data mining works with data warehouse and...

  19. Fragment approaches in structure-based drug discovery

    International Nuclear Information System (INIS)

    Hubbard, Roderick E.

    2008-01-01

    Fragment-based methods are successfully generating novel and selective drug-like inhibitors of protein targets, with a number of groups reporting compounds entering clinical trials. This paper summarizes the key features of the approach as one of the tools in structure-guided drug discovery. There has been considerable interest recently in what is known as 'fragment-based lead discovery'. The novel feature of the approach is to begin with small low-affinity compounds. The main advantage is that a larger potential chemical diversity can be sampled with fewer compounds, which is particularly important for new target classes. The approach relies on careful design of the fragment library, a method that can detect binding of the fragment to the protein target, determination of the structure of the fragment bound to the target, and the conventional use of structural information to guide compound optimization. In this article the methods are reviewed, and experiences in fragment-based discovery of lead series of compounds against kinases such as PDK1 and ATPases such as Hsp90 are discussed. The examples illustrate some of the key benefits and issues of the approach and also provide anecdotal examples of the patterns seen in selectivity and the binding mode of fragments across different protein targets

  20. Knowledge discovery in traditional Chinese medicine: state of the art and perspectives.

    Science.gov (United States)

    Feng, Yi; Wu, Zhaohui; Zhou, Xuezhong; Zhou, Zhongmei; Fan, Weiyu

    2006-11-01

    As a complementary medical system to Western medicine, traditional Chinese medicine (TCM) has provided a unique theoretical and practical approach to the treatment of diseases over thousands of years. Confronted with the increasing popularity of TCM and the huge volume of TCM data, both historically accumulated and recently obtained, there is an urgent need to explore these resources effectively with the techniques of knowledge discovery in databases (KDD). This paper aims at providing an overview of recent KDD studies in the TCM field. A literature search was conducted in both English and Chinese publications, and major studies of knowledge discovery in TCM (KDTCM) reported in these materials were identified. Based on an introduction to the state of the art of TCM data resources, a review of four subfields of KDTCM research is presented, including KDD for research on Chinese medical formulae, KDD for research on Chinese herbal medicine, KDD for TCM syndrome research, and KDD for TCM clinical diagnosis. Furthermore, the current state and main problems in each subfield are summarized based on a discussion of existing studies, and future directions for each subfield are proposed accordingly. A range of KDD methods is used in existing KDTCM research, from conventional frequent itemset mining to state-of-the-art latent structure models. Many interesting discoveries have been obtained with these methods, such as novel TCM paired drugs discovered by frequent itemset analysis, functional communities of related genes discovered from a syndrome perspective by text mining, the high proportion of toxic plants in the botanical family Ranunculaceae disclosed by statistical analysis, and the association between M-cholinoceptor blocking drugs and Solanaceae revealed by association rule mining. It is particularly inspiring to see some studies connecting TCM with biomedicine, which provide a novel top-down view for functional genomics research. However, further developments
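
    The frequent itemset idea behind the paired-drug discoveries mentioned above can be sketched with a toy pair counter: prescriptions are treated as itemsets of herbs, and frequently co-occurring pairs are reported with their support and confidence. The herb names and prescriptions are invented, and published KDTCM studies use full frequent-itemset and association-rule algorithms rather than this simplification.

```python
# Minimal sketch of frequent-pair mining over TCM formulae (purely illustrative;
# the herb names and prescriptions below are invented placeholders).
from itertools import combinations
from collections import Counter

formulae = [
    {"herbA", "herbB", "herbC"},
    {"herbA", "herbB", "herbD"},
    {"herbB", "herbC", "herbE"},
    {"herbA", "herbB"},
]

pair_counts = Counter()
for herbs in formulae:
    for pair in combinations(sorted(herbs), 2):
        pair_counts[pair] += 1

min_support_count = 2  # pair must appear in at least two prescriptions
for (h1, h2), count in pair_counts.most_common():
    if count >= min_support_count:
        support = count / len(formulae)
        confidence = count / sum(1 for f in formulae if h1 in f)  # rule h1 -> h2
        print(f"{h1} & {h2}: support={support:.2f}, confidence({h1}->{h2})={confidence:.2f}")
```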

  1. Machine Learning Methods for Knowledge Discovery in Medical Data on Atherosclerosis

    Czech Academy of Sciences Publication Activity Database

    Serrano, J.I.; Tomečková, Marie; Zvárová, Jana

    2006-01-01

    Roč. 1, - (2006), s. 6-33 ISSN 1801-5603 Institutional research plan: CEZ:AV0Z10300504 Keywords : knowledge discovery * supervised machine learning * biomedical data mining * risk factors of atherosclerosis Subject RIV: BB - Applied Statistics, Operational Research

  2. Flood AI: An Intelligent Systems for Discovery and Communication of Disaster Knowledge

    Science.gov (United States)

    Demir, I.; Sermet, M. Y.

    2017-12-01

    Communities are not immune from extreme events or natural disasters that can lead to large-scale consequences for the nation and the public. Improving resilience to better prepare for, plan for, recover from, and adapt to disasters is critical to reducing the impacts of extreme events. A National Research Council (NRC) report discusses how to increase resilience to extreme events through a vision of a resilient nation in the year 2030. The report highlights the importance of data and information, identifies gaps and knowledge challenges that need to be addressed, and suggests that every individual should be able to access risk and vulnerability information to make their communities more resilient. This project presents an intelligent system for flooding, Flood AI, designed to improve societal preparedness by providing a knowledge engine that uses voice recognition, artificial intelligence, and natural language processing based on a generalized ontology for disasters with a primary focus on flooding. The knowledge engine utilizes the flood ontology and its concepts to connect user input to relevant knowledge discovery channels on flooding through a data acquisition and processing framework built on environmental observations, forecast models, and knowledge bases. Communication channels of the framework include web-based systems, agent-based chat bots, smartphone applications, automated web workflows, and smart home devices, opening knowledge discovery for flooding to many unique use cases.

  3. Methodologies of Knowledge Discovery from Data and Data Mining Methods in Mechanical Engineering

    Directory of Open Access Journals (Sweden)

    Rogalewicz Michał

    2016-12-01

    Full Text Available The paper contains a review of methodologies for the process of knowledge discovery from data and of the data exploration (Data Mining) methods most frequently used in mechanical engineering. The methodologies comprise various scenarios for exploring data, with DM methods used within them. The paper presents the premises for using DM methods in industry, as well as their advantages and disadvantages. The development of methodologies of knowledge discovery from data is also presented, along with a classification of the most widespread Data Mining methods, divided by the type of task they perform. The paper concludes with a presentation of selected Data Mining applications in mechanical engineering.

  4. Knowledge discovery from models of soil properties developed through data mining

    NARCIS (Netherlands)

    Bui, E.N.; Henderson, B.L.; Viergever, K.

    2006-01-01

    We modelled the distribution of soil properties across the agricultural zone on the Australian continent using data mining and knowledge discovery from databases (DM&KDD) tools. Piecewise linear tree models were built choosing from 19 climate variables, digital elevation model (DEM) and derived

  5. Text mining for traditional Chinese medical knowledge discovery: a survey.

    Science.gov (United States)

    Zhou, Xuezhong; Peng, Yonghong; Liu, Baoyan

    2010-08-01

    Extracting meaningful information and knowledge from free text is the subject of considerable research interest in the machine learning and data mining fields. Text data mining (or text mining) has become one of the most active research sub-fields in data mining. Significant developments in the area of biomedical text mining during the past years have demonstrated its great promise for supporting scientists in developing novel hypotheses and new knowledge from the biomedical literature. Traditional Chinese medicine (TCM) provides a distinct methodology with which to view human life. It is one of the most complete and distinguished traditional medicines, with a history of several thousand years of studying and practicing the diagnosis and treatment of human disease. It has been shown that the TCM knowledge obtained from clinical practice has become a significant complementary source of information for modern biomedical sciences. TCM literature obtained from the historical period and from modern clinical studies has recently been transformed into digital data in the form of relational databases or text documents, which provide an effective platform for information sharing and retrieval. This motivates and facilitates research and development of knowledge discovery approaches to modernize TCM. In order to contribute to this still growing field, this paper presents (1) a comparative introduction to TCM and modern biomedicine, (2) a survey of the related information sources of TCM, (3) a review and discussion of the state of the art and the development of text mining techniques with applications to TCM, and (4) a discussion of the research issues around TCM text mining and its future directions. Copyright 2010 Elsevier Inc. All rights reserved.

  6. A Knowledge Discovery Approach to Diagnosing Intracranial Hematomas on Brain CT: Recognition, Measurement and Classification

    Science.gov (United States)

    Liao, Chun-Chih; Xiao, Furen; Wong, Jau-Min; Chiang, I.-Jen

    Computed tomography (CT) of the brain is the preferred study in neurological emergencies. Physicians use CT to diagnose various types of intracranial hematomas, including epidural, subdural and intracerebral hematomas, according to their locations and shapes. We propose a novel method that can automatically diagnose intracranial hematomas by combining machine vision and knowledge discovery techniques. The skull on the CT slice is located and the depth of each intracranial pixel is labeled. After normalization of the pixel intensities by their depth, the hyperdense area of the intracranial hematoma is segmented with multi-resolution thresholding and region growing. We then apply the C4.5 algorithm to construct a decision tree using the features of the segmented hematoma and the diagnoses made by physicians. The algorithm was evaluated on 48 pathological images treated in a single institute. The two discovered rules closely resemble those used by human experts and made correct diagnoses in all cases.
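
    The rule-learning step can be sketched as follows; scikit-learn's CART decision tree stands in for C4.5, and the shape/location features and labels are invented for illustration rather than taken from the study.

```python
# Minimal sketch of the rule-learning step (illustrative only): a decision tree
# is fitted on shape/location features of segmented hematomas and the resulting
# rules are printed. scikit-learn's CART tree stands in for C4.5; features and
# labels are synthetic placeholders.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features per segmented hematoma: [eccentricity, distance_to_skull_mm, convexity]
X = [
    [0.95, 1.0, 0.98],   # thin lens hugging the skull
    [0.90, 1.5, 0.55],   # crescent along the skull
    [0.40, 25.0, 0.90],  # round, deep in the parenchyma
    [0.93, 0.8, 0.97],
    [0.88, 2.0, 0.50],
    [0.35, 30.0, 0.92],
]
y = ["epidural", "subdural", "intracerebral",
     "epidural", "subdural", "intracerebral"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["eccentricity", "depth_mm", "convexity"]))
print(tree.predict([[0.92, 1.2, 0.60]]))  # classify a new hematoma
```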

  7. The Knowledge Governance Approach

    DEFF Research Database (Denmark)

    Foss, Nicolai J.

    An attempt is made to characterize a `knowledge governance approach' as a distinctive, emerging field that cuts across the fields of knowledge management, organisation studies, strategy and human resource management. Knowledge governance is taken up with how the deployment of administrative ... with diverse capabilities of handling these transactions. Various open research issues that a knowledge governance approach may illuminate are sketched. Although knowledge governance draws clear inspiration from organizational economics and `rational' organization theory, it recognizes that knowledge ...

  8. Computational neuropharmacology: dynamical approaches in drug discovery.

    Science.gov (United States)

    Aradi, Ildiko; Erdi, Péter

    2006-05-01

    Computational approaches that adopt dynamical models are widely accepted in basic and clinical neuroscience research as indispensable tools with which to understand normal and pathological neuronal mechanisms. Although computer-aided techniques have been used in pharmaceutical research (e.g. in structure- and ligand-based drug design), the power of dynamical models has not yet been exploited in drug discovery. We suggest that dynamical system theory and computational neuroscience--integrated with well-established, conventional molecular and electrophysiological methods--offer a broad perspective in drug discovery and in the search for novel targets and strategies for the treatment of neurological and psychiatric diseases.

  9. Novel approaches to develop community-built biological network models for potential drug discovery.

    Science.gov (United States)

    Talikka, Marja; Bukharov, Natalia; Hayes, William S; Hofmann-Apitius, Martin; Alexopoulos, Leonidas; Peitsch, Manuel C; Hoeng, Julia

    2017-08-01

    Hundreds of thousands of data points are now routinely generated in clinical trials by molecular profiling and NGS technologies. A true translation of these data into knowledge is not possible without analysis and interpretation in a well-defined biological context. Currently, there are many public and commercial pathway tools and network models that can facilitate such analysis. At the same time, the insights and knowledge that can be gained are highly dependent on the underlying biological content of these resources. Crowdsourcing can be employed to guarantee the accuracy and transparency of the biological content underlying the tools used to interpret rich molecular data. Areas covered: In this review, the authors describe crowdsourcing in drug discovery. The focal point is the efforts that have successfully used the crowdsourcing approach to verify and augment pathway tools and biological network models. Technologies that enable the building of biological networks with the community are also described. Expert opinion: A crowd of experts can be leveraged for the entire development process of biological network models, from ontologies to the evaluation of their mechanistic completeness. The ultimate goal is to facilitate biomarker discovery and personalized medicine by mechanistically explaining patients' differences with respect to disease prevention, diagnosis, and therapy outcome.

  10. Biomarker Gene Signature Discovery Integrating Network Knowledge

    Directory of Open Access Journals (Sweden)

    Holger Fröhlich

    2012-02-01

    Full Text Available The discovery of prognostic and diagnostic biomarker gene signatures for diseases such as cancer is seen as a major step towards better personalized medicine. During the last decade various methods, mainly from the machine learning and statistical domains, have been proposed for that purpose. However, one important obstacle to making gene signatures a standard tool in clinical diagnosis is the typically low reproducibility of these signatures combined with the difficulty of achieving a clear biological interpretation. For that reason, in recent years there has been growing interest in approaches that try to integrate information from molecular interaction networks. Here we review the current state of research in this field by giving an overview of the approaches proposed so far.

  11. Privacy-aware knowledge discovery novel applications and new techniques

    CERN Document Server

    Bonchi, Francesco

    2010-01-01

    Covering research at the frontier of this field, Privacy-Aware Knowledge Discovery: Novel Applications and New Techniques presents state-of-the-art privacy-preserving data mining techniques for application domains, such as medicine and social networks, that face the increasing heterogeneity and complexity of new forms of data. Renowned authorities from prominent organizations not only cover well-established results-they also explore complex domains where privacy issues are generally clear and well defined, but the solutions are still preliminary and in continuous development. Divided into seve

  12. Translational Research 2.0: a framework for accelerating collaborative discovery.

    Science.gov (United States)

    Asakiewicz, Chris

    2014-05-01

    The world wide web has revolutionized the conduct of global, cross-disciplinary research. In the life sciences, interdisciplinary approaches to problem solving and collaboration are becoming increasingly important in facilitating knowledge discovery and integration. Web 2.0 technologies promise to have a profound impact - enabling reproducibility, aiding in discovery, and accelerating and transforming medical and healthcare research across the healthcare ecosystem. However, knowledge integration and discovery require a consistent foundation upon which to operate. A foundation should be capable of addressing some of the critical issues associated with how research is conducted within the ecosystem today and how it should be conducted for the future. This article will discuss a framework for enhancing collaborative knowledge discovery across the medical and healthcare research ecosystem. A framework that could serve as a foundation upon which ecosystem stakeholders can enhance the way data, information and knowledge is created, shared and used to accelerate the translation of knowledge from one area of the ecosystem to another.

  13. Pattern recognition algorithms for data mining scalability, knowledge discovery and soft granular computing

    CERN Document Server

    Pal, Sankar K

    2004-01-01

    Pattern Recognition Algorithms for Data Mining addresses different pattern recognition (PR) tasks in a unified framework with both theoretical and experimental results. Tasks covered include data condensation, feature selection, case generation, clustering/classification, and rule generation and evaluation. This volume presents various theories, methodologies, and algorithms, using both classical approaches and hybrid paradigms. The authors emphasize large datasets with overlapping, intractable, or nonlinear boundary classes, and datasets that demonstrate granular computing in soft frameworks.Organized into eight chapters, the book begins with an introduction to PR, data mining, and knowledge discovery concepts. The authors analyze the tasks of multi-scale data condensation and dimensionality reduction, then explore the problem of learning with support vector machine (SVM). They conclude by highlighting the significance of granular computing for different mining tasks in a soft paradigm.

  14. Net present value approaches for drug discovery.

    Science.gov (United States)

    Svennebring, Andreas M; Wikberg, Jarl Es

    2013-12-01

    Three dedicated approaches to the calculation of the risk-adjusted net present value (rNPV) of drug discovery projects under different assumptions are suggested. In contrast to previously used models, the probability of finding a candidate drug suitable for clinical development and the time to the initiation of clinical development are assumed to be flexible. The rNPV of the post-discovery cash flows is calculated as the probability-weighted average of the rNPV at each potential time of initiation of clinical development. Practical considerations on how to set probability rates, in particular during the initiation and termination of a project, are discussed.
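
    The probability-weighted average described above can be made concrete with a small numeric sketch. The cash flows, probabilities and discount rate are invented assumptions, and the paper's three dedicated approaches differ in their detailed treatment; this only illustrates the basic rNPV arithmetic.

```python
# Minimal numeric sketch of the probability-weighted rNPV idea (not the authors'
# exact model): the discovery phase may deliver a candidate drug in different
# years, each with some probability, and the overall rNPV is the probability-
# weighted average of the discounted post-discovery rNPV for each possible
# start year. All figures below are invented.

DISCOUNT = 0.10          # annual discount rate (assumption)
P_SUCCESS = 0.12         # probability a candidate survives clinical development (assumption)

# Post-discovery cash flows relative to the start of clinical development:
# (year offset, cash flow in $M, probability the project is still alive then)
post_discovery = [(0, -50, 1.00), (3, -120, 0.45), (8, 900, P_SUCCESS)]

def rnpv_from_start(start_year):
    """Risk-adjusted NPV of post-discovery cash flows, discounted back to today."""
    return sum(p * cf / (1 + DISCOUNT) ** (start_year + dt) for dt, cf, p in post_discovery)

# Probability that a development candidate is delivered in a given year;
# the remaining 0.25 corresponds to the project failing in discovery.
candidate_timing = {2: 0.25, 3: 0.30, 4: 0.20}

rnpv = sum(q * rnpv_from_start(t) for t, q in candidate_timing.items())
print(f"risk-adjusted NPV: {rnpv:.1f} $M")
```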

  15. Self organising hypothesis networks: a new approach for representing and structuring SAR knowledge.

    Science.gov (United States)

    Hanser, Thierry; Barber, Chris; Rosser, Edward; Vessey, Jonathan D; Webb, Samuel J; Werner, Stéphane

    2014-01-01

    Combining different sources of knowledge to build improved structure-activity relationship models is not easy, owing to the variety of knowledge formats and the absence of a common framework for interoperating between learning techniques. Most current approaches address this problem by using consensus models that operate at the prediction level. We explore the possibility of directly combining these sources at the knowledge level, with the aim of harvesting potentially increased synergy at an earlier stage. Our goal is to design a general methodology to facilitate knowledge discovery and produce accurate and interpretable models. To combine models at the knowledge level, we propose to decouple the learning phase from the knowledge application phase using a pivot representation (lingua franca) based on the concept of a hypothesis. A hypothesis is a simple and interpretable knowledge unit. Regardless of its origin, knowledge is broken down into a collection of hypotheses. These hypotheses are subsequently organised into a hierarchical network. This unification makes it possible to combine different sources of knowledge into a common formalised framework. The approach allows us to create a synergistic system between different forms of knowledge, and new algorithms can be applied to leverage this unified model. This first article focuses on the general principle of the Self Organising Hypothesis Network (SOHN) approach in the context of binary classification problems, along with an illustrative application to the prediction of mutagenicity. It is possible to represent knowledge in the unified form of a hypothesis network, allowing interpretable predictions with performance comparable to mainstream machine learning techniques. This new approach offers the potential to combine knowledge from different sources into a common framework in which high-level reasoning and meta-learning can be applied; these latter perspectives will be explored in future work.
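
    As a rough, assumption-level illustration of organising knowledge into a hypothesis hierarchy (not the published SOHN algorithm), the sketch below models a hypothesis as a set of structural features with an associated activity estimate, links more general hypotheses to more specific ones, and predicts from the most specific hypothesis matching a query compound. All features and probabilities are invented.

```python
# Minimal sketch of a hypothesis hierarchy (illustrative assumption, not SOHN):
# hypothesis A subsumes B when A's feature set is a strict subset of B's.
hypotheses = {
    "H1": {"features": frozenset({"nitro"}), "p_active": 0.70},
    "H2": {"features": frozenset({"nitro", "aromatic"}), "p_active": 0.85},
    "H3": {"features": frozenset({"nitro", "aromatic", "fused_ring"}), "p_active": 0.92},
    "H4": {"features": frozenset({"epoxide"}), "p_active": 0.60},
}

# Directed edges from more general to more specific hypotheses.
edges = [
    (a, b)
    for a in hypotheses for b in hypotheses
    if a != b and hypotheses[a]["features"] < hypotheses[b]["features"]
]

def matching_hypotheses(query_features):
    """Hypotheses whose features are all present in the query compound."""
    return [h for h, v in hypotheses.items() if v["features"] <= query_features]

query = frozenset({"nitro", "aromatic", "halogen"})
matches = matching_hypotheses(query)
# Use the most specific matching hypothesis (largest feature set) for prediction.
best = max(matches, key=lambda h: len(hypotheses[h]["features"]))
print("network edges:", edges)
print("prediction from", best, "->", hypotheses[best]["p_active"])
```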

  16. ForEx++: A New Framework for Knowledge Discovery from Decision Forests

    Directory of Open Access Journals (Sweden)

    Md Nasim Adnan

    2017-11-01

    Full Text Available Decision trees are popularly used in a wide range of real-world problems for both prediction and classification (logic rule discovery). A decision forest is an ensemble of decision trees and is often built to achieve better predictive performance than a single decision tree. Besides improving predictive performance, a decision forest can be seen as a pool of logic rules with great potential for knowledge discovery. However, a standard-sized decision forest usually generates a large number of rules that a user may not be able to manage for effective knowledge analysis. In this paper, we propose a new, data set-independent framework for extracting those rules that are comparatively more accurate, generalized and concise than others. We apply the proposed framework to rules generated by two different decision forest algorithms from publicly available medical data sets on dementia and heart disease. We then compare the quality of rules extracted by the proposed framework with rules generated from a single J48 decision tree and rules extracted by another recent method. The results reported in this paper demonstrate the effectiveness of the proposed framework.
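
    The general idea of harvesting and ranking rules from a decision forest can be sketched as below; this is an illustration of rule extraction and simple ranking by confidence, coverage and length, not the ForEx++ selection criteria. The data are synthetic and scikit-learn's random forest stands in for the forest learners used in the paper.

```python
# Minimal sketch: extract root-to-leaf rules from every tree in a forest and
# rank them by confidence, coverage and brevity (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
forest = RandomForestClassifier(n_estimators=5, max_depth=3, random_state=0).fit(X, y)

def extract_rules(tree):
    """Return (conditions, predicted_class, confidence, coverage) for each leaf."""
    t = tree.tree_
    rules = []

    def walk(node, conditions):
        if t.children_left[node] == -1:            # leaf node
            counts = t.value[node][0]
            pred = int(np.argmax(counts))
            confidence = counts[pred] / counts.sum()
            rules.append((conditions, pred, confidence, int(t.n_node_samples[node])))
            return
        f, thr = t.feature[node], t.threshold[node]
        walk(t.children_left[node], conditions + [f"x{f} <= {thr:.2f}"])
        walk(t.children_right[node], conditions + [f"x{f} > {thr:.2f}"])

    walk(0, [])
    return rules

all_rules = [r for est in forest.estimators_ for r in extract_rules(est)]
# Prefer high-confidence, well-covered, short rules.
all_rules.sort(key=lambda r: (-r[2], -r[3], len(r[0])))
for conds, pred, conf, cov in all_rules[:3]:
    print(" AND ".join(conds), f"=> class {pred} (conf={conf:.2f}, n={cov})")
```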

  17. Knowledge Discovery Process: Case Study of RNAV Adherence of Radar Track Data

    Science.gov (United States)

    Matthews, Bryan

    2018-01-01

    This talk is an introduction to the knowledge discovery process, beginning with: identifying the problem, choosing data sources, matching the appropriate machine learning tools, and reviewing the results. The overview will be given in the context of an ongoing study that is assessing RNAV adherence of commercial aircraft in the national airspace.

  18. Knowledge Discovery from Posts in Online Health Communities Using Unified Medical Language System

    Directory of Open Access Journals (Sweden)

    Donghua Chen

    2018-06-01

    Full Text Available Patient-reported posts in Online Health Communities (OHCs) contain a wealth of valuable information that can help establish knowledge-based online support for patients. However, utilizing these reports to improve online patient services in the absence of appropriate medical and healthcare expert knowledge is difficult. Thus, we propose a comprehensive knowledge discovery method based on the Unified Medical Language System for the analysis of narrative posts in OHCs. First, we propose a domain-knowledge support framework for OHCs to provide a basis for post analysis. Second, we develop a Knowledge-Involved Topic Modeling (KI-TM) method to extract and expand explicit knowledge within the text. We propose four metrics, namely explicit knowledge rate, latent knowledge rate, knowledge correlation rate, and perplexity, for the evaluation of the KI-TM method. Our experimental results indicate that the proposed method outperforms existing methods in terms of knowledge support. Our method enhances knowledge support for online patients and can help develop intelligent OHCs in the future.
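
    The topic-modelling backbone can be sketched with plain LDA as a stand-in; the KI-TM method proposed in the paper additionally injects UMLS-derived knowledge, which is not reproduced here. The example posts are invented.

```python
# Minimal sketch of the topic-modelling backbone (plain LDA via scikit-learn;
# the paper's KI-TM method also incorporates UMLS knowledge, not shown here).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "chemo side effects nausea and fatigue after treatment",
    "metformin dosage and blood sugar control questions",
    "fatigue and nausea worse after second chemo cycle",
    "insulin versus metformin for type 2 diabetes",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
print(lda.transform(X).round(2))   # per-post topic proportions
```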

  19. A knowledge discovery object model API for Java

    Directory of Open Access Journals (Sweden)

    Jones Steven JM

    2003-10-01

    Full Text Available Abstract Background Biological data resources have become heterogeneous and derive from multiple sources. This introduces challenges in the management and utilization of this data in software development. Although efforts are underway to create a standard format for the transmission and storage of biological data, this objective has yet to be fully realized. Results This work describes an application programming interface (API that provides a framework for developing an effective biological knowledge ontology for Java-based software projects. The API provides a robust framework for the data acquisition and management needs of an ontology implementation. In addition, the API contains classes to assist in creating GUIs to represent this data visually. Conclusions The Knowledge Discovery Object Model (KDOM API is particularly useful for medium to large applications, or for a number of smaller software projects with common characteristics or objectives. KDOM can be coupled effectively with other biologically relevant APIs and classes. Source code, libraries, documentation and examples are available at http://www.bcgsc.ca/bioinfo/software.

  20. Lean approach in knowledge work

    Directory of Open Access Journals (Sweden)

    Hanna Kropsu-Vehkapera

    2018-05-01

    Full Text Available Purpose: Knowledge work productivity is a key area of improvement for many organisations. The lean approach is a sustainable way to achieve operational excellence and can be applied in many areas. The purpose of this study is to examine the potential of using the lean approach to improve knowledge work practices. Design/methodology/approach: A systematic literature review was carried out to study how the lean approach is realised in knowledge work. The research is conceptual in nature and draws upon earlier research findings. Findings: This study shows that lean research in knowledge work is an emerging area. The study documents the methods and practices implemented in knowledge work to date and presents a knowledge work continuum, which is an essential framework for effective lean deployment and for framing future research on knowledge work productivity. Research limitations/implications: This study structures the concept of knowledge work and outlines a concrete concept derived from the earlier literature. It summarises the literature on lean in knowledge work and highlights which methods are used. More research is needed to understand how lean can be implemented in complex knowledge work environments and not only in repetitive knowledge work. The limitations of this research are due to the limited availability of previous research. Practical implications: Based on an analysis of the nature of knowledge work, we identify the areas where lean methods especially apply to improving knowledge work productivity. When applying lean in a knowledge work context, the focus should be on using people better and improving information flow. Originality/value: This study focuses on adapting lean methods to a knowledge work context and summarises earlier research done in this field. The study discusses the potential to improve knowledge work productivity by implementing lean methods and presents a unique knowledge work continuum to

  1. Knowledge discovery from seismic data using neural networks; Descoberta de conhecimento a partir de dados sismicos utilizando redes neurais

    Energy Technology Data Exchange (ETDEWEB)

    Paula, Wesley R. de; Costa, Bruno A.D.; Gomes, Herman M. [Universidade Federal de Campina Grande (UFCG), PB (Brazil)

    2004-07-01

    The analysis and interpretation of seismic data is of fundamental importance to the Oil Industry, since it helps discover geologic formations that are conducive to hydrocarbon accumulation. The use of seismic data in reservoir characterization may be performed through localized data inspections and clustering based on features of common seismic responses. This clustering or classification can be performed in two basic ways: visually, with the help of graphical tools; or using automatic classification techniques, such as statistical models and artificial neural networks. Neural network based methods are generally superior to rule- or knowledge-based systems, since they have a better generalization capability and are fault tolerant. Within this context, the main objective of this work is to describe methods that employ the two main neural network based approaches (supervised and unsupervised) in knowledge discovery from seismic data. Initially, the implementation and experiments were focused on the problem of seismic facies recognition using the unsupervised approach, but in future works, the implementation of the supervised approach, an application to fault detection and a parallel implementation of the proposed methods are planned. (author)
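
    A minimal sketch of the unsupervised facies-grouping idea is shown below, using k-means on per-trace attribute vectors as a simple stand-in for the neural approaches (such as self-organising maps) discussed in the paper; the synthetic attributes are invented.

```python
# Minimal sketch: unsupervised grouping of seismic traces by attribute vectors,
# with k-means standing in for the neural (e.g. self-organising map) approach.
# The synthetic trace attributes below are invented placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Each row holds attributes extracted around a horizon for one trace
# (e.g. amplitude, dominant frequency, coherence).
facies_a = rng.normal(loc=[1.0, 30.0, 0.9], scale=[0.1, 3.0, 0.05], size=(40, 3))
facies_b = rng.normal(loc=[0.3, 55.0, 0.5], scale=[0.1, 3.0, 0.05], size=(40, 3))
traces = np.vstack([facies_a, facies_b])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(traces)
print(np.bincount(kmeans.labels_))         # number of traces assigned to each facies
print(kmeans.cluster_centers_.round(2))    # attribute signature of each facies
```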

  2. Knowledge Discovery in Biological Databases for Revealing Candidate Genes Linked to Complex Phenotypes.

    Science.gov (United States)

    Hassani-Pak, Keywan; Rawlings, Christopher

    2017-06-13

    Genetics and "omics" studies designed to uncover genotype to phenotype relationships often identify large numbers of potential candidate genes, among which the causal genes are hidden. Scientists generally lack the time and technical expertise to review all relevant information available from the literature, from key model species and from a potentially wide range of related biological databases in a variety of data formats with variable quality and coverage. Computational tools are needed for the integration and evaluation of heterogeneous information in order to prioritise candidate genes and components of interaction networks that, if perturbed through potential interventions, have a positive impact on the biological outcome in the whole organism without producing negative side effects. Here we review several bioinformatics tools and databases that play an important role in biological knowledge discovery and candidate gene prioritization. We conclude with several key challenges that need to be addressed in order to facilitate biological knowledge discovery in the future.

  3. Knowledge discovery from structured mammography reports using inductive logic programming.

    Science.gov (United States)

    Burnside, Elizabeth S; Davis, Jesse; Costa, Victor Santos; Dutra, Inês de Castro; Kahn, Charles E; Fine, Jason; Page, David

    2005-01-01

    The development of large mammography databases provides an opportunity for knowledge discovery and data mining techniques to recognize patterns not previously appreciated. Using a database from a breast imaging practice containing patient risk factors, imaging findings, and biopsy results, we tested whether inductive logic programming (ILP) could discover interesting hypotheses that could subsequently be tested and validated. The ILP algorithm discovered two hypotheses from the data that were 1) judged as interesting by a subspecialty trained mammographer and 2) validated by analysis of the data itself.

  4. Semantic Search in E-Discovery: An Interdisciplinary Approach

    NARCIS (Netherlands)

    Graus, D.; Ren, Z.; de Rijke, M.; van Dijk, D.; Henseler, H.; van der Knaap, N.

    2013-01-01

    We propose an interdisciplinary approach to applying and evaluating semantic search in the e-discovery setting. By combining expertise from the fields of law and criminology with that of information retrieval and extraction, we move beyond "algorithm-centric" evaluation, towards evaluating the

  5. How does non-formal marine education affect student attitude and knowledge? A case study using SCDNR's Discovery program

    Science.gov (United States)

    McGovern, Mary Francis

    Non-formal environmental education provides students the opportunity to learn in ways that would not be possible in a traditional classroom setting. Outdoor learning allows students to make connections to their environment and helps to foster an appreciation for nature. This type of education can be interdisciplinary---students not only develop skills in science, but also in mathematics, social studies, technology, and critical thinking. This case study focuses on a non-formal marine education program, the South Carolina Department of Natural Resources' (SCDNR) Discovery vessel-based program. The Discovery curriculum was evaluated to determine its impact on student knowledge about and attitude toward the estuary. Students from two South Carolina coastal counties who attended the boat program during fall 2014 were asked to complete a brief survey before, immediately after, and two weeks following the program. The results of this study indicate that both student knowledge about and attitude toward the estuary significantly improved after completion of the Discovery vessel-based program. Knowledge and attitude scores demonstrated a positive correlation.

  6. Accelerating knowledge discovery through community data sharing and integration.

    Science.gov (United States)

    Yip, Y L

    2009-01-01

    To summarize current excellent research in the field of bioinformatics. Synopsis of the articles selected for the IMIA Yearbook 2009. The selection process for this yearbook's section on Bioinformatics resulted in six excellent articles highlighting several important trends. First, it can be noted that Semantic Web technology continues to play an important role in heterogeneous data integration. Novel applications also put more emphasis on its ability to make logical inferences leading to new insights and discoveries. Second, translational research, due to its complex nature, increasingly relies on collective intelligence made available through the adoption of community-defined protocols or software architectures for secure data annotation, sharing and analysis. Advances in systems biology, bio-ontologies and text-mining can also be noted. Current biomedical research is gradually evolving towards an environment characterized by intensive collaboration and more sophisticated knowledge processing activities. Enabling technologies, whether Semantic Web or other solutions, are expected to play an increasingly important role in generating new knowledge in the foreseeable future.

  7. CLARM: An integrative approach for functional modules discovery

    KAUST Repository

    Salem, Saeed M.; Alroobi, Rami; Banitaan, Shadi; Seridi, Loqmane; Brewer, James E.; Aljarah, Ibrahim

    2011-01-01

    Functional module discovery aims to find well-connected subnetworks that can serve as candidate protein complexes. Advances in high-throughput proteomic technologies have enabled the collection of large amounts of interaction data as well as gene expression data. We propose CLARM, a clustering algorithm that integrates gene expression profiles and the protein-protein interaction network for biological module discovery. The main premise is that by enriching the interaction network with interactions between genes that are highly co-expressed over a wide range of biological and environmental conditions, we can improve the quality of the discovered modules. Protein-protein interactions, known protein complexes, and gene expression profiles for diverse environmental conditions from the yeast Saccharomyces cerevisiae were used to evaluate the biological significance of the reported modules. Our experiments show that the CLARM approach is competitive with well-established module discovery methods. Copyright © 2011 ACM.
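
    The network-enrichment premise can be sketched as follows (this is not the CLARM algorithm itself): edges are added between genes whose expression profiles are highly correlated, and the enriched graph is then handed to any module or community detection routine. Gene names, profiles and PPI edges are invented.

```python
# Minimal sketch of co-expression enrichment of a PPI network (illustrative
# only; not the CLARM algorithm). All genes, profiles and edges are invented.
import numpy as np
import networkx as nx

expression = {           # gene -> expression profile across conditions
    "YFG1": np.array([1.0, 2.1, 3.2, 4.0, 5.1]),
    "YFG2": np.array([0.9, 2.0, 3.1, 4.2, 5.0]),   # co-expressed with YFG1
    "YFG3": np.array([5.0, 1.0, 4.0, 0.5, 3.0]),
}
ppi_edges = [("YFG1", "YFG3"), ("YFG2", "YFG3")]

graph = nx.Graph(ppi_edges)
genes = list(expression)
threshold = 0.9
for i, g1 in enumerate(genes):
    for g2 in genes[i + 1:]:
        r = np.corrcoef(expression[g1], expression[g2])[0, 1]
        if r >= threshold:
            graph.add_edge(g1, g2, coexpression=round(float(r), 3))

print(graph.edges(data=True))
# The enriched graph can now be clustered (e.g. with a community detection
# routine) to propose candidate functional modules.
print(list(nx.connected_components(graph)))
```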

  8. Semantically-enabled Knowledge Discovery in the Deep Carbon Observatory

    Science.gov (United States)

    Wang, H.; Chen, Y.; Ma, X.; Erickson, J. S.; West, P.; Fox, P. A.

    2013-12-01

    The Deep Carbon Observatory (DCO) is a decadal effort aimed at transforming scientific and public understanding of carbon in the complex deep earth system from the perspectives of Deep Energy, Deep Life, Extreme Physics and Chemistry, and Reservoirs and Fluxes. Over the course of the decade DCO scientific activities will generate a massive volume of data across a variety of disciplines, presenting significant challenges in terms of data integration, management, analysis and visualization, and ultimately limiting the ability of scientists across disciplines to make insights and unlock new knowledge. The DCO Data Science Team (DCO-DS) is applying Semantic Web methodologies to construct a knowledge representation focused on the DCO Earth science disciplines, and use it together with other technologies (e.g. natural language processing and data mining) to create a more expressive representation of the distributed corpus of DCO artifacts including datasets, metadata, instruments, sensors, platforms, deployments, researchers, organizations, funding agencies, grants and various awards. The embodiment of this knowledge representation is the DCO Data Science Infrastructure, in which unique entities within the DCO domain and the relations between them are recognized and explicitly identified. The DCO-DS Infrastructure will serve as a platform for more efficient and reliable searching, discovery, access, and publication of information and knowledge for the DCO scientific community and beyond.

  9. A Metadata based Knowledge Discovery Methodology for Seeding Translational Research.

    Science.gov (United States)

    Kothari, Cartik R; Payne, Philip R O

    2015-01-01

    In this paper, we present a semantic, metadata based knowledge discovery methodology for identifying teams of researchers from diverse backgrounds who can collaborate on interdisciplinary research projects: projects in areas that have been identified as high-impact areas at The Ohio State University. This methodology involves the semantic annotation of keywords and the postulation of semantic metrics to improve the efficiency of the path exploration algorithm as well as to rank the results. Results indicate that our methodology can discover groups of experts from diverse areas who can collaborate on translational research projects.

  10. "Structured Discovery": A Modified Inquiry Approach to Teaching Social Studies.

    Science.gov (United States)

    Lordon, John

    1981-01-01

    Describes structured discovery approach to inquiry teaching which encourages the teacher to select instructional objectives, content, and questions to be answered. The focus is on individual and group activities. A brief outline using this approach to analyze Adolf Hitler is presented. (KC)

  11. Discovery and Development of ATP-Competitive mTOR Inhibitors Using Computational Approaches.

    Science.gov (United States)

    Luo, Yao; Wang, Ling

    2017-11-16

    The mammalian target of rapamycin (mTOR) is a central controller of cell growth, proliferation, metabolism, and angiogenesis. This protein is an attractive target for new anticancer drug development. Significant progress has been made in hit discovery, lead optimization, drug candidate development and determination of the three-dimensional (3D) structure of mTOR. Computational methods have been applied to accelerate the discovery and development of mTOR inhibitors, helping to model the structure of mTOR, screen compound databases, uncover structure-activity relationships (SAR), optimize hits, mine privileged fragments and design focused libraries. In addition, computational approaches have also been applied to study the mechanisms of protein-ligand interactions and in natural product-driven drug discovery. Herein, we survey the most recent progress on the application of computational approaches to advance the discovery and development of compounds targeting mTOR. Future directions in the discovery of new mTOR inhibitors using computational methods are also discussed. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  12. Introduction to fragment-based drug discovery.

    Science.gov (United States)

    Erlanson, Daniel A

    2012-01-01

    Fragment-based drug discovery (FBDD) has emerged in the past decade as a powerful tool for discovering drug leads. The approach first identifies starting points: very small molecules (fragments) that are about half the size of typical drugs. These fragments are then expanded or linked together to generate drug leads. Although the origins of the technique date back some 30 years, it was only in the mid-1990s that experimental techniques became sufficiently sensitive and rapid for the concept to become practical. Since that time, the field has exploded: FBDD has played a role in discovery of at least 18 drugs that have entered the clinic, and practitioners of FBDD can be found throughout the world in both academia and industry. Literally dozens of reviews have been published on various aspects of FBDD or on the field as a whole, as have three books (Jahnke and Erlanson, Fragment-based approaches in drug discovery, 2006; Zartler and Shapiro, Fragment-based drug discovery: a practical approach, 2008; Kuo, Fragment based drug design: tools, practical approaches, and examples, 2011). However, this chapter will assume that the reader is approaching the field with little prior knowledge. It will introduce some of the key concepts, set the stage for the chapters to follow, and demonstrate how X-ray crystallography plays a central role in fragment identification and advancement.

  13. Exploring relation types for literature-based discovery.

    Science.gov (United States)

    Preiss, Judita; Stevenson, Mark; Gaizauskas, Robert

    2015-09-01

    Literature-based discovery (LBD) aims to identify "hidden knowledge" in the medical literature by: (1) analyzing documents to identify pairs of explicitly related concepts (terms), then (2) hypothesizing novel relations between pairs of unrelated concepts that are implicitly related via a shared concept to which both are explicitly related. Many LBD approaches use simple techniques to identify semantically weak relations between concepts, for example, document co-occurrence. These generate huge numbers of hypotheses, difficult for humans to assess. More complex techniques rely on linguistic analysis, for example, shallow parsing, to identify semantically stronger relations. Such approaches generate fewer hypotheses, but may miss hidden knowledge. The authors investigate this trade-off in detail, comparing techniques for identifying related concepts to discover which are most suitable for LBD. A generic LBD system that can utilize a range of relation types was developed. Experiments were carried out comparing a number of techniques for identifying relations. Two approaches were used for evaluation: replication of existing discoveries and the "time slicing" approach. Previous LBD discoveries could be replicated using relations based either on document co-occurrence or linguistic analysis. Using relations based on linguistic analysis generated many fewer hypotheses, but a significantly greater proportion of them were candidates for hidden knowledge. The use of linguistic analysis-based relations improves accuracy of LBD without overly damaging coverage. LBD systems often generate huge numbers of hypotheses, which are infeasible to manually review. Improving their accuracy has the potential to make these systems significantly more usable. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
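
    The "shared concept" reasoning described above is the classic ABC model of literature-based discovery. A toy sketch follows under invented relation pairs (plus one well-known historical example); it is a generic illustration, not the authors' system.

```python
# ABC model: concepts A and C that are not explicitly related but share an
# intermediate concept B become candidate hidden-knowledge hypotheses.
from collections import defaultdict
from itertools import combinations

def abc_hypotheses(relations):
    """relations: iterable of (concept_a, concept_b) pairs explicitly related in the literature."""
    neighbours = defaultdict(set)
    for a, b in relations:
        neighbours[a].add(b)
        neighbours[b].add(a)
    hypotheses = set()
    for b, linked in neighbours.items():
        for a, c in combinations(sorted(linked), 2):
            if c not in neighbours[a]:        # not already explicitly related
                hypotheses.add((a, c, b))     # hypothesis: A - C, bridged by B
    return hypotheses

print(abc_hypotheses([("fish oil", "blood viscosity"),
                      ("blood viscosity", "Raynaud's disease"),
                      ("fish oil", "platelet aggregation")]))
```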

  14. GrandBase: generating actionable knowledge from Big Data

    Directory of Open Access Journals (Sweden)

    Xiu Susie Fang

    2017-08-01

    Full Text Available Purpose – This paper aims to propose a system for generating actionable knowledge from Big Data and use this system to construct a comprehensive knowledge base (KB), called GrandBase. Design/methodology/approach – In particular, this study extracts new predicates from four types of data sources, namely, Web texts, Document Object Model (DOM) trees, existing KBs and query stream to augment the ontology of the existing KB (i.e. Freebase). In addition, a graph-based approach to conduct better truth discovery for multi-valued predicates is also proposed. Findings – Empirical studies demonstrate the effectiveness of the approaches presented in this study and the potential of GrandBase. The future research directions regarding GrandBase construction and extension have also been discussed. Originality/value – To revolutionize our modern society by using the wisdom of Big Data, considerable KBs have been constructed to feed the massive knowledge-driven applications with Resource Description Framework triples. The important challenges for KB construction include extracting information from large-scale, possibly conflicting and different-structured data sources (i.e. the knowledge extraction problem) and reconciling the conflicts that reside in the sources (i.e. the truth discovery problem). Tremendous research efforts have been contributed on both problems. However, the existing KBs are far from being comprehensive and accurate: first, existing knowledge extraction systems retrieve data from limited types of Web sources; second, existing truth discovery approaches commonly assume each predicate has only one true value. In this paper, the focus is on the problem of generating actionable knowledge from Big Data. A system is proposed, which consists of two phases, namely, knowledge extraction and truth discovery, to construct a broader KB, called GrandBase.
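
    The sketch below illustrates only the generic idea of truth discovery for a multi-valued predicate, in which source trust and value confidence reinforce each other and every value above a threshold is accepted; it is not the paper's graph-based method, and the sources and claims are invented.

```python
# Iterative truth discovery for one subject/predicate with possibly several true values.
def truth_discovery(claims, iterations=20, accept=0.5):
    """claims: {source: set_of_claimed_values}."""
    trust = {s: 0.5 for s in claims}                      # start with equal trust
    values = {v for vs in claims.values() for v in vs}
    for _ in range(iterations):
        # Value confidence = total trust of the sources claiming it (normalised).
        conf = {v: sum(t for s, t in trust.items() if v in claims[s]) for v in values}
        top = max(conf.values()) or 1.0
        conf = {v: c / top for v, c in conf.items()}
        # Source trust = average confidence of the values it claims.
        trust = {s: sum(conf[v] for v in vs) / len(vs) for s, vs in claims.items()}
    return {v for v, c in conf.items() if c >= accept}, trust

claims = {"siteA": {"Alice", "Bob"}, "siteB": {"Alice"}, "siteC": {"Carol"}}
print(truth_discovery(claims))   # a multi-valued predicate may keep several true values
```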

  15. Protein crystallography and drug discovery: recollections of knowledge exchange between academia and industry

    Directory of Open Access Journals (Sweden)

    Tom L. Blundell

    2017-07-01

    Full Text Available The development of structure-guided drug discovery is a story of knowledge exchange where new ideas originate from all parts of the research ecosystem. Dorothy Crowfoot Hodgkin obtained insulin from Boots Pure Drug Company in the 1930s and insulin crystallization was optimized in the company Novo in the 1950s, allowing the structure to be determined at Oxford University. The structure of renin was developed in academia, on this occasion in London, in response to a need to develop antihypertensives in pharma. The idea of a dimeric aspartic protease came from an international academic team and was discovered in HIV; it eventually led to new HIV antivirals being developed in industry. Structure-guided fragment-based discovery was developed in large pharma and biotechs, but has been exploited in academia for the development of new inhibitors targeting protein–protein interactions and also antimicrobials to combat mycobacterial infections such as tuberculosis. These observations provide a strong argument against the so-called `linear model', where ideas flow only in one direction from academic institutions to industry. Structure-guided drug discovery is a story of applications of protein crystallography and knowledge exchange between academia and industry that has led to new drug approvals for cancer and other common medical conditions by the Food and Drug Administration in the USA, as well as hope for the treatment of rare genetic diseases and infectious diseases that are a particular challenge in the developing world.

  16. Protein crystallography and drug discovery: recollections of knowledge exchange between academia and industry.

    Science.gov (United States)

    Blundell, Tom L

    2017-07-01

    The development of structure-guided drug discovery is a story of knowledge exchange where new ideas originate from all parts of the research ecosystem. Dorothy Crowfoot Hodgkin obtained insulin from Boots Pure Drug Company in the 1930s and insulin crystallization was optimized in the company Novo in the 1950s, allowing the structure to be determined at Oxford University. The structure of renin was developed in academia, on this occasion in London, in response to a need to develop antihypertensives in pharma. The idea of a dimeric aspartic protease came from an international academic team and was discovered in HIV; it eventually led to new HIV antivirals being developed in industry. Structure-guided fragment-based discovery was developed in large pharma and biotechs, but has been exploited in academia for the development of new inhibitors targeting protein-protein interactions and also antimicrobials to combat mycobacterial infections such as tuberculosis. These observations provide a strong argument against the so-called 'linear model', where ideas flow only in one direction from academic institutions to industry. Structure-guided drug discovery is a story of applications of protein crystallography and knowledge exchange between academia and industry that has led to new drug approvals for cancer and other common medical conditions by the Food and Drug Administration in the USA, as well as hope for the treatment of rare genetic diseases and infectious diseases that are a particular challenge in the developing world.

  17. An integrative data analysis platform for gene set analysis and knowledge discovery in a data warehouse framework.

    Science.gov (United States)

    Chen, Yi-An; Tripathi, Lokesh P; Mizuguchi, Kenji

    2016-01-01

    Data analysis is one of the most critical and challenging steps in drug discovery and disease biology. A user-friendly resource to visualize and analyse high-throughput data provides a powerful medium for both experimental and computational biologists to understand vastly different biological data types and obtain a concise, simplified and meaningful output for better knowledge discovery. We have previously developed TargetMine, an integrated data warehouse optimized for target prioritization. Here we describe how upgraded and newly modelled data types in TargetMine can now survey the wider biological and chemical data space, relevant to drug discovery and development. To enhance the scope of TargetMine from target prioritization to broad-based knowledge discovery, we have also developed a new auxiliary toolkit to assist with data analysis and visualization in TargetMine. This toolkit features interactive data analysis tools to query and analyse the biological data compiled within the TargetMine data warehouse. The enhanced system enables users to discover new hypotheses interactively by performing complicated searches with no programming and obtaining the results in an easy to comprehend output format. Database URL: http://targetmine.mizuguchilab.org. © The Author(s) 2016. Published by Oxford University Press.

  18. A new approach to the rationale discovery of polymeric biomaterials

    Science.gov (United States)

    Kohn, Joachim; Welsh, William J.; Knight, Doyle

    2007-01-01

    This paper attempts to illustrate both the need for new approaches to biomaterials discovery as well as the significant promise inherent in the use of combinatorial and computational design strategies. The key observation of this Leading Opinion Paper is that the biomaterials community has been slow to embrace advanced biomaterials discovery tools such as combinatorial methods, high throughput experimentation, and computational modeling in spite of the significant promise shown by these discovery tools in materials science, medicinal chemistry and the pharmaceutical industry. It seems that the complexity of living cells and their interactions with biomaterials has been a conceptual as well as a practical barrier to the use of advanced discovery tools in biomaterials science. However, with the continued increase in computer power, the goal of predicting the biological response of cells in contact with biomaterials surfaces is within reach. Once combinatorial synthesis, high throughput experimentation, and computational modeling are integrated into the biomaterials discovery process, a significant acceleration is possible in the pace of development of improved medical implants, tissue regeneration scaffolds, and gene/drug delivery systems. PMID:17644176

  19. Problem Formulation in Knowledge Discovery via Data Analytics (KDDA) for Environmental Risk Management.

    Science.gov (United States)

    Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason

    2016-12-15

    With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM³ ontology to capture ERM objectives and to inference analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs.
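
    As a hedged illustration of how an ontology can link ERM objectives to analytical goals and candidate techniques, the sketch below builds a hypothetical miniature vocabulary with rdflib and queries it; the class and property names are invented and are not the DM³ ontology.

```python
# Toy objective -> analytical goal -> technique ontology, queried with SPARQL.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/kdda#")   # hypothetical namespace
g = Graph()
g.add((EX.ReduceHAPExposure, EX.hasAnalyticalGoal, EX.ExposureShiftModelling))
g.add((EX.ExposureShiftModelling, EX.suggestsTechnique, EX.MultilevelRegression))
g.add((EX.ExposureShiftModelling, EX.suggestsTechnique, EX.ClusterAnalysis))

query = """
SELECT ?technique WHERE {
  ex:ReduceHAPExposure ex:hasAnalyticalGoal ?goal .
  ?goal ex:suggestsTechnique ?technique .
}
"""
# Retrieve the analytical techniques suggested for the stated ERM objective.
for row in g.query(query, initNs={"ex": EX}):
    print(row.technique)
```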

  20. Problem Formulation in Knowledge Discovery via Data Analytics (KDDA) for Environmental Risk Management

    Science.gov (United States)

    Li, Yan; Thomas, Manoj; Osei-Bryson, Kweku-Muata; Levy, Jason

    2016-01-01

    With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM3 ontology to capture ERM objectives and to inference analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs. PMID:27983713

  1. Process Knowledge Discovery Using Sparse Principal Component Analysis

    DEFF Research Database (Denmark)

    Gao, Huihui; Gajjar, Shriram; Kulahci, Murat

    2016-01-01

    As the goals of ensuring process safety and energy efficiency become ever more challenging, engineers increasingly rely on data collected from such processes for informed decision making. During recent decades, extracting and interpreting valuable process information from large historical data sets...... SPCA approach that helps uncover the underlying process knowledge regarding variable relations. This approach systematically determines the optimal sparse loadings for each sparse PC while improving interpretability and minimizing information loss. The salient features of the proposed approach...
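
    A minimal sketch of sparse PCA on simulated process data, in the spirit of the abstract above: sparse loadings keep only a few variables per component, which makes variable relations easier to interpret. This uses scikit-learn's SparsePCA rather than the authors' own algorithm, and the data are synthetic.

```python
# Sparse PCA as an interpretable alternative to ordinary PCA for process data.
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                    # 500 samples of 10 process variables
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=500)    # make two variables strongly related
X -= X.mean(axis=0)                               # centre before decomposition

spca = SparsePCA(n_components=3, alpha=1.0, random_state=0)
scores = spca.fit_transform(X)
# Non-zero loadings in each sparse component point to a small group of
# variables that move together, i.e. a candidate variable relation.
for i, load in enumerate(spca.components_):
    print(f"sparse PC{i + 1}: variables {np.nonzero(load)[0].tolist()}")
```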

  2. Discovery of the leinamycin family of natural products by mining actinobacterial genomes.

    Science.gov (United States)

    Pan, Guohui; Xu, Zhengren; Guo, Zhikai; Hindra; Ma, Ming; Yang, Dong; Zhou, Hao; Gansemans, Yannick; Zhu, Xiangcheng; Huang, Yong; Zhao, Li-Xing; Jiang, Yi; Cheng, Jinhua; Van Nieuwerburgh, Filip; Suh, Joo-Won; Duan, Yanwen; Shen, Ben

    2017-12-26

    Nature's ability to generate diverse natural products from simple building blocks has inspired combinatorial biosynthesis. The knowledge-based approach to combinatorial biosynthesis has allowed the production of designer analogs by rational metabolic pathway engineering. While successful, structural alterations are limited, with designer analogs often produced in compromised titers. The discovery-based approach to combinatorial biosynthesis complements the knowledge-based approach by exploring the vast combinatorial biosynthesis repertoire found in Nature. Here we showcase the discovery-based approach to combinatorial biosynthesis by targeting the domain of unknown function and cysteine lyase domain (DUF-SH) didomain, specific for sulfur incorporation from the leinamycin (LNM) biosynthetic machinery, to discover the LNM family of natural products. By mining bacterial genomes from public databases and the actinomycetes strain collection at The Scripps Research Institute, we discovered 49 potential producers that could be grouped into 18 distinct clades based on phylogenetic analysis of the DUF-SH didomains. Further analysis of the representative genomes from each of the clades identified 28 lnm-type gene clusters. Structural diversities encoded by the LNM-type biosynthetic machineries were predicted based on bioinformatics and confirmed by in vitro characterization of selected adenylation proteins and isolation and structural elucidation of the guangnanmycins and weishanmycins. These findings demonstrate the power of the discovery-based approach to combinatorial biosynthesis for natural product discovery and structural diversity and highlight Nature's rich biosynthetic repertoire. Comparative analysis of the LNM-type biosynthetic machineries provides outstanding opportunities to dissect Nature's biosynthetic strategies and apply these findings to combinatorial biosynthesis for natural product discovery and structural diversity.

  3. Computer-Aided Drug Discovery in Plant Pathology.

    Science.gov (United States)

    Shanmugam, Gnanendra; Jeon, Junhyun

    2017-12-01

    Control of plant diseases is largely dependent on use of agrochemicals. However, there are widening gaps between our knowledge on plant diseases gained from genetic/mechanistic studies and rapid translation of the knowledge into target-oriented development of effective agrochemicals. Here we propose that the time is ripe for computer-aided drug discovery/design (CADD) in molecular plant pathology. CADD has played a pivotal role in development of medically important molecules over the last three decades. Now, explosive increase in information on genome sequences and three dimensional structures of biological molecules, in combination with advances in computational and informational technologies, opens up exciting possibilities for application of CADD in discovery and development of agrochemicals. In this review, we outline two categories of the drug discovery strategies: structure- and ligand-based CADD, and relevant computational approaches that are being employed in modern drug discovery. In order to help readers to dive into CADD, we explain concepts of homology modelling, molecular docking, virtual screening, and de novo ligand design in structure-based CADD, and pharmacophore modelling, ligand-based virtual screening, quantitative structure activity relationship modelling and de novo ligand design for ligand-based CADD. We also provide the important resources available to carry out CADD. Finally, we present a case study showing how CADD approach can be implemented in reality for identification of potent chemical compounds against the important plant pathogens, Pseudomonas syringae and Colletotrichum gloeosporioides .
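
    As a small, hedged example of the ligand-based side of CADD mentioned above, the sketch below ranks library compounds by Tanimoto similarity of Morgan fingerprints to a known active using RDKit; the SMILES strings and the top-N cut-off are illustrative only and unrelated to the pathogens discussed in the review.

```python
# Ligand-based virtual screening by fingerprint similarity (RDKit).
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def screen(active_smiles, library_smiles, top_n=5):
    ref = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(active_smiles), 2, nBits=2048)
    scored = []
    for smi in library_smiles:
        mol = Chem.MolFromSmiles(smi)
        if mol is None:                     # skip unparsable structures
            continue
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
        scored.append((DataStructs.TanimotoSimilarity(ref, fp), smi))
    return sorted(scored, reverse=True)[:top_n]

print(screen("CC(=O)Oc1ccccc1C(=O)O",                       # aspirin as a toy "active"
             ["c1ccccc1", "CC(=O)Oc1ccccc1C(=O)OC", "O=C(O)c1ccccc1O"]))
```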

  4. State of the Art in Tumor Antigen and Biomarker Discovery

    International Nuclear Information System (INIS)

    Even-Desrumeaux, Klervi; Baty, Daniel; Chames, Patrick

    2011-01-01

    Our knowledge of tumor immunology has resulted in multiple approaches for the treatment of cancer. However, a gap between research on new tumor markers and the development of immunotherapy has emerged, and very few markers exist that can be used for treatment. The challenge is now to discover new targets for active and passive immunotherapy. This review aims at describing recent advances in biomarker and tumor antigen discovery in terms of antigen nature and localization, and highlights the most recent approaches used for their discovery, including “omics” technology.

  5. Understanding images using knowledge based approach

    International Nuclear Information System (INIS)

    Tascini, G.

    1985-01-01

    This paper presents an approach to image understanding focusing on low-level image processing and proposes a rule-based approach as part of a larger knowledge-based system. The general system has a hierarchical structure that comprises several knowledge-based layers. The main idea is to confine domain-independent knowledge to the lower levels and to reserve the higher levels for domain-dependent knowledge, that is, for interpretation.

  6. Data Mining and Knowledge Discovery tools for exploiting big Earth-Observation data

    Science.gov (United States)

    Espinoza Molina, D.; Datcu, M.

    2015-04-01

    The continuous increase in the size of the archives and in the variety and complexity of Earth-Observation (EO) sensors require new methodologies and tools that allow the end-user to access a large image repository, to extract and to infer knowledge about the patterns hidden in the images, to retrieve dynamically a collection of relevant images, and to support the creation of emerging applications (e.g.: change detection, global monitoring, disaster and risk management, image time series, etc.). In this context, we are concerned with providing a platform for data mining and knowledge discovery content from EO archives. The platform's goal is to implement a communication channel between Payload Ground Segments and the end-user who receives the content of the data coded in an understandable format associated with semantics that is ready for immediate exploitation. It will provide the user with automated tools to explore and understand the content of highly complex images archives. The challenge lies in the extraction of meaningful information and understanding observations of large extended areas, over long periods of time, with a broad variety of EO imaging sensors in synergy with other related measurements and data. The platform is composed of several components such as 1.) ingestion of EO images and related data providing basic features for image analysis, 2.) query engine based on metadata, semantics and image content, 3.) data mining and knowledge discovery tools for supporting the interpretation and understanding of image content, 4.) semantic definition of the image content via machine learning methods. All these components are integrated and supported by a relational database management system, ensuring the integrity and consistency of Terabytes of Earth Observation data.

  7. Problem Formulation in Knowledge Discovery via Data Analytics (KDDA) for Environmental Risk Management

    Directory of Open Access Journals (Sweden)

    Yan Li

    2016-12-01

    Full Text Available With the growing popularity of data analytics and data science in the field of environmental risk management, a formalized Knowledge Discovery via Data Analytics (KDDA) process that incorporates all applicable analytical techniques for a specific environmental risk management problem is essential. In this emerging field, there is limited research dealing with the use of decision support to elicit environmental risk management (ERM) objectives and identify analytical goals from ERM decision makers. In this paper, we address problem formulation in the ERM understanding phase of the KDDA process. We build a DM3 ontology to capture ERM objectives and to inference analytical goals and associated analytical techniques. A framework to assist decision making in the problem formulation process is developed. It is shown how the ontology-based knowledge system can provide structured guidance to retrieve relevant knowledge during problem formulation. The importance of not only operationalizing the KDDA approach in a real-world environment but also evaluating the effectiveness of the proposed procedure is emphasized. We demonstrate how ontology inferencing may be used to discover analytical goals and techniques by conceptualizing Hazardous Air Pollutants (HAPs) exposure shifts based on a multilevel analysis of the level of urbanization (and related economic activity) and the degree of Socio-Economic Deprivation (SED) at the local neighborhood level. The HAPs case highlights not only the role of complexity in problem formulation but also the need for integrating data from multiple sources and the importance of employing appropriate KDDA modeling techniques. Challenges and opportunities for KDDA are summarized with an emphasis on environmental risk management and HAPs.

  8. Knowledge Resources - A Knowledge Management Approach for Digital Ecosystems

    Science.gov (United States)

    Kurz, Thomas; Eder, Raimund; Heistracher, Thomas

    The paper at hand presents an innovative approach for the conception and implementation of knowledge management in Digital Ecosystems. Based on a reflection of Digital Ecosystem research of the past years, an architecture is outlined which utilizes Knowledge Resources as the central and simplest entities of knowledge transfer. After the discussion of the related conception, the result of a first prototypical implementation is described that helps the transformation of implicit knowledge to explicit knowledge for wide use.

  9. Informing child welfare policy and practice: using knowledge discovery and data mining technology via a dynamic Web site.

    Science.gov (United States)

    Duncan, Dean F; Kum, Hye-Chung; Weigensberg, Elizabeth Caplick; Flair, Kimberly A; Stewart, C Joy

    2008-11-01

    Proper management and implementation of an effective child welfare agency requires the constant use of information about the experiences and outcomes of children involved in the system, emphasizing the need for comprehensive, timely, and accurate data. In the past 20 years, there have been many advances in technology that can maximize the potential of administrative data to promote better evaluation and management in the field of child welfare. Specifically, this article discusses the use of knowledge discovery and data mining (KDD), which makes it possible to create longitudinal data files from administrative data sources, extract valuable knowledge, and make the information available via a user-friendly public Web site. This article demonstrates a successful project in North Carolina where knowledge discovery and data mining technology was used to develop a comprehensive set of child welfare outcomes available through a public Web site to facilitate information sharing of child welfare data to improve policy and practice.

  10. Cloud computing approaches to accelerate drug discovery value chain.

    Science.gov (United States)

    Garg, Vibhav; Arora, Suchir; Gupta, Chitra

    2011-12-01

    Continued advancements in the area of technology have helped high throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e. target identification, target validation, lead identification and lead validation) can generate data of the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need is again posing challenges to computer scientists to offer the matching hardware and software infrastructure, while managing the varying degree of desired computational power. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SAAS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as Cloud Computing, is now transforming drug discovery research. Also, the integration of Cloud computing with parallel computing is certainly expanding its footprint in the life sciences community. The speed, efficiency and cost effectiveness have made cloud computing a 'good to have tool' for researchers, providing them significant flexibility, allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, Discovery-Cloud would fit best to manage drug discovery and clinical development data, generated using advanced HTS techniques, hence supporting the vision of personalized medicine.

  11. Model-driven discovery of underground metabolic functions in Escherichia coli

    DEFF Research Database (Denmark)

    Guzmán, Gabriela I.; Utrilla, José; Nurk, Sergey

    2015-01-01

    -scale models, which have been widely used for predicting growth phenotypes in various environments or following a genetic perturbation; however, these predictions occasionally fail. Failed predictions of gene essentiality offer an opportunity for targeting biological discovery, suggesting the presence......E, and gltA and prpC. This study demonstrates how a targeted model-driven approach to discovery can systematically fill knowledge gaps, characterize underground metabolism, and elucidate regulatory mechanisms of adaptation in response to gene KO perturbations....

  12. Knowledge-Centric Technical Support Organization (TSO) Using Process Oriented Knowledge Management Approach

    International Nuclear Information System (INIS)

    Mohamad Safuan Sulaiman; Siti Nurbahyah Hamdan; Mohd Dzul Aiman Aslan

    2014-01-01

    In the United States of America, the Process Oriented Knowledge Management (POKM) Model has been successfully implemented in most Nuclear Power Plants. This approach has been introduced in the Nuclear Knowledge Management program by the IAEA since 2011. Malaysia has been involved in the IAEA Coordinated Research Project (CRP) focusing on this approach since 2011. The main objective of Malaysian participation in this project is to support readiness, in terms of nuclear technical knowledge, of the Technical Support Organization (TSO) for the Nuclear Power Program. This project has focused on several nuclear technical areas consisting of Public Information (PI), Radiological Impact Assessment (RIA), Nuclear Reactor Technology (NRT), Plant and Prototype Development (PDC) and nuclear knowledge management. This paper articulates the detailed POKM approach and the project experience in implementing the approach at the organizational level. (author)

  13. The limits of de novo DNA motif discovery.

    Directory of Open Access Journals (Sweden)

    David Simcha

    Full Text Available A major challenge in molecular biology is reverse-engineering the cis-regulatory logic that plays a major role in the control of gene expression. This program includes searching through DNA sequences to identify "motifs" that serve as the binding sites for transcription factors or, more generally, are predictive of gene expression across cellular conditions. Several approaches have been proposed for de novo motif discovery, that is, searching sequences without prior knowledge of binding sites or nucleotide patterns. However, unbiased validation is not straightforward. We consider two approaches to unbiased validation of discovered motifs: testing the statistical significance of a motif using a DNA "background" sequence model to represent the null hypothesis, and measuring performance in predicting membership in gene clusters. We demonstrate that the background models typically used are "too null," resulting in overly optimistic assessments of significance, and argue that performance in predicting TF binding or expression patterns from DNA motifs should be assessed on held-out data, as in predictive learning. Applying this criterion to common motif discovery methods resulted in universally poor performance, although there is a marked improvement when motifs are statistically significant against real background sequences. Moreover, on synthetic data where "ground truth" is known, the discriminative performance of all algorithms is far below the theoretical upper bound, with pronounced "over-fitting" in training. A key conclusion from this work is that the failure of de novo discovery approaches to accurately identify motifs is basically due to statistical intractability resulting from the fixed size of co-regulated gene clusters, and thus such failures do not necessarily provide evidence that unfound motifs are not active biologically. Consequently, the use of prior knowledge to enhance motif discovery is not just advantageous but necessary. An implementation of
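
    A compact sketch of the first validation idea discussed above, testing a motif's enrichment in foreground sequences against background sequences; the IUPAC motif, the sequences and the choice of Fisher's exact test are illustrative and not taken from the paper.

```python
# Motif significance against a background set via a 2x2 enrichment test.
import re
from scipy.stats import fisher_exact

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "R": "[AG]", "Y": "[CT]",
         "S": "[GC]", "W": "[AT]", "K": "[GT]", "M": "[AC]", "N": "[ACGT]"}

def contains(motif, seq):
    pattern = "".join(IUPAC[ch] for ch in motif)   # expand the degenerate motif to a regex
    return re.search(pattern, seq) is not None

def motif_enrichment(motif, foreground, background):
    fg_hit = sum(contains(motif, s) for s in foreground)
    bg_hit = sum(contains(motif, s) for s in background)
    table = [[fg_hit, len(foreground) - fg_hit],
             [bg_hit, len(background) - bg_hit]]
    return fisher_exact(table, alternative="greater")

fg = ["ACGTACGTTGACTCA", "TTGACTCAGGG", "CCCTGACTCATT"]
bg = ["ACGTACGTACGT", "GGGGCCCCAAAA", "TTTTTGGGGCCC"]
print(motif_enrichment("TGASTCA", fg, bg))   # odds ratio and one-sided p-value
```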

  14. From Data to Knowledge to Discoveries: Artificial Intelligence and Scientific Workflows

    Directory of Open Access Journals (Sweden)

    Yolanda Gil

    2009-01-01

    Full Text Available Scientific computing has entered a new era of scale and sharing with the arrival of cyberinfrastructure facilities for computational experimentation. A key emerging concept is scientific workflows, which provide a declarative representation of complex scientific applications that can be automatically managed and executed in distributed shared resources. In the coming decades, computational experimentation will push the boundaries of current cyberinfrastructure in terms of inter-disciplinary scope and integrative models of scientific phenomena under study. This paper argues that knowledge-rich workflow environments will provide necessary capabilities for that vision by assisting scientists to validate and vet complex analysis processes and by automating important aspects of scientific exploration and discovery.

  15. Approach to cerebrospinal fluid (CSF) biomarker discovery and evaluation in HIV infection.

    Science.gov (United States)

    Price, Richard W; Peterson, Julia; Fuchs, Dietmar; Angel, Thomas E; Zetterberg, Henrik; Hagberg, Lars; Spudich, Serena; Smith, Richard D; Jacobs, Jon M; Brown, Joseph N; Gisslen, Magnus

    2013-12-01

    Central nervous system (CNS) infection is a nearly universal facet of systemic HIV infection that varies in character and neurological consequences. While clinical staging and neuropsychological test performance have been helpful in evaluating patients, cerebrospinal fluid (CSF) biomarkers present a valuable and objective approach to more accurate diagnosis, assessment of treatment effects and understanding of evolving pathobiology. We review some lessons from our recent experience with CSF biomarker studies. We have used two approaches to biomarker analysis: targeted, hypothesis-driven and non-targeted exploratory discovery methods. We illustrate the first with data from a cross-sectional study of defined subject groups across the spectrum of systemic and CNS disease progression and the second with a longitudinal study of the CSF proteome in subjects initiating antiretroviral treatment. Both approaches can be useful and, indeed, complementary. The first is helpful in assessing known or hypothesized biomarkers while the second can identify novel biomarkers and point to broad interactions in pathogenesis. Common to both is the need for well-defined samples and subjects that span a spectrum of biological activity and biomarker concentrations. Previously-defined guide biomarkers of CNS infection, inflammation and neural injury are useful in categorizing samples for analysis and providing critical biological context for biomarker discovery studies. CSF biomarkers represent an underutilized but valuable approach to understanding the interactions of HIV and the CNS and to more objective diagnosis and assessment of disease activity. Both hypothesis-based and discovery methods can be useful in advancing the definition and use of these biomarkers.

  16. Approach to Cerebrospinal Fluid (CSF) Biomarker Discovery and Evaluation in HIV Infection

    Energy Technology Data Exchange (ETDEWEB)

    Price, Richard W.; Peterson, Julia; Fuchs, Dietmar; Angel, Thomas E.; Zetterberg, Henrik; Hagberg, Lars; Spudich, Serena S.; Smith, Richard D.; Jacobs, Jon M.; Brown, Joseph N.; Gisslen, Magnus

    2013-12-13

    Central nervous system (CNS) infection is a nearly universal facet of systemic HIV infection that varies in character and neurological consequences. While clinical staging and neuropsychological test performance have been helpful in evaluating patients, cerebrospinal fluid (CSF) biomarkers present a valuable and objective approach to more accurate diagnosis, assessment of treatment effects and understanding of evolving pathobiology. We review some lessons from our recent experience with CSF biomarker studies. We have used two approaches to biomarker analysis: targeted, hypothesis-driven and non-targeted exploratory discovery methods. We illustrate the first with data from a cross-sectional study of defined subject groups across the spectrum of systemic and CNS disease progression and the second with a longitudinal study of the CSF proteome in subjects initiating antiretroviral treatment. Both approaches can be useful and, indeed, complementary. The first is helpful in assessing known or hypothesized biomarkers while the second can identify novel biomarkers and point to broad interactions in pathogenesis. Common to both is the need for well-defined samples and subjects that span a spectrum of biological activity and biomarker concentrations. Previously-defined guide biomarkers of CNS infection, inflammation and neural injury are useful in categorizing samples for analysis and providing critical biological context for biomarker discovery studies. CSF biomarkers represent an underutilized but valuable approach to understanding the interactions of HIV and the CNS and to more objective diagnosis and assessment of disease activity. Both hypothesis-based and discovery methods can be useful in advancing the definition and use of these biomarkers.

  17. Usability of Discovery Portals

    OpenAIRE

    Bulens, J.D.; Vullings, L.A.E.; Houtkamp, J.M.; Vanmeulebrouk, B.

    2013-01-01

    As INSPIRE progresses to be implemented in the EU, many new discovery portals are built to facilitate finding spatial data. Currently the structure of the discovery portals is determined by the way spatial data experts like to work. However, we argue that the main target group for discovery portals are not spatial data experts but professionals with limited spatial knowledge, and a focus outside the spatial domain. An exploratory usability experiment was carried out in which three discovery p...

  18. Knowledge Discovery and Data Mining (KDDM) survey report.

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, Laurence R.; Jordan, Danyelle N.; Bauer, Travis L.; Elmore, Mark T. (Oak Ridge National Laboratory, Oak Ridge, TN); Treadwell, Jim N. (Oak Ridge National Laboratory, Oak Ridge, TN); Homan, Rossitza A.; Chapman, Leon Darrel; Spires, Shannon V.

    2005-02-01

    The large number of government and industry activities supporting the Unit of Action (UA), with attendant documents, reports and briefings, can overwhelm decision-makers with an overabundance of information that hampers the ability to make quick decisions often resulting in a form of gridlock. In particular, the large and rapidly increasing amounts of data and data formats stored on UA Advanced Collaborative Environment (ACE) servers has led to the realization that it has become impractical and even impossible to perform manual analysis leading to timely decisions. UA Program Management (PM UA) has recognized the need to implement a Decision Support System (DSS) on UA ACE. The objective of this document is to research the commercial Knowledge Discovery and Data Mining (KDDM) market and publish the results in a survey. Furthermore, a ranking mechanism based on UA ACE-specific criteria has been developed and applied to a representative set of commercially available KDDM solutions. In addition, an overview of four R&D areas identified as critical to the implementation of DSS on ACE is provided. Finally, a comprehensive database containing detailed information on surveyed KDDM tools has been developed and is available upon customer request.

  19. Use of machine learning approaches for novel drug discovery.

    Science.gov (United States)

    Lima, Angélica Nakagawa; Philot, Eric Allison; Trossini, Gustavo Henrique Goulart; Scott, Luis Paulo Barbour; Maltarollo, Vinícius Gonçalves; Honorio, Kathia Maria

    2016-01-01

    The use of computational tools in the early stages of drug development has increased in recent decades. Machine learning (ML) approaches have been of special interest, since they can be applied in several steps of the drug discovery methodology, such as prediction of target structure, prediction of biological activity of new ligands through model construction, discovery or optimization of hits, and construction of models that predict the pharmacokinetic and toxicological (ADMET) profile of compounds. This article presents an overview on some applications of ML techniques in drug design. These techniques can be employed in ligand-based drug design (LBDD) and structure-based drug design (SBDD) studies, such as similarity searches, construction of classification and/or prediction models of biological activity, prediction of secondary structures and binding sites, docking and virtual screening. Successful cases have been reported in the literature, demonstrating the efficiency of ML techniques combined with traditional approaches to study medicinal chemistry problems. Some ML techniques used in drug design are: support vector machine, random forest, decision trees and artificial neural networks. Currently, an important application of ML techniques is related to the calculation of scoring functions used in docking and virtual screening assays from a consensus, combining traditional and ML techniques in order to improve the prediction of binding sites and docking solutions.
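
    To make the consensus-scoring idea concrete, the sketch below trains a random forest activity model on invented descriptors and averages its rank with a toy docking score; it is a generic illustration under synthetic data, not a method taken from the review.

```python
# Rank-based consensus of a traditional docking score and an ML activity model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from scipy.stats import rankdata

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 16))           # molecular descriptors (toy)
y_train = rng.integers(0, 2, size=200)         # 1 = active, 0 = inactive
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

X_screen = rng.normal(size=(50, 16))           # library to be screened (toy)
docking_scores = rng.normal(size=50)           # lower = better pose score (toy)
ml_prob = model.predict_proba(X_screen)[:, 1]  # predicted probability of activity

# Consensus: average the rank from docking (ascending) and from the ML model
# (descending probability); a smaller consensus rank means a more promising compound.
consensus = (rankdata(docking_scores) + rankdata(-ml_prob)) / 2
print(np.argsort(consensus)[:5])               # indices of the top-5 candidates
```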

  20. Development of traditional Chinese medicine clinical data warehouse for medical knowledge discovery and decision support.

    Science.gov (United States)

    Zhou, Xuezhong; Chen, Shibo; Liu, Baoyan; Zhang, Runsun; Wang, Yinghui; Li, Ping; Guo, Yufeng; Zhang, Hua; Gao, Zhuye; Yan, Xiufeng

    2010-01-01

    Traditional Chinese medicine (TCM) is a scientific discipline which develops its theories from long-term clinical practice. Large-scale clinical data are the core empirical knowledge source for TCM research. This paper introduces a clinical data warehouse (CDW) system, which incorporates structured electronic medical record (SEMR) data for medical knowledge discovery and TCM clinical decision support (CDS). We have developed the clinical reference information model (RIM) and physical data model to manage the various information entities and their relationships in TCM clinical data. An extraction-transformation-loading (ETL) tool is implemented to integrate and normalize the clinical data from different operational data sources. The CDW includes online analytical processing (OLAP) and complex network analysis (CNA) components to explore the various clinical relationships. Furthermore, data mining and CNA methods are used to discover valuable clinical knowledge from the data. The CDW has integrated 20,000 TCM inpatient records and 20,000 outpatient records, which contain manifestations (e.g. symptoms, physical examinations and laboratory test results), diagnoses and prescriptions as the main information components. We propose a practical solution to accomplish the large-scale clinical data integration and preprocessing tasks. Meanwhile, we have developed over 400 OLAP reports to enable the multidimensional analysis of clinical data and the case-based CDS. We have successfully conducted several interesting data mining applications. Particularly, we use various classification methods, namely support vector machine, decision tree and Bayesian network, to discover knowledge of syndrome differentiation. Furthermore, we have applied association rules and CNA to extract useful acupuncture point and herb combination patterns from the clinical prescriptions. A CDW system consisting of TCM clinical RIM, ETL, OLAP and data mining as the core
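
    As a hedged illustration of mining herb combination patterns with association rules, as mentioned in the abstract, the sketch below computes support and confidence over item pairs for a few invented prescriptions; the CDW's own mining components are not reproduced here.

```python
# Minimal association-rule mining over herb pairs in prescriptions.
from collections import Counter
from itertools import combinations

prescriptions = [
    {"ginseng", "licorice", "astragalus"},
    {"ginseng", "licorice"},
    {"salvia", "licorice", "astragalus"},
    {"ginseng", "astragalus"},
]
n = len(prescriptions)
item_count = Counter(h for p in prescriptions for h in p)
pair_count = Counter(frozenset(c) for p in prescriptions for c in combinations(sorted(p), 2))

min_support, min_confidence = 0.5, 0.6
for pair, cnt in pair_count.items():
    support = cnt / n
    if support < min_support:
        continue
    a, b = tuple(pair)
    for x, y in ((a, b), (b, a)):                  # candidate rules x -> y and y -> x
        confidence = cnt / item_count[x]
        if confidence >= min_confidence:
            print(f"{x} -> {y}  support={support:.2f} confidence={confidence:.2f}")
```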

  1. The biological knowledge discovery by PCCF measure and PCA-F projection.

    Science.gov (United States)

    Jia, Xingang; Zhu, Guanqun; Han, Qiuhong; Lu, Zuhong

    2017-01-01

    In the process of biological knowledge discovery, PCA is commonly used to complement clustering analysis, but PCA typically gives poor visualizations for most gene expression data sets. Here, we propose a PCCF measure and use PCA-F to display clusters of PCCF, where PCCF and PCA-F are modeled from the modified cumulative probabilities of genes. From the analysis of simulated and experimental data sets, we demonstrate that PCCF is more appropriate and reliable for analyzing gene expression data than other commonly used distances or similarity measures, and that PCA-F is a good visualization technique for identifying clusters of PCCF; we focus on data sets in which the expression values of genes are collected at different time points.

  2. Understanding price discovery in interconnected markets: Generalized Langevin process approach and simulation

    Science.gov (United States)

    Schenck, Natalya A.; Horvath, Philip A.; Sinha, Amit K.

    2018-02-01

    While the literature on the price discovery process and information flow between dominant and satellite markets is exhaustive, most studies have applied an approach that can be traced back to Hasbrouck (1995) or Gonzalo and Granger (1995). In this paper, however, we propose a Generalized Langevin process with an asymmetric double-well potential function, with co-integrated time series and interconnected diffusion processes, to model the information flow and price discovery process in two interconnected markets, a dominant and a satellite market. A simulated illustration of the model is also provided.
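
    A minimal Euler-Maruyama simulation of a Langevin process in an asymmetric double-well potential, sketching the kind of dynamics described above; the potential parameters and noise level are arbitrary choices, not calibrated to any market data, and the full co-integrated two-market model is not reproduced.

```python
# One-dimensional Langevin dynamics in an asymmetric double-well potential.
import numpy as np

def simulate(n_steps=10_000, dt=1e-3, sigma=0.6, a=1.0, b=1.0, c=0.15, x0=0.0, seed=0):
    # V(x) = a*x**4/4 - b*x**2/2 + c*x   (the linear term makes the wells asymmetric)
    grad_v = lambda x: a * x**3 - b * x + c
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps)
    x[0] = x0
    for t in range(1, n_steps):
        noise = rng.normal(scale=np.sqrt(dt))            # Brownian increment
        x[t] = x[t - 1] - grad_v(x[t - 1]) * dt + sigma * noise
    return x

path = simulate()
print(path[:5], path.mean())   # the path occasionally hops between the two wells
```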

  3. Participative knowledge management to empower manufacturing workers

    DEFF Research Database (Denmark)

    Campatelli, Gianni; Richter, Alexander; Stocker, Alexander

    2016-01-01

    skills. In this paper, the authors suggest a participative knowledge management approach to empower manufacturing workers. Starting from a comprehensive empirical analysis of the existing work practices in a manufacturing company, the authors have developed and validated a knowledge management system...... prototype. The prototype is aimed for training, problem solving, and facilitating the discovery, acquisition, and sharing of manufacturing knowledge. The conducted evaluation of the prototype indicates that workers' skills and level of work satisfaction will increase since the knowledge management system...

  4. Argo_CUDA: Exhaustive GPU based approach for motif discovery in large DNA datasets.

    Science.gov (United States)

    Vishnevsky, Oleg V; Bocharnikov, Andrey V; Kolchanov, Nikolay A

    2018-02-01

    The development of chromatin immunoprecipitation sequencing (ChIP-seq) technology has revolutionized the genetic analysis of the basic mechanisms underlying transcription regulation and led to the accumulation of information about a huge number of DNA sequences. Many web services are currently available for de novo motif discovery in datasets containing information about DNA/protein binding. The enormous diversity of motifs makes finding them challenging, and researchers therefore resort to various stochastic approaches. Unfortunately, the efficiency of motif discovery programs declines dramatically as the query set size increases. As a result, only a fraction of the top "peak" ChIP-Seq segments can be analyzed, or the area of analysis must be narrowed. Thus, motif discovery in massive datasets remains a challenging issue. The Argo_Compute Unified Device Architecture (CUDA) web service is designed to process massive DNA data. It is a program for the detection of degenerate oligonucleotide motifs of fixed length written in the 15-letter IUPAC code. Argo_CUDA is a fully exhaustive approach based on high-performance GPU technologies. Compared with existing motif discovery web services, Argo_CUDA shows good prediction quality on simulated sets. The analysis of ChIP-Seq sequences revealed motifs which correspond to known transcription factor binding sites.

  5. Ontology Learning for Chinese Information Organization and Knowledge Discovery in Ethnology and Anthropology

    Directory of Open Access Journals (Sweden)

    Jing Kong

    2007-09-01

    Full Text Available This paper presents an ontology learning architecture that reflects the interaction between ontology learning and other applications such as ontology-engineering tools and information systems. Based on this architecture, we have developed a prototype system CHOL: a Chinese ontology learning tool. CHOL learns domain ontology from Chinese domain specific texts. On the one hand, it supports a semi-automatic domain ontology acquisition and dynamic maintenance, and on the other hand, it supports an auto-indexing and auto-classification of Chinese scholarly literature. CHOL has been applied in ethnology and anthropology for Chinese information organization and knowledge discovery.

  6. Knowledge discovery for pancreatic cancer using inductive logic programming.

    Science.gov (United States)

    Qiu, Yushan; Shimada, Kazuaki; Hiraoka, Nobuyoshi; Maeshiro, Kensei; Ching, Wai-Ki; Aoki-Kinoshita, Kiyoko F; Furuta, Koh

    2014-08-01

    Pancreatic cancer is a devastating disease, and predicting the status of patients has become an important and urgent issue. The authors explore the applicability of the inductive logic programming (ILP) method to this disease and show that accumulated clinical laboratory data can be used to predict disease characteristics, which will contribute to the selection of therapeutic modalities for pancreatic cancer. The availability of a large amount of clinical laboratory data provides clues to aid in the knowledge discovery of diseases. In predicting the differentiation of the tumour and the status of lymph node metastasis in pancreatic cancer using the ILP model, three rules are developed that are consistent with descriptions in the literature. The rules that are identified are useful for detecting the differentiation of the tumour and the status of lymph node metastasis in pancreatic cancer and therefore contribute significantly to the decision of therapeutic strategies. In addition, the proposed method is compared with other typical classification techniques and the results further confirm the superiority and merit of the proposed method.

  7. Discovery of resources using MADM approaches for parallel and distributed computing

    Directory of Open Access Journals (Sweden)

    Mandeep Kaur

    2017-06-01

    Full Text Available Grid, a form of parallel and distributed computing, allows the sharing of data and computational resources among its users from various geographical locations. The grid resources are diverse in terms of their underlying attributes. The majority of the state-of-the-art resource discovery techniques rely on the static resource attributes during resource selection. However, the matching resources based on the static resource attributes may not be the most appropriate resources for the execution of user applications because they may have heavy job loads, less storage space or less working memory (RAM). Hence, there is a need to consider the current state of the resources in order to find the most suitable resources. In this paper, we have proposed a two-phased multi-attribute decision making (MADM) approach for discovery of grid resources by using P2P formalism. The proposed approach considers multiple resource attributes for resource-selection decision making and provides the best suitable resource(s) to grid users. The first phase describes a mechanism to discover all matching resources and applies the SAW method to shortlist the top-ranked resources, which are communicated to the requesting super-peer. The second phase of our proposed methodology applies an integrated MADM approach (AHP-enriched PROMETHEE-II) to the list of selected resources received from different super-peers. The pairwise comparison of the resources with respect to their attributes is made and the rank of each resource is determined. The top-ranked resource is then communicated to the grid user by the grid scheduler. Our proposed methodology enables the grid scheduler to allocate the most suitable resource to the user application and also reduces the search complexity by filtering out the less suitable resources during resource discovery.
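
    The first-phase SAW (simple additive weighting) step described above can be sketched in a few lines: normalise each attribute (treating benefit and cost attributes differently), apply weights, and rank the candidate resources. The attribute values and weights below are invented for illustration.

```python
# Simple additive weighting (SAW) ranking of candidate grid resources.
import numpy as np

def saw_rank(matrix, weights, benefit):
    """matrix: resources x attributes; benefit[j] is True if larger is better for attribute j."""
    m = np.asarray(matrix, dtype=float)
    norm = np.empty_like(m)
    for j in range(m.shape[1]):
        col = m[:, j]
        # Benefit attributes are divided by the column max, cost attributes use min/value.
        norm[:, j] = col / col.max() if benefit[j] else col.min() / col
    scores = norm @ np.asarray(weights, dtype=float)
    return np.argsort(-scores), scores          # best resource first

# attributes: CPU speed (benefit), free RAM (benefit), current job load (cost)
resources = [[2.4, 8, 12], [3.0, 4, 3], [2.0, 16, 7]]
order, scores = saw_rank(resources, weights=[0.4, 0.3, 0.3], benefit=[True, True, False])
print(order, scores.round(3))
```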

  8. The discovery of radioactivity: the centenary

    International Nuclear Information System (INIS)

    Patil, S.K.

    1995-01-01

    In the last decade of the nineteenth century, a number of fundamental discoveries of outstanding importance were made unexpectedly, marking the beginning of a new era in physics. A cascade of spectacular discoveries began with the announcement of the discovery of x-rays by Roentgen, followed in quick succession by the discoveries of radioactivity by Becquerel, of the Zeeman effect, of the electron by J.J. Thomson, and of polonium and radium by the Curies. Both x-rays and radioactivity have wide applications in scientific, medical and industrial fields and have made outstanding contributions to the advancement of human knowledge and welfare. Radioactivity is well known, and no other discovery in the field of physics or chemistry has had a more profound effect on our fundamental knowledge of nature. The present article, on the occasion of the centenary of the discovery of radioactivity, attempts to describe some glimpses of the history of radioactivity. (author). 59 refs

  9. Knowledge-Based Approaches: Two cases of applicability

    DEFF Research Database (Denmark)

    Andersen, Tom

    1997-01-01

    Basic issues of the term "knowledge-based approach" (KBA) are discussed. Two cases of the applicability of KBA are presented, and it is concluded that KBA is more than just IT.

  10. Chemogenomic discovery of allosteric antagonists at the GPRC6A receptor

    DEFF Research Database (Denmark)

    Gloriam, David E.; Wellendorph, Petrine; Johansen, Lars Dan

    2011-01-01

    and pharmacological character: (1) chemogenomic lead identification through the first, to our knowledge, ligand inference between two different GPCR families, Families A and C; and (2) the discovery of the most selective GPRC6A allosteric antagonists discovered to date. The unprecedented inference of...... pharmacological activity across GPCR families provides proof-of-concept for in silico approaches against Family C targets based on Family A templates, greatly expanding the prospects of successful drug design and discovery. The antagonists were tested against a panel of seven Family A and C G protein-coupled receptors...

  11. Application of data mining and artificial intelligence techniques to mass spectrometry data for knowledge discovery

    Directory of Open Access Journals (Sweden)

    Hugo López-Fernández

    2016-05-01

    Mass spectrometry using matrix-assisted laser desorption ionization coupled to time-of-flight analyzers (MALDI-TOF MS) has become popular during the last decade due to its high speed, sensitivity and robustness for detecting proteins and peptides. It allows large sets of samples to be analyzed quickly in a single batch, enabling high-throughput proteomics. In this scenario, bioinformatics methods and computational tools play a key role in MALDI-TOF data analysis, as they are able to handle the large amounts of raw data generated in order to extract new knowledge and useful conclusions. A typical MALDI-TOF MS data analysis workflow has three main stages: data acquisition, preprocessing and analysis. Although the most popular use of this technology is to identify proteins through their peptides, analyses that make use of artificial intelligence (AI), machine learning (ML), and statistical methods can also be carried out in order to perform biomarker discovery, automatic diagnosis, and knowledge discovery. In this research work, this workflow is explored in depth and new solutions based on the application of AI, ML, and statistical methods are proposed. In addition, an integrated software platform that supports the full MALDI-TOF MS data analysis workflow, and that facilitates the work of proteomics researchers without advanced bioinformatics skills, has been developed and released to the scientific community.
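
    As a rough illustration of the "preprocess, then apply ML" stage of such a workflow (not the platform's actual implementation), a sketch with synthetic spectra might look as follows; the normalisation choice and the classifier are assumptions.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 2000))    # 60 spectra x 2000 m/z bins (placeholder intensities)
    y = rng.integers(0, 2, size=60)    # placeholder labels, e.g. case vs. control

    # Total-ion-current style normalisation, a common preprocessing step for spectra.
    X = np.abs(X)
    X = X / X.sum(axis=1, keepdims=True)

    model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    print(cross_val_score(model, X, y, cv=5).mean())   # chance-level on random data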

  12. Knowledge Discovery, Integration and Communication for Extreme Weather and Flood Resilience Using Artificial Intelligence: Flood AI Alpha

    Science.gov (United States)

    Demir, I.; Sermet, M. Y.

    2016-12-01

    Nobody is immune from extreme events or natural hazards that can lead to large-scale consequences for the nation and the public. One of the solutions to reduce the impacts of extreme events is to invest in improving resilience, with the ability to better prepare for, plan for, recover from, and adapt to disasters. The National Research Council (NRC) report discusses how to increase resilience to extreme events through a vision of a resilient nation in the year 2030. The report highlights the importance of data and information, the gaps and knowledge challenges that need to be addressed, and suggests that every individual have access to risk and vulnerability information to make their communities more resilient. This abstract presents our project on developing a resilience framework for flooding to improve societal preparedness, with the following objectives: (a) develop a generalized ontology for extreme events with a primary focus on flooding; (b) develop a knowledge engine with voice recognition, artificial intelligence, natural language processing, and an inference engine. The knowledge engine will utilize the flood ontology and concepts to connect user input to relevant knowledge discovery outputs on flooding; (c) develop a data acquisition and processing framework for existing environmental observations, forecast models, and social networks. The system will utilize the framework, capabilities and user base of the Iowa Flood Information System (IFIS) to populate and test the system; (d) develop a communication framework to support user interaction and delivery of information to users. The interaction and delivery channels will include voice and text input via a web-based system (e.g. IFIS), agent-based bots (e.g. Microsoft Skype, Facebook Messenger), smartphone and augmented reality applications (e.g. smart assistant), and automated web workflows (e.g. IFTTT, CloudWork) to open up knowledge discovery for flooding to thousands of community-extensible web workflows.

  13. Rough Sets as a Knowledge Discovery and Classification Tool for the Diagnosis of Students with Learning Disabilities

    Directory of Open Access Journals (Sweden)

    Yu-Chi Lin

    2011-02-01

    Due to the implicit characteristics of learning disabilities (LDs), the diagnosis of students with learning disabilities has long been a difficult issue. Artificial intelligence techniques like artificial neural networks (ANN) and support vector machines (SVM) have been applied to the LD diagnosis problem with satisfactory outcomes. However, special education teachers or professionals tend to be skeptical of these kinds of black-box predictors. In this study, we apply rough set theory (RST), which can not only perform as a classifier but can also produce meaningful explanations or rules, to the LD diagnosis application. Our experiments indicate that the RST approach is competitive as a tool for feature selection, and it performs better in terms of prediction accuracy than other rule-based algorithms such as decision tree and RIPPER. We also propose to mix samples collected from sources with different LD diagnosis procedures and criteria. By pre-processing these mixed samples with simple and readily available clustering algorithms, we are able to improve the quality and support of the rules generated by the RST. Overall, our study shows that the rough set approach, as a classification and knowledge discovery tool, may have great potential to play an essential role in LD diagnosis.
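
    A minimal sketch of the rough-set notions behind such rule generation (indiscernibility classes and the lower approximation of a decision class); the toy student records and attributes are hypothetical, not the study's data.

    from collections import defaultdict

    # Toy records: condition attributes plus an LD / no-LD decision.
    samples = [
        {"reading": "low",  "memory": "low",  "attention": "high", "diagnosis": "LD"},
        {"reading": "low",  "memory": "low",  "attention": "high", "diagnosis": "LD"},
        {"reading": "high", "memory": "low",  "attention": "high", "diagnosis": "noLD"},
        {"reading": "high", "memory": "high", "attention": "low",  "diagnosis": "noLD"},
    ]
    condition_attrs = ["reading", "memory", "attention"]

    def indiscernibility_classes(samples, attrs):
        # Group records that are indistinguishable on the chosen attributes.
        classes = defaultdict(list)
        for i, s in enumerate(samples):
            classes[tuple(s[a] for a in attrs)].append(i)
        return classes.values()

    def lower_approximation(samples, attrs, decision_value):
        # Records that certainly belong to the decision class given the attributes.
        lower = set()
        for eq_class in indiscernibility_classes(samples, attrs):
            if all(samples[i]["diagnosis"] == decision_value for i in eq_class):
                lower.update(eq_class)
        return lower

    print(lower_approximation(samples, condition_attrs, "LD"))   # {0, 1}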

  14. Fragment-based approaches to the discovery of kinase inhibitors.

    Science.gov (United States)

    Mortenson, Paul N; Berdini, Valerio; O'Reilly, Marc

    2014-01-01

    Protein kinases are one of the most important families of drug targets, and aberrant kinase activity has been linked to a large number of disease areas. Although eminently targetable using small molecules, kinases present a number of challenges as drug targets, not least obtaining selectivity across such a large and relatively closely related target family. Fragment-based drug discovery involves screening simple, low-molecular weight compounds to generate initial hits against a target. These hits are then optimized to more potent compounds via medicinal chemistry, usually facilitated by structural biology. Here, we will present a number of recent examples of fragment-based approaches to the discovery of kinase inhibitors, detailing the construction of fragment-screening libraries, the identification and validation of fragment hits, and their optimization into potent and selective lead compounds. The advantages of fragment-based methodologies will be discussed, along with some of the challenges associated with using this route. Finally, we will present a number of key lessons derived both from our own experience running fragment screens against kinases and from a large number of published studies.

  15. Open-access public-private partnerships to enable drug discovery--new approaches.

    Science.gov (United States)

    Müller, Susanne; Weigelt, Johan

    2010-03-01

    The productivity of the pharmaceutical industry, as assessed by the number of NMEs produced per US dollar spent in R&D, has been in steady decline during the past 40 years. This decline in productivity not only poses a significant challenge to the pharmaceutical industry, but also to society because of the importance of developing drugs for the treatment of unmet medical needs. The major challenge in progressing a new drug to the market is the successful completion of clinical trials. However, the failure rate of drugs entering trials has not decreased, despite various technological and scientific breakthroughs in recent decades, and despite intense target validation efforts. This lack of success suggests limitations in the fundamental understanding of target biology and human pharmacology. One contributing factor may be the traditional secrecy of the pharmaceutical sector, a characteristic that does not promote scientific discovery in an optimal manner. Access to broader knowledge relating to target biology and human pharmacology is difficult to obtain because interactions between researchers in industry and academia are typically restricted to closed collaborations in which the knowledge gained is confidential. However, open-access collaborative partnerships are gaining momentum in industry, and are also favored by funding agencies. Such open-access collaborations may be a powerful alternative to closed collaborations; the sharing of early-stage research data is expected to enable scientific discovery by engaging a broader section of the scientific community in the exploration of new findings. Potentially, the sharing of data could contribute to an increased understanding of biological processes and a decrease in the attrition of clinical programs.

  16. Advances in Knowledge Discovery and Data Mining: 21st Pacific-Asia Conference, PAKDD 2017, Held in Jeju, South Korea, May 23-26, 2017. Proceedings, Part I and Part II.

    Science.gov (United States)

    2017-06-27

    The Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD) is a leading international conference in the areas of knowledge discovery and data mining (KDD). PAKDD 2017 was held in Jeju, South Korea, on May 23-26, 2017, and its proceedings (Part I and Part II) were published by Springer, Switzerland. The program included three keynote speeches, delivered by speakers including Sang Cha from Seoul National University.

  17. Participatory knowledge-management design: A semiotic approach

    DEFF Research Database (Denmark)

    Valtolina, Stefano; Barricelli, Barbara Rita; Dittrich, Yvonne

    2012-01-01

    The aim of this paper is to present a design strategy for collaborative knowledge-management systems based on a semiotic approach. The contents and structure of experts' knowledge is highly dependent on professional or individual practice. Knowledge-management systems that support cooperation ... a semiotic perspective to computer application and human–computer interaction. From a semiotic perspective, the computer application is both a message from the designer to the user about the structure of the problem domain, as well as about interaction with it, and a structured channel for the user ... To this end, the paper describes how our semiotic approach supports processes for representing, storing, accessing, and transferring knowledge through which the information architecture of an interactive system can ... vocabularies, notations, and suitable visual structures for navigating among interface elements.

  18. A Framework for Automatic Web Service Discovery Based on Semantics and NLP Techniques

    Directory of Open Access Journals (Sweden)

    Asma Adala

    2011-01-01

    As a greater number of Web services are made available today, automatic discovery is recognized as an important task. To promote the automation of service discovery, different semantic languages have been created that allow the functionality of services to be described in a machine-interpretable form using Semantic Web technologies. The problem is that users do not have intimate knowledge of semantic Web service languages and related toolkits. In this paper, we propose a discovery framework that enables semantic Web service discovery based on keywords written in natural language. We describe a novel approach for the automatic discovery of semantic Web services which employs Natural Language Processing techniques to match a user request, expressed in natural language, with a semantic Web service description. Additionally, we present an efficient semantic matching technique to compute the semantic distance between ontological concepts.
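
    One simple way to realise a semantic distance of the kind mentioned above is a path-based measure over a concept taxonomy; the tiny taxonomy and the use of networkx below are illustrative assumptions, not the paper's actual matching algorithm.

    import networkx as nx

    taxonomy = nx.Graph()
    taxonomy.add_edges_from([
        ("Service", "TravelService"),
        ("TravelService", "FlightBooking"),
        ("TravelService", "HotelBooking"),
        ("Service", "FinanceService"),
        ("FinanceService", "CurrencyConversion"),
    ])

    def semantic_distance(a, b):
        # Shorter taxonomy paths mean semantically closer concepts.
        return nx.shortest_path_length(taxonomy, a, b)

    print(semantic_distance("FlightBooking", "HotelBooking"))        # 2
    print(semantic_distance("FlightBooking", "CurrencyConversion"))  # 4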

  19. Renaissance in Antibiotic Discovery: Some Novel Approaches for Finding Drugs to Treat Bad Bugs.

    Science.gov (United States)

    Gadakh, Bharat; Van Aerschot, Arthur

    2015-01-01

    With the alarming resistance to currently used antibiotics, there is a serious worldwide threat to public health. Therefore, there is an urgent need to search for new antibiotics or new cellular targets which are essential for the survival of pathogens. However, during the past 50 years, only two new classes of antibiotics (oxazolidinones and lipopeptides) have reached the clinic. This suggests that the success rate in discovering new or novel antibiotics using conventional approaches is limited and that we must reconsider our antibiotic discovery approaches. While many new strategies are being pursued lately, this review focuses primarily on a few of these novel approaches for antibiotic discovery. These include structure-based drug design (SBDD), the genomic approach, anti-virulence strategies, targeting non-multiplying bacteria and the use of bacteriophages. In general, recent advancements in nuclear magnetic resonance, X-ray crystallography, and genomic evolution have had a significant impact on antibacterial drug research. This review therefore aims to discuss recent strategies in the search for new antibacterial agents that make use of these technical novelties, along with their advantages, disadvantages and limitations.

  20. Using concepts in literature-based discovery: Simulating Swanson's Raynaud-fish oil and migraine-magnesium discoveries

    NARCIS (Netherlands)

    Weeber, M; Klein, Henny; de Jong-van den Berg, LTW; Vos, R

    Literature-based discovery has resulted in new knowledge. In the biomedical context, Don R. Swanson has generated several literature-based hypotheses that have been corroborated experimentally and clinically. In this paper, we propose a two-step model of the discovery process in which hypotheses are
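
    Although the abstract above is truncated, the open-discovery pattern usually associated with Swanson's work (A relates to B, B relates to C, but A and C never co-occur directly) can be sketched as follows; the toy documents and concepts are placeholders, not the authors' data or model.

    # Each "document" is the set of concepts mentioned in one abstract.
    docs = [
        {"raynaud", "blood viscosity"},
        {"blood viscosity", "platelet aggregation"},
        {"fish oil", "blood viscosity"},
        {"fish oil", "platelet aggregation"},
        {"migraine", "serotonin"},
    ]

    def cooccurring(term):
        # All concepts that appear in the same document as the given term.
        return {t for d in docs if term in d for t in d} - {term}

    def abc_candidates(a):
        b_terms = cooccurring(a)            # intermediate B concepts
        direct = b_terms | {a}
        candidates = set()
        for b in b_terms:
            candidates |= cooccurring(b) - direct   # C concepts linked to A only via some B
        return candidates

    print(abc_candidates("raynaud"))   # {'fish oil', 'platelet aggregation'}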

  1. The emerging knowledge governance approach : challenges and characteristics

    OpenAIRE

    Foss, Nicolai Juul

    2006-01-01

    The “knowledge governance approach” is characterized as a distinctive, emerging approach that cuts across the fields of knowledge management, organisation studies, strategy, and human resource management. Knowledge governance is taken up with how the deployment of governance mechanisms influences knowledge processes, such as sharing, retaining and creating knowledge. It insists on clear micro (behavioural) foundations, adopts an economizing perspective, and examines the links between knowledg...

  2. A Design Thinking Approach to Teaching Knowledge Management

    Science.gov (United States)

    Wang, Shouhong; Wang, Hai

    2008-01-01

    Pedagogies for knowledge management courses are still undeveloped. This Teaching Tip introduces a design thinking approach to teaching knowledge management. An induction model used to guide students' real-life projects for knowledge management is presented. (Contains 1 figure.)

  3. Individual, social, and cultural approaches to knowledge sharing

    Directory of Open Access Journals (Sweden)

    Widen, Gunilla

    2017-09-01

    Workplace knowledge sharing is a complex process, and there is a large number of studies in the area. In this article three theoretical approaches from library and information science are used to discuss knowledge sharing in the workplace. The approaches are information behavior, social capital, and information culture, and they bring important insights that need to be considered from a holistic management point of view when it comes to knowledge sharing. The individual's relation to different levels of context is important, in relation to work roles, work tasks, situations, organizational structures, and culture. The frameworks also shed light on where and how knowledge sharing activities are present in the organization. From a knowledge management point of view, it is important to acknowledge that when knowledge is valued, there is also an awareness of knowledge sharing activities. In addition to more traditional views of context, the frameworks bring forward different views on context, such as time and space as contextual factors.

  4. Segmented and Detailed Visualization of Anatomical Structures based on Augmented Reality for Health Education and Knowledge Discovery

    Directory of Open Access Journals (Sweden)

    Isabel Cristina Siqueira da Silva

    2017-05-01

    The evolution of technology has changed the face of education, especially when combined with appropriate pedagogical bases. This combination has created innovation opportunities that add quality to teaching through new perspectives on traditional classroom methods. In the health field in particular, augmented reality and interaction design techniques can assist the teacher in the exposition of theoretical concepts and/or concepts that require training in specific medical procedures. In addition, visualization of and interaction with health data, from different sources and in different formats, helps to identify hidden patterns or anomalies, increases flexibility in the search for certain values, allows the comparison of different units to obtain relative differences in quantities, provides human interaction in real time, etc. At this point, it is noted that interactive visualization techniques such as augmented and virtual reality can contribute to the process of knowledge discovery in medical and biomedical databases. This work discusses aspects related to the use of augmented reality and interaction design as tools for teaching anatomy and knowledge discovery, and proposes a case study based on a mobile application that can display targeted anatomical structures in high resolution and in detail.

  5. Discovery of Intermetallic Compounds from Traditional to Machine-Learning Approaches.

    Science.gov (United States)

    Oliynyk, Anton O; Mar, Arthur

    2018-01-16

    Intermetallic compounds are bestowed with diverse compositions, complex structures, and useful properties for many materials applications. How metallic elements react to form these compounds and what structures they adopt remain challenging questions that defy predictability. Traditional approaches offer some rational strategies to prepare specific classes of intermetallics, such as targeting members within a modular homologous series, manipulating building blocks to assemble new structures, and filling interstitial sites to create stuffed variants. Because these strategies rely on precedent, they cannot foresee surprising results, by definition. Exploratory synthesis, whether through systematic phase diagram investigations or serendipity, is still essential for expanding our knowledge base. Eventually, the relationships may become too complex for the pattern recognition to be reliably or practically performed by humans. Complementing these traditional approaches, new machine-learning approaches may be a viable alternative for materials discovery, not only among intermetallics but also more generally for other chemical compounds. In this Account, we survey our own efforts to discover new intermetallic compounds, encompassing gallides, germanides, phosphides, arsenides, and others. We apply various machine-learning methods (such as support vector machine and random forest algorithms) to confront two significant questions in solid state chemistry. First, what crystal structures are adopted by a compound given an arbitrary composition? Initial efforts have focused on binary equiatomic phases AB, ternary equiatomic phases ABC, and full Heusler phases AB2C. Our analysis emphasizes the use of real experimental data and places special value on confirming predictions through experiment. Chemical descriptors are carefully chosen through a rigorous procedure called cluster resolution feature selection. Predictions for crystal structures are quantified by evaluating
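
    The composition-to-structure classification described above can be caricatured with a random forest over simple element descriptors; the descriptors, compositions, and structure-type labels below are illustrative stand-ins, not the authors' curated dataset or chosen features.

    from sklearn.ensemble import RandomForestClassifier

    # Per-element descriptors: (electronegativity, atomic radius in pm, valence electrons).
    elements = {
        "Na": (0.93, 186, 1), "K": (0.82, 227, 1), "Cs": (0.79, 265, 1),
        "Mg": (1.31, 160, 2), "Cl": (3.16, 100, 7), "Br": (2.96, 120, 7), "O": (3.44, 66, 6),
    }

    def featurize(a, b):
        ea, eb = elements[a], elements[b]
        # Pairwise descriptors: absolute differences and means of element properties.
        return [abs(ea[i] - eb[i]) for i in range(3)] + [(ea[i] + eb[i]) / 2 for i in range(3)]

    X = [featurize("Na", "Cl"), featurize("Mg", "O"), featurize("Cs", "Cl")]
    y = ["rock_salt", "rock_salt", "CsCl_type"]   # placeholder structure-type labels

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(clf.predict([featurize("K", "Br")]))    # likely "rock_salt" with these toy features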

  6. Major accident prevention through applying safety knowledge management approach.

    Science.gov (United States)

    Kalatpour, Omid

    2016-01-01

    Many scattered resources of knowledge are available for use in chemical accident prevention. The common approach to process safety management, which includes using databases and referring to the available knowledge, has some drawbacks. The main goal of this article was to devise a new emerged knowledge base (KB) for the chemical accident prevention domain. The scattered sources of safety knowledge were identified and scanned. Then, the collected knowledge was formalized through a computerized program. The Protégé software was used to formalize and represent the stored safety knowledge. Domain knowledge was retrieved, as well as data and information. This optimized approach improved the safety and health knowledge management (KM) process and resolved some typical problems in the KM process. Upgrading traditional safety databases into KBs can improve the interaction between users and the knowledge repository.

  7. Constructivist Practices Through Guided Discovery Approach: The Effect on Students' Cognitive Achievements in Nigerian Senior Secondary School Physics

    Directory of Open Access Journals (Sweden)

    A.O. Akinbobola

    2009-12-01

    The study investigated constructivist practices through the guided discovery approach and their effect on students' cognitive achievement in Nigerian senior secondary school physics. The study adopted a pretest-posttest control group design. A criterion sampling technique was used to select six schools out of the nine schools that met the criteria. A total of 278 students took part in the study, made up of 141 male students and 137 female students in their respective intact classes. The Physics Achievement Test (PAT), with an internal consistency of 0.77 using the Kuder-Richardson formula 21 (KR-21), was the instrument used in collecting data. The data were analyzed using Analysis of Covariance (ANCOVA) and t-tests. The results showed that the guided discovery approach was the most effective in facilitating students' achievement in physics after being taught using a pictorial organizer. This was followed by demonstration, while expository teaching was found to be the least effective. Also, there was no significant difference in the achievement of male and female physics students taught with the guided discovery, demonstration and expository teaching approaches and corresponding exposure to a pictorial organizer. It is recommended that physics teachers endeavor to use constructivist practices through the guided discovery approach in order to engage students in problem-solving activities, independent learning, critical thinking and understanding, and creative learning, rather than in rote learning and memorization.

  8. Integrating traditional knowledge when it appears to conflict with conservation: lessons from the discovery and protection of sitatunga in Ghana

    Directory of Open Access Journals (Sweden)

    Jana M. McPherson

    2016-03-01

    Cultural traditions can conflict with modern conservation goals when they promote damage to fragile environments or the harvest of imperiled species. We explore whether and how traditional, culturally motivated species exploitation can nonetheless aid conservation by examining the recent "discovery" in Avu Lagoon, Ghana, of sitatunga (Tragelaphus spekii gratus), a species familiar to locals, but not previously scientifically recorded in Ghana and regionally assumed extinct. Specifically, we investigate what role traditional beliefs, allied hunting practices, and the associated traditional ecological knowledge have played in the species' discovery and subsequent community-based conservation; how they might influence future conservation outcomes; and how they may themselves be shaped by conservation efforts. Our study serves to exemplify the complexities, risks, and benefits associated with building conservation efforts around traditional ecological knowledge and beliefs. Complexities arise from localized variation in beliefs (with the cultural significance of sitatunga much stronger in one village than others), progressive dilution of traditional worldviews by mainstream religions, and the context dependence, both culturally and geographically, of the reliability of traditional ecological knowledge. Among the benefits, we highlight (1) information on the distribution and habitat needs of species that can help to discover, rediscover, or manage imperiled taxa if appropriately paired with scientific data collection; and (2) enhanced sustainability of conservation efforts given the cultivation of mutual trust, respect, and understanding between researchers and local communities. In turn, conservation attention to traditional ecological knowledge and traditionally important species can help reinvigorate cultural diversity by promoting the persistence of traditional belief and knowledge systems alongside mainstream worldviews and religions.

  9. G-protein-coupled receptors: new approaches to maximise the impact of GPCRs in drug discovery.

    Science.gov (United States)

    Davey, John

    2004-04-01

    IBC's Drug Discovery Technology Series is a group of conferences highlighting technological advances and applications in niche areas of the drug discovery pipeline. This 2-day meeting focused on G-protein-coupled receptors (GPCRs), probably the most important and certainly the most valuable class of targets for drug discovery. The meeting was chaired by J Beesley (Vice President, European Business Development for LifeSpan Biosciences, Seattle, USA) and included 17 presentations on various aspects of GPCR activity, drug screens and therapeutic analyses. Keynote Addresses covered two of the emerging areas in GPCR regulation; receptor dimerisation (G Milligan, Professor of Molecular Pharmacology and Biochemistry, University of Glasgow, UK) and proteins that interact with GPCRs (J Bockaert, Laboratory of Functional Genomics, CNRS Montpellier, France). A third Keynote Address from W Thomsen (Director of GPCR Drug Screening, Arena Pharmaceuticals, USA) discussed Arena's general approach to drug discovery and illustrated this with reference to the development of an agonist with potential efficacy in Type II diabetes.

  10. Predicting future discoveries from current scientific literature.

    Science.gov (United States)

    Petrič, Ingrid; Cestnik, Bojan

    2014-01-01

    Knowledge discovery in biomedicine is a time-consuming process, starting from basic research, through preclinical testing, towards possible clinical applications. Crossing conceptual boundaries is often needed for groundbreaking biomedical research that generates highly inventive discoveries. We demonstrate the ability of a creative literature mining method to advance valuable new discoveries based on rare ideas from existing literature. When emerging ideas from the scientific literature are put together as fragments of knowledge in a systematic way, they may lead to original, sometimes surprising, research findings. If enough scientific evidence has already been published for the association of such findings, they can be considered as scientific hypotheses. In this chapter, we describe a method for the computer-aided generation of such hypotheses based on the existing scientific literature. Our literature-based discovery of NF-kappaB and its possible connections to autism was recently confirmed by the scientific community, which demonstrates the ability of our literature mining methodology to accelerate future discoveries based on rare ideas from existing literature.

  11. From Medicinal Chemistry to Human Health: Current Approaches to Drug Discovery for Cancer and Neglected Tropical Diseases

    Directory of Open Access Journals (Sweden)

    LEONARDO G. FERREIRA

    2018-02-01

    Scientific and technological breakthroughs have compelled the current players in drug discovery to increasingly incorporate knowledge-based approaches. This evolving paradigm, which has its roots in recent advances in medicinal chemistry and molecular and structural biology, has demanded as never before the development of up-to-date computational approaches, such as bio- and chemo-informatics. These tools have been pivotal to catalyzing the ever-increasing amount of data generated by the molecular sciences, and to converting the data into insightful guidelines for use in the research pipeline. As a result, ligand- and structure-based drug design have emerged as key pathways to address the pharmaceutical industry's striking demands for innovation. These approaches depend on a keen integration of experimental and molecular modeling methods to surmount the main challenges faced by drug candidates - in vivo efficacy, pharmacodynamics, metabolism, pharmacokinetics and safety. To that end, the Laboratório de Química Medicinal e Computacional (LQMC) of the Universidade de São Paulo has developed forefront research on highly prevalent and life-threatening neglected tropical diseases and cancer. By taking part in global initiatives for pharmaceutical innovation, the laboratory has contributed to the advance of these critical therapeutic areas through the use of cutting-edge strategies in medicinal chemistry.

  12. Socratic Questioning-Guided Discovery

    Directory of Open Access Journals (Sweden)

    M. Hakan Türkçapar

    2012-04-01

    The "Socratic method" is a way of teaching philosophical thinking and knowledge by asking questions, used by the ancient Greek philosopher Socrates. Socrates taught his followers by asking questions, and the conversations between them were named "Socratic dialogues". In this sense, no novel knowledge is taught to the individual; rather, what is already known is recalled and rediscovered. The form of Socratic questioning used during cognitive behavioral therapy is known as guided discovery. In this method, the aim is to make the client notice, through a series of questions, a piece of knowledge that he could notice but is not yet aware of. The Socratic method, or guided discovery, consists of several steps: identifying the problem by listening to the client and making reflections, finding alternatives by examining and evaluating, re-identification by using the newly found information, and questioning the old distorted belief, reaching a conclusion and applying it. The question types used during these procedures are questions for gaining information, questions revealing meanings, questions revealing beliefs, questions about behaviours during similar past experiences, analysis questions and analytic synthesis questions. In order to make the patient feel understood, it is important to be empathetic and to summarise the problem during the interview. In this text, the steps of Socratic questioning (guided discovery) are reviewed with sample dialogues after each step. [JCBPR 2012; 1(1): 15-20]

  13. Recent advances in inkjet dispensing technologies: applications in drug discovery.

    Science.gov (United States)

    Zhu, Xiangcheng; Zheng, Qiang; Yang, Hu; Cai, Jin; Huang, Lei; Duan, Yanwen; Xu, Zhinan; Cen, Peilin

    2012-09-01

    Inkjet dispensing technology is a promising fabrication methodology widely applied in drug discovery. Its automated, programmable characteristics and high-throughput efficiency make this approach potentially very useful for miniaturizing the design patterns of assays and drug screening. Various custom-made inkjet dispensing systems as well as specialized bio-inks and substrates have been developed and applied to fulfill the increasing demands of basic drug discovery studies. The incorporation of other modern technologies has further exploited the potential of inkjet dispensing technology in drug discovery and development. This paper reviews and discusses recent developments and practical applications of inkjet dispensing technology in several areas of drug discovery and development, including fundamental assays of cells and proteins, microarrays, biosensors, tissue engineering, and basic biological and pharmaceutical studies. Progress in a number of areas of research, including biomaterials, inkjet mechanical systems and modern analytical techniques, as well as the exploration and accumulation of profound biological knowledge, has enabled different inkjet dispensing technologies to be developed and adapted for high-throughput pattern fabrication and miniaturization. This in turn presents a great opportunity to propel inkjet dispensing technology into drug discovery.

  14. A systematic approach to novel virus discovery in emerging infectious disease outbreaks.

    Science.gov (United States)

    Sridhar, Siddharth; To, Kelvin K W; Chan, Jasper F W; Lau, Susanna K P; Woo, Patrick C Y; Yuen, Kwok-Yung

    2015-05-01

    The discovery of novel viruses is of great importance to human health-both in the setting of emerging infectious disease outbreaks and in disease syndromes of unknown etiology. Despite the recent proliferation of many efficient virus discovery methods, careful selection of a combination of methods is important to demonstrate a novel virus, its clinical associations, and its relevance in a timely manner. The identification of a patient or an outbreak with distinctive clinical features and negative routine microbiological workup is often the starting point for virus hunting. This review appraises the roles of culture, electron microscopy, and nucleic acid detection-based methods in optimizing virus discovery. Cell culture is generally slow but may yield viable virus. Although the choice of cell line often involves trial and error, it may be guided by the clinical syndrome. Electron microscopy is insensitive but fast, and may provide morphological clues to choice of cell line or consensus primers for nucleic acid detection. Consensus primer PCR can be used to detect viruses that are closely related to known virus families. Random primer amplification and high-throughput sequencing can catch any virus genome but cannot yield an infectious virion for testing Koch postulates. A systematic approach that incorporates carefully chosen combinations of virus detection techniques is required for successful virus discovery. Copyright © 2015 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  15. Strengthening the Characters of Curiosity and Social Care through Discovery Learning

    Directory of Open Access Journals (Sweden)

    Achmad Fauzi

    2018-01-01

    Efforts to strengthen character have become the basis for the implementation of the 2013 curriculum. The application of the 2013 curriculum brings a paradigm shift in which, as the end result of learning, students not only master knowledge but also master attitudes and skills. Two of the characters developed are curiosity and social care. To form these characters, educational instruments are needed, such as competent teachers, adequate learning resources and, most importantly, learning actions in the form of an appropriate approach, model, method, or learning strategy. Hence the application of the discovery learning model with a scientific approach, a model that is effective and efficient in bringing out the characters of curiosity and social care.   Keywords: Curiosity, Social Care, Discovery Learning   http://dx.doi.org/10.17977/um022v2i22017p079

  16. Data Linkage Graph: computation, querying and knowledge discovery of life science database networks

    Directory of Open Access Journals (Sweden)

    Lange Matthias

    2007-12-01

    To support the interpretation of measured molecular facts, such as gene expression experiments or EST sequencing, the functional or systems-biological context has to be considered. In doing so, the relationship to existing biological knowledge has to be discovered. In general, biological knowledge is represented worldwide in a network of databases. In this paper we present a method for knowledge extraction from life science databases which spares scientists from screen-scraping and web-clicking approaches.

  17. Integrating GIS components with knowledge discovery technology for environmental health decision support.

    Science.gov (United States)

    Bédard, Yvan; Gosselin, Pierre; Rivest, Sonia; Proulx, Marie-Josée; Nadeau, Martin; Lebel, Germain; Gagnon, Marie-France

    2003-04-01

    This paper presents a new category of decision-support tools that builds on today's Geographic Information Systems (GIS) and On-Line Analytical Processing (OLAP) technologies to facilitate Geographic Knowledge Discovery (GKD). This new category, named Spatial OLAP (SOLAP), has been an R&D topic for about 5 years in a few university labs and is now being implemented by early adopters in different fields, including public health where it provides numerous advantages. In this paper, we present an example of a SOLAP application in the field of environmental health: the ICEM-SE project. After having presented this example, we describe the design of this system and explain how it provides fast and easy access to the detailed and aggregated data that are needed for GKD and decision-making in public health. The SOLAP concept is also described and a comparison is made with traditional GIS applications.

  18. Approaching Knowledge Management through the Lens of the Knowledge Life Cycle: A Case Study Investigation

    Science.gov (United States)

    Fowlin, Julaine M.; Cennamo, Katherine S.

    2017-01-01

    More organizational leaders are recognizing that their greatest competitive advantage is the knowledge base of their employees and for organizations to thrive knowledge management (KM) systems need to be in place that encourage the natural interplay and flow of tacit and explicit knowledge. Approaching KM through the lens of the knowledge life…

  19. Reuse-oriented common structure discovery in assembly models

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Pan; Zhang Jie; Li, Yuan; Yu, Jian Feng [The Ministry of Education Key Lab of Contemporary Design and Integrated Manufacturing Technology, Northwestern Polytechnical University, Xian (China)

    2017-01-15

    Discovering the common structures in assembly models provides designers with the commonalities that carry significant design knowledge across multiple products, which helps to improve design efficiency and accelerate the design process. In this paper, a discovery method has been developed to obtain the common structure in assembly models. First, this work proposes a graph descriptor that captures both the geometrical and topological information of the assembly model, in which shape vectors and link vectors quantitatively describe the part models and mating relationships, respectively. Then, a clustering step is introduced into the discovery, which clusters the similar parts by comparing the similarities between them. In addition, some rules are also provided to filter the frequent subgraphs in order to obtain the expected results. Compared with the existing method, the proposed approach could overcome the disadvantages by providing an independent description of the part model and taking into consideration the similar parts in assemblies, which leads to a more reasonable result. Finally, some experiments have been carried out and the experimental results demonstrate the effectiveness of the proposed approach.

  1. Object-graphs for context-aware visual category discovery.

    Science.gov (United States)

    Lee, Yong Jae; Grauman, Kristen

    2012-02-01

    How can knowing about some categories help us to discover new ones in unlabeled images? Unsupervised visual category discovery is useful to mine for recurring objects without human supervision, but existing methods assume no prior information and thus tend to perform poorly for cluttered scenes with multiple objects. We propose to leverage knowledge about previously learned categories to enable more accurate discovery, and address challenges in estimating their familiarity in unsegmented, unlabeled images. We introduce two variants of a novel object-graph descriptor to encode the 2D and 3D spatial layout of object-level co-occurrence patterns relative to an unfamiliar region and show that by using them to model the interaction between an image’s known and unknown objects, we can better detect new visual categories. Rather than mine for all categories from scratch, our method identifies new objects while drawing on useful cues from familiar ones. We evaluate our approach on several benchmark data sets and demonstrate clear improvements in discovery over conventional purely appearance-based baselines.

  2. Targeting cysteine proteases in trypanosomatid disease drug discovery.

    Science.gov (United States)

    Ferreira, Leonardo G; Andricopulo, Adriano D

    2017-12-01

    Chagas disease and human African trypanosomiasis are endemic conditions in Latin America and Africa, respectively, for which no effective and safe therapy is available. Efforts in drug discovery have focused on several enzymes from these protozoans, among which cysteine proteases have been validated as molecular targets for pharmacological intervention. These enzymes are expressed during the entire life cycle of trypanosomatid parasites and are essential to many biological processes, including infectivity to the human host. As a result of advances in the knowledge of the structural aspects of cysteine proteases and their role in disease physiopathology, inhibition of these enzymes by small molecules has been demonstrated to be a worthwhile approach to trypanosomatid drug research. This review provides an update on drug discovery strategies targeting the cysteine peptidases cruzain from Trypanosoma cruzi and rhodesain and cathepsin B from Trypanosoma brucei. Given that current chemotherapy for Chagas disease and human African trypanosomiasis has several drawbacks, cysteine proteases will continue to be actively pursued as valuable molecular targets in trypanosomatid disease drug discovery efforts. Copyright © 2017. Published by Elsevier Inc.

  3. PKDE4J: Entity and relation extraction for public knowledge discovery.

    Science.gov (United States)

    Song, Min; Kim, Won Chul; Lee, Dahee; Heo, Go Eun; Kang, Keun Young

    2015-10-01

    Due to an enormous number of scientific publications that cannot be handled manually, there is a rising interest in text-mining techniques for automated information extraction, especially in the biomedical field. Such techniques provide effective means of information search, knowledge discovery, and hypothesis generation. Most previous studies have primarily focused on the design and performance improvement of either named entity recognition or relation extraction. In this paper, we present PKDE4J, a comprehensive text-mining system that integrates dictionary-based entity extraction and rule-based relation extraction in a highly flexible and extensible framework. Starting with the Stanford CoreNLP, we developed the system to cope with multiple types of entities and relations. The system also has fairly good performance in terms of accuracy as well as the ability to configure text-processing components. We demonstrate its competitive performance by evaluating it on many corpora and found that it surpasses existing systems with average F-measures of 85% for entity extraction and 81% for relation extraction. Copyright © 2015 Elsevier Inc. All rights reserved.
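
    The dictionary-based entity extraction that PKDE4J combines with rule-based relation extraction can be sketched in a few lines; the dictionary, entity types, and sentence below are illustrative and do not reflect the tool's actual resources or API.

    import re

    dictionary = {
        "TP53": "Gene",
        "breast cancer": "Disease",
        "tamoxifen": "Drug",
    }

    def extract_entities(text):
        # Scan the text for every dictionary term and record its type and offsets.
        found = []
        for term, etype in dictionary.items():
            for m in re.finditer(re.escape(term), text, flags=re.IGNORECASE):
                found.append((m.group(0), etype, m.start(), m.end()))
        return sorted(found, key=lambda e: e[2])

    sentence = "Mutations in TP53 are frequent in breast cancer patients treated with tamoxifen."
    for surface, etype, start, end in extract_entities(sentence):
        print(f"{surface} [{etype}] {start}-{end}")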

  4. Structure and organization of drug-target networks: insights from genomic approaches for drug discovery.

    Science.gov (United States)

    Janga, Sarath Chandra; Tzakos, Andreas

    2009-12-01

    Recent years have seen an explosion in the amount of "omics" data and the integration of several disciplines, which has influenced all areas of the life sciences, including drug discovery. Several lines of evidence now suggest that the traditional notion of "one drug, one protein" for one disease no longer holds and that treatment for most complex diseases can best be attempted using polypharmacological approaches. In this review, we formalize the definition of a drug-target network by decomposing it into drug, target and disease spaces and provide an overview of our understanding in recent years of its structure and organizational principles. We discuss advances made in developing promiscuous drugs following the paradigm of polypharmacology and reveal their advantages over traditional drugs for targeting diseases such as cancer. We suggest that drug-target networks can be decomposed and studied at a variety of levels and argue that such network-based approaches have important implications for understanding disease phenotypes and accelerating drug discovery. We also discuss the potential and scope that network pharmacology promises in harnessing the vast amount of data from high-throughput approaches for therapeutic advantage.

  5. Discovery of Boolean metabolic networks: integer linear programming based approach.

    Science.gov (United States)

    Qiu, Yushan; Jiang, Hao; Ching, Wai-Ki; Cheng, Xiaoqing

    2018-04-11

    Traditional drug discovery methods have focused on the efficacy of drugs rather than their toxicity. However, toxicity and/or lack of efficacy arise when unintended targets are affected in metabolic networks. Thus, the identification of biological targets which can be manipulated to produce the desired effect with minimal side-effects has become an important and challenging topic. Efficient computational methods are required to identify drug targets while incurring minimal side-effects. In this paper, we propose a graph-based computational damage model that summarizes the impact of enzymes on compounds in metabolic networks. An efficient method based on an Integer Linear Programming formalism is then developed to identify the optimal enzyme combination so as to minimize the side-effects. The identified target enzymes for known successful drugs are then verified by comparing the results with those in the existing literature. Side-effect reduction plays a crucial role in drug development. A graph-based computational damage model is proposed, and the theoretical analysis shows that the captured problem is NP-complete. The proposed approaches can therefore contribute to the discovery of drug targets. Our developed software is available at " http://hkumath.hku.hk/~wkc/APBC2018-metabolic-network.zip ".
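
    The flavour of such an ILP formulation can be sketched with a toy damage model: choose enzymes to disable so that a target compound is eliminated while the number of other compounds damaged is minimised. The network, the variables, and the use of PuLP are assumptions made for illustration, not the paper's actual model.

    import pulp

    # Hypothetical damage model: compounds affected if a given enzyme is disabled.
    damage = {
        "E1": {"target", "C1"},
        "E2": {"target", "C2", "C3"},
        "E3": {"C1", "C2"},
    }
    side_effects = {c for cs in damage.values() for c in cs} - {"target"}

    prob = pulp.LpProblem("enzyme_selection", pulp.LpMinimize)
    x = {e: pulp.LpVariable(f"knock_{e}", cat="Binary") for e in damage}       # disable enzyme e?
    d = {c: pulp.LpVariable(f"dmg_{c}", cat="Binary") for c in side_effects}   # compound c damaged?

    prob += pulp.lpSum(d.values())   # objective: minimise side-effect damage

    # A side-effect compound counts as damaged if any disabled enzyme damages it.
    for e, cs in damage.items():
        for c in cs & side_effects:
            prob += d[c] >= x[e]

    # The target compound must be hit by at least one disabled enzyme.
    prob += pulp.lpSum(x[e] for e, cs in damage.items() if "target" in cs) >= 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([e for e in damage if x[e].value() == 1])   # expected: ['E1']

    On this toy network, disabling E1 eliminates the target at the cost of one side-effect compound, whereas E2 would cost two, so the solver picks E1.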

  6. Approaches to Maintaining and Building Organisational Knowledge

    International Nuclear Information System (INIS)

    Juurmaa, Tellervo

    2014-01-01

    Conclusions: • Involvement of people is one of the most important enablers of successful KM; • KM focuses on organisational knowledge that is needed for achieving business goals; • Working culture and KM activities embedded in the ways of working are essential for management of organisational knowledge; • Formal KM approach is needed as well, and one of its objectives is to support informal KM activities; • For a successful management of organisational knowledge, KM related functions need to be identified and understood as one entity

  7. Knowledge discovery: Extracting usable information from large amounts of data

    International Nuclear Information System (INIS)

    Whiteson, R.

    1998-01-01

    The threat of nuclear weapons proliferation is a problem of worldwide concern. Safeguards are the key to nuclear nonproliferation, and data is the key to safeguards. The safeguards community has access to a huge and steadily growing volume of data. The advantages of this data-rich environment are obvious: there is a great deal of information which can be utilized. The challenge is to effectively apply proven and developing technologies to find and extract usable information from that data. That information must then be assessed and evaluated to produce the knowledge needed for crucial decision making. Efficient and effective analysis of safeguards data will depend on utilizing technologies to interpret the large, heterogeneous data sets that are available from diverse sources. With an order-of-magnitude increase in the amount of data from a wide variety of technical, textual, and historical sources, there is a vital need to apply advanced computer technologies to support all-source analysis. There are techniques of data warehousing, data mining, and data analysis that can provide analysts with tools that will expedite the extraction of usable information from the huge amounts of data to which they have access. Computerized tools can aid analysts by integrating heterogeneous data, evaluating diverse data streams, automating retrieval of database information, prioritizing inputs, reconciling conflicting data, doing preliminary interpretations, discovering patterns or trends in data, and automating some of the simpler prescreening tasks that are time consuming and tedious. Thus knowledge discovery technologies can provide a foundation of support for the analyst. Rather than spending time sifting through often irrelevant information, analysts could use their specialized skills in a focused, productive fashion. This would allow them to make their analytical judgments with more confidence and spend more of their time doing what they do best.

  8. Improving the performance of DomainDiscovery of protein domain boundary assignment using inter-domain linker index

    Directory of Open Access Journals (Sweden)

    Zomaya Albert Y

    2006-12-01

    Background: Knowledge of protein domain boundaries is critical for the characterisation and understanding of protein function. The ability to identify domains without knowledge of the structure – by using sequence information only – is an essential step in many types of protein analyses. In this study, we demonstrate that the performance of DomainDiscovery is improved significantly by including the inter-domain linker index value for domain identification from sequence-based information. Improved DomainDiscovery uses a Support Vector Machine (SVM) approach and a unique training dataset built on the principle of consensus among experts in defining domains in protein structure. The SVM was trained using a PSSM (Position Specific Scoring Matrix), secondary structure, solvent accessibility information and the inter-domain linker index to detect possible domain boundaries for a target sequence. Results: Improved DomainDiscovery is compared with other methods by benchmarking against a structurally non-redundant dataset and also CASP5 targets. Improved DomainDiscovery achieves 70% accuracy for domain boundary identification in multi-domain proteins. Conclusion: Improved DomainDiscovery compares favourably to the performance of other methods and excels in the identification of domain boundaries for multi-domain proteins as a result of introducing a support vector machine with the benchmark_2 dataset.
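
    The core classification step can be sketched as an SVM labelling each residue position as boundary or non-boundary from windowed per-residue features; the random features, window size, and labels below are placeholders standing in for real PSSM, secondary structure, solvent accessibility and linker-index profiles.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    seq_len, n_feats, window = 300, 23, 7       # e.g. 20 PSSM columns + 3 extra features per residue
    per_residue = rng.normal(size=(seq_len, n_feats))
    labels = np.zeros(seq_len, dtype=int)
    labels[148:153] = 1                         # placeholder boundary region

    def window_features(i):
        # Concatenate the features of a window centred on residue i (clipped at the ends).
        idx = np.clip(np.arange(i - window // 2, i + window // 2 + 1), 0, seq_len - 1)
        return per_residue[idx].ravel()

    X = np.array([window_features(i) for i in range(seq_len)])
    clf = SVC(kernel="rbf", class_weight="balanced").fit(X, labels)
    print(clf.predict(window_features(150).reshape(1, -1)))   # prediction at a candidate boundary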

  9. Knowledge brokering for healthy aging: a scoping review of potential approaches.

    Science.gov (United States)

    Van Eerd, Dwayne; Newman, Kristine; DeForge, Ryan; Urquhart, Robin; Cornelissen, Evelyn; Dainty, Katie N

    2016-10-19

    Developing a healthcare delivery system that is more responsive to the future challenges of an aging population is a priority in Canada. The World Health Organization acknowledges the need for knowledge translation frameworks in aging and health. Knowledge brokering (KB) is a specific knowledge translation approach that includes making connections between people to facilitate the use of evidence. Knowledge gaps exist about KB roles, approaches, and guiding frameworks. The objective of the scoping review is to identify and describe KB approaches and the underlying conceptual frameworks (models, theories) used to guide the approaches that could support healthy aging. Literature searches were done in PubMed, EMBASE, PsycINFO, EBM reviews (Cochrane Database of systematic reviews), CINAHL, and SCOPUS, as well as Google and Google Scholar using terms related to knowledge brokering. Titles, abstracts, and full reports were reviewed independently by two reviewers who came to consensus on all screening criteria. Documents were included if they described a KB approach and details about the underlying conceptual basis. Data about KB approach, target stakeholders, KB outcomes, and context were extracted independently by two reviewers. Searches identified 248 unique references. Screening for inclusion revealed 19 documents that described 15 accounts of knowledge brokering and details about conceptual guidance and could be applied in healthy aging contexts. Eight KB elements were detected in the approaches, though not all approaches incorporated all elements. The underlying conceptual guidance for KB approaches varied. Specific KB frameworks were referenced or developed for nine KB approaches, while the remaining six cited more general KT frameworks (or multiple frameworks) as guidance. The KB approaches that we found varied greatly depending on the context and stakeholders involved. Three of the approaches were explicitly employed in the context of healthy aging. Common elements

  10. A projection and density estimation method for knowledge discovery.

    Directory of Open Access Journals (Sweden)

    Adam Stanski

    Full Text Available A key ingredient of modern data analysis is probability density estimation. However, it is well known that the curse of dimensionality prevents a proper estimation of densities in high dimensions. The problem is typically circumvented by using a fixed set of assumptions about the data, e.g., by assuming partial independence of features, data on a manifold or a customized kernel. These fixed assumptions limit the applicability of a method. In this paper we propose a framework that uses a flexible set of assumptions instead. It allows a model to be tailored to various problems by means of 1d-decompositions. The approach achieves a fast runtime and is not limited by the curse of dimensionality as all estimations are performed in 1d-space. The wide range of applications is demonstrated on two very different real-world examples. The first is data mining software that allows the fully automatic discovery of patterns. The software is publicly available for evaluation. As a second example, an image segmentation method is realized. It achieves state-of-the-art performance on a benchmark dataset, although it uses only a fraction of the training data and very simple features.
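As a concrete (and deliberately simple) illustration of performing all estimations in 1d-space, the sketch below models a joint density as a product of univariate kernel density estimates along the coordinate axes. This is only one member of the family of 1d-decompositions the abstract refers to, not the authors' framework.

```python
# Toy instance of density estimation via 1-D decompositions: approximate the
# joint density as a product of univariate KDEs along fixed directions (here
# the coordinate axes), so every estimate lives in 1-D space.
import numpy as np
from scipy.stats import gaussian_kde

class Product1DKDE:
    def __init__(self, X):
        # one univariate KDE per dimension
        self.kdes = [gaussian_kde(X[:, d]) for d in range(X.shape[1])]

    def pdf(self, x):
        """Joint density at point x as the product of 1-D marginals, i.e. an
        independence assumption along the chosen directions."""
        return float(np.prod([kde(np.atleast_1d(xi))[0]
                              for kde, xi in zip(self.kdes, x)]))

# usage sketch on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
model = Product1DKDE(X)
print(model.pdf(np.zeros(5)))
```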

  11. Synthetic biology approaches in drug discovery and pharmaceutical biotechnology.

    Science.gov (United States)

    Neumann, Heinz; Neumann-Staubitz, Petra

    2010-06-01

    Synthetic biology is the attempt to apply the concepts of engineering to biological systems with the aim to create organisms with new emergent properties. These organisms might have desirable novel biosynthetic capabilities, act as biosensors or help us to understand the intricacies of living systems. This approach has the potential to assist the discovery and production of pharmaceutical compounds at various stages. New sources of bioactive compounds can be created in the form of genetically encoded small molecule libraries. The recombination of individual parts has been employed to design proteins that act as biosensors, which could be used to identify and quantify molecules of interest. New biosynthetic pathways may be designed by stitching together enzymes with desired activities, and genetic code expansion can be used to introduce new functionalities into peptides and proteins to increase their chemical scope and biological stability. This review aims to give an insight into recently developed individual components and modules that might serve as parts in a synthetic biology approach to pharmaceutical biotechnology.

  12. A KNOWLEDGE DISCOVERY STRATEGY FOR RELATING SEA SURFACE TEMPERATURES TO FREQUENCIES OF TROPICAL STORMS AND GENERATING PREDICTIONS OF HURRICANES UNDER 21ST-CENTURY GLOBAL WARMING SCENARIOS

    Data.gov (United States)

    National Aeronautics and Space Administration — A KNOWLEDGE DISCOVERY STRATEGY FOR RELATING SEA SURFACE TEMPERATURES TO FREQUENCIES OF TROPICAL STORMS AND GENERATING PREDICTIONS OF HURRICANES UNDER 21ST-CENTURY...

  13. Nuclear Knowledge Management: the IAEA Approach

    International Nuclear Information System (INIS)

    Sbaffoni, M.; De Grosbois, J.

    2015-01-01

    Knowledge in an organization resides in people, processes and technology. Adequate awareness of an organization's knowledge assets and of the risk of losing them is vital for the safe and secure operation of nuclear installations. Senior managers understand this important linkage, and in recent years there has been an increasing tendency in nuclear organizations to implement knowledge management strategies to ensure that adequate and necessary knowledge is available at the right time, in the right place. Specific and advanced levels of knowledge are clearly required to achieve and maintain technical expertise, and experience must be developed and be available throughout the nuclear technology lifecycle. If a nuclear organization does not possess or have access to the required technical knowledge, a full understanding of the potential consequences of decisions and actions may not be possible, and safety, security and safeguards might be compromised. Effective decision making during design, licensing, procurement, construction, commissioning, operation, maintenance, refurbishment, and decommissioning of nuclear facilities needs to be risk-informed and knowledge-driven. Nuclear technology is complex and brings with it inherent and unique risks that must be managed to acceptably low levels. Nuclear managers have a responsibility not only to establish adequate technical knowledge and experience in their nuclear organizations but also to maintain it. Failing to manage the organization's key knowledge assets can result in serious degradations or accidents. The IAEA Nuclear Knowledge Management (NKM) sub-programme was established more than 10 years ago to support nuclear organizations, at Member States' request, in the implementation and dissemination of the NKM methodology, through the development of guidance and tools, and by providing knowledge management services and assistance. The paper will briefly present the IAEA understanding of and approach to knowledge

  14. Fragment-based approaches to anti-HIV drug discovery: state of the art and future opportunities.

    Science.gov (United States)

    Huang, Boshi; Kang, Dongwei; Zhan, Peng; Liu, Xinyong

    2015-12-01

    The search for additional drugs to treat HIV infection is a continuing effort due to the emergence and spread of HIV strains resistant to nearly all current drugs. The recent literature reveals that fragment-based drug design/discovery (FBDD) has become an effective alternative to conventional high-throughput screening strategies for drug discovery. In this critical review, the authors describe the state of the art in FBDD strategies for the discovery of anti-HIV drug-like compounds. The article focuses on fragment screening techniques, direct fragment-based design and early hit-to-lead progress. Rapid progress in biophysical detection and in silico techniques has greatly aided the application of FBDD to discover candidate agents directed at a variety of anti-HIV targets. Growing evidence suggests that structural insights on key proteins in the HIV life cycle can be applied in the early phase of drug discovery campaigns, providing valuable information on the binding modes and efficiently prompting fragment hit-to-lead progression. The combination of structural insights with improved methodologies for FBDD, including the privileged fragment-based reconstruction approach, fragment hybridization based on crystallographic overlays, fragment growth exploiting dynamic combinatorial chemistry, and high-speed fragment assembly via diversity-oriented synthesis followed by in situ screening, offers the possibility of more efficient and rapid discovery of novel drugs for HIV-1 prevention or treatment. Though the use of FBDD in anti-HIV drug discovery is still in its infancy, it is anticipated that anti-HIV agents developed via fragment-based strategies will be introduced into the clinic in the future.

  15. A Ligand-observed Mass Spectrometry Approach Integrated into the Fragment Based Lead Discovery Pipeline

    Science.gov (United States)

    Chen, Xin; Qin, Shanshan; Chen, Shuai; Li, Jinlong; Li, Lixin; Wang, Zhongling; Wang, Quan; Lin, Jianping; Yang, Cheng; Shui, Wenqing

    2015-01-01

    In fragment-based lead discovery (FBLD), a cascade combining multiple orthogonal technologies is required for reliable detection and characterization of fragment binding to the target. Given the limitations of the mainstream screening techniques, we presented a ligand-observed mass spectrometry approach to expand the toolkits and increase the flexibility of building a FBLD pipeline especially for tough targets. In this study, this approach was integrated into a FBLD program targeting the HCV RNA polymerase NS5B. Our ligand-observed mass spectrometry analysis resulted in the discovery of 10 hits from a 384-member fragment library through two independent screens of complex cocktails and a follow-up validation assay. Moreover, this MS-based approach enabled quantitative measurement of weak binding affinities of fragments which was in general consistent with SPR analysis. Five out of the ten hits were then successfully translated to X-ray structures of fragment-bound complexes to lay a foundation for structure-based inhibitor design. With distinctive strengths in terms of high capacity and speed, minimal method development, easy sample preparation, low material consumption and quantitative capability, this MS-based assay is anticipated to be a valuable addition to the repertoire of current fragment screening techniques. PMID:25666181

  16. Facilitating Students' Interaction with Real Gas Properties Using a Discovery-Based Approach and Molecular Dynamics Simulations

    Science.gov (United States)

    Sweet, Chelsea; Akinfenwa, Oyewumi; Foley, Jonathan J., IV

    2018-01-01

    We present an interactive discovery-based approach to studying the properties of real gases using simple, yet realistic, molecular dynamics software. Use of this approach opens up a variety of opportunities for students to interact with the behaviors and underlying theories of real gases. Students can visualize gas behavior under a variety of…

  17. A role for physicians in ethnopharmacology and drug discovery.

    Science.gov (United States)

    Raza, Mohsin

    2006-04-06

    Ethnopharmacology investigations have classically involved traditional healers, botanists, anthropologists, chemists and pharmacologists. The role of some groups of researchers, but not of physicians, has been highlighted and well defined in ethnopharmacological investigations. Historical data show that the discovery of several important modern drugs of herbal origin owes much to the medical knowledge and clinical expertise of physicians. Current trends indicate a negligible role of physicians in ethnopharmacological studies, and the rising cost of modern drug development is attributed to the lack of a classical ethnopharmacological approach. Physicians can play multiple roles in ethnopharmacological studies to facilitate drug discovery as well as to rescue authentic traditional knowledge of the use of medicinal plants. These include: (1) ethnopharmacological field work, which involves interviewing healers, interpreting traditional terminologies into their modern counterparts, examining patients consuming herbal remedies and identifying the disease for which an herbal remedy is used; (2) interpretation of signs and symptoms mentioned in ancient texts and suggesting proper use of old traditional remedies in the light of modern medicine; (3) clinical studies on herbs and their interaction with modern medicines; (4) advising pharmacologists to carry out laboratory studies on herbs observed during field studies; and (5) working in collaboration with local healers to strengthen the traditional system of medicine in a community. In conclusion, physicians' involvement in ethnopharmacological studies will lead to more reliable information on the traditional use of medicinal plants, both from the field and from ancient texts, more focused and cheaper natural product-based drug discovery, and a bridging of the gap between traditional and modern medicine.

  18. Analysing Discourse. An Approach From the Sociology of Knowledge

    Directory of Open Access Journals (Sweden)

    Reiner Keller

    2005-09-01

    Full Text Available The contribution outlines a research programme which I have coined the "sociology of knowledge approach to discourse" (Wissenssoziologische Diskursanalyse). This approach to discourse integrates important insights of FOUCAULT's theory of discourse into the interpretative paradigm in the social sciences, especially the "German" approach of hermeneutic sociology of knowledge (Hermeneutische Wissenssoziologie). Accordingly, in this approach discourses are considered as "structured and structuring structures" which shape social practices of enunciation. Unlike some Foucauldian approaches, this form of discourse analysis recognises the importance of socially constituted actors in the social production and circulation of knowledge. Furthermore, it combines research questions related to the concept of "discourse" with the methodical toolbox of qualitative social research. Going beyond questions of language in use, "the sociology of knowledge approach to discourse" (Wissenssoziologische Diskursanalyse) addresses sociological interests, the analyses of social relations and politics of knowledge as well as the discursive construction of reality as an empirical ("material") process. For empirical research on discourse the approach proposes the use of analytical concepts from the sociology of knowledge tradition, such as interpretative schemes or frames (Deutungsmuster), "classifications", "phenomenal structure" (Phänomenstruktur), "narrative structure", "dispositif" etc., and the use of the methodological strategies of "grounded theory". URN: urn:nbn:de:0114-fqs0503327

  19. User-based and Cognitive Approaches to Knowledge Organization

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2013-01-01

    In the 1970s and 1980s, forms of user-based and cognitive approaches to knowledge organization came to the forefront as part of the overall development in library and information science and in the broader society. The specific nature of user-based approaches is their basis in the empirical studies of users. ... Google's PageRank is not based on the empirical studies of users. In knowledge organization, the Book House System is one example of a system based on user studies. In cognitive science the important WordNet database is claimed to be based on psychological research. This article considers such examples ...

  20. Computational methods in drug discovery

    Directory of Open Access Journals (Sweden)

    Sumudu P. Leelananda

    2016-12-01

    Full Text Available The process of drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, helping to expedite this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The Human Genome Project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power, have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed.

  1. Learning spaces as representational scaffolds for learning conceptual knowledge of system behaviour

    NARCIS (Netherlands)

    Bredeweg, B.; Liem, J.; Beek, W.; Salles, P.; Linnebank, F.; Wolpers, M.; Kirschner, P.A.; Scheffel, M.; Lindstaedt, S.; Dimitrova, V.

    2010-01-01

    Scaffolding is a well-known approach to bridge the gap between novice and expert capabilities in a discovery-oriented learning environment. This paper discusses a set of knowledge representations referred to as Learning Spaces (LSs) that can be used to support learners in acquiring conceptual knowledge of system behaviour.

  2. Discovery of Bovine Digital Dermatitis-Associated Treponema spp. in the Dairy Herd Environment by a Targeted Deep-Sequencing Approach

    DEFF Research Database (Denmark)

    Schou, Kirstine Klitgaard; Weiss Nielsen, Martin; Ingerslev, Hans-Christian

    2014-01-01

    The bacteria associated with the infectious claw disease bovine digital dermatitis (DD) are spirochetes of the genus Treponema; however, their environmental reservoir remains unknown. To our knowledge, the current study is the first report of the discovery and phylogenetic characterization of r...... of this disease among cows within a herd as well as between herds. To address the issue of DD infection reservoirs, we searched for evidence of DD-associated treponemes in fresh feces, in slurry, and in hoof lesions by deep sequencing of the V3 and V4 hypervariable regions of the 16S rRNA gene coupled...... with identification at the operational-taxonomic-unit level. Using treponeme-specific primers in this high-throughput approach, we identified small amounts of DNA (on average 0.6% of the total amount of sequence reads) from DD-associated treponemes in 43 of 64 samples from slurry and cow feces collected from six...

  3. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches, either experimental, computational or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its roots in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
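For readers unfamiliar with the principle itself, the toy sketch below picks the maximum-entropy distribution over a few discrete outcomes subject to a mean constraint. The outcome values and constraint are invented and unrelated to any specific drug-discovery application in the review.

```python
# Toy illustration of the maximum-entropy principle: among all distributions
# on a small finite outcome set with a fixed expected value, choose the one
# with the largest Shannon entropy (least bias beyond the constraint).
import numpy as np
from scipy.optimize import minimize

values = np.array([0.0, 1.0, 2.0, 3.0])   # assumed discrete outcomes
target_mean = 1.2                          # assumed moment constraint

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))           # minimizing this maximizes entropy

constraints = [
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},          # normalization
    {"type": "eq", "fun": lambda p: p @ values - target_mean},  # mean constraint
]
res = minimize(neg_entropy, x0=np.full(4, 0.25),
               bounds=[(0, 1)] * 4, constraints=constraints)
print(res.x)   # numerically approaches the discrete exponential (Gibbs) form
```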

  4. The Universe Discovery Guides: A Collaborative Approach to Educating with NASA Science

    Science.gov (United States)

    Manning, James G.; Lawton, Brandon L.; Gurton, Suzanne; Smith, Denise Anne; Schultz, Gregory; Astrophysics Community, NASA

    2015-08-01

    For the 2009 International Year of Astronomy, the then-existing NASA Origins Forum collaborated with the Astronomical Society of the Pacific (ASP) to create a series of monthly “Discovery Guides” for informal educator and amateur astronomer use in educating the public about featured sky objects and associated NASA science themes. Today’s NASA Astrophysics Science Education and Public Outreach Forum (SEPOF), one of the current generation of forums coordinating the work of NASA Science Mission Directorate (SMD) EPO efforts, in collaboration with the ASP and NASA SMD missions and programs, has adapted the Discovery Guides into “evergreen” educational resources suitable for a variety of audiences. The Guides focus on “deep sky” objects and astrophysics themes (stars and stellar evolution, galaxies and the universe, and exoplanets), showcasing EPO resources from more than 30 NASA astrophysics missions and programs in a coordinated and cohesive “big picture” approach across the electromagnetic spectrum, grounded in best practices to best serve the needs of the target audiences. Each monthly guide features a theme and a representative object well-placed for viewing, with an accompanying interpretive story, finding charts, strategies for conveying the topics, and complementary supporting NASA-approved education activities and background information from a spectrum of NASA missions and programs. The Universe Discovery Guides are downloadable from the NASA Night Sky Network web site at nightsky.jpl.nasa.gov and specifically from http://nightsky.jpl.nasa.gov/news-display.cfm?News_ID=611. The presentation will describe the collaborative’s experience in developing the guides, how they place individual science discoveries and learning resources into context for audiences, and how the Guides can be readily used in scientist public outreach efforts, in college and university introductory astronomy classes, and in other engagements between scientists, instructors

  5. EVALUATING HUMAN CAPITAL IN A KNOWLEDGE – BASED APPROACH

    Directory of Open Access Journals (Sweden)

    Emanoil MUSCALU

    2014-04-01

    Full Text Available The widespread enthusiasm for a knowledge-based approach to understanding the nature of a business and the possible basis for sustained competitive advantage has renewed interest in human capital evaluation or measurement. While many attempts have been made to develop methods for measuring intellectual capital, none have been widely adopted in the business world. In knowledge-based organizations, and generally in the information society, human capital is recognized as the fundamental factor of overall progress, and experts agree that long-term investment in human capital has strong drive-propagation effects at the individual, organizational, national and global level. In this paper, we consider that a knowledge-based approach can offer new possibilities and answers to illustrate the importance of evaluating human capital and knowledge assets, which consistently generate added value in the business world.

  6. Radioactivity. Centenary of radioactivity discovery

    International Nuclear Information System (INIS)

    Charpak, G.; Tubiana, M.; Bimbot, R.

    1997-01-01

    This small booklet was produced for the exhibitions celebrating the centenary of the discovery of radioactivity, which took place in various locations in France from 1996 to 1998. It recalls some basic knowledge concerning radioactivity and its applications: history of the discovery, atoms and isotopes, radiations, measurement of ionizing radiations, natural and artificial radioactivity, isotope dating and labelling, radiotherapy, nuclear power and reactors, fission and fusion, nuclear wastes, dosimetry, effects and radioprotection. (J.S.)

  7. Discovery radiomics via evolutionary deep radiomic sequencer discovery for pathologically proven lung cancer detection.

    Science.gov (United States)

    Shafiee, Mohammad Javad; Chung, Audrey G; Khalvati, Farzad; Haider, Masoom A; Wong, Alexander

    2017-10-01

    While lung cancer is the second most diagnosed form of cancer in men and women, a sufficiently early diagnosis can be pivotal in patient survival rates. Imaging-based, or radiomics-driven, detection methods have been developed to aid diagnosticians, but largely rely on hand-crafted features that may not fully encapsulate the differences between cancerous and healthy tissue. Recently, the concept of discovery radiomics was introduced, where custom abstract features are discovered from readily available imaging data. We propose an evolutionary deep radiomic sequencer discovery approach based on evolutionary deep intelligence. Motivated by patient privacy concerns and the idea of operational artificial intelligence, the evolutionary deep radiomic sequencer discovery approach organically evolves increasingly more efficient deep radiomic sequencers that produce significantly more compact yet similarly descriptive radiomic sequences over multiple generations. As a result, this framework improves operational efficiency and enables diagnosis to be run locally at the radiologist's computer while maintaining detection accuracy. We evaluated the evolved deep radiomic sequencer (EDRS) discovered via the proposed evolutionary deep radiomic sequencer discovery framework against state-of-the-art radiomics-driven and discovery radiomics methods using clinical lung CT data with pathologically proven diagnostic data from the LIDC-IDRI dataset. The EDRS shows improved sensitivity (93.42%), specificity (82.39%), and diagnostic accuracy (88.78%) relative to previous radiomics approaches.

  8. DISCOVERY LEARNING APPROACH IN IMPROVING ARABIC ABILITY OF PRE-SERVICE TEACHERS IN RELIGIOUS TRAINING CENTRE OF MAKASSAR

    Directory of Open Access Journals (Sweden)

    Masrariah Amin

    2017-04-01

    Full Text Available Discovery Learning can be defined as the learning that takes place when the student is not presented with subject matter in its final form; rather, he or she is required to find out the concepts by himself or herself. This research aims to describe and analyze the discovery learning method as a strategy to improve the comprehension and reasoning ability of Arabic pre-service teachers, which can motivate them and enhance their creativity in order to enrich their insight into Arabic teaching, especially for those in the training centre. This research was undertaken in two classes of the Makassar Religious Training Centre during June-August 2016. The design of this research is an experiment with a discovery learning approach using a randomized pretest-posttest control group design; participants were assigned randomly to the experiment and control groups. Based on hypothesis testing, discovery learning has positive effects on the pre-service teachers’ ability to understand and analyze Arabic in the training centre. Therefore, based on the analysis of variance of the control and experiment groups, there is a difference in teachers’ comprehension and reasoning ability in learning Arabic between the experiment group, which used discovery learning, and the control group, which used a conventional method.

  9. Equation Discovery for Model Identification in Respiratory Mechanics of the Mechanically Ventilated Human Lung

    Science.gov (United States)

    Ganzert, Steven; Guttmann, Josef; Steinmann, Daniel; Kramer, Stefan

    Lung-protective ventilation strategies reduce the risk of ventilator-associated lung injury. To develop such strategies, knowledge about the mechanical properties of the mechanically ventilated human lung is essential. This study was designed to develop an equation discovery system to identify mathematical models of the respiratory system in time-series data obtained from mechanically ventilated patients. Two techniques were combined: (i) the use of declarative bias to reduce search-space complexity and inherently provide for the processing of background knowledge, and (ii) a newly developed heuristic for traversing the hypothesis space with a greedy, randomized strategy analogous to the GSAT algorithm. In 96.8% of all runs the equation discovery system was capable of detecting the well-established equation of motion model of the respiratory system in the provided data. We see the potential of this semi-automatic approach to detect more complex mathematical descriptions of the respiratory system from respiratory data.
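The "equation of motion" the system recovers is the standard single-compartment model Paw(t) = R·flow(t) + E·volume(t) + P0. The sketch below shows a plain least-squares fit of that target model to synthetic ventilator data; it illustrates the model being discovered, not the declarative-bias/GSAT-style discovery algorithm itself.

```python
# Hedged sketch: fit Paw(t) = R * flow(t) + E * volume(t) + P0 to ventilator
# time series by ordinary least squares (E = 1/compliance). Synthetic data
# and parameter values below are invented for illustration.
import numpy as np

def fit_equation_of_motion(paw, flow, volume):
    """paw [cmH2O], flow [L/s], volume [L]: equally sampled 1-D arrays.
    Returns resistance R, elastance E and offset pressure P0."""
    A = np.column_stack([flow, volume, np.ones_like(flow)])
    (R, E, P0), *_ = np.linalg.lstsq(A, paw, rcond=None)
    return R, E, P0

# synthetic usage example with assumed parameter values R=10, E=25, P0=5
t = np.linspace(0, 3, 300)
flow = 0.5 * np.sin(2 * np.pi * t / 3)
volume = np.cumsum(flow) * (t[1] - t[0])
paw = 10 * flow + 25 * volume + 5
print(fit_equation_of_motion(paw, flow, volume))
```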

  10. Transferring Codified Knowledge: Socio-Technical versus Top-Down Approaches

    Science.gov (United States)

    Guzman, Gustavo; Trivelato, Luiz F.

    2008-01-01

    Purpose: This paper aims to analyse and evaluate the transfer process of codified knowledge (CK) performed under two different approaches: the "socio-technical" and the "top-down". It is argued that the socio-technical approach supports the transfer of CK better than the top-down approach. Design/methodology/approach: Case study methodology was…

  11. A Knowledge-driven Approach to Composite Activity Recognition in Smart Environments

    OpenAIRE

    Chen, Liming; Wang, H.; Sterritt, Roy; Okeyo, George

    2012-01-01

    Knowledge-driven activity recognition has recently attracted increasing attention but mainly focused on simple activities. This paper extends previous work to introduce a knowledge-driven approach to recognition of composite activities such as interleaved and concurrent activities. The approach combines ontological and temporal knowledge modelling formalisms for composite activity modelling. It exploits ontological reasoning for simple activity recognition and rule-based temporal inference to...

  12. Knowledge-based approach to video content classification

    Science.gov (United States)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is proposed herein. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.
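The sketch below mimics the flavour of such a rule base in Python: a few hand-written rules over extracted features plus MYCIN-style certainty-factor combination. The rules, features and thresholds are invented placeholders, not the CLIPS rule set described in the record.

```python
# Illustrative knowledge-based classification with MYCIN-style certainty
# factors (CF). For two supporting pieces of positive evidence,
# CF_combined = CF1 + CF2 * (1 - CF1).
def combine_cf(cf1, cf2):
    """MYCIN combination for two positive certainty factors."""
    return cf1 + cf2 * (1.0 - cf1)

# each rule: (predicate over extracted features, target class, certainty factor)
RULES = [
    (lambda f: f["text_density"] > 0.3 and f["motion"] < 0.2, "news", 0.7),
    (lambda f: f["dominant_color"] == "green" and f["motion"] > 0.6, "football", 0.8),
    (lambda f: f["motion"] > 0.7 and f["scene_cuts"] > 0.5, "commercial", 0.6),
]

def classify(features):
    belief = {}
    for predicate, label, cf in RULES:
        if predicate(features):
            belief[label] = combine_cf(belief.get(label, 0.0), cf)
    return max(belief, key=belief.get) if belief else "unknown"

print(classify({"text_density": 0.4, "motion": 0.1,
                "dominant_color": "grey", "scene_cuts": 0.2}))   # -> "news"
```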

  13. Combinatorial pattern discovery approach for the folding trajectory analysis of a beta-hairpin.

    Directory of Open Access Journals (Sweden)

    Laxmi Parida

    2005-06-01

    Full Text Available The study of protein folding mechanisms continues to be one of the most challenging problems in computational biology. Currently, the protein folding mechanism is often characterized by calculating the free energy landscape versus various reaction coordinates, such as the fraction of native contacts, the radius of gyration, RMSD from the native structure, and so on. In this paper, we present a combinatorial pattern discovery approach toward understanding the global state changes during the folding process. This is a first step toward an unsupervised (and perhaps eventually automated) approach toward identification of global states. The approach is based on computing biclusters (or patterned clusters); each cluster is a combination of various reaction coordinates, and its signature pattern facilitates the computation of the Z-score for the cluster. For this discovery process, we present an algorithm of time complexity O((N + nm) log n), where N is the size of the output patterns and (n x m) is the size of the input, with n time frames and m reaction coordinates. To date, this is the best time complexity for this problem. We next apply this to a beta-hairpin folding trajectory and demonstrate that this approach extracts crucial information about protein folding intermediate states and mechanism. We make three observations about the approach: (1) The method recovers states previously obtained by visually analyzing free energy surfaces. (2) It also succeeds in extracting meaningful patterns and structures that had been overlooked in previous works, which provides a better understanding of the folding mechanism of the beta-hairpin. These new patterns also interconnect various states in existing free energy surfaces versus different reaction coordinates. (3) The approach does not require calculating the free energy values, yet it offers an analysis comparable to, and sometimes better than, the methods that use free energy landscapes, thus validating the

  15. (CBTP) on knowledge, problem-solving and learning approach

    African Journals Online (AJOL)

    In the first instance attention is paid to the effect of a computer-based teaching programme (CBTP) on the knowledge, problem-solving skills and learning approach of student ... In the practice group (oncology wards) no statistically significant change in the learning approach of respondents was found after using the CBTP.

  16. Personal discovery in diabetes self-management: Discovering cause and effect using self-monitoring data.

    Science.gov (United States)

    Mamykina, Lena; Heitkemper, Elizabeth M; Smaldone, Arlene M; Kukafka, Rita; Cole-Lewis, Heather J; Davidson, Patricia G; Mynatt, Elizabeth D; Cassells, Andrea; Tobin, Jonathan N; Hripcsak, George

    2017-12-01

    The objective is to outline new design directions for informatics solutions that facilitate personal discovery with self-monitoring data. We investigate this question in the context of chronic disease self-management with a focus on type 2 diabetes. We conducted an observational qualitative study of discovery with personal data among adults attending a diabetes self-management education (DSME) program that utilized a discovery-based curriculum. The study included observations of class sessions, and interviews and focus groups with the educator and attendees of the program (n = 14). The main discovery in diabetes self-management revolved around discovering patterns of association between characteristics of individuals' activities and changes in their blood glucose levels that the participants referred to as "cause and effect". This discovery empowered individuals to actively engage in self-management and provided a desired flexibility in selection of personalized self-management strategies. We show that discovery of cause and effect involves four essential phases: (1) feature selection, (2) hypothesis generation, (3) feature evaluation, and (4) goal specification. Further, we identify opportunities to support discovery at each stage with informatics and data visualization solutions by providing assistance with: (1) active manipulation of collected data (e.g., grouping, filtering and side-by-side inspection), (2) hypotheses formulation (e.g., using natural language statements or constructing visual queries), (3) inference evaluation (e.g., through aggregation and visual comparison, and statistical analysis of associations), and (4) translation of discoveries into actionable goals (e.g., tailored selection from computable knowledge sources of effective diabetes self-management behaviors). The study suggests that discovery of cause and effect in diabetes can be a powerful approach to helping individuals to improve their self-management strategies, and that self-monitoring data can

  17. National Heart, Lung, and Blood Institute and the translation of cardiovascular discoveries into therapeutic approaches.

    Science.gov (United States)

    Galis, Zorina S; Black, Jodi B; Skarlatos, Sonia I

    2013-04-26

    The molecular causes of ≈4000 medical conditions have been described, yet only 5% have associated therapies. For decades, the average time for drug development through approval has taken 10 to 20 years. In recent years, the serious challenges that confront the private sector have made it difficult to capitalize on new opportunities presented by advances in genomics and cellular therapies. Current trends are disturbing. Pharmaceutical companies are reducing their investments in research, and biotechnology companies are struggling to obtain venture funds. To support early-stage translation of the discoveries in basic science, the National Institutes of Health and the National Heart, Lung, and Blood Institute have developed new approaches to facilitating the translation of basic discoveries into clinical applications and will continue to develop a variety of programs that create teams of academic investigators and industry partners. The goal of these programs is to maximize the public benefit of investment of taxpayer dollars in biomedical research and to lessen the risk required for industry partners to make substantial investments. This article highlights several examples of National Heart, Lung, and Blood Institute-initiated translational programs and National Institutes of Health translational resources designed to catalyze and enable the earliest stages of the biomedical product development process. The translation of latest discoveries into therapeutic approaches depends on continued federal funding to enhance the early stages of the product development process and to stimulate and catalyze partnerships between academia, industry, and other sources of capital.

  18. A Technique Socratic Questioning-Guided Discovery

    Directory of Open Access Journals (Sweden)

    M. Hakan Türkçapar

    2012-03-01

    Full Text Available The “Socratic Method” is a way of teaching philosophical thinking and knowledge by asking questions, used by the ancient Greek philosopher Socrates. Socrates taught his followers by asking questions, and the conversations between them were named “Socratic Dialogues”. In this sense, no new knowledge is taught to the individual; rather, what is already known is recalled and rediscovered. The form of Socratic questioning used during the process of cognitive behavioral therapy is known as Guided Discovery. In this method, the aim is to make the client notice, through a series of questions, a piece of knowledge that he or she could notice but is not yet aware of. The Socratic method, or guided discovery, consists of several steps: identifying the problem by listening to the client and making reflections, finding alternatives by examining and evaluating, re-identification by using the newly found information, and questioning the old distorted belief, reaching a conclusion and applying it. Question types used during these procedures are questions for gaining information, questions revealing meanings, questions revealing beliefs, questions about behaviours during similar past experiences, analysis questions and analytic synthesis questions. In order to make the patient feel understood, it is important to be empathetic and to summarise the problem during the interview. In this text, the steps of Socratic Questioning-Guided Discovery are reviewed with sample dialogues after each step.

  19. A Knowledge Engineering Approach to Developing Educational Computer Games for Improving Students' Differentiating Knowledge

    Science.gov (United States)

    Hwang, Gwo-Jen; Sung, Han-Yu; Hung, Chun-Ming; Yang, Li-Hsueh; Huang, Iwen

    2013-01-01

    Educational computer games have been recognized as being a promising approach for motivating students to learn. Nevertheless, previous studies have shown that without proper learning strategies or supportive models, the learning achievement of students might not be as good as expected. In this study, a knowledge engineering approach is proposed…

  20. Discovery of inhibitors of bacterial histidine kinases

    NARCIS (Netherlands)

    Velikova, N.R.

    2014-01-01

    Discovery of Inhibitors of Bacterial Histidine Kinases Summary

    The thesis is on novel antibacterial drug discovery (http://youtu.be/NRMWOGgeysM). Using structure-based and fragment-based drug discovery approaches, we have identified small-molecule histidine-kinase

  1. Knowledge Representation in Patient Safety Reporting: An Ontological Approach

    OpenAIRE

    Liang Chen; Yang Gong

    2016-01-01

    Purpose: The current development of patient safety reporting systems is criticized for loss of information and low data quality due to the lack of a unified domain knowledge base and text processing functionality. To improve patient safety reporting, the present paper suggests an ontological representation of patient safety knowledge. Design/methodology/approach: We propose a framework for constructing an ontological knowledge base of patient safety. The present paper describes our design...

  2. APPROACH ON INTELLIGENT OPTIMIZATION DESIGN BASED ON COMPOUND KNOWLEDGE

    Institute of Scientific and Technical Information of China (English)

    Yao Jianchu; Zhou Ji; Yu Jun

    2003-01-01

    A concept of an intelligent optimal design approach is proposed, which is organized by a kind of compound knowledge model. The compound knowledge consists of modularized quantitative knowledge, inclusive experience knowledge and case-based sample knowledge. By using this compound knowledge model, the abundant quantitative information of mathematical programming and the symbolic knowledge of artificial intelligence can be united in a single model. The intelligent optimal design model based on such compound knowledge and the automatically generated decomposition principles based on it are also presented. In practice, the approach has been applied to the production planning, process scheduling and optimization of the production process of a refining & chemical works, and a great profit has been achieved. In particular, the methods and principles are adaptable not only to the continuous process industry, but also to discrete manufacturing.

  3. Resource Discovery in Activity-Based Sensor Networks

    DEFF Research Database (Denmark)

    Bucur, Doina; Bardram, Jakob

    This paper proposes a service discovery protocol for sensor networks that is specifically tailored for use in humancentered pervasive environments. It uses the high-level concept of computational activities (as logical bundles of data and resources) to give sensors in Activity-Based Sensor Networks...... (ABSNs) knowledge about their usage even at the network layer. ABSN redesigns classical network-level service discovery protocols to include and use this logical structuring of the network for a more practically applicable service discovery scheme. Noting that in practical settings activity-based sensor...

  4. Improved accuracy of supervised CRM discovery with interpolated Markov models and cross-species comparison.

    Science.gov (United States)

    Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S; Sinha, Saurabh

    2011-12-01

    Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, 'enhancers'), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for 'motif-blind' CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to 'supervise' the search. We propose a new statistical method, based on 'Interpolated Markov Models', for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers. © The Author(s) 2011. Published by Oxford University Press.
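A minimal sketch of the interpolated-Markov-model idea at the heart of the method is given below: k-mer counts from known CRMs are blended across orders 0..K to score candidate windows. The order, interpolation weights and smoothing are assumptions for illustration; the published method additionally exploits orthologous sequence and an enrichment test.

```python
# Sketch of interpolated-Markov-model (IMM) scoring of candidate windows,
# trained on a handful of toy "known CRM" sequences.
import math
from collections import defaultdict

K = 5                                         # maximum Markov order (assumed)
LAMBDA = [0.1, 0.1, 0.15, 0.2, 0.2, 0.25]     # assumed weights, one per order (sum to 1)

def count_kmers(seqs, k):
    counts = defaultdict(int)
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    return counts

def train_imm(known_crms):
    # models[k] holds (k+1)-mer counts: a length-k context plus one base
    return [count_kmers(known_crms, k + 1) for k in range(K + 1)]

def imm_prob(models, context, base):
    """Interpolated P(base | context): a weighted mix of orders 0..K with
    add-one smoothing; unseen or short contexts fall back towards uniform."""
    p = 0.0
    for k in range(K + 1):
        ctx = context[-k:] if k else ""
        num = models[k].get(ctx + base, 0) + 1
        den = sum(models[k].get(ctx + b, 0) + 1 for b in "ACGT")
        p += LAMBDA[k] * num / den
    return p

def score_window(models, window):
    """Log-likelihood of a candidate window under the CRM-trained IMM."""
    return sum(math.log(imm_prob(models, window[max(0, i - K):i], window[i]))
               for i in range(len(window)))

crms = ["ACGTGACGTTTACG", "TTACGGACGTGACG"]   # toy stand-ins for known CRMs
models = train_imm(crms)
print(score_window(models, "ACGTGACGTT"))
```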

  5. A knowledge-based approach for recognition of handwritten Pitman ...

    Indian Academy of Sciences (India)

    The paper describes a knowledge-based approach for the recognition of PSL strokes. Information about the location and direction of the starting and final points of strokes is considered the knowledge base for the recognition of strokes. The work comprises preprocessing, determination of starting and final points, ...

  6. A diagnostic expert system for NPP based on hybrid knowledge approach

    International Nuclear Information System (INIS)

    Yang, Joon On; Chang, Soon Heung

    1989-01-01

    This paper describes a diagnostic expert system, HYPOSS (Hybrid Knowledge Based Plant Operation Supporting System), which has been developed to support operators' decision making during transients of a nuclear power plant. HYPOSS adopts the hybrid knowledge approach, which combines shallow and deep knowledge to couple the merits of both approaches. In HYPOSS, four types of knowledge are used according to the steps of the diagnosis procedure: structural, functional, behavioral and heuristic knowledge. The structural and functional knowledge is represented by three fundamental primitives and five types of functions, respectively. The behavioral knowledge is represented using constraints. The inference procedure is based on the human problem solving behavior modeled in HYPOSS. For the validation of HYPOSS, several tests have been performed based on data produced by a plant simulator. The results of the validation studies showed a good applicability of HYPOSS to the anomaly diagnosis of nuclear power plants.

  7. Resource Discovery within the Networked "Hybrid" Library.

    Science.gov (United States)

    Leigh, Sally-Anne

    This paper focuses on the development, adoption, and integration of resource discovery, knowledge management, and/or knowledge sharing interfaces such as interactive portals, and the use of the library's World Wide Web presence to increase the availability and usability of information services. The introduction addresses changes in library…

  8. The development of a classification schema for arts-based approaches to knowledge translation.

    Science.gov (United States)

    Archibald, Mandy M; Caine, Vera; Scott, Shannon D

    2014-10-01

    Arts-based approaches to knowledge translation are emerging as powerful interprofessional strategies with potential to facilitate evidence uptake, communication, knowledge, attitude, and behavior change across healthcare provider and consumer groups. These strategies are in the early stages of development. To date, no classification system for arts-based knowledge translation exists, which limits development and understandings of effectiveness in evidence syntheses. We developed a classification schema of arts-based knowledge translation strategies based on two mechanisms by which these approaches function: (a) the degree of precision in key message delivery, and (b) the degree of end-user participation. We demonstrate how this classification is necessary to explore how context, time, and location shape arts-based knowledge translation strategies. Classifying arts-based knowledge translation strategies according to their core attributes extends understandings of the appropriateness of these approaches for various healthcare settings and provider groups. The classification schema developed may enhance understanding of how, where, and for whom arts-based knowledge translation approaches are effective, and enable theorizing of essential knowledge translation constructs, such as the influence of context, time, and location on utilization strategies. The classification schema developed may encourage systematic inquiry into the effectiveness of these approaches in diverse interprofessional contexts. © 2014 Sigma Theta Tau International.

  9. Data mining and knowledge discovery technologies

    National Research Council Canada - National Science Library

    Taniar, David

    2008-01-01

    "This book presents researchers and practitioners in fields such as knowledge management, information science, Web engineering, and medical informatics, with comprehensive, innovative research on data...

  10. The Implementation of Discovery Learning Model with Scientific Learning Approach to Improve Students’ Critical Thinking in Learning History

    Directory of Open Access Journals (Sweden)

    Edi Nurcahyo

    2018-03-01

    Full Text Available Historical learning has not reached an optimal level in the learning process. This is because history teachers' learning models have not made use of innovative learning models. It is further reinforced by students' perception of the history subject: because it is not a final exam (UN) subject, it yields less improvement and builds less critical thinking in students' daily learning. This is compounded by a lack of awareness of historical events and by the still limited availability of history books for students and teachers in the library. Discovery learning with a scientific approach encourages students to solve problems actively and can improve students' critical thinking skills, so that students can build scientific thinking that includes observing, asking, reasoning, trying, and networking. Keywords: discovery learning, scientific, critical thinking

  11. An approach to knowledge dynamic maintenance for emotional agents

    OpenAIRE

    Fulladoza Dalibón, Santiago E.; Martínez, Diego C.; Simari, Guillermo Ricardo

    2014-01-01

    In this work we present an approach to emotional reasoning for believable agents, by introducing a mechanism to progressively build a map of knowledge for reasoning. We present the notion of inference graph for progressive reasoning in an emotional context. In this model, knowledge is partially highlighted and noticed by the agent.

  12. Publication, discovery and interoperability of Clinical Decision Support Systems: A Linked Data approach.

    Science.gov (United States)

    Marco-Ruiz, Luis; Pedrinaci, Carlos; Maldonado, J A; Panziera, Luca; Chen, Rong; Bellika, J Gustav

    2016-08-01

    The high costs involved in the development of Clinical Decision Support Systems (CDSS) make it necessary to share their functionality across different systems and organizations. Service Oriented Architectures (SOA) have been proposed to allow reusing CDSS by encapsulating them in a Web service. However, strong barriers in sharing CDS functionality are still present as a consequence of the lack of expressiveness of services' interfaces. Linked Services are the evolution of the Semantic Web Services paradigm to process Linked Data. They aim to provide semantic descriptions over SOA implementations to overcome the limitations derived from the syntactic nature of Web services technologies. Our aim is to facilitate the publication, discovery and interoperability of CDS services by evolving them into Linked Services that expose their interfaces as Linked Data. We developed methods and models to enhance CDS SOA as Linked Services that define a rich semantic layer based on machine interpretable ontologies that powers their interoperability and reuse. These ontologies provided unambiguous descriptions of CDS service properties to expose them to the Web of Data. We developed models compliant with Linked Data principles to create a semantic representation of the components that compose CDS services. To evaluate our approach we implemented a set of CDS Linked Services using a Web service definition ontology. The definitions of Web services were linked to the models developed in order to attach unambiguous semantics to the service components. All models were bound to SNOMED-CT and public ontologies (e.g. Dublin Core) in order to provide a lingua franca for exploring them. Discovery and analysis of CDS services based on machine interpretable models was performed by reasoning over the ontologies built. Linked Services can be used effectively to expose CDS services to the Web of Data by building on current CDS standards. This allows building shared Linked Knowledge Bases to provide machine
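The sketch below shows, with rdflib, the kind of Linked Data description such an approach produces for a CDS service, using Dublin Core terms for basic metadata. The namespace, service and concept URIs are placeholders, not the ontologies developed in the paper.

```python
# Toy Linked Data description of a CDS service. URIs are hypothetical; the
# paper additionally binds inputs/outputs to SNOMED CT, which is only hinted
# at here with a placeholder concept.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import DCTERMS

EX = Namespace("http://example.org/cds/")      # placeholder namespace
g = Graph()

service = EX["diabetes-risk-service"]          # hypothetical CDS service
g.add((service, RDF.type, EX.ClinicalDecisionSupportService))
g.add((service, DCTERMS.title, Literal("Type 2 diabetes risk assessment")))
g.add((service, DCTERMS.description,
       Literal("Returns a risk score computed from routine observations")))
g.add((service, EX.hasInputConcept, EX["blood-glucose-observation"]))

print(g.serialize(format="turtle"))
```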

  13. BayesMD: flexible biological modeling for motif discovery

    DEFF Research Database (Denmark)

    Tang, Man-Hung Eric; Krogh, Anders; Winther, Ole

    2008-01-01

    We present BayesMD, a Bayesian Motif Discovery model with several new features. Three different types of biological a priori knowledge are built into the framework in a modular fashion. A mixture of Dirichlets is used as prior over nucleotide probabilities in binding sites. It is trained on trans...

  14. Resource Discovery in Activity-Based Sensor Networks

    DEFF Research Database (Denmark)

    Bucur, Doina; Bardram, Jakob

    This paper proposes a service discovery protocol for sensor networks that is specifically tailored for use in humancentered pervasive environments. It uses the high-level concept of computational activities (as logical bundles of data and resources) to give sensors in Activity-Based Sensor Networks (ABSNs) knowledge about their usage even at the network layer. ABSN redesigns classical network-level service discovery protocols to include and use this logical structuring of the network for a more practically applicable service discovery scheme. ABSN enhances the generic Extended Zone Routing Protocol with logical sensor grouping and greatly lowers network overhead during the process of discovery, while keeping discovery latency close to optimal. ... Noting that in practical settings activity-based sensor

  15. Discovery of potent, reversible MetAP2 inhibitors via fragment based drug discovery and structure based drug design-Part 2.

    Science.gov (United States)

    McBride, Christopher; Cheruvallath, Zacharia; Komandla, Mallareddy; Tang, Mingnam; Farrell, Pamela; Lawson, J David; Vanderpool, Darin; Wu, Yiqin; Dougan, Douglas R; Plonowski, Artur; Holub, Corine; Larson, Chris

    2016-06-15

    Methionine aminopeptidase-2 (MetAP2) is an enzyme that cleaves an N-terminal methionine residue from a number of newly synthesized proteins. This step is required before they will fold or function correctly. Pre-clinical and clinical studies with a MetAP2 inhibitor suggest that such inhibitors could be used as a novel treatment for obesity. Herein we describe the discovery of a series of pyrazolo[4,3-b]indoles as reversible MetAP2 inhibitors. A fragment-based drug discovery (FBDD) approach was used, beginning with the screening of fragment libraries to generate hits with high ligand efficiency (LE). An indazole core was selected for further elaboration, guided by structural information. SAR from the indazole series led to the design of a pyrazolo[4,3-b]indole core, and accelerated knowledge-based fragment growth resulted in potent and efficient MetAP2 inhibitors, which have produced robust and sustained body weight loss in DIO mice when dosed orally. Copyright © 2016 Elsevier Ltd. All rights reserved.
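Ligand efficiency, the triage metric mentioned above, is commonly approximated as the binding free energy per heavy atom. A small helper under that standard approximation (treating IC50 as a surrogate for the dissociation constant at 298 K) is sketched below; the example numbers are invented, not data from the paper.

```python
# LE ≈ -RT ln(K) / N_heavy, in kcal/mol per heavy atom at ~298 K.
import math

RT_KCAL = 0.593   # R*T in kcal/mol at ~298 K

def ligand_efficiency(ic50_molar, heavy_atoms):
    """Approximate LE (kcal/mol per heavy atom) from an IC50 in mol/L."""
    binding_energy = -RT_KCAL * math.log(ic50_molar)   # positive for sub-molar IC50
    return binding_energy / heavy_atoms

# e.g. a hypothetical 200 µM fragment hit with 13 heavy atoms
print(round(ligand_efficiency(200e-6, 13), 2))   # ≈ 0.39
```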

  16. A knowledge-driven approach to cluster validity assessment.

    Science.gov (United States)

    Bolshakova, Nadia; Azuaje, Francisco; Cunningham, Pádraig

    2005-05-15

    This paper presents an approach to assessing cluster validity based on similarity knowledge extracted from the Gene Ontology. The program is freely available for non-profit use on request from the authors.
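
    A generic way to turn such ontology-derived similarities into a validity score (a schematic illustration only, not necessarily the authors' index) is to compare the average within-cluster and between-cluster semantic similarity, as in the sketch below; the similarity matrix and the clustering are randomly generated stand-ins for values that would come from the Gene Ontology.

```python
# Schematic cluster-validity check from a precomputed pairwise semantic-similarity
# matrix (e.g., derived from Gene Ontology annotations); illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n = 8
sim = rng.uniform(0.2, 1.0, size=(n, n))
sim = (sim + sim.T) / 2            # make the toy similarity matrix symmetric
np.fill_diagonal(sim, 1.0)

labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # a candidate clustering of 8 genes

within, between = [], []
for i in range(n):
    for j in range(i + 1, n):
        (within if labels[i] == labels[j] else between).append(sim[i, j])

validity = np.mean(within) - np.mean(between)  # higher = more ontologically coherent
print(f"within={np.mean(within):.3f} between={np.mean(between):.3f} validity={validity:.3f}")
```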

  17. Representation Discovery using Harmonic Analysis

    CERN Document Server

    Mahadevan, Sridhar

    2008-01-01

    Representations are at the heart of artificial intelligence (AI). This book is devoted to the problem of representation discovery: how can an intelligent system construct representations from its experience? Representation discovery re-parameterizes the state space - prior to the application of information retrieval, machine learning, or optimization techniques - facilitating later inference processes by constructing new task-specific bases adapted to the state space geometry. This book presents a general approach to representation discovery using the framework of harmonic analysis, in particular ...
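
    One concrete instance of representation discovery via harmonic analysis is to build basis functions from the low-order eigenvectors of a graph Laplacian over the state space. The sketch below is a generic illustration of that idea on a small grid-world graph, not code from the book.

```python
# Generic sketch: Laplacian eigenvector basis functions on a grid-world state graph.
# Illustrates the harmonic-analysis idea; not code from the book.
import numpy as np
import networkx as nx

G = nx.grid_2d_graph(5, 5)                      # 5x5 grid of states
L = nx.laplacian_matrix(G).toarray().astype(float)

eigvals, eigvecs = np.linalg.eigh(L)            # eigenvectors sorted by eigenvalue
k = 4
basis = eigvecs[:, :k]                          # k smoothest basis functions

# A value function on the states can now be approximated as a linear
# combination of these task-independent, geometry-adapted basis functions.
states = list(G.nodes())
print("second basis function on a few states:")
for s, v in list(zip(states, basis[:, 1]))[:5]:
    print(s, round(float(v), 3))
```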

  18. A knowledge integration approach to flood vulnerability

    Science.gov (United States)

    Mazzorana, Bruno; Fuchs, Sven

    2014-05-01

    Understanding, qualifying and quantifying vulnerability is an essential need for implementing effective and efficient flood risk mitigation strategies; in particular if possible synergies between different mitigation alternatives, such as active and passive measures, should be achieved. In order to combine different risk management options it is necessary to take an interdisciplinary approach to vulnerability reduction, and as a result the affected society may be willing to accept a certain degree of self-responsibility. However, due to differing mono-disciplinary approaches and regional foci undertaken until now, different aspects of vulnerability to natural hazards in general and to floods in particular remain uncovered and as a result the developed management options remain sub-optimal. Taking an even more fundamental viewpoint, the empirical vulnerability functions used in risk assessment specifically fail to capture physical principles of the damage-generating mechanisms to the built environment. The aim of this paper is to partially close this gap by discussing a balanced knowledge integration approach which can be used to resolve the multidisciplinary disorder in flood vulnerability research. Modelling techniques such as mathematical-physical modelling of the flood hazard impact to and response from the building envelope affected, and formative scenario analyses of possible consequences in terms of damage and loss are used in synergy to provide an enhanced understanding of vulnerability and to render the derived knowledge into interdisciplinary mitigation strategies. The outlined formal procedure allows for a convincing knowledge alignment of quantified, but partial, information about vulnerability as a result of the application of physical and engineering notions and valuable, but often underspecified, qualitative argumentation strings emerging from the adopted socio-economic viewpoint.

  19. Artificial intelligence and tutoring systems computational and cognitive approaches to the communication of knowledge

    CERN Document Server

    Wenger, Etienne

    2014-01-01

    Artificial Intelligence and Tutoring Systems: Computational and Cognitive Approaches to the Communication of Knowledge focuses on the cognitive approaches, methodologies, principles, and concepts involved in the communication of knowledge. The publication first elaborates on knowledge communication systems, basic issues, and tutorial dialogues. Concerns cover natural reasoning and tutorial dialogues, shift from local strategies to multiple mental models, domain knowledge, pedagogical knowledge, implicit versus explicit encoding of knowledge, knowledge communication, and practical and theoretic

  20. Decision-theoretic approaches to non-knowledge in economics

    OpenAIRE

    Svetlova, Ekaterina; van Elst, Henk

    2014-01-01

    We review two strands of conceptual approaches to the formal representation of a decision maker's non-knowledge at the initial stage of a static one-person, one-shot decision problem in economic theory. One focuses on representations of non-knowledge in terms of probability measures over sets of mutually exclusive and exhaustive consequence-relevant states of Nature, the other deals with unawareness of potentially important events by means of sets of states that are less complete than the ful...

  1. Automated cell type discovery and classification through knowledge transfer

    Science.gov (United States)

    Lee, Hao-Chih; Kosoy, Roman; Becker, Christine E.

    2017-01-01

    Motivation: Recent advances in mass cytometry allow simultaneous measurements of up to 50 markers at single-cell resolution. However, the high dimensionality of mass cytometry data introduces computational challenges for automated data analysis and hinders translation of new biological understanding into clinical applications. Previous studies have applied machine learning to facilitate processing of mass cytometry data. However, manual inspection is still inevitable and has become a barrier to reliable large-scale analysis. Results: We present a new algorithm called Automated Cell-type Discovery and Classification (ACDC) that fully automates the classification of canonical cell populations and highlights novel cell types in mass cytometry data. Evaluations on real-world data show ACDC provides accurate and reliable estimations compared to manual gating results. Additionally, ACDC automatically classifies previously ambiguous cell types to facilitate discovery. Our findings suggest that ACDC substantially improves both reliability and interpretability of results obtained from high-dimensional mass cytometry profiling data. Availability and Implementation: A Python package (Python 3) and analysis scripts for reproducing the results are available at https://bitbucket.org/dudleylab/acdc. Contact: brian.kidd@mssm.edu or joel.dudley@mssm.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28158442

  2. Characterization of GM events by insert knowledge adapted re-sequencing approaches.

    Science.gov (United States)

    Yang, Litao; Wang, Congmao; Holst-Jensen, Arne; Morisset, Dany; Lin, Yongjun; Zhang, Dabing

    2013-10-03

    Detection methods and data from molecular characterization of genetically modified (GM) events are needed by stakeholders such as public risk assessors and regulators. Generally, the molecular characteristics of GM events are not comprehensively revealed by current approaches, which are biased towards detecting transformation vector derived sequences. GM events are classified based on available knowledge of the sequences of vectors and inserts (insert knowledge). Herein we present three insert knowledge-adapted approaches for the characterization of GM events (TT51-1 and T1c-19 rice as examples) based on paired-end re-sequencing, with the advantages of comprehensiveness, accuracy, and automation. The comprehensive molecular characteristics of the two rice events were revealed, with additional unintended insertions, compared with the results from PCR and Southern blotting. Comprehensive transgene characterization of TT51-1 and T1c-19 is shown to be independent of a priori knowledge of the insert and vector sequences when employing the developed approaches. This provides an opportunity to identify and characterize also unknown GM events.

  3. The data bonanza improving knowledge discovery in science, engineering, and business

    CERN Document Server

    Atkinson, Malcolm; Brezany, Peter; Corcho, Oscar; Galea, Michelle; Parsons, Mark; Snelling, David; van Hemert, Jano

    2013-01-01

    Complete guidance for mastering the tools and techniques of the digital revolution With the digital revolution opening up tremendous opportunities in many fields, there is a growing need for skilled professionals who can develop data-intensive systems and extract information and knowledge from them. This book frames for the first time a new systematic approach for tackling the challenges of data-intensive computing, providing decision makers and technical experts alike with practical tools for dealing with our exploding data collections. Emphasizing data-intensive thinking an

  4. Toward a knowledge infrastructure for traits-based ecological risk assessment.

    Science.gov (United States)

    Baird, Donald J; Baker, Christopher J O; Brua, Robert B; Hajibabaei, Mehrdad; McNicol, Kearon; Pascoe, Timothy J; de Zwart, Dick

    2011-04-01

    The trait approach has already indicated significant potential as a tool in understanding natural variation among species in sensitivity to contaminants in the process of ecological risk assessment. However, to realize its full potential, a defined nomenclature for traits is urgently needed, and significant effort is required to populate databases of species-trait relationships. Recently, there have been significant advances in information management and discovery in the area of the semantic web. Combined with continuing progress in biological trait knowledge, these suggest that the time is right for a reevaluation of how trait information from divergent research traditions is collated and made available for end users in the field of environmental management. Although there has already been a great deal of work on traits, the information is scattered throughout databases, literature, and undiscovered sources. Further progress will require better leverage of this existing data and research to fill in the gaps. We review and discuss a number of technical and social challenges to bringing together existing information and moving toward a new, collaborative approach. Finally, we outline a path toward enhanced knowledge discovery within the traits domain space, showing that, by linking knowledge management infrastructure, semantic metadata (trait ontologies), and Web 2.0 and 3.0 technologies, we can begin to construct a dedicated platform for traits-based ecological risk assessment (TERA) science. Copyright © 2010 SETAC.

  5. Retrieval of criminal trajectories with an FCA-based approach

    NARCIS (Netherlands)

    Poelmans, J.; Elzinga, P.; Dedene, G.

    2013-01-01

    In this paper we briefly discuss the possibilities of Formal Concept Analysis for gaining insight in large amounts of unstructured police reports. We present a generic human centred knowledge discovery approach and showcase promising results obtained during empirical validation. The first case study

  6. Space nuclear reactor system diagnosis: Knowledge-based approach

    International Nuclear Information System (INIS)

    Ting, Y.T.D.

    1990-01-01

    SP-100 space nuclear reactor system development is a joint effort by the Department of Energy, the Department of Defense and the National Aeronautics and Space Administration. The system is designed to operate in isolation for many years, and is possibly subject to little or no remote maintenance. This dissertation proposes a knowledge-based diagnostic system which, in principle, can diagnose the faults which can either cause reactor shutdown or lead to another serious problem. This framework in general can be applied to the fully specified system if detailed design information becomes available. The set of faults considered herein is identified based on heuristic knowledge about the system operation. A suitable approach to diagnostic problem solving is proposed after investigating the most prevalent methodologies in Artificial Intelligence as well as the causal analysis of the system. Deep causal knowledge modeling based on digraph, fault-tree or logic flowgraph methodology would present a need for some knowledge representation to handle the time-dependent system behavior. A qualitative temporal knowledge modeling methodology, using rules with specified time delays among the process variables, has been proposed and is used to develop the diagnostic sufficient rule set. The rule set has been modified by using a time-zone approach to achieve a robust system design. The sufficient rule set is transformed into a sufficient and necessary one by searching the whole knowledge base. Qualitative data analysis is proposed for analyzing the measured data in a real-time situation. An expert system shell, Intelligence Compiler, is used to develop the prototype system. Frames are used for the process variables. Forward chaining rules are used in monitoring and backward chaining rules are used in diagnosis.

  7. A Comparative Assessment of Knowledge Management Leadership Approaches within the Department of Defense

    Science.gov (United States)

    2007-03-01

    A Comparative Assessment of Knowledge Management Leadership Approaches within the Department of Defense. Thesis presented to the Faculty, Department of Systems and Engineering ..., by Tommy V. S. Marshall II, BS, Captain, USAF.

  8. The Effect of Concept Mapping-Guided Discovery Integrated Teaching Approach on Chemistry Students' Achievement and Retention

    Science.gov (United States)

    Fatokun, K. V. F.; Eniayeju, P. A.

    2014-01-01

    This study investigates the effects of Concept Mapping-Guided Discovery Integrated Teaching Approach on the achievement and retention of chemistry students. The sample comprised 162 Senior Secondary two (SS 2) students drawn from two Science Schools in Nasarawa State, Central Nigeria with equivalent mean scores of 9.68 and 9.49 in their pre-test.…

  9. Synthetic biology of antimicrobial discovery.

    Science.gov (United States)

    Zakeri, Bijan; Lu, Timothy K

    2013-07-19

    Antibiotic discovery has a storied history. From the discovery of penicillin by Sir Alexander Fleming to the relentless quest for antibiotics by Selman Waksman, the stories have become like folklore used to inspire future generations of scientists. However, recent discovery pipelines have run dry at a time when multidrug-resistant pathogens are on the rise. Nature has proven to be a valuable reservoir of antimicrobial agents, which are primarily produced by modularized biochemical pathways. Such modularization is well suited to remodeling by an interdisciplinary approach that spans science and engineering. Herein, we discuss the biological engineering of small molecules, peptides, and non-traditional antimicrobials and provide an overview of the growing applicability of synthetic biology to antimicrobials discovery.

  10. Enhancing the Teaching-Learning Process: A Knowledge Management Approach

    Science.gov (United States)

    Bhusry, Mamta; Ranjan, Jayanthi

    2012-01-01

    Purpose: The purpose of this paper is to emphasize the need for knowledge management (KM) in the teaching-learning process in technical educational institutions (TEIs) in India, and to assert the impact of information technology (IT) based KM intervention in the teaching-learning process. Design/methodology/approach: The approach of the paper is…

  11. Combinatorial thin film materials science: From alloy discovery and optimization to alloy design

    Energy Technology Data Exchange (ETDEWEB)

    Gebhardt, Thomas, E-mail: gebhardt@mch.rwth-aachen.de; Music, Denis; Takahashi, Tetsuya; Schneider, Jochen M.

    2012-06-30

    This paper provides an overview of modern alloy development, from discovery and optimization towards alloy design, based on combinatorial thin film materials science. The combinatorial approach, combining combinatorial materials synthesis of thin film composition-spreads with high-throughput property characterization has proven to be a powerful tool to delineate composition-structure-property relationships, and hence to efficiently identify composition windows with enhanced properties. Furthermore, and most importantly for alloy design, theoretical models and hypotheses can be critically appraised. Examples for alloy discovery, optimization, and alloy design of functional as well as structural materials are presented. Using Fe-Mn based alloys as an example, we show that the combination of modern electronic-structure calculations with the highly efficient combinatorial thin film composition-spread method constitutes an effective tool for knowledge-based alloy design.

  12. Combinatorial thin film materials science: From alloy discovery and optimization to alloy design

    International Nuclear Information System (INIS)

    Gebhardt, Thomas; Music, Denis; Takahashi, Tetsuya; Schneider, Jochen M.

    2012-01-01

    This paper provides an overview of modern alloy development, from discovery and optimization towards alloy design, based on combinatorial thin film materials science. The combinatorial approach, combining combinatorial materials synthesis of thin film composition-spreads with high-throughput property characterization has proven to be a powerful tool to delineate composition–structure–property relationships, and hence to efficiently identify composition windows with enhanced properties. Furthermore, and most importantly for alloy design, theoretical models and hypotheses can be critically appraised. Examples for alloy discovery, optimization, and alloy design of functional as well as structural materials are presented. Using Fe-Mn based alloys as an example, we show that the combination of modern electronic-structure calculations with the highly efficient combinatorial thin film composition-spread method constitutes an effective tool for knowledge-based alloy design.

  13. A Knowledge Based Approach to VLSI CAD

    Science.gov (United States)

    1983-09-01

    A Knowledge Based Approach to VLSI CAD, Louis I. Steinberg and ... major issues lies in building up and managing the knowledge base of design expertise. We expect that, as with many recent expert systems, in order to ...

  14. Computational and Experimental Approaches to Cancer Biomarker Discovery

    DEFF Research Database (Denmark)

    Krzystanek, Marcin

    ... of a patient's response to a particular treatment, thus helping to avoid unnecessary treatment and unwanted side effects in non-responding individuals. Currently biomarker discovery is facilitated by recent advances in high-throughput technologies when association between a given biological phenotype and the state or level of a large number of molecular entities is investigated. Such associative analysis could be confounded by several factors, leading to false discoveries. For example, it is assumed that with the exception of the true biomarkers most molecular entities such as gene expression levels show random distribution in a given cohort. However, gene expression levels may also be affected by technical bias when the actual measurement technology or sample handling may introduce a systematic error. If the distribution of systematic errors correlates with the biological phenotype then the risk ...

  15. A System Theoretical Inspired Approach to Knowledge Construction

    DEFF Research Database (Denmark)

    Mathiasen, Helle

    2008-01-01

    The aim of this paper is to discuss the relation between teaching and learning. The point of departure is that teaching environments (communication forums) are a potential facilitator for learning processes and knowledge construction. The paper presents a theoretical framework to discuss the student's knowledge construction, in the light of operative constructivism, inspired by the German sociologist N. Luhmann's system theoretical approach to epistemology. Taking observations as operations based on distinction and indication (selection), contingency becomes a fundamental condition in learning processes, and a condition which teaching must address as far as teaching strives to stimulate non-random learning outcomes. Thus learning outcomes, understood as the individual learner's knowledge construction, cannot be directly predicted from events and characteristics in the environment. This has ...

  16. Guided Discovery with Socratic Questioning

    Directory of Open Access Journals (Sweden)

    M. Hakan Türkçapar

    2015-04-01

    "The Socratic method" is a way of teaching philosophical thinking and knowledge by asking questions. It was first used in ancient times by the Greek philosopher Socrates, who taught his followers by asking questions; these conversations between them are known as "Socratic dialogues". In this methodology, no new knowledge is taught to the individual; rather, the individual is guided to remember and rediscover what was formerly known through this process. The main method used in cognitive therapy is guided discovery. There are various methods of guided discovery in cognitive therapy. The form of verbal exchange between the therapist and client which is used during the process of cognitive behavioral therapy is known as "Socratic questioning". In this method the goal is to make the client rediscover, with a series of questions, a piece of knowledge which he could otherwise know but is not presently conscious of. Socratic questioning consists of several steps, including: identifying the problem by listening to the client and making reflections, finding alternatives by examining and evaluating, reidentification by using the newly rediscovered information and questioning the old distorted belief, and reaching a new conclusion and applying it. Question types used during these procedures are: questions for collecting information, questions revealing meanings, questions revealing beliefs, questions about behaviours during similar past experiences, analytic questions and analytic synthesis questions. In order to make the patient feel understood, it is important to be empathetic and summarize the problem during the interview. In this text, the steps of Socratic Questioning-Guided Discovery will be reviewed with sample dialogues provided for each step. [JCBPR 2015; 4(1): 47-53]

  17. A knowledge translation project on community-centred approaches in public health.

    Science.gov (United States)

    Stansfield, J; South, J

    2018-03-01

    This article examines the development and impact of a national knowledge translation project aimed at improving access to evidence and learning on community-centred approaches for health and wellbeing. Structural changes in the English health system meant that knowledge on community engagement was becoming lost and a fragmented evidence base was seen to impact negatively on policy and practice. A partnership started between Public Health England, NHS England and Leeds Beckett University in 2014 to address these issues. Following a literature review and stakeholder consultation, evidence was published in a national guide to community-centred approaches. This was followed by a programme of work to translate the evidence into national strategy and local practice. The article outlines the key features of the knowledge translation framework developed. Results include positive impacts on local practice and national policy, for example adoption within National Institute for Health and Care Excellence (NICE) guidance and Local Authority public health plans, and utilization as a tool for local audit of practice and commissioning. The framework was successful in its non-linear approach to knowledge translation across a range of inter-connected activity, built on national leadership, knowledge brokerage, coalition building and a strong collaboration between research institute and government agency.

  18. Approaches of Knowledge Management System for the Decommissioning of Nuclear Facilities

    International Nuclear Information System (INIS)

    Iguchi, Y.; Yanagihara, S.; Kato, Y.; Tezuka, M.; Koda, Y.

    2016-01-01

    Full text: The decommissioning of a nuclear facility is a long-term project, handling information from design, construction and operation onwards. Moreover, the decommissioning project is likely to be extended because of the lack of a waste disposal site. In this situation, as the transfer of knowledge to the next generation is a crucial issue, knowledge management (KM) approaches are necessary. For this purpose, a total decommissioning knowledge management system (KMS) is proposed. In this system, the data and information on the plant design, maintenance history, trouble events, waste management records, etc. should be arranged, organized and systematized. The collected data, information and records should be organized by computer support systems; this becomes the basis of the explicit knowledge. Moreover, measures for extracting tacit knowledge from retiring employees are necessary. The experience of the retirees should be documented as much as possible through an effective questionnaire or interview process. In this way, the various KM approaches become an integrated KMS as a whole. The system should be used for the daily accumulation of knowledge through the planning, implementation and evaluation of decommissioning activities, and it will contribute to the transfer of knowledge. (author)

  19. Evolution of the clinical and epidemiological knowledge about Chagas disease 90 years after its discovery

    Directory of Open Access Journals (Sweden)

    Prata Aluízio

    1999-01-01

    Four different periods may be considered in the evolution of knowledge about the clinical and epidemiological aspects of Chagas disease since its discovery: (a) an early period concerning the studies carried out by Carlos Chagas in Lassance with the collaboration of other investigators of the Manguinhos School. At that time the disease was described and the parasite, transmitters and reservoirs were studied. The coexistence of endemic goiter in the same region generated some confusion about the clinical forms of the disease; (b) a second period involving uncertainty and the description of isolated cases, which lasted until the 1940s. Many acute cases were described during this period and the disease was recognized in many Latin American countries. Particularly important were the studies of the Argentine Mission of Regional Pathology Studies, which culminated with the description of the Romaña sign in the 1930s, facilitating the diagnosis of the early phase of the disease. However, the chronic phase, which was the most important, continued to be difficult to recognize; (c) a period of consolidation of knowledge and recognition of the importance of Chagas disease. Studies conducted by Laranja, Dias and Nóbrega in Bambuí updated the description of Chagas heart disease made by Carlos Chagas and Eurico Villela. From then on, the disease was more easily recognized, especially with the emphasis on the use of serologic diagnosis; and (d) a period of enlargement of knowledge of the disease. Studies on denervation conducted in Ribeirão Preto by Fritz Köberle starting in the 1950s led to a better understanding of the relations between Chagas disease and megaesophagus and other visceral megas detected in endemic areas.

  20. Evaluation of gene association methods for coexpression network construction and biological knowledge discovery.

    Directory of Open Access Journals (Sweden)

    Sapna Kumari

    BACKGROUND: Constructing coexpression networks and performing network analysis using large-scale gene expression data sets is an effective way to uncover new biological knowledge; however, the methods used for gene association in constructing these coexpression networks have not been thoroughly evaluated. Since different methods lead to structurally different coexpression networks and provide different information, selecting the optimal gene association method is critical. METHODS AND RESULTS: In this study, we compared eight gene association methods - Spearman rank correlation, Weighted Rank Correlation, Kendall, Hoeffding's D measure, Theil-Sen, Rank Theil-Sen, Distance Covariance, and Pearson - and focused on their true knowledge discovery rates in associating pathway genes and constructing coordination networks of regulatory genes. We also examined the behaviors of the different methods on microarray data with different properties, and whether the biological processes affect the efficiency of different methods. CONCLUSIONS: We found that the Spearman, Hoeffding and Kendall methods are effective in identifying coexpressed pathway genes, whereas the Theil-Sen, Rank Theil-Sen, Spearman, and Weighted Rank methods perform well in identifying coordinated transcription factors that control the same biological processes and traits. Surprisingly, the widely used Pearson method is generally less efficient, and so is the Distance Covariance method that can find gene pairs of multiple relationships. Some of our analyses clearly show that the Pearson and Distance Covariance methods have distinct behaviors as compared to the other six methods. The efficiencies of different methods vary with the data properties to some degree and are largely contingent upon the biological processes, which necessitates pre-analysis to identify the best-performing method for gene association and coexpression network construction.
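
    For readers unfamiliar with these association measures, the short sketch below compares three of them (Pearson, Spearman and Kendall) on a made-up pair of expression profiles; Hoeffding's D, the Theil-Sen variants and Distance Covariance are omitted because they are not available in SciPy's standard statistics module.

```python
# Illustrative comparison of three of the gene-association measures discussed above
# on a toy pair of expression profiles (the data are made up).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
gene_a = rng.normal(size=30)
gene_b = 0.8 * gene_a + rng.normal(scale=0.5, size=30)   # correlated partner gene

print("Pearson :", stats.pearsonr(gene_a, gene_b))
print("Spearman:", stats.spearmanr(gene_a, gene_b))
print("Kendall :", stats.kendalltau(gene_a, gene_b))
# Hoeffding's D, (Rank) Theil-Sen and Distance Covariance need additional
# packages or custom code and are omitted from this sketch.
```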

  1. Conceptual dissonance: evaluating the efficacy of natural language processing techniques for validating translational knowledge constructs.

    Science.gov (United States)

    Payne, Philip R O; Kwok, Alan; Dhaval, Rakesh; Borlawsky, Tara B

    2009-03-01

    The conduct of large-scale translational studies presents significant challenges related to the storage, management and analysis of integrative data sets. Ideally, the application of methodologies such as conceptual knowledge discovery in databases (CKDD) provides a means for moving beyond intuitive hypothesis discovery and testing in such data sets, and towards the high-throughput generation and evaluation of knowledge-anchored relationships between complex bio-molecular and phenotypic variables. However, the induction of such high-throughput hypotheses is non-trivial, and requires correspondingly high-throughput validation methodologies. In this manuscript, we describe an evaluation of the efficacy of a natural language processing-based approach to validating such hypotheses. As part of this evaluation, we will examine a phenomenon that we have labeled as "Conceptual Dissonance" in which conceptual knowledge derived from two or more sources of comparable scope and granularity cannot be readily integrated or compared using conventional methods and automated tools.

  2. Synthetic biology of antimicrobial discovery

    Science.gov (United States)

    Zakeri, Bijan; Lu, Timothy K.

    2012-01-01

    Antibiotic discovery has a storied history. From the discovery of penicillin by Sir Alexander Fleming to the relentless quest for antibiotics by Selman Waksman, the stories have become like folklore, used to inspire future generations of scientists. However, recent discovery pipelines have run dry at a time when multidrug resistant pathogens are on the rise. Nature has proven to be a valuable reservoir of antimicrobial agents, which are primarily produced by modularized biochemical pathways. Such modularization is well suited to remodeling by an interdisciplinary approach that spans science and engineering. Herein, we discuss the biological engineering of small molecules, peptides, and non-traditional antimicrobials and provide an overview of the growing applicability of synthetic biology to antimicrobials discovery. PMID:23654251

  3. Pharmacological screening technologies for venom peptide discovery.

    Science.gov (United States)

    Prashanth, Jutty Rajan; Hasaballah, Nojod; Vetter, Irina

    2017-12-01

    Venomous animals occupy one of the most successful evolutionary niches and occur on nearly every continent. They deliver venoms via biting and stinging apparatuses with the aim to rapidly incapacitate prey and deter predators. This has led to the evolution of venom components that act at a number of biological targets - including ion channels, G-protein coupled receptors, transporters and enzymes - with exquisite selectivity and potency, making venom-derived components attractive pharmacological tool compounds and drug leads. In recent years, plate-based pharmacological screening approaches have been introduced to accelerate venom-derived drug discovery. A range of assays are amenable to this purpose, including high-throughput electrophysiology, fluorescence-based functional and binding assays. However, despite these technological advances, the traditional activity-guided fractionation approach is time-consuming and resource-intensive. The combination of screening techniques suitable for miniaturization with sequence-based discovery approaches - supported by advanced proteomics, mass spectrometry, chromatography as well as synthesis and expression techniques - promises to further improve venom peptide discovery. Here, we discuss practical aspects of establishing a pipeline for venom peptide drug discovery with a particular emphasis on pharmacology and pharmacological screening approaches. This article is part of the Special Issue entitled 'Venom-derived Peptides as Pharmacological Tools.' Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Discovery of a novel amino acid racemase through exploration of natural variation in Arabidopsis thaliana

    Science.gov (United States)

    Strauch, Renee C.; Svedin, Elisabeth; Dilkes, Brian; Chapple, Clint; Li, Xu

    2015-01-01

    Plants produce diverse low-molecular-weight compounds via specialized metabolism. Discovery of the pathways underlying production of these metabolites is an important challenge for harnessing the huge chemical diversity and catalytic potential in the plant kingdom for human uses, but this effort is often encumbered by the necessity to initially identify compounds of interest or purify a catalyst involved in their synthesis. As an alternative approach, we have performed untargeted metabolite profiling and genome-wide association analysis on 440 natural accessions of Arabidopsis thaliana. This approach allowed us to establish genetic linkages between metabolites and genes. Investigation of one of the metabolite–gene associations led to the identification of N-malonyl-d-allo-isoleucine, and the discovery of a novel amino acid racemase involved in its biosynthesis. This finding provides, to our knowledge, the first functional characterization of a eukaryotic member of a large and widely conserved phenazine biosynthesis protein PhzF-like protein family. Unlike most of known eukaryotic amino acid racemases, the newly discovered enzyme does not require pyridoxal 5′-phosphate for its activity. This study thus identifies a new d-amino acid racemase gene family and advances our knowledge of plant d-amino acid metabolism that is currently largely unexplored. It also demonstrates that exploitation of natural metabolic variation by integrating metabolomics with genome-wide association is a powerful approach for functional genomics study of specialized metabolism. PMID:26324904

  5. Computer-Aided Experiment Planning toward Causal Discovery in Neuroscience.

    Science.gov (United States)

    Matiasz, Nicholas J; Wood, Justin; Wang, Wei; Silva, Alcino J; Hsu, William

    2017-01-01

    Computers help neuroscientists to analyze experimental results by automating the application of statistics; however, computer-aided experiment planning is far less common, due to a lack of similar quantitative formalisms for systematically assessing evidence and uncertainty. While ontologies and other Semantic Web resources help neuroscientists to assimilate required domain knowledge, experiment planning requires not only ontological but also epistemological (e.g., methodological) information regarding how knowledge was obtained. Here, we outline how epistemological principles and graphical representations of causality can be used to formalize experiment planning toward causal discovery. We outline two complementary approaches to experiment planning: one that quantifies evidence per the principles of convergence and consistency, and another that quantifies uncertainty using logical representations of constraints on causal structure. These approaches operationalize experiment planning as the search for an experiment that either maximizes evidence or minimizes uncertainty. Despite work in laboratory automation, humans must still plan experiments and will likely continue to do so for some time. There is thus a great need for experiment-planning frameworks that are not only amenable to machine computation but also useful as aids in human reasoning.

  6. Learning about knowledge: A complex network approach

    International Nuclear Information System (INIS)

    Fontoura Costa, Luciano da

    2006-01-01

    An approach to modeling knowledge acquisition in terms of walks along complex networks is described. Each subset of knowledge is represented as a node, and relations between such knowledge are expressed as edges. Two types of edges are considered, corresponding to free and conditional transitions. The latter case implies that a node can only be reached after visiting previously a set of nodes (the required conditions). The process of knowledge acquisition can then be simulated by considering the number of nodes visited as a single agent moves along the network, starting from its lowest layer. It is shown that hierarchical networks--i.e., networks composed of successive interconnected layers--are related to compositions of the prerequisite relationships between the nodes. In order to avoid deadlocks--i.e., unreachable nodes--the subnetwork in each layer is assumed to be a connected component. Several configurations of such hierarchical knowledge networks are simulated and the performance of the moving agent quantified in terms of the percentage of visited nodes after each movement. The Barabasi-Albert and random models are considered for the layer and interconnecting subnetworks. Although all subnetworks in each realization have the same number of nodes, several interconnectivities, defined by the average node degree of the interconnection networks, have been considered. Two visiting strategies are investigated: random choice among the existing edges and preferential choice to so far untracked edges. A series of interesting results are obtained, including the identification of a series of plateaus of knowledge stagnation in the case of the preferential movement strategy in the presence of conditional edges
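
    A minimal sketch of the kind of simulation described above is given below. It is a generic illustration with invented parameters, not the author's code: an agent walks over a Barabasi-Albert network in which some edges are conditional (usable only after a prerequisite set of nodes has been visited), and the fraction of visited nodes is tracked per movement.

```python
# Generic sketch of knowledge acquisition as a walk on a complex network with
# conditional edges (illustrative parameters, not the paper's exact set-up).
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(100, 2, seed=0)

# Mark roughly 20% of edges as conditional: traversable only after the agent
# has already visited a small set of prerequisite nodes.
prerequisites = {}
for u, v in G.edges():
    if random.random() < 0.2:
        prerequisites[(u, v)] = set(random.sample(list(G.nodes()), 3))

def usable(u, v, visited):
    req = prerequisites.get((u, v)) or prerequisites.get((v, u)) or set()
    return req <= visited

visited = {0}
current = 0
coverage = []
for step in range(2000):
    options = [n for n in G.neighbors(current) if usable(current, n, visited)]
    if not options:
        break
    current = random.choice(options)        # random-choice visiting strategy
    visited.add(current)
    coverage.append(len(visited) / G.number_of_nodes())

if coverage:
    print(f"visited {100 * coverage[-1]:.1f}% of nodes after {len(coverage)} moves")
else:
    print("agent was stuck at the start node")
```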

  7. Metabolic Network Discovery by Top-Down and Bottom-Up Approaches and Paths for Reconciliation

    Energy Technology Data Exchange (ETDEWEB)

    Çakır, Tunahan, E-mail: tcakir@gyte.edu.tr [Computational Systems Biology Group, Department of Bioengineering, Gebze Technical University (formerly known as Gebze Institute of Technology), Gebze (Turkey); Khatibipour, Mohammad Jafar [Computational Systems Biology Group, Department of Bioengineering, Gebze Technical University (formerly known as Gebze Institute of Technology), Gebze (Turkey); Department of Chemical Engineering, Gebze Technical University (formerly known as Gebze Institute of Technology), Gebze (Turkey)

    2014-12-03

    The primary focus in the network-centric analysis of cellular metabolism by systems biology approaches is to identify the active metabolic network for the condition of interest. Two major approaches are available for the discovery of the condition-specific metabolic networks. One approach starts from genome-scale metabolic networks, which cover all possible reactions known to occur in the related organism in a condition-independent manner, and applies methods such as the optimization-based Flux-Balance Analysis to elucidate the active network. The other approach starts from the condition-specific metabolome data, and processes the data with statistical or optimization-based methods to extract information content of the data such that the active network is inferred. These approaches, termed bottom-up and top-down, respectively, are currently employed independently. However, considering that both approaches have the same goal, they can both benefit from each other paving the way for the novel integrative analysis methods of metabolome data- and flux-analysis approaches in the post-genomic era. This study reviews the strengths of constraint-based analysis and network inference methods reported in the metabolic systems biology field; then elaborates on the potential paths to reconcile the two approaches to shed better light on how the metabolism functions.
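
    To make the bottom-up, constraint-based side concrete, the following sketch runs a toy flux-balance analysis with SciPy's linear-programming solver; the three-reaction network, its stoichiometry and the bounds are invented purely for illustration.

```python
# Toy flux-balance analysis (bottom-up, constraint-based approach) with SciPy.
# The network, stoichiometry and bounds are invented for illustration only.
import numpy as np
from scipy.optimize import linprog

# One metabolite (A), three reactions: uptake (-> A), biomass (A ->), secretion (A ->).
S = np.array([[1.0, -1.0, -1.0]])         # stoichiometric matrix (metabolites x reactions)
bounds = [(0, 10), (0, None), (0, None)]  # uptake limited to 10 flux units

c = np.array([0.0, -1.0, 0.0])            # maximise biomass flux v2 (linprog minimises)

res = linprog(c, A_eq=S, b_eq=np.zeros(1), bounds=bounds, method="highs")
print("optimal fluxes (uptake, biomass, secretion):", np.round(res.x, 3))
print("maximal biomass flux:", round(-res.fun, 3))
```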

  8. Metabolic Network Discovery by Top-Down and Bottom-Up Approaches and Paths for Reconciliation

    International Nuclear Information System (INIS)

    Çakır, Tunahan; Khatibipour, Mohammad Jafar

    2014-01-01

    The primary focus in the network-centric analysis of cellular metabolism by systems biology approaches is to identify the active metabolic network for the condition of interest. Two major approaches are available for the discovery of the condition-specific metabolic networks. One approach starts from genome-scale metabolic networks, which cover all possible reactions known to occur in the related organism in a condition-independent manner, and applies methods such as the optimization-based Flux-Balance Analysis to elucidate the active network. The other approach starts from the condition-specific metabolome data, and processes the data with statistical or optimization-based methods to extract information content of the data such that the active network is inferred. These approaches, termed bottom-up and top-down, respectively, are currently employed independently. However, considering that both approaches have the same goal, they can both benefit from each other paving the way for the novel integrative analysis methods of metabolome data- and flux-analysis approaches in the post-genomic era. This study reviews the strengths of constraint-based analysis and network inference methods reported in the metabolic systems biology field; then elaborates on the potential paths to reconcile the two approaches to shed better light on how the metabolism functions.

  9. Incremental Knowledge Discovery in Social Media

    Science.gov (United States)

    Tang, Xuning

    2013-01-01

    In light of the prosperity of online social media, Web users are shifting from data consumers to data producers. To catch the pulse of this rapidly changing world, it is critical to transform online social media data to information and to knowledge. This dissertation centers on the issue of modeling the dynamics of user communities, trending…

  10. Antibody informatics for drug discovery

    DEFF Research Database (Denmark)

    Shirai, Hiroki; Prades, Catherine; Vita, Randi

    2014-01-01

    ... to the antibody science in every project in antibody drug discovery. Recent experimental technologies allow for the rapid generation of large-scale data on antibody sequences, affinity, potency, structures, and biological functions; this should accelerate drug discovery research. Therefore, a robust bioinformatic infrastructure for these large data sets has become necessary. In this article, we first identify and discuss the typical obstacles faced during the antibody drug discovery process. We then summarize the current status of three sub-fields of antibody informatics as follows: (i) recent progress in technologies for antibody rational design using computational approaches to affinity and stability improvement, as well as ab-initio and homology-based antibody modeling; (ii) resources for antibody sequences, structures, and immune epitopes and open drug discovery resources for development of antibody drugs; and (iii) ...

  11. An Object-Oriented Approach to Knowledge Representation in a Biomedical Domain

    NARCIS (Netherlands)

    Ensing, M.; Paton, R.; Speel, P.H.W.M.; Speel, P.H.W.M.; Rada, R.

    1994-01-01

    An object-oriented approach has been applied to the different stages involved in developing a knowledge base about insulin metabolism. At an early stage the separation of terminological and assertional knowledge was made. The terminological component was developed by medical experts and represented

  12. Computational discovery of picomolar Q(o) site inhibitors of cytochrome bc1 complex.

    Science.gov (United States)

    Hao, Ge-Fei; Wang, Fu; Li, Hui; Zhu, Xiao-Lei; Yang, Wen-Chao; Huang, Li-Shar; Wu, Jia-Wei; Berry, Edward A; Yang, Guang-Fu

    2012-07-11

    A critical challenge to the fragment-based drug discovery (FBDD) is its low-throughput nature due to the necessity of biophysical method-based fragment screening. Herein, a method of pharmacophore-linked fragment virtual screening (PFVS) was successfully developed. Its application yielded the first picomolar-range Q(o) site inhibitors of the cytochrome bc(1) complex, an important membrane protein for drug and fungicide discovery. Compared with the original hit compound 4 (K(i) = 881.80 nM, porcine bc(1)), the most potent compound 4f displayed 20 507-fold improved binding affinity (K(i) = 43.00 pM). Compound 4f was proved to be a noncompetitive inhibitor with respect to the substrate cytochrome c, but a competitive inhibitor with respect to the substrate ubiquinol. Additionally, we determined the crystal structure of compound 4e (K(i) = 83.00 pM) bound to the chicken bc(1) at 2.70 Å resolution, providing a molecular basis for understanding its ultrapotency. To our knowledge, this study is the first application of the FBDD method in the discovery of picomolar inhibitors of a membrane protein. This work demonstrates that the novel PFVS approach is a high-throughput drug discovery method, independent of biophysical screening techniques.

  13. FluKB: A Knowledge-Based System for Influenza Vaccine Target Discovery and Analysis of the Immunological Properties of Influenza Viruses

    DEFF Research Database (Denmark)

    Simon, Christian; Kudahl, Ulrich Johan; Sun, Jing

    2015-01-01

    FluKB is a knowledge-based system focusing on data and analytical tools for influenza vaccine discovery. The main goal of FluKB is to provide access to curated influenza sequence and epitope data and enhance the analysis of influenza sequence diversity and the analysis of targets of immune responses. FluKB consists of more than 400,000 influenza protein sequences, known epitope data (357 verified T-cell epitopes, 685 HLA binders, and 16 naturally processed MHC ligands), and a collection of 28 influenza antibodies and their structurally defined B-cell epitopes. FluKB was built using a modular ...

  14. Peningkatan Hasil Belajar Kompetensi Dasar Mengklasifikasikan Jenis Bisnis Ritel melalui Model Discovery Learning dengan Media Mind Mapping

    Directory of Open Access Journals (Sweden)

    Sri Lestari

    2016-11-01

    This research aimed to determine how the discovery learning model with mind mapping media can be implemented for the basic competence of classifying retail business types, and whether this implementation can improve students' learning outcomes. The research was conducted using qualitative and quantitative approaches in a classroom action research design of two cycles, with each cycle consisting of 2 meetings of 3x45 minutes. The results showed that, by using the discovery learning model with mind mapping media, students' learning outcomes in the domains of knowledge, skills and attitudes improved. Abstract: This study aims to determine how the discovery learning model using mind mapping media is applied to the basic competence of classifying retail business types, and whether its application can improve students' learning outcomes. The research was conducted with qualitative and quantitative approaches in a classroom action research design of two cycles, with 2 meetings of 3 x 45 minutes per cycle. The results show that, through the use of the discovery learning model with mind mapping media, students' learning outcomes in the domains of knowledge, skills and attitudes improved.

  15. Characterization of GM events by insert knowledge adapted re-sequencing approaches

    OpenAIRE

    Yang, Litao; Wang, Congmao; Holst-Jensen, Arne; Morisset, Dany; Lin, Yongjun; Zhang, Dabing

    2013-01-01

    Detection methods and data from molecular characterization of genetically modified (GM) events are needed by stakeholders such as public risk assessors and regulators. Generally, the molecular characteristics of GM events are not comprehensively revealed by current approaches, which are biased towards detecting transformation vector derived sequences. GM events are classified based on available knowledge of the sequences of vectors and inserts (insert knowledge). Herein we present three insert knowledge-ad...

  16. Strategic approaches and assessment techniques-Potential for knowledge brokerage towards sustainability

    International Nuclear Information System (INIS)

    Sheate, William R.; Partidario, Maria Rosario

    2010-01-01

    The role of science in policy and decision-making has been an issue of intensive debate over the past decade. The concept of knowledge brokerage has been developing in this context contemplating issues of communication, interaction, sharing of knowledge, contribution to common understandings, as well as to effective and efficient action. For environmental and sustainability policy and decision-making the discussion has addressed more the essence of the issue rather than the techniques that can be used to enable knowledge brokerage. This paper aims to contribute to covering this apparent gap in current discussion by selecting and examining empirical cases from Portugal and the United Kingdom that can help to explore how certain environmental and sustainability assessment approaches can contribute, if well applied, to strengthen the science-policy link. The cases show that strategic assessment approaches and techniques have the potential to promote knowledge brokerage, but a conscious effort will be required to design in genuine opportunities to facilitate knowledge exchange and transfer as part of assessment processes.

  17. Contextual Approach with Guided Discovery Learning and Brain Based Learning in Geometry Learning

    Science.gov (United States)

    Kartikaningtyas, V.; Kusmayadi, T. A.; Riyadi

    2017-09-01

    The aim of this study was to combine the contextual approach with Guided Discovery Learning (GDL) and Brain Based Learning (BBL) in junior high school geometry learning, and to analyse the effect of the contextual approach with GDL and BBL in geometry learning. GDL-contextual and BBL-contextual were built from the steps of GDL and BBL combined with the principles of the contextual approach. To validate the models, a quasi-experiment with two experimental groups was used. The sample, chosen by stratified cluster random sampling, consisted of 150 grade 8 students in junior high school. The data were collected through a mathematics achievement test given to the students after the treatment of each group, and were analysed using one-way ANOVA with different cells. The result shows that GDL-contextual does not have a different effect from BBL-contextual on mathematics achievement in geometry learning. This means that both models could be used in mathematics learning as innovative ways of teaching geometry.
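
    The analysis step mentioned above, a one-way ANOVA comparing the achievement scores of the two treatment groups, can be reproduced in outline as follows; the scores are fabricated and SciPy's f_oneway is used as a stand-in for whatever statistical software the authors used.

```python
# Illustrative one-way ANOVA on fabricated achievement scores for two groups
# (GDL-contextual vs BBL-contextual); not the study's actual data.
from scipy.stats import f_oneway

gdl_contextual = [72, 80, 68, 75, 83, 77, 70, 79]
bbl_contextual = [74, 78, 71, 76, 81, 73, 69, 80]

f_stat, p_value = f_oneway(gdl_contextual, bbl_contextual)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")
# A large p-value would be consistent with the reported finding that the two
# models do not differ in their effect on geometry achievement.
```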

  18. Case-based approaches for knowledge application and organisational learning

    DEFF Research Database (Denmark)

    Wang, Chengbo; Johansen, John; Luxhøj, James T.

    2005-01-01

    In dealing with the strategic issues within a manufacturing system, it is necessary to facilitate formulating the composing elements of a set of strategic manufacturing practices and activity patterns that will support an enterprise to reinforce and increase its competitive advantage. These practices and activity patterns are based on learning and applying the knowledge internal and external to an organisation. To ensure their smooth formulation process, there are two important techniques designed – an expert adaptation approach and an expert evaluation approach. These two approaches provide ...

  19. Knowledge discovery in databases of biomechanical variables: application to the sit to stand motor task

    Directory of Open Access Journals (Sweden)

    Benvenuti Francesco

    2004-10-01

    Background: The interpretation of data obtained in a movement analysis laboratory is a crucial issue in clinical contexts. Collection of such data in large databases might encourage the use of modern techniques of data mining to discover additional knowledge with automated methods. In order to maximise the size of the database, simple and low-cost experimental set-ups are preferable. The aim of this study was to extract knowledge inherent in the sit-to-stand task as performed by healthy adults, by searching relationships among measured and estimated biomechanical quantities. An automated method was applied to a large amount of data stored in a database. The sit-to-stand motor task had already been shown to be adequate for determining the level of individual motor ability. Methods: The technique of searching for association rules was chosen to discover patterns as part of a Knowledge Discovery in Databases (KDD) process applied to a sit-to-stand motor task observed with a simple experimental set-up and analysed by means of a minimum measured input model. Selected parameters and variables of a database containing data from 110 healthy adults, of both genders and of a large range of ages, performing the task were considered in the analysis. Results: A set of rules and definitions was found characterising the patterns shared by the investigated subjects. Time events of the task turned out to be highly interdependent, at least in their average values, showing a high level of repeatability of the timing of the performance of the task. Conclusions: The distinctive patterns of the sit-to-stand task found in this study, together with those that could be found in similar studies focusing on subjects with pathologies, could be used as a reference for the functional evaluation of specific subjects performing the sit-to-stand motor task.
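
    As a simplified illustration of the association-rule step of such a KDD process (not the study's actual pipeline or data), the sketch below mines simple one-to-one rules, with minimum support and confidence thresholds, from a few invented "transactions" of discretised sit-to-stand features.

```python
# Simplified association-rule mining over invented, discretised sit-to-stand features;
# illustrates the KDD step only, not the study's data or software.
from itertools import combinations

transactions = [
    {"fast_trunk_flexion", "short_total_time", "young"},
    {"fast_trunk_flexion", "short_total_time", "young"},
    {"slow_trunk_flexion", "long_total_time", "older"},
    {"fast_trunk_flexion", "short_total_time", "older"},
    {"slow_trunk_flexion", "long_total_time", "older"},
]

def support(itemset):
    """Fraction of transactions containing the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

min_support, min_confidence = 0.4, 0.8
items = sorted(set().union(*transactions))

# Rules of the form {a} -> {b}, kept if they pass the support/confidence thresholds.
for a, b in combinations(items, 2):
    for lhs, rhs in [({a}, {b}), ({b}, {a})]:
        supp = support(lhs | rhs)
        if supp >= min_support and support(lhs) > 0:
            conf = supp / support(lhs)
            if conf >= min_confidence:
                print(f"{lhs} -> {rhs}  support={supp:.2f} confidence={conf:.2f}")
```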

  20. A decision support system based on hybrid knowledge approach for nuclear power plant operation

    International Nuclear Information System (INIS)

    Yang, J.O.; Chang, S.H.

    1991-01-01

    This paper describes a diagnostic expert system, HYPOSS (Hybrid Knowledge Based Plant Operation Supporting System), which has been developed to support operators' decision making during transients of a nuclear power plant. HYPOSS adopts a hybrid knowledge approach which combines shallow and deep knowledge to couple the merits of both. In HYPOSS, four types of knowledge are used according to the steps of the diagnosis procedure: structural, functional, behavioral and heuristic knowledge. Frames and rules are adopted to represent the various knowledge types. Rule-based deduction and abduction are used for shallow and deep knowledge based reasoning, respectively. Event-based operational guidelines are provided to the operator according to the diagnosed results.

  1. Does scientism undermine other forms of knowledge?

    Directory of Open Access Journals (Sweden)

    Ndubuisi C. Ani

    2016-03-01

    Science has continually bridged the gaps in knowledge about reality by exerting its prowess in explanation, discovery and invention. Astonished by the successes of science, coupled with the demonstrability and (purported) objectivity of scientific knowledge, scholars are lured to nurse the impression that science is the answer to all questions that need to be asked about reality. This has led to an intellectual fanaticism called scientism, where science is seen as the only bona fide way of attaining any true knowledge whatsoever. Consequently, other fields of knowledge suffer grievously from being abandoned, belittled or modified to operate using the scientific method of inquiry. Against this backdrop, this paper argues that science is not the only way of knowing reality. Other fields of knowledge and their traditional methods of inquiry are so vital in the understanding of reality that abandoning or constructing them in the scientific light is tantamount to having a parochial view of reality. Through its arguments, the research advances pluralistic, inclusive and complementary approaches. Intradisciplinary and/or interdisciplinary implications: This research challenges the claims and influence of scientism, which holds that science has the answer to every question about reality. The paper contends that other epistemological methods of philosophical, religious, mythical and artistic forms are essential. Hence, the research advances a pluralistic and complementary approach in epistemology.

  2. The use of web ontology languages and other semantic web tools in drug discovery.

    Science.gov (United States)

    Chen, Huajun; Xie, Guotong

    2010-05-01

    To optimize drug development processes, pharmaceutical companies require principled approaches to integrate disparate data on a unified infrastructure, such as the web. The semantic web, built on web technology, provides a common, open framework capable of harmonizing diversified resources to enable networked and collaborative drug discovery. We survey the state of the art in utilizing web ontologies and other semantic web technologies to interlink both data and people to support integrated drug discovery across domains and multiple disciplines. In particular, the survey covers three major application categories: i) semantic integration and open data linking; ii) semantic web services and scientific collaboration; and iii) semantic data mining and integrative network analysis. The reader will gain: i) basic knowledge of semantic web technologies; ii) an overview of the web ontology landscape for drug discovery; and iii) a basic understanding of the values and benefits of utilizing web ontologies in drug discovery. i) The semantic web enables a network effect for linking open data for integrated drug discovery; ii) semantic web service technology can support instant ad hoc collaboration to improve pipeline productivity; and iii) the semantic web encourages publishing data in a semantic form, such as Resource Description Framework attributes, and thus helps move away from a reliance on pure textual content analysis toward more efficient semantic data mining.
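
    As a minimal, hedged illustration of the linked-data idea surveyed here, the sketch below builds a few RDF triples with rdflib and queries them with SPARQL; the compound, target and predicate URIs are invented placeholders rather than identifiers from any real drug-discovery dataset.

        from rdflib import Graph, Literal, Namespace, RDF

        EX = Namespace("http://example.org/drug/")   # hypothetical namespace
        g = Graph()
        g.add((EX.compound_42, RDF.type, EX.Compound))
        g.add((EX.compound_42, EX.inhibits, EX.kinase_ABC1))
        g.add((EX.kinase_ABC1, EX.associatedWith, EX.disease_X))
        g.add((EX.compound_42, EX.label, Literal("compound 42")))

        # Which compounds inhibit a target associated with disease_X?
        query = """
        SELECT ?compound WHERE {
            ?compound <http://example.org/drug/inhibits> ?target .
            ?target <http://example.org/drug/associatedWith> <http://example.org/drug/disease_X> .
        }"""
        for row in g.query(query):
            print(row.compound)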

  3. Computational Design and Discovery of Ni-Based Alloys and Coatings: Thermodynamic Approaches Validated by Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zi-Kui [Pennsylvania State University; Gleeson, Brian [University of Pittsburgh; Shang, Shunli [Pennsylvania State University; Gheno, Thomas [University of Pittsburgh; Lindwall, Greta [Pennsylvania State University; Zhou, Bi-Cheng [Pennsylvania State University; Liu, Xuan [Pennsylvania State University; Ross, Austin [Pennsylvania State University

    2018-04-23

    This project developed computational tools that can complement and support experimental efforts in order to enable discovery and more efficient development of Ni-base structural materials and coatings. The project goal was reached through an integrated computation-predictive and experimental-validation approach, including first-principles calculations, thermodynamic CALPHAD (CALculation of PHAse Diagrams) modeling, and experimental investigations on compositions relevant to Ni-base superalloys and coatings in terms of oxide layer growth and microstructure stabilities. The developed description includes composition ranges typical for coating alloys and, hence, allows for the prediction of thermodynamic properties for these material systems. The calculation of phase compositions, phase fractions, and phase stabilities, which are directly related to properties such as ductility and strength, was a valuable contribution, along with the collection of computational tools that are required to meet the increasing demands for strong, ductile and environmentally-protective coatings. Specifically, a suitable thermodynamic description for the Ni-Al-Cr-Co-Si-Hf-Y system was developed for bulk alloy and coating compositions. Experiments were performed to validate and refine the thermodynamics from the CALPHAD modeling approach. Additionally, alloys produced using predictions from the current computational models were studied in terms of their oxidation performance. Finally, results obtained from experiments aided in the development of a thermodynamic modeling automation tool, ESPEI/pycalphad, for more rapid discovery and development of new materials.
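
    For readers unfamiliar with CALPHAD tooling, the hedged sketch below shows roughly how a thermodynamic description is queried with pycalphad; the database file name, component list and conditions are placeholders, not the project's actual Ni-Al-Cr-Co-Si-Hf-Y description.

        from pycalphad import Database, equilibrium, variables as v

        # Assumes a CALPHAD database file is available locally (hypothetical filename).
        db = Database("ni_al_cr.tdb")
        comps = ["NI", "AL", "CR", "VA"]          # VA = vacancies, required by most TDBs
        phases = list(db.phases.keys())

        # Equilibrium phases and phase fractions at a fixed composition, T and P.
        eq = equilibrium(db, comps, phases,
                         {v.X("AL"): 0.10, v.X("CR"): 0.08, v.T: 1273, v.P: 101325})
        print(eq.Phase.values.squeeze())          # stable phases at these conditions
        print(eq.NP.values.squeeze())             # corresponding phase fractions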

  4. Knowledge-based public health situation awareness

    Science.gov (United States)

    Mirhaji, Parsa; Zhang, Jiajie; Srinivasan, Arunkumar; Richesson, Rachel L.; Smith, Jack W.

    2004-09-01

    There have been numerous efforts to create comprehensive databases from multiple sources to monitor the dynamics of public health and, more specifically, to detect the potential threats of bioterrorism before widespread dissemination. However, there is little evidence that these systems are timely and dependable, or that they can reliably distinguish man-made from natural incidents. One must evaluate the value of so-called 'syndromic surveillance systems' along with the costs involved in the design, development, implementation and maintenance of such systems and the costs involved in the investigation of the inevitable false alarms. In this article we will introduce a new perspective to the problem domain with a shift in paradigm from 'surveillance' toward 'awareness'. As we conceptualize a rather different approach to tackle the problem, we will introduce a different methodology in the application of information science, computer science, cognitive science and human-computer interaction concepts in the design and development of so-called 'public health situation awareness systems'. We will share some of our design and implementation concepts for the prototype system that is under development at the Center for Biosecurity and Public Health Informatics Research at the University of Texas Health Science Center at Houston. The system is based on a knowledgebase containing ontologies with different layers of abstraction, from multiple domains, that provide the context for information integration, knowledge discovery, interactive data mining, information visualization, information sharing and communications. The modular design of the knowledgebase and its knowledge representation formalism enables incremental evolution of the system from a partial system to a comprehensive knowledgebase of 'public health situation awareness' as it acquires new knowledge through interactions with domain experts or automatic discovery of new knowledge.

  5. Integrative Sparse K-Means With Overlapping Group Lasso in Genomic Applications for Disease Subtype Discovery.

    Science.gov (United States)

    Huo, Zhiguang; Tseng, George

    2017-06-01

    Cancer subtype discovery is the first step towards delivering personalized medicine to cancer patients. With the accumulation of massive multi-level omics datasets and established biological knowledge databases, omics data integration that incorporates rich existing biological knowledge is essential for deciphering the biological mechanisms behind complex diseases. In this manuscript, we propose an integrative sparse K-means (is-Kmeans) approach to discover disease subtypes with the guidance of prior biological knowledge via sparse overlapping group lasso. An algorithm using the alternating direction method of multipliers (ADMM) is applied for fast optimization. Simulations and three real applications in breast cancer and leukemia are used to compare is-Kmeans with existing methods and demonstrate its superior clustering accuracy, feature selection, functional annotation of detected molecular features and computing efficiency.
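
    The idea of letting feature weights drive subtype clustering can be illustrated with a simplified sketch. The code below is a plain feature-weighted K-means in the spirit of sparse K-means (Witten and Tibshirani, 2010), not the paper's is-Kmeans with overlapping group lasso and ADMM; the data are synthetic.

        import numpy as np
        from sklearn.cluster import KMeans

        def sparse_kmeans(X, k, threshold=0.2, n_iter=5, seed=0):
            """Alternate between clustering on weighted features and re-weighting
            features by their between-cluster sum of squares (BCSS), soft-thresholded
            so that uninformative features receive weight zero."""
            n, p = X.shape
            w = np.full(p, 1.0 / np.sqrt(p))
            labels = np.zeros(n, dtype=int)
            for _ in range(n_iter):
                Xw = X * np.sqrt(w)
                labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(Xw)
                tss = ((X - X.mean(axis=0)) ** 2).sum(axis=0)        # total SS per feature
                wss = np.zeros(p)
                for c in range(k):
                    Xc = X[labels == c]
                    if len(Xc):
                        wss += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
                bcss = tss - wss                                     # between-cluster SS
                w = np.maximum(bcss - threshold * bcss.max(), 0.0)   # soft-threshold
                w /= np.linalg.norm(w) + 1e-12
            return labels, w

        # Toy data: only the first 5 of 50 features carry cluster signal.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 50))
        X[:60, :5] += 2.0
        labels, weights = sparse_kmeans(X, k=2)
        print("features kept:", np.flatnonzero(weights).tolist())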

  6. A Systematic Knowledge Management Approach Using Object-Oriented Theory in Customer Complaint Management

    Directory of Open Access Journals (Sweden)

    Wusheng Zhang

    2010-12-01

    Full Text Available Research into the effectiveness of customer complaint management has attracted researchers, yet there has been little discussion of customer complaint management in the context of a systematic knowledge management approach, particularly in the domain of the hotel industry. This paper aims to address this gap through the application of object-oriented theory, for which the notation of the Unified Modelling Language has been adopted for the representation of the concepts, objects, relationships and vocabularies in the domain. The paper used data from forty-seven hotel management staff and academics in hospitality management to investigate lessons learned and best practices in customer complaint management and knowledge management. By providing insights into the potential of a knowledge management approach using object-oriented theory, this study advances our understanding of how a knowledge management approach can systematically support the management of hotel customer complaints.

  7. Discovery of novel bacterial toxins by genomics and computational biology.

    Science.gov (United States)

    Doxey, Andrew C; Mansfield, Michael J; Montecucco, Cesare

    2018-06-01

    Many hundreds of bacterial protein toxins are presently known. Traditionally, toxin identification begins with pathological studies of bacterial infectious disease. Following identification and cultivation of a bacterial pathogen, the protein toxin is purified from the culture medium and its pathogenic activity is studied using the methods of biochemistry and structural biology, cell biology, tissue and organ biology, and appropriate animal models, supplemented by bioimaging techniques. The ongoing and explosive development of high-throughput DNA sequencing and bioinformatic approaches has set in motion a revolution in many fields of biology, including microbiology. One consequence is that genes encoding novel bacterial toxins can be identified by bioinformatic and computational methods based on previous knowledge accumulated from studies of the biology and pathology of thousands of known bacterial protein toxins. Starting from the paradigmatic cases of diphtheria toxin, tetanus and botulinum neurotoxins, this review discusses traditional experimental approaches as well as bioinformatics and genomics-driven approaches that facilitate the discovery of novel bacterial toxins. We discuss recent work on the identification of novel botulinum-like toxins from genera such as Weissella, Chryseobacterium, and Enterococcus, and the implications of these computationally identified toxins in the field. Finally, we discuss the promise of metagenomics in the discovery of novel toxins and their ecological niches, and present data suggesting the existence of uncharacterized, botulinum-like toxin genes in insect gut metagenomes. Copyright © 2018. Published by Elsevier Ltd.

  8. An Analysis of Knowledge Sharing Approaches for Emerging-technology-based Strategic Alliances in Electronic Industry

    Institute of Scientific and Technical Information of China (English)

    LIU Ju; LI Yong-jian

    2006-01-01

    Emerging technologies are now initiating new industries and transforming old ones with tremendous power. Compared with established technologies, they are a different game, with distinctive characteristics of knowledge management in knowledge-based and technological-innovation-based competition. This paper is concerned with how emerging-technology-based strategic alliances (ETBSAs) can obtain knowledge advantage and enhance competences through knowledge sharing. On the basis of our previous work on the distinctive attributes of emerging technologies, we counter the widespread presumption that the primary purpose of strategic alliances is knowledge acquisition by means of learning. We offer new insight into the knowledge sharing approaches of ETBSAs - the knowledge integrating approach, by which each member firm integrates its partner's complementary knowledge base into its products and services while maintaining its own knowledge specialization. Thus, ETBSAs should plan and practice their knowledge sharing strategies from the angle of knowledge integration rather than knowledge acquisition. A four-dimensional framework is developed to analyze the advantages and disadvantages of these two knowledge sharing approaches. Some cases in the electronics industry are introduced to illustrate our point of view.

  9. The Genetics of Obsessive-Compulsive Disorder and Tourette Syndrome: An Epidemiological and Pathway-Based Approach for Gene Discovery

    Science.gov (United States)

    Grados, Marco A.

    2010-01-01

    Objective: To provide a contemporary perspective on genetic discovery methods applied to obsessive-compulsive disorder (OCD) and Tourette syndrome (TS). Method: A review of research trends in genetics research in OCD and TS is conducted, with emphasis on novel approaches. Results: Genome-wide association studies (GWAS) are now in progress in OCD…

  10. Accounting for discovery bias in genomic prediction

    Science.gov (United States)

    Our objective was to evaluate an approach to mitigating discovery bias in genomic prediction. Accuracy may be improved by placing greater emphasis on regions of the genome expected to be more influential on a trait. Methods emphasizing regions result in a phenomenon known as “discovery bias” if info...

  11. Microbial Dark Matter Investigations: How Microbial Studies Transform Biological Knowledge and Empirically Sketch a Logic of Scientific Discovery

    Science.gov (United States)

    Bernard, Guillaume; Pathmanathan, Jananan S; Lannes, Romain; Lopez, Philippe; Bapteste, Eric

    2018-01-01

    Abstract Microbes are the oldest and most widespread, phylogenetically and metabolically diverse life forms on Earth. However, they were discovered only 334 years ago, and their diversity began to be seriously investigated even later. For these reasons, microbial studies that unveil novel microbial lineages and processes affecting or involving microbes deeply (and repeatedly) transform knowledge in biology. Considering the quantitative prevalence of taxonomically and functionally unassigned sequences in environmental genomics data sets, and that of uncultured microbes on the planet, we propose that unraveling the microbial dark matter should be identified as a central priority for biologists. Based on former empirical findings of microbial studies, we sketch a logic of discovery with the potential to further highlight the microbial unknowns. PMID:29420719

  12. A SOCIO-COGNITIVE APPROACH TO KNOWLEDGE CONSTRUCTION THROUGH BLENDED LEARNING

    Directory of Open Access Journals (Sweden)

    Tuba Kocaturk

    2017-01-01

    Full Text Available This paper results from an educational research project undertaken by the School of Architecture at the University of Liverpool, funded by the Higher Education Academy in the UK. The research explored technology-driven shifts in architectural design studio education, identified their cognitive effects on design learning and developed an innovative blended learning approach that was implemented in a master's-level digital design studio. The contribution of the research and the proposed approach to existing knowledge and practice is twofold. Firstly, it offers a new pedagogical framework which integrates social, technical and cognitive dimensions of knowledge construction. Secondly, it offers a unique operational model through the integration of both mediational and instrumental use of digital media. The proposed model provides a useful basis for the effective mobilization of next-generation learning technologies which can effectively respond to the learning challenges specific to architectural design knowledge and its means of creation.

  13. Optogenetic Approaches to Drug Discovery in Neuroscience and Beyond.

    Science.gov (United States)

    Zhang, Hongkang; Cohen, Adam E

    2017-07-01

    Recent advances in optogenetics have opened new routes to drug discovery, particularly in neuroscience. Physiological cellular assays probe functional phenotypes that connect genomic data to patient health. Optogenetic tools, in particular tools for all-optical electrophysiology, now provide a means to probe cellular disease models with unprecedented throughput and information content. These techniques promise to identify functional phenotypes associated with disease states and to identify compounds that improve cellular function regardless of whether the compound acts directly on a target or through a bypass mechanism. This review discusses opportunities and unresolved challenges in applying optogenetic techniques throughout the discovery pipeline - from target identification and validation, to target-based and phenotypic screens, to clinical trials. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. A knowledge discovery approach to explore some Sun/Earth's climate relationships

    Science.gov (United States)

    Pou, A.; Valdes, J.

    2009-09-01

    Delta O18/16 data would more readily respond to solar influences. This raises the suspicion that perhaps they reflect not only temperatures but also solar activity, as well as other possible factors not directly related to atmospheric temperatures. These methodologies may be useful as exploratory tools, directing attention to specific areas where further research is required. This could be the case for the Delta O18/16 data, frequently considered to be a reliable and accurate proxy of temperatures. c) Another experiment was made using daily maximum temperatures from 10 Spanish meteorological stations for the period 1901-2005 [3]. Using a hybrid procedure (Differential Evolution and Fletcher-Reeves Classical Optimization) it was found that a subset was capable of preserving the 10-dimensional similarity when nonlinearly mapped into 1D. A daily index, F1, was applied to the whole dataset, grouped by years and transformed into a Kolmogorov-Smirnov dissimilarity matrix, space optimized and clustered, giving the following landmarks: 1911-12, 1919-1920, 1960, 1973 and 1989. A visual comparison with the aa geomagnetic index may suggest a certain coupling with changes in the magnetic field behavior. The complexity of the patterns suggests that the possible relationships between Earth's climate and solar activity may occur in much more complex ways than just irradiance variations and simple linear correlations. REFERENCES: [1] Valdés, J.J., Bonham-Carter, G. "Time Dependent Neural Network Models For Detecting Changes of State in Complex Processes: Applications in Earth Sciences and Astronomy". Neural Networks, vol 19, (2), pp 196-207, 2006. [2] Valdés, J., Pou, A. "Greenland Temperatures and Solar Activity: A Computational Intelligence Approach," Proceedings of the 2007 IEEE International Joint Conference on Neural Networks (IJCNN 2007). Orlando, Florida, USA. August 12-17, 2007. [3] Valdés, J., Pou, A., Orchard, B. "Characterization of Climatic Variations in
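
    The Kolmogorov-Smirnov dissimilarity step mentioned above can be sketched as follows; the yearly series are assumed to be plain arrays of a daily index grouped by year, and the study's subsequent space optimization and clustering are not reproduced.

        import numpy as np
        from scipy.stats import ks_2samp

        def ks_dissimilarity_matrix(yearly_series):
            """Pairwise Kolmogorov-Smirnov statistic between yearly distributions
            of a daily index (yearly_series is a list of 1-D arrays, one per year)."""
            n = len(yearly_series)
            D = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    stat, _ = ks_2samp(yearly_series[i], yearly_series[j])
                    D[i, j] = D[j, i] = stat     # KS statistic in [0, 1] as dissimilarity
            return D

        # Synthetic example: 5 "years" of 365 daily values each.
        rng = np.random.default_rng(1)
        years = [rng.normal(loc=0.1 * y, size=365) for y in range(5)]
        print(ks_dissimilarity_matrix(years).round(2))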

  15. Managing knowledge business intelligence: A cognitive analytic approach

    Science.gov (United States)

    Surbakti, Herison; Ta'a, Azman

    2017-10-01

    The purpose of this paper is to identify and analyze the integration of Knowledge Management (KM) and Business Intelligence (BI) in order to achieve a competitive edge in the context of intellectual capital. The methodology includes a review of the literature and an analysis of interview data from managers in the corporate sector, together with models established by different authors. BI technologies are strongly associated with the KM process for attaining competitive advantage. KM is strongly influenced by human and social factors and turns them into the most valuable assets when an efficient system is run under BI tactics and technologies. Predictive analytics, however, is rooted in the field of BI. Extracting tacit knowledge so that it can serve as a new source for BI analysis is a major challenge. Analytic methods that address the diversity of the data corpus - structured and unstructured - require a cognitive approach to provide estimative results and to yield actionable descriptive, predictive and prescriptive results. This remains a major challenge, and this paper elaborates on it as initial work.

  16. A Structural Knowledge Representation Approach in Emergency Knowledge Reorganization

    OpenAIRE

    Wang, Qingquan; Rong, Lili

    2007-01-01

    Facing complicated problems in emergency responses, decision makers should acquire sufficient background knowledge for efficient decision-making. The emergency knowledge acquired can be seen as a special kind of product that is transferred among emergency decision makers and functional departments. The processing of this knowledge product motivates emergency knowledge decomposition and event-oriented knowledge integration, i.e. knowledge reorganization. Supported by the semantic power of category theory, t...

  17. Bioinformatics in translational drug discovery.

    Science.gov (United States)

    Wooller, Sarah K; Benstead-Hume, Graeme; Chen, Xiangrong; Ali, Yusuf; Pearl, Frances M G

    2017-08-31

    Bioinformatics approaches are becoming ever more essential in translational drug discovery both in academia and within the pharmaceutical industry. Computational exploitation of the increasing volumes of data generated during all phases of drug discovery is enabling key challenges of the process to be addressed. Here, we highlight some of the areas in which bioinformatics resources and methods are being developed to support the drug discovery pipeline. These include the creation of large data warehouses, bioinformatics algorithms to analyse 'big data' that identify novel drug targets and/or biomarkers, programs to assess the tractability of targets, and prediction of repositioning opportunities that use licensed drugs to treat additional indications. © 2017 The Author(s).

  18. Integration and analysis of neighbor discovery and link quality estimation in wireless sensor networks.

    Science.gov (United States)

    Radi, Marjan; Dezfouli, Behnam; Abu Bakar, Kamalrulnizam; Abd Razak, Shukor

    2014-01-01

    Network connectivity and link quality information are the fundamental requirements of wireless sensor network protocols to perform their desired functionality. Most of the existing discovery protocols have only focused on the neighbor discovery problem, while only a few of them provide integrated neighbor search and link estimation. As these protocols require careful parameter adjustment before network deployment, they cannot provide scalable and accurate network initialization in large-scale dense wireless sensor networks with random topology. Furthermore, the performance of these protocols has not yet been fully evaluated. In this paper, we perform a comprehensive simulation study on the efficiency of employing adaptive protocols compared to the existing nonadaptive protocols for initializing sensor networks with random topology. In this regard, we propose adaptive network initialization protocols which integrate the initial neighbor discovery with the link quality estimation process to initialize large-scale dense wireless sensor networks without requiring any parameter adjustment before network deployment. To the best of our knowledge, this work is the first attempt to provide a detailed simulation study on the performance of integrated neighbor discovery and link quality estimation protocols for initializing sensor networks. This study can help system designers to determine the most appropriate approach for different applications.

  19. Integration and Analysis of Neighbor Discovery and Link Quality Estimation in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Marjan Radi

    2014-01-01

    Full Text Available Network connectivity and link quality information are the fundamental requirements of wireless sensor network protocols to perform their desired functionality. Most of the existing discovery protocols have only focused on the neighbor discovery problem, while only a few of them provide integrated neighbor search and link estimation. As these protocols require careful parameter adjustment before network deployment, they cannot provide scalable and accurate network initialization in large-scale dense wireless sensor networks with random topology. Furthermore, the performance of these protocols has not yet been fully evaluated. In this paper, we perform a comprehensive simulation study on the efficiency of employing adaptive protocols compared to the existing nonadaptive protocols for initializing sensor networks with random topology. In this regard, we propose adaptive network initialization protocols which integrate the initial neighbor discovery with the link quality estimation process to initialize large-scale dense wireless sensor networks without requiring any parameter adjustment before network deployment. To the best of our knowledge, this work is the first attempt to provide a detailed simulation study on the performance of integrated neighbor discovery and link quality estimation protocols for initializing sensor networks. This study can help system designers to determine the most appropriate approach for different applications.

  20. "Tacit Knowledge" versus "Explicit Knowledge"

    DEFF Research Database (Denmark)

    Sanchez, Ron

    creators and carriers. By contrast, the explicit knowledge approach emphasizes processes for articulating knowledge held by individuals, the design of organizational approaches for creating new knowledge, and the development of systems (including information systems) to disseminate articulated knowledge...

  1. A Task-Based Approach to Organization: Knowledge, Communication and Structure

    OpenAIRE

    Luis Garicano; Yanhui Wu

    2010-01-01

    We bridge a gap between organizational economics and strategy research by developing a task-based approach to analyze organizational knowledge, process and structure, and deriving testable implications for the relation between production and organizational structure. We argue that organization emerges to integrate disperse knowledge and to coordinate talent in production and is designed to complement the limitations of human ability. The complexity of the tasks undertaken determines the optim...

  2. Knowledge mobilisation for policy development: implementing systems approaches through participatory dynamic simulation modelling.

    Science.gov (United States)

    Freebairn, Louise; Rychetnik, Lucie; Atkinson, Jo-An; Kelly, Paul; McDonnell, Geoff; Roberts, Nick; Whittall, Christine; Redman, Sally

    2017-10-02

    Evidence-based decision-making is an important foundation for health policy and service planning decisions, yet there remain challenges in ensuring that the many forms of available evidence are considered when decisions are being made. Mobilising knowledge for policy and practice is an emergent process, and one that is highly relational, often messy and profoundly context dependent. Systems approaches, such as dynamic simulation modelling can be used to examine both complex health issues and the context in which they are embedded, and to develop decision support tools. This paper reports on the novel use of participatory simulation modelling as a knowledge mobilisation tool in Australian real-world policy settings. We describe how this approach combined systems science methodology and some of the core elements of knowledge mobilisation best practice. We describe the strategies adopted in three case studies to address both technical and socio-political issues, and compile the experiential lessons derived. Finally, we consider the implications of these knowledge mobilisation case studies and provide evidence for the feasibility of this approach in policy development settings. Participatory dynamic simulation modelling builds on contemporary knowledge mobilisation approaches for health stakeholders to collaborate and explore policy and health service scenarios for priority public health topics. The participatory methods place the decision-maker at the centre of the process and embed deliberative methods and co-production of knowledge. The simulation models function as health policy and programme dynamic decision support tools that integrate diverse forms of evidence, including research evidence, expert knowledge and localised contextual information. Further research is underway to determine the impact of these methods on health service decision-making.

  3. A Knowledge Management Approach to Support Software Process Improvement Implementation Initiatives

    Science.gov (United States)

    Montoni, Mariano Angel; Cerdeiral, Cristina; Zanetti, David; Cavalcanti da Rocha, Ana Regina

    The success of software process improvement (SPI) implementation initiatives depends fundamentally on the strategies adopted to support the execution of such initiatives. Therefore, it is essential to define adequate SPI implementation strategies aiming to facilitate the achievement of organizational business goals and to increase the benefits of process improvements. The objective of this work is to present an approach to support the execution of SPI implementation initiatives. We also describe a methodology applied to capture knowledge related to critical success factors that influence SPI initiatives. This knowledge was used to define effective SPI strategies aiming to increase the success of SPI initiatives coordinated by a specific SPI consultancy organization. This work also presents the functionalities of a set of tools integrated into a process-centered knowledge management environment, named CORE-KM, customized to support the presented approach.

  4. A Bioinorganic Approach to Fragment-Based Drug Discovery Targeting Metalloenzymes.

    Science.gov (United States)

    Cohen, Seth M

    2017-08-15

    Metal-dependent enzymes (i.e., metalloenzymes) make up a large fraction of all enzymes and are critically important in a wide range of biological processes, including DNA modification, protein homeostasis, antibiotic resistance, and many others. Consequently, metalloenzymes represent a vast and largely untapped space for drug development. The discovery of effective therapeutics that target metalloenzymes lies squarely at the interface of bioinorganic and medicinal chemistry and requires expertise, methods, and strategies from both fields to mount an effective campaign. In this Account, our research program that brings together the principles and methods of bioinorganic and medicinal chemistry is described, in an effort to bridge the gap between these fields and address an important class of medicinal targets. Fragment-based drug discovery (FBDD) is an important drug discovery approach that is particularly well suited for metalloenzyme inhibitor development. FBDD uses relatively small but diverse chemical structures that allow for the assembly of privileged molecular collections that focus on a specific feature of the target enzyme. For metalloenzyme inhibition, the specific feature is rather obvious, namely, a metal-dependent active site. Surprisingly, prior to our work, diverse molecular fragments for binding the metal active sites of metalloenzymes had been largely unexplored. By assembling a modest library of metal-binding pharmacophores (MBPs), we have been able to find lead hits for many metalloenzymes and, from these hits, develop inhibitors that act via novel mechanisms of action. A specific case study on the use of this strategy to identify a first-in-class inhibitor of zinc-dependent Rpn11 (a component of the proteasome) is highlighted. The application of FBDD for the development of metalloenzyme inhibitors has raised several other compelling questions, such as how the metalloenzyme active site influences the coordination chemistry of bound

  5. A Semantic Lexicon-Based Approach for Sense Disambiguation and Its WWW Application

    Science.gov (United States)

    di Lecce, Vincenzo; Calabrese, Marco; Soldo, Domenico

    This work proposes a basic framework for resolving sense disambiguation through the use of a Semantic Lexicon, a machine-readable dictionary managing both word senses and lexico-semantic relations. More specifically, the polysemous ambiguity characterizing Web documents is discussed. The adopted Semantic Lexicon is WordNet, a lexical knowledge base of English words widely adopted in many research studies referring to knowledge discovery. The proposed approach extends recent work on knowledge discovery by focusing on the sense disambiguation aspect. By exploiting the structure of the WordNet database, lexico-semantic features are used to resolve the inherent sense ambiguity of written text, with particular reference to HTML resources. The obtained results may be extended to generic hypertextual repositories as well. Experiments show that polysemy reduction can be used to hint at the meaning of specific senses in given contexts.
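
    A minimal WordNet-based disambiguation example in the same spirit (using NLTK's simplified Lesk algorithm rather than the framework proposed in the paper) is shown below; it requires nltk with the 'wordnet' corpus downloaded.

        from nltk.corpus import wordnet as wn
        from nltk.wsd import lesk

        sentence = "The bank approved the loan after reviewing the account history"
        context = sentence.lower().split()

        # Pick the WordNet noun sense of "bank" whose gloss overlaps most with the context.
        sense = lesk(context, "bank", pos=wn.NOUN)
        if sense is not None:
            print(sense.name(), "-", sense.definition())
            # Lexico-semantic relations available for the chosen sense.
            print("hypernyms:", [h.name() for h in sense.hypernyms()])
            print("synonyms :", [l.name() for l in sense.lemmas()])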

  6. An approach to build a knowledge base for reactor diagnostic system using statistical method

    International Nuclear Information System (INIS)

    Yokobayashi, Masao; Matsumoto, Kiyoshi; Kohsaka, Atsuo

    1988-01-01

    In the development of a rule-based expert system, one of the key issues is how to acquire knowledge and build a knowledge base. When the knowledge base of DISKET, an expert system for nuclear reactor accident diagnosis developed at the Japan Atomic Energy Research Institute, was built, several problems were experienced. Writing rules is a time-consuming task, and it was difficult to keep the rules objective and consistent as their number increased. Certainty factors must often be determined according to engineering judgement, i.e. empirically or intuitively. A systematic approach was attempted to cope with these difficulties and to build an objective knowledge base efficiently. The approach described in this paper is based on the concept that a prototype knowledge base, colloquially speaking an initial guess, should first be generated in a systematic way, and then modified or improved by human experts for practical use. Factor analysis was used as the systematic method. The DISKET system, the procedure for building a knowledge base, and the verification of the approach are reported. (Kako, I.)
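
    The factor-analysis step can be illustrated with a short sketch on synthetic data; the symptom matrix below is a placeholder and is not derived from DISKET's actual transient data.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        # Synthetic plant-transient data: rows are simulated transients, columns are
        # graded alarm/symptom indicators generated from three hidden event types.
        rng = np.random.default_rng(0)
        latent = rng.normal(size=(200, 3))
        mixing = rng.normal(size=(3, 12))
        symptoms = latent @ mixing + 0.3 * rng.normal(size=(200, 12))

        fa = FactorAnalysis(n_components=3, random_state=0).fit(symptoms)
        loadings = fa.components_                       # shape (3 factors, 12 symptoms)

        # Symptoms loading strongly on the same factor tend to co-occur and can seed
        # prototype diagnostic rules, to be reviewed and refined by human experts.
        for f, row in enumerate(loadings):
            top = np.argsort(np.abs(row))[::-1][:4]
            print(f"factor {f}: strongest symptoms {top.tolist()}")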

  7. Data mining-aided materials discovery and optimization

    Directory of Open Access Journals (Sweden)

    Wencong Lu

    2017-09-01

    Full Text Available Recent developments in data mining-aided materials discovery and optimization are reviewed in this paper, and an introduction to the materials data mining (MDM) process is provided using case studies. Both qualitative and quantitative methods in machine learning can be adopted in the MDM process to accomplish different tasks in materials discovery, design, and optimization. State-of-the-art techniques in data mining-aided materials discovery and optimization are demonstrated by reviewing the controllable synthesis of dendritic Co3O4 superstructures, materials design of layered double hydroxide, battery materials discovery, and thermoelectric materials design. The results of the case studies indicate that MDM is a powerful approach for use in materials discovery and innovation, and will play an important role in the development of the Materials Genome Initiative and Materials Informatics.

  8. In Silico Mining for Antimalarial Structure-Activity Knowledge and Discovery of Novel Antimalarial Curcuminoids

    Directory of Open Access Journals (Sweden)

    Birgit Viira

    2016-06-01

    Full Text Available Malaria is a parasitic tropical disease that kills around 600,000 patients every year. The emergence of Plasmodium falciparum parasites resistant to artemisinin-based combination therapies (ACTs) represents a significant public health threat, indicating the urgent need for new effective compounds to reverse ACT resistance and cure the disease. For this, extensive curation and homogenization of experimental anti-Plasmodium screening data from both in-house and ChEMBL sources were conducted. As a result, a coherent strategy was established that allowed compiling coherent training sets that associate compound structures with the respective antimalarial activity measurements. Seventeen of these training sets led to the successful generation of classification models discriminating whether a compound has a significant probability of being active under the specific conditions of the antimalarial test associated with each set. These models were used in consensus prediction of the most likely active compounds from a series of curcuminoids available in-house. Positive predictions, together with a few compounds predicted as inactive, were then submitted to experimental in vitro antimalarial testing. A large majority of the predicted compounds showed antimalarial activity, but not those predicted as inactive, thus experimentally validating the in silico screening approach. The herein proposed consensus machine learning approach showed its potential to reduce the cost and duration of antimalarial drug discovery.

  9. In Silico Mining for Antimalarial Structure-Activity Knowledge and Discovery of Novel Antimalarial Curcuminoids.

    Science.gov (United States)

    Viira, Birgit; Gendron, Thibault; Lanfranchi, Don Antoine; Cojean, Sandrine; Horvath, Dragos; Marcou, Gilles; Varnek, Alexandre; Maes, Louis; Maran, Uko; Loiseau, Philippe M; Davioud-Charvet, Elisabeth

    2016-06-29

    Malaria is a parasitic tropical disease that kills around 600,000 patients every year. The emergence of Plasmodium falciparum parasites resistant to artemisinin-based combination therapies (ACTs) represents a significant public health threat, indicating the urgent need for new effective compounds to reverse ACT resistance and cure the disease. For this, extensive curation and homogenization of experimental anti-Plasmodium screening data from both in-house and ChEMBL sources were conducted. As a result, a coherent strategy was established that allowed compiling coherent training sets that associate compound structures with the respective antimalarial activity measurements. Seventeen of these training sets led to the successful generation of classification models discriminating whether a compound has a significant probability of being active under the specific conditions of the antimalarial test associated with each set. These models were used in consensus prediction of the most likely active compounds from a series of curcuminoids available in-house. Positive predictions, together with a few compounds predicted as inactive, were then submitted to experimental in vitro antimalarial testing. A large majority of the predicted compounds showed antimalarial activity, but not those predicted as inactive, thus experimentally validating the in silico screening approach. The herein proposed consensus machine learning approach showed its potential to reduce the cost and duration of antimalarial drug discovery.
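
    The consensus-prediction idea can be sketched with generic scikit-learn classifiers; the descriptors below are synthetic stand-ins, and the models and thresholds are illustrative rather than those trained on the curated ChEMBL/in-house sets described above.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        # Synthetic stand-in for molecular descriptors labelled active/inactive.
        X, y = make_classification(n_samples=500, n_features=64, n_informative=12, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        models = [
            LogisticRegression(max_iter=1000),
            RandomForestClassifier(n_estimators=200, random_state=0),
            SVC(probability=True, random_state=0),
        ]
        for m in models:
            m.fit(X_tr, y_tr)

        # Consensus: average predicted activity probabilities and flag compounds
        # that a majority of the models predict to be active.
        proba = np.mean([m.predict_proba(X_te)[:, 1] for m in models], axis=0)
        votes = np.mean([m.predict(X_te) for m in models], axis=0)
        consensus_active = votes >= 0.5
        print("flagged active by consensus:", int(consensus_active.sum()))
        print("top-ranked candidates:", np.argsort(proba)[::-1][:5].tolist())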

  10. Knowledge Discovery in Chess Using an Aesthetics Approach

    Science.gov (United States)

    Iqbal, Azlan

    2012-01-01

    Computational aesthetics is a relatively new subfield of artificial intelligence (AI). It includes research that enables computers to "recognize" (and evaluate) beauty in various domains such as visual art, music, and games. Aside from the benefit this gives to humans in terms of creating and appreciating art in these domains, there are perhaps…

  11. Discovery informatics in biological and biomedical sciences: research challenges and opportunities.

    Science.gov (United States)

    Honavar, Vasant

    2015-01-01

    New discoveries in biological, biomedical and health sciences are increasingly being driven by our ability to acquire, share, integrate and analyze data, and to construct and simulate predictive models of biological systems. While much attention has focused on automating routine aspects of management and analysis of "big data", realizing the full potential of "big data" to accelerate discovery calls for automating many other aspects of the scientific process that have so far largely resisted automation: identifying gaps in the current state of knowledge; generating and prioritizing questions; designing studies; designing, prioritizing, planning, and executing experiments; interpreting results; forming hypotheses; drawing conclusions; replicating studies; validating claims; documenting studies; communicating results; reviewing results; and integrating results into the larger body of knowledge in a discipline. Against this background, the PSB workshop on Discovery Informatics in Biological and Biomedical Sciences explores the opportunities and challenges of automating discovery, or assisting humans in discovery, through advances in (i) the understanding, formalization, and information-processing accounts of the entire scientific process; (ii) the design, development, and evaluation of the computational artifacts (representations, processes) that embody such understanding; and (iii) the application of the resulting artifacts and systems to advance science (by augmenting individual or collective human efforts, or by fully automating science).

  12. Knowledge management as an approach to strengthen safety culture in nuclear organizations

    International Nuclear Information System (INIS)

    Karseka, T.S.; Yanev, Y.L.

    2013-01-01

    In the last 10 years knowledge management (KM) in nuclear organizations has emerged as a powerful strategy to deal with important and frequently critical issues of attrition, generation change and knowledge transfer. Applying KM practices in operating organizations, technical support organizations and regulatory bodies has proven to be efficient and necessary for maintaining competence and skills for achieving a high level of safety and operational performance. The IAEA defines KM as an integrated, systematic approach to identifying, acquiring, transforming, developing, disseminating, using, sharing, and preserving knowledge, relevant to achieving specified objectives. KM focuses on people and organizational culture to stimulate and nurture the sharing and use of knowledge; on processes or methods to find, create, capture and share knowledge; and on technology to store and assimilate knowledge and to make it readily accessible in a manner which will allow people to work together even if they are not located together. A main objective of this paper is to describe constructive actions which can foster knowledge sharing and a safety-conscious attitude among all employees. All principles and approaches refer primarily to Nuclear Power Plant (NPP) operating organizations but are also applicable to other institutions involved in the nuclear sector. (orig.)

  13. Knowledge management as an approach to strengthen safety culture in nuclear organizations

    Energy Technology Data Exchange (ETDEWEB)

    Karseka, T.S.; Yanev, Y.L. [International Atomic Energy Agency, Vienna (Austria). Nuclear Energy Dept.

    2013-04-15

    In the last 10 years knowledge management (KM) in nuclear organizations has emerged as a powerful strategy to deal with important and frequently critical issues of attrition, generation change and knowledge transfer. Applying KM practices in operating organizations, technical support organizations and regulatory bodies has proven to be efficient and necessary for maintaining competence and skills for achieving a high level of safety and operational performance. The IAEA defines KM as an integrated, systematic approach to identifying, acquiring, transforming, developing, disseminating, using, sharing, and preserving knowledge, relevant to achieving specified objectives. KM focuses on people and organizational culture to stimulate and nurture the sharing and use of knowledge; on processes or methods to find, create, capture and share knowledge; and on technology to store and assimilate knowledge and to make it readily accessible in a manner which will allow people to work together even if they are not located together. A main objective of this paper is to describe constructive actions which can foster knowledge sharing and a safety-conscious attitude among all employees. All principles and approaches refer primarily to Nuclear Power Plant (NPP) operating organizations but are also applicable to other institutions involved in the nuclear sector. (orig.)

  14. A Knowledge Model Sharing Based Approach to Privacy-Preserving Data Mining

    OpenAIRE

    Hongwei Tian; Weining Zhang; Shouhuai Xu; Patrick Sharkey

    2012-01-01

    Privacy-preserving data mining (PPDM) is an important problem and is currently studied in three approaches: the cryptographic approach, data publishing, and model publishing. However, each of these approaches has some problems. The cryptographic approach does not protect the privacy of learned knowledge models and may have performance and scalability issues. Data publishing, although popular, may suffer from too much utility loss for certain types of data mining applications. The m...

  15. Mathematical knowledge in teaching of fraction concepts using diagrammatical approach

    Science.gov (United States)

    Veloo, Palanisamy Kathir; Puteh, Marzita

    2017-05-01

    Teachers need various types of knowledge in order to deliver various fraction concepts at the elementary level. In this paper, Ball's framework (2008), or Mathematical Knowledge for Teaching (MKT), is used as a benchmark guideline. The paper investigates and explores components of MKT knowledge among eight experienced primary school teachers. Data were collected using a paper-and-pencil test, interviews and video recordings. The paper focuses on teachers' knowledge and their practices while teaching various fraction concepts using a diagrammatic approach, viewed through the lens of MKT. The data gathered from the teachers were analyzed using thematic analysis techniques. The results indicated that the teachers lack various components of the MKT knowledge proposed by researchers, and assumed that procedural teaching is more than enough, owing to a lack of deep understanding of mathematics and to current classroom practices in which the various types of MKT are not seen as required.

  16. Open Knowledge Maps: Creating a Visual Interface to the World’s Scientific Knowledge Based on Natural Language Processing

    Directory of Open Access Journals (Sweden)

    Peter Kraker

    2016-11-01

    Full Text Available The goal of Open Knowledge Maps is to create a visual interface to the world’s scientific knowledge. The base for this visual interface consists of so-called knowledge maps, which enable the exploration of existing knowledge and the discovery of new knowledge. Our open source knowledge mapping software applies a mixture of summarization techniques and similarity measures on article metadata, which are iteratively chained together. After processing, the representation is saved in a database for use in a web visualization. In the future, we want to create a space for collective knowledge mapping that brings together individuals and communities involved in exploration and discovery. We want to enable people to guide each other in their discovery by collaboratively annotating and modifying the automatically created maps.
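
    A rough sketch of the metadata-based mapping step (TF-IDF similarity plus clustering, standing in for the summarization pipeline described above; the example documents are invented) could look like this:

        from sklearn.cluster import KMeans
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Hypothetical article metadata (title text); a real map would use search-result metadata.
        docs = [
            "Deep learning for protein structure prediction",
            "Convolutional networks in medical image segmentation",
            "Knowledge graphs for literature-based discovery",
            "Ontology-driven integration of biomedical databases",
        ]

        tfidf = TfidfVectorizer(stop_words="english")
        X = tfidf.fit_transform(docs)

        sim = cosine_similarity(X)                  # document-document similarity
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        print(sim.round(2))
        print("cluster per document:", labels.tolist())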

  17. Building Scalable Knowledge Graphs for Earth Science

    Science.gov (United States)

    Ramachandran, Rahul; Maskey, Manil; Gatlin, Patrick; Zhang, Jia; Duan, Xiaoyi; Miller, J. J.; Bugbee, Kaylin; Christopher, Sundar; Freitag, Brian

    2017-01-01

    Knowledge Graphs link key entities in a specific domain with other entities via relationships. From these relationships, researchers can query knowledge graphs for probabilistic recommendations to infer new knowledge. Scientific papers are an untapped resource which knowledge graphs could leverage to accelerate research discovery. Goal: Develop an end-to-end (semi) automated methodology for constructing Knowledge Graphs for Earth Science.
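
    A toy version of such a graph and a simple relationship-chaining query can be sketched with networkx; the entities, relations and paper identifier below are illustrative and are not taken from the system described here.

        import networkx as nx

        # Entities as nodes, typed relations as edge keys.
        G = nx.MultiDiGraph()
        G.add_edge("GPM", "precipitation", key="measures")
        G.add_edge("TRMM", "precipitation", key="measures")
        G.add_edge("precipitation", "flood forecasting", key="used_in")
        G.add_edge("Smith2016", "GPM", key="uses_dataset")

        def related_applications(graph, paper):
            """Papers using a dataset that measures a variable may be relevant to
            applications of that variable (a simple two-hop inference)."""
            apps = set()
            for _, dataset, rel in graph.out_edges(paper, keys=True):
                if rel != "uses_dataset":
                    continue
                for _, variable, rel2 in graph.out_edges(dataset, keys=True):
                    if rel2 != "measures":
                        continue
                    for _, app, rel3 in graph.out_edges(variable, keys=True):
                        if rel3 == "used_in":
                            apps.add(app)
            return apps

        print(related_applications(G, "Smith2016"))   # {'flood forecasting'}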

  18. 75 FR 66766 - NIAID Blue Ribbon Panel Meeting on Adjuvant Discovery and Development

    Science.gov (United States)

    2010-10-29

    ..., identifies gaps in knowledge and capabilities, and defines NIAID's goals for the continued discovery...), will convene a Blue Ribbon Panel to provide expertise in developing a strategic plan and research... vaccines. NIAID has developed a draft Strategic Plan and Research Agenda for Adjuvant Discovery and...

  19. Multi-dimensional discovery of biomarker and phenotype complexes

    Directory of Open Access Journals (Sweden)

    Huang Kun

    2010-10-01

    Full Text Available Abstract Background Given the rapid growth of translational research and personalized healthcare paradigms, the ability to relate and reason upon networks of bio-molecular and phenotypic variables at various levels of granularity in order to diagnose, stage and plan treatments for disease states is highly desirable. Numerous techniques exist that can be used to develop networks of co-expressed or otherwise related genes and clinical features. Such techniques can also be used to create formalized knowledge collections based upon the information contained in ontologies and domain literature. However, reports of integrative approaches that bridge such networks to create systems-level models of disease or wellness are notably lacking in the contemporary literature. Results In response to the preceding gap in knowledge and practice, we report upon a prototypical series of experiments that utilize multi-modal approaches to network induction. These experiments are intended to elicit meaningful and significant biomarker-phenotype complexes spanning multiple levels of granularity. This work has been performed in the experimental context of a large-scale clinical and basic science data repository maintained by the National Cancer Institute (NCI)-funded Chronic Lymphocytic Leukemia Research Consortium. Conclusions Our results indicate that it is computationally tractable to link orthogonal networks of genes, clinical features, and conceptual knowledge to create multi-dimensional models of interrelated biomarkers and phenotypes. Further, our results indicate that such systems-level models contain interrelated bio-molecular and clinical markers capable of supporting hypothesis discovery and testing. Based on such findings, we propose a conceptual model intended to inform the cross-linkage of the results of such methods. This model has as its aim the identification of novel and knowledge-anchored biomarker-phenotype complexes.

  20. Knowledge management of eco-industrial park for efficient energy utilization through ontology-based approach

    International Nuclear Information System (INIS)

    Zhang, Chuan; Romagnoli, Alessandro; Zhou, Li; Kraft, Markus

    2017-01-01

    Highlights: •An intelligent energy management system for an Eco-Industrial Park (EIP) is proposed. •An explicit domain ontology for EIP energy management is designed. •The ontology-based approach can increase knowledge interoperability within the EIP. •The ontology-based approach can allow self-optimization without human intervention in the EIP. •The proposed system harbours huge potential in the future scenario of the Internet of Things. -- Abstract: An ontology-based approach for Eco-Industrial Park (EIP) knowledge management is proposed in this paper. The designed ontology in this study is a formalized conceptualization of the EIP. Based on such an ontological representation, a Knowledge-Based System (KBS) for EIP energy management named J-Park Simulator (JPS) is developed. By applying JPS to the solution of an EIP waste heat utilization problem, the results of this study show that ontology is a powerful tool for knowledge management of complex systems such as EIPs. The ontology-based approach can increase knowledge interoperability between different companies in the EIP. The ontology-based approach can also allow intelligent decision making by using disparate data from remote databases, which implies the possibility of self-optimization without human intervention in an Internet of Things (IoT) scenario. It is shown through this study that the KBS can bridge the communication gaps between different companies in the EIP, so that more potential Industrial Symbiosis (IS) links can be established to improve the overall energy efficiency of the whole EIP.
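
    A small, hedged sketch of such an ontology (using the owlready2 library; the classes, properties and company names are invented and far simpler than the JPS ontology) might look like this:

        from owlready2 import Thing, get_ontology

        onto = get_ontology("http://example.org/eip.owl")   # hypothetical IRI

        with onto:
            class Company(Thing): pass
            class EnergyStream(Thing): pass
            class produces(Company >> EnergyStream): pass
            class consumes(Company >> EnergyStream): pass

        refinery = Company("refinery")
        power_plant = Company("power_plant")
        waste_heat = EnergyStream("low_grade_waste_heat")
        refinery.produces = [waste_heat]
        power_plant.consumes = [waste_heat]

        # A potential industrial-symbiosis link exists when one company's output
        # stream matches another company's input stream.
        links = [(p.name, c.name, s.name)
                 for s in EnergyStream.instances()
                 for p in Company.instances() if s in p.produces
                 for c in Company.instances() if s in c.consumes and c is not p]
        print(links)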

  1. "Discoveries in Planetary Sciences": Slide Sets Highlighting New Advances for Astronomy Educators

    Science.gov (United States)

    Brain, D. A.; Schneider, N. M.; Beyer, R. A.

    2010-12-01

    Planetary science is a field that evolves rapidly, motivated by spacecraft mission results. Exciting new mission results are generally communicated rather quickly to the public in the form of press releases and news stories, but it can take several years for new advances to work their way into college textbooks. Yet it is important for students to have exposure to these new advances for a number of reasons. In some cases, new work renders older textbook knowledge incorrect or incomplete. In some cases, new discoveries make it possible to emphasize older textbook knowledge in a new way. In all cases, new advances provide exciting and accessible examples of the scientific process in action. To bridge the gap between textbooks and new advances in planetary sciences we have developed content on new discoveries for use by undergraduate instructors. Called 'Discoveries in Planetary Sciences', each new discovery is summarized in a 3-slide PowerPoint presentation. The first slide describes the discovery, the second slide discusses the underlying planetary science concepts, and the third presents the big picture implications of the discovery. A fourth slide includes links to associated press releases, images, and primary sources. This effort is generously sponsored by the Division for Planetary Sciences of the American Astronomical Society, and the slide sets are available at http://dps.aas.org/education/dpsdisc/. Sixteen slide sets have been released so far covering topics spanning all sub-disciplines of planetary science. Results from the following spacecraft missions have been highlighted: MESSENGER, the Spirit and Opportunity rovers, Cassini, LCROSS, EPOXI, Chandrayaan, Mars Reconnaissance Orbiter, Mars Express, and Venus Express. Additionally, new results from Earth-orbiting and ground-based observing platforms and programs such as Hubble, Keck, IRTF, the Catalina Sky Survey, HARPS, MEarth, Spitzer, and amateur astronomers have been highlighted. 4-5 new slide sets are

  2. Problem-Oriented Corporate Knowledge Base Models on the Case-Based Reasoning Approach Basis

    Science.gov (United States)

    Gluhih, I. N.; Akhmadulin, R. K.

    2017-07-01

    One promising direction for enhancing the efficiency of production processes and enterprise management is the creation and use of corporate knowledge bases. The article suggests a concept of problem-oriented corporate knowledge bases (PO CKB), in which knowledge is arranged around possible problem situations and represents a tool for making and implementing decisions in such situations. For knowledge representation in PO CKB, the use of a case-based reasoning approach is proposed. Under this approach, the content of a case as a knowledge base component has been defined; based on the situation tree, a PO CKB knowledge model has been developed, in which knowledge about typical situations as well as specific examples of situations and solutions is represented. A generalized problem-oriented corporate knowledge base structural chart and possible modes of its operation have been suggested. The obtained models allow creating and using corporate knowledge bases for the support of decision making and implementation, training, staff skill upgrading and analysis of the decisions taken. The universal interpretation of the terms “situation” and “solution” adopted in the work allows the suggested models to be used to develop problem-oriented corporate knowledge bases in different subject domains. It has been suggested to use the developed models for building corporate knowledge bases for enterprises that operate engineering systems and networks at large production facilities.
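
    The retrieve step of case-based reasoning in such a knowledge base can be sketched as a simple nearest-case lookup; the situation attributes and solutions below are invented examples, not content from the proposed PO CKB.

        from dataclasses import dataclass

        @dataclass
        class Case:
            situation: dict          # attribute -> value describing the problem situation
            solution: str            # decision taken / recommended actions

        case_base = [
            Case({"system": "pump", "symptom": "vibration", "severity": "high"},
                 "Stop pump, inspect bearings, switch to standby unit"),
            Case({"system": "pump", "symptom": "leak", "severity": "low"},
                 "Tighten gland packing, schedule seal replacement"),
            Case({"system": "pipeline", "symptom": "pressure drop", "severity": "high"},
                 "Isolate section, dispatch inspection crew"),
        ]

        def similarity(query, case):
            """Fraction of query attributes that match the stored case."""
            return sum(query[k] == case.situation.get(k) for k in query) / len(query)

        def retrieve(query, cases, top_n=1):
            return sorted(cases, key=lambda c: similarity(query, c), reverse=True)[:top_n]

        query = {"system": "pump", "symptom": "vibration", "severity": "medium"}
        for c in retrieve(query, case_base):
            print(c.solution)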

  3. Knowledge-based biomedical word sense disambiguation: comparison of approaches

    Directory of Open Access Journals (Sweden)

    Aronson Alan R

    2010-11-01

    Full Text Available Abstract Background Word sense disambiguation (WSD) algorithms attempt to select the proper sense of ambiguous terms in text. Resources like the UMLS provide a reference thesaurus to be used to annotate the biomedical literature. Statistical learning approaches have produced good results, but the size of the UMLS makes it infeasible to produce training data covering the entire domain. Methods We present research on existing WSD approaches based on knowledge bases, which complement the studies performed on statistical learning. We compare four approaches which rely on the UMLS Metathesaurus as the source of knowledge. The first approach compares the overlap of the context of the ambiguous word to the candidate senses based on a representation built out of the definitions, synonyms and related terms. The second approach collects training data for each of the candidate senses to perform WSD based on queries built using monosemous synonyms and related terms. These queries are used to retrieve MEDLINE citations. Then, a machine learning approach is trained on this corpus. The third approach is a graph-based method which exploits the structure of the Metathesaurus network of relations to perform unsupervised WSD. This approach ranks nodes in the graph according to their relative structural importance. The last approach uses the semantic types assigned to the concepts in the Metathesaurus to perform WSD. The context of the ambiguous word and semantic types of the candidate concepts are mapped to Journal Descriptors. These mappings are compared to decide among the candidate concepts. Results are provided estimating the accuracy of the different methods on the WSD test collection available from the NLM. Conclusions We have found that the last approach achieves better results compared to the other methods. The graph-based approach, using the structure of the Metathesaurus network to estimate the relevance of the Metathesaurus concepts, does not perform well
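
    The first (overlap-based) approach described above can be sketched in a few lines. This is a generic Lesk-style illustration, not the authors' implementation; the two senses of "cold" and their textual profiles are made up rather than taken from the UMLS Metathesaurus.

```python
# Generic Lesk-style sketch of the first (overlap-based) approach: score each
# candidate sense by the word overlap between the ambiguous term's context and
# a profile built from definitions, synonyms and related terms. The senses of
# "cold" below are invented, not UMLS Metathesaurus entries.

def tokens(text):
    return set(text.lower().split())

def overlap(context, sense_profile):
    return len(tokens(context) & tokens(sense_profile))

def disambiguate(context, senses):
    return max(senses, key=lambda name: overlap(context, senses[name]))

senses = {
    "common_cold": "acute viral infection rhinitis sneezing upper respiratory",
    "cold_temperature": "low temperature thermal sensation absence of heat",
}
context = "patient presented with sneezing and a mild upper respiratory infection"
print(disambiguate(context, senses))   # -> common_cold
```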

  4. Knowledge-based Fragment Binding Prediction

    Science.gov (United States)

    Tang, Grace W.; Altman, Russ B.

    2014-01-01

    Target-based drug discovery must assess many drug-like compounds for potential activity. Focusing on low-molecular-weight compounds (fragments) can dramatically reduce the chemical search space. However, approaches for determining protein-fragment interactions have limitations. Experimental assays are time-consuming, expensive, and not always applicable. At the same time, computational approaches using physics-based methods have limited accuracy. With increasing high-resolution structural data for protein-ligand complexes, there is now an opportunity for data-driven approaches to fragment binding prediction. We present FragFEATURE, a machine learning approach to predict small molecule fragments preferred by a target protein structure. We first create a knowledge base of protein structural environments annotated with the small molecule substructures they bind. These substructures have low molecular weight and serve as a proxy for fragments. FragFEATURE then compares the structural environments within a target protein to those in the knowledge base to retrieve statistically preferred fragments. It merges information across diverse ligands with shared substructures to generate predictions. Our results demonstrate FragFEATURE's ability to rediscover fragments corresponding to the bound ligand with 74% precision and 82% recall on average. For many protein targets, it identifies high scoring fragments that are substructures of known inhibitors. FragFEATURE thus predicts fragments that can serve as inputs to fragment-based drug design or serve as refinement criteria for creating target-specific compound libraries for experimental or computational screening. PMID:24762971
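
    The retrieval idea behind such a knowledge base can be illustrated with a toy sketch. This is not the FragFEATURE code: the feature vectors, fragment labels, and the use of cosine similarity are illustrative assumptions, standing in for the microenvironment descriptors and statistical scoring described in the paper.

```python
# Toy illustration (not the FragFEATURE code) of knowledge-base retrieval:
# describe a target protein microenvironment as a numeric feature vector,
# find the most similar annotated environments, and tally the fragments they
# bind. Vectors, fragment labels and cosine similarity are assumptions here.
import numpy as np
from collections import Counter

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Knowledge base entries: (environment feature vector, bound fragment).
knowledge_base = [
    (np.array([0.9, 0.1, 0.3]), "phenyl"),
    (np.array([0.8, 0.2, 0.4]), "phenyl"),
    (np.array([0.1, 0.9, 0.7]), "carboxylate"),
    (np.array([0.2, 0.8, 0.6]), "carboxylate"),
]

def preferred_fragments(target, kb, k=3):
    """Rank fragments by frequency among the k most similar environments."""
    ranked = sorted(kb, key=lambda entry: cosine(target, entry[0]), reverse=True)
    return Counter(frag for _, frag in ranked[:k]).most_common()

target_env = np.array([0.85, 0.15, 0.35])
print(preferred_fragments(target_env, knowledge_base))   # phenyl ranked first
```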

  5. Discovery Mondays

    CERN Multimedia

    2003-01-01

    Many people don't realise quite how much is going on at CERN. Would you like to gain first-hand knowledge of CERN's scientific and technological activities and their many applications? Try out some experiments for yourself, or pick the brains of the people in charge? If so, then the «Lundis Découverte» or Discovery Mondays, will be right up your street. Starting on May 5th, on every first Monday of the month you will be introduced to a different facet of the Laboratory. CERN staff, non-scientists, and members of the general public, everyone is welcome. So tell your friends and neighbours and make sure you don't miss this opportunity to satisfy your curiosity and enjoy yourself at the same time. You won't have to listen to a lecture, as the idea is to have open exchange with the expert in question and for each subject to be illustrated with experiments and demonstrations. There's no need to book, as Microcosm, CERN's interactive museum, will be open non-stop from 7.30 p.m. to 9 p.m. On the first Discovery M...

  6. The ecosystem approach as a framework for understanding knowledge utilisation

    OpenAIRE

    Roy Haines-Young; Marion Potschin

    2014-01-01

    The ecosystem approach is used to analyse four case studies from England to determine what kind of ecosystem knowledge was used by people, and how it shaped their arguments. The results are reported across decision-making venues concerned with: innovation, conflict management, maintenance of ecosystem function, and recognising the environment as an asset. In each area we identify the sources and uses of conceptual, instrumental, political, and social knowledge. We found that the use of these ...

  7. Chirality - The forthcoming 160th Anniversary of Pasteur's Discovery

    OpenAIRE

    Molčanov, K.; Kojić-Prodić., B.

    2007-01-01

    The presented review on chirality is dedicated to the centennial birth anniversary of Nobel laureate Vladimir Prelog and 160 years of Pasteur's discovery of chirality on tartrates. Chirality has been recognized in nature by artists and architects, who have used it for decorations and basic constructions, as shown in the Introduction. The progress of science through history has enabled the gathering of knowledge on chirality and its many ways of application. The key historical discoveries abou...

  8. Mining the Quantified Self: Personal Knowledge Discovery as a Challenge for Data Science.

    Science.gov (United States)

    Fawcett, Tom

    2015-12-01

    The last several years have seen an explosion of interest in wearable computing, personal tracking devices, and the so-called quantified self (QS) movement. Quantified self involves ordinary people recording and analyzing numerous aspects of their lives to understand and improve themselves. This is now a mainstream phenomenon, attracting a great deal of attention, participation, and funding. As more people are attracted to the movement, companies are offering various new platforms (hardware and software) that allow ever more aspects of daily life to be tracked. Nearly every aspect of the QS ecosystem is advancing rapidly, except for analytic capabilities, which remain surprisingly primitive. With increasing numbers of quantified self participants collecting ever greater amounts and types of data, many people literally have more data than they know what to do with. This article reviews the opportunities and challenges posed by the QS movement. Data science provides well-tested techniques for knowledge discovery. But making these useful for the QS domain poses unique challenges that derive from the characteristics of the data collected as well as the specific types of actionable insights that people want from the data. Using a small sample of QS time series data containing information about personal health, we provide a formulation of the QS problem that connects data to the decisions of interest to the user.

  9. Dual-acting of Hybrid Compounds - A New Dawn in the Discovery of Multi-target Drugs: Lead Generation Approaches.

    Science.gov (United States)

    Abdolmaleki, Azizeh; Ghasemi, Jahan B

    2017-01-01

    Finding high-quality starting compounds is a critical task at the beginning of the lead generation stage for multi-target drug discovery (MTDD). Designing hybrid compounds as selective multi-target chemical entities is a challenge, an opportunity, and a new way to act more effectively against specific multiple targets. A hybrid molecule is formed by the participation of two (or more) pharmacophore groups. These new compounds therefore often exhibit two or more activities, acting as multi-target drugs (mtdrugs), and may have superior safety or efficacy. Integrating a range of information with sophisticated new in silico, bioinformatics, structural biology, and pharmacogenomics methods may be useful for discovering, designing, and synthesizing new hybrid molecules. In this regard, many rational and screening approaches have been followed by medicinal chemists for lead generation in MTDD. Here, we review some popular lead generation approaches that have been used for designing multiple ligands (DMLs). This paper focuses on dual-acting chemical entities that incorporate parts of two drugs or bioactive compounds to compose hybrid molecules. It also presents some key concepts and the limitations/strengths of lead generation methods by comparing the combination framework method with screening approaches. In addition, a number of examples illustrating applications of hybrid molecules in drug discovery are included. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  10. Pediatrician's knowledge on the approach of functional constipation

    OpenAIRE

    Vieira, Mario C.; Negrelle, Isadora Carolina Krueger; Webber, Karla Ulaf; Gosdal, Marjorie; Truppel, Sabine Krüger; Kusma, Solena Ziemer

    2016-01-01

    Abstract Objective: To evaluate the pediatrician's knowledge regarding the diagnostic and therapeutic approach of childhood functional constipation. Methods: A descriptive cross-sectional study was performed with the application of a self-administered questionnaire concerning a hypothetical clinical case of childhood functional constipation with fecal incontinence to physicians (n=297) randomly interviewed at the 36th Brazilian Congress of Pediatrics in 2013. Results: The majority of the p...

  11. Viral pathogen discovery

    Science.gov (United States)

    Chiu, Charles Y

    2015-01-01

    Viral pathogen discovery is of critical importance to clinical microbiology, infectious diseases, and public health. Genomic approaches for pathogen discovery, including consensus polymerase chain reaction (PCR), microarrays, and unbiased next-generation sequencing (NGS), have the capacity to comprehensively identify novel microbes present in clinical samples. Although numerous challenges remain to be addressed, including the bioinformatics analysis and interpretation of large datasets, these technologies have been successful in rapidly identifying emerging outbreak threats, screening vaccines and other biological products for microbial contamination, and discovering novel viruses associated with both acute and chronic illnesses. Downstream studies such as genome assembly, epidemiologic screening, and a culture system or animal model of infection are necessary to establish an association of a candidate pathogen with disease. PMID:23725672

  12. From Mollusks to Medicine: A Venomics Approach for the Discovery and Characterization of Therapeutics from Terebridae Peptide Toxins

    Directory of Open Access Journals (Sweden)

    Aida Verdes

    2016-04-01

    Full Text Available Animal venoms comprise a diversity of peptide toxins that manipulate molecular targets such as ion channels and receptors, making venom peptides attractive candidates for the development of therapeutics to benefit human health. However, identifying bioactive venom peptides remains a significant challenge. In this review we describe our particular venomics strategy for the discovery, characterization, and optimization of Terebridae venom peptides, teretoxins. Our strategy reflects the scientific path from mollusks to medicine in an integrative sequential approach with the following steps: (1) delimitation of venomous Terebridae lineages through taxonomic and phylogenetic analyses; (2) identification and classification of putative teretoxins through omics methodologies, including genomics, transcriptomics, and proteomics; (3) chemical and recombinant synthesis of promising peptide toxins; (4) structural characterization through experimental and computational methods; (5) determination of teretoxin bioactivity and molecular function through biological assays and computational modeling; (6) optimization of peptide toxin affinity and selectivity to molecular target; and (7) development of strategies for effective delivery of venom peptide therapeutics. While our research focuses on terebrids, the venomics approach outlined here can be applied to the discovery and characterization of peptide toxins from any venomous taxa.

  13. IMG-ABC: A Knowledge Base To Fuel Discovery of Biosynthetic Gene Clusters and Novel Secondary Metabolites.

    Science.gov (United States)

    Hadjithomas, Michalis; Chen, I-Min Amy; Chu, Ken; Ratner, Anna; Palaniappan, Krishna; Szeto, Ernest; Huang, Jinghua; Reddy, T B K; Cimermančič, Peter; Fischbach, Michael A; Ivanova, Natalia N; Markowitz, Victor M; Kyrpides, Nikos C; Pati, Amrita

    2015-07-14

    In the discovery of secondary metabolites, analysis of sequence data is a promising exploration path that remains largely underutilized due to the lack of computational platforms that enable such a systematic approach on a large scale. In this work, we present IMG-ABC (https://img.jgi.doe.gov/abc), an atlas of biosynthetic gene clusters within the Integrated Microbial Genomes (IMG) system, which is aimed at harnessing the power of "big" genomic data for discovering small molecules. IMG-ABC relies on IMG's comprehensive integrated structural and functional genomic data for the analysis of biosynthetic gene clusters (BCs) and associated secondary metabolites (SMs). SMs and BCs serve as the two main classes of objects in IMG-ABC, each with a rich collection of attributes. A unique feature of IMG-ABC is the incorporation of both experimentally validated and computationally predicted BCs in genomes as well as metagenomes, thus identifying BCs in uncultured populations and rare taxa. We demonstrate the strength of IMG-ABC's focused integrated analysis tools in enabling the exploration of microbial secondary metabolism on a global scale, through the discovery of phenazine-producing clusters for the first time in Alphaproteobacteria. IMG-ABC strives to fill the long-existent void of resources for computational exploration of the secondary metabolism universe; its underlying scalable framework enables traversal of uncovered phylogenetic and chemical structure space, serving as a doorway to a new era in the discovery of novel molecules. IMG-ABC is the largest publicly available database of predicted and experimental biosynthetic gene clusters and the secondary metabolites they produce. The system also includes powerful search and analysis tools that are integrated with IMG's extensive genomic/metagenomic data and analysis tool kits. As new research on biosynthetic gene clusters and secondary metabolites is published and more genomes are sequenced, IMG-ABC will continue to

  14. A Qualitative Approach to Examining Knowledge Sharing in Iran Tax Administration Reform Program

    Directory of Open Access Journals (Sweden)

    Mehdi Shami Zanjanie

    2012-02-01

    Full Text Available The paper aims to examine the knowledge sharing infrastructure of the "Iran Tax Administration Reform Program". A qualitative approach using the case study method was applied in this research. In order to meet the research goal, four infrastructural dimensions of knowledge sharing were studied: leadership & strategy, culture, structure, and information technology. To the authors’ knowledge, this may be the first paper to examine knowledge sharing infrastructure in a program environment

  15. Rediscovering Don Swanson: The Past, Present and Future of Literature-based Discovery

    Directory of Open Access Journals (Sweden)

    Neil R. Smalheiser

    2017-12-01

    Full Text Available Purpose: The late Don R. Swanson was well appreciated during his lifetime as Dean of the Graduate Library School at University of Chicago, as winner of the American Society for Information Science Award of Merit for 2000, and as author of many seminal articles. In this informal essay, I will give my personal perspective on Don’s contributions to science, and outline some current and future directions in literature-based discovery that are rooted in concepts that he developed. Design/methodology/approach: Personal recollections and literature review. Findings: The Swanson A-B-C model of literature-based discovery has been successfully used by laboratory investigators analyzing their findings and hypotheses. It continues to be a fertile area of research in a wide range of application areas including text mining, drug repurposing, studies of scientific innovation, knowledge discovery in databases, and bioinformatics. Recently, additional modes of discovery that do not follow the A-B-C model have also been proposed and explored (e.g. so-called storytelling, gaps, analogies, link prediction, negative consensus, outliers, and revival of neglected or discarded research questions). Research limitations: This paper reflects the opinions of the author and is not a comprehensive nor technically based review of literature-based discovery. Practical implications: The general scientific public is still not aware of the availability of tools for literature-based discovery. Our Arrowsmith project site maintains a suite of discovery tools that are free and open to the public (http://arrowsmith.psych.uic.edu), as does BITOLA which is maintained by Dmitar Hristovski (http://ibmi.mf.uni-lj.si/bitola), and Epiphanet which is maintained by Trevor Cohen (http://epiphanet.uth.tmc.edu/). Bringing user-friendly tools to the public should be a high priority, since even more than advancing basic research in informatics, it is vital that we ensure that scientists

  16. How is the Current Nano/Microscopic Knowledge Implemented in Model Approaches?

    International Nuclear Information System (INIS)

    Rotenberg, Benjamin

    2013-01-01

    Recent developments in experimental techniques have opened new opportunities and challenges for the modelling and simulation of clay materials, on various scales. In this communication, several aspects of the interaction between experimental and modelling approaches will be presented and discussed. What levels of modelling are available depending on the target property and what experimental input is required? How can experimental information be used to validate models? What knowledge can modelling on different scales bring to our understanding of the physical properties of clays? Finally, what can we do when experimental information is not available? Models implement the current nano/microscopic knowledge using experimental input, taking advantage of multi-scale approaches, and providing data or insights complementary to experiments. Future work will greatly benefit from the recent experimental developments, in particular for 3D-imaging on intermediate scales, and should also address other properties, e.g. mechanical or thermal properties. (authors)

  17. On the threshold of discovery

    International Nuclear Information System (INIS)

    Cherenkov, P.A.

    1986-01-01

    The author, the discoverer of Cherenkov radiation, recalls some interesting circumstances of his discovery 50 years ago and puts it into the context of the knowledge of the period. The discovery of Cherenkov radiation, which today is used in practice especially for the detection of charged particles, was correctly understood and appreciated somewhat belatedly. At first the discovery was met with distrust and the original article announcing it was rejected by the magazine Nature. In effect, the discovery was not the result of any planned experiment but was a by-product of other research. It was, of course, made possible by previous achievements in various fields of physics, namely the progress made in the study of luminescence by S.I. Vavilov and his pupils. The discovery was made during an experimental study of luminescence induced in liquids by the β and γ radiations of uranyl salts. During his attempts to suppress the background radiation from the vessel walls, the author found a "background" from the pure solvent which differed from luminescence in being independent of the concentration, temperature and viscosity of the liquid. A closer examination of the phenomenon, more or less by accident, revealed its marked spatial asymmetry, which was of major importance for the development of the theory of the new phenomenon by I.V. Tamm and I.M. Frank. (A.K.)

  18. Therapeutic approaches to genetic ion channelopathies and perspectives in drug discovery

    Directory of Open Access Journals (Sweden)

    Paola eImbrici

    2016-05-01

    Full Text Available In the human genome more than 400 genes encode ion channels, which are transmembrane proteins mediating ion fluxes across membranes. Being expressed in all cell types, they are involved in almost all physiological processes, including sense perception, neurotransmission, muscle contraction, secretion, immune response, cell proliferation and differentiation. Due to the widespread tissue distribution of ion channels and their physiological functions, mutations in genes encoding ion channel subunits, or their interacting proteins, are responsible for inherited ion channelopathies. These diseases can range from common to very rare disorders and their severity can be mild, disabling, or life-threatening. In spite of this, ion channels are the primary target of only about 5% of the marketed drugs, suggesting their untapped potential in drug discovery. The current review summarizes the therapeutic management of the principal ion channelopathies of central and peripheral nervous system, heart, kidney, bone, skeletal muscle and pancreas, resulting from mutations in calcium, sodium, potassium and chloride ion channels. For most channelopathies the therapy is mainly empirical and symptomatic, often limited by lack of efficacy and tolerability for a significant number of patients. Other channelopathies can exploit ion channel targeted drugs, such as marketed sodium channel blockers. Developing new and more specific therapeutic approaches is therefore required. To this aim, a major advancement in the pharmacotherapy of channelopathies has been the discovery that ion channel mutations lead to changes in channel biophysics that can in turn specifically modify the sensitivity to drugs: this opens the way to a pharmacogenetics strategy, allowing the development of a personalized therapy with increased efficacy and reduced side effects. In addition, the identification of disease modifiers in ion channelopathies appears to be an alternative strategy to discover novel druggable targets.

  19. Exome sequencing for gene discovery in lethal fetal disorders--harnessing the value of extreme phenotypes.

    Science.gov (United States)

    Filges, Isabel; Friedman, Jan M

    2015-10-01

    Massively parallel sequencing has revolutionized our understanding of Mendelian disorders, and many novel genes have been discovered to cause disease phenotypes when mutant. At the same time, next-generation sequencing approaches have enabled non-invasive prenatal testing of free fetal DNA in maternal blood. However, little attention has been paid to using whole exome and genome sequencing strategies for gene identification in fetal disorders that are lethal in utero, because they can appear to be sporadic and Mendelian inheritance may be missed. We present challenges and advantages of applying next-generation sequencing approaches to gene discovery in fetal malformation phenotypes and review recent successful discovery approaches. We discuss the implications and significance of recessive inheritance and cross-species phenotyping in fetal lethal conditions. Whole exome sequencing can be used in individual families with undiagnosed lethal congenital anomaly syndromes to discover causal mutations, provided that prior to data analysis, the fetal phenotype can be correlated to a particular developmental pathway in embryogenesis. Cross-species phenotyping provides further evidence for the causality of discovered variants in genes involved in these extremely rare phenotypes and will increase our knowledge about normal and abnormal human developmental processes. Ultimately, families will benefit from the option of early prenatal diagnosis. © 2014 John Wiley & Sons, Ltd.

  20. A Wavelet-Based Approach to Pattern Discovery in Melodies

    DEFF Research Database (Denmark)

    Velarde, Gissel; Meredith, David; Weyde, Tillman

    2016-01-01

    We present a computational method for pattern discovery based on the application of the wavelet transform to symbolic representations of melodies or monophonic voices. We model the importance of a discovered pattern in terms of the compression ratio that can be achieved by using it to describe...
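
    A minimal illustration of applying a wavelet transform to a symbolic melody is given below (assuming Python with the PyWavelets package). The toy MIDI pitch sequence and the choice of a Haar wavelet are assumptions for illustration; the paper's compression-based pattern scoring is not reproduced.

```python
# Minimal sketch (assumed inputs): a multi-level Haar wavelet decomposition of
# a toy MIDI pitch sequence with PyWavelets. Coarse coefficients summarise the
# melodic contour; detail coefficients highlight local pitch changes per scale.
# The paper's compression-ratio scoring of patterns is not reproduced.
import pywt

melody = [60, 62, 64, 62, 60, 62, 64, 62,    # repeated four-note motif
          67, 69, 71, 69, 67, 69, 71, 69]    # the same motif transposed

coeffs = pywt.wavedec(melody, 'haar', level=2)
approximation, detail_level2, detail_level1 = coeffs

print("approximation:", approximation)
print("level-2 detail:", detail_level2)
```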

  1. Can knowledge exchange support the implementation of a health-promoting schools approach? Perceived outcomes of knowledge exchange in the COMPASS study.

    Science.gov (United States)

    Brown, Kristin M; Elliott, Susan J; Robertson-Wilson, Jennifer; Vine, Michelle M; Leatherdale, Scott T

    2018-03-13

    Despite the potential population-level impact of a health-promoting schools approach, schools face challenges in implementation, indicating a gap between school health research and practice. Knowledge exchange provides an opportunity to reduce this gap; however, there has been limited evaluation of these initiatives. This research explored researchers' and knowledge users' perceptions of outcomes associated with a knowledge exchange initiative within COMPASS, a longitudinal study of Canadian secondary students and schools. Schools received annual tailored summaries of their students' health behaviours and suggestions for action and were linked with knowledge brokers to support them in taking action to improve student health. Qualitative semi-structured interviews were conducted with COMPASS researchers (n = 13), school staff (n = 13), and public health stakeholders (n = 4) to explore their experiences with COMPASS knowledge exchange. Key issues included how knowledge users used school-specific findings, perceived outcomes of knowledge exchange, and suggestions for change. Outcomes for both knowledge users and researchers were identified; interestingly, knowledge users attributed more outcomes to using school-specific findings than knowledge brokering. School and public health participants indicated school-specific findings informed their programming and planning. Importantly, knowledge exchange provided a platform for partnerships between researchers, schools, and public health units. Knowledge brokering allowed researchers to gain feedback from knowledge users to enhance the study and a better understanding of the school environment. Interestingly, COMPASS knowledge exchange outcomes aligned with Samdal and Rowling's eight theory-driven implementation components for health-promoting schools. Hence, knowledge exchange may provide a mechanism to help schools implement a health-promoting schools approach. This research contributes to the limited

  2. High-efficiency combinatorial approach as an effective tool for accelerating metallic biomaterials research and discovery

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, X.D. [School of Material Science and Engineering, Central South University, Changsha, Hunan, 410083 (China); Liu, L.B., E-mail: lbliu.csu@gmail.com [School of Material Science and Engineering, Central South University, Changsha, Hunan, 410083 (China); State Key Laboratory for Powder Metallurgy, Changsha, Hunan, 410083 (China); Zhao, J.-C. [State Key Laboratory for Powder Metallurgy, Changsha, Hunan, 410083 (China); Department of Materials Science and Engineering, The Ohio State University, 2041 College Road, Columbus, OH 43210 (United States); Wang, J.L.; Zheng, F.; Jin, Z.P. [School of Material Science and Engineering, Central South University, Changsha, Hunan, 410083 (China)

    2014-06-01

    A high-efficiency combinatorial approach has been applied to rapidly build the database of composition-dependent elastic modulus and hardness of the Ti–Ta and Ti–Zr–Ta systems. A diffusion multiple of the Ti–Zr–Ta system was manufactured, then annealed at 1173 K for 1800 h, and water quenched to room temperature. Extensive interdiffusion among Ti, Zr and Ta has taken place. Combining nanoindentation and electron probe micro-analysis (EPMA), the elastic modulus, hardness as well as composition across the diffusion multiple were determined. The composition/elastic modulus/hardness relationship of the Ti–Ta and Ti–Zr–Ta alloys has been obtained. It was found that the elastic modulus and hardness depend strongly on the Ta and Zr content. The result can be used to accelerate the discovery/development of bio-titanium alloys for different components in implant prosthesis. - Highlights: • High-efficiency diffusion multiple of Ti–Zr–Ta was manufactured. • Composition-dependent elastic modulus and hardness of the Ti–Ta and Ti–Zr–Ta systems have been obtained effectively, • The methodology and the information can be used to accelerate the discovery/development of bio-titanium alloys.

  3. SU-E-P-43: A Knowledge Based Approach to Guidelines for Software Safety

    International Nuclear Information System (INIS)

    Salomons, G; Kelly, D

    2015-01-01

    Purpose: In the fall of 2012, a survey was distributed to medical physicists across Canada. The survey asked the respondents to comment on various aspects of software development and use in their clinic. The survey revealed that most centers employ locally produced (in-house) software of some kind. The respondents also indicated an interest in having software guidelines, but cautioned that the realities of cancer clinics include variations that preclude a simple solution. Traditional guidelines typically involve periodically repeating a set of prescribed tests with defined tolerance limits. However, applying a similar formula to software is problematic since it assumes that the users have a perfect knowledge of how and when to apply the software and that if the software operates correctly under one set of conditions it will operate correctly under all conditions. Methods: In the approach presented here the personnel involved with the software are included as an integral part of the system. Activities performed to improve the safety of the software are done with both software and people in mind. A learning oriented approach is taken, following the premise that the best approach to safety is increasing the understanding of those associated with the use or development of the software. Results: The software guidance document is organized by areas of knowledge related to use and development of software. The categories include: knowledge of the underlying algorithm and its limitations; knowledge of the operation of the software, such as input values, parameters, error messages, and interpretation of output; and knowledge of the environment for the software including both data and users. Conclusion: We propose a new approach to developing guidelines which is based on acquiring knowledge rather than performing tests. The ultimate goal is to provide robust software guidelines which will be practical and effective.

  4. SU-E-P-43: A Knowledge Based Approach to Guidelines for Software Safety

    Energy Technology Data Exchange (ETDEWEB)

    Salomons, G [Cancer Center of Southeastern Ontario & Queen’s University, Kingston, ON (Canada); Kelly, D [Royal Military College of Canada, Kingston, ON, CA (Canada)

    2015-06-15

    Purpose: In the fall of 2012, a survey was distributed to medical physicists across Canada. The survey asked the respondents to comment on various aspects of software development and use in their clinic. The survey revealed that most centers employ locally produced (in-house) software of some kind. The respondents also indicated an interest in having software guidelines, but cautioned that the realities of cancer clinics include variations that preclude a simple solution. Traditional guidelines typically involve periodically repeating a set of prescribed tests with defined tolerance limits. However, applying a similar formula to software is problematic since it assumes that the users have a perfect knowledge of how and when to apply the software and that if the software operates correctly under one set of conditions it will operate correctly under all conditions. Methods: In the approach presented here the personnel involved with the software are included as an integral part of the system. Activities performed to improve the safety of the software are done with both software and people in mind. A learning oriented approach is taken, following the premise that the best approach to safety is increasing the understanding of those associated with the use or development of the software. Results: The software guidance document is organized by areas of knowledge related to use and development of software. The categories include: knowledge of the underlying algorithm and its limitations; knowledge of the operation of the software, such as input values, parameters, error messages, and interpretation of output; and knowledge of the environment for the software including both data and users. Conclusion: We propose a new approach to developing guidelines which is based on acquiring knowledge rather than performing tests. The ultimate goal is to provide robust software guidelines which will be practical and effective.

  5. Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs

    Directory of Open Access Journals (Sweden)

    Haijian Chen

    2015-01-01

    Full Text Available In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOC environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved with ontology techniques. In building an ontology, automatic extraction technology is crucial. Because general text mining algorithms are not particularly effective on online course material, we designed an automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. A Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF output values; the highest-scoring terms are selected as knowledge points. Course documents for “C programming language” were selected for the experiment in this study. The results show that the proposed approach achieves satisfactory accuracy and recall rates.
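
    The TF-IDF scoring step at the core of such an extraction pipeline can be sketched as follows, using scikit-learn on tiny English example documents; the AECKP algorithm's document classification, Chinese word segmentation, POS tagging, and weight optimization are omitted from this hedged illustration.

```python
# Hedged sketch of the TF-IDF scoring step, using scikit-learn on tiny English
# documents. The real AECKP pipeline also performs document classification,
# Chinese word segmentation, POS tagging and weight optimization, all omitted.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "pointers store memory addresses and support pointer arithmetic",
    "arrays are contiguous memory blocks and decay to pointers in expressions",
    "functions receive parameters by value unless pointers are passed",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()

# Keep the highest-scoring terms per document as candidate knowledge points.
for i in range(len(docs)):
    row = tfidf[i].toarray().ravel()
    top = sorted(zip(terms, row), key=lambda pair: pair[1], reverse=True)[:3]
    print(f"doc {i}: {[term for term, _ in top]}")
```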

  6. Handling Neighbor Discovery and Rendezvous Consistency with Weighted Quorum-Based Approach.

    Science.gov (United States)

    Own, Chung-Ming; Meng, Zhaopeng; Liu, Kehan

    2015-09-03

    Neighbor discovery and the power of sensors play an important role in the formation of Wireless Sensor Networks (WSNs) and mobile networks. Many asynchronous protocols based on wake-up time scheduling have been proposed to enable neighbor discovery among neighboring nodes while saving energy, especially given the difficulty of clock synchronization. Existing neighbor-discovery methods fall into two categories: quorum-based protocols and co-primality-based protocols. They differ in how time slots are arranged: the former uses quorums in a matrix, while the latter relies on numerical analysis. In our study, we propose the weighted heuristic quorum system (WQS), which is based on the quorum algorithm and eliminates redundant active slots. We demonstrate the properties of our system: fewer active slots are required, the referring rate is balanced, and remaining power is taken into account, particularly when a device maintains rendezvous with discovered neighbors. The evaluation results showed that the proposed method can effectively reschedule active slots and reduce the computing time of the network system.
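
    The plain grid-quorum idea that quorum-based protocols build on can be sketched briefly. The code below illustrates that baseline only, not the weighted heuristic quorum system proposed in the paper; the slot-grid size and node quorum choices are arbitrary.

```python
# Sketch of the plain grid-quorum baseline (not the weighted WQS variant): in
# a cycle of k*k slots, each node stays awake during one row and one column of
# a k x k grid, so any two nodes' active-slot sets are guaranteed to overlap
# even without clock synchronisation. Grid size and node choices are arbitrary.

def grid_quorum(row, col, k):
    """Active slot indices for a node that picks `row` and `col` of a k x k grid."""
    active = {row * k + c for c in range(k)}     # whole row
    active |= {r * k + col for r in range(k)}    # whole column
    return active

k = 4                                   # 16-slot cycle
node_a = grid_quorum(1, 2, k)
node_b = grid_quorum(3, 0, k)
print(sorted(node_a & node_b))          # non-empty overlap, here [4, 14]
```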

  7. Handling Neighbor Discovery and Rendezvous Consistency with Weighted Quorum-Based Approach

    Directory of Open Access Journals (Sweden)

    Chung-Ming Own

    2015-09-01

    Full Text Available Neighbor discovery and the power of sensors play an important role in the formation of Wireless Sensor Networks (WSNs) and mobile networks. Many asynchronous protocols based on wake-up time scheduling have been proposed to enable neighbor discovery among neighboring nodes while saving energy, especially given the difficulty of clock synchronization. Existing neighbor-discovery methods fall into two categories: quorum-based protocols and co-primality-based protocols. They differ in how time slots are arranged: the former uses quorums in a matrix, while the latter relies on numerical analysis. In our study, we propose the weighted heuristic quorum system (WQS), which is based on the quorum algorithm and eliminates redundant active slots. We demonstrate the properties of our system: fewer active slots are required, the referring rate is balanced, and remaining power is taken into account, particularly when a device maintains rendezvous with discovered neighbors. The evaluation results showed that the proposed method can effectively reschedule active slots and reduce the computing time of the network system.

  8. Immediate And Retention Effects Of Teaching Games For Understanding Approach On Basketball Knowledge

    Directory of Open Access Journals (Sweden)

    Olosová Gabriela

    2015-05-01

    Full Text Available Teaching Games for Understanding (TGfU) links tactics and skills by emphasizing the appropriate timing and application within the tactical context of the game. It has been linked to the development of enhanced tactical knowledge. The purpose of the study was to determine the immediate and delayed effects of TGfU on procedural and declarative knowledge of basketball and to compare it with a technical approach. An experimental group (EG; 11 fifth graders + 18 sixth graders) was taught by TGfU and a control group (CG; 16 fifth graders + 24 sixth graders) was taught by a technical approach, both for 8 weeks in Physical Education (PE) classes. A written test was constructed to evaluate pupils’ declarative and procedural knowledge of basketball. The test was applied after the intervention to determine immediate effects and 8 months after the intervention to determine retention effects of the experimental programme. The Shapiro-Wilk test, Wilcoxon T-test, and Mann-Whitney U-test were used for statistical analysis of the obtained data. Cohen’s d was used to calculate effect size. Overall basketball knowledge was better in the EG than in the CG after the intervention (p<0.05), which is confirmed by a moderate effect size. When declarative and procedural knowledge were analysed separately, there was no significant difference between the EG and CG. Nevertheless, moderate effect sizes indicate that the data are particularly meaningful in terms of school practice. Retention effects of both approaches were similar. Total knowledge and declarative knowledge were worse after 8 months than immediately after the intervention in both groups (p<0.01). In both groups, there was no significant difference in procedural knowledge between the test written immediately after the intervention and 8 months later. Differences in changes were not significant between the groups.
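
    The statistics named in the abstract (Mann-Whitney U test and Cohen's d) can be reproduced on toy data with SciPy and NumPy; the scores below are invented for illustration and are not the study's data.

```python
# Illustrative re-creation of the statistics named above on invented scores:
# a two-sided Mann-Whitney U test between the groups and Cohen's d computed
# from a pooled standard deviation. These numbers are not the study's data.
import numpy as np
from scipy.stats import mannwhitneyu

eg = np.array([14, 15, 13, 16, 15, 14, 17, 13])   # experimental group scores
cg = np.array([12, 13, 11, 14, 12, 13, 12, 11])   # control group scores

u_stat, p_value = mannwhitneyu(eg, cg, alternative="two-sided")

pooled_sd = np.sqrt((eg.var(ddof=1) + cg.var(ddof=1)) / 2)
cohens_d = (eg.mean() - cg.mean()) / pooled_sd

print(f"U = {u_stat}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```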

  9. Concept relation discovery and innovation enabling technology (CORDIET)

    NARCIS (Netherlands)

    Poelmans, J.; Elzinga, P.; Neznanov, A.; Viaene, S.; Kuznetsov, S.O.; Ignatov, D.; Dedene, G.

    2011-01-01

    Concept Relation Discovery and Innovation Enabling Technology (CORDIET), is a toolbox for gaining new knowledge from unstructured text data. At the core of CORDIET is the C-K theory which captures the essential elements of innovation. The tool uses Formal Concept Analysis (FCA), Emergent Self

  10. From machine learning to deep learning: progress in machine intelligence for rational drug discovery.

    Science.gov (United States)

    Zhang, Lu; Tan, Jianjun; Han, Dan; Zhu, Hao

    2017-11-01

    Machine intelligence, which is normally presented as artificial intelligence, refers to the intelligence exhibited by computers. In the history of rational drug discovery, various machine intelligence approaches have been applied to guide traditional experiments, which are expensive and time-consuming. Over the past several decades, machine-learning tools, such as quantitative structure-activity relationship (QSAR) modeling, were developed that can identify potentially biologically active molecules from millions of candidate compounds quickly and cheaply. However, when drug discovery moved into the era of 'big' data, machine learning approaches evolved into deep learning approaches, which are a more powerful and efficient way to deal with the massive amounts of data generated from modern drug discovery approaches. Here, we summarize the history of machine learning and provide insight into recently developed deep learning approaches and their applications in rational drug discovery. We suggest that this evolution of machine intelligence now provides a guide for early-stage drug design and discovery in the current big data era. Copyright © 2017 Elsevier Ltd. All rights reserved.
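
    A minimal QSAR-style sketch of the machine-learning workflow discussed in the review is shown below; the molecular descriptors and activity values are synthetic, and a random forest is used purely as a stand-in for whichever learner a real project would choose.

```python
# Synthetic QSAR-style sketch: learn a mapping from molecular descriptor
# vectors to a measured activity with a random forest. Descriptors, activities
# and the choice of learner are placeholders, not taken from the review.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                         # 8 synthetic descriptors
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.3, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```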

  11. Automated discovery systems and the inductivist controversy

    Science.gov (United States)

    Giza, Piotr

    2017-09-01

    The paper explores possible influences that developments in two branches of AI, automated discovery and machine learning systems, might have upon some aspects of the old debate between Francis Bacon's inductivism and Karl Popper's falsificationism. Donald Gillies facetiously calls this controversy 'the duel of two English knights', and claims, after some analysis of historical cases of discovery, that Baconian induction had been used in science very rarely, or not at all, although he argues that the situation has changed with the advent of machine learning systems. (Some clarification of the terms machine learning and automated discovery is required here. The key idea of machine learning is that, given data with associated outcomes, software can be trained to make those associations in future cases, which typically amounts to inducing some rules from individual cases classified by the experts. Automated discovery (also called machine discovery) deals with uncovering new knowledge that is valuable for human beings, and its key idea is that discovery is like other intellectual tasks and that the general idea of heuristic search in problem spaces applies also to discovery tasks. However, since machine learning systems discover (very low-level) regularities in data, throughout this paper I use the generic term automated discovery for both kinds of systems. I will elaborate on this later on). Gillies's line of argument can be generalised: thanks to automated discovery systems, philosophers of science have at their disposal a new tool for empirically testing their philosophical hypotheses. Accordingly, in the paper, I will address the question, which of the two philosophical conceptions of scientific method is better vindicated in view of the successes and failures of systems developed within three major research programmes in the field: machine learning systems in the Turing tradition, normative theory of scientific discovery formulated by Herbert Simon

  12. A Semiautomated Framework for Integrating Expert Knowledge into Disease Marker Identification

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jing; Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Varnum, Susan M.; Brown, Joseph N.; Riensche, Roderick M.; Adkins, Joshua N.; Jacobs, Jon M.; Hoidal, John R.; Scholand, Mary Beth; Pounds, Joel G.; Blackburn, Michael R.; Rodland, Karin D.; McDermott, Jason E.

    2013-10-01

    Background. The availability of large complex data sets generated by high throughput technologies has enabled the recent proliferation of disease biomarker studies. However, a recurring problem in deriving biological information from large data sets is how to best incorporate expert knowledge into the biomarker selection process. Objective. To develop a generalizable framework that can incorporate expert knowledge into data-driven processes in a semiautomated way while providing a metric for optimization in a biomarker selection scheme. Methods. The framework was implemented as a pipeline consisting of five components for the identification of signatures from integrated clustering (ISIC). Expert knowledge was integrated into the biomarker identification process using the combination of two distinct approaches; a distance-based clustering approach and an expert knowledge-driven functional selection. Results. The utility of the developed framework ISIC was demonstrated on proteomics data from a study of chronic obstructive pulmonary disease (COPD). Biomarker candidates were identified in a mouse model using ISIC and validated in a study of a human cohort. Conclusions. Expert knowledge can be introduced into a biomarker discovery process in different ways to enhance the robustness of selected marker candidates. Developing strategies for extracting orthogonal and robust features from large data sets increases the chances of success in biomarker identification.
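
    The combination of data-driven clustering with an expert-curated list can be illustrated with a small sketch. This is inspired by, but does not reproduce, the ISIC pipeline; the protein names, abundance profiles, and expert list are synthetic.

```python
# Sketch of combining clustering with expert knowledge (inspired by, but not
# reproducing, ISIC): cluster candidate proteins by correlation distance, then
# keep only clusters containing at least one expert-flagged protein. Protein
# names, abundance profiles and the expert list are synthetic.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
proteins = ["P1", "P2", "P3", "P4", "P5", "P6"]
profiles = rng.normal(size=(6, 20))                         # abundance profiles
profiles[1] = profiles[0] + rng.normal(scale=0.1, size=20)  # make P2 track P1

dist = pdist(profiles, metric="correlation")
clusters = fcluster(linkage(dist, method="average"), t=3, criterion="maxclust")

expert_list = {"P1"}                    # expert-flagged disease-relevant protein
selected = {c for p, c in zip(proteins, clusters) if p in expert_list}
candidates = [p for p, c in zip(proteins, clusters) if c in selected]
print(candidates)                       # P1 plus whatever clusters with it
```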

  13. Sense making of (Social) sustainability: A behavioral and knowledge approach

    NARCIS (Netherlands)

    Faber, N R; Peters, K; Maruster, L; Van Haren, R; Jorna, R

    2010-01-01

    Although sustainability is often discussed solely in ecological terms, it cannot be disconnected from the way humans behave in their social environment. This article presents a theoretical approach toward sustainability that takes a human behavior and knowledge view on sustainability as a starting

  14. Intelligent assembly time analysis, using a digital knowledge based approach

    NARCIS (Netherlands)

    Jin, Y.; Curran, R.; Butterfield, J.; Burke, R.; Welch, B.

    2009-01-01

    The implementation of effective time analysis methods fast and accurately in the era of digital manufacturing has become a significant challenge for aerospace manufacturers hoping to build and maintain a competitive advantage. This paper proposes a structure oriented, knowledge-based approach for

  15. Effective Online Group Discovery in Trajectory Databases

    DEFF Research Database (Denmark)

    Li, Xiaohui; Ceikute, Vaida; Jensen, Christian S.

    2013-01-01

    GPS-enabled devices are pervasive nowadays. Finding movement patterns in trajectory data stream is gaining in importance. We propose a group discovery framework that aims to efficiently support the online discovery of moving objects that travel together. The framework adopts a sampling-independent approach that makes no assumptions about when positions are sampled, gives no special importance to sampling points, and naturally supports the use of approximate trajectories. The framework's algorithms exploit state-of-the-art, density-based clustering (DBScan) to identify groups. The groups are scored
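
    The density-based clustering step can be illustrated with a minimal snippet (assuming scikit-learn's DBSCAN on a single snapshot of object positions); the framework's online processing and group scoring over time are not reproduced here.

```python
# Minimal sketch of the density-based step: cluster object positions from one
# snapshot with DBSCAN to find candidate travelling groups. The framework's
# online processing and scoring of groups over time are not reproduced.
import numpy as np
from sklearn.cluster import DBSCAN

positions = np.array([
    [0.0, 0.0], [0.1, 0.1], [0.2, 0.0],      # objects moving together (group 1)
    [5.0, 5.0], [5.1, 4.9], [4.9, 5.1],      # a second group
    [9.0, 0.5],                              # an isolated object (noise)
])

labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(positions)
print(labels)   # e.g. [0 0 0 1 1 1 -1]; label -1 marks noise
```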

  16. THE EFFECT OF A DISCOVERY APPROACH EMPHASIZING THE ANALOGY ASPECT ON LEARNING ACHIEVEMENT, REASONING ABILITY, AND EMOTIONAL SPIRITUAL INTELLIGENCE

    Directory of Open Access Journals (Sweden)

    Nur Choiro Siregar

    2015-11-01

    Full Text Available This study aims to investigate the effect of teaching quadrilaterals and triangles with a discovery approach emphasizing the analogy aspect on the learning achievement, reasoning ability, and emotional spiritual intelligence of students of SMP Negeri 9 Yogyakarta. This study was a quasi-experimental study. The instruments used were a learning achievement test, a reasoning ability test, and an emotional spiritual intelligence questionnaire. The data were analyzed using Multivariate Analysis of Variance (Manova) and Analysis of Variance (Anova) tests. The results show that teaching quadrilaterals and triangles with the discovery approach emphasizing the analogy aspect affects students' learning achievement and reasoning ability. Based on the analysis, teaching quadrilaterals and triangles with the discovery approach emphasizing the analogy aspect is superior to regular teaching in terms of learning achievement and reasoning ability. In contrast, in terms of students' emotional spiritual intelligence, the discovery approach emphasizing the analogy aspect has no effect and is not superior to regular teaching. Keywords: discovery, emphasizing the analogy aspect, learning achievement, reasoning ability, emotional spiritual intelligence   THE EFFECT OF A DISCOVERY APPROACH EMPHASIZING THE ANALOGY ASPECT ON ACHIEVEMENT, REASONING ABILITY, AND EMOTIONAL SPIRITUAL INTELLIGENCE Abstract This study aims to investigate the effect of quadrilateral and triangle teaching using the discovery approach emphasizing the analogy aspect on the achievement, reasoning ability, and emotional spiritual intelligence of Grade VII students of SMPN 9 Yogyakarta. This study was a quasi-experimental study. The instruments of the study were an achievement test, reasoning ability test, and emotional spiritual intelligence questionnaire. The data were analyzed using the Multivariate Analysis of Variance (Manova) and Analysis of Variance (Anova) tests. The results of the study are as follows. There

  17. Applying genetics in inflammatory disease drug discovery

    DEFF Research Database (Denmark)

    Folkersen, Lasse; Biswas, Shameek; Frederiksen, Klaus Stensgaard

    2015-01-01

    , with several notable exceptions, the journey from a small-effect genetic variant to a functional drug has proven arduous, and few examples of actual contributions to drug discovery exist. Here, we discuss novel approaches of overcoming this hurdle by using instead public genetics resources as a pragmatic guide...... alongside existing drug discovery methods. Our aim is to evaluate human genetic confidence as a rationale for drug target selection....

  18. Knowledge discovery in cardiology: A systematic literature review.

    Science.gov (United States)

    Kadi, I; Idri, A; Fernandez-Aleman, J L

    2017-01-01

    Data mining (DM) provides the methodology and technology needed to transform huge amounts of data into useful information for decision making. It is a powerful process employed to extract knowledge and discover new patterns embedded in large data sets. Data mining has been increasingly used in medicine, particularly in cardiology. In fact, DM applications can greatly benefit all those involved in cardiology, such as patients, cardiologists and nurses. The purpose of this paper is to review papers concerning the application of DM techniques in cardiology so as to summarize and analyze evidence regarding: (1) the DM techniques most frequently used in cardiology; (2) the performance of DM models in cardiology; (3) comparisons of the performance of different DM models in cardiology. We performed a systematic literature review of empirical studies on the application of DM techniques in cardiology published in the period between 1 January 2000 and 31 December 2015. A total of 149 articles published between 2000 and 2015 were selected, studied and analyzed according to the following criteria: DM techniques and performance of the approaches developed. The results obtained showed that a significant number of the studies selected used classification and prediction techniques when developing DM models. Neural networks, decision trees and support vector machines were identified as being the techniques most frequently employed when developing DM models in cardiology. Moreover, neural networks and support vector machines achieved the highest accuracy rates and were proved to be more efficient than other techniques. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
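
    A hedged illustration of the kind of comparison reported in the review is sketched below: cross-validated accuracy of a decision tree, an SVM, and a neural network on a synthetic classification task (not a cardiology data set).

```python
# Illustrative comparison on synthetic data (not a cardiology data set) of the
# three model families the review found most common: a decision tree, a
# support vector machine and a neural network, scored by 5-fold accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=12, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "neural network": MLPClassifier(max_iter=2000, random_state=0),
}
for name, model in models.items():
    accuracy = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {accuracy:.3f}")
```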

  19. Daily life activity routine discovery in hemiparetic rehabilitation patients using topic models.

    Science.gov (United States)

    Seiter, J; Derungs, A; Schuster-Amft, C; Amft, O; Tröster, G

    2015-01-01

    based on Latent Dirichlet Allocation. Discovered activity routine patterns were then mapped to six categorized activity routines. Using the rule-based approach, activity routines could be discovered with an average accuracy of 76% across all patients. The rule-based approach outperformed clustering by 10% and showed less confusions for predicted activity routines. Topic models are suitable to discover daily life activity routines in hemiparetic rehabilitation patients without trained classifiers and activity annotations. Activity routines show characteristic patterns regarding activity primitives including body and extremity postures and movement. A patient-independent rule set can be derived. Including expert knowledge supports successful activity routine discovery over completely data-driven clustering.
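
    The topic-model idea can be sketched with scikit-learn's LDA implementation; the activity primitives, window counts, and number of routines below are assumptions for illustration, not the study's sensor data.

```python
# Sketch of the topic-model idea on assumed inputs: each time window is a
# "document" of activity-primitive counts, and LDA recovers latent routines as
# mixtures of primitives. Primitives, counts and the number of routines are
# illustrative, not the study's sensor data.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

primitives = ["sit", "stand", "walk", "arm_reach", "lie"]
# Rows = time windows, columns = counts of each primitive in that window.
windows = np.array([
    [8, 1, 0, 5, 0],    # desk-like routine: sitting and reaching
    [7, 2, 1, 6, 0],
    [0, 3, 9, 1, 0],    # mobility routine: mostly walking
    [1, 2, 8, 0, 0],
    [0, 0, 0, 0, 10],   # resting routine: lying down
])

lda = LatentDirichletAllocation(n_components=3, random_state=0)
window_routines = lda.fit_transform(windows)
print(np.round(window_routines, 2))   # routine mixture per time window
```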

  20. Sports Stars: Analyzing the Performance of Astronomers at Visualization-based Discovery

    Science.gov (United States)

    Fluke, C. J.; Parrington, L.; Hegarty, S.; MacMahon, C.; Morgan, S.; Hassan, A. H.; Kilborn, V. A.

    2017-05-01

    In this data-rich era of astronomy, there is a growing reliance on automated techniques to discover new knowledge. The role of the astronomer may change from being a discoverer to being a confirmer. But what do astronomers actually look at when they distinguish between “sources” and “noise?” What are the differences between novice and expert astronomers when it comes to visual-based discovery? Can we identify elite talent or coach astronomers to maximize their potential for discovery? By looking to the field of sports performance analysis, we consider an established, domain-wide approach, where the expertise of the viewer (i.e., a member of the coaching team) plays a crucial role in identifying and determining the subtle features of gameplay that provide a winning advantage. As an initial case study, we investigate whether the SportsCode performance analysis software can be used to understand and document how an experienced Hi astronomer makes discoveries in spectral data cubes. We find that the process of timeline-based coding can be applied to spectral cube data by mapping spectral channels to frames within a movie. SportsCode provides a range of easy to use methods for annotation, including feature-based codes and labels, text annotations associated with codes, and image-based drawing. The outputs, including instance movies that are uniquely associated with coded events, provide the basis for a training program or team-based analysis that could be used in unison with discipline specific analysis software. In this coordinated approach to visualization and analysis, SportsCode can act as a visual notebook, recording the insight and decisions in partnership with established analysis methods. Alternatively, in situ annotation and coding of features would be a valuable addition to existing and future visualization and analysis packages.

  1. Arrayed antibody library technology for therapeutic biologic discovery.

    Science.gov (United States)

    Bentley, Cornelia A; Bazirgan, Omar A; Graziano, James J; Holmes, Evan M; Smider, Vaughn V

    2013-03-15

    Traditional immunization and display antibody discovery methods rely on competitive selection amongst a pool of antibodies to identify a lead. While this approach has led to many successful therapeutic antibodies, targets have been limited to proteins which are easily purified. In addition, selection driven discovery has produced a narrow range of antibody functionalities focused on high affinity antagonism. We review the current progress in developing arrayed protein libraries for screening-based, rather than selection-based, discovery. These single molecule per microtiter well libraries have been screened in multiplex formats against both purified antigens and directly against targets expressed on the cell surface. This facilitates the discovery of antibodies against therapeutically interesting targets (GPCRs, ion channels, and other multispanning membrane proteins) and epitopes that have been considered poorly accessible to conventional discovery methods. Copyright © 2013. Published by Elsevier Inc.

  2. Knowledge Representation in Patient Safety Reporting: An Ontological Approach

    Directory of Open Access Journals (Sweden)

    Liang Chen

    2016-10-01

    Full Text Available Purpose: The current development of patient safety reporting systems is criticized for loss of information and low data quality due to the lack of a uniform domain knowledge base and text processing functionality. To improve patient safety reporting, the present paper suggests an ontological representation of patient safety knowledge. Design/methodology/approach: We propose a framework for constructing an ontological knowledge base of patient safety. The present paper describes our design, implementation, and evaluation of the ontology at its initial stage. Findings: We describe the design and initial outcomes of the ontology implementation. The evaluation results demonstrate the clinical validity of the ontology using a self-developed survey measurement. Research limitations: The proposed ontology was developed and evaluated using a small number of information sources. Presently, US data are used, but they are not essential for the ultimate structure of the ontology. Practical implications: The goal of improving patient safety can be aided through investigating patient safety reports and providing actionable knowledge to clinical practitioners. As such, constructing a domain-specific ontology for patient safety reports serves as a cornerstone in information collection and text mining methods. Originality/value: The use of ontologies provides abstracted representation of semantic information and enables a wealth of applications in a reporting system. Therefore, constructing such a knowledge base is recognized as a high priority in health care.
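
    A minimal sketch of how such a patient-safety ontology fragment could be expressed with rdflib; the namespace, classes and properties are invented for illustration and are not the ontology described in the paper.

      # Sketch: a tiny ontology fragment for patient safety reporting, built with rdflib.
      # The namespace, classes and properties are invented for illustration only.
      from rdflib import Graph, Namespace, Literal, RDF, RDFS

      PS = Namespace("http://example.org/patient-safety#")
      g = Graph()
      g.bind("ps", PS)

      # Class hierarchy: a medication error is a kind of patient safety incident.
      g.add((PS.Incident, RDF.type, RDFS.Class))
      g.add((PS.MedicationError, RDFS.subClassOf, PS.Incident))

      # One reported incident as an instance with a free-text description.
      g.add((PS.report42, RDF.type, PS.MedicationError))
      g.add((PS.report42, PS.description, Literal("Wrong dose administered at handover")))

      print(g.serialize(format="turtle"))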

  3. Drug discovery for Chagas disease should consider Trypanosoma cruzi strain diversity

    Directory of Open Access Journals (Sweden)

    Bianca Zingales

    2014-09-01

    Full Text Available This opinion piece presents an approach to standardisation of an important aspect of Chagas disease drug discovery and development: selecting Trypanosoma cruzi strains for in vitro screening. We discuss the rationale for strain selection representing T. cruzi diversity and provide recommendations on the preferred parasite stage for drug discovery, T. cruzi discrete typing units to include in the panel of strains and the number of strains/clones for primary screens and lead compounds. We also consider experimental approaches for in vitro drug assays. The Figure illustrates the current Chagas disease drug-discovery and development landscape.

  4. Personal optical disk library (PODL) for knowledge engineering

    Science.gov (United States)

    Wang, Hong; Jia, Huibo; Xu, Duanyi

    2001-02-01

    This paper describes the structure of Personal Optical Disk Library (PODL), a kind of large capacity (40 GB) optical storage equipment for personal usage. With the knowledge engineering technology integrated in the PODL, it can be used on knowledge query, knowledge discovery, Computer-Aided Instruction (CAI) and Online Analysis Process (OLAP).

  5. Citation analysis: A social and dynamic approach to knowledge organization

    DEFF Research Database (Denmark)

    Hjørland, Birger

    2013-01-01

    Knowledge organization (KO) and bibliometrics have traditionally been seen as separate subfields of library and information science, but bibliometric techniques make it possible to identify candidate terms for thesauri and to organize knowledge by relating scientific papers and authors to each other and thereby indicating kinds of relatedness and semantic distance. It is therefore important to view bibliometric techniques as a family of approaches to KO in order to illustrate their relative strengths and weaknesses. The subfield of bibliometrics concerned with citation analysis forms... The main difference between traditional knowledge organization systems (KOSs) and maps based on citation analysis is that the first group represents intellectual KOSs, whereas the second represents social KOSs. For this reason bibliometric maps cannot be expected ever to be considered superior for all purposes.

  6. Gene set-based module discovery in the breast cancer transcriptome

    Directory of Open Access Journals (Sweden)

    Zhang Michael Q

    2009-02-01

    Full Text Available Abstract Background Although microarray-based studies have revealed a global view of gene expression in cancer cells, we still have little knowledge about the regulatory mechanisms underlying the transcriptome. Several computational methods applied to yeast data have recently succeeded in identifying expression modules, which are defined as co-expressed gene sets under common regulatory mechanisms. However, such module discovery methods have not been applied to cancer transcriptome data. Results In order to decode oncogenic regulatory programs in cancer cells, we developed a novel module discovery method termed EEM by extending a previously reported module discovery method, and applied it to breast cancer expression data. Starting from seed gene sets prepared based on cis-regulatory elements, ChIP-chip data, and gene locus information, EEM identified 10 principal expression modules in breast cancer based on their expression coherence. Moreover, EEM depicted their activity profiles, which predict regulatory programs in each subtype of breast tumors. For example, our analysis revealed that the expression module regulated by the Polycomb repressive complex 2 (PRC2) is downregulated in triple negative breast cancers, suggesting similarity of transcriptional programs between stem cells and aggressive breast cancer cells. We also found that the activity of the PRC2 expression module is negatively correlated to the expression of EZH2, a component of PRC2 which belongs to the E2F expression module. E2F-driven EZH2 overexpression may be responsible for the repression of the PRC2 expression modules in triple negative tumors. Furthermore, our network analysis predicts regulatory circuits in breast cancer cells. Conclusion These results demonstrate that the gene set-based module discovery approach is a powerful tool to decode regulatory programs in cancer cells.
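
    The following sketch illustrates the notion of expression coherence of a seed gene set (here taken to be the mean pairwise correlation across samples) on synthetic data; it is an assumption-laden illustration, not the published EEM implementation.

      # Sketch: scoring a seed gene set by expression coherence, here the mean pairwise
      # Pearson correlation across samples (synthetic data; not the EEM code).
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(0)
      genes = [f"g{i}" for i in range(50)]
      expr = pd.DataFrame(rng.normal(size=(50, 30)), index=genes)  # genes x samples

      def coherence(gene_set, expr):
          sub = expr.loc[[g for g in gene_set if g in expr.index]]
          corr = np.corrcoef(sub.values)
          iu = np.triu_indices_from(corr, k=1)     # upper triangle, excluding diagonal
          return corr[iu].mean()

      seed_set = ["g1", "g2", "g3", "g4", "g5"]
      print("expression coherence:", round(coherence(seed_set, expr), 3))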

  7. Knowledge Management as an Approach to Learning and Instructing Sector University Students in Post-Soviet Professional Education

    Science.gov (United States)

    Volegzhanina, Irina S.; Chusovlyanova, Svetlana V.; Adolf, Vladimir A.; Bykadorova, Ekaterina S.; Belova, Elena N.

    2017-01-01

    The relevance of the study stems from the need to address the issue of knowledge management in learning and instructing students of post-Soviet sector universities. In this regard, the article is intended to reveal the nature of the knowledge management approach compared to the knowledge-based one that predominated in Soviet education. The flagship approach of…

  8. Distributional and Knowledge-Based Approaches for Computing Portuguese Word Similarity

    Directory of Open Access Journals (Sweden)

    Hugo Gonçalo Oliveira

    2018-02-01

    Full Text Available Identifying similar and related words is not only key in natural language understanding but also a suitable task for assessing the quality of computational resources that organise words and meanings of a language, compiled by different means. This paper, which aims to be a reference for those interested in computing word similarity in Portuguese, presents several approaches for this task and is motivated by the recent availability of state-of-the-art distributional models of Portuguese words, which add to several lexical knowledge bases (LKBs) for this language, available for a longer time. The previous resources were exploited to answer word similarity tests, which also became recently available for Portuguese. We conclude that there are several valid approaches for this task, but not one that outperforms all the others in every single test. Distributional models seem to capture relatedness better, while LKBs are better suited for computing genuine similarity, but, in general, better results are obtained when knowledge from different sources is combined.
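
    A toy sketch of the two families of approaches compared above: cosine similarity over distributional vectors and an overlap measure over a lexical knowledge base. The vectors and the miniature LKB are invented stand-ins for the Portuguese resources used in the paper.

      # Sketch: two ways of scoring word similarity -- cosine over distributional vectors
      # and a simple overlap measure over a lexical knowledge base (toy data only).
      import numpy as np

      vectors = {                       # toy distributional vectors
          "carro": np.array([0.9, 0.1, 0.3]),
          "automóvel": np.array([0.85, 0.15, 0.35]),
          "banana": np.array([0.1, 0.9, 0.2]),
      }

      lkb = {                           # toy LKB: word -> set of related concepts
          "carro": {"veículo", "transporte"},
          "automóvel": {"veículo", "transporte"},
          "banana": {"fruta"},
      }

      def cosine(a, b):
          return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      def lkb_similarity(w1, w2):
          s1, s2 = lkb[w1], lkb[w2]
          return len(s1 & s2) / len(s1 | s2)   # Jaccard overlap of related concepts

      for pair in [("carro", "automóvel"), ("carro", "banana")]:
          print(pair, round(cosine(vectors[pair[0]], vectors[pair[1]]), 3),
                round(lkb_similarity(*pair), 3))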

  9. Novel ageing-biomarker discovery using data-intensive technologies.

    Science.gov (United States)

    Griffiths, H R; Augustyniak, E M; Bennett, S J; Debacq-Chainiaux, F; Dunston, C R; Kristensen, P; Melchjorsen, C J; Navarrete, Santos A; Simm, A; Toussaint, O

    2015-11-01

    Ageing is accompanied by many visible characteristics. Other biological and physiological markers are also well-described e.g. loss of circulating sex hormones and increased inflammatory cytokines. Biomarkers for healthy ageing studies are presently predicated on existing knowledge of ageing traits. The increasing availability of data-intensive methods enables deep-analysis of biological samples for novel biomarkers. We have adopted two discrete approaches in MARK-AGE Work Package 7 for biomarker discovery; (1) microarray analyses and/or proteomics in cell systems e.g. endothelial progenitor cells or T cell ageing including a stress model; and (2) investigation of cellular material and plasma directly from tightly-defined proband subsets of different ages using proteomic, transcriptomic and miR array. The first approach provided longitudinal insight into endothelial progenitor and T cell ageing. This review describes the strategy and use of hypothesis-free, data-intensive approaches to explore cellular proteins, miR, mRNA and plasma proteins as healthy ageing biomarkers, using ageing models and directly within samples from adults of different ages. It considers the challenges associated with integrating multiple models and pilot studies as rational biomarkers for a large cohort study. From this approach, a number of high-throughput methods were developed to evaluate novel, putative biomarkers of ageing in the MARK-AGE cohort. Crown Copyright © 2015. Published by Elsevier Ireland Ltd. All rights reserved.

  10. Proteomic Approaches in Biomarker Discovery: New Perspectives in Cancer Diagnostics

    Science.gov (United States)

    Kocevar, Nina; Komel, Radovan

    2014-01-01

    Despite remarkable progress in proteomic methods, including improved detection limits and sensitivity, these methods have not yet been established in routine clinical practice. The main limitations, which prevent their integration into clinics, are high cost of equipment, the need for highly trained personnel, and last, but not least, the establishment of reliable and accurate protein biomarkers or panels of protein biomarkers for detection of neoplasms. Furthermore, the complexity and heterogeneity of most solid tumours present obstacles in the discovery of specific protein signatures, which could be used for early detection of cancers, for prediction of disease outcome, and for determining the response to specific therapies. However, cancer proteome, as the end-point of pathological processes that underlie cancer development and progression, could represent an important source for the discovery of new biomarkers and molecular targets for tailored therapies. PMID:24550697

  11. International Drug Discovery Science and Technology--BIT's Seventh Annual Congress.

    Science.gov (United States)

    Bodovitz, Steven

    2010-01-01

    BIT's Seventh Annual International Drug Discovery Science and Technology Congress, held in Shanghai, included topics covering new therapeutic and technological developments in the field of drug discovery. This conference report highlights selected presentations on open-access approaches to R&D, novel and multifactorial targets, and technologies that assist drug discovery. Investigational drugs discussed include the anticancer agents astuprotimut-r (GlaxoSmithKline plc) and AS-1411 (Antisoma plc).

  12. A Virtual Bioinformatics Knowledge Environment for Early Cancer Detection

    Science.gov (United States)

    Crichton, Daniel; Srivastava, Sudhir; Johnsey, Donald

    2003-01-01

    Discovery of disease biomarkers for cancer is a leading focus of early detection. The National Cancer Institute created a network of collaborating institutions focused on the discovery and validation of cancer biomarkers called the Early Detection Research Network (EDRN). Informatics plays a key role in enabling a virtual knowledge environment that provides scientists real time access to distributed data sets located at research institutions across the nation. The distributed and heterogeneous nature of the collaboration makes data sharing across institutions very difficult. EDRN has developed a comprehensive informatics effort focused on developing a national infrastructure enabling seamless access, sharing and discovery of science data resources across all EDRN sites. This paper will discuss the EDRN knowledge system architecture, its objectives and its accomplishments.

  13. An Adaptive Approach to Managing Knowledge Development in a Project-Based Learning Environment

    Science.gov (United States)

    Tilchin, Oleg; Kittany, Mohamed

    2016-01-01

    In this paper we propose an adaptive approach to managing the development of students' knowledge in a comprehensive project-based learning (PBL) environment. Subject study is realized by two-stage PBL. It shapes an adaptive knowledge management (KM) process and promotes the correct balance between personalized and collaborative learning. The…

  14. Knowledge discovery based on experiential learning corporate culture management

    Science.gov (United States)

    Tu, Kai-Jan

    2014-10-01

    A good corporate culture based on humanistic theory can make an enterprise's management very effective, with all of the enterprise's members showing strong cohesion and centripetal force. With an experiential learning model, the enterprise can establish a corporate culture with an enthusiastic learning spirit, develop the innovation ability needed to gain a positive knowledge-growth effect, and meet fierce global marketing competition. A case study of Trend's corporate culture offers support for the industry knowledge growth rate equation as a contribution to experiential learning corporate culture management.

  15. The approach to engineering tasks composition on knowledge portals

    Science.gov (United States)

    Novogrudska, Rina; Globa, Larysa; Schill, Alexsander; Romaniuk, Ryszard; Wójcik, Waldemar; Karnakova, Gaini; Kalizhanova, Aliya

    2017-08-01

    The paper presents an approach to engineering task composition on engineering knowledge portals. The specific features of engineering tasks are highlighted, and their analysis forms the basis for the integration of partial engineering tasks. A formal algebraic system for engineering task composition is proposed, allowing context-independent formal structures to be defined for the description of engineering task elements. A method of engineering task composition is developed that allows partial calculation tasks to be integrated into general calculation tasks on engineering portals, performed on user demand. The real-world scenario «Calculation of the strength for the power components of magnetic systems» is presented, demonstrating the applicability and efficiency of the proposed approach.

  16. Venomics-Accelerated Cone Snail Venom Peptide Discovery

    Science.gov (United States)

    Himaya, S. W. A.

    2018-01-01

    Cone snail venoms are considered a treasure trove of bioactive peptides. Despite over 800 species of cone snails being known, each producing over 1000 venom peptides, only about 150 unique venom peptides are structurally and functionally characterized. To overcome the limitations of the traditional low-throughput bio-discovery approaches, multi-omics systems approaches have been introduced to accelerate venom peptide discovery and characterisation. This “venomic” approach is starting to unravel the full complexity of cone snail venoms and to provide new insights into their biology and evolution. The main challenge for venomics is the effective integration of transcriptomics, proteomics, and pharmacological data and the efficient analysis of big datasets. Novel database search tools and visualisation techniques are now being introduced that facilitate data exploration, with ongoing advances in related omics fields being expected to further enhance venomics studies. Despite these challenges and future opportunities, cone snail venomics has already exponentially expanded the number of novel venom peptide sequences identified from the species investigated, although most novel conotoxins remain to be pharmacologically characterised. Therefore, efficient high-throughput peptide production systems and/or banks of miniaturized discovery assays are required to overcome this bottleneck and thus enhance cone snail venom bioprospecting and accelerate the identification of novel drug leads. PMID:29522462

  17. Venomics-Accelerated Cone Snail Venom Peptide Discovery

    Directory of Open Access Journals (Sweden)

    S. W. A. Himaya

    2018-03-01

    Full Text Available Cone snail venoms are considered a treasure trove of bioactive peptides. Despite over 800 species of cone snails being known, each producing over 1000 venom peptides, only about 150 unique venom peptides are structurally and functionally characterized. To overcome the limitations of the traditional low-throughput bio-discovery approaches, multi-omics systems approaches have been introduced to accelerate venom peptide discovery and characterisation. This “venomic” approach is starting to unravel the full complexity of cone snail venoms and to provide new insights into their biology and evolution. The main challenge for venomics is the effective integration of transcriptomics, proteomics, and pharmacological data and the efficient analysis of big datasets. Novel database search tools and visualisation techniques are now being introduced that facilitate data exploration, with ongoing advances in related omics fields being expected to further enhance venomics studies. Despite these challenges and future opportunities, cone snail venomics has already exponentially expanded the number of novel venom peptide sequences identified from the species investigated, although most novel conotoxins remain to be pharmacologically characterised. Therefore, efficient high-throughput peptide production systems and/or banks of miniaturized discovery assays are required to overcome this bottleneck and thus enhance cone snail venom bioprospecting and accelerate the identification of novel drug leads.

  18. The discovery and history of knowledge of natural atmospheric radioactivity

    International Nuclear Information System (INIS)

    Renoux, A.

    1996-01-01

    Everybody knows that radioactivity was discovered 100 years ago by the Frenchman Henri Becquerel in Paris, in February 1896, a discovery that stemmed from Roentgen's discovery of X-rays in the preceding year. In 1899, Rutherford was able to show the existence of α and β rays, and in 1900 Villard showed the presence of a third class of rays, the γ rays. The discovery of the rare radioactive gas radon is attributed to P. and M. Curie in 1898 and F. Dorn in 1900. Thoron (²²⁰Rn) was discovered by Rutherford and Owens in 1899-1900, and actinon (²¹⁹Rn) by Debierne and Geisel at about the same time. Radon's radiotoxicity was studied in France from 1904 by Bouchard and Balthazard, and in 1924 the hypothesis was formulated that the high mortality observed among the uranium miners of Schneeberg in Germany and Joachimsthal in Czechoslovakia might be due to radon. In fact, however, Elster and Geitel were the first to observe, around 1901, that radioactivity is present in the atmosphere. After this date, many investigations were made (by M. Curie, for example), but it is during the fifties and, of course, up until today that the most numerous works have been carried out. In this paper, we discuss the research of the pioneers of the period after the Second World War: Evans, Wilkening, Kawano, Israel, Junge, Schuman, Bricard... Renoux, Madelaine, Blanc, Fontan, Siksna, Chamberlain, Dyson, Nolan, etc., and the work developed later. Finally, we reach the nineties, a period in which work is directed in particular at radon and radon progeny indoors, with many studies carried out in France. (author). 78 refs., 17 figs., 2 tabs

  19. Science Teachers’ Pedagogical Content Knowledge and Integrated Approach

    Science.gov (United States)

    Adi Putra, M. J.; Widodo, A.; Sopandi, W.

    2017-09-01

    The integrated approach refers to the stages of pupils' psychological development. Unfortunately, the competences designed into the curriculum are not appropriate to children's development. This manuscript presents the PCK (pedagogical content knowledge) of teachers who teach science content utilizing an integrated approach. The data were collected using CoRe, PaP-eR, and interviews with six elementary teachers who teach science. The paper shows that high and stable teacher PCK has an impact on how teachers present integrated teaching, because that presentation is influenced by the selection of the important content that must be presented to the students, the depth of the content, the reasons for choosing the teaching procedures, and other factors. Therefore, for teachers to be able to teach in an integrated way, they should have a balanced PCK.

  20. Causality discovery technology

    Science.gov (United States)

    Chen, M.; Ertl, T.; Jirotka, M.; Trefethen, A.; Schmidt, A.; Coecke, B.; Bañares-Alcántara, R.

    2012-11-01

    Causality is the fabric of our dynamic world. We all make frequent attempts to reason about the causal relationships of everyday events (e.g., what was the cause of my headache, or what has upset Alice?). We attempt to manage causality all the time through planning and scheduling. The greatest scientific discoveries are usually about causality (e.g., Newton found the cause for an apple to fall, and Darwin discovered natural selection). Meanwhile, we continue to seek a comprehensive understanding of the causes of numerous complex phenomena, such as social divisions, economic crises, global warming, home-grown terrorism, etc. Humans analyse and reason about causality based on observation, experimentation and acquired a priori knowledge. Today's technologies enable us to make observations and carry out experiments on an unprecedented scale that has created data mountains everywhere. Whereas there are exciting opportunities to discover new causation relationships, there are also unparalleled challenges in benefiting from such data mountains. In this article, we present a case for developing a new piece of ICT, called Causality Discovery Technology. We reason about the necessity, feasibility and potential impact of such a technology.

  1. Compositional descriptor-based recommender system for the materials discovery

    Science.gov (United States)

    Seko, Atsuto; Hayashi, Hiroyuki; Tanaka, Isao

    2018-06-01

    Structures and properties of many inorganic compounds have been collected historically. However, these collections cover only a very small portion of possible inorganic crystals, which implies the presence of numerous currently unknown compounds. A powerful machine-learning strategy is mandatory to discover new inorganic compounds from all chemical combinations. Herein we propose a descriptor-based recommender-system approach to estimate the relevance of chemical compositions where crystals can be formed [i.e., chemically relevant compositions (CRCs)]. In addition to the data-driven compositional similarity used in the literature, the use of compositional descriptors as prior knowledge is helpful for the discovery of new compounds. We validate our recommender systems in two ways. First, one database is used to construct a model, while another is used for the validation. Second, we estimate the phase stability for compounds at expected CRCs using density functional theory calculations.
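
    A minimal sketch of the descriptor-based idea: featurize compositions with simple descriptors and score candidates with a classifier. The descriptors, training labels and model choice are illustrative assumptions, not the authors' recommender system.

      # Sketch: scoring candidate chemical compositions with compositional descriptors
      # and a classifier. Descriptors and training labels below are invented.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      # Hypothetical descriptors per composition: mean electronegativity, mean atomic
      # radius, and electronegativity spread (values are made up).
      X_train = np.array([
          [2.1, 1.4, 0.9],   # known compound-forming composition
          [1.8, 1.6, 0.4],   # known compound-forming composition
          [3.2, 0.7, 2.0],   # composition with no known compound
          [3.0, 0.8, 1.8],   # composition with no known compound
      ])
      y_train = np.array([1, 1, 0, 0])   # 1 = chemically relevant composition (CRC)

      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

      candidates = np.array([[2.0, 1.5, 0.7], [3.1, 0.75, 1.9]])
      scores = clf.predict_proba(candidates)[:, 1]   # relevance scores for ranking
      for c, s in zip(candidates, scores):
          print(c, "->", round(float(s), 2))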

  2. Countering and Exceeding "Capital": A "Funds of Knowledge" Approach to Re-Imagining Community

    Science.gov (United States)

    Zipin, Lew; Sellar, Sam; Hattam, Robert

    2012-01-01

    This article discusses how the "funds of knowledge" approach (FoK) offers a socially just alternative to the logics of capital, by drawing on knowledge assets from students' family and community lifeworlds to build engaging and rigorous learning, supporting school-community interactions that build capacities. We explain how we applied…

  3. Advanced biological and chemical discovery (ABCD): centralizing discovery knowledge in an inherently decentralized world.

    Science.gov (United States)

    Agrafiotis, Dimitris K; Alex, Simson; Dai, Heng; Derkinderen, An; Farnum, Michael; Gates, Peter; Izrailev, Sergei; Jaeger, Edward P; Konstant, Paul; Leung, Albert; Lobanov, Victor S; Marichal, Patrick; Martin, Douglas; Rassokhin, Dmitrii N; Shemanarev, Maxim; Skalkin, Andrew; Stong, John; Tabruyn, Tom; Vermeiren, Marleen; Wan, Jackson; Xu, Xiang Yang; Yao, Xiang

    2007-01-01

    We present ABCD, an integrated drug discovery informatics platform developed at Johnson & Johnson Pharmaceutical Research & Development, L.L.C. ABCD is an attempt to bridge multiple continents, data systems, and cultures using modern information technology and to provide scientists with tools that allow them to analyze multifactorial SAR and make informed, data-driven decisions. The system consists of three major components: (1) a data warehouse, which combines data from multiple chemical and pharmacological transactional databases, designed for supreme query performance; (2) a state-of-the-art application suite, which facilitates data upload, retrieval, mining, and reporting, and (3) a workspace, which facilitates collaboration and data sharing by allowing users to share queries, templates, results, and reports across project teams, campuses, and other organizational units. Chemical intelligence, performance, and analytical sophistication lie at the heart of the new system, which was developed entirely in-house. ABCD is used routinely by more than 1000 scientists around the world and is rapidly expanding into other functional areas within the J&J organization.

  4. THE EFFECTIVENESS OF THE SCIENTIFIC APPROACH BASED ON GROUP INVESTIGATION AND DISCOVERY LEARNING VIEWED FROM STUDENTS' LEARNING INTEREST

    Directory of Open Access Journals (Sweden)

    Ira Vahlia

    2017-06-01

    Full Text Available Appropriate learning models contribute to students' interest in learning mathematics. The purpose of this study is to describe the difference in effectiveness between a scientific approach based on group investigation and one based on discovery learning, viewed in terms of students' interest in learning. The research conducted is quasi-experimental, using a 2 x factorial design. In experimental class I, which applied the scientific approach based on group investigation, the average learning-outcome score was 66.60, while in experimental class II, which applied the scientific approach based on discovery learning, the average posttest score was 76.28. Based on the marginal means, for the scientific approach based on discovery learning, learning outcomes at moderate interest are higher than learning outcomes at high and low interest. For both the scientific approach based on group investigation and the one based on discovery learning, there are differences in average learning outcomes between high, medium and low interest. For the scientific approach based on group investigation, learning outcomes at high interest are higher than those at moderate and low interest.

  5. Supporting the Knowledge-to-Action Process: A Systems-Thinking Approach

    Science.gov (United States)

    Cherney, Adrian; Head, Brian

    2011-01-01

    The processes for moving research-based knowledge to the domains of action in social policy and professional practice are complex. A number of disciplinary research traditions have illuminated key aspects of these processes. A more holistic approach, drawing on systems thinking, has also been outlined and advocated by recent contributors to…

  6. Mathematical modeling for novel cancer drug discovery and development.

    Science.gov (United States)

    Zhang, Ping; Brusic, Vladimir

    2014-10-01

    Mathematical modeling enables the in silico classification of cancers, the prediction of disease outcomes, optimization of therapy, identification of promising drug targets and prediction of resistance to anticancer drugs. In silico pre-screened drug targets can be validated by a small number of carefully selected experiments. This review discusses the basics of mathematical modeling in cancer drug discovery and development. The topics include in silico discovery of novel molecular drug targets, optimization of immunotherapies, personalized medicine and guiding preclinical and clinical trials. Breast cancer has been used to demonstrate the applications of mathematical modeling in cancer diagnostics, the identification of high-risk populations, cancer screening strategies, prediction of tumor growth and guiding cancer treatment. Mathematical models are key components of the toolkit used in the fight against cancer. The combinatorial complexity of new drug discovery is enormous, making systematic drug discovery by experimentation alone difficult, if not impossible. The biggest challenges include the seamless integration of growing data, information and knowledge, and making them available for a multiplicity of analyses. Mathematical models are essential for bringing cancer drug discovery into the era of Omics, Big Data and personalized medicine.
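
    As a small worked example of the tumor-growth modeling mentioned above, the following sketch integrates a generic Gompertz growth ODE with SciPy; the parameter values are illustrative and are not taken from the review.

      # Sketch: a generic Gompertz tumour-growth model, one of the simple ODE models
      # used for predicting tumour growth. Parameter values are illustrative only.
      import numpy as np
      from scipy.integrate import solve_ivp

      r, K = 0.1, 1e9          # growth rate (1/day) and carrying capacity (cells)

      def gompertz(t, v):
          return r * v * np.log(K / v)

      sol = solve_ivp(gompertz, t_span=(0, 200), y0=[1e6], t_eval=np.linspace(0, 200, 11))
      for t, v in zip(sol.t, sol.y[0]):
          print(f"day {t:5.1f}: {v:.3e} cells")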

  7. Influence of Knowledge of Content and Students on Beginning Agriculture Teachers' Approaches to Teaching Content

    Science.gov (United States)

    Rice, Amber H.; Kitchel, Tracy

    2016-01-01

    This study explored experiences of beginning agriculture teachers' approaches to teaching content. The research question guiding the study was: how does agriculture teachers' knowledge of content and students influence their process of breaking down content knowledge for teaching? The researchers employed a grounded theory approach in which five…

  8. Cell and small animal models for phenotypic drug discovery

    Directory of Open Access Journals (Sweden)

    Szabo M

    2017-06-01

    Full Text Available The phenotype-based drug discovery (PDD) approach is re-emerging as an alternative platform for drug discovery. This review provides an overview of the various model systems and technical advances in imaging and image analyses that strengthen the PDD platform. In PDD screens, compounds of therapeutic value are identified based on the phenotypic perturbations produced, irrespective of target(s) or mechanism of action. In this article, examples of phenotypic changes that can be detected and quantified with relative ease in a cell-based setup are discussed. In addition, a higher order of PDD screening setup using small animal models is also explored. As PDD screens integrate physiology and multiple signaling mechanisms during the screening process, the identified hits have higher biomedical applicability. Taken together, this review highlights the advantages gained by adopting a PDD approach in drug discovery. Such a PDD platform can complement target-based systems that are currently in practice to accelerate drug discovery. Keywords: phenotype, screening, PDD, discovery, zebrafish, drug

  9. Application of lean manufacturing concepts to drug discovery: rapid analogue library synthesis.

    Science.gov (United States)

    Weller, Harold N; Nirschl, David S; Petrillo, Edward W; Poss, Michael A; Andres, Charles J; Cavallaro, Cullen L; Echols, Martin M; Grant-Young, Katherine A; Houston, John G; Miller, Arthur V; Swann, R Thomas

    2006-01-01

    The application of parallel synthesis to lead optimization programs in drug discovery has been an ongoing challenge since the first reports of library synthesis. A number of approaches to the application of parallel array synthesis to lead optimization have been attempted over the years, ranging from widespread deployment by (and support of) individual medicinal chemists to centralization as a service by an expert core team. This manuscript describes our experience with the latter approach, which was undertaken as part of a larger initiative to optimize drug discovery. In particular, we highlight how concepts taken from the manufacturing sector can be applied to drug discovery and parallel synthesis to improve the timeliness and thus the impact of arrays on drug discovery.

  10. Discovery and annotation of small proteins using genomics, proteomics and computational approaches

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Xiaohan; Tschaplinski, Timothy J.; Hurst, Gregory B.; Jawdy, Sara; Abraham, Paul E.; Lankford, Patricia K.; Adams, Rachel M.; Shah, Manesh B.; Hettich, Robert L.; Lindquist, Erika; Kalluri, Udaya C.; Gunter, Lee E.; Pennacchio, Christa; Tuskan, Gerald A.

    2011-03-02

    Small proteins (10–200 amino acids [aa] in length) encoded by short open reading frames (sORFs) play important regulatory roles in various biological processes, including tumor progression, stress response, flowering, and hormone signaling. However, ab initio discovery of small proteins has been relatively overlooked. Recent advances in deep transcriptome sequencing make it possible to efficiently identify sORFs at the genome level. In this study, we obtained 2.6 million expressed sequence tag (EST) reads from the Populus deltoides leaf transcriptome and reconstructed full-length transcripts from the EST sequences. We identified an initial set of 12,852 sORFs encoding proteins of 10–200 aa in length. Three computational approaches were then used to enrich for bona fide protein-coding sORFs from the initial sORF set: (1) coding-potential prediction, (2) evolutionary conservation between P. deltoides and other plant species, and (3) gene family clustering within P. deltoides. As a result, a high-confidence sORF candidate set containing 1469 genes was obtained. Analysis of the protein domains, non-protein-coding RNA motifs, sequence length distribution, and protein mass spectrometry data supported this high-confidence sORF set. In the high-confidence sORF candidate set, known protein domains were identified in 1282 genes (the higher-confidence sORF candidate set), out of which 611 genes, designated as the highest-confidence candidate sORF set, were supported by proteomics data. Of the 611 highest-confidence candidate sORF genes, 56 were new to the current Populus genome annotation. This study not only demonstrates that there are potential sORF candidates to be annotated in sequenced genomes, but also presents an efficient strategy for the discovery of sORFs in species with no genome annotation yet available.
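
    A plain-Python sketch of the core sORF idea, scanning a transcript for open reading frames that encode 10–200 amino acids; the toy transcript and the naive scan are illustrative and do not reproduce the study's pipeline.

      # Sketch: naive scan of a transcript for open reading frames encoding 10-200 aa
      # (start ATG to an in-frame stop). Illustration only, not the study's pipeline.
      STOPS = {"TAA", "TAG", "TGA"}

      def find_sorfs(seq, min_aa=10, max_aa=200):
          seq = seq.upper()
          sorfs = []
          for frame in range(3):
              codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
              start = None
              for idx, codon in enumerate(codons):
                  if codon == "ATG" and start is None:
                      start = idx
                  elif codon in STOPS and start is not None:
                      aa_len = idx - start          # length excluding the stop codon
                      if min_aa <= aa_len <= max_aa:
                          sorfs.append((frame, start * 3 + frame, aa_len))
                      start = None
          return sorfs

      transcript = "GGATGAAA" + "GCT" * 15 + "TAAGG"   # toy transcript with one small ORF
      print(find_sorfs(transcript))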

  11. A New Universe of Discoveries

    Science.gov (United States)

    Córdova, France A.

    2016-01-01

    The convergence of emerging advances in astronomical instruments, computational capabilities and talented practitioners (both professional and civilian) is creating an extraordinary new environment for making numerous fundamental discoveries in astronomy, ranging from the nature of exoplanets to understanding the evolution of solar systems and galaxies. The National Science Foundation is playing a critical role in supporting, stimulating, and shaping these advances. NSF is more than an agency of government or a funding mechanism for the infrastructure of science. The work of NSF is a sacred trust that every generation of Americans makes to those of the next generation, that we will build on the body of knowledge we inherit and continue to push forward the frontiers of science. We never lose sight of NSF's obligation to "explore the unexplored" and inspire all of humanity with the wonders of discovery. As the only Federal agency dedicated to the support of basic research and education in all fields of science and engineering, NSF has empowered discoveries across a broad spectrum of scientific inquiry for more than six decades. The result is fundamental scientific research that has had a profound impact on our nation's innovation ecosystem and kept our nation at the very forefront of the world's science-and-engineering enterprise.

  12. A creative approach to the development of an agenda for knowledge utilization: outputs from the 11th international knowledge utilization colloquium (KU 11).

    Science.gov (United States)

    Wilkinson, Joyce E; Rycroft-Malone, Jo; Davies, Huw T O; McCormack, Brendan

    2012-12-01

    A group of researchers and practitioners interested in advancing knowledge utilization met as a colloquium in Belfast (KU 11) and used a "world café" approach to exploit the social capital and shared understanding built up over previous events to consider the research and practice agenda. We considered three key areas of relevance to knowledge use: (1) understanding the nature of research use, influence and impact; (2) blended and collaborative approaches to knowledge production and use; and (3) supporting sustainability and spread of evidence-informed innovations. The approach enabled the development of artifacts that reflected the three areas and these were analyzed using a creative hermeneutic approach. The themes that emerged and which are outlined in this commentary are not mutually exclusive. There was much overlap in the discussions and therefore of the themes, reflecting the complex nature of knowledge translation work. The agenda that has emerged from KU 11 also reflects the participatory and creative approach in which the meeting was structured and focused, and therefore emphasizes the processual, relational and contingent nature of some of the challenges we face. The past 20 years has seen an explosion in activity around understanding KU, and we have learned much about the difficulties. Whilst the agenda for the next decade may be becoming clearer, colloquia such as KU 11, using creative and engaging approaches, have a key role to play in dissecting, articulating and sharing that agenda. In this way, we also build an ever-expanding international community that is dedicated to working towards increasing the chances of success for better patient care. © 2012 Sigma Theta Tau International.

  13. Formal concept analysis in knowledge discovery: A survey

    NARCIS (Netherlands)

    Poelmans, J.; Elzinga, P.; Viaene, S.; Dedene, G.; Croitoru, M.; Ferré, S.; Lukose, D.

    2010-01-01

    In this paper, we analyze the literature on Formal Concept Analysis (FCA) using FCA. We collected 702 papers published between 2003 and 2009 that mention Formal Concept Analysis in the abstract. We developed a knowledge browsing environment to support our literature analysis process. The pdf-files
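
    For readers unfamiliar with FCA itself, the following sketch enumerates the formal concepts of a tiny object-attribute context by closure; the context is invented, and the brute-force enumeration is only meant to illustrate the formalism, not the authors' browsing environment.

      # Sketch: enumerating the formal concepts of a tiny formal context (objects x
      # attributes) by closing every attribute subset. Purely illustrative of FCA.
      from itertools import combinations

      context = {                      # object -> set of attributes
          "paper1": {"FCA", "survey"},
          "paper2": {"FCA", "datamining"},
          "paper3": {"datamining"},
      }
      attributes = set().union(*context.values())

      def extent(attrs):               # objects having all the attributes
          return {o for o, a in context.items() if attrs <= a}

      def intent(objs):                # attributes shared by all the objects
          return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

      concepts = set()
      for r in range(len(attributes) + 1):
          for attrs in combinations(sorted(attributes), r):
              e = extent(set(attrs))
              concepts.add((frozenset(e), frozenset(intent(e))))

      for e, i in sorted(concepts, key=lambda c: len(c[0])):
          print(sorted(e), sorted(i))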

  14. Rationale for a natural products approach to herbicide discovery.

    Science.gov (United States)

    Dayan, Franck E; Owens, Daniel K; Duke, Stephen O

    2012-04-01

    Weeds continue to evolve resistance to all the known modes of herbicidal action, but no herbicide with a new target site has been commercialized in nearly 20 years. The so-called 'new chemistries' are simply molecules belonging to new chemical classes that have the same mechanisms of action as older herbicides (e.g. the protoporphyrinogen-oxidase-inhibiting pyrimidinedione saflufenacil or the very-long-chain fatty acid elongase targeting sulfonylisoxazoline herbicide pyroxasulfone). Therefore, the number of tools to manage weeds, and in particular those that can control herbicide-resistant weeds, is diminishing rapidly. There is an imminent need for truly innovative classes of herbicides that explore chemical spaces and interact with target sites not previously exploited by older active ingredients. This review proposes a rationale for a natural-products-centered approach to herbicide discovery that capitalizes on the structural diversity and ingenuity afforded by these biologically active compounds. The natural process of extended-throughput screening (high number of compounds tested on many potential target sites over long periods of times) that has shaped the evolution of natural products tends to generate molecules tailored to interact with specific target sites. As this review shows, there is generally little overlap between the mode of action of natural and synthetic phytotoxins, and more emphasis should be placed on applying methods that have proved beneficial to the pharmaceutical industry to solve problems in the agrochemical industry. Published 2012 by John Wiley & Sons, Ltd.

  15. Discovery stories in the science classroom

    Science.gov (United States)

    Arya, Diana Jaleh

    when the readers have little prior knowledge of a given topic. Further, ethnic minority groups of lower socio-economic level (i.e., Latin and African-American origins) demonstrated an even greater benefit from the SDN texts, suggesting that a scientist's story of discovery can help to close the gap in academic performance in science.

  16. Network-based Approaches in Pharmacology.

    Science.gov (United States)

    Boezio, Baptiste; Audouze, Karine; Ducrot, Pierre; Taboureau, Olivier

    2017-10-01

    In drug discovery, network-based approaches are expected to spotlight our understanding of drug action across multiple layers of information. On one hand, network pharmacology considers the drug response in the context of a cellular or phenotypic network. On the other hand, a chemical-based network is a promising alternative for characterizing the chemical space. Both can provide complementary support for the development of rational drug design and better knowledge of the mechanisms underlying the multiple actions of drugs. Recent progress in both concepts is discussed here. In addition, a network-based approach using drug-target-therapy data is introduced as an example. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
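
    A toy sketch of the network-based view: build a drug-target graph and derive a drug-drug similarity from shared targets (Jaccard index). The edges are invented, and this measure is only one simple choice among those discussed in this literature.

      # Sketch: a toy drug-target network and a drug-drug similarity derived from
      # shared targets (Jaccard index). Edges below are invented for illustration.
      import networkx as nx
      from itertools import combinations

      edges = [("drugA", "T1"), ("drugA", "T2"),
               ("drugB", "T2"), ("drugB", "T3"),
               ("drugC", "T4")]

      G = nx.Graph()
      G.add_edges_from(edges)
      drugs = [n for n in G if n.startswith("drug")]

      for d1, d2 in combinations(drugs, 2):
          t1, t2 = set(G[d1]), set(G[d2])        # neighbours = targets of each drug
          jac = len(t1 & t2) / len(t1 | t2)
          print(d1, d2, round(jac, 2))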

  17. Entrepreneurship as a legitimate field of knowledge.

    Science.gov (United States)

    Sánchez, José C

    2011-08-01

    Entrepreneurship as a research topic has been approached from disciplines such as economics, sociology or psychology. After justifying its study, we define the domain of the field, highlighting what has currently become its dominant paradigm, the process of the discovery, assessment and exploitation of opportunities. We then describe the main perspectives and offer an integrated conceptual framework that will allow us to legitimize the study of entrepreneurship as a field of knowledge in its own right. We believe that this framework will help researchers to better recognize the relations among the many factors forming part of the study of entrepreneurship. Lastly, we conclude with some brief reflections on the potential value of the framework presented.

  18. Four disruptive strategies for removing drug discovery bottlenecks.

    Science.gov (United States)

    Ekins, Sean; Waller, Chris L; Bradley, Mary P; Clark, Alex M; Williams, Antony J

    2013-03-01

    Drug discovery is shifting focus from industry to outside partners and, in the process, creating new bottlenecks. Technologies like high throughput screening (HTS) have moved to a larger number of academic and institutional laboratories in the USA, with little coordination or consideration of the outputs and creating a translational gap. Although there have been collaborative public-private partnerships in Europe to share pharmaceutical data, the USA has seemingly lagged behind and this may hold it back. Sharing precompetitive data and models may accelerate discovery across the board, while finding the best collaborators, mining social media and mobile approaches to open drug discovery should be evaluated in our efforts to remove drug discovery bottlenecks. We describe four strategies to rectify the current unsustainable situation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. How structure shapes dynamics: knowledge development in Wikipedia--a network multilevel modeling approach.

    Directory of Open Access Journals (Sweden)

    Iassen Halatchliyski

    Full Text Available Using a longitudinal network analysis approach, we investigate the structural development of the knowledge base of Wikipedia in order to explain the appearance of new knowledge. The data consists of the articles in two adjacent knowledge domains: psychology and education. We analyze the development of networks of knowledge consisting of interlinked articles at seven snapshots from 2006 to 2012 with an interval of one year between them. Longitudinal data on the topological position of each article in the networks is used to model the appearance of new knowledge over time. Thus, the structural dimension of knowledge is related to its dynamics. Using multilevel modeling as well as eigenvector and betweenness measures, we explain the significance of pivotal articles that are either central within one of the knowledge domains or boundary-crossing between the two domains at a given point in time for the future development of new knowledge in the knowledge base.
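
    The following sketch computes the two centrality measures named above (eigenvector and betweenness) on a toy network of interlinked articles with NetworkX; the articles and links are invented, and links are treated as undirected for simplicity.

      # Sketch: eigenvector and betweenness centrality on a toy network of
      # interlinked articles (invented data; links treated as undirected here).
      import networkx as nx

      links = [("Memory", "Learning"), ("Learning", "Instruction"),
               ("Instruction", "Curriculum"), ("Learning", "Motivation"),
               ("Motivation", "Memory")]

      G = nx.Graph(links)

      eig = nx.eigenvector_centrality(G, max_iter=1000)
      btw = nx.betweenness_centrality(G)

      for article in G:
          print(f"{article:12s} eigenvector={eig[article]:.3f} betweenness={btw[article]:.3f}")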

  20. Orphan diseases: state of the drug discovery art.

    Science.gov (United States)

    Volmar, Claude-Henry; Wahlestedt, Claes; Brothers, Shaun P

    2017-06-01

    Since 1983 more than 300 drugs have been developed and approved for orphan diseases. However, considering the development of novel diagnosis tools, the number of rare diseases vastly outpaces therapeutic discovery. Academic centers and nonprofit institutes are now at the forefront of rare disease R&D, partnering with pharmaceutical companies when academic researchers discover novel drugs or targets for specific diseases, thus reducing the failure risk and cost for pharmaceutical companies. Considerable progress has occurred in the art of orphan drug discovery, and a symbiotic relationship now exists between pharmaceutical industry, academia, and philanthropists that provides a useful framework for orphan disease therapeutic discovery. Here, the current state-of-the-art of drug discovery for orphan diseases is reviewed. Current technological approaches and challenges for drug discovery are considered, some of which can present somewhat unique challenges and opportunities in orphan diseases, including the potential for personalized medicine, gene therapy, and phenotypic screening.

  1. Knowledge Discovery and Pavement Performance : Intelligent Data Mining

    NARCIS (Netherlands)

    Miradi, M.

    2009-01-01

    The main goal of the study was to discover knowledge from data about asphalt road pavement problems in order to achieve a better understanding of their behavior and, through this understanding, to improve pavement quality and enhance its lifespan. Four pavement problems were chosen to be investigated; raveling

  2. Computer-Assisted Discovery and Proof

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2007-12-10

    With the advent of powerful, widely available mathematical software, combined with ever-faster computer hardware, we are approaching a day when both the discovery and the proof of mathematical facts can be done in a computer-assisted manner. This article presents several specific examples of this new paradigm in action.

  3. Building policy capacities: an interactive approach for linking knowledge to action in health promotion.

    Science.gov (United States)

    Rütten, Alfred; Gelius, Peter

    2014-09-01

    This article outlines a theoretical framework for an interactive, research-driven approach to building policy capacities in health promotion. First, it illustrates how two important issues in the recent public health debate, capacity building and linking scientific knowledge to policy action, are connected to each other theoretically. It then introduces an international study on an interactive approach to capacity building in health promotion policy. The approach combines the ADEPT model of policy capacities with a co-operative planning process to foster the exchange of knowledge between policy-makers and researchers, thus improving intra- and inter-organizational capacities. A regional-level physical activity promotion project involving governmental and public-law institutions, NGOs and university researchers serves as a case study to illustrate the potential of the approach for capacity building. Analysis and comparison with a similar local-level project indicate that the approach provides an effective means of linking scientific knowledge to policy action and to planning concrete measures for capacity building in health promotion, but that it requires sufficiently long timelines and adequate resources to achieve adequate implementation and sustainability. © The Author (2013). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Synthetic biology approaches in cancer immunotherapy, genetic network engineering, and genome editing.

    Science.gov (United States)

    Chakravarti, Deboki; Cho, Jang Hwan; Weinberg, Benjamin H; Wong, Nicole M; Wong, Wilson W

    2016-04-18

    Investigations into cells and their contents have provided evolving insight into the emergence of complex biological behaviors. Capitalizing on this knowledge, synthetic biology seeks to manipulate the cellular machinery towards novel purposes, extending discoveries from basic science to new applications. While these developments have demonstrated the potential of building with biological parts, the complexity of cells can pose numerous challenges. In this review, we will highlight the broad and vital role that the synthetic biology approach has played in applying fundamental biological discoveries in receptors, genetic circuits, and genome-editing systems towards translation in the fields of immunotherapy, biosensors, disease models and gene therapy. These examples are evidence of the strength of synthetic approaches, while also illustrating considerations that must be addressed when developing systems around living cells.

  5. A wavelet-based approach to the discovery of themes and sections in monophonic melodies

    DEFF Research Database (Denmark)

    Velarde, Gissel; Meredith, David

    We present the computational method submitted to the MIREX 2014 Discovery of Repeated Themes & Sections task, and the results on the monophonic version of the JKU Patterns Development Database. In the context of pattern discovery in monophonic music, the idea behind our method is that, with a good...
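
    As an illustration of the wavelet representation underlying such methods (not the submitted MIREX system), the following sketch applies a Haar wavelet decomposition to a toy pitch contour with PyWavelets.

      # Sketch: a Haar wavelet decomposition of a toy monophonic pitch contour,
      # illustrating the multi-scale representation a wavelet-based pattern
      # discovery method can work with (not the submitted MIREX system).
      import numpy as np
      import pywt

      # MIDI pitch numbers of a short melody, repeated to create a recurring pattern.
      motif = [60, 62, 64, 65, 67, 65, 64, 62]
      pitches = np.array(motif * 4, dtype=float)

      coeffs = pywt.wavedec(pitches, "haar", level=3)
      for lvl, c in enumerate(coeffs):
          label = "approximation" if lvl == 0 else f"detail level {len(coeffs) - lvl}"
          print(f"{label:>16s}: {np.round(c, 2)}")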

  6. Benefiting from Customer and Competitor Knowledge: A Market-Based Approach to Organizational Learning

    Science.gov (United States)

    Hoe, Siu Loon

    2008-01-01

    Purpose: The purpose of this paper is to review the organizational learning, market orientation and learning orientation concepts, highlight the importance of market knowledge to organizational learning and recommend ways in adopting a market-based approach to organizational learning. Design/methodology/approach: The extant organizational learning…

  7. Integrated analytical approaches towards toxic algal natural products discovery

    DEFF Research Database (Denmark)

    Larsen, Thomas Ostenfeld; Rasmussen, Silas Anselm; Gedsted Andersen, Mikael

    Microalgae are known to produce toxins which affect marine ecosystems. These include compounds active against competitors and grazers, and in many cases also against fish (1,2). Many strategies can be followed for the discovery of novel bioactive secondary metabolites from marine sources. We have previously...... is dereplication, where we use explorative solid-phase extraction (E-SPE) and UHPLC with state-of-the-art high-resolution mass spectrometry (waste time isolating and elucidating...

  8. The Discovery of Insulin: A Case Study of Scientific Methodology

    Science.gov (United States)

    Stansfield, William D.

    2012-01-01

    The nature of scientific research sometimes involves a trial-and-error procedure. Popular reviews of successful results from this approach often sanitize the story by omitting unsuccessful trials, thus painting the rosy impression that research simply follows a direct route from hypothesis to experiment to scientific discovery. The discovery of…

  9. Automated vocabulary discovery for geo-parsing online epidemic intelligence.

    Science.gov (United States)

    Keller, Mikaela; Freifeld, Clark C; Brownstein, John S

    2009-11-24

    Automated surveillance of the Internet provides a timely and sensitive method for alerting on global emerging infectious disease threats. HealthMap is part of a new generation of online systems designed to monitor and visualize, on a real-time basis, disease outbreak alerts as reported by online news media and public health sources. HealthMap is of specific interest for national and international public health organizations and international travelers. A particular task that makes such surveillance useful is the automated discovery of the geographic references contained in the retrieved outbreak alerts. This task is sometimes referred to as "geo-parsing". A typical approach to geo-parsing would demand an expensive training corpus of alerts manually tagged by a human. Given that human readers perform this kind of task by using both their lexical and contextual knowledge, we developed an approach which relies on a relatively small expert-built gazetteer, thus limiting the need for human input, but focuses on learning the context in which geographic references appear. We show, in a set of experiments, that this approach exhibits a substantial capacity to discover geographic locations outside of its initial lexicon. The results of this analysis provide a framework for future automated global surveillance efforts that reduce manual input and improve timeliness of reporting.
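
    A minimal sketch of the gazetteer-plus-context idea: match place names from a small gazetteer and collect the surrounding tokens that a learning component could generalize from. The gazetteer and alert text are invented; this is not the HealthMap implementation.

      # Sketch: a tiny gazetteer-driven geo-parser that also collects the local context
      # of each match, which is what a learning component could generalize from.
      import re

      gazetteer = {"Nairobi": "Kenya", "Jakarta": "Indonesia", "Lima": "Peru"}

      def geo_parse(text, window=3):
          tokens = re.findall(r"\w+", text)
          hits = []
          for i, tok in enumerate(tokens):
              if tok in gazetteer:
                  context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
                  hits.append({"place": tok, "country": gazetteer[tok], "context": context})
          return hits

      alert = "Health officials in Nairobi reported a cluster of cholera cases on Tuesday."
      for hit in geo_parse(alert):
          print(hit)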

  10. Discovery: Under the Microscope at Kennedy Space Center

    Science.gov (United States)

    Howard, Philip M.

    2013-01-01

    The National Aeronautics & Space Administration (NASA) is known for discovery, exploration, and the advancement of knowledge. Since the days of Leeuwenhoek, microscopy has been at the forefront of discovery and knowledge. No truer is that statement than today at Kennedy Space Center (KSC), where microscopy plays a major role in contamination identification and is an integral part of failure analysis. Space exploration involves flight hardware undergoing rigorous "visually clean" inspections at every step of processing. The unknown contaminants that are discovered in these inspections can directly impact a mission by decreasing the performance of sensors and scientific detectors on spacecraft and satellites, acting as micrometeorites, damaging critical sealing surfaces, and causing hazards to the crew of manned missions. This talk will discuss how microscopy has played a major role in all aspects of spaceport operations at KSC. Case studies will highlight years of analysis at the Materials Science Division, including facility and payload contamination for the Navigation Signal Timing and Ranging Global Positioning Satellite (NAVSTAR GPS) missions, quality control monitoring of monomethyl hydrazine fuel procurement for launch vehicle operations, Shuttle Solid Rocket Booster (SRB) foam processing failure analysis, and Space Shuttle Main Engine Cut-off (ECO) flight sensor anomaly analysis. What I hope to share with my fellow microscopists is some of the excitement of microscopy and how its discoveries have supported hardware processing, which has helped enable the successful launch of vehicles and space flight missions here at Kennedy Space Center.

  11. Semantic Data Integration and Knowledge Management to Represent Biological Network Associations.

    Science.gov (United States)

    Losko, Sascha; Heumann, Klaus

    2017-01-01

    The vast quantities of information generated by academic and industrial research groups are reflected in a rapidly growing body of scientific literature and exponentially expanding resources of formalized data, including experimental data, originating from a multitude of "-omics" platforms, phenotype information, and clinical data. For bioinformatics, the challenge remains to structure this information so that scientists can identify relevant information, to integrate this information as specific "knowledge bases," and to formalize this knowledge across multiple scientific domains to facilitate hypothesis generation and validation. Here we report on progress made in building a generic knowledge management environment capable of representing and mining both explicit and implicit knowledge and, thus, generating new knowledge. Risk management in drug discovery and clinical research is used as a typical example to illustrate this approach. In this chapter we introduce techniques and concepts (such as ontologies, semantic objects, typed relationships, contexts, graphs, and information layers) that are used to represent complex biomedical networks. The BioXM™ Knowledge Management Environment is used as an example to demonstrate how a domain such as oncology is represented and how this representation is utilized for research.

  12. An Ensemble Approach to Knowledge-Based Intensity-Modulated Radiation Therapy Planning

    Directory of Open Access Journals (Sweden)

    Jiahan Zhang

    2018-03-01

    Knowledge-based planning (KBP) utilizes experienced planners' knowledge embedded in prior plans to estimate the optimal achievable dose volume histogram (DVH) of new cases. In the regression-based KBP framework, previously planned patients' anatomical features and DVHs are extracted, and prior knowledge is summarized as the regression coefficients that transform features into organ-at-risk DVH predictions. In our study, we find that different regression methods work better in different settings. To improve the robustness of KBP models, we propose an ensemble method that combines the strengths of various linear regression models, including stepwise, lasso, elastic net, and ridge regression. In the ensemble approach, we first obtain individual model prediction metadata using in-training-set leave-one-out cross validation. A constrained optimization is subsequently performed to decide the individual model weights. The metadata are also used to filter out impactful training-set outliers. We evaluate our method on a fresh set of retrospectively retrieved, anonymized prostate intensity-modulated radiation therapy (IMRT) cases and head and neck IMRT cases. The proposed approach is more robust against small training-set size, wrongly labeled cases, and dosimetrically inferior plans than the individual models. In summary, we believe the improved robustness makes the proposed method more suitable for clinical settings than individual models.
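
    The abstract outlines two computational steps: leave-one-out predictions collected as metadata, then a constrained weight fit. The sketch below only illustrates that general recipe, not the authors' implementation; the choice of scikit-learn estimators, the squared-error objective and all names are assumptions.

        # Illustrative sketch: blend several linear DVH-metric regressors with non-negative
        # weights that sum to one, fitted on leave-one-out (LOO) prediction metadata.
        import numpy as np
        from scipy.optimize import minimize
        from sklearn.linear_model import LinearRegression, LassoCV, RidgeCV, ElasticNetCV
        from sklearn.model_selection import LeaveOneOut, cross_val_predict

        def fit_ensemble(X, y):
            models = [LinearRegression(), LassoCV(), RidgeCV(), ElasticNetCV()]
            # Column j holds model j's LOO prediction for every training case (the "metadata").
            meta = np.column_stack([cross_val_predict(m, X, y, cv=LeaveOneOut()) for m in models])

            # Constrained optimization: w >= 0, sum(w) = 1, minimizing the blended squared error.
            loss = lambda w: np.mean((meta @ w - y) ** 2)
            w0 = np.full(len(models), 1.0 / len(models))
            res = minimize(loss, w0, bounds=[(0, 1)] * len(models),
                           constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
            weights = res.x

            fitted = [m.fit(X, y) for m in models]
            predict = lambda X_new: np.column_stack([m.predict(X_new) for m in fitted]) @ weights
            return weights, predict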

  13. Context-sensitive service discovery experimental prototype and evaluation

    DEFF Research Database (Denmark)

    Balken, Robin; Haukrogh, Jesper; L. Jensen, Jens

    2007-01-01

    The number of different networks and services available to users today is increasing. This introduces the need for a way to locate relevant services and sort out irrelevant ones in the process of discovering the services available to a user. This paper describes and evaluates a prototype of an automated discovery...... and selection system, which locates services relevant to a user, based on his/her context and the context of the available services. The prototype includes a multi-level, hierarchical system approach and the introduction of entities called User-nodes, Super-nodes and Root-nodes. These entities separate...... the network into domains that handle the complex distributed service discovery, which is based on dynamically changing context information. In the prototype, a method for performing context-sensitive service discovery has been realised. The service discovery part utilizes UPnP, which has been expanded in order...

  14. Volatility Discovery

    DEFF Research Database (Denmark)

    Dias, Gustavo Fruet; Scherrer, Cristina; Papailias, Fotis

    The price discovery literature investigates how homogeneous securities traded on different markets incorporate information into prices. We take this literature one step further and investigate how these markets contribute to stochastic volatility (volatility discovery). We formally show...... that the realized measures from homogeneous securities share a fractional stochastic trend, which is a combination of the price and volatility discovery measures. Furthermore, we show that volatility discovery is associated with the way that market participants process information arrival (market sensitivity......). Finally, we compute volatility discovery for 30 actively traded stocks in the U.S. and report that NYSE and Arca dominate Nasdaq....

  15. Nuclear Knowledge Creation and Transfer in Enriched Learning Environments: A Practical Approach

    International Nuclear Information System (INIS)

    Ruiz, F.; Gonzalez, J.; Delgado, J.L.

    2016-01-01

    Full text: Technology, the social nature of learning and generational learning styles are shaping new models of training that are changing the roles of instructors, the channels of communication and the learning content of the knowledge to be transferred. New training methodologies are being used in primary and secondary education, and “Vintage” classroom learning does not meet the educational requirements of these methodologies; it is therefore necessary to incorporate them into the knowledge management processes used in the nuclear industry. This paper describes a practical approach to an enriched learning environment for creating and transferring nuclear knowledge. (author)

  16. Drug discovery for male subfertility using high-throughput screening: a new approach to an unsolved problem.

    Science.gov (United States)

    Martins da Silva, Sarah J; Brown, Sean G; Sutton, Keith; King, Louise V; Ruso, Halil; Gray, David W; Wyatt, Paul G; Kelly, Mark C; Barratt, Christopher L R; Hope, Anthony G

    2017-05-01

    Can pharma drug discovery approaches be utilized to transform investigation into novel therapeutics for male infertility? High-throughput screening (HTS) is a viable approach to much-needed drug discovery for male factor infertility. There is both huge demand and a genuine clinical need for new treatment options for infertile men. However, the time, effort and resources required for drug discovery are currently exorbitant, due to the unique challenges of the cellular, physical and functional properties of human spermatozoa and the lack of an appropriate assay platform. Spermatozoa were obtained from healthy volunteer research donors and subfertile patients undergoing IVF/ICSI at a hospital-assisted reproductive techniques clinic between January 2012 and November 2016. A HTS assay was developed and validated using intracellular calcium ([Ca2+]i) as a surrogate for motility in human spermatozoa. Calcium fluorescence was detected using a Flexstation microplate reader (384-well platform) and compared with responses evoked by progesterone, a compound known to modify a number of biologically relevant behaviours in human spermatozoa. Hit compounds identified following a single-point drug screen (10 μM) of an ion channel-focussed library assembled by the University of Dundee Drug Discovery Unit were rescreened to ensure potency using standard 10-point half-logarithm concentration curves, and tested for purity and integrity using liquid chromatography and mass spectrometry. Hit compounds were grouped by structure-activity relationships and five representative compounds were then further investigated for direct effects on spermatozoa, using computer-assisted sperm assessment, a sperm penetration assay and whole-cell patch clamping. Of the 3242 ion channel library ligands screened, 384 compounds (11.8%) elicited a statistically significant increase in calcium fluorescence, with greater than 3× median absolute deviation above the baseline. Seventy-four compounds eliciting ≥50% increase
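
    The hit-calling rule quoted in the abstract (a response more than 3× the median absolute deviation above baseline) is simple to express in code. The sketch below is a generic illustration of that rule, not the authors' analysis pipeline; array names and the control-well layout are assumptions.

        # Flag "hit" compounds whose calcium-fluorescence response exceeds the plate
        # baseline by more than 3x the median absolute deviation (MAD).
        import numpy as np

        def call_hits(responses, baseline_wells, mad_factor=3.0):
            """responses: per-compound signal changes; baseline_wells: vehicle-control wells."""
            base_median = np.median(baseline_wells)
            mad = np.median(np.abs(baseline_wells - base_median))
            threshold = base_median + mad_factor * mad
            return responses > threshold          # boolean mask of hit compounds

        # In the reported screen, roughly 11.8% of 3242 ligands passed such a threshold.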

  17. A Query Evaluation Approach using Opinions of Turkish Financial Market Professionals

    Directory of Open Access Journals (Sweden)

    Bora Uğurlu

    2015-08-01

    People who do not have expertise in the financial area may not see the relationship between numerical and linguistic data. In our study, a knowledge discovery approach using Turkish natural language processing is recommended in order to respond to meaningful queries and classify them with high accuracy. The query corpus consists of randomly selected unique keywords. A quantitative evaluation is performed in order to measure the classification performance. Experimental results indicate that our proposed approach is sufficiently consistent and able to make categorical classifications correctly. The approach highlights the relationship between numerical and linguistic data obtained from the Turkish financial market.

  18. Integration of asynchronous knowledge sources in a novel speech recognition framework

    OpenAIRE

    Van hamme, Hugo

    2008-01-01

    Van hamme H., ''Integration of asynchronous knowledge sources in a novel speech recognition framework'', Proceedings ITRW on speech analysis and processing for knowledge discovery, 4 pp., June 2008, Aalborg, Denmark.

  19. 40 CFR 300.300 - Phase I-Discovery or notification.

    Science.gov (United States)

    2010-07-01

    ... 40 CFR, Protection of Environment, Vol. 27 (revised as of 2010-07-01), § 300.300: Phase I - Discovery or notification. ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), SUPERFUND... The person in charge of a vessel or a facility shall, as soon as he or she has knowledge of any discharge...

  20. Strategic Management Model with Lens of Knowledge Management and Competitive Intelligence: A Review Approach

    OpenAIRE

    Shujahat, Muhammad; Hussain, Saddam; Javed, Sammar; Muhammad, Imran Malik; Thursamy, Ramayah; Ali, Junaid

    2017-01-01

    Purpose:\\ud First purpose of this study is to discuss the synergic and separate use of knowledge and\\ud intelligence, via knowledge management and competitive intelligence, in each stage of strategic\\ud management process. Second purpose is to discuss the implications of each stage of strategic\\ud management process for knowledge management and competitive intelligence and vice versa.\\ud Methodology/Design/Approach:\\ud A systematic literature review was performed within timeframe of 2000 to 2...

  1. The fragile x mental retardation syndrome 20 years after the FMR1 gene discovery: an expanding universe of knowledge.

    Science.gov (United States)

    Rousseau, François; Labelle, Yves; Bussières, Johanne; Lindsay, Carmen

    2011-08-01

    The fragile X mental retardation (FXMR) syndrome is one of the most frequent causes of mental retardation. Affected individuals display a wide range of additional characteristic features including behavioural and physical phenotypes, and the extent to which individuals are affected is highly variable. For these reasons, elucidation of the pathophysiology of this disease has been an important challenge to the scientific community. 1991 marks the year of the discovery of both the FMR1 gene mutations involved in this disease, and of their dynamic nature. Although a mouse model for the disease has been available for 16 years and extensive research has been performed on the FMR1 protein (FMRP), we still understand little about how the disease develops, and no treatment has yet been shown to be effective. In this review, we summarise current knowledge on FXMR with an emphasis on the technical challenges of molecular diagnostics, on its prevalence and dynamics among populations, and on the potential of screening for FMR1 mutations.

  2. The Fragile X Mental Retardation Syndrome 20 Years After the FMR1 Gene Discovery: an Expanding Universe of Knowledge

    Science.gov (United States)

    Rousseau, François; Labelle, Yves; Bussières, Johanne; Lindsay, Carmen

    2011-01-01

    The fragile X mental retardation (FXMR) syndrome is one of the most frequent causes of mental retardation. Affected individuals display a wide range of additional characteristic features including behavioural and physical phenotypes, and the extent to which individuals are affected is highly variable. For these reasons, elucidation of the pathophysiology of this disease has been an important challenge to the scientific community. 1991 marks the year of the discovery of both the FMR1 gene mutations involved in this disease, and of their dynamic nature. Although a mouse model for the disease has been available for 16 years and extensive research has been performed on the FMR1 protein (FMRP), we still understand little about how the disease develops, and no treatment has yet been shown to be effective. In this review, we summarise current knowledge on FXMR with an emphasis on the technical challenges of molecular diagnostics, on its prevalence and dynamics among populations, and on the potential of screening for FMR1 mutations. PMID:21912443

  3. Literature-related discovery techniques applied to ocular disease: a vitreous restoration example

    NARCIS (Netherlands)

    Kostoff, Ronald N.; Los, Leonoor I.

    2013-01-01

    Purpose of review: Literature-related discovery and innovation (LRDI) is a text-mining approach for bridging unconnected disciplines to hypothesize radical discovery. Application to medical problems involves identifying key disease symptoms, and identifying causes and treatments for those symptoms.

  4. The effectiveness of the TPS setting in the discovery learning and problem-based learning approaches in teaching circle material at junior high school (SMP)

    Directory of Open Access Journals (Sweden)

    Rahmi Hidayati

    2017-05-01

    The purpose of this study was to describe the effectiveness of the Think Pair Share (TPS) setting in the discovery learning and problem-based learning approaches in terms of student achievement, mathematical communication skills, and interpersonal skills of the students. This study was a quasi-experimental study using the pretest-posttest nonequivalent group design. The research population comprised all Year VIII students of SMP Negeri 1 Yogyakarta. The research sample was randomly selected from eight classes; two classes were selected. The instruments used in this study were a learning achievement test, a test of mathematical communication skills, and a student interpersonal skills questionnaire. To test the effectiveness of the TPS setting in the discovery learning and problem-based learning approaches, a one-sample t-test was carried out. Then, to investigate the difference in effectiveness between the TPS setting in the discovery learning approach and in the problem-based learning approach, a Multivariate Analysis of Variance (MANOVA) was carried out. The research findings indicate that the TPS setting in the discovery learning approach and in the problem-based learning (PBL) approach is effective in terms of learning achievement, mathematical communication skills, and interpersonal skills of the students. There was no difference in effectiveness between the TPS setting in the discovery learning approach and in problem-based learning (PBL) in terms of learning achievement, mathematical communication skills, and interpersonal skills of the students. Keywords: TPS setting in the discovery learning approach, in problem-based learning, academic achievement, mathematical communication skills, and interpersonal skills of the student

  5. The Discovery of the Existence of the Absolute in Existential Metaphysics

    Directory of Open Access Journals (Sweden)

    Andrzej Maryniarczyk

    2016-12-01

    The article shows the way in which the discovery of the existence of the Absolute is made in existential metaphysics. This existential metaphysics provides us with knowledge about reality. It shows the content of the experience of being, the content given to us in the transcendentals. It also unveils the foundation of the rational order, which is given to us in the discovery of the first principles of the existence of being and of cognition. Metaphysics provides us also with knowledge concerning the structure of being. It shows us being as composite and plural; being which is “insufficient” in its structure and calls for an explanation. That being—that is problematized in existence, given to us in experience, and incompletely intelligible in itself—lifts us toward its ultimate “complement” and understanding, to the Absolute.

  6. Next-Generation Sequencing Approaches in Genome-Wide Discovery of Single Nucleotide Polymorphism Markers Associated with Pungency and Disease Resistance in Pepper.

    Science.gov (United States)

    Manivannan, Abinaya; Kim, Jin-Hee; Yang, Eun-Young; Ahn, Yul-Kyun; Lee, Eun-Su; Choi, Sena; Kim, Do-Sun

    2018-01-01

    Pepper is an economically important horticultural plant that has been widely used for its pungency and spicy taste in worldwide cuisines. Therefore, the domestication of pepper has been carried out since antiquity. To meet the growing demand for pepper with high quality, organoleptic properties, nutraceutical content, and disease tolerance, genomics-assisted breeding techniques can be incorporated to develop novel pepper varieties with desired traits. The application of next-generation sequencing (NGS) approaches has reshaped plant breeding technology, especially in the area of molecular marker-assisted breeding. The availability of genomic information aids in a deeper understanding of the molecular mechanisms behind vital physiological processes. In addition, NGS methods facilitate the genome-wide discovery of DNA-based markers linked to key genes involved in important biological phenomena. Among the molecular markers, single nucleotide polymorphisms (SNPs) offer various benefits in comparison with other existing DNA-based markers. The present review concentrates on the impact of NGS approaches on the discovery of useful SNP markers associated with pungency and disease resistance in pepper. The information provided in the current endeavor can be utilized for the betterment of pepper breeding in the future.

  7. Next-Generation Sequencing Approaches in Genome-Wide Discovery of Single Nucleotide Polymorphism Markers Associated with Pungency and Disease Resistance in Pepper

    Directory of Open Access Journals (Sweden)

    Abinaya Manivannan

    2018-01-01

    Pepper is an economically important horticultural plant that has been widely used for its pungency and spicy taste in worldwide cuisines. Therefore, the domestication of pepper has been carried out since antiquity. To meet the growing demand for pepper with high quality, organoleptic properties, nutraceutical content, and disease tolerance, genomics-assisted breeding techniques can be incorporated to develop novel pepper varieties with desired traits. The application of next-generation sequencing (NGS) approaches has reshaped plant breeding technology, especially in the area of molecular marker-assisted breeding. The availability of genomic information aids in a deeper understanding of the molecular mechanisms behind vital physiological processes. In addition, NGS methods facilitate the genome-wide discovery of DNA-based markers linked to key genes involved in important biological phenomena. Among the molecular markers, single nucleotide polymorphisms (SNPs) offer various benefits in comparison with other existing DNA-based markers. The present review concentrates on the impact of NGS approaches on the discovery of useful SNP markers associated with pungency and disease resistance in pepper. The information provided in the current endeavor can be utilized for the betterment of pepper breeding in the future.

  8. An Integrated Open Approach to Capturing Systematic Knowledge for Manufacturing Process Innovation Based on Collective Intelligence

    Directory of Open Access Journals (Sweden)

    Gangfeng Wang

    2018-02-01

    Process innovation plays a vital role in the manufacturing realization of increasingly complex new products, especially in the context of sustainable development and cleaner production. Knowledge-based innovation design can inspire designers' creative thinking; however, the existing scattered knowledge has not yet been properly captured and organized for Computer-Aided Process Innovation (CAPI). Therefore, this paper proposes an integrated approach to tackle this non-trivial issue. By analyzing the design process of CAPI and the technical features of open innovation, a novel holistic paradigm of process innovation knowledge capture based on collective intelligence (PIKC-CI) is constructed from the perspective of the knowledge life cycle. Then, a multi-source innovation knowledge fusion algorithm based on semantic element reconfiguration is applied to form new public knowledge. To ensure the credibility and orderliness of innovation knowledge refinement, a collaborative editing strategy based on knowledge locks and a knowledge–social trust degree is explored. Finally, a knowledge management system, MPI-OKCS, integrating the proposed techniques is implemented on the pre-built CAPI general platform, and a welding process innovation example is provided to illustrate the feasibility of the proposed approach. It is expected that our work will lay the foundation for future knowledge-inspired CAPI and smart process planning.

  9. Knowledge extraction from evolving spiking neural networks with rank order population coding.

    Science.gov (United States)

    Soltic, Snjezana; Kasabov, Nikola

    2010-12-01

    This paper demonstrates how knowledge can be extracted from evolving spiking neural networks with rank order population coding. Knowledge discovery is a very important feature of intelligent systems. Yet, a disproportionately small amount of research is centered on the issue of knowledge extraction from spiking neural networks, which are considered to be the third generation of artificial neural networks. The lack of knowledge representation compatibility is becoming a major detriment to end users of these networks. We show that high-level knowledge can be obtained from evolving spiking neural networks. More specifically, we propose a method for fuzzy rule extraction from an evolving spiking network with rank order population coding. The proposed method was used for knowledge discovery on two benchmark taste recognition problems, where the knowledge learnt by an evolving spiking neural network was extracted in the form of zero-order Takagi-Sugeno fuzzy IF-THEN rules.
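
    For readers unfamiliar with the rule format named at the end of the abstract, the sketch below shows how zero-order Takagi-Sugeno fuzzy IF-THEN rules are evaluated once extracted: each rule fires in proportion to Gaussian memberships of the inputs and contributes a constant consequent. The rule parameters and the two-rule example are invented for illustration; this is not the paper's extraction algorithm.

        # Evaluate zero-order Takagi-Sugeno rules: IF x1 is A1 AND x2 is A2 THEN y = c.
        import numpy as np

        def gauss(x, centre, sigma):
            return np.exp(-0.5 * ((x - centre) / sigma) ** 2)

        def ts_zero_order(x, rules):
            """rules: list of (centres, sigmas, consequent_constant) per rule."""
            firing = np.array([np.prod(gauss(x, c, s)) for c, s, _ in rules])  # AND = product
            consequents = np.array([const for _, _, const in rules])
            return float(np.dot(firing, consequents) / firing.sum())           # weighted average

        # Hypothetical two-rule system over a 2-D feature space:
        rules = [(np.array([0.2, 0.8]), np.array([0.1, 0.1]), 1.0),
                 (np.array([0.7, 0.3]), np.array([0.1, 0.1]), 0.0)]
        print(ts_zero_order(np.array([0.25, 0.75]), rules))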

  10. Commonsense knowledge extraction for Persian language: A combinatory approach

    Directory of Open Access Journals (Sweden)

    Mehdi Moradi

    2015-12-01

    Putting human commonsense knowledge into computers has always been a long-standing dream of artificial intelligence (AI). Tens of millions of dollars and many years have been spent so that computers could know that "objects fall, not rise" and that "running is faster than walking". Large databases were built, automated and semi-automated methods were introduced, and volunteers' efforts were utilized to achieve this, but an automated, high-throughput and low-noise method for commonsense collection still remains the holy grail of AI. The aim of this study was to build a commonsense knowledge ontology using three approaches, namely the Hearst method, machine translation, and structured resources. Using three Persian corpora and applying the aforementioned methods, we could extract 7 different relations, and 70,000 assertions were extracted. Finally, the average accuracies of the Hearst, MT and structured-resource approaches were 75%, 75% and 100%, respectively.
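
    Of the three approaches the abstract lists, the Hearst method is the most mechanical: lexico-syntactic patterns such as "X such as Y1, Y2 and Y3" yield IsA assertions. The toy sketch below illustrates that idea with an English sentence and a single pattern; the paper applies comparable patterns to Persian corpora, and the regex and example here are stand-ins rather than the authors' patterns.

        # Toy Hearst-pattern extractor for "<hypernym> such as <hyponym list>".
        import re

        HEARST_SUCH_AS = re.compile(
            r"(?P<hyper>\w+) such as (?P<hypos>\w+(?:, \w+)*(?: (?:and|or) \w+)?)")

        def extract_isa(sentence):
            pairs = []
            for m in HEARST_SUCH_AS.finditer(sentence):
                hypernym = m.group("hyper")
                for hyponym in re.split(r",\s*| and | or ", m.group("hypos")):
                    if hyponym.strip():
                        pairs.append((hyponym.strip(), "IsA", hypernym))
            return pairs

        print(extract_isa("Domesticated animals such as dogs, cats and goats appear often."))
        # [('dogs', 'IsA', 'animals'), ('cats', 'IsA', 'animals'), ('goats', 'IsA', 'animals')]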

  11. Knowledge repositories in knowledge cities: institutions, conventions and knowledge subnetworks

    NARCIS (Netherlands)

    Cheng, P.; Choi, C.J.; Chen, Shu; Eldomiaty, T.I.; Millar-Schijf, Carla C.J.M.

    2004-01-01

    Abstract: Suggests another dimension of research in, and application of, knowledge management. This theoretical paper adopts a conceptual, multi-disciplinary approach. First, knowledge can be stored and transmitted via institutions. Second, knowledge "subnetworks" or smaller groupings within larger

  12. Knowledge intensive organisations: on the frontiers of knowledge management: Guest editorial

    NARCIS (Netherlands)

    Millar-Schijf, Carla C.J.M.; Lockett, Martin; Mahon, John F.

    2016-01-01

    Purpose This paper aims to further research on leadership and knowledge management through formal knowledge strategies in knowledge-intensive organizations (KIOs), and analyse knowledge management challenges and approaches within KIOs, especially tacit knowledge. Design/methodology/approach This

  13. Open Science Meets Stem Cells: A New Drug Discovery Approach for Neurodegenerative Disorders.

    Science.gov (United States)

    Han, Chanshuai; Chaineau, Mathilde; Chen, Carol X-Q; Beitel, Lenore K; Durcan, Thomas M

    2018-01-01

    Neurodegenerative diseases are a challenge for drug discovery, as the biological mechanisms are complex and poorly understood, with a paucity of models that faithfully recapitulate these disorders. Recent advances in stem cell technology have provided a paradigm shift, providing researchers with tools to generate human induced pluripotent stem cells (iPSCs) from patient cells. With the potential to generate any human cell type, we can now generate human neurons and develop "first-of-their-kind" disease-relevant assays for small molecule screening. Now that the tools are in place, it is imperative that we accelerate discoveries from the bench to the clinic. Using traditional closed-door research systems raises barriers to discovery, by restricting access to cells, data and other research findings. Thus, a new strategy is required, and the Montreal Neurological Institute (MNI) and its partners are piloting an "Open Science" model. One signature initiative will be that the MNI biorepository will curate and disseminate patient samples in a more accessible manner through open transfer agreements. This feeds into the MNI open drug discovery platform, focused on developing industry-standard assays with iPSC-derived neurons. All cell lines, reagents and assay findings developed in this open fashion will be made available to academia and industry. By removing the obstacles many universities and companies face in distributing patient samples and assay results, our goal is to accelerate translational medical research and the development of new therapies for devastating neurodegenerative disorders.

  14. Open Science Meets Stem Cells: A New Drug Discovery Approach for Neurodegenerative Disorders

    Directory of Open Access Journals (Sweden)

    Chanshuai Han

    2018-02-01

    Neurodegenerative diseases are a challenge for drug discovery, as the biological mechanisms are complex and poorly understood, with a paucity of models that faithfully recapitulate these disorders. Recent advances in stem cell technology have provided a paradigm shift, providing researchers with tools to generate human induced pluripotent stem cells (iPSCs) from patient cells. With the potential to generate any human cell type, we can now generate human neurons and develop “first-of-their-kind” disease-relevant assays for small molecule screening. Now that the tools are in place, it is imperative that we accelerate discoveries from the bench to the clinic. Using traditional closed-door research systems raises barriers to discovery, by restricting access to cells, data and other research findings. Thus, a new strategy is required, and the Montreal Neurological Institute (MNI) and its partners are piloting an “Open Science” model. One signature initiative will be that the MNI biorepository will curate and disseminate patient samples in a more accessible manner through open transfer agreements. This feeds into the MNI open drug discovery platform, focused on developing industry-standard assays with iPSC-derived neurons. All cell lines, reagents and assay findings developed in this open fashion will be made available to academia and industry. By removing the obstacles many universities and companies face in distributing patient samples and assay results, our goal is to accelerate translational medical research and the development of new therapies for devastating neurodegenerative disorders.

  15. Ethnobotanical approaches of traditional medicine studies: some experiences from Asia.

    Science.gov (United States)

    Sheng-Ji, P

    2001-01-01

    Ethnobotany, as a research field of science, has been widely used for the documentation of indigenous knowledge on the use of plants and for providing an inventory of useful plants from local flora in Asian countries. Plants that are used for traditional herbal medicine in different countries are an important part of these studies. However, in some countries in recent years, ethnobotanical studies have been used for the discovery of new drugs and new drug development. In general, experiences gained from ethnobotanical approaches of traditional medicinal studies in China and Himalayan countries have helped drug production and new drug development. At the same time, in many cases, over-harvesting, degradation of medical plants, and loss of traditional medical knowledge in local communities are common problems in these resource areas. Issues of indigenous knowledge, intellectual property rights, and uncontrolled transboundary trade in medicinal plants occur frequently in the region. This paper discusses ethnobotanical approaches of traditional medicinal studies, in reference to experiences from China and Himalayan countries, with an emphasis on the conservation of traditional medical knowledge and medical plant resources.

  16. The calculus a genetic approach

    CERN Document Server

    Toeplitz, Otto

    2007-01-01

    When first published posthumously in 1963, this book presented a radically different approach to the teaching of calculus.  In sharp contrast to the methods of his time, Otto Toeplitz did not teach calculus as a static system of techniques and facts to be memorized. Instead, he drew on his knowledge of the history of mathematics and presented calculus as an organic evolution of ideas beginning with the discoveries of Greek scholars, such as Archimedes, Pythagoras, and Euclid, and developing through the centuries in the work of Kepler, Galileo, Fermat, Newton, and Leibniz. Through this unique a

  17. Medicinal chemistry inspired fragment-based drug discovery.

    Science.gov (United States)

    Lanter, James; Zhang, Xuqing; Sui, Zhihua

    2011-01-01

    Lead generation can be a very challenging phase of the drug discovery process. The two principal methods for this stage of research are blind screening and rational design. Among the rational or semirational design approaches, fragment-based drug discovery (FBDD) has emerged as a useful tool for the generation of lead structures. It is particularly powerful as a complement to high-throughput screening approaches when the latter failed to yield viable hits for further development. Engagement of medicinal chemists early in the process can accelerate the progression of FBDD efforts by incorporating drug-friendly properties in the earliest stages of the design process. Medium-chain acyl-CoA synthetase 2b and ketohexokinase are chosen as examples to illustrate the importance of close collaboration of medicinal chemists, crystallography, and modeling. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. Using Best Practices to Extract, Organize, and Reuse Embedded Decision Support Content Knowledge Rules from Mature Clinical Systems.

    Science.gov (United States)

    DesAutels, Spencer J; Fox, Zachary E; Giuse, Dario A; Williams, Annette M; Kou, Qing-Hua; Weitkamp, Asli; Neal R, Patel; Bettinsoli Giuse, Nunzia

    2016-01-01

    Clinical decision support (CDS) knowledge, embedded over time in mature medical systems, presents an interesting and complex opportunity for information organization, maintenance, and reuse. To have a holistic view of all decision support requires an in-depth understanding of each clinical system as well as expert knowledge of the latest evidence. This approach to clinical decision support presents an opportunity to unify and externalize the knowledge within rules-based decision support. Driven by an institutional need to prioritize decision support content for migration to new clinical systems, the Center for Knowledge Management and Health Information Technology teams applied their unique expertise to extract content from individual systems, organize it through a single extensible schema, and present it for discovery and reuse through a newly created Clinical Support Knowledge Acquisition and Archival Tool (CS-KAAT). CS-KAAT can build and maintain the underlying knowledge infrastructure needed by clinical systems.

  19. Using Best Practices to Extract, Organize, and Reuse Embedded Decision Support Content Knowledge Rules from Mature Clinical Systems

    Science.gov (United States)

    DesAutels, Spencer J.; Fox, Zachary E.; Giuse, Dario A.; Williams, Annette M.; Kou, Qing-hua; Weitkamp, Asli; Neal R, Patel; Bettinsoli Giuse, Nunzia

    2016-01-01

    Clinical decision support (CDS) knowledge, embedded over time in mature medical systems, presents an interesting and complex opportunity for information organization, maintenance, and reuse. To have a holistic view of all decision support requires an in-depth understanding of each clinical system as well as expert knowledge of the latest evidence. This approach to clinical decision support presents an opportunity to unify and externalize the knowledge within rules-based decision support. Driven by an institutional need to prioritize decision support content for migration to new clinical systems, the Center for Knowledge Management and Health Information Technology teams applied their unique expertise to extract content from individual systems, organize it through a single extensible schema, and present it for discovery and reuse through a newly created Clinical Support Knowledge Acquisition and Archival Tool (CS-KAAT). CS-KAAT can build and maintain the underlying knowledge infrastructure needed by clinical systems. PMID:28269846

  20. Approaching socio-technical issues in Knowledge Communication

    DEFF Research Database (Denmark)

    Kampf, Constance; Islas Sedano, Carolina

    2008-01-01

    This paper looks at the connection between technology, knowledge management and knowledge communication theory from a process perspective. Knowledge management and knowledge communication processes are examined through the iterations in creating project goals and objectives which connect the social...... and objectives with respect to knowledge communication theory, demonstrating the potential of knowledge communication concepts for socio-technical design processes, as well as the implications of socio-technical design processes in extending our understanding of knowledge communication....

  1. Facilitating Work Based Learning Projects: A Business Process Oriented Knowledge Management Approach

    NARCIS (Netherlands)

    Miao, Yongwu; Sloep, Peter; Koper, Rob

    2009-01-01

    Miao, Y., Sloep, P. B., & Koper, R. (2009). Facilitating Work Based Learning Projects: A Business Process Oriented Knowledge Management Approach. Presented at the 'Open workshop of TENCompetence - Rethinking Learning and Employment at a Time of Economic Uncertainty-event'. November, 19, 2009,

  2. Truth and Credibility in Sincere Policy Analysis: Alternative Approaches for the Production of Policy-Relevant Knowledge.

    Science.gov (United States)

    Bozeman, Barry; Landsbergen, David

    1989-01-01

    Two competing approaches to policy analysis are distinguished: a credibility approach, and a truth approach. According to the credibility approach, the policy analyst's role is to search for plausible argument rather than truth. Each approach has pragmatic tradeoffs in fulfilling the goal of providing usable knowledge to decision makers. (TJH)

  3. Business Process Management – A Traditional Approach Versus a Knowledge Based Approach

    Directory of Open Access Journals (Sweden)

    Roberto Paiano

    2015-12-01

    Enterprise management represents a heterogeneous aggregate of both resources and assets that need to be coordinated and orchestrated in order to reach the goals related to the business mission. The influences and forces that may affect this process, and that should therefore be considered, are not concentrated in the business environment alone but relate to the entire operational context of a company. For this reason, business processes must be as versatile and flexible as possible with respect to the changes that occur within the whole operational context of a company. Considering the supportive role that information systems play in favour of Business Process Management (BPM), it is also essential to implement a constant, continuous and quick mechanism for aligning the information system with the evolution of business processes. In particular, such a mechanism must act on BPM systems in order to keep them aligned and compliant with both context changes and regulations. In order to facilitate this alignment mechanism, companies already rely on the support offered by specific solutions, such as knowledge bases. In this context, a possible solution might be the approach we propose, which is based on a specific framework called Process Management System. Our methodology implements knowledge-base support for business experts, which is not limited to the BPM operating phases but also includes the engineering and prototyping activities of the corresponding information system. This paper aims to compare and evaluate a traditional BPM approach with respect to the approach we propose. In effect, such analysis aims to emphasize the shortcomings of the traditional methodology, especially with respect to the alignment between business processes and information systems, along with their compliance with the context domain and regulations.

  4. West Nile Virus Drug Discovery

    Directory of Open Access Journals (Sweden)

    Siew Pheng Lim

    2013-12-01

    The outbreak of West Nile virus (WNV) in 1999 in the USA, and its continued spread throughout the Americas, parts of Europe, the Middle East and Africa, underscored the need for WNV antiviral development. Here, we review the current status of WNV drug discovery. A number of approaches have been used to search for inhibitors of WNV, including viral infection-based screening, enzyme-based screening, structure-based virtual screening, structure-based rational design, and antibody-based therapy. These efforts have yielded inhibitors of viral or cellular factors that are critical for viral replication. For small molecule inhibitors, no promising preclinical candidate has been developed; most of the inhibitors could not even be advanced to the stage of hit-to-lead optimization due to their poor drug-like properties. However, several inhibitors developed for related members of the family Flaviviridae, such as dengue virus and hepatitis C virus, exhibited cross-inhibition of WNV, suggesting the possibility of repurposing these antivirals for WNV treatment. Most promisingly, therapeutic antibodies have shown excellent efficacy in mouse models; one such antibody has been advanced into clinical trials. The knowledge accumulated during the past fifteen years has provided a better rationale for the ongoing WNV and other flavivirus antiviral development.

  5. Improve Data Mining and Knowledge Discovery through the use of MatLab

    Science.gov (United States)

    Shaykahian, Gholan Ali; Martin, Dawn Elliott; Beil, Robert

    2011-01-01

    Data mining is widely used to mine business, engineering, and scientific data. Data mining uses pattern-based queries, searches, or other analyses of one or more electronic databases/datasets in order to discover or locate a predictive pattern or anomaly indicative of system failure, criminal or terrorist activity, etc. There are various algorithms, techniques and methods used to mine data, including neural networks, genetic algorithms, decision trees, nearest neighbor methods, rule induction, association analysis, slice and dice, segmentation, and clustering. These algorithms, techniques and methods, used to detect patterns in a dataset, have been used in the development of numerous open source and commercially available products and technology for data mining. Data mining is best realized when latent information in a large quantity of stored data is discovered. No one technique solves all data mining problems; the challenge is to select algorithms or methods appropriate to strengthen data/text mining and trending within given datasets. In recent years, throughout industry, academia and government agencies, thousands of data systems have been designed and tailored to serve specific engineering and business needs. Many of these systems use databases with relational algebra and structured query language to categorize and retrieve data. In these systems, data analyses are limited and require prior explicit knowledge of metadata and database relations, lacking exploratory data mining and discovery of latent information. This presentation introduces MatLab (MATrix LABoratory), an engineering and scientific data analysis tool for performing data mining. MatLab was originally intended to perform purely numerical calculations (a glorified calculator). Now, in addition to having hundreds of mathematical functions, it is a programming language with hundreds of built-in standard functions and numerous available toolboxes. MatLab's ease of data processing, visualization and

  6. Panorama 2014 - New oil and gas discoveries

    International Nuclear Information System (INIS)

    Vially, Roland; Hureau, Geoffroy

    2013-12-01

    Spending on exploration increased significantly in 2012, and this growth should continue into 2013. Over a period of ten years, exploration budgets have increased five-fold, leading to major discoveries in regions as yet unexplored. In 2012, 25 billion barrels of oil equivalent (Gboe) were revealed. This is more than the average for the whole decade, but less than the amount for the previous year. Although knowledge of the volumes that have been discovered is still very fragmented, they should continue to fall into 2013. The main reason lies in the fact that spending on exploration is being shifted towards assessing discoveries made in previous years in the particularly prolific basins of Brazil and East Africa, while the exploration of border regions - such as the West African pre-salt formation - is still only in its early stages. (authors)

  7. ADVANCED APPROACH TO PRODUCTION WORKFLOW COMPOSITION ON ENGINEERING KNOWLEDGE PORTALS

    OpenAIRE

    Novogrudska, Rina; Kot, Tatyana; Globa, Larisa; Schill, Alexander

    2016-01-01

    Background. Engineering knowledge portals concentrate a great number of partial workflows. Such workflows are composed into a general workflow in order to perform real, complex production tasks. The characteristics of partial workflows and the structure of the general workflow have not been studied sufficiently, which makes dynamic composition of the general production workflow impossible. Objective. To create an approach to dynamic composition of the general production workflow based on the partial wor...

  8. A SURVEY ON OPTIMIZATION APPROACHES TO SEMANTIC SERVICE DISCOVERY TOWARDS AN INTEGRATED SOLUTION

    Directory of Open Access Journals (Sweden)

    Chellammal Surianarayanan

    2012-07-01

    The process of semantic service discovery using an ontology reasoner such as Pellet is time consuming. This restricts the usage of web services in real-time applications with dynamic composition requirements. As the performance of semantic service discovery is crucial in service composition, it should be optimized. Various optimization methods have been proposed to improve the performance of semantic discovery. In this work, we investigate the existing optimization methods and broadly classify optimization mechanisms into two categories, namely optimization by efficient reasoning and optimization by efficient matching. Optimization by efficient matching is further classified into subcategories such as optimization by clustering, optimization by inverted indexing, optimization by caching, optimization by hybrid methods, optimization by efficient data structures and optimization by efficient matching algorithms. Based on a detailed study of the different methods, an integrated optimization infrastructure along with a matching method is proposed to improve the performance of the semantic matching component. To achieve better optimization, the proposed method integrates the effects of caching, clustering and indexing. Theoretical aspects of the performance evaluation of the proposed method are discussed.
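
    Two of the surveyed optimizations, inverted indexing and caching, are easy to picture as a pre-filter placed in front of a slow semantic matcher. The sketch below is a schematic illustration only; the registry contents, concept annotations and the absence of an actual reasoner are simplifying assumptions, not the paper's integrated infrastructure.

        # Inverted index over service concept annotations plus a query-result cache.
        from collections import defaultdict
        from functools import lru_cache

        SERVICES = {  # hypothetical registry: service name -> concept annotations
            "WeatherByCity": {"City", "Temperature"},
            "GeocodeAddress": {"Address", "City", "Coordinates"},
        }

        index = defaultdict(set)                  # concept -> services mentioning it
        for name, concepts in SERVICES.items():
            for c in concepts:
                index[c].add(name)

        @lru_cache(maxsize=1024)                  # caching: repeated queries skip matching
        def discover(required_concepts: frozenset):
            candidates = set()
            for c in required_concepts:
                candidates |= index.get(c, set())
            # Only this pre-filtered candidate set would be handed to the expensive reasoner.
            return tuple(sorted(candidates))

        print(discover(frozenset({"City"})))      # ('GeocodeAddress', 'WeatherByCity')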

  9. On reliable discovery of molecular signatures

    Directory of Open Access Journals (Sweden)

    Björkegren Johan

    2009-01-01

    Background Molecular signatures are sets of genes, proteins, genetic variants or other variables that can be used as markers for a particular phenotype. Reliable signature discovery methods could yield valuable insight into cell biology and mechanisms of human disease. However, it is currently not clear how to control error rates such as the false discovery rate (FDR) in signature discovery. Moreover, signatures for cancer gene expression have been shown to be unstable, that is, difficult to replicate in independent studies, casting doubt on their reliability. Results We demonstrate that with modern prediction methods, signatures that yield accurate predictions may still have a high FDR. Further, we show that even signatures with low FDR may fail to replicate in independent studies due to limited statistical power. Thus, neither stability nor predictive accuracy is relevant when FDR control is the primary goal. We therefore develop a general statistical hypothesis testing framework that for the first time provides FDR control for signature discovery. Our method is demonstrated to be correct in simulation studies. When applied to five cancer data sets, the method was able to discover molecular signatures with 5% FDR in three cases, while two data sets yielded no significant findings. Conclusion Our approach enables reliable discovery of molecular signatures from genome-wide data with current sample sizes. The statistical framework developed herein is potentially applicable to a wide range of prediction problems in bioinformatics.
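
    As a generic point of reference for the kind of guarantee the abstract is after, the sketch below implements the classical Benjamini-Hochberg procedure for FDR control over a list of p-values. The paper develops its own hypothesis-testing framework for signature discovery, which this sketch does not reproduce.

        # Benjamini-Hochberg: reject the k smallest p-values, where k is the largest i
        # with p_(i) <= i * alpha / m, controlling the expected false discovery rate at alpha.
        import numpy as np

        def benjamini_hochberg(pvalues, alpha=0.05):
            p = np.asarray(pvalues)
            m = len(p)
            order = np.argsort(p)
            thresholds = alpha * np.arange(1, m + 1) / m
            passed = p[order] <= thresholds
            k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
            mask = np.zeros(m, dtype=bool)
            mask[order[:k]] = True                # True = declared a discovery
            return mask

        print(benjamini_hochberg([0.001, 0.02, 0.04, 0.3, 0.9]))   # [ True  True False False False]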

  10. Facilitating Work Based Learning Projects: A Business Process Oriented Knowledge Management Approach

    NARCIS (Netherlands)

    Miao, Yongwu; Sloep, Peter; Koper, Rob

    2009-01-01

    Miao, Y., Sloep, P. B., & Koper, R. (2010). Facilitating Work Based Learning Projects: A Business Process Oriented Knowledge Management Approach. In D. Griffiths & R. Koper (Eds.), Rethinking Learning and Employment at a Time of Economic Uncertainty. Proceedings of the 6th TENCompetence Open

  11. The discovery of radioactivity: a bend in sciences history

    International Nuclear Information System (INIS)

    Dautray, R.

    1997-01-01

    One hundred years after the discovery of radioactivity, it is possible to see the consequences of this discovery for science. Four consequences are studied in this article. The first is the acquisition of new knowledge about matter and the universe. Second, radioactivity has given us a clock of world history, opening the past to us and showing how this past forged the present world. Third, radioactivity has given us tracers, markers which make it possible to probe the internal structure of the human body, as well as those of the Earth and the solar system, and to unveil their mechanisms. The fourth consequence is the set of applications: electro-nuclear energy, national defence, and nuclear medicine. (N.C.)

  12. Providing Fast Discovery in D2D Communication with Full Duplex Technology

    DEFF Research Database (Denmark)

    Gatnau, Marta; Berardinelli, Gilberto; Mahmood, Nurul Huda

    2016-01-01

    technology to provide D2D fast discovery. Such a framework provides an algorithm to estimate the number of neighbor devices and to dynamically decide the transmission probability, in order to adapt to network changes and meet the 10-millisecond target. Finally, a signaling scheme is proposed to reduce......In Direct Device-to-Device (D2D) communication, the device awareness procedure known as the discovery phase is required prior to the exchange of data. This work considers autonomous devices where the infrastructure is not involved in the discovery procedure. Commonly, the transmission of the discovery message...... the network interference. Results show that our framework performs better than a static approach, reducing the time it takes to complete the discovery phase. In addition, supporting full duplex allows the discovery time to be further reduced compared to half-duplex transmission mode....
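
    The abstract only states that the framework estimates the number of neighbors and adapts the transmission probability from that estimate. As a heavily hedged illustration of that general idea (and not the authors' algorithm), one classic way to do it is a pseudo-Bayesian backlog estimate driven by per-slot feedback, with the transmission probability set to roughly one over the estimate.

        # Pseudo-Bayesian-style neighbor estimate and adaptive transmission probability.
        def update_estimate(n_est, outcome):
            """outcome: 'idle', 'success' or 'collision' observed in the last discovery slot."""
            if outcome == "collision":
                n_est += 1.39                  # approx. 1/(e - 2), the classic collision increment
            else:
                n_est = max(1.0, n_est - 1.0)  # idle or success: one fewer contender assumed
            return n_est

        def tx_probability(n_est):
            return min(1.0, 1.0 / n_est)       # transmit less aggressively as neighbors increase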

  13. Neptune's Discovery: Le Verrier, Adams, and the Assignment of Credit

    Science.gov (United States)

    Sheehan, William

    2011-01-01

    As one of the most significant achievements of 19th century astronomy, the discovery of Neptune has been the subject of a vast literature. A large part of this literature--beginning with the period immediately after the optical discovery in Berlin--has been the obsession with assigning credit to the two men who attempted to calculate the planet's position (and initially this played out against the international rivalry between France and England). Le Verrier and Adams occupied much different positions in the Scientific Establishments of their respective countries; had markedly different personalities; and approached the investigation using different methods. A psychiatrist and historian of astronomy tries to provide some new contexts to the familiar story of the discovery of Neptune, and argues that the personalities of these two men played crucial roles in their approaches to the problem they set themselves and the way others reacted to their stimuli. Adams had features of high-functioning autism, while Le Verrier's domineering, obsessive, orderly personality--though it allowed him to be immensely productive--eventually led to serious difficulties with his peers (and an outright revolt). Though it took extraordinary smarts to calculate the position of Neptune, the discovery required social skills that these men lacked--and thus the process to discovery was more bumbling and adventitious than it might have been. The discovery of Neptune occurred at a moment when astronomy was changing from that of heroic individuals to team collaborations involving multiple experts, and remains an object lesson in the sociological aspects of scientific endeavor.

  14. A diagnostic expert system for the nuclear power plant based on the hybrid knowledge approach

    International Nuclear Information System (INIS)

    Yang, J.O.; Chang, S.H.

    1989-01-01

    A diagnostic expert system, the hybrid knowledge based plant operation supporting system (HYPOSS), which has been developed to support operators' decision-making during transients of a nuclear power plant, is described. HYPOSS adopts the hybrid knowledge approach, which combines both shallow and deep knowledge to take advantage of the merits of both approaches. In HYPOSS, four types of knowledge are used according to the steps of the diagnosis procedure: structural, functional, behavioral, and heuristic knowledge. The structural and functional knowledge is represented by three fundamental primitives and five types of functions, respectively. The behavioral knowledge is represented using constraints. The inference procedure is based on the human problem-solving behavior modeled in HYPOSS. Event-based operational guidelines are provided to the operator according to the diagnosed results. If the exact anomalies cannot be identified while some of the critical safety functions are challenged, function-based operational guidelines are provided to the operator. For the validation of HYPOSS, several tests have been performed based on data produced by a plant simulator. The results of the validation studies show good applicability of HYPOSS to the anomaly diagnosis of nuclear power plants.

  15. Participatory approach to the development of a knowledge base for problem-solving in diabetes self-management.

    Science.gov (United States)

    Cole-Lewis, Heather J; Smaldone, Arlene M; Davidson, Patricia R; Kukafka, Rita; Tobin, Jonathan N; Cassells, Andrea; Mynatt, Elizabeth D; Hripcsak, George; Mamykina, Lena

    2016-01-01

    To develop an expandable knowledge base of reusable knowledge related to self-management of diabetes that can be used as a foundation for patient-centric decision support tools. The structure and components of the knowledge base were created in participatory design with academic diabetes educators using knowledge acquisition methods. The knowledge base was validated using a scenario-based approach with practicing diabetes educators and individuals with diabetes recruited from Community Health Centers (CHCs) serving economically disadvantaged communities and ethnic minorities in New York. The knowledge base includes eight glycemic control problems, over 150 behaviors known to contribute to these problems coupled with contextual explanations, and over 200 specific action-oriented self-management goals for correcting problematic behaviors, with corresponding motivational messages. The validation of the knowledge base suggested a high level of completeness and accuracy, and identified improvements in cultural appropriateness. These were addressed in new iterations of the knowledge base. The resulting knowledge base is theoretically grounded, incorporates practical and evidence-based knowledge used by diabetes educators in practice settings, and allows for personally meaningful choices by individuals with diabetes. The participatory design approach helped researchers to capture the implicit knowledge of practicing diabetes educators and make it explicit and reusable. The knowledge base proposed here is an important step towards the development of new-generation patient-centric decision support tools for facilitating chronic disease self-management. While this knowledge base specifically targets diabetes, its overall structure and composition can be generalized to other chronic conditions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. On the growth of scientific knowledge: yeast biology as a case study.

    Directory of Open Access Journals (Sweden)

    Xionglei He

    2009-03-01

    The tempo and mode of human knowledge expansion is an enduring yet poorly understood topic. Through a temporal network analysis of three decades of discoveries of protein interactions and genetic interactions in baker's yeast, we show that the growth of scientific knowledge is exponential over time and that important subjects tend to be studied earlier. However, expansions of different domains of knowledge are highly heterogeneous and episodic such that the temporal turnover of knowledge hubs is much greater than expected by chance. Familiar subjects are preferentially studied over new subjects, leading to a reduced pace of innovation. While research is increasingly done in teams, the number of discoveries per researcher is greater in smaller teams. These findings reveal collective human behaviors in scientific research and help design better strategies in future knowledge exploration.

  17. Automated vocabulary discovery for geo-parsing online epidemic intelligence

    Directory of Open Access Journals (Sweden)

    Freifeld Clark C

    2009-11-01

    Background Automated surveillance of the Internet provides a timely and sensitive method for alerting on global emerging infectious disease threats. HealthMap is part of a new generation of online systems designed to monitor and visualize, on a real-time basis, disease outbreak alerts as reported by online news media and public health sources. HealthMap is of specific interest for national and international public health organizations and international travelers. A particular task that makes such surveillance useful is the automated discovery of the geographic references contained in the retrieved outbreak alerts. This task is sometimes referred to as "geo-parsing". A typical approach to geo-parsing would demand an expensive training corpus of alerts manually tagged by a human. Results Given that human readers perform this kind of task by using both their lexical and contextual knowledge, we developed an approach which relies on a relatively small expert-built gazetteer, thus limiting the need for human input, but focuses on learning the context in which geographic references appear. We show, in a set of experiments, that this approach exhibits a substantial capacity to discover geographic locations outside of its initial lexicon. Conclusion The results of this analysis provide a framework for future automated global surveillance efforts that reduce manual input and improve timeliness of reporting.
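
    The approach described in the abstract has two ingredients: a small expert-built gazetteer and a learned model of the context in which place names occur, so that locations outside the initial lexicon can still be recognised. The code below is only a schematic illustration of that idea; the gazetteer, training alerts, window size and classifier choice are placeholders, not HealthMap's actual resources or the authors' model.

        # Use a tiny gazetteer to label training tokens, then learn a context classifier
        # that can also fire on place names that are not in the gazetteer.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression

        GAZETTEER = {"paris", "geneva", "cairo"}          # small expert-built lexicon (assumed)

        def token_contexts(text, size=2):
            tokens = text.lower().split()
            for i, tok in enumerate(tokens):
                window = tokens[max(0, i - size):i] + tokens[i + 1:i + 1 + size]
                yield tok.strip(".,"), " ".join(window)

        def train(alerts):
            contexts, labels = [], []
            for alert in alerts:
                for tok, window in token_contexts(alert):
                    contexts.append(window)
                    labels.append(1 if tok in GAZETTEER else 0)   # gazetteer supplies the labels
            vec = CountVectorizer()
            clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(contexts), labels)
            return vec, clf

        def geoparse(alert, vec, clf):
            return [tok for tok, window in token_contexts(alert)
                    if clf.predict(vec.transform([window]))[0] == 1]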

  18. Management of knowledge across generations: preventing knowledge loss, enabling knowledge readiness

    International Nuclear Information System (INIS)

    Day, John A.

    2012-01-01

    J. Day argued that the preservation of records is a necessary, but not a sufficient, condition to enable intelligent future decision making and management of nuclear waste. He distinguishes knowledge management from information management. Information without the potential to act on it is information for its own sake. He believes that knowledge will be a key factor for the generations that follow us. Records need knowledge, and knowledge needs records. A single representation of knowledge can be dangerous. Knowledge is multifaceted and complex, which necessitates a holistic approach. Throughout the presentation the concepts of 'knowledge readiness' and 'knowledge mothballing' (the process of knowing, forgetting and relearning) were proposed. Based on experiences at Sellafield, the actions of knowledge audit mapping (including technical, societal and historical knowledge), knowledge loss risk assessment (although we would like to, we cannot hold on to everything, and should thus take a risk-based approach, asking ourselves what is at stake if we delete certain parts of the information), and knowledge retention for the long-term management of a nuclear facility were presented. During the discussion, the link between knowledge and behaviour was raised. It was argued that the better informed people are, the less likely they are to make mistakes.

  19. Novel approach of fragment-based lead discovery applied to renin inhibitors.

    Science.gov (United States)

    Tawada, Michiko; Suzuki, Shinkichi; Imaeda, Yasuhiro; Oki, Hideyuki; Snell, Gyorgy; Behnke, Craig A; Kondo, Mitsuyo; Tarui, Naoki; Tanaka, Toshimasa; Kuroita, Takanobu; Tomimoto, Masaki

    2016-11-15

    A novel approach was conducted for fragment-based lead discovery and applied to renin inhibitors. The biochemical screening of a fragment library against renin provided the hit fragment, which showed a characteristic interaction pattern with the target protein. The hit fragment bound only to the S1, S3, and S3 SP (S3 subpocket) sites without any interactions with the catalytic aspartate residues (Asp32 and Asp215, pepsin numbering). Prior to making chemical modifications to the hit fragment, we first identified its essential binding sites by utilizing the hit fragment's substructures. Second, we created a new and smaller scaffold, which better occupied the identified essential S3 and S3 SP sites, by utilizing library synthesis with high-throughput chemistry. We then revisited the S1 site and efficiently explored a good building block attaching to the scaffold with library synthesis. In the library syntheses, the binding modes of each pivotal compound were determined and confirmed by X-ray crystallography, and the library was strategically designed by a structure-based computational approach not only to obtain a more active compound but also to obtain an informative structure-activity relationship (SAR). As a result, we obtained a lead compound offering synthetic accessibility as well as improved in vitro ADMET profiles. The fragments and compounds possessing a characteristic interaction pattern provided new structural insights into renin's active site and the potential to create a new generation of renin inhibitors. In addition, we demonstrated that our FBDD strategy, integrating a highly sensitive biochemical assay, X-ray crystallography, high-throughput synthesis, and in silico library design aimed at fragment morphing at the initial stage, was effective in elucidating a pocket profile and a promising lead compound. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Comment on "drug discovery: turning the titanic".

    Science.gov (United States)

    Lesterhuis, W Joost; Bosco, Anthony; Lake, Richard A

    2014-03-26

    The pathobiology-based approach to research and development has been the dominant paradigm for successful drug discovery over the last decades. We propose that the molecular and cellular events that govern a resolving, rather than an evolving, disease may reveal new druggable pathways.

  1. New perspectives on innovative drug discovery: an overview.

    Science.gov (United States)

    Pan, Si Yuan; Pan, Shan; Yu, Zhi-Ling; Ma, Dik-Lung; Chen, Si-Bao; Fong, Wang-Fun; Han, Yi-Fan; Ko, Kam-Ming

    2010-01-01

    Despite advances in technology, drug discovery is still a lengthy, expensive, difficult, and inefficient process, with a low rate of success. Today, advances in biomedical science have brought about great strides in therapeutic interventions for a wide spectrum of diseases. The advent of biochemical techniques and cutting-edge bio/chemical technologies has made available a plethora of practical approaches to drug screening and design. In 2010, the total sales of the global pharmaceutical market will reach 600 billion US dollars and expand to over 975 billion dollars by 2013. The aim of this review is to summarize available information on contemporary approaches and strategies in the discovery of novel therapeutic agents, especially from the complementary and alternative medicines, including natural products and traditional remedies such as Chinese herbal medicine.

  2. Network-based discovery through mechanistic systems biology. Implications for applications--SMEs and drug discovery: where the action is.

    Science.gov (United States)

    Benson, Neil

    2015-08-01

    Phase II attrition remains the most important challenge for drug discovery. Tackling the problem requires improved understanding of the complexity of disease biology. Systems biology approaches to this problem can, in principle, deliver this. This article reviews the reports of the application of mechanistic systems models to drug discovery questions and discusses the added value. Although we are on the journey to the virtual human, the length, path and rate of learning from this remain an open question. Success will be dependent on the will to invest and make the most of the insight generated along the way. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Paths of Discovery: Comparing the Search Effectiveness of EBSCO Discovery Service, Summon, Google Scholar, and Conventional Library Resources

    Science.gov (United States)

    Asher, Andrew D.; Duke, Lynda M.; Wilson, Suzanne

    2013-01-01

    In 2011, researchers at Bucknell University and Illinois Wesleyan University compared the search efficacy of Serial Solutions Summon, EBSCO Discovery Service, Google Scholar, and conventional library databases. Using a mixed-methods approach, qualitative and quantitative data were gathered on students' usage of these tools. Regardless of the…

  4. A Road Map for Knowledge Management Systems Design Using Axiomatic Design Approach

    Directory of Open Access Journals (Sweden)

    Houshmand Mahmoud

    2017-01-01

    Full Text Available Successful design and implementation of knowledge management systems have been the main concern of many researchers. It has been reported that more than 50% of knowledge management systems have failed; therefore, it is necessary to seek a new and comprehensive scientific approach to design and implement them. In the design and implementation of a knowledge management system, it is necessary to know 'what we want to achieve' and 'how and by what processes we will achieve it'. A literature review was conducted and axiomatic design theory was selected for this purpose. For the first time, this paper develops a conceptual design of knowledge management systems by means of a hierarchical structure composed of 'Functional Requirements' (FRs), 'Design Parameters' (DPs), and 'Process Variables' (PVs). The intersection of several studies conducted in the field of knowledge management systems has been used to design the knowledge management model. It reveals that the six essential bases of knowledge management are organizational culture, organizational structure, human resources, management and leadership, information technology, and the external environment of the organization; these are represented as top-level DPs in the structure of the model. These essential factors are decomposed to lower levels by means of zigzagging. The model was implemented in the Tehran Urban and Suburban Railway Operation Corporation (TUSROC) and the results were very promising. The most important result of this study is a roadmap for designing successful and efficient knowledge management systems.

  5. Classification and Comparison of Architecture Evolution Reuse Knowledge - A Systematic Review

    DEFF Research Database (Denmark)

    Ahmad, Aakash; Jamshidi, Pooyan; Pahl, Claus

    2014-01-01

    patterns (34% of selected studies) represent a predominant solution, followed by evolution styles (25%) and adaptation strategies and policies (22%) to enable application of reuse knowledge. Empirical methods for acquisition of reuse knowledge represent 19% including pattern discovery, configuration...

  6. A modelling approach to study learning processes with a focus on knowledge creation

    NARCIS (Netherlands)

    Naeve, Ambjorn; Yli-Luoma, Pertti; Kravcik, Milos; Lytras, Miltiadis

    2008-01-01

    Naeve, A., Yli-Luoma, P., Kravcik, M., & Lytras, M. D. (2008). A modelling approach to study learning processes with a focus on knowledge creation. International Journal Technology Enhanced Learning, 1(1/2), 1–34.

  7. Discovery of Cationic Polymers for Non-viral Gene Delivery using Combinatorial Approaches

    Science.gov (United States)

    Barua, Sutapa; Ramos, James; Potta, Thrimoorthy; Taylor, David; Huang, Huang-Chiao; Montanez, Gabriela; Rege, Kaushal

    2015-01-01

    Gene therapy is an attractive treatment option for diseases of genetic origin, including several cancers and cardiovascular diseases. While viruses are effective vectors for delivering exogenous genes to cells, concerns related to insertional mutagenesis, immunogenicity, lack of tropism, decay and high production costs necessitate the discovery of non-viral methods. Significant efforts have been focused on cationic polymers as non-viral alternatives for gene delivery. Recent studies have employed combinatorial syntheses and parallel screening methods for enhancing the efficacy of gene delivery, biocompatibility of the delivery vehicle, and overcoming cellular level barriers as they relate to polymer-mediated transgene uptake, transport, transcription, and expression. This review summarizes and discusses recent advances in combinatorial syntheses and parallel screening of cationic polymer libraries for the discovery of efficient and safe gene delivery systems. PMID:21843141

  8. Equation Discovery for Financial Forecasting in Context of Islamic Banking

    Institute of Scientific and Technical Information of China (English)

    Amer Alzaidi; Dimitar Kazakov

    2010-01-01

    This paper describes an equation discovery approach based on machine learning, using LAGRAMGE as the equation discovery tool, with two sources of input: a dataset and a model presented as a context-free grammar. The approach searches a large range of potential equations constrained by the specified model, and the parameters of each equation are fitted to find the best equations. The experiments are illustrated with commodity prices from the London Metal Exchange for the period of January-October 2009. The output of the experiments is a large number of equations; some of the equations show that the predicted rates follow the market trends in perfect patterns.
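    The grammar-constrained search described here can be illustrated with a much smaller stand-in: a handful of candidate functional forms plays the role of the grammar, each form's parameters are fitted to the data, and the forms are ranked by residual error. This is a hypothetical miniature for illustration only, not LAGRAMGE, and the toy data below are invented.

    ```python
    # Miniature grammar-style equation discovery (hypothetical sketch, not LAGRAMGE itself).
    # Candidate forms play the role of the grammar; parameters are fitted by least squares
    # and candidates are ranked by raw residual error (a real system would also penalize
    # model complexity).
    import numpy as np
    from scipy.optimize import curve_fit

    CANDIDATES = {
        "linear":      lambda t, a, b: a * t + b,
        "quadratic":   lambda t, a, b, c: a * t**2 + b * t + c,
        "exponential": lambda t, a, b: a * np.exp(b * t),
    }

    def discover(t, y):
        """Fit every candidate form and return (error, name, params) sorted by error."""
        results = []
        for name, f in CANDIDATES.items():
            try:
                params, _ = curve_fit(f, t, y, maxfev=10000)
            except RuntimeError:            # fit failed to converge; skip this form
                continue
            sse = float(np.sum((f(t, *params) - y) ** 2))
            if np.isfinite(sse):
                results.append((sse, name, params))
        return sorted(results, key=lambda r: r[0])

    # Toy series standing in for the commodity-price data used in the paper.
    t = np.arange(10, dtype=float)
    y = 2.0 * t + 5.0 + np.random.default_rng(0).normal(0.0, 0.1, t.size)
    sse, name, params = discover(t, y)[0]
    print(f"best form: {name}, sse = {sse:.4f}, params = {np.round(params, 3)}")
    ```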

  9. Automatic discovery of cross-family sequence features associated with protein function

    Directory of Open Access Journals (Sweden)

    Krings Andrea

    2006-01-01

    knowledge discovery in annotated sequence data. The technique is able to identify functionally important sequence features and does not require expert knowledge. By viewing protein function from a sequence perspective, the approach is also suitable for discovering unexpected links between biological processes, such as the recently discovered role of ubiquitination in transcription.

  10. A knowledge representation approach using fuzzy cognitive maps for better navigation support in an adaptive learning system.

    Science.gov (United States)

    Chrysafiadi, Konstantina; Virvou, Maria

    2013-12-01

    In this paper, a knowledge representation approach for an adaptive and/or personalized tutoring system is presented. The domain knowledge should be represented in a more realistic way in order to allow the adaptive and/or personalized tutoring system to deliver the learning material to each individual learner dynamically, taking into account her/his learning needs and her/his different learning pace. To achieve this, the domain knowledge representation has to depict the possible increase or decrease of the learner's knowledge. Considering that the domain concepts that constitute the learning material are not independent from each other, the knowledge representation approach has to allow the system to recognize either the domain concepts that are already partly or completely known to a learner, or the domain concepts that s/he has forgotten, taking into account the learner's knowledge level of the related concepts. In other words, the system should be informed about the knowledge dependencies that exist among the domain concepts of the learning material, as well as the strength of impact of each domain concept on the others. Fuzzy Cognitive Maps (FCMs) seem to be an ideal way of representing this kind of information graphically. The suggested knowledge representation approach has been implemented in an adaptive e-learning system for teaching computer programming. The particular system was used by the students of a postgraduate program in the field of Informatics at the University of Piraeus and was compared with a corresponding system in which the domain knowledge was represented using the more commonly used technique of a network of concepts. The results of the evaluation were very encouraging.
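    A minimal sketch of the idea (hypothetical, not the system evaluated above): concepts are nodes, a weight matrix encodes the strength of impact of one concept on another, and a sigmoid update rule propagates changes in the learner's knowledge levels through the map. The toy curriculum, weights and knowledge levels are invented for illustration.

    ```python
    # Minimal Fuzzy Cognitive Map sketch (hypothetical, not the cited tutoring system).
    # W[i][j] is the strength of impact of concept i on concept j; `state` holds the
    # learner's estimated knowledge level (0..1) for each concept.
    import numpy as np

    def sigmoid(x, lam=1.0):
        return 1.0 / (1.0 + np.exp(-lam * x))

    def fcm_step(state, W):
        """One inference step: each concept combines its own value with the
        weighted influence of the concepts pointing to it."""
        return sigmoid(state + state @ W)

    concepts = ["variables", "loops", "arrays"]        # toy programming curriculum
    W = np.array([[0.0, 0.7, 0.5],                     # knowing variables supports loops and arrays
                  [0.0, 0.0, 0.6],                     # knowing loops supports arrays
                  [0.0, 0.0, 0.0]])
    state = np.array([0.9, 0.2, 0.1])                  # current estimated knowledge levels

    for _ in range(3):                                 # propagate the dependencies a few steps
        state = fcm_step(state, W)
    print(dict(zip(concepts, np.round(state, 2))))
    ```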

  11. Search strategy has influenced the discovery rate of human viruses.

    Science.gov (United States)

    Rosenberg, Ronald; Johansson, Michael A; Powers, Ann M; Miller, Barry R

    2013-08-20

    A widely held concern is that the pace of infectious disease emergence has been increasing. We have analyzed the rate of discovery of pathogenic viruses, the preeminent source of newly discovered causes of human disease, from 1897 through 2010. The rate was highest during 1950-1969, after which it moderated. This general picture masks two distinct trends: for arthropod-borne viruses, which comprised 39% of pathogenic viruses, the discovery rate peaked at three per year during 1960-1969, but subsequently fell nearly to zero by 1980; however, the rate of discovery of nonarboviruses remained stable at about two per year from 1950 through 2010. The period of highest arbovirus discovery coincided with a comprehensive program supported by The Rockefeller Foundation of isolating viruses from humans, animals, and arthropod vectors at field stations in Latin America, Africa, and India. The productivity of this strategy illustrates the importance of location, approach, long-term commitment, and sponsorship in the discovery of emerging pathogens.

  12. Novel CNS drug discovery and development approach: model-based integration to predict neuro-pharmacokinetics and pharmacodynamics.

    Science.gov (United States)

    de Lange, Elizabeth C M; van den Brink, Willem; Yamamoto, Yumi; de Witte, Wilhelmus E A; Wong, Yin Cheong

    2017-12-01

    CNS drug development has been hampered by inadequate consideration of CNS pharmacokinetics (PK), pharmacodynamics (PD) and disease complexity (a reductionist approach). Improvement is required via integrative model-based approaches. Areas covered: The authors summarize factors that have played a role in the high attrition rate of CNS compounds. Recent advances in CNS research and drug discovery are presented, especially with regard to assessment of relevant neuro-PK parameters. Suggestions for further improvements are also discussed. Expert opinion: Understanding time- and condition-dependent interrelationships between neuro-PK and neuro-PD processes is key to predictions in different conditions. As a first screen, it is suggested to use in silico/in vitro derived molecular properties of candidate compounds and predict concentration-time profiles of compounds in multiple compartments of the human CNS, using time-course-based, physiology-based (PB) PK models. Then, for selected compounds, one can include in vitro drug-target binding kinetics to predict target occupancy (TO)-time profiles in humans. This will improve neuro-PD prediction. Furthermore, a pharmaco-omics approach is suggested, providing multilevel and parallel data on systems processes from individuals in a systems-wide manner. Thus, clinical trials will be better informed, using fewer animals, while also needing fewer individuals and samples per individual for proof of concept in humans.
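    As a toy illustration of the prediction chain sketched above, the code below simulates an oral one-compartment concentration-time profile (the standard Bateman equation) and converts it into a target-occupancy-time profile using an equilibrium binding approximation. It is not a validated CNS PBPK model, all parameter values are invented, and a kinetics-based occupancy model, as emphasized in the abstract, would instead integrate the binding rate equations over time.

    ```python
    # Toy PK-to-target-occupancy chain (illustrative sketch only, not a validated CNS PBPK model).
    import numpy as np

    def one_compartment(dose, ka, ke, V, t):
        """Oral one-compartment concentration-time profile (Bateman equation, F assumed 1)."""
        return (dose * ka / (V * (ka - ke))) * (np.exp(-ke * t) - np.exp(-ka * t))

    def occupancy(conc, kd):
        """Equilibrium target occupancy C / (C + Kd); Kd expressed in the same units as conc."""
        return conc / (conc + kd)

    t = np.linspace(0.0, 24.0, 97)                                  # hours
    c = one_compartment(dose=100.0, ka=1.0, ke=0.2, V=50.0, t=t)    # hypothetical parameters
    to = occupancy(c, kd=0.5)                                       # hypothetical Kd (same units as c)
    print(f"peak concentration {c.max():.2f}, peak target occupancy {to.max():.1%}")
    ```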

  13. Using ePortfolio-Based Learning Approach to Facilitate Knowledge Sharing and Creation among College Students

    Science.gov (United States)

    Chang, Chi-Cheng; Chou, Pao-Nan; Liang, Chaoyan

    2018-01-01

    The purpose of the present study was to examine the effects of the ePortfolio-based learning approach (ePBLA) on knowledge sharing and creation with 92 college students majoring in electrical engineering as the participants. Multivariate analysis of covariance (MANCOVA) with a covariance of pretest on knowledge sharing and creation was conducted…

  14. Approaching the young generation. How to transfer knowledge to future professionals

    International Nuclear Information System (INIS)

    Garcia Laruelo, J.; Vinusa Carretero, A.

    2016-01-01

    One of the main goals of Spanish Young Generation Network (in Spanish Jovenes Nucleares) is the dissemination of nuclear science and technology knowledge to the students and general public. From this point of view, we approach the younger students, our future professionals, in order to teach them this science and to answer the main questions society has about this sector. (Author)

  15. Interregional Knowledge Management Workshop on Life Cycle Management of Design Basis Information. Issues, Challenges, Approaches

    International Nuclear Information System (INIS)

    Šula, Radek

    2013-01-01

    Introduction and objectives: • It is evident that, from the point of view of knowledge sharing, the design basis area is extremely complicated. • Time is changing and places ever greater demands on us. • We have to analyze the near and remote surroundings and have to simplify the problem of knowledge sharing in that area. • I believe that it is a graspable task for knowledge management, and I will try to outline some possible contexts and approaches.

  16. Open source drug discovery--a new paradigm of collaborative research in tuberculosis drug development.

    Science.gov (United States)

    Bhardwaj, Anshu; Scaria, Vinod; Raghava, Gajendra Pal Singh; Lynn, Andrew Michael; Chandra, Nagasuma; Banerjee, Sulagna; Raghunandanan, Muthukurussi V; Pandey, Vikas; Taneja, Bhupesh; Yadav, Jyoti; Dash, Debasis; Bhattacharya, Jaijit; Misra, Amit; Kumar, Anil; Ramachandran, Srinivasan; Thomas, Zakir; Brahmachari, Samir K

    2011-09-01

    It is being realized that the traditional closed-door and market-driven approaches to drug discovery may not be the best-suited model for the diseases of the developing world, such as tuberculosis and malaria, because most patients suffering from these diseases have poor paying capacity. To ensure that new drugs are created for patients suffering from these diseases, it is necessary to formulate an alternative paradigm for the drug discovery process. The current model, constrained by limitations on collaboration and on the sharing of resources under confidentiality, hampers the opportunities for bringing in expertise from diverse fields. These limitations hinder the possibilities of lowering the cost of drug discovery. The Open Source Drug Discovery project, initiated by the Council of Scientific and Industrial Research, India, has adopted an open source model to power wide participation across geographical borders. Open Source Drug Discovery emphasizes integrative science through collaboration, open sharing, multi-faceted approaches, and accruing benefits from advances on different fronts of new drug discovery. Because the open source model is based on community participation, it has the potential to self-sustain continuous development by generating a storehouse of alternatives in the continued pursuit of new drug discovery. Since the inventions are community generated, the new chemical entities developed by Open Source Drug Discovery will be taken up for clinical trials in a non-exclusive manner through the participation of multiple companies, with majority funding from Open Source Drug Discovery. This will ensure the availability of drugs through a lower-cost, community-driven drug discovery process for diseases afflicting people with poor paying capacity. Hopefully, what LINUX and the World Wide Web have done for information technology, Open Source Drug Discovery will do for drug discovery. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. On the Limitations of Biological Knowledge

    Science.gov (United States)

    Dougherty, Edward R; Shmulevich, Ilya

    2012-01-01

    Scientific knowledge is grounded in a particular epistemology and, owing to the requirements of that epistemology, possesses limitations. Some limitations are intrinsic, in the sense that they depend inherently on the nature of scientific knowledge; others are contingent, depending on the present state of knowledge, including technology. Understanding limitations facilitates scientific research because one can then recognize when one is confronted by a limitation, as opposed to simply being unable to solve a problem within the existing bounds of possibility. In the hope that the role of limiting factors can be brought more clearly into focus and discussed, we consider several sources of limitation as they apply to biological knowledge: mathematical complexity, experimental constraints, validation, knowledge discovery, and human intellectual capacity. PMID:23633917

  18. The Proteomics Big Challenge for Biomarkers and New Drug-Targets Discovery

    Science.gov (United States)

    Savino, Rocco; Paduano, Sergio; Preianò, Mariaimmacolata; Terracciano, Rosa

    2012-01-01

    In the modern process of drug discovery, clinical, functional and chemical proteomics can converge and integrate synergies. Functional proteomics explores and elucidates the components of pathways and their interactions which, when deregulated, lead to a disease condition. This knowledge allows the design of strategies to target multiple pathways with combinations of pathway-specific drugs, which might increase the chances of success and reduce the occurrence of drug resistance. Chemical proteomics, by analyzing the drug interactome, strongly contributes to accelerating the process of discovering new druggable targets. In the research area of clinical proteomics, proteome and peptidome mass spectrometry profiling of human bodily fluids (plasma, serum, urine and so on), as well as of tissues and cells, represents a promising tool for the discovery of novel biomarkers and, eventually, new druggable targets. In the present review we provide a survey of current strategies in functional, chemical and clinical proteomics. Major issues will be presented for the proteomic technologies used for the discovery of biomarkers for early disease diagnosis and the identification of new drug targets. PMID:23203042

  19. Lessons learned about art-based approaches for disseminating knowledge.

    Science.gov (United States)

    Bruce, Anne; Makaroff, Kara L Schick; Sheilds, Laurene; Beuthin, Rosanne; Molzahn, Anita; Shermak, Sheryl

    2013-01-01

    To present a case example of using an arts-based approach and the development of an art exhibit to disseminate research findings from a narrative research study. Once a study has been completed, the final step of dissemination of findings is crucial. In this paper, we explore the benefits of bringing nursing research into public spaces using an arts-based approach. Findings from a qualitative narrative study exploring experiences of living with life-threatening illnesses. Semi-structured in-depth interviews were conducted with 32 participants living with cancer, chronic renal disease, or HIV/AIDS. Participants were invited to share a symbol representing their experience of living with life-threatening illness and the meaning it held for them. The exhibit conveyed experiences of how people story and re-story their lives when living with chronic kidney disease, cancer or HIV. Photographic images of symbolic representations of study participants' experiences and poetic narratives from their stories were exhibited in a public art gallery. The theoretical underpinning of arts-based approaches and the lessons learned in creating an art exhibit from research findings are explored. Creative art forms for research and disseminating knowledge offer new ways of understanding and knowing that are under-used in nursing. Arts-based approaches make visible patients' experiences that are often left unarticulated or hidden. Creative dissemination approaches such as art exhibits can promote insight and new ways of knowing that communicate nursing research to both public and professional audiences.

  20. Risk based knowledge assessments: towards a toolbox for managing key knowledge assets

    International Nuclear Information System (INIS)

    Bright, Clive

    2008-01-01

    Full text: It is now well acknowledged that considerable Knowledge Management (KM) issues are faced by national and international nuclear communities. Many of these problems relate to an ageing workforce and the significantly reduced influx of a new generation of nuclear engineers and scientists. The management discipline of KM contains a broad spectrum of methods and techniques. However, the effective implementation of a KM strategy requires the selection and deployment of appropriate and targeted approaches that are pertinent to the particular issues of the technical or business area within an organisation. A clear strategy is contingent upon an assessment of what the knowledge areas are and what the key (knowledge) risk areas are. In particular, the following questions have to be addressed: 'what knowledge exists?', 'what is the nature and format of that knowledge?' and 'what knowledge is key to our continued, safe, and effective operation?'. Answers to such questions will enable an organisation to prioritise KM effort and employ appropriate KM approaches, ranging from the utilisation of information technologies, such as databases, to knowledge retention methods and the setting up of Communities of Practice to share knowledge and experience. This paper considers a risk-assessment-based approach to KM. In so doing, the paper extends previously reported work on an approach to conducting knowledge audits by considering the integration of that approach with approaches to (knowledge) risk assessment. The paper also provides a brief review of the various KM approaches that can act to reduce the level of risk faced by an organisation. The paper concludes by reflecting upon the role and value of deploying such a risk-based approach. (author)

  1. Towards a routine application of Top-Down approaches for label-free discovery workflows.

    Science.gov (United States)

    Schmit, Pierre-Olivier; Vialaret, Jerome; Wessels, Hans J C T; van Gool, Alain J; Lehmann, Sylvain; Gabelle, Audrey; Wood, Jason; Bern, Marshall; Paape, Rainer; Suckau, Detlev; Kruppa, Gary; Hirtz, Christophe

    2018-03-20

    Thanks to proteomics investigations, our vision of the role of different protein isoforms in the pathophysiology of diseases has largely evolved. The idea that protein biomarkers like tau, amyloid peptides, ApoE, cystatin, or neurogranin are represented in body fluids as single species is obviously over-simplified, as most proteins are present in different isoforms and are subjected to numerous processing and post-translational modifications. Measuring the intact mass of proteins by MS has the advantage of providing information on the presence and relative amount of the different proteoforms. Such Top-Down approaches typically require a high degree of sample pre-fractionation to allow the MS system to deliver optimal performance in terms of dynamic range, mass accuracy and resolution. In clinical studies, however, the requirements for pre-analytical robustness and for sample sizes large enough for statistical power restrict the routine use of a high degree of sample pre-fractionation. In this study, we have investigated the capacity of current-generation Ultra-High Resolution Q-Tof systems to deal with highly complex intact protein samples and have evaluated the approach on a cohort of patients suffering from neurodegenerative disease. Statistical analysis has shown that several proteoforms can be used to distinguish Alzheimer disease patients from patients suffering from other neurodegenerative diseases. Top-down approaches have an extremely high biological relevance, especially when it comes to biomarker discovery, but the necessary pre-fractionation constraints are not easily compatible with the robustness requirements and the size of clinical sample cohorts. We have demonstrated that intact protein profiling studies could be run on UHR-Q-ToF instruments with limited pre-fractionation. The proteoforms that have been identified as candidate biomarkers in this proof-of-concept study are derived from proteins known to play a role in the pathophysiological process of Alzheimer disease.

  2. Topology Discovery Using Cisco Discovery Protocol

    OpenAIRE

    Rodriguez, Sergio R.

    2009-01-01

    In this paper we address the problem of discovering network topology in proprietary networks. Namely, we investigate topology discovery in Cisco-based networks. Cisco devices run the Cisco Discovery Protocol (CDP), which holds information about these devices. We first compare properties of topologies that can be obtained from networks deploying CDP versus the Spanning Tree Protocol (STP) and the Management Information Base (MIB) Forwarding Database (FDB). Then we describe a method of discovering topology ...
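    Once CDP neighbour tables have been collected from the devices (for example via SNMP or the CLI), assembling them into a topology is essentially graph construction. The sketch below is a hypothetical illustration only: the neighbour data are hard-coded stand-ins for tables assumed to have been retrieved elsewhere, and the device names are invented.

    ```python
    # Building a topology graph from already-collected CDP neighbour tables
    # (hypothetical sketch; collection via SNMP/CLI is assumed to have happened elsewhere).
    from collections import defaultdict

    # device -> list of (local_interface, neighbour_device) pairs, as reported by CDP
    cdp_tables = {
        "core1":   [("Gi0/1", "dist1"), ("Gi0/2", "dist2")],
        "dist1":   [("Gi0/1", "core1"), ("Gi0/3", "access1")],
        "dist2":   [("Gi0/1", "core1")],
        "access1": [("Gi0/1", "dist1")],
    }

    def build_topology(tables):
        """Return an undirected adjacency map; links seen from either end are merged."""
        adj = defaultdict(set)
        for device, neighbours in tables.items():
            for _iface, peer in neighbours:
                adj[device].add(peer)
                adj[peer].add(device)
        return adj

    for node, peers in sorted(build_topology(cdp_tables).items()):
        print(node, "->", sorted(peers))
    ```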

  3. Advances in fragment-based drug discovery platforms.

    Science.gov (United States)

    Orita, Masaya; Warizaya, Masaichi; Amano, Yasushi; Ohno, Kazuki; Niimi, Tatsuya

    2009-11-01

    Fragment-based drug discovery (FBDD) has been established as a powerful alternative and complement to traditional high-throughput screening techniques for identifying drug leads. At present, this technique is widely used by academic groups as well as small biotech and large pharmaceutical companies. In recent years, > 10 new compounds developed with FBDD have entered clinical development, and more and more attention in the drug discovery field is being focused on this technique. Under the FBDD approach, a fragment library of relatively small compounds (molecular mass = 100 - 300 Da) is screened by various methods, and the identified fragment hits, which normally bind weakly to the target, are used as starting points to generate more potent drug leads. Because FBDD is still a relatively new drug discovery technology, further developments and optimizations in screening platforms and fragment exploitation can be expected. This review summarizes recent advances in FBDD platforms and discusses the factors important for the successful application of this technique. Under the FBDD approach, both identifying the starting fragment hit to be developed and generating the drug lead from that starting fragment hit are important. Various techniques, such as computational technology, X-ray crystallography, NMR, surface plasmon resonance, isothermal titration calorimetry, mass spectrometry and high-concentration screening, must be integrated and applied in a situation-appropriate manner.

  4. Advancing Drug Discovery through Enhanced Free Energy Calculations.

    Science.gov (United States)

    Abel, Robert; Wang, Lingle; Harder, Edward D; Berne, B J; Friesner, Richard A

    2017-07-18

    A principal goal of a drug discovery project is to design molecules that can tightly and selectively bind to the target protein receptor. Accurate prediction of protein-ligand binding free energies is therefore of central importance in computational chemistry and computer-aided drug design. Multiple recent improvements in computing power, classical force field accuracy, enhanced sampling methods, and simulation setup have enabled accurate and reliable calculations of protein-ligand binding free energies, and position free energy calculations to play a guiding role in small molecule drug discovery. In this Account, we outline the relevant methodological advances, including REST2 (Replica Exchange with Solute Tempering) enhanced sampling, the incorporation of REST2 sampling with conventional FEP (Free Energy Perturbation) through FEP/REST, the OPLS3 force field, and the advanced simulation setup that constitute our FEP+ approach, followed by the presentation of extensive comparisons with experiment, demonstrating sufficient accuracy in potency prediction (better than 1 kcal/mol) to substantially impact lead optimization campaigns. The limitations of the current FEP+ implementation and best practices in drug discovery applications are also discussed, followed by the future methodology development plans to address those limitations. We then report results from a recent drug discovery project, in which several thousand FEP+ calculations were successfully deployed to simultaneously optimize potency, selectivity, and solubility, illustrating the power of the approach to solve challenging drug design problems. The capabilities of free energy calculations to accurately predict potency and selectivity have led to the advance of ongoing drug discovery projects, in challenging situations where alternative approaches would have great difficulties. The ability to effectively carry out projects evaluating tens of thousands, or hundreds of thousands, of proposed drug candidates
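    The quantity at the heart of FEP-type methods is a free energy difference between two states, classically estimated with the Zwanzig exponential-averaging identity, dG(A->B) = -kB*T * ln < exp(-dU/(kB*T)) >_A. The sketch below is a generic, textbook estimator over sampled energy differences, not the FEP+/REST2 implementation, and the sample data are synthetic.

    ```python
    # Generic Zwanzig free energy perturbation estimator (textbook identity; not the FEP+ code).
    # dU is an array of potential energy differences U_B - U_A evaluated on samples from state A.
    import numpy as np

    KB = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

    def zwanzig_delta_g(dU, temperature=300.0):
        """Estimate dG(A->B) = -kB*T * ln < exp(-dU/(kB*T)) >_A, using a shift for stability."""
        beta = 1.0 / (KB * temperature)
        x = -beta * np.asarray(dU)
        shift = x.max()                    # log-sum-exp trick to avoid overflow
        return -(shift + np.log(np.mean(np.exp(x - shift)))) / beta

    # Toy example: energy differences (kcal/mol) drawn around 1.0 with thermal noise.
    rng = np.random.default_rng(1)
    dU = rng.normal(1.0, 0.5, size=5000)
    print(f"estimated dG ~ {zwanzig_delta_g(dU):.2f} kcal/mol")
    ```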

  5. A structural informatics approach to mine kinase knowledge bases.

    Science.gov (United States)

    Brooijmans, Natasja; Mobilio, Dominick; Walker, Gary; Nilakantan, Ramaswamy; Denny, Rajiah A; Feyfant, Eric; Diller, David; Bikker, Jack; Humblet, Christine

    2010-03-01

    In this paper, we describe a combination of structural informatics approaches developed to mine data extracted from existing structure knowledge bases (Protein Data Bank and the GVK database) with a focus on kinase ATP-binding site data. In contrast to existing systems that retrieve and analyze protein structures, our techniques are centered on a database of ligand-bound geometries in relation to residues lining the binding site and transparent access to ligand-based SAR data. We illustrate the systems in the context of the Abelson kinase and related inhibitor structures. 2009 Elsevier Ltd. All rights reserved.

  6. Mining knowledge from text repositories using information extraction ...

    Indian Academy of Sciences (India)

    Keywords: information extraction (IE); text mining; text repositories; knowledge discovery from text. However, extensive experimentation in terms of precision and recall is required due to the lack of public tagged corpora.

  7. The mass-action law based algorithm for cost-effective approach for cancer drug discovery and development.

    Science.gov (United States)

    Chou, Ting-Chao

    2011-01-01

    The mass-action law based system analysis, via mathematical induction and deduction, leads to a generalized theory and algorithm that allow computerized simulation of dose-effect dynamics with small-size experiments using a small number of data points in vitro, in animals, and in humans. The median-effect equation of the mass-action law, deduced from over 300 mechanism-specific equations, has been shown to be the unified theory that serves as the common link for complicated biomedical systems. When the median-effect principle is used as the common denominator, its applications are mechanism-independent, drug unit-independent, and dynamic order-independent, and it can be used generally for single-drug analysis or for multiple drug combinations in constant or non-constant ratios. Since the "median" is the common link and universal reference point in biological systems, this general framework leads to computerized quantitative bio-informatics for econo-green bio-research in broad disciplines. Specific applications of the theory, especially those relevant to drug discovery, drug combination, and clinical trials, have been cited or illustrated in terms of algorithms, experimental design and computerized simulation for data analysis. Lessons learned from cancer research during the past fifty years provide a valuable opportunity to reflect, to improve the conventional divergent approach, and to introduce a new convergent avenue, based on the mass-action law principle, for efficient cancer drug discovery and low-cost drug development.
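    The median-effect equation referred to above has a compact closed form, fa/fu = (D/Dm)^m, so the fraction affected at dose D is fa = 1/(1 + (Dm/D)^m). The sketch below is a generic illustration of that identity and its inversion (the dose required for a target effect level); it is not the author's software, and the parameter values are invented.

    ```python
    # Median-effect equation of the mass-action law (generic illustration, not the author's software).
    # fa/fu = (D/Dm)^m  =>  fa = 1 / (1 + (Dm/D)^m)
    def fraction_affected(dose, dm, m):
        """Fraction of the system affected at a given dose (Dm = median-effect dose, m = slope)."""
        return 1.0 / (1.0 + (dm / dose) ** m)

    def dose_for_effect(fa, dm, m):
        """Invert the equation: dose required to reach a given fraction affected."""
        return dm * (fa / (1.0 - fa)) ** (1.0 / m)

    dm, m = 2.0, 1.5                      # hypothetical median-effect dose and slope
    print(fraction_affected(2.0, dm, m))  # 0.5 by definition at D = Dm
    print(dose_for_effect(0.9, dm, m))    # dose needed for 90% effect
    ```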

  8. A novel approach to Service Discovery in Mobile Adhoc Network

    OpenAIRE

    Islam, Noman; Shaikh, Zubair A.

    2015-01-01

    A Mobile Ad hoc Network (MANET) is a network of a number of mobile routers and associated hosts, organized in a random fashion via wireless links. During recent years MANET has gained an enormous amount of attention and has been widely used not only for military purposes but also for search-and-rescue operations, intelligent transportation systems, data collection, virtual classrooms and ubiquitous computing. Service discovery is one of the most important issues in MANET. It is defined as the process of ...

  9. Towards a category theory approach to analogy: Analyzing re-representation and acquisition of numerical knowledge.

    Directory of Open Access Journals (Sweden)

    Jairo A Navarrete

    2017-08-01

    Full Text Available Category Theory, a branch of mathematics, has shown promise as a modeling framework for higher-level cognition. We introduce an algebraic model for analogy that uses the language of category theory to explore analogy-related cognitive phenomena. To illustrate the potential of this approach, we use this model to explore three objects of study in cognitive literature. First, (a) we use commutative diagrams to analyze an effect of playing particular educational board games on the learning of numbers. Second, (b) we employ a notion called coequalizer as a formal model of re-representation that explains a property of computational models of analogy called "flexibility" whereby non-similar representational elements are considered matches and placed in structural correspondence. Finally, (c) we build a formal learning model which shows that re-representation, language processing and analogy making can explain the acquisition of knowledge of rational numbers. These objects of study provide a picture of acquisition of numerical knowledge that is compatible with empirical evidence and offers insights on possible connections between notions such as relational knowledge, analogy, learning, conceptual knowledge, re-representation and procedural knowledge. This suggests that the approach presented here facilitates mathematical modeling of cognition and provides novel ways to think about analogy-related cognitive phenomena.

  10. Towards a category theory approach to analogy: Analyzing re-representation and acquisition of numerical knowledge.

    Science.gov (United States)

    Navarrete, Jairo A; Dartnell, Pablo

    2017-08-01

    Category Theory, a branch of mathematics, has shown promise as a modeling framework for higher-level cognition. We introduce an algebraic model for analogy that uses the language of category theory to explore analogy-related cognitive phenomena. To illustrate the potential of this approach, we use this model to explore three objects of study in cognitive literature. First, (a) we use commutative diagrams to analyze an effect of playing particular educational board games on the learning of numbers. Second, (b) we employ a notion called coequalizer as a formal model of re-representation that explains a property of computational models of analogy called "flexibility" whereby non-similar representational elements are considered matches and placed in structural correspondence. Finally, (c) we build a formal learning model which shows that re-representation, language processing and analogy making can explain the acquisition of knowledge of rational numbers. These objects of study provide a picture of acquisition of numerical knowledge that is compatible with empirical evidence and offers insights on possible connections between notions such as relational knowledge, analogy, learning, conceptual knowledge, re-representation and procedural knowledge. This suggests that the approach presented here facilitates mathematical modeling of cognition and provides novel ways to think about analogy-related cognitive phenomena.

  11. Scientific workflows as productivity tools for drug discovery.

    Science.gov (United States)

    Shon, John; Ohkawa, Hitomi; Hammer, Juergen

    2008-05-01

    Large pharmaceutical companies annually invest tens to hundreds of millions of US dollars in research informatics to support their early drug discovery processes. Traditionally, most of these investments are designed to increase the efficiency of drug discovery. The introduction of do-it-yourself scientific workflow platforms has enabled research informatics organizations to shift their efforts toward scientific innovation, ultimately resulting in a possible increase in return on their investments. Unlike the handling of most scientific data and application integration approaches, researchers apply scientific workflows to in silico experimentation and exploration, leading to scientific discoveries that lie beyond automation and integration. This review highlights some key requirements for scientific workflow environments in the pharmaceutical industry that are necessary for increasing research productivity. Examples of the application of scientific workflows in research and a summary of recent platform advances are also provided.

  12. PERFORMANCE EVALUATION OF AN ALTERNATIVE CONTROLLER FOR BLUETOOTH SERVICE DISCOVERY

    Directory of Open Access Journals (Sweden)

    M. Sughasiny

    2012-06-01

    Full Text Available Bluetooth is a short-range radio technology for forming small wireless systems. It is used in low-cost, low-power ad hoc networks, but it suffers from long service discovery delays and high power consumption. Bluetooth employs the 2.4 GHz ISM band, sharing the same bandwidth with wireless LANs implementing the IEEE 802.11 standards; thus it causes significantly lower interference. To improve the efficiency of SDP, we present an implementation of Bluetooth 2.1 in the NS-2 simulator, discuss IEEE 802.11b as a Bluetooth controller, and propose a new alternative Bluetooth controller based on Adaptive Frequency Hopping techniques using amplifier power. The resulting approach significantly reduces the service discovery time, thereby lowering power consumption and increasing throughput. We present the benefits of our new approach, compare it with the existing approach using NS-2 simulations, and provide comparison graphs in support of our approach.
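    The adaptive frequency hopping idea behind the proposed controller can be sketched generically: channels that overlap with observed interference are dropped from the hop set, subject to a minimum hop-set size. The code below illustrates that channel-classification step only, not the NS-2 controller described in the paper; the interference measurements and threshold are invented, and the 20-channel minimum follows the Bluetooth AFH specification.

    ```python
    # Generic adaptive frequency hopping channel-map sketch (illustration only,
    # not the NS-2 controller proposed in the paper).
    def build_hop_set(interference, threshold=0.5, n_channels=79, min_channels=20):
        """Keep channels whose measured interference is below the threshold;
        if too few remain, fall back to the least-interfered channels."""
        good = [ch for ch in range(n_channels) if interference.get(ch, 0.0) < threshold]
        if len(good) < min_channels:
            good = sorted(range(n_channels), key=lambda ch: interference.get(ch, 0.0))[:min_channels]
        return sorted(good)

    # Hypothetical interference measurements (e.g., detected energy per channel, 0..1).
    measured = {ch: 0.9 for ch in range(30, 45)}       # a Wi-Fi carrier occupying part of the band
    hop_set = build_hop_set(measured)
    print(len(hop_set), "channels in the adapted hop set")
    ```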

  13. The new world of discovery, invention, and innovation: convergence of knowledge, technology, and society

    Energy Technology Data Exchange (ETDEWEB)

    Roco, Mihail C., E-mail: mroco@nsf.gov; Bainbridge, William S. [National Science Foundation (United States)

    2013-09-15

    Convergence of knowledge and technology for the benefit of society (CKTS) is the core opportunity for progress in the twenty-first century. CKTS is defined as the escalating and transformative interactions among seemingly different disciplines, technologies, communities, and domains of human activity to achieve mutual compatibility, synergism, and integration, and through this process to create added value and branch out to meet shared goals. Convergence has been progressing by stages over the past several decades, beginning with nanotechnology for the material world, followed by convergence of nanotechnology, biotechnology, information, and cognitive science (NBIC) for emerging technologies. CKTS is the third level of convergence. It suggests a general process to advance creativity, innovation, and societal progress based on five general purpose principles: (1) the interdependence of all components of nature and society, (2) decision analysis for research, development, and applications based on dynamic system-logic deduction, (3) enhancement of creativity and innovation through evolutionary processes of convergence that combines existing principles and divergence that generates new ones, (4) the utility of higher-level cross-domain languages to generate new solutions and support transfer of new knowledge, and (5) the value of vision-inspired basic research embodied in grand challenges. CKTS is a general purpose approach in knowledge society. It allows society to answer questions and resolve problems that isolated capabilities cannot, as well as to create new competencies, knowledge, and technologies on this basis. Possible solutions are outlined for key societal challenges in the next decade, including support for foundational emerging technologies NBIC to penetrate essential platforms of human activity and create new industries and jobs, improve lifelong wellness and human potential, achieve personalized and integrated healthcare and education, and secure a

  14. The new world of discovery, invention, and innovation: convergence of knowledge, technology, and society

    International Nuclear Information System (INIS)

    Roco, Mihail C.; Bainbridge, William S.

    2013-01-01

    Convergence of knowledge and technology for the benefit of society (CKTS) is the core opportunity for progress in the twenty-first century. CKTS is defined as the escalating and transformative interactions among seemingly different disciplines, technologies, communities, and domains of human activity to achieve mutual compatibility, synergism, and integration, and through this process to create added value and branch out to meet shared goals. Convergence has been progressing by stages over the past several decades, beginning with nanotechnology for the material world, followed by convergence of nanotechnology, biotechnology, information, and cognitive science (NBIC) for emerging technologies. CKTS is the third level of convergence. It suggests a general process to advance creativity, innovation, and societal progress based on five general purpose principles: (1) the interdependence of all components of nature and society, (2) decision analysis for research, development, and applications based on dynamic system-logic deduction, (3) enhancement of creativity and innovation through evolutionary processes of convergence that combines existing principles and divergence that generates new ones, (4) the utility of higher-level cross-domain languages to generate new solutions and support transfer of new knowledge, and (5) the value of vision-inspired basic research embodied in grand challenges. CKTS is a general purpose approach in knowledge society. It allows society to answer questions and resolve problems that isolated capabilities cannot, as well as to create new competencies, knowledge, and technologies on this basis. Possible solutions are outlined for key societal challenges in the next decade, including support for foundational emerging technologies NBIC to penetrate essential platforms of human activity and create new industries and jobs, improve lifelong wellness and human potential, achieve personalized and integrated healthcare and education, and secure a

  15. The new world of discovery, invention, and innovation: convergence of knowledge, technology, and society

    Science.gov (United States)

    Roco, Mihail C.; Bainbridge, William S.

    2013-09-01

    Convergence of knowledge and technology for the benefit of society (CKTS) is the core opportunity for progress in the twenty-first century. CKTS is defined as the escalating and transformative interactions among seemingly different disciplines, technologies, communities, and domains of human activity to achieve mutual compatibility, synergism, and integration, and through this process to create added value and branch out to meet shared goals. Convergence has been progressing by stages over the past several decades, beginning with nanotechnology for the material world, followed by convergence of nanotechnology, biotechnology, information, and cognitive science (NBIC) for emerging technologies. CKTS is the third level of convergence. It suggests a general process to advance creativity, innovation, and societal progress based on five general purpose principles: (1) the interdependence of all components of nature and society, (2) decision analysis for research, development, and applications based on dynamic system-logic deduction, (3) enhancement of creativity and innovation through evolutionary processes of convergence that combines existing principles and divergence that generates new ones, (4) the utility of higher-level cross-domain languages to generate new solutions and support transfer of new knowledge, and (5) the value of vision-inspired basic research embodied in grand challenges. CKTS is a general purpose approach in knowledge society. It allows society to answer questions and resolve problems that isolated capabilities cannot, as well as to create new competencies, knowledge, and technologies on this basis. Possible solutions are outlined for key societal challenges in the next decade, including support for foundational emerging technologies NBIC to penetrate essential platforms of human activity and create new industries and jobs, improve lifelong wellness and human potential, achieve personalized and integrated healthcare and education, and secure a

  16. Enhancing knowledge discovery from cancer genomics data with Galaxy.

    Science.gov (United States)

    Albuquerque, Marco A; Grande, Bruno M; Ritch, Elie J; Pararajalingam, Prasath; Jessa, Selin; Krzywinski, Martin; Grewal, Jasleen K; Shah, Sohrab P; Boutros, Paul C; Morin, Ryan D

    2017-05-01

    The field of cancer genomics has demonstrated the power of massively parallel sequencing techniques to inform on the genes and specific alterations that drive tumor onset and progression. Although large comprehensive sequence data sets continue to be made increasingly available, data analysis remains an ongoing challenge, particularly for laboratories lacking dedicated resources and bioinformatics expertise. To address this, we have produced a collection of Galaxy tools that represent many popular algorithms for detecting somatic genetic alterations from cancer genome and exome data. We developed new methods for parallelization of these tools within Galaxy to accelerate runtime and have demonstrated their usability and summarized their runtimes on multiple cloud service providers. Some tools represent extensions or refinement of existing toolkits to yield visualizations suited to cohort-wide cancer genomic analysis. For example, we present Oncocircos and Oncoprintplus, which generate data-rich summaries of exome-derived somatic mutation. Workflows that integrate these to achieve data integration and visualizations are demonstrated on a cohort of 96 diffuse large B-cell lymphomas and enabled the discovery of multiple candidate lymphoma-related genes. Our toolkit is available from our GitHub repository as Galaxy tool and dependency definitions and has been deployed using virtualization on multiple platforms including Docker. © The Author 2017. Published by Oxford University Press.

  17. The NCAR Research Data Archive's Hybrid Approach for Data Discovery and Access

    Science.gov (United States)

    Schuster, D.; Worley, S. J.

    2013-12-01

    The NCAR Research Data Archive (RDA, http://rda.ucar.edu) maintains a variety of data discovery and access capabilities for its 600+ dataset collections to support the varying needs of a diverse user community. In-house developed and standards-based community tools offer services to more than 10,000 users annually. By number of users, the largest group is external and accesses the RDA through web-based protocols; the internal NCAR HPC users are fewer in number but typically access more data volume. This paper will detail the data discovery and access services maintained by the RDA to support both user groups and show metrics that illustrate how the community is using the services. The distributed search capability enabled by standards-based community tools, such as Geoportal and an OAI-PMH access point that serves multiple metadata standards, provides pathways for external users to initially discover RDA holdings. From here, in-house developed web interfaces leverage primary discovery-level metadata databases that support keyword and faceted searches. Internal NCAR HPC users, or those familiar with the RDA, may go directly to the dataset collection of interest and refine their search based on rich file-collection metadata. Multiple levels of metadata have proven to be invaluable for discovery within terabyte-sized archives composed of many atmospheric or oceanic levels, hundreds of parameters, and often numerous grid and time resolutions. Once users find the data they want, their access needs may vary as well. A THREDDS data server running on targeted dataset collections enables remote file access through OPeNDAP and other web-based protocols, primarily for external users. In-house developed tools give all users the capability to submit data subset extraction and format conversion requests through scalable, HPC-based delayed-mode batch processing. Users can monitor their RDA-based data processing progress and receive instructions on how to access the data when it is
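    Harvesting records from an OAI-PMH access point such as the one mentioned above uses the protocol's standard verbs; the sketch below issues a ListRecords request for the common oai_dc metadata format and prints the Dublin Core titles. The endpoint URL is a placeholder, not the RDA's actual address, and error handling and resumption tokens are omitted for brevity.

    ```python
    # Minimal OAI-PMH harvest sketch (standard protocol verbs; endpoint URL is a placeholder).
    import urllib.request
    import urllib.parse
    import xml.etree.ElementTree as ET

    ENDPOINT = "https://example.org/oai"          # placeholder, not the real RDA endpoint
    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}

    url = ENDPOINT + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url, timeout=30) as resp:
        tree = ET.parse(resp)

    ns = {"oai": "http://www.openarchives.org/OAI/2.0/",
          "dc": "http://purl.org/dc/elements/1.1/"}
    for record in tree.findall(".//oai:record", ns):
        title = record.find(".//dc:title", ns)
        print(title.text if title is not None else "(no title)")
    ```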

  18. Empirical study using network of semantically related associations in bridging the knowledge gap.

    Science.gov (United States)

    Abedi, Vida; Yeasin, Mohammed; Zand, Ramin

    2014-11-27

    The data overload has created a new set of challenges in finding meaningful and relevant information with minimal cognitive effort. However, designing robust and scalable knowledge discovery systems remains a challenge. Recent innovations in (biological) literature mining tools have opened new avenues for understanding the confluence of various diseases, genes, risk factors and biological processes, and for bridging the gaps between the massive amounts of scientific data and the harvesting of useful knowledge. In this paper, we highlight some of the findings obtained using a text analytics tool called ARIANA (Adaptive Robust and Integrative Analysis for finding Novel Associations). An empirical study using ARIANA reveals knowledge discovery instances that illustrate the efficacy of such a tool. For example, ARIANA can capture the connection between the drug hexamethonium and pulmonary inflammation and fibrosis that caused the tragic death of a healthy volunteer in a 2001 Johns Hopkins asthma study, even though the abstract of the study was not part of the semantic model. An integrated system such as ARIANA could assist the human expert in exploratory literature search by bringing forward hidden associations, promoting data reuse and knowledge discovery, and stimulating interdisciplinary projects by connecting information across disciplines.

  19. New Trends in E-Science: Machine Learning and Knowledge Discovery in Databases

    Science.gov (United States)

    Brescia, Massimo

    2012-11-01

    Data mining, or Knowledge Discovery in Databases (KDD), while being the main methodology for extracting the scientific information contained in Massive Data Sets (MDS), needs to tackle crucial problems, since it has to orchestrate the complex challenges posed by transparent access to different computing environments, scalability of algorithms, and reusability of resources. To achieve a leap forward for the progress of e-science in the data avalanche era, the community needs to implement an infrastructure capable of performing data access, processing and mining in a distributed but integrated context. The increasing complexity of modern technologies has led to a huge production of data, whose warehouse management, together with the need to optimize analysis and mining procedures, has led to a change in the conception of modern science. Classical data exploration, based on users' own local data storage and limited computing infrastructures, is no longer efficient in the case of MDS, which are spread worldwide over inhomogeneous data centres and require teraflop processing power. In this context, modern experimental and observational science requires a good understanding of computer science, network infrastructures, data mining, etc., i.e., of all those techniques which fall into the domain of so-called e-science (recently also recognized as the Fourth Paradigm of Science). Such understanding is almost completely absent in the older generations of scientists, and this is reflected in the inadequacy of most academic and research programs. A paradigm shift is needed: statistical pattern recognition, object-oriented programming, distributed computing and parallel programming need to become an essential part of the scientific background. A possible practical solution is to provide the research community with easy-to-understand, easy-to-use tools, based on Web 2.0 technologies and machine learning methodology; tools where almost all the complexity is hidden from the final user, but which are still flexible and able to

  20. Discovery Mondays - The detectors: tracking particles

    CERN Multimedia

    2005-01-01

    View of a module from the LHCb vertex detector, which will be presented at the next Discovery Monday. How do you observe the invisible? To deepen our knowledge of the infinitely small still further, physicists accelerate beams of particles and generate collisions between them at extraordinary energies. The collisions give birth to showers of new particles. What are they? To find out, physicists slip into the role of detectives, thanks to the detectors. At the next Discovery Monday you will find out about the different methods used at CERN to detect particles. A cloud chamber will allow you to see the tracks of cosmic particles live. You will also be given the chance to see real modules from the ATLAS and LHCb experiments. Strange materials will be on hand, such as crystals that are heavier than iron and yet as transparent as glass... Come to the Microcosm and become a top detective yourself! This event will take place in French. Join us at the Microcosm (Reception Building 33, M...

  1. Discovery Mondays - The detectors: tracking particles

    CERN Multimedia

    2005-01-01

    View of a module from the LHCb vertex detector, which will be presented at the next Discovery Monday. How do you observe the invisible? To deepen our knowledge of the infinitely small still further, physicists accelerate beams of particles to close to the speed of light, then generate collisions between them at extraordinary energies, giving birth to showers of new particles. What are these particles? To find out, physicists turn themselves into detectives with the help of the detectors. Located around the collision area, these exceptional machines are made up of various layers, each of which detects and measures specific properties of the particles that travel through it. Powerful computers then reconstruct their trajectories and record their charge, mass and energy in order to build up a kind of particle ID card. At the next Discovery Monday you will be able to find out about the different methods used at CERN to detect particles. A cloud chamber will provide live images of the trac...

  2. Business Model Discovery by Technology Entrepreneurs

    Directory of Open Access Journals (Sweden)

    Steven Muegge

    2012-04-01

    Value creation and value capture are central to technology entrepreneurship. The ways in which a particular firm creates and captures value are the foundation of that firm's business model, which is an explanation of how the business delivers value to a set of customers at attractive profits. Despite the deep conceptual link between business models and technology entrepreneurship, little is known about the processes by which technology entrepreneurs produce successful business models. This article makes three contributions to partially address this knowledge gap. First, it argues that business model discovery by technology entrepreneurs can be, and often should be, disciplined by both intention and structure. Second, it provides a tool for disciplined business model discovery that includes an actionable process and a worksheet for describing a business model in a form that is both concise and explicit. Third, it shares preliminary results and lessons learned from six technology entrepreneurs applying a disciplined process to strengthen or reinvent the business models of their own nascent technology businesses.

  3. Designing discovery learning environments: process analysis and implications for designing an information system

    NARCIS (Netherlands)

    Pieters, Julius Marie; Limbach, R.; de Jong, Anthonius J.M.

    2004-01-01

    A systematic analysis of the design process of authors of (simulation-based) discovery learning environments was carried out. The analysis aimed at identifying the design activities of authors and categorising the knowledge gaps that they experience. First, five existing studies were systematically

  4. Affinity Crystallography: A New Approach to Extracting High-Affinity Enzyme Inhibitors from Natural Extracts.

    Science.gov (United States)

    Aguda, Adeleke H; Lavallee, Vincent; Cheng, Ping; Bott, Tina M; Meimetis, Labros G; Law, Simon; Nguyen, Nham T; Williams, David E; Kaleta, Jadwiga; Villanueva, Ivan; Davies, Julian; Andersen, Raymond J; Brayer, Gary D; Brömme, Dieter

    2016-08-26

    Natural products are an important source of novel drug scaffolds. The highly variable and unpredictable timelines associated with isolating novel compounds and elucidating their structures have led to the demise of exploring natural product extract libraries in drug discovery programs. Here we introduce affinity crystallography as a new methodology that significantly shortens the hit-to-active-structure cycle in bioactive natural product discovery research. This affinity crystallography approach is illustrated by using semipure fractions of an actinomycete culture extract to isolate and identify a cathepsin K inhibitor and to compare the outcome with the traditional assay-guided purification/structural analysis approach. The traditional approach resulted in the identification of the known inhibitor antipain (1) and its new but lower potency dehydration product 2, while the affinity crystallography approach led to the identification of a new high-affinity inhibitor named lichostatinal (3). The structure and potency of lichostatinal (3) were verified by total synthesis and kinetic characterization. To the best of our knowledge, this is the first example of isolating and characterizing a potent enzyme inhibitor from a partially purified crude natural product extract using a protein crystallographic approach.

  5. Inseparability of science history and discovery

    Directory of Open Access Journals (Sweden)

    J. M. Herndon

    2010-04-01

    Science is very much a logical progression through time. Progressing along a logical path of discovery is rather like following a path through the wilderness. Occasionally the path splits, presenting a choice; the correct logical interpretation leads to further progress, the wrong choice leads to confusion. By considering deeply the relevant science history, one might begin to recognize past faltering in the logical progression of observations and ideas and, perhaps then, to discover new, more precise understanding. The following specific examples of science faltering are described from a historical perspective: (1) Composition of the Earth's inner core; (2) Giant planet internal energy production; (3) Physical impossibility of Earth-core convection and Earth-mantle convection; and (4) Thermonuclear ignition of stars. For each example, a revised logical progression is described, leading, respectively, to: (1) Understanding the endo-Earth's composition; (2) The concept of nuclear georeactor origin of geo- and planetary magnetic fields; (3) The invalidation and replacement of plate tectonics; and (4) Understanding the basis for the observed distribution of luminous stars in galaxies. These revised logical progressions clearly show the inseparability of science history and discovery. A different and more fundamental approach to making scientific discoveries than the frequently discussed variants of the scientific method is this: an individual ponders and, through tedious efforts, arranges seemingly unrelated observations into a logical sequence in the mind so that causal relationships become evident and new understanding emerges, showing the path for new observations, for new experiments, for new theoretical considerations, and for new discoveries. Science history is rich in "seemingly unrelated observations" just waiting to be logically and causally related to reveal new discoveries.

  6. Integrating scientific and local knowledge to inform risk-based management approaches for climate adaptation

    Directory of Open Access Journals (Sweden)

    Nathan P. Kettle

    2014-01-01

    Risk-based management approaches to climate adaptation depend on the assessment of potential threats, and their causes, vulnerabilities, and impacts. The refinement of these approaches relies heavily on detailed local knowledge of places and priorities, such as infrastructure, governance structures, and socio-economic conditions, as well as scientific understanding of climate projections and trends. Developing processes that integrate local and scientific knowledge will enhance the value of risk-based management approaches, facilitate group learning and planning processes, and support the capacity of communities to prepare for change. This study uses the Vulnerability, Consequences, and Adaptation Planning Scenarios (VCAPS) process, a form of analytic-deliberative dialogue, and the conceptual frameworks of hazard management and climate vulnerability, to integrate scientific and local knowledge. We worked with local government staff in an urbanized barrier island community (Sullivan's Island, South Carolina) to consider climate risks, impacts, and adaptation challenges associated with sea level rise and wastewater and stormwater management. The findings discuss how the process increases understanding of town officials' views of risks and climate change impacts to barrier islands, the management actions being considered to address the multiple impacts of concern, and the local tradeoffs and challenges in adaptation planning. We also comment on group learning and specific adaptation tasks, strategies, and needs identified.

  7. Discovery Learning: Zombie, Phoenix, or Elephant?

    Science.gov (United States)

    Bakker, Arthur

    2018-01-01

    Discovery learning continues to be a topic of heated debate. It has been called a zombie, and this special issue raises the question whether it may be a phoenix arising from the ashes to which the topic was burnt. However, in this commentary I propose it is more like an elephant--a huge topic approached by many people who address different…

  8. Augmented Reality-Based Simulators as Discovery Learning Tools: An Empirical Study

    Science.gov (United States)

    Ibáñez, María-Blanca; Di-Serio, Ángela; Villarán-Molina, Diego; Delgado-Kloos, Carlos

    2015-01-01

    This paper reports empirical evidence on having students use AR-SaBEr, a simulation tool based on augmented reality (AR), to discover the basic principles of electricity through a series of experiments. AR-SaBEr was enhanced with knowledge-based support and inquiry-based scaffolding mechanisms, which proved useful for discovery learning in…

  9. Astrobiology, discovery, and societal impact

    CERN Document Server

    Dick, Steven J

    2018-01-01

    The search for life in the universe, once the stuff of science fiction, is now a robust worldwide research program with a well-defined roadmap probing both scientific and societal issues. This volume examines the humanistic aspects of astrobiology, systematically discussing the approaches, critical issues, and implications of discovering life beyond Earth. What do the concepts of life and intelligence, culture and civilization, technology and communication mean in a cosmic context? What are the theological and philosophical implications if we find life - and if we do not? Steven J. Dick argues that given recent scientific findings, the discovery of life in some form beyond Earth is likely and so we need to study the possible impacts of such a discovery and formulate policies to deal with them. The remarkable and often surprising results are presented here in a form accessible to disciplines across the sciences, social sciences, and humanities.

  10. Knowledge Engineering Approach to the Geotectonic Discourse

    Science.gov (United States)

    Pshenichny, Cyril

    2014-05-01

    The intellectual challenge of geotectonics is, and always was, much harder than that of most of the sciences: geotectonics has to say much when there is objectively not much to say. As the target of study (the genesis of regional and planetary geological structures) is vast and multidisciplinary, and is more or less generic for many geological disciplines, a more or less complete description of it is practically unachievable. Hence, the normal pathway of natural-scientific research - first acquire data, then draw conclusions - can hardly apply here. Geotectonics does quite the opposite; its approach is purely abductive: first suggest a conceptualization (hypothesis) on external grounds (either general planetary/cosmic/philosophic/religious considerations, or experience gained from research on other structures/regions/planets) and then acquire data that either support or refute it. In fact, geotectonics defines the context for data acquisition, and hence the paradigm for the entire body of geology. While this is an obvious necessity for a descriptive science, it nevertheless creates a number of threats: • Like all people, scientists like simplicity and unity, and therefore a single geotectonic hypothesis may come to seem preferable because it fits the data available at the moment, suppressing other views that may acquire evidence in the future; • As impartial data acquisition is rather a myth than a reality even in most of the natural sciences, in a field like geology this process becomes strongly biased by the reigning hypothesis and is controlled so as to supply only supportive evidence; • It becomes collectively agreed that all, or a great many, domains of geological knowledge are determined by a geotectonic concept, which is, in turn, offered by a reigning hypothesis (sometimes reclassified as a theory) - e.g., exploration geologists must invoke the global geotectonic terminology in their technical reports on assessment of mineral or hydrocarbon

  11. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    Energy Technology Data Exchange (ETDEWEB)

    Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten [RWTH Aachen University, Chair for Nonlinear Dynamics, Steinbachstr. 15, 52047 Aachen (Germany); Gebhardt, Sascha [RWTH Aachen University, Virtual Reality Group, IT Center, Seffenter Weg 23, 52074 Aachen (Germany); Kuhlen, Torsten [Forschungszentrum Jülich GmbH, Institute for Advanced Simulation (IAS), Jülich Supercomputing Centre (JSC), Wilhelm-Johnen-Straße, 52425 Jülich (Germany); Schulz, Wolfgang [Fraunhofer, ILT Laser Technology, Steinbachstr. 15, 52047 Aachen (Germany)

    2016-06-08

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help to identify the most influential parameters and quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of input parameters increases. This work has three main parts. The first part substitutes the numerical, physical model with an accurate surrogate model, the so-called metamodel. The second part comprises a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide two global sensitivity measures: (i) the Elementary Effects method for screening the parameters, and (ii) variance decomposition for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial case whose goal is to optimize a drilling process using a Gaussian laser beam.
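
    Since the abstract names two global sensitivity measures but gives no formulas, here is a minimal, self-contained sketch of the second one: first-order Sobol indices estimated by Monte Carlo variance decomposition on a cheap stand-in surrogate. The surrogate function, parameter ranges, and sample size are assumptions for illustration; the actual metamodel and the drilling-process parameters from the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate(x):
    """Stand-in metamodel: a cheap analytic function of three inputs.
    In the abstract's workflow this would be the metamodel fitted to the
    laser-drilling simulation; this particular formula is invented."""
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
            + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

def first_order_sobol(model, dim, n=100_000):
    """Monte Carlo estimate of first-order Sobol indices S_i on [0, 1]^dim,
    using the pick-and-freeze estimator of Saltelli et al. (2010)."""
    A = rng.uniform(size=(n, dim))
    B = rng.uniform(size=(n, dim))
    yA, yB = model(A), model(B)
    var_y = np.concatenate([yA, yB]).var()
    indices = []
    for i in range(dim):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]                      # resample only parameter i
        y_AB_i = model(AB_i)
        s_i = np.mean(yB * (y_AB_i - yA)) / var_y  # S_i = V_i / Var(Y)
        indices.append(float(s_i))
    return indices

# Parameters with larger S_i contribute more of the output variance on their own.
print([round(s, 3) for s in first_order_sobol(surrogate, dim=3)])
```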

  12. 14 CFR 406.143 - Discovery.

    Science.gov (United States)

    2010-01-01

    Title 14 (Aeronautics and Space), ... Transportation Adjudications, § 406.143 Discovery: (a) Initiation of discovery. Any party may initiate discovery... after a complaint has been filed. (b) Methods of discovery. The following methods of discovery are...

  13. Influencing factors for condition-based maintenance in railway tracks using knowledge-based approach

    NARCIS (Netherlands)

    Jamshidi, A.; Hajizadeh, S.; Naeimi, M.; Nunez Vicencio, Alfredo; Li, Z.

    2017-01-01

    In this paper, we present a condition-based maintenance decision method using a knowledge-based approach for rail surface defects. A railway track may contain a considerable number of surface defects which influence track maintenance decisions. The proposed method is based on two sets of

  14. Context discovery using attenuated Bloom codes: model description and validation

    NARCIS (Netherlands)

    Liu, F.; Heijenk, Geert

    A novel approach to performing context discovery in ad-hoc networks based on the use of attenuated Bloom filters is proposed in this report. In order to investigate the performance of this approach, a model has been developed. This document describes the model and its validation. The model has been
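
    The record does not spell out how an attenuated Bloom filter works, so the following is a minimal sketch of the usual construction: an array of Bloom filters indexed by hop distance, where a node's layer 0 encodes its own context items and deeper layers are the bitwise OR of its neighbours' shallower layers. The filter size, hash scheme, depth, and example item names are illustrative assumptions, not the parameters of the report's model.

```python
import hashlib

M = 256    # bits per Bloom filter layer (illustrative size)
K = 3      # hash functions per item (illustrative)
DEPTH = 3  # number of hop layers in the attenuated filter (illustrative)

def _bit_positions(item: str):
    """Derive K bit positions for an item from salted SHA-256 digests."""
    for k in range(K):
        digest = hashlib.sha256(f"{k}:{item}".encode()).digest()
        yield int.from_bytes(digest[:4], "big") % M

def make_filter(items):
    """Plain Bloom filter (as an int bit mask) over context/service names."""
    bits = 0
    for item in items:
        for pos in _bit_positions(item):
            bits |= 1 << pos
    return bits

def attenuate(own_items, neighbour_abfs):
    """Build an attenuated Bloom filter: layer 0 holds this node's own items;
    layer d ORs the neighbours' layers d-1 (content seen one hop further away)."""
    abf = [make_filter(own_items)] + [0] * (DEPTH - 1)
    for neighbour in neighbour_abfs:
        for d in range(1, DEPTH):
            abf[d] |= neighbour[d - 1]
    return abf

def query(abf, item):
    """Smallest hop distance at which `item` may be reachable, or None if no
    layer matches (no false negatives, but false positives are possible)."""
    mask = make_filter([item])
    for d, layer in enumerate(abf):
        if (layer & mask) == mask:
            return d
    return None

# Tiny example: node B offers "printer"; node A learns of it at hop distance 1.
abf_B = attenuate({"printer"}, [])
abf_A = attenuate({"temperature-sensor"}, [abf_B])
print(query(abf_A, "printer"))   # -> 1
```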

  15. OntoWeaver S: supporting the design of knowledge portals

    OpenAIRE

    Lei, Yuangui; Motta, Enrico; Domingue, John

    2004-01-01

    This paper presents OntoWeaver-S, an ontology-based infrastructure for building knowledge portals. In particular, OntoWeaver-S is integrated with a comprehensive web service platform, IRS-II, for the publication, discovery, and execution of web services. In this way, OntoWeaver-S supports the access and provision of remote web services for knowledge portals. Moreover, it provides a set of comprehensive site ontologies to model and represent knowledge portals, and thus is able to offer high le...

  16. Bead-based screening in chemical biology and drug discovery

    DEFF Research Database (Denmark)

    Komnatnyy, Vitaly V.; Nielsen, Thomas Eiland; Qvortrup, Katrine

    2018-01-01

    High-throughput screening is an important component of the drug discovery process. The screening of libraries containing hundreds of thousands of compounds requires assays amenable to miniaturisation and automation. Combinatorial chemistry holds a unique promise to deliver structurally diverse libraries for early drug discovery. Among the various library forms, the one-bead-one-compound (OBOC) library, where each bead carries many copies of a single compound, holds the greatest potential for the rapid identification of novel hits against emerging drug targets. However, this potential has not yet been fully realized due to a number of technical obstacles. In this feature article, we review the progress that has been made towards bead-based library screening and its applications to the discovery of bioactive compounds. We identify the key challenges of this approach and highlight the key steps needed...

  17. The application of mass-spectrometry-based protein biomarker discovery to theragnostics

    OpenAIRE

    Street, Jonathan M; Dear, James W

    2010-01-01

    Over the last decade rapid developments in mass spectrometry have allowed the identification of multiple proteins in complex biological samples. This proteomic approach has been applied to biomarker discovery in the context of clinical pharmacology (the combination of biomarker and drug now being termed ‘theragnostics’). In this review we provide a roadmap for early protein biomarker discovery studies, focusing on some key questions that regularly confront researchers.

  18. Node Discovery and Interpretation in Unstructured Resource-Constrained Environments

    DEFF Research Database (Denmark)

    Gechev, Miroslav; Kasabova, Slavyana; Mihovska, Albena D.

    2014-01-01

    A model is proposed for the discovery, linking and interpretation of nodes in unstructured and resource-constrained network environments, and for their interrelated and collective use in the delivery of smart services. The model is based on a basic mathematical approach that describes and predicts the success of human interactions in the context of long-term relationships and identifies several key variables in the context of communications in resource-constrained environments. The general theoretical model is described and several algorithms are proposed as part of the node discovery, identification, and linking processes in relation...

  19. Higgs Discovery

    DEFF Research Database (Denmark)

    Sannino, Francesco

    2013-01-01

    I discuss the impact of the discovery of a Higgs-like state on composite dynamics, starting by critically examining the reasons in favour of either an elementary or composite nature of this state. Accepting the standard model interpretation, I re-address the standard model vacuum stability within... has been challenged by the discovery of a not-so-heavy Higgs-like state. I will therefore review the recent discovery \cite{Foadi:2012bb} that the standard model top-induced radiative corrections naturally reduce the intrinsic non-perturbative mass of the composite Higgs state towards the desired... via first principle lattice simulations with encouraging results. The new findings show that the recent naive claims made about new strong dynamics at the electroweak scale being disfavoured by the discovery of a not-so-heavy composite Higgs are unwarranted. I will then introduce the more speculative...

  20. A knowledge-based approach to the design of integrated renewable energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Ramakumar, R.; Abouzahr, I. (Oklahoma State Univ., Stillwater, OK (United States). Engineering Energy Lab.); Ashenayi, K. (Dept. of Electrical Engineering, Univ. of Tulsa, Tulsa, OK (United States))

    1992-12-01

    Integrated Renewable Energy Systems (IRES) utilize two or more renewable energy resources and end-use technologies to supply a variety of energy needs, often in a stand-alone mode. A knowledge-based design approach that minimizes the total capital cost at a pre-selected reliability level is presented. The reliability level is quantified by the loss of power supply probability (LPSP). The procedure includes some resource-need matching based on economics, the quality of energy needed, and the characteristics of the resource. A detailed example is presented in this paper and discussed to illustrate the usefulness of the design approach.
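
    As a rough illustration of the design criterion described above (minimum capital cost subject to a pre-selected LPSP), the sketch below enumerates candidate wind/PV sizings against synthetic hourly resource and demand profiles and keeps the cheapest one whose LPSP stays under a target. The cost figures, profiles, candidate grid, and the absence of storage are all assumptions for illustration; this is not the paper's knowledge-based design procedure.

```python
import itertools
import random

random.seed(1)

HOURS = 24 * 30  # one month of hourly data (synthetic)
demand = [random.uniform(1.0, 4.0) for _ in range(HOURS)]           # kW
wind_per_kw = [random.uniform(0.0, 0.8) for _ in range(HOURS)]      # kW output per kW installed
pv_per_kw = [max(0.0, random.gauss(0.3, 0.25)) for _ in range(HOURS)]

# Illustrative capital costs (currency units per installed kW); not from the paper.
COST = {"wind": 1200.0, "pv": 900.0}

def lpsp(wind_kw, pv_kw):
    """Loss of power supply probability: fraction of hours in which the
    renewable supply falls short of demand (no storage modelled here)."""
    short = sum(1 for h in range(HOURS)
                if wind_kw * wind_per_kw[h] + pv_kw * pv_per_kw[h] < demand[h])
    return short / HOURS

def cheapest_design(lpsp_target=0.05):
    """Enumerate candidate sizings and keep the lowest-capital-cost one whose
    LPSP does not exceed the target; returns None if no candidate qualifies."""
    best = None
    for wind_kw, pv_kw in itertools.product(range(0, 41, 2), repeat=2):
        if lpsp(wind_kw, pv_kw) <= lpsp_target:
            cost = wind_kw * COST["wind"] + pv_kw * COST["pv"]
            if best is None or cost < best[0]:
                best = (cost, wind_kw, pv_kw)
    return best

print(cheapest_design())   # (capital cost, wind kW, PV kW) meeting LPSP <= 5%
```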