WorldWideScience

Sample records for proven knowledge-based approach

  1. Understanding images using knowledge based approach

    International Nuclear Information System (INIS)

    Tascini, G.

    1985-01-01

    This paper presents an approach to image understanding focusing on low-level image processing and proposes a rule-based approach as part of a larger knowledge-based system. The general system has a hierarchical structure that comprises several knowledge-based layers. The main idea is to confine domain-independent knowledge to the lower levels and to reserve the higher levels for domain-dependent knowledge, that is, for the interpretation

  2. A Knowledge Based Approach to VLSI CAD

    Science.gov (United States)

    1983-09-01

    A Knowledge Based Approach to VLSI CAD, Louis L. Steinberg and ... one of the major issues lies in building up and managing the knowledge base of design expertise. We expect that, as with many recent expert systems, in order to

  3. Integration, Provenance, and Temporal Queries for Large-Scale Knowledge Bases

    OpenAIRE

    Gao, Shi

    2016-01-01

    Knowledge bases that summarize web information in RDF triples deliver many benefits, including support for natural language question answering and powerful structured queries that extract encyclopedic knowledge via SPARQL. Large scale knowledge bases grow rapidly in terms of scale and significance, and undergo frequent changes in both schema and content. Two critical problems have thus emerged: (i) how to support temporal queries that explore the history of knowledge bases or flash-back to th...

  4. Knowledge-Based Approaches: Two cases of applicability

    DEFF Research Database (Denmark)

    Andersen, Tom

    1997-01-01

    Basic issues of the term knowledge-based approach (KBA) are discussed. Two cases applicable to KBA are presented, and it is concluded that KBA is more than just IT.

  5. A knowledge-based approach for recognition of handwritten Pitman ...

    Indian Academy of Sciences (India)

    The paper describes a knowledge-based approach for the recognition of PSL strokes. Information about the location and direction of the starting and final points of strokes is considered the knowledge base for recognition of strokes. The work comprises preprocessing, determination of starting and final points, ...

  6. Intelligent assembly time analysis, using a digital knowledge based approach

    NARCIS (Netherlands)

    Jin, Y.; Curran, R.; Butterfield, J.; Burke, R.; Welch, B.

    2009-01-01

    Implementing effective time analysis methods quickly and accurately in the era of digital manufacturing has become a significant challenge for aerospace manufacturers hoping to build and maintain a competitive advantage. This paper proposes a structure-oriented, knowledge-based approach for

  7. A structural informatics approach to mine kinase knowledge bases.

    Science.gov (United States)

    Brooijmans, Natasja; Mobilio, Dominick; Walker, Gary; Nilakantan, Ramaswamy; Denny, Rajiah A; Feyfant, Eric; Diller, David; Bikker, Jack; Humblet, Christine

    2010-03-01

    In this paper, we describe a combination of structural informatics approaches developed to mine data extracted from existing structure knowledge bases (Protein Data Bank and the GVK database) with a focus on kinase ATP-binding site data. In contrast to existing systems that retrieve and analyze protein structures, our techniques are centered on a database of ligand-bound geometries in relation to residues lining the binding site and transparent access to ligand-based SAR data. We illustrate the systems in the context of the Abelson kinase and related inhibitor structures. 2009 Elsevier Ltd. All rights reserved.

  8. Big data analytics in immunology: a knowledge-based approach.

    Science.gov (United States)

    Zhang, Guang Lan; Sun, Jing; Chitkushev, Lou; Brusic, Vladimir

    2014-01-01

    With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow.

  9. Big Data Analytics in Immunology: A Knowledge-Based Approach

    Directory of Open Access Journals (Sweden)

    Guang Lan Zhang

    2014-01-01

    Full Text Available With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow.

  10. Intelligent Flowcharting Developmental Approach to Legal Knowledge Based System

    Directory of Open Access Journals (Sweden)

    Nitin Balaji Bilgi

    2011-10-01

    Full Text Available The basic aim of this research, described in this paper, is to develop a hybrid legal expert system/knowledge-based system, with specific reference to the Transfer of Property Act within the Indian legal system, which is often in demand. In this paper the authors discuss a traditional approach to combining two types of reasoning methodologies, Rule Based Reasoning (RBR) and Case Based Reasoning (CBR). In the RBR module we have interpreted and implemented rules that occur in legal statutes of the Transfer of Property Act. In the CBR module we have an implementation to find related cases. The VisiRule software made available by Logic Programming Associates is used in the development of the RBR part of this expert system. The authors have used Java NetBeans for development of the CBR module. VisiRule is a decision-charting tool in which the rules are defined by a combination of graphical shapes and pieces of text, and which produces rules.

  11. Space nuclear reactor system diagnosis: Knowledge-based approach

    International Nuclear Information System (INIS)

    Ting, Y.T.D.

    1990-01-01

    SP-100 space nuclear reactor system development is a joint effort by the Department of Energy, the Department of Defense and the National Aeronautics and Space Administration. The system is designed to operate in isolation for many years, possibly with little or no remote maintenance. This dissertation proposes a knowledge-based diagnostic system which, in principle, can diagnose the faults that can either cause reactor shutdown or lead to another serious problem. The framework can in general be applied to the fully specified system if detailed design information becomes available. The set of faults considered herein is identified based on heuristic knowledge about the system operation. A suitable approach to diagnostic problem solving is proposed after investigating the most prevalent methodologies in Artificial Intelligence as well as the causal analysis of the system. Deep causal knowledge modeling based on digraph, fault-tree or logic flowgraph methodology would require a knowledge representation able to handle time-dependent system behavior. A qualitative temporal knowledge modeling methodology, using rules with specified time delays among the process variables, is proposed and used to develop a sufficient diagnostic rule set. The rule set has been modified using a time-zone approach to obtain a robust system design. The sufficient rule set is transformed into a sufficient and necessary one by searching the whole knowledge base. Qualitative data analysis is proposed for analyzing the measured data in a real-time situation. An expert system shell, Intelligence Compiler, is used to develop the prototype system. Frames are used for the process variables. Forward-chaining rules are used in monitoring and backward-chaining rules are used in diagnosis.
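
    The rules with specified time delays mentioned above can be illustrated with a minimal, hedged sketch in Python; the process variables, delay value and fault name are invented for illustration and are not taken from the SP-100 rule set:

        # A diagnostic rule that fires only if two qualitative events occur
        # within a specified time delay of each other (illustrative only).
        events = [  # (time in seconds, variable, qualitative state)
            (0.0, "coolant_flow", "low"),
            (4.0, "core_outlet_temp", "high"),
        ]

        def rule_pump_fault(events, max_delay=5.0):
            """IF coolant_flow low AND core_outlet_temp high within max_delay
            seconds THEN suspect a primary pump fault."""
            flow_low = [t for t, v, s in events if v == "coolant_flow" and s == "low"]
            temp_high = [t for t, v, s in events if v == "core_outlet_temp" and s == "high"]
            return any(0 <= th - tf <= max_delay for tf in flow_low for th in temp_high)

        print(rule_pump_fault(events))  # True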

  12. Knowledge-based approach to video content classification

    Science.gov (United States)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, demonstrating the validity of the proposed approach.
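
    MYCIN's inexact reasoning combines certainty factors from multiple rules that support the same conclusion. A minimal sketch of that combination step, assuming the standard MYCIN formula (the class name and values are illustrative, not taken from the paper):

        def combine_cf(cf1, cf2):
            """MYCIN-style combination of two certainty factors in [-1, 1]."""
            if cf1 >= 0 and cf2 >= 0:
                return cf1 + cf2 * (1 - cf1)
            if cf1 < 0 and cf2 < 0:
                return cf1 + cf2 * (1 + cf1)
            # Mixed signs: divide by 1 minus the smaller absolute value.
            return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

        # Two rules supporting the "basketball" class with CFs 0.6 and 0.5
        # combine to 0.8.
        print(combine_cf(0.6, 0.5))  # 0.8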

  13. Knowledge-based biomedical word sense disambiguation: comparison of approaches

    Directory of Open Access Journals (Sweden)

    Aronson Alan R

    2010-11-01

    Full Text Available Abstract Background Word sense disambiguation (WSD) algorithms attempt to select the proper sense of ambiguous terms in text. Resources like the UMLS provide a reference thesaurus to be used to annotate the biomedical literature. Statistical learning approaches have produced good results, but the size of the UMLS makes the production of training data infeasible to cover the entire domain. Methods We present research on existing WSD approaches based on knowledge bases, which complement the studies performed on statistical learning. We compare four approaches which rely on the UMLS Metathesaurus as the source of knowledge. The first approach compares the overlap of the context of the ambiguous word to the candidate senses based on a representation built out of the definitions, synonyms and related terms. The second approach collects training data for each of the candidate senses to perform WSD based on queries built using monosemous synonyms and related terms. These queries are used to retrieve MEDLINE citations. Then, a machine learning approach is trained on this corpus. The third approach is a graph-based method which exploits the structure of the Metathesaurus network of relations to perform unsupervised WSD. This approach ranks nodes in the graph according to their relative structural importance. The last approach uses the semantic types assigned to the concepts in the Metathesaurus to perform WSD. The context of the ambiguous word and semantic types of the candidate concepts are mapped to Journal Descriptors. These mappings are compared to decide among the candidate concepts. Results are provided estimating accuracy of the different methods on the WSD test collection available from the NLM. Conclusions We have found that the last approach achieves better results compared to the other methods. The graph-based approach, using the structure of the Metathesaurus network to estimate the relevance of the Metathesaurus concepts, does not perform well
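
    The first (overlap-based) approach can be illustrated with a minimal Lesk-style sketch; the sense profiles and context below are invented for illustration and are not drawn from the UMLS Metathesaurus:

        def overlap_score(context_words, sense_profile):
            """Count words shared by the ambiguous term's context and a candidate
            sense profile (definition, synonyms, related terms)."""
            return len(set(context_words) & set(sense_profile))

        def disambiguate(context_words, candidate_senses):
            """Pick the candidate sense whose profile overlaps the context most."""
            return max(candidate_senses,
                       key=lambda s: overlap_score(context_words, candidate_senses[s]))

        senses = {
            "cold_temperature": ["temperature", "weather", "low", "freezing"],
            "common_cold": ["virus", "infection", "symptom", "nasal", "cough"],
        }
        context = ["patient", "reported", "cough", "and", "nasal", "symptom"]
        print(disambiguate(context, senses))  # common_cold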

  14. An Algebraic Approach to Knowledge Bases Informational Equivalence

    OpenAIRE

    Plotkin, B.; Plotkin, T.

    2003-01-01

    In this paper we study the notion of knowledge from the positions of universal algebra and algebraic logic. We consider first order knowledge which is based on first order logic. We define categories of knowledge and knowledge bases. These notions are defined for the fixed subject of knowledge. The key notion of informational equivalence of two knowledge bases is introduced. We use the idea of equivalence of categories in this definition. We prove that for finite models there is a clear way t...

  15. Knowledge based expert system approach to instrumentation selection (INSEL)

    Directory of Open Access Journals (Sweden)

    S. Barai

    2004-08-01

    Full Text Available The selection of appropriate instrumentation for any structural measurement of a civil engineering structure is a complex task. Recent developments in Artificial Intelligence (AI) can help in an organized use of the experiential knowledge available on instrumentation for laboratory and in-situ measurement. Usually, the instrumentation decision is based on the experience and judgment of experimentalists. The heuristic knowledge available for different types of measurement is domain dependent, and the information is scattered across varied knowledge sources. Knowledge engineering techniques can help in capturing this experiential knowledge. This paper demonstrates a prototype knowledge-based system, the INstrument SELection (INSEL) assistant, where the experiential knowledge for various structural domains can be captured and utilized for making instrumentation decisions. In particular, this Knowledge Based Expert System (KBES) encodes the heuristics on measurement and demonstrates the instrument selection process with reference to steel bridges. INSEL runs on a microcomputer and uses the INSIGHT 2+ environment.

  16. Fault detection and reliability, knowledge based and other approaches

    International Nuclear Information System (INIS)

    Singh, M.G.; Hindi, K.S.; Tzafestas, S.G.

    1987-01-01

    These proceedings are split up into four major parts in order to reflect the most significant aspects of reliability and fault detection as viewed at present. The first part deals with knowledge-based systems and comprises eleven contributions from leading experts in the field. The emphasis here is primarily on the use of artificial intelligence, expert systems and other knowledge-based systems for fault detection and reliability. The second part is devoted to fault detection of technological systems and comprises thirteen contributions dealing with applications of fault detection techniques to various technological systems such as gas networks, electric power systems, nuclear reactors and assembly cells. The third part of the proceedings, which consists of seven contributions, treats robust, fault tolerant and intelligent controllers and covers methodological issues as well as several applications ranging from nuclear power plants to industrial robots to steel grinding. The fourth part treats fault tolerant digital techniques and comprises five contributions. Two papers, one on reactor noise analysis, the other on reactor control system design, are indexed separately. (author)

  17. Problem-Oriented Corporate Knowledge Base Models on the Case-Based Reasoning Approach Basis

    Science.gov (United States)

    Gluhih, I. N.; Akhmadulin, R. K.

    2017-07-01

    One of the urgent directions for enhancing the efficiency of production processes and enterprise activity management is the creation and use of corporate knowledge bases. The article suggests a concept of problem-oriented corporate knowledge bases (PO CKB), in which knowledge is arranged around possible problem situations and represents a tool for making and implementing decisions in such situations. For knowledge representation in PO CKB, the use of a case-based reasoning approach is encouraged. Under this approach, the content of a case as a knowledge base component has been defined; based on the situation tree, a PO CKB knowledge model has been developed, in which knowledge about typical situations as well as specific examples of situations and solutions is represented. A generalized structural chart of a problem-oriented corporate knowledge base and possible modes of its operation have been suggested. The obtained models allow creating and using corporate knowledge bases for support of decision making and implementation, training, staff skill upgrading and analysis of the decisions taken. The universal interpretation of the terms “situation” and “solution” adopted in the work allows using the suggested models to develop problem-oriented corporate knowledge bases in different subject domains. It has been suggested to use the developed models for building the corporate knowledge bases of enterprises that operate engineering systems and networks at large production facilities.

  18. An approach to build a knowledge base for reactor diagnostic system using statistical method

    International Nuclear Information System (INIS)

    Yokobayashi, Masao; Matsumoto, Kiyoshi; Kohsaka, Atsuo

    1988-01-01

    In the development of a rule-based expert system, one of the key issues is how to acquire knowledge and to build a knowledge base. When the knowledge base of DISKET, an expert system for nuclear reactor accident diagnosis developed at the Japan Atomic Energy Research Institute, was built, several problems were experienced. Writing rules is a time-consuming task, and it was difficult to keep the objectivity and consistency of the rules as their number increased. Certainty factors must often be determined according to engineering judgement, i.e. empirically or intuitively. A systematic approach was attempted to cope with these difficulties and to build an objective knowledge base efficiently. The approach described in this paper is based on the concept that a prototype knowledge base, colloquially speaking an initial guess, should first be generated in a systematic way and then modified or improved by human experts for practical use. Factor analysis was used as the systematic method. The DISKET system, the procedure for building a knowledge base, and the verification of the approach are reported. (Kako, I.)

  19. Influencing factors for condition-based maintenance in railway tracks using knowledge-based approach

    NARCIS (Netherlands)

    Jamshidi, A.; Hajizadeh, S.; Naeimi, M.; Nunez Vicencio, Alfredo; Li, Z.

    2017-01-01

    In this paper, we present a condition-based maintenance decision method using a knowledge-based approach for rail surface defects. A railway track may contain a considerable number of surface defects which influence track maintenance decisions. The proposed method is based on two sets of

  20. Condensed and Updated Version of the Systematic Approach Meteorological Knowledge Base Western North Pacific

    OpenAIRE

    Carr, Lester E., III; Elsberry, Russell L.; Boothe, Mark A.

    1997-01-01

    The views expressed in this report are those of the authors and do not reflect the official policy or position of the Department of Defense. The meteorological knowledge base for the Systematic and Integrated Approach to Tropical Cyclone Track Forecasting proposed by Carr and Elsberry has evolved as additional research has been completed. This Systematic Approach has been applied in the eastern and central North Pacific, and in the Southern Hemisphere, a number of conceptual models have bee...

  1. An approach to build knowledge base for reactor accident diagnostic system using statistical method

    International Nuclear Information System (INIS)

    Kohsaka, Atsuo; Yokobayashi, Masao; Matsumoto, Kiyoshi; Fujii, Minoru

    1988-01-01

    In the development of a rule-based expert system, one of the key issues is how to build a knowledge base (KB). A systematic approach has been attempted for building an objective KB efficiently. The approach is based on the concept that a prototype KB should first be generated in a systematic way and then be modified and/or improved by experts for practical use. The statistical method Factor Analysis was applied to build a prototype KB for the JAERI expert system DISKET, using source information obtained from a PWR simulator. The prototype KB was obtained, and inference with this KB was performed against several types of transients. In each diagnosis, the transient type was well identified. From this study, it is concluded that the statistical method used is useful for building a prototype knowledge base. (author)
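
    A minimal, hedged sketch of how factor analysis over simulator data could suggest candidate rule premises; the variable names, threshold and random data are placeholders, not the DISKET/JAERI procedure:

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        variables = ["pressurizer_pressure", "sg_level", "containment_pressure", "coolant_temp"]
        X = np.random.rand(200, len(variables))  # stand-in for simulator transient data

        fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
        for i, loadings in enumerate(fa.components_):
            # Variables loading strongly on a factor become candidate premises
            # of a prototype rule for the transient class tied to that factor.
            premises = [v for v, w in zip(variables, loadings) if abs(w) > 0.3]
            print(f"factor {i}: candidate rule premises -> {premises}")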

  2. Business Process Management – A Traditional Approach Versus a Knowledge Based Approach

    Directory of Open Access Journals (Sweden)

    Roberto Paiano

    2015-12-01

    Full Text Available Enterprise management represents a heterogeneous aggregate of both resources and assets that need to be coordinated and orchestrated in order to reach the goals related to the business mission. The influences and forces that may affect this process, and that should therefore be considered, are not confined to the business environment but relate to the entire operational context of a company. For this reason, business processes must be as versatile and flexible as possible with respect to the changes that occur within the whole operational context of a company. Considering the supportive role that information systems play in favour of Business Process Management (BPM), it is also essential to implement a constant, continuous and quick mechanism for aligning the information system with the evolution of business processes. In particular, such a mechanism must intervene on BPM systems in order to keep them aligned and compliant with both context changes and regulations. In order to facilitate this alignment mechanism, companies already rely on the support offered by specific solutions, such as knowledge bases. In this context, a possible solution might be the approach we propose, which is based on a specific framework called Process Management System. Our methodology implements knowledge-base support for business experts which is not limited to the BPM operating phases but also includes the engineering and prototyping activities of the corresponding information system. This paper aims to compare and evaluate a traditional BPM approach with the approach we propose. In effect, this analysis aims to emphasize the shortcomings of the traditional methodology, especially with respect to the alignment between business processes and information systems, along with their compliance with the context domain and regulations.

  3. A knowledge-based approach to the design of integrated renewable energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Ramakumar, R.; Abouzahr, I. (Oklahoma State Univ., Stillwater, OK (United States). Engineering Energy Lab.); Ashenayi, K. (Dept. of Electrical Engineering, Univ. of Tulsa, Tulsa, OK (United States))

    1992-12-01

    Integrated Renewable Energy Systems (IRES) utilize two or more renewable energy resources and end-use technologies to supply a variety of energy needs, often in a stand-alone mode. A knowledge-based design approach that minimizes the total capital cost at a pre-selected reliability level is presented. The reliability level is quantified by the loss of power supply probability (LPSP). The procedure includes some resource-need matching based on economics, the quality of energy needed, and the characteristics of the resource. A detailed example is presented in this paper and discussed to illustrate the usefulness of the design approach.

  4. Constructing and Refining Knowledge Bases: A Collaborative Apprenticeship Multistrategy Learning Approach

    National Research Council Canada - National Science Library

    Tecuci, Gheorghe

    2000-01-01

    This research has developed a theory, methodology and learning agent shell for development of knowledge bases and knowledge-based agents, by domain experts, with limited assistance from knowledge engineers...

  5. A knowledge-based approach to the evaluation of fault trees

    International Nuclear Information System (INIS)

    Hwang, Yann-Jong; Chow, Louis R.; Huang, Henry C.

    1996-01-01

    A list of critical components is useful for determining the potential problems of a complex system. However, finding this list by evaluating fault trees is expensive and time consuming. This paper proposes an integrated software program which consists of a fault tree constructor, a knowledge base, and an efficient algorithm for evaluating the minimal cut sets of a large fault tree. The proposed algorithm uses top-down heuristic searching and probability-based truncation. This makes the evaluation of fault trees markedly more efficient and provides the critical components needed for solving potential problems in complex systems. Finally, some practical fault trees are included to illustrate the results
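
    A minimal sketch of top-down cut-set expansion with probability-based truncation, assuming a tiny illustrative gate structure and failure probabilities (not the paper's algorithm or data):

        GATES = {
            "TOP": ("OR",  ["G1", "E3"]),
            "G1":  ("AND", ["E1", "E2"]),
        }
        P = {"E1": 0.01, "E2": 0.02, "E3": 1e-8}
        CUTOFF = 1e-7

        def prob(cut_set):
            result = 1.0
            for e in cut_set:
                result *= P[e]
            return result

        def expand(cut_sets, name):
            """Replace gate `name` in every cut set that contains it."""
            kind, children = GATES[name]
            new_sets = []
            for cs in cut_sets:
                if name not in cs:
                    new_sets.append(cs)
                elif kind == "AND":
                    new_sets.append((cs - {name}) | set(children))
                else:  # OR gate: one new cut set per child
                    new_sets.extend((cs - {name}) | {c} for c in children)
            return new_sets

        cut_sets = [{"TOP"}]
        for gate in ["TOP", "G1"]:
            cut_sets = expand(cut_sets, gate)
        cut_sets = [cs for cs in cut_sets if prob(cs) >= CUTOFF]  # truncation
        print(cut_sets)  # [{'E1', 'E2'}]; the {E3} cut set falls below the cutoff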

  6. A knowledge-based approach to identification and adaptation in dynamical systems control

    Science.gov (United States)

    Glass, B. J.; Wong, C. M.

    1988-01-01

    Artificial intelligence techniques are applied to the problems of model form and parameter identification of large-scale dynamic systems. The object-oriented knowledge representation is discussed in the context of causal modeling and qualitative reasoning. Structured sets of rules are used for implementing qualitative component simulations, for catching qualitative discrepancies and quantitative bound violations, and for making reconfiguration and control decisions that affect the physical system. These decisions are executed by backward-chaining through a knowledge base of control action tasks. This approach was implemented for two examples: a triple quadrupole mass spectrometer and a two-phase thermal testbed. Results of tests with both of these systems demonstrate that the software replicates some or most of the functionality of a human operator, thereby reducing the need for a human-in-the-loop in the lower levels of control of these complex systems.
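
    The backward-chaining step described above can be sketched minimally; the rules, facts and goal below are invented placeholders rather than the actual spectrometer or thermal-testbed knowledge base:

        RULES = {  # goal: subgoals that must all hold
            "reconfigure_pump": ["pump_degraded", "backup_available"],
            "pump_degraded": ["flow_low", "pressure_drop_high"],
        }
        FACTS = {"flow_low", "pressure_drop_high", "backup_available"}

        def prove(goal):
            """A goal holds if it is a known fact or all subgoals of a rule hold."""
            if goal in FACTS:
                return True
            return goal in RULES and all(prove(sub) for sub in RULES[goal])

        print(prove("reconfigure_pump"))  # True -> execute the reconfiguration task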

  7. A knowledge based approach to matching human neurodegenerative disease and animal models

    Directory of Open Access Journals (Sweden)

    Maryann E Martone

    2013-05-01

    Full Text Available Neurodegenerative diseases present a wide and complex range of biological and clinical features. Animal models are key to translational research, yet typically only exhibit a subset of disease features rather than being precise replicas of the disease. Consequently, connecting animal to human conditions using direct data-mining strategies has proven challenging, particularly for diseases of the nervous system, with its complicated anatomy and physiology. To address this challenge we have explored the use of ontologies to create formal descriptions of structural phenotypes across scales that are machine processable and amenable to logical inference. As proof of concept, we built a Neurodegenerative Disease Phenotype Ontology and an associated Phenotype Knowledge Base using an entity-quality model that incorporates descriptions for both human disease phenotypes and those of animal models. Entities are drawn from community ontologies made available through the Neuroscience Information Framework and qualities are drawn from the Phenotype and Trait Ontology. We generated ~1200 structured phenotype statements describing structural alterations at the subcellular, cellular and gross anatomical levels observed in 11 human neurodegenerative conditions and associated animal models. PhenoSim, an open source tool for comparing phenotypes, was used to issue a series of competency questions to compare individual phenotypes among organisms and to determine which animal models recapitulate phenotypic aspects of the human disease in aggregate. Overall, the system was able to use relationships within the ontology to bridge phenotypes across scales, returning non-trivial matches based on common subsumers that were meaningful to a neuroscientist with an advanced knowledge of neuroanatomy. The system can be used both to compare individual phenotypes and also phenotypes in aggregate. This proof of concept suggests that expressing complex phenotypes using formal

  8. Participatory approach to the development of a knowledge base for problem-solving in diabetes self-management.

    Science.gov (United States)

    Cole-Lewis, Heather J; Smaldone, Arlene M; Davidson, Patricia R; Kukafka, Rita; Tobin, Jonathan N; Cassells, Andrea; Mynatt, Elizabeth D; Hripcsak, George; Mamykina, Lena

    2016-01-01

    To develop an expandable knowledge base of reusable knowledge related to self-management of diabetes that can be used as a foundation for patient-centric decision support tools. The structure and components of the knowledge base were created in participatory design with academic diabetes educators using knowledge acquisition methods. The knowledge base was validated using a scenario-based approach with practicing diabetes educators and individuals with diabetes recruited from Community Health Centers (CHCs) serving economically disadvantaged communities and ethnic minorities in New York. The knowledge base includes eight glycemic control problems, over 150 behaviors known to contribute to these problems coupled with contextual explanations, and over 200 specific action-oriented self-management goals for correcting problematic behaviors, with corresponding motivational messages. The validation of the knowledge base suggested a high level of completeness and accuracy and identified improvements in cultural appropriateness. These were addressed in new iterations of the knowledge base. The resulting knowledge base is theoretically grounded, incorporates practical and evidence-based knowledge used by diabetes educators in practice settings, and allows for personally meaningful choices by individuals with diabetes. The participatory design approach helped researchers to capture the implicit knowledge of practicing diabetes educators and make it explicit and reusable. The knowledge base proposed here is an important step towards the development of a new generation of patient-centric decision support tools for facilitating chronic disease self-management. While this knowledge base specifically targets diabetes, its overall structure and composition can be generalized to other chronic conditions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  9. An Ensemble Approach to Knowledge-Based Intensity-Modulated Radiation Therapy Planning

    Directory of Open Access Journals (Sweden)

    Jiahan Zhang

    2018-03-01

    Full Text Available Knowledge-based planning (KBP) utilizes experienced planners’ knowledge embedded in prior plans to estimate the optimal achievable dose volume histogram (DVH) of new cases. In the regression-based KBP framework, previously planned patients’ anatomical features and DVHs are extracted, and prior knowledge is summarized as the regression coefficients that transform features to organ-at-risk DVH predictions. In our study, we find that different regression methods work better in different settings. To improve the robustness of KBP models, we propose an ensemble method that combines the strengths of various linear regression models, including stepwise, lasso, elastic net, and ridge regression. In the ensemble approach, we first obtain individual model prediction metadata using in-training-set leave-one-out cross validation. A constrained optimization is subsequently performed to decide individual model weights. The metadata is also used to filter out impactful training set outliers. We evaluate our method on a fresh set of retrospectively retrieved anonymized prostate intensity-modulated radiation therapy (IMRT) cases and head and neck IMRT cases. The proposed approach is more robust against small training set size, wrongly labeled cases, and dosimetrically inferior plans, compared with other individual models. In summary, we believe the improved robustness makes the proposed method more suitable for clinical settings than individual models.
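
    A minimal sketch of the weight-selection step, assuming nonnegative weights that sum to one and a squared-error objective over leave-one-out prediction metadata; the data here are random stand-ins, not the authors' DVH metrics or exact formulation:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        y = rng.random(40)                                        # "true" dose metric per case
        preds = y[:, None] + 0.1 * rng.standard_normal((40, 4))   # 4 models' LOO predictions

        def loss(w):
            return np.mean((preds @ w - y) ** 2)

        k = preds.shape[1]
        res = minimize(loss, x0=np.full(k, 1.0 / k), method="SLSQP",
                       bounds=[(0.0, 1.0)] * k,
                       constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
        print("ensemble weights:", np.round(res.x, 3))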

  10. INTEGRATING CASE-BASED REASONING, KNOWLEDGE-BASED APPROACH AND TSP ALGORITHM FOR MINIMUM TOUR FINDING

    Directory of Open Access Journals (Sweden)

    Hossein Erfani

    2009-07-01

    Full Text Available Imagine you have traveled to an unfamiliar city. Before you start your daily tour around the city, you need to know a good route. In Network Theory (NT), this is the traveling salesman problem (TSP). A dynamic programming algorithm is often used for solving this problem. However, when the road network of the city is very complicated and dense, which is usually the case, it will take too long for the algorithm to find the shortest path. Furthermore, in reality, things are not as simple as those stated in NT. For instance, the cost of travel for the same part of the city at different times may not be the same. In this project, we have integrated the TSP algorithm with an AI knowledge-based approach and case-based reasoning in solving the problem. With this integration, knowledge about the geographical information and past cases is used to help the TSP algorithm find a solution. This approach dramatically reduces the computation time required for minimum tour finding.
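
    A minimal, hedged sketch of the case-based step: reuse a previously solved tour for the same set of stops before falling back to search (the distances, stops and case base are invented for illustration, not the paper's system):

        import itertools

        DIST = {("A", "B"): 2, ("A", "C"): 4, ("A", "D"): 3,
                ("B", "C"): 1, ("B", "D"): 5, ("C", "D"): 2}

        def d(a, b):
            return 0 if a == b else DIST.get((a, b), DIST.get((b, a)))

        def tour_length(tour):
            return sum(d(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

        CASE_BASE = {frozenset("ABCD"): ("A", "B", "C", "D")}  # previously solved tour

        def best_tour(stops):
            # Case-based step: reuse a stored tour for the same stop set if one exists.
            cached = CASE_BASE.get(frozenset(stops))
            if cached:
                return cached
            # Fall back to exhaustive search for this tiny example.
            return min(itertools.permutations(stops), key=tour_length)

        tour = best_tour(["A", "B", "C", "D"])
        print(tour, tour_length(tour))  # ('A', 'B', 'C', 'D') 8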

  11. An approach to build a knowledge base for reactor accident diagnostic expert system

    International Nuclear Information System (INIS)

    Yoshida, K.; Fujii, M.; Fujiki, K.; Yokobayashi, M.; Kohsaka, A.; Aoyagi, T.; Hirota, Y.

    1987-01-01

    In the development of a rule-based expert system, one of the key issues is how to acquire knowledge and to build a knowledge base (KB). In building the KB of DISKET, which is an expert system for nuclear reactor accident diagnosis developed at JAERI, several problems have been experienced. Writing rules is a time-consuming task, and it is difficult to keep the objectivity and consistency of rules as the number of rules increases. Further, certainty factors (CFs) must often be determined according to engineering judgment, i.e., empirically or intuitively. A systematic approach was attempted to handle these difficulties and to build an objective KB efficiently. The approach described in this paper is based on the concept that a prototype KB, colloquially speaking an initial guess, should first be generated in a systematic way and then be modified and/or improved by human experts for practical use. Statistical methods, principally Factor Analysis, were used as the systematic way to build a prototype KB for DISKET using PWR plant simulator data. The source information is a set of data obtained from the simulation of transients, such as the status of components and annunciators, and major process parameters like pressures, temperatures and so on

  12. Distributional and Knowledge-Based Approaches for Computing Portuguese Word Similarity

    Directory of Open Access Journals (Sweden)

    Hugo Gonçalo Oliveira

    2018-02-01

    Full Text Available Identifying similar and related words is not only key in natural language understanding but also a suitable task for assessing the quality of computational resources that organise words and meanings of a language, compiled by different means. This paper, which aims to be a reference for those interested in computing word similarity in Portuguese, presents several approaches for this task and is motivated by the recent availability of state-of-the-art distributional models of Portuguese words, which add to several lexical knowledge bases (LKBs) for this language, available for a longer time. The previous resources were exploited to answer word similarity tests, which also became recently available for Portuguese. We conclude that there are several valid approaches for this task, but not one that outperforms all the others in every single test. Distributional models seem to capture relatedness better, while LKBs are better suited for computing genuine similarity, but, in general, better results are obtained when knowledge from different sources is combined.
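
    The distributional side of such a comparison typically reduces to cosine similarity between word vectors. A minimal sketch with invented toy vectors (not a real Portuguese embedding model or the paper's data):

        import numpy as np

        vectors = {
            "carro": np.array([0.9, 0.1, 0.3]),
            "automóvel": np.array([0.8, 0.2, 0.35]),
            "banana": np.array([0.1, 0.9, 0.2]),
        }

        def cosine(a, b):
            return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

        print(round(cosine(vectors["carro"], vectors["automóvel"]), 3))  # high: related words
        print(round(cosine(vectors["carro"], vectors["banana"]), 3))     # low: unrelated words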

  13. Issues involved in a knowledge-based approach to procedure synthesis

    International Nuclear Information System (INIS)

    Hajek, B.K.; Khartabil, L.F.; Miller, D.W.

    1992-01-01

    Many knowledge-based systems (KBSs) have been built to assist human operators in managing nuclear power plant operating functions, such as monitoring, fault diagnosis, alarm filtering, and procedure management. For procedure management, KBSs have been built to display and track existing written procedures or to dynamically follow procedure execution by monitoring plant data and action execution and suggesting recovery steps. More recent works build KBSs able to synthesize procedures. This paper addresses and examines the main issues related to the implementation of on-line procedure synthesis using KBSs. A KBS for procedure synthesis can provide a more robust and effective procedural plan during accidents. Currently existing procedures for abnormal plant conditions, written as precompiled step sets based on the event and symptom approaches, are inherently not robust because anticipation of all potential plant states and associated plant responses is not possible. Thus, their failure recovery capability is limited to the precompiled set. Procedure synthesis has the potential to overcome these two problems because it does not require such precompilation of large sets of plant states and associated recovery procedures. Other benefits obtained from a complete procedure synthesis system are providing (a) a methodology for off-line procedure verification and (b) a methodology for the eventual automation of plant operations

  14. THE DESIGN OF KNOWLEDGE BASE FOR SURFACE RELATIONS BASED PART RECOGNITION APPROACH

    Directory of Open Access Journals (Sweden)

    Adem ÇİÇEK

    2007-01-01

    Full Text Available In this study, a new knowledge base for an expert system used in a part recognition algorithm has been designed. Parts are recognized by the computer program by comparing the face adjacency relations and attributes of each part, represented as rules in the knowledge base, with the face adjacency relations and attributes generated from the STEP file of the part. In addition, the rule-writing process has been greatly simplified by generating the rules in the knowledge base with an automatic rule-writing module developed within the system. With the knowledge base and automatic rule-writing module used in the part recognition system, simple, intermediate and complex parts can be recognized by the part recognition program.

  15. Knowledge base and neural network approach for protein secondary structure prediction.

    Science.gov (United States)

    Patel, Maulika S; Mazumdar, Himanshu S

    2014-11-21

    Protein structure prediction is of great relevance given the abundant genomic and proteomic data generated by the genome sequencing projects. Protein secondary structure prediction is addressed as a sub-task in determining the protein tertiary structure and function. In this paper, a novel algorithm, KB-PROSSP-NN, which combines a knowledge base with neural-network modeling of the exceptions in the knowledge base, is proposed for protein secondary structure prediction (PSSP). The knowledge base is derived from a proteomic sequence-structure database and consists of the statistics of association between 5-residue words and the corresponding secondary structure. The predicted results obtained using the knowledge base are refined with a backpropagation neural network algorithm; the neural net models the exceptions of the knowledge base. Q3 accuracies of 90% and 82% are achieved on the RS126 and CB396 test sets, respectively, which suggests an improvement over existing state-of-the-art methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
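
    A minimal sketch of the knowledge-base step, associating 5-residue words with the secondary-structure label of their central residue; the toy sequence and labels are invented and not the KB-PROSSP-NN training data:

        from collections import Counter, defaultdict

        train = [("MKTAYIAKQR", "CCHHHHHHCC")]  # (sequence, structure) pairs

        kb = defaultdict(Counter)
        for seq, ss in train:
            for i in range(len(seq) - 4):
                kb[seq[i:i + 5]][ss[i + 2]] += 1  # label of the central residue

        def predict(word):
            """Most frequent structure label seen for a 5-residue word (default coil)."""
            return kb[word].most_common(1)[0][0] if word in kb else "C"

        print(predict("TAYIA"))  # 'H' for this toy example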

  16. A knowledge-based approach for C-factor mapping in Spain using Landsat TM and GIS

    DEFF Research Database (Denmark)

    Veihe (former Folly), Anita; Bronsveld, M.C.; Clavaux, M

    1996-01-01

    The cover and management factor (C) in the Universal Soil Loss Equation (USLE), is one of the most important parameters for assessing erosion. In this study it is shown how a knowledge-based approach can be used to optimize C-factor mapping in the Mediterranean region being characterized...... to the limitations of the USLE...

  17. Sphinx: merging knowledge-based and ab initio approaches to improve protein loop prediction.

    Science.gov (United States)

    Marks, Claire; Nowak, Jaroslaw; Klostermann, Stefan; Georges, Guy; Dunbar, James; Shi, Jiye; Kelm, Sebastian; Deane, Charlotte M

    2017-05-01

    Loops are often vital for protein function; however, their irregular structures make them difficult to model accurately. Current loop modelling algorithms can mostly be divided into two categories: knowledge-based, where databases of fragments are searched to find suitable conformations, and ab initio, where conformations are generated computationally. Existing knowledge-based methods only use fragments that are the same length as the target, even though loops of slightly different lengths may adopt similar conformations. Here, we present a novel method, Sphinx, which combines ab initio techniques with the potential extra structural information contained within loops of a different length to improve structure prediction. We show that Sphinx is able to generate high-accuracy predictions and decoy sets enriched with near-native loop conformations, performing better than the ab initio algorithm on which it is based. In addition, it is able to provide predictions for every target, unlike some knowledge-based methods. Sphinx can be used successfully for the difficult problem of antibody H3 prediction, outperforming RosettaAntibody, one of the leading H3-specific ab initio methods, both in accuracy and speed. Sphinx is available at http://opig.stats.ox.ac.uk/webapps/sphinx. deane@stats.ox.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.

  18. Software Development for Auto-Generation of Interlocking Knowledge base using Artificial Intelligence Approach

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Yun Seok [Nanseoul University (Korea)]; Kim, Jong Sun [Kwangwoon University (Korea)]

    1999-06-01

    This paper proposes IIKBAG (Intelligent Interlocking Knowledge Base Generator), which can automatically build the interlocking knowledge base used as the real-time interlocking strategy of an electronic interlocking system, in order to enhance its reliability and expandability. The IIKBAG consists of an inference engine and a knowledge base. The former has an auto-learning function which searches all the train routes for the given station model using a heuristic search technique while dynamically searching the model, and then automatically generates the interlocking patterns obtained from the interlocking relations of the signal facilities on the routes. The latter is designed as a structure which the real-time expert system embedded in the IS (Interlocking System) can use directly, in order to enhance reliability and accuracy. The IIKBAG is implemented in the C language for building and interfacing with the station structure database. A typical station model is simulated to prove the validity of the proposed IIKBAG. (author). 13 refs., 5 figs., 2 tabs.

  19. Isotopic approach to the provenance study of artifacts

    International Nuclear Information System (INIS)

    Mabuchi, Hisao

    1994-01-01

    Isotopic abundance ratios, which have proved to be generally constant, are known to vary for certain elements from one place to another. Light elements, such as hydrogen, lithium, boron, carbon, nitrogen, oxygen and sulfur, show measurable variations of their isotope ratios due to isotopic fractionation, which takes place during geochemical or biochemical processes. Isotope ratios of strontium and lead vary due to the decay of the long-lived radioactive nuclides ⁸⁷Rb and ²³⁸U, ²³⁵U and ²³²Th, respectively. Such isotopic anomalies are applicable to provenance studies of archaeological objects. Thus, ¹³C/¹²C, ¹⁸O/¹⁶O, or ⁸⁷Sr/⁸⁶Sr ratios were used to examine the authenticity of Greek marble statues. Also, lead isotope ratios have been used extensively since the mid-1960s for provenance studies of glasses and bronzes of different civilizations. As an example, the author presents a series of his own works on lead isotope ratios applied to ancient Japanese bronzes, which are summarized as follows. 1) It was generally observed that lead isotope ratios reflect the difference in culture to which the bronzes belong. 2) Mirrors of the Western Han period (206 B.C. - A.D. 8) are clearly distinguished by lead isotope ratios from those made after the mid-Eastern Han period (ca. A.D. 150 - 300). 3) Korean mirrors and weapons excavated from Yayoi sites contain lead of an easily recognizable Mississippi Valley type. 4) Bronze objects made in Japan (imitative Han-style mirrors, Dotaku, imitative weapons, arrowheads, etc.) in the Yayoi and Kofun periods are classified by lead isotope ratios in the following order: the Korean mirror type to the Western Han mirror type in the Yayoi period, and the mid- to post-Han mirror type in the Kofun period. 5) Indigenous Japanese lead seems to have been used after the mid-7th century. (J.P.N.)

  20. Knowledge Bases, Talents and Contexts: On the Usefulness of the Creative Class Approach in Sweden

    DEFF Research Database (Denmark)

    Asheim, Bjørn T.; Hansen, Høgni Kalsø

    2009-01-01

    The geography of the creative class and its impact on regional development has been debated for some years. While the ideas of Richard Florida have permeated local and regional planning strategies in most parts of the Western world, critiques have been numerous. Florida’s 3T’s (technology, talent....... Furthermore, the dominating knowledge base in a region has an influence on the importance of a people climate and a business climate for attracting and retaining talent. In this article, we present an empirical analysis in support of these arguments using original Swedish data....

  1. A selective review of knowledge-based approaches to database design

    Directory of Open Access Journals (Sweden)

    Shahrul Azman Noah

    1995-01-01

    Full Text Available The inclusion of real-world knowledge or specialised knowledge has not been addressed by the majority of the systems reviewed. ODA provides real-world knowledge by using a thesaurus-type structure to represent generic models. Only NITDT includes specialised knowledge in its knowledge base. NITDT classifies its knowledge into application-specific, domain-specific and general knowledge. However, the literature does not discuss in detail how this knowledge is applied during the design session. One of the key factors that distinguish computer-based expert systems from human experts is that the latter apply not only their specialised expertise to a problem but also their general knowledge of the world. NITDT is the only system reviewed here that holds any form of internal domain-specific knowledge, which can be easily augmented, enriched and updated as required. This knowledge allows the designer to be an active participant along with the user in the design process and significantly eases the user's task. The inclusion of real-world knowledge and specialised knowledge is an area that must be further addressed before intelligent tools are able to offer a realistic level of assistance to human designers.

  2. An approach to development of ontological knowledge base in the field of scientific and research activity in Russia

    Science.gov (United States)

    Murtazina, M. Sh; Avdeenko, T. V.

    2018-05-01

    The state of the art and the progress in the application of semantic technologies in the field of scientific and research activity have been analyzed. Even an elementary empirical comparison has shown that semantic search engines are superior in all respects to conventional search technologies. However, semantic information technologies are insufficiently used in the field of scientific and research activity in Russia. In the present paper, an approach to the construction of an ontological model of a knowledge base is proposed. The ontological model is based on an upper-level ontology and the RDF mechanism for linking several domain ontologies. The ontological model is implemented in the Protégé environment.

  3. A knowledge-based approach to estimating the magnitude and spatial patterns of potential threats to soil biodiversity.

    Science.gov (United States)

    Orgiazzi, Alberto; Panagos, Panos; Yigini, Yusuf; Dunbar, Martha B; Gardi, Ciro; Montanarella, Luca; Ballabio, Cristiano

    2016-03-01

    Because of the increasing pressures exerted on soil, below-ground life is under threat. Knowledge-based rankings of potential threats to different components of soil biodiversity were developed in order to assess the spatial distribution of threats on a European scale. A list of 13 potential threats to soil biodiversity was proposed to experts with different backgrounds in order to assess the potential for three major components of soil biodiversity: soil microorganisms, fauna, and biological functions. This approach allowed us to obtain knowledge-based rankings of threats. These classifications formed the basis for the development of indices through an additive aggregation model that, along with ad-hoc proxies for each pressure, allowed us to preliminarily assess the spatial patterns of potential threats. Intensive exploitation was identified as the highest pressure. In contrast, the use of genetically modified organisms in agriculture was considered as the threat with least potential. The potential impact of climate change showed the highest uncertainty. Fourteen out of the 27 considered countries have more than 40% of their soils with moderate-high to high potential risk for all three components of soil biodiversity. Arable soils are the most exposed to pressures. Soils within the boreal biogeographic region showed the lowest risk potential. The majority of soils at risk are outside the boundaries of protected areas. First maps of risks to three components of soil biodiversity based on the current scientific knowledge were developed. Despite the intrinsic limits of knowledge-based assessments, a remarkable potential risk to soil biodiversity was observed. Guidelines to preliminarily identify and circumscribe soils potentially at risk are provided. This approach may be used in future research to assess threat at both local and global scale and identify areas of possible risk and, subsequently, design appropriate strategies for monitoring and protection of soil
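
    The additive aggregation model mentioned above can be sketched minimally as a weighted sum of normalized proxy values per grid cell; the threat names, weights and values below are invented placeholders, not the paper's expert-derived rankings or proxies:

        threat_weights = {  # higher weight = higher-ranked potential threat
            "intensive_exploitation": 0.30,
            "soil_sealing": 0.25,
            "climate_change": 0.20,
            "invasive_species": 0.15,
            "gmo_use": 0.10,
        }

        def risk_index(proxies):
            """Weighted additive aggregation of normalized (0-1) proxy values."""
            return sum(threat_weights[t] * proxies.get(t, 0.0) for t in threat_weights)

        cell = {"intensive_exploitation": 0.8, "soil_sealing": 0.4, "climate_change": 0.5}
        print(round(risk_index(cell), 3))  # 0.44 for this grid cell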

  4. An integrated approach for visual analysis of a multisource moving objects knowledge base

    NARCIS (Netherlands)

    Willems, N.; van Hage, W.R.; de Vries, G.; Janssens, J.H.M.; Malaisé, V.

    2010-01-01

    We present an integrated and multidisciplinary approach for analyzing the behavior of moving objects. The results originate from an ongoing research of four different partners from the Dutch Poseidon project (Embedded Systems Institute (2007)), which aims to develop new methods for Maritime Safety

  5. An Integrated Approach for Visual Analysis of a Multi-Source Moving Objects Knowledge Base

    NARCIS (Netherlands)

    Willems, C.M.E.; van Hage, W.R.; de Vries, G.K.D.; Janssens, J.; Malaisé, V.

    2010-01-01

    We present an integrated and multidisciplinary approach for analyzing the behavior of moving objects. The results originate from an ongoing research of four different partners from the Dutch Poseidon project (Embedded Systems Institute (2007)), which aims to develop new methods for Maritime Safety

  6. An integrated approach for visual analysis of a multi-source moving objects knowledge base

    NARCIS (Netherlands)

    Willems, N.; Hage, van W.R.; Vries, de G.; Janssens, J.H.M.; Malaisé, V.

    2010-01-01

    We present an integrated and multidisciplinary approach for analyzing the behavior of moving objects. The results originate from an ongoing research of four different partners from the Dutch Poseidon project (Embedded Systems Institute (2007)), which aims to develop new methods for Maritime Safety

  7. SU-E-P-43: A Knowledge Based Approach to Guidelines for Software Safety

    International Nuclear Information System (INIS)

    Salomons, G; Kelly, D

    2015-01-01

    Purpose: In the fall of 2012, a survey was distributed to medical physicists across Canada. The survey asked the respondents to comment on various aspects of software development and use in their clinic. The survey revealed that most centers employ locally produced (in-house) software of some kind. The respondents also indicated an interest in having software guidelines, but cautioned that the realities of cancer clinics include variations that preclude a simple solution. Traditional guidelines typically involve periodically repeating a set of prescribed tests with defined tolerance limits. However, applying a similar formula to software is problematic since it assumes that the users have a perfect knowledge of how and when to apply the software and that if the software operates correctly under one set of conditions it will operate correctly under all conditions. Methods: In the approach presented here the personnel involved with the software are included as an integral part of the system. Activities performed to improve the safety of the software are done with both software and people in mind. A learning-oriented approach is taken, following the premise that the best approach to safety is increasing the understanding of those associated with the use or development of the software. Results: The software guidance document is organized by areas of knowledge related to the use and development of software. The categories include: knowledge of the underlying algorithm and its limitations; knowledge of the operation of the software, such as input values, parameters, error messages, and interpretation of output; and knowledge of the environment for the software, including both data and users. Conclusion: We propose a new approach to developing guidelines which is based on acquiring knowledge rather than performing tests. The ultimate goal is to provide robust software guidelines which will be practical and effective.

  8. SU-E-P-43: A Knowledge Based Approach to Guidelines for Software Safety

    Energy Technology Data Exchange (ETDEWEB)

    Salomons, G [Cancer Center of Southeastern Ontario & Queen’s University, Kingston, ON (Canada)]; Kelly, D [Royal Military College of Canada, Kingston, ON (Canada)]

    2015-06-15

    Purpose: In the fall of 2012, a survey was distributed to medical physicists across Canada. The survey asked the respondents to comment on various aspects of software development and use in their clinic. The survey revealed that most centers employ locally produced (in-house) software of some kind. The respondents also indicated an interest in having software guidelines, but cautioned that the realities of cancer clinics include variations that preclude a simple solution. Traditional guidelines typically involve periodically repeating a set of prescribed tests with defined tolerance limits. However, applying a similar formula to software is problematic since it assumes that the users have a perfect knowledge of how and when to apply the software and that if the software operates correctly under one set of conditions it will operate correctly under all conditions. Methods: In the approach presented here the personnel involved with the software are included as an integral part of the system. Activities performed to improve the safety of the software are done with both software and people in mind. A learning-oriented approach is taken, following the premise that the best approach to safety is increasing the understanding of those associated with the use or development of the software. Results: The software guidance document is organized by areas of knowledge related to use and development of software. The categories include: knowledge of the underlying algorithm and its limitations; knowledge of the operation of the software, such as input values, parameters, error messages, and interpretation of output; and knowledge of the environment for the software including both data and users. Conclusion: We propose a new approach to developing guidelines which is based on acquiring knowledge rather than performing tests. The ultimate goal is to provide robust software guidelines which will be practical and effective.

  9. Analytical Features: A Knowledge-Based Approach to Audio Feature Generation

    Directory of Open Access Journals (Sweden)

    Pachet François

    2009-01-01

    We present a feature generation system designed to create audio features for supervised classification tasks. The main contribution to feature generation studies is the notion of analytical features (AFs), a construct designed to support the representation of knowledge about audio signal processing. We describe the most important aspects of AFs, in particular their dimensional type system, on which pattern-based random generators, heuristics, and rewriting rules are based. We show how AFs generalize or improve previous approaches used in feature generation. We report on several projects using AFs for difficult audio classification tasks, demonstrating their advantage over standard audio features. More generally, we propose analytical features as a paradigm to bring raw signals into the world of symbolic computation.
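
    As a purely illustrative gloss on the idea of composing typed signal-processing operators into features, the sketch below builds random operator chains over a toy two-valued type system and evaluates them on a synthetic signal. The operator set, the type system and all names are assumptions made for illustration, not the AF dimensional type system described in the paper.

```python
"""Minimal sketch of analytical-feature-style generation over a toy operator set.

Operators are tagged with input/output types ('signal' or 'scalar'); random
composition only chains compatible operators, loosely mirroring the AF idea."""
import random
import numpy as np

# (name, function, input type, output type) -- illustrative operators only
OPERATORS = [
    ("abs_fft",  lambda x: np.abs(np.fft.rfft(x)), "signal", "signal"),
    ("diff",     lambda x: np.diff(x),             "signal", "signal"),
    ("square",   lambda x: x ** 2,                 "signal", "signal"),
    ("mean",     lambda x: float(np.mean(x)),      "signal", "scalar"),
    ("variance", lambda x: float(np.var(x)),       "signal", "scalar"),
    ("max",      lambda x: float(np.max(x)),       "signal", "scalar"),
]

def random_feature(max_depth=3):
    """Compose signal->signal operators, then close with a signal->scalar one."""
    chain = [op for op in OPERATORS if op[3] == "signal"]
    closers = [op for op in OPERATORS if op[3] == "scalar"]
    body = [random.choice(chain) for _ in range(random.randint(1, max_depth))]
    return body + [random.choice(closers)]

def evaluate(feature, signal):
    value = signal
    for _name, fn, _in, _out in feature:
        value = fn(value)
    return value

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signal = np.sin(np.linspace(0, 8 * np.pi, 1024)) + 0.1 * rng.standard_normal(1024)
    feat = random_feature()
    print(" -> ".join(name for name, *_ in feat), "=", evaluate(feat, signal))
```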

  10. An engineering approach to knowledge-based systems, the alarm processing and diagnostic system

    International Nuclear Information System (INIS)

    Mah, E.; Damon, L.

    1992-01-01

    The number of alarms that may be initiated during transients or accidents in nuclear-generating control rooms may temporarily exceed an operator's ability to assimilate and respond. This phenomenon is characterized as Cognitive Overload. The Alarm Processing and Diagnostic System (APDS) was designed to deal with this problem through a unique and operationally sensitive method of alarm prioritization and filtration. The approach taken attempts to parallel the operator's situation assessment methodology when dealing with transient conditions. A strong criterion for the development methodology employed was its ultimate acceptance by parties engaged in the operation of nuclear power facilities. As such, the methodology used had to be easily understood and consistent with the acceptance standards of nuclear power. This necessitated the verifiable practices found in engineering design. While APDS remains rooted in artificial intelligence or expert systems, it goes beyond the paradigm of rules and inferencing to an object-oriented structure that allows traditional and well-documented engineering-based decision methods to be applied. These features have important consequences when considering final acceptance, implementation, and maintenance. 3 refs., 1 tab

  11. The knowledge base of journalism

    DEFF Research Database (Denmark)

    Svith, Flemming

    In this paper I propose the knowledge base as a fruitful way to apprehend journalism. With the claim that the majority of practice is anchored in knowledge – understood as 9 categories of rationales, forms and levels – this knowledge base appears as a contextual look at journalists’ knowledge......, and place. As an analytical framework, the knowledge base is limited to understanding the practice of newspaper journalists, but, conversely, the knowledge base encompasses more general beginnings through the inclusion of overall structural relationships in the media and journalism and general theories...... on practice and knowledge. As the result of abductive reasoning is a theory proposal, there is a need for more deductive approaches to test the validity of this knowledge base claim. It is thus relevant to investigate which rationales are included in the knowledge base of journalism, as the dimension does...

  12. A knowledge-based approach to improving and homogenizing intensity modulated radiation therapy planning quality among treatment centers: an example application to prostate cancer planning.

    Science.gov (United States)

    Good, David; Lo, Joseph; Lee, W Robert; Wu, Q Jackie; Yin, Fang-Fang; Das, Shiva K

    2013-09-01

    Intensity modulated radiation therapy (IMRT) treatment planning can have wide variation among different treatment centers. We propose a system to leverage the IMRT planning experience of larger institutions to automatically create high-quality plans for outside clinics. We explore feasibility by generating plans for patient datasets from an outside institution by adapting plans from our institution. A knowledge database was created from 132 IMRT treatment plans for prostate cancer at our institution. The outside institution, a community hospital, provided the datasets for 55 prostate cancer cases, including their original treatment plans. For each "query" case from the outside institution, a similar "match" case was identified in the knowledge database, and the match case's plan parameters were then adapted and optimized to the query case by use of a semiautomated approach that required no expert planning knowledge. The plans generated with this knowledge-based approach were compared with the original treatment plans at several dose cutpoints. Compared with the original plan, the knowledge-based plan had a significantly more homogeneous dose to the planning target volume and a significantly lower maximum dose. The volumes of the rectum, bladder, and femoral heads above all cutpoints were nominally lower for the knowledge-based plan; the reductions were significantly lower for the rectum. In 40% of cases, the knowledge-based plan had overall superior (lower) dose-volume histograms for rectum and bladder; in 54% of cases, the comparison was equivocal; in 6% of cases, the knowledge-based plan was inferior for both bladder and rectum. Knowledge-based planning was superior or equivalent to the original plan in 95% of cases. The knowledge-based approach shows promise for homogenizing plan quality by transferring planning expertise from more experienced to less experienced institutions. Copyright © 2013 Elsevier Inc. All rights reserved.
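
    To make the adaptation workflow concrete, the sketch below shows only the case-matching step: a query case's geometric features are compared against a small knowledge base and the closest case's plan parameters are returned for adaptation. The feature names, the plain Euclidean matching rule and the parameter values are hypothetical and are not the matching criteria used in the study.

```python
"""Illustrative sketch of the case-matching step in knowledge-based planning."""
import numpy as np

# Each knowledge-base case: geometric features plus the plan parameters to adapt.
knowledge_base = [
    {"features": np.array([85.0, 0.22, 0.15]),   # e.g. PTV volume (cc), rectum/bladder overlap fractions
     "plan_params": {"beam_angles": [0, 50, 100, 155, 205, 260, 310], "rectum_weight": 0.8}},
    {"features": np.array([120.0, 0.30, 0.18]),
     "plan_params": {"beam_angles": [0, 52, 104, 156, 208, 260, 312], "rectum_weight": 1.0}},
]

def find_match(query_features, cases):
    """Return the case whose (crudely normalized) feature vector is closest to the query."""
    stack = np.vstack([c["features"] for c in cases])
    scale = stack.std(axis=0) + 1e-9              # per-feature normalization
    dists = np.linalg.norm((stack - query_features) / scale, axis=1)
    return cases[int(np.argmin(dists))]

query = np.array([100.0, 0.25, 0.16])
match = find_match(query, knowledge_base)
print("adapting plan parameters from matched case:", match["plan_params"])
```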

  13. Opening Up Climate Research: A Linked Data Approach to Publishing Data Provenance

    Directory of Open Access Journals (Sweden)

    Arif Shaon

    2012-03-01

    Traditionally, the formal scientific output in most fields of natural science has been limited to peer-reviewed academic journal publications, with less attention paid to the chain of intermediate data results and their associated metadata, including provenance. In effect, this has constrained the representation and verification of the data provenance to the confines of the related publications. Detailed knowledge of a dataset’s provenance is essential to establish the pedigree of the data for its effective re-use, and to avoid redundant re-enactment of the experiment or computation involved. It is increasingly important to determine the authenticity and quality of open-access data, especially considering the growing volumes of datasets appearing in the public domain. To address these issues, we present an approach that combines the Digital Object Identifier (DOI) – a widely adopted citation technique – with existing, widely adopted climate science data standards to formally publish detailed provenance of a climate research dataset as an associated scientific workflow. This is integrated with linked-data compliant data re-use standards (e.g. OAI-ORE) to enable a seamless link between a publication and the complete trail of lineage of the corresponding dataset, including the dataset itself.
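
    As a rough illustration of what such a linked-data description might look like, the sketch below uses rdflib to tie a DOI-identified dataset into an OAI-ORE style aggregation alongside a workflow resource. The DOI, the example.org URIs and the choice of dcterms:provenance as the linking property are placeholders rather than the paper's actual vocabulary choices.

```python
"""Minimal sketch of linking a DOI-identified dataset to its provenance trail
with an OAI-ORE style aggregation, using rdflib. All identifiers are placeholders."""
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

ORE = Namespace("http://www.openarchives.org/ore/terms/")
DCTERMS = Namespace("http://purl.org/dc/terms/")

g = Graph()
g.bind("ore", ORE)
g.bind("dcterms", DCTERMS)

dataset = URIRef("https://doi.org/10.xxxx/placeholder-dataset")   # hypothetical DOI
aggregation = URIRef("http://example.org/aggregations/run-42")
workflow = URIRef("http://example.org/provenance/workflow-42")

g.add((aggregation, RDF.type, ORE.Aggregation))
g.add((aggregation, ORE.aggregates, dataset))
g.add((aggregation, ORE.aggregates, workflow))
g.add((dataset, DCTERMS.provenance, workflow))
g.add((workflow, DCTERMS.description,
       Literal("Scientific workflow that produced the dataset, including model version and inputs")))

print(g.serialize(format="turtle"))
```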

  14. Foundation: Transforming data bases into knowledge bases

    Science.gov (United States)

    Purves, R. B.; Carnes, James R.; Cutts, Dannie E.

    1987-01-01

    One approach to transforming information stored in relational data bases into knowledge based representations and back again is described. This system, called Foundation, allows knowledge bases to take advantage of vast amounts of pre-existing data. A benefit of this approach is inspection, and even population, of data bases through an intelligent knowledge-based front-end.
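
    A toy sketch of the database-to-knowledge-base direction is given below: rows of a relational table are lifted into attribute assertions about the entity named in the key column. The table, column names and sqlite usage are invented for illustration; Foundation's actual mapping between relational and knowledge-based representations is considerably richer.

```python
"""Toy sketch: relational rows lifted into (subject, predicate, object) assertions."""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pumps (id TEXT, location TEXT, status TEXT)")
conn.executemany("INSERT INTO pumps VALUES (?, ?, ?)",
                 [("P-101", "loop-A", "running"), ("P-102", "loop-B", "standby")])

def table_to_facts(connection, table, key_column):
    """Turn each row into triples keyed on the row's identifier column."""
    cursor = connection.execute(f"SELECT * FROM {table}")
    columns = [d[0] for d in cursor.description]
    facts = []
    for row in cursor:
        entity = dict(zip(columns, row))[key_column]
        for column, value in zip(columns, row):
            if column != key_column:
                facts.append((entity, column, value))
    return facts

for fact in table_to_facts(conn, "pumps", "id"):
    print(fact)   # e.g. ('P-101', 'location', 'loop-A')
```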

  15. A user-orientated approach to provenance capture and representation for in silico experiments, explored within the atmospheric chemistry community.

    Science.gov (United States)

    Martin, Chris J; Haji, Mohammed H; Jimack, Peter K; Pilling, Michael J; Dew, Peter M

    2009-07-13

    We present a novel user-orientated approach to provenance capture and representation for in silico experiments, contrasted against the more systems-orientated approaches that have been typical within the e-Science domain. In our approach, we seek to capture the scientist's reasoning in the form of annotations as an experiment evolves, while using the scientist's terminology in the representation of process provenance. Our user-orientated approach is applied in a case study within the atmospheric chemistry domain: we consider the design, development and evaluation of an electronic laboratory notebook, a provenance capture and storage tool, for iterative model development.

  16. Dynamic Data Citation through Provenance - new approach for reproducible science in Geoscience Australia.

    Science.gov (United States)

    Bastrakova, I.; Car, N.

    2017-12-01

    Geoscience Australia (GA) is recognised and respected as the National Repository and steward of multiple nationally significant data collections that provide geoscience information, services and capability to the Australian Government, industry and stakeholders. Internally, this brings the challenge of managing a large volume (11 PB) of diverse and highly complex data distributed through a significant number of catalogues, applications, portals, virtual laboratories, and direct downloads from multiple locations. Externally, GA is facing constant change in government regulations (e.g. open data and archival laws), growing stakeholder demands for high quality and near real-time delivery of data and products, and rapid technological advances enabling dynamic data access. The traditional approach to citing static data and products cannot satisfy increasing demands for the results from scientific workflows, or items within the workflows, to be open, discoverable, trusted and reproducible. Thus, citation of data, products, code and applications through provenance records is being implemented. This approach involves capturing the provenance of many GA processes according to a standardised data model and storing it, as well as metadata for the elements it references, in a searchable set of systems. This provides GA with the ability to cite workflows unambiguously as well as each item within each workflow, including inputs and outputs and many other registered components. Dynamic objects can therefore be referenced flexibly in relation to their generation process - a dataset's metadata indicates where to obtain its provenance from - meaning the relevant facts of its dynamism need not be crammed into a single citation object with a single set of attributes. This allows for simple citations, similar to traditional static document citations such as references in journals, to be used for complex dynamic data and other objects such as software code.

  17. Retrospective analysis of 104 histologically proven adult brainstem gliomas: clinical symptoms, therapeutic approaches and prognostic factors

    International Nuclear Information System (INIS)

    Reithmeier, Thomas; Kuzeawu, Aanyo; Hentschel, Bettina; Loeffler, Markus; Trippel, Michael; Nikkhah, Guido

    2014-01-01

    Adult brainstem gliomas are rare primary brain tumors (<2% of gliomas). The goal of this study was to analyze clinical, prognostic and therapeutic factors in a large series of histologically proven brainstem gliomas. Between 1997 and 2007, 104 patients with a histologically proven brainstem glioma were retrospectively analyzed. Data about clinical course of disease, neuropathological findings and therapeutic approaches were analyzed. The median age at diagnosis was 41 years (range 18-89 years), median KPS before any operative procedure was 80 (range 20-100) and median survival for the whole cohort was 18.8 months. Histopathological examinations revealed 16 grade I, 31 grade II, 42 grade III and 14 grade IV gliomas. Grading was not possible in 1 patient. Therapeutic concepts differed according to the histopathology of the disease. Median overall survival for grade II tumors was 26.4 months, for grade III tumors 12.9 months and for grade IV tumors 9.8 months. On multivariate analysis the relative risk of death increased by a factor of 6.7 with a KPS ≤ 70, by a factor of 1.8 with grade III/IV gliomas and by a factor of 1.7 with age ≥ 40. External beam radiation reduced the risk of death by a factor of 0.4. Adult brainstem gliomas present with a wide variety of neurological symptoms and postoperative radiation remains the cornerstone of therapy with no proven benefit of adding chemotherapy. Low KPS, age ≥ 40 and higher tumor grade have a negative impact on overall survival

  18. An Architecture for Performance Optimization in a Collaborative Knowledge-Based Approach for  Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Juan Ramon Velasco

    2011-09-01

    Over the past few years, Intelligent Spaces (ISs) have received the attention of many Wireless Sensor Network researchers. Recently, several studies have been devoted to identifying their common capacities and to setting up ISs over these networks. However, little attention has been paid to integrating Fuzzy Rule-Based Systems into collaborative Wireless Sensor Networks for the purpose of implementing ISs. This work presents a distributed architecture proposal for collaborative Fuzzy Rule-Based Systems embedded in Wireless Sensor Networks, which has been designed to optimize the implementation of ISs. This architecture includes the following: (a) an optimized design for the inference engine; (b) a visual interface; (c) a module to reduce the redundancy and complexity of the knowledge bases; (d) a module to evaluate the accuracy of the new knowledge base; (e) a module to adapt the format of the rules to the structure used by the inference engine; and (f) a communications protocol. As a real-world application of this architecture and the proposed methodologies, we show an application to the problem of modeling two plagues of the olive tree: prays (olive moth, Prays oleae Bern.) and repilo (caused by the fungus Spilocaea oleagina). The results show that the architecture presented in this paper significantly decreases the consumption of resources (memory, CPU and battery) without a substantial decrease in the accuracy of the inferred values.
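
    A very small fuzzy-rule sketch in that spirit follows: two zero-order Sugeno-style rules over triangular membership functions produce a plague-risk estimate by weighted-average defuzzification. The membership breakpoints, rules and output values are invented for the olive-moth example and carry no agronomic or architectural authority.

```python
"""Tiny fuzzy-rule sketch, loosely in the spirit of an embedded inference engine."""

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def plague_risk(temperature_c, humidity_pct):
    # Rule 1: IF temperature is mild AND humidity is high THEN risk is high (0.9)
    # Rule 2: IF temperature is hot  AND humidity is low  THEN risk is low  (0.2)
    w1 = min(tri(temperature_c, 15, 22, 29), tri(humidity_pct, 60, 80, 100))
    w2 = min(tri(temperature_c, 27, 35, 43), tri(humidity_pct, 0, 20, 45))
    if w1 + w2 == 0:
        return 0.5                                  # no rule fires: neutral fallback
    return (w1 * 0.9 + w2 * 0.2) / (w1 + w2)        # weighted-average defuzzification

print(f"risk = {plague_risk(23.0, 75.0):.2f}")
```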

  19. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    OpenAIRE

    Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S.; Xian, Xuefeng; Wu, Jian; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality i...

  20. Monitoring Knowledge Base (MKB)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Monitoring Knowledge Base (MKB) is a compilation of emissions measurement and monitoring techniques associated with air pollution control devices, industrial...

  1. Leveraging on Information Technology to Teach Construction Law to Built Environment Students: A Knowledge-Based System (KBS) Approach

    Directory of Open Access Journals (Sweden)

    Faisal Manzoor Arain

    2009-11-01

    Construction law is a vital component of the body of knowledge that is needed by construction professionals in order to successfully operate in the commercial world of construction. Construction law plays an important role in shaping building projects. Construction projects are complex because they involve many human and non-human factors and variables. Teaching construction law is therefore a complex issue with several dimensions. In recent years, Information Technology (IT) has become strongly established as a supporting tool for many professions, including teachers. If faculty members have a knowledge base established on similar past projects, it would assist them in presenting case studies and contractually based scenarios to students. This paper proposes potential utilisation of a Knowledge-based System (KBS) for teaching construction law to built environment students. The KBS is primarily designed for building professionals to learn from similar past projects. The KBS is able to assist professionals by providing accurate and timely information for decision making and a user-friendly tool for analysing and selecting the suggested controls for variations in educational buildings. It is recommended that the wealth of knowledge available in the KBS can be very helpful in teaching construction law to built environment students. The system presents real case studies and scenarios to students to allow them to analyse and learn construction law. The KBS could be useful to students as a general research tool because the students could populate it with their own data and use it with the reported educational projects. With further generic modifications, the KBS will also be useful for built environment students to learn about project management of building projects; thus, it will raise the overall level of professional understanding, and eventually productivity, in the construction industry.

  2. Structural variation of alpha-synuclein with temperature by a coarse-grained approach with knowledge-based interactions

    Directory of Open Access Journals (Sweden)

    Peter Mirau

    2015-09-01

    Despite enormous efforts, our understanding of the structure and dynamics of α-synuclein (ASN), a disordered protein that plays a key role in neurodegenerative disease, is far from complete. In order to better understand sequence-structure-property relationships in α-synuclein, we have developed a coarse-grained model using knowledge-based residue-residue interactions and used it to study the structure of free ASN as a function of temperature (T) with a large-scale Monte Carlo simulation. Snapshots of the simulation and contour contact maps show changes in structure formation due to self-assembly as a function of temperature. Variations in the residue mobility profiles reveal a clear distinction among three segments along the protein sequence. The N-terminal (1-60) and C-terminal (96-140) regions contain the least mobile residues, which are separated by the higher-mobility non-amyloid component (NAC) (61-95). Our analysis of the intra-protein contact profile shows a higher frequency of residue aggregation (clumping) in the N-terminal region relative to that in the C-terminal region, with little or no aggregation in the NAC region. The radius of gyration (Rg) of ASN decays monotonically with decreasing temperature, consistent with the finding of Allison et al. (JACS, 2009). Our analysis of the structure function provides insight into the mass (N) distribution of ASN and the dimensionality (D) of the structure as a function of temperature. We find a globular structure with D ≈ 3 at low T, a random coil with D ≈ 2 at high T, and intermediate values (2 ≤ D ≤ 3) at intermediate temperatures. The magnitudes of D are in agreement with experimental estimates (J. Biological Chem 2002).
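
    To give a flavour of how such a knowledge-based coarse-grained simulation is typically organised, the sketch below runs a bare-bones Metropolis Monte Carlo on a short bead chain with a toy residue-pair contact energy and reports the radius of gyration at a few temperatures. The two-letter alphabet, the interaction values, the off-lattice move set and every parameter are placeholders; they are not the knowledge-based matrix or the bond-fluctuation model used in the study.

```python
"""Bare-bones Metropolis Monte Carlo of a coarse-grained chain with contact energies."""
import numpy as np

rng = np.random.default_rng(1)
sequence = "HPHPPHHPHH"                                             # toy 2-letter sequence
EPS = {("H", "H"): -1.0, ("H", "P"): -0.2, ("P", "P"): 0.0}         # toy pair energies

def pair_energy(a, b):
    return EPS.get((a, b), EPS.get((b, a), 0.0))

def energy(coords):
    e = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 2, n):                                   # skip bonded neighbours
            if np.linalg.norm(coords[i] - coords[j]) < 1.5:         # contact cutoff
                e += pair_energy(sequence[i], sequence[j])
    for i in range(n - 1):                                          # stiff harmonic bonds
        e += 10.0 * (np.linalg.norm(coords[i + 1] - coords[i]) - 1.0) ** 2
    return e

def metropolis(steps=5000, T=0.5):
    coords = np.cumsum(rng.normal(scale=0.1, size=(len(sequence), 3)), axis=0) \
             + np.outer(np.arange(len(sequence)), [1.0, 0.0, 0.0])  # near-straight start
    e = energy(coords)
    for _ in range(steps):
        i = rng.integers(len(sequence))
        trial = coords.copy()
        trial[i] += rng.normal(scale=0.2, size=3)                   # single-bead move
        e_trial = energy(trial)
        if e_trial <= e or rng.random() < np.exp(-(e_trial - e) / T):
            coords, e = trial, e_trial                              # Metropolis acceptance
    rg = np.sqrt(((coords - coords.mean(axis=0)) ** 2).sum(axis=1).mean())
    return e, rg

for T in (0.3, 0.7, 1.5):
    e, rg = metropolis(T=T)
    print(f"T*={T}: energy={e:.2f}, radius of gyration={rg:.2f}")
```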

  3. Proven approaches to organise a large decommissioning project, including the management of local stakeholder interests

    International Nuclear Information System (INIS)

    Rodriguez, A.

    2005-01-01

    Full text: Spanish experience holds a relatively important position in the field of the decommissioning of nuclear and radioactive facilities. Decommissioning projects of uranium concentrate mill facilities are near completion; some old uranium mine sites have already been restored; several projects for the dismantling of various small research nuclear reactors and a few pilot plants are at various phases of the dismantling process, with some already completed. The most notable Spanish project in this field is undoubtedly the decommissioning of the Vandellos 1 nuclear power plant that is currently ready to enter a safe enclosure, or dormancy, period. The management of radioactive wastes in Spain is undertaken by 'Empresa Nacional de Residuos Radioactivos, S.A.' (ENRESA), the Spanish national radioactive waste company, constituted in 1984. ENRESA operates as a management company, whose role is to develop radioactive waste management programmes in accordance with the policy and strategy approved by the Spanish Government. Its responsibilities include the decommissioning and dismantling of nuclear installations. Decommissioning and dismantling nuclear installations is an increasingly important topic for governments, regulators, industries and civil society. There are many aspects that have to be carefully considered, planned and organised, in many cases well in advance of when they really need to be implemented. The goal of this paper is to describe proven approaches relevant to organizing and managing large decommissioning projects, in particular in the case of Vandellos-1 NPP decommissioning. (author)

  4. Knowledge base mechanisms

    Energy Technology Data Exchange (ETDEWEB)

    Suwa, M; Furukawa, K; Makinouchi, A; Mizoguchi, T; Mizoguchi, F; Yamasaki, H

    1982-01-01

    One of the principal goals of the Fifth Generation Computer System Project for the coming decade is to develop a methodology for building knowledge information processing systems which will provide people with intelligent agents. The key notion of the fifth generation computer system is knowledge used for problem solving. In this paper the authors describe the plan of R&D on knowledge base mechanisms. A knowledge representation system is to be designed to support knowledge acquisition for the knowledge information processing systems. The system will include a knowledge representation language, a knowledge base editor and a debugger. It is also expected to perform as a kind of meta-inference system. In order to develop large scale knowledge base systems, a knowledge base mechanism based on the relational model is to be studied in the earlier stage of the project. Distributed problem solving is also one of the main issues of the project. 19 references.

  5. Improving economics and safety of water cooled reactors. Proven means and new approaches

    International Nuclear Information System (INIS)

    2002-05-01

    Nuclear power plants (NPPs) with water cooled reactors [either light water reactors (LWRs) or heavy water reactors (HWRs)] constitute the large majority of the currently operating plants. Water cooled reactors can make a significant contribution to meeting future energy needs, to reducing greenhouse gas emissions, and to energy security if they can compete economically with fossil alternatives, while continuing to achieve a very high level of safety. It is generally agreed that the largest commercial barrier to the addition of new nuclear power capacity is the high capital cost of nuclear plants relative to other electricity generating alternatives. If nuclear plants are to form part of the future generating mix in competitive electricity markets, capital cost reduction through simplified designs must be an important focus. Reductions in operating, maintenance and fuel costs should also be pursued. The Department of Nuclear Energy of the IAEA is examining the competitiveness of nuclear power and the means for improving its economics. The objective of this TECDOC is to emphasize the need, and to identify approaches, for new nuclear plants with water cooled reactors to achieve competitiveness while maintaining high levels of safety. The cost reduction methods discussed herein can be implemented into plant designs that are currently under development as well as into designs that may be developed in the longer term. Many of the approaches discussed also generally apply to other reactor types (e.g. gas cooled and liquid metal cooled reactors). To achieve the largest possible cost reductions, proven means for reducing costs must be fully implemented, and new approaches described in this document should be developed and implemented. These new approaches include development of advanced technologies, increased use of risk-informed methods for evaluating the safety benefit of design features, and international consensus regarding commonly acceptable safety requirements that

  6. Improving economics and safety of water cooled reactors. Proven means and new approaches

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2002-05-01

    Nuclear power plants (NPPs) with water cooled reactors [either light water reactors (LWRs) or heavy water reactors (HWRs)] constitute the large majority of the currently operating plants. Water cooled reactors can make a significant contribution to meeting future energy needs, to reducing greenhouse gas emissions, and to energy security if they can compete economically with fossil alternatives, while continuing to achieve a very high level of safety. It is generally agreed that the largest commercial barrier to the addition of new nuclear power capacity is the high capital cost of nuclear plants relative to other electricity generating alternatives. If nuclear plants are to form part of the future generating mix in competitive electricity markets, capital cost reduction through simplified designs must be an important focus. Reductions in operating, maintenance and fuel costs should also be pursued. The Department of Nuclear Energy of the IAEA is examining the competitiveness of nuclear power and the means for improving its economics. The objective of this TECDOC is to emphasize the need, and to identify approaches, for new nuclear plants with water cooled reactors to achieve competitiveness while maintaining high levels of safety. The cost reduction methods discussed herein can be implemented into plant designs that are currently under development as well as into designs that may be developed in the longer term. Many of the approaches discussed also generally apply to other reactor types (e.g. gas cooled and liquid metal cooled reactors). To achieve the largest possible cost reductions, proven means for reducing costs must be fully implemented, and new approaches described in this document should be developed and implemented. These new approaches include development of advanced technologies, increased use of risk-informed methods for evaluating the safety benefit of design features, and international consensus regarding commonly acceptable safety requirements that

  7. Fuzzy knowledge bases integration based on ontology

    OpenAIRE

    Ternovoy, Maksym; Shtogrina, Olena

    2012-01-01

    The paper describes an approach to fuzzy knowledge base integration using ontology. The approach is based on the use of a metadata base to integrate different knowledge bases with a common ontology. The design process of the metadata base is described.

  8. Integrated Approach for a Knowledge-Based Process Layout for Simultaneous 5-Axis Milling of Advanced Materials

    Directory of Open Access Journals (Sweden)

    F. Klocke

    2011-01-01

    Advanced materials, like nickel-based alloys, gain importance in turbomachinery manufacturing, where creating complex surfaces constitutes a major challenge. However, milling strategies that provide high material removal rates at acceptable tooling costs demand optimized tool geometry and process parameter selection. In this paper, a description of circular milling is given, focusing on the resulting engagement conditions. To this end, a test bench was designed to investigate the chip formation process in an analogy milling process. Furthermore, the methodology for the approach in the analogy process was developed. Results of a first test run in Inconel 718 verify the presented approach.

  9. Validation Of Critical Knowledge-Based Systems

    Science.gov (United States)

    Duke, Eugene L.

    1992-01-01

    Report discusses approach to verification and validation of knowledge-based systems. Also known as "expert systems". Concerned mainly with development of methodologies for verification of knowledge-based systems critical to flight-research systems; e.g., fault-tolerant control systems for advanced aircraft. Subject matter also has relevance to knowledge-based systems controlling medical life-support equipment or commuter railroad systems.

  10. Knowledge-based approach for functional MRI analysis by SOM neural network using prior labels from Talairach stereotaxic space

    Science.gov (United States)

    Erberich, Stephan G.; Willmes, Klaus; Thron, Armin; Oberschelp, Walter; Huang, H. K.

    2002-04-01

    Among the methods proposed for the analysis of functional MRI we have previously introduced a model-independent analysis based on the self-organizing map (SOM) neural network technique. The SOM neural network can be trained to identify the temporal patterns in voxel time-series of individual functional MRI (fMRI) experiments. The separated classes consist of activation, deactivation and baseline patterns corresponding to the task paradigm. Because the classification capability of the SOM is based not only on the distinctness of the patterns themselves but also on their frequency of occurrence in the training set, a weighting or selection of voxels of interest should be considered prior to training of the neural network to improve pattern learning. Weighting of interesting voxels by means of autocorrelation or F-test significance levels has been used successfully, but still a large number of baseline voxels is included in the training. The purpose of this approach is to avoid the inclusion of these voxels by using three different levels of segmentation and mapping from Talairach space: (1) voxel partitions at the lobe level, (2) voxel partitions at the gyrus level and (3) voxel partitions at the cell level (Brodmann areas). The results of the SOM classification based on these mapping levels in comparison to training with all brain voxels are presented in this paper.
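
    For intuition about the underlying SOM step, the sketch below trains a tiny one-dimensional SOM on synthetic box-car voxel time-series and reports how many voxels each unit attracts and how its prototype correlates with the paradigm. The synthetic data, map size and training constants are invented; this is not the cited analysis pipeline and it ignores the Talairach-based voxel selection discussed above.

```python
"""Compact 1-D SOM sketch for clustering voxel time-series into temporal patterns."""
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "fMRI" data: 300 voxel time-series of length 40 following a box-car paradigm.
paradigm = np.tile(np.r_[np.zeros(10), np.ones(10)], 2)             # off/on blocks
voxels = np.vstack([
    paradigm + 0.3 * rng.standard_normal((100, 40)),                 # activation-like
    -paradigm + 0.3 * rng.standard_normal((100, 40)),                # deactivation-like
    0.3 * rng.standard_normal((100, 40)),                            # baseline-like
])

def train_som(data, n_units=6, epochs=30, lr0=0.5, sigma0=2.0):
    units = data[rng.choice(len(data), n_units, replace=False)].copy()
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        sigma = max(sigma0 * (1 - epoch / epochs), 0.5)
        for x in data[rng.permutation(len(data))]:
            winner = np.argmin(np.linalg.norm(units - x, axis=1))
            # Gaussian neighbourhood along the 1-D map pulls nearby units toward x
            h = np.exp(-((np.arange(n_units) - winner) ** 2) / (2 * sigma ** 2))
            units += lr * h[:, None] * (x - units)
    return units

units = train_som(voxels)
labels = np.argmin(np.linalg.norm(voxels[:, None, :] - units[None], axis=2), axis=1)
for u in range(len(units)):
    corr = np.corrcoef(units[u], paradigm)[0, 1]
    print(f"unit {u}: {np.sum(labels == u):3d} voxels, correlation with paradigm {corr:+.2f}")
```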

  11. A Knowledge-Based Approach to Automatic Detection of Equipment Alarm Sounds in a Neonatal Intensive Care Unit Environment.

    Science.gov (United States)

    Raboshchuk, Ganna; Nadeu, Climent; Jancovic, Peter; Lilja, Alex Peiro; Kokuer, Munevver; Munoz Mahamud, Blanca; Riverola De Veciana, Ana

    2018-01-01

    A large number of alarm sounds triggered by biomedical equipment occur frequently in the noisy environment of a neonatal intensive care unit (NICU) and play a key role in providing healthcare. In this paper, our work on the development of an automatic system for detection of acoustic alarms in that difficult environment is presented. Such an automatic detection system is needed for the investigation of how a preterm infant reacts to auditory stimuli of the NICU environment and for improved real-time patient monitoring. The approach presented in this paper consists of using the available knowledge about each alarm class in the design of the detection system. The information about the frequency structure is used in the feature extraction stage, and the time structure knowledge is incorporated at the post-processing stage. Several alternative methods are compared for feature extraction, modeling, and post-processing. The detection performance is evaluated with real data recorded in the NICU of the hospital, and by using both frame-level and period-level metrics. The experimental results show that the inclusion of both spectral and temporal information allows the baseline detection performance to be improved by more than 60%.
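
    The sketch below illustrates, on a synthetic beep train, the general pattern of combining spectral knowledge (energy near a known alarm frequency) with temporal knowledge (a known repetition period) in post-processing. The alarm frequency, beep period, thresholds and frame settings are invented and are not the device characteristics or methods evaluated in the paper.

```python
"""Frequency-plus-timing alarm detection sketch on a synthetic beep train."""
import numpy as np

fs = 8000
t = np.arange(0, 10, 1 / fs)
# Synthetic recording: background noise plus a 960 Hz beep repeating every 2 s.
signal = 0.05 * np.random.default_rng(0).standard_normal(t.size)
for start in np.arange(0.5, 10, 2.0):
    idx = (t >= start) & (t < start + 0.2)
    signal[idx] += np.sin(2 * np.pi * 960 * t[idx])

frame, hop = 512, 256
alarm_bin = int(round(960 * frame / fs))

# Frame-level detection: energy in the known alarm band relative to total energy.
frame_hits = []
for s in range(0, signal.size - frame, hop):
    spectrum = np.abs(np.fft.rfft(signal[s:s + frame] * np.hanning(frame)))
    frame_hits.append(spectrum[alarm_bin - 1:alarm_bin + 2].sum() / (spectrum.sum() + 1e-12) > 0.25)
frame_hits = np.array(frame_hits)

# Period-level post-processing: keep detections whose onsets repeat near the known 2 s period.
onsets = np.flatnonzero(np.diff(frame_hits.astype(int)) == 1) * hop / fs
gaps = np.diff(onsets)
print("onsets (s):", np.round(onsets, 2), "| periodic alarm:", bool(np.all(np.abs(gaps - 2.0) < 0.2)))
```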

  12. Biomechanical differences in the stem straightening process among Pinus pinaster provenances. A new approach for early selection of stem straightness.

    Science.gov (United States)

    Sierra-de-Grado, Rosario; Pando, Valentín; Martínez-Zurimendi, Pablo; Peñalvo, Alejandro; Báscones, Esther; Moulia, Bruno

    2008-06-01

    Stem straightness is an important selection trait in Pinus pinaster Ait. breeding programs. Despite the stability of stem straightness rankings in provenance trials, the efficiency of breeding programs based on a quantitative index of stem straightness remains low. An alternative approach is to analyze biomechanical processes that underlie stem form. The rationale for this selection method is that genetic differences in the biomechanical processes that maintain stem straightness in young plants will continue to control stem form throughout the life of the tree. We analyzed the components contributing most to genetic differences among provenances in stem straightening processes by kinetic analysis and with a biomechanical model defining the interactions between the variables involved (Fournier's model). This framework was tested on three P. pinaster provenances differing in adult stem straightness and growth. One-year-old plants were tilted at 45 degrees, and individual stem positions and sizes were recorded weekly for 5 months. We measured the radial extension of reaction wood and the anatomical features of wood cells in serial stem cross sections. The integral effect of reaction wood on stem leaning was computed with Fournier's model. Responses driven by both primary and secondary growth were involved in the stem straightening process, but secondary-growth-driven responses accounted for most differences among provenances. Plants from the straight-stemmed provenance showed a greater capacity for stem straightening than plants from the sinuous provenances mainly because of (1) more efficient reaction wood (higher maturation strains) and (2) more pronounced secondary-growth-driven autotropic decurving. These two process-based traits are thus good candidates for early selection of stem straightness, but additional tests on a greater number of genotypes over a longer period are required.

  13. Knowledge based maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Sturm, A [Hamburgische Electacitaets-Werke AG Hamburg (Germany)

    1998-12-31

    The establishment of maintenance strategies is of crucial significance for the reliability of a plant and the economic efficiency of maintenance measures. Knowledge about the condition of components and plants from the technical and business management point of view therefore becomes one of the fundamental questions and the key to efficient management and maintenance. A new way to determine the maintenance strategy can be called Knowledge Based Maintenance. A simple method for determining strategies, which takes the technical condition of the components of the production process into account to the greatest possible degree, can be shown. Software with an algorithm for Knowledge Based Maintenance guides the user through the complex work of determining maintenance strategies for complex plant components. (orig.)

  14. Knowledge based maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Sturm, A. [Hamburgische Electacitaets-Werke AG Hamburg (Germany)

    1997-12-31

    The establishment of maintenance strategies is of crucial significance for the reliability of a plant and the economic efficiency of maintenance measures. Knowledge about the condition of components and plants from the technical and business management point of view therefore becomes one of the fundamental questions and the key to efficient management and maintenance. A new way to determine the maintenance strategy can be called Knowledge Based Maintenance. A simple method for determining strategies, which takes the technical condition of the components of the production process into account to the greatest possible degree, can be shown. Software with an algorithm for Knowledge Based Maintenance guides the user through the complex work of determining maintenance strategies for complex plant components. (orig.)

  15. Knowledge Based Economy Assessment

    OpenAIRE

    Madalina Cristina Tocan

    2012-01-01

    The importance of the knowledge-based economy (KBE) in the XXI century is evident. In the article the reflection of knowledge in the economy is analyzed. The main focus is the analysis of the characteristics of knowledge expression in the economy and the construction of a structure of KBE expression. This allows understanding of the mechanism by which the knowledge economy functions. The authors highlight the possibility of assessing the penetration level of KBE, which could manifest itself through the exist...

  16. Knowledge-based utility

    International Nuclear Information System (INIS)

    Chwalowski, M.

    1997-01-01

    This presentation provides industry examples of successful marketing practices by companies facing deregulation and competition. The common thread through the examples is that long term survival of today's utility structure is dependent on the strategic role of knowledge. As opposed to regulated monopolies, which usually own huge physical assets and have very little intelligence about their customers, unregulated enterprises tend to be knowledge-based, characterized by higher market value than book value. A knowledge-based enterprise gathers data, creates information and develops knowledge by leveraging it as a competitive weapon. It institutionalizes human knowledge as a corporate asset for use over and over again by the use of databases, computer networks, patents, billing, collection and customer services (BCCS), branded interfaces and management capabilities. Activities to become knowledge-based, such as replacing inventory/fixed assets with information about material usage to reduce expenditure and achieve more efficient operations, and focusing on integration and value-adding delivery capabilities, were reviewed

  17. Knowledge based Entrepreneurship

    DEFF Research Database (Denmark)

    Heebøll, John

    This book is dedicated to enterprising people with a technical or a scientific background who consider commercializing ideas and inventions within their field of expertise via a new business activity or a new company. It aims at distilling experiences from many successful and not so successful start-up ventures from the Technical University of Denmark, 1988 – 2008, into practical, portable knowledge that can be used by future knowledge-based entrepreneurs to set up new companies efficiently or to stay away from it; to do what’s needed and avoid the pitfalls....

  18. Cropland Mapping over Sahelian and Sudanian Agrosystems: A Knowledge-Based Approach Using PROBA-V Time Series at 100-m

    Directory of Open Access Journals (Sweden)

    Marie-Julie Lambert

    2016-03-01

    Early warning systems for food security require accurate and up-to-date information on the location of major crops in order to prevent hazards. A recent systematic analysis of existing cropland maps identified priority areas for cropland mapping and highlighted a major need for the Sahelian and Sudanian agrosystems. This paper proposes a knowledge-based approach to map cropland in the Sahelian and Sudanian agrosystems that benefits from the 100-m spatial resolution of the recent PROBA-V sensor. The methodology uses five temporal features characterizing crop development throughout the vegetative season to optimize cropland discrimination. A feature importance analysis validates the efficiency of using a diversity of temporal features. The fully-automated method offers the first cropland map at 100-m using the PROBA-V sensor with an overall accuracy of 84% and an F-score for the cropland class of 74%. The improvements observed compared to existing cropland products are related to the hectometric resolution, to the methodology and to the quality of the labeling layer from which reliable training samples were automatically extracted. Classification errors are mainly explained by data availability and landscape fragmentation. Further improvements are expected with the upcoming enhanced cloud screening of the PROBA-V sensor.
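
    As a schematic illustration of the "temporal features plus supervised classifier" idea, the sketch below derives five simple descriptors from synthetic NDVI profiles and trains a random forest on them. The profiles, the feature definitions and the classifier choice are assumptions for illustration and do not reproduce the paper's features, labeling layer or accuracy figures.

```python
"""Temporal-feature cropland classification sketch on synthetic NDVI profiles."""
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
dekads = np.arange(36)                      # one year of 10-day composites

def synthetic_profile(is_crop):
    """Cropland: narrow, high-amplitude peak in season; other: broad, weak peak."""
    peak = rng.uniform(18, 24) if is_crop else rng.uniform(10, 30)
    amp = rng.uniform(0.4, 0.6) if is_crop else rng.uniform(0.1, 0.3)
    base = rng.uniform(0.1, 0.2)
    ndvi = base + amp * np.exp(-((dekads - peak) ** 2) / (2 * (3.0 if is_crop else 8.0) ** 2))
    return ndvi + 0.03 * rng.standard_normal(dekads.size)

def temporal_features(ndvi):
    """Five simple descriptors of the vegetative season."""
    return np.array([ndvi.max(), ndvi.max() - ndvi.min(), float(np.argmax(ndvi)),
                     ndvi.sum(), np.max(np.diff(ndvi))])

labels = rng.integers(0, 2, 400)                            # 1 = cropland, 0 = other
X = np.vstack([temporal_features(synthetic_profile(bool(y))) for y in labels])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:300], labels[:300])
print("held-out accuracy:", clf.score(X[300:], labels[300:]))
print("feature importances:", np.round(clf.feature_importances_, 2))
```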

  19. Marketing moxie for librarians fresh ideas, proven techniques, and innovative approaches

    CERN Document Server

    Watson-Lakamp, Paula

    2015-01-01

    Robust, resilient, and flexible marketing is an absolute necessity for today's libraries. Fortunately, marketing can be fun. Through this savvy guide, you'll discover a wealth of fresh, actionable ideas and approaches that can be combined with tried-and-true marketing techniques to serve any library. Focusing on building platforms rather than chasing trends, the book offers low- and no-budget ideas for those in small libraries as well as information that can be used by libraries that have a staff of professionals. The guide opens with an overview of the basics of marketing and continues throug

  20. Distributed, cooperating knowledge-based systems

    Science.gov (United States)

    Truszkowski, Walt

    1991-01-01

    Some current research in the development and application of distributed, cooperating knowledge-based systems technology is addressed. The focus of the current research is the spacecraft ground operations environment. The underlying hypothesis is that, because of the increasing size, complexity, and cost of planned systems, conventional procedural approaches to the architecture of automated systems will give way to a more comprehensive knowledge-based approach. A hallmark of these future systems will be the integration of multiple knowledge-based agents which understand the operational goals of the system and cooperate with each other and the humans in the loop to attain the goals. The current work includes the development of a reference model for knowledge-base management, the development of a formal model of cooperating knowledge-based agents, the use of testbed for prototyping and evaluating various knowledge-based concepts, and beginning work on the establishment of an object-oriented model of an intelligent end-to-end (spacecraft to user) system. An introductory discussion of these activities is presented, the major concepts and principles being investigated are highlighted, and their potential use in other application domains is indicated.

  1. Knowledge-based systems as decision support tools in an ecosystem approach to fisheries: Comparing a fuzzy-logic and rule-based approach

    DEFF Research Database (Denmark)

    Jarre, Astrid; Paterson, B.; Moloney, C.L.

    2008-01-01

    developing and using. Their strengths lie in (i) synthesis of the problem in a logical and transparent framework, (ii) helping scientists to deliberate how to apply their science to transdisciplinary issues that are not purely scientific, and (iii) representing vehicles for delivering state-of-the-art...... science to those who need to use it. Possible applications of this approach for ecosystems of the Humboldt Current are discussed....

  2. Logically automorphically equivalent knowledge bases

    OpenAIRE

    Aladova, Elena; Plotkin, Tatjana

    2017-01-01

    Knowledge base theory provides an important example of a field where applications of universal algebra and algebraic logic look very natural, and their interaction with practical problems arising in computer science might be very productive. In this paper we study the equivalence problem for knowledge bases. Our interest is to find out how the informational equivalence is related to the logical description of knowledge. Studying various equivalences of knowledge bases allows us to compare d...

  3. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Chunhua Li

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.

  4. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.

    Science.gov (United States)

    Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.
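
    A toy rendering of the selection idea is sketched below: a functional constraint flags conflicting candidate facts, and facts are queued for the crowd by a score that favours conflicted, low-confidence candidates. The data, the constraint set and the scoring rule are invented and are not the rank-based or graph-based algorithms proposed in the paper.

```python
"""Toy sketch of selecting the most beneficial extracted facts for crowdsourcing."""

facts = [  # (subject, predicate, object, extractor confidence)
    ("Ada_Lovelace", "bornIn", "London", 0.55),
    ("Ada_Lovelace", "bornIn", "Paris",  0.52),
    ("Alan_Turing",  "bornIn", "London", 0.97),
    ("Alan_Turing",  "fieldOf", "Logic", 0.60),
]
FUNCTIONAL = {"bornIn"}     # semantic constraint: at most one object per subject

def conflict_sets(fact_list):
    """Group facts that violate a functional constraint on the same (subject, predicate)."""
    groups = {}
    for f in fact_list:
        if f[1] in FUNCTIONAL:
            groups.setdefault((f[0], f[1]), []).append(f)
    return [g for g in groups.values() if len(g) > 1]

def benefit(fact, in_conflict):
    uncertainty = 1.0 - abs(fact[3] - 0.5) * 2          # highest near confidence 0.5
    return uncertainty + (1.0 if in_conflict else 0.0)   # conflicted facts first

conflicted = {f for group in conflict_sets(facts) for f in group}
queue = sorted(facts, key=lambda f: benefit(f, f in conflicted), reverse=True)
for f in queue[:2]:                                      # send top-k to the crowd
    print("ask crowd about:", f)
```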

  5. Knowledge-based diagnosis for aerospace systems

    Science.gov (United States)

    Atkinson, David J.

    1988-01-01

    The need for automated diagnosis in aerospace systems and the approach of using knowledge-based systems are examined. Research issues in knowledge-based diagnosis which are important for aerospace applications are treated along with a review of recent relevant research developments in Artificial Intelligence. The design and operation of some existing knowledge-based diagnosis systems are described. The systems described and compared include the LES expert system for liquid oxygen loading at NASA Kennedy Space Center, the FAITH diagnosis system developed at the Jet Propulsion Laboratory, the PES procedural expert system developed at SRI International, the CSRL approach developed at Ohio State University, the StarPlan system developed by Ford Aerospace, the IDM integrated diagnostic model, and the DRAPhys diagnostic system developed at NASA Langley Research Center.

  6. Querying Natural Logic Knowledge Bases

    DEFF Research Database (Denmark)

    Andreasen, Troels; Bulskov, Henrik; Jensen, Per Anker

    2017-01-01

    This paper describes the principles of a system applying natural logic as a knowledge base language. Natural logics are regimented fragments of natural language employing high level inference rules. We advocate the use of natural logic for knowledge bases dealing with querying of classes...... in ontologies and class-relationships such as are common in life-science descriptions. The paper adopts a version of natural logic with recursive restrictive clauses such as relative clauses and adnominal prepositional phrases. It includes passive as well as active voice sentences. We outline a prototype...... for partial translation of natural language into natural logic, featuring further querying and conceptual path finding in natural logic knowledge bases....

  7. Current trends on knowledge-based systems

    CERN Document Server

    Valencia-García, Rafael

    2017-01-01

    This book presents innovative and high-quality research on the implementation of conceptual frameworks, strategies, techniques, methodologies, informatics platforms and models for developing advanced knowledge-based systems and their application in different fields, including Agriculture, Education, Automotive, Electrical Industry, Business Services, Food Manufacturing, Energy Services, Medicine and others. Knowledge-based technologies employ artificial intelligence methods to heuristically address problems that cannot be solved by means of formal techniques. These technologies draw on standard and novel approaches from various disciplines within Computer Science, including Knowledge Engineering, Natural Language Processing, Decision Support Systems, Artificial Intelligence, Databases, Software Engineering, etc. As a combination of different fields of Artificial Intelligence, the area of Knowledge-Based Systems applies knowledge representation, case-based reasoning, neural networks, Semantic Web and TICs used...

  8. Knowledge-based Telecom Industry

    OpenAIRE

    Vinje, Villeman; Nordkvelde, Marius

    2011-01-01

    BI Norwegian School of Management is conducting a national research project entitled “A knowledge-based Norway”. Thirteen major knowledge-based industries in Norway are being analyzed under the auspices of the project. This study assesses the underlying properties of a global knowledge hub to examine the extent to which the Norwegian telecom industry – which encompasses all telecom firms located in Norway regardless of ownership – constitutes a global knowledge hub. It commences with a ge...

  9. A Knowledge-based Recommendation Framework using SVN Numbers

    Directory of Open Access Journals (Sweden)

    Roddy Cabezas Padilla

    2017-06-01

    Current knowledge-based recommender systems, despite having proven useful and having a high impact, still exhibit some shortcomings. Among their limitations are the lack of more flexible models and of a way to include the indeterminacy of the factors involved in computing a global similarity.

  10. An Innovative Approach to Addressing Childhood Obesity: A Knowledge-Based Infrastructure for Supporting Multi-Stakeholder Partnership Decision-Making in Quebec, Canada

    Directory of Open Access Journals (Sweden)

    Nii Antiaye Addy

    2015-01-01

    Multi-stakeholder partnerships (MSPs) have become a widespread means for deploying policies in a whole-of-society strategy to address the complex problem of childhood obesity. However, decision-making in MSPs is fraught with challenges, as decision-makers are faced with complexity, and have to reconcile disparate conceptualizations of knowledge across multiple sectors with diverse sets of indicators and data. These challenges can be addressed by supporting MSPs with innovative tools for obtaining, organizing and using data to inform decision-making. The purpose of this paper is to describe and analyze the development of a knowledge-based infrastructure to support MSP decision-making processes. The paper emerged from a study to define specifications for a knowledge-based infrastructure to provide decision support for community-level MSPs in the Canadian province of Quebec. As part of the study, a process assessment was conducted to understand the needs of communities as they collect, organize, and analyze data to make decisions about their priorities. The result of this process is a “portrait”, which is an epidemiological profile of health and nutrition in their community. Portraits inform strategic planning and development of interventions, and are used to assess the impact of interventions. Our key findings indicate ambiguities and disagreement among MSP decision-makers regarding causal relationships between actions and outcomes, and the relevant data needed for making decisions. MSP decision-makers expressed a desire for easy-to-use tools that facilitate the collection, organization, synthesis, and analysis of data, to enable decision-making in a timely manner. Findings inform conceptual modeling and ontological analysis to capture the domain knowledge and specify relationships between actions and outcomes. This modeling and analysis provide the foundation for an ontology, encoded using OWL 2 Web Ontology Language. The ontology is developed

  11. Conformational temperature-dependent behavior of a histone H2AX: a coarse-grained Monte Carlo approach via knowledge-based interaction potentials.

    Directory of Open Access Journals (Sweden)

    Miriam Fritsche

    Full Text Available Histone proteins are not only important due to their vital role in cellular processes such as DNA compaction, replication and repair but also show intriguing structural properties that might be exploited for bioengineering purposes such as the development of nano-materials. Based on their biological and technological implications, it is interesting to investigate the structural properties of proteins as a function of temperature. In this work, we study the spatial response dynamics of the histone H2AX, consisting of 143 residues, by a coarse-grained bond fluctuating model for a broad range of normalized temperatures. A knowledge-based interaction matrix is used as input for the residue-residue Lennard-Jones potential.We find a variety of equilibrium structures including global globular configurations at low normalized temperature (T* = 0.014, combination of segmental globules and elongated chains (T* = 0.016,0.017, predominantly elongated chains (T* = 0.019,0.020, as well as universal SAW conformations at high normalized temperature (T* ≥ 0.023. The radius of gyration of the protein exhibits a non-monotonic temperature dependence with a maximum at a characteristic temperature (T(c* = 0.019 where a crossover occurs from a positive (stretching at T* ≤ T(c* to negative (contraction at T* ≥ T(c* thermal response on increasing T*.

  12. Exchanging Description Logic Knowledge Bases

    NARCIS (Netherlands)

    Arenas, M.; Botoeva, E.; Calvanese, D.; Ryzhikov, V.; Sherkhonov, E.

    2012-01-01

    In this paper, we study the problem of exchanging knowledge between a source and a target knowledge base (KB), connected through mappings. Differently from the traditional database exchange setting, which considers only the exchange of data, we are interested in exchanging implicit knowledge. As

  13. Machine intelligence and knowledge bases

    Energy Technology Data Exchange (ETDEWEB)

    Furukawa, K

    1981-09-01

    The basic functions necessary in machine intelligence are a knowledge base and a logic programming language such as PROLOG using deductive reasoning. Recently, inductive reasoning based on meta-knowledge and default reasoning have been developed. The creative thought model of Lenat is reviewed and the concept of knowledge engineering is introduced. 17 references.

  14. Automated knowledge base development from CAD/CAE databases

    Science.gov (United States)

    Wright, R. Glenn; Blanchard, Mary

    1988-01-01

    Knowledge base development requires a substantial investment in time, money, and resources in order to capture the knowledge and information necessary for anything other than trivial applications. This paper addresses a means to integrate the design and knowledge base development process through automated knowledge base development from CAD/CAE databases and files. Benefits of this approach include the development of a more efficient means of knowledge engineering, resulting in the timely creation of large knowledge based systems that are inherently free of error.

  15. Knowledge base verification based on enhanced colored petri net

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Hyun; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1997-12-31

    Verification is a process aimed at demonstrating whether a system meets its specified requirements. As expert systems are used in various applications, knowledge base verification takes an important position. The conventional Petri net approach that has recently been studied for knowledge base verification is found to be inadequate for verifying the knowledge bases of large and complex systems, such as the alarm processing system of a nuclear power plant. Thus, we propose an improved method that models the knowledge base as an enhanced colored Petri net. In this study, we analyze the reachability and the error characteristics of the knowledge base and apply the method to the verification of a simple knowledge base. 8 refs., 4 figs. (Author)

  16. Knowledge base verification based on enhanced colored petri net

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Hyun; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    1998-12-31

    Verification is a process aimed at demonstrating whether a system meets its specified requirements. As expert systems are used in various applications, knowledge base verification takes an important position. The conventional Petri net approach that has recently been studied for knowledge base verification is found to be inadequate for verifying the knowledge bases of large and complex systems, such as the alarm processing system of a nuclear power plant. Thus, we propose an improved method that models the knowledge base as an enhanced colored Petri net. In this study, we analyze the reachability and the error characteristics of the knowledge base and apply the method to the verification of a simple knowledge base. 8 refs., 4 figs. (Author)

  17. Transition to knowledge-based economy in Saudi Arabia

    NARCIS (Netherlands)

    Nour, S.

    2014-01-01

    This paper discusses the progress in transition to knowledge-based economy in Saudi Arabia. As for the methodology, this paper uses updated secondary data obtained from different sources. It uses both descriptive and comparative approaches and uses the OECD definition of knowledge-based economy and

  18. Automated knowledge-base refinement

    Science.gov (United States)

    Mooney, Raymond J.

    1994-01-01

    Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.

  19. Prior knowledge-based approach for associating contaminants with biological effects: A case study in the St. Croix river basin, MN, WI, USA.

    Science.gov (United States)

    Evaluating the potential human health and/or ecological risks associated with exposures to complex chemical mixtures in the ambient environment is one of the central challenges of chemical safety assessment and environmental protection. There is a need for approaches that can he...

  20. Knowledge-Based Aircraft Automation: Managers Guide on the use of Artificial Intelligence for Aircraft Automation and Verification and Validation Approach for a Neural-Based Flight Controller

    Science.gov (United States)

    Broderick, Ron

    1997-01-01

    The ultimate goal of this report was to integrate the powerful tools of artificial intelligence into the traditional process of software development. To maintain the US aerospace competitive advantage, traditional aerospace and software engineers need to more easily incorporate the technology of artificial intelligence into the advanced aerospace systems being designed today. The future goal was to transition artificial intelligence from an emerging technology to a standard technology that is considered early in the life cycle process to develop state-of-the-art aircraft automation systems. This report addressed the future goal in two ways. First, it provided a matrix that identified typical aircraft automation applications conducive to various artificial intelligence methods. The purpose of this matrix was to provide top-level guidance to managers contemplating the possible use of artificial intelligence in the development of aircraft automation. Second, the report provided a methodology to formally evaluate neural networks as part of the traditional process of software development. The matrix was developed by organizing the discipline of artificial intelligence into the following six methods: logical, object representation-based, distributed, uncertainty management, temporal and neurocomputing. Next, a study of existing aircraft automation applications that have been conducive to artificial intelligence implementation resulted in the following five categories: pilot-vehicle interface, system status and diagnosis, situation assessment, automatic flight planning, and aircraft flight control. The resulting matrix provided management guidance to understand artificial intelligence as it applied to aircraft automation. The approach taken to develop a methodology to formally evaluate neural networks as part of the software engineering life cycle was to start with the existing software quality assurance standards and to change these standards to include neural network

  1. A knowledge-based approach for identification of drugs against vivapain-2 protein of Plasmodium vivax through pharmacophore-based virtual screening with comparative modelling.

    Science.gov (United States)

    Yadav, Manoj Kumar; Singh, Amisha; Swati, D

    2014-08-01

    Malaria is one of the most infectious diseases in the world. Plasmodium vivax, the pathogen causing endemic malaria in humans worldwide, is responsible for extensive disease morbidity. Due to the emergence of resistance to common anti-malarial drugs, there is a continuous need to develop a new class of drugs for this pathogen. P. vivax cysteine protease, also known as vivapain-2, plays an important role in haemoglobin hydrolysis and is considered essential for the survival of the parasite. The three-dimensional (3D) structure of vivapain-2 has not been determined experimentally, so its structure is modelled using a comparative modelling approach and further validated by Qualitative Model Energy Analysis (QMEAN) and RAMPAGE tools. The potential binding site of the selected vivapain-2 structure has been detected by a grid-based function prediction method. Drug targets and their respective drugs similar to vivapain-2 have been identified using three publicly available databases: STITCH 3.1, DrugBank and the Therapeutic Target Database (TTD). The second approach of this work focuses on a docking study of the selected drug E-64 against the vivapain-2 protein. Docking reveals crucial information about key residues (Asn281, Cys283, Val396 and Asp398) that are responsible for holding the ligand in the active site. The similarity-search criterion is used for the preparation of our in-house database of drugs, obtained by filtering the drugs from the DrugBank database. A five-point 3D pharmacophore model is generated for the docked complex of vivapain-2 with E-64. This 3D pharmacophore-based virtual screening identifies three new drugs, of which one is approved and the other two are experimental. The ADMET properties of these drugs are found to be in the desired range. These drugs with novel scaffolds may act as potent drugs for treating malaria caused by P. vivax.

  2. Knowledge-based Fragment Binding Prediction

    Science.gov (United States)

    Tang, Grace W.; Altman, Russ B.

    2014-01-01

    Target-based drug discovery must assess many drug-like compounds for potential activity. Focusing on low-molecular-weight compounds (fragments) can dramatically reduce the chemical search space. However, approaches for determining protein-fragment interactions have limitations. Experimental assays are time-consuming, expensive, and not always applicable. At the same time, computational approaches using physics-based methods have limited accuracy. With increasing high-resolution structural data for protein-ligand complexes, there is now an opportunity for data-driven approaches to fragment binding prediction. We present FragFEATURE, a machine learning approach to predict small molecule fragments preferred by a target protein structure. We first create a knowledge base of protein structural environments annotated with the small molecule substructures they bind. These substructures have low-molecular weight and serve as a proxy for fragments. FragFEATURE then compares the structural environments within a target protein to those in the knowledge base to retrieve statistically preferred fragments. It merges information across diverse ligands with shared substructures to generate predictions. Our results demonstrate FragFEATURE's ability to rediscover fragments corresponding to the ligand bound with 74% precision and 82% recall on average. For many protein targets, it identifies high scoring fragments that are substructures of known inhibitors. FragFEATURE thus predicts fragments that can serve as inputs to fragment-based drug design or serve as refinement criteria for creating target-specific compound libraries for experimental or computational screening. PMID:24762971
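
    The core idea of comparing a target's structural environments to an annotated knowledge base and retrieving statistically preferred fragments can be sketched in a few lines. The fingerprints, fragment annotations and similarity measure below are invented stand-ins for FragFEATURE's actual descriptors, not its real data or scoring.

```python
# Hedged sketch of knowledge-base-driven fragment retrieval: score fragments by
# how often they annotate knowledge-base environments similar to the query
# environment. All data are toy placeholders.
import numpy as np
from collections import Counter

def preferred_fragments(query_fp, kb_fps, kb_fragments, k=5):
    """Count fragments among the k knowledge-base environments most similar
    (by cosine similarity) to the query environment fingerprint."""
    sims = kb_fps @ query_fp / (
        np.linalg.norm(kb_fps, axis=1) * np.linalg.norm(query_fp) + 1e-12)
    top = np.argsort(-sims)[:k]
    counts = Counter()
    for idx in top:
        counts.update(kb_fragments[idx])   # fragments seen bound at this environment
    return counts.most_common()

rng = np.random.default_rng(1)
kb_fps = rng.random((100, 32))                    # toy environment fingerprints
kb_fragments = [[f"frag_{i % 7}"] for i in range(100)]
print(preferred_fragments(rng.random(32), kb_fps, kb_fragments))
```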

  3. Designing an 'expert knowledge' based approach for the quantification of historical floods - the case study of the Kinzig catchment in Southwest Germany

    Science.gov (United States)

    Bösmeier, Annette; Glaser, Rüdiger; Stahl, Kerstin; Himmelsbach, Iso; Schönbein, Johannes

    2017-04-01

    Future estimations of flood hazard and risk for developing optimal coping and adaptation strategies inevitably include considerations of the frequency and magnitude of past events. Methods of historical climatology represent one way of assessing flood occurrences beyond the period of instrumental measurements and can thereby substantially help to extend the view into the past and to improve modern risk analysis. Such historical information can be of additional value and has been used in statistical approaches like Bayesian flood frequency analyses during recent years. However, the derivation of quantitative values from vague descriptive information in historical sources remains a crucial challenge. We explored possibilities for the parametrization of descriptive flood-related data, specifically for the assessment of historical floods, in a framework that combines a hermeneutical approach with mathematical and statistical methods. This study forms part of the transnational Franco-German research project TRANSRISK2 (2014 - 2017), funded by ANR and DFG, which focuses on exploring the flood history of the last 300 years for the Upper and Middle Rhine regions. A broad database of flood events had been compiled, dating back to AD 1500. The events had been classified based on hermeneutical methods, depending on intensity, spatial dimension, temporal structure, damages and mitigation measures associated with the specific events. This indexed database allowed the exploration of a link between descriptive data and quantitative information for the overlapping time period of classified floods and instrumental measurements since the end of the 19th century. Thereby, flood peak discharges, as a quantitative measure of the severity of a flood, were used to assess the discharge intervals (upper and lower thresholds) for flood classes within different time intervals, both for validating the flood classification and for examining the trend in the perception threshold over time

  4. Knowledge-based decision tree approach for mapping spatial distribution of rice crop using C-band synthetic aperture radar-derived information

    Science.gov (United States)

    Mishra, Varun Narayan; Prasad, Rajendra; Kumar, Pradeep; Srivastava, Prashant K.; Rai, Praveen Kumar

    2017-10-01

    Updated and accurate information of rice-growing areas is vital for food security and investigating the environmental impact of rice ecosystems. The intent of this work is to explore the feasibility of dual-polarimetric C-band Radar Imaging Satellite-1 (RISAT-1) data in delineating rice crop fields from other land cover features. A two polarization combination of RISAT-1 backscatter, namely ratio (HH/HV) and difference (HH-HV), significantly enhanced the backscatter difference between rice and nonrice categories. With these inputs, a QUEST decision tree (DT) classifier is successfully employed to extract the spatial distribution of rice crop areas. The results showed the optimal polarization combination to be HH along with HH/HV and HH-HV for rice crop mapping with an accuracy of 88.57%. Results were further compared with a Landsat-8 operational land imager (OLI) optical sensor-derived rice crop map. Spatial agreement of almost 90% was achieved between outputs produced from Landsat-8 OLI and RISAT-1 data. The simplicity of the approach used in this work may serve as an effective tool for rice crop mapping.
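
    The feature construction and classification described here can be illustrated with a short sketch: derive the HH/HV ratio and HH-HV difference from backscatter bands and train a decision tree on them. scikit-learn's CART tree stands in for the QUEST classifier named in the abstract, and the backscatter values and labels below are synthetic, not RISAT-1 data.

```python
# Illustrative sketch: HH/HV and HH-HV features feeding a decision tree that
# separates rice from non-rice pixels. Data are synthetic; CART replaces QUEST.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
hh = rng.normal(-8.0, 2.0, n)          # toy HH backscatter (dB)
hv = rng.normal(-14.0, 2.0, n)         # toy HV backscatter (dB)
labels = (hh - hv > 5.5).astype(int)   # synthetic "rice" rule, for illustration only

features = np.column_stack([hh, hh / hv, hh - hv])   # HH, HH/HV, HH-HV
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.3, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)
print("overall accuracy:", tree.score(X_test, y_test))
```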

  5. Knowledge based management of technical specifications

    International Nuclear Information System (INIS)

    Fiedler, U.; Schalm, S.; Pranckeviciute, K.

    1992-01-01

    TechSPEX is a knowledge based advisory system for checking the status of a nuclear plant on compliance with the safety limits and the limiting conditions of operation. These prescripts for safe reactor operation exist as textual information. For the purpose of its operational use an explicit representation formalism is introduced. On this basis, various approaches of text retrieval are realized, condition based surveillance and control is supported too. Knowledge editing and verification modules ease the adaption to changing requirements. TechSPEX has been implemented in PROLOG. (author). 6 refs, 3 figs

  6. Prior knowledge-based approach for associating contaminants with biological effects: A case study in the St. Croix River basin, MN, WI, USA

    Science.gov (United States)

    Schroeder, Anthony L.; Martinovic-Weigelt, Dalma; Ankley, Gerald T.; Lee, Kathy E.; Garcia-Reyero, Natalia; Perkins, Edward J.; Schoenfuss, Heiko L.; Villeneuve, Daniel L.

    2017-01-01

    Evaluating potential adverse effects of complex chemical mixtures in the environment is challenging. One way to address that challenge is through more integrated analysis of chemical monitoring and biological effects data. In the present study, water samples from five locations near two municipal wastewater treatment plants in the St. Croix River basin, on the border of MN and WI, USA, were analyzed for 127 organic contaminants. Known chemical-gene interactions were used to develop site-specific knowledge assembly models (KAMs) and formulate hypotheses concerning possible biological effects associated with chemicals detected in water samples from each location. Additionally, hepatic gene expression data were collected for fathead minnows (Pimephales promelas) exposed in situ, for 12 d, at each location. Expression data from oligonucleotide microarrays were analyzed to identify functional annotation terms enriched among the differentially-expressed probes. The general nature of many of the terms made hypothesis formulation on the basis of the transcriptome-level response alone difficult. However, integrated analysis of the transcriptome data in the context of the site-specific KAMs allowed for evaluation of the likelihood of specific chemicals contributing to observed biological responses. Thirteen chemicals (atrazine, carbamazepine, metformin, thiabendazole, diazepam, cholesterol, p-cresol, phenytoin, omeprazole, erythromycin, 17β-estradiol, cimetidine, and estrone), for which there was statistically significant concordance between occurrence at a site and expected biological response as represented in the KAM, were identified. While not definitive, the approach provides a line of evidence for evaluating potential cause-effect relationships between components of a complex mixture of contaminants and biological effects data, which can inform subsequent monitoring and investigation.
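
    The concordance idea behind a site-specific knowledge assembly model can be sketched simply: chemicals detected at a site predict gene targets via known chemical-gene interactions, and that prediction is compared against the genes observed to be differentially expressed. The interaction map and gene lists below are invented for illustration and are not drawn from the study.

```python
# Sketch of KAM-style concordance between detected chemicals and observed
# differentially expressed genes (DEGs). All mappings here are toy examples.
chemical_gene = {                       # toy chemical -> gene interaction map
    "atrazine": {"cyp1a1", "vtg1"},
    "carbamazepine": {"abcb4"},
    "17beta-estradiol": {"vtg1", "esr1"},
}

def predicted_genes(detected_chemicals):
    genes = set()
    for chem in detected_chemicals:
        genes |= chemical_gene.get(chem, set())
    return genes

def concordance(detected_chemicals, degs):
    """Fraction of predicted target genes that were differentially expressed."""
    predicted = predicted_genes(detected_chemicals)
    return len(predicted & degs) / len(predicted) if predicted else 0.0

site_chemicals = {"atrazine", "17beta-estradiol"}
site_degs = {"vtg1", "esr1", "sod1"}    # toy differentially expressed genes
print(concordance(site_chemicals, site_degs))
```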

  7. In defense of types in knowledge-based CAAD

    DEFF Research Database (Denmark)

    Galle, Per

    1997-01-01

    There are two basic approaches to representation of design knowledge in knowledge-based CAAD systems, the type-based approach which has a long tradition, and the more recent typeless approach. Proponents of the latter have offered a number of arguments against the type-based approach which...

  8. Knowledge Base Editor (SharpKBE)

    Science.gov (United States)

    Tikidjian, Raffi; James, Mark; Mackey, Ryan

    2007-01-01

    The SharpKBE software provides a graphical user interface environment for domain experts to build and manage knowledge base systems. Knowledge bases can be exported/translated to various target languages automatically, including customizable target languages.

  9. Tourism informatics towards novel knowledge based approaches

    CERN Document Server

    Hashimoto, Kiyota; Iwamoto, Hidekazu

    2015-01-01

    This book introduces new trends in the theory and practice of information technologies in tourism. The book covers not only fundamental contributions but also discusses innovative and emerging technologies to promote and develop a new generation of tourism informatics theory and its applications. Some chapters are concerned with data analysis, web technologies, social media, and their case studies. Travel information on the web provided by travelers is very useful for other travelers making their travel plans. A chapter in this book proposes a method for interactive retrieval of information on accommodation facilities to support travelling customers in their travel preparations. An adaptive user interface for a personalized transportation guidance system is also proposed. Another chapter shows a novel support system for collaborative tourism planning that uses case reports collected via the Internet. Also, a system for recommending hotels to users is proposed and evaluated. Other ch...

  10. Knowledge base rule partitioning design for CLIPS

    Science.gov (United States)

    Mainardi, Joseph D.; Szatkowski, G. P.

    1990-01-01

    This describes a knowledge base (KB) partitioning approach to solve the problem of real-time performance with the CLIPS AI shell when it contains large numbers of rules and facts. This work is funded under the joint USAF/NASA Advanced Launch System (ALS) Program as applied research in expert systems to perform vehicle checkout for real-time controller and diagnostic monitoring tasks. The main objective of the Expert System advanced development project (ADP-2302) is to provide robust systems responding to new data frames at 0.1 to 1.0 second intervals. The intelligent system control must be performed within the specified real-time window in order to meet the demands of the given application. Partitioning the KB reduces the complexity of the inferencing Rete net at any given time. This reduced complexity improves performance without undue impact during load and unload cycles. The second objective is to produce highly reliable intelligent systems. This requires simple and automated approaches to the KB verification & validation task. Partitioning the KB reduces rule interaction complexity overall. Reduced interaction simplifies the necessary V&V testing by focusing attention only on individual areas of interest. Many systems require a robustness that involves a large number of rules, most of which are mutually exclusive under different phases or conditions. The ideal solution is to control the knowledge base by loading rules that directly apply to that condition, while stripping out all rules and facts that are not used during that cycle. The practical approach is to cluster rules and facts into associated 'blocks'. A simple approach has been designed to control the addition and deletion of 'blocks' of rules and facts, while allowing real-time operations to run freely. Timing tests of real-time performance for specific machines under R/T operating systems have not been completed but are planned as part of the analysis process to validate the design.
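
    The block-swapping idea is language-agnostic and can be sketched outside CLIPS itself: rules and facts are grouped into named blocks, and only the blocks relevant to the current phase are kept active. The block names and rule strings below are hypothetical, and real CLIPS block management would of course work at the shell level rather than through a Python wrapper.

```python
# Language-agnostic sketch of partitioning a knowledge base into loadable blocks.
class PartitionedKB:
    def __init__(self, blocks):
        self.blocks = blocks            # block name -> list of rules/facts
        self.active = {}                # currently loaded blocks

    def load(self, name):
        """Add one block of rules/facts to the active knowledge base."""
        self.active[name] = self.blocks[name]

    def unload(self, name):
        """Strip a block out when its phase or condition no longer applies."""
        self.active.pop(name, None)

    def active_rules(self):
        return [r for rules in self.active.values() for r in rules]

kb = PartitionedKB({
    "prelaunch_checks": ["rule: verify-valve-positions", "rule: check-tank-pressure"],
    "ascent_monitoring": ["rule: monitor-engine-temps"],
})
kb.load("prelaunch_checks")             # phase 1: only prelaunch rules active
print(kb.active_rules())
kb.unload("prelaunch_checks")
kb.load("ascent_monitoring")            # phase 2: swap in the ascent block
print(kb.active_rules())
```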

  11. Maximizing the knowledge base: Knowledge Base+ and the Global Open Knowledgebase

    Directory of Open Access Journals (Sweden)

    Liam Earney

    2013-11-01

    Full Text Available The motivation for the two projects discussed in this article is the simple premise that the current inaccuracies of data in the library supply chain are detrimental to the user experience, limit the ability of institutions to effectively manage their collections, and that resolving them is increasingly unsustainable at the institutional level. Two projects, Knowledge Base+ (KB+) in the UK and Global Open Knowledgebase (GOKb) in the USA, are working in cooperation with a range of other partners, and adopting a community-centric approach to address these issues and broaden the scope and utility of knowledge bases more generally. The belief is that only through collaboration at a wide range of levels and on a number of fronts can these challenges be overcome.

  12. A Proven Way to Incorporate Catholic Social Thought in Business School Curricula: Teaching Two Approaches to Management in the Classroom

    Science.gov (United States)

    Dyck, Bruno

    2013-01-01

    Widespread agreement suggests that it is appropriate and desirable to develop and teach business theory and practice consistent with Catholic social teaching (CST) in Catholic business schools. Such a curriculum would cover the same mainstream material taught in other business schools, but then offer a CST approach to business that can be…

  13. Comparison of clinical knowledge bases for summarization of electronic health records.

    Science.gov (United States)

    McCoy, Allison B; Sittig, Dean F; Wright, Adam

    2013-01-01

    Automated summarization tools that create condition-specific displays may improve clinician efficiency. These tools require new kinds of knowledge that are difficult to obtain. We compared five problem-medication pair knowledge bases generated using four previously described knowledge base development approaches. The number of pairs in the resulting mapped knowledge bases varied widely due to differing mapping techniques from the source terminologies, ranging from 2,873 to 63,977,738 pairs. The number of overlapping pairs across knowledge bases was low, with one knowledge base having half of its pairs overlapping with another knowledge base, and most having less than a third overlapping. Further research is necessary to better evaluate the knowledge bases independently in additional settings, and to identify methods to integrate the knowledge bases.
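
    The overlap comparison described here amounts to set intersection over (problem, medication) pairs. A minimal sketch is shown below; the pairs and knowledge base names are invented examples, not the knowledge bases actually compared in the study.

```python
# Minimal sketch of pairwise overlap between problem-medication knowledge bases.
kbs = {
    "kb_manual":  {("hypertension", "lisinopril"), ("diabetes", "metformin")},
    "kb_mined":   {("hypertension", "lisinopril"), ("asthma", "albuterol"),
                   ("diabetes", "metformin"), ("gerd", "omeprazole")},
    "kb_billing": {("gerd", "omeprazole")},
}

for name_a, pairs_a in kbs.items():
    for name_b, pairs_b in kbs.items():
        if name_a < name_b:                      # each unordered pair once
            shared = len(pairs_a & pairs_b)
            frac = shared / len(pairs_a)         # fraction of A's pairs found in B
            print(f"{name_a} vs {name_b}: {shared} shared ({frac:.0%} of {name_a})")
```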

  14. Flow-Based Provenance

    Directory of Open Access Journals (Sweden)

    Sabah Al-Fedaghi

    2017-02-01

    Full Text Available Aim/Purpose: With information almost effortlessly created and spontaneously available, current progress in Information and Communication Technology (ICT) has led to the complication that information must be scrutinized for trustworthiness and provenance. Information systems must become provenance-aware to be satisfactory in accountability, reproducibility, and trustworthiness of data. Background: Multiple models for abstract representation of provenance have been proposed to describe entities, people, and activities involved in producing a piece of data, including the Open Provenance Model (OPM) and the World Wide Web Consortium. These models lack certain concepts necessary for specifying workflows and encoding the provenance of data products used and generated. Methodology: Without loss of generality, the focus of this paper is on the OPM depiction of provenance in terms of a directed graph. We have redrawn several case studies in the framework of our proposed model in order to compare and evaluate it against OPM for representing these cases. Contribution: This paper offers an alternative flow-based diagrammatic language that can form a foundation for modeling of provenance. The model described here provides an (abstract) machine-like representation of provenance. Findings: The results suggest a viable alternative in the area of diagrammatic representation for provenance applications. Future Research: Future work will seek to achieve more accurate comparisons with current models in the field.

  15. Knowledge-based identification of soluble biomarkers: hepatic fibrosis in NAFLD as an example.

    Science.gov (United States)

    Page, Sandra; Birerdinc, Aybike; Estep, Michael; Stepanova, Maria; Afendy, Arian; Petricoin, Emanuel; Younossi, Zobair; Chandhoke, Vikas; Baranova, Ancha

    2013-01-01

    The discovery of biomarkers is often performed using high-throughput proteomics-based platforms and is limited to the molecules recognized by a given set of purified and validated antigens or antibodies. Knowledge-based, or systems biology, approaches that involve the analysis of integrated data, predominantly molecular pathways and networks, may infer quantitative changes in the levels of biomolecules not included in the given assay from the levels of the analytes profiled. In this study we attempted to use a knowledge-based approach to predict biomarkers reflecting the changes in underlying protein phosphorylation events using Nonalcoholic Fatty Liver Disease (NAFLD) as a model. Two soluble biomarkers, CCL-2 and FasL, were inferred in silico as relevant to NAFLD pathogenesis. Predictive performance of these biomarkers was studied using serum samples collected from patients with histologically proven NAFLD. Serum levels of both molecules, in combination with clinical and demographic data, were predictive of hepatic fibrosis in a cohort of NAFLD patients. Our study suggests that (1) NASH-specific disruption of the kinase-driven signaling cascades in visceral adipose tissue leads to detectable changes in the levels of soluble molecules released into the bloodstream, and (2) biomarkers discovered in silico could contribute to predictive models for non-malignant chronic diseases.

  16. Knowledge-based identification of soluble biomarkers: hepatic fibrosis in NAFLD as an example.

    Directory of Open Access Journals (Sweden)

    Sandra Page

    Full Text Available The discovery of biomarkers is often performed using high-throughput proteomics-based platforms and is limited to the molecules recognized by a given set of purified and validated antigens or antibodies. Knowledge-based, or systems biology, approaches that involve the analysis of integrated data, predominantly molecular pathways and networks, may infer quantitative changes in the levels of biomolecules not included in the given assay from the levels of the analytes profiled. In this study we attempted to use a knowledge-based approach to predict biomarkers reflecting the changes in underlying protein phosphorylation events using Nonalcoholic Fatty Liver Disease (NAFLD) as a model. Two soluble biomarkers, CCL-2 and FasL, were inferred in silico as relevant to NAFLD pathogenesis. Predictive performance of these biomarkers was studied using serum samples collected from patients with histologically proven NAFLD. Serum levels of both molecules, in combination with clinical and demographic data, were predictive of hepatic fibrosis in a cohort of NAFLD patients. Our study suggests that (1) NASH-specific disruption of the kinase-driven signaling cascades in visceral adipose tissue leads to detectable changes in the levels of soluble molecules released into the bloodstream, and (2) biomarkers discovered in silico could contribute to predictive models for non-malignant chronic diseases.

  17. A knowledge base browser using hypermedia

    Science.gov (United States)

    Pocklington, Tony; Wang, Lui

    1990-01-01

    A hypermedia system is being developed to browse CLIPS (C Language Integrated Production System) knowledge bases. This system will be used to help train flight controllers for the Mission Control Center. Browsing this knowledge base will be accomplished either by navigating through the various collection nodes that have already been defined, or through query languages.

  18. Proven Weight Loss Methods

    Science.gov (United States)

    Fact Sheet: Proven Weight Loss Methods. What can weight loss do for you? Losing weight can improve your health in a number of ways. It can lower ... at www.hormone.org/Spanish. Proven Weight Loss Methods Fact Sheet, www.hormone.org

  19. Irrelevance Reasoning in Knowledge Based Systems

    Science.gov (United States)

    Levy, A. Y.

    1993-01-01

    This dissertation considers the problem of reasoning about irrelevance of knowledge in a principled and efficient manner. Specifically, it is concerned with two key problems: (1) developing algorithms for automatically deciding what parts of a knowledge base are irrelevant to a query and (2) the utility of relevance reasoning. The dissertation describes a novel tool, the query-tree, for reasoning about irrelevance. Based on the query-tree, we develop several algorithms for deciding what formulas are irrelevant to a query. Our general framework sheds new light on the problem of detecting independence of queries from updates. We present new results that significantly extend previous work in this area. The framework also provides a setting in which to investigate the connection between the notion of irrelevance and the creation of abstractions. We propose a new approach to research on reasoning with abstractions, in which we investigate the properties of an abstraction by considering the irrelevance claims on which it is based. We demonstrate the potential of the approach for the cases of abstraction of predicates and projection of predicate arguments. Finally, we describe an application of relevance reasoning to the domain of modeling physical devices.

  20. Formal Support for Development of Knowledge-Based Systems

    NARCIS (Netherlands)

    Fensel, Dieter; Van Harmelen, Frank; Reif, Wolfgang; Ten Teije, Annette

    1998-01-01

    This article provides an approach for developing reliable knowledge-based systems. Its main contributions are: Specification is done at an architectural level that abstracts from a specific implementation formalism. The model of expertise of CommonKADS distinguishes different types of knowledge and

  1. An Insulating Glass Knowledge Base

    Energy Technology Data Exchange (ETDEWEB)

    Michael L. Doll; Gerald Hendrickson; Gerard Lagos; Russell Pylkki; Chris Christensen; Charlie Cureija

    2005-08-01

    This report will discuss issues relevant to Insulating Glass (IG) durability performance by presenting the observations and developed conclusions in a logical, sequential format. This concluding effort discusses Phase II activities and focuses on beginning to quantify IG durability issues, while continuing the approach presented in the Phase I activities (Appendix 1), which discuss a qualitative assessment of durability issues. Phase II developed a focus around two specific IG design classes previously presented in Phase I of this project. The typical box spacer and thermoplastic spacer designs, including their Failure Modes and Effects Analysis (FMEA) and fault tree diagrams, were chosen to address two currently used IG design options with varying components and failure modes. The system failures occur due to failures of components or their interfaces. Efforts to begin quantifying the durability issues focused on the development and delivery of the included computer-based IG durability simulation program. The effort to deliver the foundation for a comprehensive IG durability simulation tool is necessary to address the advancements needed to meet current and future building envelope energy performance goals. This need is based upon the current lack of IG field failure data and the lengthy field observation time necessary for this data collection. Ultimately, the simulation program is intended to be used by designers throughout the current and future industry supply chain. Its use is intended to advance IG durability as expectations grow around energy conservation and with the growth of embedded technologies required to meet energy needs. In addition, the tool has the immediate benefit of providing insight for research and improvement prioritization. Included in the simulation model presentation are elements and/or methods to address IG materials, design, process, quality, induced stress (environmental and other factors), validation, etc. In addition, acquired data

  2. An Ebola virus-centered knowledge base

    Science.gov (United States)

    Kamdar, Maulik R.; Dumontier, Michel

    2015-01-01

    Ebola virus (EBOV), of the family Filoviridae viruses, is a NIAID category A, lethal human pathogen. It is responsible for causing Ebola virus disease (EVD) that is a severe hemorrhagic fever and has a cumulative death rate of 41% in the ongoing epidemic in West Africa. There is an ever-increasing need to consolidate and make available all the knowledge that we possess on EBOV, even if it is conflicting or incomplete. This would enable biomedical researchers to understand the molecular mechanisms underlying this disease and help develop tools for efficient diagnosis and effective treatment. In this article, we present our approach for the development of an Ebola virus-centered Knowledge Base (Ebola-KB) using Linked Data and Semantic Web Technologies. We retrieve and aggregate knowledge from several open data sources, web services and biomedical ontologies. This knowledge is transformed to RDF, linked to the Bio2RDF datasets and made available through a SPARQL 1.1 Endpoint. Ebola-KB can also be explored using an interactive Dashboard visualizing the different perspectives of this integrated knowledge. We showcase how different competency questions, asked by domain users researching the druggability of EBOV, can be formulated as SPARQL Queries or answered using the Ebola-KB Dashboard. Database URL: http://ebola.semanticscience.org. PMID:26055098

  3. An Ebola virus-centered knowledge base.

    Science.gov (United States)

    Kamdar, Maulik R; Dumontier, Michel

    2015-01-01

    Ebola virus (EBOV), of the family Filoviridae viruses, is a NIAID category A, lethal human pathogen. It is responsible for causing Ebola virus disease (EVD) that is a severe hemorrhagic fever and has a cumulative death rate of 41% in the ongoing epidemic in West Africa. There is an ever-increasing need to consolidate and make available all the knowledge that we possess on EBOV, even if it is conflicting or incomplete. This would enable biomedical researchers to understand the molecular mechanisms underlying this disease and help develop tools for efficient diagnosis and effective treatment. In this article, we present our approach for the development of an Ebola virus-centered Knowledge Base (Ebola-KB) using Linked Data and Semantic Web Technologies. We retrieve and aggregate knowledge from several open data sources, web services and biomedical ontologies. This knowledge is transformed to RDF, linked to the Bio2RDF datasets and made available through a SPARQL 1.1 Endpoint. Ebola-KB can also be explored using an interactive Dashboard visualizing the different perspectives of this integrated knowledge. We showcase how different competency questions, asked by domain users researching the druggability of EBOV, can be formulated as SPARQL Queries or answered using the Ebola-KB Dashboard. © The Author(s) 2015. Published by Oxford University Press.
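
    The record states that competency questions can be formulated as SPARQL queries against the Ebola-KB endpoint. A hedged sketch of issuing such a query from Python is shown below; the endpoint URL is taken from the record but may no longer be live, the "/sparql" path is assumed, and the generic rdfs:label query is a placeholder rather than the actual Ebola-KB schema.

```python
# Hedged sketch of querying a SPARQL 1.1 endpoint such as the one named in the
# record. Endpoint path and query are assumptions, not the Ebola-KB schema.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://ebola.semanticscience.org/sparql")  # assumed path
sparql.setQuery("""
    SELECT ?s ?label WHERE {
        ?s <http://www.w3.org/2000/01/rdf-schema#label> ?label .
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)

results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["s"]["value"], row["label"]["value"])
```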

  4. IGENPRO knowledge-based operator support system

    International Nuclear Information System (INIS)

    Morman, J. A.

    1998-01-01

    Research and development is being performed on the knowledge-based IGENPRO operator support package for plant transient diagnostics and management to provide operator assistance during off-normal plant transient conditions. A generic thermal-hydraulic (T-H) first-principles approach is being implemented using automated reasoning, artificial neural networks and fuzzy logic to produce a generic T-H system-independent/plant-independent package. The IGENPRO package has a modular structure composed of three modules: the transient trend analysis module PROTREN, the process diagnostics module PRODIAG and the process management module PROMANA. Cooperative research and development work has focused on the PRODIAG diagnostic module of the IGENPRO package and the operator training matrix of transients used at the Braidwood Pressurized Water Reactor station. Promising simulator testing results with PRODIAG have been obtained for the Braidwood Chemical and Volume Control System (CVCS), and the Component Cooling Water System. Initial CVCS test results have also been obtained for the PROTREN module. The PROMANA effort also involves the CVCS. Future work will be focused on the long-term, slow and mild degradation transients where diagnoses of incipient T-H component failure prior to forced outage events is required. This will enhance the capability of the IGENPRO system as a predictive maintenance tool for plant staff and operator support

  5. The Coming of Knowledge-Based Business.

    Science.gov (United States)

    Davis, Stan; Botkin, Jim

    1994-01-01

    Economic growth will come from knowledge-based businesses whose "smart" products filter and interpret information. Businesses will come to think of themselves as educators and their customers as learners. (SK)

  6. Methodology for testing and validating knowledge bases

    Science.gov (United States)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.
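
    One concrete instance of premise (3) is checking a rule base for circular dependencies once it has been cast as a graph. The sketch below is a generic illustration under that assumption; the rules and the depth-first cycle search are invented for the example and are not the toolset's actual algorithms.

```python
# Sketch of a graph-based structural check on a rule base: rules whose
# conclusions feed other rules' conditions form edges; a cycle signals
# circular reasoning. The rules below are invented for illustration.
rules = {                               # rule -> rules that consume its conclusion
    "r1": ["r2"],
    "r2": ["r3"],
    "r3": ["r1"],                       # deliberate cycle r1 -> r2 -> r3 -> r1
    "r4": [],
}

def find_cycle(graph):
    """Depth-first search returning one cycle as a list of rules, or None."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(node, path):
        color[node] = GREY
        for nxt in graph.get(node, []):
            if color[nxt] == GREY:                     # back edge: cycle found
                return path[path.index(nxt):] + [nxt]
            if color[nxt] == WHITE:
                found = dfs(nxt, path + [nxt])
                if found:
                    return found
        color[node] = BLACK
        return None

    for node in graph:
        if color[node] == WHITE:
            cycle = dfs(node, [node])
            if cycle:
                return cycle
    return None

print(find_cycle(rules))   # ['r1', 'r2', 'r3', 'r1']
```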

  7. Automatic Knowledge Base Evolution by Learning Instances

    OpenAIRE

    Kim, Sundong

    2016-01-01

    A knowledge base is a way to store structured and unstructured data throughout the web. Since the size of the web is increasing rapidly, there is a great need to structure this knowledge in a fully automated way. However, fully automated knowledge-base evolution on the Semantic Web remains a major challenge, although many ontology evolution techniques are available. Therefore, learning ontologies automatically can contribute significantly to the Semantic Web community. In this paper, we propose ful...

  8. Identification of Managerial Competencies in Knowledge-based Organizations

    Directory of Open Access Journals (Sweden)

    Königová Martina

    2012-03-01

    Full Text Available Managerial competencies identification and development are important tools of human resources management that is aimed at achieving strategic organizational goals. Due to current dynamic development and changes, more and more attention is being paid to the personality of managers and their competencies, since they are viewed as important sources of achieving a competitive advantage. The objective of this article is to identify managerial competencies in the process of filling vacant working positions in knowledge-based organizations in the Czech Republic. The objective was determined with reference to the Czech Science Foundation GACR research project which focuses on the identification of managerial competencies in knowledge-based organizations in the Czech Republic. This identification within the framework of the research project is primarily designed and subsequently realised on the basis of content analysis of media communications such as advertisements - a means through which knowledge-based organizations search for suitable candidates for vacant managerial positions. The first part of the article deals with theoretical approaches to knowledge-based organizations and issues of competencies. The second part evaluates the outcomes of the survey carried out, and also summarizes the basic steps of the application of competencies. The final part summarizes the benefits and difficulties of applying the competency-based approach as a tool of efficient management of organizations for the purpose of achieving a competitive advantage.

  9. Knowledge-based scheduling of arrival aircraft

    Science.gov (United States)

    Krzeczowski, K.; Davis, T.; Erzberger, H.; Lev-Ram, I.; Bergh, C.

    1995-01-01

    A knowledge-based method for scheduling arrival aircraft in the terminal area has been implemented and tested in real-time simulation. The scheduling system automatically sequences, assigns landing times, and assigns runways to arrival aircraft by utilizing continuous updates of aircraft radar data and controller inputs. The scheduling algorithm is driven by a knowledge base which was obtained in over two thousand hours of controller-in-the-loop real-time simulation. The knowledge base contains a series of hierarchical 'rules' and decision logic that examines both performance criteria, such as delay reduction, as well as workload reduction criteria, such as conflict avoidance. The objective of the algorithms is to devise an efficient plan to land the aircraft in a manner acceptable to the air traffic controllers. This paper will describe the scheduling algorithms, give examples of their use, and present data regarding their potential benefits to the air traffic system.
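
    A heavily simplified sketch of the kind of rule-driven sequencing described here is given below: aircraft are ordered by estimated time of arrival, balanced across runways, and delayed just enough to keep a minimum separation. The separation value, runway names and traffic are invented; the real knowledge base encodes far richer, controller-derived rules.

```python
# Simplified, rule-driven arrival scheduling sketch with invented parameters.
from dataclasses import dataclass

MIN_SEPARATION = 90          # seconds between landings on one runway (assumed)
RUNWAYS = ["27L", "27R"]

@dataclass
class Arrival:
    callsign: str
    eta: int                 # estimated time of arrival, seconds from now

def schedule(arrivals):
    plan, last_landing = [], {r: -MIN_SEPARATION for r in RUNWAYS}
    for i, ac in enumerate(sorted(arrivals, key=lambda a: a.eta)):
        runway = RUNWAYS[i % len(RUNWAYS)]               # simple balancing rule
        slot = max(ac.eta, last_landing[runway] + MIN_SEPARATION)
        last_landing[runway] = slot
        plan.append((ac.callsign, runway, slot))
    return plan

print(schedule([Arrival("AAL12", 300), Arrival("UAL7", 320), Arrival("DAL9", 305)]))
```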

  10. A relational data-knowledge base system and its potential in developing a distributed data-knowledge system

    Science.gov (United States)

    Rahimian, Eric N.; Graves, Sara J.

    1988-01-01

    A new approach used in constructing a relational data-knowledge base system is described. The relational database is well suited for distribution due to its property of allowing data fragmentation and fragmentation transparency. An example is formulated of a simple relational data knowledge base which may be generalized for use in developing a relational distributed data knowledge base system. The efficiency and ease of application of such a data knowledge base management system is briefly discussed. Also discussed are the potentials of the developed model for sharing the data knowledge base as well as the possible areas of difficulty in implementing the relational data knowledge base management system.

  11. Knowledge bases for modelisation of industrial plants

    International Nuclear Information System (INIS)

    Lorre, J.P.; Evrard, J.M.; Dorlet, E.

    1992-01-01

    Our experience in the development of numerous knowledge based control systems for large industrial applications has led us to the expression of a generic problem and to the implementation of the tools to address it. This paper illustrates, with different practical examples that we have encountered, the principal concepts found in the modelling and management of large industrial knowledge bases. We thus arrive at the definition of the formalism to be used. The principles described are now integrated into the tool SPIRAL and are currently being employed in the development of several applications

  12. Information modelling and knowledge bases XXV

    CERN Document Server

    Tokuda, T; Jaakkola, H; Yoshida, N

    2014-01-01

    Because of our ever increasing use of and reliance on technology and information systems, information modelling and knowledge bases continue to be important topics in those academic communities concerned with data handling and computer science. As the information itself becomes more complex, so do the levels of abstraction and the databases themselves. This book is part of the series Information Modelling and Knowledge Bases, which concentrates on a variety of themes in the important domains of conceptual modeling, design and specification of information systems, multimedia information modelin

  13. Innocent Until Proven Guilty

    Science.gov (United States)

    Case, Catherine; Whitaker, Douglas

    2016-01-01

    In the criminal justice system, defendants accused of a crime are presumed innocent until proven guilty. Statistical inference in any context is built on an analogous principle: The null hypothesis--often a hypothesis of "no difference" or "no effect"--is presumed true unless there is sufficient evidence against it. In this…

  14. The analysis phase in development of knowledge-based systems

    International Nuclear Information System (INIS)

    Brooking, A.G.

    1986-01-01

    Over the past twenty years computer scientists have realized that, in order to produce reliable software that is easily modifiable, a proven methodology is required. Unlike conventional systems, there is little knowledge of the life cycle of these knowledge-based systems. However, if their life cycle resembles that of conventional systems, it is not unreasonable to assume that analysis will come first. With respect to the analysis task there is an enormous difference in the types of analysis. Conventional systems analysis is predominantly concerned with what happens within the system. Typically, procedures will be noted in the way they relate to each other, and the way data moves and changes within the system. There is often an example, on paper or machine, that can be observed

  15. Process fault diagnosis using knowledge-based systems

    International Nuclear Information System (INIS)

    Sudduth, A.L.

    1991-01-01

    Advancing technology in process plants has led to an increased need for computer-based process diagnostic systems to assist the operator. One approach to this problem is to use an embedded knowledge-based system to interpret measurement signals. Knowledge-based systems using only symptom-based rules are inadequate for real-time diagnosis of dynamic systems; therefore a model-based approach is necessary. Though several forms of model-based reasoning have been proposed, the use of qualitative causal models incorporating first-principles knowledge of process behavior, structure, and function appears to have the most promise as a robust modeling methodology. In this paper the structure of a diagnostic system is described which uses model-based reasoning and conventional numerical methods to perform process diagnosis. This system is being applied to the emergency diesel generator system in nuclear stations

  16. VICKEY: Mining Conditional Keys on Knowledge Bases

    DEFF Research Database (Denmark)

    Symeonidou, Danai; Prado, Luis Antonio Galarraga Del; Pernelle, Nathalie

    2017-01-01

    A conditional key is a key constraint that is valid in only a part of the data. In this paper, we show how such keys can be mined automatically on large knowledge bases (KBs). For this, we combine techniques from key mining with techniques from rule mining. We show that our method can scale to KBs...
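
    What a conditional key asserts can be made concrete with a tiny check: a set of properties uniquely identifies entities, but only within the portion of the data selected by a condition. The toy facts, candidate key and condition below are invented examples, not VICKEY's mining algorithm itself.

```python
# Sketch of checking whether a candidate key holds conditionally on toy data.
def holds_as_conditional_key(rows, key_attrs, condition):
    """True if no two rows satisfying `condition` agree on all `key_attrs`."""
    seen = {}
    for row in rows:
        if not condition(row):
            continue
        key = tuple(row[a] for a in key_attrs)
        if key in seen and seen[key] != row["id"]:
            return False
        seen[key] = row["id"]
    return True

people = [
    {"id": 1, "lastName": "Curie", "nationality": "French"},
    {"id": 2, "lastName": "Bohr",  "nationality": "Danish"},
    {"id": 3, "lastName": "Curie", "nationality": "Polish"},
]
# {lastName} is not a key over all people, but is a key where nationality=French.
print(holds_as_conditional_key(people, ["lastName"], lambda r: True))
print(holds_as_conditional_key(people, ["lastName"], lambda r: r["nationality"] == "French"))
```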

  17. Improving the Knowledge Base in Teacher Education.

    Science.gov (United States)

    Rockler, Michael J.

    Education in the United States for most of the last 50 years has built its knowledge base on a single dominating foundation--behavioral psychology. This paper analyzes the history of behaviorism. Syntheses are presented of the theories of Ivan P. Pavlov, J. B. Watson, and B. F. Skinner, all of whom contributed to the body of works on behaviorism.…

  18. Tiger: knowledge based gas turbine condition monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Trave-Massuyes, L. [Centre National de la Recherche Scientifique (CNRS), 31 - Toulouse (France); Quevedo, J. [University of Catalonia, (Spain); Milne, R.; Nicol, Ch.

    1995-12-31

    The Exxon petrochemical plant in Scotland requires a continuous ethylene supply from an offshore site in the North Sea. The supply is achieved by compressors driven by a 28 MW gas turbine, whose monitoring is of major importance. The TIGER fault diagnostic system is a knowledge-based system containing a prediction model. (D.L.) 11 refs.

  19. Tiger: knowledge based gas turbine condition monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Trave-Massuyes, L [Centre National de la Recherche Scientifique (CNRS), 31 - Toulouse (France); Quevedo, J [University of Catalonia, (Spain); Milne, R; Nicol, Ch

    1996-12-31

    The Exxon petrochemical plant in Scotland requires a continuous ethylene supply from an offshore site in the North Sea. The supply is achieved by compressors driven by a 28 MW gas turbine, whose monitoring is of major importance. The TIGER fault diagnostic system is a knowledge-based system containing a prediction model. (D.L.) 11 refs.

  20. Explicit Knowledge-based Reasoning for Visual Question Answering

    OpenAIRE

    Wang, Peng; Wu, Qi; Shen, Chunhua; Hengel, Anton van den; Dick, Anthony

    2015-01-01

    We describe a method for visual question answering which is capable of reasoning about contents of an image on the basis of information extracted from a large-scale knowledge base. The method not only answers natural language questions using concepts not contained in the image, but can provide an explanation of the reasoning by which it developed its answer. The method is capable of answering far more complex questions than the predominant long short-term memory-based approach, and outperform...

  1. Data Provenance and Trust

    Directory of Open Access Journals (Sweden)

    Stratis D Viglas

    2013-07-01

    Full Text Available The Oxford Dictionary defines provenance as “the place of origin, or earliest known history of something.” The term, when transferred to its digital counterpart, has morphed into a more general meaning. It is not only used to refer to the origin of a digital artefact but also to its changes over time. By changes in this context we may not only refer to its digital snapshots but also to the processes that caused and materialised the change. As an example, consider a database record r created at point in time t0; an update u to that record at time t1 causes it to have a value r’. In terms of provenance, we do not only want to record the snapshots (t0, r) and (t1, r’) but also the transformation u that, when applied to (t0, r), results in (t1, r’), that is u(t0, r) = (t1, r’).
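
    The bookkeeping described in this example can be sketched directly: alongside each snapshot (t, r) the transformation u that produced it is logged, so that applying u to (t0, r) yields (t1, r'). The record shape, update function and log format below are illustrative placeholders, not a proposal from the article.

```python
# Minimal sketch of recording snapshots plus the transformation linking them.
provenance_log = []

def apply_with_provenance(update, timestamp, record):
    """Apply `update` to a record and log (input snapshot, transform, output)."""
    new_record = update(record)
    provenance_log.append({
        "input": (timestamp, record),
        "transform": update.__name__,
        "output": (timestamp + 1, new_record),
    })
    return timestamp + 1, new_record

def raise_salary(record):
    return {**record, "salary": record["salary"] + 100}

t0, r = 0, {"name": "alice", "salary": 1000}
t1, r_prime = apply_with_provenance(raise_salary, t0, r)
print(provenance_log)
```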

  2. Provenance Store Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Paulson, Patrick R.; Gibson, Tara D.; Schuchardt, Karen L.; Stephan, Eric G.

    2008-03-01

    Requirements for the provenance store and access API are developed. Existing RDF stores and APIs are evaluated against the requirements and performance benchmarks. The team’s conclusion is to use MySQL as a database backend, with a possible move to Oracle in the near-term future. Both Jena and Sesame’s APIs will be supported, but new code will use the Jena API

  3. Knowledge-based machine indexing from natural language text: Knowledge base design, development, and maintenance

    Science.gov (United States)

    Genuardi, Michael T.

    1993-01-01

    One strategy for machine-aided indexing (MAI) is to provide a concept-level analysis of the textual elements of documents or document abstracts. In such systems, natural-language phrases are analyzed in order to identify and classify concepts related to a particular subject domain. The overall performance of these MAI systems is largely dependent on the quality and comprehensiveness of their knowledge bases. These knowledge bases function to (1) define the relations between a controlled indexing vocabulary and natural language expressions; (2) provide a simple mechanism for disambiguation and the determination of relevancy; and (3) allow the extension of concept-hierarchical structure to all elements of the knowledge file. After a brief description of the NASA Machine-Aided Indexing system, concerns related to the development and maintenance of MAI knowledge bases are discussed. Particular emphasis is given to statistically-based text analysis tools designed to aid the knowledge base developer. One such tool, the Knowledge Base Building (KBB) program, presents the domain expert with a well-filtered list of synonyms and conceptually-related phrases for each thesaurus concept. Another tool, the Knowledge Base Maintenance (KBM) program, functions to identify areas of the knowledge base affected by changes in the conceptual domain (for example, the addition of a new thesaurus term). An alternate use of the KBM as an aid in thesaurus construction is also discussed.

  4. A knowledge based method for nuclear plant loading pattern determination

    International Nuclear Information System (INIS)

    Dauboin, P.

    1990-01-01

    This paper deals with the design of a knowledge-based system for solving an industrial problem which occurs in nuclear fuel management. The problem lies in determining satisfactory loading patterns for nuclear plants. Its primary difficulty lies in the huge search space involved. Conventional resolution processes are formally defined and analyzed: there is no general algorithm which guarantees to always provide a reasonable solution in each situation. We propose a new approach to solve this constrained search problem using domain-specific knowledge and general constraint-based heuristics. During a preprocessing step, a problem-dependent search algorithm is designed. This procedure is then automatically implemented in FORTRAN. The generated routines have proved to be very efficient, finding solutions which could not have been provided using logic programming. A prototype expert system has already been applied to actual reload pattern searches. While combining efficiency and flexibility, this knowledge-based system enables human experts to rapidly meet new constraints and requirements

  5. A knowledge-based system for prototypical reasoning

    Science.gov (United States)

    Lieto, Antonio; Minieri, Andrea; Piana, Alberto; Radicioni, Daniele P.

    2015-04-01

    In this work we present a knowledge-based system equipped with a hybrid, cognitively inspired architecture for the representation of conceptual information. The proposed system aims at extending the classical representational and reasoning capabilities of the ontology-based frameworks towards the realm of the prototype theory. It is based on a hybrid knowledge base, composed of a classical symbolic component (grounded on a formal ontology) with a typicality based one (grounded on the conceptual spaces framework). The resulting system attempts to reconcile the heterogeneous approach to the concepts in Cognitive Science with the dual process theories of reasoning and rationality. The system has been experimentally assessed in a conceptual categorisation task where common sense linguistic descriptions were given in input, and the corresponding target concepts had to be identified. The results show that the proposed solution substantially extends the representational and reasoning 'conceptual' capabilities of standard ontology-based systems.

  6. Knowledge based systems for intelligent robotics

    Science.gov (United States)

    Rajaram, N. S.

    1982-01-01

    It is pointed out that the construction of large space platforms, such as space stations, has to be carried out in the outer space environment. As it is extremely expensive to support human workers in space for long periods, the only feasible solution appears to be the development and deployment of highly capable robots for most of the tasks. Robots for space applications will have to possess characteristics which are very different from those needed by robots in industry. The present investigation is concerned with the needs of space robotics and the technologies which can help to meet these needs, giving particular attention to knowledge bases. 'Intelligent' robots are required to solve the problems that arise. The collection of facts and rules needed to accomplish such solutions forms the 'knowledge base' of the system.

  7. Semantic computing and language knowledge bases

    Science.gov (United States)

    Wang, Lei; Wang, Houfeng; Yu, Shiwen

    2017-09-01

    With the proposal of the next-generation Web - the Semantic Web - semantic computing has been drawing more and more attention in both academia and industry. A lot of research has been conducted on the theory and methodology of the subject, and potential applications have also been investigated and proposed in many fields. The progress made so far in semantic computing cannot be detached from its supporting pivot - language resources, for instance, language knowledge bases. This paper proposes three perspectives on semantic computing from a macro view and describes the current state of affairs in the construction of language knowledge bases and the related research and applications that have been carried out on the basis of these resources, via a case study in the Institute of Computational Linguistics at Peking University.

  8. Bridging the gap: simulations meet knowledge bases

    Science.gov (United States)

    King, Gary W.; Morrison, Clayton T.; Westbrook, David L.; Cohen, Paul R.

    2003-09-01

    Tapir and Krill are declarative languages for specifying actions and agents, respectively, that can be executed in simulation. As such, they bridge the gap between strictly declarative knowledge bases and strictly executable code. Tapir and Krill components can be combined to produce models of activity which can answer questions about mechanisms and processes using conventional inference methods and simulation. Tapir was used in DARPA's Rapid Knowledge Formation (RKF) project to construct models of military tactics from the Army Field Manual FM3-90. These were then used to build Courses of Action (COAs) which could be critiqued by declarative reasoning or via Monte Carlo simulation. Tapir and Krill can be read and written by non-knowledge-engineers, making them an excellent vehicle for Subject Matter Experts to build and critique knowledge bases.

  9. One knowledge base or many knowledge pools?

    DEFF Research Database (Denmark)

    Lundvall, Bengt-Åke

    It is increasingly realized that knowledge is the most important resource and that learning is the most important process in the economy. Sometimes this is expressed by coining the current era as characterised by a ‘knowledge based economy'. But this concept might be misleading by indicating that there is one common knowledge base on which economic activities can be built. In this paper we argue that it is more appropriate to see the economy as connecting to different ‘pools of knowledge'. The argument is built upon a conceptual framework where we make distinctions between private/public, local/global, individual/collective and tacit/codified knowledge. The purpose is both ‘academic' and practical. Our analysis demonstrates the limits of a narrowly economic perspective on knowledge and we show that these distinctions have important implications both for innovation policy and for management of innovation.

  10. The Ontology of Knowledge Based Optimization

    OpenAIRE

    Nasution, Mahyuddin K. M.

    2012-01-01

    Optimization has become a central subject of study in mathematics and has many application areas. However, many optimization themes that come from different areas have no close ties to the original concepts. This paper addresses some variants of optimization problems using an ontology in order to build a basic body of knowledge about optimization, and then uses it to enhance strategies for achieving knowledge-based optimization.

  11. Knowledge based diagnostics in nuclear power plants

    International Nuclear Information System (INIS)

    Baldeweg, F.; Fiedler, U.; Weiss, F.P.; Werner, M.

    1987-01-01

    In this paper a special process diagnostic system (PDS) is presented. It must be seen as the result of long-term work on computerized process surveillance and control; it includes a model-based system for noise analysis of mechanical vibrations, which has recently been enhanced by the use of knowledge-based techniques (expert systems). The paper discusses the process diagnostic frame concept and emphasizes the vibration analysis expert system.

  12. A STEPPING STONE TOWARDS KNOWLEDGE BASED MAINTENANCE

    OpenAIRE

    G. Waeyenbergh; L. Pintelon; L. Gelders

    2012-01-01

    Maintenance decision making becomes more and more a management concern. Some decades ago, maintenance was still often considered as an unavoidable side effect of production. The perception of maintenance has evolved considerably. One of the current issues is the maintenance concept, being the mix of maintenance interventions and the general framework for determining this mix. In this paper we describe a modular framework, called Knowledge Based Maintenance, for developing a customised mainten...

  13. Knowledge Based Understanding of Radiology Text

    OpenAIRE

    Ranum, David L.

    1988-01-01

    A data acquisition tool which will extract pertinent diagnostic information from radiology reports has been designed and implemented. Pertinent diagnostic information is defined as that clinical data which is used by the HELP medical expert system. The program uses a memory based semantic parsing technique to “understand” the text. Moreover, the memory structures and lexicon necessary to perform this action are automatically generated from the diagnostic knowledge base by using a special purp...

  14. Satellite Contamination and Materials Outgassing Knowledge base

    Science.gov (United States)

    Minor, Jody L.; Kauffman, William J. (Technical Monitor)

    2001-01-01

    Satellite contamination continues to be a design problem that engineers must take into account when developing new satellites. To help with this issue, NASA's Space Environments and Effects (SEE) Program funded the development of the Satellite Contamination and Materials Outgassing Knowledge base. This engineering tool brings together in one location information about the outgassing properties of aerospace materials based upon ground-testing data, the effects of outgassing that have been observed during flight, and measurements of the contamination environment by on-orbit instruments. The knowledge base contains information using the ASTM Standard E-1559 and also consolidates data from missions using quartz-crystal microbalances (QCMs). The data contained in the knowledge base were shared with NASA by US government agencies and industry as well as by international space agencies. The term 'knowledgebase' was used because so much information and capability was brought together in one comprehensive engineering design tool. It is the SEE Program's intent to continually add material contamination data as they become available - creating a dynamic tool whose value to the user is ever increasing. The SEE Program firmly believes that NASA, and ultimately the entire contamination user community, will greatly benefit from this new engineering tool and highly encourages the community not only to use the tool but to add data to it as well.

  15. Presentation planning using an integrated knowledge base

    Science.gov (United States)

    Arens, Yigal; Miller, Lawrence; Sondheimer, Norman

    1988-01-01

    A description is given of user interface research aimed at bringing together multiple input and output modes in a way that handles mixed mode input (commands, menus, forms, natural language), interacts with a diverse collection of underlying software utilities in a uniform way, and presents the results through a combination of output modes including natural language text, maps, charts and graphs. The system, Integrated Interfaces, derives much of its ability to interact uniformly with the user and the underlying services and to build its presentations, from the information present in a central knowledge base. This knowledge base integrates models of the application domain (Navy ships in the Pacific region, in the current demonstration version); the structure of visual displays and their graphical features; the underlying services (data bases and expert systems); and interface functions. The emphasis is on a presentation planner that uses the knowledge base to produce multi-modal output. There has been a flurry of recent work in user interface management systems. (Several recent examples are listed in the references). Existing work is characterized by an attempt to relieve the software designer of the burden of handcrafting an interface for each application. The work has generally focused on intelligently handling input. This paper deals with the other end of the pipeline - presentations.

  16. From Provenance Standards and Tools to Queries and Actionable Provenance

    Science.gov (United States)

    Ludaescher, B.

    2017-12-01

    The W3C PROV standard provides a minimal core for sharing retrospective provenance information for scientific workflows and scripts. PROV extensions such as DataONE's ProvONE model are necessary for linking runtime observables in retrospective provenance records with conceptual-level prospective provenance information, i.e., workflow (or dataflow) graphs. Runtime provenance recorders, such as DataONE's RunManager for R, or noWorkflow for Python capture retrospective provenance automatically. YesWorkflow (YW) is a toolkit that allows researchers to declare high-level prospective provenance models of scripts via simple inline comments (YW-annotations), revealing the computational modules and dataflow dependencies in the script. By combining and linking both forms of provenance, important queries and use cases can be supported that neither provenance model can afford on its own. We present existing and emerging provenance tools developed for the DataONE and SKOPE (Synthesizing Knowledge of Past Environments) projects. We show how the different tools can be used individually and in combination to model, capture, share, query, and visualize provenance information. We also present challenges and opportunities for making provenance information more immediately actionable for the researchers who create it in the first place. We argue that such a shift towards "provenance-for-self" is necessary to accelerate the creation, sharing, and use of provenance in support of transparent, reproducible computational and data science.
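
    As an illustration of the prospective-provenance annotations mentioned above, the fragment below shows the style of YesWorkflow inline comments (@begin/@in/@out/@end); the script itself, its block names and file paths are made up for the example and are not from the DataONE or SKOPE projects.

```python
# @begin clean_and_plot  @desc Clean temperature records and plot the series
# @in raw_csv  @uri file:data/raw_temperatures.csv
# @out figure  @uri file:results/temperature_trend.png

import pandas as pd
import matplotlib.pyplot as plt

# @begin load_and_clean
# @in raw_csv
# @out clean_frame
clean_frame = (pd.read_csv("data/raw_temperatures.csv")
                 .dropna(subset=["temperature"]))
# @end load_and_clean

# @begin plot_trend
# @in clean_frame
# @out figure
clean_frame.plot(x="date", y="temperature")
plt.savefig("results/temperature_trend.png")
# @end plot_trend

# @end clean_and_plot
```

    YesWorkflow reads only the comments, so the declared dataflow graph (the prospective provenance) can later be linked to the runtime records captured by a recorder such as the R RunManager or noWorkflow.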

  17. Active Provenance in Data-intensive Research

    Science.gov (United States)

    Spinuso, Alessandro; Mihajlovski, Andrej; Filgueira, Rosa; Atkinson, Malcolm

    2017-04-01

    Scientific communities are building platforms where the usage of data-intensive workflows is crucial to conduct their research campaigns. However, managing and effectively supporting the understanding of the 'live' processes, fostering computational steering, and sharing and re-using data and methods present several bottlenecks. These are often caused by the poor level of documentation of the methods and the data and of how users interact with them. This work explores how, in such systems, flexibility in the management of provenance and its adaptation to different users and application contexts can lead to new opportunities for its exploitation, improving productivity. In particular, this work illustrates a conceptual and technical framework enabling tunable and actionable provenance in data-intensive workflow systems in support of reproducible science. It introduces the concept of Agile data-intensive systems to define the characteristics of our target platform. It shows a novel approach to the integration of provenance mechanisms, offering flexibility in the scale and in the precision of the provenance data collected, ensuring its relevance to the domain of the data-intensive task and fostering its rapid exploitation. The contributions address aspects of the scale of the provenance records, their usability and active role in the research life-cycle. We will discuss the use of dynamically generated provenance types as the approach for the integration of provenance mechanisms into a data-intensive workflow system. Enabling provenance can be transparent to the workflow user and developer, as well as fully controllable and customisable, depending on their expertise and the application's reproducibility, monitoring and validation requirements. The API that allows the realisation and adoption of a provenance type is presented, especially for what concerns the support of provenance profiling, contextualisation and precision. An actionable approach to provenance

  18. VICKEY: Mining Conditional Keys on Knowledge Bases

    OpenAIRE

    Symeonidou , Danai; Galárraga , Luis; Pernelle , Nathalie; Saïs , Fatiha; Suchanek , Fabian

    2017-01-01

    A conditional key is a key constraint that is valid in only a part of the data. In this paper, we show how such keys can be mined automatically on large knowledge bases (KBs). For this, we combine techniques from key mining with techniques from rule mining. We show that our method can scale to KBs of millions of facts. We also show that the conditional keys we mine can improve the quality of entity linking by up to 47 percentage points.
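
    VICKEY's mining algorithm is not reproduced here; the sketch below only illustrates what a conditional key asserts, by checking a candidate key within the scope of a condition over a toy set of facts. The data and function names are made up for the example.

```python
from collections import defaultdict

def holds_as_conditional_key(entities, condition, key_props):
    """Check whether `key_props` is a key for all entities satisfying `condition`.

    entities  -- dict: entity id -> dict of property values
    condition -- (property, value) pair restricting the scope of the key
    key_props -- tuple of properties that should jointly identify an entity
    """
    cond_prop, cond_val = condition
    seen = defaultdict(list)
    for eid, props in entities.items():
        if props.get(cond_prop) != cond_val:
            continue  # entity outside the condition's scope
        key_value = tuple(props.get(p) for p in key_props)
        seen[key_value].append(eid)
    # The key holds if no two in-scope entities share the same key value.
    return all(len(ids) == 1 for ids in seen.values())

# Example: lastName is a key, but only among the French entities.
people = {
    "e1": {"nationality": "French", "lastName": "Curie"},
    "e2": {"nationality": "French", "lastName": "Pasteur"},
    "e3": {"nationality": "British", "lastName": "Darwin"},
    "e4": {"nationality": "British", "lastName": "Darwin"},  # duplicate outside scope
}
assert holds_as_conditional_key(people, ("nationality", "French"), ("lastName",))
```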

  19. A STEPPING STONE TOWARDS KNOWLEDGE BASED MAINTENANCE

    Directory of Open Access Journals (Sweden)

    G. Waeyenbergh

    2012-01-01

    Full Text Available Maintenance decision making becomes more and more a management concern. Some decades ago, maintenance was still often considered as an unavoidable side effect of production. The perception of maintenance has evolved considerably. One of the current issues is the maintenance concept, being the mix of maintenance interventions and the general framework for determining this mix. In this paper we describe a modular framework, called Knowledge Based Maintenance, for developing a customised maintenance concept. After describing the general framework and its decision support use, some case experiences are given. This experience covers some elements of the proposed framework.

  20. Risk Management of New Microelectronics for NASA: Radiation Knowledge-base

    Science.gov (United States)

    LaBel, Kenneth A.

    2004-01-01

    Contents include the following: NASA Missions - implications to reliability and radiation constraints. Approach to Insertion of New Technologies. Technology Knowledge-base development. Technology model/tool development and validation. Summary comments.

  1. Validation of a Crowdsourcing Methodology for Developing a Knowledge Base of Related Problem-Medication Pairs.

    Science.gov (United States)

    McCoy, A B; Wright, A; Krousel-Wood, M; Thomas, E J; McCoy, J A; Sittig, D F

    2015-01-01

    Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging. We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record. We first retrieved medications and problems entered in the electronic health record by clinicians during routine care during a six month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and determine its recall and precision. The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%. We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge, to compare crowdsourcing with other approaches, and to evaluate if incorporating the knowledge into electronic health records improves patient outcomes.
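
    The abstract does not spell out the formulas, so the sketch below assumes plausible definitions - link frequency as the number of encounters in which a problem and a medication co-occur, and link ratio as that count divided by the medication's total occurrences - purely for illustration; the thresholds would be set by clinician review as in the study.

```python
from collections import Counter

def problem_medication_pairs(encounters, min_link_freq=10, min_link_ratio=0.05):
    """Derive candidate problem-medication pairs from co-occurrence counts.

    encounters -- iterable of (problems, medications) sets for one patient encounter.
    The definitions of link frequency and link ratio used here are assumptions
    for illustration, not the paper's exact formulas.
    """
    link_freq = Counter()
    med_freq = Counter()
    for problems, medications in encounters:
        for med in set(medications):
            med_freq[med] += 1
            for prob in set(problems):
                link_freq[(prob, med)] += 1

    knowledge_base = set()
    for (prob, med), freq in link_freq.items():
        ratio = freq / med_freq[med]
        if freq >= min_link_freq and ratio >= min_link_ratio:
            knowledge_base.add((prob, med))   # pair passes the review threshold
    return knowledge_base
```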

  2. Validation of a Crowdsourcing Methodology for Developing a Knowledge Base of Related Problem-Medication Pairs

    Science.gov (United States)

    Wright, A.; Krousel-Wood, M.; Thomas, E. J.; McCoy, J. A.; Sittig, D. F.

    2015-01-01

    Background: Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging. Objective: We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record. Methods: We first retrieved medications and problems entered in the electronic health record by clinicians during routine care during a six month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and determine its recall and precision. Results: The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%. Conclusions: We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge, to compare crowdsourcing with other approaches, and to evaluate if incorporating the knowledge into electronic health records improves patient outcomes. PMID:26171079

  3. Autonomous Cryogenic Load Operations: Knowledge-Based Autonomous Test Engineer

    Science.gov (United States)

    Schrading, J. Nicolas

    2013-01-01

    The Knowledge-Based Autonomous Test Engineer (KATE) program has a long history at KSC. Now a part of the Autonomous Cryogenic Load Operations (ACLO) mission, this software system has been sporadically developed over the past 20 years. Originally designed to provide health and status monitoring for a simple water-based fluid system, it was proven to be a capable autonomous test engineer for determining sources of failure in the system. As part of a new goal to provide this same anomaly-detection capability for a complicated cryogenic fluid system, software engineers, physicists, interns and KATE experts are working to upgrade the software capabilities and graphical user interface. Much progress was made during this effort to improve KATE. A display of the entire cryogenic system's graph, with nodes for components and edges for their connections, was added to the KATE software. A searching functionality was added to the new graph display, so that users could easily center their screen on specific components. The GUI was also modified so that it displayed information relevant to the new project goals. In addition, work began on adding new pneumatic and electronic subsystems into the KATE knowledge base, so that it could provide health and status monitoring for those systems. Finally, many fixes for bugs, memory leaks, and memory errors were implemented and the system was moved into a state in which it could be presented to stakeholders. Overall, the KATE system was improved and necessary additional features were added so that a presentation of the program and its functionality in the next few months would be a success.

  4. The HEP Software and Computing Knowledge Base

    Science.gov (United States)

    Wenaus, T.

    2017-10-01

    HEP software today is a rich and diverse domain in itself and exists within the mushrooming world of open source software. As HEP software developers and users we can be more productive and effective if our work and our choices are informed by a good knowledge of what others in our community have created or found useful. The HEP Software and Computing Knowledge Base, hepsoftware.org, was created to facilitate this by serving as a collection point and information exchange on software projects and products, services, training, computing facilities, and relating them to the projects, experiments, organizations and science domains that offer them or use them. It was created as a contribution to the HEP Software Foundation, for which a HEP S&C knowledge base was a much requested early deliverable. This contribution will motivate and describe the system, what it offers, its content and contributions both existing and needed, and its implementation (node.js based web service and javascript client app) which has emphasized ease of use for both users and contributors.

  5. Development of a tool for knowledge base verification of expert system based on Design/CPN

    International Nuclear Information System (INIS)

    Kim, Jong Hyun

    1998-02-01

    Verification is a necessary part of developing a reliable expert system. Verification is a process aimed at demonstrating whether a system meets its specified requirements. As expert systems are used in various applications, the verification of their knowledge bases takes on an important role. The conventional Petri net approach, which has recently been studied for knowledge base verification, is found to be inadequate for the knowledge bases of large and complex systems, such as the alarm processing system of a nuclear power plant. Thus, we propose an improved method that models the knowledge base as an enhanced colored Petri net. In this study, we analyze the reachability and the error characteristics of the knowledge base. Generally, the verification process requires computational support by automated tools. For this reason, this study developed a tool for knowledge base verification based on Design/CPN, which is a tool for editing, modeling, and simulating colored Petri nets. This tool uses the enhanced colored Petri net as its modeling method. By applying this tool to the knowledge base of a nuclear power plant, it is shown that the tool can successfully check most of the anomalies that can occur in a knowledge base.
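
    The Design/CPN-based colored Petri net modeling itself is not shown here; the sketch below merely illustrates, on a plain rule-dependency representation, two of the anomaly classes such verification tools look for (rules that can never fire and circular rule chains). The rule format and function are hypothetical.

```python
def find_rule_anomalies(rules, known_facts):
    """Detect two simple knowledge base anomalies in a set of production rules.

    rules       -- dict: rule name -> (set of premise symbols, conclusion symbol)
    known_facts -- symbols that can be supplied as plant inputs

    Real verification tools check a richer set of anomalies (redundancy,
    conflict, dead ends) on a Petri net model of the knowledge base.
    """
    # Unreachable rules: premises can never all be derived, even in the fixpoint.
    derivable = set(known_facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules.values():
            if premises <= derivable and conclusion not in derivable:
                derivable.add(conclusion)
                changed = True
    unreachable = {name for name, (premises, _) in rules.items()
                   if not premises <= derivable}

    # Circular rules: a conclusion feeds back (directly or indirectly) into a premise.
    def reaches(start, target, seen):
        if start == target:
            return True
        seen.add(start)
        for premises, conclusion in rules.values():
            if start in premises and conclusion not in seen:
                if reaches(conclusion, target, seen):
                    return True
        return False

    circular = {name for name, (premises, conclusion) in rules.items()
                if any(reaches(conclusion, p, set()) for p in premises)}
    return unreachable, circular
```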

  6. Aggregation by Provenance Types: A Technique for Summarising Provenance Graphs

    Directory of Open Access Journals (Sweden)

    Luc Moreau

    2015-04-01

    Full Text Available As users become confronted with a deluge of provenance data, dedicated techniques are required to make sense of this kind of information. We present Aggregation by Provenance Types, a provenance graph analysis that is capable of generating provenance graph summaries. It proceeds by converting provenance paths up to some length k to attributes, referred to as provenance types, and by grouping nodes that have the same provenance types. The summary also includes numeric values representing the frequency of nodes and edges in the original graph. A quantitative evaluation and a complexity analysis show that this technique is tractable; with small values of k, it can produce useful summaries and can help detect outliers. We illustrate how the generated summaries can further be used for conformance checking and visualization.
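
    A minimal sketch of the grouping idea follows, assuming a provenance graph given as labelled edges; it ignores PROV-specific node kinds and the edge-frequency annotations of the full technique, and the function names are hypothetical.

```python
from collections import defaultdict

def provenance_types(edges, k):
    """Assign each node a 'provenance type': the set of labelled paths of
    length <= k leading out of it. `edges` is a list of
    (source, label, target) triples of a provenance graph.
    """
    out_edges = defaultdict(list)
    for src, label, dst in edges:
        out_edges[src].append((label, dst))

    def paths_from(node, depth):
        if depth == 0:
            return {()}
        result = {()}
        for label, nxt in out_edges[node]:
            result |= {(label,) + p for p in paths_from(nxt, depth - 1)}
        return result

    nodes = {n for s, _, t in edges for n in (s, t)}
    return {n: frozenset(paths_from(n, k)) for n in nodes}

def summarise(edges, k):
    """Group nodes that share the same provenance type; the group sizes give
    the node frequencies reported in the summary graph."""
    groups = defaultdict(list)
    for node, ptype in provenance_types(edges, k).items():
        groups[ptype].append(node)
    return groups
```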

  7. Building a knowledge based economy in Russia using guided entrepreneurship

    Science.gov (United States)

    Reznik, Boris N.; Daniels, Marc; Ichim, Thomas E.; Reznik, David L.

    2005-06-01

    Despite advanced scientific and technological (S&T) expertise, the Russian economy is presently based upon manufacturing and raw material exports. Currently, governmental incentives are attempting to leverage the existing scientific infrastructure through the concept of building a Knowledge Based Economy. However, socio-economic changes do not occur solely by decree, but by alteration of approach to the market. Here we describe the "Guided Entrepreneurship" plan, a series of steps needed for generation of an army of entrepreneurs, which initiate a chain reaction of S&T-driven growth. The situation in Russia is placed in the framework of other areas where Guided Entrepreneurship has been successful.

  8. Provenance tracking in the ViroLab Virtual Laboratory

    NARCIS (Netherlands)

    Baliś, B.; Bubak, M.; Wach, J.

    2008-01-01

    Provenance describes the process which led to the creation of a piece of data. Tracking provenance of experiment results is essential in modern environments which support conducting of in silico experiments. We present a provenance tracking approach developed as part of the virtual laboratory of the

  9. Modeling Guru: Knowledge Base for NASA Modelers

    Science.gov (United States)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by the NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  10. Tracing where and who provenance in Linked Data: A calculus

    OpenAIRE

    Dezani-Ciancaglini, Mariangiola; Horne, Ross; Sassone, Vladimiro

    2012-01-01

    Linked Data provides some sensible guidelines for publishing and consuming data on the Web. Data published on the Web has no inherent truth, yet its quality can often be assessed based on its provenance. This work introduces a new approach to provenance for Linked Data. The simplest notion of provenance-viz., a named graph indicating where the data is now-is extended with a richer provenance format. The format reflects the behaviour of processes interacting with Linked Data, tracing where the...

  11. DeepDive: Declarative Knowledge Base Construction.

    Science.gov (United States)

    De Sa, Christopher; Ratner, Alex; Ré, Christopher; Shin, Jaeho; Wang, Feiran; Wu, Sen; Zhang, Ce

    2016-03-01

    The dark data extraction or knowledge base construction (KBC) problem is to populate a SQL database with information from unstructured data sources including emails, webpages, and pdf reports. KBC is a long-standing problem in industry and research that encompasses problems of data extraction, cleaning, and integration. We describe DeepDive, a system that combines database and machine learning ideas to help develop KBC systems. The key idea in DeepDive is that statistical inference and machine learning are key tools to attack classical data problems in extraction, cleaning, and integration in a unified and more effective manner. DeepDive programs are declarative in that one cannot write probabilistic inference algorithms; instead, one interacts by defining features or rules about the domain. A key reason for this design choice is to enable domain experts to build their own KBC systems. We present the applications, abstractions, and techniques of DeepDive employed to accelerate construction of KBC systems.

  12. A display to support knowledge based behavior

    International Nuclear Information System (INIS)

    Lindsay, R.W.

    1990-01-01

    A computerized display has been created for the Experimental Breeder Reactor II (EBR-II) that incorporates information from plant sensors in a thermodynamic model display. The display is designed to provide an operator with an overall view of the plant process as a heat engine. The thermodynamics of the plant are depicted through the use of iconic figures, animated by plant signals, that are related to the major plant components and systems such as the reactor, intermediate heat exchanger, secondary system, evaporators, superheaters, steam system, steam drum, and turbine-generator. This display supports knowledge based reasoning for the operator as well as providing data for the traditional rule and skill based behavior, and includes side benefits such as inherent signal validation.

  13. NASDA knowledge-based network planning system

    Science.gov (United States)

    Yamaya, K.; Fujiwara, M.; Kosugi, S.; Yambe, M.; Ohmori, M.

    1993-01-01

    One of the SODS (space operation and data system) sub-systems, NP (network planning), was the first expert system used by NASDA (national space development agency of Japan) for tracking and control of satellites. The major responsibilities of the NP system are: first, the allocation of network and satellite control resources and, second, the generation of the network operation plan data (NOP) used in automated control of the stations and control center facilities. Until now, the first task, network resource scheduling, was done by network operators. The NP system automatically generates schedules using its knowledge base, which contains information on satellite orbits, station availability, which computer is dedicated to which satellite, and how many stations must be available for a particular satellite pass or a certain time period. The NP system is introduced.

  14. Knowledge-based information systems in practice

    CERN Document Server

    Jain, Lakhmi; Watada, Junzo; Howlett, Robert

    2015-01-01

    This book contains innovative research from leading researchers who presented their work at the 17th International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, KES 2013, held in Kitakyusha, Japan, in September 2013. The conference provided a competitive field of 236 contributors, from which 38 authors expanded their contributions and only 21 published. A plethora of techniques and innovative applications are represented within this volume. The chapters are organized using four themes. These topics include: data mining, knowledge management, advanced information processes and system modelling applications. Each topic contains multiple contributions and many offer case studies or innovative examples. Anyone that wants to work with information repositories or process knowledge should consider reading one or more chapters focused on their technique of choice. They may also benefit from reading other chapters to assess if an alternative technique represents a more suitable app...

  15. Knowledge based economy in European Union

    Directory of Open Access Journals (Sweden)

    Ecaterina Stănculescu

    2012-04-01

    Full Text Available Nowadays we are witnessing a fundamental change from an economy based mainly on resources to one based mostly on knowledge. The concept was launched in the last decade of the past century. Knowledge has become a production agent and a value-creation instrument for any country and, of course, for an entire community like the European Union, which is constantly concerned with its development and competitiveness. This paper presents the principal characteristics of the EU's present preoccupation with the expansion of a knowledge based economy through the 2020 European Development Strategy for a smart, sustainable and inclusive economy, and especially through the framework programmes (Framework Programme 7 and the Competitiveness and Innovation Framework Programme).

  16. Knowledge-based computer security advisor

    International Nuclear Information System (INIS)

    Hunteman, W.J.; Squire, M.B.

    1991-01-01

    The rapid expansion of computer security information and technology has included little support to help the security officer identify the safeguards needed to comply with a policy and to secure a computing system. This paper reports that Los Alamos is developing a knowledge-based computer security system to provide expert knowledge to the security officer. This system includes a model for expressing the complex requirements in computer security policy statements. The model is part of an expert system that allows a security officer to describe a computer system and then determine compliance with the policy. The model contains a generic representation that contains network relationships among the policy concepts to support inferencing based on information represented in the generic policy description

  17. A display to support knowledge based behavior

    International Nuclear Information System (INIS)

    Lindsay, R.W.

    1990-01-01

    This paper reports on a computerized display that has been created for the Experimental Breeder Reactor II that incorporates information from plant sensors in a thermodynamic model display. The display is designed to provide an operator with an overall view of the plant process as a heat engine. The thermodynamics of the plant are depicted through the use of iconic figures, animated by plant signals, that are related to the major plant components and systems such as the reactor, intermediate heat exchanger, secondary system, evaporators, superheaters, steam system, steam drum, and turbine-generator. This display supports knowledge based reasoning for the operator as well as providing data for the traditional rule and skill based behavior, and includes side benefits such as inherent signal validation

  18. The Development of the IMIA Knowledge Base

    Directory of Open Access Journals (Sweden)

    Graham Wright

    2011-03-01

    Full Text Available Background: The discipline of health or medical informatics is relatively new in that the literature has existed for only 40 years. The British Computer Society (BCS) health group was of the opinion that work should be undertaken to explore the scope of medical or health informatics. Once the mapping work was completed, the International Medical Informatics Association (IMIA) expressed the wish to develop it further to define the knowledge base of the discipline and produce a comprehensive internationally applicable framework. This article will also highlight the move from the expert opinion of a small group to the analysis of publications to generalise and refine the initial findings, and illustrate the importance of triangulation. Objectives: The aim of the project was to explore the theoretical constructs underpinning the discipline of health informatics and produce a cognitive map of the existing understanding of the discipline and develop the knowledge base of health informatics for the IMIA and the BCS. Method: The five-phase project, described in this article, undertaken to define the discipline of health informatics used four forms of triangulation. Results: The output from the project is a framework giving the 14 major headings (Subjects) and 245 elements, which together describe the current perception of the discipline of health informatics. Conclusion: This article describes how each phase of the project was strengthened through using triangulation within and between the different phases. This was done to ensure that the investigators could be confident in the confirmation and completeness of data, and assured of the validity and reliability of the final output of the ‘IMIA Knowledge Base’ that was endorsed by the IMIA Board in November 2009.

  19. Empowering Provenance in Data Integration

    Science.gov (United States)

    Kondylakis, Haridimos; Doerr, Martin; Plexousakis, Dimitris

    The provenance of data has recently been recognized as central to the trust one places in data. This paper presents a novel framework to empower provenance in a mediator-based data integration system. We use a simple mapping language for mapping schema constructs between an ontology and relational sources, capable of carrying provenance information. This language extends the traditional data exchange setting by translating our mapping specifications into source-to-target tuple generating dependencies (s-t tgds). We then formally define the provenance information we want to retrieve, i.e. annotation, source and tuple provenance. We provide three algorithms to retrieve provenance information using information stored in the mappings and the sources. We show the feasibility of our solution and the advantages of our framework.
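
    The paper's s-t tgd machinery is not reproduced here; the following sketch only illustrates the basic idea of carrying source and tuple provenance through a mapping, with hypothetical names and a toy relational source.

```python
def apply_mapping_with_provenance(source_name, source_rows, transform):
    """Apply a simple source-to-target mapping while recording tuple provenance.

    source_rows -- dict: tuple id -> source tuple (a dict of attributes)
    transform   -- maps one source tuple to a target record (or None to skip)

    Each target record is annotated with where it came from, so later queries
    can ask for the source and tuple provenance of any integrated fact.
    """
    target = []
    for tid, row in source_rows.items():
        mapped = transform(row)
        if mapped is not None:
            target.append({"data": mapped,
                           "provenance": {"source": source_name, "tuple_id": tid}})
    return target

# Example: mapping a relational 'persons' table onto an ontology-style record.
persons = {1: {"name": "Ada", "born": 1815}, 2: {"name": "Alan", "born": 1912}}
integrated = apply_mapping_with_provenance(
    "persons", persons,
    lambda r: {"type": "Person", "label": r["name"]})
```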

  20. Trace-element and Nd-isotope systematics in detrital apatite of the Po river catchment: Implications for provenance discrimination and the lag-time approach to detrital thermochronology

    Science.gov (United States)

    Malusà, Marco G.; Wang, Jiangang; Garzanti, Eduardo; Liu, Zhi-Chao; Villa, Igor M.; Wittmann, Hella

    2017-10-01

    Detrital thermochronology is often employed to assess the evolutionary stage of an entire orogenic belt using the lag-time approach, i.e., the difference between the cooling and depositional ages of detrital mineral grains preserved in a stratigraphic succession. The impact of different eroding sources to the final sediment sink is controlled by several factors, including the short-term erosion rate and the mineral fertility of eroded bedrock. Here, we use apatite fertility data and cosmogenic-derived erosion rates in the Po river catchment (Alps-Apennines) to calculate the expected percentage of apatite grains supplied to the modern Po delta from the major Alpine and Apenninic eroding sources. We test these predictions by using a cutting-edge dataset of trace-element and Nd-isotope signatures on 871 apatite grains from 14 modern sand samples, and we use apatite fission-track data to validate our geochemical approach to provenance discrimination. We found that apatite grains shed from different sources are geochemically distinct. Apatites from the Lepontine dome in the Central Alps show relative HREE enrichment, lower concentrations in Ce and U, and higher 147Sm/144Nd ratios compared to apatites derived from the External Massifs. Derived provenance budgets point to a dominant apatite contribution to the Po delta from the high-fertility Lepontine dome, consistent with the range independently predicted from cosmonuclide and mineral-fertility data. Our results demonstrate that the single-mineral record in the final sediment sink can be largely determined by high-fertility source rocks exposed in rapidly eroding areas within the drainage. This implies that the detrital thermochronology record may reflect processes affecting relatively small parts of the orogenic system under consideration. A reliable approach to lag-time analysis would thus benefit from an independent provenance discrimination of dated mineral grains, which may allow to proficiently reconsider many

  1. Knowledge base technology: a developer view

    Directory of Open Access Journals (Sweden)

    G. Ginkul

    1996-09-01

    Full Text Available In the present paper we have endeavoured to describe some of the reasoning, conclusions and practical results we have reached while working on one of the most interesting problems of modern science. This paper is a brief report by a group of scientists from the Laboratory of Artificial Intelligence Systems on their experience of work in the field of knowledge engineering. Research in this area started in our Laboratory more than 10 years ago, i.e. at about the time of another rise in Artificial Intelligence, caused by the mass emergence of expert systems. The tasks of knowledge engineering varied, and the focal point of our research varied too. Certainly, we have not solved all the problems originating in this area. Our knowledge still has an approximate nature, but nevertheless the outcomes we have obtained seem rather important and interesting. So, we want to describe our experience in building knowledge-based systems, and expert systems in particular.

  2. The Development of the IMIA Knowledge Base

    Directory of Open Access Journals (Sweden)

    Graham Wright

    2011-10-01

    Objectives: The aim of the project was to explore the theoretical constructs underpinning the discipline of health informatics and produce a cognitive map of the existing understanding of the discipline and develop the knowledge base of health informatics for the IMIA and the BCS. Method: The five-phase project, described in this article, undertaken to define the discipline of health informatics used four forms of triangulation. Results: The output from the project is a framework giving the 14 major headings (Subjects) and 245 elements, which together describe the current perception of the discipline of health informatics. Conclusion: This article describes how each phase of the project was strengthened, through using triangulation within and between the different phases. This was done to ensure that the investigators could be confident in the confirmation and completeness of data, and assured of the validity and reliability of the final output of the ‘IMIA Knowledge Base’ that was endorsed by the IMIA Board in November 2009.

  3. A knowledge based system for plant diagnosis

    International Nuclear Information System (INIS)

    Motoda, H.; Yamada, N.; Yoshida, K.

    1984-01-01

    A knowledge based system for plant diagnosis is proposed in which both event-oriented and function-oriented knowledge are used. For the proposed system to be of practical use, these two types of knowledge are represented by four mutually nested frames, i.e. the component, causality, criteriality, and simulator frames, and by production rules. The system provides fast inference capability for use as both a production system and a formal reasoning system, with uncertainty of knowledge taken into account in the former. Event-oriented knowledge is used in both diagnosis and guidance, and function-oriented knowledge in diagnosis only. The inference capability required is forward chaining in the former and resolution in the latter. The causality frame guides the use of event-oriented knowledge, whereas the criteriality frame does so for function-oriented knowledge. The feedback nature of the plant requires a best-first search algorithm that uses histories in the resolution process. The inference program is written in Lisp, and the plant simulator and the process I/O control programs in Fortran. Fast data transfer between these two languages is realized by enhancing the memory management capability of Lisp to control the numerical data in global memory. Simulation applications to a BWR plant demonstrated its diagnostic capability.
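
    The frame structures and the resolution-based reasoning are not shown here; the sketch below illustrates only the production-system side - forward chaining over event-oriented rules with a simplified certainty-factor scheme that is an assumption for the example, as are the rule names and symptoms.

```python
def forward_chain(rules, observations, threshold=0.2):
    """Forward chaining over production rules with simple certainty factors.

    rules        -- list of (premises, conclusion, rule_cf) triples
    observations -- dict: symptom -> certainty in [0, 1] from plant sensors
    A conclusion's certainty is the minimum premise certainty times the rule's
    certainty factor (one common, simplified combination scheme).
    """
    facts = dict(observations)
    changed = True
    while changed:
        changed = False
        for premises, conclusion, rule_cf in rules:
            if all(p in facts for p in premises):
                cf = min(facts[p] for p in premises) * rule_cf
                if cf >= threshold and cf > facts.get(conclusion, 0.0):
                    facts[conclusion] = cf   # derive or strengthen the conclusion
                    changed = True
    return facts

# Example: two event-oriented rules for a hypothetical loss-of-feedwater diagnosis.
rules = [
    ({"low_drum_level", "high_steam_flow"}, "feedwater_loss", 0.9),
    ({"feedwater_loss"}, "scram_expected", 0.8),
]
print(forward_chain(rules, {"low_drum_level": 0.95, "high_steam_flow": 0.8}))
```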

  4. Knowledge-based system for automatic MBR control.

    Science.gov (United States)

    Comas, J; Meabe, E; Sancho, L; Ferrero, G; Sipma, J; Monclús, H; Rodriguez-Roda, I

    2010-01-01

    of the knowledge-based DSS and details the knowledge-based control module. Preliminary results of the application of the control module to regulate the air flow rate of an MBR working with variable flux demonstrate the usefulness of this approach.

  5. Health Care Leadership: Managing Knowledge Bases as Stakeholders.

    Science.gov (United States)

    Rotarius, Timothy

    Communities are composed of many organizations. These organizations naturally form clusters based on common patterns of knowledge, skills, and abilities of the individual organizations. Each of these spontaneous clusters represents a distinct knowledge base. The health care knowledge base is shown to be the natural leader of any community. Using the Central Florida region's 5 knowledge bases as an example, each knowledge base is categorized as a distinct type of stakeholder, and then a specific stakeholder management strategy is discussed to facilitate managing both the cooperative potential and the threatening potential of each "knowledge base" stakeholder.

  6. Recording Process Documentation for Provenance

    NARCIS (Netherlands)

    Groth, P.T.; Moreau, L

    2009-01-01

    Scientific and business communities are adopting large-scale distributed systems as a means to solve a wide range of resource-intensive tasks. These communities also have requirements in terms of provenance. We define the provenance of a result produced by a distributed system as the process that

  7. Model-based Abstraction of Data Provenance

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, René Rydhof

    2014-01-01

    Identifying provenance of data provides insights to the origin of data and intermediate results, and has recently gained increased interest due to data-centric applications. In this work we extend a data-centric system view with actors handling the data and policies restricting actions. This extension is based on provenance analysis performed on system models. System models have been introduced to model and analyse spatial and organisational aspects of organisations, to identify, e.g., potential insider threats. Both the models and analyses are naturally modular; models can be combined to bigger models, and the analyses adapt accordingly. Our approach extends provenance both with the origin of data, the actors and processes involved in the handling of data, and policies applied while doing so. The model and corresponding analyses are based on a formal model of spatial and organisational...

  8. Knowledge-Based Environmental Context Modeling

    Science.gov (United States)

    Pukite, P. R.; Challou, D. J.

    2017-12-01

    As we move from the oil-age to an energy infrastructure based on renewables, the need arises for new educational tools to support the analysis of geophysical phenomena and their behavior and properties. Our objective is to present models of these phenomena to make them amenable for incorporation into more comprehensive analysis contexts. Starting at the level of a college-level computer science course, the intent is to keep the models tractable and therefore practical for student use. Based on research performed via an open-source investigation managed by DARPA and funded by the Department of Interior [1], we have adapted a variety of physics-based environmental models for a computer-science curriculum. The original research described a semantic web architecture based on patterns and logical archetypal building-blocks (see figure) well suited for a comprehensive environmental modeling framework. The patterns span a range of features that cover specific land, atmospheric and aquatic domains intended for engineering modeling within a virtual environment. The modeling engine contained within the server relied on knowledge-based inferencing capable of supporting formal terminology (through NASA JPL's Semantic Web for Earth and Environmental Technology (SWEET) ontology and a domain-specific language) and levels of abstraction via integrated reasoning modules. One of the key goals of the research was to simplify models that were ordinarily computationally intensive to keep them lightweight enough for interactive or virtual environment contexts. The breadth of the elements incorporated is well-suited for learning as the trend toward ontologies and applying semantic information is vital for advancing an open knowledge infrastructure. As examples of modeling, we have covered such geophysics topics as fossil-fuel depletion, wind statistics, tidal analysis, and terrain modeling, among others. Techniques from the world of computer science will be necessary to promote efficient

  9. Weather, knowledge base and life-style

    Science.gov (United States)

    Bohle, Martin

    2015-04-01

    Why to main-stream curiosity for earth-science topics, thus to appraise these topics as of public interest? Namely, to influence practices how humankind's activities intersect the geosphere. How to main-stream that curiosity for earth-science topics? Namely, by weaving diverse concerns into common threads drawing on a wide range of perspectives: be it beauty or particularity of ordinary or special phenomena, evaluating hazards for or from mundane environments, or connecting the scholarly investigation with concerns of citizens at large; applying for threading traditional or modern media, arts or story-telling. Three examples: First "weather"; weather is a topic of primordial interest for most people: weather impacts on human lives, be it for settlement, for food, for mobility, for hunting, for fishing, or for battle. It is the single earth-science topic that went "prime-time" since in the early 1950s the broadcasting of weather forecasts started and meteorologists present their work to the public, daily. Second "knowledge base"; earth-sciences are relevant for modern societies' economy and value setting: earth-sciences provide insights into the evolution of live-bearing planets, the functioning of Earth's systems and the impact of humankind's activities on biogeochemical systems on Earth. These insights bear on production of goods, living conditions and individual well-being. Third "life-style"; citizens' urban culture prejudices their experiential connections: earth-sciences related phenomena are witnessed rarely, even most weather phenomena. In the past, traditional rural communities mediated their rich experiences through earth-centric story-telling. In the course of the global urbanisation process this culture has given place to society-centric story-telling. Only recently anthropogenic global change triggered discussions on geoengineering, hazard mitigation, demographics, which interwoven with arts, linguistics and cultural histories offer a rich narrative

  10. Knowledge-based fault diagnosis system for refuse collection vehicle

    International Nuclear Information System (INIS)

    Tan, CheeFai; Juffrizal, K.; Khalil, S. N.; Nidzamuddin, M. Y.

    2015-01-01

    The refuse collection vehicle is manufactured by a local vehicle body manufacturer. Currently, the company supplies six models of the waste compactor truck to the local authority as well as to a waste management company. The company faces difficulty acquiring knowledge from the expert when the expert is absent. To solve this problem, the knowledge from the expert can be stored in an expert system, which is then able to provide the necessary support to the company when the expert is not available. With it, the implementation of the process and tools can be standardized and made more accurate. The knowledge input to the expert system is based on design guidelines and experience from the expert. This project highlights another application of the knowledge-based system (KBS) approach, in troubleshooting the refuse collection vehicle production process. The main aim of the research is to develop a novel expert fault diagnosis system framework for the refuse collection vehicle.

  11. Knowledge-based fault diagnosis system for refuse collection vehicle

    Energy Technology Data Exchange (ETDEWEB)

    Tan, CheeFai; Juffrizal, K.; Khalil, S. N.; Nidzamuddin, M. Y. [Centre of Advanced Research on Energy, Universiti Teknikal Malaysia Melaka, Hang Tuah Jaya, Durian Tunggal, Melaka (Malaysia)

    2015-05-15

    The refuse collection vehicle is manufactured by a local vehicle body manufacturer. Currently, the company supplies six models of the waste compactor truck to the local authority as well as to a waste management company. The company faces difficulty acquiring knowledge from the expert when the expert is absent. To solve this problem, the knowledge from the expert can be stored in an expert system, which is then able to provide the necessary support to the company when the expert is not available. With it, the implementation of the process and tools can be standardized and made more accurate. The knowledge input to the expert system is based on design guidelines and experience from the expert. This project highlights another application of the knowledge-based system (KBS) approach, in troubleshooting the refuse collection vehicle production process. The main aim of the research is to develop a novel expert fault diagnosis system framework for the refuse collection vehicle.

  12. Structure of the knowledge base for an expert labeling system

    Science.gov (United States)

    Rajaram, N. S.

    1981-01-01

    One of the principal objectives of the NASA AgRISTARS program is the inventory of global crop resources using remotely sensed data gathered by Land Satellites (LANDSAT). A central problem in any such crop inventory procedure is the interpretation of LANDSAT images and identification of the parts of each image which are covered by a particular crop of interest. This task of labeling is largely a manual one done by trained human analysts and consequently presents obstacles to the development of totally automated crop inventory systems. However, developments in knowledge engineering as well as the widespread availability of inexpensive hardware and software for artificial intelligence work offer possibilities for developing expert systems for the labeling of crops. Such a knowledge based approach to labeling is presented.

  13. Knowledge-based system for flight information management. Thesis

    Science.gov (United States)

    Ricks, Wendell R.

    1990-01-01

    The use of knowledge-based system (KBS) architectures to manage information on the primary flight display (PFD) of commercial aircraft is described. The PFD information management strategy used tailored the information on the PFD to the tasks the pilot performed. The KBS design and implementation of the task-tailored PFD information management application is described. The knowledge acquisition and subsequent system design of a flight-phase-detection KBS is also described. The flight-phase output of this KBS was used as input to the task-tailored PFD information management KBS. The implementation and integration of this KBS with existing aircraft systems and the other KBS is described. The flight tests of both KBSs, collectively called the Task-Tailored Flight Information Manager (TTFIM), are examined; they verified the implementation and integration of the systems and validated the software engineering advantages of the KBS approach in an operational environment.

  14. Advanced software development workstation. Knowledge base design: Design of knowledge base for flight planning application

    Science.gov (United States)

    Izygon, Michel E.

    1992-01-01

    The development process of the knowledge base for the generation of Test Libraries for Mission Operations Computer (MOC) Command Support focused on a series of information-gathering interviews. These knowledge capture sessions support the development of a prototype for evaluating the capabilities of INTUIT on such an application. The prototype includes functions related to POCC (Payload Operation Control Center) processing. It prompts the end users for input through a series of panels and then generates the Meds associated with the initialization and update of hazardous command tables for a POCC Processing TLIB.

  15. Modern Approaches of the Banking Services in Knowledge Based Economy

    OpenAIRE

    Andreea ZAMFIR

    2007-01-01

    Nowadays, the unprecedented development of the information and communication technologies generates profound and irreversible changes of the entire society. Because of its specific characteristic features, the banking services’ field is one of the most dynamic sectors in the economy. Therefore, the paper presents some issues regarding the development of the electronic banking services and the importance of the creativity and innovation of banking services. The success factors in banking servi...

  16. A knowledge-based approach for recognition of handwritten Pitman ...

    Indian Academy of Sciences (India)

    Department of Studies in Computer Science, University of Mysore, ... the successor method based on stochastic regular grammar but makes use of the ... In general, a stroke in PSL represents a character or a word in English at the simplest.

  17. A Knowledge-Based Approach to Robust Parsing

    NARCIS (Netherlands)

    Oltmans, J.A.

    2000-01-01

    The research presented in this thesis describes the design, implementation and evaluation of a natural-language processing system that is used as part of an information retrieval system. Specifically, I focus on the development of a system that performs robust syntactic analysis of scientific texts

  18. Influence of the Migration Process on the Learning Performances of Fuzzy Knowledge Bases

    DEFF Research Database (Denmark)

    Akrout, Khaled; Baron, Luc; Balazinski, Marek

    2007-01-01

    This paper presents the influence of the migration process between populations in GENO-FLOU, an environment for learning fuzzy knowledge bases by genetic algorithms. Initially the algorithm did not use migration. For learning, the algorithm uses a hybrid coding, binary for the rule base and real for the data base. This hybrid coding, used with a set of specialized reproduction operators, proved to be an effective learning environment. Simulations were made in this environment by adding a migration process, while varying the number of populations...

  19. The Knowledge Base Interface for Parametric Grid Information

    International Nuclear Information System (INIS)

    Hipp, James R.; Simons, Randall W.; Young, Chris J.

    1999-01-01

    The parametric grid capability of the Knowledge Base (KBase) provides an efficient robust way to store and access interpolatable information that is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use an approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation. The method involves three basic steps: data preparation, data storage, and data access. In past presentations we have discussed in detail the first step. In this paper we focus on the latter two, describing in detail the type of information which must be stored and the interface used to retrieve parametric grid data from the Knowledge Base. Once data have been properly prepared, the information (tessellation and associated value surfaces) needed to support the interface functionality can be entered into the KBase. The primary types of parametric grid data that must be stored include (1) generic header information; (2) base model, station, and phase names and associated ID's used to construct surface identifiers; (3) surface accounting information; (4) tessellation accounting information; (5) mesh data for each tessellation; (6) correction data defined for each surface at each node of the surface's owning tessellation; (7) mesh refinement calculation set-up and flag information; and (8) kriging calculation set-up and flag information. The eight data components not only represent the results of the data preparation process but also include all required input information for several population tools that would enable the complete regeneration of the data results if that should be necessary.
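
    As a rough illustration of the kind of access the interface supports, the sketch below stores correction values at the nodes of a tessellated surface and interpolates them at a query point. The station and phase names are invented, and scipy's linear (barycentric) interpolation on a Delaunay mesh stands in here for the Natural Neighbor Interpolation used by the actual system.

      # Sketch of a parametric-grid lookup: corrections stored at tessellation
      # nodes are interpolated at query points. LinearNDInterpolator is a
      # linear-interpolation stand-in, not Natural Neighbor Interpolation.
      import numpy as np
      from scipy.interpolate import LinearNDInterpolator

      # Node locations (lon, lat) and correction values for one hypothetical
      # station/phase surface, e.g. ("MK31", "Pn").
      nodes = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
      corrections = np.array([0.8, 1.1, 0.9, 1.4, 1.0])  # seconds

      surfaces = {("MK31", "Pn"): LinearNDInterpolator(nodes, corrections)}

      def lookup_correction(station, phase, lon, lat):
          """Return the interpolated correction, or None outside the mesh."""
          value = surfaces[(station, phase)](lon, lat)
          return None if np.isnan(value) else float(value)

      print(lookup_correction("MK31", "Pn", 2.5, 7.5))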

  20. Knowledge-Based Object Detection in Laser Scanning Point Clouds

    Science.gov (United States)

    Boochs, F.; Karmacharya, A.; Marbs, A.

    2012-07-01

    Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects, being accessible to numerical models. We present an approach that brings the human expert knowledge about the scene, the objects inside, and their representation by the data and the behavior of algorithms to the machine. This "understanding" enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists the researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework. This advancement has provided a strong base for applications based on knowledge management. In the article we will present and describe the knowledge technologies used for our approach such as Web Ontology Language (OWL), used for formulating the knowledge base and the Semantic Web Rule Language (SWRL) with 3D processing and topologic built-ins, aiming to combine geometrical analysis of 3D point clouds, and specialists' knowledge of the scene and algorithmic processing.
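
    The sketch below illustrates the general idea of encoding scene knowledge in an ontology and asserting algorithmic detections against it. It uses rdflib rather than a full OWL/SWRL reasoner, and the namespace, class names and the review rule are invented for demonstration; they are not taken from the cited system.

      # Minimal knowledge-base sketch: a small class hierarchy of expected scene
      # objects plus assertions produced by a (hypothetical) detection algorithm.
      from rdflib import Graph, Namespace, RDF, RDFS, Literal

      SCENE = Namespace("http://example.org/scene#")
      g = Graph()
      g.bind("scene", SCENE)

      # Terminology: objects expected in the scanned scene.
      g.add((SCENE.Column, RDFS.subClassOf, SCENE.StructuralElement))
      g.add((SCENE.Beam, RDFS.subClassOf, SCENE.StructuralElement))

      # Assertions for one detected object (invented identifier and attribute).
      obj = SCENE["object_17"]
      g.add((obj, RDF.type, SCENE.Column))
      g.add((obj, SCENE.estimatedHeight, Literal(3.2)))

      # Stand-in for an SWRL-style rule: flag columns taller than 3 m for review.
      for s in list(g.subjects(RDF.type, SCENE.Column)):
          height = g.value(s, SCENE.estimatedHeight)
          if height is not None and float(height) > 3.0:
              g.add((s, SCENE.needsReview, Literal(True)))

      print(g.serialize(format="turtle"))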

  1. Dacfood: a knowledge-based system for decision support in case of radiological contamination of foodstuffs

    International Nuclear Information System (INIS)

    Diaz, A.; Despres, A.; Soulatges, D.

    1991-01-01

    In case of radiological contamination of foodstuffs, the introduction of a countermeasure has to be justified by balancing its advantages and drawbacks, as recommended by ICRP. Also, to provide authorities with information about the decision context, it has been decided to develop a Decision Support System (DSS). A knowledge-based approach is used for the DSS. Indeed, it allows: (1) better modelling thanks to, for instance, object-oriented programming and rules; (2) the ability to introduce more knowledge thanks to easier consistency and validity control of the knowledge base; and (3) the handling of uncertainties (incomplete, uncertain or evolving knowledge). The present state of the system is presented. DACFOOD is a decision-aiding system for contaminated foodstuffs, based on a knowledge-based approach. A demonstration model has been developed in a post-Chernobyl CEC research program. It evaluates the sanitary situation and the alternative actions through costs and sanitary effects, and gives information on the decisional background.

  2. Drug knowledge bases and their applications in biomedical informatics research.

    Science.gov (United States)

    Zhu, Yongjun; Elemento, Olivier; Pathak, Jyotishman; Wang, Fei

    2018-01-03

    Recent advances in biomedical research have generated a large volume of drug-related data. To effectively handle this flood of data, many initiatives have been taken to help researchers make good use of them. As the results of these initiatives, many drug knowledge bases have been constructed. They range from simple ones with specific focuses to comprehensive ones that contain information on almost every aspect of a drug. These curated drug knowledge bases have made significant contributions to the development of efficient and effective health information technologies for better health-care service delivery. Understanding and comparing existing drug knowledge bases and how they are applied in various biomedical studies will help us recognize the state of the art and design better knowledge bases in the future. In addition, researchers can get insights on novel applications of the drug knowledge bases through a review of successful use cases. In this study, we provide a review of existing popular drug knowledge bases and their applications in drug-related studies. We discuss challenges in constructing and using drug knowledge bases as well as future research directions toward a better ecosystem of drug knowledge bases. © The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  3. Identifying the professional knowledge base for multi-grade teaching ...

    African Journals Online (AJOL)

    This paper reports a small-scale qualitative study of teachers and teaching principals in multi-grade rural schools in Australia, focusing on identifying the professional knowledge base required for teachers in such contexts. Such a knowledge base is essential for improving the quality of multi-grade teaching. Interviews and ...

  4. De la extracción al modelado del conocimiento en un Sistema Basado en el Conocimiento. Un enfoque desde el agrupamiento conceptual lógico combinatorio (From the extraction to knowledge modeling in a Knowledge Based System. A logical combinatorial conceptual grouping approach)

    Directory of Open Access Journals (Sweden)

    Yunia Reyes González

    2017-10-01

    The knowledge acquisition process required in a knowledge-based system can be automated or partially automated. The idea is to reduce the working time between the knowledge engineer and the expert when building the intelligent computer system. This paper presents the potential of logical combinatorial grouping for both extraction and knowledge modeling in the construction of this type of computer system. Three specific cases of Knowledge Based Systems are presented in which concepts are used in their essential processes: how to represent the knowledge and the method of solving the problem. This approach allows, among other advantages, the automation of the knowledge extraction process, which makes it possible to separate it from human experts and bring Knowledge Based Systems theory closer to more current paradigms where techniques like Big Data are applied.

  5. Provenance an introduction to PROV

    CERN Document Server

    Moreau, Luc

    2013-01-01

    The World Wide Web is now deeply intertwined with our lives, and has become a catalyst for a data deluge, making vast amounts of data available online, at a click of a button. With Web 2.0, users are no longer passive consumers, but active publishers and curators of data. Hence, from science to food manufacturing, from data journalism to personal well-being, from social media to art, there is a strong interest in provenance, a description of what influenced an artifact, a data set, a document, a blog, or any resource on the Web and beyond. Provenance is a crucial piece of information that can

  6. A Model of an Expanded-Frame Hypermedia Knowledge-Base for Instruction.

    Science.gov (United States)

    Lacy, Mark J.; Wood, R. Kent

    1993-01-01

    Argues that current computer-based instruction does not exploit the instructional possibilities of computers. Critiques current models of computer-based instruction: behaviorist as too linear and constructivist as too unstructured. Offers a design model of Expanded-frame Hypermedia Knowledge-bases as an instructional approach allowing hypermedia…

  7. Automatic Recognition of Chinese Personal Name Using Conditional Random Fields and Knowledge Base

    Directory of Open Access Journals (Sweden)

    Chuan Gu

    2015-01-01

    Full Text Available According to the features of Chinese personal names, we present an approach for Chinese personal name recognition based on conditional random fields (CRF) and a knowledge base in this paper. The method builds multiple features of the CRF model by adopting the Chinese character as the processing unit, selects useful features based on a knowledge-base selection algorithm and an incremental feature template, and finally implements automatic recognition of Chinese personal names from Chinese documents. Experimental results on an open real corpus demonstrate the effectiveness of our method, with high accuracy and recall rates of recognition.
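
    A minimal sketch of the character-level CRF tagging described above is given below, using sklearn-crfsuite as a stand-in toolkit. The feature template, the toy training sentences and the tiny surname lexicon standing in for the knowledge base are all invented for illustration.

      # Character-level CRF sketch with a knowledge-base (surname lexicon) feature.
      import sklearn_crfsuite

      SURNAME_LEXICON = {"王", "李", "张", "刘"}  # hypothetical knowledge-base entries

      def char_features(sentence, i):
          ch = sentence[i]
          return {
              "char": ch,
              "in_surname_lexicon": ch in SURNAME_LEXICON,
              "prev_char": sentence[i - 1] if i > 0 else "<BOS>",
              "next_char": sentence[i + 1] if i < len(sentence) - 1 else "<EOS>",
          }

      def sent2features(sentence):
          return [char_features(sentence, i) for i in range(len(sentence))]

      # Toy training data: characters labelled B/I (inside a name) or O (other).
      train_sents = ["王小明去北京", "李华在上海"]
      train_labels = [["B", "I", "I", "O", "O", "O"], ["B", "I", "O", "O", "O"]]

      crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
      crf.fit([sent2features(s) for s in train_sents], train_labels)
      print(crf.predict([sent2features("王小明在上海")]))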

  8. Knowledge-based public health situation awareness

    Science.gov (United States)

    Mirhaji, Parsa; Zhang, Jiajie; Srinivasan, Arunkumar; Richesson, Rachel L.; Smith, Jack W.

    2004-09-01

    There have been numerous efforts to create comprehensive databases from multiple sources to monitor the dynamics of public health and most specifically to detect the potential threats of bioterrorism before widespread dissemination. But there is little evidence for the assertion that these systems are timely and dependable, or that they can reliably distinguish man-made from natural incidents. One must evaluate the value of so-called 'syndromic surveillance systems' along with the costs involved in design, development, implementation and maintenance of such systems and the costs involved in investigation of the inevitable false alarms. In this article we will introduce a new perspective to the problem domain with a shift in paradigm from 'surveillance' toward 'awareness'. As we conceptualize a rather different approach to tackle the problem, we will introduce a different methodology in the application of information science, computer science, cognitive science and human-computer interaction concepts in the design and development of so-called 'public health situation awareness systems'. We will share some of our design and implementation concepts for the prototype system that is under development in the Center for Biosecurity and Public Health Informatics Research, at the University of Texas Health Science Center at Houston. The system is based on a knowledgebase containing ontologies with different layers of abstraction, from multiple domains, that provide the context for information integration, knowledge discovery, interactive data mining, information visualization, information sharing and communications. The modular design of the knowledgebase and its knowledge representation formalism enables incremental evolution of the system from a partial system to a comprehensive knowledgebase of 'public health situation awareness' as it acquires new knowledge through interactions with domain experts or automatic discovery of new knowledge.

  9. FROM TRADITIONAL ACCOUNTING TO KNOWLEDGE BASED ACCOUNTING ORGANIZATIONS

    Directory of Open Access Journals (Sweden)

    NICOLETA RADNEANTU

    2010-01-01

    Full Text Available Nowadays, we may observe that the rules of the traditional economy have changed. The new economy – the knowledge-based economy – also determines major changes in organizations' resources, structure, strategic objectives, departments, accounting and goods. In our research we want to underline how accounting rules, regulations and paradigms have changed to cope with political, economic and social challenges, as well as with the emergence of the knowledge-based organization. We also try to find out where Romanian accounting is on the hard road of evolution from traditional to knowledge-based accounting.

  10. IGENPRO knowledge-based digital system for process transient diagnostics and management

    International Nuclear Information System (INIS)

    Morman, J.A.; Reifman, J.; Vitela, J.E.; Wei, T.Y.C.; Applequist, C.A.; Hippely, P.; Kuk, W.; Tsoukalas, L.H.

    1998-01-01

    Verification and validation issues have been perceived as important factors in the large scale deployment of knowledge-based digital systems for plant transient diagnostics and management. Research and development (R and D) is being performed on the IGENPRO package to resolve knowledge base issues. The IGENPRO approach is to structure the knowledge bases on generic thermal-hydraulic (T-H) first principles and not use the conventional event-basis structure. This allows for generic comprehensive knowledge, relatively small knowledge bases and above all the possibility of T-H system/plant independence. To demonstrate concept feasibility the knowledge structure has been implemented in the diagnostic module PRODIAG. Promising laboratory testing results have been obtained using data from the full scope Braidwood PWR operator training simulator. This knowledge structure is now being implemented in the transient management module PROMANA to treat unanticipated events and the PROTREN module is being developed to process actual plant data. Achievement of the IGENPRO R and D goals should contribute to the acceptance of knowledge-based digital systems for transient diagnostics and management. (author)

  11. Knowledge-based analysis of microarrays for the discovery of transcriptional regulation relationships.

    Science.gov (United States)

    Seok, Junhee; Kaushal, Amit; Davis, Ronald W; Xiao, Wenzhong

    2010-01-18

    The large amount of high-throughput genomic data has facilitated the discovery of the regulatory relationships between transcription factors and their target genes. While early methods for discovery of transcriptional regulation relationships from microarray data often focused on the high-throughput experimental data alone, more recent approaches have explored the integration of external knowledge bases of gene interactions. In this work, we develop an algorithm that provides improved performance in the prediction of transcriptional regulatory relationships by supplementing the analysis of microarray data with a new method of integrating information from an existing knowledge base. Using a well-known dataset of yeast microarrays and the Yeast Proteome Database, a comprehensive collection of known information of yeast genes, we show that knowledge-based predictions demonstrate better sensitivity and specificity in inferring new transcriptional interactions than predictions from microarray data alone. We also show that comprehensive, direct and high-quality knowledge bases provide better prediction performance. Comparison of our results with ChIP-chip data and growth fitness data suggests that our predicted genome-wide regulatory pairs in yeast are reasonable candidates for follow-up biological verification. High quality, comprehensive, and direct knowledge bases, when combined with appropriate bioinformatic algorithms, can significantly improve the discovery of gene regulatory relationships from high throughput gene expression data.
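
    The following sketch illustrates the general idea of supplementing expression-based evidence with a prior knowledge base: candidate TF-target pairs are scored by expression correlation, and the score is boosted when the pair appears in a knowledge base of known interactions. The gene names, the synthetic expression profiles and the bonus weight are illustrative assumptions, not the algorithm of the cited paper.

      # Knowledge-boosted scoring of candidate TF-target pairs (illustrative only).
      import numpy as np

      rng = np.random.default_rng(0)
      expression = {g: rng.normal(size=20) for g in ["TF1", "geneA", "geneB"]}
      expression["geneA"] = expression["TF1"] + rng.normal(scale=0.3, size=20)  # correlated target

      # Prior knowledge base: pairs already reported to interact.
      knowledge_base = {("TF1", "geneA")}

      def score_pair(tf, target, kb_bonus=0.2):
          """Correlation score, boosted when the knowledge base supports the pair."""
          r = abs(np.corrcoef(expression[tf], expression[target])[0, 1])
          return r + (kb_bonus if (tf, target) in knowledge_base else 0.0)

      for target in ["geneA", "geneB"]:
          print(target, round(score_pair("TF1", target), 3))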

  12. IGENPRO knowledge-based digital system for process transient diagnostics and management

    International Nuclear Information System (INIS)

    Morman, J.A.; Reifman, J.; Wei, T.Y.C.

    1997-01-01

    Verification and validation issues have been perceived as important factors in the large scale deployment of knowledge-based digital systems for plant transient diagnostics and management. Research and development (R ampersand D) is being performed on the IGENPRO package to resolve knowledge base issues. The IGENPRO approach is to structure the knowledge bases on generic thermal-hydraulic (T-H) first principles and not use the conventional event-basis structure. This allows for generic comprehensive knowledge, relatively small knowledge bases and above all the possibility of T-H system/plant independence. To demonstrate concept feasibility the knowledge structure has been implemented in the diagnostic module PRODIAG. Promising laboratory testing results have been obtained using data from the full scope Braidwood PWR operator training simulator. This knowledge structure is now being implemented in the transient management module PROMANA to treat unanticipated events and the PROTREN module is being developed to process actual plant data. Achievement of the IGENPRO R ampersand D goals should contribute to the acceptance of knowledge-based digital systems for transient diagnostics and management

  13. Evaluation of knowledge-based reconstruction for magnetic resonance volumetry of the right ventricle in tetralogy of Fallot

    International Nuclear Information System (INIS)

    Nyns, Emile Christian Arie; Dragulescu, Andreea; Yoo, Shi-Joon; Grosse-Wortmann, Lars

    2014-01-01

    Cardiac magnetic resonance using the Simpson method is the gold standard for right ventricular volumetry. However, this method is time-consuming and not without sources of error. Knowledge-based reconstruction is a novel post-processing approach that reconstructs the right ventricular endocardial shape based on anatomical landmarks and a database of various right ventricular configurations. To assess the feasibility, accuracy and labor intensity of knowledge-based reconstruction in repaired tetralogy of Fallot (TOF). The short-axis cine cardiac MR datasets of 35 children and young adults (mean age 14.4 ± 2.5 years) after TOF repair were studied using both knowledge-based reconstruction and the Simpson method. Intraobserver, interobserver and inter-method variability were assessed using Bland-Altman analyses. Knowledge-based reconstruction was feasible and highly accurate as compared to the Simpson method. Intra- and inter-method variability for knowledge-based reconstruction measurements showed good agreement. Volumetric assessment using knowledge-based reconstruction was faster when compared with the Simpson method (10.9 ± 2.0 vs. 7.1 ± 2.4 min, P < 0.001). In patients with repaired tetralogy of Fallot, knowledge-based reconstruction is a feasible, accurate and reproducible method for measuring right ventricular volumes and ejection fraction. The post-processing time of right ventricular volumetry using knowledge-based reconstruction was significantly shorter when compared with the routine Simpson method. (orig.)

  14. Evaluation of knowledge-based reconstruction for magnetic resonance volumetry of the right ventricle in tetralogy of Fallot

    Energy Technology Data Exchange (ETDEWEB)

    Nyns, Emile Christian Arie; Dragulescu, Andreea [University of Toronto, The Labatt Family Heart Centre, The Hospital for Sick Children, Toronto (Canada); Yoo, Shi-Joon; Grosse-Wortmann, Lars [University of Toronto, The Labatt Family Heart Centre, The Hospital for Sick Children, Toronto (Canada); University of Toronto, Department of Diagnostic Imaging, The Hospital for Sick Children, Toronto (Canada)

    2014-12-15

    Cardiac magnetic resonance using the Simpson method is the gold standard for right ventricular volumetry. However, this method is time-consuming and not without sources of error. Knowledge-based reconstruction is a novel post-processing approach that reconstructs the right ventricular endocardial shape based on anatomical landmarks and a database of various right ventricular configurations. To assess the feasibility, accuracy and labor intensity of knowledge-based reconstruction in repaired tetralogy of Fallot (TOF). The short-axis cine cardiac MR datasets of 35 children and young adults (mean age 14.4 ± 2.5 years) after TOF repair were studied using both knowledge-based reconstruction and the Simpson method. Intraobserver, interobserver and inter-method variability were assessed using Bland-Altman analyses. Knowledge-based reconstruction was feasible and highly accurate as compared to the Simpson method. Intra- and inter-method variability for knowledge-based reconstruction measurements showed good agreement. Volumetric assessment using knowledge-based reconstruction was faster when compared with the Simpson method (10.9 ± 2.0 vs. 7.1 ± 2.4 min, P < 0.001). In patients with repaired tetralogy of Fallot, knowledge-based reconstruction is a feasible, accurate and reproducible method for measuring right ventricular volumes and ejection fraction. The post-processing time of right ventricular volumetry using knowledge-based reconstruction was significantly shorter when compared with the routine Simpson method. (orig.)

  15. KNOWLEDGE-BASED OBJECT DETECTION IN LASER SCANNING POINT CLOUDS

    Directory of Open Access Journals (Sweden)

    F. Boochs

    2012-07-01

    Full Text Available Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects, being accessible to numerical models. We present an approach that brings the human expert knowledge about the scene, the objects inside, and their representation by the data and the behavior of algorithms to the machine. This “understanding” enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists the researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework. This advancement has provided a strong base for applications based on knowledge management. In the article we will present and describe the knowledge technologies used for our approach such as Web Ontology Language (OWL, used for formulating the knowledge base and the Semantic Web Rule Language (SWRL with 3D processing and topologic built-ins, aiming to combine geometrical analysis of 3D point clouds, and specialists’ knowledge of the scene and algorithmic processing.

  16. A Knowledge Based Recommender System with Multigranular Linguistic Information

    Directory of Open Access Journals (Sweden)

    Luis Martinez

    2008-08-01

    Full Text Available Recommender systems are applications that have emerged in the e-commerce area in order to assist users in their searches in electronic shops. These shops usually offer a wide range of items that cover the necessities of a great variety of users. Nevertheless, searching in such a wide range of items can be a very difficult and time-consuming task. Recommender systems assist users in finding suitable items by means of recommendations based on information provided by different sources such as other users, experts and item features. Most recommender systems force users to provide their preferences or necessities using a unique numerical scale fixed in advance. However, this information is usually related to opinions, tastes and perceptions, so it is usually better expressed in a qualitative way, with linguistic terms, than in a quantitative way, with precise numbers. We propose a Knowledge Based Recommender System that uses the fuzzy linguistic approach to define a flexible framework that captures the uncertainty of the user's preferences. Thus, this framework allows users to express their necessities in scales closer to their own knowledge, and different from the scale used to describe the items.
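
    The sketch below illustrates the underlying fuzzy linguistic idea: linguistic terms are mapped to triangular fuzzy numbers and item profiles are ranked by their similarity to the user's linguistic preferences. The term set, the toy items and the simple distance-based similarity are illustrative; the cited system uses more elaborate multigranular linguistic operators.

      # Linguistic terms as triangular fuzzy numbers (a, b, c) on [0, 1].
      TERMS = {
          "low": (0.0, 0.0, 0.5),
          "medium": (0.25, 0.5, 0.75),
          "high": (0.5, 1.0, 1.0),
      }

      def similarity(t1, t2):
          """1 minus the mean absolute difference of the triangle parameters."""
          a, b = TERMS[t1], TERMS[t2]
          return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / 3.0

      def recommend(user_prefs, items):
          """Rank items by average similarity between user and item terms per feature."""
          scored = []
          for name, profile in items.items():
              score = sum(similarity(user_prefs[f], profile[f]) for f in user_prefs) / len(user_prefs)
              scored.append((score, name))
          return sorted(scored, reverse=True)

      items = {"camera_x": {"price": "high", "quality": "high"},
               "camera_y": {"price": "low", "quality": "medium"}}
      print(recommend({"price": "low", "quality": "high"}, items))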

  17. Knowledge-based control of an adaptive interface

    Science.gov (United States)

    Lachman, Roy

    1989-01-01

    The analysis, development strategy, and preliminary design for an intelligent, adaptive interface is reported. The design philosophy couples knowledge-based system technology with standard human factors approaches to interface development for computer workstations. An expert system has been designed to drive the interface for application software. The intelligent interface will be linked to application packages, one at a time, that are planned for multiple-application workstations aboard Space Station Freedom. Current requirements call for most Space Station activities to be conducted at the workstation consoles. One set of activities will consist of standard data management services (DMS). DMS software includes text processing, spreadsheets, data base management, etc. Text processing was selected for the first intelligent interface prototype because text-processing software can be developed initially as fully functional but limited with a small set of commands. The program's complexity then can be increased incrementally. The intelligent interface includes the operator's behavior and three types of instructions to the underlying application software are included in the rule base. A conventional expert-system inference engine searches the data base for antecedents to rules and sends the consequents of fired rules as commands to the underlying software. Plans for putting the expert system on top of a second application, a database management system, will be carried out following behavioral research on the first application. The intelligent interface design is suitable for use with ground-based workstations now common in government, industrial, and educational organizations.

  18. Knowledge-based computer systems for radiotherapy planning.

    Science.gov (United States)

    Kalet, I J; Paluszynski, W

    1990-08-01

    Radiation therapy is one of the first areas of clinical medicine to utilize computers in support of routine clinical decision making. The role of the computer has evolved from simple dose calculations to elaborate interactive graphic three-dimensional simulations. These simulations can combine external irradiation from megavoltage photons, electrons, and particle beams with interstitial and intracavitary sources. With the flexibility and power of modern radiotherapy equipment and the ability of computer programs that simulate anything the machinery can do, we now face a challenge to utilize this capability to design more effective radiation treatments. How can we manage the increased complexity of sophisticated treatment planning? A promising approach will be to use artificial intelligence techniques to systematize our present knowledge about design of treatment plans, and to provide a framework for developing new treatment strategies. Far from replacing the physician, physicist, or dosimetrist, artificial intelligence-based software tools can assist the treatment planning team in producing more powerful and effective treatment plans. Research in progress using knowledge-based (AI) programming in treatment planning already has indicated the usefulness of such concepts as rule-based reasoning, hierarchical organization of knowledge, and reasoning from prototypes. Problems to be solved include how to handle continuously varying parameters and how to evaluate plans in order to direct improvements.

  19. Competencies for Central American SMEs in the Knowledge-Based ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Knowledge-based economy not only affects well developed countries but ... to the information and communication technologies (ICT) related competencies, as technologies plays a key and growing role in managing knowledge and exchange.

  20. Knowledge-Base Application to Ground Moving Target Detection

    National Research Council Canada - National Science Library

    Adve, R

    2001-01-01

    This report summarizes a multi-year in-house effort to apply knowledge-base control techniques and advanced Space-Time Adaptive Processing algorithms to improve detection performance and false alarm...

  1. Towards Modeling False Memory With Computational Knowledge Bases.

    Science.gov (United States)

    Li, Justin; Kohanyi, Emma

    2017-01-01

    One challenge to creating realistic cognitive models of memory is the inability to account for the vast common-sense knowledge of human participants. Large computational knowledge bases such as WordNet and DBpedia may offer a solution to this problem but may pose other challenges. This paper explores some of these difficulties through a semantic network spreading activation model of the Deese-Roediger-McDermott false memory task. In three experiments, we show that these knowledge bases only capture a subset of human associations, while irrelevant information introduces noise and makes efficient modeling difficult. We conclude that the contents of these knowledge bases must be augmented and, more important, that the algorithms must be refined and optimized, before large knowledge bases can be widely used for cognitive modeling. Copyright © 2016 Cognitive Science Society, Inc.
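
    A toy sketch of the spreading-activation account of the DRM task is shown below: activation from studied words flows through an association network, and a non-studied critical lure can end up highly activated. The word associations here are invented; the cited work draws them from knowledge bases such as WordNet or DBpedia.

      # Spreading activation over a small semantic network (illustrative only).
      import networkx as nx

      G = nx.Graph()
      G.add_edges_from([("bed", "sleep"), ("rest", "sleep"), ("dream", "sleep"),
                        ("pillow", "sleep"), ("bed", "pillow"), ("dream", "night")])

      def spread_activation(graph, sources, decay=0.5, steps=2):
          """Each step passes a decayed share of every node's activation to its neighbors."""
          activation = {n: 0.0 for n in graph}
          for s in sources:
              activation[s] = 1.0
          for _ in range(steps):
              incoming = {n: 0.0 for n in graph}
              for node, act in activation.items():
                  for nbr in graph.neighbors(node):
                      incoming[nbr] += decay * act / graph.degree(node)
              activation = {n: activation[n] + incoming[n] for n in graph}
          return activation

      act = spread_activation(G, sources=["bed", "rest", "dream", "pillow"])
      print(sorted(act.items(), key=lambda kv: -kv[1])[:3])  # "sleep" (the lure) ranks high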

  2. Knowledge representation and knowledge base design for operator advisor system

    International Nuclear Information System (INIS)

    Hangos, K.M.; Sziano, T.; Tapolcai, L.

    1990-01-01

    The problems of knowledge representation, knowledge base handling and design are described for an Operator Advisor System in the Paks Nuclear Power Plant. The Operator Advisor System is to be implemented as a part of the 5th and 6th units. The knowledge of the Operator Advisor System is described by a few elementary knowledge items (diagnostic event functions, fault graph, action trees); weighted directed graphs have been found to be their common structure. List-type and relational representations of these graphs are used for the on-line and off-line parts of the knowledge base, respectively. A uniform data base design and handling approach has been proposed, consisting of a design system, a knowledge base editor and a knowledge base compiler.

  3. XML-Based SHINE Knowledge Base Interchange Language

    Science.gov (United States)

    James, Mark; Mackey, Ryan; Tikidjian, Raffi

    2008-01-01

    The SHINE Knowledge Base Interchange Language software has been designed to more efficiently send new knowledge bases to spacecraft that have been embedded with the Spacecraft Health Inference Engine (SHINE) tool. The intention of the behavioral model is to capture most of the information generally associated with a spacecraft functional model, while specifically addressing the needs of execution within SHINE and Livingstone. As such, it has some constructs that are based on one or the other.

  4. Knowledge-based development in Singapore and Malaysia

    OpenAIRE

    Menkhoff, Thomas; Gerke, Solvay; Evers, Hans-Dieter; Chay, Yue Wah

    2009-01-01

    This paper addresses the question how knowledge is used to benefit the economic development of Singapore and Malaysia. Both countries have followed strict science policies to establish knowledge governance regimes for a knowledge-based economy. On the basis of empirical studies in both countries we show, how ethnic and religious diversity impact on the ability to develop an epistemic culture of knowledge sharing and ultimately an innovative knowledge-based economy.

  5. KNOWLEDGE SOCIETY, GENERAL FRAMEWORK FOR KNOWLEDGE BASED ECONOMY

    Directory of Open Access Journals (Sweden)

    Dragos CRISTEA

    2011-03-01

    Full Text Available This paper tries to present the existing relation between the knowledge society and the knowledge-based economy. We will identify the main pillars of the knowledge society and present their importance for the development of knowledge societies. Further, we will present two perspectives on knowledge societies, namely the science and learning perspectives, which directly affect knowledge-based economies. At the end, we conclude by identifying some important questions that must be answered regarding this new social paradigm.

  6. Role of Knowledge Based Communities in Knowledge Process

    Directory of Open Access Journals (Sweden)

    Sebastian Ion CEPTUREANU

    2015-12-01

    Full Text Available In the new economy, knowledge is an essential component of economic and social systems. The organizational focus has to be on building knowledge-based management, development of human resources and building intellectual capital capabilities. Knowledge-based management is defined, at company level, by economic processes that emphasize creation, selling, buying, learning, storing, developing, sharing and protection of knowledge as a decisive condition for profit and long-term sustainability of the company. Hence, knowledge is, concurrently, according to a majority of specialists, raw material, capital, product and an essential input. Knowledge-based communities are one of the main constituent elements of a framework for knowledge-based management. These are peer networks consisting of practitioners within an organization, supporting each other to perform better through the exchange and sharing of knowledge. Some large companies have contributed to or supported the establishment of numerous communities of practice, some of which may have several thousand members. They operate in different ways, are of different sizes, have different areas of interest and address knowledge at different levels of its maturity. This article examines the role of knowledge-based communities from the perspective of knowledge-based management, given that the arrangements for organizational learning and for creating, sharing and using knowledge within organizations become more heterogeneous and take forms that are more difficult for managers and specialists to predict.

  7. PAV ontology: provenance, authoring and versioning.

    Science.gov (United States)

    Ciccarese, Paolo; Soiland-Reyes, Stian; Belhajjame, Khalid; Gray, Alasdair Jg; Goble, Carole; Clark, Tim

    2013-11-22

    approaches, namely Provenance Vocabulary (PRV), DC Terms and BIBFRAME. We identify similarities and analyze differences between those vocabularies and PAV, outlining strengths and weaknesses of our proposed model. We specify SKOS mappings that align PAV with DC Terms. We conclude the paper with general remarks on the applicability of PAV.

  8. Knowledge Base Applications to Adaptive Space-Time Processing, Volume 5: Knowledge-Based Tracker Rule Book

    National Research Council Canada - National Science Library

    Morgan, Charles

    2001-01-01

    ... processing algorithm can be applied. The proactive knowledge-based tracker uses information from other sources such as digital terrain maps, radar clutter and interference maps, and target priority assessments to determine the nature...

  9. Development of a component centered fault monitoring and diagnosis knowledge based system for space power system

    Science.gov (United States)

    Lee, S. C.; Lollar, Louis F.

    1988-01-01

    The overall approach currently being taken in the development of AMPERES (Autonomously Managed Power System Extendable Real-time Expert System), a knowledge-based expert system for fault monitoring and diagnosis of space power systems, is discussed. The system architecture, knowledge representation, and fault monitoring and diagnosis strategy are examined. A 'component-centered' approach developed in this project is described. Critical issues requiring further study are identified.

  10. Knowledge-Based Control Systems via Internet Part I. Applications in Biotechnology

    Directory of Open Access Journals (Sweden)

    Georgi Georgiev

    2005-04-01

    Full Text Available An extensive approach towards the dissemination of expert knowledge and coordination efforts to distributed points and seamless integration of control strategies applied to distributed yet identical systems is crucial to enhance overall efficiency and operational costs. Application of Knowledge-Based Control System via Internet will be very efficient especially in biotechnology, because many industrial bioprocesses, based on the same technological principles, are distributed in the whole world. Brewing industry oriented practical solutions illustrate this approach.

  11. Virk: An Active Learning-based System for Bootstrapping Knowledge Base Development in the Neurosciences

    Directory of Open Access Journals (Sweden)

    Kyle H. Ambert

    2013-12-01

    Full Text Available The frequency and volume of newly published scientific literature are quickly making manual maintenance of publicly available databases of primary data unrealistic and costly. Although machine learning can be useful for developing automated approaches to identifying scientific publications containing relevant information for a database, developing such tools necessitates manually annotating an unrealistic number of documents. One approach to this problem, active learning, builds classification models by iteratively identifying documents that provide the most information to a classifier. Although this approach has been shown to be effective for related problems, in the context of scientific database curation, it falls short. We present Virk, an active learning system that, while being trained, simultaneously learns a classification model and identifies documents having information of interest for a knowledge base. Our approach uses a support vector machine classifier with input features derived from neuroscience-related publications from the primary literature. Using our approach, we were able to increase the size of the Neuron Registry, a knowledge base of neuron-related information, by 90% in 3 months. Using standard biocuration methods, it would have taken between 1 and 2 years to make the same number of contributions to the Neuron Registry. Here, we describe the system pipeline in detail, and evaluate its performance against other approaches to sampling in active learning.
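
    The sketch below shows a pool-based uncertainty-sampling step of the kind the abstract describes: a classifier trained on a small labelled seed set selects the unlabelled documents it is least certain about for the curator to label next. The toy corpus and the choice of TF-IDF features with a linear SVM are illustrative stand-ins for the actual system.

      # Uncertainty sampling for knowledge-base curation (illustrative sketch).
      import numpy as np
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.svm import LinearSVC

      labelled = ["pyramidal neuron firing properties", "kinase inhibitor binding assay"]
      labels = [1, 0]  # 1 = relevant to the neuron knowledge base
      pool = ["cortical interneuron morphology", "protein crystallization protocol",
              "hippocampal neuron ion channels"]

      vectorizer = TfidfVectorizer()
      X_lab = vectorizer.fit_transform(labelled)
      X_pool = vectorizer.transform(pool)

      clf = LinearSVC().fit(X_lab, labels)

      # Documents closest to the decision boundary are queried (labelled) first.
      margins = np.abs(clf.decision_function(X_pool))
      query_order = np.argsort(margins)
      print([pool[i] for i in query_order])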

  12. Logical provenance in data-oriented workflows?

    KAUST Repository

    Ikeda, R.

    2013-04-01

    We consider the problem of defining, generating, and tracing provenance in data-oriented workflows, in which input data sets are processed by a graph of transformations to produce output results. We first give a new general definition of provenance for general transformations, introducing the notions of correctness, precision, and minimality. We then determine when properties such as correctness and minimality carry over from the individual transformations' provenance to the workflow provenance. We describe a simple logical-provenance specification language consisting of attribute mappings and filters. We provide an algorithm for provenance tracing in workflows where logical provenance for each transformation is specified using our language. We consider logical provenance in the relational setting, observing that for a class of Select-Project-Join (SPJ) transformations, logical provenance specifications encode minimal provenance. We have built a prototype system supporting the features and algorithms presented in the paper, and we report a few preliminary experimental results. © 2013 IEEE.
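
    The following sketch illustrates tracing with a logical provenance specification of the kind described: for an output tuple, the contributing input tuples are those that agree on the mapped attribute and satisfy the filter. The relation, the attributes and the aggregation transformation are invented for illustration.

      # Logical provenance tracing via an attribute mapping and a filter (sketch).
      input_rows = [
          {"id": 1, "country": "NL", "amount": 10},
          {"id": 2, "country": "NL", "amount": 5},
          {"id": 3, "country": "DE", "amount": 7},
      ]

      # Transformation: per-country totals over rows with amount >= 6.
      def transform(rows):
          totals = {}
          for r in rows:
              if r["amount"] >= 6:
                  totals[r["country"]] = totals.get(r["country"], 0) + r["amount"]
          return [{"country": c, "total": t} for c, t in totals.items()]

      # Provenance spec: attribute mapping {country -> country}, filter amount >= 6.
      def trace(output_row, rows):
          return [r for r in rows
                  if r["country"] == output_row["country"] and r["amount"] >= 6]

      for out in transform(input_rows):
          print(out, "<-", trace(out, input_rows))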

  13. Provenance management in Swift with implementation details.

    Energy Technology Data Exchange (ETDEWEB)

    Gadelha, L. M. R; Clifford, B.; Mattoso, M.; Wilde, M.; Foster, I. (Mathematics and Computer Science); ( CLS-CI); (Federal Univ. of Rio de Janeiro); (National Lab. for Scientific Computing, Brazil); (Univ. of Chicago)

    2011-04-01

    The Swift parallel scripting language allows for the specification, execution and analysis of large-scale computations in parallel and distributed environments. It incorporates a data model for recording and querying provenance information. In this article we describe these capabilities and evaluate interoperability with other systems through the use of the Open Provenance Model. We describe Swift's provenance data model and compare it to the Open Provenance Model. We also describe and evaluate activities performed within the Third Provenance Challenge, which consisted of implementing a specific scientific workflow, capturing and recording provenance information of its execution, performing provenance queries, and exchanging provenance information with other systems. Finally, we propose improvements to both the Open Provenance Model and Swift's provenance system.

  14. File Level Provenance Tracking in CMS

    CERN Document Server

    Jones, C D; Paterno, M; Sexton-Kennedy, L; Tanenbaum, W; Riley, D S

    2009-01-01

    The CMS off-line framework stores provenance information within CMS's standard ROOT event data files. The provenance information is used to track how each data product was constructed, including what other data products were read to do the construction. We will present how the framework gathers the provenance information, the efforts necessary to minimise the space used to store the provenance in the file and the tools that will be available to use the provenance.

  15. Provenance data in social media

    CERN Document Server

    Barbier, Geoffrey; Gundecha, Pritam

    2013-01-01

    Social media shatters the barrier to communicate anytime anywhere for people of all walks of life. The publicly available, virtually free information in social media poses a new challenge to consumers who have to discern whether a piece of information published in social media is reliable. For example, it can be difficult to understand the motivations behind a statement passed from one user to another, without knowing the person who originated the message. Additionally, false information can be propagated through social media, resulting in embarrassment or irreversible damages. Provenance data

  16. The Underdetermined Knowledge-Based Theory of the MNC

    DEFF Research Database (Denmark)

    Fransson, Anders; Håkanson, Lars; W. Liesch, Peter

    2011-01-01

    In this note we revisit two core propositions of the knowledge-based view of the firm found in the seminal work of Kogut and Zander: (1) that multinational corporations (MNCs) exist because transfers and re-combinations of knowledge occur more efficiently inside MNCs than between MNCs and third...... parties; and (2) that the threat of opportunism is not necessary, although it may be sufficient, to explain the existence of the MNC. Their knowledge-based view shifted the conceptualization of the firm from an institution arising from market failure and transaction costs economizing to a progeny......-combination of knowledge among their members. Important insights may be gained by applying the concept of epistemic communities implicit in the knowledge-based perspective beyond firm-level hierarchies....

  17. Enhancing acronym/abbreviation knowledge bases with semantic information.

    Science.gov (United States)

    Torii, Manabu; Liu, Hongfang

    2007-10-11

    In the biomedical domain, a terminology knowledge base that associates acronyms/abbreviations (denoted as SFs) with their definitions (denoted as LFs) is highly needed. For the construction of such a terminology knowledge base, we investigate the feasibility of building a system that automatically assigns semantic categories to LFs extracted from text. Given a collection of (SF,LF) pairs derived from text, we i) assess the coverage of LFs and (SF,LF) pairs in the UMLS and justify the need for a semantic category assignment system; and ii) automatically derive name phrases annotated with semantic categories and construct a system using machine learning. Utilizing ADAM, an existing collection of (SF,LF) pairs extracted from MEDLINE, our system achieved an f-measure of 87% when assigning eight UMLS-based semantic groups to LFs. The system has been incorporated into a web interface which integrates SF knowledge from multiple SF knowledge bases. Web site: http://gauss.dbb.georgetown.edu/liblab/SFThesurus.
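
    A minimal sketch of the machine-learning step described above follows: a classifier assigns a semantic group to a long form (LF) from its text. The training examples, the group labels and the TF-IDF plus logistic-regression pipeline are illustrative stand-ins, not the system's actual features or categories.

      # Assigning semantic groups to long forms with a simple text classifier.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      long_forms = ["tumor necrosis factor", "magnetic resonance imaging",
                    "epidermal growth factor receptor", "computed tomography"]
      groups = ["Chemicals & Drugs", "Procedures", "Genes & Proteins", "Procedures"]

      # Character n-grams capture morphology of biomedical terms.
      model = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
                            LogisticRegression(max_iter=1000))
      model.fit(long_forms, groups)

      print(model.predict(["positron emission tomography", "fibroblast growth factor"]))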

  18. Knowledge based system for fouling assessment of power plant boiler

    International Nuclear Information System (INIS)

    Afgan, N.H.; He, X.; Carvalho, M.G.; Azevedo, J.L.T.

    1999-01-01

    The paper presents the design of an expert system for fouling assessment in power plant boilers. It is an on-line expert system based on selected criteria for the fouling assessment. Using criteria for fouling assessment based on 'clean' and 'not-clean' radiation heat flux measurements, the diagnostic variables are defined for the boiler heat transfer surface. The development of the prototype knowledge-based system for fouling assessment in power plant boilers comprises the integration of elements including the knowledge base, the inference procedure and the prototype configuration. Demonstration of the prototype knowledge-based system for fouling assessment was performed on the Sines power plant, a 300 MW coal-fired power plant. Twelve fields are used, with three on each side of the boiler.

  19. KNOWLEDGE BASE AND EFL TEACHER EDUCATION PROGRAMS: A COLOMBIAN PERSPECTIVE

    Directory of Open Access Journals (Sweden)

    Yamith Fandiño

    2013-04-01

    Full Text Available In the 21st century, Colombian pre-service EFL Teacher Education Programs (TEPs) should study what constitutes the core knowledge base for language teachers to be effective in their profession. These programs must refrain from simply conceptualizing knowledge base as the acquisition of the basic skills required for teaching, the competency of educators in their subject matter area, and the use of pedagogical skills. Instead, they should strive to reflect on what Colombian language teachers need to know about teaching and learning, and study how their knowledge, beliefs, and attitudes inform their practices. A starting point to do so is to interpret the variety of proposals that have been generated through the years in the field. This paper offers a review of what teacher knowledge base is, presents an overview of how Colombian EFL TEPs are working on teacher knowledge, and suggests some strategies to envision a more complete framework of reference for teacher formation in Colombia.

  20. Arranging ISO 13606 archetypes into a knowledge base.

    Science.gov (United States)

    Kopanitsa, Georgy

    2014-01-01

    To enable the efficient reuse of standard based medical data we propose to develop a higher level information model that will complement the archetype model of ISO 13606. This model will make use of the relationships that are specified in UML to connect medical archetypes into a knowledge base within a repository. UML connectors were analyzed for their ability to be applied in the implementation of a higher level model that will establish relationships between archetypes. An information model was developed using XML Schema notation. The model allows linking different archetypes of one repository into a knowledge base. Presently it supports several relationships and will be advanced in future.

  1. Knowledge-based operator guidance system for Japanese PWRs

    International Nuclear Information System (INIS)

    Fujita, Y.; Ito, K.; Kawanago, S.; Tani, M.; Murata, R.

    1986-01-01

    A knowledge-based operator support system for nuclear power plant operation is under development. The main theme of the study is the incorporation of operator's cognitive structure as the framework of the knowledge representation and inference control mechanisms. Based upon information collected from interviews, and experiments using a real-time simulator, an operator's model related to diagnostic tasks was developed. A knowledge-based system incorporating the proposed model demonstrated highly efficient problem solving capabilities and the dynamic fitness to operator's perceptual feeling, thereby suggesting the potential importance and practical benefit of such a study

  2. Launch Services, a Proven Model

    Science.gov (United States)

    Trafton, W. C.; Simpson, J.

    2002-01-01

    From a commercial perspective, the ability to justify "leap frog" technology such as reusable systems has been difficult to justify because the estimated 5B to 10B investment is not supported in the current flat commercial market coupled with an oversupply of launch service suppliers. The market simply does not justify investment of that magnitude. Currently, next generation Expendable Launch Systems, including Boeing's Delta IV, Lockheed Martin's Atlas 5, Ariane V ESCA and RSC's H-IIA are being introduced into operations signifying that only upgrades to proven systems are planned to meet the changes in anticipated satellite demand (larger satellites, more lifetime, larger volumes, etc.) in the foreseeable future. We do not see a new fleet of ELVs emerging beyond that which is currently being introduced, only continuous upgrades of the fleet to meet the demands. To induce a radical change in the provision of launch services, a Multinational Government investment must be made and justified by World requirements. The commercial market alone cannot justify such an investment. And if an investment is made, we cannot afford to repeat previous mistakes by relying on one system such as shuttle for commercial deployment without having any back-up capability. Other issues that need to be considered are national science and security requirements, which to a large extent fuels the Japanese, Chinese, Indian, Former Soviet Union, European and United States space transportation entries. Additionally, this system must support or replace current Space Transportation Economies with across-the-board benefits. For the next 10 to 20 years, Multinational cooperation will be in the form of piecing together launch components and infrastructure to supplement existing launch systems and reducing the amount of non-recurring investment while meeting the future requirements of the End-User. Virtually all of the current systems have some form of multinational participation: Sea Launch

  3. Delivering spacecraft control centers with embedded knowledge-based systems: The methodology issue

    Science.gov (United States)

    Ayache, S.; Haziza, M.; Cayrac, D.

    1994-01-01

    Matra Marconi Space (MMS) occupies a leading place in Europe in the domain of satellite and space data processing systems. The maturity of the knowledge-based systems (KBS) technology, the theoretical and practical experience acquired in the development of prototype, pre-operational and operational applications, make it possible today to consider the wide operational deployment of KBS's in space applications. In this perspective, MMS has to prepare the introduction of the new methods and support tools that will form the basis of the development of such systems. This paper introduces elements of the MMS methodology initiatives in the domain and the main rationale that motivated the approach. These initiatives develop along two main axes: knowledge engineering methods and tools, and a hybrid method approach for coexisting knowledge-based and conventional developments.

  4. Statistical method application to knowledge base building for reactor accident diagnostic system

    International Nuclear Information System (INIS)

    Yoshida, Kazuo; Yokobayashi, Masao; Matsumoto, Kiyoshi; Kohsaka, Atsuo

    1989-01-01

    In the development of a knowledge-based expert system, one of the key issues is how to build the knowledge base (KB) in an efficient way while keeping the objectivity of the KB. In order to solve this issue, an approach has been proposed to build a prototype KB systematically by a statistical method, factor analysis. For the verification of this approach, factor analysis was applied to build a prototype KB for the JAERI expert system DISKET. To this end, alarm and process information was generated by a PWR simulator and factor analysis was applied to this information to define a taxonomy of accident hypotheses and to extract rules for each hypothesis. The prototype KB thus built was tested by inference against several types of transients, including double failures. In each diagnosis, the transient type was well identified. Furthermore, newly introduced standards for rule extraction showed good effects on the enhancement of the performance of the prototype KB. (author)
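
    The sketch below illustrates the statistical step described above: factor analysis applied to alarm patterns from simulated transients, with the loading matrix suggesting which alarms group together under each accident hypothesis. The alarm matrix is synthetic; a real study would use simulator output and domain review of the extracted groupings.

      # Factor analysis of synthetic alarm patterns as a KB-building sketch.
      import numpy as np
      from sklearn.decomposition import FactorAnalysis

      rng = np.random.default_rng(1)
      n_runs, n_alarms = 60, 8
      # Two latent transient types, each driving a different subset of alarms.
      latent = rng.normal(size=(n_runs, 2))
      loading_true = np.zeros((2, n_alarms))
      loading_true[0, :4] = 1.0   # transient type 1 drives alarms 0-3
      loading_true[1, 4:] = 1.0   # transient type 2 drives alarms 4-7
      alarms = latent @ loading_true + 0.2 * rng.normal(size=(n_runs, n_alarms))

      fa = FactorAnalysis(n_components=2, rotation="varimax").fit(alarms)
      for k, row in enumerate(fa.components_):
          grouped = [f"A{i}" for i in range(n_alarms) if abs(row[i]) > 0.5]
          print(f"factor {k}: {grouped}")  # candidate rule: these alarms co-occur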

  5. Knowledge-Basing Teaching Professions and Professional Practice

    DEFF Research Database (Denmark)

    Thingstrup, Signe Hvid

    This paper discusses the demand for knowledge-based practice and two different answers to this demand, namely evidence-based thinking and critical-political thinking. The paper discusses the implications these have for views on knowledge and professional development. The paper presents and discus...

  6. Ada as an implementation language for knowledge based systems

    Science.gov (United States)

    Rochowiak, Daniel

    1990-01-01

    Debates about the selection of programming languages often produce cultural collisions that are not easily resolved. This is especially true in the case of Ada and knowledge based programming. The construction of programming tools provides a desirable alternative for resolving the conflict.

  7. Malaysia Transitions toward a Knowledge-Based Economy

    Science.gov (United States)

    Mustapha, Ramlee; Abdullah, Abu

    2004-01-01

    The emergence of a knowledge-based economy (k-economy) has spawned a "new" notion of workplace literacy, changing the relationship between employers and employees. The traditional covenant where employees expect a stable or lifelong employment will no longer apply. The retention of employees will most probably be based on their skills…

  8. Intelligent Tools for Planning Knowledge base Development and Verification

    Science.gov (United States)

    Chien, Steve A.

    1996-01-01

    A key obstacle hampering fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must be able to compare favorably in terms of software lifecycle costs to other means of automation such as scripts or rule-based expert systems.

  9. Towards an Intelligent Planning Knowledge Base Development Environment

    Science.gov (United States)

    Chien, S.

    1994-01-01

    This abstract describes work in developing knowledge base editing and debugging tools for the Multimission VICAR Planner (MVP) system. MVP uses artificial intelligence planning techniques to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing requests made to the JPL Multimission Image Processing Laboratory).

  10. KBGIS-2: A knowledge-based geographic information system

    Science.gov (United States)

    Smith, T.; Peuquet, D.; Menon, S.; Agarwal, P.

    1986-01-01

    The architecture and working of a recently implemented knowledge-based geographic information system (KBGIS-2) that was designed to satisfy several general criteria for the geographic information system are described. The system has four major functions that include query-answering, learning, and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial objects language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is currently performing all its designated tasks successfully, although currently implemented on inadequate hardware. Future reports will detail the performance characteristics of the system, and various new extensions are planned in order to enhance the power of KBGIS-2.

  11. Development of a Knowledge Base for Incorporating Technology into Courses

    Science.gov (United States)

    Rath, Logan

    2013-01-01

    This article discusses a project resulting from the request of a group of faculty at The College at Brockport to create a website for best practices in teaching and technology. The project evolved into a knowledge base powered by WordPress. Installation and configuration of WordPress resulted in the creation of custom taxonomies and post types,…

  12. Knowledge based support for multiagent control and automation

    DEFF Research Database (Denmark)

    Saleem, Arshad; Lind, Morten

    2011-01-01

    This paper presents a mechanism for developing knowledge-based support for multiagent-based control and diagnosis. In particular, it presents a way for autonomous agents to utilize a qualitative means-ends based model for reasoning about control situations. The proposed mechanism has been used...

  13. Knowledge based system for control rod programming of BWRs

    International Nuclear Information System (INIS)

    Fukuzaki, Takaharu; Yoshida, Ken-ichi; Kobayashi, Yasuhiro

    1988-01-01

    A knowledge-based system has been developed to support designers in control rod programming of BWRs. The programming searches for optimal control rod patterns to realize safe and effective burning of nuclear fuel. Knowledge of experienced designers plays the main role in minimizing the number of calculations by the core performance evaluation code, which predicts the power distribution and thermal margins of the nuclear fuel. This knowledge is transformed into 'if-then' type rules and subroutines, and is stored in the knowledge base of the knowledge-based system. The system consists of a working area, an inference engine and the knowledge base. The inference engine can detect data which have to be regenerated, call the subroutines which control the user interface and numerical computations, and store competing sets of data in different parts of the working area. Using this system, control rod programming of a BWR plant was traced with about 500 rules and 150 subroutines. Both the generation of control rod patterns for the first calculation of the code and the modification of a control rod pattern to reflect the calculation were completed more effectively than with a conventional method. (author)
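
    The combination described above (a working area, 'if-then' rules, numerical subroutines, and an inference engine that fires rules until nothing changes) can be illustrated with a toy forward-chaining sketch; the rule contents, data fields and limits are invented and are not those of the actual BWR system:

```python
# Toy forward-chaining sketch: 'if-then' rules and subroutines acting on a shared working area,
# driven by an inference engine that fires applicable rules until nothing changes.
# The rule contents, fields and limits are illustrative assumptions, not the actual BWR system.

working_area = {"thermal_margin_ok": False, "pattern": [8, 8, 4, 0], "iteration": 0}

def deepen_shallow_rods(wa):
    # Subroutine standing in for a numerical adjustment routine.
    wa["pattern"] = [min(p + 2, 48) for p in wa["pattern"]]
    wa["iteration"] += 1

rules = [
    # (condition, action) pairs play the role of the 'if-then' rules.
    (lambda wa: not wa["thermal_margin_ok"] and wa["iteration"] < 3, deepen_shallow_rods),
    (lambda wa: not wa["thermal_margin_ok"] and wa["iteration"] >= 3,
     lambda wa: wa.update(thermal_margin_ok=True)),
]

# Inference engine: keep firing the first applicable rule until no rule applies.
fired = True
while fired:
    fired = False
    for condition, action in rules:
        if condition(working_area):
            action(working_area)
            fired = True
            break

print(working_area)
```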

  14. Development of a knowledge-based system for loop diagnosis

    International Nuclear Information System (INIS)

    Liao, L.Y.; Tang, H.C.; Chen, S.S.

    1987-01-01

    An accident diagnostic system is developed as an attempt to provide a useful aid for the operators of an experimental loop or a nuclear power plant in the case of an emergency condition. Because current practices in system diagnosis are not satisfactory, there is an increasing demand for the establishment of various operator decision support systems. The knowledge-based system is a new and promising technique which can be used to fulfill this demand. With the capability of automatic reasoning and by incorporating information about the system status, the knowledge-based system can simulate the process of human thinking and serve as a good decision support system. This knowledge-based decision support system can be helpful for both a fast, violent accident and a slowly developing accident. Specifically, a fast diagnostic report can be provided for a fast and violent accident, for which time is the main concern, and a complete diagnostic report can be provided for a slowly developing accident, for which complexity is the main concern. Such a knowledge-based decision support system also provides many other equally important advantages, such as the elimination of human error, the automatic validation of signal readings, and the establishment of a simulation environment.

  15. Mobile Communication and Work Practices in Knowledge-based Organizations

    Directory of Open Access Journals (Sweden)

    Pertti Hurme

    2005-01-01

    This paper examines the role of mobile communication, mobile tools and work practices in the context of organizations, especially knowledge-based organizations. Today, organizations are highly complex and diverse. Not surprisingly, various solutions to incorporating mobile tools and mobile communication in organizations have been devised. Challenges to technological development and research on mobile communication are presented.

  16. Representability in DL-Lite_R knowledge base exchange

    NARCIS (Netherlands)

    Arenas, M.; Botoeva, E.; Calvanese, D.; Ryzhikov, V.; Sherkhonov, E.

    2012-01-01

    Knowledge base exchange can be considered a generalization of data exchange in which the aim is to exchange, between a source and a target connected through mappings, not only explicit knowledge, i.e., data, but also implicit knowledge in the form of axioms. This problem has been investigated

  17. Value Creation in the Knowledge-Based Economy

    Science.gov (United States)

    Liu, Fang-Chun

    2013-01-01

    Effective investment strategies help companies form dynamic core organizational capabilities allowing them to adapt and survive in today's rapidly changing knowledge-based economy. This dissertation investigates three valuation issues that challenge managers with respect to developing business-critical investment strategies that can have…

  18. The Ignorance of the Knowledge-Based Economy. The Iconoclast.

    Science.gov (United States)

    McMurtry, John

    1996-01-01

    Castigates the supposed "knowledge-based economy" as simply a public relations smokescreen covering up the free market exploitation of people and resources serving corporate interests. Discusses the many ways that private industry, often with government collusion, has controlled or denied dissemination of information to serve its own interests.…

  19. Integrating knowledge based functionality in commercial hospital information systems.

    Science.gov (United States)

    Müller, M L; Ganslandt, T; Eich, H P; Lang, K; Ohmann, C; Prokosch, H U

    2000-01-01

    Successful integration of knowledge-based functions in the electronic patient record depends on direct and context-sensitive accessibility and availability to clinicians and must suit their workflow. In this paper we describe an exemplary integration of an existing standalone scoring system for acute abdominal pain into two different commercial hospital information systems using Java/CORBA technology.

  20. Knowledge-based analysis of functional impacts of mutations in ...

    Indian Academy of Sciences (India)

    Knowledge-based analysis of functional impacts of mutations in microRNA seed regions. Supplementary figure 1. Summary of predicted miRNA targets from ... All naturally occurring SNPs in seed regions of human miRNAs. The information in the columns is given in the second sheet. Highly expressed miRNAs are ...

  1. Generic knowledge-based analysis of social media for recommendations

    NARCIS (Netherlands)

    de Graaff, V.; van de Venis, Anne; van Keulen, Maurice; de By, R.A.; Bogers, Toine; Koolen, Marijn

    2015-01-01

    Recommender systems have been around for decades to help people find the best matching item in a pre-defined item set. Knowledge-based recommender systems are used to match users based on information that links the two, but they often focus on a single, specific application, such as movies to watch

  2. Competencies for Central American SMEs in the Knowledge-Based ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    The knowledge-based economy affects not only well-developed countries but also the performance and possibilities of small economic actors in less developed countries. Micro, small and medium enterprises - characterized by low financial and human capital - are also exposed to the opportunities and risks ...

  3. Knowledge-based society, peer production and the common good

    DEFF Research Database (Denmark)

    Orsi, Cosma

    2009-01-01

    This article investigates the societal conditions that might help the establishment of peer-to-peer modes of production. First, the context within which such a new model is emerging - the neoliberal knowledge-based-societies - is described, and its shortcomings unveiled; and second, a robust argu...

  4. Knowledge Based Engineering for Spatial Database Management and Use

    Science.gov (United States)

    Peuquet, D. (Principal Investigator)

    1984-01-01

    The use of artificial intelligence techniques that are applicable to Geographic Information Systems (GIS) is examined. Questions involving performance and modification of the database structure, the definition of spectra in quadtree structures and their use in search heuristics, extension of the knowledge base, and learning algorithm concepts are investigated.

  5. Knowledge base, information search and intention to adopt innovation

    NARCIS (Netherlands)

    Rijnsoever, van F.J.; Castaldi, C.

    2008-01-01

    Innovation is a process that involves searching for new information. This paper builds upon theoretical insights on individual and organizational learning and proposes a knowledge based model of how actors search for information when confronted with innovation. The model takes into account different

  6. Knowledge-Based Hierarchies: Using Organizations to Understand the Economy

    Science.gov (United States)

    Garicano, Luis; Rossi-Hansberg, Esteban

    2015-01-01

    Incorporating the decision of how to organize the acquisition, use, and communication of knowledge into economic models is essential to understand a wide variety of economic phenomena. We survey the literature that has used knowledge-based hierarchies to study issues such as the evolution of wage inequality, the growth and productivity of firms,…

  7. Enhancing Canadian Civil Society Research and Knowledge-Based ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Enhancing Canadian Civil Society Research and Knowledge-Based Practice in a Rapidly Changing Landscape for International Development ... Women in the developing world continue to face obstacles that limit their ability to establish careers and become leaders in the fields of science, technology, engineering, and ...

  8. Dynamic Strategic Planning in a Professional Knowledge-Based Organization

    Science.gov (United States)

    Olivarius, Niels de Fine; Kousgaard, Marius Brostrom; Reventlow, Susanne; Quelle, Dan Grevelund; Tulinius, Charlotte

    2010-01-01

    Professional, knowledge-based institutions have a particular form of organization and culture that makes special demands on the strategic planning supervised by research administrators and managers. A model for dynamic strategic planning based on a pragmatic utilization of the multitude of strategy models was used in a small university-affiliated…

  9. MATHEMATICAL APPARATUS FOR KNOWLEDGE BASE PROJECT MANAGEMENT OF OCCUPATIONAL SAFETY

    Directory of Open Access Journals (Sweden)

    Валентина Николаевна ПУРИЧ

    2015-05-01

    Occupational safety project (OSP) management is aimed at implementing a rational choice. Given the subjectivity of management goals, project selection is considered an information process with a minimal level of formalization. The proposed project selection model relies upon the enterprise's occupational and industrial safety assessment, using fuzzy logic and linguistic variables based on an occupational safety knowledge base.

  10. Knowledge base combinations and innovation performance in Swedish regions

    Czech Academy of Sciences Publication Activity Database

    Grillitsch, M.; Martin, R.; Srholec, Martin

    2017-01-01

    Vol. 93, No. 5 (2017), pp. 458-479 ISSN 0013-0095 Institutional support: Progres-Q24 Keywords: knowledge base * knowledge combination * region Subject RIV: AH - Economics OBOR OECD: Applied Economics, Econometrics Impact factor: 5.344, year: 2016

  11. Knowledge base combinations and innovation performance in Swedish regions

    Czech Academy of Sciences Publication Activity Database

    Grillitsch, M.; Martin, R.; Srholec, Martin

    2017-01-01

    Vol. 93, No. 5 (2017), pp. 458-479 ISSN 0013-0095 Institutional support: RVO:67985998 Keywords: knowledge base * knowledge combination * region Subject RIV: AH - Economics OBOR OECD: Applied Economics, Econometrics Impact factor: 5.344, year: 2016

  12. Scientific publications in XML - towards a global knowledge base

    Directory of Open Access Journals (Sweden)

    Peter Murray-Rust

    2006-01-01

    Recent developments on the World-Wide Web provide an unparalleled opportunity to revolutionise scientific, technical and medical publication. The technology exists for the scientific world to use primary publication to create a knowledge base, or Semantic Web, with a potential greatly beyond the paper archives and electronic databases of today.

  13. Integrative pathway knowledge bases as a tool for systems molecular medicine.

    Science.gov (United States)

    Liang, Mingyu

    2007-08-20

    There exists a sense of urgency to begin to generate a cohesive assembly of biomedical knowledge as the pace of knowledge accumulation accelerates. The urgency is in part driven by the emergence of systems molecular medicine that emphasizes the combination of systems analysis and molecular dissection in the future of medical practice and research. A potentially powerful approach is to build integrative pathway knowledge bases that link organ systems function with molecules.

  14. Utilizing knowledge base of amino acids structural neighborhoods to predict protein-protein interaction sites.

    Science.gov (United States)

    Jelínek, Jan; Škoda, Petr; Hoksza, David

    2017-12-06

    Protein-protein interactions (PPI) play a key role in an investigation of various biochemical processes, and their identification is thus of great importance. Although computational prediction of which amino acids take part in a PPI has been an active field of research for some time, the quality of in-silico methods is still far from perfect. We have developed a novel prediction method called INSPiRE which benefits from a knowledge base built from data available in Protein Data Bank. All proteins involved in PPIs were converted into labeled graphs with nodes corresponding to amino acids and edges to pairs of neighboring amino acids. A structural neighborhood of each node was then encoded into a bit string and stored in the knowledge base. When predicting PPIs, INSPiRE labels amino acids of unknown proteins as interface or non-interface based on how often their structural neighborhood appears as interface or non-interface in the knowledge base. We evaluated INSPiRE's behavior with respect to different types and sizes of the structural neighborhood. Furthermore, we examined the suitability of several different features for labeling the nodes. Our evaluations showed that INSPiRE clearly outperforms existing methods with respect to Matthews correlation coefficient. In this paper we introduce a new knowledge-based method for identification of protein-protein interaction sites called INSPiRE. Its knowledge base utilizes structural patterns of known interaction sites in the Protein Data Bank which are then used for PPI prediction. Extensive experiments on several well-established datasets show that INSPiRE significantly surpasses existing PPI approaches.
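
    As a rough illustration of the knowledge-base idea described above (encode each residue's structural neighborhood as a bit string, then label unknown residues by how often a matching pattern was seen at interfaces), here is a toy sketch; the fingerprint encoding and the training examples are invented and far simpler than INSPiRE's:

```python
# Toy sketch of the neighbourhood-fingerprint idea: encode a residue's structural neighbourhood
# as a bit string and label unknown residues by interface / non-interface counts of the matching
# pattern in a knowledge base. All data below are invented and far coarser than INSPiRE's.
from collections import defaultdict

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def fingerprint(neighbours):
    """One bit per amino-acid type: set if that type occurs among the neighbours."""
    return "".join("1" if aa in neighbours else "0" for aa in AMINO_ACIDS)

# Knowledge base: bit string -> [interface_count, non_interface_count]
kb = defaultdict(lambda: [0, 0])

training = [
    ({"R", "K", "D"}, True),   # charged neighbourhood observed at an interface
    ({"R", "K", "D"}, True),
    ({"L", "V", "I"}, False),  # buried hydrophobic neighbourhood, non-interface
]
for neighbours, is_interface in training:
    kb[fingerprint(neighbours)][0 if is_interface else 1] += 1

def predict(neighbours):
    interface, non_interface = kb.get(fingerprint(neighbours), (0, 0))
    return "interface" if interface > non_interface else "non-interface"

print(predict({"R", "K", "D"}))   # -> interface
print(predict({"L", "V", "I"}))   # -> non-interface
```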

  15. Logical provenance in data-oriented workflows?

    KAUST Repository

    Ikeda, R.; Das Sarma, Akash; Widom, J.

    2013-01-01

    for general transformations, introducing the notions of correctness, precision, and minimality. We then determine when properties such as correctness and minimality carry over from the individual transformations' provenance to the workflow provenance. We

  16. Restful Implementation of Catalogue Service for Geospatial Data Provenance

    Science.gov (United States)

    Jiang, L. C.; Yue, P.; Lu, X. C.

    2013-10-01

    Provenance, also known as lineage, is important in understanding the derivation history of data products. Geospatial data provenance helps data consumers to evaluate the quality and reliability of geospatial data. In a service-oriented environment, where data are often consumed or produced by distributed services, provenance can be managed by following the same service-oriented paradigm. The Open Geospatial Consortium (OGC) Catalogue Service for the Web (CSW) is used for the registration and query of geospatial data provenance by extending the ebXML Registry Information Model (ebRIM). Recent advances in the REpresentational State Transfer (REST) paradigm have shown great promise for the easy integration of distributed resources. RESTful Web Services aim to provide a standard way for Web clients to communicate with servers based on REST principles. The existing approach to a provenance catalogue service can be improved by adopting the RESTful design. This paper presents the design and implementation of a catalogue service for geospatial data provenance following the RESTful architecture style. A middleware named REST Converter is added on top of the legacy catalogue service to support a RESTful style interface. The REST Converter is composed of a resource request dispatcher and six resource handlers. A prototype service is developed to demonstrate the applicability of the approach.
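
    A minimal sketch of the 'REST Converter' idea, a thin RESTful facade that dispatches resource requests in front of a legacy provenance catalogue, could look like the following Flask snippet; the URL layout, record fields and in-memory catalogue are assumptions for illustration only:

```python
# Minimal sketch of a RESTful facade over a legacy provenance catalogue.
# The URL layout, record fields and the in-memory "catalogue" are illustrative assumptions.
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Stand-in for the legacy CSW/ebRIM catalogue behind the converter.
CATALOGUE = {
    "ndvi-2013-07": {
        "title": "NDVI composite",
        "derivedFrom": ["modis-surface-reflectance"],
        "process": "band-ratio workflow",
    }
}

@app.route("/provenance/<dataset_id>", methods=["GET"])
def get_provenance(dataset_id):
    # Resource handler: expose one provenance record as a REST resource.
    record = CATALOGUE.get(dataset_id)
    if record is None:
        abort(404)
    return jsonify(record)

if __name__ == "__main__":
    app.run(port=8080)
```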

  17. New frontiers in information and production systems modelling and analysis incentive mechanisms, competence management, knowledge-based production

    CERN Document Server

    Novikov, Dmitry; Bakhtadze, Natalia; Zaikin, Oleg

    2016-01-01

    This book demonstrates how to apply modern approaches to complex system control in practical applications involving knowledge-based systems. The dimensions of knowledge-based systems are extended by incorporating new perspectives from control theory, multimodal systems and simulation methods.  The book is divided into three parts: theory, production system and information system applications. One of its main focuses is on an agent-based approach to complex system analysis. Moreover, specialised forms of knowledge-based systems (like e-learning, social network, and production systems) are introduced with a new formal approach to knowledge system modelling.   The book, which offers a valuable resource for researchers engaged in complex system analysis, is the result of a unique cooperation between scientists from applied computer science (mainly from Poland) and leading system control theory researchers from the Russian Academy of Sciences’ Trapeznikov Institute of Control Sciences.

  18. A knowledge-based system for optimization of fuel reload configurations

    International Nuclear Information System (INIS)

    Galperin, A.; Kimhi, S.; Segev, M.

    1989-01-01

    The authors discuss a knowledge-based production system developed for generating optimal fuel reload configurations. The system was based on a heuristic search method and implemented in Common Lisp programming language. The knowledge base embodied the reactor physics, reactor operations, and a general approach to fuel management strategy. The data base included a description of the physical system involved, i.e., the core geometry and fuel storage. The fifth cycle of the Three Mile Island Unit 1 pressurized water reactor was chosen as a test case. Application of the system to the test case revealed a self-learning process by which a relatively large number of near-optimal configurations were discovered. Several selected solutions were subjected to detailed analysis and demonstrated excellent performance. To summarize, applicability of the proposed heuristic search method in the domain of nuclear fuel management was proved unequivocally
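
    At a toy level, the heuristic search described above can be sketched as repeated swap moves over a loading pattern scored by a cheap surrogate standing in for the core performance code; the pattern, move set and scoring rule are illustrative assumptions, not the published method:

```python
# Sketch of a heuristic search over reload configurations: start from an initial loading,
# apply swap moves, and keep improvements judged by a cheap surrogate score standing in
# for the core-performance code. All numbers and the scoring rule are invented.
import itertools

initial = [4.0, 3.2, 2.5, 1.8, 1.2, 0.9]   # toy fresh-to-burnt assembly ordering

def surrogate_score(pattern):
    # Penalise placing similarly reactive assemblies next to each other (toy flattening proxy).
    return -sum(abs(a - b) < 0.5 for a, b in zip(pattern, pattern[1:]))

def heuristic_search(pattern, max_steps=50):
    best, best_score = list(pattern), surrogate_score(pattern)
    for _ in range(max_steps):
        improved = False
        for i, j in itertools.combinations(range(len(best)), 2):
            candidate = list(best)
            candidate[i], candidate[j] = candidate[j], candidate[i]
            score = surrogate_score(candidate)
            if score > best_score:
                best, best_score, improved = candidate, score, True
        if not improved:
            break
    return best, best_score

print(heuristic_search(initial))
```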

  19. CRITERIA AND FACTORS USED BY MANAGERS IMPLEMENTING THE KNOWLEDGE-BASED MANAGEMENT IN TOURISM SMES

    Directory of Open Access Journals (Sweden)

    State Cristna

    2012-12-01

    The knowledge-based economy requires, both in Romania and internationally, the presence of intelligent organizations with advanced capabilities for managing their collective skills as sources of performance. As a result, worldwide, more than ever, knowledge is accepted as one of the main sources of competitive advantage. Small and medium-sized enterprises (SMEs) are the most dynamic and vital factor of progress in contemporary society: the main generator of economic performance and substance in any country, an employment opportunity provider for most of the population, a major contributor to the national budget, and an engine for improving the population's living standard. SMEs represent 99% of all enterprises and concentrate the main agglomeration of human resources. In this context, knowledge-based management approaches are inevitable, arising from a systemic complexity that goes beyond rigid hierarchies and traditional practices and entails the emergence of non-hierarchical organizational structures.

  20. Consultancy on 'IAEA initiative to establish a fast reactor knowledge base'. Working material

    International Nuclear Information System (INIS)

    2005-01-01

    At the outset of the meeting, Member States' interest in establishing a Fast Reactor Knowledge Base was acknowledged by the participants. While the broader objective of the initiative was to develop a Knowledge Base into which the existing Knowledge Preservation Systems will fit, the specific objectives of the meeting were to: make recommendations on FRKP methodology and guidance; review the proposed structure of the Agency's FRKP Initiative; make recommendations on the roles of the Agency and the Member States in implementing the Agency's FRKP Initiative; and develop an approach for implementing the structure of the Agency's FRKP Initiative. The meeting concluded after covering many aspects of the initiative, namely systematic methods of data capture and the structure and functions of the FRKP System, etc., and placed a strong emphasis on the continued role of IAEA support and coordination in the data retrieval and knowledge preservation efforts

  1. A knowledge-based system for fluidization studies

    Energy Technology Data Exchange (ETDEWEB)

    1990-04-01

    Chemical engineers depend on process simulation models to determine "optimal" plant configurations which are technically feasible and economically viable. This research was undertaken to develop a comprehensive knowledge-based simulation environment, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant during all phases of process simulation involving fossil energy processes. In summary, the goals of this research are: application of knowledge-based techniques to the process modeling domain for enhancement of productivity; archiving and distribution of the knowledge of the best experts in process modeling; cross-model inference assistance to modelers not familiar with the process; and development of IPSE to serve as an intelligent tutoring system for process simulation. 18 figs.

  2. Radiation protection optimization using a knowledge based methodology

    International Nuclear Information System (INIS)

    Reyes-Jimenez, J.; Tsoukalas, L.H.

    1991-01-01

    This paper presents a knowledge-based methodology for radiological planning and radiation protection optimization. The cost-benefit methodology described in International Commission on Radiological Protection Report No. 37 is employed within a knowledge-based framework for the purpose of planning maintenance activities while optimizing radiation protection. 1, 2 The methodology is demonstrated through an application to a heating, ventilation and air conditioning (HVAC) system. HVAC is used to reduce radioactivity concentration levels in selected contaminated multi-compartment models at nuclear power plants when higher than normal radiation levels are detected. The overall objective is to reduce personnel exposure resulting from airborne radioactivity when routine or maintenance access is required in contaminated areas. 2 figs, 15 refs

  3. Developing a knowledge base for the management of severe accidents

    International Nuclear Information System (INIS)

    Nelson, W.R.; Jenkins, J.P.

    1986-01-01

    Prior to the accident at Three Mile Island, little attention was given to the development of procedures for the management of severe accidents, that is, accidents in which the reactor core is damaged. Since TMI, however, significant effort has been devoted to developing strategies for severe accident management. At the same time, the potential application of artificial intelligence techniques, particularly expert systems, to complex decision-making tasks such as accident diagnosis and response has received considerable attention. The need to develop strategies for accident management suggests that a computerized knowledge base such as used by an expert system could be developed to collect and organize knowledge for severe accident management. This paper suggests a general method which could be used to develop such a knowledge base, and how it could be used to enhance accident management capabilities

  4. Knowledge-based automated radiopharmaceutical manufacturing for Positron Emission Tomography

    International Nuclear Information System (INIS)

    Alexoff, D.L.

    1991-01-01

    This article describes the application of basic knowledge engineering principles to the design of automated synthesis equipment for radiopharmaceuticals used in Positron Emission Tomography (PET). Before discussing knowledge programming, an overview of the development of automated radiopharmaceutical synthesis systems for PET will be presented. Since knowledge systems will rely on information obtained from machine transducers, a discussion of the uses of sensory feedback in today's automated systems follows. Next, the operation of these automated systems is contrasted to radiotracer production carried out by chemists, and the rationale for and basic concepts of knowledge-based programming are explained. Finally, a prototype knowledge-based system supporting automated radiopharmaceutical manufacturing of 18FDG at Brookhaven National Laboratory (BNL) is described using 1stClass, a commercially available PC-based expert system shell

  5. SPRINT RA 230: Methodology for knowledge based developments

    International Nuclear Information System (INIS)

    Wallsgrove, R.; Munro, F.

    1991-01-01

    SPRINT RA 230: A Methodology for Knowledge Based Developments, funded by the European Commission, was set up to investigate the use of KBS in the engineering industry. Its aim was to find out how KBSs were currently used and what people's conceptions of them were, to disseminate current knowledge and to recommend further research into this area. A survey (by post and face-to-face interviews) was carried out under SPRINT RA 230 to investigate requirements for more intelligent software. In the survey we looked both at how people think about Knowledge Based Systems (KBS), what they find useful and what is not useful, and at what current expertise problems or limitations of conventional software might suggest KBS solutions. (orig./DG)

  6. Knowledge based economy: The role of expert diaspora

    Directory of Open Access Journals (Sweden)

    Filipović Jovan

    2012-01-01

    Diasporas stand out as an economic or cultural avant-garde of transformation. This is especially true for academic and other intellectual Diaspora communities, because science and knowledge creation are global enterprises. The proclivity of knowledge workers to move in order to improve and absorb transnational knowledge through Diaspora networks might be an essential quality of an emerging national economy of a developing country. The article treats the role of the expert Diaspora in the knowledge-based economy, innovation and talent management. Besides presenting the essentials of the knowledge-based economy and innovation, it discusses the role of the expert Diaspora in science, technology and innovation (STI) capacity building. The article also emphasizes the importance of leadership for talent and its implications for the Diaspora. Using WEF statistics, it illustrates the negative consequences of the sad policy of "chase away the brightest and the best" for the innovative capacity, competitiveness, and prosperity of nations.

  7. A Representation System User Interface for Knowledge Base Designers

    OpenAIRE

    Fikes, Richard E.

    1982-01-01

    A major strength of frame-based knowledge representation languages is their ability to provide the knowledge base designer with a concise and intuitively appealing means of expression. The claim of intuitive appeal is based on the observation that the object-centered style of description provided by these languages often closely matches a designer's understanding of the domain being modeled and therefore lessens the burden of reformulation involved in developing a formal description. To be effe...

  8. An Evaluation of Applying Knowledge Base to Academic Information Service

    OpenAIRE

    Seok-Hyoung Lee; Hwan-Min Kim; Ho-Seop Choe

    2013-01-01

    Through a series of precise text handling processes, including automatic extraction of information from documents with knowledge from various fields, recognition of entity names, detection of core topics, analysis of the relations between the extracted information and topics, and automatic inference of new knowledge, the most efficient knowledge base of the relevant field is created, and plans to apply these to the information knowledge management and service are the core requirements necessa...

  9. Knowledge based operation assist system for JAERI AVF cyclotron

    International Nuclear Information System (INIS)

    Agematsu, T.; Okumura, S.; Yokota, W.; Arakawa, K.; Murakami, T.; Okamura, T.

    1992-01-01

    We have developed two operation assist systems for easy and rapid operation of the JAERI AVF cyclotron. One is a knowledge-based expert system that guides inexperienced cyclotron operators through the sequence of parameter adjustment. The other is a real-time simulation of the beam trajectories calculated from actual operating parameters; it graphically indicates the feasible setting range of parameters that satisfies the acceptance of the cyclotron. These systems provide a human interface for adjusting the parameters of the cyclotron. (author)

  10. THE ROLE OF INTELLECTUAL CAPITAL IN KNOWLEDGE - BASED SOCIETY

    OpenAIRE

    Denisa-Elena Parpandel

    2013-01-01

    In a knowledge-based society, organizations undergo permanent changes and transformations, and the key factor of such changes is intellectual capital, regarded as one of the most critical, yet most strategic, values an organization might own. Over recent decades the analysis of intellectual capital and the knowledge society has emerged primarily in private companies, whereas at present there is increasing concern in all fields of activity. The goal of this paper is to emphasize the im...

  11. Internal Communication and Social Dialogue in Knowledge-Based Organizations

    OpenAIRE

    Diana-Maria CISMARU; Cristina LEOVARIDIS

    2014-01-01

    Knowledge-based organizations are constructed on intangible assets, such as the expertise and the values of the employees. As a consequence, motivation and professional excellence of employees are the main objectives of management teams. For this type of organizations, considered as true “knowledge systems”, the employees represent the most valuable resource that is not motivated only through financial means, but also through internal communication, autonomy or social rewards. The research of...

  12. 3cixty: Building comprehensive knowledge bases for city exploration

    OpenAIRE

    Troncy, Raphaël; Rizzo, Giuseppe; Jameson, Anthony; Corcho, Oscar; Plu, Julien; Palumbo, Enrico; Ballesteros Hermida, Juan Carlos; Spirescu, Adrian; Kuhn, Kai Dominik; Barbu, Catalin; Rossi, Matteo; Celino, Irene; Agarwal, Rachit; Scanu, Christian; Valla, Massimo

    2017-01-01

    Planning a visit to Expo Milano 2015 or simply touring in Milan are activities that require a certain amount of a priori knowledge of the city. In this paper, we present the process of building such comprehensive knowledge bases that contain descriptions of events and activities, places and sights, transportation facilities as well as social activities, collected from numerous static, near- and real-time, local and global data providers, including hyper local sources suc...

  13. Wikipedia Infobox Temporal RDF Knowledge Base and Indices

    OpenAIRE

    Song, Aige

    2015-01-01

    As the real world evolves, Infoboxes for Wikipedia subjects are updated to reflect information changes in the real world, and there is a growing interest in the evolution history of subjects in Wikipedia. Thus, the management of historical information and the efficiency of queries over this temporal information have become major concerns. In this paper, we introduce the Wikipedia Infobox temporal RDF knowledge base, which is constructed from the Wikipedia Infobox history dump, and evaluat...

  14. Knowledge-based View in the Franchising Research Literature

    OpenAIRE

    TSAI, Fu-Sheng; KUO, Chin-Chiung; LIU, Chi-Fang

    2017-01-01

    This study was conducted to understand the state of research on applications of Knowledge-based View in franchise systems. First, we used SALSA (Search, Appraisal, Synthesis, and Analysis), a simple systematic data search method, to obtain 61 sample papers. Second, the citations of authors and publications were analyzed using the bibliometric method to understand the authors and the publications that had the most impact as well as the trend of current studies in the field of knowled...

  15. Managing Knowledge-Based Resource Capabilities Under Uncertainty

    OpenAIRE

    Janice E. Carrillo; Cheryl Gaimon

    2004-01-01

    A firm's ability to manage its knowledge-based resource capabilities has become increasingly important as a result of performance threats triggered by technology change and intense competition. At the manufacturing plant level, we focus on three repositories of knowledge that drive performance. First, the physical production or information systems represent knowledge embedded in the plant's technical systems. Second, the plant's workforce has knowledge, including diverse scientific informatio...

  16. A Knowledge-Based Consultant for Financial Marketing

    OpenAIRE

    Kastner, John; Apte, Chidanand; Griesmer, James

    1986-01-01

    This article describes an effort to develop a knowledge-based financial marketing consultant system. Financial marketing is an excellent vehicle for both research and application in artificial intelligence (AI). This domain differs from the great majority of previous expert system domains in that there are no well-defined answers (in the traditional sense); the goal here is to obtain satisfactory arguments to support the conclusions made. A large OPS5-based system was implemented as an initial pr...

  17. Quality Management in the Knowledge Based Economy – Kaizen Method

    OpenAIRE

    Popa Liliana Viorica

    2010-01-01

    In the knowledge-based economy, organisations are influenced by the quality movement Kaizen, which plays a strategic role in optimizing the organizational capabilities of managers as well as of all employees. Kaizen represents the philosophy of continuous improvement, pursued until the economic activities inside the organisation reach zero deficiencies. In order to implement Kaizen properly within the organization, managers must decide what to improve, why to improve it, who shall improve...

  18. A Case for Embedded Natural Logic for Ontological Knowledge Bases

    DEFF Research Database (Denmark)

    Andreasen, Troels; Nilsson, Jørgen Fischer

    2014-01-01

    We argue in favour of adopting a form of natural logic for ontology-structured knowledge bases as an alternative to description logic and rule based languages. Natural logic is a form of logic resembling natural language assertions, unlike description logic. This is essential e.g. in life sciences...... negation in description logic. We embed the natural logic in DATALOG clauses which is to take care of the computational inference in connection with querying...

  19. End user interface and knowledge base editing system of CSPAR: a knowledge-based consultation system for preventive maintenance in nuclear plants

    International Nuclear Information System (INIS)

    Sinohara, Yasusi; Terano, Takao; Nishiyama, Takuya

    1988-01-01

    Consultation System for Prevention of Abnormal-event Recurrence (CSPAR) is a knowledge-based system that analyzes events of the same kind as a given fault reported at a nuclear power plant and gives users information on effective measures for preventing them. This report discusses the interfaces of CSPAR for both end-users and knowledge-base editors. The interfaces are highly interactive and multi-window oriented. The features are as follows: (1) the end-user interface has a Japanese language processing facility, which enables users to consult CSPAR with various synonyms and related terms for knowledge-base handling; (2) the knowledge-base editing system is used by knowledge-base editors to maintain the knowledge on both plant equipment and abnormal event sequences. It has facilities for easy maintenance of knowledge bases, which include a graphics-oriented browser, a knowledge-base retriever, and a knowledge-base checker. (author)

  20. ISPE: A knowledge-based system for fluidization studies

    Energy Technology Data Exchange (ETDEWEB)

    Reddy, S.

    1991-01-01

    Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparation of an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine if all specified "goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge-based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant to perform a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at the Morgantown Energy Technology Center (METC), was selected as the target process simulator in the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information) and data-driven programming using demons. It was implemented using the knowledge-based environment LASER. The relationship of IPSE with the user, ASPEN, LASER and the C language is shown in Figure 1.
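
    The prepare-simulate-analyse loop of steps (1)-(3) can be sketched schematically as follows; the simulator stand-in, the goal test and the adjustment rule are placeholders rather than ASPEN or IPSE specifics:

```python
# Schematic prepare -> simulate -> analyse loop from the abstract.
# The simulator stand-in, the goal test and the adjustment rule are placeholder assumptions.

def run_simulator(inputs):
    # Stand-in for invoking the external process simulator and parsing its report.
    return {"conversion": min(0.99, inputs["reactor_temp"] / 500.0)}

def goals_met(results):
    return results["conversion"] >= 0.90

def adjust(inputs, results):
    # Knowledge-based step: a rule of thumb that raises temperature while conversion is low.
    inputs["reactor_temp"] += 25.0
    return inputs

inputs = {"reactor_temp": 300.0}
for iteration in range(20):
    results = run_simulator(inputs)
    if goals_met(results):
        print(f"converged after {iteration + 1} runs: {results}")
        break
    inputs = adjust(inputs, results)
else:
    print("goals not met within the iteration budget")
```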

  1. From conventional software based systems to knowledge based systems

    International Nuclear Information System (INIS)

    Bologna, S.

    1995-01-01

    Even if today's nuclear power plants have a very good safety record, there is a continuous search for further improving safety. One direction of this effort addresses operational safety, trying to improve the handling of disturbances and accidents partly by further automation and partly by creating a better control room environment, providing the operator with intelligent support systems to help in the decision-making process. The introduction of intelligent computerised operator support systems has proved to be an efficient way of improving operator performance. A number of systems have been developed worldwide, assisting in tasks like process fault detection and diagnosis, and the selection and implementation of proper remedial actions. Unfortunately, the use of Knowledge Based Systems (KBSs) introduces a new dimension to the problem of the licensing process. KBSs, despite the different technology employed, are still nothing more than computer programs. Unfortunately, quite a few people building knowledge-based systems seem to ignore the many good programming practices that have evolved over the years for producing traditional computer programs. In this paper the author will try to point out similarities and differences between conventional software-based systems and knowledge-based systems, introducing also the concept of model-based reasoning. (orig.) (25 refs., 2 figs.)

  2. Development of knowledge base system linked to material database

    International Nuclear Information System (INIS)

    Kaji, Yoshiyuki; Tsuji, Hirokazu; Mashiko, Shinichi; Miyakawa, Shunichi; Fujita, Mitsutane; Kinugawa, Junichi; Iwata, Shuichi

    2002-01-01

    The distributed material database system named 'Data-Free-Way' has been developed by four organizations (the National Institute for Materials Science, the Japan Atomic Energy Research Institute, the Japan Nuclear Cycle Development Institute, and the Japan Science and Technology Corporation) under a cooperative agreement in order to share fresh and stimulating information as well as accumulated information for the development of advanced nuclear materials, the design of structural components, etc. In order to create additional value for the system, a knowledge base system, in which knowledge extracted from the material database is expressed, is planned for development to make more effective use of Data-Free-Way. XML (eXtensible Markup Language) has been adopted as the method of describing the retrieved results and their meaning. One knowledge note described with XML is stored as one unit of knowledge composing the knowledge base. Since the knowledge notes are described with XML, the user can easily convert the displayed tables and graphs into the data format which the user usually uses. This paper describes the current status of Data-Free-Way, the description method for knowledge extracted from the material database with XML, and the distributed material knowledge base system. (author)
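
    As a rough illustration of a 'knowledge note' serialised in XML, the following sketch builds one programmatically; the element names and values are invented and do not follow the actual Data-Free-Way schema:

```python
# Sketch: build one XML "knowledge note" capturing a retrieved result and its interpretation.
# Element names and values are illustrative assumptions, not the actual Data-Free-Way schema.
import xml.etree.ElementTree as ET

note = ET.Element("knowledge_note", id="KN-0001")

retrieval = ET.SubElement(note, "retrieval")
ET.SubElement(retrieval, "material").text = "SUS316"
ET.SubElement(retrieval, "property").text = "creep rupture strength"
ET.SubElement(retrieval, "temperature", unit="K").text = "823"

interpretation = ET.SubElement(note, "interpretation")
interpretation.text = "Rupture strength decreases markedly above 823 K for this heat."

print(ET.tostring(note, encoding="unicode"))
```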

  3. Internal Communication and Social Dialogue in Knowledge-Based Organizations

    Directory of Open Access Journals (Sweden)

    Diana-Maria CISMARU

    2014-02-01

    Knowledge-based organizations are built on intangible assets, such as the expertise and the values of their employees. As a consequence, motivation and professional excellence of employees are the main objectives of management teams. For this type of organization, considered a true “knowledge system”, the employees represent the most valuable resource, one that is motivated not only through financial means but also through internal communication, autonomy or social rewards. Eurofound research shows that knowledge-based organizations have a low number of trade unions, while professional associations are more relevant for them. There is no tradition of defending employees' working conditions through negotiation, so it is important for managers to use best practices in order to increase employees' loyalty. We conducted qualitative research concerning the quality of professional life of employees in five sectors of knowledge-based services: advertising-marketing, IT, banking and finance, research and development, and higher education; 15-20 employees from each sector were interviewed. Some of the questions referred directly to trade unions and affiliation, and also to internal communication. Although the results showed a different situation in each of the five sectors, there are a few common characteristics: downward communication is more frequent than upward communication; trade unions were reported as missing, unrepresentative or not very active; and most employees in these sectors are not affiliated, facts that limit the possibility of maintaining employees' motivation in the long term.

  4. Knowledge-based processing for aircraft flight control

    Science.gov (United States)

    Painter, John H.; Glass, Emily; Economides, Gregory; Russell, Paul

    1994-01-01

    This Contractor Report documents research in Intelligent Control using knowledge-based processing in a manner dual to methods found in the classic stochastic decision, estimation, and control discipline. Such knowledge-based control has also been called Declarative and Hybrid. Software architectures were sought, employing the parallelism inherent in modern object-oriented modeling and programming. The viewpoint adopted was that Intelligent Control employs a class of domain-specific software architectures having features common over a broad variety of implementations, such as management of aircraft flight, power distribution, etc. As much attention was paid to software engineering issues as to artificial intelligence and control issues. This research considered that particular processing methods from the stochastic and knowledge-based worlds are duals, that is, similar in a broad context. They provide architectural design concepts which serve as bridges between the disparate disciplines of decision, estimation, control, and artificial intelligence. This research was applied to the control of a subsonic transport aircraft in the airport terminal area.

  5. Toward improved software security training using a cyber warfare opposing force (CW OPFOR): the knowledge base design

    Science.gov (United States)

    Stytz, Martin R.; Banks, Sheila B.

    2005-03-01

    "Train the way you will fight" has been a guiding principle for military training and has served the warfighter well as evidenced by numerous successful operations over the last decade. This need for realistic training for all combatants has been recognized and proven by the warfighter and continues to guide military training. However, to date, this key training principle has not been applied fully in the arena of cyberwarfare due to the lack of realistic, cost effective, reasonable, and formidable cyberwarfare opponents. Recent technological advances, improvements in the capability of computer-generated forces (CGFs) to emulate human behavior, and current results in research in information assurance and software protection, coupled with increasing dependence upon information superiority, indicate that the cyberbattlespace will be a key aspect of future conflict and that it is time to address the cyberwarfare training shortfall. To address the need for a cyberwarfare training and defensive testing capability, we propose research and development to yield a prototype computerized, semi-autonomous (SAF) red team capability. We term this capability the Cyber Warfare Opposing Force (CW OPFOR). There are several technologies that are now mature enough to enable, for the first time, the development of this powerful, effective, high fidelity CW OPFOR. These include improved knowledge about cyberwarfare attack and defense, improved techniques for assembling CGFs, improved techniques for capturing and expressing knowledge, software technologies that permit effective rapid prototyping to be effectively used on large projects, and the capability for effective hybrid reasoning systems. Our development approach for the CW OPFOR lays out several phases in order to address these requirements in an orderly manner and to enable us to test the capabilities of the CW OPFOR and exploit them as they are developed. We have completed the first phase of the research project, which

  6. A knowledge-based system framework for real-time monitoring applications

    International Nuclear Information System (INIS)

    Heaberlin, J.O.; Robinson, A.H.

    1989-01-01

    A real-time environment presents a challenge for knowledge-based systems for process monitoring with on-line data acquisition in nuclear power plants. These applications are typically data intensive. This, coupled with the dynamic nature of events on which problematic decisions are based, requires the development of techniques fundamentally different from those generally employed. Traditional approaches involve knowledge management techniques developed for static data, the majority of which is elicited directly from the user in a consultation environment. Inference mechanisms are generally noninterruptible, requiring all appropriate rules to be fired before new data can be accommodated. As a result, traditional knowledge-based applications in real-time environments have inherent problems in dealing with the time dependence of both the data and the solution process. For example, potential problems include obtaining a correct solution too late to be of use or focusing computing resources on problems that no longer exist. A knowledge-based system framework, the real-time framework (RTF), has been developed that can accommodate the time dependencies and resource trade-offs required for real-time process monitoring applications. This framework provides real-time functionality by using generalized problem-solving goals and control strategies that are modifiable during system operation and capable of accommodating feedback for redirection of activities
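
    The time-aware behaviour described above (re-reading the latest data every cycle, discarding goals whose triggering condition has disappeared, and spending resources on the highest-priority goal first) can be sketched as follows; the goals, priorities and data are invented for illustration:

```python
# Sketch of time-aware, priority-driven goal handling: each monitoring cycle re-reads the
# latest data, drops goals whose triggering condition no longer holds, and works on the
# highest-priority live goal first. Data, goals and priorities are invented.

def acquire_data(cycle):
    # Stand-in for on-line data acquisition; the low-flow anomaly clears after cycle 2.
    return {"coolant_flow_low": cycle < 3}

goals = [
    {"priority": 1, "name": "diagnose_low_flow", "relevant": lambda d: d["coolant_flow_low"]},
    {"priority": 5, "name": "refine_trend_model", "relevant": lambda d: True},
]

for cycle in range(5):
    data = acquire_data(cycle)
    live = [g for g in goals if g["relevant"](data)]   # discard goals whose trigger vanished
    live.sort(key=lambda g: g["priority"])             # lower number = higher priority
    if live:
        print(f"cycle {cycle}: working on '{live[0]['name']}'")
    goals = live
```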

  7. A knowledge based on-line diagnostic system for the fast breeder reactor KNKII

    International Nuclear Information System (INIS)

    Eggert, H.; Scherer, K.P.; Stiller, P.

    1989-01-01

    At the nuclear research center in Karlsruhe, a diagnostic expert system is being developed to supervise a fast breeder process (KNKII). The problem is to detect critical phases at an early stage, before fault propagation. The expert system itself is integrated in a computer network (realized as a local area network) in which different computers act as special detection systems (for example acoustic noise, temperature noise, cover gas monitoring and so on) that produce partial diagnoses based on intelligent signal processing techniques such as pattern recognition. In addition to the detection systems, a process computer is integrated, as well as a test computer which simulates hypothetical and real fault data. At the logical top level, the expert system combines the partial diagnoses of the detection systems with the operating data of the process computer to produce a final diagnosis, including an explanation component for operator support. The knowledge base is developed with typical Artificial Intelligence tools. Both fact-based and rule-based knowledge representations are stored in the form of flavors and predications. The inference engine operates on a rule-based approach. Specific detailed knowledge, based on many years of experience, is available to influence the decision process by strengthening or weakening the generated hypotheses. In a meta knowledge base, a rule master triggers the special domain experts and distributes the tasks to the specific rule complexes. Such system management guarantees a problem-solving strategy that operates in an event-triggered and situation-specific manner in a local inference domain. (author). 3 refs, 6 figs, 2 tabs

  8. Reducing a Knowledge-Base Search Space When Data Are Missing

    Science.gov (United States)

    James, Mark

    2007-01-01

    This software addresses the problem of how to efficiently execute a knowledge base in the presence of missing data. Computationally, this is an exponentially expensive operation that, without heuristics, generates a search space of 1 + 2^n possible scenarios, where n is the number of rules in the knowledge base. Even for a knowledge base of the most modest size, say 16 rules, it would produce 65,537 possible scenarios. The purpose of this software is to reduce the complexity of this operation to a more manageable size. The problem that this system solves is to develop an automated approach that can reason in the presence of missing data. This is a meta-reasoning capability that repeatedly calls a diagnostic engine/model to provide prognoses and prognosis tracking. In the big picture, the scenario generator takes as its input the current state of a system, including probabilistic information from Data Forecasting. Using model-based reasoning techniques, it returns an ordered list of fault scenarios that could be generated from the current state, i.e., the plausible future failure modes of the system as it presently stands. The scenario generator models a Potential Fault Scenario (PFS) as a black box, the input of which is a set of states tagged with priorities and the output of which is one or more potential fault scenarios tagged by a confidence factor. The results from the system are used by a model-based diagnostician to predict the future health of the monitored system.
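
    A toy illustration of the scale of the problem and of one possible pruning heuristic: naively, every subset of the n rules is a candidate scenario (1 + 2^n in total), but only rules whose antecedents mention missing signals actually need to be branched on. The rules and signals below are invented:

```python
# Toy illustration: the naive scenario space is 1 + 2^n over n rules, but only rules whose
# antecedents mention missing signals need to be branched on. Rules and signals are invented.
from itertools import combinations

rules = {
    "R1": {"pressure", "valve_state"},
    "R2": {"temperature"},
    "R3": {"pressure"},
    "R4": {"flow_rate", "temperature"},
}
missing = {"temperature"}

naive = 1 + 2 ** len(rules)                  # 1 + 2^4 = 17, following the abstract's count
relevant = sorted(r for r, signals in rules.items() if signals & missing)
pruned = 1 + 2 ** len(relevant)              # only R2 and R4 touch missing data: 1 + 2^2 = 5

print(f"naive scenarios:  {naive}")
print(f"pruned scenarios: {pruned}")

# Branch only on the rules affected by the missing signal(s).
for k in range(len(relevant) + 1):
    for subset in combinations(relevant, k):
        print("branch on:", set(subset) or "none")
```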

  9. A Knowledge Base for Automatic Feature Recognition from Point Clouds in an Urban Scene

    Directory of Open Access Journals (Sweden)

    Xu-Feng Xing

    2018-01-01

    Full Text Available LiDAR technology can provide very detailed and highly accurate geospatial information on an urban scene for the creation of Virtual Geographic Environments (VGEs) for different applications. However, automatic 3D modeling and feature recognition from LiDAR point clouds are very complex tasks. This becomes even more complex when the data is incomplete (occlusion problem) or uncertain. In this paper, we propose to build a knowledge base comprising an ontology and semantic rules aiming at automatic feature recognition from point clouds in support of 3D modeling. First, several ontology modules are defined from different perspectives to describe an urban scene. For instance, the spatial relations module allows the formalized representation of possible topological relations extracted from point clouds. Then, a knowledge base is proposed that contains different concepts, their properties and their relations, together with constraints and semantic rules. Next, instances and their specific relations form an urban scene and are added to the knowledge base as facts. Based on the knowledge and semantic rules, a reasoning process is carried out to extract semantic features of the objects and their components in the urban scene. Finally, several experiments are presented to show the validity of our approach to recognize different semantic features of buildings from LiDAR point clouds.
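    A minimal, library-free sketch of the general idea of applying semantic rules to facts extracted from segmented point-cloud objects; the attributes, predicates and labels are invented for illustration and do not reproduce the ontology of the cited work.

      # Facts derived from segmented point-cloud objects (hypothetical attributes).
      facts = [
          {"id": "obj1", "planar": True, "vertical": True, "adjacent_to": "obj2"},
          {"id": "obj2", "planar": True, "vertical": False, "height": 0.0},
      ]

      # Semantic rules: a condition over a fact implies a feature label.
      rules = [
          (lambda f: f.get("planar") and f.get("vertical"), "Wall"),
          (lambda f: f.get("planar") and not f.get("vertical")
                     and f.get("height", 1.0) == 0.0, "Ground"),
      ]

      def classify(facts, rules):
          inferred = {}
          for fact in facts:
              for condition, label in rules:
                  if condition(fact):
                      inferred.setdefault(fact["id"], []).append(label)
          return inferred

      print(classify(facts, rules))   # {'obj1': ['Wall'], 'obj2': ['Ground']}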

  10. SAFOD Brittle Microstructure and Mechanics Knowledge Base (BM2KB)

    Science.gov (United States)

    Babaie, Hassan A.; Broda Cindi, M.; Hadizadeh, Jafar; Kumar, Anuj

    2013-07-01

    Scientific drilling near Parkfield, California has established the San Andreas Fault Observatory at Depth (SAFOD), which provides the solid earth community with short range geophysical and fault zone material data. The BM2KB ontology was developed in order to formalize the knowledge about brittle microstructures in the fault rocks sampled from the SAFOD cores. A knowledge base, instantiated from this domain ontology, stores and presents the observed microstructural and analytical data with respect to implications for brittle deformation and mechanics of faulting. These data can be searched on the knowledge base's Web interface by selecting a set of terms (classes, properties) from different drop-down lists that are dynamically populated from the ontology. In addition to this general search, a query can also be conducted to view data contributed by a specific investigator. A search by sample is done using the EarthScope SAFOD Core Viewer that allows a user to locate samples on high resolution images of core sections belonging to different runs and holes. The class hierarchy of the BM2KB ontology was initially designed using the Unified Modeling Language (UML), which was used as a visual guide to develop the ontology in OWL using the Protégé ontology editor. Various Semantic Web technologies such as the RDF, RDFS, and OWL ontology languages, SPARQL query language, and Pellet reasoning engine, were used to develop the ontology. An interactive Web application interface was developed through Jena, a Java-based framework, with AJAX technology, JSP pages, and Java servlets, and deployed via an Apache tomcat server. The interface allows the registered user to submit data related to their research on a sample of the SAFOD core. The submitted data, after initial review by the knowledge base administrator, are added to the extensible knowledge base and become available in subsequent queries to all types of users. The interface facilitates inference capabilities in the
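    A small sketch of the kind of term-driven retrieval the Web interface performs, written here against an in-memory RDF graph with rdflib; the namespace, class and property names are placeholders, not the actual BM2KB ontology.

      from rdflib import Graph, Literal, Namespace

      BM = Namespace("http://example.org/bm2kb#")   # placeholder namespace
      g = Graph()
      g.add((BM.sample42, BM.hasMicrostructure, BM.CataclasticFabric))
      g.add((BM.sample42, BM.observedBy, Literal("investigator A")))

      # Find samples exhibiting a selected microstructure, as a drop-down search would.
      query = """
      PREFIX bm: <http://example.org/bm2kb#>
      SELECT ?sample ?investigator WHERE {
          ?sample bm:hasMicrostructure bm:CataclasticFabric ;
                  bm:observedBy ?investigator .
      }
      """
      for row in g.query(query):
          print(row.sample, row.investigator)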

  11. Knowledge Based Product Configuration - a documentation tool for configuration projects

    DEFF Research Database (Denmark)

    Hvam, Lars; Malis, Martin

    2003-01-01

    . A lot of knowledge is put into these systems and many domain experts are involved. This calls for an effective documentation system in order to structure this knowledge in a way that fits the systems. Standard configuration systems do not support this kind of documentation. The chapter deals...... with the development of a Lotus Notes application that serves as a knowledge-based documentation tool for configuration projects. A prototype has been developed and tested empirically in an industrial case-company. It has proved to be a success....

  12. A Case for Embedded Natural Logic for Ontological Knowledge Bases

    DEFF Research Database (Denmark)

    Andreasen, Troels; Nilsson, Jørgen Fischer

    2014-01-01

    We argue in favour of adopting a form of natural logic for ontology-structured knowledge bases as an alternative to description logic and rule based languages. Natural logic is a form of logic resembling natural language assertions, unlike description logic. This is essential e.g. in life sciences......, where the large and evolving knowledge specifications should be directly accessible to domain experts. Moreover, natural logic comes with intuitive inference rules. The considered version of natural logic leans toward the closed world assumption (CWA) unlike the open world assumption with classical...

  13. Knowledge-based support system for requirement elaboration in design

    International Nuclear Information System (INIS)

    Furuta, Kazuo; Kondo, Shunsuke

    1994-01-01

    Design requirements are the seeds of every design activity, but their elicitation and formalization are not easy tasks. This paper proposes a method to support designers in this requirement elaboration process with a computer. In this method the cognitive work space of designers is modeled by abstraction and structural hierarchies, and supporting functions for knowledge-based requirement elaboration, requirement classification and assessment of the satisfaction status of requirements are provided within this framework. A prototype system was developed and tested using a fast breeder reactor design. (author)

  14. A real time knowledge-based alarm system EXTRA

    International Nuclear Information System (INIS)

    Ancelin, J.; Gaussot, J.P.; Legaud, P.

    1987-01-01

    EXTRA is an experimental expert system for industrial process control. The main objectives are diagnosis and operation aids. From a methodological point of view, EXTRA is based on a deep knowledge of the plant operation and on qualitative simulation principles. The application concerns all the electric power systems and the Chemical and Volume Control System of a P.W.R. nuclear plant. The tests conducted on a full-scope simulator representative of the real plant yielded excellent results and taught the authors a number of lessons. The main lesson concerns the efficiency and flexibility provided by the combination of a knowledge-based system and of an advanced mini-computer.

  15. The earth knowledge base and the global information society

    Directory of Open Access Journals (Sweden)

    A Martynenko

    2006-01-01

    Full Text Available Today many countries have adopted strategies for developing an information-oriented society and data infrastructure. Although varying in their details and means of realization, all these policies have the same aim - to build a global information society. Here in Russia this crucial role belongs to the Electronic (Digital) Earth initiative, which integrates geoinformation technologies in the Earth Knowledge Base (EKB). It is designed to promote economic, social and scientific progress. The article presents an analysis of this problem.

  16. Building the knowledge base for environmental action and sustainability

    DEFF Research Database (Denmark)

    2015-01-01

    “Knowledge is power” (Sir Francis Bacon (1561 – 1626), Religious Meditations, Of Heresis, 1597). “Science is organised knowledge. Wisdom is organised life” (Immanuel Kant (1724 – 1804)). The theme of the 29th International Conference on Informatics for Environmental Protection and the third International... was “Building the knowledge base for environmental action and sustainability”. The joint conference was designed to facilitate exchange ‘within the domain’, as well as to create a space for developing synergies between the two communities. Altogether 125 research and applied papers (including extended abstracts) from 42......

  17. Knowledge-based dialogue in Intelligent Decision Support Systems

    International Nuclear Information System (INIS)

    Hollnagel, E.

    1987-01-01

    The overall goal for the design of Intelligent Decision Support Systems (IDSS) is to enhance understanding of the process under all operating conditions. For an IDSS to be effective, it must: select or generate the right information; produce reliable and consistent information; allow flexible and effective operator interaction; relate information presentation to current plant status and problems; and make the presentation at the right time. Several ongoing R and D programs try to design and build IDSSs. A particular example is the ESPRIT project Graphics and Knowledge Based Dialogue for Dynamic Systems (GRADIENT). This project, the problems it addresses, and its uses, are discussed here.

  18. Application of knowledge based software to industrial automation in Japan

    International Nuclear Information System (INIS)

    Matsumoto, Yoshihiro

    1985-01-01

    In Japan, large industrial undertakings such as electric utilities or steel works are taking their first steps towards knowledge engineering, testing the applicability of knowledge-based software to industrial automation. The goal is to achieve more intelligent, computer-aided assistance for the personnel and thus to enhance safety, reliability, and maintenance efficiency in large industrial plants. The article presents various examples showing advantages and drawbacks of such systems, and potential applications, among others, in nuclear or fossil-fueled power plants or in electricity supply control systems. (orig./HP) [de

  19. Knowledge based systems advanced concepts, techniques and applications

    CERN Document Server

    1997-01-01

    The field of knowledge-based systems (KBS) has expanded enormously in recent years, and many important techniques and tools are currently available. Applications of KBS range from medicine to engineering and aerospace. This book provides a selected set of state-of-the-art contributions that present advanced techniques, tools and applications. These contributions have been prepared by a group of eminent researchers and professionals in the field. The theoretical topics covered include: knowledge acquisition, machine learning, genetic algorithms, knowledge management and processing under uncertainty

  20. Facilitating Fine Grained Data Provenance using Temporal Data Model

    NARCIS (Netherlands)

    Huq, M.R.; Wombacher, Andreas; Apers, Peter M.G.

    2010-01-01

    E-science applications use fine grained data provenance to maintain the reproducibility of scientific results, i.e., for each processed data tuple, the source data used to process the tuple as well as the used approach is documented. Since most of the e-science applications perform on-line

  1. Hybrid forecasting of chaotic processes: Using machine learning in conjunction with a knowledge-based model

    Science.gov (United States)

    Pathak, Jaideep; Wikner, Alexander; Fussell, Rebeckah; Chandra, Sarthak; Hunt, Brian R.; Girvan, Michelle; Ott, Edward

    2018-04-01

    A model-based approach to forecasting chaotic dynamical systems utilizes knowledge of the mechanistic processes governing the dynamics to build an approximate mathematical model of the system. In contrast, machine learning techniques have demonstrated promising results for forecasting chaotic systems purely from past time series measurements of system state variables (training data), without prior knowledge of the system dynamics. The motivation for this paper is the potential of machine learning for filling in the gaps in our underlying mechanistic knowledge that cause widely used knowledge-based models to be inaccurate. Thus, we here propose a general method that leverages the advantages of these two approaches by combining a knowledge-based model and a machine learning technique to build a hybrid forecasting scheme. Potential applications for such an approach are numerous (e.g., improving weather forecasting). We demonstrate and test the utility of this approach using a particular illustrative version of machine learning known as reservoir computing, and we apply the resulting hybrid forecaster to a low-dimensional chaotic system, as well as to a high-dimensional spatiotemporal chaotic system. These tests yield extremely promising results in that our hybrid technique is able to accurately predict for a much longer period of time than either its machine-learning component or its model-based component alone.
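    A toy sketch of the hybrid idea, far simpler than the setup in the paper: the logistic map stands in for the true chaotic system, a deliberately mis-parameterised map plays the role of the imperfect knowledge-based model, and a small echo-state reservoir is trained on both the measurement and the model's prediction. All sizes and parameters are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      true_step = lambda x: 3.9 * x * (1.0 - x)    # actual chaotic dynamics
      model_step = lambda x: 3.6 * x * (1.0 - x)   # imperfect knowledge-based model

      T = 2000                                      # training series from the true system
      xs = np.empty(T + 1); xs[0] = 0.4
      for t in range(T):
          xs[t + 1] = true_step(xs[t])

      N = 200                                       # reservoir size
      W_in = rng.uniform(-0.5, 0.5, (N, 2))         # input = [measurement, model prediction]
      W = rng.uniform(-0.5, 0.5, (N, N))
      W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # spectral radius below one

      advance = lambda r, u: np.tanh(W @ r + W_in @ u)
      states = np.zeros((T, N)); r = np.zeros(N)
      for t in range(T):
          r = advance(r, np.array([xs[t], model_step(xs[t])]))
          states[t] = r

      beta = 1e-6                                   # ridge-regression readout
      W_out = np.linalg.solve(states.T @ states + beta * np.eye(N), states.T @ xs[1:])

      x_pred = xs[T]                                # autonomous hybrid forecast
      for _ in range(20):
          r = advance(r, np.array([x_pred, model_step(x_pred)]))
          x_pred = float(r @ W_out)
          print(x_pred)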

  2. Knowledge-Based Reinforcement Learning for Data Mining

    Science.gov (United States)

    Kudenko, Daniel; Grzes, Marek

    experts have developed heuristics that help them in planning and scheduling resources in their workplace. However, this domain knowledge is often rough and incomplete. When the domain knowledge is used directly by an automated expert system, the solutions are often sub-optimal, due to the incompleteness of the knowledge, the uncertainty of environments, and the possibility of encountering unexpected situations. RL, on the other hand, can overcome the weaknesses of the heuristic domain knowledge and produce optimal solutions. In the talk we propose two techniques, which represent first steps in the area of knowledge-based RL (KBRL). The first technique [1] uses high-level STRIPS operator knowledge in reward shaping to focus the search for the optimal policy. Empirical results show that the plan-based reward shaping approach outperforms other RL techniques, including alternative manual and MDP-based reward shaping when it is used in its basic form. We showed that MDP-based reward shaping may fail and successful experiments with STRIPS-based shaping suggest modifications which can overcome encountered problems. The STRIPS-based method we propose allows expressing the same domain knowledge in a different way and the domain expert can choose whether to define an MDP or STRIPS planning task. We also evaluated the robustness of the proposed STRIPS-based technique to errors in the plan knowledge. In case STRIPS knowledge is not available, we propose a second technique [2] that shapes the reward with hierarchical tile coding. Where the Q-function is represented with low-level tile coding, a V-function with coarser tile coding can be learned in parallel and used to approximate the potential for ground states. In the context of data mining, our KBRL approaches can also be used for any data collection task where the acquisition of data may incur considerable cost. In addition, observing the data collection agent in specific scenarios may lead to new insights into optimal data
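    A compact sketch of the potential-based shaping mechanic F(s, s') = gamma * phi(s') - phi(s) in tabular Q-learning on a tiny chain task; the potential here is hand-written rather than derived from a STRIPS plan, so it only illustrates the shaping term, not the plan-based technique of the talk.

      import random

      N_STATES, GOAL = 10, 9
      ACTIONS = (-1, +1)
      alpha, gamma, eps = 0.2, 0.95, 0.1

      def potential(s):
          # Plan-derived knowledge would go here; this stand-in just rewards progress.
          return s / GOAL

      Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

      for episode in range(500):
          s = 0
          while s != GOAL:
              a = (random.choice(ACTIONS) if random.random() < eps
                   else max(ACTIONS, key=lambda act: Q[(s, act)]))
              s2 = min(max(s + a, 0), N_STATES - 1)
              r = 1.0 if s2 == GOAL else 0.0
              shaped = r + gamma * potential(s2) - potential(s)   # shaping term added
              target = shaped if s2 == GOAL else shaped + gamma * max(Q[(s2, b)] for b in ACTIONS)
              Q[(s, a)] += alpha * (target - Q[(s, a)])
              s = s2

      print(max(Q[(0, a)] for a in ACTIONS))   # value learned for the start state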

  3. Secondary Mathematics Teachers' Beliefs, Attitudes, Knowledge Base, and Practices in Meeting the Needs of English Language Learners

    Science.gov (United States)

    Gann, Linda

    2013-01-01

    The research centered on secondary mathematics teachers' beliefs, attitudes, knowledge base, and practices in meeting the academic and language needs of English language learners. Using socio-cultural theory and social practice theory to frame the study, the research design employed a mixed methods approach incorporating self-reported surveys,…

  4. A knowledge based system for training radiation emergency response personnel

    International Nuclear Information System (INIS)

    Kuriakose, K.K.; Peter, T.U.; Natarajan, A.

    1992-01-01

    One of the important aspects of radiation emergency preparedness is to impart training to emergency handling staff. Mock exercises are generally used for this purpose, but practical considerations limit the frequency of such exercises. Suitably designed computer software can be used effectively to impart training. With the advent of low-cost personal computers, the frequency with which the training programme can be conducted is unlimited. Computer software with monotonic behaviour is inadequate for such training. It is necessary to provide human-like tutoring capabilities. With the advances in knowledge-based computer systems, it is possible to develop such a system. These systems have the capability of providing individualized training. This paper describes the development of such a system for training and evaluation of agencies associated with the management of radiation emergencies. It also discusses the utility of the software as a general-purpose tutor. The details required for the preparation of data files and knowledge base files are included. It uses a student model based on performance measures. The software is developed in C under MS-DOS. It uses a rule-based expert system shell developed in C. The features of this shell are briefly described. (author). 5 refs

  5. Mindtagger: A Demonstration of Data Labeling in Knowledge Base Construction.

    Science.gov (United States)

    Shin, Jaeho; Ré, Christopher; Cafarella, Michael

    2015-08-01

    End-to-end knowledge base construction systems using statistical inference are enabling more people to automatically extract high-quality domain-specific information from unstructured data. As a result of deploying DeepDive framework across several domains, we found new challenges in debugging and improving such end-to-end systems to construct high-quality knowledge bases. DeepDive has an iterative development cycle in which users improve the data. To help our users, we needed to develop principles for analyzing the system's error as well as provide tooling for inspecting and labeling various data products of the system. We created guidelines for error analysis modeled after our colleagues' best practices, in which data labeling plays a critical role in every step of the analysis. To enable more productive and systematic data labeling, we created Mindtagger, a versatile tool that can be configured to support a wide range of tasks. In this demonstration, we show in detail what data labeling tasks are modeled in our error analysis guidelines and how each of them is performed using Mindtagger.

  6. Knowledge-based simulation using object-oriented programming

    Science.gov (United States)

    Sidoran, Karen M.

    1993-01-01

    Simulations have become a powerful mechanism for understanding and modeling complex phenomena. Their results have had substantial impact on a broad range of decisions in the military, government, and industry. Because of this, new techniques are continually being explored and developed to make them even more useful, understandable, extendable, and efficient. One such area of research is the application of the knowledge-based methods of artificial intelligence (AI) to the computer simulation field. The goal of knowledge-based simulation is to facilitate building simulations of greatly increased power and comprehensibility by making use of deeper knowledge about the behavior of the simulated world. One technique for representing and manipulating knowledge that has been enhanced by the AI community is object-oriented programming. Using this technique, the entities of a discrete-event simulation can be viewed as objects in an object-oriented formulation. Knowledge can be factual (i.e., attributes of an entity) or behavioral (i.e., how the entity is to behave in certain circumstances). Rome Laboratory's Advanced Simulation Environment (RASE) was developed as a research vehicle to provide an enhanced simulation development environment for building more intelligent, interactive, flexible, and realistic simulations. This capability will support current and future battle management research and provide a test of the object-oriented paradigm for use in large scale military applications.
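    A small sketch of the object-oriented view described above: each simulation entity carries factual knowledge as attributes and behavioural knowledge as event-handling methods, and a discrete-event loop dispatches to them. The radar entity and its parameters are invented for illustration, not part of RASE.

      import heapq

      class Entity:
          """Factual knowledge lives in attributes; behavioural knowledge in handle()."""
          def __init__(self, name):
              self.name = name
          def handle(self, event, sim):
              raise NotImplementedError

      class Radar(Entity):
          def __init__(self, name, detection_range):
              super().__init__(name)
              self.detection_range = detection_range          # factual knowledge
          def handle(self, event, sim):                       # behavioural knowledge
              if event["kind"] == "sweep":
                  print(f"{sim.now:5.1f}  {self.name} sweeps out to {self.detection_range} km")
                  sim.schedule(sim.now + 10.0, self, {"kind": "sweep"})

      class Simulation:
          def __init__(self):
              self.now, self._queue, self._seq = 0.0, [], 0
          def schedule(self, time, entity, event):
              self._seq += 1
              heapq.heappush(self._queue, (time, self._seq, entity, event))
          def run(self, until):
              while self._queue and self._queue[0][0] <= until:
                  self.now, _, entity, event = heapq.heappop(self._queue)
                  entity.handle(event, self)

      sim = Simulation()
      sim.schedule(0.0, Radar("EW-1", detection_range=250), {"kind": "sweep"})
      sim.run(until=30.0)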

  7. Secure Location Provenance for Mobile Devices

    Science.gov (United States)

    2015-07-01

    • A malicious user should not be able to hide a temporary off-track movement from the claimed location provenance. • A malicious user may want to...

  8. Uncertainty, Pluralism, and the Knowledge-Based Theory of the Firm

    DEFF Research Database (Denmark)

    Reihlen, Markus; Ringberg, Torsten

    2013-01-01

    socio-cultural conventions and other social processes. Although comprehensive in scope, we argue that a knowledge-based theory of the firm needs to integrate a cognitivist approach that includes the synergetic production of tacit and explicit knowledge, the role of reflective thinking in resolving strategic uncertainties......, and the interaction between the individual and the social. This socio-cognitive theory of the firm posits that the sustained competitive advantage of a firm is founded on the ability to align knowledge internally within the firm as well as externally with its stakeholders through the individual sense-making of feedback...

  9. Provenance-based refresh in data-oriented workflows

    KAUST Repository

    Ikeda, Robert; Salihoglu, Semih; Widom, Jennifer

    2011-01-01

    We consider a general workflow setting in which input data sets are processed by a graph of transformations to produce output results. Our goal is to perform efficient selective refresh of elements in the output data, i.e., compute the latest values of specific output elements when the input data may have changed. We explore how data provenance can be used to enable efficient refresh. Our approach is based on capturing one-level data provenance at each transformation when the workflow is run initially. Then at refresh time provenance is used to determine (transitively) which input elements are responsible for given output elements, and the workflow is rerun only on that portion of the data needed for refresh. Our contributions are to formalize the problem setting and the problem itself, to specify properties of transformations and provenance that are required for efficient refresh, and to provide algorithms that apply to a wide class of transformations and workflows. We have built a prototype system supporting the features and algorithms presented in the paper. We report preliminary experimental results on the overhead of provenance capture, and on the crossover point between selective refresh and full workflow recomputation. © 2011 ACM.
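    A minimal sketch of one-level provenance capture and selective refresh for a two-step workflow; the transformations, keys and refresh procedure are invented for illustration and do not reproduce the algorithms or prototype of the paper.

      # One-level provenance: for each output element, record the inputs it came from.
      def normalize(records, prov):
          out = {}
          for key, value in records.items():
              out[key] = value.strip().lower()
              prov.setdefault(("normalize", key), set()).add(key)
          return out

      def count_by_value(records, prov):
          out = {}
          for key, value in records.items():
              out[value] = out.get(value, 0) + 1
              prov.setdefault(("count", value), set()).add(key)
          return out

      raw = {1: " Fault ", 2: "fault", 3: "Normal"}
      prov = {}
      stage2 = count_by_value(normalize(raw, prov), prov)
      print(stage2)                              # initial workflow run with provenance capture

      def refresh(output_value, new_raw):
          """Trace provenance back transitively, then rerun only on that slice."""
          keys1 = prov[("count", output_value)]
          keys0 = set().union(*(prov[("normalize", k)] for k in keys1))
          subset = {k: new_raw[k] for k in keys0 if k in new_raw}
          return count_by_value(normalize(subset, {}), {})

      raw[2] = "normal"                          # one input element changed
      print(refresh("fault", raw))               # recomputed from inputs {1, 2} only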

  10. A method of knowledge base verification and validation for nuclear power plants expert systems

    International Nuclear Information System (INIS)

    Kwon, Il Won

    1996-02-01

    The adoption of expert systems, mainly as operator supporting systems, is becoming increasingly popular as the control algorithms of systems become more and more sophisticated and complicated. As a result of this popularity, a large number of expert systems have been developed. The nature of expert systems, however, requires that they be verified and validated carefully and that detailed methodologies for their development be devised. Therefore, it is widely noted that assuring the reliability of expert systems is very important, especially in the nuclear industry, and it is also recognized that the process of verification and validation is an essential part of reliability assurance for these systems. Research and practice have produced numerous methods for expert system verification and validation (V and V) that suggest traditional software and system approaches to V and V. However, many approaches and methods for expert system V and V are partial, unreliable, and not uniform. The purpose of this paper is to present a new approach to expert system V and V, based on Petri nets, providing a uniform model. We devise and suggest an automated tool, called COKEP (Checker Of Knowledge base using Extended Petri net), for checking incorrectness, inconsistency, and incompleteness in a knowledge base. We also suggest heuristic analysis for the validation process to show that the reasoning path is correct.
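    A simplified, Petri-net-free sketch of the kind of static checks a COKEP-style tool performs on a rule base: identical conditions leading to contradictory conclusions (inconsistency) and conditions that nothing can ever establish (incompleteness). The rule format and fact lists are assumptions for the example.

      rules = [                                   # hypothetical (conditions, conclusion) pairs
          ({"high_temp", "low_flow"}, "open_valve"),
          ({"high_temp", "low_flow"}, "close_valve"),
          ({"pump_trip"}, "start_backup"),
      ]
      contradictions = {("open_valve", "close_valve")}
      observable_facts = {"high_temp", "low_flow"}    # facts instrumentation can supply

      def check(rules):
          findings = []
          # Inconsistency: same conditions, contradictory conclusions.
          for i, (c1, a1) in enumerate(rules):
              for c2, a2 in rules[i + 1:]:
                  if c1 == c2 and ((a1, a2) in contradictions or (a2, a1) in contradictions):
                      findings.append(f"inconsistent: {sorted(c1)} -> {a1} / {a2}")
          # Incompleteness: conditions neither observable nor concluded by any rule.
          concludable = {conclusion for _, conclusion in rules}
          for conditions, conclusion in rules:
              for c in conditions - observable_facts - concludable:
                  findings.append(f"incomplete: nothing establishes '{c}' needed by '{conclusion}'")
          return findings

      print("\n".join(check(rules)))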

  11. Predicting Mycobacterium tuberculosis Complex Clades Using Knowledge-Based Bayesian Networks

    Directory of Open Access Journals (Sweden)

    Minoo Aminian

    2014-01-01

    Full Text Available We develop a novel approach for incorporating expert rules into Bayesian networks for classification of Mycobacterium tuberculosis complex (MTBC clades. The proposed knowledge-based Bayesian network (KBBN treats sets of expert rules as prior distributions on the classes. Unlike prior knowledge-based support vector machine approaches which require rules expressed as polyhedral sets, KBBN directly incorporates the rules without any modification. KBBN uses data to refine rule-based classifiers when the rule set is incomplete or ambiguous. We develop a predictive KBBN model for 69 MTBC clades found in the SITVIT international collection. We validate the approach using two testbeds that model knowledge of the MTBC obtained from two different experts and large DNA fingerprint databases to predict MTBC genetic clades and sublineages. These models represent strains of MTBC using high-throughput biomarkers called spacer oligonucleotide types (spoligotypes, since these are routinely gathered from MTBC isolates of tuberculosis (TB patients. Results show that incorporating rules into problems can drastically increase classification accuracy if data alone are insufficient. The SITVIT KBBN is publicly available for use on the World Wide Web.

  12. Analysis, Simulation, and Verification of Knowledge-Based, Rule-Based, and Expert Systems

    Science.gov (United States)

    Hinchey, Mike; Rash, James; Erickson, John; Gracanin, Denis; Rouff, Chris

    2010-01-01

    Mathematically sound techniques are used to view a knowledge-based system (KBS) as a set of processes executing in parallel and being enabled in response to specific rules being fired. The set of processes can be manipulated, examined, analyzed, and used in a simulation. The tool that embodies this technology may warn developers of errors in their rules, but may also highlight rules (or sets of rules) in the system that are underspecified (or overspecified) and need to be corrected for the KBS to operate as intended. The rules embodied in a KBS specify the allowed situations, events, and/or results of the system they describe. In that sense, they provide a very abstract specification of a system. The system is implemented through the combination of the system specification together with an appropriate inference engine, independent of the algorithm used in that inference engine. Viewing the rule base as a major component of the specification, and choosing an appropriate specification notation to represent it, reveals how additional power can be derived from an approach to the knowledge-base system that involves analysis, simulation, and verification. This innovative approach requires no special knowledge of the rules, and allows a general approach where standardized analysis, verification, simulation, and model checking techniques can be applied to the KBS.

  13. Special Section: The third provenance challenge on using the open provenance model for interoperability

    NARCIS (Netherlands)

    Simmhan, Y; Groth, P.T.; Moreau, L

    2011-01-01

    The third provenance challenge was organized to evaluate the efficacy of the Open Provenance Model (OPM) in representing and sharing provenance with the goal of improving the specification. A data loading scientific workflow that ingests data files into a relational database for the Pan-STARRS sky

  14. THE IMPORTANCE OF MENTORING IN THE KNOWLEDGE BASED ORGANIZATIONS’ MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Alexandra Teodora RUGINOSU

    2014-11-01

    Full Text Available Knowledge-based organizations mean continuous learning, performance and networking. People’s development depends on their lifelong learning. Mentoring combines the development and performance needs of individuals with those of the organization. Organizations nowadays face difficulties in recruiting and retaining qualified employees. Workforce migration is a phenomenon they have to fight constantly. Employees remain faithful to companies that give them an environment suitable for development: supportive, safe, non-judgmental and comfortable. Teamwork and trust in co-workers enable employees to show their true potential and to experiment without fear. This kind of environment can be created through a mentoring program. This paper highlights the importance of mentoring in the management of knowledge-based organizations. Mentoring helps with staff insertion, development and succession planning, increases employee motivation and talent retention, and promotes organizational culture. This study presents the benefits and drawbacks that mentoring brings to organizations and employees.

  15. A model for a knowledge-based system's life cycle

    Science.gov (United States)

    Kiss, Peter A.

    1990-01-01

    The American Institute of Aeronautics and Astronautics has initiated a Committee on Standards for Artificial Intelligence. Presented here are the initial efforts of one of the working groups of that committee. The purpose here is to present a candidate model for the development life cycle of Knowledge Based Systems (KBS). The intent is for the model to be used by the Aerospace Community and eventually evolved into a standard. The model is rooted in the evolutionary model, borrows from the spiral model, and is embedded in the standard Waterfall model for software development. It is intended to cover the development of both stand-alone and embedded KBSs. The phases of the life cycle are detailed, as are the review points that constitute the key milestones throughout the development process. The applicability and strengths of the model are discussed along with areas needing further development and refinement by the aerospace community.

  16. The effect of knowledge based view on sustainable competitive advantage

    Directory of Open Access Journals (Sweden)

    Fatemeh Rezaee

    2015-11-01

    Full Text Available This study investigates the quantitative relationship between the knowledge based view (i.e. empowering employees, promoting confidence, coding rules) and sustainable competitive advantage (i.e. market, customer, financial) within the banking industry of Iran. A valid research instrument was utilized to conduct a survey of 150 top- and middle-level managers from Mellat bank of Iran. With a response rate of 81.3 percent, 122 questionnaires were returned; the number of valid and usable questionnaires was 101. To determine the validity of the questionnaire, content validity was used, and Cronbach's alpha was used to determine the reliability of the questionnaire (KBV questionnaire 0.886, SCA questionnaire 0.843). Utilizing structural equation modeling, and after a series of exploratory and confirmatory factor analyses, it was found that KBV had the greatest effect on the market-centered SCA, while it had the least influence on the customer-centered SCA.

  17. Update Knowledge Base for Long-term Core Cooling Reliability

    International Nuclear Information System (INIS)

    Agrell, Maria; Sandervag, Oddbjoern; Amri, Abdallah; ); Bang, Young S.; Blomart, Philippe; Broecker, Annette; Pointner, Winfried; Ganzmann, Ingo; Lenogue, Bruno; Guzonas, David; Herer, Christophe; Mattei, Jean-Marie; Tricottet, Matthieu; Masaoka, Hideaki; Soltesz, Vojtech; Tarkiainen, Seppo; Ui, Atsushi; Villalba, Cristina; Zigler, Gilbert

    2013-11-01

    This revision of the Knowledge Base for Emergency Core Cooling System Recirculation Reliability (NEA/CSNI/R (95)11) describes the current status (late 2012) of the knowledge base on emergency core cooling system (ECCS) and containment spray system (CSS) suction strainer performance and long-term cooling in operating power reactors. New reactors, such as the AP1000, EPR and APR1400 that are under construction in some Organization for Economic Co-operation and Development (OECD) member countries, are not addressed in detail in this revision. The containment sump (also known as the emergency or recirculation sump in pressurized water reactors (PWRs) and pressurized heavy water reactors (PHWRs) or the suppression pools or wet wells in boiling water reactors (BWRs)) and associated ECCS strainers are parts of the ECCS in both reactor types. All nuclear power plants (NPPs) are required to have an ECCS that is capable of mitigating a design basis accident (DBA). The containment sump collects reactor coolant, ECCS injection water, and containment spray solutions, if applicable, after a loss-of-coolant accident (LOCA). The sump serves as the water source to support long-term recirculation for residual heat removal, emergency core cooling, and containment atmosphere clean-up. This water source, the related pump suction inlets, and the piping between the source and inlets are important safety-related components. In addition, if fibrous material is deposited at the fuel element spacers, core cooling can be endangered. The performance of ECCS/CSS strainers was recognized many years ago as an important regulatory and safety issue. One of the primary concerns is the potential for debris generated by a jet of high-pressure coolant during a LOCA to clog the strainer and obstruct core cooling. The issue was considered resolved for all reactor types in the mid-1990s and the OECD/NEA/CSNI published report NEA/CSNI/R(95)11 in 1996 to document the state of knowledge of ECCS performance

  18. Knowledge-based requirements analysis for automating software development

    Science.gov (United States)

    Markosian, Lawrence Z.

    1988-01-01

    We present a new software development paradigm that automates the derivation of implementations from requirements. In this paradigm, informally-stated requirements are expressed in a domain-specific requirements specification language. This language is machine-understandable and requirements expressed in it are captured in a knowledge base. Once the requirements are captured, more detailed specifications and eventually implementations are derived by the system using transformational synthesis. A key characteristic of the process is that the required human intervention is in the form of providing problem- and domain-specific engineering knowledge, not in writing detailed implementations. We describe a prototype system that applies the paradigm in the realm of communication engineering: the prototype automatically generates implementations of buffers following analysis of the requirements on each buffer.

  19. A knowledge base architecture for distributed knowledge agents

    Science.gov (United States)

    Riedesel, Joel; Walls, Bryan

    1990-01-01

    A tuple space based object oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms as in the LINDA model as well as using more well known message passing mechanisms. An implementation of the model is presented describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
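    A minimal in-process sketch of LINDA-style tuple-space primitives (out, rd, in) that knowledge agents could use to exchange facts; the locking scheme and the example tuples are illustrative and much simpler than the database-backed implementation described above.

      import threading

      class TupleSpace:
          def __init__(self):
              self._tuples = []
              self._cond = threading.Condition()

          def out(self, tup):                      # publish a tuple
              with self._cond:
                  self._tuples.append(tup)
                  self._cond.notify_all()

          def _match(self, pattern, tup):
              return len(pattern) == len(tup) and all(
                  p is None or p == t for p, t in zip(pattern, tup))

          def rd(self, pattern):                   # blocking read; tuple stays in the space
              with self._cond:
                  while True:
                      for tup in self._tuples:
                          if self._match(pattern, tup):
                              return tup
                      self._cond.wait()

          def take(self, pattern):                 # LINDA 'in': read and remove
              with self._cond:
                  while True:
                      for tup in self._tuples:
                          if self._match(pattern, tup):
                              self._tuples.remove(tup)
                              return tup
                      self._cond.wait()

      space = TupleSpace()
      space.out(("load_forecast", "bus_7", 42.0))
      print(space.rd(("load_forecast", "bus_7", None)))   # agents match on partial patterns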

  20. Design of Knowledge Bases for Plant Gene Regulatory Networks.

    Science.gov (United States)

    Mukundi, Eric; Gomez-Cano, Fabio; Ouma, Wilberforce Zachary; Grotewold, Erich

    2017-01-01

    Developing a knowledge base that contains all the information necessary for the researcher studying gene regulation in a particular organism can be accomplished in four stages. This begins with defining the data scope. We describe here the necessary information and resources, and outline the methods for obtaining data. The second stage consists of designing the schema, which involves defining the entire arrangement of the database in a systematic plan. The third stage is the implementation, the actualization of the database using software according to the predefined schema. The final stage is development, where the database is made available to users in a web-accessible system. The result is a knowledge base that integrates all the information pertaining to gene regulation, and which is easily expandable and transferable.

  1. TMS for Instantiating a Knowledge Base With Incomplete Data

    Science.gov (United States)

    James, Mark

    2007-01-01

    A computer program that belongs to the class known among software experts as output truth-maintenance-systems (output TMSs) has been devised as one of a number of software tools for reducing the size of the knowledge base that must be searched during execution of artificial- intelligence software of the rule-based inference-engine type in a case in which data are missing. This program determines whether the consequences of activation of two or more rules can be combined without causing a logical inconsistency. For example, in a case involving hypothetical scenarios that could lead to turning a given device on or off, the program determines whether a scenario involving a given combination of rules could lead to turning the device both on and off at the same time, in which case that combination of rules would not be included in the scenario.
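    A tiny sketch of the consistency test described above: before admitting a combination of rule activations into a scenario, check that their combined consequences do not drive any device to two different states at once. The rules and the device/state encoding are assumptions for the example.

      from itertools import combinations

      rule_effects = {                       # consequences of individual rule activations
          "r1": {("heater", "on")},
          "r2": {("heater", "off")},
          "r3": {("pump", "on")},
      }

      def consistent(rule_set):
          """A combination is consistent if no device is assigned two different states."""
          assigned = {}
          for rule in rule_set:
              for device, state in rule_effects[rule]:
                  if assigned.setdefault(device, state) != state:
                      return False
          return True

      for combo in combinations(rule_effects, 2):
          print(combo, "->", "ok" if consistent(combo) else "rejected")
      # ('r1', 'r2') is rejected: it would turn the heater both on and off at once.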

  2. A knowledge-based assistant for valve maintenance planning

    International Nuclear Information System (INIS)

    Winter, M.J.; Danofsky, R.A.; Spinrad, B.I.; Howard, K.

    1987-01-01

    A knowledge-based program is being developed to assist engineers in maintenance planning for safety related, motor-operated valves at a boiling water reactor. The purpose of this project is to develop the general framework for a prototype system that demonstrates the capabilities for diagnosing valve symptoms and prescribing corrective maintenance, completing a portion of the Corrective Maintenance Action Request (CMAR) form which must be prepared for each job, and managing an interactive valve data base. Minimizing user input and providing output in a form that is familiar to the maintenance planning engineer are important goals for the program. This paper describes the present features of the valve maintenance advisory system which is currently being tested

  3. Knowledge-Based Systems in Biomedicine and Computational Life Science

    CERN Document Server

    Jain, Lakhmi

    2013-01-01

    This book presents a sample of research on knowledge-based systems in biomedicine and computational life science. The contributions include: a personalized stress diagnosis system; an image analysis system for breast cancer diagnosis; analysis of neuronal cell images; structure prediction of protein; the relationship between two mental disorders; detection of cardiac abnormalities; holistic medicine based treatment; and analysis of life-science data.

  4. OWL2 benchmarking for the evaluation of knowledge based systems.

    Directory of Open Access Journals (Sweden)

    Sher Afgun Khan

    Full Text Available OWL2 semantics are becoming increasingly popular for real-world domain applications such as gene engineering and health MIS. The present work identifies the research gap that negligible attention has been paid to the performance evaluation of Knowledge Base Systems (KBS) using OWL2 semantics. To fill this identified research gap, an OWL2 benchmark for the evaluation of KBS is proposed. The proposed benchmark addresses the foundational blocks of an ontology benchmark, i.e. data schema, workload and performance metrics. The proposed benchmark is tested on memory-based, file-based, relational-database and graph-based KBS for performance and scalability measures. The results show that the proposed benchmark is able to evaluate the behaviour of different state-of-the-art KBS on OWL2 semantics. On the basis of the results, the end users (i.e. domain experts) would be able to select a suitable KBS appropriate for their domain.

  5. KBERG: KnowledgeBase for Estrogen Responsive Genes

    DEFF Research Database (Denmark)

    Tang, Suisheng; Zhang, Zhuo; Tan, Sin Lam

    2007-01-01

    Estrogen has a profound impact on human physiology affecting transcription of numerous genes. To decipher functional characteristics of estrogen responsive genes, we developed KnowledgeBase for Estrogen Responsive Genes (KBERG). Genes in KBERG were derived from Estrogen Responsive Gene Database...... (ERGDB) and were analyzed from multiple aspects. We explored the possible transcription regulation mechanism by capturing highly conserved promoter motifs across orthologous genes, using promoter regions that cover the range of [-1200, +500] relative to the transcription start sites. The motif detection...... is based on ab initio discovery of common cis-elements from the orthologous gene cluster from human, mouse and rat, thus reflecting a degree of promoter sequence preservation during evolution. The identified motifs are linked to transcription factor binding sites based on the TRANSFAC database. In addition...

  6. [Artificial intelligence--the knowledge base applied to nephrology].

    Science.gov (United States)

    Sancipriano, G P

    2005-01-01

    The idea that efficacy, efficiency, and quality in medicine cannot be achieved without organizing the huge body of knowledge of medical and nursing science is very common. Engineers and computer scientists have developed medical software with great prospects for success, but currently these software applications are not so useful in clinical practice. The medical doctor and the trained nurse live the 'information age' in many daily activities, but the main benefits are not so widespread in working activities. Artificial intelligence and, particularly, expert systems charm health staff because of their potential. The first part of this paper summarizes the characteristics of 'weak artificial intelligence' and of expert systems important in clinical practice. The second part discusses medical doctors' requirements and the current nephrologic knowledge bases available for artificial intelligence development.

  7. HINT-KB: The human interactome knowledge base

    KAUST Repository

    Theofilatos, Konstantinos A.

    2012-01-01

    Proteins and their interactions are considered to play a significant role in many cellular processes. The identification of Protein-Protein interactions (PPIs) in human is an open research area. Many Databases, which contain information about experimentally and computationally detected human PPIs as well as their corresponding annotation data, have been developed. However, these databases contain many false positive interactions, are partial and only a few of them incorporate data from various sources. To overcome these limitations, we have developed HINT-KB (http://150.140.142.24:84/Default.aspx) which is a knowledge base that integrates data from various sources, provides a user-friendly interface for their retrieval, estimates a set of features of interest and computes a confidence score for every candidate protein interaction using a modern computational hybrid methodology. © 2012 IFIP International Federation for Information Processing.

  8. CGKB: an annotation knowledge base for cowpea (Vigna unguiculata L.) methylation filtered genomic genespace sequences

    Directory of Open Access Journals (Sweden)

    Spraggins Thomas A

    2007-04-01

    Full Text Available Abstract Background Cowpea [Vigna unguiculata (L.) Walp.] is one of the most important food and forage legumes in the semi-arid tropics because of its ability to tolerate drought and grow on poor soils. It is cultivated mostly by poor farmers in developing countries, with 80% of production taking place in the dry savannah of tropical West and Central Africa. Cowpea is largely an underexploited crop with relatively little genomic information available for use in applied plant breeding. The goal of the Cowpea Genomics Initiative (CGI), funded by the Kirkhouse Trust, a UK-based charitable organization, is to leverage modern molecular genetic tools for gene discovery and cowpea improvement. One aspect of the initiative is the sequencing of the gene-rich region of the cowpea genome (termed the genespace) recovered using methylation filtration technology and providing annotation and analysis of the sequence data. Description CGKB, Cowpea Genespace/Genomics Knowledge Base, is an annotation knowledge base developed under the CGI. The database is based on information derived from 298,848 cowpea genespace sequences (GSS) isolated by methylation filtering of genomic DNA. The CGKB consists of three knowledge bases: GSS annotation and comparative genomics knowledge base, GSS enzyme and metabolic pathway knowledge base, and GSS simple sequence repeats (SSRs) knowledge base for molecular marker discovery. A homology-based approach was applied for annotations of the GSS, mainly using BLASTX against four public FASTA formatted protein databases (NCBI GenBank Proteins, UniProtKB-Swiss-Prot, UniProtKB-PIR (Protein Information Resource), and UniProtKB-TrEMBL). Comparative genome analysis was done by BLASTX searches of the cowpea GSS against four plant proteomes from Arabidopsis thaliana, Oryza sativa, Medicago truncatula, and Populus trichocarpa. The possible exons and introns on each cowpea GSS were predicted using the HMM-based Genscan gene prediction program and the

  9. Provenance trials of larch in Siberia

    Energy Technology Data Exchange (ETDEWEB)

    Milyutin, L.I. [V.N. Sukachev Inst. of Forest SB RAS, Krasnoyarsk (Russian Federation)

    1995-12-31

    Some results of provenance trials of larch in Siberia are given. These provenance trials were established in the last thirty years by efforts of V.N. Sukaczev Inst. of Forest. Provenances and species of larch were tested in some field trials distributed over Siberia between Lat. N 52 deg and 66 deg, Long. E 88 deg and 113 deg: near Krasnoyarsk, in Republic Khakasia (at altitudes of 800 and 1200 metres), in the Lower Yenisei near Turukhansk, in the west and south regions of Krasnoyarsk territory, in the Upper Lena, near Chita. 2 refs

  10. Archives and societal provenance Australian essays

    CERN Document Server

    Piggott, Michael

    2012-01-01

    Records and archival arrangements in Australia are globally relevant because Australia's indigenous people represent the oldest living culture in the world, and because modern Australia is an ex-colonial society now heavily multicultural in outlook. Archives and Societal Provenance explores this distinctiveness using the theoretical concept of societal provenance as propounded by Canadian archival scholars led by Dr Tom Nesmith. The book's seventeen essays blend new writing and re-workings of earlier work, comprising the first text to apply a societal provenance perspective to a national sett

  11. Provenance trials of larch in Siberia

    Energy Technology Data Exchange (ETDEWEB)

    Milyutin, L I [V.N. Sukachev Inst. of Forest SB RAS, Krasnoyarsk (Russian Federation)

    1996-12-31

    Some results of provenance trials of larch in Siberia are given. These provenance trials were established in the last thirty years by efforts of V.N. Sukaczev Inst. of Forest. Provenances and species of larch were tested in some field trials distributed over Siberia between Lat. N 52 deg and 66 deg, Long. E 88 deg and 113 deg: near Krasnoyarsk, in Republic Khakasia (at altitudes of 800 and 1200 metres), in the Lower Yenisei near Turukhansk, in the west and south regions of Krasnoyarsk territory, in the Upper Lena, near Chita. 2 refs

  12. System for inspection and quality assurance of software - A knowledge-based experiment with code understanding

    International Nuclear Information System (INIS)

    Das, B.K.

    1989-01-01

    This paper describes a knowledge-based prototype that inspects and quality-assures software components. The prototype model, which offers a singular representation of these components, is used to automate both the mechanical and nonmechanical activities in the quality assurance (QA) process. It is shown that the prototype, in addition to automating the QA process, provides a novel approach to understanding code. These approaches are compared with recent approaches to code understanding. The paper also presents the results of an experiment with several classes of nonsyntactic bugs. It is argued that a structured environment, as facilitated by this unique architecture, along with software development standards used in the QA process, is essential for meaningful analysis of code. 8 refs

  13. The applicability of knowledge-based scheduling to the utilities industry

    International Nuclear Information System (INIS)

    Yoshimoto, G.; Gargan, R. Jr.; Duggan, P.

    1992-01-01

    The Electric Power Research Institute (EPRI), Nuclear Power Division, has identified the three major goals of high technology applications for nuclear power plants. These goals are to enhance power production through increasing power generation efficiency, to increase productivity of the operations, and to reduce the threats to the safety of the plant. Our project responds to the second goal by demonstrating that significant productivity increases can be achieved for outage maintenance operations based on existing knowledge-based scheduling technology. Its use can also mitigate potential safety threats by integrating risk assessment features into the scheduler. The scheduling approach uses advanced techniques enabling the automation of the routine scheduling decision process that previously was handled by people. The process of removing conflicts in scheduling is automated. This is achieved by providing activity representations that allow schedulers to express a variety of different scheduling constraints and by implementing scheduling mechanisms that simulate the kinds of processes that humans use to find better solutions from a large number of possible solutions. This approach allows schedulers to express detailed constraints between activities and other activities, resources (material and personnel), and requirements that certain states exist for their execution. Our scheduler has already demonstrated its benefit in improving the shuttle processing flow management at Kennedy Space Center. Knowledge-based scheduling techniques should be examined by utilities industry researchers, developers, operators and management for application to utilities planning problems because of their great cost-benefit potential. 4 refs., 4 figs

  14. A Provenance Tracking Model for Data Updates

    Directory of Open Access Journals (Sweden)

    Gabriel Ciobanu

    2012-08-01

    Full Text Available For data-centric systems, provenance tracking is particularly important when the system is open and decentralised, such as the Web of Linked Data. In this paper, a concise but expressive calculus which models data updates is presented. The calculus is used to provide an operational semantics for a system where data and updates interact concurrently. The operational semantics of the calculus also tracks the provenance of data with respect to updates. This provides a new formal semantics extending provenance diagrams which takes into account the execution of processes in a concurrent setting. Moreover, a sound and complete model for the calculus based on ideals of series-parallel DAGs is provided. The notion of provenance introduced can be used as a subjective indicator of the quality of data in concurrent interacting systems.

  15. Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage, and Access

    International Nuclear Information System (INIS)

    Hipp, James R.; Moore, Susan G.; Myers, Stephen C.; Schultz, Craig A.; Shepherd, Ellen; Young, Christopher J.

    1999-01-01

    The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access interpolatable information which is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the Data Access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated which fits the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially-indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking triangle search for the containing triangle, and finally the NNI interpolation.
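    A small SciPy-based sketch of the data-access step: locate the containing triangle of a query point in a Delaunay tessellation and interpolate the stored surface values with barycentric weights. This is a simplification; SciPy offers no natural neighbor interpolator, so barycentric weights stand in for NNI here, and the node and value arrays are invented.

      import numpy as np
      from scipy.spatial import Delaunay

      # Tessellation nodes and a stored surface value (e.g. a travel-time correction) per node.
      nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
      values = np.array([1.0, 2.0, 3.0, 4.0, 2.5])
      tri = Delaunay(nodes)

      def interpolate(point):
          p = np.atleast_2d(point)
          simplex = int(tri.find_simplex(p)[0])     # point location step
          if simplex == -1:
              raise ValueError("query point lies outside the tessellation")
          verts = tri.simplices[simplex]
          T = tri.transform[simplex]                # affine map to barycentric coordinates
          bary = T[:2].dot(p[0] - T[2])
          weights = np.append(bary, 1.0 - bary.sum())
          return float(weights @ values[verts])

      print(interpolate((0.25, 0.25)))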

  16. Finding gene regulatory network candidates using the gene expression knowledge base.

    Science.gov (United States)

    Venkatesan, Aravind; Tripathi, Sushil; Sanz de Galdeano, Alejandro; Blondé, Ward; Lægreid, Astrid; Mironov, Vladimir; Kuiper, Martin

    2014-12-10

    Network-based approaches for the analysis of large-scale genomics data have become well established. Biological networks provide a knowledge scaffold against which the patterns and dynamics of 'omics' data can be interpreted. The background information required for the construction of such networks is often dispersed across a multitude of knowledge bases in a variety of formats. The seamless integration of this information is one of the main challenges in bioinformatics. The Semantic Web offers powerful technologies for the assembly of integrated knowledge bases that are computationally comprehensible, thereby providing a potentially powerful resource for constructing biological networks and network-based analysis. We have developed the Gene eXpression Knowledge Base (GeXKB), a semantic web technology based resource that contains integrated knowledge about gene expression regulation. To affirm the utility of GeXKB we demonstrate how this resource can be exploited for the identification of candidate regulatory network proteins. We present four use cases that were designed from a biological perspective in order to find candidate members relevant for the gastrin hormone signaling network model. We show how a combination of specific query definitions and additional selection criteria derived from gene expression data and prior knowledge concerning candidate proteins can be used to retrieve a set of proteins that constitute valid candidates for regulatory network extensions. Semantic web technologies provide the means for processing and integrating various heterogeneous information sources. The GeXKB offers biologists such an integrated knowledge resource, allowing them to address complex biological questions pertaining to gene expression. This work illustrates how GeXKB can be used in combination with gene expression results and literature information to identify new potential candidates that may be considered for extending a gene regulatory network.

  17. The COPD Knowledge Base: enabling data analysis and computational simulation in translational COPD research.

    Science.gov (United States)

    Cano, Isaac; Tényi, Ákos; Schueller, Christine; Wolff, Martin; Huertas Migueláñez, M Mercedes; Gomez-Cabrero, David; Antczak, Philipp; Roca, Josep; Cascante, Marta; Falciani, Francesco; Maier, Dieter

    2014-11-28

    Previously we generated a chronic obstructive pulmonary disease (COPD) specific knowledge base (http://www.copdknowledgebase.eu) from clinical and experimental data, text-mining results and public databases. This knowledge base allowed the retrieval of specific molecular networks together with integrated clinical and experimental data. The COPDKB has now been extended to integrate over 40 public data sources on functional interaction (e.g. signal transduction, transcriptional regulation, protein-protein interaction, gene-disease association). In addition we integrated COPD-specific expression and co-morbidity networks connecting over 6 000 genes/proteins with physiological parameters and disease states. Three mathematical models describing different aspects of systemic effects of COPD were connected to clinical and experimental data. We have completely redesigned the technical architecture of the user interface and now provide html and web browser-based access and form-based searches. A network search enables the use of interconnecting information and the generation of disease-specific sub-networks from general knowledge. Integration with the Synergy-COPD Simulation Environment enables multi-scale integrated simulation of individual computational models while integration with a Clinical Decision Support System allows delivery into clinical practice. The COPD Knowledge Base is the only publicly available knowledge resource dedicated to COPD and combining genetic information with molecular, physiological and clinical data as well as mathematical modelling. Its integrated analysis functions provide overviews about clinical trends and connections while its semantically mapped content enables complex analysis approaches. We plan to further extend the COPDKB by offering it as a repository to publish and semantically integrate data from relevant clinical trials. The COPDKB is freely available after registration at http://www.copdknowledgebase.eu.

  18. Multi-analytical approach applied to the provenance study of marbles used as covering slabs in the archaeological submerged site of Baia (Naples, Italy): The case of the “Villa con ingresso a protiro”

    International Nuclear Information System (INIS)

    Ricca, Michela; Belfiore, Cristina Maria; Ruffolo, Silvestro Antonio; Barca, Donatella; De Buergo, Monica Alvarez; Crisci, Gino Mirocle; La Russa, Mauro Francesco

    2015-01-01

    Highlights: • Archaeometric investigations of ancient marbles from an underwater environment. • Distinguish the different varieties of marbles using minero-petrographic and geochemical-isotopic investigations. • Compare the results with literature data, allowing the existing database to be broadened. - Abstract: This paper is focused on archaeometric investigations of white marbles taken from the submerged archaeological site of Baia (Naples). The marine area includes the ruins of this ancient Roman city, whose structures range from luxurious maritime villas and imperial buildings with private thermae and tabernae, to more simple and modest houses. Analyses were carried out on fifty marble fragments of covering slabs, belonging to several pavements of the monumental villa, called the Villa con ingresso a protiro, in order to ascertain their provenance. The most distinctive properties of marbles are their textural features, especially the maximum grain size (MGS), together with the Mn content and the variation of stable isotopes. These features, supported by the contribution of other variables and studies, establish the basis for the correct identification of the marbles. For this purpose, minero-petrographic and geochemical techniques were used. Results were compared with literature data on white marbles commonly used in antiquity, especially in the Mediterranean basin, and showed that a variety of precious marbles from Carrara, Docimium (Afyon), Thasos-D, Aphrodisias, Proconnesos (Marmara), Paros and Pentelicon were used in the ancient Roman city of Baia, confirming the importance of the submerged archaeological site and also allowing researchers to broaden the existing database.

  19. Multi-analytical approach applied to the provenance study of marbles used as covering slabs in the archaeological submerged site of Baia (Naples, Italy): The case of the “Villa con ingresso a protiro”

    Energy Technology Data Exchange (ETDEWEB)

    Ricca, Michela, E-mail: michela.ricca@unical.it [Dipartimento di Biologia, Ecologia e Scienze della Terra (DiBEST), University of Calabria, Via Pietro Bucci, 87036 Arcavacata di Rende (CS) (Italy); Belfiore, Cristina Maria [Dipartimento di Scienze Biologiche, Geologiche e Ambientali, University of Catania, Corso Italia 57, 95129 Catania (Italy); Ruffolo, Silvestro Antonio; Barca, Donatella [Dipartimento di Biologia, Ecologia e Scienze della Terra (DiBEST), University of Calabria, Via Pietro Bucci, 87036 Arcavacata di Rende (CS) (Italy); De Buergo, Monica Alvarez [Instituto de Geociencias (CSIC-UCM), Facultad de Ciencias Geológicas, planta 7, despacho 17.4c/José Antonio Nováis 12, 28040 Madrid (Spain); Crisci, Gino Mirocle; La Russa, Mauro Francesco [Dipartimento di Biologia, Ecologia e Scienze della Terra (DiBEST), University of Calabria, Via Pietro Bucci, 87036 Arcavacata di Rende (CS) (Italy)

    2015-12-01

    Highlights: • Archaeometric investigations of ancient marbles from an underwater environment. • Distinguish the different varieties of marbles using minero-petrographic and geochemical-isotopic investigations. • Compare the results with literature data, allowing the existing database to be broadened. - Abstract: This paper is focused on archaeometric investigations of white marbles taken from the submerged archaeological site of Baia (Naples). The marine area includes the ruins of this ancient Roman city, whose structures range from luxurious maritime villas and imperial buildings with private thermae and tabernae, to more simple and modest houses. Analyses were carried out on fifty marble fragments of covering slabs, belonging to several pavements of the monumental villa, called the Villa con ingresso a protiro, in order to ascertain their provenance. The most distinctive properties of marbles are their textural features, especially the maximum grain size (MGS), together with the Mn content and the variation of stable isotopes. These features, supported by the contribution of other variables and studies, establish the basis for the correct identification of the marbles. For this purpose, minero-petrographic and geochemical techniques were used. Results were compared with literature data on white marbles commonly used in antiquity, especially in the Mediterranean basin, and showed that a variety of precious marbles from Carrara, Docimium (Afyon), Thasos-D, Aphrodisias, Proconnesos (Marmara), Paros and Pentelicon were used in the ancient Roman city of Baia, confirming the importance of the submerged archaeological site and also allowing researchers to broaden the existing database.

  20. The Relationships among Early Childhood Educators' Beliefs, Knowledge Bases, and Practices Related to Early Literacy.

    Science.gov (United States)

    Islam, Chhanda

    A study was conducted to determine and compare the literacy beliefs, knowledge bases, and practices of early childhood educators who espouse emergent literacy and reading readiness philosophies; to explore the relationship among beliefs, knowledge bases, and practices; and to examine the degree to which beliefs, knowledge bases, and practices were…

  1. NetWeaver for EMDS user guide (version 1.1): a knowledge base development system.

    Science.gov (United States)

    Keith M. Reynolds

    1999-01-01

    The guide describes use of the NetWeaver knowledge base development system. Knowledge representation in NetWeaver is based on object-oriented fuzzy-logic networks that offer several significant advantages over the more traditional rulebased representation. Compared to rule-based knowledge bases, NetWeaver knowledge bases are easier to build, test, and maintain because...

  2. Knowledge-based model of competition in restaurant industry: a qualitative study about culinary competence, creativity, and innovation in five full-service restaurants in Jakarta

    OpenAIRE

    NAPITUPULU JOSHUA H.; ASTUTI ENDANG SITI; HAMID DJAMHUR; RAHARDJO KUSDI

    2016-01-01

    The purpose of the study is to provide an in-depth description, in the form of an analysis of culinary competence, creativity and innovation, that develops a knowledge-based model of competence in the full-service restaurant business. Studies on restaurants have generally focused on customers, particularly customer satisfaction and loyalty, and very few studies have discussed internal competitive factors in the restaurant business. The study aims at filling this research gap, using knowledge-based approach t...

  3. Knowledge-based engineering of a PLC controlled telescope

    Science.gov (United States)

    Pessemier, Wim; Raskin, Gert; Saey, Philippe; Van Winckel, Hans; Deconinck, Geert

    2016-08-01

    As the new control system of the Mercator Telescope is being finalized, we can review some technologies and design methodologies that are advantageous, despite their relative uncommonness in astronomical instrumentation. Particular for the Mercator Telescope is that it is controlled by a single high-end soft-PLC (Programmable Logic Controller). Using off-the-shelf components only, our distributed embedded system controls all subsystems of the telescope such as the pneumatic primary mirror support, the hydrostatic bearing, the telescope axes, the dome, the safety system, and so on. We show how real-time application logic can be written conveniently in typical PLC languages (IEC 61131-3) and in C++ (to implement the pointing kernel) using the commercial TwinCAT 3 programming environment. This software processes the inputs and outputs of the distributed system in real-time via an observatory-wide EtherCAT network, which is synchronized with high precision to an IEEE 1588 (PTP, Precision Time Protocol) time reference clock. Taking full advantage of the ability of soft-PLCs to run both real-time and non real-time software, the same device also hosts the most important user interfaces (HMIs or Human Machine Interfaces) and communication servers (OPC UA for process data, FTP for XML configuration data, and VNC for remote control). To manage the complexity of the system and to streamline the development process, we show how most of the software, electronics and systems engineering aspects of the control system have been modeled as a set of scripts written in a Domain Specific Language (DSL). When executed, these scripts populate a Knowledge Base (KB) which can be queried to retrieve specific information. By feeding the results of those queries to a template system, we were able to generate very detailed "browsable" web-based documentation about the system, but also PLC software code, Python client code, model verification reports, etc. The aim of this paper is to
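
    The generate-from-a-knowledge-base pattern sketched in this abstract (scripts populate a KB, queries over the KB feed a template engine) can be illustrated in a few lines of Python; the device records, the query, and the generated table are invented placeholders, not the Mercator Telescope's actual DSL, KB schema, or PLC code.

      from string import Template

      # "Scripts" populate a tiny knowledge base of devices (illustrative records only).
      knowledge_base = [
          {"name": "dome_azimuth_motor", "type": "servo", "bus": "EtherCAT", "node": 11},
          {"name": "mirror_support_valve", "type": "pneumatic", "bus": "EtherCAT", "node": 12},
      ]

      # A query over the KB: all EtherCAT devices, sorted by node address.
      ethercat_devices = sorted(
          (d for d in knowledge_base if d["bus"] == "EtherCAT"), key=lambda d: d["node"]
      )

      # A template turns the query results into documentation (or, analogously, code stubs).
      row = Template("| $name | $type | node $node |")
      print("| device | type | address |")
      for device in ethercat_devices:
          print(row.substitute(device))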

  4. Building organisational cyber resilience: A strategic knowledge-based view of cyber security management.

    Science.gov (United States)

    Ferdinand, Jason

    The concept of cyber resilience has emerged in recent years in response to the recognition that cyber security is more than just risk management. Cyber resilience is the goal of organisations, institutions and governments across the world and yet the emerging literature is somewhat fragmented due to the lack of a common approach to the subject. This limits the possibility of effective collaboration across public, private and governmental actors in their efforts to build and maintain cyber resilience. In response to this limitation, and to calls for a more strategically focused approach, this paper offers a knowledge-based view of cyber security management that explains how an organisation can build, assess, and maintain cyber resilience.

  5. Integrated Knowledge Based Expert System for Disease Diagnosis System

    Science.gov (United States)

    Arbaiy, Nureize; Sulaiman, Shafiza Eliza; Hassan, Norlida; Afizah Afip, Zehan

    2017-08-01

    The role and importance of healthcare systems in improving quality of life and social welfare in a society have been well recognized. Attention should be given to raising awareness and implementing appropriate measures to improve health care. Therefore, a computer-based system was developed to serve as an alternative for people to self-diagnose their health status based on given symptoms. This strategy should be emphasized so that people can utilize the information correctly as a reference to enjoy a healthier life. Hence, a Web-based Community Center for Healthcare Diagnosis system was developed based on an expert system technique. An expert system reasoning technique is employed in the system to provide information about treatment and prevention of the diseases based on the given symptoms. At present, three diseases are included: arthritis, thalassemia and pneumococcal disease. Sets of rules and facts are managed in the knowledge-based system. Web-based technology is used as a platform to disseminate the information so that users can make appropriate use of it. This system will benefit people who wish to increase health awareness and seek expert knowledge on the diseases by performing self-diagnosis for early disease detection.
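
    A minimal sketch of the rule-based matching such a self-diagnosis system relies on is shown below; the symptom lists are illustrative placeholders rather than the system's actual medical knowledge base, and real rules would of course be far richer.

      # Illustrative rules only: each disease is matched against a set of reported symptoms.
      RULES = {
          "arthritis": {"joint pain", "stiffness", "swelling"},
          "thalassemia": {"fatigue", "pale skin", "slow growth"},
          "pneumococcal disease": {"fever", "cough", "chest pain"},
      }

      def diagnose(reported_symptoms, minimum_overlap=2):
          """Return candidate diseases whose rule matches enough of the reported symptoms."""
          reported = set(reported_symptoms)
          candidates = []
          for disease, required in RULES.items():
              overlap = len(required & reported)
              if overlap >= minimum_overlap:
                  candidates.append((disease, overlap / len(required)))
          return sorted(candidates, key=lambda item: item[1], reverse=True)

      print(diagnose(["fever", "cough", "fatigue"]))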

  6. Compiling knowledge-based systems from KEE to Ada

    Science.gov (United States)

    Filman, Robert E.; Bock, Conrad; Feldman, Roy

    1990-01-01

    The dominant technology for developing AI applications is to work in a multi-mechanism, integrated, knowledge-based system (KBS) development environment. Unfortunately, systems developed in such environments are inappropriate for delivering many applications - most importantly, they carry the baggage of the entire Lisp environment and are not written in conventional languages. One resolution of this problem would be to compile applications from complex environments to conventional languages. Described here are the first efforts to develop a system for compiling KBSs developed in KEE into Ada (trademark). This system is called KATYDID, for KEE/Ada Translation Yields Development Into Delivery. KATYDID includes early prototypes of a run-time KEE core (object-structure) library module for Ada, and translation mechanisms for knowledge structures, rules, and Lisp code to Ada. Using these tools, part of a simple expert system was compiled (not quite automatically) to run in a purely Ada environment. This experience has given us various insights on Ada as an artificial intelligence programming language, potential solutions to some of the engineering difficulties encountered in the early work, and inspiration for future system development.

  7. Knowledge-Based operation planning system for boiling water reactors

    International Nuclear Information System (INIS)

    Tatsuya Iwamoto; Shungo Sakurai; Hitoshi Uematsu; Makoto Tsuiki

    1987-01-01

    A knowledge-based boiling water reactor operation planning system was developed to support core operators or core management engineers in making core operation plans, by automatically generating suboptimal core operation procedures. The procedures are obtained by searching a branching tree of the possible core statuses (nodes) and the elementary operations that change the core status (branches). A path that ends at the target node and contains only operationally feasible nodes can be a candidate solution. The core eigenvalue, the power distribution and the thermal limit parameters at key points are calculated by running a three-dimensional (3-D) BWR core physics simulator to examine the feasibility of the nodes and the performance of candidates. To obtain a practically acceptable solution within a reasonable time, rather than making a time-consuming effort to find the optimum one, a depth-first search, together with heuristic branch-bounding, was used to search the branching tree. The system was applied to actual operation planning with real plant data and gave satisfactory results. It can be concluded that the system can be applied to generate core operation procedures as a substitute for core management experts.
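
    The search strategy described above, depth-first expansion of elementary operations with infeasible nodes pruned, can be sketched generically as follows; the operation set, the feasibility check, and the target test are toy stand-ins for the 3-D core physics simulator and its thermal-limit checks, not actual BWR physics.

      def depth_first_plan(state, target_reached, feasible, operations, apply_op, depth_limit):
          """Return the first feasible sequence of operations that reaches the target."""
          def search(current, path):
              if target_reached(current):
                  return path
              if len(path) >= depth_limit:
                  return None
              for op in operations:
                  nxt = apply_op(current, op)
                  if not feasible(nxt):          # heuristic branch-bounding: prune bad nodes
                      continue
                  result = search(nxt, path + [op])
                  if result is not None:
                      return result
              return None
          return search(state, [])

      # Toy usage: drive a scalar "core state" from 0.0 to at least 1.0 in feasible steps.
      plan = depth_first_plan(
          state=0.0,
          target_reached=lambda s: s >= 1.0,
          feasible=lambda s: s <= 1.2,           # stand-in for thermal-limit checks
          operations=[0.4, 0.25],                # stand-in for rod-pattern / flow changes
          apply_op=lambda s, op: s + op,
          depth_limit=6,
      )
      print(plan)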

  8. Research on the construction of knowledge base for institutes

    International Nuclear Information System (INIS)

    Yang Ru

    2014-01-01

    Knowledge bases (KBs) are very important for institutes. They can help train employees and improve their work, supply more information to directors for making sound decisions, and support the construction of a learning organization that promotes innovation. Institutes possess several information systems, but problems remain, such as the inadequate use of documents and tacit knowledge that is neither described nor communicated. An institute KB is organized around programs; it stresses the integrity and secrecy of programs and authorized access. Libraries are well placed to construct the KB, since the library is the information center of the institute. An institute KB includes: a training KB, exchanges on technical issues, departmental KBs, personal KBs, and a KB of specialists. Because of the low cost, many institutes adopt open-source software such as DSpace, EPrints, Fedora, CDSware and Greenstone. KB systems are chosen by institutes depending on the types of knowledge, technical capability, funding and so on. A KB is constructed by collecting, sorting and describing key knowledge, and by connecting, accessing, updating and innovating it. Program KBs from different places and disciplines will be united in the future. (author)

  9. Nuclear reactions video (knowledge base on low energy nuclear physics)

    International Nuclear Information System (INIS)

    Zagrebaev, V.; Kozhin, A.

    1999-01-01

    The NRV (nuclear reactions video) is an open and permanently extended global system for the management and graphical representation of nuclear data and for video-graphic computer simulation of low energy nuclear dynamics. It consists of a complete and regularly updated nuclear database and well-known theoretical models of low energy nuclear reactions, which together form the 'low energy nuclear knowledge base'. The NRV solves two main problems: 1) fast and visualized retrieval and processing of experimental data on nuclear structure and nuclear reactions; 2) the possibility for any inexperienced user to analyze experimental data within reliable, commonly used models of nuclear dynamics. The system is based on the following principal features: network and code compatibility with the main existing nuclear databases; maximal simplicity of handling, with an extended menu, a friendly graphical interface, hypertext descriptions of the models, and so on; and maximal visualization of input data, the dynamics of the studied processes and the final results by means of real three-dimensional images, plots, tables and formulas, and three-dimensional animation. All the codes are implemented as native Windows applications and run under Windows 95/NT.

  10. Knowledge Based Help desk System in Nuclear Malaysia

    International Nuclear Information System (INIS)

    Mohamad Safuan Sulaiman; Abdul Muin Abdul Rahman; Norzalina Nasirudin; Khairiel Adyani Abdul Ghani; Abdul Aziz Mhd Ramli; Mohd Ashhar Khalid

    2012-01-01

    The knowledge-based (K-based) help desk system is a knowledge-oriented, web-based system that supports the business processes of technical service providers. It is a multi-centric system which focuses on end-users, technical workers and higher-level management through the utilization of knowledge which resides and grows within the system. The objectives of the system are to be user-friendly, to capture technical knowledge for efficient performance, and to educate users towards self-reliance. These objectives were achieved through the improvement of the help desk business process and better management of technical knowledge. The system has been tested and implemented in the Information Technology Center (IT), the Engineering Division (BKJ) and the Instrumentation and Automation Center (IAC) at the Malaysian Nuclear Agency (Nuclear Malaysia). Higher levels of user satisfaction and faster growth of the technical knowledge repository have been recorded in the system. This paper describes the help desk system from the perspective of the management of its technical knowledge, which contributes to strengthening the organizational knowledge assets of Nuclear Malaysia as the national nuclear research institution. (Author)

  11. Dynamic reasoning in a knowledge-based system

    Science.gov (United States)

    Rao, Anand S.; Foo, Norman Y.

    1988-01-01

    Any space-based system, whether it is a robot arm assembling parts in space or an onboard system monitoring the space station, has to react to changes which cannot be foreseen. As a result, apart from having domain-specific knowledge as in current expert systems, a space-based AI system should also have general principles of change. This paper presents a modal logic which can not only represent change but also reason with it. Three primitive operations, expansion, contraction and revision, are introduced, and axioms which specify how the knowledge base should change when the external world changes are also specified. Accordingly, the notion of dynamic reasoning is introduced, which, unlike existing forms of reasoning, provides general principles of change. Dynamic reasoning is based on two main principles, namely minimize change and maximize coherence. A possible-world semantics which incorporates the above two principles is also discussed. The paper concludes by discussing how the dynamic reasoning system can be used to specify actions and hence form an integral part of an autonomous reasoning and planning system.
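
    A minimal propositional sketch of the three belief-change operations named above (expansion, contraction, and revision) is given below; real operators of this kind work on logically closed theories, whereas this toy version treats the knowledge base as a plain set of literals and implements revision via the usual contract-then-expand reading (the Levi identity).

      def negate(literal):
          return literal[1:] if literal.startswith("-") else "-" + literal

      def expand(kb, fact):
          """Expansion: add the new fact without checking consistency."""
          return kb | {fact}

      def contract(kb, fact):
          """Contraction: give up the fact (here simply by removing it)."""
          return kb - {fact}

      def revise(kb, fact):
          """Revision: make room for the fact by first contracting its negation."""
          return expand(contract(kb, negate(fact)), fact)

      kb = {"arm_free", "-payload_gripped"}
      kb = revise(kb, "payload_gripped")     # the world changed: the payload is now gripped
      print(sorted(kb))                      # ['arm_free', 'payload_gripped']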

  12. How Quality Improvement Practice Evidence Can Advance the Knowledge Base.

    Science.gov (United States)

    OʼRourke, Hannah M; Fraser, Kimberly D

    2016-01-01

    Recommendations for the evaluation of quality improvement interventions have been made in order to improve the evidence base of whether, to what extent, and why quality improvement interventions affect chosen outcomes. The purpose of this article is to articulate why these recommendations are appropriate to improve the rigor of quality improvement intervention evaluation as a research endeavor, but inappropriate for the purposes of everyday quality improvement practice. To support our claim, we describe the differences between quality improvement interventions that occur for the purpose of practice as compared to research. We then carefully consider how feasibility, ethics, and the aims of evaluation each impact how quality improvement interventions that occur in practice, as opposed to research, can or should be evaluated. Recommendations that fit the evaluative goals of practice-based quality improvement interventions are needed to support fair appraisal of the distinct evidence they produce. We describe a current debate on the nature of evidence to assist in reenvisioning how quality improvement evidence generated from practice might complement that generated from research, and contribute in a value-added way to the knowledge base.

  13. MO-D-BRC-03: Knowledge-Based Planning

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Q. [Duke University Medical Center (United States)

    2016-06-15

    Treatment planning is a central part of radiation therapy, including delineation of tumor volumes and critical organs, setting treatment goals of prescription doses to the tumor targets and tolerance doses to the critical organs, and finally generation of treatment plans to meet the treatment goals. National groups like RTOG have led the effort to standardize treatment goals of the prescription doses to the tumor targets and tolerance doses to the critical organs based on accumulated knowledge from decades of abundant clinical trial experience. The challenge for each clinical department is how to achieve or surpass these set goals within the time constraints of clinical practice. Using fifteen test cases from different treatment sites, such as head and neck, prostate with and without pelvic lymph nodes, and SBRT spine, we will present the clinical utility of advanced planning tools, including knowledge-based, automated, and multiple-criteria-based tools that are clinically implemented. The objectives of this session are to: (1) understand the differences among these three advanced planning tools; (2) provide clinical assessments of the utility of the advanced planning tools; and (3) discuss clinical challenges of treatment planning with large variations in tumor volumes and their relationships with adjacent critical organs. Ping Xia received a research grant from Philips. Jackie Wu received a research grant from Varian. P. Xia: research support from Philips and Varian; Q. Wu: NIH, Varian Medical.

  14. A Knowledge-based Environment for Software Process Performance Analysis

    Directory of Open Access Journals (Sweden)

    Natália Chaves Lessa Schots

    2015-08-01

    Full Text Available Background: Process performance analysis is a key step for implementing continuous improvement in software organizations. However, the knowledge required to execute such analysis is not trivial, and the person responsible for executing it must be provided with appropriate support. Aim: This paper presents a knowledge-based environment, named SPEAKER, proposed for supporting software organizations during the execution of process performance analysis. SPEAKER comprises a body of knowledge and a set of activities and tasks for software process performance analysis, along with supporting tools for executing these activities and tasks. Method: We conducted an informal literature review and a systematic mapping study, which provided basic requirements for the proposed environment. We implemented the SPEAKER environment by integrating supporting tools for the execution of the activities and tasks of performance analysis with the knowledge necessary to execute them, in order to meet the variability presented by the characteristics of these activities. Results: In this paper, we describe each SPEAKER module and the individual evaluations of these modules, and also present an example of use showing how the environment can guide the user through a specific performance analysis activity. Conclusion: Although we have only conducted individual evaluations of SPEAKER's modules, the example of use indicates the feasibility of the proposed environment. Therefore, the environment as a whole will be further evaluated to verify whether it attains its goal of assisting in the execution of process performance analysis by non-specialist people.

  15. Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes.

    Science.gov (United States)

    Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J; Wang, Liliang; Lin, Jianguo

    2016-12-13

    The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However these alloys, and in particular their high strength variants, exhibit limited formability at room temperature, and high temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies, namely: the prediction of a material's forming limit under hot stamping conditions, and the tool life prediction under multi-cycle loading conditions.
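
    The KBC-FE idea of evaluating advanced models outside the FE solver can be pictured as a post-processing function applied to exported element histories; the threshold-style forming-limit check below is purely illustrative and is not one of the paper's actual material or friction models.

      import numpy as np

      def forming_limit_module(major_strain, minor_strain, limit_curve):
          """Stand-in 'cloud' module: flag elements whose strain state exceeds a limit curve."""
          allowed_major = np.interp(minor_strain, limit_curve[:, 0], limit_curve[:, 1])
          return major_strain > allowed_major

      # Illustrative forming-limit curve (minor strain, allowable major strain) and FE export.
      flc = np.array([[-0.2, 0.45], [0.0, 0.30], [0.2, 0.38]])
      major = np.array([0.28, 0.41, 0.25])
      minor = np.array([0.00, 0.10, -0.15])
      print(forming_limit_module(major, minor, flc))   # [False  True False]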

  16. A knowledge based system for creep-fatigue assessment

    International Nuclear Information System (INIS)

    Holdsworth, S.R.

    1999-01-01

    A knowledge based system was developed in the BRITE-EURAM C-FAT project to store the material property information necessary to perform complex creep-fatigue assessments and to thereby improve the effectiveness of data retrieval for such purposes. The C-FAT KBS incorporates a multi-level database which is structured to contain not only 'reduced' deformation and fracture test data, but also to enable ready access to the derived parameter constants for the constitutive and model equations used in a range of assessment procedures. The data management scheme is reviewed. The C-FAT KBS also has a dynamic worked example module which allows the sensitivity of predicted lifetimes to material property input data to be evaluated by a number of procedures. Complex cycle creep-fatigue endurance predictions are particularly sensitive to the creep property data used in assessment, and this is demonstrated with reference to the results of a number of large single edge notched bend specimen feature tests performed on a 1CrMoV turbine casting steel at 550 C. (orig.)

  17. Knowledge-based on-line vibration monitoring diagnose

    International Nuclear Information System (INIS)

    Johansson, L.G.; Karlsson, A.; Noeremark, A.

    1990-01-01

    ABB STAL developed some years ago a knowledge-based on-line vibration analysis system (working name KOVA). KOVA is intended to work together with some type of vibration monitoring system; at present it is adapted to TVM 300. KOVA has no controlling function. It will only diagnose the actual situation and give the user explanations and proposals for actions to be taken. During the development work, great experience has been gained of the features this type of system demands. This paper will present the outlines of the application and also discuss how to make diagnoses based both on general rules and on historical vibration cases for that particular unit (or identical units). Another subject that this paper will outline is the representation and evaluation of knowledge. KOVA serves as a decision-support system for the operator. Since KOVA will often give the operator more than one possible diagnosis as the cause of a fault, it is of great importance to give the operator comprehensive explanations and as many facts as possible. It is also important to rank the suggested diagnoses in some way. In KOVA these demands are effectively supported. The models and tools used to realize this functionality will be described in this paper.

  18. The Enterprise’ Performance in the Knowledge Based Society

    Directory of Open Access Journals (Sweden)

    Nicoleta Barbuta-Misu

    2008-01-01

    Full Text Available As in the traditional enterprise, the performance of enterprises in the knowledge-based society is expressed through the same well-known financial indicators: return on equity, the profit margin, return on assets, gross margin, asset turnover, inventory turnover, the collection period, days' sales in cash, the payable period, fixed-asset turnover, balance sheet ratios, coverage ratios, market value leverage ratios, liquidity ratios, return on invested capital and many others. The differences appear in the way this performance is achieved by the enterprises. The current knowledge-based society promotes the methods and models of rational management that will lead enterprises to achieve performance. As a first step, documents of a financial character such as the income statement, the balance sheet and the schedules to a balance sheet have started to include references to the brain capital that is considered the key to success in business. In this paper I intend to present the effects on an enterprise's financial performance of the main components of the brain capital: the human capital, characterised through the employees' competences and skills; the organizational capital, which defines the internal structures of the enterprise, including the informatics structure; and the social capital, related to the enterprise's relations with third parties (investors, banks, customers, suppliers etc.). The brain capital must not be regarded as a passing vogue but as a necessity whose consideration and evaluation mean that, to the old economic-financial rules used in decision making, knowledge- and information-based decisions must be added.

  19. THE ROLE OF INTELLECTUAL CAPITAL IN KNOWLEDGE - BASED SOCIETY

    Directory of Open Access Journals (Sweden)

    Denisa-Elena Parpandel

    2013-01-01

    Full Text Available In a knowledge-based society, organizations undergo permanent changes and transformations, and the key factor in such changes is intellectual capital, regarded as one of the most critical, yet most strategic values an organization might own. The analysis of intellectual capital and the knowledge society has, over the last decades, primarily emerged in private companies, whereas at present there is increasing concern across all fields of activity. The goal of this paper is to emphasize the importance of intellectual capital as a source of innovation and novelty used to create competitive advantages for organizations in the era of knowledge, where man must rely on intellect, intuition and creativity. The present paper is an exploratory endeavour based on a qualitative method, in which various information sources are used to conceptualize the terms intellectual capital and knowledge society: the specialty literature, case studies, mass-media articles, reports of in-field organizations, etc. Organizations should use all the tangible and intangible resources they have in order to secure their success and also to build a knowledge society, which involves going a long way, based on an ample, complex process in which innovation has a major role and a global nature.

  20. MO-D-BRC-03: Knowledge-Based Planning

    International Nuclear Information System (INIS)

    Wu, Q.

    2016-01-01

    Treatment planning is a central part of radiation therapy, including delineation of tumor volumes and critical organs, setting treatment goals of prescription doses to the tumor targets and tolerance doses to the critical organs, and finally generation of treatment plans to meet the treatment goals. National groups like RTOG have led the effort to standardize treatment goals of the prescription doses to the tumor targets and tolerance doses to the critical organs based on accumulated knowledge from decades of abundant clinical trial experience. The challenge for each clinical department is how to achieve or surpass these set goals within the time constraints of clinical practice. Using fifteen test cases from different treatment sites, such as head and neck, prostate with and without pelvic lymph nodes, and SBRT spine, we will present the clinical utility of advanced planning tools, including knowledge-based, automated, and multiple-criteria-based tools that are clinically implemented. The objectives of this session are to: (1) understand the differences among these three advanced planning tools; (2) provide clinical assessments of the utility of the advanced planning tools; and (3) discuss clinical challenges of treatment planning with large variations in tumor volumes and their relationships with adjacent critical organs. Ping Xia received a research grant from Philips. Jackie Wu received a research grant from Varian. P. Xia: research support from Philips and Varian; Q. Wu: NIH, Varian Medical.

  1. Qualitative processing of uncertainty, conflicts and redundancy in knowledge bases

    International Nuclear Information System (INIS)

    Zbytovsky, V.

    1994-01-01

    This paper describes two techniques created and implemented in the course of the development of the real-time on-line expert system Recon at the Nuclear Research Institute at Rez, Czech Republic. The first of them is the qualitative processing of uncertainty, which is based on the introduction of a third logic value for logic data objects and a credibility flag for arithmetic data objects. The treatment of the third value and the credibility flags during inference, the explanation method based on a graphic representation, and the processing of uncertainty during explanation are also described. The second technique is semantic checking of knowledge bases, which enables us to uncover parts of the bases that are meaningless, either because of an error during their implementation into a base or because they are redundant. The paper explains the basic terms of this method, such as the so-called conflicts, K-groups and K-situations. The two types of conflict (dead-end and bubble) are also discussed. The paper also presents the complete mathematical apparatus on which the checking method is based. (author). 4 refs, tabs
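
    The "third logic value" mentioned above can be illustrated with Kleene-style three-valued connectives in a few lines; this is a generic sketch of the idea, not Recon's actual implementation, and it uses None to stand for "unknown".

      # Three-valued (Kleene) logic: True, False, or None for "unknown".
      def and3(a, b):
          if a is False or b is False:
              return False
          if a is True and b is True:
              return True
          return None

      def or3(a, b):
          if a is True or b is True:
              return True
          if a is False and b is False:
              return False
          return None

      def not3(a):
          return None if a is None else not a

      print(and3(True, None))   # None: cannot conclude while one operand is unknown
      print(or3(True, None))    # True: one true operand already decides the disjunction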

  2. KNOWLEDGE-BASED ROBOT VISION SYSTEM FOR AUTOMATED PART HANDLING

    Directory of Open Access Journals (Sweden)

    J. Wang

    2012-01-01

    Full Text Available

    ENGLISH ABSTRACT: This paper discusses an algorithm incorporating a knowledge-based vision system into an industrial robot system for handling parts intelligently. A continuous fuzzy controller was employed to extract boundary information in a computationally efficient way. The developed algorithm for on-line part recognition using fuzzy logic is shown to be an effective solution to extract the geometric features of objects. The proposed edge vector representation method provides enough geometric information and facilitates the object geometric reconstruction for gripping planning. Furthermore, a part-handling model was created by extracting the grasp features from the geometric features.

    AFRIKAANS ABSTRACT (translated): This article describes a knowledge-based vision system algorithm that is incorporated into an industrial robot system in order to achieve intelligent part handling. A continuous fuzzy controller was used to determine object information by means of a computationally efficient method. The developed algorithm for on-line part recognition uses fuzzy logic and is shown to be an effective method for determining the geometric information of objects. The proposed edge vector method provides sufficient information and makes geometric reconstruction of the object possible for grip planning. Furthermore, a part-handling model was developed by deriving the grasp features from the geometric properties.

  3. Real-time application of knowledge-based systems

    Science.gov (United States)

    Brumbaugh, Randal W.; Duke, Eugene L.

    1989-01-01

    The Rapid Prototyping Facility (RPF) was developed to meet a need for a facility which allows flight systems concepts to be prototyped in a manner which allows for real-time flight test experience with a prototype system. This need was focused during the development and demonstration of the expert system flight status monitor (ESFSM). The ESFSM was a prototype system developed on a LISP machine, but lack of a method for progressive testing and problem identification led to an impractical system. The RPF concept was developed, and the ATMS designed to exercise its capabilities. The ATMS Phase 1 demonstration provided a practical vehicle for testing the RPF, as well as a useful tool. ATMS Phase 2 development continues. A dedicated F-18 is expected to be assigned for facility use in late 1988, with RAV modifications. A knowledge-based autopilot is being developed using the RPF. This is a system which provides elementary autopilot functions and is intended as a vehicle for testing expert system verification and validation methods. An expert system propulsion monitor is being prototyped. This system provides real-time assistance to an engineer monitoring a propulsion system during a flight.

  4. Structural design systems using knowledge-based techniques

    International Nuclear Information System (INIS)

    Orsborn, K.

    1993-01-01

    Engineering information management and the corresponding information systems are of a strategic importance for industrial enterprises. This thesis treats the interdisciplinary field of designing computing systems for structural design and analysis using knowledge-based techniques. Specific conceptual models have been designed for representing the structure and the process of objects and activities in a structural design and analysis domain. In this thesis, it is shown how domain knowledge can be structured along several classification principles in order to reduce complexity and increase flexibility. By increasing the conceptual level of the problem description and representation of the domain knowledge in a declarative form, it is possible to enhance the development, maintenance and use of software for mechanical engineering. This will result in a corresponding increase of the efficiency of the mechanical engineering design process. These ideas together with the rule-based control point out the leverage of declarative knowledge representation within this domain. Used appropriately, a declarative knowledge representation preserves information better, is more problem-oriented and change-tolerant than procedural representations. 74 refs

  5. Integrating prediction, provenance, and optimization into high energy workflows

    Energy Technology Data Exchange (ETDEWEB)

    Schram, M.; Bansal, V.; Friese, R. D.; Tallent, N. R.; Yin, J.; Barker, K. J.; Stephan, E.; Halappanavar, M.; Kerbyson, D. J.

    2017-10-01

    We propose a novel approach for efficient execution of workflows on distributed resources. The key components of this framework include: performance modeling to quantitatively predict workflow component behavior; optimization-based scheduling such as choosing an optimal subset of resources to meet demand and assignment of tasks to resources; distributed I/O optimizations such as prefetching; and provenance methods for collecting performance data. In preliminary results, these techniques improve throughput on a small Belle II workflow by 20%.
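
    A minimal sketch of the scheduling ingredient mentioned above, assigning tasks to resources using predicted runtimes, is shown as a greedy earliest-finish heuristic; the task names and runtime numbers are invented and the Belle II workflow itself is not modeled.

      import heapq

      def greedy_schedule(predicted_runtimes, n_resources):
          """Assign each task to the resource that currently becomes free earliest."""
          heap = [(0.0, resource) for resource in range(n_resources)]  # (available_at, id)
          heapq.heapify(heap)
          assignment = {}
          for task, runtime in sorted(predicted_runtimes.items(), key=lambda kv: -kv[1]):
              available_at, resource = heapq.heappop(heap)
              assignment[task] = resource
              heapq.heappush(heap, (available_at + runtime, resource))
          return assignment

      runtimes = {"simulate": 40.0, "reconstruct": 25.0, "skim": 10.0, "analyze": 5.0}
      print(greedy_schedule(runtimes, n_resources=2))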

  6. VIP: A knowledge-based design aid for the engineering of space systems

    Science.gov (United States)

    Lewis, Steven M.; Bellman, Kirstie L.

    1990-01-01

    The Vehicles Implementation Project (VIP), a knowledge-based design aid for the engineering of space systems is described. VIP combines qualitative knowledge in the form of rules, quantitative knowledge in the form of equations, and other mathematical modeling tools. The system allows users rapidly to develop and experiment with models of spacecraft system designs. As information becomes available to the system, appropriate equations are solved symbolically and the results are displayed. Users may browse through the system, observing dependencies and the effects of altering specific parameters. The system can also suggest approaches to the derivation of specific parameter values. In addition to providing a tool for the development of specific designs, VIP aims at increasing the user's understanding of the design process. Users may rapidly examine the sensitivity of a given parameter to others in the system and perform tradeoffs or optimizations of specific parameters. A second major goal of VIP is to integrate the existing corporate knowledge base of models and rules into a central, symbolic form.
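
    The behavior described above, equations solved symbolically as soon as enough parameters are known, can be sketched with SymPy; the rocket-equation example and the numbers are illustrative only and are not taken from VIP itself.

      import sympy as sp

      dv, isp, g0, m0, mf = sp.symbols("dv isp g0 m0 mf", positive=True)
      rocket_eq = sp.Eq(dv, isp * g0 * sp.log(m0 / mf))

      # As parameter values become available, substitute them and solve for what remains.
      known = {isp: 300, g0: 9.81, m0: 1200, dv: 2500}
      remaining = sp.solve(rocket_eq.subs(known), mf)
      print([sp.N(sol, 6) for sol in remaining])   # final mass consistent with the knowns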

  7. Use of Occupancy Models to Evaluate Expert Knowledge-based Species-Habitat Relationships

    Directory of Open Access Journals (Sweden)

    Monica N. Iglecia

    2012-12-01

    Full Text Available Expert knowledge-based species-habitat relationships are used extensively to guide conservation planning, particularly when data are scarce. Purported relationships describe the initial state of knowledge, but are rarely tested. We assessed support in the data for suitability rankings of vegetation types based on expert knowledge for three terrestrial avian species in the South Atlantic Coastal Plain of the United States. Experts used published studies, natural history, survey data, and field experience to rank vegetation types as optimal, suitable, and marginal. We used single-season occupancy models, coupled with land cover and Breeding Bird Survey data, to examine the hypothesis that patterns of occupancy conformed to species-habitat suitability rankings purported by experts. Purported habitat suitability was validated for two of three species. As predicted for the Eastern Wood-Pewee (Contopus virens) and Brown-headed Nuthatch (Sitta pusilla), occupancy was strongly influenced by vegetation types classified as "optimal habitat" by the species suitability rankings for nuthatches and wood-pewees. Contrary to predictions, Red-headed Woodpecker (Melanerpes erythrocephalus) models that included vegetation types as covariates received similar support by the data as models without vegetation types. For all three species, occupancy was also related to sampling latitude. Our results suggest that covariates representing other habitat requirements might be necessary to model occurrence of generalist species like the woodpecker. The modeling approach described herein provides a means to test expert knowledge-based species-habitat relationships, and hence, help guide conservation planning.
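
    A minimal sketch of the single-season occupancy model used above (constant detection probability, occupancy modeled as a function of one site covariate via a logistic link) is given below; the detection histories and covariate values are simulated here, not the Breeding Bird Survey data.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit

      rng = np.random.default_rng(1)
      n_sites, n_visits = 200, 4
      habitat = rng.normal(size=n_sites)                 # e.g. amount of "optimal" cover
      psi_true = expit(-0.5 + 1.2 * habitat)             # occupancy probability per site
      z = rng.binomial(1, psi_true)                      # latent occupancy state
      y = rng.binomial(n_visits, 0.4 * z)                # detections over repeat visits

      def negative_log_likelihood(params):
          b0, b1, logit_p = params
          psi = expit(b0 + b1 * habitat)
          p = expit(logit_p)
          detected = y > 0
          # Sites with at least one detection are certainly occupied.
          ll_det = np.log(psi) + y * np.log(p) + (n_visits - y) * np.log1p(-p)
          # Sites with no detections: occupied-but-missed, or genuinely unoccupied.
          ll_none = np.log(psi * (1.0 - p) ** n_visits + (1.0 - psi))
          return -np.sum(np.where(detected, ll_det, ll_none))

      fit = minimize(negative_log_likelihood, x0=np.zeros(3), method="Nelder-Mead")
      print(fit.x)   # estimates of (intercept, habitat effect, logit detection probability)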

  8. Developing an ontological explosion knowledge base for business continuity planning purposes.

    Science.gov (United States)

    Mohammadfam, Iraj; Kalatpour, Omid; Golmohammadi, Rostam; Khotanlou, Hasan

    2013-01-01

    Industrial accidents are among the most known challenges to business continuity. Many organisations have lost their reputation following devastating accidents. To manage the risks of such accidents, it is necessary to accumulate sufficient knowledge regarding their roots, causes and preventive techniques. The required knowledge might be obtained through various approaches, including databases. Unfortunately, many databases are hampered by (among other things) static data presentations, a lack of semantic features, and the inability to present accident knowledge as discrete domains. This paper proposes the use of Protégé software to develop a knowledge base for the domain of explosion accidents. Such a structure has a higher capability to improve information retrieval compared with common accident databases. To accomplish this goal, a knowledge management process model was followed. The ontological explosion knowledge base (EKB) was built for further applications, including process accident knowledge retrieval and risk management. The paper will show how the EKB has a semantic feature that enables users to overcome some of the search constraints of existing accident databases.

  9. Development of a Prototype Model-Form Uncertainty Knowledge Base

    Science.gov (United States)

    Green, Lawrence L.

    2016-01-01

    Uncertainties are generally classified as either aleatory or epistemic. Aleatory uncertainties are those attributed to random variation, either naturally or through manufacturing processes. Epistemic uncertainties are generally attributed to a lack of knowledge. One type of epistemic uncertainty is called model-form uncertainty. The term model-form means that among the choices to be made during a design process within an analysis, there are different forms of the analysis process, which each give different results for the same configuration at the same flight conditions. Examples of model-form uncertainties include the grid density, grid type, and solver type used within a computational fluid dynamics code, or the choice of the number and type of model elements within a structures analysis. The objectives of this work are to identify and quantify a representative set of model-form uncertainties and to make this information available to designers through an interactive knowledge base (KB). The KB can then be used during probabilistic design sessions, so as to enable the possible reduction of uncertainties in the design process through resource investment. An extensive literature search has been conducted to identify and quantify typical model-form uncertainties present within aerospace design. An initial attempt has been made to assemble the results of this literature search into a searchable KB, usable in real time during probabilistic design sessions. A concept of operations and the basic structure of a model-form uncertainty KB are described. Key operations within the KB are illustrated. Current limitations in the KB, and possible workarounds are explained.

  10. A distributed knowledge-based system for the optimum utilisation of South African wool

    Directory of Open Access Journals (Sweden)

    Nomusa Dlodlo

    2009-09-01

    particular end-products they manufacture or could manufacture. To achieve this and ensure accessibility to such continuously updated information, it is essential to develop an integrated computer-based system. It is with the above in mind that a knowledge-based system for the optimum utilisation of South African wool has been developed, which is described here. This paper reviews relevant work in this field and covers wool production statistics in South Africa, the end uses of the wool fibre versus the diameter of the fibre, the advantages of distributed architectures, and the flow of processes in a wool utilization system. It then sets out the concept and development of the proposed system, including the architecture of the proposed expert system, the associated analysis and finally the conclusions. The components of the expert system, namely the knowledge base, inference engine, knowledge acquisition component, and explanation system, are described. The architecture of the system incorporates the concept of distributed systems and the related advantages in its general architecture and within its internal components. It marries both expert and general knowledge-based systems, consisting of a combination of an ordinary knowledge-based system (KBS) that can be queried for information and an expert system that provides advice to users. The distributed system developed involves a collection of autonomous components that are interconnected, which enables these components to coordinate their activities and share the resources of the system, so that users perceive the system as a single integrated facility. There are a number of advantages of such a distributed system, and these are articulated in the paper. This approach not only allows incremental development of the system, but also facilitates sharing of data and information. The distributed nature of the architecture of the system developed consists of three main elements: The expert system to advise on the

  11. Bridging the provenance gap: opportunities and challenges tracking in and ex silico provenance in sUAS workflows

    Science.gov (United States)

    Thomer, A.

    2017-12-01

    Data provenance - the record of the varied processes that went into the creation of a dataset, as well as the relationships between resulting data objects - is necessary to support the reusability, reproducibility and reliability of earth science data. In sUAS-based research, capturing provenance can be particularly challenging because of the breadth and distributed nature of the many platforms used to collect, process and analyze data. In any given project, multiple drones, controllers, computers, software systems, sensors, cameras, image-processing algorithms and data processing workflows are used over sometimes long periods of time. These platforms and processes result in dozens - if not hundreds - of data products in varying stages of readiness for analysis and sharing. Provenance tracking mechanisms are needed to make the relationships between these many data products explicit, and therefore more reusable and shareable. In this talk, I discuss opportunities and challenges in tracking provenance in sUAS-based research, and identify gaps in current workflow-capture technologies. I draw on prior work conducted as part of the IMLS-funded Site-Based Data Curation project, in which we developed methods of documenting in and ex silico (that is, computational and non-computational) workflows, and demonstrate this approach's applicability to research with sUASes. I conclude with a discussion of ontologies and other semantic technologies that have potential application in sUAS research.

  12. Semantics-based plausible reasoning to extend the knowledge coverage of medical knowledge bases for improved clinical decision support

    OpenAIRE

    Mohammadhassanzadeh, Hossein; Van Woensel, William; Abidi, Samina Raza; Abidi, Syed Sibte Raza

    2017-01-01

    Background Capturing complete medical knowledge is challenging - often due to incomplete patient Electronic Health Records (EHR), but also because of valuable, tacit medical knowledge hidden away in physicians' experiences. To extend the coverage of incomplete medical knowledge-based systems beyond their deductive closure, and thus enhance their decision-support capabilities, we argue that innovative, multi-strategy reasoning approaches should be applied. In particular, plausible reasoning mech...

  13. Utilizing Provenance in Reusable Research Objects

    Directory of Open Access Journals (Sweden)

    Zhihao Yuan

    2018-03-01

    Full Text Available Science is conducted collaboratively, often requiring the sharing of knowledge about computational experiments. When experiments include only datasets, they can be shared using Uniform Resource Identifiers (URIs) or Digital Object Identifiers (DOIs). An experiment, however, seldom includes only datasets, but more often includes software, its past execution, provenance, and associated documentation. The Research Object has recently emerged as a comprehensive and systematic method for the aggregation and identification of diverse elements of computational experiments. While a necessary method, mere aggregation is not sufficient for the sharing of computational experiments. Other users must be able to easily recompute on these shared research objects. Computational provenance is often the key to enable such reuse. In this paper, we show how reusable research objects can utilize provenance to correctly repeat a previous reference execution, to construct a subset of a research object for partial reuse, and to reuse existing contents of a research object for modified reuse. We describe two methods to summarize provenance that aid in understanding the contents and past executions of a research object. The first method obtains a process-view by collapsing low-level system information, and the second method obtains a summary graph by grouping related nodes and edges, with the goal of obtaining a graph view similar to the application workflow. Through detailed experiments, we show the efficacy and efficiency of our algorithms.
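
    The "summary graph" idea described above, grouping related provenance nodes and edges so the result resembles the application-level workflow, can be sketched with NetworkX as follows; the grouping attribute and the example provenance records are illustrative and are not the paper's actual algorithm or data.

      import networkx as nx

      # A toy provenance graph: fine-grained nodes tagged with the workflow stage they belong to.
      prov = nx.DiGraph()
      prov.add_node("raw.csv", stage="input")
      prov.add_node("clean.py", stage="preprocess")
      prov.add_node("clean.csv", stage="preprocess")
      prov.add_node("fit.py", stage="model")
      prov.add_node("model.pkl", stage="model")
      prov.add_edges_from([("raw.csv", "clean.py"), ("clean.py", "clean.csv"),
                           ("clean.csv", "fit.py"), ("fit.py", "model.pkl")])

      def summarize(graph, key="stage"):
          """Collapse nodes that share the same grouping attribute into one summary node."""
          summary = nx.DiGraph()
          group_of = {n: data[key] for n, data in graph.nodes(data=True)}
          summary.add_nodes_from(set(group_of.values()))
          for u, v in graph.edges():
              if group_of[u] != group_of[v]:
                  summary.add_edge(group_of[u], group_of[v])
          return summary

      print(list(summarize(prov).edges()))   # [('input', 'preprocess'), ('preprocess', 'model')]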

  14. Model-based Abstraction of Data Provenance

    NARCIS (Netherlands)

    Probst, Christian W.; Hansen, René Rydhof

    Identifying provenance of data provides insights to the origin of data and intermediate results, and has recently gained increased interest due to data-centric applications. In this work we extend a data-centric system view with actors handling the data and policies restricting actions. This

  15. Creative economy and knowledge-based society. Perspectives for Romania

    Directory of Open Access Journals (Sweden)

    Istudor Laura Gabriela

    2017-07-01

    Full Text Available Creative economy is a rather new concept that started developing during the last decade and is currently applied to a variety of activities and professions. It has become an important sector of the global economy, being sustained and promoted by the European Union, especially in the context of an innovative and knowledge-based society. Within this new type of economy, creativity, innovation and knowledge management are essential factors that lead to smart, sustainable and inclusive development with regard to the creation of new jobs and to social inclusion requirements. According to John Howkins (2001), the creative industries/sectors include art, research, advertising, movies, theatre and software, with the possibility of the concept of creative economy being extended to other non-artistic and IT-related fields, where improvements are expected to arise through innovation and creativity. The Global Creativity Index (GCI) and the European Innovation Scoreboard (EIS) are two benchmarking tools that measure the degree of creativity and innovation of the countries in the European Union, placing Romania within the last positions, especially with respect to intellectual property rights and entrepreneurship. The research methodology consists of both qualitative and quantitative methods, while the research questions to be answered are: What is the degree of innovation in Romania compared to other states? What can be done in order to increase the level of innovation in Romania? From this viewpoint, the paper analyzes the development of the creative industries/sectors in Romania, in the context of creative economy and innovation. The objective of the paper is to analyze the extent to which the concept of creative economy can be promoted and implemented in Romania, given its increasing importance at the international level, with countries such as the United Kingdom having already adopted strategies to sustain this kind of economy in past years. In order to

  16. "THE KNOWLEDGE TRIANGLE" IN A KNOWLEDGE-BASED SOCIETY

    Directory of Open Access Journals (Sweden)

    Rus Mircea-Iosif

    2013-07-01

    Full Text Available The knowledge-based society is the stage at which mankind currently finds itself, and it aims to raise the living standards of the population as well as to increase the level of knowledge. To achieve this latter goal, the states of the world, and especially those in the European Union, must ensure adequate funding; therefore, in 2011 it was decided at EU level to create an Innovation Union involving all the European countries, while the Horizon 2020 programme was proposed to stimulate and finance research and innovation. The results of the programme, an ”Innovation Union”, have begun to be felt: in 2011, the major companies headquartered in the European Union increased their investments in R&D by 8.9% compared to 6.1% in 2010. This increase was almost equal to that of the U.S.A. companies (9%), higher than the world average (7.6%) and superior to that of Japanese companies (1.5%). The sectors that used research-development activity have tended to show increases in employment above average. I believe this information highlights the fact that the European Union may become attractive for research-development and innovation investments even for businesses outside the EU, and this can result in job creation and increased competitiveness of this field in the states of the European Union. In the introductory part of the article, I briefly present general notions of the three component activities of the ”knowledge triangle”; in the second part I present the knowledge society with several of its features; in the third part I show some provisions of the Horizon 2020 programme to stimulate research and innovation; in the fourth part I present the connection of innovation activity to private enterprise and the stimulation of entrepreneurial initiative in the field of innovation; and the conclusions show that research does not stop with achieving the objectives and finding the research outcomes, but is the background for further

  17. IMPORTANCE OF THE HUMAN FACTOR IN THE KNOWLEDGE-BASED SOCIETY

    Directory of Open Access Journals (Sweden)

    Angela BRETCU

    2016-02-01

    Full Text Available The paper approaches the paradigmatic changes of the current economic situation in the context of post-modernism and its challenges, which reconsider human society according to new criteria. One of the characteristics of post-modernism is the development of information and communication techniques, which allowed the emergence of the „knowledge-based society”, whose consequence is the effervescence of fast, barrier-free knowledge, absolute freedom of debate and equality of opportunity in the virtual space, but also the relativisation of information and an increased danger of manipulation, misleading, or even falsification of the truth. New challenges thus occur in social life, and especially in economic life, where the human factor becomes increasingly important for the evolution of society. In this context, education seems to play the decisive part, but an education focused on real values, where truth and ethics prevail over efficiency and performance.

  18. ADEpedia: a scalable and standardized knowledge base of Adverse Drug Events using semantic web technology.

    Science.gov (United States)

    Jiang, Guoqian; Solbrig, Harold R; Chute, Christopher G

    2011-01-01

    A source of semantically coded Adverse Drug Event (ADE) data can be useful for identifying common phenotypes related to ADEs. We proposed a comprehensive framework for building a standardized ADE knowledge base (called ADEpedia) by combining an ontology-based approach with semantic web technology. The framework comprises four primary modules: 1) an XML2RDF transformation module; 2) a data normalization module based on the NCBO Open Biomedical Annotator; 3) an RDF store based persistence module; and 4) a front-end module based on a Semantic Wiki for review and curation. A prototype is successfully implemented to demonstrate the capability of the system to integrate multiple drug data and ontology resources and open web services for ADE data standardization. A preliminary evaluation is performed to demonstrate the usefulness of the system, including the performance of the NCBO annotator. In conclusion, the semantic web technology provides a highly scalable framework for ADE data source integration and standard query service.
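
    As a hedged illustration of the kind of RDF representation and standard query service such a framework exposes, the short rdflib sketch below stores two triples and runs a SPARQL query over them; the namespace, predicate names, and drug/event identifiers are hypothetical and not taken from ADEpedia.

    from rdflib import Graph, Literal, Namespace

    ADE = Namespace("http://example.org/adepedia/")   # hypothetical namespace
    g = Graph()
    g.add((ADE.warfarin, ADE.hasAdverseEvent, ADE.bleeding))
    g.add((ADE.bleeding, ADE.label, Literal("Haemorrhage")))

    query = """
    PREFIX ade: <http://example.org/adepedia/>
    SELECT ?event ?label WHERE {
      ade:warfarin ade:hasAdverseEvent ?event .
      ?event ade:label ?label .
    }
    """
    for event, label in g.query(query):
        print(event, label)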

  19. A Survey on Portuguese Lexical Knowledge Bases: Contents, Comparison and Combination

    Directory of Open Access Journals (Sweden)

    Hugo Gonçalo Oliveira

    2018-02-01

    Full Text Available In the last decade, several lexical-semantic knowledge bases (LKBs) were developed for Portuguese, by different teams and following different approaches. Most of them are open and freely available for the community. Those LKBs are briefly analysed here, with a focus on size, structure, and overlapping contents. However, we go further and exploit all of the analysed LKBs in the creation of new LKBs, based on the redundant contents. Both original and redundancy-based LKBs are then compared, indirectly, based on the performance of automatic procedures that exploit them for solving four different semantic analysis tasks. In addition to conclusions on the performance of the original LKBs, results show that, instead of selecting a single LKB to use, it is generally worth combining the contents of all the open Portuguese LKBs, towards better results.

  20. Integrating design and production planning with knowledge-based inspection planning system

    International Nuclear Information System (INIS)

    Abbasi, Ghaleb Y.; Ketan, Hussein S.; Adil, Mazen B.

    2005-01-01

    In this paper an intelligent environment that integrates design and inspection early in the design stage is presented. A hybrid knowledge-based approach integrating computer-aided design (CAD) and computer-aided inspection planning (CAIP) was developed, hereafter called computer-aided design and inspection planning (CADIP). CADIP was adopted for automated dimensional inspection planning. Critical functional features were screened based on certain attributes of part features for the inspection planning application. Testing the model resulted in minimizing the number of probing vectors associated with the most important features in the inspected prismatic part, a significant reduction in inspection costs and the release of human labor. In total, this tends to increase customer satisfaction as the final goal of the developed system. (author)

  1. Augmentation of Explicit Spatial Configurations by Knowledge-Based Inference on Geometric Fields

    Directory of Open Access Journals (Sweden)

    Dan Tappan

    2009-04-01

    Full Text Available A spatial configuration of a rudimentary, static, real-world scene with known objects (animals) and properties (positions and orientations) contains a wealth of syntactic and semantic spatial information that can contribute to a computational understanding far beyond what its quantitative details alone convey. This work presents an approach that (1) quantitatively represents what a configuration explicitly states, (2) integrates this information with implicit, commonsense background knowledge of its objects and properties, (3) infers additional, contextually appropriate, commonsense spatial information from and about their interrelationships, and (4) augments the original representation with this combined information. A semantic network represents explicit, quantitative information in a configuration. An inheritance-based knowledge base of relevant concepts supplies implicit, qualitative background knowledge to support semantic interpretation. Together, these structures provide a simple, nondeductive, constraint-based, geometric logical formalism to infer substantial implicit knowledge for intrinsic and deictic frames of spatial reference.
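
    To make the step from explicit quantitative properties to inferred qualitative relations concrete, here is a small Python sketch that derives an intrinsic "in front of" relation from positions and an orientation; the 60-degree acceptance cone is an assumed threshold for illustration, not a value from the paper.

    import math

    def in_front_of(observer_pos, observer_heading_deg, target_pos, cone_deg=60):
        """True if the target lies within a cone ahead of the observer (intrinsic frame)."""
        dx = target_pos[0] - observer_pos[0]
        dy = target_pos[1] - observer_pos[1]
        bearing = math.degrees(math.atan2(dy, dx))
        diff = (bearing - observer_heading_deg + 180) % 360 - 180
        return abs(diff) <= cone_deg / 2

    horse_pos, horse_heading = (0.0, 0.0), 90.0   # horse at the origin, facing +y
    dog_pos = (0.0, 3.0)
    print(in_front_of(horse_pos, horse_heading, dog_pos))   # True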

  2. The PBase Scientific Workflow Provenance Repository

    Directory of Open Access Journals (Sweden)

    Víctor Cuevas-Vicenttín

    2014-10-01

    Full Text Available Scientific workflows and their supporting systems are becoming increasingly popular for compute-intensive and data-intensive scientific experiments. The advantages scientific workflows offer include rapid and easy workflow design, software and data reuse, scalable execution, sharing and collaboration, and other advantages that altogether facilitate “reproducible science”. In this context, provenance – information about the origin, context, derivation, ownership, or history of some artifact – plays a key role, since scientists are interested in examining and auditing the results of scientific experiments. However, in order to perform such analyses on scientific results as part of extended research collaborations, an adequate environment and tools are required. Concretely, the need arises for a repository that will facilitate the sharing of scientific workflows and their associated execution traces in an interoperable manner, also enabling querying and visualization. Furthermore, such functionality should be supported while taking performance and scalability into account. With this purpose in mind, we introduce PBase: a scientific workflow provenance repository implementing the ProvONE proposed standard, which extends the emerging W3C PROV standard for provenance data with workflow specific concepts. PBase is built on the Neo4j graph database, thus offering capabilities such as declarative and efficient querying. Our experiences demonstrate the power gained by supporting various types of queries for provenance data. In addition, PBase is equipped with a user friendly interface tailored for the visualization of scientific workflow provenance data, making the specification of queries and the interpretation of their results easier and more effective.
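
    The following sketch shows the flavour of declarative provenance querying that a Neo4j-backed repository like PBase enables, using the official Python driver; the node labels, relationship type, and properties (Execution, Data, :USED, workflow) are assumptions made for illustration rather than the actual ProvONE/PBase schema.

    from neo4j import GraphDatabase

    # Connection details and graph schema are assumptions for this sketch.
    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
    query = """
    MATCH (e:Execution)-[:USED]->(d:Data)
    WHERE e.workflow = $wf
    RETURN e.id AS execution, collect(d.name) AS inputs
    """
    with driver.session() as session:
        for record in session.run(query, wf="montage"):
            print(record["execution"], record["inputs"])
    driver.close()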

  3. Data Provenance for Agent-Based Models in a Distributed Memory

    Directory of Open Access Journals (Sweden)

    Delmar B. Davis

    2018-04-01

    Full Text Available Agent-Based Models (ABMs) assist with studying emergent collective behavior of individual entities in social, biological, economic, network, and physical systems. Data provenance can support ABM by explaining individual agent behavior. However, there is no provenance support for ABMs in a distributed setting. The Multi-Agent Spatial Simulation (MASS) library provides a framework for simulating ABMs at fine granularity, where agents and spatial data are shared application resources in a distributed memory. We introduce a novel approach to capture ABM provenance in a distributed memory, called ProvMASS. We evaluate our technique with traditional data provenance queries and performance measures. Our results indicate that a configurable approach can capture provenance that explains coordination of distributed shared resources, simulation logic, and agent behavior while limiting performance overhead. We also show the ability to support practical analyses (e.g., agent tracking) and storage requirements for different capture configurations.
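
    A toy, single-process Python sketch of the idea of recording agent-level provenance during a simulation is given below; ProvMASS does this for distributed MASS simulations with shared places and agents, whereas this sketch only logs which shared place each agent read and the action that followed.

    import random

    provenance = []                                        # (step, agent_id, place, value, action)
    places = {i: random.randint(0, 9) for i in range(5)}   # shared spatial data

    def agent_step(step, agent_id, position):
        value = places[position]                           # read a shared resource
        action = "move" if value > 4 else "stay"
        provenance.append((step, agent_id, position, value, action))
        return action

    for step in range(3):
        for agent_id in range(2):
            agent_step(step, agent_id, position=random.randrange(5))

    for entry in provenance:
        print(entry)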

  4. A knowledge-base verification of NPP expert systems using extended Petri nets

    International Nuclear Information System (INIS)

    Kwon, Il Won; Seong, Poong Hyun

    1995-01-01

    The verification phase of a knowledge base is an important part of developing reliable expert systems, especially in the nuclear industry. Although several strategies or tools have been developed to perform potential error checking, they often neglect the reliability of verification methods. Because a Petri net provides a uniform mathematical formalization of a knowledge base, it has been employed for knowledge base verification. In this work, we devise and suggest an automated tool, called COKEP (Checker Of Knowledge base using Extended Petri net), for detecting incorrectness, inconsistency, and incompleteness in a knowledge base. The scope of the verification problem is expanded to chained errors, unlike previous studies that assumed error incidence to be limited to rule pairs only. In addition, we consider certainty factors in the checking, because most knowledge bases have certainty factors.
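
    As a minimal illustration of the kind of anomaly such a checker looks for, the Python sketch below flags directly conflicting and redundant rule pairs in a toy rule base with certainty factors; it does not reproduce the extended Petri net analysis or the chained-error checking described in the paper.

    from itertools import combinations

    rules = [
        {"if": frozenset({"pressure_high"}), "then": "open_valve", "cf": 0.9},
        {"if": frozenset({"pressure_high"}), "then": "not open_valve", "cf": 0.7},
        {"if": frozenset({"pressure_high"}), "then": "open_valve", "cf": 0.9},
    ]

    def negates(a, b):
        return a == "not " + b or b == "not " + a

    for r1, r2 in combinations(rules, 2):
        if r1["if"] == r2["if"]:
            if negates(r1["then"], r2["then"]):
                print("conflicting pair:", r1["then"], "/", r2["then"])
            elif r1["then"] == r2["then"]:
                print("redundant pair:", r1["then"], "(cf", r1["cf"], "and", r2["cf"], ")")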

  5. Study on the knowledge base system for the identification of typical target

    International Nuclear Information System (INIS)

    Qin Kai; Zhao Yingjun

    2008-01-01

    Based on research on target knowledge bases, target databases, texture analysis and shape analysis, this paper proposes a new knowledge-based method for typical target identification from remote sensing images. By extracting texture characters and shape characters, combining them with spatial analysis in GIS, and reasoning according to the prior knowledge in the knowledge base, this method can identify and extract typical targets from remote sensing images. (authors)

  6. Ontological Knowledge Base of Physical and Technical Effects for Conceptual Design of Sensors

    International Nuclear Information System (INIS)

    Zaripova, V M; Petrova, I Yu (Department of CAD Systems, State Autonomous Educational Institution of Astrakhan Region of Higher Professional Education Astrakhan Civil Engineering Institute, Astrakhan, Russian Federation)

    2015-01-01

    This article discusses the design of a knowledge base of physical phenomena based on a domain-specific ontology. Classification of the various physical phenomena in the knowledge base is based on the energy-information model of circuits (EIMC) suggested by the authors. This model is specifically aimed at the design of new operating principles of sensing elements (sensors). Such a knowledge base can be used to train future engineers and specialists in sensor design.

  7. Development of a Knowledge Base for Enduser Consultation of AAL-Systems.

    Science.gov (United States)

    Röll, Natalie; Stork, Wilhelm; Rosales, Bruno; Stephan, René; Knaup, Petra

    2016-01-01

    Manufacturer information, user experiences and product availability of assistive living technologies are usually not known to citizens or consultation centers. The differing levels of knowledge concerning the availability of technology show the need for building up a knowledge base. The aim of this contribution is the definition of requirements for the development of knowledge bases for AAL consultations. The major requirements, such as a maintainable and easy-to-use structure, were implemented in a web-based knowledge base, which went into productive use in ~3700 consulting interviews of municipal technology information centers. Within this field phase the implementation of the requirements for a knowledge base in the field of AAL consulting was evaluated and further developed.

  8. An ontological knowledge based system for selection of process monitoring and analysis tools

    DEFF Research Database (Denmark)

    Singh, Ravendra; Gernaey, Krist; Gani, Rafiqul

    2010-01-01

    monitoring and analysis tools for a wide range of operations has made their selection a difficult, time consuming and challenging task. Therefore, an efficient and systematic knowledge base coupled with an inference system is necessary to support the optimal selection of process monitoring and analysis tools......, satisfying the process and user constraints. A knowledge base consisting of the process knowledge as well as knowledge on measurement methods and tools has been developed. An ontology has been designed for knowledge representation and management. The developed knowledge base has a dual feature. On the one...... procedures has been developed to retrieve the data/information stored in the knowledge base....

  9. Development of knowledge-based operator support system for steam generator water leak events in FBR plants

    International Nuclear Information System (INIS)

    Arikawa, Hiroshi; Ida, Toshio; Matsumoto, Hiroyuki; Kishida, Masako

    1991-01-01

    A knowledge engineering approach to operation support systems can be useful in maintaining safe and steady operation of nuclear plants. This paper describes a knowledge-based operation support system which assists the operators during steam generator water leak events in FBR plants. We have developed a real-time expert system. The expert system adopts a hierarchical knowledge representation corresponding to the 'plant abnormality model'. A technique of signal validation which uses knowledge of symptom propagation is applied to diagnosis. In order to verify the knowledge base concerning steam generator water leak events in FBR plants, a simulator is linked to the expert system. It is revealed that diagnosis based on the 'plant abnormality model' and signal validation using knowledge of symptom propagation work successfully. Also, it is suggested that the expert system could be useful in supporting FBR plant operations. (author)
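
    The diagnosis idea of matching observed symptoms against a causal "plant abnormality model" can be illustrated with a toy Python sketch that ranks candidate faults by how many of their expected symptoms are observed; the fault and symptom names below are invented and do not come from the paper's knowledge base.

    # Expected symptoms per candidate fault (invented for illustration).
    causes = {
        "sg_water_leak":  {"cover_gas_pressure_rise", "hydrogen_in_sodium", "acoustic_noise"},
        "feedwater_loss": {"steam_flow_drop", "drum_level_drop"},
    }
    observed = {"hydrogen_in_sodium", "cover_gas_pressure_rise"}

    ranked = sorted(causes, key=lambda c: len(causes[c] & observed), reverse=True)
    for cause in ranked:
        matched = len(causes[cause] & observed)
        print(cause, ":", matched, "of", len(causes[cause]), "expected symptoms observed")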

  10. Obsidian provenance research in the Americas.

    Science.gov (United States)

    Glascock, Michael D

    2002-08-01

    The characterization of archaeological materials to support provenance research has grown rapidly over the past few decades. Volcanic obsidian has several unique properties that make it the ideal archaeological material for studying prehistoric trade and exchange. This Account describes our laboratory's development of a systematic methodology for the characterization of obsidian sources and artifacts from Mesoamerica and other regions of North and South America in support of archaeological research.

  11. Provenance Representation in the Global Change Information System (GCIS)

    Science.gov (United States)

    Tilmes, Curt

    2012-01-01

    Global climate change is a topic that has become very controversial despite strong support within the scientific community. It is common for agencies releasing information about climate change to be served with Freedom of Information Act (FOIA) requests for everything that led to that conclusion. Capturing and presenting the provenance, linking to the research papers, data sets, models, analyses, observation instruments and satellites, etc. supporting key findings has the potential to mitigate skepticism in this domain. The U.S. Global Change Research Program (USGCRP) is now coordinating the production of a National Climate Assessment (NCA) that presents our best understanding of global change. We are now developing a Global Change Information System (GCIS) that will present the content of that report and its provenance, including the scientific support for the findings of the assessment. We are using an approach that will present this information both through a human accessible web site as well as a machine readable interface for automated mining of the provenance graph. We plan to use the developing W3C PROV Data Model and Ontology for this system.
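
    A hedged sketch of expressing the provenance of an assessment finding in the W3C PROV data model, using the Python 'prov' package, is shown below; the GCIS-like namespace and identifiers are made up for illustration and do not reflect the actual GCIS vocabulary.

    from prov.model import ProvDocument

    doc = ProvDocument()
    doc.add_namespace("gcis", "http://example.org/gcis/")   # hypothetical namespace
    finding = doc.entity("gcis:nca-finding-1")
    dataset = doc.entity("gcis:temperature-record")
    analysis = doc.activity("gcis:trend-analysis")
    doc.used(analysis, dataset)                # the analysis used the dataset
    doc.wasGeneratedBy(finding, analysis)      # the finding was generated by the analysis
    doc.wasDerivedFrom(finding, dataset)       # and is therefore derived from the dataset
    print(doc.get_provn())                     # PROV-N serialization of the provenance graph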

  12. The stones and historic mortars of the Santissima Trinità di Saccargia Romanesque Basilica (Sardinia, Italy): a multi-analytical techniques' approach for the study of their features and provenance

    Science.gov (United States)

    Columbu, Stefano; Palomba, Marcella; Sitzia, Fabio

    2015-04-01

    A research project devoted to the study of the building materials of the Romanesque churches in Sardinia is currently underway. One of the objectives of the project is the mineralogical, chemical-physical and petrographic characterisation of the construction materials, as well as of the alteration processes. To contribute to the preservation of Sardinian monuments, we suggest a new approach to define the different alteration modes of rocks as a function of their local exposure to the weather, studying: 1) the changes of the physical properties of the stone surface (porosity, water absorption, micro-morphology) determined through laboratory tests and photogrammetric observations, and 2) the alteration phases present on the surface (e.g., secondary minerals, soluble salts) determined by mineralogical and chemical investigations. This methodological approach will allow the selection of appropriate, suitable and compatible materials for replacing the original altered ones, and the planning of appropriate strategies for the restoration work. In this paper the geomaterials used to construct the Santissima Trinità di Saccargia Basilica have been investigated. The church, finished in 1116 over the ruins of a pre-existing monastery, is the most important Romanesque site on the island. The chemical alteration and physical decay of two different stones have been studied: volcanic rocks (i.e., basalt) and sedimentary rocks (i.e., limestones) used in bichromy on the Basilica. The main purpose is to observe the different modes of alteration of these two lithologies with different petrophysical characteristics, placed in the same weathering conditions. Macroscopic evidence shows that the limestones, while not having a high porosity, were strongly affected by alteration phenomena, especially on the outer surface of the ashlars, due to the solubilization of the carbonate matrix. The basalt rocks show no obvious physical alteration. Occasionally, in some ashlars located in the basal zone of the

  13. Impact of database quality in knowledge-based treatment planning for prostate cancer.

    Science.gov (United States)

    Wall, Phillip D H; Carver, Robert L; Fontenot, Jonas D

    2018-03-13

    This article investigates dose-volume prediction improvements in a common knowledge-based planning (KBP) method using a Pareto plan database compared with using a conventional, clinical plan database. Two plan databases were created using retrospective, anonymized data of 124 volumetric modulated arc therapy (VMAT) prostate cancer patients. The clinical plan database (CPD) contained planning data from each patient's clinically treated VMAT plan, which were manually optimized by various planners. The multicriteria optimization database (MCOD) contained Pareto-optimal plan data from VMAT plans created using a standardized multicriteria optimization protocol. Overlap volume histograms, incorporating fractional organ at risk volumes only within the treatment fields, were computed for each patient and used to match new patient anatomy to similar database patients. For each database patient, CPD and MCOD KBP predictions were generated for D10, D30, D50, D65, and D80 of the bladder and rectum in a leave-one-out manner. Prediction achievability was evaluated through a replanning study on a subset of 31 randomly selected database patients using the best KBP predictions, regardless of plan database origin, as planning goals. MCOD predictions were significantly lower than CPD predictions for all 5 bladder dose-volumes and rectum D50 (P = .004) and D65 (P databases affects the performance and achievability of dose-volume predictions from a common knowledge-based planning approach for prostate cancer. Bladder and rectum dose-volume predictions derived from a database of standardized Pareto-optimal plans were compared with those derived from clinical plans manually designed by various planners. Dose-volume predictions from the Pareto plan database were significantly lower overall than those from the clinical plan database, without compromising achievability. Copyright © 2018 Elsevier Inc. All rights reserved.
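
    To make the anatomy-matching feature concrete, the following numpy/scipy sketch computes a simple overlap volume histogram (the fraction of an organ-at-risk volume within a given distance of the target) on synthetic masks; the masks, voxel spacing, and use of unsigned distances are simplifying assumptions, not the study's exact implementation.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    target = np.zeros((60, 60, 60), dtype=bool)
    target[25:35, 25:35, 25:35] = True            # synthetic target mask
    oar = np.zeros_like(target)
    oar[25:35, 38:48, 25:35] = True               # synthetic organ-at-risk mask

    # Distance (mm) from every voxel to the target, assuming 2 mm isotropic voxels.
    dist = distance_transform_edt(~target, sampling=(2.0, 2.0, 2.0))

    radii = np.arange(0, 40, 2.0)
    ovh = [(dist[oar] <= r).mean() for r in radii]    # fraction of OAR within r of the target
    print(list(zip(radii.tolist(), np.round(ovh, 2).tolist())))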

  14. A Natural Logic for Natural-Language Knowledge Bases

    DEFF Research Database (Denmark)

    Andreasen, Troels; Styltsvig, Henrik Bulskov; Jensen, Per Anker

    2017-01-01

    We describe a natural logic for computational reasoning with a regimented fragment of natural language. The natural logic comes with intuitive inference rules enabling deductions and with an internal graph representation facilitating conceptual path finding between pairs of terms as an approach t...

  15. A Natural Logic for Natural-language Knowledge Bases

    DEFF Research Database (Denmark)

    Andreasen, Troels; Bulskov, Henrik; Jensen, Per Anker

    2017-01-01

    We describe a natural logic for computational reasoning with a regimented fragment of natural language. The natural logic comes with intuitive inference rules enabling deductions and with an internal graph representation facilitating conceptual path finding between pairs of terms as an approach t...

  16. Knowledge based query expansion in complex multimedia event detection

    NARCIS (Netherlands)

    Boer, M. de; Schutte, K.; Kraaij, W.

    2016-01-01

    A common approach in content based video information retrieval is to perform automatic shot annotation with semantic labels using pre-trained classifiers. The visual vocabulary of state-of-the-art automatic annotation systems is limited to a few thousand concepts, which creates a semantic gap

  17. Knowledge-Base Semantic Gap Analysis for the Vulnerability Detection

    Science.gov (United States)

    Wu, Raymond; Seki, Keisuke; Sakamoto, Ryusuke; Hisada, Masayuki

    Web security has become a pressing concern in Internet computing. To cope with ever-rising security complexity, semantic analysis is proposed to fill in the gap that current approaches fail to address. Conventional methods limit their focus to the physical source code instead of the abstraction of its semantics. This bypasses new types of vulnerability and causes tremendous business loss.

  18. Knowledge-based systems programming for knowledge intensive teaching

    NARCIS (Netherlands)

    Achten, H.H.; Dijkstra, J.; Oxman, R.M.; Colajanni, B.; Pellitteri, G.

    1995-01-01

    Typological design implies extensive knowledge of building types in order to design a building belonging to a building type. It facilitates the design process, which can be considered as a sequence of decisions. The paper gives an outline of a new approach in a course teaching typological knowledge

  19. Knowledge-based systems programming for knowledge intensive teaching

    NARCIS (Netherlands)

    Achten, H.H.; Dijkstra, J.; Oxman, R.M.; Bax, M.F.T.

    1994-01-01

    Typological design implies extensive knowledge of building types in order to design a building belonging to a building type. It facilitates the design process, which can be considered as a sequence of decisions. The paper gives an outline of a new approach in a course teaching typological knowledge

  20. Knowledge based query expansion in complex multimedia event detection

    NARCIS (Netherlands)

    Boer, M.H.T. de; Schutte, K.; Kraaij, W.

    2015-01-01

    A common approach in content based video information retrieval is to perform automatic shot annotation with semantic labels using pre-trained classifiers. The visual vocabulary of state-of-the-art automatic annotation systems is limited to a few thousand concepts, which creates a semantic gap

  1. Knowledge-based verification of clinical guidelines by detection of anomalies.

    Science.gov (United States)

    Duftschmid, G; Miksch, S

    2001-04-01

    As shown in numerous studies, a significant part of published clinical guidelines is tainted with different types of semantical errors that interfere with their practical application. The adaptation of generic guidelines, necessitated by circumstances such as resource limitations within the applying organization or unexpected events arising in the course of patient care, further promotes the introduction of defects. Still, most current approaches for the automation of clinical guidelines are lacking mechanisms, which check the overall correctness of their output. In the domain of software engineering in general and in the domain of knowledge-based systems (KBS) in particular, a common strategy to examine a system for potential defects consists in its verification. The focus of this work is to present an approach, which helps to ensure the semantical correctness of clinical guidelines in a three-step process. We use a particular guideline specification language called Asbru to demonstrate our verification mechanism. A scenario-based evaluation of our method is provided based on a guideline for the artificial ventilation of newborn infants. The described approach is kept sufficiently general in order to allow its application to several other guideline representation formats.

  2. Upgrading of Symbolic and Synthetic Knowledge Bases: Evidence from the Chinese Automotive and Construction Industries

    NARCIS (Netherlands)

    E. van Tuijl (Erwin); K. Dittrich (Koen); J. van der Borg (Jan)

    2016-01-01

    This paper deals with the question of how upgrading of the symbolic and synthetic knowledge bases takes place and, by doing so, we contribute to the upgrading literature by linking upgrading with the concept of the differentiated knowledge bases. We discern a number of upgrading

  3. The Knowledge Base as an Extension of Distance Learning Reference Service

    Science.gov (United States)

    Casey, Anne Marie

    2012-01-01

    This study explores knowledge bases as an extension of reference services for distance learners. Through a survey and follow-up interviews with distance learning librarians, this paper discusses their interest in creating and maintaining a knowledge base as a resource for reference services to distance learners. It also investigates their perceptions…

  4. A Model to Assess the Behavioral Impacts of Consultative Knowledge Based Systems.

    Science.gov (United States)

    Mak, Brenda; Lyytinen, Kalle

    1997-01-01

    This research model studies the behavioral impacts of consultative knowledge based systems (KBS). A study of graduate students explored to what extent their decisions were affected by user participation in updating the knowledge base; ambiguity of decision setting; routinization of usage; and source credibility of the expertise embedded in the…

  5. Conceptual Pathway Querying of Natural Logic Knowledge Bases from Text Bases

    DEFF Research Database (Denmark)

    Andreasen, Troels; Bulskov, Henrik; Nilsson, Jørgen Fischer

    2013-01-01

    language than predicate logic. Natural logic accommodates a variety of scientific parlance, ontologies and domain models. It also supports a semantic net or graph view of the knowledge base. This admits computation of relationships between concepts simultaneously through pathfinding in the knowledge base...

  6. GUIDON-WATCH: A Graphic Interface for Viewing a Knowledge-Based System. Technical Report #14.

    Science.gov (United States)

    Richer, Mark H.; Clancey, William J.

    This paper describes GUIDON-WATCH, a graphic interface that uses multiple windows and a mouse to allow a student to browse a knowledge base and view reasoning processes during diagnostic problem solving. The GUIDON project at Stanford University is investigating how knowledge-based systems can provide the basis for teaching programs, and this…

  7. In Pursuit of Natural Logics for Ontology-Structured Knowledge Bases

    DEFF Research Database (Denmark)

    Nilsson, Jørgen Fischer

    2015-01-01

    We argue for adopting a form of natural logic for ontology-structured knowledge bases with complex sentences. This serves to ease reading of knowledge base for domain experts and to make reasoning and querying and path-finding more comprehensible. We explain natural logic as a development from tr...

  8. Large-scale computer networks and the future of legal knowledge-based systems

    NARCIS (Netherlands)

    Leenes, R.E.; Svensson, Jorgen S.; Hage, J.C.; Bench-Capon, T.J.M.; Cohen, M.J.; van den Herik, H.J.

    1995-01-01

    In this paper we investigate the relation between legal knowledge-based systems and large-scale computer networks such as the Internet. On the one hand, researchers of legal knowledge-based systems have claimed huge possibilities, but despite the efforts over the last twenty years, the number of

  9. A knowledge-based diagnosis system for welding machine problem solving

    International Nuclear Information System (INIS)

    Bonnieres, P. de; Boutes, J.L.; Calas, M.A.; Para, S.

    1986-06-01

    This paper presents a knowledge-based diagnosis system which can be a valuable aid in resolving malfunctions and failures encountered using the automatic hot-wire TIG weld cladding process. This knowledge-based system is currently under evaluation by welding operators at the Framatome heavy fabricating facility. Extension to other welding processes is being considered

  10. ROMANIA AND THE KNOWLEDGE-BASED ECONOMY: INNOVATION THE SOURCE OF ECONOMIC GROWTH

    OpenAIRE

    Holban (Oncioiu) Ionica; Oncioiu Florin Razvan

    2008-01-01

    There is already a vast literature on the role of knowledge in economic growth, but there is a need to clarify the meaning and scope of this term and to define the Romanian perspective on the relationship between the knowledge-based economy and growth. This paper focuses on innovation systems in Romania as the key challenge and means for embracing growth based on a knowledge-based economy.

  11. Provenance metadata gathering and cataloguing of EFIT++ code execution

    International Nuclear Information System (INIS)

    Lupelli, I.; Muir, D.G.; Appel, L.; Akers, R.; Carr, M.; Abreu, P.

    2015-01-01

    Highlights: • An approach for automatic gathering of provenance metadata has been presented. • A provenance metadata catalogue has been created. • The overhead in the code runtime is less than 10%. • The metadata/data size ratio is about ∼20%. • A visualization interface based on Gephi, has been presented. - Abstract: Journal publications, as the final product of research activity, are the result of an extensive complex modeling and data analysis effort. It is of paramount importance, therefore, to capture the origins and derivation of the published data in order to achieve high levels of scientific reproducibility, transparency, internal and external data reuse and dissemination. The consequence of the modern research paradigm is that high performance computing and data management systems, together with metadata cataloguing, have become crucial elements within the nuclear fusion scientific data lifecycle. This paper describes an approach to the task of automatically gathering and cataloguing provenance metadata, currently under development and testing at Culham Center for Fusion Energy. The approach is being applied to a machine-agnostic code that calculates the axisymmetric equilibrium force balance in tokamaks, EFIT++, as a proof of principle test. The proposed approach avoids any code instrumentation or modification. It is based on the observation and monitoring of input preparation, workflow and code execution, system calls, log file data collection and interaction with the version control system. Pre-processing, post-processing, and data export and storage are monitored during the code runtime. Input data signals are captured using a data distribution platform called IDAM. The final objective of the catalogue is to create a complete description of the modeling activity, including user comments, and the relationship between data output, the main experimental database and the execution environment. For an intershot or post-pulse analysis (∼1000

  12. Provenance metadata gathering and cataloguing of EFIT++ code execution

    Energy Technology Data Exchange (ETDEWEB)

    Lupelli, I., E-mail: ivan.lupelli@ccfe.ac.uk [CCFE, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Muir, D.G.; Appel, L.; Akers, R.; Carr, M. [CCFE, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Abreu, P. [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, 1049-001 Lisboa (Portugal)

    2015-10-15

    Highlights: • An approach for automatic gathering of provenance metadata has been presented. • A provenance metadata catalogue has been created. • The overhead in the code runtime is less than 10%. • The metadata/data size ratio is about ∼20%. • A visualization interface based on Gephi, has been presented. - Abstract: Journal publications, as the final product of research activity, are the result of an extensive complex modeling and data analysis effort. It is of paramount importance, therefore, to capture the origins and derivation of the published data in order to achieve high levels of scientific reproducibility, transparency, internal and external data reuse and dissemination. The consequence of the modern research paradigm is that high performance computing and data management systems, together with metadata cataloguing, have become crucial elements within the nuclear fusion scientific data lifecycle. This paper describes an approach to the task of automatically gathering and cataloguing provenance metadata, currently under development and testing at Culham Center for Fusion Energy. The approach is being applied to a machine-agnostic code that calculates the axisymmetric equilibrium force balance in tokamaks, EFIT++, as a proof of principle test. The proposed approach avoids any code instrumentation or modification. It is based on the observation and monitoring of input preparation, workflow and code execution, system calls, log file data collection and interaction with the version control system. Pre-processing, post-processing, and data export and storage are monitored during the code runtime. Input data signals are captured using a data distribution platform called IDAM. The final objective of the catalogue is to create a complete description of the modeling activity, including user comments, and the relationship between data output, the main experimental database and the execution environment. For an intershot or post-pulse analysis (∼1000
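
    A small Python sketch of wrapper-style provenance capture is given below: it hashes the inputs, records the repository commit and the command, runs the code as a subprocess and stores a JSON record, in the spirit of the observation-based (non-instrumenting) approach described above; the file names and launcher script are hypothetical, and the sketch omits the system-call monitoring and IDAM signal capture used in the actual system.

    import hashlib, json, subprocess, time
    from pathlib import Path

    def sha1(path):
        return hashlib.sha1(Path(path).read_bytes()).hexdigest()

    inputs = ["run_config.xml"]                   # hypothetical input file
    record = {
        "inputs": {p: sha1(p) for p in inputs},
        "commit": subprocess.run(["git", "rev-parse", "HEAD"],
                                 capture_output=True, text=True).stdout.strip(),
        "command": ["./run_analysis.sh"],         # hypothetical launcher script
        "started": time.time(),
    }
    result = subprocess.run(record["command"], capture_output=True, text=True)
    record.update(finished=time.time(), returncode=result.returncode)
    Path("provenance.json").write_text(json.dumps(record, indent=2))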

  13. USE OF ONTOLOGIES FOR KNOWLEDGE BASES CREATION TUTORING COMPUTER SYSTEMS

    OpenAIRE

    Cheremisina Lyubov

    2014-01-01

    This paper deals with the use of ontologies for the development of intelligent tutoring systems. We consider the shortcomings of educational software and distance learning systems and the advantages of using ontologies in their design. The creation of educational computer systems based on systematic knowledge is a topical task. We consider the classification, properties, uses and benefits of ontologies. Approaches to the problem of ontology mapping are characterized, the first of which is manual mapping, the s...

  14. Querying Provenance Information: Basic Notions and an Example from Paleoclimate Reconstruction

    Science.gov (United States)

    Stodden, V.; Ludaescher, B.; Bocinsky, K.; Kintigh, K.; Kohler, T.; McPhillips, T.; Rush, J.

    2016-12-01

    Computational models are used to reconstruct and explain past environments and to predict likely future environments. For example, Bocinsky and Kohler have performed a 2,000-year reconstruction of the rain-fed maize agricultural niche in the US Southwest. The resulting academic publications not only contain traditional method descriptions, figures, etc. but also links to code and data for basic transparency and reproducibility. Examples include ResearchCompendia.org and the new project "Merging Science and Cyberinfrastructure Pathways: The Whole Tale." Provenance information provides a further critical element to understand a published study and to possibly extend or challenge the findings of the original authors. We present different notions and uses of provenance information using a computational archaeology example, e.g., the common use of "provenance for others" (for transparency and reproducibility), but also the more elusive but equally important use of "provenance for self". To this end, we distinguish prospective provenance (a.k.a. workflow) from retrospective provenance (a.k.a. data lineage) and show how combinations of both forms of provenance can be used to answer different kinds of important questions about a workflow and its execution. Since many workflows are developed using scripting or special purpose languages such as Python and R, we employ an approach and toolkit called YesWorkflow that brings provenance modeling, capture, and querying into the realm of scripting. YesWorkflow employs the basic W3C PROV standard, as well as the ProvONE extension for sharing and exchanging retrospective and prospective provenance information, respectively. Finally, we argue that the utility of provenance information should be maximized by developing different kinds of provenance questions and queries during the early phases of computational workflow design and implementation.
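
    The sketch below shows the style of YesWorkflow comment annotations (@begin/@in/@out tags) embedded in an ordinary Python script to declare prospective provenance; the workflow steps, variables and file names are invented, and the exact tag usage should be checked against the YesWorkflow documentation.

    # @begin reconstruct_niche
    # @in precipitation_grid @uri file:precip.nc
    # @out niche_map @uri file:niche.tif

    # @begin calibrate
    # @in precipitation_grid
    # @out model_params
    model_params = {"threshold_mm": 300}      # stand-in for the real calibration step
    # @end calibrate

    # @begin predict
    # @in model_params
    # @out niche_map
    niche_map = "niche.tif"                   # stand-in for the real prediction step
    # @end predict

    # @end reconstruct_niche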

  15. Terminological reference of a knowledge-based system: the data dictionary.

    Science.gov (United States)

    Stausberg, J; Wormek, A; Kraut, U

    1995-01-01

    The development of open and integrated knowledge bases makes new demands on the definition of the terminology used. The definitions should be realized in a data dictionary separate from the knowledge base. Within the work done on a reference model of medical knowledge, a data dictionary has been developed and used in different applications: a term definition shell, a documentation tool and a knowledge base. The data dictionary includes that part of the terminology which is largely independent of a particular knowledge model. For that reason, the data dictionary can be used as a basis for integrating knowledge bases into information systems, for knowledge sharing and reuse, and for the modular development of knowledge-based systems.

  16. KBTAC: EPRI's center to assist the nuclear industry to apply the knowledge-based technology

    International Nuclear Information System (INIS)

    Lin, Chinglu; Naser, J.A.; Sun, B.K.H.

    1993-01-01

    The nuclear utility industry's complex engineering and procedure systems offer many opportunities for use of knowledge-based technology such as expert systems and neural networks. The ability of expert systems to augment human experts makes them an important tool in the areas of engineering, operations and maintenance. However, many current industry applications are research projects or turnkey systems supplied by vendors. These often do not impart to utility technical staff a clear understanding of the capabilities of knowledge-based systems (KBS). More importantly, simply using completed applications does not meet utilities' need to acquire the capabilities to build their own knowledge-based systems. Thus, EPRI is supporting its member utilities' utilization of knowledge-based technology for power plant engineering, operations, and maintenance applications through the establishment of the Knowledge-Based Technology Application Center (KBTAC)

  17. Different types of power reactors and provenness

    International Nuclear Information System (INIS)

    Goodman, E.I.

    1977-01-01

    The lecture guides the potential buyer in the selection of a reactor type. Recommended criteria regarding provenness, licensability, and contractual arrangements are defined and discussed. Tabular data summarizing operating experience and commercial availability of units are presented and discussed. The status of small and medium power reactors which are of interest to many developing countries is presented. It is stressed that each prospective buyer will have to establish his own criteria based on specific conditions which will be applied to reactor selection. In all cases it will be found that selection, either pre-selection of bidders or final selection of supplier, will be a fairly complex evaluation. (orig.) [de]

  18. Knowledge-based personalized search engine for the Web-based Human Musculoskeletal System Resources (HMSR) in biomechanics.

    Science.gov (United States)

    Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba

    2013-02-01

    Human musculoskeletal system resources of the human body are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In this work, an advanced knowledge-based personalized search engine was developed. Our search engine was based on a client-server multi-layer multi-agent architecture and the principle of semantic web services to acquire dynamically accurate and reliable HMSR information by a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score with related mathematical formulas was also defined and implemented. As a result, semantic web service descriptions were presented in OWL, WSDL and OWL-S formats. Operational scenarios with related web-based interfaces for personal computers and mobile devices were presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine showed the originality and the robustness of our knowledge-based personalized search engine. In fact, our knowledge-based personalized search engine allows different users, such as orthopedic patients and experts, healthcare system managers or medical students, to access remotely useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes. Copyright © 2012 Elsevier Inc. All rights reserved.
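
    As a hedged illustration of biasing PageRank with a semantic relevance signal, in the spirit of the paper's semantic-based PageRank score, the Python sketch below uses networkx's personalization vector; the link graph, similarity scores and weighting scheme are invented and do not reproduce the paper's formulas.

    import networkx as nx

    g = nx.DiGraph()
    g.add_edge("knee_anatomy", "gait_analysis")
    g.add_edge("gait_analysis", "muscle_model")
    g.add_edge("knee_anatomy", "muscle_model")

    # Semantic similarity of each resource to the user's query, in [0, 1] (invented values).
    semantic = {"knee_anatomy": 0.9, "gait_analysis": 0.4, "muscle_model": 0.8}

    scores = nx.pagerank(g, personalization=semantic)
    for page, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        print(page, round(score, 3))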

  19. Total quality: A proven approach for magnet manufacture

    International Nuclear Information System (INIS)

    Owen, C.E.; Malone, K.A.

    1992-01-01

    The Westinghouse Magnet Systems Division (WMSD) was formed in late 1990 when the Superconducting Super Collider Laboratory (SSCL) awarded WMSD the Follower portion of the contract for manufacturing collider dipole magnets. The Division's small cadre of management start-up personnel moved into its office and manufacturing facility in Round Rock, Texas in August of 1991. In January 1992, WMSD won a second SSCL contract to design and build the High Energy Booster dipole magnets. These contracts presented a rare opportunity: the chance to start with a clean slate and to build, from the bottom up, a whole new product, with all its required manufacturing processes and all its management systems. This was the opportunity to 'do it right the first time.' With these two contracts, doing it right the first time is the only way that WMSD can succeed. By mid-1994, WMSD will start its delivery of collider dipole magnets for installation in the Super Collider tunnel. Each of the magnets, from the first delivery through the last, requires a calculated reliability of 99.99954 percent. This is six sigma performance the first time, with little or no opportunity for trial and error, false starts, and continuous improvement. In late 1994, the SSCL will award the full production contract for collider dipole magnets to one of the two preproduction subcontractors (or a combination of both), based on performance during the preproduction and low rate initial production phases. There will be no second chance in the quality, reliability, delivery and cost competition for this contract. This paper describes Westinghouse's quality assurance program. This program has twelve conditions of excellence: customer orientation, participation, development, motivation, products and services, process and procedures, information, suppliers, culture, planning, communications, and accountability.

  20. Proven approaches to emission control at 200 MW power plants

    International Nuclear Information System (INIS)

    Lilja, M.; Moilanen, E.; Bacalum, A.

    1999-01-01

    Due to the tendency towards stricter emission norms, Eastern European power plants have committed themselves to low NOx modifications and flue gas desulphurization (FGD) plants for existing boiler plants. Fortum Engineering has gained experience in low NOx and FGD retrofit projects in Finland, Poland and the Czech Republic. The presentation concentrates on two projects: low NOx combustion modifications at Jaworzno III Power Plant, Poland, and an FGD retrofit for Chvaletice Power Station, Czech Republic. The aim of the first contract was to keep NOx emissions of the boilers under 170 mg/MJ after the modification. The project was successfully completed during 1995. The key technology is the application of the newest generation NR-LCC low NOx burners and an overfire air (OFA) system to the existing boilers with minimum modifications to the auxiliary equipment. As a result, during the first half year of operation after take-over the NOx emission has been continuously between 120 and 150 mg/MJ and unburned carbon in fly ash has been under 5%. There has been no increased slagging in the furnace. The Chvaletice Power Station, burning brown coal, had big problems with sulphur oxides in the flue gases. The aim of the project at the station was to reduce SO2 emissions from 7000 mg/m3(n). The project was completed in 1998. Desulphurization in Chvaletice is performed by the wet limestone-gypsum method. Flue gases leaving the electrostatic precipitators are washed in spray absorbers with limestone slurry to remove gaseous sulphur dioxide from the flue gases. The process is optimized to achieve the required 94% desulphurization. The aim of decreasing SO2 emissions to under 400 mg/m3(n) has been achieved.

  1. USE OF ONTOLOGIES FOR KNOWLEDGE BASES CREATION TUTORING COMPUTER SYSTEMS

    Directory of Open Access Journals (Sweden)

    Cheremisina Lyubov

    2014-11-01

    Full Text Available This paper deals with the use of ontologies for the development of intelligent tutoring systems. We consider the shortcomings of educational software and distance learning systems and the advantages of using ontologies in their design. The creation of educational computer systems based on systematic knowledge is a topical task. We consider the classification, properties, uses and benefits of ontologies, and characterize approaches to the problem of ontology mapping: the first is manual mapping; the second compares the names of concepts based on their lexical similarity and on special dictionaries. Languages available for the formal description of ontologies are analysed. A formal mathematical model of ontologies is considered, together with the ontology consistency problem: different developers can create syntactically or semantically heterogeneous ontologies for the same domain, and their joint use requires a compatible translation or mapping. An algorithm for combining ontologies is presented. The practical value of developing an ontology for electronic educational resources is characterized, and recommendations for further research and development are given, such as the implementation of other system integration components, the formalization of the integration processes, and the development of universal ontology extension algorithms for the software

  2. Provenance and recycling of Arabian desert sand

    Science.gov (United States)

    Garzanti, Eduardo; Vermeesch, Pieter; Andò, Sergio; Vezzoli, Giovanni; Valagussa, Manuel; Allen, Kate; Kadi, Khalid A.; Al-Juboury, Ali I. A.

    2013-05-01

    This study seeks to determine the ultimate origin of aeolian sand in Arabian deserts by high-resolution petrographic and heavy-mineral techniques combined with zircon U-Pb geochronology. Point-counting is used here as the sole method by which unbiased volume percentages of heavy minerals can be obtained. A comprehensive analysis of river and wadi sands from the Red Sea to the Bitlis-Zagros orogen allowed us to characterize all potential sediment sources, and thus to quantitatively constrain provenance of Arabian dune fields. Two main types of aeolian sand can be distinguished. Quartzose sands with very poor heavy-mineral suites including zircon occupy most of the region comprising the Great Nafud and Rub' al-Khali Sand Seas, and are largely recycled from thick Lower Palaeozoic quartzarenites with very minor first-cycle contributions from Precambrian basement, Mesozoic carbonate rocks, or Neogene basalts. Instead, carbonaticlastic sands with richer lithic and heavy-mineral populations characterize coastal dunes bordering the Arabian Gulf from the Jafurah Sand Sea of Saudi Arabia to the United Arab Emirates. The similarity with detritus carried by the axial Tigris-Euphrates system and by transverse rivers draining carbonate rocks of the Zagros indicates that Arabian coastal dunes largely consist of far-travelled sand, deposited on the exposed floor of the Gulf during Pleistocene lowstands and blown inland by dominant Shamal northerly winds. A dataset of detrital zircon U-Pb ages measured on twelve dune samples and two Lower Palaeozoic sandstones yielded fourteen identical age spectra. The age distributions all show a major Neoproterozoic peak corresponding to the Pan-African magmatic and tectonic events by which the Arabian Shield was assembled, with minor late Palaeoproterozoic and Neoarchean peaks. A similar U-Pb signature characterizes also Jafurah dune sands, suggesting that zircons are dominantly derived from interior Arabia, possibly deflated from the Wadi al

  3. Automated knowledge acquisition for second generation knowledge base systems: A conceptual analysis and taxonomy

    Energy Technology Data Exchange (ETDEWEB)

    Williams, K.E.; Kotnour, T.

    1991-01-01

    In this paper, we present a conceptual analysis of knowledge-base development methodologies. The purpose of this research is to help overcome the high cost and lack of efficiency in developing knowledge base representations for artificial intelligence applications. To accomplish this purpose, we analyzed the available methodologies and developed a knowledge-base development methodology taxonomy. We review manual, machine-aided, and machine-learning methodologies. A set of developed characteristics allows description and comparison among the methodologies. We present the results of this conceptual analysis of methodologies and recommendations for development of more efficient and effective tools.

  4. Automated knowledge acquisition for second generation knowledge base systems: A conceptual analysis and taxonomy

    Energy Technology Data Exchange (ETDEWEB)

    Williams, K.E.; Kotnour, T.

    1991-12-31

    In this paper, we present a conceptual analysis of knowledge-base development methodologies. The purpose of this research is to help overcome the high cost and lack of efficiency in developing knowledge base representations for artificial intelligence applications. To accomplish this purpose, we analyzed the available methodologies and developed a knowledge-base development methodology taxonomy. We review manual, machine-aided, and machine-learning methodologies. A set of developed characteristics allows description and comparison among the methodologies. We present the results of this conceptual analysis of methodologies and recommendations for development of more efficient and effective tools.

  5. Verification of product design using regulation knowledge base and Web services

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ik June [KAERI, Daejeon (Korea, Republic of); Lee, Jae Chul; Mun Du Hwan [Kyungpook National University, Daegu (Korea, Republic of); Kim, Byung Chul [Dong-A University, Busan (Korea, Republic of); Hwang, Jin Sang [PartDB Co., Ltd., Daejeom (Korea, Republic of); Lim, Chae Ho [Korea Institute of Industrial Technology, Incheon (Korea, Republic of)

    2015-11-15

    Since product regulations contain important rules or codes that manufacturers must follow, automatic verification of product design with the regulations related to a product is necessary. For this, this study presents a new method for the verification of product design using regulation knowledge base and Web services. Regulation knowledge base consisting of product ontology and rules was built with a hybrid technique combining ontology and programming languages. Web service for design verification was developed ensuring the flexible extension of knowledge base. By virtue of two technical features, design verification is served to various products while the change of system architecture is minimized.

  6. Verification of product design using regulation knowledge base and Web services

    International Nuclear Information System (INIS)

    Kim, Ik June; Lee, Jae Chul; Mun Du Hwan; Kim, Byung Chul; Hwang, Jin Sang; Lim, Chae Ho

    2015-01-01

    Since product regulations contain important rules or codes that manufacturers must follow, automatic verification of a product design against the regulations related to that product is necessary. To this end, this study presents a new method for the verification of product design using a regulation knowledge base and Web services. The regulation knowledge base, consisting of a product ontology and rules, was built with a hybrid technique combining ontology and programming languages. A Web service for design verification was developed, ensuring flexible extension of the knowledge base. By virtue of these two technical features, design verification can be provided for various products while changes to the system architecture are minimized.

  7. Biopsy-proven childhood glomerulonephritis in Johor.

    Science.gov (United States)

    Khoo, J J; Pee, S; Thevarajah, B; Yap, Y C; Chin, C K

    2004-06-01

    There has been no published study of biopsy-proven childhood glomerulonephritis in Malaysia. This study aimed to determine the pattern of childhood glomerulonephritis in Johor, Malaysia from a histopathological perspective and the various indications used for renal biopsy in children. A retrospective study was done of all renal biopsies from children under 16 years of age received in Sultanah Aminah Hospital, Johor between 1994 and 2001. The histopathological findings were reviewed to determine the pattern of biopsy-proven glomerulonephritis. The indications for biopsy, the mode of therapy given after biopsy and the clinical outcome were studied. 122 adequate biopsies were received; 9 children had repeat biopsies. Of the 113 biopsies, minimal change disease formed the most common histopathological diagnosis (40.7%) while lupus nephritis formed the most common secondary glomerulonephritis (23.0%). The main indications for biopsy were nephrotic syndrome (50.8%), lupus nephritis (25.4%) and renal impairment (13.1%). The mode of therapy was changed in 59.8% of the children. Of 106 patients followed up, 84 children were found to have normal renal function, in remission or on treatment. 4 patients developed chronic renal impairment and 16 reached end stage renal disease. Five of the 16 children with end stage disease had since died while 11 were on renal replacement therapy. Another 2 patients died of other complications. The pattern of childhood glomerulonephritis in our study tended to reflect the more severe renal parenchymal diseases in children and those requiring more aggressive treatment. This was because of our criteria of selection (indication) for renal biopsy. Renal biopsy, where performed appropriately in selected children, may not only be a useful investigative tool for histological diagnosis and prognosis but may also help clinicians plan the optimal therapy for these children.

  8. Knowledge-Based Topic Model for Unsupervised Object Discovery and Localization.

    Science.gov (United States)

    Niu, Zhenxing; Hua, Gang; Wang, Le; Gao, Xinbo

    Unsupervised object discovery and localization aims to discover some dominant object classes and localize all object instances in a given image collection without any supervision. Previous work has attempted to tackle this problem with vanilla topic models, such as latent Dirichlet allocation (LDA). However, in those methods no prior knowledge for the given image collection is exploited to facilitate object discovery. On the other hand, the topic models used in those methods suffer from the topic coherence issue: some inferred topics do not have clear meaning, which limits the final performance of object discovery. In this paper, prior knowledge in terms of so-called must-links is exploited from Web images on the Internet. Furthermore, a novel knowledge-based topic model, called LDA with mixture of Dirichlet trees, is proposed to incorporate the must-links into topic modeling for object discovery. In particular, to better deal with the polysemy phenomenon of visual words, the must-link is redefined so that one must-link constrains only one or some topic(s) instead of all topics, which leads to significantly improved topic coherence. Moreover, the must-links are built and grouped with respect to specific object classes; the must-links in our approach are thus semantic-specific, which allows discriminative prior knowledge from Web images to be exploited more efficiently. Extensive experiments validated the efficiency of our proposed approach on several data sets. It is shown that our method significantly improves topic coherence and outperforms the unsupervised methods for object discovery and localization. In addition, compared with discriminative methods, the naturally existing object classes in the given image collection can be subtly discovered, which makes our approach well suited for realistic applications of unsupervised object discovery.
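
    One piece of the pipeline described above, building must-links that stay grouped by object class, can be sketched with simple co-occurrence mining over per-class Web image tags. The tag lists and the co-occurrence threshold below are invented; the paper's actual construction and its Dirichlet-tree topic model are not reproduced here.

        from collections import Counter
        from itertools import combinations

        web_tags = {   # hypothetical tags harvested from Web images, keyed by object class
            "car":  [["wheel", "headlight", "road"], ["wheel", "bumper", "road"]],
            "bird": [["beak", "wing", "sky"], ["wing", "feather", "sky"]],
        }

        def class_must_links(tag_lists, min_count=2):
            """Word pairs that co-occur in at least `min_count` tag lists of one class."""
            pairs = Counter(p for tags in tag_lists
                            for p in combinations(sorted(set(tags)), 2))
            return [p for p, c in pairs.items() if c >= min_count]

        # Must-links stay grouped by class, so each constrains only the topics tied to
        # that class rather than all topics (the "semantic-specific" property).
        must_links = {cls: class_must_links(lists) for cls, lists in web_tags.items()}
        print(must_links)   # e.g. {'car': [('road', 'wheel')], 'bird': [('sky', 'wing')]}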

  9. Personal profile of medical students selected through a knowledge-based exam only: are we missing suitable students?

    Directory of Open Access Journals (Sweden)

    Milena Abbiati

    2016-04-01

    Full Text Available Introduction: A consistent body of literature highlights the importance of a broader approach to selecting medical school candidates, assessing both cognitive capacity and individual characteristics. However, selection in a great number of medical schools worldwide is still based on knowledge exams, a procedure that might neglect students with the personal characteristics needed for future medical practice. We investigated whether the personal profile of students selected through a knowledge-based exam differed from those not selected. Methods: Students applying for medical school (N=311) completed questionnaires assessing motivations for becoming a doctor, learning approaches, personality traits, empathy, and coping styles. Selection was based on the results of MCQ tests. Principal component analysis was used to draw a profile of the students. Differences between selected and non-selected students were examined by multivariate ANOVAs, and their impact on selection by logistic regression analysis. Results: Students demonstrating a profile of diligence, with higher conscientiousness, a deep learning approach, and task-focused coping, were more frequently selected (p=0.01). Other personal characteristics such as motivation, sociability, and empathy did not differ significantly between selected and non-selected students. Conclusion: Selection through a knowledge-based exam privileged diligent students. It neither advantaged nor precluded candidates with a more humane profile.
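
    The analysis pipeline reported in the Methods section, principal component analysis of questionnaire scores followed by logistic regression of the selection outcome, can be sketched as follows. The data are simulated placeholders rather than the study's cohort, so the coefficients mean nothing beyond illustrating the workflow.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(42)
        n = 311                                    # the sample size reported in the abstract
        traits = rng.normal(size=(n, 6))           # simulated questionnaire scores (6 scales)
        # simulate selection driven mainly by the first two "diligence-like" scales
        selected = (traits[:, 0] + 0.5 * traits[:, 1] + rng.normal(size=n) > 0).astype(int)

        components = PCA(n_components=2).fit_transform(traits)   # applicant profiles
        model = LogisticRegression().fit(components, selected)
        print(model.coef_)      # which profile dimensions relate to passing the MCQ selection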

  10. Business Intelligence & Knowledge Management - Technological Support for Strategic Management in the Knowledge Based Economy

    Directory of Open Access Journals (Sweden)

    Dorel PARASCHIV

    2008-01-01

    Full Text Available The viability and success of modern enterprises are subject to the increasing dynamics of the economic environment, so they need to adjust their policies and strategies rapidly in order to respond to the growing sophistication of competitors, customers and suppliers, the globalization of business, and international competition. Perhaps the most critical component for the success of the modern enterprise is its ability to take advantage of all available information, both internal and external. Making sense of all this information and gaining value and competitive advantage from it represent real challenges for the enterprise. The IT solutions designed to address these challenges have been developed along two different approaches: structured data management (Business Intelligence) and unstructured content management (Knowledge Management). Integrating Business Intelligence and Knowledge Management in new software applications designed not only to store highly structured data and exploit it in real time, but also to interpret the results and communicate them to decision makers, provides real technological support for Strategic Management. Integrating Business Intelligence and Knowledge Management in order to respond to the challenges the modern enterprise has to deal with represents not only a "new trend" in IT, but a necessity in the emerging knowledge-based economy. These hybrid technologies are already widely known in both the scientific and practitioner communities as Competitive Intelligence. At the end of the paper, a competitive data warehouse design is proposed, in an attempt to apply business intelligence technologies to economic environment analysis using Romanian public data sources.

  11. Planning and design of a knowledge based system for green manufacturing management

    Science.gov (United States)

    Kamal Mohd Nawawi, Mohd; Mohd Zuki Nik Mohamed, Nik; Shariff Adli Aminuddin, Adam

    2013-12-01

    This paper presents a conceptual design approach to the development of a hybrid Knowledge Based (KB) system for Green Manufacturing Management (GMM) at the planning and design stages. The research concentrates on GMM using a hybrid KB system, which is a blend of a KB system and Gauging Absences of Pre-requisites (GAP). The hybrid KB/GAP system identifies all potential elements of green manufacturing management issues throughout the development of this system. The KB system used in the planning and design stages analyses the gap between the existing and the benchmark organizations for an effective implementation through the GAP analysis technique. The proposed KBGMM model at the design stage explores two components, namely the Competitive Priority and Lean Environment modules. Through the simulated results, the KBGMM System has identified, for each module and sub-module, the problem categories in a prioritized manner. The System finalized all the Bad Points (BP) that need to be improved to achieve benchmark implementation of GMM at the design stage. The System provides valuable decision-making information for the planning and design of a GMM in terms of business organization.

  12. MARKETING AND INNOVATION IN ENVIRONMENT BANKING FINANCIAL - REQUIREMENTS IN A KNOWLEDGE-BASED SOCIETY AND TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    MIRCEA VALERIA ARINA

    2015-04-01

    Full Text Available In the context of the knowledge-based economy and society, marketing has acquired a role that is vital for all fields. The evolution of the social, cultural, political, economic and informational environment shapes the design and conduct of marketing activities and contributes to increasing the efficiency of any institution. The evolution of marketing over time has prompted researchers to define the concept from their own points of view, each capturing only some aspects of this vast and important field. As the approach in this article shows, the definitions differ, but their essence is the same. In the banking and financial sector, the role of marketing is to continually improve the quality of customer services and products by formulating appropriate marketing strategies so as to be able to influence consumer buying behaviour. Customer focus, customer loyalty and, not least, an innovative marketing that starts from the client are key features today. The emphasis on innovation and ingenuity, in order to create new banking services and products, find ways to attract customers, retain existing ones, and define marketing and communication strategies, leads to appropriate strategies for maximizing the results of innovative marketing campaigns. With regard to work in the banking environment, we can say that innovation is the key to a bank's success and rests on product and service innovations, process innovations, organizational innovations and, not least, marketing innovations.

  13. Advanced piloted aircraft flight control system design methodology. Volume 1: Knowledge base

    Science.gov (United States)

    Mcruer, Duane T.; Myers, Thomas T.

    1988-01-01

    The development of a comprehensive and eclectic methodology for conceptual and preliminary design of flight control systems is presented and illustrated. The methodology is focused on the design stages starting with the layout of system requirements and ending when some viable competing system architectures (feedback control structures) are defined. The approach is centered on the human pilot and the aircraft as both the sources of, and the keys to the solution of, many flight control problems. The methodology relies heavily on computational procedures which are highly interactive with the design engineer. To maximize effectiveness, these techniques, as selected and modified to be used together in the methodology, form a cadre of computational tools specifically tailored for integrated flight control system preliminary design purposes. While theory and associated computational means are an important aspect of the design methodology, the lore, knowledge and experience elements, which guide and govern applications, are critical features. This material is presented as summary tables, outlines, recipes, empirical data, lists, etc., which encapsulate a great deal of expert knowledge. Much of this is presented in topical knowledge summaries which are attached as Supplements. The composite of the supplements and the main body elements constitutes a first cut at a Mark 1 Knowledge Base for manned-aircraft flight control.

  14. The fault monitoring and diagnosis knowledge-based system for space power systems: AMPERES, phase 1

    Science.gov (United States)

    Lee, S. C.

    1989-01-01

    The objective is to develop a real time fault monitoring and diagnosis knowledge-based system (KBS) for space power systems which can save costly operational manpower and can achieve more reliable space power system operation. The proposed KBS was developed using the Autonomously Managed Power System (AMPS) test facility currently installed at NASA Marshall Space Flight Center (MSFC), but the basic approach taken for this project could be applicable for other space power systems. The proposed KBS is entitled Autonomously Managed Power-System Extendible Real-time Expert System (AMPERES). In Phase 1 the emphasis was put on the design of the overall KBS, the identification of the basic research required, the initial performance of the research, and the development of a prototype KBS. In Phase 2, emphasis is put on the completion of the research initiated in Phase 1, and the enhancement of the prototype KBS developed in Phase 1. This enhancement is intended to achieve a working real time KBS incorporated with the NASA space power system test facilities. Three major research areas were identified and progress was made in each area. These areas are real time data acquisition and its supporting data structure; sensor value validations; development of inference scheme for effective fault monitoring and diagnosis, and its supporting knowledge representation scheme.

  15. Planning and design of a knowledge based system for green manufacturing management

    International Nuclear Information System (INIS)

    Nawawi, Mohd Kamal Mohd; Mohamed, Nik Mohd Zuki Nik; Aminuddin, Adam Shariff Adli

    2013-01-01

    This paper presents a conceptual design approach to the development of a hybrid Knowledge Based (KB) system for Green Manufacturing Management (GMM) at the planning and design stages. The research concentrates on GMM using a hybrid KB system, which is a blend of a KB system and Gauging Absences of Pre-requisites (GAP). The hybrid KB/GAP system identifies all potential elements of green manufacturing management issues throughout the development of this system. The KB system used in the planning and design stages analyses the gap between the existing and the benchmark organizations for an effective implementation through the GAP analysis technique. The proposed KBGMM model at the design stage explores two components, namely the Competitive Priority and Lean Environment modules. Through the simulated results, the KBGMM System has identified, for each module and sub-module, the problem categories in a prioritized manner. The System finalized all the Bad Points (BP) that need to be improved to achieve benchmark implementation of GMM at the design stage. The System provides valuable decision-making information for the planning and design of a GMM in terms of business organization.

  16. Jointly learning word embeddings using a corpus and a knowledge base

    Science.gov (United States)

    Bollegala, Danushka; Maehara, Takanori; Kawarabayashi, Ken-ichi

    2018-01-01

    Methods for representing the meaning of words in vector spaces purely using the information distributed in text corpora have proved to be very valuable in various text mining and natural language processing (NLP) tasks. However, these methods still disregard the valuable semantic relational structure between words in co-occurring contexts. These beneficial semantic relational structures are contained in manually-created knowledge bases (KBs) such as ontologies and semantic lexicons, where the meanings of words are represented by defining the various relationships that exist among those words. We combine the knowledge in both a corpus and a KB to learn better word embeddings. Specifically, we propose a joint word representation learning method that uses the knowledge in the KBs, and simultaneously predicts the co-occurrences of two words in a corpus context. In particular, we use the corpus to define our objective function subject to the relational constraints derived from the KB. We further utilise the corpus co-occurrence statistics to propose two novel approaches, Nearest Neighbour Expansion (NNE) and Hedged Nearest Neighbour Expansion (HNE), that dynamically expand the KB and therefore derive more constraints that guide the optimisation process. Our experimental results over a wide range of benchmark tasks demonstrate that the proposed method statistically significantly improves the accuracy of the word embeddings learnt. It outperforms a corpus-only baseline and reports an improvement over a number of previously proposed methods that incorporate corpora and KBs in both semantic similarity prediction and word analogy detection tasks. PMID:29529052
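
    The general idea of combining a corpus objective with KB-derived constraints can be illustrated with a deliberately tiny example: word vectors are fitted to log co-occurrence counts while a penalty pulls KB-related words together. This is a generic sketch, not the paper's objective, NNE or HNE; the vocabulary, counts and hyper-parameters are toy values.

        import numpy as np

        vocab = ["dog", "puppy", "car", "engine"]
        idx = {w: i for i, w in enumerate(vocab)}
        cooc = np.array([[0, 8, 1, 0],          # symmetric toy co-occurrence counts
                         [8, 0, 0, 0],
                         [1, 0, 0, 9],
                         [0, 0, 9, 0]], dtype=float)
        kb_pairs = [("dog", "puppy"), ("car", "engine")]   # toy KB relations (e.g. synonymy)

        rng = np.random.default_rng(0)
        W = rng.normal(scale=0.1, size=(len(vocab), 8))    # word vectors
        logc = np.log1p(cooc)
        lam, lr = 0.1, 0.02

        for _ in range(1000):
            err = W @ W.T - logc                 # corpus term: fit dot products to log counts
            np.fill_diagonal(err, 0.0)
            grad = 2 * err @ W
            for a, b in kb_pairs:                # KB term: pull related words together
                i, j = idx[a], idx[b]
                diff = W[i] - W[j]
                grad[i] += 2 * lam * diff
                grad[j] -= 2 * lam * diff
            W -= lr * grad

        sim = lambda a, b: float(W[idx[a]] @ W[idx[b]])
        print(sim("dog", "puppy"), sim("dog", "engine"))   # related pair scores higher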

  17. Integrating movement in academic classrooms: understanding, applying and advancing the knowledge base.

    Science.gov (United States)

    Webster, C A; Russ, L; Vazou, S; Goh, T L; Erwin, H

    2015-08-01

    In the context of comprehensive and coordinated approaches to school health, academic classrooms have gained attention as a promising setting for increasing physical activity and reducing sedentary time among children. The aims of this paper are to review the rationale and knowledge base related to movement integration in academic classrooms, consider the practical applications of current knowledge to interventions and teacher education, and suggest directions for future research. Specifically, this paper (i) situates movement integration amid policy and research related to children's health and the school as a health-promoting environment; (ii) highlights the benefits of movement integration; (iii) summarizes movement integration programs and interventions; (iv) examines factors associated with classroom teachers' movement integration; (v) offers strategies for translating research to practice and (vi) forwards recommendations for future inquiry related to the effectiveness and sustainability of efforts to integrate movement into classroom routines. This paper provides a comprehensive resource for developing state-of-the-art initiatives to maximize children's movement in academic classrooms as a key strategy for important goals in both education and public health. © 2015 World Obesity.

  18. A Knowledge-Base for a Personalized Infectious Disease Risk Prediction System.

    Science.gov (United States)

    Vinarti, Retno; Hederman, Lucy

    2018-01-01

    We present a knowledge-base to represent collated infectious disease risk (IDR) knowledge. The knowledge is about personal and contextual risk of contracting an infectious disease obtained from declarative sources (e.g. Atlas of Human Infectious Diseases). Automated prediction requires encoding this knowledge in a form that can produce risk probabilities (e.g. Bayesian Network - BN). The knowledge-base presented in this paper feeds an algorithm that can auto-generate the BN. The knowledge from 234 infectious diseases was compiled. From this compilation, we designed an ontology and five rule types for modelling IDR knowledge in general. The evaluation aims to assess whether the knowledge-base structure, and its application to three disease-country contexts, meets the needs of a personalized IDR prediction system. From the evaluation results, the knowledge-base conforms to the system's purpose: the personalization of infectious disease risk prediction.
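
    As a stand-in for the auto-generated Bayesian network, the sketch below combines a few declarative risk rules with a noisy-OR assumption to produce a personal risk probability. The rules, probabilities and the noisy-OR combination are illustrative assumptions, not the knowledge-base content or the authors' generation algorithm.

        RULES = [
            # (risk factor, probability of contracting the disease given only that factor)
            ("travel_to_endemic_area", 0.20),
            ("rainy_season",           0.05),
            ("no_vaccination",         0.10),
        ]

        def infection_risk(context: dict) -> float:
            """Noisy-OR combination over the risk factors present in this person/context."""
            p_escape = 1.0
            for factor, p in RULES:
                if context.get(factor, False):
                    p_escape *= (1.0 - p)
            return 1.0 - p_escape

        print(round(infection_risk({"travel_to_endemic_area": True, "no_vaccination": True}), 3))  # 0.28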

  19. Constructing regional advantage: platform policies based on related variety and differentiated knowledge bases.

    NARCIS (Netherlands)

    Asheim, B.T.; Boschma, R.A.; Cooke, P.

    2011-01-01

    Constructing regional advantage: platform policies based on related variety and differentiated knowledge bases, Regional Studies. This paper presents a regional innovation policy model based on the idea of constructing regional advantage. This policy model brings together concepts like related

  20. Integrated knowledge base tool for acquisition and verification of NPP alarm systems

    International Nuclear Information System (INIS)

    Park, Joo Hyun; Seong, Poong Hyun

    1998-01-01

    Knowledge acquisition and knowledge base verification are important activities in developing knowledge-based systems such as alarm processing systems. In this work, we developed an integrated tool for knowledge acquisition and verification of NPP alarm processing systems using the G2 tool. The tool integrates a document analysis method and an ECPN matrix analysis method for knowledge acquisition and knowledge verification, respectively. This tool enables knowledge engineers to perform their tasks consistently, from knowledge acquisition through knowledge verification.

  1. A NASA/RAE cooperation in the development of a real-time knowledge based autopilot

    Science.gov (United States)

    Daysh, Colin; Corbin, Malcolm; Butler, Geoff; Duke, Eugene L.; Belle, Steven D.; Brumbaugh, Randal W.

    1991-01-01

    As part of a US/UK cooperative aeronautical research program, a joint activity between NASA-Ames and the Royal Aerospace Establishment on Knowledge Based Systems (KBS) was established. This joint activity is concerned with tools and techniques for the implementation and validation of real-time KBS. The proposed next stage of the research is described, in which some of the problems of implementing and validating a Knowledge Based Autopilot (KBAP) for a generic high performance aircraft will be studied.

  2. Strategic HRM in Building Micro-Foundations of Organizational Knowledge-Based Performance

    DEFF Research Database (Denmark)

    Minbaeva, Dana

    2013-01-01

    Strategic HRM research has a strong potential to further our understanding of how organizational knowledge processes influence performance at various analytical levels. Drawing on ability–motivation–opportunity research and linking it to knowledge sharing behaviors, we discuss the micro-foundations in the link between strategic HRM practices and knowledge-based organizational performance. We thus describe a research agenda for future micro-foundational research that links strategic HRM and knowledge-based performance.

  3. Managing Service Quality within the Knowledge-Based Economy: Opportunities and Challenges

    OpenAIRE

    Ion Plumb; Andreea Zamfir

    2009-01-01

    The knowledge-based economy, along with the impact of information society technologies, presents the service organizations and their customers with many potential opportunities as well as challenges. Therefore, this study explores how the knowledge-based economy could influence the quality management of service organizations. The study reveals that the actors within the service sector have vast new opportunities in terms of communication and value co-creation, but at the same time, the requir...

  4. Uncertainty management in knowledge based systems for nondestructive testing-an example from ultrasonic testing

    International Nuclear Information System (INIS)

    Rajagopalan, C.; Kalyanasundaram, P.; Baldev Raj

    1996-01-01

    The use of fuzzy logic, as a framework for uncertainty management, in a knowledge-based system (KBS) for ultrasonic testing of austenitic stainless steels is described. Parameters that may contain uncertain values are identified. Methodologies to handle uncertainty in these parameters using fuzzy logic are detailed. The overall improvement in the performance of the knowledge-based system after incorporating fuzzy logic is discussed. The methodology developed being universal, its extension to other KBS for nondestructive testing and evaluation is highlighted. (author)
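
    The flavour of fuzzy uncertainty handling in such a KBS can be conveyed with a small sketch: uncertain ultrasonic parameters are mapped to membership grades and combined through a fuzzy rule. The parameter names, membership functions and rule below are hypothetical, not the system's actual knowledge.

        def triangular(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def amplitude_high(db):     # membership of "echo amplitude is high"
            return triangular(db, 6.0, 14.0, 22.0)

        def length_long(mm):        # membership of "indication length is long"
            return triangular(mm, 5.0, 15.0, 25.0)

        # Fuzzy rule: IF amplitude is high AND indication is long THEN defect likelihood is high.
        # min() is the usual t-norm for fuzzy AND; the rule strength carries the uncertainty.
        def defect_likelihood(db, mm):
            return min(amplitude_high(db), length_long(mm))

        print(defect_likelihood(12.0, 10.0))   # 0.5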

  5. Sociopathic Knowledge Bases: Correct Knowledge Can Be Harmful Even Given Unlimited Computation

    Science.gov (United States)

    1989-08-01

    Sociopathic Knowledge Bases: Correct Knowledge Can Be Harmful Even Given Unlimited Computation, by David C. Wilkins and Yong... ...probabilistic rules are shown to be sociopathic and so this problem is very widespread. Sociopathicity has important consequences for rule induction

  6. Big Data Provenance: Challenges, State of the Art and Opportunities

    OpenAIRE

    Wang, Jianwu; Crawl, Daniel; Purawat, Shweta; Nguyen, Mai; Altintas, Ilkay

    2015-01-01

    Ability to track provenance is a key feature of scientific workflows to support data lineage and reproducibility. The challenges that are introduced by the volume, variety and velocity of Big Data, also pose related challenges for provenance and quality of Big Data, defined as veracity. The increasing size and variety of distributed Big Data provenance information bring new technical challenges and opportunities throughout the provenance lifecycle including recording, querying, sharing and ut...

  7. A method of knowledge base verification for nuclear power plant expert systems using extended Petri Nets

    International Nuclear Information System (INIS)

    Kwon, I. W.; Seong, P. H.

    1996-01-01

    The adoption of expert systems, mainly as operator supporting systems, is becoming increasingly popular as the control algorithms of systems become more and more sophisticated and complicated. The verification phase of the knowledge base is an important part of developing reliable expert systems, especially in the nuclear industry. Although several strategies or tools have been developed to perform potential error checking, they often neglect the reliability of verification methods. Because a Petri net provides a uniform mathematical formalization of a knowledge base, it has been employed for knowledge base verification. In this work, we devise and suggest an automated tool, called COKEP (Checker of Knowledge base using Extended Petri net), for detecting incorrectness, inconsistency, and incompleteness in a knowledge base. The scope of the verification problem is expanded to chained errors, unlike previous studies that assume error incidence to be limited to rule pairs only. In addition, we consider certainty factors in checking, because most knowledge bases have certainty factors. 8 refs., 2 figs., 4 tabs. (author)
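
    The point about chained errors can be made concrete with a toy forward-chaining check that finds an inconsistency no rule-pair comparison would catch. This is a simplified illustration only; it is not the COKEP algorithm and it omits the extended Petri net formalisation and certainty factors.

        # Rules are (premises, conclusion); a leading '~' marks a negated proposition.
        RULES = [
            ({"high_pressure"}, "valve_open"),
            ({"valve_open"}, "flow_normal"),
            ({"high_pressure"}, "~flow_normal"),   # conflicts only through the chain above
        ]

        def forward_close(facts, rules):
            """Forward-chain until no new conclusions are produced."""
            derived = set(facts)
            changed = True
            while changed:
                changed = False
                for premises, conclusion in rules:
                    if premises <= derived and conclusion not in derived:
                        derived.add(conclusion)
                        changed = True
            return derived

        derived = forward_close({"high_pressure"}, RULES)
        conflicts = [f for f in derived if f.startswith("~") and f[1:] in derived]
        print(conflicts)   # -> ['~flow_normal'], a chained inconsistency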

  8. Irradiation sensibility of different provenances of Jatropha curcas L. seeds

    International Nuclear Information System (INIS)

    Yang Qing; Xu Congheng; Peng Daiping; Duan Zhubiao; Han Lei; Sun Qixiang; Peng Zhenhua

    2007-01-01

    The irradiation sensibility of seeds from 10 provenances of Jatropha curcas L. to 60Co γ-rays was studied. The results showed that the relative germination rate of the seeds was negatively correlated with the irradiation dose, and that the differences in relative germination rate among dose treatments were significant at the 5% probability level or highly significant at the 1% probability level. For seeds of different provenances, the correlation coefficient of the linear regression ranged from -0.89 to -0.96, and the median lethal dose (LD50) of the 10 provenances ranged from 127 Gy to 184 Gy. According to the LD50, the 10 provenances of J. curcas L. could be divided into sensitive, transitional and obtuse provenances. The provenance of Yuanjiang, Yunnan (184 Gy) was a sensitive provenance; the provenances of Zhenfeng, Guizhou (127 Gy) and Yuedong, Hainan (141 Gy) were obtuse provenances; the other 7 provenances were transitional provenances. The results provide an important experimental basis for germplasm resource innovation in J. curcas L. (authors)
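
    The LD50 estimation implied by the linear dose-response fits can be reproduced schematically: fit relative germination rate against dose and solve the fitted line for the 50% level. The data points below are made up for illustration, not the paper's measurements.

        import numpy as np

        doses = np.array([0, 50, 100, 150, 200], dtype=float)                 # Gy
        relative_germination = np.array([100, 85, 66, 49, 31], dtype=float)   # %

        slope, intercept = np.polyfit(doses, relative_germination, 1)
        r = np.corrcoef(doses, relative_germination)[0, 1]
        ld50 = (50.0 - intercept) / slope          # dose at which the fit crosses 50%

        print(f"r = {r:.2f}, LD50 = {ld50:.0f} Gy")   # strongly negative r, LD50 ~ 147 Gy here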

  9. Provenance research: investigation of genetic diversity associated with geography

    Science.gov (United States)

    Robert Z. Callaham

    1963-01-01

    Provenance in forestry refers to the population of trees growing at a particular place of origin. Provenance research defines the genetic and environmental components of phenotypic variation associated with geographic source. Information on provenance is important in assuring sources of seed to give well-adapted, productive trees and in directing breeding of...

  10. Fagus sylvatica L. provenances maintain different leaf metabolic profiles and functional response

    Science.gov (United States)

    Aranda, Ismael; Sánchez-Gómez, David; de Miguel, Marina; Mancha, Jose Antonio; Guevara, María Angeles; Cadahía, Estrella; Fernández de Simón, María Brígida

    2017-07-01

    Most temperate forest tree species will suffer important environmental changes as a result of climate change. Adaptiveness to local conditions could change at different sites in the future. In this context, the study of intra-specific variability is important to clarify the singularity of different local populations. Phenotypic differentiation between three beech provenances covering a wide latitudinal range (Spain/ES, Germany/DE and Sweden/SE) was studied in a greenhouse experiment. Non-targeted leaf metabolite profiles and the ecophysiological response were analyzed in well-watered and water-stressed seedlings. There was a provenance-specific pattern in the relative concentrations of some leaf metabolites regardless of watering treatment. The DE and SE provenances from the center and north of the distribution area of the species showed a clear differentiation from the ES provenance in the relative concentration of some metabolites. Thus the ES provenance from the south maintained larger relative concentrations of some organic and amino acids (e.g. fumaric and succinic acids, or valine and isoleucine) and of some secondary metabolites (e.g. kaempferol, caffeic and ferulic acids). The ecophysiological response to mild water stress was similar among the three provenances as a consequence of the moderate water stress applied to seedlings, although leaf N isotope composition (δ15N) and the leaf C:N ratio were, respectively, higher and lower in DE than in the other two provenances. This would suggest potential differences in the capacity to take up and post-process nitrogen according to provenance. An important focus of the study was to address, for the first time, inter-provenance leaf metabolic diversity in beech using a non-targeted metabolic profiling approach, which allowed differentiation of the three studied provenances.

  11. KNOWLEDGE-BASED MIGRATION AND MOBILITY: THE ECONOMIC 'GAMBLE' OF THE EASTERN NEIGHBOURHOOD

    Directory of Open Access Journals (Sweden)

    G. Kharlamova

    2016-10-01

    Full Text Available To what extent can scientific migration and mobility, and remittances, impact the economic development of donor and recipient states? How significant are they as a resource for the enhancement of the Eastern Partnership? The policy brief provides the results of a quantitative assessment of the costs and benefits of "smart" labour migration in the Eastern Partnership (EaP) countries and proposes some policy recommendations to enhance the benefits stemming from knowledge-based migration and mobility flows. We obtained evidence of mutual causality between the human development indicator (HDI) of a donor state and the most significant performance indicators of EaP migration in the EU ("smart mobility"). This means that the HDI of a donor state is responsive to the internal situation in the country, so the positive effect of smart mobility and remittance inflows can be easily absorbed inside the EaP. We observed the same for the gross national income of EaP donor states. However, our approach does not answer the question of what exactly the effect or the result is. The convergence effect of scientific migration in the EU and the Eastern Partnership region is considered on the basis of a calculative assessment. We considered a β-convergence approach, stating that it occurs when the EaP mobility rate grows faster than the EU one. As for σ-convergence, we defined it as a reduction over time of variation (inequality, differentiation) in the levels of migration of regions (countries). We can conclude that there was convergence between the EU and the EaP in scientific migration in the years of the EaP initiation, but no results in the process of its fulfilment.

  12. Knowledge based word-concept model estimation and refinement for biomedical text mining.

    Science.gov (United States)

    Jimeno Yepes, Antonio; Berlanga, Rafael

    2015-02-01

    Text mining of scientific literature has been essential for setting up large public biomedical databases, which are being widely used by the research community. In the biomedical domain, the existence of a large number of terminological resources and knowledge bases (KBs) has enabled a myriad of machine learning methods for different text mining related tasks. Unfortunately, KBs have not been devised for text mining tasks but for human interpretation, thus the performance of KB-based methods is usually lower when compared to supervised machine learning methods. The disadvantage of supervised methods, though, is that they require labeled training data and are therefore not useful for large-scale biomedical text mining systems. KB-based methods do not have this limitation. In this paper, we describe a novel method to generate word-concept probabilities from a KB, which can serve as a basis for several text mining tasks. This method not only takes into account the underlying patterns within the descriptions contained in the KB but also those in texts available from large unlabeled corpora such as MEDLINE. The parameters of the model have been estimated without training data. Patterns from MEDLINE have been built using MetaMap for entity recognition and related using co-occurrences. The word-concept probabilities were evaluated on the task of word sense disambiguation (WSD). The results showed that our method obtained a higher degree of accuracy than other state-of-the-art approaches when evaluated on the MSH WSD data set. We also evaluated our method on the task of document ranking using MEDLINE citations. These results also showed an increase in performance over existing baseline retrieval approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
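
    The use (though not the estimation) of such word-concept probabilities can be sketched for word sense disambiguation: each candidate concept is scored by the log-probabilities of the context words and the best-scoring concept is chosen. The probability table below is invented for illustration; it is not derived from the UMLS, MetaMap or MEDLINE.

        import math

        # p(word | concept), e.g. learned from KB descriptions plus corpus co-occurrences
        P = {
            "C_cold_temperature": {"weather": 0.20, "exposure": 0.10, "fever": 0.01},
            "C_common_cold":      {"weather": 0.02, "exposure": 0.03, "fever": 0.15},
        }
        FLOOR = 1e-4   # smoothing for context words unseen under a concept

        def disambiguate(context_words, table):
            scores = {
                concept: sum(math.log(dist.get(w, FLOOR)) for w in context_words)
                for concept, dist in table.items()
            }
            return max(scores, key=scores.get), scores

        best, scores = disambiguate(["fever", "exposure"], P)
        print(best)    # the 'common cold' concept wins for this context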

  13. A knowledge-based integrated approach for discovering and repairing declare maps

    NARCIS (Netherlands)

    Maggi, F.M.; Jagadeesh Chandra Bose, R.P.; Aalst, van der W.M.P.; Salinesi, C.; Norrie, M.C.; Pastor, O.

    2013-01-01

    Process mining techniques can be used to discover process models from event data. Often the resulting models are complex due to the variability of the underlying process. Therefore, we aim at discovering declarative process models that can deal with such variability. However, for real-life event

  14. A Comparative Study of Knowledge-Based Approaches for Cross-Language

    National Research Council Canada - National Science Library

    Oard, D; Dorr, B; Hackett, P; Katsova, M

    1998-01-01

    .... We have implemented six query translation techniques that use bilingual dictionaries, one based on lexical-semantic analysis, and one based on direct use of the translation output from an existing...

  15. Managing visitor sites in Svalbard: from a precautionary approach towards knowledge-based management

    Directory of Open Access Journals (Sweden)

    Kirstin Fangel

    2012-05-01

    Full Text Available Increased tourism in the Arctic calls for more knowledge to meet management challenges. This paper reviews existing knowledge of the effects of human use on vegetation, fauna and cultural heritage in Svalbard, and it addresses the need for site-specific knowledge for improved management. This paper draws upon scientific studies, knowledge held by management authorities and local people, the Governor's database on visitors and visited sites and our own data from landing sites we visited. There is a certain level of basic knowledge available, allowing us to roughly grade the vulnerability of sites. However, there is a thorough lack of site-specific data related to the management of single locations or groups of similar locations. Future research needs to address specific on-site challenges in the management of visitor sites. Relevant management models and measures are discussed. We contend that a shift away from a blanket application of the precautionary principle and towards a more integrated, site-specific and evidence-based management plan will contribute to more trusted and reliable, and thereby acceptable among stakeholders, decisions in the management of growing tourism activity in Svalbard.

  16. Knowledge Base of Mathematics Teacher Educators: A Goals-Knowledge-Practice Approach

    Science.gov (United States)

    Veselovsky, Aleksandra

    2017-01-01

    Critical analysis of the literature reveals that many questions about the knowledge and practice of mathematics teacher educators (MTEs) remain in need of further research: how do they know what to teach; how do they learn how to teach teachers; how do they prepare to teach their courses; how does the research on teacher education inform their…

  17. Automatic segmentation of coronary vessels from digital subtracted angiograms: a knowledge-based approach

    International Nuclear Information System (INIS)

    Stansfield, S.A.

    1986-01-01

    This paper presents a rule-based expert system for identifying and isolating coronary vessels in digital angiograms. The system is written in OPS5 and LISP and uses low level processors written in C. The system embodies both stages of the vision hierarchy: The low level image processing stage works concurrently with edges (or lines) and regions to segment the input image. Its knowledge is that of segmentation, grouping, and shape analysis. The high level stage then uses its knowledge of cardiac anatomy and physiology to interpret the result and to eliminate those structures not desired in the output. (Auth.)

  18. Painterly rendered portraits from photographs using a knowledge-based approach

    Science.gov (United States)

    DiPaola, Steve

    2007-02-01

    Portrait artists using oils, acrylics or pastels use a specific but open human vision methodology to create a painterly portrait of a live sitter. When they must use a photograph as source, artists augment their process, since photographs have: different focusing - everything is in focus or focused in vertical planes; value clumping - the camera darkens the shadows and lightens the bright areas; as well as color and perspective distortion. In general, artistic methodology attempts the following: from the photograph, the painting must 'simplify, compose and leave out what's irrelevant, emphasizing what's important'. While seemingly a qualitative goal, artists use known techniques such as relying on source tone over color to indirect into a semantic color temperature model, use brush and tonal "sharpness" to create a center of interest, lost and found edges to move the viewer's gaze through the image towards the center of interest as well as other techniques to filter and emphasize. Our work attempts to create a knowledge domain of the portrait painter process and incorporate this knowledge into a multi-space parameterized system that can create an array of NPR painterly rendering output by analyzing the photographic-based input which informs the semantic knowledge rules.

  19. Model-based Rational and Systematic Protein Purification Process Development : A Knowledge-based Approach

    NARCIS (Netherlands)

    Kungah Nfor, B.

    2011-01-01

    The increasing market and regulatory (quality and safety) demands on therapeutic proteins call for radical improvement in their manufacturing processes. Addressing these challenges requires the adoption of strategies and tools that enable faster and more efficient process development. This thesis

  20. A knowledge-based system approach for sensor fault modeling, detection and mitigation

    Data.gov (United States)

    National Aeronautics and Space Administration — Sensors are vital components for control and advanced health management techniques. However, sensors continue to be considered the weak link in many engineering...

  1. A community effort towards a knowledge-base and mathematical model of the human pathogen Salmonella Typhimurium LT2

    Directory of Open Access Journals (Sweden)

    Shin Sook-Il

    2011-01-01

    Full Text Available Abstract Background Metabolic reconstructions (MRs) are common denominators in systems biology and represent biochemical, genetic, and genomic (BiGG) knowledge-bases for target organisms by capturing currently available information in a consistent, structured manner. Salmonella enterica subspecies I serovar Typhimurium is a human pathogen, causes various diseases and its increasing antibiotic resistance poses a public health problem. Results Here, we describe a community-driven effort, in which more than 20 experts in S. Typhimurium biology and systems biology collaborated to reconcile and expand the S. Typhimurium BiGG knowledge-base. The consensus MR was obtained starting from two independently developed MRs for S. Typhimurium. Key results of this reconstruction jamboree include i) development and implementation of a community-based workflow for MR annotation and reconciliation; ii) incorporation of thermodynamic information; and iii) use of the consensus MR to identify potential multi-target drug therapy approaches. Conclusion Taken together, with the growing number of parallel MRs a structured, community-driven approach will be necessary to maximize quality while increasing adoption of MRs in experimental design and interpretation.

  2. A community effort towards a knowledge-base and mathematical model of the human pathogen Salmonella Typhimurium LT2.

    Science.gov (United States)

    Thiele, Ines; Hyduke, Daniel R; Steeb, Benjamin; Fankam, Guy; Allen, Douglas K; Bazzani, Susanna; Charusanti, Pep; Chen, Feng-Chi; Fleming, Ronan M T; Hsiung, Chao A; De Keersmaecker, Sigrid C J; Liao, Yu-Chieh; Marchal, Kathleen; Mo, Monica L; Özdemir, Emre; Raghunathan, Anu; Reed, Jennifer L; Shin, Sook-il; Sigurbjörnsdóttir, Sara; Steinmann, Jonas; Sudarsan, Suresh; Swainston, Neil; Thijs, Inge M; Zengler, Karsten; Palsson, Bernhard O; Adkins, Joshua N; Bumann, Dirk

    2011-01-18

    Metabolic reconstructions (MRs) are common denominators in systems biology and represent biochemical, genetic, and genomic (BiGG) knowledge-bases for target organisms by capturing currently available information in a consistent, structured manner. Salmonella enterica subspecies I serovar Typhimurium is a human pathogen, causes various diseases and its increasing antibiotic resistance poses a public health problem. Here, we describe a community-driven effort, in which more than 20 experts in S. Typhimurium biology and systems biology collaborated to reconcile and expand the S. Typhimurium BiGG knowledge-base. The consensus MR was obtained starting from two independently developed MRs for S. Typhimurium. Key results of this reconstruction jamboree include i) development and implementation of a community-based workflow for MR annotation and reconciliation; ii) incorporation of thermodynamic information; and iii) use of the consensus MR to identify potential multi-target drug therapy approaches. Taken together, with the growing number of parallel MRs a structured, community-driven approach will be necessary to maximize quality while increasing adoption of MRs in experimental design and interpretation.

  3. THE METHODOLOGICAL WAYS OF FORMING THE KNOWLEDGE BASE OF THE AUTOMATIC DIAGNOSTICS SYSTEM OF THE COMPLEX AIRCRAFT OBJECT

    Directory of Open Access Journals (Sweden)

    Ю. Чоха

    2012-04-01

    Full Text Available The development of such systems provides for the reception of a multitude of information and for improving its analysis for the diagnostics of aviation equipment. However, the theoretical bases for the structuring and analysis of this information are insufficiently motivated. At the present stage in the evolution of artificial intelligence, the technological (practical) facilities for developing intelligent systems tend to outrun their theoretical development. In this connection, the article emphasizes the idea that classical approaches to the analytical foundations of cybernetics have grown old. Accordingly, to ensure the functioning of automatic diagnostics systems, it is necessary to consider the ways (strategies) of decomposition and the structure of the knowledge base in relation to the concrete aviation object. However, a synthesis of deductive and inductive strategies for shaping the structure of the knowledge base can be insufficient in some cases when building a diagnostics system for a complex aviation object with in-depth diagnosis down to the constructive node. For this case, at each level of structuring of the knowledge base, the authors propose to also apply a strategy of parallel (horizontal) decomposition of the diagnosed object with respect to its behaviour at transitions from one stationary operational regime to another. As a base paradigm for the methodology of structural analysis and the formation of a field of knowledge, the authors propose to use the generalized objective-structural approach, which was developed through to technological and program realisation.

  4. A community effort towards a knowledge-base and mathematical model of the human pathogen Salmonella Typhimurium LT2

    Energy Technology Data Exchange (ETDEWEB)

    Thiele, Ines; Hyduke, Daniel R.; Steeb, Benjamin; Fankam, Guy; Allen, Douglas K.; Bazzani, Susanna; Charusanti, Pep; Chen, Feng-Chi; Fleming, Ronan MT; Hsiung, Chao A.; De Keersmaecker, Sigrid CJ; Liao, Yu-Chieh; Marchal, Kathleen; Mo, Monica L.; Özdemir, Emre; Raghunathan, Anu; Reed, Jennifer L.; Shin, Sook-Il; Sigurbjörnsdóttir, Sara; Steinmann, Jonas; Sudarsan, Suresh; Swainston, Neil; Thijs, Inge M.; Zengler, Karsten; Palsson, Bernhard O.; Adkins, Joshua N.; Bumann, Dirk

    2011-01-01

    Metabolic reconstructions (MRs) are common denominators in systems biology and represent biochemical, genetic, and genomic (BiGG) knowledge-bases for target organisms by capturing currently available information in a consistent, structured manner. Salmonella enterica subspecies I serovar Typhimurium is a human pathogen, causes various diseases and its increasing antibiotic resistance poses a public health problem. Here, we describe a community-driven effort, in which more than 20 experts in S. Typhimurium biology and systems biology collaborated to reconcile and expand the S. Typhimurium BiGG knowledge-base. The consensus MR was obtained starting from two independently developed MRs for S. Typhimurium. Key results of this reconstruction jamboree include i) development and implementation of a community-based workflow for MR annotation and reconciliation; ii) incorporation of thermodynamic information; and iii) use of the consensus MR to identify potential multi-target drug therapy approaches. Finally, taken together, with the growing number of parallel MRs a structured, community-driven approach will be necessary to maximize quality while increasing adoption of MRs in experimental design and interpretation.

  5. Analysis of Russia's biofuel knowledge base: A comparison with Germany and China

    International Nuclear Information System (INIS)

    Kang, Jin-Su; Kholod, Tetyana; Downing, Stephen

    2015-01-01

    This study assesses the evolutionary trajectory of the knowledge base of Russian biofuel technology compared to that of Germany, one of the successful leaders in adopting renewable energy, and China, an aggressive latecomer at promoting renewable energy. A total of 1797 patents filed in Russia, 8282 in Germany and 20,549 in China were retrieved from the European Patent Office database through 2012. We identify four collectively representative measures of a knowledge base (size, growth, cumulativeness, and interdependence), which are observable from biofuel patent citations. Furthermore, we define the exploratory–exploitative index, which enables us to identify the nature of learning embedded in the knowledge base structure. Our citation network analysis of the biofuel knowledge base trajectory by country, in conjunction with policy milestones, shows that Russia's biofuel knowledge base lacks both the increasing technological specialization of that in Germany and the accelerated growth rate of that in China. The German biofuel citation network shows a well-established knowledge base with increasing connectivity, while China's has grown exceptionally fast but with a sparseness of citations reflecting limited connections to preceding, foundational technologies. We conclude by addressing policy implications as well as limitations of the study and potential topics to explore in future research. -- Highlights: •Biofuel knowledge base (KB) of Russia is compared to those of Germany and China. •Citations network analysis measures KB size, growth, cumulativeness, and interdependence. •Russian KB lacks the increasing technological specialization of German KB. •Russia KB lacks the accelerated growth rate of Chinese KB. •Russia KB evolution reflects the poor institutional framework
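
    Rough proxies for the four knowledge-base measures can be computed from a citation graph; the sketch below uses networkx on a toy patent network. The exact metric definitions used in the paper are not reproduced, and the graph and filing years are invented.

        import networkx as nx

        G = nx.DiGraph()   # edge A -> B means patent A cites patent B; toy data
        G.add_edges_from([("P3", "P1"), ("P3", "P2"), ("P4", "P3"), ("P5", "P3"), ("P5", "P1")])
        filing_year = {"P1": 2005, "P2": 2006, "P3": 2009, "P4": 2011, "P5": 2012}

        size = G.number_of_nodes()
        years = sorted(set(filing_year.values()))
        growth = size / (years[-1] - years[0] + 1)                  # patents per year, crude
        cumulativeness = sum(d for _, d in G.out_degree()) / size   # mean citations to prior art
        interdependence = nx.density(G)                             # how connected the base is

        print(size, round(growth, 2), round(cumulativeness, 2), round(interdependence, 3))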

  6. Big Data Provenance: Challenges, State of the Art and Opportunities.

    Science.gov (United States)

    Wang, Jianwu; Crawl, Daniel; Purawat, Shweta; Nguyen, Mai; Altintas, Ilkay

    2015-01-01

    Ability to track provenance is a key feature of scientific workflows to support data lineage and reproducibility. The challenges that are introduced by the volume, variety and velocity of Big Data, also pose related challenges for provenance and quality of Big Data, defined as veracity. The increasing size and variety of distributed Big Data provenance information bring new technical challenges and opportunities throughout the provenance lifecycle including recording, querying, sharing and utilization. This paper discusses the challenges and opportunities of Big Data provenance related to the veracity of the datasets themselves and the provenance of the analytical processes that analyze these datasets. It also explains our current efforts towards tracking and utilizing Big Data provenance using workflows as a programming model to analyze Big Data.

  7. Data Provenance Architecture for the Geosciences

    Science.gov (United States)

    Murphy, F.; Irving, D. H.

    2012-12-01

    The pace at which geoscientific insights inform societal development quickens with time and these insights drive decisions and actions of ever-increasing human and economic significance. Until recently academic, commercial and government bodies have maintained distinct bodies of knowledge to support scientific enquiry as well as societal development. However, it has become clear that the curation of the body of data is an activity of equal or higher social and commercial value. We address the community challenges in the curation of, access to, and analysis of scientific data including: the tensions between creators, providers and users; incentives and barriers to sharing; ownership and crediting. We also discuss the technical and financial challenges in maximising the return on the effort made in generating geoscientific data. To illustrate how these challenges might be addressed in the broader geoscientific domain, we describe the high-level data governance and analytical architecture in the upstream Oil Industry. This domain is heavily dependent on costly and highly diverse geodatasets collected and assimilated over timeframes varying from seconds to decades. These data must support both operational decisions at the minute-hour timeframe and strategic and economic decisions of enterprise or national scale, and yet be sufficiently robust to last the life of a producing field. We develop three themes around data provenance, data ownership and business models for data curation. 1/ The overarching aspiration is to ensure that data provenance and quality are maintained along the analytical workflow. Hence if the data on which a publication or report is based change, the report and its publishers can be notified, and we describe a mechanism by which dependent knowledge products can be flagged. 2/ From a cost and management point of view we look at who "owns" data especially in cases where the cost of curation and stewardship is significant compared to the cost of acquiring the data

  8. DECK: Distance and environment-dependent, coarse-grained, knowledge-based potentials for protein-protein docking

    Directory of Open Access Journals (Sweden)

    Vakser Ilya A

    2011-07-01

    Full Text Available Abstract Background Computational approaches to protein-protein docking typically include scoring aimed at improving the rank of the near-native structure relative to the false-positive matches. Knowledge-based potentials improve modeling of protein complexes by taking advantage of the rapidly increasing amount of experimentally derived information on protein-protein association. An essential element of knowledge-based potentials is defining the reference state for an optimal description of the residue-residue (or atom-atom) pairs in the non-interaction state. Results The study presents a new Distance- and Environment-dependent, Coarse-grained, Knowledge-based (DECK) potential for scoring of protein-protein docking predictions. Training sets of protein-protein matches were generated based on bound and unbound forms of proteins taken from the DOCKGROUND resource. Each residue was represented by a pseudo-atom in the geometric center of the side chain. To capture the long-range and the multi-body interactions, residues in different secondary structure elements at protein-protein interfaces were considered as different residue types. Five reference states for the potentials were defined and tested. The optimal reference state was selected and the cutoff effect on the distance-dependent potentials investigated. The potentials were validated on the docking decoys sets, showing better performance than the existing potentials used in scoring of protein-protein docking results. Conclusions A novel residue-based statistical potential for protein-protein docking was developed and validated on docking decoy sets. The results show that the scoring function DECK can successfully identify near-native protein-protein matches and thus is useful in protein docking. In addition to the practical application of the potentials, the study provides insights into the relative utility of the reference states, the scope of the distance dependence, and the coarse-graining of
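
    Knowledge-based pair potentials of this family are typically of inverse-Boltzmann form, E(r) = -ln(P_obs(r)/P_ref(r)); the sketch below computes such a profile from toy contact counts. The counts and bins are fabricated, and the environment dependence (secondary-structure classes) that distinguishes DECK is omitted.

        import numpy as np

        bins = np.array([4, 5, 6, 7, 8, 9])          # distance bin edges (Angstrom), toy values
        obs = np.array([30, 80, 120, 90, 60])        # observed Leu-Leu pseudo-atom contacts per bin
        ref = np.array([50, 70, 100, 110, 120])      # expected contacts under the reference state

        p_obs = obs / obs.sum()
        p_ref = ref / ref.sum()
        energy = -np.log(p_obs / p_ref)              # negative (favourable) where contacts are enriched

        for lo, hi, e in zip(bins[:-1], bins[1:], energy):
            print(f"{lo}-{hi} A: {e:+.2f}")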

  9. A protein relational database and protein family knowledge bases to facilitate structure-based design analyses.

    Science.gov (United States)

    Mobilio, Dominick; Walker, Gary; Brooijmans, Natasja; Nilakantan, Ramaswamy; Denny, R Aldrin; Dejoannis, Jason; Feyfant, Eric; Kowticwar, Rupesh K; Mankala, Jyoti; Palli, Satish; Punyamantula, Sairam; Tatipally, Maneesh; John, Reji K; Humblet, Christine

    2010-08-01

    The Protein Data Bank is the most comprehensive source of experimental macromolecular structures. It can, however, be difficult at times to locate relevant structures with the Protein Data Bank search interface. This is particularly true when searching for complexes containing specific interactions between protein and ligand atoms. Moreover, searching within a family of proteins can be tedious. For example, one cannot search for some conserved residue as residue numbers vary across structures. We describe herein three databases, Protein Relational Database, Kinase Knowledge Base, and Matrix Metalloproteinase Knowledge Base, containing protein structures from the Protein Data Bank. In Protein Relational Database, atom-atom distances between protein and ligand have been precalculated allowing for millisecond retrieval based on atom identity and distance constraints. Ring centroids, centroid-centroid and centroid-atom distances and angles have also been included permitting queries for pi-stacking interactions and other structural motifs involving rings. Other geometric features can be searched through the inclusion of residue pair and triplet distances. In Kinase Knowledge Base and Matrix Metalloproteinase Knowledge Base, the catalytic domains have been aligned into common residue numbering schemes. Thus, by searching across Protein Relational Database and Kinase Knowledge Base, one can easily retrieve structures wherein, for example, a ligand of interest is making contact with the gatekeeper residue.
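
    The style of query such a precalculated relational store enables can be mocked up with an in-memory table: filter contacts by atom identity, a residue in a common numbering scheme, and a distance cutoff. The schema, the 'GK+3' residue label and all values are invented, not the actual Protein Relational Database tables.

        import pandas as pd

        contacts = pd.DataFrame([
            # pdb_id, ligand_atom, protein_res (common numbering), protein_atom, distance (A)
            ("1ABC", "N1", "GK+3", "N",  2.9),
            ("1ABC", "O2", "GK",   "OG", 3.4),
            ("2XYZ", "N1", "GK+3", "N",  3.5),
            ("2XYZ", "C7", "GK+1", "CB", 4.2),
        ], columns=["pdb_id", "ligand_atom", "protein_res", "protein_atom", "distance"])

        # "Which structures have a ligand nitrogen within 3.2 A of the hinge residue GK+3?"
        hits = contacts.query("ligand_atom == 'N1' and protein_res == 'GK+3' and distance <= 3.2")
        print(hits.pdb_id.tolist())   # -> ['1ABC']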

  10. The Particularities of the Economic Crisis in the Knowledge-Based Society

    Directory of Open Access Journals (Sweden)

    Ion IVAN

    2011-02-01

    The paper presents the characteristics of the informational society and the requirements for the transition towards the knowledge-based society. The informational society and the advantages it brings are described, the concepts of informatics system and informational system are defined, and their roles and functions are identified. The functioning principles of informatics systems are presented. The factors that drive the evolution from the informational society towards the knowledge-based one are identified and analyzed, together with the main trends in the knowledge-based society. For each of these trends we identify the short- and long-run effects and the implied consumption of resources. Knowledge is seen in the new society as the main source of competitive advantage. Ways of transmitting knowledge are presented, and their efficiency and applicability are discussed. In the knowledge-based society, end-users have higher expectations regarding products and services than before. The paper discusses the factors that lead to structural re-equilibration in the knowledge-based society. Research concepts are developed in the direction of adapting informatics applications to the target group's exigencies. The paper details the quality characteristics of computer applications tailored to the requirements of the target group, presents a quality system for such applications, and examines the benefits obtained by adapting computer applications to the target group's exigencies.

  11. Data provenance assurance in the cloud using blockchain

    Science.gov (United States)

    Shetty, Sachin; Red, Val; Kamhoua, Charles; Kwiat, Kevin; Njilla, Laurent

    2017-05-01

    Ever-increasing adoption of cloud technology scales up activities such as the creation, exchange, and alteration of cloud data objects, which creates challenges in tracking malicious activities and security violations. Addressing this issue requires a data provenance framework in which each data object in the federated cloud environment can be tracked and recorded but cannot be modified. Blockchain technology provides a promising decentralized platform for building tamper-proof systems; its incorruptible distributed ledger complements the need to maintain cloud data provenance. In this paper, we present a cloud-based data provenance framework using blockchain, which traces data record operations and generates provenance data. We anchor provenance data records into blockchain transactions, which validates the provenance data and preserves user privacy at the same time. Once the provenance data is uploaded to the global blockchain network, it is extremely challenging to tamper with it. In addition, user identifiers in the provenance data are hashed prior to uploading, so the blockchain nodes cannot link the operations to a particular user. The framework thus ensures that privacy is preserved. We implemented the architecture on ownCloud, uploaded records to the blockchain network, stored records in a provenance database, and developed a prototype in the form of a web service.
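
    The sketch below illustrates the anchoring idea in a few lines of Python, assuming nothing about the actual implementation: the user identifier is hashed for pseudonymity, the full provenance record is kept in a local store, and only the record's hash is committed to a mocked append-only chain so the record can later be verified but not silently altered. All names and fields are illustrative.

        # Toy sketch: hash the user identifier, keep the full provenance record
        # locally, and anchor only the record hash in a mock chain.
        import hashlib, json, time

        def sha256_hex(data: bytes) -> str:
            return hashlib.sha256(data).hexdigest()

        def make_provenance_record(user_id, file_id, operation):
            record = {
                "user": sha256_hex(user_id.encode()),   # pseudonymous identifier
                "file": file_id,
                "operation": operation,                 # e.g. "create", "modify"
                "timestamp": time.time(),
            }
            record_hash = sha256_hex(json.dumps(record, sort_keys=True).encode())
            return record, record_hash

        chain = []   # mock append-only ledger standing in for blockchain transactions

        def anchor(record_hash):
            prev = chain[-1]["block_hash"] if chain else "0" * 64
            block = {"prev": prev, "data": record_hash}
            block["block_hash"] = sha256_hex(json.dumps(block, sort_keys=True).encode())
            chain.append(block)
            return block["block_hash"]

        def verify(record, anchored_hash):
            return sha256_hex(json.dumps(record, sort_keys=True).encode()) == anchored_hash

        rec, h = make_provenance_record("alice@example.org", "doc-42", "modify")
        anchor(h)
        assert verify(rec, h)   # stored record matches the anchored hash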

  12. Mapping the knowledge base for maritime health: 4 safety and performance at sea.

    Science.gov (United States)

    Carter, Tim

    2011-01-01

    There is very little recent investigative work on the contribution of health-related impairment and disability either to accident risks or to reduced performance at sea, the only exception being studies on fatigue and parallel data on sleep-related incidents. Incidents to which health-related impairment other than fatigue has contributed are very rarely found in reports of maritime accident investigations. This may either indicate the irrelevance of these forms of impairment to accidents or point to the effectiveness of existing control measures. The main approach to risk reduction is the application of fitness criteria to seafarers during medical examinations. Where a knowledge base exists, it is either, as in the case of vision, a very old one that relates to patterns of visual tasks that differ markedly from those in modern shipping or, as with hearing, based on untested assumptions about the levels of impairment that will prevent effective communication at sea. There are practical limitations to the assessment of cognitive functions, as these encompass a wide range of impairments, from those associated with fatigue, medication, or substance abuse to those relating to age or to the risk of sudden incapacitation from a pre-existing illness. Physical capability can be assessed, but only in limited ways, in the course of a medical examination. In the absence of clear evidence of accident risks associated with health-related impairments or disabilities, it is unlikely that there will be pressure to update criteria that appear to be providing satisfactory protection. As capability is related to the tasks performed, investigations need to integrate information on ergonomic and organizational aspects with that on health and impairment. Criteria that may select seafarers with health-related impairment need to be reviewed wherever the task demands in modern shipping have changed, in order to relax or modify them where indicated in order to reduce

  13. Multidimensional segmentation of coronary intravascular ultrasound images using knowledge-based methods

    Science.gov (United States)

    Olszewski, Mark E.; Wahle, Andreas; Vigmostad, Sarah C.; Sonka, Milan

    2005-04-01

    In vivo studies of the relationships that exist among vascular geometry, plaque morphology, and hemodynamics have recently been made possible through the development of a system that accurately reconstructs coronary arteries imaged by x-ray angiography and intravascular ultrasound (IVUS) in three dimensions. Currently, the bottleneck of the system is the segmentation of the IVUS images. It is well known that IVUS images contain numerous artifacts from various sources. Previous attempts to create automated IVUS segmentation systems have suffered either from a cost function that does not include enough information or from a non-optimal segmentation algorithm. The approach presented in this paper seeks to address both of those weaknesses: first by building a robust, knowledge-based cost function, and then by using a fully optimal, three-dimensional segmentation algorithm. The cost function contains three categories of information: a compendium of learned border patterns, information theoretic and statistical properties related to the imaging physics, and local image features. By combining these criteria in an optimal way, weaknesses associated with cost functions that only try to optimize a single criterion are minimized. This cost function is then used as the input to a fully optimal, three-dimensional, graph search-based segmentation algorithm. The resulting system has been validated against a set of manually traced IVUS image sets. Results did not show any bias, with a mean unsigned luminal border positioning error of 0.180 +/- 0.027 mm and an adventitial border positioning error of 0.200 +/- 0.069 mm.
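
    The simplified two-dimensional sketch below illustrates the general recipe of combining several cost terms into one cost image and extracting a minimum-cost border by dynamic programming. It is a stand-in for, not a reproduction of, the fully optimal three-dimensional graph search used in the paper, and the weights and smoothness constraint are assumed values.

        # Simplified 2-D stand-in for optimal border detection: combine cost terms
        # into one cost image and trace a minimum-cost border column by column.
        import numpy as np

        def combined_cost(edge_cost, pattern_cost, texture_cost, w=(0.5, 0.3, 0.2)):
            """Each term is an (n_angles, n_radii) array; lower = more border-like."""
            return w[0] * edge_cost + w[1] * pattern_cost + w[2] * texture_cost

        def optimal_border(cost, max_jump=2):
            """Minimum-cost radial index per angular column of a polar-unwrapped
            image, limiting the radial jump between neighbouring columns."""
            n_ang, n_rad = cost.shape
            acc = np.full_like(cost, np.inf)
            back = np.zeros(cost.shape, dtype=int)
            acc[0] = cost[0]
            for a in range(1, n_ang):
                for r in range(n_rad):
                    lo, hi = max(0, r - max_jump), min(n_rad, r + max_jump + 1)
                    prev = int(np.argmin(acc[a - 1, lo:hi])) + lo
                    acc[a, r] = cost[a, r] + acc[a - 1, prev]
                    back[a, r] = prev
            border = np.zeros(n_ang, dtype=int)
            border[-1] = int(np.argmin(acc[-1]))
            for a in range(n_ang - 2, -1, -1):
                border[a] = back[a + 1, border[a + 1]]
            return border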

  14. The provenance of Taklamakan desert sand

    Science.gov (United States)

    Rittner, Martin; Vermeesch, Pieter; Carter, Andrew; Bird, Anna; Stevens, Thomas; Garzanti, Eduardo; Andò, Sergio; Vezzoli, Giovanni; Dutt, Ripul; Xu, Zhiwei; Lu, Huayu

    2016-03-01

    Sand migration in the vast Taklamakan desert within the Tarim Basin (Xinjiang Uyghur Autonomous region, PR China) is governed by two competing transport agents: wind and water, which work in diametrically opposed directions. Net aeolian transport is from northeast to south, while fluvial transport occurs from the south to the north and then west to east at the northern rim, due to a gradual northward slope of the underlying topography. We here present the first comprehensive provenance study of Taklamakan desert sand with the aim to characterise the interplay of these two transport mechanisms and their roles in the formation of the sand sea, and to consider the potential of the Tarim Basin as a contributing source to the Chinese Loess Plateau (CLP). Our dataset comprises 39 aeolian and fluvial samples, which were characterised by detrital-zircon U-Pb geochronology, heavy-mineral, and bulk-petrography analyses. Although the inter-sample differences of all three datasets are subtle, a multivariate statistical analysis using multidimensional scaling (MDS) clearly shows that Tarim desert sand is most similar in composition to rivers draining the Kunlun Shan (south) and the Pamirs (west), and is distinctly different from sediment sources in the Tian Shan (north). A small set of samples from the Junggar Basin (north of the Tian Shan) yields different detrital compositions and age spectra than anywhere in the Tarim Basin, indicating that aeolian sediment exchange between the two basins is minimal. Although river transport dominates delivery of sand into the Tarim Basin, wind remobilises and reworks the sediment in the central sand sea. Characteristic signatures of main rivers can be traced from entrance into the basin to the terminus of the Tarim River, and those crossing the desert from the south to north can seasonally bypass sediment through the sand sea. Smaller ephemeral rivers from the Kunlun Shan end in the desert and discharge their sediment there. Both river run
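
    A minimal sketch of the multidimensional scaling step described above: pairwise Kolmogorov-Smirnov distances between detrital-zircon U-Pb age spectra are embedded into two dimensions so that compositionally similar samples plot close together. The sample ages are placeholders, and the choice of KS distance is an assumption about a commonly used dissimilarity measure, not a statement of the paper's exact workflow.

        # Pairwise KS distances between age spectra, embedded in 2-D by MDS.
        import numpy as np
        from scipy.stats import ks_2samp
        from sklearn.manifold import MDS

        samples = {                      # hypothetical U-Pb ages in Ma
            "dune_A":   np.array([250, 260, 440, 450, 800, 2500]),
            "Kunlun_R": np.array([240, 255, 430, 460, 810, 2450]),
            "TianShan": np.array([290, 300, 310, 950, 1800, 1850]),
        }
        names = list(samples)
        n = len(names)

        dist = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                d = ks_2samp(samples[names[i]], samples[names[j]]).statistic
                dist[i, j] = dist[j, i] = d

        coords = MDS(n_components=2, dissimilarity="precomputed",
                     random_state=0).fit_transform(dist)
        for name, (x, y) in zip(names, coords):
            print(f"{name}: ({x:.3f}, {y:.3f})")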

  15. Provenance-aware optimization of workload for distributed data production

    Science.gov (United States)

    Makatun, Dzmitry; Lauret, Jérôme; Rudová, Hana; Šumbera, Michal

    2017-10-01

    Distributed data processing in High Energy and Nuclear Physics (HENP) is a prominent example of big data analysis. With petabytes of data being processed at tens of computational sites with thousands of CPUs, standard job scheduling approaches either do not address the complexity of the problem well or are dedicated to only one specific aspect of the problem (CPU, network or storage). Previously we developed a new job scheduling approach dedicated to distributed data production, an essential part of data processing in HENP (preprocessing in big data terminology). In this contribution, we discuss load balancing with multiple data sources and data replication, present recent improvements made to our planner, and provide results of simulations that demonstrate the advantage over standard scheduling policies for the new use case. Multiple data sources, or provenance, are common in the computing models of many applications, where the data may be copied to several destinations. The initial input data set would hence already be partially replicated to multiple locations, and the task of the scheduler is to maximize overall computational throughput considering possible data movements and CPU allocation. The studies have shown that our approach can provide a significant gain in overall computational performance in a wide range of simulations assuming a realistic size of the computational Grid and various input data distributions.
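
    As a toy illustration of the planning problem, the sketch below greedily assigns each job to the site with the smallest estimated completion time, accounting for the site backlog, any transfer of input data from an existing replica, and the processing time itself. Site names, rates, and bandwidths are invented for the example; the actual planner described above is considerably more sophisticated.

        # Greedy site selection for jobs whose inputs are already replicated at
        # several sites. All numbers and names are made up for illustration.
        from dataclasses import dataclass

        @dataclass
        class Site:
            name: str
            hours_per_job: float        # processing time of one job at this site
            queued_hours: float = 0.0   # current backlog at this site

        def schedule(jobs, sites, bandwidth, replicas):
            """jobs: list of (job_id, input_size_gb); replicas[job_id]: set of site
            names already holding the input; bandwidth[(src, dst)]: GB per hour."""
            plan = {}
            for job_id, size_gb in jobs:
                best, best_eta = None, float("inf")
                for site in sites:
                    if site.name in replicas[job_id]:
                        move = 0.0              # data already local, no transfer
                    else:
                        move = min(size_gb / bandwidth[(src, site.name)]
                                   for src in replicas[job_id])
                    eta = site.queued_hours + move + site.hours_per_job
                    if eta < best_eta:
                        best, best_eta = site, eta
                best.queued_hours += best.hours_per_job   # book the chosen site
                plan[job_id] = best.name
            return plan

        sites = [Site("BNL", 0.5), Site("LBNL", 0.8)]
        bandwidth = {("BNL", "LBNL"): 100.0, ("LBNL", "BNL"): 100.0}
        replicas = {"job1": {"BNL"}, "job2": {"BNL"}, "job3": {"BNL", "LBNL"}}
        print(schedule([("job1", 50), ("job2", 50), ("job3", 50)], sites, bandwidth, replicas))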

  16. On the Implications of Knowledge Bases for Regional Innovation Policies in Germany

    Directory of Open Access Journals (Sweden)

    Hassink Robert

    2014-12-01

    Regional innovation policies have been criticised for being too standardised, one-size-fits-all and place-neutral in character. Embedded in these debates, this paper has two aims: first, to analyse whether industries with different knowledge bases in German regions have different needs for regional innovation policies, and second, to investigate whether knowledge bases can contribute to the fine-tuning of regional innovation policies in particular and to a modern, tailor-made, place-based regional innovation policy in general. It concludes that although needs differ due to differences in knowledge bases, those bases are useful only to a limited extent in fine-tuning regional innovation policies.

  17. The research on construction and application of machining process knowledge base

    Science.gov (United States)

    Zhao, Tan; Qiao, Lihong; Qie, Yifan; Guo, Kai

    2018-03-01

    In order to realize the application of knowledge in machining process design, and from the perspective of knowledge use in computer-aided process planning (CAPP), a hierarchical structure of knowledge classification is established according to the characteristics of the mechanical engineering field. The expression of machining process knowledge is structured by means of production rules and object-oriented methods. Three kinds of knowledge base models are constructed according to the representation of machining process knowledge. In this paper, the definition and classification of machining process knowledge, the knowledge model, and the application flow of process design based on the knowledge base are given, and the main steps of the machine tool design decision are carried out as an application using the knowledge base.
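
    A minimal production-rule sketch, under assumed feature attributes and thresholds, is shown below: process knowledge is written as IF-THEN rules over an object-oriented part description, and rule order encodes priority when proposing machining operations.

        # Toy production-rule system over an object-oriented feature description.
        # Attribute names, thresholds, and operations are illustrative assumptions.
        from dataclasses import dataclass

        @dataclass
        class Feature:
            kind: str              # e.g. "hole", "plane", "slot"
            tolerance_mm: float
            surface_ra_um: float

        RULES = [
            (lambda f: f.kind == "hole" and f.tolerance_mm <= 0.02, "drill then ream"),
            (lambda f: f.kind == "hole", "drill"),
            (lambda f: f.kind == "plane" and f.surface_ra_um <= 0.8, "mill then grind"),
            (lambda f: f.kind == "plane", "mill"),
        ]

        def plan_operations(features):
            """Return the first matching operation for each feature; rule order
            encodes priority, as in a simple production system."""
            plan = []
            for f in features:
                for condition, operation in RULES:
                    if condition(f):
                        plan.append((f.kind, operation))
                        break
            return plan

        print(plan_operations([Feature("hole", 0.01, 1.6), Feature("plane", 0.1, 0.4)]))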

  18. SIIA: a knowledge-based assistant for the SAFT ultrasonic inspection system

    International Nuclear Information System (INIS)

    Melton, R.B.; Doctor, S.R.; Taylor, T.T.; Badalamente, R.V.

    1987-01-01

    SIIA is a knowledge-based system designed to assist in making the operation of the Synthetic Aperture Focussing Technique (SAFT) Ultrasonic Inspection System more reliable and efficient. This paper reports on the authors' effort to develop a prototype version of SIIA to demonstrate the feasibility of using knowledge-based systems in nondestructive evaluation (NDE). The first section of the paper describes the structure of the problem and the conceptual design of the knowledge-based system. The next section describes the current state of the prototype SIIA system and relates some of the experiences gained in developing it. The final section discusses plans for future development of SIIA and the implications of this type of system for other NDE techniques and applications.

  19. Semantics-based plausible reasoning to extend the knowledge coverage of medical knowledge bases for improved clinical decision support.

    Science.gov (United States)

    Mohammadhassanzadeh, Hossein; Van Woensel, William; Abidi, Samina Raza; Abidi, Syed Sibte Raza

    2017-01-01

    Capturing complete medical knowledge is challenging, often due to incomplete patient Electronic Health Records (EHR), but also because valuable, tacit medical knowledge is hidden away in physicians' experiences. To extend the coverage of incomplete medical knowledge-based systems beyond their deductive closure, and thus enhance their decision-support capabilities, we argue that innovative, multi-strategy reasoning approaches should be applied. In particular, plausible reasoning mechanisms apply patterns from human thought processes, such as generalization, similarity and interpolation, based on attributional, hierarchical, and relational knowledge. Plausible reasoning mechanisms include inductive reasoning, which generalizes the commonalities among the data to induce new rules, and analogical reasoning, which is guided by data similarities to infer new facts. By further leveraging rich, biomedical Semantic Web ontologies to represent medical knowledge, both known and tentative, we increase the accuracy and expressivity of plausible reasoning, and cope with issues such as data heterogeneity, inconsistency and interoperability. In this paper, we present a Semantic Web-based, multi-strategy reasoning approach, which integrates deductive and plausible reasoning and exploits Semantic Web technology to solve complex clinical decision support queries. We evaluated our system using a real-world medical dataset of patients with hepatitis, from which we randomly removed different percentages of data (5%, 10%, 15%, and 20%) to reflect scenarios with increasing amounts of incomplete medical knowledge. To increase the reliability of the results, we generated 5 independent datasets for each percentage of missing values, which resulted in 20 experimental datasets (in addition to the original dataset). The results show that plausibly inferred knowledge extends the coverage of the knowledge base by, on average, 2%, 7%, 12%, and 16% for datasets with, respectively, 5%, 10%, 15%, and 20% missing values.
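
    The toy sketch below illustrates only the analogical step in isolation: when a patient record lacks a value, the value is borrowed from the most similar complete record and flagged as plausible rather than deduced. The attributes, similarity measure, and threshold are illustrative; the actual system reasons over ontology-typed Semantic Web data.

        # Analogical gap-filling over flat records; a stand-in for reasoning over
        # RDF triples. All attributes and values are made up for illustration.
        def similarity(a, b):
            shared = [k for k in a if k in b and a[k] == b[k]]
            comparable = [k for k in a if k in b]
            return len(shared) / len(comparable) if comparable else 0.0

        def plausible_fill(incomplete, knowledge_base, attribute, threshold=0.6):
            candidates = [(similarity(incomplete, rec), rec)
                          for rec in knowledge_base if attribute in rec]
            if not candidates:
                return None
            score, best = max(candidates, key=lambda t: t[0])
            if score >= threshold:
                return {"value": best[attribute], "status": "plausible", "support": score}
            return None

        kb = [
            {"ALT": "high", "bilirubin": "high", "fatigue": True, "diagnosis": "hepatitis"},
            {"ALT": "normal", "bilirubin": "normal", "fatigue": False, "diagnosis": "healthy"},
        ]
        patient = {"ALT": "high", "bilirubin": "high", "fatigue": True}   # diagnosis unknown
        print(plausible_fill(patient, kb, "diagnosis"))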

  20. Knowledge-Based Economy in Argentina, Costa Rica and Mexico: A Comparative Analysis from the Bio-Economy Perspective

    Directory of Open Access Journals (Sweden)

    Ana Barbara MUNGARAY-MOCTEZUMA

    2015-06-01

    The objective of this article is to determine the institutional characteristics of technology and human capital necessary for Argentina, Costa Rica and Mexico to evolve towards a knowledge-based economy, addressing the importance of institutions for their development. In particular, the knowledge-based economy is analyzed from the perspective of bioeconomics. Based on the Knowledge Economy Index (KEI), which considers 148 indicators in the following categories: (a) economic performance and institutional regime; (b) education and human resources; (c) innovation; and (d) information and communication technologies, we selected 13 indicators. We aim to identify the strengths and opportunities of these countries in meeting the challenges that arise from the paradoxes of technological progress and globalization. In this sense, the bioeconomy is approached as part of the economy. This analysis shows, among other things, that Argentina has greater potential to compete in an economy sustained by the creation and dissemination of knowledge, while Costa Rica has an institutional and regulatory environment that is more conducive to the development of business activities, and Mexico faces significant challenges regarding its institutional structure, economic performance and human resources.
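
    As a hedged illustration of how a composite knowledge-economy score can be assembled from a handful of indicators, the sketch below min-max normalizes each indicator to a 0-10 scale across countries and averages the results. The indicator names and values are placeholders, and this normalization is an assumption for illustration rather than the exact KEI methodology.

        # Placeholder indicators normalized to 0-10 across countries and averaged
        # into a simple composite score. Values are invented for illustration.
        raw = {
            "Argentina":  {"tertiary_enrolment": 80, "patents_per_million": 1.0, "internet_users_pct": 60},
            "Costa Rica": {"tertiary_enrolment": 48, "patents_per_million": 0.6, "internet_users_pct": 45},
            "Mexico":     {"tertiary_enrolment": 28, "patents_per_million": 0.7, "internet_users_pct": 40},
        }

        indicators = sorted(next(iter(raw.values())))

        def normalized(country, indicator):
            values = [raw[c][indicator] for c in raw]
            lo, hi = min(values), max(values)
            return 10 * (raw[country][indicator] - lo) / (hi - lo) if hi > lo else 5.0

        for country in raw:
            scores = [normalized(country, ind) for ind in indicators]
            print(f"{country}: composite = {sum(scores) / len(scores):.2f}")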